Sniffing Out Answers: A Conversation with Markus Meister

Blindfolded and asked to distinguish between a rose and, say, smoke from a burning candle, most people would find the task easy. Even differentiating between two rose varieties can be a snap because the human olfactory system—made up of the nerve cells in our noses and everything that allows the brain to process smell—is quite adept. But just how sensitive is it to different smells?

In 2014, a team of scientists from the Rockefeller University published a paper in the journal Science, arguing that humans can discriminate at least 1 trillion odors. Now Markus Meister, the Anne P. and Benjamin F. Biaggini Professor of Biological Sciences at Caltech, has published a paper in the open-access journal eLife, in which he disputes the 2014 claim, saying that the science is not yet in a place where such a number can be determined.

We recently spoke with Meister about his new paper and what it says about the claim that we can distinguish a trillion smells.


What was the goal of the 2014 paper, and why do you take issue with it?

The overt question the authors asked was: How many different smells can humans distinguish? That is a naturally interesting question, in part because in other fields of sensory biology, similar questions have already been answered. People quibble about the exact numbers, but in general scientists agree that humans can distinguish about 1 to 2 million colors and something on the order of 100,000 pure tones.

But as interesting as the question is, I argue that we, as a field, are not yet prepared to address it. First we need to know how many dimensions span the perceptual space of odors. And by that I mean: how many olfactory variables are needed to fully describe all of the odors that humans can experience?

In the case of human vision, we say that the perceptual space for colors has three dimensions, which means that every physical light can be described by three numbers—how it activates the red, green, and blue cone photoreceptors in the retina.

As long as we don't know the dimensionality of odor space, we don't know how to even start interpreting measurements. Once we know the dimensionality, we can start probing the space systematically and ask how many different odors fit into it in the same way that we've looked at how many different colors fit into the three-dimensional space of colors.

The fundamental conceptual mistake that the authors of the Science paper made was to assume that the space of odor perception has 128 dimensions or more and then interpret the data as though that was the case . . . even though there is absolutely no evidence to suggest that the odor space has such high dimensionality.


What makes it so hard to determine the dimensionality of odor?

Well, there are a couple of things. First, there is no natural coordinate system in which olfactory stimuli exist. This stands in contrast with visual and auditory stimuli. For example, pure (monochromatic) lights or tones can be represented nicely as sinusoidal waves with just two variables, the frequency and the amplitude of the wave. We can easily control those two variables, and they correspond nicely to things we perceive. For pure tones, the amplitude of the sine wave corresponds to loudness and the frequency corresponds to perceived pitch. For a pure light, the frequency determines your perception of the color; if you change the intensity of the light, that alters your perception of the brightness. These simple physical parameters of the stimulus allow us to explore those spaces more easily.

In the case of odors, there are probably several hundred thousand substances that have a smell that can be perceived. But they all have different structures. There is no intuitive way to organize the stimuli. There has been some recent progress in this area, but in general we have not been successful in isolating a few physical variables that can account for a lot of what we smell.

Another aspect of olfaction that has complicated people's thinking is that humans have about 400 types of primary smell receptors. These are the actual neurons in the lining of the nasal cavity that detect odorants. So at the very input to the nervous system, every smell is characterized by the action it has on those 400 different sensors. Based on that, you might assume that smell lives in a much larger space than color vision—one with as many as 400 dimensions.

But can we perceive all of those 400 dimensions? Just because two odors cause a different pattern of activation of nerve cells in the nose doesn't mean you can actually tell them apart. Think about our sense of touch. Every one of our hairs has at its root several mechanoreceptors. If you run a comb through the hair on your head, you activate a hundred thousand mechanoreceptors in a particular pattern. If you repeat the action, you activate a different pattern of receptors, but you will be unable to perceive a difference. Similarly, I argue, there's no reason to think that we can perceive a difference between all the different patterns of activation of nerve cells in the nasal cavity. So the number of dimensions could, in fact, be much lower than 400. In fact, some recent studies have suggested that odor lives in a space with 10 or fewer perceptual dimensions.


In your work you describe a couple of basic experimental design failures of the 2014 paper. Can you walk us through those?

Basically, two scientific errors were made in the original study. They have to do with the concept of a positive-control experiment and the concept of testing alternative hypotheses.

In science, when we come up with a new way of analyzing things, we need to perform a test—called a positive control—that gives us confidence that the new analysis can find the right answer in a case where we already know what the answer is. So, for example, if you have devised a new way of weighing things, you will want to test it by weighing something whose weight you already know very well based on some accepted procedure. If the new procedure gives a different answer, we say it failed the positive control.

The 2014 paper did not include a positive-control test. In my paper, I provide two: I apply the analysis that the authors propose to a very simple model microbe and to the human color-vision system. In both cases, the answers come out wrong by huge factors.

The other failure of the 2014 paper is a failure to consider alternate hypotheses. When scientists interpret the outcome of an experiment, we need to seriously analyze alternate hypotheses to the ones we believe are most likely and show why they are not reasonable explanations for what we are seeing.

In my paper, I show that an alternate model that is clearly absurd—that humans can only discriminate 10 odors—explains the data just as well as the very complicated explanation that the authors propose, which involves 400 dimensions and 1 trillion odor percepts. What this really means is that the experiment was poorly designed, in the sense that it didn't constrain the answer to the question.

By the way, there is an accompanying paper by Gerkin and Castro in the same issue of eLife that critiques the experimental design from an entirely different angle, regarding the use of statistics. I found this article very instructive, and have used it already in teaching.


How do you suggest scientists go about determining the dimensionality of the odor space?

One concrete idea is to try to figure out what the number of dimensions is in the vicinity of a particular point in that space. If you did that with color, you would arrive at the number three from the vast majority of points. So I suggest we start at some arbitrary point in odor space—say a 50 percent mixture of 30 different odors—and systematically go in each of the directions from there and ask: can humans actually distinguish the odor when you change the concentration a little bit up or down from there? If you do that in 30 different dimensions you might find that maybe only five of those dimensions contribute to changing the perceived odor and that along the other dimensions there is very little change. So let's figure out the dimensionality that comes out of a study like that. Is it two? Probably not. I would guess something like 10 or 20.

Once we know that, we can start to ask how many odors fit into that space.
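As a rough caricature of the probing procedure Meister describes above, the following Python sketch counts how many directions around a reference mixture produce a perceptible change. Everything in it is an invented stand-in: the projection to a percept space, the step size, and the detection threshold are hypothetical, and in a real experiment the "percept" would be a panelist's judgment. It only illustrates how such a count yields a local estimate of dimensionality.

```python
import numpy as np

# Hypothetical setup: a 30-component odor mixture whose perceived quality
# lives in a lower-dimensional space. The projection matrix is made up for
# illustration; real psychophysics would replace percept() with human judgments.
n_components = 30        # odorants in the reference mixture
assumed_dim = 5          # toy assumption: only 5 components move the percept

projection = np.zeros((assumed_dim, n_components))
projection[:, :assumed_dim] = np.eye(assumed_dim)

def percept(concentrations):
    """Map a concentration vector to a point in the (hypothetical) percept space."""
    return projection @ concentrations

reference = np.full(n_components, 0.5)   # 50 percent of each odorant
step = 0.1                               # how much each component is nudged
threshold = 0.05                         # assumed just-noticeable difference

perceptible_directions = 0
for i in range(n_components):
    probe = reference.copy()
    probe[i] += step                     # nudge one component up
    change = np.linalg.norm(percept(probe) - percept(reference))
    if change > threshold:               # would a subject notice the difference?
        perceptible_directions += 1

print(f"Directions that change the percept: {perceptible_directions} of {n_components}")
# In this toy model only 5 of the 30 directions matter, so the local
# dimensionality estimate is 5 -- the kind of number the proposed study would measure.
```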


Why does all of this matter? Why do we need to know how many odors we can smell?

The question of how many smells we can discriminate has fascinated people for at least a century, and the whole industry of flavors and fragrances has been very interested in finding out whether there is a systematic set of rules by which one could mix together some small number of primary odors in order to produce any target smell.

In the field of color vision, that problem has been solved. As a result, we all use color monitors that have only three types of lights—red, green, and blue. And yet by mixing them together, they can make just about every color impression that you might care about. So there's a real technological incentive to figure out how to mix together primary stimuli to make any kind of perceived smell.


What is the big lesson you would like people to take away from this scientific exchange?

One lesson I try to convey to my students is the value of a simple simulation—to ask, "Could this idea work even in principle? Let's try it in the simplest case we can imagine." That sort of triage can often keep you from walking down an unproductive path.

On a more general note, people should remain skeptical of spectacular claims. This is particularly important when we referee for the high-glamour journals, where the editors have a predilection for unexpected results. As a community we should let things simmer a bit before allowing a spectacular claim to become the conventional wisdom. Maybe we all need to stop and smell the roses.

Kimm Fesenmaier

Better Memory with Faster Lasers

DVDs and Blu-ray disks contain so-called phase-change materials that morph from one atomic state to another after being struck with pulses of laser light, with data "recorded" in those two atomic states. Using ultrafast laser pulses that speed up the data recording process, Caltech researchers adopted a novel technique, ultrafast electron crystallography (UEC), to visualize directly in four dimensions the changing atomic configurations of the materials undergoing the phase changes. In doing so, they discovered a previously unknown intermediate atomic state—one that may represent an unavoidable limit to data recording speeds.

By shedding light on the fundamental physical processes involved in data storage, the work may lead to better, faster computer memory systems with larger storage capacity. The research, done in the laboratory of Ahmed Zewail, Linus Pauling Professor of Chemistry and professor of physics, will be published in the July 28 print issue of the journal ACS Nano.

When the laser light interacts with a phase-change material, its atomic structure changes from an ordered crystalline arrangement to a more disordered, or amorphous, configuration. These two states represent 0s and 1s of digital data.

"Today, nanosecond lasers—lasers that pulse light at one-billionth of a second—are used to record information on DVDs and Blu-ray disks, by driving the material from one state to another," explains Giovanni Vanacore, a postdoctoral scholar and an author on the study. The speed with which data can be recorded is determined both by the speed of the laser—that is, by the duration of each "pulse" of light—and by how fast the material itself can shift from one state to the other.

Thus, with a nanosecond laser, "the fastest you can record information is one information unit, one 0 or 1, every nanosecond," says Jianbo Hu, a postdoctoral scholar and the first author of the paper. "To go even faster, people have started to use femtosecond lasers, which can potentially record one unit every one millionth of a billionth of a second. We wanted to know what actually happens to the material at this speed and if there is a limit to how fast you can go from one structural phase to another."

To study this, the researchers used their technique, ultrafast electron crystallography. The technique, a new development—different from Zewail's Nobel Prize–winning work in femtochemistry, the visual study of chemical processes occurring at femtosecond scales—allowed researchers to observe directly the transitioning atomic configuration of a prototypical phase-change material, germanium telluride (GeTe), when it is hit by a femtosecond laser pulse.

In UEC, a sample of crystalline GeTe is bombarded with a femtosecond laser pulse, followed by a pulse of electrons. The laser pulse causes the atomic structure to change from the crystalline to other structures, and then ultimately to the amorphous state. Then, when the electron pulse hits the sample, its electrons scatter in a pattern that provides a picture of the sample's atomic configuration as a function of time.

With this technique, the researchers could see directly, for the first time, the structural shift in GeTe caused by the laser pulses. However, they also saw something more: a previously unknown intermediate phase that appears during the transition from the crystalline to the amorphous configuration. Because moving through the intermediate phase takes additional time, the researchers believe that it represents a physical limit to how quickly the overall transition can occur—and to how fast data can be recorded, regardless of the laser speeds used.

"Even if there is a laser faster than a femtosecond laser, there will be a limit as to how fast this transition can occur and information can be recorded, just because of the physics of these phase-change materials," Vanacore says. "It's something that cannot be solved technologically—it's fundamental."

Despite revealing such limits, the research could one day aid the development of better data storage for computers, the researchers say. Right now, computers generally store information in several ways, among them the well-known random-access memory (RAM) and read-only memory (ROM). RAM, which is used to run the programs on your computer, can record and rewrite information very quickly via an electrical current. However, the information is lost whenever the computer is powered down. ROM storage, including CDs and DVDs, uses phase-change materials and lasers to store information. Although ROM records and reads data more slowly, the information can be stored for decades.

Finding ways to speed up the recording process of phase-change materials and understanding the limits to this speed could lead to a new type of memory that harnesses the best of both worlds.

The researchers say that their next step will be to use UEC to study the transition of the amorphous atomic structure of GeTe back into the crystalline phase—comparable to the phenomenon that occurs when you erase and then rewrite a DVD.

Although these applications could mean exciting changes for future computer technologies, this work is also very important from a fundamental point of view, Zewail says.

"Understanding the fundamental behavior of materials transformation is what we are after, and these new techniques developed at Caltech have made it possible to visualize such behavior in both space and time," Zewail says.

The work is published in a paper titled "Transient Structures and Possible Limits of Data Recording in Phase-Change Materials." In addition to Hu, Vanacore, and Zewail, Xiangshui Miao and Zhe Yang are also coauthors on the paper. The work was supported by the National Science Foundation and the Air Force Office of Scientific Research and was carried out in Caltech's Center for Physical Biology, which is funded by the Gordon and Betty Moore Foundation.

New Approach Holds Promise for Earlier, Easier Detection of Colorectal Cancer

Caltech chemists develop a technique that could one day lead to early detection of tumors

Chemists at Caltech have developed a new sensitive technique capable of detecting colorectal cancer in tissue samples—a method that could one day be used in clinical settings for the early diagnosis of colorectal cancer.

Colorectal cancer is the third most prevalent cancer worldwide and is estimated to cause about 700,000 deaths every year. Metastasis due to late detection is one of the major causes of mortality from this disease; therefore, a sensitive and early indicator could be a critical tool for physicians and patients.

A paper describing the new detection technique currently appears online in Chemistry & Biology and will be published in the July 23 issue of the journal's print edition. Caltech graduate student Ariel Furst (PhD '15) and her adviser, Jacqueline K. Barton, the Arthur and Marian Hanisch Memorial Professor of Chemistry, are the paper's authors.

"Currently, the average biopsy size required for a colorectal biopsy is about 300 milligrams," says Furst. "With our experimental setup, we require only about 500 micrograms of tissue, which could be taken with a syringe biopsy versus a punch biopsy. So it would be much less invasive." One microgram is one thousandth of a milligram.

The researchers zeroed in on the activity of a protein called DNMT1 as a possible indicator of a cancerous transformation. DNMT1 is a methyltransferase, an enzyme responsible for DNA methylation—the addition of a methyl group to one of DNA's bases. This essential and normal process is an epigenetic mechanism that primarily turns genes off, but when it goes awry it has also recently been identified as an early indicator of cancer, especially the development of tumors.

When all is working well, DNMT1 maintains the normal methylation pattern set in the embryonic stages, copying that pattern from the parent DNA strand to the daughter strand. But sometimes DNMT1 goes haywire, and methylation goes into overdrive, causing what is called hypermethylation. Hypermethylation can lead to the repression of genes that typically do beneficial things, like suppress the growth of tumors or express proteins that repair damaged DNA, and that, in turn, can lead to cancer.

Building on previous work in Barton's group, Furst and Barton devised an electrochemical platform to measure the activity of DNMT1 in crude tissue samples—those that contain all of the material from a tissue, not just DNA or RNA, for example. Fundamentally, the design of this platform is based on the concept of DNA-mediated charge transport—the idea that DNA can behave like a wire, allowing electrons to flow through it and that the conductivity of that DNA wire is extremely sensitive to mistakes in the DNA itself. Barton earned the 2010 National Medal of Science for her work establishing this field of research and has demonstrated that it can be used not only to locate DNA mutations but also to detect the presence of proteins such as DNMT1 that bind to DNA.

In the present study, Furst and Barton started with two arrays of gold electrodes—one atop the other—embedded in Teflon blocks and separated by a thin spacer that formed a well for solution. They attached strands of DNA to the lower electrodes, then added the broken-down contents of a tissue sample to the solution well. After allowing time for any DNMT1 in the tissue sample to methylate the DNA, they added a restriction enzyme that severed the DNA if no methylation had occurred—i.e., if DNMT1 was inactive. When they applied a current to the lower electrodes, the samples with DNMT1 activity passed the current clear through to the upper electrodes, where the activity could be measured. 

"No methylation means cutting, which means the signal turns off," explains Furst. "If the DNMT1 is active, the signal remains on. So we call this a signal-on assay for methylation activity. But beyond on or off, it also allows us to measure the amount of activity." This assay for DNMT1 activity was first developed in Barton's group by Natalie Muren (PhD '13).

Using the new setup, the researchers measured DNMT1 activity in 10 pairs of human tissue samples, each composed of a colorectal tumor sample and an adjacent healthy tissue from the same patient. When they compared the samples within each pair, they consistently found significantly higher DNMT1 activity, hypermethylation, in the tumorous tissue. Notably, they found little correlation between the amount of DNMT1 in the samples and the presence of cancer—the correlation was with activity.

"The assay provides a reliable and sensitive measure of hypermethylation," says Barton, also the chair of the Division of Chemistry and Chemical Engineering.  "It looks like hypermethylation is good indicator of tumorigenesis, so this technique could provide a useful route to early detection of cancer when hypermethylation is involved."

Looking to the future, Barton's group hopes to use the same general approach in devising assays for other DNA-binding proteins and possibly using the sensitivity of their electrochemical devices to measure protein activities in single cells. Such a platform might even open up the possibility of inexpensive, portable tests that could be used in the home to catch colorectal cancer in its earliest, most treatable stages.

The work described in the paper, "DNA Electrochemistry shows DNMT1 Methyltransferase Hyperactivity in Colorectal Tumors," was supported by the National Institutes of Health. 

Kimm Fesenmaier

Discovering a New Stage in the Galactic Lifecycle

On its own, dust seems fairly unremarkable. However, by observing the clouds of gas and dust within a galaxy, astronomers can determine important information about the history of star formation and the evolution of galaxies. Now, thanks to the unprecedented sensitivity of the telescopes of the Atacama Large Millimeter Array (ALMA) in Chile, a Caltech-led team has been able to observe the dust contents of galaxies as seen just 1 billion years after the Big Bang—an epoch corresponding to redshifts of 5 to 6. These are the earliest average-sized galaxies ever to be directly observed and characterized in this way.

The work is published in the June 25 edition of the journal Nature.

Dust in galaxies is created by the elements released during the formation and collapse of stars. Although the most abundant elements in the universe—hydrogen and helium—were created by the Big Bang, stars are responsible for making all of the heavier elements in the universe, such as carbon, oxygen, nitrogen, and iron. And because young, distant galaxies have had less time to make stars, these galaxies should contain less dust. Previous observations had suggested this, but until now nobody could directly measure the dust in these faraway galaxies.

"Before we started this study, we knew that stars formed out of these clouds of gas and dust, and we knew that star formation was probably somehow different in the early universe, where dust is likely less common. But the previous information only really hinted that the properties of the gas and the dust in earlier galaxies were different than in galaxies we see around us today. We wanted to find data that showed that," says Peter Capak, a staff scientist at the Infrared Processing and Analysis Center (IPAC) at Caltech and the first author of the study.

Armed with the high sensitivity of ALMA, Capak and his colleagues set out to perform a direct analysis of the dust in these very early galaxies.

Young, faraway galaxies are often difficult to observe because they appear very dim from Earth. Previous observations of these young galaxies, which formed just 1 billion years after the Big Bang, were made with the Hubble Space Telescope and the W. M. Keck Observatory—both of which detect light in the near-infrared and visible bands of the electromagnetic spectrum. The color of these galaxies at these wavelengths can be used to make inferences about the dust—for example, galaxies that appear bluer in color tend to have less dust, while those that are red have more dust. However, other effects like the age of the stars and our distance from the galaxy can mimic the effects of dust, making it difficult to understand exactly what the color means.

The researchers began their observations by first analyzing these early galaxies with the Keck Observatory. Keck confirmed the distances to the galaxies, with redshifts greater than 5—verifying that the galaxies were at least as young as they had previously been thought to be. The researchers then observed the same galaxies using ALMA to detect light at the longer millimeter and submillimeter wavelengths. The ALMA readings provided a wealth of information that could not be seen with visible-light telescopes, including details about the dust and gas content of these very early galaxies.

Capak and his colleagues were able to use ALMA to—for the first time—directly view the dust and gas clouds of nine average-sized galaxies during this epoch. Specifically, they focused on a feature called the carbon II spectral line, which comes from carbon atoms in the gas around newly formed stars. The carbon line itself traces this gas, while the data collected around the carbon line traces a so-called continuum emission, which provides a measurement of the dust. The researchers knew that the carbon line was bright enough to be seen in mature, dust-filled nearby galaxies, so they reasoned that the line would be even brighter if there was indeed less dust in the young faraway galaxies.

Using the carbon line, their results confirmed what had previously been suggested by the data from Hubble and Keck: these earlier galaxies contained, on average, 12 times less dust than galaxies seen 2 billion years later (at a redshift of approximately 4).

"In galaxies like our Milky Way or nearby Andromeda, all of the stars form in very dusty environments, so more than half of the light that is observed from young stars is absorbed by the dust," Capak says. "But in these faraway galaxies we observed with ALMA, less than 20 percent of the light is being absorbed. In the local universe, only very young galaxies and very odd ones look like that. So what we're showing is that the normal galaxy at these very high redshifts doesn't look like the normal galaxy today. Clearly there is something different going on."

That "something different" gives astronomers like Capak a peek into the lifecycle of galaxies. Galaxies form because gas and dust are present and eventually turn into stars—which then die, creating even more gas and dust, and releasing energy. Because it is impossible to watch this evolution from young galaxy to old galaxy happen in real time on the scale of a human lifespan, the researchers use telescopes like ALMA to take a survey of galaxies at different evolutionary stages. Capak and his colleagues believe that this lack of dust in early galaxies signifies a never-before-seen evolutionary stage for galaxies.

"This result is really exciting. It's the first time that we're seeing the gas that the stars are forming out of in the early universe. We are starting to see the transition from just gas to the first generation of galaxies to more mature systems like those around us today. Furthermore, because the carbon line is so bright, we can now easily find even more distant galaxies that formed even longer ago, sooner after the Big Bang," Capak says.

Lin Yan, a staff scientist at IPAC and coauthor on the paper, says that their results are also especially important because they represent typical early galaxies. "Galaxies come in different sizes. Earlier observations could only spot the largest or the brightest galaxies, and those tend to be very special—they actually appear very rarely in the population," she says. "Our findings tell you something about a typical galaxy in that early epoch, so the results apply to the population as a whole, not just to special cases."

Yan says that their ability to analyze the properties of these and earlier galaxies will only expand with ALMA's newly completed capabilities. During the study, ALMA was operating with only a portion of its antennas, 20 at the time; the capabilities to see and analyze distant galaxies will be further improved now that the array is complete with 66 antennas, Yan adds.

"This is just an initial observation, and we've only just started to peek into this really distant universe at redshift of a little over 5. An astronomer's dream is basically to go as far distant as we can. And when it's complete, we should be able to see all the distant galaxies that we've only ever dreamed of seeing," she says.

The findings are published in a paper titled, "Galaxies at redshifts 5 to 6 with systematically low dust content and high [C II] emission." The work was supported by funds from NASA and the European Union's Seventh Framework Program. Nick Scoville, the Francis L. Moseley Professor of Astronomy, was an additional coauthor on this paper. In addition to Keck, Hubble, and ALMA data, observations from the Spitzer Space Telescope were used to measure the stellar mass and age of the galaxies in this study. Coauthors and collaborators from other institutions include C. Carilli, G. Jones, C.M. Casey, D. Riechers, K. Sheth, C.M. Corollo, O. Ilbert, A. Karim, O. LeFevre, S. Lilly, and V. Smolcic.

Voting Rights: A Conversation with Morgan Kousser

Three years ago this week, the U.S. Supreme Court ruled unconstitutional a key provision of the Voting Rights Act (VRA), which was enacted in 1965 and extended four times since then by Congress. Section 5 of the act required certain "covered" jurisdictions in the Deep South and in states and counties outside the Deep South that had large populations of Hispanics and Native Americans to obtain "pre-clearance" from the Justice Department or the U.S. District Court in the District of Columbia before changing any election law. The provision was designed to prevent election officials from replacing one law that had been declared to be racially discriminatory with a different but still discriminatory law. A second provision, Section 4(b), contained the formula for coverage.

The VRA, notes Morgan Kousser, the William R. Kenan, Jr., Professor of History and Social Science, has been "very effective. You went from 7 percent of the black voters in Mississippi being registered to vote to 60 percent within three or four years. That was just an amazing change. Even more amazing, Section 5 was flexible enough to prevent almost every kind of new discriminatory technique or device over a period of nearly 50 years." For instance, Kousser notes, "when white supremacists in Mississippi saw that African Americans would soon comprise majorities in some state or local legislative districts, they merged the districts to preserve white majorities everywhere. But Section 5 stopped this runaround and allowed the new black voters real democracy. Voting rights was the one area in which federal law came close to eliminating the country's long, sad history of racial discrimination."

But on June 25, 2013, in a landmark ruling in Shelby County v. Holder, the Court overturned Section 4(b), effectively dismantling Section 5. Without a formula that defines covered jurisdictions, no area falls within the scope of Section 5. Chief Justice John Roberts, writing the 5–4 majority opinion, argued that although the original coverage formula "made sense," it was now outdated, based on "decades-old data and eradicated practices." Asserting that voter turnout and registration rates in covered jurisdictions are nearly equal for whites and African Americans, Roberts also noted that "blatantly discriminatory evasions of federal decrees are rare. And minority candidates hold office at unprecedented levels."

The decision, says Kousser, was wrong. In a comprehensive study recently published in the journal Transatlantica, he, with the help of three Caltech students who worked on the study during Summer Undergraduate Research Fellowship (SURF) projects, examined more than four thousand successful voting-rights cases around the country as well as Justice Department inquiries and settlements and changes to laws in response to the threat of lawsuits. Over 90 percent, they found, occurred in the covered jurisdictions—indicating, Kousser says, that the coverage scheme was still working very well.

The study found that—even when excluding all of the actions brought under Section 5 of the VRA, and only looking at those that can be brought anywhere in the country—83.2 percent of successful cases originated in covered jurisdictions. This shows, Kousser says, that whatever the coverage formula measured, it still captured the "overwhelming number of instances of proven racial discrimination in elections."

We talked with Kousser about the ruling and his findings—and how this constitutional law scholar made his way to Caltech.


Why do you think Justice Roberts and the other justices in the majority ruled the way they did?

He had a sense that there had been a lot of cases outside of the covered jurisdictions. But if you look at all of the data, you see that the coverage scheme captures 94 percent of all of the cases and other events that took place from 1957 through 2013 and an even larger proportion up to 2006. Suppose that you were a stockbroker, and you could make a decision that was right 94 percent of the time. Your clients would be very, very wealthy. No one would be dissatisfied with you. That's what the congressional coverage scheme did.

I wish very much that I had finished this paper two years earlier and that the data would have been published in a scholarly journal or at least made available in a pre-print by the time that the decision was cooking up. That was a mistake on my part. I should have let it out into the world a little earlier. Sometimes I have a fantasy that if this had been shown to the right justices at the right time, maybe they would have decided differently.


The Court did not rule on the VRA in general—but said that the coverage formula is outdated because voting discrimination is not as bad as it once was. Do you agree?

This is one of the reasons that I looked at the coverage of the California Voting Rights Act (CVRA), passed in 2002. In Section 2 of the National VRA, you have to prove what is called the "totality of the circumstances." You have to prove not only that voting is racially polarized and that there is a kind of election structure used for discrimination, but also show that there is a history of discrimination in the area, that there are often special informal procedures that go against minorities, and a whole series of other things. A Section 2 case is quite difficult to prove.

The CVRA attempted to simplify those circumstances so all you have to show is that there is racially polarized voting, usually shown by a statistical analysis of how various groups voted, and that there is a potentially discriminatory electoral structure, particularly at-large elections for city council, for school board, for community college district, and so on.

The CVRA, in effect, only became operative in 2007 after some preliminary litigation. And in 2007, after the city of Modesto settled a long-running lawsuit, lawyers for the successful plaintiffs presented the city with a bill for about $3 million. This scared jurisdictions throughout California, which were faced with the potential of paying out large amounts of money if they had racially polarized voting. Again and again, you suddenly saw jurisdictions settling short of going to trial and a lot of Hispanics elected to particular boards. This has changed about 100 or 125 local boards throughout California from holding their elections at-large to holding them by sub-districts, which allow geographically segregated minorities to elect candidates of their choice. If you graph that over time, you see a huge jump in the number of successful CVRA cases after 2007. What does this mean? Does it mean that there was suddenly a huge increase in discrimination? No, it means that there was a tool that allowed the discrimination that had previously existed to be legally identified.

If we had that across the country, and it was easier to bring cases, you would expose a lot more discrimination. That's my argument.


Do you think the coverage plan will be restored?

If there were hearings and an assessment of this scheme or any other potentially competing schemes, then Congress might decide on a new coverage scheme. If the bill was passed, it would go back up to the U.S. Supreme Court, and maybe the Court would be more interested in the actual empirical evidence instead of simply guessing what they thought might have existed. But I think right now the possibilities of getting any changes through the Congress are zero.

I would like to see some small changes in the coverage scheme, but they have to be made on the basis of evidence. Just throwing out the whole thing because allegedly it didn't fit anymore is an irrational way to make public policy.


As a professor of history, do you think it is your responsibility to help change policy?

Well, it has been interesting to me from the very beginning. Let me tell you how I got started in voting rights cases. My doctoral dissertation was on the disfranchisement of blacks and poor whites in the South in the late 19th and early 20th centuries. In about 1979, a lawyer who was cooperating with the ACLU [American Civil Liberties Union] in Birmingham, Alabama, called me up—I didn't know who he was—and he said, "Do you have an opinion about whether section 201 of the Alabama constitution of 1901 was adopted with a racially discriminatory purpose?" I said, "I do. I've studied that. I think it was adopted with a racially discriminatory purpose."

Writing expert witness reports and testifying in cases are exactly like what I have always done as a scholar. I have looked at the racially discriminatory effects of laws; I have looked at the racially discriminatory intent of laws. I have examined them by looking at a lot of evidence. I write very long papers for these cases. They are scholarly publications, and whether they relate to something that happened 100 years ago or something that happened five years ago or yesterday doesn't really, in principle, seem to make any difference.


How did you get started as a historian studying politics?

Well, I'm old. I grew up in the South during the period of segregation, but just as it was breaking down. When I was a junior in high school, the sit-ins took place in Nashville, Tennessee, which is where I'm from. I was sympathetic. I never liked segregation. I was always in favor of equal rights.

I had been fascinated by politics from the very beginning. By the time I was 8 or 9 years old, I was reading two newspapers a day. One was a very conservative newspaper, pro-segregation, and the other paper was a liberal newspaper, critical of segregation. They both covered politics. And if you read news stories in each about the same event on the same day, you'd get a completely different slant. It was a wonderful training for a historian. From reading two newspapers that I knew to be biased, one in one direction, the other in another direction, I had to try to figure out what was happening and what I should believe to be fact.


How did you end up at Caltech?

To be very frank, Yale, where I was a graduate student, didn't want me around anymore. When I was there, I started a graduate student senate. I wrote its constitution, and I served as its first president. We were obnoxious. This was in 1967 and 1968, and students were revolting around the country, trying to bring an end to the war in Vietnam, trying to stop racial discrimination, trying to change the world. I had less lofty aims.


Such as?

There was no bathroom for women in the hall of graduate studies where the vast majority of humanities and social sciences classes took place. We made a nonnegotiable demand for a bathroom for women. Yale was embarrassed. Yale granted our request. We did other things. We protested against a rent increase in graduate student married housing. Yale couldn't justify the increase and gave way. We formed a committee to get women equal access to the Yale swimming pools. Yale opened the pool.



In addition to doing research, you are an acclaimed teacher at Caltech—the winner of Caltech's highest teaching honor, the Feynman Prize, in 2011. Do you think of yourself as more of a teacher or as a scholar?

I really like to do both. I can't avoid teaching. If you look at my scholarship, a lot of it is really in teaching format. I would like to school Chief Justice Roberts on what he had done wrong and to persuade him, convince him, that he should change his mind on this. A lot of my friends who are at my advanced age have quit teaching, because they can't take it anymore. When the term is over, they are jubilant.

I'm always sad when the term ends, particularly with my Supreme Court class, because the classes are small, so I know each individual student pretty well. I hate to say goodbye to them.


Do any particular students stand out in your mind?

I had one student who took my class in 2000. He was a computer science major. We used to talk a lot. We disagreed about practically everything politically, but he was a very nice and very intelligent guy.

When he finished the class, he decided that he would go to work for Microsoft. He did that for three years. Then he decided he wanted to go to law school, where he did very well; he clerked for an appeals court judge and he clerked for a Supreme Court justice. This spring, he argued his first case before the U.S. Supreme Court. The case that he argued was very complicated. I don't understand it, I don't understand the issues, I don't understand the precedents. It's relatively obscure, and it won't make big headlines. But he did it, and he's promised me that he'll share his impressions of being on that stage and that I can pass them on to current Caltech students. I know that they will find his experience as exciting as I will—a Techer arguing a case before the Supreme Court within 15 years of graduating from college! I can't quit teaching.

JPL News: NASA Joins North Sea Oil Cleanup Training Exercise

NASA participated for the first time in Norway's annual oil spill cleanup exercise in the North Sea on June 8 through 11. Scientists flew a specialized NASA airborne instrument called the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) on NASA's C-20A piloted research aircraft to monitor a controlled release of oil into the sea, testing the radar's ability to distinguish between more and less damaging types of oil slicks.

Read the full story from JPL News

Injured Jellyfish Seek to Regain Symmetry

Self-repair is extremely important for living things. Get a cut on your finger and your skin can make new cells to heal the wound; lose your tail—if you are a particular kind of lizard—and tissue regeneration may produce a new one. Now, Caltech researchers have discovered a previously unknown self-repair mechanism—the reorganization of existing anatomy to regain symmetry—in a certain species of jellyfish.

The results are published in the June 15 online edition of the journal Proceedings of the National Academy of Sciences (PNAS).

Many marine animals, including some jellyfish, can rapidly regenerate tissues in response to injury, and this trait is important for survival. If a sea turtle takes a bite out of a jellyfish, the injured animal can quickly grow new cells to replace the lost tissue. In fact, a jellyfish-like animal called the hydra is a very commonly used model organism in studies of regeneration.

But Caltech assistant professor of biology Lea Goentoro, along with graduate student Michael Abrams and associate research technician Ty Basinger, were interested in another organism, the moon jellyfish (Aurelia aurita). Abrams, Basinger, and Goentoro, lead authors of the PNAS study, wanted to know if the moon jellyfish would respond to injuries in the same manner as an injured hydra. The team focused their study on the jellyfish's juvenile, or ephyra, stage, because the ephyra's simple body plan—a disk-shaped body with eight symmetrical arms—would make any tissue regeneration clearly visible.

To simulate injury—like that caused by a predator in the wild—the team performed amputations on anesthetized ephyra, producing animals with two, three, four, five, six, or seven arms, rather than the usual eight. They then returned the jellyfish to their habitat of artificial seawater, and monitored the tissue response.

Although wounds healed up as expected, with the tissue around the cut closing up in just a few hours, the researchers noticed something unexpected: the jellyfish were not regenerating tissues to replace the lost arms. Instead, within the first two days after the injury, the ephyra had reorganized its existing arms to be symmetrical and evenly spaced around the animal's disklike body. This so-called resymmetrization occurred whether the animal had as few as two limbs remaining or as many as seven, and the process was observed in three additional species of jellyfish ephyra.

"This is a different strategy of self-repair," says Goentoro. "Some animals just heal their wounds, other animals regenerate what is lost, but the moon jelly ephyrae don't regenerate their lost limbs. They heal the wound, but then they reorganize to regain symmetry."

There are several reasons why symmetry might be more important to the developing jellyfish than regenerating a lost limb. Jellyfish and many other marine animals such as sea urchins, sea stars, and sea anemones have what is known as radial symmetry. Although the bodies of these animals have a distinct top and bottom, they do not have distinguishable left and right sides—an arrangement, present in humans and other higher life forms, known as bilateral symmetry. And this radial symmetry is essential to how the jellyfish moves and eats, first author Abrams says.

"Jellyfish move by 'flapping' their arms; this allows for propulsion through the water, which also moves water—and food—past the mouth," he says. "As they are swimming, a boundary layer of viscous—that is, thick—fluid forms between their arms, creating a continuous paddling surface. And you can imagine how this paddling surface would be disturbed if you have a big gap between the arms."

Maintaining symmetry appears to be vital not just for propulsion and feeding, the researchers found. In the few cases when the injured animals do not symmetrize—only about 15 percent of the injured animals they studied—the unsymmetrical ephyra also cannot develop into normal adult jellyfish, called medusa.

The researchers next wanted to figure out how the new self-repair mechanism works. Cell proliferation and cell death are commonly involved in tissue regeneration and injury response, but, the team found, the amputee jellyfish were neither making new cells nor killing existing cells as they redistributed their existing arms around their bodies.

Instead, the mechanical forces created by the jellyfish's own muscle contractions were essential for symmetrization. In fact, when muscle relaxants were added to the seawater surrounding an injured jellyfish, slowing the animal's muscle contractions, the symmetrization of the intact arms also was slowed down. In contrast, a reduction in the amount of magnesium in the artificial seawater sped up the rate at which the jellyfish pulsed their muscles, and these faster muscle contractions increased the symmetrization rate.

"Symmetrization is a combination of the mechanical forces created by the muscle contractions and the viscoelastic jellyfish body material," Abrams says. "The cycle of contraction and the viscoelastic response from the jellyfish tissues leads to reorganization of the body. You can imagine that in the absence of symmetry, the mechanical forces are unbalanced, but over time, as the body and arms reorganize, the forces rebalance."

To test this idea, the team collaborated with coauthor Chin-Lin Guo, from Academia Sinica in Taiwan, to build a mathematical model, and succeeded in simulating the symmetrization process.
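The published model couples the muscle contractions to the viscoelastic response of the body, but the basic intuition, that unbalanced forces keep nudging the arms until the gaps even out, can be caricatured in a few lines of code. The sketch below is a toy angular relaxation, not the authors' model; the update rule and the relaxation rate are invented purely to show how repeated local rebalancing drives a lopsided arrangement toward even spacing.

```python
import numpy as np

# Toy sketch (not the authors' model): arms are points on a disk, and on every
# "contraction cycle" each arm drifts toward the midpoint of its two neighbors,
# so unequal gaps shrink and the arrangement symmetrizes.
rng = np.random.default_rng(2)

n_arms = 4                                         # e.g., an ephyra left with four arms
angles = np.sort(rng.uniform(0, np.pi, n_arms))    # arms crowded into one half of the disk
rate = 0.05                                        # assumed relaxation per cycle

for cycle in range(2000):
    left = np.roll(angles, 1)
    right = np.roll(angles, -1)
    left[0] -= 2 * np.pi                           # unwrap the circular neighbors
    right[-1] += 2 * np.pi
    angles = angles + rate * ((left + right) / 2 - angles)

gaps = np.diff(np.concatenate([angles, [angles[0] + 2 * np.pi]]))
print("Final gaps between arms (degrees):", np.round(np.degrees(gaps), 1))
# All gaps approach 360 / n_arms = 90 degrees: the remaining arms end up evenly
# spaced, which is the qualitative outcome observed in the injured ephyrae.
```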

In addition to adding to our understanding about self-repair mechanisms, the discovery could help engineers design new biomaterials, Goentoro says. "Symmetrization may provide a new avenue for thinking about biomaterials that could be designed to 'heal' by regaining functional geometry rather than regenerating precise shapes," she says. "Other self-repair mechanisms require cell proliferation and cell death—biological processes that aren't easily translated to technology. But we can more easily apply mechanical forces to a material."

And the impact of mechanical forces on development is being increasingly studied in a variety of organisms, Goentoro says. "Recently, mechanical forces have been increasingly found to play a role in development and tissue regulation," she says. "So the symmetrization process in Aurelia, with its simple geometry, lends itself as a good model system where we can study how mechanical forces play a role in morphogenesis."

These results are published in a paper titled "Self-repairing symmetry in jellyfish through mechanically driven reorganization." In addition to Abrams, Basinger, Goentoro, and Guo, former SURF student William Yuan from the University of Oxford was also a coauthor. Jellyfish were provided by the Cabrillo Marine Aquarium and the Monterey Bay Aquarium. John Dabiri, professor of aeronautics and bioengineering, provided discussions and suggestions throughout the study. Abrams is funded by the Graduate Research Fellowship Program of the National Science Foundation.

Behavior Matters: Redesigning the Clinical Trial

When a new type of drug or therapy is discovered, double-blind randomized controlled trials (DBRCTs) are the gold standard for evaluating them. These trials, which have been used for years, were designed to determine the true efficacy of a treatment free from patient or doctor bias, but they do not factor in the effects that patient behaviors, such as diet and lifestyle choices, can have on the tested treatment.

A recent meta-analysis of six such clinical trials, led by Caltech's Erik Snowberg, professor of economics and political science, and his colleagues Sylvain Chassang from Princeton University and Ben Seymour from Cambridge University, shows that behavior can have a serious impact on the effectiveness of a treatment—and that the currently used DBRCT procedures may not be able to assess the effects of behavior on the treatment. To solve this, the researchers propose a new trial design, called a two-by-two trial, that can account for behavior–treatment interactions.

The study was published online on June 10 in the journal PLOS ONE.

Patients behave in different ways during a trial. These behaviors can directly relate to the trial—for example, one patient who believes in the drug may religiously stick to his or her treatment regimen while someone more skeptical might skip a few doses. The behaviors may also simply relate to how the person acts in general, such as preferences in diet, exercise, and social engagement. And in the design of today's standard trials, these behaviors are not accounted for, Snowberg says.

For example, a DBRCT might randomly assign patients to one of two groups: an experimental group that receives the new treatment and a control group that does not. As the trial is double-blinded, neither the subjects nor their doctors know who falls into which group. This is intended to reduce bias from the behavior and beliefs of the patient and the doctor; the thinking is that because patients have not been specifically selected for treatment, any effects on health outcomes must be solely due to the treatment or lack of treatment.

Although the patients do not know whether they have received the treatment, they do know their probability of getting the treatment—in this case, 50 percent. And that 50 percent likelihood of getting the new treatment might not be enough to encourage a patient to change behaviors that could influence the efficacy of the drug under study, Snowberg says. For example, if you really want to lose weight and know you have a high probability—say 70 percent chance—of being in the experimental group for a new weight loss drug, you may be more likely to take the drug as directed and to make other healthy lifestyle choices that can contribute to weight loss. As a result, you might lose more weight, boosting the apparent effectiveness of the drug.

However, if you know you only have a 30 percent chance of being in the experimental group, you might be less motivated to both take the drug as directed and to make those other changes. As a result, you might lose less weight—even if you are in the treatment group—and the same drug would seem less effective.

"Most medical research just wants to know if a drug will work or not. We wanted to go a step further, designing new trials that would take into account the way people behave. As social scientists, we naturally turned to the mathematical tools of formal social science to do this," Snowberg says.

Snowberg and his colleagues found that with a new trial design, the two-by-two trial, they can tease out the effects of behavior and the interaction of behavior and treatment, as well as the effects of treatment alone. The new trial, which still randomizes treatment, also randomizes the probability of treatment—which can change a patient's behavior.

In a two-by-two trial, instead of patients first being assigned to either the experimental or control groups, they are randomly assigned to either a "high probability of treatment" group or a "low probability of treatment" group. The patients in the high probability group are then randomly assigned to either the treatment or the control group, giving them a 70 percent chance of receiving the treatment. Patients in the low probability group are also randomly assigned to treatment or control; their likelihood of receiving the treatment is 30 percent. The patients are then informed of their probability of treatment.

By randomizing both the treatment and the probability of treatment, medical researchers can quantify the effects of treatment, the effects of behavior, and the effects of the interaction between treatment and behavior. Determining each, Snowberg says, is essential for understanding the overall efficacy of treatment.
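A rough way to see how this design separates the two effects is to simulate it. The Python sketch below is not the authors' analysis; the effect sizes, the simple behavioral rule, and the outcome model are assumptions chosen only to illustrate how comparing the within-arm treatment effects of the high- and low-probability groups isolates the treatment-behavior interaction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy simulation of a two-by-two trial. All numbers are invented for illustration.
n = 10_000
pure_effect = 1.0          # assumed benefit of the drug itself
interaction_effect = 0.5   # assumed extra benefit when the helpful behavior is present

# Step 1: randomize the *probability* of treatment (patients are told this number).
prob = rng.choice([0.7, 0.3], size=n)
# Step 2: randomize treatment with that probability (patients stay blind to this).
treated = rng.random(n) < prob
# Stand-in behavioral rule: high-probability patients adopt the helpful behavior.
behavior = (prob == 0.7).astype(float)

outcome = (pure_effect * treated
           + interaction_effect * treated * behavior
           + rng.normal(0.0, 1.0, n))

def cell_mean(p, t):
    """Average outcome for one of the four cells of the two-by-two design."""
    return outcome[(prob == p) & (treated == t)].mean()

effect_high = cell_mean(0.7, True) - cell_mean(0.7, False)   # treatment effect, high-prob arm
effect_low = cell_mean(0.3, True) - cell_mean(0.3, False)    # treatment effect, low-prob arm

print(f"Treatment effect in the high-probability arm: {effect_high:.2f}")
print(f"Treatment effect in the low-probability arm:  {effect_low:.2f}")
print(f"Estimated treatment-behavior interaction:     {effect_high - effect_low:.2f}")
# A standard 50/50 blind trial reports a single blended effect; comparing the two
# arms here recovers both the pure effect (~1.0) and the interaction (~0.5).
```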

"It's a very small change to the design of the trial, but it's important. The effect of a treatment has these two constituent parts: pure treatment effect and the treatment–behavior interaction. Standard blind trials just randomize the likelihood of treatment, so you can't see this interaction. Although you can't just tell someone to randomize their behavior, we found a way that you can randomize the probability that a patient will get something that will change their behavior."

Because it is difficult to implement design changes in active trials, Snowberg and his colleagues first tested their idea with a meta-analysis of data from previous clinical trials. To do so, they derived a new mathematical formula for analyzing DBRCT data. The formula, which teases out the health outcomes resulting from treatment alone as well as outcomes resulting from an interaction between treatment and behavior, was then used to statistically analyze six previous DBRCTs that had tested the efficacy of two antidepressant drugs, imipramine (a tricyclic antidepressant also known as Tofranil) and paroxetine (a selective serotonin reuptake inhibitor sold as Paxil).

First, the researchers wanted to see if there was evidence that patients behave differently when they have a high probability of treatment versus when they have a low probability of treatment. The previous trials recorded how many patients dropped out of the study, so this was the behavior that Snowberg and his colleagues analyzed. They found that in trials where patients happened to have a relatively high probability of treatment—near 70 percent—the dropout rate was significantly lower than in other trials with patients who had a lower probability of treatment, around 50 percent.

Although the team did not have any specific behaviors to analyze, other than dropping out of the study, they also wanted to determine if behavior in general could have added to the effect of the treatments. Using their statistical techniques, they determined that imipramine seemed to have a pure treatment effect, but no effect from the interaction between treatment and behavior—that is, the drug seemed to work fine, regardless of any behavioral differences that may have been present.

Paroxetine, by contrast, seemed to have no effect from the treatment alone or from behavior alone; however, an interaction between the treatment and behavior did effectively decrease depression. Because the analysis drew on previously completed trials, the researchers cannot know which specific behavior was responsible for the interaction, but with the mathematical formula they can tell that this behavior was necessary for the drug to be effective.

In their paper, Snowberg and his colleagues speculate how a situation like this might come about. "Maybe there is a drug, for instance, that makes people feel better in social situations, and if you're in the high probability group, then maybe you'd be more willing to go out to parties to see if the drug helps you talk to people," Snowberg explains. "Your behavior drives you to go to the party, and once you're at the party, the drug helps you feel comfortable talking to people. That would be an example of an interaction effect; you couldn't get that if people just took this drug alone at home."

Although this specific example is just speculation, Snowberg says that the team's actual results reveal that there is some behavior or set of behaviors that interact with paroxetine to effectively treat depression—and without this behavior, the drug appears to be ineffective.

"Normally what you get when you run a standard blind trial is some sort of mishmash of the treatment effect and the treatment-behavior interaction effect. But, knowing the full interaction effect is important. Our work indicates that clinical trials underestimate the efficacy of a drug where behavior matters," Snowberg says. "It may be the case that 50 percent probability isn't high enough for people to change any of their behaviors, especially if it's a really uncertain new treatment. Then it's just going to look like the drug doesn't work, and that isn't the case."

Because the meta-analysis supported the team's hypothesis—that the interaction between treatment and behavior can have an effect on health outcomes—the next step is incorporating these new ideas into an active clinical trial. Snowberg says that the best fit would be a drug trial for a condition, such as a mental health disorder or an addiction, that is known to be associated with behavior. At the very least, he says, he hopes that these results will lead the medical research community to a conversation about ways to improve the DBRCT and move past the current "gold standard."

These results are published in a paper titled "Accounting for Behavior in Treatment Effects: New Applications for Blind Trials." Cayley Bowles, a student in the UCLA/Caltech MD/PhD program, was also a coauthor on the paper. The work was supported by funding to Snowberg and Chassang from the National Science Foundation.


Celebrating 11 Years of CARMA Discoveries

For more than a decade, large, moveable telescopes tucked away on a remote, high-altitude site in the Inyo Mountains, about 250 miles northeast of Los Angeles, have worked together to paint a picture of the universe through radio-wave observations.

Known as the Combined Array for Research in Millimeter-wave Astronomy, or CARMA, the telescopes formed one of the most powerful millimeter interferometers in the world. CARMA was created in 2004 through the merger of the Owens Valley Radio Observatory (OVRO) Millimeter Array and the Berkeley Illinois Maryland Association (BIMA) Array and initially consisted of 15 telescopes. In 2008, the University of Chicago joined CARMA, increasing the telescope count to 23.


An artist's depiction of a gamma-ray burst, the most powerful explosive event in the universe. CARMA detected the millimeter-wavelength emission from the afterglow of the gamma-ray burst 130427A only 18 hours after it first exploded on April 27, 2013. The CARMA observations revealed a surprise: in addition to the forward-moving shock, CARMA showed the presence of a backward-moving shock, or "reverse" shock, that had long been predicted but never conclusively observed until now.
Credit: Gemini Observatory/AURA, artwork by Lynette Cook

CARMA's higher elevation, improved electronics, and greater number of connected antennae enabled more precise observations of radio emission from molecules and cold dust across the universe, leading to ground-breaking studies that encompass a range of cosmic objects and phenomena—including stellar birth, early planet formation, supermassive black holes, galaxies, galaxy mergers, and sudden, unexpected events such as gamma-ray bursts and supernova explosions.

"Over its lifetime, it has moved well beyond its initial goals both scientifically and technically," says Anneila Sargent (MS '67, PhD '78, both degrees in astronomy), the Ira S. Bowen Professor of Astronomy at Caltech and the first director of CARMA.

On April 3, CARMA probed the skies for the last time. The project ceased operations and its telescopes will be repurposed and integrated into other survey projects.

Here is a look back at some of CARMA's most significant discoveries and contributions to the field of astronomy.

Planet formation


These CARMA images highlight the range of morphologies observed in circumstellar disks, which may indicate that the disks are in different stages of the planet formation process, or that they are evolving along distinct pathways. The bottom row highlights the disk around the star LkCa 15, where CARMA detected an inner hole 40 AU in diameter. The two-color Keck image (bottom right) reveals an infrared source along the inner edge of this hole. The infrared luminosity is consistent with a planet of roughly six Jupiter masses, which may have cleared the hole.
Credit: CARMA

Newly formed stars are surrounded by a rotating disk of gas and dust, known as a circumstellar disk. These disks provide the building materials for planetary systems like our own solar system, and can contain important clues about the planet formation process.

During its operation, CARMA imaged disks around dozens of young stars such as RY Tau and DG Tau. The observations revealed that circumstellar disks are often larger than our solar system and contain enough material to form Jupiter-size planets. Interestingly, these disks exhibit a variety of morphologies, and scientists think the different shapes reflect different stages or pathways of the planet formation process.

CARMA also helped gather evidence that supported planet formation theories by capturing some of the first images of gaps in circumstellar disks. According to conventional wisdom, planets can form in disks when stars are as young as half a million years old. Computer models show that if these so-called protoplanets are the size of Jupiter or larger, they should carve out gaps or holes in the rings through gravitational interactions with the disk material. In 2012, the team of OVRO executive director John Carpenter reported using CARMA to observe one such gap in the disk surrounding the young star LkCa 15. Observations by the Keck Observatory in Hawaii revealed an infrared source along the inner edge of the gap that was consistent with a planet that has six times the mass of Jupiter.

"Until ALMA"—the Atacama Large Millimeter/submillimeter Array in Chile, a billion-dollar international collaboration involving the United States, Europe, and Japan—"came along, CARMA produced the highest-resolution images of circumstellar disks at millimeter wavelengths," says Carpenter.

Star formation


A color image of the Whirlpool galaxy M51 from the Hubble Space Telescope (HST): a three-color composite of images taken at wavelengths of 4350 Angstroms (blue), 5550 Angstroms (green), and 6580 Angstroms (red). The bright red regions mark sites of recent massive star formation, where ultraviolet photons from the massive stars ionize the surrounding gas, which then radiates hydrogen recombination-line emission. Dark lanes run along the spiral arms, indicating where the dense interstellar medium is abundant.
Credit: Jin Koda

Stars form in "clouds" of gas, consisting primarily of molecular hydrogen, that contain as much as a million times the mass of the sun. "We do not understand yet how the diffuse molecular gas distributed over large scales flows to the small dense regions that ultimately form stars," Carpenter says.

Magnetic fields may play a key role in the star formation process, but obtaining observations of these fields, especially on small scales, is challenging. Using CARMA, astronomers were able to chart the direction of the magnetic field in the dense material that surrounds newly formed protostars by mapping the polarized thermal radiation from dust grains in molecular clouds. A CARMA survey of the polarized dust emission from 29 sources showed that the magnetic fields in the dense gas are randomly aligned with respect to the outflowing gas entrained by jets from the protostars.

If the outflows emerge along the rotation axes of circumstellar disks, as has been observed in a few cases, the results suggest that, contrary to theoretical expectations, the circumstellar disks are not aligned with the fields in the dense gas from which they formed. "We don't know the punch line—are magnetic fields critical in the star formation process or not?—because, as always, the observations just raise more questions," Carpenter admits. "But the CARMA observations are pointing the direction for further observations with ALMA."

Molecular gas in galaxies


CARMA was used to image molecular gas in the nearby Andromeda galaxy. All stars form in dense clouds of molecular gas, so analyzing the properties of molecular clouds is important for understanding star formation.
Credit: Andreas Schruba

The molecular gas in galaxies is the raw material for star formation. "Being able to study how much gas there is in a galaxy, how it's converted to stars, and at what rate is very important for understanding how galaxies evolve over time," Carpenter says.

By resolving the molecular gas reservoirs in local galaxies and measuring the mass of gas in distant galaxies that existed when the cosmos was a fraction of its current age, CARMA made fundamental contributions to understanding the processes that shape the observable universe.

For example, CARMA revealed the evolution, in the spiral galaxy M51, of giant molecular clouds (GMCs) driven by large-scale galactic structure and dynamics. CARMA was used to show that giant molecular clouds grow through coalescence and then break up into smaller clouds that may again come together in the future. Furthermore, the process can occur multiple times over a cloud's lifetime. This new picture of molecular cloud evolution is more complex than previous scenarios, which treated the clouds as discrete objects that dissolved back into the atomic interstellar medium after a certain period of time. "CARMA's imaging capability showed the full cycle of GMCs' dynamical evolution for the first time," Carpenter says.

The Milky Way's black hole

CARMA worked as a standalone array, but it was also able to function as part of very-long-baseline interferometry (VLBI), in which astronomical radio signals are gathered from multiple radio telescopes on Earth to create higher-resolution images than is possible with single telescopes working alone.

In this fashion, CARMA was linked together with the Submillimeter Telescope in Arizona and the James Clerk Maxwell Telescope and Submillimeter Array in Hawaii to paint one of the most detailed pictures to date of the monstrous black hole at the heart of our Milky Way galaxy. The combined observations achieved an angular resolution of 40 microarcseconds—the equivalent of seeing a tennis ball on the moon.

"If you just used CARMA alone, then the best resolution you would get is 0.15 arcseconds. So VLBI improved the resolution by a factor of 3,750," Carpenter says.

Astronomers have used the VLBI technique to successfully detect radio signals emitted from gas orbiting just outside of this supermassive black hole's event horizon, the radius around the black hole where gravity is so strong that even light cannot escape. "These observations measured the size of the emitting region around the black hole and placed constraints on the accretion disk that is feeding the black hole," he explains.

In other work, VLBI observations showed that the black hole at the center of M87, a giant elliptical galaxy, is spinning.

Transients

CARMA also played an important role in following up "transients," objects that unexpectedly burst into existence and then dim and fade equally rapidly (on an astronomical timescale), over periods from seconds to years. Some transients can be attributed to powerful cosmic explosions such as gamma-ray bursts (GRBs) or supernovas, but the mechanisms by which they originate remain unexplained.

"By looking at transients at different wavelengths—and, in particular, looking at them soon after they are discovered—we can understand the progenitors that are causing these bursts," says Carpenter, who notes that CARMA led the field in observations of these events at millimeter wavelengths. Indeed, on April 27, 2013, CARMA detected the millimeter-wavelength emission from the afterglow of GRB 130427A only 18 hours after it first exploded. The CARMA observations revealed a surprise: in addition to the forward-moving shock, there was one moving backward. This "reverse" shock had long been predicted, but never conclusively observed.

Getting data on such unpredictable transient events is difficult at many observatories, because of logistics and the complexity of scheduling. "Targets of opportunity require flexibility on the part of the organization to respond to an event when it happens," says Sterl Phinney (BS '80, astronomy), professor of theoretical astrophysics and executive officer for astronomy and astrophysics at Caltech. "CARMA was excellent for this purpose, because it was so nimble."

Galaxy clusters

Dalmation Drawing

Multi-wavelength view of the redshift z=0.2 cluster MS0735+7421. Left to right: CARMA observations of the SZ effect, X-ray data from Chandra, radio data from the VLA, and a three-color composite of the three. The SZ image reveals a large-scale distortion of the intra-cluster medium coincident with X-ray cavities produced by a massive AGN outflow, an example of the wide dynamic-range, multi-wavelength cluster imaging enabled by CARMA.
Credit: Erik Leitch (University of Chicago, Owens Valley Radio Observatory)

Galaxy clusters are the largest gravitationally bound objects in the universe. CARMA studied galaxy clusters by taking advantage of a phenomenon known as the Sunyaev-Zel'dovich (SZ) effect. The SZ effect results when primordial radiation left over from the Big Bang, known as the cosmic microwave background (CMB), is scattered to higher energies after interacting with the hot ionized gas that permeates galaxy clusters. Using CARMA, astronomers recently confirmed galaxy cluster candidates at redshifts of 1.75 and 1.9, making them the two most distant clusters for which an SZ effect has been measured.

"CARMA can detect the distortion in the CMB spectrum," Carpenter says. "We've observed over 100 clusters at very good resolution. These data have been very important to calibrating the relation between the SZ signal and the cluster mass, probing the structure of clusters, and helping discover the most distant clusters known in the universe."

Training the next generation

In addition to its many scientific contributions, CARMA also served as an important teaching facility for the next generation of astronomers. About 300 graduate students and postdoctoral researchers have cut their teeth on interferometric astronomy at CARMA over the years. "They were able to get hands-on experience in millimeter-wave astronomy at the observatory, something that is becoming more and more rare these days," Sargent says.

Tom Soifer (BS '68, physics), professor of physics and Kent and Joyce Kresa Leadership Chair of the Division of Physics, Mathematics and Astronomy, notes that many of those trainees now hold prestigious positions at the National Radio Astronomy Observatory (NRAO) or are professors at universities across the country, where they educate future scientists and engineers and help with the North American ALMA effort. "The United States is currently part of a tripartite international collaboration that operates ALMA. Most of the North American ALMA team trained either at CARMA or the Caltech OVRO Millimeter Array, CARMA's precursor," he says.

Looking ahead

Following CARMA's shutdown, the Cedar Flat site will be restored to its prior condition, and the telescopes will be moved to OVRO. Although the astronomers closest to the observatory find the closure disappointing, Phinney takes a broader view, seeing the shutdown as part of the steady march of progress in astronomy. "CARMA was the cutting edge of high-frequency astronomy for the past decade. Now that mantle has passed to the global facility called ALMA, and Caltech will take on new frontiers."

Indeed, Caltech continues to push the technological frontier of astronomy through other projects. For example, Caltech Assistant Professor of Astronomy Greg Hallinan is leading the effort to build a Long Wavelength Array (LWA) station at OVRO that will instantaneously image the entire viewable sky every few seconds at low radio frequencies to search for radio transients.

The success of CARMA and OVRO, Soifer says, gives him confidence that the LWA will also be successful. "We have a tremendously capable group of scientists and engineers. If anybody can make this challenging enterprise work, they can."


Yeast Protein Network Could Provide Insights into Human Obesity

A team of biologists and a mathematician have identified and characterized a network composed of 94 proteins that work together to regulate fat storage in yeast.

"Removal of any one of the proteins results in an increase in cellular fat content, which is analogous to obesity," says study coauthor Bader Al-Anzi, a research scientist at Caltech.

The findings, detailed in the May issue of the journal PLOS Computational Biology, suggest that yeast could serve as a valuable test organism for studying human obesity.

"Many of the proteins we identified have mammalian counterparts, but detailed examinations of their role in humans has been challenging," says Al-Anzi. "The obesity research field would benefit greatly if a single-cell model organism such as yeast could be used—one that can be analyzed using easy, fast, and affordable methods."

Using genetic tools, Al-Anzi and his research assistant Patrick Arpp screened a collection of about 5,000 different mutant yeast strains and identified 94 genes that, when removed, produced yeast with increases in fat content, as measured by quantitating fat bands on thin-layer chromatography plates. Other studies have shown that such "obese" yeast cells grow more slowly than normal, an indication that in yeast as in humans, too much fat accumulation is not a good thing. "A yeast cell that uses most of its energy to synthesize fat that is not needed does so at the expense of other critical functions, and that ultimately slows down its growth and reproduction," Al-Anzi says.

When the team looked at the protein products of the genes, they discovered that those proteins physically bind to one another, forming an extensive, highly clustered network within the cell.

Such a configuration cannot be generated through a random process, say study coauthors Sherif Gerges, a bioinformatician at Princeton University, and Noah Olsman, a graduate student in Caltech's Division of Engineering and Applied Science, who independently evaluated the details of the network. Both concluded that the network must have formed as the result of evolutionary selection.

In human-scale networks, such as the Internet, power grids, and social networks, the most influential or critical nodes are often, but not always, those that are the most highly connected.

The team wondered whether the fat-storage network exhibits this feature, and, if not, whether some other characteristics of the nodes would determine which ones were most critical. Then, they could ask if removing the genes that encode the most critical nodes would have the largest effect on fat content.

To examine this hypothesis further, Al-Anzi sought out the help of a mathematician familiar with graph theory, the branch of mathematics that considers the structure of nodes connected by edges, or pathways. "When I realized I needed help, I closed my laptop and went across campus to the mathematics department at Caltech," Al-Anzi recalls. "I walked into the only office door that was open at the time, and introduced myself."

The mathematician that Al-Anzi found that day was Christopher Ormerod, a Taussky–Todd Instructor in Mathematics at Caltech. Al-Anzi's data piqued Ormerod's curiosity. "I was especially struck by the fact that connections between the proteins in the network didn't appear to be random," says Ormerod, who is also a coauthor on the study. "I suspected there was something mathematically interesting happening in this network."

With Ormerod's help, the team created a computer model suggesting that the yeast fat-storage network exhibits what is known as the small-world property. This is akin to a social network made up of many local clusters of people who are linked to each other by mutual acquaintances, so that any person in the network can be reached from any other person through a small number of steps.

This pattern is also seen in a well-known network model in graph theory, called the Watts-Strogatz model. The model was originally devised to explain the clustering phenomenon often observed in real networks, but had not previously been applied to cellular networks.
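
As a rough illustration, the following sketch uses the networkx library (an assumption, not the authors' code) to generate a Watts-Strogatz graph with 94 nodes, echoing the study's 94 proteins, and to print the two signatures of a small-world network.

    import networkx as nx  # assumed to be available; not the authors' code

    # Watts-Strogatz graph: 94 nodes, each initially linked to 6 neighbors,
    # with a 10 percent chance of rewiring each edge (illustrative parameters).
    G = nx.connected_watts_strogatz_graph(n=94, k=6, p=0.1, seed=1)

    # A small-world network pairs high local clustering with short paths
    # between any two nodes.
    print("average clustering coefficient:", round(nx.average_clustering(G), 3))
    print("average shortest path length:", round(nx.average_shortest_path_length(G), 3))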

Ormerod suggested that graph theory might be used to make predictions that could be experimentally proven. For example, graph theory says that the most important nodes in the network are not necessarily the ones with the most connections, but rather those that have the most high-quality connections. In particular, nodes having many distant or circuitous connections are less important than those with more direct connections to other nodes, and, especially, direct connections to other important nodes. In mathematical jargon, these important nodes are said to have a high "centrality score."

"In network analysis, the centrality of a node serves as an indicator of its importance to the overall network," Ormerod says.

"Our work predicts that changing the proteins with the highest centrality scores will have a bigger effect on network output than average," he adds. And indeed, the researchers found that the removal of proteins with the highest predicted centrality scores produced yeast cells with a larger fat band than in yeast whose less-important proteins had been removed.

The use of centrality scores to gauge the relative importance of a protein in a cellular network is a marked departure from how proteins traditionally have been viewed and studied—that is, as lone players, whose characteristics are individually assessed. "It was a very local view of how cells functioned," Al-Anzi says. "Now we're realizing that the majority of proteins are parts of signaling networks that perform specific tasks within the cell."

Moving forward, the researchers think their technique could be applicable to protein networks that control other cellular functions—such as abnormal cell division, which can lead to cancer.

"These kinds of methods might allow researchers to determine which proteins are most important to study in order to understand diseases that arise when these functions are disrupted," says Kai Zinn, a professor of biology at Caltech and the study's senior author. "For example, defects in the control of cell growth and division can lead to cancer, and one might be able to use centrality scores to identify key proteins that regulate these processes. These might be proteins that had been overlooked in the past, and they could represent new targets for drug development."

Funding support for the paper, "Experimental and Computational Analysis of a Large Protein Network That Controls Fat Storage Reveals the Design Principles of a Signaling Network," was provided by the National Institutes of Health.
