New Approach Holds Promise for Earlier, Easier Detection of Colorectal Cancer

Caltech chemists develop a technique that could one day lead to early detection of tumors

Chemists at Caltech have developed a sensitive new technique capable of detecting colorectal cancer in tissue samples—a method that could one day be used in clinical settings for early diagnosis of the disease.

Colorectal cancer is the third most prevalent cancer worldwide and is estimated to cause about 700,000 deaths every year. Metastasis due to late detection is one of the major causes of mortality from this disease; therefore, a sensitive and early indicator could be a critical tool for physicians and patients.

A paper describing the new detection technique currently appears online in Chemistry & Biology and will be published in the July 23 issue of the journal's print edition. Caltech graduate student Ariel Furst (PhD '15) and her adviser, Jacqueline K. Barton, the Arthur and Marian Hanisch Memorial Professor of Chemistry, are the paper's authors.

"Currently, the average biopsy size required for a colorectal biopsy is about 300 milligrams," says Furst. "With our experimental setup, we require only about 500 micrograms of tissue, which could be taken with a syringe biopsy versus a punch biopsy. So it would be much less invasive." One microgram is one thousandth of a milligram.
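The size comparison Furst quotes is easy to check with a quick unit conversion (a back-of-the-envelope snippet, not part of the study):

```python
# Back-of-the-envelope check of the biopsy sizes quoted above.
punch_biopsy_mg = 300     # typical colorectal punch biopsy, in milligrams
assay_sample_ug = 500     # tissue the new assay requires, in micrograms

# One microgram is one thousandth of a milligram, so convert to micrograms:
punch_biopsy_ug = punch_biopsy_mg * 1000

reduction = punch_biopsy_ug / assay_sample_ug
print(f"The assay needs {reduction:.0f}x less tissue than a punch biopsy")
```

That is, the new assay requires roughly 600 times less tissue than a conventional punch biopsy.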

The researchers zeroed in on the activity of a protein called DNMT1 as a possible indicator of a cancerous transformation. DNMT1 is a methyltransferase, an enzyme responsible for DNA methylation—the addition of a methyl group to one of DNA's bases. This essential and normal process is an epigenetic mechanism that primarily turns genes off; when it goes awry, however, it has also recently been identified as an early indicator of cancer, especially of developing tumors.

When all is working well, DNMT1 maintains the normal methylation pattern set in the embryonic stages, copying that pattern from the parent DNA strand to the daughter strand. But sometimes DNMT1 goes haywire, and methylation goes into overdrive, causing what is called hypermethylation. Hypermethylation can lead to the repression of genes that typically do beneficial things, like suppress the growth of tumors or express proteins that repair damaged DNA, and that, in turn, can lead to cancer.

Building on previous work in Barton's group, Furst and Barton devised an electrochemical platform to measure the activity of DNMT1 in crude tissue samples—that is, samples containing all of the material from a tissue, not just purified DNA or RNA. Fundamentally, the design of this platform is based on the concept of DNA-mediated charge transport—the idea that DNA can behave like a wire, allowing electrons to flow through it, and that the conductivity of that DNA wire is extremely sensitive to mistakes in the DNA itself. Barton earned the 2010 National Medal of Science for her work establishing this field of research and has demonstrated that it can be used not only to locate DNA mutations but also to detect the presence of proteins such as DNMT1 that bind to DNA.

In the present study, Furst and Barton started with two arrays of gold electrodes—one atop the other—embedded in Teflon blocks and separated by a thin spacer that formed a well for solution. They attached strands of DNA to the lower electrodes, then added the broken-down contents of a tissue sample to the solution well. After allowing time for any DNMT1 in the tissue sample to methylate the DNA, they added a restriction enzyme that severed the DNA if no methylation had occurred—i.e., if DNMT1 was inactive. When they applied a current to the lower electrodes, the samples with DNMT1 activity passed the current clear through to the upper electrodes, where the activity could be measured. 

"No methylation means cutting, which means the signal turns off," explains Furst. "If the DNMT1 is active, the signal remains on. So we call this a signal-on assay for methylation activity. But beyond on or off, it also allows us to measure the amount of activity." This assay for DNMT1 activity was first developed in Barton's group by Natalie Muren (PhD '13).
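The on/off logic Furst describes can be sketched as a toy function (purely illustrative; the names below are invented for this sketch, not taken from the paper):

```python
def assay_signal(dnmt1_active: bool) -> bool:
    """Toy model of the signal-on assay described above.

    Active DNMT1 methylates the DNA on the lower electrode; methylated DNA
    resists the restriction enzyme, stays intact, and conducts current to
    the upper electrode (signal on). Without DNMT1 activity, the
    unmethylated DNA is cut and the signal turns off.
    """
    methylated = dnmt1_active       # DNMT1 activity -> methylation
    cut_by_enzyme = not methylated  # enzyme severs only unmethylated DNA
    return not cut_by_enzyme        # intact DNA conducts: signal on

print(assay_signal(True))   # True -> signal on (methylation activity present)
print(assay_signal(False))  # False -> signal off (DNA cut, circuit broken)
```

The real assay goes beyond this binary picture: the magnitude of the measured current reports the amount of DNMT1 activity, not just its presence.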

Using the new setup, the researchers measured DNMT1 activity in 10 pairs of human tissue samples, each composed of a colorectal tumor sample and adjacent healthy tissue from the same patient. When they compared the samples within each pair, they consistently found significantly higher DNMT1 activity—hypermethylation—in the tumorous tissue. Notably, they found little correlation between the amount of DNMT1 in the samples and the presence of cancer; the correlation was with activity.

"The assay provides a reliable and sensitive measure of hypermethylation," says Barton, also the chair of the Division of Chemistry and Chemical Engineering. "It looks like hypermethylation is a good indicator of tumorigenesis, so this technique could provide a useful route to early detection of cancer when hypermethylation is involved."

Looking to the future, Barton's group hopes to use the same general approach in devising assays for other DNA-binding proteins and possibly using the sensitivity of their electrochemical devices to measure protein activities in single cells. Such a platform might even open up the possibility of inexpensive, portable tests that could be used in the home to catch colorectal cancer in its earliest, most treatable stages.

The work described in the paper, "DNA Electrochemistry shows DNMT1 Methyltransferase Hyperactivity in Colorectal Tumors," was supported by the National Institutes of Health. 

Kimm Fesenmaier

Discovering a New Stage in the Galactic Lifecycle

On its own, dust seems fairly unremarkable. However, by observing the clouds of gas and dust within a galaxy, astronomers can determine important information about the history of star formation and the evolution of galaxies. Now, thanks to the unprecedented sensitivity of the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile, a Caltech-led team has been able to observe the dust contents of galaxies as they appeared just 1 billion years after the Big Bang—an epoch corresponding to redshifts of 5 to 6. These are the earliest average-sized galaxies ever to be directly observed and characterized in this way.

The work is published in the June 25 edition of the journal Nature.

Dust in galaxies is created by the elements released during the formation and collapse of stars. Although the most abundant elements in the universe—hydrogen and helium—were created by the Big Bang, stars are responsible for making all of the heavier elements in the universe, such as carbon, oxygen, nitrogen, and iron. And because young, distant galaxies have had less time to make stars, these galaxies should contain less dust. Previous observations had suggested this, but until now nobody could directly measure the dust in these faraway galaxies.

"Before we started this study, we knew that stars formed out of these clouds of gas and dust, and we knew that star formation was probably somehow different in the early universe, where dust is likely less common. But the previous information only really hinted that the properties of the gas and the dust in earlier galaxies were different than in galaxies we see around us today. We wanted to find data that showed that," says Peter Capak, a staff scientist at the Infrared Processing and Analysis Center (IPAC) at Caltech and the first author of the study.

Armed with the high sensitivity of ALMA, Capak and his colleagues set out to perform a direct analysis of the dust in these very early galaxies.

Young, faraway galaxies are often difficult to observe because they appear very dim from Earth. Previous observations of these young galaxies, which formed just 1 billion years after the Big Bang, were made with the Hubble Space Telescope and the W. M. Keck Observatory—both of which detect light in the near-infrared and visible bands of the electromagnetic spectrum. The color of these galaxies at these wavelengths can be used to make inferences about the dust—for example, galaxies that appear bluer in color tend to have less dust, while those that are red have more dust. However, other factors, such as the age of the stars and the galaxy's distance from Earth, can mimic the effects of dust, making it difficult to understand exactly what the color means.

The researchers began by first analyzing these early galaxies with the Keck Observatory. Keck confirmed that the galaxies lie at redshifts greater than 5—verifying that they were at least as young as they had previously been thought to be. The researchers then observed the same galaxies with ALMA, detecting light at the longer millimeter and submillimeter wavelengths. The ALMA readings provided a wealth of information that could not be seen with visible-light telescopes, including details about the dust and gas content of these very early galaxies.

Capak and his colleagues were able to use ALMA to—for the first time—directly view the dust and gas clouds of nine average-sized galaxies during this epoch. Specifically, they focused on a feature called the [C II] spectral line, which comes from singly ionized carbon in the gas around newly formed stars. The carbon line itself traces this gas, while the data collected around the line trace the so-called continuum emission, which provides a measurement of the dust. The researchers knew that the carbon line was bright enough to be seen in mature, dust-filled nearby galaxies, so they reasoned that the line would be even brighter if there were indeed less dust in the young, faraway galaxies.

Using the carbon line, their results confirmed what had previously been suggested by the data from Hubble and Keck: these early galaxies contained, on average, 12 times less dust than galaxies from 2 billion years later (at a redshift of approximately 4).

"In galaxies like our Milky Way or nearby Andromeda, all of the stars form in very dusty environments, so more than half of the light that is observed from young stars is absorbed by the dust," Capak says. "But in these faraway galaxies we observed with ALMA, less than 20 percent of the light is being absorbed. In the local universe, only very young galaxies and very odd ones look like that. So what we're showing is that the normal galaxy at these very high redshifts doesn't look like the normal galaxy today. Clearly there is something different going on."
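The figures in the two paragraphs above can be restated side by side (values quoted directly from the text; this is an illustration, not new data):

```python
# Figures quoted in the article, restated for comparison (illustrative only).
dust_ratio = 12          # early (z ~ 5-6) galaxies held ~12x less dust than
                         # galaxies observed 2 billion years later (z ~ 4)

absorbed_local = 0.50    # Milky Way-like galaxies: more than half of the
                         # light from young stars is absorbed by dust
absorbed_early = 0.20    # ALMA targets at z ~ 5-6: less than 20 percent

escaping_local = 1 - absorbed_local
escaping_early = 1 - absorbed_early
print(f"Relative dust content, early vs. later epoch: 1/{dust_ratio}")
print(f"Starlight escaping the dust: {escaping_early:.0%} vs. {escaping_local:.0%}")
```

In other words, roughly 80 percent of the young starlight escapes these early galaxies, versus about half or less in mature galaxies like our own.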

That "something different" gives astronomers like Capak a peek into the lifecycle of galaxies. Galaxies form because gas and dust are present and eventually turn into stars—which then die, creating even more gas and dust, and releasing energy. Because it is impossible to watch this evolution from young galaxy to old galaxy happen in real time on the scale of a human lifespan, the researchers use telescopes like ALMA to take a survey of galaxies at different evolutionary stages. Capak and his colleagues believe that this lack of dust in early galaxies signifies a never-before-seen evolutionary stage for galaxies.

"This result is really exciting. It's the first time that we're seeing the gas that the stars are forming out of in the early universe. We are starting to see the transition from just gas to the first generation of galaxies to more mature systems like those around us today. Furthermore, because the carbon line is so bright, we can now easily find even more distant galaxies that formed even longer ago, sooner after the Big Bang," Capak says.

Lin Yan, a staff scientist at IPAC and a coauthor on the paper, says that the results are also especially important because they represent typical early galaxies. "Galaxies come in different sizes. Earlier observations could only spot the largest or the brightest galaxies, and those tend to be very special—they actually appear very rarely in the population," she says. "Our findings tell you something about a typical galaxy in that early epoch, so the results describe the population as a whole, not just special cases."

Yan says that the ability to analyze the properties of these and even earlier galaxies will only expand with ALMA's newly completed capabilities. During the study, ALMA was operating with only 20 of its antennas; now that the array is complete with 66 antennas, its capability to see and analyze distant galaxies will be further improved, Yan adds.

"This is just an initial observation, and we've only just started to peek into this really distant universe at redshift of a little over 5. An astronomer's dream is basically to go as far distant as we can. And when it's complete, we should be able to see all the distant galaxies that we've only ever dreamed of seeing," she says.

The findings are published in a paper titled "Galaxies at redshifts 5 to 6 with systematically low dust content and high [C II] emission." The work was supported by funds from NASA and the European Union's Seventh Framework Program. Nick Scoville, the Francis L. Moseley Professor of Astronomy, was an additional coauthor. In addition to Keck, Hubble, and ALMA data, observations from the Spitzer Space Telescope were used to measure the stellar masses and ages of the galaxies in the study. Coauthors and collaborators from other institutions include C. Carilli, G. Jones, C.M. Casey, D. Riechers, K. Sheth, C.M. Carollo, O. Ilbert, A. Karim, O. LeFevre, S. Lilly, and V. Smolcic.


Voting Rights: A Conversation with Morgan Kousser

Three years ago this week, the U.S. Supreme Court ruled unconstitutional a key provision of the Voting Rights Act (VRA), which was enacted in 1965 and extended four times since then by Congress. Section 5 of the act required certain "covered" jurisdictions in the Deep South and in states and counties outside the Deep South that had large populations of Hispanics and Native Americans to obtain "pre-clearance" from the Justice Department or the U.S. District Court in the District of Columbia before changing any election law. The provision was designed to prevent election officials from replacing one law that had been declared to be racially discriminatory with a different but still discriminatory law. A second provision, Section 4(b), contained the formula for coverage.

The VRA, notes Morgan Kousser, the William R. Kenan, Jr., Professor of History and Social Science, has been "very effective. You went from 7 percent of the black voters in Mississippi being registered to vote to 60 percent within three or four years. That was just an amazing change. Even more amazing, Section 5 was flexible enough to prevent almost every kind of new discriminatory technique or device over a period of nearly 50 years." For instance, Kousser notes, "when white supremacists in Mississippi saw that African Americans would soon comprise majorities in some state or local legislative districts, they merged the districts to preserve white majorities everywhere. But Section 5 stopped this runaround and allowed the new black voters real democracy. Voting rights was the one area in which federal law came close to eliminating the country's long, sad history of racial discrimination."

But on June 25, 2013, in a landmark ruling in Shelby County v. Holder, the Court overturned Section 4(b), effectively dismantling Section 5. Without a formula that defines covered jurisdictions, no area falls within the scope of Section 5. Chief Justice John Roberts, writing the 5–4 majority opinion, argued that although the original coverage formula "made sense," it was now outdated, based on "decades-old data and eradicated practices." Asserting that voter turnout and registration rates in covered jurisdictions are nearly equal for whites and African Americans, Roberts also noted that "blatantly discriminatory evasions of federal decrees are rare. And minority candidates hold office at unprecedented levels."

The decision, says Kousser, was wrong. In a comprehensive study recently published in the journal Transatlantica, Kousser—with the help of three Caltech students who worked on the study through Summer Undergraduate Research Fellowship (SURF) projects—examined more than 4,000 successful voting-rights cases around the country, as well as Justice Department inquiries and settlements and changes to laws made in response to the threat of lawsuits. More than 90 percent, they found, occurred in the covered jurisdictions—indicating, Kousser says, that the coverage scheme was still working very well.

The study found that—even when excluding all of the actions brought under Section 5 of the VRA, and only looking at those that can be brought anywhere in the country—83.2 percent of successful cases originated in covered jurisdictions. This shows, Kousser says, that whatever the coverage formula measured, it still captured the "overwhelming number of instances of proven racial discrimination in elections."

We talked with Kousser about the ruling and his findings—and how this constitutional law scholar made his way to Caltech.


Why do you think Justice Roberts and the other justices in the majority ruled the way they did?

He had a sense that there had been a lot of cases outside of the covered jurisdictions. But if you look at all of the data, you see that the coverage scheme captures 94 percent of all of the cases and other events that took place from 1957 through 2013 and an even larger proportion up to 2006. Suppose that you were a stockbroker, and you could make a decision that was right 94 percent of the time. Your clients would be very, very wealthy. No one would be dissatisfied with you. That's what the congressional coverage scheme did.

I wish very much that I had finished this paper two years earlier and that the data would have been published in a scholarly journal or at least made available in a pre-print by the time that the decision was cooking up. That was a mistake on my part. I should have let it out into the world a little earlier. Sometimes I have a fantasy that if this had been shown to the right justices at the right time, maybe they would have decided differently.


The Court did not rule on the VRA in general—but said that the coverage formula is outdated because voting discrimination is not as bad as it once was. Do you agree?

This is one of the reasons that I looked at the coverage of the California Voting Rights Act (CVRA), passed in 2002. In Section 2 of the National VRA, you have to prove what is called the "totality of the circumstances." You have to prove not only that voting is racially polarized and that there is a kind of election structure used for discrimination, but also show that there is a history of discrimination in the area, that there are often special informal procedures that go against minorities, and a whole series of other things. A Section 2 case is quite difficult to prove.

The CVRA attempted to simplify those circumstances so all you have to show is that there is racially polarized voting, usually shown by a statistical analysis of how various groups voted, and that there is a potentially discriminatory electoral structure, particularly at-large elections for city council, for school board, for community college district, and so on.

The CVRA, in effect, only became operative in 2007 after some preliminary litigation. And in 2007, after the city of Modesto settled a long-running lawsuit, lawyers for the successful plaintiffs presented the city with a bill for about $3 million. This scared jurisdictions throughout California, which were faced with the potential of paying out large amounts of money if they had racially polarized voting. Again and again, you suddenly saw jurisdictions settling short of going to trial and a lot of Hispanics elected to particular boards. This has changed about 100 or 125 local boards throughout California from holding their elections at-large to holding them by sub-districts, which allow geographically segregated minorities to elect candidates of their choice. If you graph that over time, you see a huge jump in the number of successful CVRA cases after 2007. What does this mean? Does it mean that there was suddenly a huge increase in discrimination? No, it means that there was a tool that allowed the discrimination that had previously existed to be legally identified.

If we had that across the country, and it was easier to bring cases, you would expose a lot more discrimination. That's my argument.


Do you think the coverage plan will be restored?

If there were hearings and an assessment of this scheme or any other potentially competing schemes, then Congress might decide on a new coverage scheme. If the bill was passed, it would go back up to the U.S. Supreme Court, and maybe the Court would be more interested in the actual empirical evidence instead of simply guessing what they thought might have existed. But I think right now the possibilities of getting any changes through the Congress are zero.

I would like to see some small changes in the coverage scheme, but they have to be made on the basis of evidence. Just throwing out the whole thing because allegedly it didn't fit anymore is an irrational way to make public policy.


As a professor of history, do you think it is your responsibility to help change policy?

Well, it has been interesting to me from the very beginning. Let me tell you how I got started in voting rights cases. My doctoral dissertation was on the disfranchisement of blacks and poor whites in the South in the late 19th and early 20th centuries. In about 1979, a lawyer who was cooperating with the ACLU [American Civil Liberties Union] in Birmingham, Alabama, called me up—I didn't know who he was—and he said, "Do you have an opinion about whether section 201 of the Alabama constitution of 1901 was adopted with a racially discriminatory purpose?" I said, "I do. I've studied that. I think it was adopted with a racially discriminatory purpose."

Writing expert witness reports and testifying in cases are exactly like what I have always done as a scholar. I have looked at the racially discriminatory effects of laws; I have looked at the racially discriminatory intent of laws. I have examined them by looking at a lot of evidence. I write very long papers for these cases. They are scholarly publications, and whether they relate to something that happened 100 years ago or something that happened five years ago or yesterday doesn't really, in principle, seem to make any difference.


How did you get started as a historian studying politics?

Well, I'm old. I grew up in the South during the period of segregation, but just as it was breaking down. When I was a junior in high school, the sit-ins took place in Nashville, Tennessee, which is where I'm from. I was sympathetic. I never liked segregation. I was always in favor of equal rights.

I had been fascinated by politics from the very beginning. By the time I was 8 or 9 years old, I was reading two newspapers a day. One was a very conservative newspaper, pro-segregation, and the other paper was a liberal newspaper, critical of segregation. They both covered politics. And if you read news stories in each about the same event on the same day, you'd get a completely different slant. It was a wonderful training for a historian. From reading two newspapers that I knew to be biased, one in one direction, the other in another direction, I had to try to figure out what was happening and what I should believe to be fact.


How did you end up at Caltech?

To be very frank, Yale, where I was a graduate student, didn't want me around anymore. When I was there, I started a graduate student senate. I wrote its constitution, and I served as its first president. We were obnoxious. This was in 1967 and 1968, and students were revolting around the country, trying to bring an end to the war in Vietnam, trying to stop racial discrimination, trying to change the world. I had less lofty aims.


Such as?

There was no bathroom for women in the hall of graduate studies where the vast majority of humanities and social sciences classes took place. We made a nonnegotiable demand for a bathroom for women. Yale was embarrassed. Yale granted our request. We did other things. We protested against a rent increase in graduate student married housing. Yale couldn't justify the increase and gave way. We formed a committee to get women equal access to the Yale swimming pools. Yale opened the pool.



In addition to doing research, you are an acclaimed teacher at Caltech—the winner of Caltech's highest teaching honor, the Feynman Prize, in 2011. Do you think of yourself as more of a teacher or as a scholar?

I really like to do both. I can't avoid teaching. If you look at my scholarship, a lot of it is really in teaching format. I would like to school Chief Justice Roberts on what he had done wrong and to persuade him, convince him, that he should change his mind on this. A lot of my friends who are at my advanced age have quit teaching, because they can't take it anymore. When the term is over, they are jubilant.

I'm always sad when the term ends, particularly with my Supreme Court class, because the classes are small, so I know each individual student pretty well. I hate to say goodbye to them.


Do any particular students stand out in your mind?

I had one student who took my class in 2000. He was a computer science major. We used to talk a lot. We disagreed about practically everything politically, but he was a very nice and very intelligent guy.

When he finished the class, he decided that he would go to work for Microsoft. He did that for three years. Then he decided he wanted to go to law school, where he did very well; he clerked for an appeals court judge and he clerked for a Supreme Court justice. This spring, he argued his first case before the U.S. Supreme Court. The case that he argued was very complicated. I don't understand it, I don't understand the issues, I don't understand the precedents. It's relatively obscure, and it won't make big headlines. But he did it, and he's promised me that he'll share his impressions of being on that stage and that I can pass them on to current Caltech students. I know that they will find his experience as exciting as I will—a Techer arguing a case before the Supreme Court within 15 years of graduating from college! I can't quit teaching.


JPL News: NASA Joins North Sea Oil Cleanup Training Exercise

NASA participated for the first time in Norway's annual oil spill cleanup exercise in the North Sea on June 8 through 11. Scientists flew a specialized NASA airborne instrument called the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) on NASA's C-20A piloted research aircraft to monitor a controlled release of oil into the sea, testing the radar's ability to distinguish between more and less damaging types of oil slicks.

Read the full story from JPL News


Injured Jellyfish Seek to Regain Symmetry

Self-repair is extremely important for living things. Get a cut on your finger and your skin can make new cells to heal the wound; lose your tail—if you are a particular kind of lizard—and tissue regeneration may produce a new one. Now, Caltech researchers have discovered a previously unknown self-repair mechanism—the reorganization of existing anatomy to regain symmetry—in a certain species of jellyfish.

The results are published in the June 15 online edition of the journal Proceedings of the National Academy of Sciences (PNAS).

Many marine animals, including some jellyfish, can rapidly regenerate tissues in response to injury, and this trait is important for survival. If a sea turtle takes a bite out of a jellyfish, the injured animal can quickly grow new cells to replace the lost tissue. In fact, a jellyfish-like animal called the hydra is a very commonly used model organism in studies of regeneration.

But Caltech assistant professor of biology Lea Goentoro, along with graduate student Michael Abrams and associate research technician Ty Basinger, was interested in another organism, the moon jellyfish (Aurelia aurita). Abrams, Basinger, and Goentoro, lead authors of the PNAS study, wanted to know whether the moon jellyfish would respond to injuries in the same manner as an injured hydra. The team focused on the jellyfish's juvenile, or ephyra, stage, because the ephyra's simple body plan—a disk-shaped body with eight symmetrical arms—would make any tissue regeneration clearly visible.

To simulate injury—like that caused by a predator in the wild—the team performed amputations on anesthetized ephyra, producing animals with two, three, four, five, six, or seven arms, rather than the usual eight. They then returned the jellyfish to their habitat of artificial seawater, and monitored the tissue response.

Although wounds healed up as expected, with the tissue around the cut closing up in just a few hours, the researchers noticed something unexpected: the jellyfish were not regenerating tissues to replace the lost arms. Instead, within the first two days after the injury, the ephyra had reorganized its existing arms to be symmetrical and evenly spaced around the animal's disklike body. This so-called resymmetrization occurred whether the animal had as few as two limbs remaining or as many as seven, and the process was observed in three additional species of jellyfish ephyra.

"This is a different strategy of self-repair," says Goentoro. "Some animals just heal their wounds, other animals regenerate what is lost, but the moon jelly ephyrae don't regenerate their lost limbs. They heal the wound, but then they reorganize to regain symmetry."

There are several reasons why symmetry might be more important to the developing jellyfish than regenerating a lost limb. Jellyfish and many other marine animals, such as sea urchins, sea stars, and sea anemones, have what is known as radial symmetry: although their bodies have a distinct top and bottom, they do not have the distinguishable left and right sides—an arrangement known as bilateral symmetry—found in humans and other higher life forms. And this radial symmetry is essential to how the jellyfish moves and eats, first author Abrams says.

"Jellyfish move by 'flapping' their arms; this allows for propulsion through the water, which also moves water—and food—past the mouth," he says. "As they are swimming, a boundary layer of viscous—that is, thick—fluid forms between their arms, creating a continuous paddling surface. And you can imagine how this paddling surface would be disturbed if you have a big gap between the arms."

Maintaining symmetry appears to be vital for more than just propulsion and feeding, the researchers found. In the few cases in which injured animals did not symmetrize—only about 15 percent of the injured animals they studied—the asymmetrical ephyrae also could not develop into normal adult jellyfish, called medusae.

The researchers next wanted to figure out how the new self-repair mechanism works. Cell proliferation and cell death are commonly involved in tissue regeneration and injury response, but, the team found, the amputee jellyfish were neither making new cells nor killing existing cells as they redistributed their existing arms around their bodies.

Instead, the mechanical forces created by the jellyfish's own muscle contractions were essential for symmetrization. In fact, when muscle relaxants were added to the seawater surrounding an injured jellyfish, slowing the animal's muscle contractions, the symmetrization of the intact arms also was slowed down. In contrast, a reduction in the amount of magnesium in the artificial seawater sped up the rate at which the jellyfish pulsed their muscles, and these faster muscle contractions increased the symmetrization rate.

"Symmetrization is a combination of the mechanical forces created by the muscle contractions and the viscoelastic jellyfish body material," Abrams says. "The cycle of contraction and the viscoelastic response from the jellyfish tissues leads to reorganization of the body. You can imagine that in the absence of symmetry, the mechanical forces are unbalanced, but over time, as the body and arms reorganize, the forces rebalance."

To test this idea, the team collaborated with coauthor Chin-Lin Guo, from Academia Sinica in Taiwan, to build a mathematical model, and succeeded in simulating the symmetrization process.
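
The force-rebalancing intuition can be captured in a toy sketch (an illustrative assumption, not the authors' actual model): treat each arm as an angle on a circle and repeatedly nudge it toward the larger of its two neighboring gaps, standing in for the unbalanced mechanical forces. The arms then relax toward even spacing.

```python
import math

def symmetrize(angles, steps=2000, rate=0.05):
    """Toy relaxation: nudge each arm toward the larger of its two
    neighboring gaps, a stand-in for unbalanced muscle forces."""
    n = len(angles)
    angles = sorted(a % (2 * math.pi) for a in angles)
    for _ in range(steps):
        moved = []
        for i in range(n):
            prev_gap = (angles[i] - angles[i - 1]) % (2 * math.pi)
            next_gap = (angles[(i + 1) % n] - angles[i]) % (2 * math.pi)
            # Net "force" is proportional to the imbalance of adjacent gaps.
            moved.append((angles[i] + rate * (next_gap - prev_gap)) % (2 * math.pi))
        angles = sorted(moved)
    return angles

# An injured ephyra: six arms crowded into half of the bell.
arms = symmetrize([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
gaps = [(arms[(i + 1) % 6] - arms[i]) % (2 * math.pi) for i in range(6)]
```

Starting from arms bunched into half the bell, the gaps converge to 2π/6 radians each—that is, even spacing—without adding or removing any arms, mirroring the cell-free reorganization described above.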

In addition to adding to our understanding about self-repair mechanisms, the discovery could help engineers design new biomaterials, Goentoro says. "Symmetrization may provide a new avenue for thinking about biomaterials that could be designed to 'heal' by regaining functional geometry rather than regenerating precise shapes," she says. "Other self-repair mechanisms require cell proliferation and cell death—biological processes that aren't easily translated to technology. But we can more easily apply mechanical forces to a material."

And the impact of mechanical forces on development is being increasingly studied in a variety of organisms, Goentoro says. "Recently, mechanical forces have been increasingly found to play a role in development and tissue regulation," she says. "So the symmetrization process in Aurelia, with its simple geometry, lends itself as a good model system where we can study how mechanical forces play a role in morphogenesis."

These results are published in a paper titled "Self-repairing symmetry in jellyfish through mechanically driven reorganization." In addition to Abrams, Basinger, Goentoro, and Guo, former SURF student William Yuan from the University of Oxford was also a coauthor. Jellyfish were provided by the Cabrillo Marine Aquarium and the Monterey Bay Aquarium. John Dabiri, professor of aeronautics and bioengineering, provided discussions and suggestions throughout the study. Abrams is funded by the Graduate Research Fellowship Program of the National Science Foundation.

Home Page Title: 
Injured Jellyfish Seek to Regain Symmetry
Exclude from News Hub: 
News Type: 
Research News
Exclude from Home Page: 

Behavior Matters: Redesigning the Clinical Trial

When a new drug or therapy is discovered, the double-blind randomized controlled trial (DBRCT) is the gold standard for evaluating it. These trials, which have been used for years, were designed to determine the true efficacy of a treatment free from patient or doctor bias, but they do not factor in the effects that patient behaviors, such as diet and lifestyle choices, can have on the tested treatment.

A recent meta-analysis of six such clinical trials, led by Caltech's Erik Snowberg, professor of economics and political science, and his colleagues Sylvain Chassang from Princeton University and Ben Seymour from Cambridge University, shows that behavior can have a serious impact on the effectiveness of a treatment—and that the currently used DBRCT procedures may not be able to assess the effects of behavior on the treatment. To solve this, the researchers propose a new trial design, called a two-by-two trial, that can account for behavior–treatment interactions.

The study was published online on June 10 in the journal PLOS ONE.

Patients behave in different ways during a trial. These behaviors can directly relate to the trial—for example, one patient who believes in the drug may religiously stick to his or her treatment regimen while someone more skeptical might skip a few doses. The behaviors may also simply relate to how the person acts in general, such as preferences in diet, exercise, and social engagement. And in the design of today's standard trials, these behaviors are not accounted for, Snowberg says.

For example, a DBRCT might randomly assign patients to one of two groups: an experimental group that receives the new treatment and a control group that does not. As the trial is double-blinded, neither the subjects nor their doctors know who falls into which group. This is intended to reduce bias from the behavior and beliefs of the patient and the doctor; the thinking is that because patients have not been specifically selected for treatment, any effects on health outcomes must be solely due to the treatment or lack of treatment.

Although the patients do not know whether they have received the treatment, they do know their probability of getting the treatment—in this case, 50 percent. And that 50 percent likelihood of getting the new treatment might not be enough to encourage a patient to change behaviors that could influence the efficacy of the drug under study, Snowberg says. For example, if you really want to lose weight and know you have a high probability—say 70 percent chance—of being in the experimental group for a new weight loss drug, you may be more likely to take the drug as directed and to make other healthy lifestyle choices that can contribute to weight loss. As a result, you might lose more weight, boosting the apparent effectiveness of the drug.

However, if you know you only have a 30 percent chance of being in the experimental group, you might be less motivated to both take the drug as directed and to make those other changes. As a result, you might lose less weight—even if you are in the treatment group—and the same drug would seem less effective.

"Most medical research just wants to know if a drug will work or not. We wanted to go a step further, designing new trials that would take into account the way people behave. As social scientists, we naturally turned to the mathematical tools of formal social science to do this," Snowberg says.

Snowberg and his colleagues found that with a new trial design, the two-by-two trial, they can tease out the effects of behavior and the interaction of behavior and treatment, as well as the effects of treatment alone. The new trial, which still randomizes treatment, also randomizes the probability of treatment—which can change a patient's behavior.

In a two-by-two trial, instead of patients first being assigned to either the experimental or control groups, they are randomly assigned to either a "high probability of treatment" group or a "low probability of treatment" group. The patients in the high probability group are then randomly assigned to either the treatment or the control group, giving them a 70 percent chance of receiving the treatment. Patients in the low probability group are also randomly assigned to treatment or control; their likelihood of receiving the treatment is 30 percent. The patients are then informed of their probability of treatment.

By randomizing both the treatment and the probability of treatment, medical researchers can quantify the effects of treatment, the effects of behavior, and the effects of the interaction between treatment and behavior. Determining each, Snowberg says, is essential for understanding the overall efficacy of treatment.
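
The assignment scheme described above can be sketched in a few lines of code (the 70/30 split and the group size are illustrative, taken from the example in the text):

```python
import random

def assign_two_by_two(patients, p_high=0.7, p_low=0.3, seed=0):
    """Sketch of two-by-two assignment: first randomize each patient's
    *probability* of treatment, then randomize treatment at that probability."""
    rng = random.Random(seed)
    assignments = []
    for patient in patients:
        prob = rng.choice([p_high, p_low])  # step 1: probability group
        treated = rng.random() < prob       # step 2: treatment itself
        # Patients learn `prob`, but (as in any blind trial) never `treated`.
        assignments.append({"id": patient, "prob": prob, "treated": treated})
    return assignments

groups = assign_two_by_two(range(1000))
high = [a for a in groups if a["prob"] == 0.7]
share_treated = sum(a["treated"] for a in high) / len(high)  # near 0.7
```

Because both the treatment indicator and the announced probability vary across patients, an analyst can compare outcomes across the two probability groups as well as across treatment groups, which is what separates the behavior effect from the pure treatment effect.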

"It's a very small change to the design of the trial, but it's important. The effect of a treatment has these two constituent parts: pure treatment effect and the treatment–behavior interaction. Standard blind trials just randomize treatment, so you can't see this interaction. Although you can't just tell someone to randomize their behavior, we found a way that you can randomize the probability that a patient will get something that will change their behavior."

Because it is difficult to implement design changes in active trials, Snowberg and his colleagues first tested their idea with a meta-analysis of data from previous clinical trials. To do so, they developed a new mathematical formula for analyzing DBRCT data. The formula, which teases out the health outcomes resulting from treatment alone as well as outcomes resulting from an interaction between treatment and behavior, was then used to statistically analyze six previous DBRCTs that had tested the efficacy of two antidepressant drugs: imipramine (a tricyclic antidepressant also known as Tofranil) and paroxetine (a selective serotonin reuptake inhibitor sold as Paxil).

First, the researchers wanted to see if there was evidence that patients behave differently when they have a high probability of treatment versus when they have a low probability of treatment. The previous trials recorded how many patients dropped out of the study, so this was the behavior that Snowberg and his colleagues analyzed. They found that in trials where patients happened to have a relatively high probability of treatment—near 70 percent—the dropout rate was significantly lower than in other trials with patients who had a lower probability of treatment, around 50 percent.

Although the team did not have any specific behaviors to analyze, other than dropping out of the study, they also wanted to determine if behavior in general could have added to the effect of the treatments. Using their statistical techniques, they determined that imipramine seemed to have a pure treatment effect, but no effect from the interaction between treatment and behavior—that is, the drug seemed to work fine, regardless of any behavioral differences that may have been present.

After their analysis, however, they determined that paroxetine seemed to have no effect from treatment alone or behavior alone; rather, an interaction between the treatment and behavior did effectively decrease depression. Because this was a previously performed study, the researchers cannot know which specific behavior was responsible for the interaction, but with the mathematical formula they can tell that this behavior was necessary for the drug to be effective.

In their paper, Snowberg and his colleagues speculate how a situation like this might come about. "Maybe there is a drug, for instance, that makes people feel better in social situations, and if you're in the high probability group, then maybe you'd be more willing to go out to parties to see if the drug helps you talk to people," Snowberg explains. "Your behavior drives you to go to the party, and once you're at the party, the drug helps you feel comfortable talking to people. That would be an example of an interaction effect; you couldn't get that if people just took this drug alone at home."

Although this specific example is just speculation, Snowberg says that the team's actual results reveal that there is some behavior or set of behaviors that interact with paroxetine to effectively treat depression—and without this behavior, the drug appears to be ineffective.

"Normally what you get when you run a standard blind trial is some sort of mishmash of the treatment effect and the treatment-behavior interaction effect. But, knowing the full interaction effect is important. Our work indicates that clinical trials underestimate the efficacy of a drug where behavior matters," Snowberg says. "It may be the case that 50 percent probability isn't high enough for people to change any of their behaviors, especially if it's a really uncertain new treatment. Then it's just going to look like the drug doesn't work, and that isn't the case."

Because the meta-analysis supported the team's hypothesis—that the interaction between treatment and behavior can have an effect on health outcomes—the next step is incorporating these new ideas into an active clinical trial. Snowberg says that the best fit would be a drug trial for a condition, such as a mental health disorder or an addiction, that is known to be associated with behavior. At the very least, he says, he hopes that these results will lead the medical research community to a conversation about ways to improve the DBRCT and move past the current "gold standard."

These results are published in a paper titled "Accounting for Behavior in Treatment Effects: New Applications for Blind Trials." Cayley Bowles, a student in the UCLA/Caltech MD/PhD program, was also a coauthor on the paper. The work was supported by funding to Snowberg and Chassang from the National Science Foundation.


Celebrating 11 Years of CARMA Discoveries

For more than a decade, large, moveable telescopes tucked away on a remote, high-altitude site in the Inyo Mountains, about 250 miles northeast of Los Angeles, have worked together to paint a picture of the universe through radio-wave observations.

Known as the Combined Array for Research in Millimeter-wave Astronomy, or CARMA, the telescopes formed one of the most powerful millimeter interferometers in the world. CARMA was created in 2004 through the merger of the Owens Valley Radio Observatory (OVRO) Millimeter Array and the Berkeley Illinois Maryland Association (BIMA) Array and initially consisted of 15 telescopes. In 2008, the University of Chicago joined CARMA, increasing the telescope count to 23.

An artist's depiction of a gamma ray burst, the most powerful explosive event in the universe. CARMA detected the millimeter-wavelength emission from the afterglow of the gamma ray burst 130427A only 18 hours after it first exploded on April 27, 2013. The CARMA observations revealed a surprise: in addition to the forward moving shock, CARMA showed the presence of a backward moving shock, or "reverse" shock, that had long been predicted, but never conclusively observed until now.
Credit: Gemini Observatory/AURA, artwork by Lynette Cook

CARMA's higher elevation, improved electronics, and greater number of connected antennae enabled more precise observations of radio emission from molecules and cold dust across the universe, leading to ground-breaking studies that encompass a range of cosmic objects and phenomena—including stellar birth, early planet formation, supermassive black holes, galaxies, galaxy mergers, and sudden, unexpected events such as gamma-ray bursts and supernova explosions.

"Over its lifetime, it has moved well beyond its initial goals both scientifically and technically," says Anneila Sargent (MS '67, PhD '78, both degrees in astronomy), the Ira S. Bowen Professor of Astronomy at Caltech and the first director of CARMA.

On April 3, CARMA probed the skies for the last time. The project ceased operations and its telescopes will be repurposed and integrated into other survey projects.

Here is a look back at some of CARMA's most significant discoveries and contributions to the field of astronomy.

Planet formation

These CARMA images highlight the range of morphologies observed in circumstellar disks, which may indicate that the disks are in different stages of the planet formation process or that they are evolving along distinct pathways. The bottom row highlights the disk around the star LkCa 15, where CARMA detected an inner hole 40 AU in diameter. The two-color Keck image (bottom right) reveals an infrared source along the inner edge of this hole. The infrared luminosity is consistent with a planet of six Jupiter masses, which may have cleared the hole.
Credit: CARMA

Newly formed stars are surrounded by a rotating disk of gas and dust, known as a circumstellar disk. These disks provide the building materials for planetary systems like our own solar system, and can contain important clues about the planet formation process.

During its operation, CARMA imaged disks around dozens of young stars such as RY Tau and DG Tau. The observations revealed that circumstellar disks often are larger than our solar system and contain enough material to form Jupiter-size planets. Interestingly, these disks exhibit a variety of morphologies, and scientists think the different shapes reflect different stages or pathways of the planet formation process.

CARMA also helped gather evidence that supported planet formation theories by capturing some of the first images of gaps in circumstellar disks. According to conventional wisdom, planets can form in disks when stars are as young as half a million years old. Computer models show that if these so-called protoplanets are the size of Jupiter or larger, they should carve out gaps or holes in the rings through gravitational interactions with the disk material. In 2012, the team of OVRO executive director John Carpenter reported using CARMA to observe one such gap in the disk surrounding the young star LkCa 15. Observations by the Keck Observatory in Hawaii revealed an infrared source along the inner edge of the gap that was consistent with a planet that has six times the mass of Jupiter.

"Until ALMA"—the Atacama Large Millimeter/submillimeter Array in Chile, a billion-dollar international collaboration involving the United States, Europe, and Japan—"came along, CARMA produced the highest-resolution images of circumstellar disks at millimeter wavelengths," says Carpenter.

Star formation

A color image of the Whirlpool galaxy M51 from the Hubble Space Telescope (HST): a three-color composite of images taken at wavelengths of 4350 angstroms (blue), 5550 angstroms (green), and 6580 angstroms (red). The bright red regions are sites of recent massive star formation, where ultraviolet photons from the massive stars ionize the surrounding gas, which then radiates hydrogen recombination-line emission. Dark lanes run along the spiral arms, marking where the dense interstellar medium is abundant.
Credit: Jin Koda

Stars form in "clouds" of gas, consisting primarily of molecular hydrogen, that contain as much as a million times the mass of the sun. "We do not understand yet how the diffuse molecular gas distributed over large scales flows to the small dense regions that ultimately form stars," Carpenter says.

Magnetic fields may play a key role in the star formation process, but obtaining observations of these fields, especially on small scales, is challenging. Using CARMA, astronomers were able to chart the direction of the magnetic field in the dense material that surrounds newly formed protostars by mapping the polarized thermal radiation from dust grains in molecular clouds. A CARMA survey of the polarized dust emission from 29 sources showed that magnetic fields in the dense gas are randomly oriented with respect to the outflowing gas entrained by jets from the protostars.

If the outflows emerge along the rotation axes of circumstellar disks, as has been observed in a few cases, the results suggest that, contrary to theoretical expectations, the circumstellar disks are not aligned with the fields in the dense gas from which they formed. "We don't know the punch line—are magnetic fields critical in the star formation process or not?—because, as always, the observations just raise more questions," Carpenter admits. "But the CARMA observations are pointing the direction for further observations with ALMA."

Molecular gas in galaxies

CARMA was used to image molecular gas in the nearby Andromeda galaxy. All stars form in dense clouds of molecular gas and thus to understand star formation it is important to analyze the properties of molecular clouds.
Credit: Andreas Schruba

The molecular gas in galaxies is the raw material for star formation. "Being able to study how much gas there is in a galaxy, how it's converted to stars, and at what rate is very important for understanding how galaxies evolve over time," Carpenter says.

By resolving the molecular gas reservoirs in local galaxies and measuring the mass of gas in distant galaxies that existed when the cosmos was a fraction of its current age, CARMA made fundamental contributions to understanding the processes that shape the observable universe.

For example, CARMA revealed the evolution, in the spiral galaxy M51, of giant molecular clouds (GMCs) driven by large-scale galactic structure and dynamics. CARMA was used to show that giant molecular clouds grow through coalescence and then break up into smaller clouds that may again come together in the future. Furthermore, the process can occur multiple times over a cloud's lifetime. This new picture of molecular cloud evolution is more complex than previous scenarios, which treated the clouds as discrete objects that dissolved back into the atomic interstellar medium after a certain period of time. "CARMA's imaging capability showed the full cycle of GMCs' dynamical evolution for the first time," Carpenter says.

The Milky Way's black hole

CARMA worked as a standalone array, but it was also able to function as part of very-long-baseline interferometry (VLBI), in which astronomical radio signals are gathered from multiple radio telescopes on Earth to create higher-resolution images than is possible with single telescopes working alone.

In this fashion, CARMA was linked with the Submillimeter Telescope in Arizona and the James Clerk Maxwell Telescope and Submillimeter Array in Hawaii to paint one of the most detailed pictures to date of the monstrous black hole at the heart of our Milky Way galaxy. The combined observations achieved an angular resolution of 40 microarcseconds—the equivalent of seeing a tennis ball on the moon.

"If you just used CARMA alone, then the best resolution you would get is 0.15 arcseconds. So VLBI improved the resolution by a factor of 3,750," Carpenter says.

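The quoted improvement factor is straightforward unit arithmetic: 0.15 arcseconds is 150,000 microarcseconds, which is 3,750 times the 40-microarcsecond VLBI resolution.

```python
# Resolution comparison, both values expressed in microarcseconds.
carma_alone = 150_000   # 0.15 arcseconds (1 arcsecond = 1,000,000 microarcseconds)
vlbi = 40               # combined VLBI resolution

improvement = carma_alone / vlbi  # 3750.0
```
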
Astronomers have used the VLBI technique to successfully detect radio signals emitted from gas orbiting just outside of this supermassive black hole's event horizon, the radius around the black hole where gravity is so strong that even light cannot escape. "These observations measured the size of the emitting region around the black hole and placed constraints on the accretion disk that is feeding the black hole," he explains.

In other work, VLBI observations showed that the black hole at the center of M87, a giant elliptical galaxy, is spinning.

Transients

CARMA also played an important role in following up "transients," objects that unexpectedly burst into existence and then dim and fade equally rapidly (on an astronomical timescale), over periods from seconds to years. Some transients can be attributed to powerful cosmic explosions such as gamma-ray bursts (GRBs) or supernovas, but the mechanisms by which they originate remain unexplained.

"By looking at transients at different wavelengths—and, in particular, looking at them soon after they are discovered—we can understand the progenitors that are causing these bursts," says Carpenter, who notes that CARMA led the field in observations of these events at millimeter wavelengths. Indeed, on April 27, 2013, CARMA detected the millimeter-wavelength emission from the afterglow of GRB 130427A only 18 hours after it first exploded. The CARMA observations revealed a surprise: in addition to the forward-moving shock, there was one moving backward. This "reverse" shock had long been predicted, but never conclusively observed.

Getting data on such unpredictable transient events is difficult at many observatories, because of logistics and the complexity of scheduling. "Targets of opportunity require flexibility on the part of the organization to respond to an event when it happens," says Sterl Phinney (BS '80, astronomy), professor of theoretical astrophysics and executive officer for astronomy and astrophysics at Caltech. "CARMA was excellent for this purpose, because it was so nimble."

Galaxy clusters

Multi-wavelength view of the redshift z=0.2 cluster MS0735+7421. Left to right: CARMA observations of the SZ effect, X-ray data from Chandra, radio data from the VLA, and a three-color composite of the three. The SZ image reveals a large-scale distortion of the intra-cluster medium coincident with X-ray cavities produced by a massive AGN outflow, an example of the wide dynamic-range, multi-wavelength cluster imaging enabled by CARMA.
Credit: Erik Leitch (University of Chicago, Owens Valley Radio Observatory)

Galaxy clusters are the largest gravitationally bound objects in the universe. CARMA studied galaxy clusters by taking advantage of a phenomenon known as the Sunyaev-Zel'dovich (SZ) effect. The SZ effect results when primordial radiation left over from the Big Bang, known as the cosmic microwave background (CMB), is scattered to higher energies after interacting with the hot ionized gas that permeates galaxy clusters. Using CARMA, astronomers recently confirmed galaxy cluster candidates at redshifts of 1.75 and 1.9, making them the two most distant clusters for which an SZ effect has been measured.

"CARMA can detect the distortion in the CMB spectrum," Carpenter says. "We've observed over 100 clusters at very good resolution. These data have been very important to calibrating the relation between the SZ signal and the cluster mass, probing the structure of clusters, and helping discover the most distant clusters known in the universe."

Training the next generation

In addition to its many scientific contributions, CARMA also served as an important teaching facility for the next generation of astronomers. About 300 graduate students and postdoctoral researchers have cut their teeth on interferometric astronomy at CARMA over the years. "They were able to get hands-on experience in millimeter-wave astronomy at the observatory, something that is becoming more and more rare these days," Sargent says.

Tom Soifer (BS '68, physics), professor of physics and Kent and Joyce Kresa Leadership Chair of the Division of Physics, Mathematics and Astronomy, notes that many of those trainees now hold prestigious positions at the National Radio Astronomy Observatory (NRAO) or are professors at universities across the country, where they educate future scientists and engineers and help with the North American ALMA effort. "The United States is currently part of a tripartite international collaboration that operates ALMA. Most of the North American ALMA team trained either at CARMA or the Caltech OVRO Millimeter Array, CARMA's precursor," he says.

Looking ahead

Following CARMA's shutdown, the Cedar Flats sites will be restored to prior conditions, and the telescopes will be moved to OVRO. Although the astronomers closest to the observatory find the closure disappointing, Phinney takes a broader view, seeing the shutdown as part of the steady march of progress in astronomy. "CARMA was the cutting edge of high-frequency astronomy for the past decade. Now that mantle has passed to the global facility called ALMA, and Caltech will take on new frontiers."

Indeed, Caltech continues to push the technological frontier of astronomy through other projects. For example, Caltech Assistant Professor of Astronomy Greg Hallinan is leading the effort to build a Long Wavelength Array (LWA) station at OVRO that will instantaneously image the entire viewable sky every few seconds at low-frequency wavelengths to search for radio transients.

The success of CARMA and OVRO, Soifer says, gives him confidence that the LWA will also be successful. "We have a tremendously capable group of scientists and engineers. If anybody can make this challenging enterprise work, they can."


Yeast Protein Network Could Provide Insights into Human Obesity

A team of biologists and a mathematician have identified and characterized a network composed of 94 proteins that work together to regulate fat storage in yeast.

"Removal of any one of the proteins results in an increase in cellular fat content, which is analogous to obesity," says study coauthor Bader Al-Anzi, a research scientist at Caltech.

The findings, detailed in the May issue of the journal PLOS Computational Biology, suggest that yeast could serve as a valuable test organism for studying human obesity.

"Many of the proteins we identified have mammalian counterparts, but detailed examinations of their role in humans have been challenging," says Al-Anzi. "The obesity research field would benefit greatly if a single-cell model organism such as yeast could be used—one that can be analyzed using easy, fast, and affordable methods."

Using genetic tools, Al-Anzi and his research assistant Patrick Arpp screened a collection of about 5,000 different mutant yeast strains and identified 94 genes that, when removed, produced yeast with increases in fat content, as measured by quantitating fat bands on thin-layer chromatography plates. Other studies have shown that such "obese" yeast cells grow more slowly than normal, an indication that in yeast as in humans, too much fat accumulation is not a good thing. "A yeast cell that uses most of its energy to synthesize fat that is not needed does so at the expense of other critical functions, and that ultimately slows down its growth and reproduction," Al-Anzi says.

When the team looked at the protein products of the genes, they discovered that those proteins bind physically to one another to form an extensive, highly clustered network within the cell.

Such a configuration cannot be generated through a random process, say study coauthors Sherif Gerges, a bioinformatician at Princeton University, and Noah Olsman, a graduate student in Caltech's Division of Engineering and Applied Science, who independently evaluated the details of the network. Both concluded that the network must have formed as the result of evolutionary selection.

In human-scale networks, such as the Internet, power grids, and social networks, the most influential or critical nodes are often, but not always, those that are the most highly connected.

The team wondered whether the fat-storage network exhibits this feature, and, if not, whether some other characteristics of the nodes would determine which ones were most critical. Then, they could ask if removing the genes that encode the most critical nodes would have the largest effect on fat content.

To examine this hypothesis further, Al-Anzi sought out the help of a mathematician familiar with graph theory, the branch of mathematics that considers the structure of nodes connected by edges, or pathways. "When I realized I needed help, I closed my laptop and went across campus to the mathematics department at Caltech," Al-Anzi recalls. "I walked into the only office door that was open at the time, and introduced myself."

The mathematician that Al-Anzi found that day was Christopher Ormerod, a Taussky–Todd Instructor in Mathematics at Caltech. Al-Anzi's data piqued Ormerod's curiosity. "I was especially struck by the fact that connections between the proteins in the network didn't appear to be random," says Ormerod, who is also a coauthor on the study. "I suspected there was something mathematically interesting happening in this network."

With the help of Ormerod, the team created a computer model that suggested the yeast fat network exhibits what is known as the small-world property. This is akin to a social network that contains many different local clusters of people who are linked to each other by mutual acquaintances, so that any person within the cluster can be reached via another person through a small number of steps.

This pattern is also seen in a well-known network model in graph theory, called the Watts-Strogatz model. The model was originally devised to explain the clustering phenomenon often observed in real networks, but had not previously been applied to cellular networks.
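
The Watts-Strogatz construction itself is simple to sketch: start from a ring lattice in which each node links to its k nearest neighbors, then rewire each edge at random with probability p. Even a little rewiring creates shortcuts that sharply shorten average path lengths while local clustering largely persists. Below is a stdlib-only illustration (the 94 nodes echo the protein count, but the parameters are purely illustrative, not fitted to the yeast network):

```python
import random
from collections import deque

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice of n nodes, each linked to its k nearest neighbors,
    with each edge rewired to a random target with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):                      # build the regular ring lattice
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):                      # rewire a fraction p of the edges
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                old, new = (i + j) % n, rng.randrange(n)
                if new != i and new not in adj[i]:
                    adj[i].discard(old)
                    adj[old].discard(i)
                    adj[i].add(new)
                    adj[new].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over reachable node pairs (BFS per node)."""
    total = pairs = 0
    for src in adj:
        dist, queue = {src: 0}, deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# A pure ring lattice (p = 0) versus a lightly rewired small-world graph.
lattice = watts_strogatz(n=94, k=6, p=0.0)
small_world = watts_strogatz(n=94, k=6, p=0.1)
```

Comparing `avg_path_length(lattice)` with `avg_path_length(small_world)` shows the hallmark of the small-world property: a handful of rewired shortcuts cuts the typical number of steps between nodes substantially.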

Ormerod suggested that graph theory might be used to make predictions that could be experimentally proven. For example, graph theory says that the most important nodes in the network are not necessarily the ones with the most connections, but rather those that have the most high-quality connections. In particular, nodes having many distant or circuitous connections are less important than those with more direct connections to other nodes, and, especially, direct connections to other important nodes. In mathematical jargon, these important nodes are said to have a high "centrality score."

"In network analysis, the centrality of a node serves as an indicator of its importance to the overall network," Ormerod says.

"Our work predicts that changing the proteins with the highest centrality scores will have a bigger effect on network output than average," he adds. And indeed, the researchers found that the removal of proteins with the highest predicted centrality scores produced yeast cells with a larger fat band than in yeast whose less-important proteins had been removed.

The use of centrality scores to gauge the relative importance of a protein in a cellular network is a marked departure from how proteins traditionally have been viewed and studied—that is, as lone players, whose characteristics are individually assessed. "It was a very local view of how cells functioned," Al-Anzi says. "Now we're realizing that the majority of proteins are parts of signaling networks that perform specific tasks within the cell."

Moving forward, the researchers think their technique could be applicable to protein networks that control other cellular functions—such as abnormal cell division, which can lead to cancer.

"These kinds of methods might allow researchers to determine which proteins are most important to study in order to understand diseases that arise when these functions are disrupted," says Kai Zinn, a professor of biology at Caltech and the study's senior author. "For example, defects in the control of cell growth and division can lead to cancer, and one might be able to use centrality scores to identify key proteins that regulate these processes. These might be proteins that had been overlooked in the past, and they could represent new targets for drug development."

Funding support for the paper, "Experimental and Computational Analysis of a Large Protein Network That Controls Fat Storage Reveals the Design Principles of a Signaling Network," was provided by the National Institutes of Health.


Using Radar Satellites to Study Icelandic Volcanoes and Glaciers

On August 16 of last year, Mark Simons, a professor of geophysics at Caltech, landed in Reykjavik with 15 students and two other faculty members to begin leading a tour of the volcanic, tectonic, and glaciological highlights of Iceland. That same day, a swarm of earthquakes began shaking the island nation—seismicity related to one of Iceland's many volcanoes, Bárðarbunga caldera, which lies beneath the Vatnajökull ice cap.

As the trip proceeded, it became clear to scientists studying the event that magma beneath the caldera was feeding a dyke, a vertical sheet of magma slicing through the crust in a northeasterly direction. On August 29, as the Caltech group departed Iceland, the dyke triggered an eruption in a lava field called Holuhraun, about 40 kilometers (roughly 25 miles) from the caldera, just beyond the northern limit of the ice cap.

Although the timing of the volcanic activity necessitated some shuffling of the trip's activities, such as canceling planned overnight visits near what was soon to become the eruption zone, it was also scientifically fortuitous. Simons is one of the leaders of a Caltech/JPL project known as the Advanced Rapid Imaging and Analysis (ARIA) program, which aims to use a growing constellation of international imaging radar satellites to improve situational awareness, and thus response, following natural disasters. Under the ARIA umbrella, Caltech and JPL/NASA had already formed a collaboration with the Italian Space Agency (ASI) to use its COSMO-SkyMed (CSK) constellation (consisting of four orbiting X-band radar satellites) following such events.

Through the ASI/ARIA collaboration, the managers of CSK agreed to target the activity at Bárðarbunga for imaging using a technique called interferometric synthetic aperture radar (InSAR). As two CSK satellites flew over, separated by just one day, they bounced signals off the ground to create images of the surface of the glacier above the caldera. By comparing those two images in what is called an interferogram, the scientists could see how the glacier surface had moved during that intervening day. By the evening of August 28, Simons was able to pull up that first interferogram on his cell phone. It showed that the ice above the caldera was subsiding at a rate of 50 centimeters (more than a foot and a half) a day—a clear indication that the magma chamber below Bárðarbunga caldera was deflating.
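The principle behind reading motion from an interferogram can be sketched with back-of-the-envelope arithmetic (this is a simplified illustration, not ARIA's processing chain): in repeat-pass InSAR, each full cycle of phase difference between the two images corresponds to half a radar wavelength of motion along the satellite's line of sight.

```python
# Simplified InSAR arithmetic (illustrative, not ARIA's actual pipeline).
import math

C = 299_792_458.0          # speed of light, m/s
F_XBAND = 9.6e9            # approximate COSMO-SkyMed X-band frequency, Hz
wavelength = C / F_XBAND   # ~3.1 cm

def los_displacement(phase_diff_rad):
    """Repeat-pass InSAR: 2*pi of phase difference corresponds to half
    a wavelength of ground motion along the radar line of sight."""
    return phase_diff_rad * wavelength / (4 * math.pi)

# One full interferometric fringe equals half a wavelength of motion:
print(los_displacement(2 * math.pi) * 100)  # ~1.6 cm per fringe

# So the observed 50 cm/day of subsidence spans dozens of fringes:
print(0.5 / (wavelength / 2))  # ~32 fringes in a one-day pair
```

The fine, centimeter-scale sensitivity per fringe is what let a single one-day image pair reveal the glacier surface dropping half a meter.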

The next morning, before his return flight to the United States, Simons took the data to researchers at the University of Iceland who were tracking Bárðarbunga's activity.

"At that point, there had been no recognition that the caldera was collapsing. Naturally, they were focused on the dyke and all the earthquakes to the north," says Simons. "Our goal was just to let them know about the activity at the caldera because we were really worried about the possibility of triggering a subglacial melt event that would generate a catastrophic flood."

Luckily, that flood never happened, but the researchers at the University of Iceland did ramp up observations of the caldera with radar altimetry flights and installed a continuous GPS station on the ice overlying the center of the caldera.

Last December, Icelandic researchers published a paper in Nature about the Bárðarbunga event, largely focusing on the dyke and eruption. Now, completing the picture, Simons and his colleagues have developed a model to describe the collapsing caldera and the earthquakes produced by that action. The new findings appear in Geophysical Journal International.

"Over a span of two months, there were more than 50 magnitude-5 earthquakes in this area. But they didn't look like regular faulting—like shearing a crack," says Simons. "Instead, the earthquakes looked like they resulted from movement inward along a vertical axis and horizontally outward in a radial direction—like an aluminum can when it's being crushed."

To try to determine what was actually generating the unusual earthquakes, Bryan Riel, a graduate student in Simons's group and lead author on the paper, used the original one-day interferogram of the Bárðarbunga area along with four others collected by CSK in September and October. Most of those one-day pairs spanned at least one of the earthquakes, but in a couple of cases, they did not. That allowed Riel to isolate the effect of the earthquakes and determine that most of the subsidence of the ice was due to what is called aseismic activity—the kind that does not produce big earthquakes. Thus, Riel was able to show that the earthquakes were not the primary cause of the surface deformation inferred from the satellite radar data.

"What we know for sure is that the magma chamber was deflating as the magma was feeding the dyke going northward," says Riel. "We have come up with two different models to explain what was actually generating the earthquakes."

In the first scenario, as the magma chamber deflated, pressure from the overlying rock and ice caused the caldera to collapse, producing the unusual earthquakes. This mechanism has been observed in cases of collapsing mines (e.g., the Crandall Canyon Mine in Utah).

The second model hypothesizes that there is a ring fault arcing around a significant portion of the caldera. As the magma chamber deflated, the large block of rock above it dropped but periodically got stuck on portions of the ring fault. As the block became unstuck, it caused rapid slip on the curved fault, producing the unusual earthquakes.

"Because we had access to these satellite images as well as GPS data, we have been able to produce two potential interpretations for the collapse of a caldera—a rare event that occurs maybe once every 50 to 100 years," says Simons. "To be able to see this documented as it's happening is truly phenomenal."

Additional authors on the paper, "The collapse of Bárðarbunga caldera, Iceland," are Hiroo Kanamori, John E. and Hazel S. Smits Professor of Geophysics, Emeritus, at Caltech; Pietro Milillo of the University of Basilicata in Potenza, Italy; Paul Lundgren of JPL; and Sergey Samsonov of the Canada Centre for Mapping and Earth Observation. The work was supported by a NASA Earth and Space Science Fellowship and by the Caltech/JPL President's and Director's Fund.

Kimm Fesenmaier

Caltech Astronomers Observe a Supernova Colliding with Its Companion Star

Type Ia supernovae, some of the most dazzling phenomena in the universe, are produced when small dense stars called white dwarfs explode with ferocious intensity. At their peak, these supernovae can outshine an entire galaxy. Although thousands of supernovae of this kind have been discovered in recent decades, the process by which a white dwarf becomes one has been unclear.

That began to change on May 3, 2014, when a team of Caltech astronomers working on a robotic observing system known as the intermediate Palomar Transient Factory (iPTF)—a multi-institute collaboration led by Shrinivas Kulkarni, the John D. and Catherine T. MacArthur Professor of Astronomy and Planetary Science and director of the Caltech Optical Observatories—discovered a Type Ia supernova, designated iPTF14atg, in nearby galaxy IC831, located 300 million light-years away.

The data that were immediately collected by the iPTF team lend support to one of two competing theories about the origin of white dwarf supernovae, and also suggest the possibility that there are actually two distinct populations of this type of supernova.

The details are outlined in a paper, with Caltech graduate student Yi Cao as lead author, appearing May 21 in the journal Nature.

Type Ia supernovae are known as "standardizable candles" because they allow astronomers to gauge cosmic distances by how dim they appear relative to how bright they actually are. It is like knowing that, from one mile away, a light bulb looks 100 times dimmer than another located only one-tenth of a mile away. This consistency is what made these stellar objects instrumental in measuring the accelerating expansion of the universe in the 1990s, earning three scientists the Nobel Prize in Physics in 2011.

There are two competing origin theories, both starting with the same general scenario: the white dwarf that eventually explodes is one of a pair of stars orbiting around a common center of mass. The interaction between these two stars, the theories say, is responsible for triggering supernova development. What is the nature of that interaction? At this point, the theories diverge.

According to one theory, the so-called double-degenerate model, the companion to the exploding white dwarf is also a white dwarf, and the supernova explosion initiates when the two similar objects merge.

However, in the second theory, called the single-degenerate model, the second star is instead a sunlike star—or even a red giant, a much larger type of star. In this model, the white dwarf's powerful gravity pulls, or accretes, material from the second star. This process, in turn, increases the temperature and pressure in the center of the white dwarf until a runaway nuclear reaction begins, ending in a dramatic explosion.

The difficulty in determining which model is correct stems from the facts that supernova events are very rare—occurring about once every few centuries in our galaxy—and that the stars involved are very dim before the explosions.

That is where the iPTF comes in. From atop Palomar Mountain in Southern California, where it is mounted on the 48-inch Samuel Oschin Telescope, the project's fully automated camera optically surveys roughly 1000 square degrees of sky per night (approximately 1/20th of the visible sky above the horizon), looking for transients—objects, including Type Ia supernovae, whose brightness changes over timescales that range from hours to days.

On May 3, the iPTF took images of IC831 and transmitted the data for analysis to computers at the National Energy Research Scientific Computing Center, where a machine-learning algorithm analyzed the images and prioritized real celestial objects over digital artifacts. Because this first-pass analysis occurred when it was nighttime in the United States but daytime in Europe, the iPTF's European and Israeli collaborators were the first to sift through the prioritized objects, looking for intriguing signals. After they spotted the possible supernova—a signal that had not been visible in the images taken just the night before—the European and Israeli team alerted their U.S. counterparts, including Caltech graduate student and iPTF team member Yi Cao.

Cao and his colleagues then mobilized both ground- and space-based telescopes, including NASA's Swift satellite, which observes ultraviolet (UV) light, to take a closer look at the young supernova.

"My colleagues and I spent many sleepless nights on designing our system to search for luminous ultraviolet emission from baby Type Ia supernovae," says Cao. "As you can imagine, I was fired up when I first saw a bright spot at the location of this supernova in the ultraviolet image. I knew this was likely what we had been hoping for."

UV radiation has higher energy than visible light, so it is particularly suited to observing very hot objects like supernovae (although such observations are possible only from space, because Earth's atmosphere and ozone layer absorb almost all of this incoming UV). Swift measured a pulse of UV radiation that declined initially but then rose as the supernova brightened. Because such a pulse is short-lived, it can be missed by surveys that scan the sky less frequently than does the iPTF.

This observed ultraviolet pulse is consistent with a formation scenario in which the material ejected from a supernova explosion slams into a companion star, generating a shock wave that ignites the surrounding material. In other words, the data are in agreement with the single-degenerate model.

Back in 2010, Daniel Kasen, an associate professor of astronomy and physics at UC Berkeley and Lawrence Berkeley National Laboratory, used theoretical calculations and supercomputer simulations to predict just such a pulse from supernova-companion collisions. "After I made that prediction, a lot of people tried to look for that signature," Kasen says. "This is the first time that anyone has seen it. It opens up an entirely new way to study the origins of exploding stars."

According to Kulkarni, the discovery "provides direct evidence for the existence of a companion star in a Type Ia supernova, and demonstrates that at least some Type Ia supernovae originate from the single-degenerate channel."

Although the data from supernova iPTF14atg support its origin in a single-degenerate system, other Type Ia supernovae may result from double-degenerate systems. In fact, observations in 2011 of SN2011fe, another Type Ia supernova discovered in the nearby galaxy Messier 101 by PTF (the precursor to the iPTF), appeared to rule out the single-degenerate model for that particular supernova. And that means that both theories actually may be valid, says Caltech professor of theoretical astrophysics Sterl Phinney, who was not involved in the research. "The news is that it seems that both sets of theoretical models are right, and there are two very different kinds of Type Ia supernovae."

"Both rapid discovery of supernovae in their infancy by iPTF, and rapid follow-up by the Swift satellite, were essential to unveil the companion to this exploding white dwarf. Now we have to do this again and again to determine the fractions of Type Ia supernovae akin to different origin theories," says iPTF team member Mansi Kasliwal, who will join the Caltech astronomy faculty as an assistant professor in September 2015.

The iPTF project is a scientific collaboration between Caltech; Los Alamos National Laboratory; the University of Wisconsin–Milwaukee; the Oskar Klein Centre in Sweden; the Weizmann Institute of Science in Israel; the TANGO Program of the University System of Taiwan; and the Kavli Institute for the Physics and Mathematics of the Universe in Japan. The Caltech team is funded in part by the National Science Foundation.
