Distant Black Hole Wave Twists Like Giant Whip

Fast-moving magnetic waves emanating from a distant supermassive black hole undulate like a whip whose handle is being shaken by a giant hand, according to a new study involving Caltech scientists. The study used data from the National Radio Astronomy Observatory's Very Long Baseline Array (VLBA) to explore the galaxy-black hole system known as BL Lacertae (BL Lac) in high resolution.

The team's findings, detailed in the April 10 issue of the Astrophysical Journal, mark the first time so-called Alfvén (pronounced Alf-vain) waves have been identified in a black hole system.

Alfvén waves are generated when magnetic field lines, such as those coming from the sun or from the disk around a black hole, interact with charged particles, or ions, and become twisted. In the case of BL Lac, and sometimes for the sun, the twisted field lines are coiled into a helix. The ions around BL Lac take the form of particle jets that are flung from opposite sides of the black hole at near light speed.

"Imagine running a water hose through a slinky that has been stretched taut," says first author Marshall Cohen, professor emeritus of astronomy at Caltech. "A sideways disturbance at one end of the slinky will create a wave that travels to the other end, and if the slinky sways to and fro, the hose running through its center has no choice but to move with it."

A similar thing is happening in BL Lac, Cohen says. The Alfvén waves are analogous to the propagating transverse motions of the slinky, and as the waves propagate along the magnetic field lines, they can cause the field lines—and the particle jets encompassed by the field lines—to move as well.

It's common for black hole particle jets to bend—and some even swing back and forth. But those movements typically take place on timescales of thousands or millions of years. "What we see is happening on a timescale of weeks," Cohen says. "We're taking pictures once a month, and the position of the waves is different each month."

Interestingly, from the vantage of astronomers on Earth, the Alfvén waves emanating from BL Lac appear to be traveling about five times faster than the speed of light. "The waves only appear to be superluminal, or moving faster than light," Cohen says. "The high speed is an optical illusion resulting from the fact that the waves are traveling very close to, but below, the speed of light, and are passing just to the side of our line of sight."
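For readers who want the arithmetic behind that illusion, the standard relativistic expression for the apparent transverse speed of a feature moving at speed β (in units of the speed of light) at an angle θ to the line of sight is shown below; the specific numbers are illustrative and are not taken from the study.

```latex
\beta_{\mathrm{app}} = \frac{\beta \sin\theta}{1 - \beta\cos\theta}
```

For example, β ≈ 0.98 and θ ≈ 15° give β_app ≈ 4.7, roughly the factor-of-five apparent speed described above, even though nothing is actually moving faster than light.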

Co-author David Meier, a visiting associate in astronomy and now-retired astrophysicist from JPL, added, "By analyzing these waves, we are able to determine the internal properties of the jet, and this will help us ultimately understand how jets are produced by black holes."

Other authors on the paper, "Studies of the Jet in BL Lacertae. II. Superluminal Alfvén Waves," include Talvikki Hovatta, a former Caltech postdoctoral scholar, as well as scientists from the University of Cologne and the Max Planck Institute for Radio Astronomy in Germany; the Isaac Newton Institute of Chile; Aalto University in Finland; and the Astro Space Center of Lebedev Physical Institute, the Pulkovo Observatory, and the Crimean Astrophysical Observatory in Russia. Purdue University, Denison University, and the Jet Propulsion Laboratory were also involved in the study.


JPL News: Searing Sun Seen in X-rays

X-rays light up the surface of our sun in a bouquet of colors in this new image containing data from NASA's Nuclear Spectroscopic Telescope Array, or NuSTAR. The high-energy X-rays seen by NuSTAR are shown in blue, while green represents lower-energy X-rays from the X-ray Telescope instrument on the Hinode spacecraft, named after the Japanese word for sunrise. The yellow and red colors show ultraviolet light from NASA's Solar Dynamics Observatory.

NuSTAR usually spends its time investigating the mysteries of black holes, supernovae, and other high-energy objects in space. But it can also look closer to home to study our sun.

"What's great about NuSTAR is that the telescope is so versatile that we can hunt black holes millions of light-years away and we can also learn something fundamental about the star in our own backyard," said Brian Grefenstette, a Caltech research scientist and an astronomer on the NuSTAR team.

NuSTAR is a Small Explorer mission led by Caltech and managed by NASA's Jet Propulsion Laboratory in Pasadena, California, for NASA's Science Mission Directorate in Washington. JPL is managed by Caltech for NASA.

Read the full story from JPL News


Sniffing Out Answers: A Conversation with Markus Meister

Blindfolded and asked to distinguish between a rose and, say, smoke from a burning candle, most people would find the task easy. Even differentiating between two rose varieties can be a snap because the human olfactory system—made up of the nerve cells in our noses and everything that allows the brain to process smell—is quite adept. But just how sensitive is it to different smells?

In 2014, a team of scientists from the Rockefeller University published a paper in the journal Science, arguing that humans can discriminate at least 1 trillion odors. Now Markus Meister, the Anne P. and Benjamin F. Biaggini Professor of Biological Sciences at Caltech, has published a paper in the open-access journal eLife, in which he disputes the 2014 claim, saying that the science is not yet in a place where such a number can be determined.

We recently spoke with Meister about his new paper and what it says about the claim that we can distinguish a trillion smells.

 

What was the goal of the 2014 paper, and why do you take issue with it?

The overt question the authors asked was: How many different smells can humans distinguish? That is a naturally interesting question, in part because in other fields of sensory biology, similar questions have already been answered. People quibble about the exact numbers, but in general scientists agree that humans can distinguish about 1 to 2 million colors and something on the order of 100,000 pure tones.

But as interesting as the question is, I argue that we, as a field, are not yet prepared to address it. First we need to know how many dimensions span the perceptual space of odors. And by that I mean: how many olfactory variables are needed to fully describe all of the odors that humans can experience?

In the case of human vision, we say that the perceptual space for colors has three dimensions, which means that every physical light can be described by three numbers—how it activates the red, green, and blue cone photoreceptors in the retina.
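In the standard trichromatic formulation (a textbook relation, not something specific to Meister's paper), those three numbers are the integrated responses of the three cone types:

```latex
c_i = \int L(\lambda)\, S_i(\lambda)\, d\lambda, \qquad i \in \{\mathrm{L},\ \mathrm{M},\ \mathrm{S}\}
```

Here L(λ) is the light's spectral power distribution and S_i(λ) is the sensitivity curve of the long-, medium-, or short-wavelength cone. Any two spectra that produce the same triple (c_L, c_M, c_S) look identical, which is why three dimensions are enough.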

As long as we don't know the dimensionality of odor space, we don't know how to even start interpreting measurements. Once we know the dimensionality, we can start probing the space systematically and ask how many different odors fit into it in the same way that we've looked at how many different colors fit into the three-dimensional space of colors.

The fundamental conceptual mistake that the authors of the Science paper made was to assume that the space of odor perception has 128 dimensions or more and then interpret the data as though that was the case . . . even though there is absolutely no evidence to suggest that the odor space has such high dimensionality.

 

What makes it so hard to determine the dimensionality of odor?

Well, there are a couple of things. First, there is no natural coordinate system in which olfactory stimuli exist. This stands in contrast with visual and auditory stimuli. For example, pure (monochromatic) lights or tones can be represented nicely as sinusoidal waves with just two variables, the frequency and the amplitude of the wave. We can easily control those two variables, and they correspond nicely to things we perceive. For pure tones, the amplitude of the sine wave corresponds to loudness and the frequency corresponds to perceived pitch. For a pure light, the frequency determines your perception of the color; if you change the intensity of the light, that alters your perception of the brightness. These simple physical parameters of the stimulus allow us to explore those spaces more easily.
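In symbols, the point is that a pure tone or a monochromatic light is completely described by just two numbers; the expression below is the generic textbook form, not something from the interview:

```latex
s(t) = A \sin(2\pi f t)
```

The amplitude A maps onto loudness (or brightness) and the frequency f onto pitch (or hue). There is no comparably compact physical description for an odor.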

In the case of odors, there are probably several hundred thousand substances that have a smell that can be perceived. But they all have different structures. There is no intuitive way to organize the stimuli. There has been some recent progress in this area, but in general we have not been successful in isolating a few physical variables that can account for a lot of what we smell.

Another aspect of olfaction that has complicated people's thinking is that humans have about 400 types of primary smell receptors. These are the actual neurons in the lining of the nasal cavity that detect odorants. So at the very input to the nervous system, every smell is characterized by the action it has on those 400 different sensors. Based on that, you might assume that smell lives in a much larger space than color vision—one with as many as 400 dimensions.

But can we perceive all of those 400 dimensions? Just because two odors cause a different pattern of activation of nerve cells in the nose doesn't mean you can actually tell them apart. Think about our sense of touch. Every one of our hairs has at its root several mechanoreceptors. If you run a comb through the hair on your head, you activate a hundred thousand mechanoreceptors in a particular pattern. If you repeat the action, you activate a different pattern of receptors, but you will be unable to perceive a difference. Similarly, I argue, there's no reason to think that we can perceive a difference between all the different patterns of activation of nerve cells in the nasal cavity. So the number of dimensions could, in fact, be much lower than 400. In fact, some recent studies have suggested that odor lives in a space with 10 or fewer perceptual dimensions.

 

In your work you describe a couple of basic experimental design failures of the 2014 paper. Can you walk us through those?

Basically, two scientific errors were made in the original study. They have to do with the concept of a positive-control experiment and the concept of testing alternative hypotheses.

In science, when we come up with a new way of analyzing things, we need to perform a test—called a positive control—that gives us confidence that the new analysis can find the right answer in a case where we already know what the answer is. So, for example, if you have devised a new way of weighing things, you will want to test it by weighing something whose weight you already know very well based on some accepted procedure. If the new procedure gives a different answer, we say it failed the positive control.

The 2014 paper did not include a positive-control test. In my paper, I provide two: applying the system that the authors propose to a very simple model microbe and to the human color-vision system. In both cases, the answers come out wrong by huge factors.

The other failure of the 2014 paper is a failure to consider alternate hypotheses. When scientists interpret the outcome of an experiment, we need to seriously analyze alternate hypotheses to the ones we believe are most likely and show why they are not reasonable explanations for what we are seeing.

In my paper, I show that an alternate model that is clearly absurd—that humans can only discriminate 10 odors—explains the data just as well as the very complicated explanation that the authors propose, which involves 400 dimensions and 1 trillion odor percepts. What this really means is that the experiment was poorly designed, in the sense that it didn't constrain the answer to the question.

By the way, there is an accompanying paper by Gerkin and Castro in the same issue of eLife that critiques the experimental design from an entirely different angle, regarding the use of statistics. I found this article very instructive, and have used it already in teaching.

 

How do you suggest scientists go about determining the dimensionality of the odor space?

One concrete idea is to try to figure out what the number of dimensions is in the vicinity of a particular point in that space. If you did that with color, you would arrive at the number three from the vast majority of points. So I suggest we start at some arbitrary point in odor space—say a 50 percent mixture of 30 different odors—and systematically go in each of the directions from there and ask: can humans actually distinguish the odor when you change the concentration a little bit up or down from there? If you do that in 30 different dimensions you might find that maybe only five of those dimensions contribute to changing the perceived odor and that along the other dimensions there is very little change. So let's figure out the dimensionality that comes out of a study like that. Is it two? Probably not. I would guess something like 10 or 20.
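A minimal simulation of the probing experiment Meister sketches might look like the following. Everything here is hypothetical: the "perceptual map" is an invented stand-in that happens to depend on only 5 of the 30 mixture components, and the detection threshold is arbitrary. The point is only to show how counting perceptible directions around a reference mixture estimates the local dimensionality.

```python
import numpy as np

rng = np.random.default_rng(0)

N_COMPONENTS = 30          # odorants in the reference mixture
TRUE_DIMS = 5              # hypothetical: only 5 components affect the percept
STEP = 0.05                # small concentration change, up or down
THRESHOLD = 0.02           # hypothetical just-noticeable perceptual difference

# Invented perceptual map: the percept is a 5-dimensional projection that
# ignores the other 25 components entirely.
W = np.zeros((TRUE_DIMS, N_COMPONENTS))
W[:, :TRUE_DIMS] = rng.normal(size=(TRUE_DIMS, TRUE_DIMS))

def percept(concentrations):
    return W @ concentrations

reference = np.full(N_COMPONENTS, 0.5)   # the "50 percent mixture of 30 odors"
base = percept(reference)

perceptible_directions = 0
for i in range(N_COMPONENTS):
    probe = reference.copy()
    probe[i] += STEP                     # nudge one component's concentration
    change = np.linalg.norm(percept(probe) - base)
    if change > THRESHOLD:
        perceptible_directions += 1

print(f"Directions that change the percept: {perceptible_directions} of {N_COMPONENTS}")
# With this toy map the count comes out to 5, recovering the assumed dimensionality.
```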

Once we know that, we can start to ask how many odors fit into that space.

 

Why does all of this matter? Why do we need to know how many odors we can smell?

The question of how many smells we can discriminate has fascinated people for at least a century, and the whole industry of flavors and fragrances has been very interested in finding out whether there is a systematic set of rules by which one could mix together some small number of primary odors in order to produce any target smell.

In the field of color vision, that problem has been solved. As a result, we all use color monitors that only have three types of lights—red, green, and blue. And yet by mixing them together, they can make just about every color impression that you might care about. So there's a real technological incentive to figuring out how you can mix together primary stimuli to make any kind of perceived smell.

 

What is the big lesson you would like people to take away from this scientific exchange?

One lesson I try to convey to my students is the value of a simple simulation—to ask, "Could this idea work even in principle? Let's try it in the simplest case we can imagine." That sort of triage can often keep you from walking down an unproductive path.

On a more general note, people should remain skeptical of spectacular claims. This is particularly important when we referee for the high-glamour journals, where the editors have a predilection for unexpected results. As a community we should let things simmer a bit before allowing a spectacular claim to become the conventional wisdom. Maybe we all need to stop and smell the roses.

Writer: 
Kimm Fesenmaier

Better Memory with Faster Lasers

DVDs and Blu-ray disks contain so-called phase-change materials that morph from one atomic state to another after being struck with pulses of laser light, with data "recorded" in those two atomic states. Using a novel technique, ultrafast electron crystallography (UEC), Caltech researchers have directly visualized, in four dimensions, the changing atomic configurations of these materials as ultrafast laser pulses, which speed up the data-recording process, drive the phase changes. In doing so, they discovered a previously unknown intermediate atomic state—one that may represent an unavoidable limit to data recording speeds.

By shedding light on the fundamental physical processes involved in data storage, the work may lead to better, faster computer memory systems with larger storage capacity. The research, done in the laboratory of Ahmed Zewail, Linus Pauling Professor of Chemistry and professor of physics, will be published in the July 28 print issue of the journal ACS Nano.

When the laser light interacts with a phase-change material, its atomic structure changes from an ordered crystalline arrangement to a more disordered, or amorphous, configuration. These two states represent 0s and 1s of digital data.

"Today, nanosecond lasers—lasers that pulse light at one-billionth of a second—are used to record information on DVDs and Blu-ray disks, by driving the material from one state to another," explains Giovanni Vanacore, a postdoctoral scholar and an author on the study. The speed with which data can be recorded is determined both by the speed of the laser—that is, by the duration of each "pulse" of light—and by how fast the material itself can shift from one state to the other.

Thus, with a nanosecond laser, "the fastest you can record information is one information unit, one 0 or 1, every nanosecond," says Jianbo Hu, a postdoctoral scholar and the first author of the paper. "To go even faster, people have started to use femtosecond lasers, which can potentially record one unit every one millionth of a billionth of a second. We wanted to know what actually happens to the material at this speed and if there is a limit to how fast you can go from one structural phase to another."

To study this, the researchers used their technique, ultrafast electron crystallography. The technique, a new development—different from Zewail's Nobel Prize–winning work in femtochemistry, the visual study of chemical processes occurring at femtosecond scales—allowed researchers to observe directly the transitioning atomic configuration of a prototypical phase-change material, germanium telluride (GeTe), when it is hit by a femtosecond laser pulse.

In UEC, a sample of crystalline GeTe is bombarded with a femtosecond laser pulse, followed by a pulse of electrons. The laser pulse causes the atomic structure to change from the crystalline arrangement through other structures and ultimately to the amorphous state. Then, when the electron pulse hits the sample, its electrons scatter in a pattern that provides a picture of the sample's atomic configuration as a function of time.

With this technique, the researchers could see directly, for the first time, the structural shift in GeTe caused by the laser pulses. However, they also saw something more: a previously unknown intermediate phase that appears during the transition from the crystalline to the amorphous configuration. Because moving through the intermediate phase takes additional time, the researchers believe that it represents a physical limit to how quickly the overall transition can occur—and to how fast data can be recorded, regardless of the laser speeds used.

"Even if there is a laser faster than a femtosecond laser, there will be a limit as to how fast this transition can occur and information can be recorded, just because of the physics of these phase-change materials," Vanacore says. "It's something that cannot be solved technologically—it's fundamental."

Despite revealing such limits, the research could one day aid the development of better data storage for computers, the researchers say. Right now, computers generally store information in several ways, among them the well-known random-access memory (RAM) and read-only memory (ROM). RAM, which is used to run the programs on your computer, can record and rewrite information very quickly via an electrical current. However, the information is lost whenever the computer is powered down. ROM storage, including CDs and DVDs, uses phase-change materials and lasers to store information. Although ROM records and reads data more slowly, the information can be stored for decades.

Finding ways to speed up the recording process of phase-change materials and understanding the limits to this speed could lead to a new type of memory that harnesses the best of both worlds.

The researchers say that their next step will be to use UEC to study the transition of the amorphous atomic structure of GeTe back into the crystalline phase—comparable to the phenomenon that occurs when you erase and then rewrite a DVD.

Although these applications could mean exciting changes for future computer technologies, this work is also very important from a fundamental point of view, Zewail says.

"Understanding the fundamental behavior of materials transformation is what we are after, and these new techniques developed at Caltech have made it possible to visualize such behavior in both space and time," Zewail says.

The work is published in a paper titled "Transient Structures and Possible Limits of Data Recording in Phase-Change Materials." In addition to Hu, Vanacore, and Zewail, Xiangshui Miao and Zhe Yang are also coauthors on the paper. The work was supported by the National Science Foundation and the Air Force Office of Scientific Research and was carried out in Caltech's Center for Physical Biology, which is funded by the Gordon and Betty Moore Foundation.


New Approach Holds Promise for Earlier, Easier Detection of Colorectal Cancer

Caltech chemists develop a technique that could one day lead to early detection of tumors

Chemists at Caltech have developed a new sensitive technique capable of detecting colorectal cancer in tissue samples—a method that could one day be used in clinical settings for the early diagnosis of colorectal cancer.

Colorectal cancer is the third most prevalent cancer worldwide and is estimated to cause about 700,000 deaths every year. Metastasis due to late detection is one of the major causes of mortality from this disease; therefore, a sensitive and early indicator could be a critical tool for physicians and patients.

A paper describing the new detection technique currently appears online in Chemistry & Biology and will be published in the July 23 issue of the journal's print edition. Caltech graduate student Ariel Furst (PhD '15) and her adviser, Jacqueline K. Barton, the Arthur and Marian Hanisch Memorial Professor of Chemistry, are the paper's authors.

"Currently, the average biopsy size required for a colorectal biopsy is about 300 milligrams," says Furst. "With our experimental setup, we require only about 500 micrograms of tissue, which could be taken with a syringe biopsy versus a punch biopsy. So it would be much less invasive." One microgram is one thousandth of a milligram.

The researchers zeroed in on the activity of a protein called DNMT1 as a possible indicator of a cancerous transformation. DNMT1 is a methyltransferase, an enzyme responsible for DNA methylation—the addition of a methyl group to one of DNA's bases. Methylation is an essential, normal process that primarily serves to turn genes off, but when the process goes awry it has recently been identified as an early indicator of cancer, especially the development of tumors.

When all is working well, DNMT1 maintains the normal methylation pattern set in the embryonic stages, copying that pattern from the parent DNA strand to the daughter strand. But sometimes DNMT1 goes haywire, and methylation goes into overdrive, causing what is called hypermethylation. Hypermethylation can lead to the repression of genes that typically do beneficial things, like suppress the growth of tumors or express proteins that repair damaged DNA, and that, in turn, can lead to cancer.

Building on previous work in Barton's group, Furst and Barton devised an electrochemical platform to measure the activity of DNMT1 in crude tissue samples—those that contain all of the material from a tissue, not just DNA or RNA, for example. Fundamentally, the design of this platform is based on the concept of DNA-mediated charge transport—the idea that DNA can behave like a wire, allowing electrons to flow through it and that the conductivity of that DNA wire is extremely sensitive to mistakes in the DNA itself. Barton earned the 2010 National Medal of Science for her work establishing this field of research and has demonstrated that it can be used not only to locate DNA mutations but also to detect the presence of proteins such as DNMT1 that bind to DNA.

In the present study, Furst and Barton started with two arrays of gold electrodes—one atop the other—embedded in Teflon blocks and separated by a thin spacer that formed a well for solution. They attached strands of DNA to the lower electrodes, then added the broken-down contents of a tissue sample to the solution well. After allowing time for any DNMT1 in the tissue sample to methylate the DNA, they added a restriction enzyme that severed the DNA if no methylation had occurred—i.e., if DNMT1 was inactive. When they applied a current to the lower electrodes, the samples with DNMT1 activity passed the current clear through to the upper electrodes, where the activity could be measured. 

"No methylation means cutting, which means the signal turns off," explains Furst. "If the DNMT1 is active, the signal remains on. So we call this a signal-on assay for methylation activity. But beyond on or off, it also allows us to measure the amount of activity." This assay for DNMT1 activity was first developed in Barton's group by Natalie Muren (PhD '13).

Using the new setup, the researchers measured DNMT1 activity in 10 pairs of human tissue samples, each composed of a colorectal tumor sample and adjacent healthy tissue from the same patient. When they compared the samples within each pair, they consistently found significantly higher DNMT1 activity, and thus hypermethylation, in the tumorous tissue. Notably, they found little correlation between the amount of DNMT1 in the samples and the presence of cancer—the correlation was with activity.

"The assay provides a reliable and sensitive measure of hypermethylation," says Barton, also the chair of the Division of Chemistry and Chemical Engineering.  "It looks like hypermethylation is good indicator of tumorigenesis, so this technique could provide a useful route to early detection of cancer when hypermethylation is involved."

Looking to the future, Barton's group hopes to use the same general approach in devising assays for other DNA-binding proteins and possibly using the sensitivity of their electrochemical devices to measure protein activities in single cells. Such a platform might even open up the possibility of inexpensive, portable tests that could be used in the home to catch colorectal cancer in its earliest, most treatable stages.

The work described in the paper, "DNA Electrochemistry shows DNMT1 Methyltransferase Hyperactivity in Colorectal Tumors," was supported by the National Institutes of Health. 

Writer: 
Kimm Fesenmaier

Discovering a New Stage in the Galactic Lifecycle

On its own, dust seems fairly unremarkable. However, by observing the clouds of gas and dust within a galaxy, astronomers can determine important information about the history of star formation and the evolution of galaxies. Now, thanks to the unprecedented sensitivity of the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile, a Caltech-led team has been able to observe the dust contents of galaxies as they appeared just 1 billion years after the Big Bang—a period corresponding to redshift 5-6. These are the earliest average-sized galaxies ever to be directly observed and characterized in this way.
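To connect that redshift range to the quoted age, a standard cosmology calculator suffices; the short sketch below uses astropy, and the choice of the Planck 2015 parameters is an assumption for illustration rather than anything specified in the study.

```python
# Cosmic age at redshift 5-6, the epoch of the galaxies in this study.
from astropy.cosmology import Planck15

for z in (5.0, 6.0):
    age = Planck15.age(z)                      # time since the Big Bang at that redshift
    print(f"z = {z}: universe was {age.to('Gyr').value:.2f} Gyr old")
# Both values come out near 1 Gyr, matching "just 1 billion years after the Big Bang."
```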

The work is published in the June 25 edition of the journal Nature.

Dust in galaxies is created by the elements released during the formation and collapse of stars. Although the most abundant elements in the universe—hydrogen and helium—were created by the Big Bang, stars are responsible for making all of the heavier elements in the universe, such as carbon, oxygen, nitrogen, and iron. And because young, distant galaxies have had less time to make stars, these galaxies should contain less dust. Previous observations had suggested this, but until now nobody could directly measure the dust in these faraway galaxies.

"Before we started this study, we knew that stars formed out of these clouds of gas and dust, and we knew that star formation was probably somehow different in the early universe, where dust is likely less common. But the previous information only really hinted that the properties of the gas and the dust in earlier galaxies were different than in galaxies we see around us today. We wanted to find data that showed that," says Peter Capak, a staff scientist at the Infrared Processing and Analysis Center (IPAC) at Caltech and the first author of the study.

Armed with the high sensitivity of ALMA, Capak and his colleagues set out to perform a direct analysis of the dust in these very early galaxies.

Young, faraway galaxies are often difficult to observe because they appear very dim from Earth. Previous observations of these young galaxies, which formed just 1 billion years after the Big Bang, were made with the Hubble Space Telescope and the W. M. Keck Observatory—both of which detect light in the near-infrared and visible bands of the electromagnetic spectrum. The color of these galaxies at these wavelengths can be used to make inferences about the dust—for example, galaxies that appear bluer in color tend to have less dust, while those that are red have more dust. However, other effects like the age of the stars and our distance from the galaxy can mimic the effects of dust, making it difficult to understand exactly what the color means.

The researchers began their observations by first analyzing these early galaxies with the Keck Observatory, which confirmed that the galaxies lie at redshifts greater than 5, verifying that they are seen at as early an epoch as previously thought. The researchers then observed the same galaxies using ALMA to detect light at longer millimeter and submillimeter wavelengths. The ALMA readings provided a wealth of information that could not be gathered with visible-light telescopes, including details about the dust and gas content of these very early galaxies.

Capak and his colleagues were able to use ALMA to—for the first time—directly view the dust and gas clouds of nine average-sized galaxies during this epoch. Specifically, they focused on a feature called the carbon II spectral line, which comes from carbon atoms in the gas around newly formed stars. The carbon line itself traces this gas, while the data collected around the carbon line traces a so-called continuum emission, which provides a measurement of the dust. The researchers knew that the carbon line was bright enough to be seen in mature, dust-filled nearby galaxies, so they reasoned that the line would be even brighter if there was indeed less dust in the young faraway galaxies.

Using the carbon line, the researchers confirmed what had previously been suggested by the data from Hubble and Keck: these earlier galaxies contained, on average, 12 times less dust than galaxies from 2 billion years later (at a redshift of approximately 4).

"In galaxies like our Milky Way or nearby Andromeda, all of the stars form in very dusty environments, so more than half of the light that is observed from young stars is absorbed by the dust," Capak says. "But in these faraway galaxies we observed with ALMA, less than 20 percent of the light is being absorbed. In the local universe, only very young galaxies and very odd ones look like that. So what we're showing is that the normal galaxy at these very high redshifts doesn't look like the normal galaxy today. Clearly there is something different going on."

That "something different" gives astronomers like Capak a peek into the lifecycle of galaxies. Galaxies form because gas and dust are present and eventually turn into stars—which then die, creating even more gas and dust, and releasing energy. Because it is impossible to watch this evolution from young galaxy to old galaxy happen in real time on the scale of a human lifespan, the researchers use telescopes like ALMA to take a survey of galaxies at different evolutionary stages. Capak and his colleagues believe that this lack of dust in early galaxies signifies a never-before-seen evolutionary stage for galaxies.

"This result is really exciting. It's the first time that we're seeing the gas that the stars are forming out of in the early universe. We are starting to see the transition from just gas to the first generation of galaxies to more mature systems like those around us today. Furthermore, because the carbon line is so bright, we can now easily find even more distant galaxies that formed even longer ago, sooner after the Big Bang," Capak says.

Lin Yan, a staff scientist at IPAC and coauthor on the paper, says that their results are also especially important because they represent typical early galaxies. "Galaxies come in different sizes. Earlier observations could only spot the largest or the brightest galaxies, and those tend to be very special—they actually appear very rarely in the population," she says. "Our findings tell you something about a typical galaxy in that early epoch, so the results can be read as describing the population as a whole, not just special cases."

Yan says that their ability to analyze the properties of these and earlier galaxies will only expand with ALMA's newly completed capabilities. During the study, ALMA was operating with only 20 of its antennas; now that the array is complete with 66 antennas, its ability to see and analyze distant galaxies will improve further, Yan adds.

"This is just an initial observation, and we've only just started to peek into this really distant universe at redshift of a little over 5. An astronomer's dream is basically to go as far distant as we can. And when it's complete, we should be able to see all the distant galaxies that we've only ever dreamed of seeing," she says.

The findings are published in a paper titled "Galaxies at redshifts 5 to 6 with systematically low dust content and high [C II] emission." The work was supported by funds from NASA and the European Union's Seventh Framework Program. Nick Scoville, the Francis L. Moseley Professor of Astronomy, was an additional coauthor on this paper. In addition to Keck, Hubble, and ALMA data, observations from the Spitzer Space Telescope were used to measure the stellar mass and age of the galaxies in this study. Coauthors and collaborators from other institutions include C. Carilli, G. Jones, C.M. Casey, D. Riechers, K. Sheth, C.M. Carollo, O. Ilbert, A. Karim, O. LeFevre, S. Lilly, and V. Smolcic.


Voting Rights: A Conversation with Morgan Kousser

Three years ago this week, the U.S. Supreme Court ruled unconstitutional a key provision of the Voting Rights Act (VRA), which was enacted in 1965 and extended four times since then by Congress. Section 5 of the act required certain "covered" jurisdictions in the Deep South and in states and counties outside the Deep South that had large populations of Hispanics and Native Americans to obtain "pre-clearance" from the Justice Department or the U.S. District Court in the District of Columbia before changing any election law. The provision was designed to prevent election officials from replacing one law that had been declared to be racially discriminatory with a different but still discriminatory law. A second provision, Section 4(b), contained the formula for coverage.

The VRA, notes Morgan Kousser, the William R. Kenan, Jr., Professor of History and Social Science, has been "very effective. You went from 7 percent of the black voters in Mississippi being registered to vote to 60 percent within three or four years. That was just an amazing change. Even more amazing, Section 5 was flexible enough to prevent almost every kind of new discriminatory technique or device over a period of nearly 50 years." For instance, Kousser notes, "when white supremacists in Mississippi saw that African Americans would soon comprise majorities in some state or local legislative districts, they merged the districts to preserve white majorities everywhere. But Section 5 stopped this runaround and allowed the new black voters real democracy. Voting rights was the one area in which federal law came close to eliminating the country's long, sad history of racial discrimination."

But on June 25, 2013, in a landmark ruling in Shelby County v. Holder, the Court overturned Section 4(b), effectively dismantling Section 5. Without a formula that defines covered jurisdictions, no area falls within the scope of Section 5. Chief Justice John Roberts, writing the 5–4 majority opinion, argued that although the original coverage formula "made sense," it was now outdated, based on "decades-old data and eradicated practices." Asserting that voter turnout and registration rates in covered jurisdictions are nearly equal for whites and African Americans, Roberts also noted that "blatantly discriminatory evasions of federal decrees are rare. And minority candidates hold office at unprecedented levels."

The decision, says Kousser, was wrong. In a comprehensive study recently published in the journal Transatlantica, he, with the help of three Caltech students who worked on the study during Summer Undergraduate Research Fellowship (SURF) projects, examined more than four thousand successful voting-rights cases around the country as well as Justice Department inquiries and settlements and changes to laws in response to the threat of lawsuits. Over 90 percent, they found, occurred in the covered jurisdictions—indicating, Kousser says, that the coverage scheme was still working very well.

The study found that—even when excluding all of the actions brought under Section 5 of the VRA, and only looking at those that can be brought anywhere in the country—83.2 percent of successful cases originated in covered jurisdictions. This shows, Kousser says, that whatever the coverage formula measured, it still captured the "overwhelming number of instances of proven racial discrimination in elections."

We talked with Kousser about the ruling and his findings—and how this constitutional law scholar made his way to Caltech.

 

Why do you think Justice Roberts and the other justices in the majority ruled the way they did?

He had a sense that there had been a lot of cases outside of the covered jurisdictions. But if you look at all of the data, you see that the coverage scheme captures 94 percent of all of the cases and other events that took place from 1957 through 2013 and an even larger proportion up to 2006. Suppose that you were a stockbroker, and you could make a decision that was right 94 percent of the time. Your clients would be very, very wealthy. No one would be dissatisfied with you. That's what the congressional coverage scheme did.

I wish very much that I had finished this paper two years earlier and that the data would have been published in a scholarly journal or at least made available in a pre-print by the time that the decision was cooking up. That was a mistake on my part. I should have let it out into the world a little earlier. Sometimes I have a fantasy that if this had been shown to the right justices at the right time, maybe they would have decided differently.

 

The Court did not rule on the VRA in general—but said that the coverage formula is outdated because voting discrimination is not as bad as it once was. Do you agree?

This is one of the reasons that I looked at the coverage of the California Voting Rights Act (CVRA), passed in 2002. In Section 2 of the National VRA, you have to prove what is called the "totality of the circumstances." You have to prove not only that voting is racially polarized and that there is a kind of election structure used for discrimination, but also show that there is a history of discrimination in the area, that there are often special informal procedures that go against minorities, and a whole series of other things. A Section 2 case is quite difficult to prove.

The CVRA attempted to simplify those circumstances so all you have to show is that there is racially polarized voting, usually shown by a statistical analysis of how various groups voted, and that there is a potentially discriminatory electoral structure, particularly at-large elections for city council, for school board, for community college district, and so on.

The CVRA, in effect, only became operative in 2007 after some preliminary litigation. And in 2007, after the city of Modesto settled a long-running lawsuit, lawyers for the successful plaintiffs presented the city with a bill for about $3 million. This scared jurisdictions throughout California, which were faced with the potential of paying out large amounts of money if they had racially polarized voting. Again and again, you suddenly saw jurisdictions settling short of going to trial and a lot of Hispanics elected to particular boards. This has changed about 100 or 125 local boards throughout California from holding their elections at-large to holding them by sub-districts, which allow geographically segregated minorities to elect candidates of their choice. If you graph that over time, you see a huge jump in the number of successful CVRA cases after 2007. What does this mean? Does it mean that there was suddenly a huge increase in discrimination? No, it means that there was a tool that allowed the discrimination that had previously existed to be legally identified.

If we had that across the country, and it was easier to bring cases, you would expose a lot more discrimination. That's my argument.

 

Do you think the coverage plan will be restored?

If there were hearings and an assessment of this scheme or any other potentially competing schemes, then Congress might decide on a new coverage scheme. If the bill was passed, it would go back up to the U.S. Supreme Court, and maybe the Court would be more interested in the actual empirical evidence instead of simply guessing what they thought might have existed. But I think right now the possibilities of getting any changes through the Congress are zero.

I would like to see some small changes in the coverage scheme, but they have to be made on the basis of evidence. Just throwing out the whole thing because allegedly it didn't fit anymore is an irrational way to make public policy.

 

As a professor of history, do you think it is your responsibility to help change policy?

Well, it has been interesting to me from the very beginning. Let me tell you how I got started in voting rights cases. My doctoral dissertation was on the disfranchisement of blacks and poor whites in the South in the late 19th and early 20th centuries. In about 1979, a lawyer who was cooperating with the ACLU [American Civil Liberties Union] in Birmingham, Alabama, called me up—I didn't know who he was—and he said, "Do you have an opinion about whether section 201 of the Alabama constitution of 1901 was adopted with a racially discriminatory purpose?" I said, "I do. I've studied that. I think it was adopted with a racially discriminatory purpose."

Writing expert witness reports and testifying in cases are exactly like what I have always done as a scholar. I have looked at the racially discriminatory effects of laws; I have looked at the racially discriminatory intent of laws. I have examined them by looking at a lot of evidence. I write very long papers for these cases. They are scholarly publications, and whether they relate to something that happened 100 years ago or something that happened five years ago or yesterday doesn't really, in principle, seem to make any difference.

 

How did you get started as a historian studying politics?

Well, I'm old. I grew up in the South during the period of segregation, but just as it was breaking down. When I was a junior in high school, the sit-ins took place in Nashville, Tennessee, which is where I'm from. I was sympathetic. I never liked segregation. I was always in favor of equal rights.

I had been fascinated by politics from the very beginning. By the time I was 8 or 9 years old, I was reading two newspapers a day. One was a very conservative newspaper, pro-segregation, and the other paper was a liberal newspaper, critical of segregation. They both covered politics. And if you read news stories in each about the same event on the same day, you'd get a completely different slant. It was a wonderful training for a historian. From reading two newspapers that I knew to be biased, one in one direction, the other in another direction, I had to try to figure out what was happening and what I should believe to be fact.

 

How did you end up at Caltech?

To be very frank, Yale, where I was a graduate student, didn't want me around anymore. When I was there, I started a graduate student senate. I wrote its constitution, and I served as its first president. We were obnoxious. This was in 1967 and 1968, and students were revolting around the country, trying to bring an end to the war in Vietnam, trying to stop racial discrimination, trying to change the world. I had less lofty aims.

 

Such as?

There was no bathroom for women in the hall of graduate studies where the vast majority of humanities and social sciences classes took place. We made a nonnegotiable demand for a bathroom for women. Yale was embarrassed. Yale granted our request. We did other things. We protested against a rent increase in graduate student married housing. Yale couldn't justify the increase and gave way. We formed a committee to get women equal access to the Yale swimming pools. Yale opened the pool.

 

 

In addition to doing research, you are an acclaimed teacher at Caltech—the winner of Caltech's highest teaching honor, the Feynman Prize, in 2011. Do you think of yourself as more of a teacher or as a scholar?

I really like to do both. I can't avoid teaching. If you look at my scholarship, a lot of it is really in teaching format. I would like to school Chief Justice Roberts on what he had done wrong and to persuade him, convince him, that he should change his mind on this. A lot of my friends who are at my advanced age have quit teaching, because they can't take it anymore. When the term is over, they are jubilant.

I'm always sad when the term ends, particularly with my Supreme Court class, because the classes are small, so I know each individual student pretty well. I hate to say goodbye to them.

 

Do any particular students stand out in your mind?

I had one student who took my class in 2000. He was a computer science major. We used to talk a lot. We disagreed about practically everything politically, but he was a very nice and very intelligent guy.

When he finished the class, he decided that he would go to work for Microsoft. He did that for three years. Then he decided he wanted to go to law school, where he did very well; he clerked for an appeals court judge and he clerked for a Supreme Court justice. This spring, he argued his first case before the U.S. Supreme Court. The case that he argued was very complicated. I don't understand it, I don't understand the issues, I don't understand the precedents. It's relatively obscure, and it won't make big headlines. But he did it, and he's promised me that he'll share his impressions of being on that stage and that I can pass them on to current Caltech students. I know that they will find his experience as exciting as I will—a Techer arguing a case before the Supreme Court within 15 years of graduating from college! I can't quit teaching.


JPL News: NASA Joins North Sea Oil Cleanup Training Exercise

NASA participated for the first time in Norway's annual oil spill cleanup exercise in the North Sea on June 8 through 11. Scientists flew a specialized NASA airborne instrument called the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) on NASA's C-20A piloted research aircraft to monitor a controlled release of oil into the sea, testing the radar's ability to distinguish between more and less damaging types of oil slicks.

Read the full story from JPL News


Injured Jellyfish Seek to Regain Symmetry

Self-repair is extremely important for living things. Get a cut on your finger and your skin can make new cells to heal the wound; lose your tail—if you are a particular kind of lizard—and tissue regeneration may produce a new one. Now, Caltech researchers have discovered a previously unknown self-repair mechanism—the reorganization of existing anatomy to regain symmetry—in a certain species of jellyfish.

The results are published in the June 15 online edition of the journal Proceedings of the National Academy of Sciences (PNAS).

Many marine animals, including some jellyfish, can rapidly regenerate tissues in response to injury, and this trait is important for survival. If a sea turtle takes a bite out of a jellyfish, the injured animal can quickly grow new cells to replace the lost tissue. In fact, a jellyfish-like animal called the hydra is a very commonly used model organism in studies of regeneration.

But Caltech assistant professor of biology Lea Goentoro, graduate student Michael Abrams, and associate research technician Ty Basinger were interested in another organism, the moon jellyfish (Aurelia aurita). Abrams, Basinger, and Goentoro, lead authors of the PNAS study, wanted to know if the moon jellyfish would respond to injuries in the same manner as an injured hydra. The team focused their study on the jellyfish's juvenile, or ephyra, stage, because the ephyra's simple body plan—a disk-shaped body with eight symmetrical arms—would make any tissue regeneration clearly visible.

To simulate injury—like that caused by a predator in the wild—the team performed amputations on anesthetized ephyra, producing animals with two, three, four, five, six, or seven arms, rather than the usual eight. They then returned the jellyfish to their habitat of artificial seawater, and monitored the tissue response.

Although wounds healed up as expected, with the tissue around the cut closing up in just a few hours, the researchers noticed something unexpected: the jellyfish were not regenerating tissues to replace the lost arms. Instead, within the first two days after the injury, the ephyra had reorganized its existing arms to be symmetrical and evenly spaced around the animal's disklike body. This so-called resymmetrization occurred whether the animal had as few as two limbs remaining or as many as seven, and the process was observed in three additional species of jellyfish ephyra.

"This is a different strategy of self-repair," says Goentoro. "Some animals just heal their wounds, other animals regenerate what is lost, but the moon jelly ephyrae don't regenerate their lost limbs. They heal the wound, but then they reorganize to regain symmetry."

There are several reasons why symmetry might be more important to the developing jellyfish than regenerating a lost limb. Jellyfish and many other marine animals such as sea urchins, sea stars, and sea anemones have what is known as radial symmetry. Although the bodies of these animals have a distinct top and bottom, they do not have distinguishable left and right sides—an arrangement, present in humans and other higher life forms, known as bilateral symmetry. And this radial symmetry is essential to how the jellyfish moves and eats, first author Abrams says.

"Jellyfish move by 'flapping' their arms; this allows for propulsion through the water, which also moves water—and food—past the mouth," he says. "As they are swimming, a boundary layer of viscous—that is, thick—fluid forms between their arms, creating a continuous paddling surface. And you can imagine how this paddling surface would be disturbed if you have a big gap between the arms."

Maintaining symmetry appears to be vital not just for propulsion and feeding, the researchers found. In the few cases when the injured animals do not symmetrize—only about 15 percent of the injured animals they studied—the unsymmetrical ephyra also cannot develop into normal adult jellyfish, called medusa.

The researchers next wanted to figure out how the new self-repair mechanism works. Cell proliferation and cell death are commonly involved in tissue regeneration and injury response, but, the team found, the amputee jellyfish were neither making new cells nor killing existing cells as they redistributed their existing arms around their bodies.

Instead, the mechanical forces created by the jellyfish's own muscle contractions were essential for symmetrization. In fact, when muscle relaxants were added to the seawater surrounding an injured jellyfish, slowing the animal's muscle contractions, the symmetrization of the intact arms also was slowed down. In contrast, a reduction in the amount of magnesium in the artificial seawater sped up the rate at which the jellyfish pulsed their muscles, and these faster muscle contractions increased the symmetrization rate.

"Symmetrization is a combination of the mechanical forces created by the muscle contractions and the viscoelastic jellyfish body material," Abrams says. "The cycle of contraction and the viscoelastic response from the jellyfish tissues leads to reorganization of the body. You can imagine that in the absence of symmetry, the mechanical forces are unbalanced, but over time, as the body and arms reorganize, the forces rebalance."

To test this idea, the team collaborated with coauthor Chin-Lin Guo, from Academia Sinica in Taiwan, to build a mathematical model, and succeeded in simulating the symmetrization process.
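The actual model in the paper couples muscle contraction forces to the viscoelastic body, which is more than can be reproduced here; but a deliberately simplified toy version conveys the rebalancing idea. In the sketch below (all parameters invented), each remaining arm repeatedly shifts toward the midpoint of its neighbors, a crude discrete analogue of unbalanced forces relaxing, and the arms converge to even spacing without adding or removing any arms.

```python
import numpy as np

TWO_PI = 2 * np.pi

def symmetrize(arm_angles, steps=2000, rate=0.05):
    """Relax arm positions on a circular body until the gaps equalize.
    Each step nudges every arm toward the midpoint of its two neighbors,
    a crude stand-in for contraction forces rebalancing around the disk."""
    angles = np.sort(np.asarray(arm_angles, dtype=float) % TWO_PI)
    for _ in range(steps):
        gap_next = (np.roll(angles, -1) - angles) % TWO_PI   # gap to next arm
        gap_prev = (angles - np.roll(angles, 1)) % TWO_PI    # gap to previous arm
        angles = (angles + rate * (gap_next - gap_prev) / 2) % TWO_PI
    return np.sort(angles)

# An ephyra left with four arms bunched on one side after "amputation".
injured = np.deg2rad([0, 30, 60, 90])
final = symmetrize(injured)
gaps = np.rad2deg((np.roll(final, -1) - final) % TWO_PI)
print("final gaps between arms (degrees):", gaps.round(1))
# The four arms end up roughly 90 degrees apart, i.e., evenly spaced.
```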

In addition to adding to our understanding about self-repair mechanisms, the discovery could help engineers design new biomaterials, Goentoro says. "Symmetrization may provide a new avenue for thinking about biomaterials that could be designed to 'heal' by regaining functional geometry rather than regenerating precise shapes," she says. "Other self-repair mechanisms require cell proliferation and cell death—biological processes that aren't easily translated to technology. But we can more easily apply mechanical forces to a material."

And the impact of mechanical forces on development is being increasingly studied in a variety of organisms, Goentoro says. "Recently, mechanical forces have been increasingly found to play a role in development and tissue regulation," she says. "So the symmetrization process in Aurelia, with its simple geometry, lends itself as a good model system where we can study how mechanical forces play a role in morphogenesis."

These results are published in a paper titled "Self-repairing symmetry in jellyfish through mechanically driven reorganization." In addition to Abrams, Basinger, Goentoro, and Guo, former SURF student William Yuan from the University of Oxford was also a coauthor. Jellyfish were provided by the Cabrillo Marine Aquarium and the Monterey Bay Aquarium. John Dabiri, professor of aeronautics and bioengineering, provided discussions and suggestions throughout the study. Abrams is funded by the Graduate Research Fellowship Program of the National Science Foundation.


Behavior Matters: Redesigning the Clinical Trial

When a new type of drug or therapy is discovered, double-blind randomized controlled trials (DBRCTs) are the gold standard for evaluating them. These trials, which have been used for years, were designed to determine the true efficacy of a treatment free from patient or doctor bias, but they do not factor in the effects that patient behaviors, such as diet and lifestyle choices, can have on the tested treatment.

A recent meta-analysis of six such clinical trials, led by Caltech's Erik Snowberg, professor of economics and political science, and his colleagues Sylvain Chassang from Princeton University and Ben Seymour from Cambridge University, shows that behavior can have a serious impact on the effectiveness of a treatment—and that the currently used DBRCT procedures may not be able to assess the effects of behavior on the treatment. To solve this, the researchers propose a new trial design, called a two-by-two trial, that can account for behavior–treatment interactions.

The study was published online on June 10 in the journal PLOS ONE.

Patients behave in different ways during a trial. These behaviors can directly relate to the trial—for example, one patient who believes in the drug may religiously stick to his or her treatment regimen while someone more skeptical might skip a few doses. The behaviors may also simply relate to how the person acts in general, such as preferences in diet, exercise, and social engagement. And in the design of today's standard trials, these behaviors are not accounted for, Snowberg says.

For example, a DBRCT might randomly assign patients to one of two groups: an experimental group that receives the new treatment and a control group that does not. As the trial is double-blinded, neither the subjects nor their doctors know who falls into which group. This is intended to reduce bias from the behavior and beliefs of the patient and the doctor; the thinking is that because patients have not been specifically selected for treatment, any effects on health outcomes must be solely due to the treatment or lack of treatment.

Although the patients do not know whether they have received the treatment, they do know their probability of getting the treatment—in this case, 50 percent. And that 50 percent likelihood of getting the new treatment might not be enough to encourage a patient to change behaviors that could influence the efficacy of the drug under study, Snowberg says. For example, if you really want to lose weight and know you have a high probability—say, a 70 percent chance—of being in the experimental group for a new weight loss drug, you may be more likely to take the drug as directed and to make other healthy lifestyle choices that can contribute to weight loss. As a result, you might lose more weight, boosting the apparent effectiveness of the drug.

However, if you know you have only a 30 percent chance of being in the experimental group, you might be less motivated both to take the drug as directed and to make those other changes. As a result, you might lose less weight—even if you are in the treatment group—and the same drug would seem less effective.

"Most medical research just wants to know if a drug will work or not. We wanted to go a step further, designing new trials that would take into account the way people behave. As social scientists, we naturally turned to the mathematical tools of formal social science to do this," Snowberg says.

Snowberg and his colleagues found that with a new trial design, the two-by-two trial, they can tease out the effects of behavior and the interaction of behavior and treatment, as well as the effects of treatment alone. The new trial, which still randomizes treatment, also randomizes the probability of treatment—which can change a patient's behavior.

In a two-by-two trial, instead of patients first being assigned to either the experimental or control groups, they are randomly assigned to either a "high probability of treatment" group or a "low probability of treatment" group. The patients in the high probability group are then randomly assigned to either the treatment or the control group, giving them a 70 percent chance of receiving the treatment. Patients in the low probability group are also randomly assigned to treatment or control; their likelihood of receiving the treatment is 30 percent. The patients are then informed of their probability of treatment.
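To make the assignment procedure concrete, here is a minimal sketch in Python. The function name, sample size, and random seed are illustrative assumptions rather than code from the study; only the two-step randomization (probability group first, then treatment within that group) reflects the design described above.

import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the sketch is reproducible

def assign_two_by_two(n_patients, p_high=0.7, p_low=0.3):
    """Return each patient's disclosed treatment probability and actual arm."""
    # Step 1: randomize patients into a high- or low-probability group.
    disclosed_p = rng.choice([p_high, p_low], size=n_patients)
    # Step 2: within each group, randomize treatment with that probability.
    treated = rng.random(n_patients) < disclosed_p
    return disclosed_p, treated

probs, treated = assign_two_by_two(1000)
for p in (0.3, 0.7):
    print(f"disclosed probability {p}: {treated[probs == p].mean():.2f} treated")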

By randomizing both the treatment and the probability of treatment, medical researchers can quantify the effects of treatment, the effects of behavior, and the effects of the interaction between treatment and behavior. Determining each, Snowberg says, is essential for understanding the overall efficacy of treatment.


[Image credit: Sylvain Chassang, Princeton University]

"It's a very small change to the design of the trial, but it's important. The effect of a treatment has these two constituent parts: pure treatment effect and the treatment–behavior interaction. Standard blind trials just randomize the likelihood of treatment, so you can't see this interaction. Although you can't just tell someone to randomize their behavior, we found a way that you can randomize the probability that a patient will get something that will change their behavior."

Because it is difficult to implement new trial design changes in active trials, Snowberg and his colleagues first tested their idea with a meta-analysis of data from previous clinical trials. They developed a new mathematical formula for analyzing DBRCT data, one that teases out the health outcomes resulting from treatment alone as well as the outcomes resulting from an interaction between treatment and behavior. They then used it to statistically analyze six previous DBRCTs that had tested the efficacy of two antidepressant drugs: imipramine (a tricyclic antidepressant also known as Tofranil) and paroxetine (a selective serotonin reuptake inhibitor sold as Paxil).
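The paper's actual estimator is not reproduced in this article, but the idea of separating a pure treatment effect from a treatment-behavior interaction can be illustrated with a simple regression on simulated data. Everything below, including the simulated effect sizes and the use of the probability group as a stand-in for the behavioral change, is an assumption made for demonstration only.

import numpy as np

rng = np.random.default_rng(1)
n = 4000

# Simulated trial: both the treatment and the disclosed probability (a proxy
# here for the behavioral response) are randomized. Assumed ground truth: no
# pure treatment effect, but treatment combined with the behavior change
# improves the outcome by 1 unit.
high_p = rng.random(n) < 0.5
treated = rng.random(n) < np.where(high_p, 0.7, 0.3)
outcome = 1.0 * (treated & high_p) + rng.normal(0.0, 1.0, n)

# A regression with an interaction term separates the two components.
X = np.column_stack([np.ones(n), treated, high_p, treated & high_p])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
labels = ["intercept", "treatment", "probability group", "interaction"]
print(dict(zip(labels, coef.round(2))))
# Expected: treatment coefficient near 0, interaction near 1, the qualitative
# pattern the analysis found for paroxetine.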

First, the researchers wanted to see if there was evidence that patients behave differently when they have a high probability of treatment versus when they have a low probability of treatment. The previous trials recorded how many patients dropped out of the study, so this was the behavior that Snowberg and his colleagues analyzed. They found that in trials where patients happened to have a relatively high probability of treatment—near 70 percent—the dropout rate was significantly lower than in other trials with patients who had a lower probability of treatment, around 50 percent.

Although the team did not have any specific behaviors to analyze, other than dropping out of the study, they also wanted to determine if behavior in general could have added to the effect of the treatments. Using their statistical techniques, they determined that imipramine seemed to have a pure treatment effect, but no effect from the interaction between treatment and behavior—that is, the drug seemed to work fine, regardless of any behavioral differences that may have been present.

Paroxetine, by contrast, seemed to have no effect from the treatment alone or from behavior alone; however, an interaction between the treatment and behavior did effectively decrease depression. Because the researchers were reanalyzing previously performed trials, they cannot know which specific behavior was responsible for the interaction, but the mathematical formula shows that this behavior was necessary for the drug to be effective.

In their paper, Snowberg and his colleagues speculate how a situation like this might come about. "Maybe there is a drug, for instance, that makes people feel better in social situations, and if you're in the high probability group, then maybe you'd be more willing to go out to parties to see if the drug helps you talk to people," Snowberg explains. "Your behavior drives you to go to the party, and once you're at the party, the drug helps you feel comfortable talking to people. That would be an example of an interaction effect; you couldn't get that if people just took this drug alone at home."

Although this specific example is just speculation, Snowberg says that the team's actual results reveal that there is some behavior or set of behaviors that interact with paroxetine to effectively treat depression—and without this behavior, the drug appears to be ineffective.

"Normally what you get when you run a standard blind trial is some sort of mishmash of the treatment effect and the treatment-behavior interaction effect. But, knowing the full interaction effect is important. Our work indicates that clinical trials underestimate the efficacy of a drug where behavior matters," Snowberg says. "It may be the case that 50 percent probability isn't high enough for people to change any of their behaviors, especially if it's a really uncertain new treatment. Then it's just going to look like the drug doesn't work, and that isn't the case."

Because the meta-analysis supported the team's hypothesis—that the interaction between treatment and behavior can have an effect on health outcomes—the next step is incorporating these new ideas into an active clinical trial. Snowberg says that the best fit would be a drug trial for a condition, such as a mental health disorder or an addiction, that is known to be associated with behavior. At the very least, he says, he hopes that these results will lead the medical research community to a conversation about ways to improve the DBRCT and move past the current "gold standard."

These results are published in a paper titled "Accounting for Behavior in Treatment Effects: New Applications for Blind Trials." Cayley Bowles, a student in the UCLA/Caltech MD/PhD program, was also a coauthor on the paper. The work was supported by funding to Snowberg and Chassang from the National Science Foundation.

