"Failed Stars" Host Powerful Auroral Displays

Caltech astronomers say brown dwarfs behave more like planets than stars

Brown dwarfs are relatively cool, dim objects that are difficult to detect and hard to classify. They are too massive to be planets, yet possess some planetlike characteristics; they are too small to sustain hydrogen fusion reactions at their cores, a defining characteristic of stars, yet they have starlike attributes.

By observing a brown dwarf 20 light-years away using both radio and optical telescopes, a team led by Gregg Hallinan, assistant professor of astronomy at Caltech, has found another feature that makes these so-called failed stars more like supersized planets—they host powerful auroras near their magnetic poles.

The findings appear in the July 30 issue of the journal Nature.

"We're finding that brown dwarfs are not like small stars in terms of their magnetic activity; they're like giant planets with hugely powerful auroras," says Hallinan. "If you were able to stand on the surface of the brown dwarf we observed—something you could never do because of its extremely hot temperatures and crushing surface gravity—you would sometimes be treated to a fantastic light show courtesy of auroras hundreds of thousands of times more powerful than any detected in our solar system."

In the early 2000s, astronomers began finding that brown dwarfs emit radio waves. At first, everyone assumed that the brown dwarfs were creating the radio waves in basically the same way that stars do—through the action of an extremely hot atmosphere, or corona, heated by magnetic activity near the object's surface. But brown dwarfs do not generate large flares and charged-particle emissions in the way that our sun and other stars do, so the radio emissions were surprising.

While in graduate school, in 2006, Hallinan discovered that brown dwarfs can actually pulse at radio frequencies. "We see a similar pulsing phenomenon from planets in our solar system," says Hallinan, "and that radio emission is actually due to auroras." Since then he has wondered if the radio emissions seen on brown dwarfs might be caused by auroras.

Auroral displays result when charged particles, carried by the stellar wind for example, manage to enter a planet's magnetosphere, the region where such charged particles are influenced by the planet's magnetic field. Once within the magnetosphere, those particles get accelerated along the planet's magnetic field lines to the planet's poles, where they collide with gas atoms in the atmosphere and produce the bright emissions associated with auroras.

Following his hunch, Hallinan and his colleagues conducted an extensive observation campaign of a brown dwarf called LSRJ 1835+3259, using the National Radio Astronomy Observatory's Very Large Array (VLA), the most powerful radio telescope in the world, as well as optical instruments that included Palomar's Hale Telescope and the W. M. Keck Observatory's telescopes.


This movie shows the brown dwarf, LSRJ 1835+3259, as seen with the National Radio Astronomy Observatory's Very Large Array, pulsing as a result of the process that creates powerful auroras.
Credit: Stephen Bourke/Caltech

Using the VLA, they detected a bright pulse of radio waves that appeared as the brown dwarf rotated. The object rotates every 2.84 hours, so the researchers were able to watch nearly three full rotations over the course of a single night.
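
As a quick sanity check on that cadence (a minimal arithmetic sketch; the 2.84-hour period comes from the article, while the length of the observing window is an assumed, illustrative value):

```python
# How many rotations of the brown dwarf fit into one night's observing run?
rotation_period_hr = 2.84      # rotation period reported for LSRJ 1835+3259
observing_window_hr = 8.0      # assumed length of a single night's VLA session

rotations = observing_window_hr / rotation_period_hr
print(f"~{rotations:.1f} rotations in an {observing_window_hr:.0f}-hour session")
# ~2.8 rotations, consistent with "nearly three full rotations" in one night
```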

Next, the astronomers used the Hale Telescope to observe that the brown dwarf varied optically on the same period as the radio pulses. Focusing on one of the spectral lines associated with excited hydrogen—the H-alpha emission line—they found that the object's brightness varied periodically.

Finally, Hallinan and his colleagues used the Keck telescopes to measure precisely the brightness of the brown dwarf over time—no simple feat given that these objects are many thousands of times fainter than our own sun. Hallinan and his team were able to establish that this hydrogen emission is a signature of auroras near the surface of the brown dwarf.

"As the electrons spiral down toward the atmosphere, they produce radio emissions, and then when they hit the atmosphere, they excite hydrogen in a process that occurs at Earth and other planets, albeit tens of thousands of times more intense," explains Hallinan. "We now know that this kind of auroral behavior is extending all the way from planets up to brown dwarfs."

In the case of brown dwarfs, charged particles cannot be driven into their magnetosphere by a stellar wind, as there is no stellar wind to do so. Hallinan says that some other source, such as an orbiting planet moving through the brown dwarf's magnetosphere, may be generating a current and producing the auroras. "But until we map the aurora accurately, we won't be able to say where it's coming from," he says.

He notes that brown dwarfs offer a convenient stepping stone to studying exoplanets, planets orbiting stars other than our own sun. "For the coolest brown dwarfs we've discovered, their atmosphere is pretty similar to what we would expect for many exoplanets, and you can actually look at a brown dwarf and study its atmosphere without having a star nearby that's a factor of a million times brighter obscuring your observations," says Hallinan.

Just as he has used measurements of radio waves to determine the strength of magnetic fields around brown dwarfs, he hopes to use the low-frequency radio observations of the newly built Owens Valley Long Wavelength Array to measure the magnetic fields of exoplanets. "That could be particularly interesting because whether or not a planet has a magnetic field may be an important factor in habitability," he says. "I'm trying to build a picture of magnetic field strength and topology and the role that magnetic fields play as we go from stars to brown dwarfs and eventually right down into the planetary regime."

The work, "Magnetospherically driven optical and radio aurorae at the end of the main sequence," was supported by funding from the National Science Foundation. Additional authors on the paper include Caltech senior postdoctoral scholar Stephen Bourke, Caltech graduate students Sebastian Pineda and Melodie Kao, Leon Harding of JPL, Stuart Littlefair of the University of Sheffield, Garret Cotter of the University of Oxford, Ray Butler of National University of Ireland, Galway, Aaron Golden of Yeshiva University, Gibor Basri of UC Berkeley, Gerry Doyle of Armagh Observatory, Svetlana Berdyugina of the Kiepenheuer Institute for Solar Physics, Alexey Kuznetsov of the Institute of Solar-Terrestrial Physics in Irkutsk, Russia, Michael Rupen of the National Radio Astronomy Observatory, and Antoaneta Antonova of Sofia University.

 

 

Writer: Kimm Fesenmaier

Mosquitoes Use Smell to See Their Hosts

On summer evenings, we try our best to avoid mosquito bites by dousing our skin with bug repellents and lighting citronella candles. These efforts may keep the mosquitoes at bay for a while, but no solution is perfect because the pests have evolved to use a triple threat of visual, olfactory, and thermal cues to home in on their human targets, a new Caltech study suggests.

The study, published by researchers in the laboratory of Michael Dickinson, the Esther M. and Abe M. Zarem Professor of Bioengineering, appears in the July 17 online version of the journal Current Biology.

When an adult female mosquito needs a blood meal to feed her young, she searches for a host—often a human. Many insects, mosquitoes included, are attracted by the odor of the carbon dioxide (CO2) gas that humans and other animals naturally exhale. However, mosquitoes can also pick up other cues that signal a human is nearby. They use their vision to spot a host and thermal sensory information to detect body heat.

But how do the mosquitoes combine this information to map out the path to their next meal?

To find out how and when the mosquitoes use each type of sensory information, the researchers released hungry, mated female mosquitoes into a wind tunnel in which different sensory cues could be independently controlled. In one set of experiments, a high-concentration CO2 plume was injected into the tunnel, mimicking the signal created by the breath of a human. In control experiments, the researchers introduced a plume consisting of background air with a low concentration of CO2. For each experiment, researchers released 20 mosquitoes into the wind tunnel and used video cameras and 3-D tracking software to follow their paths.

When a concentrated CO2 plume was present, the mosquitoes followed it within the tunnel as expected, whereas they showed no interest in a control plume consisting of background air.

"In a previous experiment with fruit flies, we found that exposure to an attractive odor led the animals to be more attracted to visual features," says Floris van Breugel, a postdoctoral scholar in Dickinson's lab and first author of the study. "This was a new finding for flies, and we suspected that mosquitoes would exhibit a similar behavior. That is, we predicted that when the mosquitoes were exposed to CO2, which is an indicator of a nearby host, they would also spend a lot of time hovering near high-contrast objects, such as a black object on a neutral background."

To test this hypothesis, van Breugel and his colleagues did the same CO2 plume experiment, but this time they provided a dark object on the floor of the wind tunnel. They found that in the presence of the carbon dioxide plumes, the mosquitoes were attracted to the dark high-contrast object. In the wind tunnel with no CO2 plume, the insects ignored the dark object entirely.

While it was no surprise to see the mosquitoes tracking a CO2 plume, "the new part that we found is that the CO2 plume increases the likelihood that they'll fly toward an object. This is particularly interesting because there's no CO2 down near that object—it's about 10 centimeters away," van Breugel says. "That means that they smell the CO2, then they leave the plume, and several seconds later they continue flying toward this little object. So you could think of it as a type of memory or lasting effect."

Next, the researchers wanted to see how a mosquito factors thermal information into its flight path. It is difficult to test, van Breugel says. "Obviously, we know that if you have an object in the presence of a CO2 plume—warm or cold—they will fly toward it because they see it," he says. "So we had to find a way to separate the visual attraction from the thermal attraction."

To do this, the researchers constructed two glass objects that were coated with a clear chemical substance that made it possible to heat them to any desired temperature. They heated one object to 37 degrees Celsius (approximately human body temperature) and allowed the other to remain at room temperature, and then placed them on the floor of the wind tunnel with and without CO2 plumes, and observed mosquito behavior. They found that mosquitoes showed a preference for the warm object. But contrary to the mosquitoes' visual attraction to objects, the preference for warmth was not dependent on the presence of CO2.

"These experiments show that the attraction to a visual feature and the attraction to a warm object are separate. They are independent, and they don't have to happen in order, but they do often happen in this particular order because of the spatial arrangement of the stimuli: a mosquito can see a visual feature from much further away, so that happens first. Only when the mosquito gets closer does it detect an object's thermal signature," van Breugel says.

Information gathered from all of these experiments enabled the researchers to create a model of how the mosquito finds its host over different distances. They hypothesize that from 10 to 50 meters away, a mosquito smells a host's CO2 plume. As it flies closer—to within 5 to 15 meters—it begins to see the host. Then, guided by visual cues that draw it even closer, the mosquito can sense the host's body heat. This occurs at a distance of less than a meter.
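
A minimal sketch of that hypothesized distance-staged model, using the ranges quoted above (the function and its return values are illustrative, not the authors' code):

```python
def active_cues(distance_m, co2_plume_detected=True):
    """Hypothetical staging of host-seeking cues by distance, following the
    model described in the study: smell first, then vision, then heat."""
    cues = []
    if co2_plume_detected and distance_m <= 50:   # CO2 plume: ~10-50 m
        cues.append("smell (CO2 plume)")
    if distance_m <= 15:                          # host visible: ~5-15 m
        cues.append("vision (high-contrast object)")
    if distance_m < 1:                            # body heat: < 1 m
        cues.append("heat (thermal signature)")
    return cues

for d in (40, 10, 0.5):
    print(f"{d} m: {active_cues(d)}")
```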

"Understanding how brains combine information from different senses to make appropriate decisions is one of the central challenges in neuroscience," says Dickinson, the principal investigator of the study. "Our experiments suggest that female mosquitoes do this in a rather elegant way when searching for food. They only pay attention to visual features after they detect an odor that indicates the presence of a host nearby. This helps ensure that they don't waste their time investigating false targets like rocks and vegetation. Our next challenge is to uncover the circuits in the brain that allow an odor to so profoundly change the way they respond to a visual image."

The work provides researchers with exciting new information about insect behavior and may even help companies design better mosquito traps in the future. But it also paints a bleak picture for those hoping to avoid mosquito bites.

"Even if it were possible to hold one's breath indefinitely," the authors note toward the end of the paper, "another human breathing nearby, or several meters upwind, would create a CO2 plume that could lead mosquitoes close enough to you that they may lock on to your visual signature. The strongest defense is therefore to become invisible, or at least visually camouflaged. Even in this case, however, mosquitoes could still locate you by tracking the heat signature of your body . . . The independent and iterative nature of the sensory-motor reflexes renders mosquitoes' host seeking strategy annoyingly robust."

These results were published in a paper titled "Mosquitoes use vision to associate odor plumes with thermal targets." In addition to Dickinson and van Breugel, the other authors are Jeff Riffell and Adrienne Fairhall from the University of Washington. The work was funded by a grant from the National Institutes of Health.


Alone in the Darkness: Mariner 4 to Mars, 50 Years Later

July 14 marks 50 years of visual reconnaissance of the solar system by NASA's Jet Propulsion Laboratory (JPL), beginning with Mariner 4's flyby of Mars in 1965.

Among JPL's first planetary efforts, Mariners 3 and 4 (known collectively as "Mariner Mars") were planned and executed by a group of pioneering scientists at Caltech in partnership with JPL. NASA was only 4 years old when the first Mars flyby was approved in 1962, but the core science team had been working together at Caltech for many years. The team included Caltech faculty Robert Sharp (after whom Mount Sharp, the main target of the Mars rover Curiosity, is named) and Gerry Neugebauer, professor of geology and professor of physics, respectively; Robert Leighton and H. Victor Neher, professors of physics; and Bill Pickering, professor of electrical engineering, who was the director of JPL from 1954 to 1976. Rounding out the Caltech contingent was a young Bruce Murray, a new addition to the geology faculty, who would follow Pickering as JPL director in 1976.

"The Mariner missions marked the beginning of planetary geology, led by researchers at Caltech including Bruce Murray and Robert Sharp," said John Grotzinger, the Fletcher Jones Professor of Geology and chair of the Division of Geological and Planetary Sciences. "These early flyby missions showed the enormous potential of Mars to provide insight into the evolution of a close cousin to Earth and stimulated the creation of a program dedicated to iterative exploration involving orbiters, landers, and rovers."

By today's standards, Mariner Mars was a virtual leap into the unknown. NASA and JPL had little spaceflight experience to guide them. There had been just one successful planetary mission—Mariner 2's journey past Venus in 1962—to build upon. Sending spacecraft to other planets was still a new endeavor.  

The Mariner Mars spacecraft were originally designed without cameras, but Neugebauer, Murray, and Leighton felt that many science questions could be answered with images from this close encounter with Mars, and pushed for one to be included. As it turned out, sending back photos of the planet that had so long captured the imaginations of millions had the added benefit of making the Mars flyby more accessible to the public.

Mariner 3 launched on November 5, 1964. The Atlas rocket that boosted it clear of the atmosphere functioned perfectly (not always the case in the early years of spaceflight), but the shroud enclosing the payload failed to fully open and the spacecraft, unable to collect sunlight on its solar panels, ceased to function after about nine hours of flight.

Mariner 4 launched three weeks later on November 28 with a redesigned shroud. The probe deployed as planned and began its journey to Mars. But there was still drama in store for the mission. Within the first hour of the flight, the rocket's upper stage had pushed the spacecraft out of Earth orbit, and the solar panels had deployed. Then the guidance system acquired a lock on the sun, but a second object was needed to guide the spacecraft. This depended on a photocell finding the bright star Canopus, which was attempted about 15 hours later. During these first attempts, however, the primitive onboard electronics erroneously identified other stars of similar brightness.

Controllers managed to solve this problem but over the next few weeks realized that a small cloud of dust and paint flecks, ejected when Mariner 4 deployed, was traveling along with the spacecraft and interfering with the tracking of Canopus. A tiny paint chip, if close enough to the star tracker, could mimic the star. After more corrective action, Canopus was reacquired and Mariner's journey continued largely without incident. This star-tracking technology, along with many other design features of the spacecraft, has been used in every interplanetary mission JPL has flown since.

At the time, what was known about Mars had been learned from Earth-based telescopes. The images were fuzzy and indistinct—at its closest, Mars is still about 35 million miles distant. Scientific measurements derived from visual observations of the planet were inexact. While ideas about the true nature of Mars evolved throughout the first half of the 20th century, in 1965 nobody could say with any confidence how dense the martian atmosphere was or determine its exact composition. Telescopic surveys had recorded a visual event called the "wave of darkening," which some scientists theorized could be plant life blooming and perishing as the harsh martian seasons changed. A few of them still thought of Mars as a place capable of supporting advanced life, although most thought it unlikely. However, there was no conclusive evidence for either scenario.

So, as Mariner 4 flew past Mars, much was at stake, both for the scientific community and a curious general public. Were there canals or channels on the surface, as some astronomers had reported? Would we find advanced life forms or vast collections of plant life? Would there be liquid water on the surface?

Just over seven months after launch, the encounter with Mars was imminent. On July 14, 1965, Mariner's science instruments were activated. These included a magnetometer to measure magnetic fields, a Geiger counter to measure radiation, a cosmic ray telescope, a cosmic dust detector, and the television camera.

About seven hours before the encounter, the TV camera began acquiring images. After the probe passed Mars, an onboard data recorder—which used a 330-foot endless loop of magnetic tape to store still pictures—initiated playback of the raw images to Earth, transmitting them twice for certainty. Each image took 10 hours to transmit.
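
Taking those figures at face value (22 images, each sent twice, at roughly 10 hours per transmission), a quick arithmetic sketch of the total playback time:

```python
images = 22
transmissions_per_image = 2      # each picture was transmitted twice for certainty
hours_per_transmission = 10      # approximate time to send one image to Earth

total_hours = images * transmissions_per_image * hours_per_transmission
print(f"~{total_hours} hours, or about {total_hours / 24:.0f} days of playback")
# ~440 hours: roughly two and a half weeks to return all of the raw images
```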

The 22 images sent by Mariner 4 appeared conclusive. Although they were low-resolution and black-and-white, they indicated that Mars was not a place likely to be friendly to life. It was a cold, dry desert, covered with so many craters as to strongly resemble Earth's moon. The atmospheric pressure at the surface was less than one percent that of Earth's, and no liquid water was apparent on the surface.

When discussing the mission during an interview at Caltech in 1977, Leighton recalled viewing the first images at JPL. "If someone had asked 'What do you expect to see?' we would have said 'craters'…[yet] the fact that craters were there, and a predominant land form, was somehow surprising."

Leighton also recalled a letter he received from, of all people, a dairy farmer. It read, "I'm not very close to your world, but I really appreciate what you are doing. Keep it going." Leighton said of the sentiment, "A letter from a milkman…I thought that was kind of nice."

After its voyage past Mars, Mariner 4 maintained intermittent communication with JPL and returned data about the interplanetary environment for two more years. But by the end of 1967, the spacecraft had suffered tens of thousands of micrometeoroid impacts and was out of the nitrogen gas it used for maneuvering. The mission officially ended on December 21.

"Mariner 4 defined and pioneered the systems and technologies needed for a truly interplanetary spacecraft," says Rob Manning (BS '81), JPL's chief engineer for the Low-Density Supersonic Decelerator and formerly chief engineer for the Mars Science Laboratory. "All U.S. interplanetary missions that have followed were directly derived from the architecture and innovations that engineers behind Mariner invented. We stand on the shoulders of giants."


Distant Black Hole Wave Twists Like Giant Whip

Fast-moving magnetic waves emanating from a distant supermassive black hole undulate like a whip whose handle is being shaken by a giant hand, according to a new study involving Caltech scientists. The study used data from the National Radio Astronomy Observatory's Very Long Baseline Array (VLBA) to explore, in high resolution, the galaxy-black hole system known as BL Lacertae (BL Lac).

The team's findings, detailed in the April 10 issue of the Astrophysical Journal, mark the first time so-called Alfvén (pronounced Alf-vain) waves have been identified in a black hole system.

Alfvén waves are generated when magnetic field lines, such as those coming from the sun or the disk around a black hole, interact with charged particles, or ions, and become twisted; in the case of BL Lac, and sometimes for the sun, the field lines are coiled into a helix. In BL Lac, the ions take the form of particle jets that are flung from opposite sides of the black hole at near light speed.

"Imagine running a water hose through a slinky that has been stretched taut," says first author Marshall Cohen, professor emeritus of astronomy at Caltech. "A sideways disturbance at one end of the slinky will create a wave that travels to the other end, and if the slinky sways to and fro, the hose running through its center has no choice but to move with it."

A similar thing is happening in BL Lac, Cohen says. The Alfvén waves are analogous to the propagating transverse motions of the slinky, and as the waves propagate along the magnetic field lines, they can cause the field lines—and the particle jets encompassed by the field lines—to move as well.

It's common for black hole particle jets to bend—and some even swing back and forth. But those movements typically take place on timescales of thousands or millions of years. "What we see is happening on a timescale of weeks," Cohen says. "We're taking pictures once a month, and the position of the waves is different each month."

Interestingly, from the vantage of astronomers on Earth, the Alfvén waves emanating from BL Lac appear to be traveling about five times faster than the speed of light. "The waves only appear to be superluminal, or moving faster than light," Cohen says. "The high speed is an optical illusion resulting from the fact that the waves are traveling very close to, but below, the speed of light, and are passing just to the side of our line of sight."
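
The effect Cohen describes is the standard relativistic projection illusion; here is a minimal sketch of the usual apparent-speed formula (textbook physics, not taken from the paper, and the speed and viewing angle below are purely illustrative):

```python
import math

def apparent_speed(beta, theta_deg):
    """Apparent transverse speed, in units of c, of a feature moving at true
    speed beta*c at an angle theta to our line of sight."""
    theta = math.radians(theta_deg)
    return beta * math.sin(theta) / (1 - beta * math.cos(theta))

# Illustrative values: a disturbance moving at 0.985c, seen 7 degrees off axis
print(f"apparent speed ~ {apparent_speed(0.985, 7):.1f} c")
# ~5.4 c: well above light speed in appearance, though the true speed is below c
```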

Co-author David Meier, a visiting associate in astronomy and now-retired astrophysicist from JPL, added, "By analyzing these waves, we are able to determine the internal properties of the jet, and this will help us ultimately understand how jets are produced by black holes."

Other authors on the paper, "Studies of the Jet in BL Lacertae. II. Superluminal Alfvén Waves," include Talvikki Hovatta, a former Caltech postdoctoral scholar, as well as scientists from the University of Cologne and the Max Planck Institute for Radio Astronomy in Germany; the Isaac Newton Institute of Chile; Aalto University in Finland; the Astro Space Center of Lebedev Physical Institute, the Pulkovo Observatory, and the Crimean Astrophysical Observatory in Russia. Purdue University, Denison University, and the Jet Propulsion Laboratory were also involved in the study.


JPL News: Searing Sun Seen in X-rays

X-rays light up the surface of our sun in a bouquet of colors in this new image containing data from NASA's Nuclear Spectroscopic Telescope Array, or NuSTAR. The high-energy X-rays seen by NuSTAR are shown in blue, while green represents lower-energy X-rays from the X-ray Telescope instrument on the Hinode spacecraft, named after the Japanese word for sunrise. The yellow and red colors show ultraviolet light from NASA's Solar Dynamics Observatory.

NuSTAR usually spends its time investigating the mysteries of black holes, supernovae, and other high-energy objects in space. But it can also look closer to home to study our sun.

"What's great about NuSTAR is that the telescope is so versatile that we can hunt black holes millions of light-years away and we can also learn something fundamental about the star in our own backyard," said Brian Grefenstette, a Caltech research scientist and an astronomer on the NuSTAR team.

NuSTAR is a Small Explorer mission led by Caltech and managed by NASA's Jet Propulsion Laboratory in Pasadena, California, for NASA's Science Mission Directorate in Washington. JPL is managed by Caltech for NASA.

Read the full story from JPL News


Sniffing Out Answers: A Conversation with Markus Meister

Blindfolded and asked to distinguish between a rose and, say, smoke from a burning candle, most people would find the task easy. Even differentiating between two rose varieties can be a snap because the human olfactory system—made up of the nerve cells in our noses and everything that allows the brain to process smell—is quite adept. But just how sensitive is it to different smells?

In 2014, a team of scientists from the Rockefeller University published a paper in the journal Science, arguing that humans can discriminate at least 1 trillion odors. Now Markus Meister, the Anne P. and Benjamin F. Biaggini Professor of Biological Sciences at Caltech, has published a paper in the open-access journal eLife, in which he disputes the 2014 claim, saying that the science is not yet in a place where such a number can be determined.

We recently spoke with Meister about his new paper and what it says about the claim that we can distinguish a trillion smells.

 

What was the goal of the 2014 paper, and why do you take issue with it?

The overt question the authors asked was: How many different smells can humans distinguish? That is a naturally interesting question, in part because in other fields of sensory biology, similar questions have already been answered. People quibble about the exact numbers, but in general scientists agree that humans can distinguish about 1 to 2 million colors and something on the order of 100,000 pure tones.

But as interesting as the question is, I argue that we, as a field, are not yet prepared to address it. First we need to know how many dimensions span the perceptual space of odors. And by that I mean: how many olfactory variables are needed to fully describe all of the odors that humans can experience?

In the case of human vision, we say that the perceptual space for colors has three dimensions, which means that every physical light can be described by three numbers—how it activates the red, green, and blue cone photoreceptors in the retina.
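
In the standard formalism (a sketch of the usual trichromacy relations, not notation from either paper), a light with spectral power distribution S(λ) is reduced to exactly three numbers, one per cone type:

```latex
L = \int S(\lambda)\,\bar{l}(\lambda)\,d\lambda, \qquad
M = \int S(\lambda)\,\bar{m}(\lambda)\,d\lambda, \qquad
S_{\mathrm{cone}} = \int S(\lambda)\,\bar{s}(\lambda)\,d\lambda
```

where the barred functions are the sensitivities of the long-, medium-, and short-wavelength (roughly red, green, and blue) cones; two lights that produce the same triple are perceptually identical, which is why the perceptual space of colors is three-dimensional.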

As long as we don't know the dimensionality of odor space, we don't know how to even start interpreting measurements. Once we know the dimensionality, we can start probing the space systematically and ask how many different odors fit into it in the same way that we've looked at how many different colors fit into the three-dimensional space of colors.

The fundamental conceptual mistake that the authors of the Science paper made was to assume that the space of odor perception has 128 dimensions or more and then interpret the data as though that was the case . . . even though there is absolutely no evidence to suggest that the odor space has such high dimensionality.

 

What makes it so hard to determine the dimensionality of odor?

Well, there are a couple of things. First, there is no natural coordinate system in which olfactory stimuli exist. This stands in contrast with visual and auditory stimuli. For example, pure (monochromatic) lights or tones can be represented nicely as sinusoidal waves with just two variables, the frequency and the amplitude of the wave. We can easily control those two variables, and they correspond nicely to things we perceive. For pure tones, the amplitude of the sine wave corresponds to loudness and the frequency corresponds to perceived pitch. For a pure light, the frequency determines your perception of the color; if you change the intensity of the light, that alters your perception of the brightness. These simple physical parameters of the stimulus allow us to explore those spaces more easily.

In the case of odors, there are probably several hundred thousand substances that have a smell that can be perceived. But they all have different structures. There is no intuitive way to organize the stimuli. There has been some recent progress in this area, but in general we have not been successful in isolating a few physical variables that can account for a lot of what we smell.

Another aspect of olfaction that has complicated people's thinking is that humans have about 400 types of primary smell receptors. These are the actual neurons in the lining of the nasal cavity that detect odorants. So at the very input to the nervous system, every smell is characterized by the action it has on those 400 different sensors. Based on that, you might assume that smell lives in a much larger space than color vision—one with as many as 400 dimensions.

But can we perceive all of those 400 dimensions? Just because two odors cause a different pattern of activation of nerve cells in the nose doesn't mean you can actually tell them apart. Think about our sense of touch. Every one of our hairs has at its root several mechanoreceptors. If you run a comb through the hair on your head, you activate a hundred thousand mechanoreceptors in a particular pattern. If you repeat the action, you activate a different pattern of receptors, but you will be unable to perceive a difference. Similarly, I argue, there's no reason to think that we can perceive a difference between all the different patterns of activation of nerve cells in the nasal cavity. So the number of dimensions could, in fact, be much lower than 400. In fact, some recent studies have suggested that odor lives in a space with 10 or fewer perceptual dimensions.

 

In your work you describe a couple of basic experimental design failures of the 2014 paper. Can you walk us through those?

Basically, two scientific errors were made in the original study. They have to do with the concept of a positive-control experiment and the concept of testing alternative hypotheses.

In science, when we come up with a new way of analyzing things, we need to perform a test—called a positive control—that gives us confidence that the new analysis can find the right answer in a case where we already know what the answer is. So, for example, if you have devised a new way of weighing things, you will want to test it by weighing something whose weight you already know very well based on some accepted procedure. If the new procedure gives a different answer, we say it failed the positive control.

The 2014 paper did not include a positive-control test. In my paper, I provide two: applying the system that the authors propose to a very simple model microbe and to the human color-vision system. In both cases, the answers come out wrong by huge factors.

The other failure of the 2014 paper is a failure to consider alternate hypotheses. When scientists interpret the outcome of an experiment, we need to seriously analyze alternate hypotheses to the ones we believe are most likely and show why they are not reasonable explanations for what we are seeing.

In my paper, I show that an alternate model that is clearly absurd—that humans can only discriminate 10 odors—explains the data just as well as the very complicated explanation that the authors propose, which involves 400 dimensions and 1 trillion odor percepts. What this really means is that the experiment was poorly designed, in the sense that it didn't constrain the answer to the question.

By the way, there is an accompanying paper by Gerkin and Castro in the same issue of eLife that critiques the experimental design from an entirely different angle, regarding the use of statistics. I found this article very instructive, and have used it already in teaching.

 

How do you suggest scientists go about determining the dimensionality of the odor space?

One concrete idea is to try to figure out what the number of dimensions is in the vicinity of a particular point in that space. If you did that with color, you would arrive at the number three from the vast majority of points. So I suggest we start at some arbitrary point in odor space—say a 50 percent mixture of 30 different odors—and systematically go in each of the directions from there and ask: can humans actually distinguish the odor when you change the concentration a little bit up or down from there? If you do that in 30 different dimensions, you might find that maybe only five of those dimensions contribute to changing the perceived odor and that along the other dimensions there is very little change. So let's figure out the dimensionality that comes out of a study like that. Is it two? Probably not. I would guess something like 10 or 20.
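
A toy sketch of that local-dimensionality probe (everything here, including the stand-in discrimination test and the assumption that only five components matter, is hypothetical and only illustrates the structure of the proposed experiment):

```python
N_COMPONENTS = 30       # a 50 percent mixture of 30 different odorants
STEP = 0.05             # small nudge in concentration, up or down

def smell_changes(component, delta):
    """Stand-in for a human discrimination test: does nudging this component's
    concentration by `delta` produce a noticeably different smell?
    (Hypothetical answer: only the first five components matter.)"""
    return component < 5

perceptible = [i for i in range(N_COMPONENTS)
               if smell_changes(i, +STEP) or smell_changes(i, -STEP)]
print(f"{len(perceptible)} of {N_COMPONENTS} directions change the percept")
```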

Once we know that, we can start to ask how many odors fit into that space.

 

Why does all of this matter? Why do we need to know how many odors we can smell?

The question of how many smells we can discriminate has fascinated people for at least a century, and the whole industry of flavors and fragrances has been very interested in finding out whether there is a systematic set of rules by which one could mix together some small number of primary odors in order to produce any target smell.

In the field of color vision, that problem has been solved. As a result, we all use color monitors that only have three types of lights—red, green, and blue. And yet by mixing them together, they can make just about every color impression that you might care about. So there's a real technological incentive to figuring out how you can mix together primary stimuli to make any kind of perceived smell.

 

What is the big lesson you would like people to take away from this scientific exchange?

One lesson I try to convey to my students is the value of a simple simulation—to ask, "Could this idea work even in principle? Let's try it in the simplest case we can imagine." That sort of triage can often keep you from walking down an unproductive path.

On a more general note, people should remain skeptical of spectacular claims. This is particularly important when we referee for the high-glamour journals, where the editors have a predilection for unexpected results. As a community we should let things simmer a bit before allowing a spectacular claim to become the conventional wisdom. Maybe we all need to stop and smell the roses.

Writer: Kimm Fesenmaier

Better Memory with Faster Lasers

DVDs and Blu-ray disks contain so-called phase-change materials that morph from one atomic state to another after being struck with pulses of laser light, with data "recorded" in those two atomic states. Using ultrafast laser pulses that speed up the data recording process, Caltech researchers adopted a novel technique, ultrafast electron crystallography (UEC), to visualize directly in four dimensions the changing atomic configurations of the materials undergoing the phase changes. In doing so, they discovered a previously unknown intermediate atomic state—one that may represent an unavoidable limit to data recording speeds.

By shedding light on the fundamental physical processes involved in data storage, the work may lead to better, faster computer memory systems with larger storage capacity. The research, done in the laboratory of Ahmed Zewail, Linus Pauling Professor of Chemistry and professor of physics, will be published in the July 28 print issue of the journal ACS Nano.

When the laser light interacts with a phase-change material, its atomic structure changes from an ordered crystalline arrangement to a more disordered, or amorphous, configuration. These two states represent 0s and 1s of digital data.

"Today, nanosecond lasers—lasers that pulse light at one-billionth of a second—are used to record information on DVDs and Blu-ray disks, by driving the material from one state to another," explains Giovanni Vanacore, a postdoctoral scholar and an author on the study. The speed with which data can be recorded is determined both by the speed of the laser—that is, by the duration of each "pulse" of light—and by how fast the material itself can shift from one state to the other.

Thus, with a nanosecond laser, "the fastest you can record information is one information unit, one 0 or 1, every nanosecond," says Jianbo Hu, a postdoctoral scholar and the first author of the paper. "To go even faster, people have started to use femtosecond lasers, which can potentially record one unit every one millionth of a billionth of a second. We wanted to know what actually happens to the material at this speed and if there is a limit to how fast you can go from one structural phase to another."
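
As rough arithmetic on those pulse rates, treating one pulse as one recorded bit, which is the upper bound described above (the comparison is illustrative and ignores the material's own switching time):

```python
nanosecond = 1e-9    # pulse duration of a nanosecond laser, in seconds
femtosecond = 1e-15  # "one millionth of a billionth of a second"

print(f"nanosecond laser:  ~{1 / nanosecond:.0e} bits per second")   # ~1e9
print(f"femtosecond laser: ~{1 / femtosecond:.0e} bits per second")  # ~1e15
print(f"potential speed-up: ~{nanosecond / femtosecond:.0e}x")       # ~1e6
```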

To study this, the researchers used their technique, ultrafast electron crystallography. The technique, a new development—different from Zewail's Nobel Prize–winning work in femtochemistry, the visual study of chemical processes occurring at femtosecond scales—allowed researchers to observe directly the transitioning atomic configuration of a prototypical phase-change material, germanium telluride (GeTe), when it is hit by a femtosecond laser pulse.

In UEC, a sample of crystalline GeTe is bombarded with a femtosecond laser pulse, followed by a pulse of electrons. The laser pulse causes the atomic structure to change from the crystalline to other structures, and then ultimately to the amorphous state. Then, when the electron pulse hits the sample, its electrons scatter in a pattern that provides a picture of the sample's atomic configuration as a function of time.

With this technique, the researchers could see directly, for the first time, the structural shift in GeTe caused by the laser pulses. However, they also saw something more: a previously unknown intermediate phase that appears during the transition from the crystalline to the amorphous configuration. Because moving through the intermediate phase takes additional time, the researchers believe that it represents a physical limit to how quickly the overall transition can occur—and to how fast data can be recorded, regardless of the laser speeds used.

"Even if there is a laser faster than a femtosecond laser, there will be a limit as to how fast this transition can occur and information can be recorded, just because of the physics of these phase-change materials," Vanacore says. "It's something that cannot be solved technologically—it's fundamental."

Despite revealing such limits, the research could one day aid the development of better data storage for computers, the researchers say. Right now, computers generally store information in several ways, among them the well-known random-access memory (RAM) and read-only memory (ROM). RAM, which is used to run the programs on your computer, can record and rewrite information very quickly via an electrical current. However, the information is lost whenever the computer is powered down. ROM storage, including CDs and DVDs, uses phase-change materials and lasers to store information. Although ROM records and reads data more slowly, the information can be stored for decades.

Finding ways to speed up the recording process of phase-change materials and understanding the limits to this speed could lead to a new type of memory that harnesses the best of both worlds.

The researchers say that their next step will be to use UEC to study the transition of the amorphous atomic structure of GeTe back into the crystalline phase—comparable to the phenomenon that occurs when you erase and then rewrite a DVD.

Although these applications could mean exciting changes for future computer technologies, this work is also very important from a fundamental point of view, Zewail says.

"Understanding the fundamental behavior of materials transformation is what we are after, and these new techniques developed at Caltech have made it possible to visualize such behavior in both space and time," Zewail says.

The work is published in a paper titled "Transient Structures and Possible Limits of Data Recording in Phase-Change Materials." In addition to Hu, Vanacore, and Zewail, Xiangshui Miao and Zhe Yang are also coauthors on the paper. The work was supported by the National Science Foundation and the Air Force Office of Scientific Research and was carried out in Caltech's Center for Physical Biology, which is funded by the Gordon and Betty Moore Foundation.


New Approach Holds Promise for Earlier, Easier Detection of Colorectal Cancer

Caltech chemists develop a technique that could one day lead to early detection of tumors

Chemists at Caltech have developed a new sensitive technique capable of detecting colorectal cancer in tissue samples—a method that could one day be used in clinical settings for the early diagnosis of colorectal cancer.

Colorectal cancer is the third most prevalent cancer worldwide and is estimated to cause about 700,000 deaths every year. Metastasis due to late detection is one of the major causes of mortality from this disease; therefore, a sensitive and early indicator could be a critical tool for physicians and patients.

A paper describing the new detection technique currently appears online in Chemistry & Biology and will be published in the July 23 issue of the journal's print edition. Caltech graduate student Ariel Furst (PhD '15) and her adviser, Jacqueline K. Barton, the Arthur and Marian Hanisch Memorial Professor of Chemistry, are the paper's authors.

"Currently, the average biopsy size required for a colorectal biopsy is about 300 milligrams," says Furst. "With our experimental setup, we require only about 500 micrograms of tissue, which could be taken with a syringe biopsy versus a punch biopsy. So it would be much less invasive." One microgram is one thousandth of a milligram.

The researchers zeroed in on the activity of a protein called DNMT1 as a possible indicator of a cancerous transformation. DNMT1 is a methyltransferase, an enzyme responsible for DNA methylation—the addition of a methyl group to one of DNA's bases. This essential and normal process is a genetic editing technique that primarily turns genes off but that has also recently been identified as an early indicator of cancer, especially the development of tumors, if the process goes awry.

When all is working well, DNMT1 maintains the normal methylation pattern set in the embryonic stages, copying that pattern from the parent DNA strand to the daughter strand. But sometimes DNMT1 goes haywire, and methylation goes into overdrive, causing what is called hypermethylation. Hypermethylation can lead to the repression of genes that typically do beneficial things, like suppress the growth of tumors or express proteins that repair damaged DNA, and that, in turn, can lead to cancer.

Building on previous work in Barton's group, Furst and Barton devised an electrochemical platform to measure the activity of DNMT1 in crude tissue samples—those that contain all of the material from a tissue, not just DNA or RNA, for example. Fundamentally, the design of this platform is based on the concept of DNA-mediated charge transport—the idea that DNA can behave like a wire, allowing electrons to flow through it and that the conductivity of that DNA wire is extremely sensitive to mistakes in the DNA itself. Barton earned the 2010 National Medal of Science for her work establishing this field of research and has demonstrated that it can be used not only to locate DNA mutations but also to detect the presence of proteins such as DNMT1 that bind to DNA.

In the present study, Furst and Barton started with two arrays of gold electrodes—one atop the other—embedded in Teflon blocks and separated by a thin spacer that formed a well for solution. They attached strands of DNA to the lower electrodes, then added the broken-down contents of a tissue sample to the solution well. After allowing time for any DNMT1 in the tissue sample to methylate the DNA, they added a restriction enzyme that severed the DNA if no methylation had occurred—i.e., if DNMT1 was inactive. When they applied a current to the lower electrodes, the samples with DNMT1 activity passed the current clear through to the upper electrodes, where the activity could be measured. 

"No methylation means cutting, which means the signal turns off," explains Furst. "If the DNMT1 is active, the signal remains on. So we call this a signal-on assay for methylation activity. But beyond on or off, it also allows us to measure the amount of activity." This assay for DNMT1 activity was first developed in Barton's group by Natalie Muren (PhD '13).

Using the new setup, the researchers measured DNMT1 activity in 10 pairs of human tissue samples, each composed of a colorectal tumor sample and an adjacent healthy tissue from the same patient. When they compared the samples within each pair, they consistently found significantly higher DNMT1 activity, hypermethylation, in the tumorous tissue. Notably, they found little correlation between the amount of DNMT1 in the samples and the presence of cancer—the correlation was with activity.

"The assay provides a reliable and sensitive measure of hypermethylation," says Barton, also the chair of the Division of Chemistry and Chemical Engineering.  "It looks like hypermethylation is good indicator of tumorigenesis, so this technique could provide a useful route to early detection of cancer when hypermethylation is involved."

Looking to the future, Barton's group hopes to use the same general approach in devising assays for other DNA-binding proteins and possibly using the sensitivity of their electrochemical devices to measure protein activities in single cells. Such a platform might even open up the possibility of inexpensive, portable tests that could be used in the home to catch colorectal cancer in its earliest, most treatable stages.

The work described in the paper, "DNA Electrochemistry shows DNMT1 Methyltransferase Hyperactivity in Colorectal Tumors," was supported by the National Institutes of Health. 

Writer: Kimm Fesenmaier

Discovering a New Stage in the Galactic Lifecycle

On its own, dust seems fairly unremarkable. However, by observing the clouds of gas and dust within a galaxy, astronomers can determine important information about the history of star formation and the evolution of galaxies. Now, thanks to the unprecedented sensitivity of the Atacama Large Millimeter Array (ALMA) in Chile, a Caltech-led team has been able to observe the dust contents of galaxies as seen just 1 billion years after the Big Bang, a period corresponding to redshifts of 5 to 6. These are the earliest average-sized galaxies ever to be directly observed and characterized in this way.
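
For readers who want to check that correspondence between redshift and cosmic age, here is a minimal sketch using astropy's built-in Planck 2015 cosmology (an assumed cosmology, not necessarily the one adopted in the paper):

```python
from astropy.cosmology import Planck15  # assumes astropy is installed

for z in (5, 6):
    age_gyr = Planck15.age(z).to("Gyr").value
    print(f"z = {z}: universe is ~{age_gyr:.2f} billion years old")
# roughly 1.2 and 0.9 billion years: about 1 billion years after the Big Bang
```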

The work is published in the June 25 edition of the journal Nature.

Dust in galaxies is created by the elements released during the formation and collapse of stars. Although the most abundant elements in the universe—hydrogen and helium—were created by the Big Bang, stars are responsible for making all of the heavier elements in the universe, such as carbon, oxygen, nitrogen, and iron. And because young, distant galaxies have had less time to make stars, these galaxies should contain less dust. Previous observations had suggested this, but until now nobody could directly measure the dust in these faraway galaxies.

"Before we started this study, we knew that stars formed out of these clouds of gas and dust, and we knew that star formation was probably somehow different in the early universe, where dust is likely less common. But the previous information only really hinted that the properties of the gas and the dust in earlier galaxies were different than in galaxies we see around us today. We wanted to find data that showed that," says Peter Capak, a staff scientist at the Infrared Processing and Analysis Center (IPAC) at Caltech and the first author of the study.

Armed with the high sensitivity of ALMA, Capak and his colleagues set out to perform a direct analysis of the dust in these very early galaxies.

Young, faraway galaxies are often difficult to observe because they appear very dim from Earth. Previous observations of these young galaxies, which formed just 1 billion years after the Big Bang, were made with the Hubble Space Telescope and the W. M. Keck Observatory—both of which detect light in the near-infrared and visible bands of the electromagnetic spectrum. The color of these galaxies at these wavelengths can be used to make inferences about the dust—for example, galaxies that appear bluer in color tend to have less dust, while those that are red have more dust. However, other effects like the age of the stars and our distance from the galaxy can mimic the effects of dust, making it difficult to understand exactly what the color means.

The researchers began their observations by first analyzing these early galaxies with the Keck Observatory. Keck confirmed that the galaxies lie at redshifts greater than 5, verifying that they were at least as young as they had previously been thought to be. The researchers then observed the same galaxies using ALMA to detect light at the longer millimeter and submillimeter wavelengths. The ALMA readings provided a wealth of information that could not be seen with visible-light telescopes, including details about the dust and gas content of these very early galaxies.

Capak and his colleagues were able to use ALMA to—for the first time—directly view the dust and gas clouds of nine average-sized galaxies during this epoch. Specifically, they focused on a feature called the carbon II spectral line, which comes from carbon atoms in the gas around newly formed stars. The carbon line itself traces this gas, while the data collected around the carbon line traces a so-called continuum emission, which provides a measurement of the dust. The researchers knew that the carbon line was bright enough to be seen in mature, dust-filled nearby galaxies, so they reasoned that the line would be even brighter if there was indeed less dust in the young faraway galaxies.

Using the carbon line, their results confirmed what had previously been suggested by the data from Hubble and Keck: these early galaxies contained, on average, 12 times less dust than galaxies from 2 billion years later (at a redshift of approximately 4).

"In galaxies like our Milky Way or nearby Andromeda, all of the stars form in very dusty environments, so more than half of the light that is observed from young stars is absorbed by the dust," Capak says. "But in these faraway galaxies we observed with ALMA, less than 20 percent of the light is being absorbed. In the local universe, only very young galaxies and very odd ones look like that. So what we're showing is that the normal galaxy at these very high redshifts doesn't look like the normal galaxy today. Clearly there is something different going on."

That "something different" gives astronomers like Capak a peek into the lifecycle of galaxies. Galaxies form because gas and dust are present and eventually turn into stars—which then die, creating even more gas and dust, and releasing energy. Because it is impossible to watch this evolution from young galaxy to old galaxy happen in real time on the scale of a human lifespan, the researchers use telescopes like ALMA to take a survey of galaxies at different evolutionary stages. Capak and his colleagues believe that this lack of dust in early galaxies signifies a never-before-seen evolutionary stage for galaxies.

"This result is really exciting. It's the first time that we're seeing the gas that the stars are forming out of in the early universe. We are starting to see the transition from just gas to the first generation of galaxies to more mature systems like those around us today. Furthermore, because the carbon line is so bright, we can now easily find even more distant galaxies that formed even longer ago, sooner after the Big Bang," Capak says.

Lin Yan, a staff scientist at IPAC and coauthor on the paper, says that their results are also especially important because they represent typical early galaxies. "Galaxies come in different sizes. Earlier observations could only spot the largest or the brightest galaxies, and those tend to be very special—they actually appear very rarely in the population," she says. "Our findings tell you something about a typical galaxy in that early epoch, so the results apply to the population as a whole, not just to special cases."

Yan says that the team's ability to analyze the properties of these and even earlier galaxies will only expand with ALMA's newly completed capabilities. During the study, ALMA was operating with only a portion of its antennas, 20 at the time; now that the array is complete with 66 antennas, its ability to see and analyze distant galaxies will improve further, Yan adds.

"This is just an initial observation, and we've only just started to peek into this really distant universe at redshift of a little over 5. An astronomer's dream is basically to go as far distant as we can. And when it's complete, we should be able to see all the distant galaxies that we've only ever dreamed of seeing," she says.

The findings are published in a paper titled, "Galaxies at redshifts 5 to 6 with systematically low dust content and high [C II] emission." The work was supported by funds from NASA and the European Union's Seventh Framework Program. Nick Scoville, the Francis L. Moseley Professor of Astronomy, was an additional coauthor on this paper. In addition to Keck, Hubble, and ALMA data, observations from the Spitzer Space Telescope were used to measure the stellar mass and age of the galaxies in this study. Coauthors and collaborators from other institutions include C. Carilli, G. Jones, C.M. Casey, D. Riechers, K. Sheth, C.M. Corollo, O. Ilbert, A. Karim, O. LeFevre, S. Lilly, and V. Smolcic.


Voting Rights: A Conversation with Morgan Kousser

Three years ago this week, the U.S. Supreme Court ruled unconstitutional a key provision of the Voting Rights Act (VRA), which was enacted in 1965 and extended four times since then by Congress. Section 5 of the act required certain "covered" jurisdictions in the Deep South and in states and counties outside the Deep South that had large populations of Hispanics and Native Americans to obtain "pre-clearance" from the Justice Department or the U.S. District Court in the District of Columbia before changing any election law. The provision was designed to prevent election officials from replacing one law that had been declared to be racially discriminatory with a different but still discriminatory law. A second provision, Section 4(b), contained the formula for coverage.

The VRA, notes Morgan Kousser, the William R. Kenan, Jr., Professor of History and Social Science, has been "very effective. You went from 7 percent of the black voters in Mississippi being registered to vote to 60 percent within three or four years. That was just an amazing change. Even more amazing, Section 5 was flexible enough to prevent almost every kind of new discriminatory technique or device over a period of nearly 50 years." For instance, Kousser notes, "when white supremacists in Mississippi saw that African Americans would soon comprise majorities in some state or local legislative districts, they merged the districts to preserve white majorities everywhere. But Section 5 stopped this runaround and allowed the new black voters real democracy. Voting rights was the one area in which federal law came close to eliminating the country's long, sad history of racial discrimination."

But on June 25, 2013, in a landmark ruling in Shelby County v. Holder, the Court overturned Section 4(b), effectively dismantling Section 5. Without a formula that defines covered jurisdictions, no area falls within the scope of Section 5. Chief Justice John Roberts, writing the 5–4 majority opinion, argued that although the original coverage formula "made sense," it was now outdated, based on "decades-old data and eradicated practices." Asserting that voter turnout and registration rates in covered jurisdictions are nearly equal for whites and African Americans, Roberts also noted that "blatantly discriminatory evasions of federal decrees are rare. And minority candidates hold office at unprecedented levels."

The decision, says Kousser, was wrong. In a comprehensive study recently published in the journal Transatlantica, he, with the help of three Caltech students who worked on the study during Summer Undergraduate Research Fellowship (SURF) projects, examined more than four thousand successful voting-rights cases around the country as well as Justice Department inquiries and settlements and changes to laws in response to the threat of lawsuits. Over 90 percent, they found, occurred in the covered jurisdictions—indicating, Kousser says, that the coverage scheme was still working very well.

The study found that—even when excluding all of the actions brought under Section 5 of the VRA, and only looking at those that can be brought anywhere in the country—83.2 percent of successful cases originated in covered jurisdictions. This shows, Kousser says, that whatever the coverage formula measured, it still captured the "overwhelming number of instances of proven racial discrimination in elections."

We talked with Kousser about the ruling and his findings—and how this constitutional law scholar made his way to Caltech.

 

Why do you think Justice Roberts and the other justices in the majority ruled the way they did?

He had a sense that there had been a lot of cases outside of the covered jurisdictions. But if you look at all of the data, you see that the coverage scheme captures 94 percent of all of the cases and other events that took place from 1957 through 2013 and an even larger proportion up to 2006. Suppose that you were a stockbroker, and you could make a decision that was right 94 percent of the time. Your clients would be very, very wealthy. No one would be dissatisfied with you. That's what the congressional coverage scheme did.

I wish very much that I had finished this paper two years earlier and that the data had been published in a scholarly journal, or at least made available as a preprint, by the time the decision was being cooked up. That was a mistake on my part. I should have let it out into the world a little earlier. Sometimes I have a fantasy that if this had been shown to the right justices at the right time, maybe they would have decided differently.

 

The Court did not rule on the VRA in general, but it said that the coverage formula is outdated because voting discrimination is not as bad as it once was. Do you agree?

This is one of the reasons that I looked at the coverage of the California Voting Rights Act (CVRA), passed in 2002. In Section 2 of the National VRA, you have to prove what is called the "totality of the circumstances." You have to prove not only that voting is racially polarized and that there is a kind of election structure used for discrimination, but also show that there is a history of discrimination in the area, that there are often special informal procedures that go against minorities, and a whole series of other things. A Section 2 case is quite difficult to prove.

The CVRA attempted to simplify those circumstances so all you have to show is that there is racially polarized voting, usually shown by a statistical analysis of how various groups voted, and that there is a potentially discriminatory electoral structure, particularly at-large elections for city council, for school board, for community college district, and so on.

The CVRA, in effect, only became operative in 2007 after some preliminary litigation. And in 2007, after the city of Modesto settled a long-running lawsuit, lawyers for the successful plaintiffs presented the city with a bill for about $3 million. This scared jurisdictions throughout California, which were faced with the potential of paying out large amounts of money if they had racially polarized voting. Again and again, you suddenly saw jurisdictions settling short of going to trial and a lot of Hispanics elected to particular boards. This has changed about 100 or 125 local boards throughout California from holding their elections at-large to holding them by sub-districts, which allow geographically segregated minorities to elect candidates of their choice. If you graph that over time, you see a huge jump in the number of successful CVRA cases after 2007. What does this mean? Does it mean that there was suddenly a huge increase in discrimination? No, it means that there was a tool that allowed the discrimination that had previously existed to be legally identified.

If we had that across the country, and it was easier to bring cases, you would expose a lot more discrimination. That's my argument.

 

Do you think the coverage plan will be restored?

If there were hearings and an assessment of this scheme or any other potentially competing schemes, then Congress might decide on a new coverage scheme. If such a bill were passed, it would go back up to the U.S. Supreme Court, and maybe the Court would be more interested in the actual empirical evidence instead of simply guessing what they thought might have existed. But I think right now the possibilities of getting any changes through the Congress are zero.

I would like to see some small changes in the coverage scheme, but they have to be made on the basis of evidence. Just throwing out the whole thing because allegedly it didn't fit anymore is an irrational way to make public policy.

 

As a professor of history, do you think it is your responsibility to help change policy?

Well, it has been interesting to me from the very beginning. Let me tell you how I got started in voting rights cases. My doctoral dissertation was on the disfranchisement of blacks and poor whites in the South in the late 19th and early 20th centuries. In about 1979, a lawyer who was cooperating with the ACLU [American Civil Liberties Union] in Birmingham, Alabama, called me up—I didn't know who he was—and he said, "Do you have an opinion about whether section 201 of the Alabama constitution of 1901 was adopted with a racially discriminatory purpose?" I said, "I do. I've studied that. I think it was adopted with a racially discriminatory purpose."

Writing expert witness reports and testifying in cases are exactly like what I have always done as a scholar. I have looked at the racially discriminatory effects of laws; I have looked at the racially discriminatory intent of laws. I have examined them by looking at a lot of evidence. I write very long papers for these cases. They are scholarly publications, and whether they relate to something that happened 100 years ago or something that happened five years ago or yesterday doesn't really, in principle, seem to make any difference.

 

How did you get started as a historian studying politics?

Well, I'm old. I grew up in the South during the period of segregation, but just as it was breaking down. When I was a junior in high school, the sit-ins took place in Nashville, Tennessee, which is where I'm from. I was sympathetic. I never liked segregation. I was always in favor of equal rights.

I had been fascinated by politics from the very beginning. By the time I was 8 or 9 years old, I was reading two newspapers a day. One was a very conservative newspaper, pro-segregation, and the other paper was a liberal newspaper, critical of segregation. They both covered politics. And if you read news stories in each about the same event on the same day, you'd get a completely different slant. It was a wonderful training for a historian. From reading two newspapers that I knew to be biased, one in one direction, the other in another direction, I had to try to figure out what was happening and what I should believe to be fact.

 

How did you end up at Caltech?

To be very frank, Yale, where I was a graduate student, didn't want me around anymore. When I was there, I started a graduate student senate. I wrote its constitution, and I served as its first president. We were obnoxious. This was in 1967 and 1968, and students were revolting around the country, trying to bring an end to the war in Vietnam, trying to stop racial discrimination, trying to change the world. I had less lofty aims.

 

Such as?

There was no bathroom for women in the hall of graduate studies where the vast majority of humanities and social sciences classes took place. We made a nonnegotiable demand for a bathroom for women. Yale was embarrassed. Yale granted our request. We did other things. We protested against a rent increase in graduate student married housing. Yale couldn't justify the increase and gave way. We formed a committee to get women equal access to the Yale swimming pools. Yale opened the pool.

 

In addition to doing research, you are an acclaimed teacher at Caltech—the winner of Caltech's highest teaching honor, the Feynman Prize, in 2011. Do you think of yourself as more of a teacher or as a scholar?

I really like to do both. I can't avoid teaching. If you look at my scholarship, a lot of it is really in teaching format. I would like to school Chief Justice Roberts on what he has done wrong and to persuade him, convince him, that he should change his mind on this. A lot of my friends who are at my advanced age have quit teaching because they can't take it anymore. When the term is over, they are jubilant.

I'm always sad when the term ends, particularly with my Supreme Court class, because the classes are small, so I know each individual student pretty well. I hate to say goodbye to them.

 

Do any particular students stand out in your mind?

I had one student who took my class in 2000. He was a computer science major. We used to talk a lot. We disagreed about practically everything politically, but he was a very nice and very intelligent guy.

When he finished the class, he decided that he would go to work for Microsoft. He did that for three years. Then he decided he wanted to go to law school, where he did very well; he clerked for an appeals court judge and he clerked for a Supreme Court justice. This spring, he argued his first case before the U.S. Supreme Court. The case that he argued was very complicated. I don't understand it, I don't understand the issues, I don't understand the precedents. It's relatively obscure, and it won't make big headlines. But he did it, and he's promised me that he'll share his impressions of being on that stage and that I can pass them on to current Caltech students. I know that they will find his experience as exciting as I will—a Techer arguing a case before the Supreme Court within 15 years of graduating from college! I can't quit teaching.

