Caltech Biochemist Sheds Light on Structure of Key Cellular 'Gatekeeper'

Facing a challenge akin to solving a 1,000-piece jigsaw puzzle while blindfolded—and without touching the pieces—many structural biochemists thought it would be impossible to determine the atomic structure of a massive cellular machine called the nuclear pore complex (NPC), which is vital for cell survival.

But after 10 years of attacking the problem, a team led by André Hoelz, assistant professor of chemistry, recently solved almost a third of the puzzle. The approach his team developed to do so also promises to speed completion of the remainder.

In an article published online February 12 by Science Express, Hoelz and his colleagues describe the structure of a significant portion of the NPC, which is built from multiple copies of about 34 different proteins: perhaps 1,000 protein molecules in all, comprising some 10 million atoms. In eukaryotic cells (those with a membrane-bound nucleus), the NPC forms a transport channel in the nuclear membrane. The NPC serves as a gatekeeper, essentially deciding which proteins and other molecules are permitted to pass into and out of the nucleus. The survival of cells depends on the accuracy of these decisions.

Understanding the structure of the NPC could lead to new classes of cancer drugs as well as antiviral medicines. "The NPC is a huge target of viruses," Hoelz says. Indeed, pathogens such as HIV and Ebola subvert the NPC as a way to take control of cells, rendering them incapable of functioning normally. Figuring out just how the NPC works might enable the design of new drugs to block such intruders.

"This is an incredibly important structure to study," he says, "but because it is so large and complex, people thought it was crazy to work on it. But 10 years ago, we hypothesized that we could solve the atomic structure with a divide-and-conquer approach—basically breaking the task into manageable parts—and we've shown that for a major section of the NPC, this actually worked."

To map the structure of the NPC, Hoelz relied primarily on X-ray crystallography, which involves shining X-rays on a crystallized sample and using detectors to analyze the pattern of rays diffracted by the atoms in the crystal.
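The geometry behind the method is a textbook relation (Bragg's condition, not specific to this study): X-rays scattered by parallel planes of atoms interfere constructively when

    n λ = 2 d sin(θ)

where λ is the X-ray wavelength, d is the spacing between atomic planes, θ is the angle of incidence, and n is an integer. Measuring the angles at which the diffracted intensity peaks therefore reveals the spacings, and ultimately the arrangement, of the atoms in the crystal.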

It is particularly challenging to obtain X-ray diffraction images of the intact NPC for several reasons. The complex is enormous (about 30 times larger than the ribosome, a large cellular component whose structure was not solved until the year 2000) and intricate (with as many as 1,000 individual pieces, each composed of several smaller sections). It is also flexible, with many moving parts, making it difficult to capture in the individual atomic-level snapshots that X-ray crystallography aims to produce. And despite being enormous compared to other cellular components, the NPC is still vanishingly small (only 120 nanometers wide, or about 1/900th the thickness of a dollar bill). Together, that small size and flexibility put structure determination of the intact complex beyond the reach of current X-ray crystallography methods.

To overcome those obstacles, Hoelz and his team chose to determine the structure of the coat nucleoporin complex (CNC)—one of the two main complexes that make up the NPC—rather than tackling the whole structure at once. (In total, the NPC is composed of six subcomplexes: two major ones and four smaller ones.) He enlisted the support of study coauthor Anthony Kossiakoff of the University of Chicago, who helped to develop the engineered antibodies needed to essentially "superglue" the samples into an ordered crystalline lattice so they could be properly imaged. The X-ray diffraction data used for structure determination were collected at the General Medical Sciences and National Cancer Institutes Structural Biology Beamline at Argonne National Laboratory.

With the help of Caltech's Molecular Observatory—a facility, developed with support from the Gordon and Betty Moore Foundation, that includes a completely automated X-ray beamline, located at the Stanford Synchrotron Radiation Lightsource, that can be controlled remotely from Caltech—Hoelz's team refined the antibody adhesives required to generate the best crystalline samples. This process alone took two years to get exactly right.

Hoelz and his team were able to determine the precise size and shape of the CNC, the position of every atom within it, and its location within the entire NPC.

The CNC is not the first component of the NPC to be fully characterized, but it is by far the largest. Hoelz says that once the other major component—known as the adaptor–channel nucleoporin complex—and the four smaller subcomplexes are mapped, the NPC's structure will be fully understood.

The CNC that Hoelz and his team evaluated comes from baker's yeast—a commonly used research organism—but the CNC structure is the right size and shape to dock with the NPC of a human cell. "It fits inside like a hand in a glove," Hoelz says. "That's significant because it is a very strong indication that the architecture of the NPC in both is probably the same and that the machinery is so important that evolution has not changed it in a billion years."

Being able to successfully determine the structure of the CNC makes mapping the remainder of the NPC an easier proposition. "It's like climbing Mount Everest. Knowing you can do it lowers the bar, so you know you can now climb K2 and all these other mountains," says Hoelz, who is convinced that the entire NPC will be characterized soon. "It will happen. I don't know if it will be in five or 10 or 20 years, but I'm sure it will happen in my lifetime. We will have an atomic model of the entire nuclear pore."

Still, he adds, "My dream actually goes much farther. I don't really want to have a static image of the pore. What I really would like—and this is where people look at me with a bit of a smile on their face, like they're laughing a little bit—is to get an image of how the pore is moving, how the machine actually works. The pore is not a static hole, it can open up like the iris of a camera to let something through that's much bigger. How does it do it?"

To understand that machine in motion, he adds, "you don't just need one snapshot, you need multiple snapshots. But once you have one, you can infer the other ones much quicker, so that's the ultimate goal. That's the dream."

Along with Hoelz, additional Caltech authors on the paper, "Architecture of the Nuclear Pore Complex Coat," include postdoctoral scholars Tobias Stuwe and Ana R. Correia, and graduate student Daniel H. Lin. Coauthors from the University of Chicago Department of Biochemistry and Molecular Biology include Anthony Kossiakoff, Marcin Paduch, and Vincent Lu. The work was supported by Caltech startup funds, the Albert Wyrick V Scholar Award of the V Foundation for Cancer Research, the 54th Mallinckrodt Scholar Award of the Edward Mallinckrodt, Jr. Foundation, and a Kimmel Scholar Award of the Sidney Kimmel Foundation for Cancer Research.

Frontpage Title: 
Chemists Solve Key Cellular Puzzle
Listing Title: 
Chemists Solve Key Cellular Puzzle
Writer: 
Exclude from News Hub: 
No
Short Title: 
Chemists Solve Key Cellular Puzzle
News Type: 
Research News

How Iron Feels the Heat

As you heat a piece of iron, the arrangement of its atoms changes several times before the metal melts. This unusual behavior is one reason why steel, in which iron plays a starring role, is so sturdy and ubiquitous in everything from teapots to skyscrapers. But the details of just how and why iron takes on so many different forms have remained a mystery. Recent work at Caltech in the Division of Engineering and Applied Science, however, provides evidence for how iron's magnetism plays a role in this curious property—an understanding that could help researchers develop better and stronger steel.

"Humans have been working with regular old iron for thousands of years, but this is a piece about its thermodynamics that no one has ever really understood," says Brent Fultz, the Barbara and Stanley R. Rawn, Jr., Professor of Materials Science and Applied Physics.

The laws of thermodynamics govern the natural behavior of materials, such as the temperature at which water boils and the timing of chemical reactions. These same principles also determine how atoms in solids are arranged, and in the case of iron, nature changes its mind several times at high temperatures. At room temperature, iron's atoms sit in an unusually open, loosely packed arrangement known as body-centered cubic; as iron is heated past 912 degrees Celsius, the atoms adopt a more closely packed, face-centered cubic form before loosening again into the body-centered arrangement at 1,394 degrees Celsius and ultimately melting at 1,538 degrees Celsius.

Iron is magnetic at room temperature, and previous work predicted that its magnetism favors the open structure at low temperatures. At 770 degrees Celsius, however, iron loses its magnetism, yet it maintains the open structure for more than a hundred degrees beyond this magnetic transition. This led the researchers to believe that something else must be contributing to iron's unusual thermodynamic properties.

For this missing link, graduate student Lisa Mauger and her colleagues needed to turn up the heat. Solids store heat as small atomic vibrations—vibrations that create disorder, or entropy. At high temperatures, entropy dominates thermodynamics, and atomic vibrations are the largest source of entropy in iron. By studying how these vibrations change as the temperature goes up and magnetism is lost, the researchers hoped to learn more about what is driving these structural rearrangements.
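For readers who want the formula behind that statement: in the standard harmonic picture, the vibrational entropy follows directly from the measured spectrum of vibrations, the phonon density of states. The sketch below illustrates that textbook expression in Python; it is not the analysis code from the paper, and the function name and normalization assumption are ours.

```python
import numpy as np
from scipy.integrate import trapezoid

K_B_MEV = 0.08617  # Boltzmann constant in meV per kelvin

def vibrational_entropy_per_atom(energy_meV, dos, temperature_K):
    """Harmonic vibrational entropy per atom, in units of kB.

    energy_meV:    phonon energies (1D array of positive values, in meV)
    dos:           phonon density of states on the same grid, assumed
                   normalized so its integral equals 1
    temperature_K: temperature in kelvin
    """
    # Bose-Einstein occupation of each phonon mode
    n = 1.0 / np.expm1(energy_meV / (K_B_MEV * temperature_K))
    # Entropy of one harmonic oscillator: (n+1)ln(n+1) - n ln(n)
    per_mode = (n + 1.0) * np.log(n + 1.0) - n * np.log(n)
    # Three vibrational modes per atom
    return 3.0 * trapezoid(dos * per_mode, energy_meV)
```

Measuring the phonon spectrum itself, however, takes an intense, tunable X-ray source.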

To do this, the team took its samples of iron to the High Pressure Collaborative Access Team beamline of the Advanced Photon Source at Argonne National Laboratory in Argonne, Illinois. This synchrotron facility produces intense flashes of X-rays that can be tuned to detect the quantized vibrations of atoms—called phonons—in iron.

When the researchers coupled these vibrational measurements with previously known data about the magnetic behavior of iron at these temperatures, they found that iron's vibrational entropy was much larger than originally suspected. In fact, the excess was similar in size to the entropy contribution from magnetism—suggesting that magnetism and atomic vibrations interact synergistically at moderate temperatures. This excess entropy increases the stability of iron's open structure even as the sample is heated past the magnetic transition.

The technique allowed the researchers to conclude, experimentally and for the first time, that magnons—the quantized waves of electron spin that give rise to magnetism—interact with phonons to increase iron's stability at high temperatures.

Because the Caltech group's measurements matched up with the theoretical calculations that were simultaneously being developed by collaborators in the laboratory of Jörg Neugebauer at the Max-Planck-Institut für Eisenforschung GmbH (MPIE), Mauger's results also contributed to the validation of a new computational model.

"It has long been speculated that the structural stability of iron is strongly related to an inherent coupling between magnetism and atomic motion," says Fritz Körmann, postdoctoral fellow at MPIE and the first author on the computational paper. "Actually finding this coupling, and that the data of our experimental colleagues and our own computational results are in such an excellent agreement, was indeed an exciting moment."

"Only by combining methods and expertise from various scientific fields such as quantum mechanics, statistical mechanics, and thermodynamics, and by using incredibly powerful supercomputers, it became possible to describe the complex dynamic phenomena taking place inside one of the technologically most used structural materials," says Neugebauer. "The newly gained insight of how thermodynamic stability is realized in iron will help to make the design of new steels more systematic."

For thousands of years, metallurgists have been working to make stronger steels in much the same way that you'd try to develop a recipe for the world's best cookie: guess and check. Steel begins with a base of standard ingredients—iron and carbon—much like a basic cookie batter begins with flour and butter. And just as you'd customize a cookie recipe by varying the amounts of other ingredients like spices and nuts, the properties of steel can be tuned by adding varying amounts of other elements, such as chromium and nickel.

With a better computational model for the thermodynamics of iron at different temperatures—one that takes into account the effects of both magnetism and atomic vibrations—metallurgists will now be able to more accurately predict the thermodynamic properties of iron alloys as they alter their recipes. 

The experimental work was published in a paper titled "Nonharmonic Phonons in α-Iron at High Temperatures," in the journal Physical Review B. In addition to Fultz and first author Mauger, other Caltech coauthors include Jorge Alberto Muñoz (PhD '13) and graduate student Sally June Tracy. The computational paper, "Temperature Dependent Magnon-Phonon Coupling in bcc Fe from Theory and Experiment," was coauthored by Fultz and Mauger, led by researchers at the Max Planck Institute, and published in the journal Physical Review Letters. Fultz's and Mauger's work was supported by funding from the U.S. Department of Energy.

Writer: 
Exclude from News Hub: 
No
News Type: 
Research News

Potassium Salt Outperforms Precious Metals As a Catalyst

A team of Caltech chemists has discovered a method for producing a group of silicon-containing organic chemicals without relying on expensive precious metal catalysts. Instead, the new technique uses as a catalyst a cheap, abundant chemical that is commonly found in chemistry labs around the world—potassium tert-butoxide—to help create a host of products ranging from new medicines to advanced materials. And it turns out that the potassium salt is more effective than state-of-the-art precious metal complexes at running very challenging chemical reactions.

"We have shown for the first time that you can efficiently make carbon–silicon bonds with a safe and inexpensive catalyst based on potassium rather than ultrarare precious metals like platinum, palladium, and iridium," says Anton Toutov, a graduate student working in the laboratory of Bob Grubbs, Caltech's Victor and Elizabeth Atkins Professor of Chemistry. "We're very excited because this new method is not only 'greener' and more efficient, but it is also thousands of times less expensive than what's currently out there for making useful chemical building blocks. This is a technology that the chemical industry could readily adopt."

The finding marks one of the first steps toward making catalysis—the use of catalysts to make certain reactions occur faster, more readily, or at all—a fundamentally sustainable practice. While the precious metals in most catalysts are rare and could eventually run out, potassium is an abundant element on Earth.

The team describes its new "green" chemistry technique in the February 5 issue of the journal Nature. The lead authors on the paper are Toutov and Wen-bo (Boger) Liu, a postdoctoral scholar at Caltech. Toutov recently won the Dow Sustainability Innovation Student Challenge Award (SISCA) grand prize for this work, in a competition held at Caltech's Resnick Sustainability Institute.

"The first time I spoke about this at a conference, people were stunned," says Grubbs, corecipient of the 2005 Nobel Prize in Chemistry. "I added three slides about this chemistry to the end of my talk, and afterward it was all anyone wanted to talk about."

Coauthor Brian Stoltz, professor of chemistry at Caltech, says the strong response comes from the contrast between how challenging the chemistry is and how seemingly simple potassium tert-butoxide is. The white, free-flowing powder—similar in appearance to common table salt—provides a straightforward and environmentally friendly way to run a reaction that replaces a carbon–hydrogen bond with a carbon–silicon bond, producing molecules known as organosilanes.
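In outline, the transformation is a dehydrogenative coupling of an aromatic carbon–hydrogen bond with a silicon–hydrogen bond. The scheme below is a sketch that assumes triethylsilane as the silicon source, a typical hydrosilane for this kind of chemistry; the paper's full substrate scope is not reproduced here:

    Ar–H + Et3Si–H  --(cat. KOtBu)-->  Ar–SiEt3 + H2

where Ar–H stands for an aromatic heterocycle. The only byproduct is hydrogen gas, consistent with the large-scale run described later in this story.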

These organic molecules are of particular interest because they serve as powerful chemical building blocks for medicinal chemists to use in the creation of new pharmaceuticals. They also hold promise in the development of new materials for use in products such as LCD screens and organic solar cells, could be important in the development of new pesticides, and are being incorporated into novel medical imaging tools.

"To be able to do this type of reaction, which is one of the most-studied problems in the world of chemistry, with potassium tert-butoxide—a material that's not precious-metal based but still catalytically active—was a total shocker," Stoltz says.

The current project got its start a couple of years ago when coauthor Alexey Fedorov—then a postdoctoral scholar in the Grubbs lab (now at ETH Zürich)—was working on a completely different problem. He was trying to break carbon–oxygen bonds in biomass using simple silicon-containing compounds, metals, and potassium tert-butoxide, which is a common additive. During that process, he ran a control experiment—one without a metal catalyst—leaving only potassium tert-butoxide as the reagent. Remarkably, the reaction still worked. And when Toutov—who was working with Fedorov—analyzed the reaction further, he realized that in addition to the expected products, the reaction was making small amounts of organosilanes. This was unexpected since organosilanes are very challenging to produce.

"I thought that was impossible, so I went back and checked it many times," Toutov says. "Sure enough, it checked out!"

Bolstered by the finding, Toutov refined the reaction so that it would create only a single desired organosilane in high yield under mild conditions, with hydrogen gas as the only byproduct. Then he expanded the scope of the reaction to produce industrially useful chemicals such as molecules needed for new materials and derivatives of pharmaceutical substances.

Having demonstrated the broad applicability of the reaction, Toutov teamed up with Liu from Stoltz's group to further develop the chemistry for the synthesis of building blocks relevant to the preparation of new human medicines, a field in which Stoltz has been active for over a decade.

But before delving too deeply into additional applications, the chemists sought the assistance of Nathan Dalleska, director of the Environmental Analysis Center in the Ronald and Maxine Linde Center for Global Environmental Science at Caltech, to perform one more test with a mass spectrometer of the kind geologists use to detect extremely minute quantities of metals. They were trying to detect trace amounts of precious metals that might have been contaminating their experiments—something that could explain why they were getting these seemingly impossible results from potassium tert-butoxide alone.

"But there was nothing there," says Stoltz. "We made our own potassium tert-butoxide and also bought it from various vendors, and yet the chemistry continued to work just the same. We had to really convince ourselves that it was true, that there were no precious metals in there. Eventually, we had to just decide to believe it."

So far, the chemists do not know why the simple catalyst is able to drive these complex reactions. But Stoltz's lab is part of the Center for Selective C–H Functionalization, a National Science Foundation–funded Center for Chemical Innovation that involves 23 research groups from around the country. Through that center, the Caltech team has started working with Ken Houk's computational chemistry group at UCLA to investigate how the chemistry works from a mechanistic standpoint.

"It's pretty clear that it's functioning by a mechanism that is totally different than the way a precious metal would behave," says Stoltz. "That's going to inspire some people, including ourselves hopefully, to think about how to use and harness that reactivity."

Toutov says that unlike some other catalysts that stop working or become sensitive to air or water when scaled up from the single-gram scale, this new catalyst seems robust enough to be used at large, industrial scales. To demonstrate the industrial viability of the process, the Caltech team used the method to synthesize nearly 150 grams of a valuable organosilane—the largest amount of this chemical product that has ever been produced by a single catalytic reaction. The reaction required no solvent, generated hydrogen gas as the only byproduct, and proceeded at 45 degrees Celsius—the lowest temperature at which this reaction has been reported to run to date.

"This discovery just shows how little we in fact know about chemistry," says Stoltz. "People constantly try to tell us how mature our field is, but there is so much fundamental chemistry that we still don't understand."

Kerry Betz, an undergraduate student at Caltech, is a coauthor on the paper, "Silylation of C–H bonds in aromatic heterocycles by an Earth-abundant metal catalyst." The work was supported by the National Science Foundation. The Resnick Sustainability Institute at Caltech, Dow Chemical, the Natural Sciences and Engineering Research Council of Canada, and the Shanghai Institute of Organic Chemistry provided graduate and postdoctoral support. Fedorov's work on the original reaction was supported by BP. 

Writer: 
Kimm Fesenmaier
Frontpage Title: 
Abundant Salt Makes High-Performing Catalyst
Listing Title: 
Abundant Salt Makes High-Performing Catalyst
Contact: 
Writer: 
Exclude from News Hub: 
No
Short Title: 
A Greener Catalysis
News Type: 
Research News

Gravitational Waves from Early Universe Remain Elusive

A joint analysis of data from the Planck space mission and the ground-based experiment BICEP2 has found no conclusive evidence of gravitational waves from the birth of our universe, despite earlier reports of a possible detection. The collaboration between the teams has resulted in the most precise knowledge yet of what signals from the ancient gravitational waves should look like, aiding future searches.

Read the full story at JPL News

Exclude from News Hub: 
No
News Type: 
Research News

Genetically Engineered Antibodies Show Enhanced HIV-Fighting Abilities

Capitalizing on a new insight into HIV's strategy for evading antibodies—proteins produced by the immune system to identify and wipe out invading objects such as viruses—Caltech researchers have developed antibody-based molecules that are more than 100 times better than our bodies' own defenses at binding to and neutralizing HIV, when tested in vitro. The work suggests a novel approach that could be used to engineer more effective HIV-fighting drugs.

"Based on the work that we have done, we now think we know how to make a really potent therapeutic that would not only work at relatively low concentrations but would also force the virus to mutate along pathways that would make it less fit and therefore more susceptible to elimination," says Pamela Bjorkman, the Max Delbrück Professor of Biology and an investigator with the Howard Hughes Medical Institute. "If you were able to give this to someone who already had HIV, you might even be able to clear the infection."

The researchers describe the work in the January 29 issue of Cell. Rachel Galimidi, a graduate student in Bjorkman's lab at Caltech, is lead author on the paper.

The researchers hypothesized that one of the reasons the immune system is less effective against HIV than other viruses involves the small number and low density of spikes on HIV's surface. These spikes, each one a cluster of three protein subunits, stick up from the surface of the virus and are the targets of antibodies that neutralize HIV. While most viruses are covered with hundreds of these spikes, HIV has only 10 to 20, making the average distance between the spikes quite long.

That distance is important with respect to the mechanism that naturally occurring antibodies use to capture their viral targets. Antibodies are Y-shaped proteins that evolved to grab onto their targets with both "arms." However, if the spikes are few and far between—as is the case with HIV—an antibody will likely bind with only one arm, which makes its connection to the virus weaker and makes it easier for a mutation of the spike to render the antibody ineffective.

To test their hypothesis, Bjorkman's group genetically engineered antibody-based molecules that can bind with both arms to a single spike. They started with the virus-binding parts, or Fabs, of broadly neutralizing antibodies—antibodies, produced naturally by a small percentage of HIV-positive individuals, that can fight multiple strains of HIV until the virus mutates to escape them. When given in combination, these antibodies are quite effective. Rather than making Y-shaped antibodies, the Caltech group simply connected two Fabs—often from different antibodies, to mimic combination therapies—with spacers of varying length made of DNA.

Why DNA? In order to engineer antibodies that could latch onto a spike twice, the researchers needed to know which Fabs to use and how long to make the connection between them so that both could readily bind to a single spike. Previously, various members of Bjorkman's group had tried to make educated guesses based on what is known of the viral spike structure, but the sheer number of possible variations in which Fabs to use and how far apart to place them made the problem intractable.

In the new work, Bjorkman and Galimidi struck upon the idea of using DNA as a "molecular ruler." It is well established that adjacent base pairs in double-stranded DNA are spaced 3.4 angstroms apart. Therefore, by incorporating varying lengths of DNA between two Fabs, the researchers could systematically test for the best neutralizer and then derive the distance between the Fabs from the length of the DNA. They also tested different combinations of Fabs from various antibodies—sometimes incorporating two different Fabs, sometimes using two of the same.

"Most of these didn't work at all," says Bjorkman, which was reassuring because it suggested that any improvements the researchers saw were not just created by an artifact, such as the addition of DNA.

But some of the fabricated molecules worked very well. The researchers found that the molecules that combined Fabs from two different antibodies performed the best, showing an improvement of 10 to 1,000 times in their ability to neutralize HIV, as compared to naturally occurring antibodies. Depending on the Fabs used, the optimal length for the DNA linker was between 40 and 62 base pairs (corresponding to 13 and 21 nanometers, respectively).
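The ruler arithmetic is easy to check. A minimal sketch, assuming the canonical 0.34-nanometer rise per base pair of B-form DNA (the 3.4 angstroms cited above):

```python
RISE_PER_BP_NM = 0.34  # rise per base pair of B-form DNA, in nanometers

def linker_length_nm(base_pairs: int) -> float:
    """Length of a double-stranded DNA spacer treated as a rigid ruler."""
    return base_pairs * RISE_PER_BP_NM

for bp in (40, 62):
    print(f"{bp} bp -> {linker_length_nm(bp):.1f} nm")
# 40 bp -> 13.6 nm, 62 bp -> 21.1 nm, matching the quoted
# 13 and 21 nanometers to within rounding
```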

Taking the work a step further, the researchers took the most successful of these new molecules and replaced its DNA linker with a protein linker of roughly the same length, composed of 12 copies of a protein called the tetratricopeptide repeat. The end product was an all-protein antibody-based reagent designed to bind with both Fabs to a single HIV spike.

"That one also worked, showing more than 30-fold average increased potency compared with the parental antibodies," says Bjorkman. "That is proof of principle that this can be done using protein-based reagents."

The greater potency suggests that a reagent made of these antibody-based molecules could work at lower concentrations, making a potential therapeutic less expensive and decreasing the risk of adverse reactions in patients.

"I think that our work sheds light on the potential therapeutic strategies that biotech companies should be using—and that we will be using—in order to make a better antibody reagent to combat HIV," says Galimidi. "A lot of companies discount antibody reagents because of the virus's ability to evade antibody pressure, focusing instead on small molecules as drug therapies. Our new reagents illustrate a way to get around that."

The Caltech team is currently working to produce larger quantities of the new reagents so that they can test them in humanized mice—specialized mice carrying human immune cells that, unlike most mice, are sensitive to HIV.

Along with Galimidi and Bjorkman, additional Caltech authors on the paper, "Intra-Spike Crosslinking Overcomes Antibody Evasion by HIV-1," include Maria Politzer, a lab assistant; and Anthony West, a senior research specialist. Joshua Klein, a former Caltech graduate student (PhD '09), and Shiyu Bai, a former technician in the Bjorkman lab, also contributed to the work; they are currently at Google and Case Western Reserve University School of Medicine, respectively. Michael Seaman of Beth Israel Deaconess Medical Center and Michel Nussenzweig of the Rockefeller University in New York are also coauthors. The work was supported by the National Institutes of Health through a Director's Pioneer Award and a grant from the HIV Vaccine Research and Design Program, as well as grants from the Collaboration for AIDS Vaccine Discovery and the Bill and Melinda Gates Foundation. Nussenzweig is also an investigator with the Howard Hughes Medical Institute.

Writer: 
Kimm Fesenmaier
Frontpage Title: 
Getting a Better Grip on HIV
Listing Title: 
Getting a Better Grip on HIV
Writer: 
Exclude from News Hub: 
No
Short Title: 
Getting a Better Grip on HIV
News Type: 
Research News

Why Do We Feel Thirst? An Interview with Yuki Oka

To fight dehydration on a hot summer day, you instinctively crave the relief provided by a tall glass of water. But how does your brain sense the need for water, generate the sensation of thirst, and then ultimately turn that signal into a behavioral trigger that leads you to drink water? That's what Yuki Oka, a new assistant professor of biology at Caltech, wants to find out.

Oka's research focuses on how the brain and body work together to maintain a healthy ratio of salt to water as part of a delicate form of biological balance called homeostasis.

Recently, Oka came to Caltech from Columbia University. We spoke with him about his work, his interests outside of the lab, and why he's excited to be joining the faculty at Caltech.

 

Can you tell us a bit more about your research?

The goal of my research is to understand the mechanisms by which the brain and body cooperate to maintain our internal environment's stability, which is called homeostasis. I'm especially focusing on fluid homeostasis, the fundamental mechanism that regulates the balance of water and salt. When water or salt is depleted in the body, the brain generates a signal that causes either thirst or a salt craving. And that craving then drives animals to either drink water or eat something salty.

I'd like to know how our brain generates such a specific motivation simply by sensing internal state, and then how that motivation—which is really just neural activity in the brain—goes on to control the behavior.

 

Why did you choose to study thirst?

After finishing my Ph.D. in Japan, I came to Columbia University, where I worked on salt-sensing mechanisms in the mammalian taste system. We found that the peripheral taste system plays a key role in salt homeostasis by regulating our salt-intake behavior. But of course, the peripheral sensor does not work by itself. It requires a controller, the brain, which uses information from the sensor. So I decided to move on to explore the function of the brain, the real driver of our behaviors.

I was fascinated by thirst because the behavior it generates is very robust and stereotyped across various species. If an animal feels thirst, the behavioral output is simply to drink water. On the other hand, if the brain triggers salt appetite, then the animal specifically looks for salt—nothing else. These direct causal relations make it an ideal system to study the link between the neural circuit and the behavior.

 

You recently published a paper on this work in the journal Nature. Could you tell us about those findings?

In the paper, we linked specific neural populations in the brain to water-drinking behavior. Previous work from other labs suggested that thirst may stem from a part of the brain called the hypothalamus, so we wanted to identify which groups of neurons in the hypothalamus control thirst. Using a technique called optogenetics, which can manipulate neural activity with light, we found two distinct populations of neurons that control thirst in opposite directions. When we activated one of those populations, it evoked intense drinking behavior even in fully water-satiated animals. In contrast, activation of the second population drastically suppressed drinking, even in highly water-deprived, thirsty animals. In other words, we could artificially create or erase the desire to drink water.

Our findings suggest that there is an innate brain circuit that can turn an animal's water-drinking behavior on and off, and that this circuit likely functions as a center for thirst control in the mammalian brain. This work was performed with support from Howard Hughes Medical Institute and National Institutes of Health [for Charles S. Zuker at Columbia University, Oka's former advisor].

 

You use a mouse model to study thirst, but does this work have applications for humans?

There are many conditions associated with fluid homeostasis; one example is dehydration. We cannot yet point to a specific application for humans, since our studies are focused on basic research. But if the same mechanisms and circuits exist in mice and humans, our studies will provide important insights into human physiology and related conditions.

 

Where did you grow up—and what started your initial interest in science?

I grew up in Japan, close to Tokyo, but not really in the center of the city. It was a nice combination between the big city and nature. There was a big park close to my house and when I was a child, I went there every day and observed plants and animals. That's pretty much how I spent my childhood. My parents are not scientists—neither of them, actually. It was just my innate interest in nature that made me want to be a scientist.

 

What drew you to Caltech?

I'm really excited about the environment here and the great climate. That's actually not trivial; I think the climate really does affect the people. For example, if you compare Southern California to New York, it's just a totally different character. I came here for a visit last January, and although it was my first time at Caltech I kind of felt a bond. I hadn't even received an offer yet, but I just intuitively thought, "This is probably the place for me."

I'm also looking forward to talking to my colleagues here who use fMRI for human behavioral research. One great advantage about using human subjects in behavioral studies is that they can report back to you about how they feel. There are certainly advantages of using an animal model, like mice. But they cannot report back. We just observe their behavior and say, "They are drinking water, so they must be thirsty." But that is totally different than someone telling you, "I feel thirsty." I believe that combining advantages of animal and human studies should allow us to address important questions about brain functions.

 

Do you have any hobbies?

I play basketball in my spare time, but my major hobby is collecting fossils. I have some trilobites and, actually, I have a complete set of bones from a type of herbivorous dinosaur. It is being shipped from New York right now and I may put it in my new office.

Listing Title: 
Why Do We Feel Thirst?
Writer: 
Exclude from News Hub: 
No
Short Title: 
Why Do We Feel Thirst?
News Type: 
In Our Community

SPIDER Experiment Touches Down in Antarctica

After spending 16 days suspended from a giant helium balloon floating 115,000 feet above Antarctica, a scientific instrument dubbed SPIDER has landed in a remote region of the frozen continent. Conceived of and built by an international team of scientists, the instrument launched from McMurdo Station on New Year's Day. Caltech and JPL designed, fabricated, and tested the six refracting telescopes the instrument uses to map the thermal afterglow of the Big Bang, the cosmic microwave background (CMB). SPIDER's goal: to search the CMB for the signal of inflation, an explosive event that blew our observable universe up from a volume smaller than a single atom in the first fraction of an instant after its birth.

The instrument appears to have performed well during its flight, says Jamie Bock, head of the SPIDER receiver team at Caltech and JPL. "Of course, we won't know everything until we get the full data back as part of the instrument recovery."

Read the full story and view the slideshow

Contact: 
Writer: 
Exclude from News Hub: 
No
News Type: 
Research News

SPIDER Experiment Touches Down in Antarctica

Created by: 
Teaser Image: 
Frontpage Title: 
SPIDER Experiment Touches Down in Antarctica
Slideshow: 
Credit: Jon Gudmundsson (Princeton University)

Each of SPIDER's six telescopes (one shown here, at left, on a lab bench) includes a pair of lenses that focus light onto a focal plane (at right) made up of 2,400 superconducting detectors. Three of the telescopes measure at a frequency of 100 GHz, while the other three measure at 150 GHz.

Credit: Steve Benton (University of Toronto)

Like bullets in a revolver, the six SPIDER telescopes slide into the instrument's cryostat (shown here without the telescopes). The cryostat is a large tank of liquid helium that cools SPIDER to temperatures near absolute zero so that the thermal glow of the instrument itself does not overwhelm the faint signals the detectors are trying to measure.

Credit: Steve Benton (University of Toronto)

Before SPIDER launched, many members of the team signed an out-of-the-way spot on the payload, wishing "Spidey" well and telling it to make them proud. Bill Jones, the project's principal investigator from Princeton University, also affixed a small photo of the late Andrew Lange.

Credit: Jeff Filippini

Jeff Filippini, a postdoctoral scholar who worked on the SPIDER receiver team at Caltech, stands in front of the instrument as it was being readied for launch.

Additional Caltech researchers involved in the project include professors of physics Jamie Bock and Sunil Golwala, postdoctoral scholar Lorenzo Moncelsi, and research staff members Peter Mason, Tracy Morford, and Viktor Hristov. Becky Tucker (PhD '14) and Amy Trangsrud (PhD '12) worked on the project as graduate students. The JPL team includes Marc Runyan, Anthony Turner, Krikor Megerian, Alexis Weber, Brendan Crill, Olivier Dore, and Warren Holmes.

Credit: Jeff Filippini

Prior to launch, the team laid out the parachute and hang lines in front of SPIDER, seen in the distance. The long-duration balloon that would carry SPIDER into the sky is attached to the end of the parachute shown here in the foreground.

Credit: Jeff Filippini

SPIDER and its balloon, ready for launch.

Credit: Jeff Filippini

SPIDER launched successfully on New Year's Day! Watch a video of the complete launch.

"One of the amazing things about ballooning is there is this moment where you're on the ground doing calibration work, really not in the deployment environment, and then you launch, and you start getting data back. That sharp dividing line between before and after the launch is really remarkable," says Filippini. "So many things can go wrong, and by and large, they didn't."

Credit: John Ruhl (Case Western Reserve University)

Sixteen days after launch, the team brought SPIDER back down to the ice because wind patterns suggested that the instrument might otherwise drift northward off the continent and not return to a safe recovery location. SPIDER landed in a remote area of Antarctica, more than 1,000 miles from McMurdo Station. The team is working on plans to recover the hard drives and payload.

Body: 

After spending 16 days suspended from a giant helium balloon floating 115,000 feet above Antarctica, a scientific instrument dubbed SPIDER has landed in a remote region of the frozen continent. Conceived of and built by an international team of scientists, the instrument launched from McMurdo Station on New Year's Day. Caltech and JPL designed, fabricated, and tested the six refracting telescopes the instrument uses to map the thermal afterglow of the Big Bang, the cosmic microwave background (CMB). SPIDER's goal: to search the CMB for the signal of inflation, an explosive event that blew our observable universe up from a volume smaller than a single atom in the first fraction of an instant after its birth.

The instrument appears to have performed well during its flight, says Jamie Bock, head of the SPIDER receiver team at Caltech and JPL. "Of course, we won't know everything until we get the full data back as part of the instrument recovery."

Although SPIDER relayed limited data back to the team on the ground during flight, it stored the majority of its data on hard drives, which must be recovered from the landing site. The researchers carefully monitored the experiment's flight path, and when wind patterns suggested that the experiment might be carried over the ocean, they opted to bring SPIDER down a bit early. It touched down in West Antarctica, more than 1,000 miles from McMurdo Station.

Jeff Filippini, a former postdoctoral scholar at Caltech and member of the SPIDER team who is now an assistant professor at the University of Illinois, Urbana-Champaign, says the landing site is near a few outlying stations. "We are negotiating plans for recovering the data disks and payload," he says. "We are all looking forward to poring over the data."

The team originally proposed SPIDER to NASA in 2005. It is an ambitious instrument, and there were many technical challenges to getting it off the ground. Political challenges also played a role: in October 2013, after the team had completed full flight preparations in the summer and transported SPIDER to the Antarctic by boat, the U.S. government shut down, canceling all Antarctic balloon flights. SPIDER had to be shipped back to the United States.

"But our team persevered," says Bock. "We used that extra time to make improvements and to fix a few problems. It is great to finally see all of our worries resolved and the hard work paying off."

A second SPIDER flight is planned for some time in the next two to three years, depending on how the hardware fares this time around.

The SPIDER project originated in the early 2000s with the late Andrew Lange's Observational Cosmology Group at Caltech and collaborators. The experiment is now led by William Jones of Princeton University, who was a graduate student of Lange's. The other primary institutions involved in the mission are the University of Toronto, Case Western Reserve University, and the University of British Columbia. SPIDER is funded by NASA, the David and Lucile Packard Foundation, the Gordon and Betty Moore Foundation, the Canadian Space Agency, and Canada's Natural Sciences and Engineering Research Council. The National Science Foundation provides logistical support to the team on the ice through the U.S. Antarctic Program.

Exclude from News Hub: 
Yes

Unusual Light Signal Yields Clues About Elusive Black Hole Merger

The central regions of many glittering galaxies, our own Milky Way included, harbor cores of impenetrable darkness—black holes with masses equivalent to millions, or even billions, of suns. What is more, these supermassive black holes and their host galaxies appear to develop together, or "co-evolve." Theory predicts that as galaxies collide and merge, growing ever more massive, so too do their dark hearts.

Black holes by themselves are impossible to see, but their gravity can pull in surrounding gas to form a swirling band of material called an accretion disk. The spinning particles are accelerated to tremendous speeds and release vast amounts of energy in the form of heat and powerful X-rays and gamma rays. When this process happens to a supermassive black hole, the result is a quasar—an extremely luminous object that outshines all of the stars in its host galaxy and that is visible from across the universe. "Quasars are valuable probes of the evolution of galaxies and their central black holes," says George Djorgovski, professor of astronomy and director of the Center for Data-Driven Discovery at Caltech.

In the January 7 issue of the journal Nature, Djorgovski and his collaborators report on an unusual repeating light signal from a distant quasar that they say is most likely the result of two supermassive black holes in the final phases of a merger—something that is predicted from theory but has never been observed before. The discovery could help shed light on a long-standing conundrum in astrophysics called the "final parsec problem," which refers to the failure of theoretical models to predict what the final stages of a black hole merger look like or even how long the process might take. "The end stages of the merger of these supermassive black hole systems are very poorly understood," says the study's first author, Matthew Graham, a senior computational scientist at Caltech. "The discovery of a system that seems to be at this late stage of its evolution means we now have an observational handle on what is going on."

Djorgovski and his team discovered the unusual light signal emanating from quasar PG 1302-102 after analyzing results from the Catalina Real-Time Transient Survey (CRTS), which uses three ground telescopes in the United States and Australia to continuously monitor some 500 million celestial light sources strewn across about 80 percent of the night sky. "There has never been a data set on quasar variability that approaches this scope before," says Djorgovski, who directs the CRTS. "In the past, scientists who study the variability of quasars might only be able to follow some tens, or at most hundreds, of objects with a limited number of measurements. In this case, we looked at a quarter million quasars and were able to gather a few hundred data points for each one."

"Until now, the only known examples of supermassive black holes on their way to a merger have been separated by tens or hundreds of thousands of light years," says study coauthor Daniel Stern, a scientist at NASA's Jet Propulsion Laboratory. "At such vast distances, it would take many millions, or even billions, of years for a collision and merger to occur. In contrast, the black holes in PG 1302-102 are, at most, a few hundredths of a light year apart and could merge in about a million years or less."

Djorgovski and his team did not set out to find a black hole merger. Rather, they initially embarked on a systematic study of quasar brightness variability in the hopes of finding new clues about the physics of quasars. But after screening the data using a pattern-seeking algorithm that Graham developed, the team found 20 quasars that seemed to be emitting periodic optical signals. This was surprising, because the light curves of most quasars are chaotic—a reflection of the random way in which material from the accretion disk spirals into a black hole. "You just don't expect to see a periodic signal from a quasar," Graham says. "When you do, it stands out."

Of the 20 periodic quasars that CRTS identified, PG 1302-102 was the best example. It had a strong, clean signal that appeared to repeat every five years or so. "It has a really nice smooth up-and-down signal, similar to a sine wave, and that just hasn't been seen before in a quasar," Graham says.
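Searches like this have to cope with irregular sampling, since ground-based surveys cannot observe on a fixed cadence. A standard tool for that situation is the Lomb-Scargle periodogram; the sketch below shows the idea on a generic light curve and is illustrative only. It is not the pattern-seeking algorithm Graham developed, which is not described in this story, and the file name is a stand-in.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Irregularly sampled light curve: observation times (days),
# magnitudes, and magnitude errors. The file is a placeholder.
t, mag, dmag = np.loadtxt("light_curve.txt", unpack=True)

# Scan trial periods from 100 days out to ten years.
frequency = np.linspace(1.0 / 3650.0, 1.0 / 100.0, 10000)  # cycles per day
power = LombScargle(t, mag, dmag).power(frequency)

best_period_days = 1.0 / frequency[np.argmax(power)]
print(f"Strongest periodicity: {best_period_days / 365.25:.1f} years")
```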

The team was cautious about jumping to conclusions. "We approached it with skepticism but excitement as well," says study coauthor Eilat Glikman, an assistant professor of physics at Middlebury College in Vermont. After all, it was possible that the periodicity the scientists were seeing was just a temporary ordered blip in an otherwise chaotic signal. To help rule out this possibility, the scientists pulled in data about the quasar from previous surveys to include in their analysis. After factoring in the historical observations (the scientists had nearly 20 years' worth of data about quasar PG 1302-102), the repeating signal was, encouragingly, still there.

The team's confidence increased further after Glikman analyzed the quasar's light spectrum. The black holes that scientists believe are powering quasars do not emit light, but the gases swirling around them in the accretion disks are traveling so quickly that they become heated into glowing plasma. "When you look at the emission lines in a spectrum from an object, what you're really seeing is information about speed—whether something is moving toward you or away from you and how fast. It's the Doppler effect," Glikman says. "With quasars, you typically have one emission line, and that line is a symmetric curve. But with this quasar, it was necessary to add a second emission line with a slightly different speed than the first one in order to fit the data. That suggests something else, such as a second black hole, is perturbing this system."

Avi Loeb, who chairs the astronomy department at Harvard University, agreed with the team's assessment that a "tight" supermassive black hole binary is the most likely explanation for the periodic signal they are seeing. "The evidence suggests that the emission originates from a very compact region around the black hole and that the speed of the emitting material in that region is at least a tenth of the speed of light," says Loeb, who did not participate in the research. "A secondary black hole would be the simplest way to induce a periodic variation in the emission from that region, because a less dense object, such as a star cluster, would be disrupted by the strong gravity of the primary black hole."

In addition to providing an unprecedented glimpse into the final stages of a black hole merger, the discovery is also a testament to the power of "big data" science, where the challenge lies not only in collecting high-quality information but also in devising ways to mine it for useful insights. "We're basically moving from having a few pictures of the whole sky or repeated observations of tiny patches of the sky to having a movie of the entire sky all the time," says Sterl Phinney, a professor of theoretical physics at Caltech, who was also not involved in the study. "Many of the objects in the movie will not be doing anything very exciting, but there will also be a lot of interesting ones that we missed before."

It is still unclear what physical mechanism is responsible for the quasar's repeating light signal. One possibility, Graham says, is that the quasar is funneling material from its accretion disk into luminous twin plasma jets that are rotating like beams from a lighthouse. "If the glowing jets are sweeping around in a regular fashion, then we would only see them when they're pointed directly at us. The end result is a regularly repeating signal," Graham says.

Another possibility is that the accretion disk that encircles both black holes is distorted. "If one region is thicker than the rest, then as the warped section travels around the accretion disk, it could be blocking light from the quasar at regular intervals. This would explain the periodicity of the signal that we're seeing," Graham says. Yet another possibility is that something is happening to the accretion disk that is causing it to dump material onto the black holes in a regular fashion, resulting in periodic bursts of energy.

"Even though there are a number of viable physical mechanisms behind the periodicity we're seeing—either the precessing jet, warped accretion disk or periodic dumping—these are all still fundamentally caused by a close binary system," Graham says.

Along with Djorgovski, Graham, Stern, and Glikman, additional authors on the paper, "A possible close supermassive black hole binary in a quasar with optical periodicity," include Andrew Drake, a computational scientist and co-principal investigator of the CRTS sky survey at Caltech; Ashish Mahabal, a staff scientist in computational astronomy at Caltech; Ciro Donalek, a computational staff scientist at Caltech; Steve Larson, a senior staff scientist at the University of Arizona; and Eric Christensen, an associate staff scientist at the University of Arizona. Funding for the study was provided by the National Science Foundation.

Written by Ker Than

Frontpage Title: 
Watching Black Holes Merge
Listing Title: 
Clues In the Quasar
Contact: 
Writer: 
Exclude from News Hub: 
No
Short Title: 
Clues In the Quasar
News Type: 
Research News

Cake or Carrots? Timing May Decide What You'll Nosh On

When you open the refrigerator for a late-night snack, are you more likely to grab a slice of chocolate cake or a bag of carrot sticks? Your ability to exercise self-control—i.e., to settle for the carrots—may depend upon just how quickly your brain factors healthfulness into a decision, according to a recent study by Caltech neuroeconomists.

"In typical food choices, individuals need to consider attributes like health and taste in their decisions," says graduate student Nicolette Sullivan, lead author of the study, which appears in the December 15 issue of the journal Psychological Science. "What we wanted to find out was at what point the taste of the foods starts to become integrated into the choice process, and at what point health is integrated."

Since taste is a concrete, innate attribute—after all, people know what foods they like and do not like—the researchers hypothesized that it becomes factored into the food decision-making process first. A food's effect on health, on the other hand, is a more abstract attribute—one that you often need to learn about or research. In fact, opinions about the healthfulness of things like fats, calories, and carbohydrates vary so widely that you may not be able to find a definitive answer at all. Therefore, the researchers assumed, the healthfulness of a food likely is not factored into a person's choice until after taste is. And for those individuals who exercise less self-control, they hypothesized, health would factor into the choice even later.

To test these ideas, Sullivan—along with her colleagues in the laboratory of Antonio Rangel, Bing Professor of Neuroscience, Behavioral Biology, and Economics, including Rangel himself—developed a new experimental technique that allowed them to evaluate, on a scale of milliseconds, when taste and health information kick in during the process of making a decision. They did this by tracking the movement of a computer mouse as a person makes a choice.

In the experiment, 28 hungry subjects—Caltech student-volunteers who had been fasting for four hours—were asked to rate 160 foods individually on a scale from –2 to 2, based on that food's healthfulness, its tastiness, and how much the subject would like to eat that food after the experiment was over. The subjects were then presented with 280 random pairings of those same foods and were asked to use a computer mouse to click on—to choose—which food they preferred from each pairing.

The researchers then used statistical tools to analyze each subject's cursor movements and, through them, the choice process. They looked at how quickly taste began to drive the mouse's movement—and how soon health did. For example, one subject's cursor trajectory might be driven by the taste of the foods very early in the trial but soon after be driven by health as well—resulting in the selection of the healthier item, say Brussels sprouts over pizza. Another subject's trajectory might be driven by taste all the way to the selection of pizza, with health information coming online too late in the choice process to influence the selection of the food.
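One simple way to formalize such an analysis is sketched below under our own assumptions; it is an illustration of the idea, not the authors' published method. For each time bin in the trial, regress the cursor's heading across trials on the taste difference and the health difference between the two foods, and call an attribute "online" at the first moment its regression weight clears a threshold:

```python
import numpy as np

def attribute_onsets(headings, taste_diff, health_diff, t_ms, thresh=0.1):
    """Estimate when taste and health begin to drive cursor movement.

    headings:    (n_trials, n_bins) cursor heading toward the left item,
                 one value per time bin
    taste_diff:  (n_trials,) taste rating, left item minus right item
    health_diff: (n_trials,) health rating, left item minus right item
    t_ms:        (n_bins,) time stamp of each bin, in milliseconds
    thresh:      placeholder cutoff standing in for a significance test
    """
    X = np.column_stack([np.ones_like(taste_diff), taste_diff, health_diff])
    # One least-squares fit per time bin: heading ~ intercept + taste + health
    betas = np.array([np.linalg.lstsq(X, headings[:, j], rcond=None)[0]
                      for j in range(headings.shape[1])])
    onsets = {}
    for name, col in (("taste", 1), ("health", 2)):
        driven = np.abs(betas[:, col]) > thresh
        onsets[name] = t_ms[driven.argmax()] if driven.any() else None
    return onsets
```

The gap between the two estimated onsets is then the kind of per-subject lag, measured in milliseconds, that the study reports.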

Sullivan and her colleagues found that, on average, taste information began to influence the trajectory of the mouse cursor, and thus the choice process, almost 200 milliseconds earlier than health information. For 32 percent of subjects, health never influenced their food choice at all; they made every single choice based on taste, and their cursor was never driven by the healthfulness of the items.

"What Nikki has shown is that a big factor here is how quickly you can represent and take into account different types of information when you are making choices," says Rangel. "People are making these choices very quickly—in a couple of seconds—so very small differences, even just a hundred milliseconds, can make an enormous difference in whether or how much health considerations ultimately influences the decision."

The researchers then wanted to find out if some people have an advantage in exercising self-control simply because they can factor health information into their choice earlier. Sullivan and her colleagues first split the subjects into two groups: those who exercised high self-control by often choosing the healthy option, and those who made their choices based almost entirely on taste—the low-self-control group.

On average, the low-self-control group began to factor in health information 323 milliseconds later than the high-self-control group. This suggests, says Sullivan, that the more quickly someone begins to consider a food's health benefits, the more likely they are to exert self-control by ultimately choosing the healthier food.

In addition, Sullivan says, it seems that those who calculate health earlier in the process also weigh it more heavily in their decision-making process.

These findings, she notes, mean it might one day be useful to encourage people to wait a bit longer before making a food choice. "Since we know that taste appears before health, we know that it has an advantage in the ultimate decision. However, once health comes online, if you wait—allowing the health information to accumulate for longer—that might give health a chance to catch up and influence the choice," she says.

Rangel adds that this work could also one day change the way health information is presented. "For example, if you go to the supermarket, does it matter how big the calorie count information label is on the yogurt?" he asks. "More visible information may affect how quickly you compute health information. We don't know, but this study opens such possibilities."

Sullivan and Rangel are next hoping to apply their cursor-tracking method to experiments beyond the refrigerator. They want to look, for instance, at how timing might affect self-control in choices involving saving money versus spending money, or deciding between an act of altruism versus an act of selfishness. They also plan to further explore the food-choice study in a larger and more diverse population of subjects through the Caltech Conte Center.

"In the past when psychologists and economists have thought about behavioral differences, they have thought of them as differences in preferences, like, 'Oh you make less healthy choices than her because you just value health less and that's the end of the story.' What our study is trying to say is that maybe part of these differences arise not from preferences, but from the amount of time it takes different people to represent information and feed it to the brain's decision-making circuit."

The Psychological Science study, "Dietary Self-Control Is Related to the Speed with Which Health and Taste Attributes Are Processed," was authored by Sullivan and Rangel along with Caltech postdoctoral scholar Cendri Hutcherson and former Caltech postdoctoral scholar and visiting associate Alison Harris, who is now an assistant professor of psychology at Claremont McKenna College. Their work was funded by the National Science Foundation.

Writer: 
Exclude from News Hub: 
No
News Type: 
Research News
