Neural prosthetic devices, which include small electrode arrays implanted in the brain, can allow paralyzed patients to control the movement of a robotic limb, whether that limb is attached to the individual or not. In May 2015, researchers at Caltech, USC, and Rancho Los Amigos National Rehabilitation Center reported the first successful clinical trial of such an implant in a part of the brain that translates intention—the goal to be accomplished through a movement (for example, "I want to reach to the water bottle for a drink")—into the smooth and fluid motions of a robotic limb. Now, the researchers, led by Richard Andersen, the James G. Boswell Professor of Neuroscience, report that individual neurons in that brain region, known as the posterior parietal cortex (PPC), encode entire hand shapes which can be used for grasping—as when shaking someone's hand—and hand shapes not directly related to grasping, such as the gestures people make when speaking.
Most neuroprostheses are implanted in the motor cortex, the part of the brain that controls limb motion. But the movements of these robotic arms are jerky, probably because of the complicated mechanics of controlling muscle movement. Eliminating that problem by implanting the device in the PPC, the brain region that encodes intent, led Andersen and colleagues to investigate further the roles that specific neurons play in this part of the brain.
The research appears in the November 18 issue of the Journal of Neuroscience.
"The human hand has the ability to do numerous complex operations beyond just grasping," says Christian Klaes, a postdoctoral fellow at Caltech and first author of the paper. "We gesture when we speak, we manipulate objects, we use sign language to communicate with the hearing impaired. Tetraplegic patients rate hand and arm function to be of the highest importance to have better control over their environment. So our ultimate goal is to improve the range of neuroprostheses using control signals from the PPC.
"The more precisely we can identify individual neurons involved with hand movements, the better the capability these robotic devices will provide. Ultimately, we hope to mimic in a robotic hand the same freedom of movement of the human hand."
In the study, the researchers used the rock-paper-scissors game and a variation, rock-paper-scissors-lizard-Spock. The game, says Andersen, is "perfect" for this kind of research. "The addition of a lizard, depicted as a cartoon image of a lizard, and Spock—a picture of Leonard Nimoy in character—was to increase the repertoire of possible hand shapes available to our tetraplegic participant, Erik G. Sorto, whose limbs are completely paralyzed. We assigned a pinch gesture for the lizard and a spherical shape for Mr. Spock."
The game was played in two phases, first rock-paper-scissors and then the expanded game with the lizard and Spock. In the task, Sorto was briefly shown an object on a screen that corresponded to one of the hand shapes—for example, a picture of a rock or Mr. Spock. The image was followed by a blank screen, and then text appeared instructing Sorto to imagine making the corresponding hand shape with his right hand—a fist for the rock, an open hand for paper, a scissors gesture for scissors, a pinch for the lizard, and a spherical shape (loosely analogous to the Vulcan salute) for Spock—and to say which visual image he had seen, as the neuroprosthetic device recorded the activity of neurons in the PPC.
The researchers were able to identify single neurons in the PPC that fired when Sorto was presented with an image of an object to be grasped—a rock, say—and identified a nearly completely separate class of neurons that responded when Sorto engaged in motor imagery (the mental planning and imagined execution of a movement without the subject actually trying to move the limb).
"We found two mostly separate populations of neurons in the PPC that show either visual responses or motor-imagery responses during the task, the former when Erik identified a cue and the latter when he imagined performing a corresponding hand shape," says Andersen.
The researchers discovered that individual neurons in the PPC also responded to hand shapes that did not directly correspond to a grasp-related visual stimulus. The paper shape can be related to the initial opening of the hand to grasp a sheet of paper, and the rock shape to closing the hand around a rock—and in fact, Sorto used these imagined hand shapes to open a robotic hand by imagining paper and to close the robotic hand around an object by imagining rock. However, scissors, lizard, and Spock call for imagined hand gestures that are more abstract and iconic than those needed to grasp the visual objects. This suggests, says Andersen, that this area of the brain may also be involved in more general hand gestures, such as those we use when talking or for sign language.
The results of the trial were published in a paper titled, "Hand Shape Representations in the Human Posterior Parietal Cortex." In addition to Andersen and Klaes, other authors on the study are Spencer Kellis, Tyson Aflalo, and Kelsie Pejsa from Caltech; Brian Lee, Christi Heck, and Charles Liu from USC; and Kathleen Shanfield, Stephanie Hayes-Jackson, and Mindy Aisen from Rancho Los Amigos National Rehabilitation Center.
Your body is continuously making new blood cells from a reservoir of "starter" cells called stem cells. Blood cells come in many types, including the highly versatile T cells that play a number of key roles in the immune system. All stem cells are alike, and all the T cells that come from them start out alike before choosing specific careers in response to signals from their environment.
On Wednesday, November 18 at 8 p.m. in Caltech's Beckman Auditorium, Ellen Rothenberg, Caltech's Albert Billings Ruddock Professor of Biology, will lead us along the paths that T cells follow and show how her lab has mapped their journeys. Admission is free.
What do you do?
I'm interested in how cells choose their identities through reading out information stored in the genome, which is the entire collection of DNA that makes a creature what it is, and how a cell that begins with one identity can spawn descendants with very different, very durable new identities.
We study T cells, a large family of white blood cells that form a major part of your immune system. T cells have an extremely long and varied life. They come from so-called stem cells, which have the ability to become many, many different kinds of cells. We want to learn how a "blank slate" of a stem cell develops to achieve a rock-solid identity as a T cell—especially because a T cell has an irreversibly defined "T-cell-ness" at its core, yet it remains very dynamic in using genomic information to decide what kind of T cell it will be.
Generating T cells is a three-step process. First, a stem cell develops into a T cell. Second, the T cell circulates around the body, waiting to see how it will first be used by the body to fight an actual infection. And then third, once it has evolved a specialization, it will continue to go around the body for months, years, or even decades in humans, spawning descendants that are also specialized with the same specific type of cellular function the original T cell had when it was activated—as helper T cells, or killer T cells, or whatever other type of T cell was needed. And they may pick a subspecialty—for example, for every infectious agent you encounter, you develop a specific memory cell to recognize that particular bug so that if it comes around again you are ready for it.
Once made, the decisions are locked in. All the T cell's progeny will generally stay in "the family business." However, it sometimes happens that once a T cell has chosen its profession, a particularly strong environmental signal can drive it to change into a different type of T cell. But even so, it will never, ever go back to being a stem cell. My lab is trying to figure out the molecular control mechanisms that allow the former stem cell to achieve a new rock-solid identity as a T cell, yet maintain a level of flexibility within that T-cell-ness.
Why is this important?
I study biology for the same reason that astronomers study the universe. I believe that there are deep biological principles to be learned from T cells, whose import goes way beyond curing a particular disease. I'm ecstatic when things we do are picked up by clinicians, who do make a profession of helping people, but I do basic science.
There are two main branches to the developmental biology of multicellular organisms. The first goes from the fertilized egg through the embryo, and that's the process that makes your body in the first place. It follows well-known rules worked out by people like my late colleague Eric Davidson [Caltech's Norman Chandler Professor of Cell Biology].
I study a second form of development that begins when an embryo sets aside a bunch of cells and programs them to become stem cells. Stem cells do not differentiate further right away; they just make more copies of themselves. Then, whenever you need to make new blood cells or repair a tissue later in life, those cells are called into action. For example, red blood cells only last about three or four months, so the blood circulating in your body today is coming from stem cells, and those stem cells were "set aside" when you were a fetus. This means there's an additional set of rules, going well beyond embryonic development, for making new blood cells in the right balance and at the right time.
The new cells do have some wear and tear from the consequences of your adventures throughout your life, but to a first approximation they're the same. They're getting primed to do the same job. They have to set up all the molecular circuitry needed to retain their identity and maintain a clear one-directional flow from stem-ness to differentiation. The process has to be as accurate at our advanced ages as it was when we were fetuses. That's the genius of stem-cell-based developmental biology. In my view, the collection of stem-cell development mechanisms ranks right up there with the more established mechanisms of embryonic development.
How did you get into this line of work?
I've always been interested in science. The question when I was young was whether I wanted to be a physicist or a biologist, but then I fell completely in love with biochemistry when I was in high school. When I went off to Harvard I didn't know specifically what I was interested in, but I loved what was known about the genome. I thought it would be fantastic to understand how the genome works at a molecular, mechanistic level.
I had the great good fortune to have microbiologist Boris Magasanik as my undergraduate tutor and mentor. He was the head of MIT's biology department, but he had a relationship with Harvard and he liked teaching undergrads. Boris was an extraordinary intellectual. He was studying metabolic pathways in bacteria at the systems-biology level way before it was normal. He was drawing prototype diagrams of gene-regulatory networks back in the early '70s.
A lot of technology had to be invented before we could explain gene regulation on the molecular level, but when I became a graduate student in [Nobel Laureate] David Baltimore's lab at MIT in 1972, he was already doing incredible work on viral genomes. [Baltimore came to Caltech in 1997 and is currently the Robert Andrews Millikan Professor of Biology.] We were pushing the frontiers of knowledge outward on a daily basis, and it was exceptionally exciting.
However, the development of multicelled organisms was still extremely hard to understand back then. It seemed all anecdotal, as if every organism did things in a fundamentally different way. But by the late '70s, Eric Davidson here at Caltech was making it possible to make sense out of developmental systems. His views integrated Boris Magasanik's systems-level view with David Baltimore's molecular-level finesse, and his work was revealing general mechanisms of development in multicellular organisms. I owe a great deal to the conceptual and mechanistic perspectives that I have gotten from these three people.
Also, Caltech's smallness has been fantastic. Most of the people I know who work with T cells are in immunology departments, and most immunologists do the same kinds of things, more or less. The joy for me at Caltech has been doing things that nobody else is doing. Often when my colleagues here solve their problems, I can use those approaches to break new ground in my field. It's been extraordinarily fun, and a tremendous advantage. Science as it should be done.
Long ago at MIT, my labmates and I were studying a retrovirus that caused early T-cell leukemia in mice. Lots of retroviruses cause cancer by putting a gene responsible for normal cell growth into the host cell and then turning the gene on under the wrong conditions. But our retrovirus didn't cause cancer in other cell types, so we wondered why it affected early T cells. I realized that the T-cell development process itself must be an especially sensitive target. The retrovirus nudged the future T cells toward being cancerous, possibly by accident, and then a little push farther down the line would send them over the edge.
That's when I became interested in T-cell development and this question of what controlled the switchover between growth and differentiation. We've found in the last 10 years or so that there are actually two bursts of proliferation during T-cell development. My lab has focused on the first one, which we now know is the transition between stem-cell-ness and T-cell-ness, when the cell commits to becoming a T cell. And it turns out that if a stem-cell regulatory gene stays on during the process, you get an abnormal persistence of stem-cell-like growth and sometimes leukemia. It's ironic that it's taken me, gosh, 40 years to get back to that, but it has been an incredibly satisfying journey.
Yuki Oka, an assistant professor of biology, has been awarded a grant from the Edward Mallinckrodt, Jr. Foundation, given to "support early stage investigators engaged in biomedical research that has the potential to significantly advance the understanding, diagnosis, or treatment of disease," according to the foundation website. The grant will provide $60,000 per year for three years.
"I'm thrilled by being selected for the 2015 Mallinckrodt Grant," says Oka, whose lab uses thirst and water-drinking behavior as a simple model system to study how the brain monitors internal water balance and generates signals that drive appetitive behaviors. The long-term goal of the work is to understand how the brain integrates information about the internal body state and external sensory information to maintain homeostasis (a state of internal equilibrium). The research, he notes, will provide a framework for studying the mechanisms that govern innate behaviors such as eating and drinking. Currently, an estimated 30 million people in the U.S. suffer from appetite disorders including polydipsia and bulimia, characterized by excessive water and food intake, respectively. Identifying neural circuits underlying appetite may offer insights into safe treatments for associated disorders, he says.
Oka received his PhD from the University of Tokyo and was a postdoctoral researcher at UC San Diego and Columbia University before joining the Caltech faculty in 2014. He was named a Searle Scholar in April 2015.
On November 12 and 13, the Beckman Institute at Caltech hosted a symposium on "The Shared Legacy of Arnold Beckman and Harry Gray." The two began a close working relationship in the late 1960s, when Gray arrived at Caltech. In this interview, Gray provides some background.
How did you come to Caltech?
I grew up in southern Kentucky. I got my BS in chemistry in 1957, and my professors told me to go to grad school at Northwestern University in Evanston, Illinois, to continue my studies in synthetic organic chemistry. They didn't give me a choice. Western Kentucky College had physical chemistry, analytical chemistry, organic chemistry, and that was it.
When I got to Northwestern I met Fred Basolo, who became my mentor. He did inorganic chemistry, which I was very surprised to discover even existed as a research field. I was so excited by his work, which was studying the mechanisms of inorganic reactions, that I decided to switch fields and do what he did. I got my PhD in 1960 from work on the syntheses and reaction mechanisms of platinum, rhodium, palladium, and nickel complexes. A complex has a metal atom sitting in the middle of as many as six ions or molecules called ligands. The metal has empty orbitals that it wants to fill with paired-up electrons, and the ligands have electron pairs they aren't using, so the metal and its ligands form stable bonds.
I had gotten into chemistry in the first place because I'd always been interested in colors. Even when I was a little kid, colors fascinated me. I really wanted to understand them, and many complexes have brilliant, beautiful colors. At Northwestern I heard about crystal-field theory, which was the first attempt to explain how metal complexes got their colors. All the big shots of crystal-field theory were in Copenhagen, so I decided to go there as a postdoc. Which I did.
I soon found out that crystal-field theory didn't go far enough. It only explained the colors of a limited set of metal ions in solution, and it couldn't explain charge transfers and a lot of other things. All the atoms were treated as point charges, with no provision for the bonds between the metal and the ligands. There weren't any bonds. So I helped develop a new theory, called ligand-field theory, which put the bonds back in the complexes. Carl Ballhausen, a professor at the University of Copenhagen, and I wrote a paper on a "metal-oxo" complex in which an oxygen atom was triple-bonded to a vanadium ion. The triple bond in our theory was required to account for the blue color of the vanadium-oxo complex. We also could explain charge transfers in other oxo complexes. Bonds were back in metal complexes!
Metal-oxo bonds are very important in biology. They are crucial in a lot of reactions, such as the oxygen-producing side of photosynthesis; the metabolism of drugs by cytochrome P-450, which often leads to toxic interactions with other drugs; and respiration. When we breathe in O2, our respiratory system splits the O=O bond, forming a metal-oxo complex as a reactive intermediate on the way to the product, which is water.
My work on bonding in metal-oxo complexes got me a job as an assistant professor at Columbia University in 1961. By '65 I was a full professor and getting offers from many places, including Caltech. I loved Columbia, and I would have stayed there, but the chemistry department was very small. I knew it would be hard to build inorganic chemistry in a small department that concentrated on organic and physical chemistry.
There weren't any inorganic chemists at Caltech, either, but division chair Jack Roberts encouraged me to build the field up to five or six faculty members. I came to Caltech in 1966, and we now have a very strong inorganic chemistry group.
When I got here, I started work in two new areas at the interface of inorganic chemistry and biology. I'm best known for my work showing how electrons flow through proteins in respiration and photosynthesis. I won the Wolf Prize and the Welch Prize and the National Medal of Science for this work.
I also got into inorganic photochemistry—solar-energy research. That work started well before the first energy crisis in 1973, and continued until oil became cheap again in the early 1980s and solar-energy research was no longer supported. In the late '90s, I restarted the work. Now I'm leading an NSF Center for Chemical Innovation in Solar Fuels, which has an outreach activity I proudly call the Solar Army.
And how's that going?
The Solar Army keeps growing. We now have at least 60 brigades at high schools across the U.S., and 10 more abroad. I'd say that about 1,000 students have been through the program since 2008. We're getting young scientists involved in research that could have a profound effect on the world they're going to inherit. They're helping us look for light absorbers and catalysts to turn water into hydrogen fuel, using nothing but sunlight. The solar materials need to be sturdy metal oxides that are abundant and dirt cheap. But there are many metals in the periodic table. When you start combining them in twos and threes in varying amounts, there are literally millions of possibilities to be tested. We already have found several very good water oxidation and reduction catalysts, and since the National Science Foundation has just renewed our CCI Solar Fuels grant, we expect to make great progress in the coming years in understanding how they work.
Let's shift gears and talk about the Beckman Institute. How did you first meet Arnold Beckman [PhD '28, inventor of the pH meter, founder of Beckman Instruments, and a Life Trustee of Caltech]?
I gave a talk back in 1967, probably on Alumni Day. Arnold was the chair of Caltech's Board of Trustees at the time, and he and his wife, Mabel, were seated in the second row. When the talk was over, they came down and introduced themselves. Mabel said—and I remember this very well—she said, "Arnold, I didn't understand much of what this young man said, but I really liked the way he said it." Arnold gave me the thumbs up, and that started our relationship.
When I became chairman of the Division of Chemistry and Chemical Engineering in 1978, I asked him to be on my advisory committee. I didn't ask him for money, but I asked him for advice, and we became quite close. He said he wanted to do something for us. That led to his gift for the Arnold and Mabel Beckman Laboratory of Chemical Synthesis, as well as a gift for instrumentation.
He liked it that we raised money to match his instrument gift. He told me that he wanted to do something bigger, so we started thinking about building the Beckman Institute. [Caltech President] Murph Goldberger and I would go down to Orange County about every week with a new plan. He rejected the first four or five until we came up with the idea of developing technology to support chemistry and biology—methods and instruments for fundamental research—and creating resource centers to house them.
Once we agreed on what the building should house, we started planning the building itself. But when we showed Arnold our design, which was four stories plus a basement, he said, "That's not big enough. You need another floor for growth." So we added a subbasement that was quickly occupied by a resource center for magnetic-resonance imaging and optical imaging that has been heavily used by biologists, chemists, and other investigators.
The Beckman Institute has done a lot over the last 25 years. But it develops technology for general research use, so it doesn't often make the headlines itself. Are you OK with that?
Many advances in science and technology have been made in the Beckman Institute over the last 25 years. The methods and instruments that have been developed in BI resource centers have made enormous impacts at the frontiers of chemistry and biology. Solar-fuels science and human biology are just two examples of areas where work in the Beckman Institute has made a big difference. And there are many more. Am I proud? You bet I am!
In March of this year, a team of bioengineers from Caltech, JPL, and the University of Washington spent a week in Greenland, using snowmobiles to haul their scientific equipment, waiting out windstorms, and spending hours working on the ice. Now the same researchers are planning a trip to California's Mojave Desert, where they will study Searles Lake, a dry, extremely salty basin that is naturally full of harsh chemicals like arsenic and boron. The researchers are testing a holographic microscope that they have designed and built for the purpose of observing microbes that thrive in such extreme environments. The ultimate goal? To send the microscope on a spacecraft to search for biosignatures—signs of life—on other worlds such as Mars or Saturn's icy moon Enceladus.
"Our big overarching hypothesis is that motility is a good biosignature," explains Jay Nadeau, a scientific researcher at Caltech and one of the investigators on the holographic microscope project, dubbed SHAMU (Submersible Holographic Astrobiology Microscope with Ultraresolution). "We suspect that if we send back videos of bacteria swimming, that is going to be a better proof of life than pretty much anything else."
Think, she says, of Antonie van Leeuwenhoek, the father of microbiology, who used simple microscopes in the 17th and 18th centuries to observe protozoa and bacteria. "He immediately recognized that they were living things based on the way they moved," Nadeau says. Indeed, when Leeuwenhoek wrote about observing samples of the plaque between his teeth, he described seeing "many very little animalcules, very prettily a-moving." And Nadeau adds, "No one doubted Leeuwenhoek once they saw them moving for themselves."
In order to capture images of microbes "a-moving" on another world, Nadeau and her colleagues, including Mory Gharib, the Hans W. Liepmann Professor of Aeronautics and Bioinspired Engineering and a vice provost at Caltech, had the idea to use digital holography rather than conventional microscopy.
Holography is a method for recording holistic information about the light bouncing off a sample so that a 3-D image can be reconstructed later. Compared to conventional microscopy, which typically uses multiple lenses to focus on a shallow sample (on a slide, for example), holography offers the advantages of focusing over a relatively large volume and of capturing high-resolution images, all without moving parts that could break in extreme environments or during a launch or landing if the instrument were sent into space.
Standard photography records only the intensity of the light (related to its amplitude) that reaches a camera lens after scattering off an object. But as a wave, light has both an amplitude and a phase, a separate property that can be used to tell how far the light travels once it is scattered. Holography is a technique that captures both—something that makes it possible to re-create a three-dimensional image of a sample.
To understand the technique, first imagine dropping a pebble in a pond and watching ripples emanate from that spot. Now imagine dropping a second pebble in a new spot, producing a second set of ripples. If the ripples interact with an object on the surface, such as a rock, they are diffracted or scattered by the object, changing the pattern of the waves—an effect that can be detected. Holography is akin to dropping two pebbles in a pond simultaneously, with the pebbles being two laser beams—one a reference beam that shines unaffected by the sample, the other an object beam that runs into the sample and gets diffracted or scattered. A detector measures the combination, or superposition, of the ripples from the two beams, which is known as the interference pattern. By knowing how the waves propagate and by analyzing the interference pattern, a computer can reconstruct what the object beam encountered as it traveled.
"We can take an interference pattern and use that to reconstruct all of the images in different planes in a volume," explains Chris Lindensmith, a systems engineer at JPL and an investigator on the project. "So we can just go and reconstruct whatever plane we are interested in after the fact and look and see if there's anything in there."
That means that a single image captures all the microbes in a sample—whether there is one bacterium or a thousand. And by taking a series of such images over time, the researchers can reconstruct the path that each bacterium took as it swam in the sample.
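The recording-then-refocusing idea above can be made concrete with a few lines of code. The sketch below is a minimal 1-D simulation of inline digital holography, not the SHAMU instrument's actual optics; the laser wavelength, pixel size, and sample distance are all illustrative assumptions. It records the interference of a reference plane wave with light scattered by a point "microbe," shows that an ordinary intensity image of the same scattered light is featureless (the phase is lost), and then numerically back-propagates the hologram to bring the scatterer into focus after the fact.

```python
import numpy as np

# Illustrative parameters (assumptions, not SHAMU's real design)
wavelength = 0.5e-6               # 500 nm laser
n = 2048                          # detector pixels
dx = 1e-6                         # 1 micron pixel pitch
x = (np.arange(n) - n // 2) * dx  # centered detector coordinates
z = 5e-3                          # point scatterer 5 mm from detector

# Object wave: Fresnel (paraxial) chirp from a point scatterer at x = 0
obj = 0.5 * np.exp(1j * np.pi * x**2 / (wavelength * z))
ref = np.ones(n)                  # unit-amplitude reference plane wave

photo = np.abs(obj) ** 2          # plain intensity image: phase discarded
hologram = np.abs(ref + obj) ** 2 # interference fringes encode the phase

# Numerically refocus: back-propagate the hologram by -z using the
# Fresnel transfer function, which undoes the scatterer's chirp.
fx = np.fft.fftfreq(n, dx)
H_back = np.exp(1j * np.pi * wavelength * z * fx**2)
field = np.fft.fftshift(np.fft.ifft(
    np.fft.fft(np.fft.ifftshift(hologram - hologram.mean())) * H_back))
recon = np.abs(field)

# The photograph is flat, but the reconstruction peaks at the center,
# where the scatterer actually sits.
assert photo.std() < 1e-6 * photo.mean()
assert abs(int(np.argmax(recon)) - n // 2) <= 2
```

Changing the propagation distance `z` in the back-propagation step refocuses a different plane from the very same hologram, which is the property Lindensmith describes: one recorded frame, any plane in the volume reconstructed afterward.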
That would be virtually impossible with conventional microscopy, says Lindensmith. With microscopy, you need to focus in real time, meaning that someone would have to turn a dial to move the sample closer to or farther from the microscope's lenses to keep a particular microbe in focus. During that time, they would miss the movements of any other microbes in the sample, because the in-focus region is so small.
All of the advantages that the holographic microscope offers over microscopy make it appealing for studies elsewhere in the solar system. And there are a number of worlds that scientists are eager to study in close-up detail to search for signs of life. In 2008, using data from the Phoenix Mars lander, scientists determined that there is water ice just below the surface in the northern plains of the Red Planet, making the locale a candidate for follow-up sampling studies. In addition, both the jovian moon Europa and the saturnian moon Enceladus are thought to harbor liquid oceans beneath their icy surfaces. Therefore, the SHAMU group says, a compact, robust microscope like the one the Caltech team is developing could be a highly desirable component of an instrument suite on a lander to any one of those locations.
Nadeau says the group's prototype performed well during the team's field-testing trip to Greenland. At each testing site, the researchers drilled a hole into the sea ice, submerged the microscope to a depth where some of the salty liquid water trapped inside the ice, called brine, was able to seep into the device's sample area, and collected holographic images. "We know that things live in the water and we know what they do and how they swim," says Nadeau. "But believe it or not, nobody knew what kinds of microorganisms live in sea-ice brine or if they can swim."
That is because typical techniques for counting, labeling, and observing microbes rely on fragile instrumentation and often require large amounts of power, making them unusable in extreme environments like the Arctic. As a result, "nobody had ever looked at sea-ice organisms immediately after collection like we did," says Stephanie Rider, a staff scientist at Caltech who went on the Greenland trip as part of the project. Previously, other teams have collected samples and taken them back to a lab where the samples have been stored in a freezer, sometimes for weeks at a time. "Who knows how much the samples have been warmed up and cooled down by the time someone studies them?" Rider says. "The samples could be totally different at that point."
When samples are returned to the laboratory, fed rich growth medium, and warmed to 4 degrees Celsius, swimming speeds increase greatly. Credit: Jay Nadeau/Caltech
During the Greenland trip, the SHAMU group successfully collected images that have been used to construct videos of bacteria and algae that live in the sea-ice brine. They also brought samples back to a lab in Nuuk, Greenland, warmed them overnight, and fed them bacterial growth medium—duplicating the standard conditions under which microorganisms from sea ice have been studied in the past. The researchers found that under those conditions, "everything starts zipping around like crazy," says Nadeau, indicating that in order to be accurate, observations do need to be made in place on the ice rather than back in a lab.
The team is particularly excited about what the successful measurements from Greenland could mean in the context of Mars. "We know from this that we can tell that things are alive when you take them straight out of ice," says Nadeau. "If we can see life in there on Earth, then it's possible there might be life in pockets of ice on Mars as well. Perhaps you don't have to have a big liquid ocean to find living organisms; there's a possibility that things can live just in pockets of ice."
The three-year SHAMU project began in January 2014 with funding from the Gordon and Betty Moore Foundation. In the coming months, the engineers hope to improve the microscope's sample chamber and to scale down the entire device. They believe they will have a launch-ready instrument by the end of the funding period.
As a first test in space, they would like to send the instrument to the International Space Station not only to see how it behaves in space but also to observe microbial samples under zero-gravity conditions. Beyond that, they hope to include SHAMU on a Mars lander as part of a NASA Discovery mission aimed at searching for biosignatures in the frozen northern plains of Mars. The Caltech team is partnering with Honeybee Robotics, a company that has built drills and sampling systems for numerous NASA missions (including the Phoenix Mars lander), to integrate the holographic microscope on a drill that would bore down about three feet into the martian ground ice.
In addition to Nadeau, Gharib, and Lindensmith, Jody Deming of the University of Washington's School of Oceanography is also an investigator on the SHAMU project.
Even in a calm, unchanging environment, cells are not static. Among other actions, cells activate and then deactivate some types of transcription factors—proteins that control the expression of genes—in a series of unpredictable and intermittent pulses. Since discovering this pulsing phenomenon, scientists have wondered what functions it could provide for cells.
Now, a new study from Caltech researchers shows that pulsing can let two proteins interact in a rhythmic fashion that controls genes. Specifically, when the expression of the transcription factors goes in and out of sync, gene expression also goes up and down. These rhythms of activation, the researchers say, may also underlie core processes in the cells of organisms from across the kingdoms of life.
"The way transcription factor pulses sync up with one another in time could play an important role in allowing cells to process information, communicate with other cells, and respond to stress," says paper coauthor Michael Elowitz, a professor of biology and biological engineering and an investigator with the Howard Hughes Medical Institute.
The research, led by Caltech postdoctoral scholar Yihan Lin, appears in the October 15 issue of Nature. Other Caltech authors of the paper are Assistant Professor of Chemistry Long Cai; Chang Ho Sohn, a staff scientist in the Cai lab; and Elowitz's former graduate student Chiraj K. Dalal (PhD '10), now at UC San Francisco.
Realizing that many different factors are pulsing in the same cell even in unchanging conditions, the Caltech scientists began to wonder if cells might adjust the relative timing of these pulses to enable a novel sort of time-based regulation. To find out, they set up time-lapse movies to follow two pulsing proteins and a target gene in real time in individual yeast cells.
A three-color movie of cells responding to two different stresses (as indicated). Green corresponds to Msn2 protein, red corresponds to Mig1 protein, and blue corresponds to an RNA-binding protein used to report gene expression. The white circle highlights the cell of interest. (Credit: Michael Elowitz and Yihan Lin/Caltech)
The team tagged two central transcription factors named Msn2 and Mig1 with green and red fluorescent proteins, respectively. When the transcription factors are activated, they move into the nucleus, where they influence gene expression. This movement—as well as the activation of the factors—can be visualized because the fluorescent markers concentrate within the small volume of the nucleus, causing it to glow brightly, either green, red, or both. The color choice for the fluorescent tags was symbolic: Msn2 serves as an activator, and Mig1 as a repressor. "Msn2, the green factor, steps on the gas and turns up gene expression, while Mig1, the red factor, hits the brakes," says Elowitz.
When the scientists stressed the yeast cells by adding heat, for example, or restricting food, the pulses of Msn2 and Mig1 changed their timing with respect to one another, with more or less frequent periods of overlap between their pulses, depending upon the stressing stimulus.
Generally, when the two transcription factors pulsed in synchrony, the repressor blocked the ability of the activator to turn on genes. "It's like someone simultaneously pumping the gas and brake pedals in a car over and over again," says Elowitz.
But when they were off-beat, with the activator pulsing without the repressor, gene expression increased. "When the cell alternates between the brake and the gas—the Msn2 transcription factor in this case—the car can move," says Elowitz. As a result of these stress-altered rhythms, the cells successfully produced more (or fewer) copies of certain proteins that helped the yeast cope with the unpleasant situation.
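The gas-and-brake logic described above can be captured in a toy simulation. In this sketch, all pulse parameters (period, pulse width, phase shift) are made-up illustrative numbers, not values from the study; the gene is counted as expressed only during time steps when the activator (Msn2) is pulsing while the repressor (Mig1) is not.

```python
def simulate_expression(phase_shift, n_steps=1000, period=20, pulse_width=8):
    """Toy model of pulsatile combinatorial regulation.

    Both factors pulse with the same period; the repressor's pulses are
    offset by `phase_shift` time steps. The gene is expressed only when
    the activator is on and the repressor is off. Returns the fraction
    of time steps with expression.
    """
    expressed = 0
    for t in range(n_steps):
        msn2_on = (t % period) < pulse_width          # activator pulse
        mig1_on = ((t + phase_shift) % period) < pulse_width  # repressor pulse
        if msn2_on and not mig1_on:
            expressed += 1
    return expressed / n_steps

# In-phase pulses: the repressor always overlaps the activator,
# so expression is fully blocked ("gas and brake at the same time").
in_phase = simulate_expression(phase_shift=0)      # -> 0.0

# Anti-phase pulses: the activator pulses alone, so the gene is
# expressed during every activator pulse.
out_of_phase = simulate_expression(phase_shift=10)  # -> 0.4
```

Shifting the relative timing of the two factors, with no change in their average concentrations, is enough to swing expression from fully off to its maximum, which is the time-based mode of regulation the study proposes.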
Previously, researchers thought that the relative concentrations of multiple transcription factors in the nucleus determine how they regulate a common gene target—a phenomenon known as combinatorial regulation. But the new study suggests that the relative timing of the pulses of transcription factors may be just as important as their concentration.
"Most genes in the cell are regulated by several transcription factors in a combinatorial fashion, as parts of a complex network," says Cai. "What we're now seeing is a new mode of regulation that controls the pulse timing of transcription factors, and this could be critical to understanding the combinatorial regulation in genetic networks."
"There appears to be a layer of time-based regulation in the cell that, because it can only be observed with movies of individual cells, is still largely unexplored," says Lin. "We look forward to learning more about this intriguing and underappreciated form of gene regulation."
In future research, the scientists will try to understand how prevalent this newfound mode of time-based regulation is in a variety of cell types and will examine its involvement in gene regulation systems. In the context of synthetic biology—the harnessing and modification of biological systems for human technological applications—the researchers also hope to develop methods to control such pulsing to program new cellular behaviors.
You walk by a bakery, smell the scent of fresh cookies, and are immediately reminded of baking with your grandmother as a child. The seemingly simple act of learning to associate a smell with a good or bad outcome is actually quite a complicated behavior—one that can begin as a single synapse, or junction, where a signal is passed between two neurons in the brain.
Assistant Professor of Neuroscience Betty Hong is interested in how animals sense cues in their environment, process that information in the brain, and then use that information to guide behaviors. To study the processing of information from synapse to behavior, her work focuses on olfaction—or chemical sensing via smell—in fruit flies.
Hong, who received her bachelor's degree from Caltech in 2002 and her doctorate from Harvard in 2009, came from a postdoctoral position at Harvard Medical School to join the Caltech faculty in June. We spoke with her recently about her work, her life outside the laboratory, and why she is looking forward to being back at Caltech.
How did you initially become interested in your field?
It's rather circuitous. I was initially drawn to neuroscience because I was interested in disease. I had family who passed away from Alzheimer's disease, and it's clear that with the current demographic of our country, diseases associated with aging—like Alzheimer's—are going to have a large impact on society in the next 20 to 30 years. Working at the Children's Hospital Boston in graduate school, I also became increasingly interested in understanding the rise of neurodevelopmental disorders like autism.
I really wanted to understand the mechanistic basis for neurological disease. And then it became clear to me that part of the problem of trying to understand neurological disorders was that we really had no idea how the brain is supposed to work. If you were a mechanic who didn't know how cars work, how could you fix a broken car? That led me to study increasingly more basic mechanisms of how the brain functions.
Why did you decide to focus your research on olfaction?
Although we humans have evolved to move away from olfaction—humans and primates are very visual—the whole rest of the animal kingdom relies on olfaction heavily for all of its daily survival and functions. Even the lowliest microbe relies on chemical sensing to navigate its way through the environment. We study olfaction in an invertebrate model—the fruit fly Drosophila. We do that for a couple of reasons. One is that it has a very small brain, and so its circuits are very compact, and that small size and numerical simplicity lets us get a global overview of what's happening—a view that you could never get if you're looking at a big circuit, like a mouse brain or a human brain.
The other reason is that there are versatile genetic tools and new technologies that have allowed us to make high-resolution electrical and optical recordings of neural activity in the brains of fruit flies. That very significant technical hurdle had to be crossed in order to make it a worthwhile experimental model. With electrophysiological access to the brain, and genetic tools that allow you to manipulate the circuits, you can watch brain activity as it's happening and ask what happens to neural activity when you tweak the properties of the system in specific ways. And the fly also has a robust and flexible set of behaviors that you can relate to all of this.
What are some of the behaviors that you are interested in studying?
We're very interested in understanding how flies can associate an odor with a pleasant or unpleasant outcome. So, in the same way that you might associate wonderful baking smells with something from your childhood, flies can learn to arbitrarily associate odors with different outcomes. And to know "when I smell this odor, I should run away," or "based on what happened to me the last time I smelled this odor, this might be an indicator of food"—that's actually a fairly sophisticated behavior that is a basic building block for more complex higher-order cognitive tasks that emerge in vertebrates.
There are many animals that are inflexibly wired. In other words, they smell something, and through evolution, their circuits have evolved to tell them to move toward it or go away from it. Even if they are in an unusual environment, they can't flexibly alter that behavior. The ability to flexibly adapt our behavior to new and unfamiliar environments was a key transition in the evolution of the nervous system.
You are also a Caltech alum. What drew you back as a faculty member?
Yes, it seems like such a long time ago, but I was an undergraduate here—a biology major in Page House—from 1998 to 2002. I was also a SURF student with [Professor of Biology] Bruce Hay and later with David Baltimore [president emeritus and Robert Andrews Millikan Professor of Biology]. It's kind of wild to have as your colleagues people who were your mentors a decade ago, but I think the main reason I chose Caltech was the community of scholars here—on the level of faculty, undergraduate students, graduate students, and postdocs—that I will be able to interact with. In the end, you mainly just want to be with smart, motivated people who want to use science to make a difference in the world. And I think that encapsulates what Caltech does.
Do you have any interests or hobbies that are outside of the lab?
I used to play horn in the wind ensemble and orchestra, including the time when I was here as an undergraduate. But these days, any time that I'm not in the office, I'm with my two young kids. Right now, we're really excited about exploring all the fun and exciting things to do outdoors in Southern California. We've done a lot of hiking and exploring the natural beauty here. The kids have gotten into fishing lately, so our latest thing has been scoping out the best places to fish. I would love to hear from members of the community what their favorite spots are!
Caltech biologists have developed a nonsurgical method to deliver long-term contraception to both male and female animals with a single shot. The technique—so far used only in mice—holds promise as an alternative to spaying and neutering feral animals.
The approach was developed in the lab of Bruce Hay, professor of biology and biological engineering at Caltech, and is described in the October 5 issue of Current Biology. The lead author on the paper is postdoctoral scholar Juan Li.
Hay's team was inspired by work conducted in recent years by David Baltimore and others showing that an adeno-associated virus (AAV), a small, harmless virus that cannot replicate on its own and has proven useful in gene-therapy trials, can be used to deliver sequences of DNA to muscle cells, causing them to produce specific antibodies known to fight infectious diseases such as HIV, malaria, and hepatitis C.
Li and her colleagues thought the same approach could be used to produce infertility. They used an AAV to deliver a gene that directs muscle cells to produce an antibody that neutralizes gonadotropin-releasing hormone (GnRH) in mice. GnRH is what the researchers refer to as a "master regulator of reproduction" in vertebrates—it stimulates the release of two hormones from the pituitary that promote the formation of eggs, sperm, and sex steroids. Without it, an animal is rendered infertile.
In the past, other teams have tried neutralizing GnRH through vaccination. However, the loss of fertility that was seen in those cases was often temporary. In the new study, Hay and his colleagues saw that the mice—both male and female—were unable to conceive after about two months, and the majority remained infertile for the remainder of their lives.
"Inhibiting GnRH is an ideal way to inhibit fertility and behaviors caused by sex steroids, such as aggression and territoriality," says Hay. He notes that in the study, his team also shows that female mice can be rendered infertile using a different antibody that targets a binding site for sperm on the egg. "This target is ideal when you want to inhibit fertility but want to leave the individual otherwise completely normal in terms of reproductive behaviors and hormonal cycling."
Hay's team has dubbed the new approach "vectored contraception" and says that many other proteins thought to be important for reproduction might also be targeted by this technique.
The researchers are particularly excited about the possibility of replacing spay–neuter programs with single injections. "Spaying and neutering of animals to control fertility, unwanted behavior, and population numbers of feral animals is costly and time consuming, and therefore often doesn't happen," says Hay. "There is a strong desire in many parts of the world for quick, nonsurgical approaches to inhibiting fertility. We think vectored contraception provides such an approach."
As a next step, Hay's team is working with Bill Swanson, director of animal research at the Cincinnati Zoo's Center for Conservation and Research of Endangered Wildlife, to try this approach in female domestic cats. Swanson's team spends much of its time working to promote fertility in endangered cat species, but it is also interested in developing humane ways of managing populations of feral domestic cats through inhibition of fertility, as these animals are often otherwise trapped and euthanized.
Additional Caltech authors on the paper, "Vectored antibody gene delivery mediates long-term contraception," are Alejandra I. Olvera, Annie Moradian, Michael J. Sweredoski, and Sonja Hess. Omar S. Akbari is also a coauthor on the paper and is now at UC Riverside. Some of the work was completed in the Proteome Exploration Laboratory at Caltech, which is supported by the Gordon and Betty Moore Foundation, the Beckman Institute, and the National Institutes of Health. Olvera was supported by a Gates Millennium Scholar Award.
Hong and colleagues aim to reveal neural mechanisms related to olfaction
Over the summer, Betty Hong, assistant professor of neuroscience, spent a week at the Janelia Research Campus in Ashburn, Virginia, interacting and brainstorming with other researchers from around the country interested in olfaction, our sense of smell. Invited to participate by the National Science Foundation (NSF), these 30 computational and experimental neuroscientists came up with innovative ways to approach some of the mysteries about how the brain processes odors and uses that information to guide behavior.
The five-day session was an example of the agency's new funding mechanism, the Ideas Lab. At these meetings, a multidisciplinary group of researchers is charged with generating potentially transformative proposals on a focused research topic. Now the NSF has awarded $15 million to three projects from the Olfactory Ideas Lab. Hong is coprincipal investigator on one titled "Using natural odor stimuli to crack the olfactory code." The awards expand NSF's investments in President Obama's BRAIN Initiative.
"I am grateful to have had the opportunity to be thrown together for a week with such a smart, diverse group of scientists who approach olfaction from so many different angles," says Hong (BS '02), adding that without the Ideas Lab, it is unlikely that she would have ever established collaborations with her coinvestigators. "I am also extremely grateful to the NSF for including junior investigators like myself who are just kicking off their research program. This unique funding mechanism will enable us to tackle really challenging and innovative research right at the start of our careers."
Olfactory scientists typically use simple synthetic odors involving single molecules for their experiments because natural odors—those that we smell around us every day—are too difficult to reproduce in a reliable way under controlled conditions. However, those simplified stimuli may not trigger the full range of neural computations that constitute olfaction.
Therefore, Hong and her colleagues aim to use comprehensive chemical analysis and computational methods to construct reproducible synthetic odorants in the lab that mimic naturally occurring smells in terms of eliciting typical behavioral responses in honey bees, fruit flies, and fly larvae. (Hong specializes in studies of the fruit fly Drosophila.) These synthetic odor blends can then be used to investigate how the brain processes smells and drives specific adaptive behaviors.
"We believe probing the olfactory circuit with naturalistic stimuli will reveal long-hidden computational features of the circuit," Hong explains. "Much as higher-order visual neurons only respond to complex stimuli like faces or hands, and not to simple bars and dots, we hypothesize that naturalistic odor stimuli will reveal novel features of odor space that the olfactory system encodes, which may only become apparent once appropriate sets of stimuli are used."
Along with Hong, additional principal investigators on the project are Brian Smith of Arizona State University; Aravinthan Samuel of Harvard University; and Tatyana Sharpee of the Salk Institute for Biological Studies. The project will receive $3.6 million over three years.
Good communication is crucial to any relationship, especially when partners are separated by distance. This also holds true for microbes in the deep sea that need to work together to consume large amounts of methane released from vents on the ocean floor. Recent work at Caltech has shown that these microbial partners can still accomplish this task, even when not in direct contact with one another, by using electrons to share energy over long distances.
This is the first time that direct interspecies electron transport—the movement of electrons from a cell, through the external environment, to another cell type—has been documented in microorganisms in nature.
The results were published in the September 16 issue of the journal Nature.
"Our lab is interested in microbial communities in the environment and, specifically, the symbiosis—or mutually beneficial relationship—between microorganisms that allows them to catalyze reactions they wouldn't be able to do on their own," says Professor of Geobiology Victoria Orphan, who led the recent study. For the last two decades, Orphan's lab has focused on the relationship between a species of bacteria and a species of archaea that live in symbiotic aggregates, or consortia, within deep-sea methane seeps. The organisms work together in syntrophy (which means "feeding together") to consume up to 80 percent of methane emitted from the ocean floor—methane that might otherwise end up contributing to climate change as a greenhouse gas in our atmosphere.
Previously, Orphan and her colleagues contributed to the discovery of this microbial symbiosis, a cooperative partnership between methane-oxidizing archaea called anaerobic methanotrophs (or "methane eaters") and sulfate-reducing bacteria (organisms that can "breathe" sulfate instead of oxygen) that allows these organisms to consume methane using sulfate from seawater. However, it was unclear how these cells share energy and interact within the symbiosis to perform this task.
Because these microorganisms grow slowly (reproducing only four times per year) and live in close contact with each other, it has been difficult for researchers to isolate them from the environment to grow them in the lab. So, the Caltech team used a research submersible, called Alvin, to collect samples containing the methane-oxidizing microbial consortia from deep-ocean methane seep sediments and then brought them back to the laboratory for analysis.
The researchers used different fluorescent DNA stains to mark the two types of microbes and view their spatial orientation in consortia. In some consortia, Orphan and her colleagues found the bacterial and archaeal cells were well mixed, while in other consortia, cells of the same type were clustered into separate areas.
Orphan and her team wondered if the variation in the spatial organization of the bacteria and archaea within these consortia influenced their cellular activity and their ability to cooperatively consume methane. To find out, they applied a stable isotope "tracer" to evaluate metabolic activity. The amount of the isotope taken up by individual archaeal and bacterial cells within their microbial "neighborhoods" in each consortium was then measured with a high-resolution instrument called nanoscale secondary ion mass spectrometry (nanoSIMS) at Caltech. This allowed the researchers to determine how active the archaeal and bacterial partners were relative to their distance from one another.
To their surprise, the researchers found that the spatial arrangement of the cells in consortia had no influence on their activity. "Since this is a syntrophic relationship, we would have thought the cells at the interface—where the bacteria are directly contacting the archaea—would be more active, but we don't really see an obvious trend. What is really notable is that there are cells that are many cell lengths away from their nearest partner that are still active," Orphan says.
To find out how the bacteria and archaea were partnering, co-first authors Grayson Chadwick (BS '11), a graduate student in geobiology at Caltech and a former undergraduate researcher in Orphan's lab, and Shawn McGlynn, a former postdoctoral scholar, employed spatial statistics to look for patterns in cellular activity for multiple consortia with different cell arrangements. They found that populations of syntrophic archaea and bacteria in consortia had similar levels of metabolic activity; when one population had high activity, the associated partner microorganisms were also equally active—consistent with a beneficial symbiosis. However, a close look at the spatial organization of the cells revealed that no particular arrangement of the two types of organisms—whether evenly dispersed or in separate groups—was correlated with a cell's activity.
To determine how these metabolic interactions were taking place even over relatively long distances, postdoctoral scholar and coauthor Chris Kempes, a visitor in computing and mathematical sciences, modeled how cellular activity should vary with the distance between syntrophic partners if the partnership depended on the molecular diffusion of a substrate. He found that conventional metabolites such as hydrogen, the molecules previously predicted to be exchanged in this syntrophic consumption of methane, were inconsistent with the spatial activity patterns observed in the data. Revised models indicated, however, that electrons could likely make the trip from cell to cell across greater distances.
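The modeling intuition can be sketched in a few lines. This is a deliberately simplified illustration, not the study's actual model: the functional forms and all numbers are assumptions. A substrate diffusing from a point source in 3-D falls off roughly as 1/r at steady state, so diffusion-limited partner activity should decay with distance, whereas conduction of electrons along cytochrome "wires" predicts activity that stays roughly flat over cell-scale separations.

```python
def diffusive_activity(r, r0=1.0):
    """Steady-state 3-D diffusion from a point source: substrate
    concentration, and hence diffusion-limited partner activity,
    falls off as ~1/r (capped at 1.0 near the source)."""
    return min(1.0, r0 / r)

def conductive_activity(r, r_max=50.0):
    """Direct electron transfer along conductive protein 'wires':
    activity is roughly distance-independent out to the wiring
    length scale r_max."""
    return 1.0 if r <= r_max else 0.0

# Cell-to-partner distances in arbitrary cell-length units (illustrative).
distances = [1, 5, 10, 20]
diff_profile = [diffusive_activity(r) for r in distances]  # decays with r
cond_profile = [conductive_activity(r) for r in distances]  # flat with r
```

Observed activity that does not decline with partner distance, as in the nanoSIMS data, matches the flat conductive profile rather than the decaying diffusive one, which is why the revised electron-transfer model fit the measurements better.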
"Chris came up with a generalized model for the methane-oxidizing syntrophy based on direct electron transfer, and these model results were a better match to our empirical data," Orphan says. "This pointed to the possibility that these archaea were directly transferring electrons derived from methane to the outside of the cell, and those electrons were being passed to the bacteria directly."
Guided by this information, Chadwick and McGlynn looked for independent evidence to support the possibility of direct interspecies electron transfer. Cultured bacteria, such as those from the genus Geobacter, are model organisms for the direct electron transfer process. These bacteria use large proteins, called multi-heme cytochromes, on their outer surface that act as conductive "wires" for the transport of electrons.
Using genome analysis—along with transmission electron microscopy and a stain that reacts with these multi-heme cytochromes—the researchers showed that these conductive proteins were also present on the outer surface of the archaea they were studying. And that finding, Orphan says, can explain why the spatial arrangement of the syntrophic partners does not seem to affect their relationship or activity.
"It's really one of the first examples of direct interspecies electron transfer occurring between uncultured microorganisms in the environment. Our hunch is that this is going to be more common than is currently recognized," she says.
Orphan notes that the information they have learned about this relationship will help to expand how researchers think about interspecies microbial interactions in nature. In addition, the microscale stable isotope approach used in the current study can be used to evaluate interspecies electron transport and other forms of microbial symbiosis occurring in the environment.