Doris Tsao Named Howard Hughes Medical Institute Investigator

The Howard Hughes Medical Institute (HHMI) has selected Caltech professor of biology Doris Tsao (BS '96) as one of 26 new HHMI investigators. Investigators represent some of the nation's top biomedical researchers and receive five years of funding to "move their research in creative new directions."

Tsao is a systems neuroscientist studying the neural mechanisms underlying primate vision. She and her group aim to discover how the brain "stitches together" individual pixels of light—the photons hitting our retinas—to create the visual experience of discrete and recognizable objects in space.

"The central problem I want to understand is how visual objects are represented in the brain, and how these representations are used to guide behavior," she says. "I feel inexpressibly lucky for the support from the HHMI that will allow us to dive deep into this program."

The group has used functional magnetic resonance imaging (fMRI) scanning to study neural responses to images and has identified discrete areas in the brain, called "face patches," that play important roles in detecting and identifying faces.

Tsao received her PhD in neuroscience from Harvard after completing her undergraduate studies in biology and mathematics at Caltech. She returned to Caltech as an assistant professor in 2008 and became a full professor in 2014. Her appointment brings the number of current Caltech HHMI investigators to eleven.

Frontpage Title: 
Caltech Neuroscientist Named HHMI Investigator
Listing Title: 
Caltech Neuroscientist Named HHMI Investigator
Contact: 
Writer: 
Exclude from News Hub: 
No
News Type: 
In Our Community

Controlling a Robotic Arm with a Patient's Intentions

Neural prosthetic devices implanted in the brain's movement center, the motor cortex, can allow patients with amputations or paralysis to control the movement of a robotic limb—one that can be either connected to or separate from the patient's own limb. However, current neuroprosthetics produce motion that is delayed and jerky—not the smooth and seemingly automatic gestures associated with natural movement. Now, by implanting neuroprosthetics in a part of the brain that controls not the movement directly but rather our intent to move, Caltech researchers have developed a way to produce more natural and fluid motions.

In a clinical trial, the Caltech team and colleagues from Keck Medicine of USC have successfully implanted just such a device in a patient with quadriplegia, giving him the ability to perform a fluid hand-shaking gesture and even play "rock, paper, scissors" using a separate robotic arm.

The results of the trial, led by principal investigator Richard Andersen, the James G. Boswell Professor of Neuroscience, and including Caltech lab members Tyson Aflalo, Spencer Kellis, Christian Klaes, Brian Lee, Ying Shi, and Kelsie Pejsa, are published in the May 22 edition of the journal Science.

"When you move your arm, you really don't think about which muscles to activate and the details of the movement—such as lift the arm, extend the arm, grasp the cup, close the hand around the cup, and so on. Instead, you think about the goal of the movement. For example, 'I want to pick up that cup of water,'" Andersen says. "So in this trial, we were successfully able to decode these actual intents, by asking the subject to simply imagine the movement as a whole, rather than breaking it down into myriad components."

For example, the process of seeing a person and then shaking his hand begins with a visual signal (recognizing someone you know) that is first processed in the lower visual areas of the cerebral cortex. The signal then moves up to a high-level cognitive area known as the posterior parietal cortex (PPC). Here, the initial intent to make a movement is formed. These intentions are then transmitted to the motor cortex, through the spinal cord, and on to the arms and legs, where the movement is executed.

High spinal cord injuries can cause quadriplegia in some patients because movement signals cannot get from the brain to the arms and legs. As a solution, earlier neuroprosthetic implants used tiny electrodes to detect and record movement signals at their last stop before reaching the spinal cord: the motor cortex.

The recorded signal is then carried via wire bundles from the patient's brain to a computer, where it is translated into an instruction for a robotic limb. However, because the motor cortex normally controls many muscles, the signals tend to be detailed and specific. The Caltech group wanted to see if the simpler intent to shake the hand could be used to control the prosthetic limb, instead of asking the subject to concentrate on each component of the handshake—a more painstaking and less natural approach.

Andersen and his colleagues wanted to improve the versatility of movement that a neuroprosthetic can offer by recording signals from a different brain region—the PPC. "The PPC is earlier in the pathway, so signals there are more related to movement planning—what you actually intend to do—rather than the details of the movement execution," he says. "We hoped that the signals from the PPC would be easier for the patients to use, ultimately making the movement process more intuitive. Our future studies will investigate ways to combine the detailed motor cortex signals with more cognitive PPC signals to take advantage of each area's specializations."

In the clinical trial, designed to test the safety and effectiveness of this new approach, the Caltech team collaborated with surgeons at Keck Medicine of USC and the rehabilitation team at Rancho Los Amigos National Rehabilitation Center. The surgeons implanted a pair of small electrode arrays in two parts of the PPC of a quadriplegic patient. Each array contains 96 active electrodes that, in turn, each record the activity of a single neuron in the PPC. The arrays were connected by a cable to a system of computers that processed the signals, decoded the intent of the subject, and controlled output devices that included a computer cursor and a robotic arm developed by collaborators at Johns Hopkins University.
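The pipeline described above—population activity from the implanted arrays translated into commands for a cursor or robotic arm—can be illustrated with a minimal sketch. This is not the decoding algorithm used in the trial; the linear least-squares decoder, the synthetic data, and all variable names here are assumptions chosen only to show the general shape of such a system.

```python
import numpy as np

# Illustrative sketch only: a simple linear decoder of the kind commonly used
# in brain-machine interfaces, NOT the method used in the Caltech trial.

rng = np.random.default_rng(0)

n_neurons = 96          # one array: 96 active electrodes, one neuron each
n_samples = 500         # number of time bins recorded during calibration

# Synthetic calibration data: firing rates (spikes per bin) paired with the
# 2-D velocity the subject intended during each bin.
firing_rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
true_weights = rng.normal(size=(n_neurons, 2))
intended_velocity = firing_rates @ true_weights + rng.normal(
    scale=0.1, size=(n_samples, 2)
)

# Fit the decoder: a least-squares map from population activity to velocity.
weights, *_ = np.linalg.lstsq(firing_rates, intended_velocity, rcond=None)

# Decode a new bin of activity into a velocity command for the output device.
new_bin = rng.poisson(lam=5.0, size=n_neurons).astype(float)
velocity_command = new_bin @ weights
print(velocity_command.shape)  # (2,)
```

In a real system the decoded command would be streamed continuously, with the decoder recalibrated as the subject and algorithm adapt to each other.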

After recovering from the surgery, the patient was trained to control the computer cursor and the robotic arm with his mind. Once training was complete, the researchers saw just what they were hoping for: intuitive movement of the robotic arm.

"For me, the most exciting moment of the trial was when the participant first moved the robotic limb with his thoughts. He had been paralyzed for over 10 years, and this was the first time since his injury that he could move a limb and reach out to someone. It was a thrilling moment for all of us," Andersen says.

"It was a big surprise that the patient was able to control the limb on day one—the very first day he tried," he adds. "This attests to how intuitive the control is when using PPC activity."

The patient, Erik G. Sorto, was also thrilled with the quick results: "I was surprised at how easy it was," he says. "I remember just having this out-of-body experience, and I wanted to just run around and high-five everybody."

Over time, Sorto continued to refine his control of his robotic arm, thus providing the researchers with more information about how the PPC works. For example, "we learned that if he thought, 'I should move my hand over toward the object in a certain way'—trying to control the limb—that didn't work," Andersen says. "The thought actually needed to be more cognitive. But if he just thought, 'I want to grasp the object,' it was much easier. And that is exactly what we would expect from this area of the brain."

This better understanding of the PPC will help the researchers improve neuroprosthetic devices of the future, Andersen says. "What we have here is a unique window into the workings of a complex high-level brain area as we work collaboratively with our subject to perfect his skill in controlling external devices."

"The primary mission of the USC Neurorestoration Center is to take advantage of resources from our clinical programs to create unique opportunities to translate scientific discoveries, such as those of the Andersen Lab at Caltech, to human patients, ultimately turning transformative discoveries into effective therapies," says center director Charles Y. Liu, professor of neurological surgery, neurology, and biomedical engineering at USC, who led the surgical implant procedure and the USC/Rancho Los Amigos team in the collaboration.

"In taking care of patients with neurological injuries and diseases—and knowing the significant limitations of current treatment strategies—it is clear that completely new approaches are necessary to restore function to paralyzed patients. Direct brain control of robots and computers has the potential to dramatically change the lives of many people," Liu adds.

Dr. Mindy Aisen, the chief medical officer at Rancho Los Amigos who led the study's rehabilitation team, says that advancements in prosthetics like these hold promise for the future of patient rehabilitation. "We at Rancho are dedicated to advancing rehabilitation through new assistive technologies, such as robotics and brain-machine interfaces. We have created a unique environment that can seamlessly bring together rehabilitation, medicine, and science as exemplified in this study," she says.

Although tasks like shaking hands and playing "rock, paper, scissors" are important to demonstrate the capability of these devices, the hope is that neuroprosthetics will eventually enable patients to perform more practical tasks that will allow them to regain some of their independence.

"This study has been very meaningful to me. As much as the project needed me, I needed the project. The project has made a huge difference in my life. It gives me great pleasure to be part of the solution for improving paralyzed patients' lives," Sorto says. "I joke around with the guys that I want to be able to drink my own beer—to be able to take a drink at my own pace, when I want to take a sip out of my beer and to not have to ask somebody to give it to me. I really miss that independence. I think that if it was safe enough, I would really enjoy grooming myself—shaving, brushing my own teeth. That would be fantastic." 

To that end, Andersen and his colleagues are already working on a strategy that could enable patients to perform these finer motor skills. The key is to be able to provide particular types of sensory feedback from the robotic arm to the brain.

Although Sorto's implant allowed him to control larger movements with visual feedback, "to really do fine dexterous control, you also need feedback from touch," Andersen says. "Without it, it's like going to the dentist and having your mouth numbed. It's very hard to speak without somatosensory feedback." The newest devices under development by Andersen and his colleagues feature a mechanism to relay signals from the robotic arm back into the part of the brain that gives the perception of touch.

"The reason we are developing these devices is that normally a quadriplegic patient couldn't, say, pick up a glass of water to sip it, or feed themselves. They can't even do anything if their nose itches. Seemingly trivial things like this are very frustrating for the patients," Andersen says. "This trial is an important step toward improving their quality of life."

The results of the trial were published in a paper titled "Decoding Motor Imagery from the Posterior Parietal Cortex of a Tetraplegic Human." The implanted device and signal processors used in the Caltech-led clinical trial were the NeuroPort Array and NeuroPort Bio-potential Signal Processors developed by Blackrock Microsystems in Salt Lake City, Utah. The robotic arm used in the trial was the Modular Prosthetic Limb, developed at the Applied Physics Laboratory at Johns Hopkins. Sorto was recruited to the trial by collaborators at Rancho Los Amigos National Rehabilitation Center and at Keck Medicine of USC. The trial was funded by the National Institutes of Health, the Boswell Foundation, the Department of Defense, and the USC Neurorestoration Center.

Writer: 
Exclude from News Hub: 
No
News Type: 
Research News

Do Fruit Flies Have Emotions?

A fruit fly starts buzzing around food at a picnic, so you wave your hand over the insect and shoo it away. But when the insect flees the scene, is it doing so because it is actually afraid? Using fruit flies to study the basic components of emotion, a new Caltech study reports that a fly's response to a shadowy overhead stimulus might be analogous to a negative emotional state such as fear—a finding that could one day help us understand the neural circuitry involved in human emotion.

The study, which was done in the laboratory of David Anderson, Seymour Benzer Professor of Biology and an investigator with the Howard Hughes Medical Institute, was published online May 14 in the journal Current Biology.

Insects are an important model for the study of emotion; although mice are closer to humans on the evolutionary family tree, the fruit fly has a much simpler neurological system that is easier to study. However, studying emotions in insects or any other animal can be tricky. Because researchers know the experience of human emotion firsthand, they might anthropomorphize the responses of an insect—just as you might assume that the shooed-away fly left your plate because it was afraid of your hand. But there are several problems with such an assumption, says postdoctoral scholar William T. Gibson, first author of the paper.

"There are two difficulties with taking your own experiences and then saying that maybe these are happening in a fly. First, a fly's brain is very different from yours, and second, a fly's evolutionary history is so different from yours that even if you could prove beyond any doubt that flies have emotions, those emotions probably wouldn't be the same ones that you have," he says. "For these reasons, in our study, we wanted to take an objective approach."

Anderson and Gibson and their colleagues did this by deconstructing the idea of an emotion into basic building blocks—so-called emotion primitives, a concept previously developed by Anderson and Ralph Adolphs, Bren Professor of Psychology and Neuroscience and professor of biology.

"There has been ongoing debate for decades about what 'emotion' means, and there is no generally accepted definition. In an article that Ralph Adolphs and I recently wrote, we put forth the view that emotions are a type of internal brain state with certain general properties that can exist independently of subjective, conscious feelings, which can only be studied in humans," Anderson says. "That means we can study such brain states in animal models like flies or mice without worrying about whether they have 'feelings' or not. We use the behaviors that express those states as a readout."

Gibson explains by analogy that emotions can be broken down into these emotion primitives much as a secondary color, such as orange, can be separated into two primary colors, yellow and red. "And if we can show that fruit flies display all of these separate but necessary primitives, we then may be able to make the argument that they also have an emotion, like fear."

The emotion primitives analyzed in the fly study can be understood in the context of a stimulus associated with human fear: the sound of a gunshot. If you hear a gun fire, the sound may trigger a negative feeling. This feeling, a primitive called valence, will probably cause you to behave differently for several minutes afterward. This is a primitive called persistence. Repeated exposure to the stimulus should also produce a greater emotional response—a primitive called scalability; for example, the sound of 10 gunshots would make you more afraid than the sound of one shot.

Gibson says that another primitive of fear is that it is generalized to different contexts, meaning that if you were eating lunch or were otherwise occupied when the gun fired, the fear would take over, distracting you from your lunch. Trans-situationality is another primitive that could cause you to produce the same fearful reaction in response to an unrelated stimulus—such as the sound of a car backfiring.

The researchers chose to study these five primitives by observing the insects in the presence of a fear-inducing stimulus. Because defensive behavioral responses to overhead visual threats are common in many animals, the researchers created an apparatus that would pass a dark paddle over the flies' habitat. The flies' movements were then tracked using a software program created in collaboration with Pietro Perona, the Allen E. Puckett Professor of Electrical Engineering.

The researchers analyzed the flies' responses to the stimulus and found that the insects displayed all of these emotion primitives. For example, responses were scalable: when the paddle passed overhead, the flies would either freeze, or jump away from the stimulus, or enter a state of elevated arousal, and each response increased with the number of times the stimulus was delivered. And when hungry flies were gathered around food, the stimulus would cause them to leave the food for several seconds and run around the arena until their state of elevated arousal decayed and they returned to the food—exhibiting the primitives of context generalization and persistence.

"These experiments provide objective evidence that visual stimuli designed to mimic an overhead predator can induce a persistent and scalable internal state of defensive arousal in flies, which can influence their subsequent behavior for minutes after the threat has passed," Anderson says. "For us, that's a big step beyond just casually intuiting that a fly fleeing a visual threat must be 'afraid,' based on our anthropomorphic assumptions. It suggests that the flies' response to the threat is richer and more complicated than a robotic-like avoidance reflex."

In the future, the researchers say that they plan to combine the new technique with genetically based techniques and imaging of brain activity to identify the neural circuitry that underlies these defensive behaviors. Their end goal is to identify specific populations of neurons in the fruit fly brain that are necessary for emotion primitives—and whether these functions are conserved in higher organisms, such as mice or even humans.

Although the presence of these primitives suggests that the flies might be reacting to the stimulus based on some kind of emotion, the researchers are quick to point out that this new information does not prove—nor did it set out to establish—that flies can experience fear, or happiness, or anger, or any other feelings.

"Our work can get at questions about mechanism and questions about the functional properties of emotion states, but we cannot get at the question of whether or not flies have feelings," Gibson says.

The study, titled "Behavioral Responses to a Repetitive Stimulus Express a Persistent State of Defensive Arousal in Drosophila," was published in the journal Current Biology. In addition to Gibson, Anderson, and Perona, Caltech coauthors include graduate student Carlos Gonzalez, undergraduate Rebecca Du, former research assistants Conchi Fernandez and Panna Felsen (BS '09, MS '10), and former postdoctoral scholar Michael Maire. Coauthors Lakshminarayanan Ramasamy and Tanya Tabachnik are from the Janelia Research Campus of the Howard Hughes Medical Institute (HHMI). The work was funded by the National Institutes of Health, HHMI, and the Gordon and Betty Moore Foundation.

Contact: 
Writer: 
Exclude from News Hub: 
No
News Type: 
Research News

Andersen Wins Inaugural Cal-BRAIN Funding

Richard Andersen, James G. Boswell Professor of Neuroscience, has been selected as a recipient of one of the first grants from the California Blueprint for Research to Advance Innovations in Neuroscience (Cal-BRAIN) program.

Cal-BRAIN, a joint initiative led by UC San Diego and the Lawrence Berkeley National Laboratory, is the California complement to President Obama's federal BRAIN Initiative. Scientists from all California nonprofit research institutions were eligible to apply for the state initiative's first round of $120,000 seed grants; Andersen from Caltech and the 15 other inaugural winners from Stanford, USC, Lawrence Berkeley National Laboratory, and 10 UC campuses were selected from a pool of 126 applicants.

The initiative's goal is for funded projects to use an interdisciplinary approach to advance the diagnosis and treatment of all brain disorders as well as develop better neural prosthetic devices that would allow paralyzed patients to move a robotic limb using signals from the patient's own brain. By supporting this research specifically, Cal-BRAIN aims to position California as a leader in the growing neurotechnology sector—a possible future source of economic growth and job creation in the state.

Andersen's Cal-BRAIN–funded project, titled "Engineering Artificial Sensation," will focus on artificially replicating the sensation of touch in patients with paralysis accompanied by loss of touch perception; such replication would improve the dexterity of neural prosthetic devices. This capability, when combined with a traditional neural prosthetic device and robotic arm, would enable patients to manipulate their environment and would provide feedback allowing them to recognize, for example, that they had used too much or too little force when grasping an object. The project is being done in collaboration with physicians Charles Liu, Brian Lee, and Christi Heck at the Keck School of Medicine of USC and the USC Neurorestoration Center. 

A full list of the Cal-BRAIN funded institutions, researchers, and projects can be viewed here: http://cal-brain.org/content/cal-brain-awards-2015.

Writer: 
Exclude from News Hub: 
No
News Type: 
In Our Community

Four from Caltech Elected to National Academy of Sciences

One Caltech professor and three Caltech alumni have been elected to the prestigious National Academy of Sciences (NAS). The announcement was made Tuesday, April 28, in Washington, D.C.

The Caltech faculty member elected to the Academy is Marianne Bronner, Albert Billings Ruddock Professor of Biology and Executive Officer for Neurobiology. Bronner is a developmental biologist whose studies focus on the cellular and molecular events underlying the formation, lineage, and migration of neural crest cells—early cells in vertebrate embryos that eventually give rise to a diverse range of cell types, from neurons to facial skeleton cells. In 2013, she was awarded the Conklin Award from the Society for Developmental Biology, and she currently serves as the editor in chief of Developmental Biology, the society's official journal. She is a fellow of the American Academy of Arts and Sciences (2009) and presently serves on the scientific advisory board of the Sontag Foundation.

Bronner joins 75 current Caltech faculty and three trustees as members of the NAS. Also included in this year's new members are three Caltech alumni: Eric Betzig (BS '83), a Nobel laureate and group leader in physics at the Howard Hughes Medical Institute's Janelia Research Campus; Robert Schoelkopf (PhD '95), the Sterling Professor of Applied Physics and Physics at Yale; and William Ward (PhD '73), an institute scientist in the Department of Space Studies at the Southwest Research Institute.

The National Academy of Sciences is a private organization of scientists and engineers dedicated to the furtherance of science and its use for the general welfare. It was established in 1863 by a congressional act of incorporation signed by Abraham Lincoln that calls on the academy to act as an official adviser to the federal government, upon request, in any matter of science or technology.

A full list of new members is available on the academy website at: http://www.nasonline.org/news-and-multimedia/news/april-28-2015-NAS-Election.html

Frontpage Title: 
Bronner and Three Others Elected to National Academy of Sciences
Writer: 
Exclude from News Hub: 
No
News Type: 
In Our Community

Switching On One-Shot Learning in the Brain

Caltech researchers find the brain regions responsible for jumping to conclusions

Most of the time, we learn only gradually, incrementally building connections between actions or events and outcomes. But there are exceptions—every once in a while, something happens and we immediately learn to associate that stimulus with a result. For example, maybe you have had bad service at a store once and sworn that you will never shop there again.

This type of one-shot learning is more than handy when it comes to survival—think of an animal quickly learning to avoid a type of poisonous berry. In that case, jumping to the conclusion that the fruit was to blame for a bout of illness might help the animal steer clear of the same danger in the future. On the other hand, quickly drawing connections despite a lack of evidence can also lead to misattributions and superstitions; for example, you might blame a new food you tried for an illness when in fact it was harmless, or you might begin to believe that if you do not eat your usual meal, you will get sick.

Scientists have long suspected that one-shot learning involves a different brain system than gradual learning, but could not explain what triggers this rapid learning or how the brain decides which mode to use at any one time.

Now Caltech scientists have discovered that uncertainty in terms of the causal relationship—whether an outcome is actually caused by a particular stimulus—is the main factor in determining whether or not rapid learning occurs. They say that the more uncertainty there is about the causal relationship, the more likely it is that one-shot learning will take place. When that uncertainty is high, they suggest, you need to be more focused in order to learn the relationship between stimulus and outcome.

The researchers have also identified a part of the prefrontal cortex—the large brain area located immediately behind the forehead that is associated with complex cognitive activities—that appears to evaluate such causal uncertainty and then activate one-shot learning when needed.

The findings, described in the April 28 issue of the journal PLOS Biology, could lead to new approaches for helping people learn more efficiently. The work also suggests that an inability to properly attribute cause and effect might lie at the heart of some psychiatric disorders that involve delusional thinking, such as schizophrenia.

"Many have assumed that the novelty of a stimulus would be the main factor driving one-shot learning, but our computational model showed that causal uncertainty was more important," says Sang Wan Lee, a postdoctoral scholar in neuroscience at Caltech and lead author of the new paper. "If you are uncertain, or lack evidence, about whether a particular outcome was caused by a preceding event, you are more likely to quickly associate them together."

The researchers used a simple behavioral task paired with brain imaging to determine where in the brain this causal processing takes place. Based on the results, it appears that the ventrolateral prefrontal cortex (VLPFC) is involved in the processing and then couples with the hippocampus to switch on one-shot learning, as needed.

Indeed, a switch is an appropriate metaphor, says Shinsuke Shimojo, Caltech's Gertrude Baltimore Professor of Experimental Psychology. Since the hippocampus is known to be involved in so-called episodic memory, in which the brain quickly links a particular context with an event, the researchers hypothesized that this brain region might play a role in one-shot learning. But they were surprised to find that the coupling between the VLPFC and the hippocampus was either all or nothing. "Like a light switch, one-shot learning is either on, or it's off," says Shimojo.

In the behavioral study, 47 participants completed a simple causal-inference task; 20 of those participants completed the study in the Caltech Brain Imaging Center, where their brains were monitored using functional magnetic resonance imaging. The task consisted of multiple trials. During each trial, participants were shown a series of five images one at a time on a computer screen. Over the course of the task, some images appeared multiple times, while others appeared only once or twice. After every fifth image, either a positive or negative monetary outcome was displayed. Following a number of trials, participants were asked to rate how strongly they thought each image and outcome were linked. As the task proceeded, participants gradually learned to associate some of the images with particular outcomes. One-shot learning was apparent in cases where participants made an association between an image and an outcome after a single pairing.

The researchers hypothesize that the VLPFC acts as a controller mediating the one-shot learning process. They caution, however, that they have not yet proven that the brain region actually controls the process in that way. To prove that, they will need to conduct additional studies that will involve modifying the VLPFC's activity with brain stimulation and seeing how that directly affects behavior.

Still, the researchers are intrigued by the fact that the VLPFC is very close to another part of the ventrolateral prefrontal cortex that they previously found to be involved in helping the brain to switch between two other forms of learning—habitual and goal-directed learning, which involve routine behavior and more carefully considered actions, respectively. "Now we might cautiously speculate that a significant general function of the ventrolateral prefrontal cortex is to act as a leader, telling other parts of the brain involved in different types of behavioral functions when they should get involved and when they should not get involved in controlling our behavior," says coauthor John O'Doherty, professor of psychology and director of the Caltech Brain Imaging Center.

The work, "Neural Computations Mediating One-Shot Learning in the Human Brain," was supported by the National Institutes of Health, the Gordon and Betty Moore Foundation, the Japan Science and Technology Agency–CREST, and the Caltech-Tamagawa global Center of Excellence.

Writer: 
Kimm Fesenmaier
Frontpage Title: 
Switching on One-Shot Learning in the Brain
Listing Title: 
Switching on One-Shot Learning in the Brain
No
Short Title: 
One-Shot Learning in the Brain
News Type: 
Research News
