Caltech Researchers Receive NIH BRAIN Funding

On September 30, the National Institutes of Health (NIH) announced its first round of funding in support of President Obama's "Brain Research through Advancing Innovative Neurotechnologies"—or BRAIN—Initiative. Included among the 58 funded projects—all of which, according to the NIH, are geared toward the development of "new tools and technologies to understand neural circuit function and capture a dynamic view of the brain in action"—are six projects either led or co-led by Caltech researchers.

The Caltech projects are:

"Dissecting human brain circuits in vivo using ultrasonic neuromodulation"

Doris Tsao, assistant professor of biology
Mikhail Shapiro, assistant professor of chemical engineering

Tsao and Shapiro are teaming up to develop a new technology that uses ultrasound both to map and determine the function of interconnected brain networks and, ultimately, to change neural activity deep within the brain. "This would open new horizons for understanding human brain function and connectivity, and create completely new options for the noninvasive treatment of brain diseases such as intractable epilepsy, depression, and Parkinson's disease," Tsao says. "The key," Shapiro adds, "is to gain a precise understanding of the various mechanisms by which sound waves interact with neurons in the brain so we can use ultrasound to produce very specific neurological effects. We will be able to do this across the full spectrum, from molecules up to large model organisms."

"Modular nanophotonic probes for dense neural recording at single-cell resolution"

Michael Roukes, Robert M. Abbey Professor of Physics, Applied Physics, and Bioengineering
Thanos Siapas, professor of computation and neural systems

Roukes, Siapas, and their colleagues at Columbia University and Baylor College of Medicine propose to build ultra-dense arrays of miniature light-emitting and light-sensing probes using advanced silicon "chip" technology that permits their production en masse. These probes open the new field of integrated neurophotonics, Roukes says, and will permit simultaneous recording of the electrical activity of hundreds of thousands to, ultimately, millions of neurons, with single-cell resolution, in any given region of the brain. "The instrumentation we'll develop will enable us to observe the trafficking of information, in vivo, in brain circuits on an unprecedented scale, and to correlate this activity with behavior," he says.

"Time-Reversal Optical Focusing for Noninvasive Optogenetics"

Changhuei Yang, professor of electrical engineering, bioengineering, and medical engineering
Viviana Gradinaru, assistant professor of biology

Deep-brain stimulation has been used successfully for nearly two decades for the treatment of epilepsy, Parkinson's disease, chronic pain, depression, and other disorders. Current systems rely on electrodes implanted deep within the brain to modify the firing pattern of specific clusters of neurons, bringing them back into a more normal pattern. Yang and Gradinaru are working together on a method that would use only light waves to noninvasively activate light-sensitive molecules and precisely guide the firing of nerves. Biological tissues are opaque due to the scattering of light waves, and that scattering makes it impossible to finely focus a laser beam deep into brain tissue. The researchers hope to use an optical "time-reversal" trick previously developed by Yang to counteract the scattering, allowing light beams to be targeted to specific locations within the brain. "The technology to be developed in this project has the potential for wide-ranging applications, including noninvasive deep brain stimulation and precise incisionless laser surgery," Yang says.

"Integrative Functional Mapping of Sensory-Motor Pathways"

Michael H. Dickinson, Esther M. and Abe M. Zarem Professor of Bioengineering

As in other animals, locomotion in the fruit fly is a complicated process involving the interplay of sensory systems and motor circuits in the brain. Dickinson and his colleagues hope to decipher just how the brain uses sensory information to guide movements by developing a system to record the activity of large numbers of individual neurons from across the brains of fruit flies, as the flies fly in a flight simulator or walk on a treadmill and are simultaneously exposed to various sights and sounds. Understanding sensory–motor integration, he says, should lead to a better understanding of human disorders, including Parkinson's disease, stroke, and spinal cord injury, and aid in the design and optimization of robotic prosthetic limbs and devices that restore sight and other senses.

"Establishing a Comprehensive and Standardized Cell Type Characterization Platform"

David J. Anderson, Seymour Benzer Professor of Biology; Investigator, Howard Hughes Medical Institute (co-PI)

In collaboration with Hongkui Zeng and colleagues at the Allen Institute for Brain Science in Seattle, Anderson will help to develop a detailed, publicly available database characterizing the genetic, physiological, and morphological features of the various cell types in the brain that are involved in circuits controlling sensations and emotions. Understanding the cellular building blocks of brain circuits, the researchers say, is crucial for figuring out how those circuits can malfunction in disease. In particular, Anderson's lab will focus on the cells of the brain's hypothalamus and amygdala—structures that are vital to emotions and behavior, and involved in human psychiatric disorders such as post-traumatic stress disorder, anxiety, and depression. "This project will serve as a model for hub-and-spoke collaborations between academic laboratories and the Allen Institute, permitting access to their valuable resources and technologies while advancing the field more broadly," Anderson says.

"Vertically integrated approach to visual neuroscience: microcircuits to behavior"

Markus Meister, Lawrence A. Hanson, Jr. Professor of Biology (co-PI)

This project, led by Hyunjune Sebastian Seung of Princeton University, will use genetic, electrophysiological, and imaging tools to identify and map the neural connections of the retina, the light-sensing tissue in the eye, and determine their roles in visual perception and behavior. "Here we are shooting for a vertically integrated understanding of a neural system," Meister says. "The retina offers such a fantastic degree of experimental access that one can hope to bridge all scales of organization, from molecules to cells to microcircuits to behavior. We hope that success here can eventually serve as a blueprint for understanding other parts of the brain." Knowing the neural mechanisms for vision can also influence technological applications, such as new algorithms for computer vision, or the development of retinal prostheses for the treatment of blindness.


Swimming Sea-Monkeys Reveal How Zooplankton May Help Drive Ocean Circulation

Brine shrimp, which are sold as pets known as Sea-Monkeys, are tiny—only about half an inch long each. With about 10 small leaf-like fins that flap about, they look as if they could hardly make waves.

But get billions of similarly tiny organisms together and they can move oceans.

It turns out that the collective swimming motion of Sea-Monkeys and other zooplankton—swimming plankton—can generate enough swirling flow to potentially influence the circulation of water in oceans, according to a new study by Caltech researchers.

The effect could be as strong as those due to the wind and tides, the main factors that are known to drive the up-and-down mixing of oceans, says John Dabiri, professor of aeronautics and bioengineering at Caltech. According to the new analysis by Dabiri and mechanical engineering graduate student Monica Wilhelmus, organisms like brine shrimp, despite their diminutive size, may play a significant role in stirring up nutrients, heat, and salt in the sea—major components of the ocean system.

In 2009, Dabiri's research team studied jellyfish to show that small animals can generate flow in the surrounding water. "Now," Dabiri says, "these new lab experiments show that similar effects can occur in organisms that are much smaller but also more numerous—and therefore potentially more impactful in regions of the ocean important for climate."

The researchers describe their findings in the journal Physics of Fluids.

Brine shrimp (specifically Artemia salina) can be found in toy stores, as part of kits that allow you to raise a colony at home. But in nature, they live in bodies of salty water, such as the Great Salt Lake in Utah. Their behavior is cued by light: at night, they swim toward the surface to munch on photosynthesizing algae while avoiding predators. During the day, they sink back into the dark depths of the water.

A. salina (a species of brine shrimp, commonly known as Sea-Monkeys) begin a vertical migration, stimulated by a vertical blue laser light.

To study this behavior in the laboratory, Dabiri and Wilhelmus use a combination of blue and green lasers to induce the shrimp to migrate upward inside a big tank of water. The green laser at the top of the tank provides a bright target for the shrimp to swim toward while a blue laser rising along the side of the tank lights up a path to guide them upward.

The tank water is filled with tiny, silver-coated hollow glass spheres 13 microns wide (about one-half of one-thousandth of an inch). By tracking the motion of those spheres with a high-speed camera and a red laser that is invisible to the organisms, the researchers can measure how the shrimp's swimming causes the surrounding water to swirl.

Although researchers had proposed the idea that swimming zooplankton can influence ocean circulation, the effect had never been directly observed, Dabiri says. Past studies could only analyze how individual organisms disturb the water surrounding them.

But thanks to this new laser-guided setup, Dabiri and Wilhelmus have been able to determine that the collective motion of the shrimp creates powerful swirls—stronger than would be produced by simply adding up the effects produced by individual organisms.

Adding up the effect of all of the zooplankton in the ocean—assuming they have a similar influence—could inject as much as a trillion watts of power into the oceans to drive global circulation, Dabiri says. In comparison, the winds and tides contribute a combined two trillion watts.

Using this new experimental setup will enable future studies to better untangle the complex relationships between swimming organisms and ocean currents, Dabiri says. "Coaxing Sea-Monkeys to swim when and where you want them to is even more difficult than it sounds," he adds. "But Monica was undeterred over the course of this project and found a creative solution to a very challenging problem."

The title of the Physics of Fluids paper is "Observations of large-scale fluid transport by laser-guided plankton aggregations." The research was supported by the U.S.-Israel Binational Science Foundation, the Office of Naval Research, and the National Science Foundation.


What Is Possible in Real-World Communication Systems: An Interview with Victoria Kostina

In 1948, the mathematician and engineer Claude Shannon published a paper that showed mathematically that it should be possible to transmit information reliably at a high rate even in a noisy system. And with that, the field of information theory was born.

Caltech's newest assistant professor of electrical engineering, Victoria Kostina, works in this field. She comes to Caltech's Division of Engineering and Applied Science from Princeton University, where she earned her PhD in electrical engineering and completed a postdoctoral appointment. Prior to that, she earned her master's in 2006 from the University of Ottawa and her undergraduate degree in applied mathematics and physics in 2004 from the Moscow Institute of Physics and Technology.

We sat down with Kostina to talk about communication systems and her work.


Can you tell us a little bit about your field?

Information theory is about the transmission of information. It studies, in a very abstract way, elements in all communication systems. And a communication system is something very general—it is any system with a transmitter, a receiver, a communication channel, and information that needs to be relayed. So, for example, as we are talking right now, we represent a communication system: I am trying to transmit my information to you, the receiver, through the air, which is our communication channel.

In his famous 1948 paper, Claude Shannon, the father of this field, asked, "What is achievable in a communication system when you have noise?" So imagine if we were standing in opposite corners of a large room filled with people talking amongst themselves, and I needed to talk to you, but for some reason I could not move. What could I do? I could try to yell louder to combat the noise. But yelling is very expensive because it strains your vocal cords—in other words, it can burn out the amplifiers in the system. Just increasing the power is not a good way to address the problem.

Another thing I could try to do is just repeat my message many times with a low voice, hoping that at some point the noise in the room would be low enough for my message to get to you. This is what is known as a repetition code. It is the simplest code you can think of—it just repeats the same message many, many times.

Of course, there are data transmission codes that are much more intelligent than a repetition code, and they are capable of transmitting more information much faster.
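
The repetition code described above is easy to sketch. Here is a minimal toy in Python—an illustration, not drawn from the interview—assuming a binary symmetric channel that flips each bit independently with probability 0.2:

```python
import random

def transmit(bits, flip_prob):
    # Binary symmetric channel: each bit is flipped with probability flip_prob.
    return [b ^ (random.random() < flip_prob) for b in bits]

def repetition_encode(bits, n):
    # Repeat each bit n times.
    return [b for b in bits for _ in range(n)]

def repetition_decode(received, n):
    # Majority vote over each block of n repeats.
    return [int(sum(received[i:i + n]) > n // 2)
            for i in range(0, len(received), n)]

random.seed(0)
message = [random.randint(0, 1) for _ in range(1000)]
noisy = transmit(repetition_encode(message, 9), flip_prob=0.2)
decoded = repetition_decode(noisy, 9)
errors = sum(m != d for m, d in zip(message, decoded))
print(f"bit errors after decoding: {errors} / {len(message)}")
```

With each bit repeated nine times, majority voting drives the error rate from 20 percent down to roughly 2 percent—but at the cost of a ninefold drop in data rate, which is exactly why smarter codes matter.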


Did Shannon have an optimal solution?

What Shannon showed is that, using intelligent coding systems, you can in fact transmit many bits of information per unit of time even when the noise is very, very high. This was a remarkable insight, and at the time, people didn't know how to achieve this. Shannon made this insight based on mathematical modeling; he didn't show how to build the codes.

Information theorists like myself try to understand, for any communication system, what data transmission rate is attainable. In other words, what is the optimal data rate that we can achieve in these systems?
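
Shannon's result can be made concrete for the simplest noisy channel—the binary symmetric channel, which is not discussed in the interview but is a standard textbook instance. Its capacity is C = 1 − H(p) bits per channel use, where H is the binary entropy of the flip probability p:

```python
from math import log2

def bsc_capacity(p):
    # Capacity of a binary symmetric channel with crossover probability p,
    # in bits per channel use: C = 1 - H(p), where H is the binary entropy.
    if p in (0.0, 1.0):
        return 1.0
    h = -p * log2(p) - (1 - p) * log2(1 - p)
    return 1 - h

for p in (0.01, 0.1, 0.2):
    print(f"p = {p}: capacity = {bsc_capacity(p):.3f} bits per channel use")
```

Even at a 20 percent flip rate the capacity is positive (about 0.28 bits per use), so reliable communication remains possible—Shannon's point that noise reduces the achievable rate without eliminating it.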


And do you use mathematical modeling to try to determine that?

Absolutely. This research requires tools from probability theory, from the theory of random processes, etc. What we try to do is to look at the problem and extract the most relevant features to make it as simple as possible. Then we describe the problem mathematically, solve it, and then try to bring it back to the real world and see how this mathematical insight might help design real-world communication systems.


What is the specific focus of your research?

My research has to do with bridging the gap between what Shannon's theory tells us and the real world.

Shannon's theory showed us the limit of what is achievable in the case where we have all the time in the world to transmit information. But in real-world systems, especially in modern real-time applications, we cannot afford to wait forever. Anyone who has talked on Skype knows this. A delay of even a couple of seconds gets annoying very quickly.

So I'm trying to understand the fundamental limits in systems in which the delays are strictly bounded. This would ultimately inform real-world designs. Of course, there are many challenges in achieving that goal; even after you know the fundamental limit, a lot of work must be done to design codes that can attain that limit.

What are the applications of this work?

There are two points I would like to make about applications. The first is that it's important to know these limits so that the people who are working on the design of better codes know what to shoot for. Let's say they know the fundamental limit, and they're measuring the performance of an algorithm and know that they're already quite close to the limit. In that case, maybe it isn't worth spending more effort and investing into further development of that code. However, if they see there is still a big gap or room for improvement, maybe it is worth it to invest.

The second point is that as we go through our analysis of the system, we do gain insights into how to build real-world codes. We come away understanding some essential properties that a good code should have and that a coding engineer should aim for in order to attain those unsurpassable fundamental limits.


What do you find most exciting about your work?

I love that it is very basic research, very theoretical. Once we strip away all the particularities of a given problem, we are left with a mathematical model, which is timeless. Once you solve such a problem, it stays there.

But at the same time, I like that this work applies to the real world. The fact that it gives us insights into how to improve existing communication systems is a very exciting feature for me.


Do you remember how you originally got interested in these sorts of questions?

I grew up in a small town near Moscow, and both of my parents were engineers. So I think from a very early age they instilled in me this inquisitive outlook on the world. They always had very interesting, long answers to my questions like, "Why is the sky blue?" and "How do planes fly?" I think that's how I knew that I wanted to understand more about the world from a mathematical point of view.

Kimm Fesenmaier

Variability Keeps the Body in Balance

Although the heart beats out a very familiar "lub-dub" pattern that speeds up or slows down as our activity increases or decreases, the pattern itself isn't as regular as you might think. In fact, the amount of time between heartbeats can vary even at a "constant" heart rate—and that variability, doctors have found, is a good thing.

Reduced heart rate variability (HRV) has been found to be predictive of a number of illnesses, such as congestive heart failure and inflammation. For athletes, a drop in HRV has also been linked to fatigue and overtraining. However, the underlying physiological mechanisms that control HRV—and exactly why this variation is important for good health—are still a bit of a mystery.
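
HRV is conventionally quantified from the intervals between successive beats. As an illustration (the study's own metrics are not specified here), two standard measures—SDNN and RMSSD—can be computed in a few lines of Python:

```python
from statistics import pstdev

def sdnn(rr_ms):
    # SDNN: standard deviation of the RR (beat-to-beat) intervals, in ms.
    return pstdev(rr_ms)

def rmssd(rr_ms):
    # RMSSD: root mean square of successive RR-interval differences, in ms.
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

# Example: intervals near 860 ms (about 70 beats per minute) with some variability.
rr = [850, 870, 845, 880, 860, 840, 875, 855]
print(f"SDNN  = {sdnn(rr):.1f} ms")
print(f"RMSSD = {rmssd(rr):.1f} ms")
```

A perfectly metronomic heart would score zero on both measures; the clinical observation is that such low scores are a warning sign, not a sign of health.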

By combining heart rate data from real athletes with a branch of mathematics called control theory, a collaborative team of physicians and Caltech researchers from the Division of Engineering and Applied Science has now devised a way to better understand the relationship between HRV and health—a step that could soon inform better monitoring technologies for athletes and medical professionals.

The work was published in the August 19 print issue of the Proceedings of the National Academy of Sciences.

To run smoothly, complex systems, such as computer networks, cars, and even the human body, rely upon give-and-take connections and relationships among a large number of variables; if one variable must remain stable to maintain a healthy system, another variable must be able to flex to maintain that stability. Because it would be too difficult to map each individual variable, the mathematics and software tools used in control theory allow engineers to summarize the ups and downs in a system and pinpoint the source of a possible problem.

Researchers who study control theory are increasingly discovering that these concepts can also be extremely useful in studies of the human body. In order for a body to work optimally, it must operate in an environment of stability called homeostasis. When the body experiences stress—for example, from exercise or extreme temperatures—it can maintain a stable blood pressure and constant body temperature in part by dialing the heart rate up or down. And HRV plays an important role in maintaining this balance, says study author John Doyle, the Jean-Lou Chameau Professor of Control and Dynamical Systems, Electrical Engineering, and Bioengineering.

"A familiar related problem is in driving," Doyle says. "To get to a destination despite varying weather and traffic conditions, any driver—even a robotic one—will change factors such as acceleration, braking, steering, and wipers. If these factors suddenly became frozen and unchangeable while the car was still moving, it would be a nearly certain predictor that a crash was imminent. Similarly, loss of heart rate variability predicts some kind of malfunction or 'crash,' often before there are any other indications," he says.

To study how HRV helps maintain this version of "cruise control" in the human body, Doyle and his colleagues measured the heart rate, respiration rate, oxygen consumption, and carbon dioxide generation of five healthy young athletes as they completed experimental exercise routines on stationary bicycles.

By combining the data from these experiments with standard models of the physiological control mechanisms in the human body, the researchers were able to determine the essential tradeoffs that are necessary for athletes to produce enough power to maintain an exercise workload while also maintaining the internal homeostasis of their vital signs.

"For example, the heart, lungs, and circulation must deliver sufficient oxygenated blood to the muscles and other organs while not raising blood pressure so much as to damage the brain," Doyle says. "This is done in concert with control of blood vessel dilation in the muscles and brain, and control of breathing. As the physical demands of the exercise change, the muscles must produce fluctuating power outputs, and the heart, blood vessels, and lungs must then respond to keep blood pressure and oxygenation within narrow ranges."

Once these trade-offs were defined, the researchers used control theory to analyze the exercise data and found that a healthy heart must maintain certain patterns of variability during exercise to keep this complicated system in balance. Loss of this variability is a precursor of the fatigue induced by the stress of exercise. Today, some HRV monitors in the clinic can let a doctor know when variability is high or low, but they provide little in the way of an actionable diagnosis.

Because monitors in hospitals can already provide HRV levels and dozens of other signals and readings, integrating such control-theoretic analyses into HRV monitors could, in the future, provide a way to link a drop in HRV to a more specific and treatable diagnosis. In fact, one of Doyle's students has used a control-theory-based analysis of HRV to better interpret traditional EKG signals.

Control theory could also be incorporated into the HRV monitors used by athletes to prevent fatigue and injury from overtraining, he says.

"Physicians who work in very data-intensive settings like the operating room or ICU are in urgent need of ways to rapidly and acutely interpret the data deluge," says Marie Csete, MD (PhD, '00), chief scientific officer at the Huntington Medical Research Institutes and a coauthor on the paper. "We hope this work is a first step in a larger research program that helps physicians make better use of data to care for patients."

This study is not the first to apply control theory in medicine. Control theory has already informed the design of a wearable artificial pancreas for type 1 diabetic patients and an automated prototype device that controls the administration of anesthetics during surgery. Nor will it be the last, says Doyle, whose sights are next set on using control theory to understand the progression of cancer.

"We have a new approach, similarly based on control of networks, that organizes and integrates a bunch of new ideas floating around about the role of healthy stroma—non-tumor cells present in tumors—in promoting cancer progression," he says.

"Based on discussions with Dr. Peter Lee at City of Hope [a cancer research and treatment center], we now understand that the non-tumor cells interact with the immune system and with chemotherapeutic drugs to modulate disease progression," Doyle says. "And I'm hoping there's a similar story there, where thinking rigorously about the tradeoffs in development, regeneration, inflammation, wound healing, and cancer will lead to new insights and ultimately new therapies."

Other Caltech coauthors on the study include former graduate students Na Li (PhD '13) now an assistant professor at Harvard; Somayeh Sojoudi (PhD '12), currently at NYU; and graduate students Chenghao Simon Chien and Jerry Cruz. Other collaborators on the study were Benjamin Recht, a former postdoctoral scholar in Doyle's lab and now an assistant professor at UC Berkeley; Daniel Bahmiller, a clinician training in public health; and David Stone, MD, an expert in ICU medicine from the University of Virginia School of Medicine.


Remembering Frank Marble


Frank Earl Marble (Eng '47, PhD '48), Caltech's Richard L. and Dorothy M. Hayman Professor of Mechanical Engineering and Professor of Jet Propulsion, Emeritus, passed away on August 11, 2014, two months after the death of Ora Lee Marble, his wife of 71 years. Marble was one of the fathers of modern jet engines; his doctoral thesis included a method for calculating the three-dimensional airflow through rows of rotating blades. A jet engine is essentially two sets of blades on a common axle. A compressor at the front of the engine slows the incoming air and feeds it to the burner, and a turbine spinning in the hot gases downstream ejects the exhaust and drives the compressor. More broadly, Marble's methods apply to any fluid flowing along the axis of a fan, pump, turbine, or propeller.

Born in Cleveland, Ohio, on July 21, 1918, 15 years after the Wright brothers' first powered flight, Marble got interested in aviation in grade school. The Cleveland airfield was "a long streetcar ride away," he recalled in his Caltech oral history, and he "could wander into the hangars" unsupervised. He got his pilot's license before his driver's license.

Marble earned his BS in aeronautics in 1940 at the Case School of Applied Science (now Case Western Reserve University), "about a two-mile walk from home." For his master's degree in 1942, he built a fan designed to measure the surface pressure along a blade as it cut through the air. Holes in the blade led to a set of pressure gauges; the trick, he noted, was inventing the "slip seal" at the fan's hub that kept the holes and their gauges connected. He brought the data with him to Caltech, where it eventually became the basis for his PhD work.

But first, Marble helped fight World War II from the Cleveland airport, joining the National Advisory Committee for Aeronautics' Aircraft Engine Research Lab (now NASA's John H. Glenn Research Center at Lewis Field). Marble led the team troubleshooting the B-29 Superfortress, capable of flying thousands of miles at 30,000 feet with 10 tons of bombs. The "Superfort" was the biggest, heaviest plane of the war and its four engines often overheated; a significant number were ditched in the Pacific after engine fires. Several alterations to the airflow maximized the engine cooling, and the B-29 would remain in service into the 1960s.

On receiving his doctorate from Caltech in 1948, Marble was hired as an assistant professor by Tsien Hsue-shen (PhD '39), the Goddard Professor of Jet Propulsion. Tsien assigned him to develop a set of courses in this new field, which blended chemistry, gas dynamics, and materials science.

Tsien also gave Marble a half-time appointment at Caltech's Jet Propulsion Laboratory (JPL), which in the pre-NASA era really was studying jet propulsion, developing missiles under contract with the army. Tsien and his fellow members of the "suicide squad" had founded JPL in the wide-open scrublands of the upper Arroyo Seco in the 1930s after a string of accidents and explosions had gotten them evicted from the campus aeronautics lab. By the late 1940s, JPL had grown into an unrivaled set of testing facilities sprawled across some 60 acres.

Marble was put in charge of the group trying to build a workable ramjet—a turbine-less supersonic engine that compresses air by "ramming" it into an inlet that rapidly slows it to subsonic speeds. An ordinary turbojet's ignition source sits in a flame holder, or "can," mounted just behind the compressor. Like a rock in a river, this obstruction creates an eddy in its wake where hot, slow-moving gas gets trapped. This region of relative calm nurtures a stable flame. In a ramjet, however, a momentary tongue of flame would blow out the back of the engine just before it quit.

Marble attacked the problem by repurposing the ramjet lab for combustion research, leading to a string of breakthroughs in the mid-1950s. First, he and graduate student Tom Adamson (MS '50, PhD '54) mathematically analyzed the contact zone between the fuel and the wake. The fuel diffuses across this mixing layer and ignites on contact with the wake, replenishing the eddy's hot gas. By assuming that the mixing layer's gases flowed in a parallel, laminar fashion, Marble and Adamson were able to predict how far downstream the fuel would catch fire and how stable the flame would be. Says Adamson, "We didn't answer every question about combustion in laminar mixing, but we answered many of them." Studies of premixed ignition still refer to the "Marble-Adamson problem" as a paradigm.

High-speed "movies" of the flame confirmed the laminar ignition theory. The movies also showed why the flame blew out—as the airflow increased, the mixing layer suddenly turned turbulent. This dislodged the eddy, which promptly dissipated. The results were "scalable," meaning that they could be applied to any combination of fuel and hardware to find a flame-holder diameter and airstream velocity that would guarantee a steady burn.

Other movies demystified the mechanism behind a type of catastrophic engine failure whose early stages were announced by a 160-decibel screech. These images revealed that the curling tendrils of burnt fuel entering the eddy conjured up opposing whirlpools in order to keep the flow's overall angular momentum in balance. This second set of whirlpools spread outward, and if they withdrew enough heat from the mixing layer, they would themselves ignite. A natural acoustic resonance in the engine could then amplify their thermal energy tenfold en route to the walls. "My desk was 600 feet away," Adamson says. "When the motor began to screech, things shook so hard I couldn't write."

Marble's group also figured out what makes a compressor stall, which happens when its rotating blades lose their "bite." (In a bad stall, the high-pressure surge of air escaping backward through the compressor can do enough damage to bring down an airplane.) Howard Emmons at Harvard had found that an individual blade stalled when it entered a cell of reduced pressure that separated the airflow from the blade, and that these cells leapt from blade to blade; think of the slats of a Venetian blind rippling up and down in a breeze. Marble developed a two-dimensional model of the ripple's essential features—a neat complement to his PhD work on unstalled flow.

Meanwhile, the Chinese-born Tsien had fallen victim to the Red Scare. His top-secret clearance was revoked in the autumn of 1950. For the next five years the Immigration and Naturalization Service forbade him from leaving Caltech's environs. He was unable to enter JPL, or to participate in classified research on campus—in effect, barred from aeronautics altogether. When the Tsiens were evicted from the house they rented, Marble found them another; when they were evicted from that one as well, the Marbles took them in. (Ironically, after being deported in 1955, the embittered Tsien did join the Communist Party and led China into the space age.)

Marble returned to campus full-time in 1959 and began studying multiphase gas dynamics, in which a gas carries tiny particles—in this case, motes of aluminum oxide, routinely added to solid rocket fuels to make them burn hotter. The grains moved more slowly than the gas and their mass affected its flow, causing the rockets to underperform. Marble helped design the nozzle for the solid-fuel Minuteman intercontinental ballistic missile in the early 1960s, but it took most of the decade to work out a complete mathematical treatment of dusty flows.

Marble spent the '70s studying various sources of jet-engine noise before returning to combustion research. Caltech professors Anatol Roshko (MS '47, PhD '52) and Garry Brown had shown in the early '70s that a turbulent shear flow's swirls retained their identities for considerable distances downstream, stretching the mixing layer and wrapping it around itself. Marble and graduate student Ann Karagozian (PhD '82) set about studying how diffusion-driven flames interacted with these vortices—"a very fundamental problem," says Karagozian. "Frank pioneered the coherent-flame model of turbulent combustion, and researchers still use 'flamelet models' in very complicated turbulent combustion simulations."

In addition to his research accomplishments, Marble was legendary for his teaching prowess—and his penchant for 8:00 a.m. lectures delivered "with breathtaking clarity and almost without notes," Karagozian says. "It was tough getting up early for them, but the lectures were incredibly stimulating and rigorous."

Marble's 60-odd graduate students included a who's who of aerospace engineers as well as Benoit Mandelbrot (Eng '49), the father of fractal geometry. The Frank and Ora Lee Marble Professorship and a graduate fellowship have been established by his students and friends to honor his impact as a mentor as well as a scientist.

Marble was an elected member of both the National Academy of Engineering and the National Academy of Sciences, a rare distinction, and a fellow of the American Institute of Aeronautics and Astronautics (AIAA). His other honors included the AIAA's Propellants and Combustion Award and the Daniel Guggenheim Medal, often regarded as the Nobel Prize of aeronautics.

Marble is survived by his son, Stephen; his daughter-in-law, Cheryl; two grandchildren; and one great-grandson. Marble's daughter, Patricia, died in 1996.

A memorial service is planned for Saturday, October 4. 

Douglas Smith

Ceramics Don't Have To Be Brittle

Caltech Materials Scientists Are Creating Materials By Design

Imagine a balloon that could float without using any lighter-than-air gas. Instead, it could simply have all of its air sucked out while maintaining its filled shape. Such a vacuum balloon, which could help ease the world's current shortage of helium, could only be made if a new material existed that was strong enough to sustain the pressure generated by forcing out all that air while still being lightweight and flexible.

Caltech materials scientist Julia Greer and her colleagues are on the path to developing such a material and many others that possess unheard-of combinations of properties. For example, they might create a material that is thermally insulating but also extremely lightweight, or one that is simultaneously strong, lightweight, and nonbreakable—properties that are generally thought to be mutually exclusive.

Greer's team has developed a method for constructing new structural materials by taking advantage of the unusual properties that solids can have at the nanometer scale, where features are measured in billionths of meters. In a paper published in the September 12 issue of the journal Science, the Caltech researchers explain how they used the method to produce a ceramic (e.g., a piece of chalk or a brick) that contains about 99.9 percent air yet is incredibly strong, and that can recover its original shape after being compressed by more than 50 percent.

"Ceramics have always been thought to be heavy and brittle," says Greer, a professor of materials science and mechanics in the Division of Engineering and Applied Science at Caltech. "We're showing that in fact, they don't have to be either. This very clearly demonstrates that if you use the concept of the nanoscale to create structures and then use those nanostructures like LEGO to construct larger materials, you can obtain nearly any set of properties you want. You can create materials by design."

The researchers use a direct laser writing method called two-photon lithography to "write" a three-dimensional pattern in a polymer by allowing a laser beam to crosslink and harden the polymer wherever it is focused. The parts of the polymer that were exposed to the laser remain intact while the rest is dissolved away, revealing a three-dimensional scaffold. That structure can then be coated with a thin layer of just about any kind of material—a metal, an alloy, a glass, a semiconductor, etc. Then the researchers use another method to etch out the polymer from within the structure, leaving a hollow architecture.

The applications of this technique are practically limitless, Greer says. Since pretty much any material can be deposited on the scaffolds, the method could be particularly useful for applications in optics, energy efficiency, and biomedicine. For example, it could be used to reproduce complex structures such as bone, producing a scaffold out of biocompatible materials on which cells could proliferate.

In the latest work, Greer and her students used the technique to produce what they call three-dimensional nanolattices that are formed by a repeating nanoscale pattern. After the patterning step, they coated the polymer scaffold with a ceramic called alumina (i.e., aluminum oxide), producing hollow-tube alumina structures with walls ranging in thickness from 5 to 60 nanometers and tubes from 450 to 1,380 nanometers in diameter.

Greer's team next wanted to test the mechanical properties of the various nanolattices they created. Using two different devices for poking and prodding materials on the nanoscale, they squished, stretched, and otherwise tried to deform the samples to see how they held up.

They found that the alumina structures with a wall thickness of 50 nanometers and a tube diameter of about 1 micron shattered when compressed. That was not surprising given that ceramics, especially those that are porous, are brittle. However, compressing lattices with a lower ratio of wall thickness to tube diameter—where the wall thickness was only 10 nanometers—produced a very different result.

"You deform it, and all of a sudden, it springs back," Greer says. "In some cases, we were able to deform these samples by as much as 85 percent, and they could still recover."

To understand why, consider that most brittle materials such as ceramics, silicon, and glass shatter because they are filled with flaws—imperfections such as small voids and inclusions. The more perfect the material, the less likely you are to find a weak spot where it will fail. Therefore, the researchers hypothesize, when you reduce these structures down to the point where individual walls are only 10 nanometers thick, both the number of flaws and the size of any flaws are kept to a minimum, making the whole structure much less likely to fail.
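The "fewer flaws in smaller volumes" reasoning above is the classic weakest-link argument from Weibull statistics. As a rough illustration only (the numbers and the Weibull modulus below are hypothetical, not taken from the paper), a brittle sample's characteristic strength rises as its stressed volume shrinks:

```python
# Illustrative weakest-link (Weibull) scaling: smaller stressed volumes
# contain fewer critical flaws, so characteristic strength goes up.
# sigma0, v0, and m are made-up reference values for demonstration.

def weibull_strength(volume, v0=1.0, sigma0=1.0, m=5.0):
    """Characteristic strength relative to a reference volume v0.
    m is the Weibull modulus; a lower m means a wider flaw distribution."""
    return sigma0 * (v0 / volume) ** (1.0 / m)

# Cutting the stressed volume to one-fifth of the reference (roughly what
# thinning a wall from 50 nm to 10 nm does at fixed area) raises the
# characteristic strength by 5**(1/5), about a factor of 1.38 in this toy model.
print(weibull_strength(volume=0.2))
```

This sketch captures only the statistical trend the researchers describe; the actual strength of the nanolattices also depends on geometry, wall material, and loading.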

"One of the benefits of using nanolattices is that you significantly improve the quality of the material because you're using such small dimensions," Greer says. "It's basically as close to an ideal material as you can get, and you get the added benefit of needing only a very small amount of material in making them."

The Greer lab is now aggressively pursuing various ways of scaling up the production of these so-called metamaterials.

The lead author on the paper, "Strong, Lightweight and Recoverable Three-Dimensional Ceramic Nanolattices," is Lucas R. Meza, a graduate student in Greer's lab. Satyajit Das, who was a visiting student researcher at Caltech, is also a coauthor. The work was supported by funding from the Defense Advanced Research Projects Agency and the Institute for Collaborative Biotechnologies. Greer is also on the board of directors of the Kavli Nanoscience Institute at Caltech.

Kimm Fesenmaier

Measuring Earthquake Shaking with the Community Seismic Network

In 2011, the Community Seismic Network (CSN) began taking data from small, inexpensive accelerometers in the greater Pasadena area. Able to measure both weak and strong ground movement along three axes, these accelerometers promise to provide very high-resolution data of shaking produced by seismic activity in the region. "We have quite a large deployment of these accelerometers, about 400 sensors now, in people's homes but also in schools and businesses, and in some high-rise buildings downtown," says Julian Bunn, principal computational scientist for Caltech's Center for Advanced Computing Research. "We run client software on each sensor that sends data up into Google's cloud. From there we can analyze the data from all these sensors."

The CSN is the brainchild of Professor of Geophysics Rob Clayton, Professor of Engineering Seismology Tom Heaton, and Simon Ramo Professor of Computer Science, Emeritus, K. Mani Chandy, and a collaboration among Caltech's seismology, earthquake engineering, and computer science departments. It has successfully detected the many earthquakes that have occurred since its establishment. In addition, the CSN currently assists in damage assessment by generating maps of peak ground acceleration before accurate measurements of the earthquake epicenter or magnitude are known.

However, the CSN could provide further assistance in damage assessment if it were also able to produce an immediate estimation of the magnitude. "Right now we only detect an event," says Bunn. "We don't estimate the magnitude." This is where Caltech junior Kevin Li comes in. Li has been spending his 10-week Summer Undergraduate Research Fellowship (SURF) trying to develop a machine-learning system that can accurately estimate the magnitude of an earthquake within seconds of its detection.

Of course, the USGS already accurately measures earthquake magnitudes, but it does so by means of highly sophisticated—and expensive—seismometers that are located several miles apart from one another. Post-quake "ShakeMaps" are then constructed by extrapolating from this data to estimate shaking between seismometer stations. The problem, as recent quakes in California have shown, is that shaking can vary widely even from block to block—as can damage and potential injuries. The CSN proposes to capture this variation and provide an important resource for first responders during major earthquakes, pinpointing areas likely to have the most damage. Should this pilot study prove fruitful, says Bunn, it could "provide better hazard mitigation in parts of the world where they can't afford these very expensive installations."

"Seismic networks like the USGS use really fine sensors," explains Li. "However, the CSN sensors sacrifice fine measurement precision for low-cost efficiency. The sensors record particularly noisy data, far noisier than what the USGS system is used to. As a result, we cannot just adopt the algorithms from USGS. We need to develop our own system."

So far, says Li, the work is going well. "I'm currently still in week nine of my 10 weeks, but I have a system that seems like it can give a magnitude estimate that is within 1 unit of magnitude. For instance, if the estimation is 5.4, then the real magnitude should be somewhere between 4.4 and 6.4. If we can get to better precision than that, even better."
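To give a sense of the kind of estimation Li describes, here is a deliberately simplified sketch: fit a linear relation between the logarithm of peak ground acceleration (PGA) and catalog magnitude from past events, then predict the magnitude of a new event. This is not Li's actual system—his uses machine learning on far noisier, multi-feature CSN data—and the training pairs and coefficients below are invented for illustration:

```python
# Toy magnitude estimator (hypothetical data, NOT the CSN/SURF system):
# fit magnitude = a * log10(PGA) + b by ordinary least squares.

import math

# (log10 of PGA in cm/s^2, USGS catalog magnitude) for past events -- made up
training = [(0.5, 3.1), (1.0, 3.9), (1.5, 4.8), (2.0, 5.7), (2.5, 6.5)]

n = len(training)
sx = sum(x for x, _ in training)
sy = sum(y for _, y in training)
sxx = sum(x * x for x, _ in training)
sxy = sum(x * y for x, y in training)

a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
b = (sy - a * sx) / n                          # intercept

def estimate_magnitude(pga):
    """Estimate magnitude from peak ground acceleration (cm/s^2)."""
    return a * math.log10(pga) + b

print(round(estimate_magnitude(100.0), 1))  # log10(100) = 2 -> prints 5.7
```

A real system trained on CSN data would need to average over many noisy sensors and fold in distance to the epicenter, but the principle—calibrating cheap-sensor readings against catalog magnitudes from past events—is the same.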

Li notes that his system has so far only been evaluated using USGS magnitudes for previous seismic events over the past two years. "I have yet to test it on a new event. Perhaps I can test it on the data from the recent earthquake in Napa once Caltech has finished processing it."

CSN is supported by funding from the Gordon and Betty Moore Foundation.
