Seismology and Resilient Infrastructure: An Interview with Domniki Asimaki

Building homes and other solid structures on a dynamic, changing earth is a formidable challenge. Since we can't prevent an earthquake or a tsunami from happening, scientists strive to understand the impacts of these forces, and structural engineers try to build infrastructure that can survive them. And that intersection is where the work of Domniki Asimaki comes in.

Asimaki, professor of mechanical and civil engineering in the Division of Engineering and Applied Science, is interested in the behavior of geotechnical systems under the influence of forces such as wind, waves, and seismic activity. Using this information, she hopes to build predictive computer models that can lead to the design of infrastructure that is resilient to natural and man-made hazards. Understanding the effects of natural forces on man-made structures can also aid the cost-effective design of infrastructure for sustainable energy harvesting, such as offshore wind farms—a promising green energy solution.

Born in Greece, Asimaki earned her bachelor's degree from the National Technical University of Athens before heading to MIT for both her master's and doctoral degrees.

Although Asimaki only joined the Caltech faculty in August, she has been thinking about moving to Pasadena since her first trip to campus a decade ago. Recently, she spoke about her work, her hobbies, and what it's like to finally be at Caltech.

 

What will you be working on at Caltech?

I am interested in the response of soils and foundations to dynamic loading, with emphasis on earthquakes. The work exists at the interface between civil engineering and earth and atmospheric sciences. Specifically for seismic loading, my research is trying to translate the output from simulations done by seismologists into input that engineers can use to design stronger structures.

In general, geotechnical engineering is an old field. Now we know a lot more about how soils behave, and that extends from the foundations of a house to the foundations of a bridge to nuclear reactors to dams. But that knowledge has been disconnected from advances in the earth sciences, and the gap has hindered the integration of those advances into structural design practice. I think it's an area of opportunity.

 

How does this work provide a link between the scientists and structural engineers?

Traditionally, structural engineers designed buildings using empirical data—such as records from previous earthquakes. Today, with more than half of the global population concentrated in areas prone not only to major earthquakes but also to severe droughts and more extreme climatic events such as sea-level rise, there is an ever-increasing need to improve these empirical models, incorporate new, sustainable construction materials, and build stronger, more resilient urban environments. I think the big promise of seismological modeling is that rather than using empirical data to make decisions about which ground motions buildings should be designed against in the future, we can actually run real earthquake scenarios in a simulation.

This can help provide a real prediction of the shaking against which the structural engineers can design buildings—provided, among other things, that seismologists have information about the soils on which their structures are built. And that's the gap that I'm hoping to fill.

 

How does this work translate to the harvesting of wind energy?

There is growing interest in offshore wind farms to be used as a source of sustainable energy, but since the industry is still pretty new, we don't have much domestic experience with the best way to build these wind farms. We want to understand how the foundations of offshore wind turbines behave under the mix of forces from the rotor, from the waves, from currents and tide, from wind—regular wind or hurricane wind—and how all of these different types of dynamic loading affect the behavior of the foundation. We also want to understand how the behavior of the foundation, in turn, affects the stability of the wind turbine's performance and capability to harvest energy.

This specific application of my work is a fascinating direction for me. It is an opportunity to ask why design models work and how we can maximize performance capabilities and minimize cost. People like myself with an engineering background, but also with scientific curiosity, can work in areas like this and set the performance and design standards from scratch. But because the energy-harvesting industry is just starting out, we need to make it innovative while still financially feasible.

 

We have a lot of seismology expertise at Caltech. Was that a factor in your decision to come here?

It's a big part of my research interest, and so Caltech has always been the place that I felt I should be. It is a unique place in the sense that it's small enough so that different disciplines are closely connected. And there's a role that I can play, bringing research programs together. It has all the key players that I need in the same space, and it provides a great opportunity for us all to work together and build a seamless research continuum, from seismology to resilient infrastructure monitoring and design.

 

Are there any other reasons you're looking forward to living in Southern California?

Because it's gorgeous! I've never had the opportunity to have such nice weather, which is good because I love to swim, and the pool here is beautiful. I actually went to the pool on campus on the second day that we moved here. I hadn't even started yet, and I said, "I'm new faculty. I promise. I can prove it." And the guy who runs the show there, John Carter, was nice enough to give me a visitor pass so I could swim.


Do you have any other outside interests?

I love to cook. Elaborate cooking, from traditional Greek to exotic Asian cuisine and lots of other things. I am adventurous in my cooking but very traditional at the same time because I make everything from scratch. Graduating from MIT was a little easier than graduating from a Greek mother.

Writer: 
Exclude from News Hub: 
No
News Type: 
Research News

Sensors to Simplify Diabetes Management

For many patients diagnosed with diabetes, treating the disease can mean a burdensome and uncomfortable lifelong routine of monitoring blood sugar levels and injecting the insulin that their bodies don't naturally produce. But, as part of their Summer Undergraduate Research Fellowship (SURF) projects at Caltech, several engineering students have contributed to the development of tiny biosensors that could one day eliminate the need for these manual blood sugar tests.

Because certain patients with diabetes are unable to make their own insulin—a hormone that helps transfer glucose, or sugar, from the blood into muscle and other tissues—they need to monitor their blood glucose frequently, manually injecting insulin when sugar levels surge after a meal. Most glucose monitors require that patients prick their fingertips to collect a drop of blood, sometimes up to 10 times a day for the rest of their lives.

In their SURF projects, the students, all from Caltech's Division of Engineering and Applied Science, looked for different ways to do these same tests but painlessly and automatically.

Senior applied physics major Mehmet Sencan has approached the problem with a tiny chip that can be implanted under the skin. The sensor, a square just 1.4 millimeters on each side, is designed to detect glucose levels from the interstitial fluid (fluid found in the spaces between cells) that is just under the skin. The glucose levels in this fluid directly relate to the blood glucose concentration.

Sencan has been involved in optimizing the electrochemical method that the chip will use to detect glucose levels. Much like a traditional finger-stick glucose meter, the chip uses glucose oxidase, an enzyme that reacts in the presence of glucose, to create an electrical current. Higher levels of glucose result in a stronger current, allowing the device to measure glucose levels based on the charge that passes through the fluid.
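The chip's current-to-glucose conversion can be sketched as a simple linear calibration. This is only an illustration of the principle: the `sensitivity` and `baseline` values below are invented, not parameters of the actual sensor.

```python
def glucose_from_current(current_ua, sensitivity=0.85, baseline=0.1):
    """Estimate glucose concentration (mmol/L) from sensor current (microamps).

    Amperometric glucose-oxidase sensors are roughly linear over their
    working range: more glucose means a stronger current. The sensitivity
    (microamps per mmol/L) and baseline current here are made up.
    """
    if current_ua <= baseline:
        return 0.0
    return (current_ua - baseline) / sensitivity

# A stronger measured current maps to a higher glucose reading.
readings = [glucose_from_current(i) for i in (2.0, 4.0, 8.0)]
assert readings == sorted(readings)
```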

Once the glucose level is detected, the information is wirelessly transmitted at a radio frequency to a reader that uses the same frequency to power the device itself. Ultimately, an external display will let patients know whether their levels are within range.

Sencan, who works in the laboratory of Axel Scherer, the Bernard Neches Professor of Electrical Engineering, Applied Physics, and Physics, and who is co-mentored by postdoctoral researcher Muhammad Mujeeb-U-Rahman, started this project three years ago during his very first SURF.

"When I started, we were just thinking about what kind of chemistry the sensor would use, and now we have a sensor that is actually designed to do that," he says. Over the summer, he implanted the sensors in rat models, and he will continue the study over the fall and spring terms using both rat and mouse models—a first step in determining if the design is a clinically viable option.

Junior electrical engineering major Sith Domrongkitchaiporn from the Scherer laboratory, also co-mentored by Mujeeb-U-Rahman, took a different approach to glucose detection, making tiny biosensors that are inconspicuously wearable on the surface of a contact lens. "It's an interesting concept because instead of having to do a procedure to place something under the skin, you can use a less invasive method, placing a sensor on the eye to get the same information," he says.

He used the method optimized by Mehmet to determine blood glucose levels from interstitial fluid and adapted the chemistry to measure glucose in tears. This summer, he will be attempting to fabricate the lens itself and improve upon the process whereby radio waves are used to power the sensor and then transmit data from the sensor to an external computer.

SURF student and sophomore electrical engineering major Jennifer Chih-Wen Lin wanted to incorporate a different kind of glucose sensor into a contact lens. "The concept—determining glucose readings from tears—is very similar to Sith's, but the method is very different," she says.

Instead of determining the glucose level based on the amount of electrical current that passes through a sample, Lin, who works in the laboratory of Hyuck Choo, assistant professor of electrical engineering, worked on a sensor that detects glucose levels from the interaction between light and molecules.

In her SURF project, she began optimizing the characterization of glucose molecules in a sample of glucose solution using a technique called Raman spectroscopy. When molecules encounter light, they vibrate differently based on their symmetry and the types of bonds that hold their atoms together. This vibrational information provides a unique fingerprint for each type of molecule, which is represented as peaks on the Raman spectrum—and the intensity of these peaks correlates to the concentration of that molecule within the sample.

"This step is important because once I can determine the relationship between peak intensities and glucose concentrations, our sensor can just compare that known spectrum to the reading from a sample of tears to determine the amount of glucose in the sample," she says.
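The calibration Lin describes, relating peak intensity to known concentrations and then inverting that relationship for an unknown sample, can be sketched with an ordinary least-squares line fit. The concentration and intensity numbers below are invented for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Invented calibration data: glucose standards (mM) and the Raman peak
# intensity measured for each one.
conc = [1.0, 2.0, 4.0, 8.0]
intensity = [105.0, 210.0, 420.0, 840.0]
slope, intercept = fit_line(conc, intensity)

def glucose_from_intensity(i):
    """Invert the calibration line to read concentration from a sample."""
    return (i - intercept) / slope
```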

Lin's project is in the very beginning stages, but if it is successful, it could provide a more accurate glucose measurement, and from a smaller volume of liquid, than is possible with the finger-stick method. Perhaps more importantly for patients, it can provide that measurement painlessly.

Also in Choo's laboratory, sophomore electrical engineering major Sophia Chen's SURF project involves a new way to power devices like these tiny sensors and other medical implants, using the vibrations from a patient's vocal cords. These vibrations produce the sound of our voice, and also create vibrations in the skull.

"We're using these devices called energy harvesters that can extract energy from vibrations at specific frequencies. When the vibrations go from the vocal folds to the skull, a structure in the energy harvester vibrates at the same frequency, generating energy—energy that can be used to power batteries or charge things," Chen says.

Chen's goal is to determine the frequency of these vibrations—and if the energy that they produce is actually enough to power a tiny device. The hope is that one day these vibrations could power, or at least supplement the power of, medical devices that need to be implanted near the head and that presently run on batteries with finite lifetimes.

Chen and the other students acknowledge that health-monitoring sensors powered by the human body might be years away from entering the clinic. However, this opportunity to apply classroom knowledge to a real-life challenge—such as diabetes treatment—is an important part of their training as tomorrow's scientists and engineers.


Caltech Researchers Receive NIH BRAIN Funding

On September 30, the National Institutes of Health (NIH) announced its first round of funding in furtherance of President Obama's "Brain Research through Advancing Innovative Neurotechnology"—or BRAIN—Initiative. Included among the 58 funded projects—all of which, according to the NIH, are geared toward the development of "new tools and technologies to understand neural circuit function and capture a dynamic view of the brain in action"—are six projects either led or co-led by Caltech researchers.

The Caltech projects are:

"Dissecting human brain circuits in vivo using ultrasonic neuromodulation"

Doris Tsao, assistant professor of biology
Mikhail Shapiro, assistant professor of chemical engineering

Tsao and Shapiro are teaming up to develop a new technology that uses ultrasound both to map and determine the function of interconnected brain networks and, ultimately, to change neural activity deep within the brain. "This would open new horizons for understanding human brain function and connectivity, and create completely new options for the noninvasive treatment of brain diseases such as intractable epilepsy, depression, and Parkinson's disease," Tsao says. "The key," Shapiro adds, "is to gain a precise understanding of the various mechanisms by which sound waves interact with neurons in the brain so we can use ultrasound to produce very specific neurological effects. We will be able to do this across the full spectrum, from molecules up to large model organisms."

"Modular nanophotonic probes for dense neural recording at single-cell resolution"

Michael Roukes, Robert M. Abbey Professor of Physics, Applied Physics, and Bioengineering
Thanos Siapas, professor of computation and neural systems

Roukes, Siapas, and their colleagues at Columbia University and Baylor College of Medicine propose to build ultra-dense arrays of miniature light-emitting and light-sensing probes using advanced silicon "chip" technology that permits their production en masse. These probes open the new field of integrated neurophotonics, Roukes says, and will permit simultaneous recording of the electrical activity of hundreds of thousands to, ultimately, millions of neurons, with single-cell resolution, in any given region of the brain. "The instrumentation we'll develop will enable us to observe the trafficking of information, in vivo, in brain circuits on an unprecedented scale, and to correlate this activity with behavior," he says.

"Time-Reversal Optical Focusing for Noninvasive Optogenetics"

Changhuei Yang, professor of electrical engineering, bioengineering, and medical engineering
Viviana Gradinaru, assistant professor of biology

Deep-brain stimulation has been used successfully for nearly two decades for the treatment of epilepsy, Parkinson's disease, chronic pain, depression, and other disorders. Current systems rely on electrodes implanted deep within the brain to modify the firing pattern of specific clusters of neurons, bringing them back into a more normal pattern. Yang and Gradinaru are working together on a method that would use only light waves to noninvasively activate light-sensitive molecules and precisely guide the firing of nerves. Biological tissues are opaque due to the scattering of light waves, and that scattering makes it impossible to finely focus a laser beam deep into brain tissue. The researchers hope to use an optical "time-reversal" trick previously developed by Yang to counteract the scattering, allowing light beams to be targeted to specific locations within the brain. "The technology to be developed in this project has the potential for wide-ranging applications, including noninvasive deep brain stimulation and precise incisionless laser surgery," Yang says.

"Integrative Functional Mapping of Sensory-Motor Pathways"

Michael H. Dickinson, Esther M. and Abe M. Zarem Professor of Bioengineering

As in other animals, locomotion in the fruit fly is a complicated process involving the interplay of sensory systems and motor circuits in the brain. Dickinson and his colleagues hope to decipher just how the brain uses sensory information to guide movements by developing a system to record the activity of large numbers of individual neurons from across the brains of fruit flies, as the flies fly in a flight simulator or walk on a treadmill and are simultaneously exposed to various sights and sounds. Understanding sensory–motor integration, he says, should lead to a better understanding of human disorders, including Parkinson's disease, stroke, and spinal cord injury, and aid in the design and optimization of robotic prosthetic limbs and prosthetic devices that restore sight and other senses.

"Establishing a Comprehensive and Standardized Cell Type Characterization Platform"

David J. Anderson, Seymour Benzer Professor of Biology; Investigator, Howard Hughes Medical Institute (co-PI)

In collaboration with Hongkui Zeng and colleagues at the Allen Institute for Brain Science in Seattle, Anderson will help to develop a detailed, publicly available database characterizing the genetic, physiological, and morphological features of the various cell types in the brain that are involved in circuits controlling sensations and emotions. Understanding the cellular building blocks of brain circuits, the researchers say, is crucial for figuring out how those circuits can malfunction in disease. In particular, Anderson's lab will focus on the cells of the brain's hypothalamus and amygdala—structures that are vital to emotions and behavior, and involved in human psychiatric disorders such as post-traumatic stress disorder, anxiety, and depression. "This project will serve as a model for hub-and-spoke collaborations between academic laboratories and the Allen Institute, permitting access to their valuable resources and technologies while advancing the field more broadly," Anderson says.

"Vertically integrated approach to visual neuroscience: microcircuits to behavior"

Markus Meister, Lawrence A. Hanson, Jr. Professor of Biology (co-PI)

This project, led by Hyunjune Sebastian Seung of Princeton University, will use genetic, electrophysiological, and imaging tools to identify and map the neural connections of the retina, the light-sensing tissue in the eye, and determine their roles in visual perception and behavior. "Here we are shooting for a vertically integrated understanding of a neural system," Meister says. "The retina offers such a fantastic degree of experimental access that one can hope to bridge all scales of organization, from molecules to cells to microcircuits to behavior. We hope that success here can eventually serve as a blueprint for understanding other parts of the brain." Knowing the neural mechanisms for vision can also influence technological applications, such as new algorithms for computer vision, or the development of retinal prostheses for the treatment of blindness.


Swimming Sea-Monkeys Reveal How Zooplankton May Help Drive Ocean Circulation

Brine shrimp, which are sold as pets known as Sea-Monkeys, are tiny—only about half an inch long each. With about 10 small leaf-like fins that flap about, they look as if they could hardly make waves.

But get billions of similarly tiny organisms together and they can move oceans.

It turns out that the collective swimming motion of Sea-Monkeys and other zooplankton—swimming plankton—can generate enough swirling flow to potentially influence the circulation of water in oceans, according to a new study by Caltech researchers.

The effect could be as strong as the effects of the wind and tides, the main factors known to drive the up-and-down mixing of the oceans, says John Dabiri, professor of aeronautics and bioengineering at Caltech. According to the new analysis by Dabiri and mechanical engineering graduate student Monica Wilhelmus, organisms like brine shrimp, despite their diminutive size, may play a significant role in stirring up nutrients, heat, and salt in the sea—major components of the ocean system.

In 2009, Dabiri's research team studied jellyfish to show that small animals can generate flow in the surrounding water. "Now," Dabiri says, "these new lab experiments show that similar effects can occur in organisms that are much smaller but also more numerous—and therefore potentially more impactful in regions of the ocean important for climate."

The researchers describe their findings in the journal Physics of Fluids.

Brine shrimp (specifically Artemia salina) can be found in toy stores, as part of kits that allow you to raise a colony at home. But in nature, they live in bodies of salty water, such as the Great Salt Lake in Utah. Their behavior is cued by light: at night, they swim toward the surface to munch on photosynthesizing algae while avoiding predators. During the day, they sink back into the dark depths of the water.


A. salina (a species of brine shrimp, commonly known as Sea-Monkeys) begin a vertical migration, stimulated by a vertical blue laser light.

To study this behavior in the laboratory, Dabiri and Wilhelmus use a combination of blue and green lasers to induce the shrimp to migrate upward inside a big tank of water. The green laser at the top of the tank provides a bright target for the shrimp to swim toward while a blue laser rising along the side of the tank lights up a path to guide them upward.

The tank water is filled with tiny, silver-coated hollow glass spheres 13 microns wide (about one-half of one-thousandth of an inch). By tracking the motion of those spheres with a high-speed camera and a red laser that is invisible to the organisms, the researchers can measure how the shrimp's swimming causes the surrounding water to swirl.

Although researchers had proposed the idea that swimming zooplankton can influence ocean circulation, the effect had never been directly observed, Dabiri says. Past studies could only analyze how individual organisms disturb the water surrounding them.

But thanks to this new laser-guided setup, Dabiri and Wilhelmus have been able to determine that the collective motion of the shrimp creates powerful swirls—stronger than would be produced by simply adding up the effects produced by individual organisms.

Adding up the effect of all of the zooplankton in the ocean—assuming they have a similar influence—could inject as much as a trillion watts of power into the oceans to drive global circulation, Dabiri says. In comparison, the winds and tides contribute a combined two trillion watts.

Using this new experimental setup will enable future studies to better untangle the complex relationships between swimming organisms and ocean currents, Dabiri says. "Coaxing Sea-Monkeys to swim when and where you want them to is even more difficult than it sounds," he adds. "But Monica was undeterred over the course of this project and found a creative solution to a very challenging problem."

The title of the Physics of Fluids paper is "Observations of large-scale fluid transport by laser-guided plankton aggregations." The research was supported by the U.S.-Israel Binational Science Foundation, the Office of Naval Research, and the National Science Foundation.


What Is Possible in Real-World Communication Systems: An Interview with Victoria Kostina

In 1948, the mathematician and engineer Claude Shannon published a paper showing mathematically that it should be possible to transmit information reliably at a high rate even over a noisy channel. And with that, the field of information theory was born.

Caltech's newest assistant professor of electrical engineering, Victoria Kostina, works in this field. She comes to Caltech's Division of Engineering and Applied Science from Princeton University, where she completed her PhD and a postdoctoral position in electrical engineering. Prior to that, she earned her master's in 2006 from the University of Ottawa and her undergraduate degree in applied mathematics and physics in 2004 from the Moscow Institute of Physics and Technology.

We sat down with Kostina to talk about communication systems and her work.

 

Can you tell us a little bit about your field?

Information theory is about the transmission of information. It studies, in a very abstract way, elements in all communication systems. And a communication system is something very general—it is any system with a transmitter, a receiver, a communication channel, and information that needs to be relayed. So, for example, as we are talking right now, we represent a communication system: I am trying to transmit my information to you, the receiver, through the air, which is our communication channel.

In his famous 1948 paper, Claude Shannon, the father of this field, asked, "What is achievable in a communication system when you have noise?" So imagine if we were standing in opposite corners of a large room filled with people talking amongst themselves, and I needed to talk to you, but for some reason I could not move. What could I do? I could try to yell louder to combat the noise. But yelling is very expensive because it strains your vocal cords—in other words, it can burn out the amplifiers in the system. Just increasing the power is not a good way to address the problem.

Another thing I could try to do is just repeat my message many times with a low voice, hoping that at some point the noise in the room would be low enough for my message to get to you. This is what is known as a repetition code. It is the simplest code you can think of—it just repeats the same message many, many times.
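The repetition code she describes is simple enough to write down in a few lines. This sketch repeats each bit an odd number of times, passes the stream through a simulated noisy channel that flips bits at random, and decodes by majority vote.

```python
import random

def encode(bits, n=5):
    """Repetition code: transmit each bit n times (n odd, so votes can't tie)."""
    return [b for b in bits for _ in range(n)]

def channel(bits, flip_prob, rng):
    """Binary symmetric channel: each bit is flipped with probability flip_prob."""
    return [b ^ 1 if rng.random() < flip_prob else b for b in bits]

def decode(received, n=5):
    """Majority vote over each group of n repeated bits."""
    return [1 if sum(received[i:i + n]) > n // 2 else 0
            for i in range(0, len(received), n)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
noisy = channel(encode(message), flip_prob=0.1, rng=random.Random(0))
decoded = decode(noisy)  # with light noise, majority vote usually recovers the message
```

The price of this robustness is rate: sending every bit five times cuts throughput to one fifth, which is exactly why more intelligent codes matter.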

Of course, there are data transmission codes that are much more intelligent than a repetition code, and they are capable of transmitting more information much faster.

 

Did Shannon have an optimal solution?

What Shannon showed is that, in fact, you can transmit many bits of information per unit of time even if the noise is very, very high using intelligent coding systems. This was a remarkable insight, and at the time, people didn't know how to achieve this. Shannon made this insight based on mathematical modeling; he didn't show how to build the codes.
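For the classic band-limited channel with Gaussian noise, Shannon's result is usually quoted as the Shannon-Hartley formula, C = B log2(1 + S/N). A quick sketch illustrates the point: capacity shrinks as noise grows, but it never reaches zero while any signal remains.

```python
import math

def capacity(bandwidth_hz, snr):
    """Shannon-Hartley capacity (bits/s) of a band-limited Gaussian channel.

    snr is the signal-to-noise power ratio as a plain linear ratio (not dB).
    """
    return bandwidth_hz * math.log2(1 + snr)

# A 3 kHz telephone-like band: even with the signal ten times weaker
# than the noise, some reliable rate is still achievable.
very_noisy = capacity(3000, 0.1)
clean = capacity(3000, 1000)
assert 0 < very_noisy < clean
```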

Information theorists like myself try to understand, for any communication system, what data transmission rate is attainable. In other words, what is the optimal data rate that we can achieve in these systems?

 

And do you use mathematical modeling to try to determine that?

Absolutely. This research requires tools from probability theory, the theory of random processes, and so on. What we try to do is to look at the problem and extract the most relevant features to make it as simple as possible. Then we describe the problem mathematically, solve it, and then try to bring it back to the real world and see how this mathematical insight might help design real-world communication systems.

 

What is the specific focus of your research?

My research has to do with bridging the gap between what Shannon's theory tells us and the real world.

Shannon's theory showed us the limit of what is achievable in the case where we have all the time in the world to transmit information. But in real-world systems, especially in modern real-time applications, we cannot afford to wait forever. Anyone who has talked on Skype knows this. A delay of even a couple of seconds gets annoying very quickly.

So I'm trying to understand the fundamental limits in systems in which the delays are strictly bounded. This would ultimately inform real-world designs. Of course, there are many challenges in achieving that goal; even after you know the fundamental limit, a lot of work must be done to design codes that can attain that limit.
 

What are the applications of this work?

There are two points I would like to make about applications. The first is that it's important to know these limits so that the people who are working on the design of better codes know what to shoot for. Let's say they know the fundamental limit, and they're measuring the performance of an algorithm and know that they're already quite close to the limit. In that case, maybe it isn't worth spending more effort and investing into further development of that code. However, if they see there is still a big gap or room for improvement, maybe it is worth it to invest.

The second point is that as we go through our analysis of the system, we do gain insights into how to build real-world codes. We come away understanding some essential properties that a good code should have and that a coding engineer should aim for in order to attain those unsurpassable fundamental limits.

 

What do you find most exciting about your work?

I love that it is very basic research, very theoretical. Once we strip away all the particularities of a given problem, we are left with a mathematical model, which is timeless. Once you solve such a problem, it stays there.

But at the same time, I like that this work applies to the real world. The fact that it gives us insights into how to improve existing communication systems is a very exciting feature for me.

 

Do you remember how you originally got interested in these sorts of questions?

I grew up in a small town near Moscow, and both of my parents were engineers. So I think from a very early age they instilled in me this inquisitive outlook on the world. They always had very interesting, long answers to my questions like, "Why is the sky blue?" and "How do planes fly?" I think that's how I knew that I wanted to understand more about the world from a mathematical point of view.

Writer: 
Kimm Fesenmaier

Variability Keeps The Body In Balance

Although the heart beats out a very familiar "lub-dub" pattern that speeds up or slows down as our activity increases or decreases, the pattern itself isn't as regular as you might think. In fact, the amount of time between heartbeats can vary even at a "constant" heart rate—and that variability, doctors have found, is a good thing.

Reduced heart rate variability (HRV) has been found to be predictive of a number of illnesses, such as congestive heart failure and inflammation. For athletes, a drop in HRV has also been linked to fatigue and overtraining. However, the underlying physiological mechanisms that control HRV—and exactly why this variation is important for good health—are still a bit of a mystery.
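Two standard clinical measures of HRV, computed from the series of beat-to-beat (RR) intervals, are SDNN (the standard deviation of the intervals) and RMSSD (the root mean square of successive differences). The sketch below uses invented interval data, not measurements from the study.

```python
import math

def sdnn(rr_ms):
    """Standard deviation of RR intervals, in milliseconds."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences, in milliseconds."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Invented data: two series with the same mean interval (800 ms, 75 bpm),
# one healthily variable and one almost perfectly regular.
variable = [790, 815, 802, 773, 820, 800]
rigid = [800, 800, 801, 799, 800, 800]
assert sdnn(variable) > sdnn(rigid)   # low values on the "rigid" series
assert rmssd(variable) > rmssd(rigid) # are what clinicians flag as reduced HRV
```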

By combining heart rate data from real athletes with a branch of mathematics called control theory, a collaborative team of physicians and Caltech researchers from the Division of Engineering and Applied Science has now devised a way to better understand the relationship between HRV and health—a step that could soon inform better monitoring technologies for athletes and medical professionals.

The work was published in the August 19 print issue of the Proceedings of the National Academy of Sciences.

To run smoothly, complex systems, such as computer networks, cars, and even the human body, rely upon give-and-take connections and relationships among a large number of variables; if one variable must remain stable to maintain a healthy system, another variable must be able to flex to maintain that stability. Because it would be too difficult to map each individual variable, the mathematics and software tools used in control theory allow engineers to summarize the ups and downs in a system and pinpoint the source of a possible problem.

Researchers who study control theory are increasingly discovering that these concepts can also be extremely useful in studies of the human body. In order for a body to work optimally, it must operate in an environment of stability called homeostasis. When the body experiences stress—for example, from exercise or extreme temperatures—it can maintain a stable blood pressure and constant body temperature in part by dialing the heart rate up or down. And HRV plays an important role in maintaining this balance, says study author John Doyle, the Jean-Lou Chameau Professor of Control and Dynamical Systems, Electrical Engineering, and Bioengineering.

"A familiar related problem is in driving," Doyle says. "To get to a destination despite varying weather and traffic conditions, any driver—even a robotic one—will change factors such as acceleration, braking, steering, and wipers. If these factors suddenly became frozen and unchangeable while the car was still moving, it would be a nearly certain predictor that a crash was imminent. Similarly, loss of heart rate variability predicts some kind of malfunction or 'crash,' often before there are any other indications," he says.
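Doyle's driving analogy can be sketched as a toy feedback loop. This is entirely illustrative — the variables, numbers, and linear dynamics are invented for the sketch, not drawn from the paper's physiological models — but it shows the core idea: one variable (here a stand-in for heart rate) must flex so that another (a stand-in for blood pressure) stays at its set point, and freezing the controller leaves the system stuck off target:

```python
def simulate(stress, gain=0.5, steps=50):
    """Toy proportional feedback loop: 'heart rate' is dialed up or
    down so a 'blood pressure' variable holds its set point under stress."""
    setpoint = 100.0   # target pressure (arbitrary units)
    hr = 60.0          # resting heart rate, the flexible variable
    pressure = setpoint
    for _ in range(steps):
        # stress pushes pressure down; a faster heart pushes it back up
        pressure = setpoint - stress + 0.5 * (hr - 60.0)
        error = setpoint - pressure
        hr += gain * error   # the controller dials heart rate up or down
    return hr, pressure

# With feedback active, heart rate rises and pressure recovers.
print(simulate(stress=20.0))
# With the controller "frozen" (gain=0), pressure stays stuck below target.
print(simulate(stress=20.0, gain=0.0))
```

With the controller active, heart rate climbs until pressure returns to the set point; with the gain frozen at zero, pressure never recovers — the loss of variability itself signals that the system can no longer compensate.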

To study how HRV helps maintain this version of "cruise control" in the human body, Doyle and his colleagues measured the heart rate, respiration rate, oxygen consumption, and carbon dioxide generation of five healthy young athletes as they completed experimental exercise routines on stationary bicycles.

By combining the data from these experiments with standard models of the physiological control mechanisms in the human body, the researchers were able to determine the essential tradeoffs that are necessary for athletes to produce enough power to maintain an exercise workload while also maintaining the internal homeostasis of their vital signs.

"For example, the heart, lungs, and circulation must deliver sufficient oxygenated blood to the muscles and other organs while not raising blood pressure so much as to damage the brain," Doyle says. "This is done in concert with control of blood vessel dilation in the muscles and brain, and control of breathing. As the physical demands of the exercise change, the muscles must produce fluctuating power outputs, and the heart, blood vessels, and lungs must then respond to keep blood pressure and oxygenation within narrow ranges."

Once these trade-offs were defined, the researchers then used control theory to analyze the exercise data and found that a healthy heart must maintain certain patterns of variability during exercise to keep this complicated system in balance. Loss of this variability is a precursor of fatigue, the stress induced by exercise. Today, some HRV monitors in the clinic can let a doctor know when variability is high or low, but they provide little in the way of an actionable diagnosis.

Because monitors in hospitals can already provide HRV levels and dozens of other signals and readings, the integration of such mathematical analyses of control theory into HRV monitors could, in the future, provide a way to link a drop in HRV to a more specific and treatable diagnosis. In fact, one of Doyle's students has used an HRV application of control theory to better interpret traditional EKG signals.

Control theory could also be incorporated into the HRV monitors used by athletes to prevent fatigue and injury from overtraining, he says.

"Physicians who work in very data-intensive settings like the operating room or ICU are in urgent need of ways to rapidly and acutely interpret the data deluge," says Marie Csete, MD (PhD '00), chief scientific officer at the Huntington Medical Research Institutes and a coauthor on the paper. "We hope this work is a first step in a larger research program that helps physicians make better use of data to care for patients."

This study is not the first to apply control theory in medicine. Control theory has already informed the design of a wearable artificial pancreas for type 1 diabetic patients and an automated prototype device that controls the administration of anesthetics during surgery. Nor will it be the last, says Doyle, whose sights are next set on using control theory to understand the progression of cancer.

"We have a new approach, similarly based on control of networks, that organizes and integrates a bunch of new ideas floating around about the role of healthy stroma—non-tumor cells present in tumors—in promoting cancer progression," he says.

"Based on discussions with Dr. Peter Lee at City of Hope [a cancer research and treatment center], we now understand that the non-tumor cells interact with the immune system and with chemotherapeutic drugs to modulate disease progression," Doyle says. "And I'm hoping there's a similar story there, where thinking rigorously about the tradeoffs in development, regeneration, inflammation, wound healing, and cancer will lead to new insights and ultimately new therapies."

Other Caltech coauthors on the study include former graduate students Na Li (PhD '13) now an assistant professor at Harvard; Somayeh Sojoudi (PhD '12), currently at NYU; and graduate students Chenghao Simon Chien and Jerry Cruz. Other collaborators on the study were Benjamin Recht, a former postdoctoral scholar in Doyle's lab and now an assistant professor at UC Berkeley; Daniel Bahmiller, a clinician training in public health; and David Stone, MD, an expert in ICU medicine from the University of Virginia School of Medicine.


Remembering Frank Marble

1918–2014

Frank Earl Marble (Eng '47, PhD '48), Caltech's Richard L. and Dorothy M. Hayman Professor of Mechanical Engineering and Professor of Jet Propulsion, Emeritus, passed away on August 11, 2014, two months after the death of Ora Lee Marble, his wife of 71 years. Marble was one of the fathers of modern jet engines; his doctoral thesis included a method for calculating the three-dimensional airflow through rows of rotating blades. A jet engine is essentially two sets of blades on a common axle. A compressor at the front of the engine slows the incoming air and feeds it to the burner, and a turbine spinning in the hot gases downstream ejects the exhaust and drives the compressor. More broadly, Marble's methods apply to any fluid flowing along the axis of a fan, pump, turbine, or propeller.

Born in Cleveland, Ohio, on July 21, 1918, 15 years after the Wright brothers' first powered flight, Marble got interested in aviation in grade school. The Cleveland airfield was "a long streetcar ride away," he recalled in his Caltech oral history, and he "could wander into the hangars" unsupervised. He got his pilot's license before his driver's license.

Marble earned his BS in aeronautics in 1940 at the Case School of Applied Science (now Case Western Reserve University), "about a two-mile walk from home." For his master's degree in 1942, he built a fan designed to measure the surface pressure along a blade as it cut through the air. Holes in the blade led to a set of pressure gauges; the trick, he noted, was inventing the "slip seal" at the fan's hub that kept the holes and their gauges connected. He brought the data with him to Caltech, where it eventually became the basis for his PhD work.

But first, Marble helped fight World War II from the Cleveland airport, joining the National Advisory Committee for Aeronautics' Aircraft Engine Research Lab (now NASA's John H. Glenn Research Center at Lewis Field). Marble led the team troubleshooting the B-29 Superfortress, capable of flying thousands of miles at 30,000 feet with 10 tons of bombs. The "Superfort" was the biggest, heaviest plane of the war and its four engines often overheated; a significant number were ditched in the Pacific after engine fires. Several alterations to the airflow maximized the engine cooling, and the B-29 would remain in service into the 1960s.

On receiving his doctorate from Caltech in 1948, Marble was hired as an assistant professor by Tsien Hsue-shen (PhD '39), the Goddard Professor of Jet Propulsion. Tsien assigned him to develop a set of courses in this new field, which blended chemistry, gas dynamics, and materials science.

Tsien also gave Marble a half-time appointment at Caltech's Jet Propulsion Laboratory (JPL), which in the pre-NASA era really was studying jet propulsion, developing missiles under contract with the army. Tsien and his fellow members of the "suicide squad" had founded JPL in the wide-open scrublands of the upper Arroyo Seco in the 1930s after a string of accidents and explosions had gotten them evicted from the campus aeronautics lab. By the late 1940s, JPL had grown into an unrivaled set of testing facilities sprawled across some 60 acres.

Marble was put in charge of the group trying to build a workable ramjet—a turbine-less supersonic engine that compresses air by "ramming" it into an inlet that rapidly slows it to subsonic speeds. An ordinary turbojet's ignition source sits in a flame holder, or "can," mounted just behind the compressor. Like a rock in a river, this obstruction creates an eddy in its wake where hot, slow-moving gas gets trapped. This region of relative calm nurtured a stable flame. In a ramjet, however, a momentary tongue of flame would blow out the back of the engine just before it quit.

Marble attacked the problem by repurposing the ramjet lab for combustion research, leading to a string of breakthroughs in the mid-1950s. First, he and graduate student Tom Adamson (MS '50, PhD '54) mathematically analyzed the contact zone between the fuel and the wake. The fuel diffuses across this mixing layer and ignites on contact with the wake, replenishing the eddy's hot gas. By assuming that the mixing layer's gases flowed in a parallel, laminar fashion, Marble and Adamson were able to predict how far downstream the fuel would catch fire and how stable the flame would be. Says Adamson, "We didn't answer every question about combustion in laminar mixing, but we answered many of them." Studies of premixed ignition still refer to the "Marble-Adamson problem" as a paradigm.

High-speed "movies" of the flame confirmed the laminar ignition theory. The movies also showed why the flame blew out—as the airflow increased, the mixing layer suddenly turned turbulent. This dislodged the eddy, which promptly dissipated. The results were "scalable," meaning that they could be applied to any combination of fuel and hardware to find a flame-holder diameter and airstream velocity that would guarantee a steady burn.

Other movies demystified the mechanism behind a type of catastrophic engine failure whose early stages were announced by a 160-decibel screech. These images revealed that the curling tendrils of burnt fuel entering the eddy conjured up opposing whirlpools in order to keep the flow's overall angular momentum in balance. This second set of whirlpools spread outward, and if they withdrew enough heat from the mixing layer, they would themselves ignite. A natural acoustic resonance in the engine could then amplify their thermal energy tenfold en route to the walls. "My desk was 600 feet away," Adamson says. "When the motor began to screech, things shook so hard I couldn't write."

Marble's group also figured out what makes a compressor stall, which happens when its rotating blades lose their "bite." (In a bad stall, the high-pressure surge of air escaping backward through the compressor can do enough damage to bring down an airplane.) Howard Emmons at Harvard had found that an individual blade stalled when it entered a cell of reduced pressure that separated the airflow from the blade, and that these cells leapt from blade to blade; think of the slats of a Venetian blind rippling up and down in a breeze. Marble developed a two-dimensional model of the ripple's essential features—a neat complement to his PhD work on unstalled flow.

Meanwhile, the Chinese-born Tsien had fallen victim to the Red Scare. His top-secret clearance was revoked in the autumn of 1950. For the next five years the Immigration and Naturalization Service forbade him from leaving Caltech's environs. He was unable to enter JPL, or to participate in classified research on campus—in effect, barred from aeronautics altogether. When the Tsiens were evicted from the house they rented, Marble found them another; when they were evicted from that one as well, the Marbles took them in. (Ironically, after being deported in 1955, the embittered Tsien did join the Communist Party and led China into the space age.)

Marble returned to campus full-time in 1959 and began studying multiphase gas dynamics, in which a gas carries tiny particles—in this case, motes of aluminum oxide, routinely added to solid rocket fuels to make them burn hotter. The grains moved more slowly than the gas and their mass affected its flow, causing the rockets to underperform. Marble helped design the nozzle for the solid-fuel Minuteman intercontinental ballistic missile in the early 1960s, but it took most of the decade to work out a complete mathematical treatment of dusty flows.

Marble spent the '70s studying various sources of jet-engine noise before returning to combustion research. Caltech professors Anatol Roshko (MS '47, PhD '52) and Garry Brown had shown in the early '70s that a turbulent shear flow's swirls retained their identities for considerable distances downstream, stretching the mixing layer and wrapping it around itself. Marble and graduate student Ann Karagozian (PhD '82) set about studying how diffusion-driven flames interacted with these vortices—"a very fundamental problem," says Karagozian. "Frank pioneered the coherent-flame model of turbulent combustion, and researchers still use 'flamelet models' in very complicated turbulent combustion simulations."

In addition to his research accomplishments, Marble was legendary for his teaching prowess—and his penchant for 8:00 a.m. lectures delivered "with breathtaking clarity and almost without notes," Karagozian says. "It was tough getting up early for them, but the lectures were incredibly stimulating and rigorous."

Marble's 60-odd graduate students included a who's-who of aerospace engineers as well as Benoit Mandelbrot (Eng '49), the father of fractal geometry. The Frank and Ora Lee Marble Professorship and a graduate fellowship have been established by his students and friends to honor his impact as a mentor as well as a scientist.

Marble was an elected member of both the National Academy of Engineering and the National Academy of Sciences, a rare distinction, and a fellow of the American Institute of Aeronautics and Astronautics (AIAA). His other honors included the AIAA's Propellants and Combustion Award and the Daniel Guggenheim Medal, often regarded as the Nobel Prize of aeronautics.

Marble is survived by his son, Stephen; his daughter-in-law, Cheryl; two grandchildren; and one great-grandson. Marble's daughter, Patricia, died in 1996.

A memorial service is planned for Saturday, October 4. 

Writer: Douglas Smith
