Michael Roukes Honored by the French Republic

The French Republic honored Michael Roukes, Robert M. Abbey Professor of Physics, Applied Physics, and Bioengineering at the California Institute of Technology (Caltech), with the Chevalier de l'Ordre des Palmes Académiques (Knight of the Order of Academic Palms).

Roukes received his medal and diploma in a luncheon ceremony June 11 on the Caltech campus. Attending the ceremony were representatives from the French government, including Axel Cruau, French consul general in Los Angeles, who presented Roukes with the award; Caltech president Jean-Lou Chameau; and Roukes's family and Institute colleagues.

Emperor Napoleon Bonaparte established the Palmes Académiques awards in 1808 to honor faculty from the University of Paris. During the following two centuries, the French Republic extended eligibility for the award to include other citizens and foreigners who have made significant contributions to French education and culture. There are three grades of Palmes Académiques medals: commandeur, officier, and chevalier.

"I am honored to receive the Order of Academic Palms from the French Republic," Roukes says. "Through the cooperation and collaboration we have fostered, French and American scientists greatly increase their chances of making new discoveries that benefit both nations."

Roukes, who came to Caltech in 1992, is an expert in the field of quantum measurement and applied biotechnology. He was the founding director of the Kavli Nanoscience Institute at Caltech from 2004 to 2006 and its codirector from 2008 to 2013. He has led numerous cross-disciplinary collaborations at Caltech and with researchers from other institutions that explore the frontiers of nanoscience. In 2010, Roukes received the Director's Pioneer Award from the National Institutes of Health for his work developing nanoscale tools for use in medical research.

In his presentation remarks, Cruau cited Roukes's collaborations with French research institutions, in particular the Laboratoire d'Electronique et des Technologies de l'Information (LETI) in Grenoble, France. Roukes and LETI CEO Laurent Malier cofounded the Alliance for Nanosystems VLSI, which focuses on creating complex nanosystems-based tools for science and industry. Cruau also noted Roukes's instrumental role in helping to start the Brain Research through Advancing Innovative Neurotechnologies, or BRAIN, Initiative, a national program proposed by President Barack Obama to build a comprehensive map of activity in the human brain.

Other Palmes Académiques recipients at Caltech are Ares J. Rosakis, von Kármán Professor of Aeronautics and professor of mechanical engineering, and chair of the Division of Engineering and Applied Science, who was named a Commandeur de l'Ordre des Palmes Académiques in 2012, and Guruswami Ravichandran, John E. Goode, Jr., Professor of Aerospace and professor of mechanical engineering, and director of the Graduate Aerospace Laboratories, who received a Chevalier de l'Ordre des Palmes Académiques in 2011.

Writer: Brian Bell

Seeing Data

More data are being created, consumed, and transported than ever before, and in all areas of society, including business, government, health care, and science. The hope and promise is that this influx of information—known as big data—will be transformative: armed with insightful information, businesses will make more money while providing improved customer service, governments will be more efficient, medical science will be better able to prevent and treat diseases, and science will make discoveries that otherwise would not be possible.

But to do all that, people have to be able to make sense of the data. Scientists and engineers are employing new computational techniques and algorithms to do just that, but sometimes, to gain significant insights from data, you have to see it—and as the data become increasingly complex and bountiful, traditional bar graphs and scatter plots just won't do. And so, not only will scientists and engineers have to overcome the technical challenges of processing and analyzing such large amounts of complex data—often in real time, as the data are being collected—but they also will need new ways to visualize and interact with that information.

In a recent symposium hosted at Caltech in collaboration with the Jet Propulsion Laboratory (JPL) and Art Center College of Design in Pasadena, computer scientists, artists, and designers gathered to discuss what they called the "emerging science of big-data visualization." The speakers laid out their vision for the potential of data visualization and demonstrated its utility, power, and beauty with myriad examples.

Data visualization represents a natural intersection of art and science, says Hillary Mushkin, a visiting professor of art and design in mechanical and civil engineering at Caltech and one of the co-organizers of the symposium. Artists and scientists alike pursue endeavors that involve questions, research, and creativity, she explains. "Visualization is another entry point to the same practice—another kind of inquiry that we are already engaged in."

Traditionally, data visualization tends to be reductionist, said self-described data artist Jer Thorp in his talk at the symposium. Charts and graphs are usually used to distill complex data into a simpler message or idea. But the promise of big data is that it contains hidden insight and knowledge. To gain that deeper understanding, he explained, we must embrace the inherent complexity of data. Data visualization, therefore, should be revelatory instead of just reductionist—not simply a way to convey information or find answers, Thorp said, but a way to generate and cultivate questions. Or, as he put it, the goal is question farming rather than answer farming.

For example, Thorp used Twitter to generate a model of worldwide travel—a model that, he said, was inspired by the desire to describe how viruses spread. He searched for tweets that included the phrase "just landed in" and recorded the tweeted destinations. Combining that information with the original locations of the travelers, as listed in their Twitter profiles, Thorp created an animated graphic depicting air travel. Since one way that diseases spread globally is through air travel, the graphic—while rudimentary, he admitted—could be a starting point for epidemiological models of disease outbreaks.
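The mechanics of that first step are simple enough to sketch. Here is a minimal Python illustration of the idea—the tweet list, the regular expression, and all names below are invented for this example, not taken from Thorp's actual code:

import re

# Hypothetical sample of (profile_location, tweet_text) pairs; a real
# version would pull these from a Twitter search for "just landed in".
tweets = [
    ("Toronto", "Just landed in London, long week ahead"),
    ("New York", "just landed in Tokyo!"),
    ("London", "Just landed in New York. Home at last."),
]

pattern = re.compile(r"just landed in ([A-Za-z ]+)", re.IGNORECASE)

trips = []
for home, text in tweets:
    match = pattern.search(text)
    if match:
        # Each (origin, destination) pair becomes one arc in an
        # animated travel graphic.
        trips.append((home, match.group(1).strip()))

print(trips)
# [('Toronto', 'London'), ('New York', 'Tokyo'), ('London', 'New York')]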

Data visualization also may be interactive, allowing users to manipulate the data and peel back multiple layers of information. Thorp created such a graphic to represent the first four months of data from NASA's Kepler mission—the space telescope that has discovered thousands of possible planets. A user not only can visualize all the planet candidates from the data set but also can reorganize the planets in terms of size, temperature, or other variables.

Artist Golan Levin demonstrated how art and data can be used to provoke thoughts and ideas. One example is a project called The Secret Lives of Numbers, in which he counted the number of pages that Google returned when searching for each integer from 0 to 1,000,000. He used those data to create an interactive graphic that shows interesting trends—the popularity of certain numbers like 911, 1234, or 90210, for example.

Anja-Silvia Goeing, a lecturer in history at Caltech, described the history of data visualization, highlighting centuries-old depictions of data, including maps of diseases in London; drawings and etchings of architecture, mechanical devices, and human anatomy; letter collections and address books as manifestations of social networks; and physical models of crystals and gemstones.

Data visualization, Goeing noted, has been around for many generations. What's new now is the need to visualize lots of complex data, and that, the symposium speakers argued, means we need to change how we think about data visualization. The idea that it is simply a technique or a tool is limiting, said designer Eric Rodenbeck during his presentation. Instead, we must think of visualization as a medium through which data can be explored, understood, and communicated.

This summer, Mushkin and the symposium's other co-organizers—Scott Davidoff, the manager of the human interfaces group at JPL, and Maggie Hendrie, the chair of interaction design at Art Center—are mentoring undergraduate students from around the country who are working on data-visualization research projects at Caltech and JPL. One group of students will work with two Caltech researchers—Ralph Adolphs, Bren Professor of Psychology and Neuroscience and professor of biology, and director of the Caltech Brain Imaging Center, and Mike Tyszka, associate director of the Caltech Brain Imaging Center—to visualize neural interactions deep inside the brain. Another will work with Caltech professor of aeronautics Beverley McKeon to visualize how fluid flows over walls. The third project involves systems engineering at JPL, challenging students to create a tool to visualize the complex process by which space missions are designed. Mushkin, Davidoff, and Hendrie also plan to invite more speakers to Caltech to talk about data visualization later in the summer.

You can watch all of the symposium talks on Caltech's YouTube channel.

 

Writer: Marcus Woo

Caltech's Unique Wind Projects Move Forward

Caltech fluid-mechanics expert John Dabiri has some big plans for a high school in San Pedro, military bases in California, and a small village on Bristol Bay, Alaska—not to mention for the future of wind power generation, in general.

Back in 2009, Dabiri, a professor of aeronautics and bioengineering, was intrigued by the pattern of spinning vortices that trail fish as they swim. Curious, he assigned some graduate students to find out what would happen to a wind farm's power output if its turbines were spaced like those fishy vortices. In simulations, energy production jumped by a factor of 10. To prove that the same effect would occur under real-world conditions, Dabiri and his students established a field site in the California desert with 24 turbines. Data gathered from the site proved that placing turbines in a particular orientation in relation to one another profoundly improves their energy-generating efficiency.

The turbines Dabiri has been investigating aren't the giant pinwheels with blades like propellers—known as horizontal-axis wind turbines (HAWTs)—that most people envision when they think about wind power. Instead, Dabiri's group uses much shorter turbines that look something like egg beaters sticking out of the ground. Dabiri and his colleagues believe that with further development, these so-called vertical-axis wind turbines (VAWTs) could dramatically decrease the cost, footprint, and environmental impact of wind farms.

"We have been able to demonstrate that using wind turbines that are 30 feet tall, as opposed to 300 feet tall, could generate sufficient power for wind-farm applications," Dabiri says. "That's important for us because our approach to getting to lower-cost energy is through the use of smaller vertical-axis wind turbines that are simpler—for example, they have no gearbox and don't need to be pointed in the direction of the oncoming wind—and whose performance can be optimized by arranging them properly."

Even as Dabiri and his group continue to study the physics of the wind as it moves through their wind farm and to develop computer models that will help them to predict optimal configurations for turbines in different areas, they are now beginning several pilot projects to test their concept.

"One of the areas where these smaller turbines can have an immediate impact is in the military," says Dabiri. Indeed, the Department of Defense is one of the largest energy consumers in the country and is interested in using renewable methods to meet some of that need. However, one challenge with the use of wind energy is that large HAWTs can interfere with helicopter operations and radar signatures. Therefore, the Office of Naval Research is funding a three-year project by Dabiri's group to test the smaller VAWTs and to further develop software tools to determine the optimal placement of turbines. "We believe that these smaller turbines provide the opportunity to generate renewable power while being complementary to the ongoing activities at the base," Dabiri says.

A second pilot project, funded by the Los Angeles Unified School District, will create a small wind farm that will help to power a new school while teaching its students about wind power. San Pedro High School's John M. and Muriel Olguin Campus, which opened in August 2012, was designed to be one of the greenest schools ever built, with solar panels, artificial turf, and a solar-heated pool—and the plan has long included the use of wind turbines.

"Here, the challenge is that you have a community nearby, and so if you used the very large horizontal-axis wind turbines, you would have the potential issue of the visual signature, the noise, and so on," Dabiri says. "These smaller turbines will be a demonstration of an alternative that's still able to generate wind energy but in a way that might be more agreeable to these communities."

That is one of the major benefits of VAWTs: being smaller, they fit into a landscape far more seamlessly than would 100-meter-tall horizontal-axis wind turbines. Because VAWTs can also be placed much closer to one another, many more of them can fit within a given area, allowing them to tap into more of the wind energy available in that space than is typically possible. What this all means is that a very productive wind farm can be built that has a lower environmental impact than previously possible.

That is especially appealing in roadless areas such as Alaska's Bristol Bay, located at the eastern edge of the Bering Sea. The villages around the bay—a crucial ecosystem for sockeye salmon—face particular challenges when it comes to meeting their energy needs. The high cost of transporting diesel fuel to the region to generate power creates a significant barrier to sustainable economic development. However, the region also has strong wind resources, and that's where Dabiri comes in.

With funding from the Gordon and Betty Moore Foundation, Dabiri and his group, in collaboration with researchers at the University of Alaska Fairbanks, will be starting a three-year project this summer to assess the performance of a VAWT wind farm in a village called Igiugig. The team will start by testing a few different VAWT designs. Among them is a new polymer rotor, designed by Caltech spinoffs Materia and Scalable Wind Solutions, which may withstand icing better than standard aluminum rotors.

"Once we've figured out which components from which vendors are most effective in that environment, the idea is to expand the project next year, to have maybe a dozen turbines at the site," Dabiri says. "To power the entire village, we'd be talking somewhere in the 50- to 70-turbine range, and all of those could be on an acre or two of land. That's one of the benefits—we're trying to generate the power without changing the landscape. It's pristine, beautiful land. You wouldn't want to completely change the landscape for the sake of producing energy."

Video and images of Dabiri's field site in the California desert can be found at http://dabiri.caltech.edu/research/wind-energy.html.

 

The Gordon and Betty Moore Foundation, established in 2000, seeks to advance environmental conservation and scientific research around the world and improve the quality of life in the San Francisco Bay Area. The Foundation's Science Program aims to make a significant impact on the development of provocative, transformative scientific research, and increase knowledge in emerging fields.

Writer: Kimm Fesenmaier

Senior Spotlight: Raymond Jimenez

Tech-savvy student headed to SpaceX

Caltech's Class of 2013 is a group of passionate, curious, and creative individuals who have spent their undergraduate years advancing research and challenging both conventional thinking and one another. They have thrived in a rigorous and unique academic environment and have built the kinds of skills in both leadership and partnership that will support them as they pursue their biggest and best ideas well into the future.

Over the next few weeks, the stories of just a few of these remarkable graduates will be featured here. Watch as they and their peers are honored at Caltech's 119th Commencement on June 14 at 10 a.m. If you can't be in Pasadena, the ceremony will be live-streamed at http://www.ustream.tv/caltech.

Talented, curious, fun, innovative, and wildly intelligent. Although these adjectives describe many Caltech students, they come to mind instantly when talking to senior Raymond Jimenez. The electrical engineering major is brimming with ideas and technical know-how. During his years at Caltech, for example, he developed new circuit designs for neural probes and figured out how to vastly increase the computing power available to students in Dabney, his undergraduate house; in his free time he has been designing a small-scale particle accelerator. Jimenez even helped improve the displays on the Tokyo Metro while interning for Mitsubishi last summer.

Jimenez, a native of Duarte, began his association with Caltech when he was a high-school junior at Polytechnic School (across the street from the Institute) and had the chance to work in the lab of Paul Bellan, a professor of applied physics. From the outset, Jimenez was struck by the freedom Caltech students were given to figure out unique solutions to research problems. Bellan gave Jimenez a spending allowance and suggested that he figure out how to make a smaller version of an experimental setup the group had been using to produce laboratory versions of the sun's coronal bursts. "There was a lot of trust put in me," Jimenez says. "I got to design my own hardware. I helped make a mock-up of the experiment using normal, basically everyday materials, instead of a bunch of fancy big-science things."

That's something Jimenez loves—showing that individuals can do science that is typically restricted to huge research teams with millions of dollars. He hopes that the room-sized particle accelerator for which he is currently drawing up plans will give students first-hand experience with such an instrument. "It's one thing to say you know that the paths of electrons can be bent using a magnetic field and another to actually go into the lab and then be able to say that you've bent the paths of electrons using a magnetic field," says Jimenez, who hopes to build the accelerator in the next few years. "A big dream of mine is to actually see if there's a way for interested people to not have to go through the usual channels to do science."

According to Jimenez, if you want to do science on a modest budget, it helps to know how to build things yourself. One of his favorite classes at Caltech was APh/EE 9, Solid-State Electronics for Integrated Circuits—a course then taught by Oskar Painter, also a professor of applied physics. In the class, students learn to make transistors for computer chips. "You might think that transistors are only available to really well-stocked companies with big research budgets, so it was really cool to be able to make one and participate in that hands-on way," he says.

After taking APh/EE 9, Jimenez completed a Summer Undergraduate Research Fellowship (SURF) project with Axel Scherer, the Bernard Neches Professor of Electrical Engineering, Applied Physics and Physics, in which he helped design circuitry for a computer chip that may eventually be used in tiny implantable neural probes that measure human brain activity.

Scherer describes Jimenez as "one of the most capable undergraduates whom I have had the pleasure of working with over my past 20 years at Caltech," adding that he has "extraordinary" abilities. "Raymond brought tremendous enthusiasm, talent, and insight to our neural probe project," Scherer says. "It was fun working with him on our research projects, and I think of him more as a scientific collaborator than as a student."

Jimenez has also been active outside of the lab and classroom. He has particularly enjoyed Caltech's unique housing system for undergraduate students and served as the vice president of Dabney House during his junior year. He was especially innovative as Dabney's system administrator, shifting available funding to create a scientific computing cluster with a hundred terabytes of storage for students to use for their classes and projects.

Sound as if Jimenez has been spread a bit thin? He says that one of the most significant skills he has gained from his time at Caltech is the ability to manage his priorities and his time. "The workload at Caltech showed me what I can achieve in a given time period," Jimenez says. "If you're pushed and you know what your total output can be, then you have a baseline standard to compare yourself to."

Starting in July, Jimenez will take that knowledge with him to the Hawthorne-based commercial space-transport company SpaceX, where he will work on the avionics team.

Writer: Kimm Fesenmaier

Notes from the Back Row: "Quantum Entanglement and Quantum Computing"

John Preskill, the Richard P. Feynman Professor of Theoretical Physics, is hooked on quanta. He was applying quantum theory to black holes back in 1994 when mathematician Peter Shor (BS '81), then at Bell Labs, showed that a quantum computer could factor a very large number in a very short time. Much of the world's confidential information is protected by codes whose security depends on numerical "keys" large enough to not be factorable in the lifetime of your average evildoer, so, Preskill says, "When I heard about this, I was awestruck." The longest number ever factored by a real computer had 193 digits, and it took "several months for a network of hundreds of workstations collaborating over the Internet," Preskill continues. "If we wanted to factor a 500-digit number instead, it would take longer than the age of the universe." And yet, a quantum computer running at the same processor speed could polish off 193 digits in one-tenth of a second, he says. Factoring a 500-digit number would take all of two seconds.
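Those two figures are mutually consistent if you assume the textbook scaling of Shor's algorithm, whose running time grows roughly as the cube of the number's length (the cubic exponent is a common simplification; the exact cost depends on how the multiplications are done):

$$t_{500} \approx t_{193}\left(\frac{500}{193}\right)^{3} \approx 0.1\ \mathrm{s} \times 17.4 \approx 1.7\ \mathrm{s},$$

or "all of two seconds." The best known classical factoring methods, by contrast, slow down faster than any fixed power of the number of digits—which is how months balloon into the age of the universe.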

While an ordinary computer chews through a calculation one bite at a time, a quantum computer arrives at its answer almost instantaneously because it essentially swallows the problem whole. It can do so because quantum information is "entangled," a state of being that is fundamental to the quantum world and completely foreign to ours. In the world we're used to, the two socks in a pair are always the same color. It doesn't matter who looks at them, where they are, or how they're looked at. There's no such independent reality in the quantum world, where the act of opening one of a matched pair of quantum boxes determines the contents of the other one—even if the two boxes are at opposite ends of the universe—but only if the other box is opened in exactly the same way. "Quantum boxes are not like soxes," Preskill says. (If entanglement sounds like a load of hooey to you, you're not alone. Preskill notes that Albert Einstein famously derided it back in the 1930s. "He called it 'spooky action at a distance,' and that sounds even more derisive when you say it in German—'Spukhafte Fernwirkungen!'")
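In textbook notation (a generic example, not necessarily the one Preskill used in the talk), a matched pair of quantum boxes is a two-qubit state such as

$$|\Phi^{+}\rangle = \frac{|00\rangle + |11\rangle}{\sqrt{2}}.$$

Open both boxes the same way—measure both qubits in the same basis—and the answers always agree, 00 or 11, each half the time; yet neither qubit carries a definite value of its own before the first box is opened.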

An ordinary computer processes "bits," which are units of information encoded in batches of electrons, patches of magnetic field, or some other physical form. The "qubits" of a quantum computer are encoded by their entanglement, and these entanglements come with a big Do Not Disturb sign. Because the informational content of a quantum "box" is unknown until you open it and look inside, qubits exist only in secret, making them ideal for spies and high finance. However, this impenetrable security is also the quantum computer's downfall. Such a machine would be morbidly sensitive—the slightest encroachment from the outside world would demolish the entanglement and crash the system.

Ordinary computers cope with errors by storing information in triplicate. If one copy of a bit gets corrupted, it will no longer match the other two; error-detecting software constantly checks the three copies against one another and returns the flipped bit to its original state. Fixing flipped bits when you're not allowed to look at them seems an impossible challenge on the face of it, but after reading Shor's paper Preskill decided to give it a shot. Over the next few years, he and his grad student Daniel Gottesman (PhD '97) worked on quantum error correction, eventually arriving at a mathematical procedure by which indirectly measuring the states of five qubits would allow an error in any one of them to be fixed.
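Even the classical version of the trick can be phrased so that the data are never read directly: only parities of pairs are checked, and the parities reveal where a flip occurred without revealing the stored value. That loophole is what quantum error correction exploits. Here is a minimal Python sketch of the classical three-copy case (far simpler than the five-qubit scheme Preskill and Gottesman devised, and offered only as an analogy):

def syndromes(bits):
    # Pairwise parity checks: these reveal *where* a flip happened,
    # but say nothing about the value being stored.
    b1, b2, b3 = bits
    return (b1 ^ b2, b2 ^ b3)

def correct(bits):
    # Each nonzero syndrome points to exactly one flipped copy.
    flipped = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndromes(bits))
    if flipped is not None:
        bits[flipped] ^= 1  # undo the flip
    return bits

codeword = [1, 1, 1]      # the bit 1, stored in triplicate
codeword[2] ^= 1          # noise flips one copy
print(correct(codeword))  # -> [1, 1, 1]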

This changed the barriers facing practical quantum computation from insurmountable to merely incredibly difficult. The first working quantum computers, built in several labs in the early 2000s, were based on lasers interacting with what Preskill describes as "a handful" of trapped ions to perform "a modest number of [logic] operations." An ion trap is about the size of a thermos bottle, but the laser systems and their associated electronics take up several hundred square feet of lab space. With several million logic gates on a typical computer chip, scaling up this technology is a really big problem. Is there a better way? Perhaps. According to Preskill, his colleagues at Caltech's Institute for Quantum Information and Matter are working out the details of a "potentially transformative" approach that would allow quantum computers to be made using the same silicon-based technologies as ordinary ones.

 "Quantum Entanglement and Quantum Computing" is available for download in HD from Caltech on iTunesU. (Episode 19)

Writer: Douglas Smith

Master's Exchange Program Agreement Signed with École Polytechnique

École Polytechnique and the California Institute of Technology (Caltech) signed an agreement on March 5, 2013, to establish a master's education exchange program. The program allows selected students from both institutions to follow an intensive joint program in Aeronautics or Space Engineering, or in Mechanics (both fluids and solids).

This agreement, the first of its kind signed by Caltech, results from the fruitful relationship between the two institutions, which have been exchanging students as well as visiting professors and scientists for many years. The dual master's degree program between Caltech and École Polytechnique, which predates this educational exchange program, won the Institute of International Education's Andrew Heiskell Award for Innovation in International Education in the category of International Exchange Partnerships.

"I am honored to have been part of the small group who met at the CNES offices  in  Paris  and concieved of this award winning program eight years ago," remarks Ares J. Rosakis, Caltech's Theodore von Kármán Professor of Aeronautics and Professor of Mechanical Engineering as well as Chair of the Caltech Division of Engineering and Applied Science. "It is wonderful to see the program evolve and be considered an exemplary model of international collaboration."

Caltech, home of NASA's Jet Propulsion Laboratory, works especially closely with École Polytechnique's Hydrodynamics Laboratory and Solid Mechanics Laboratory, as well as with French organizations closely tied to École Polytechnique, such as the Centre National d'Études Spatiales (CNES) and the French Aerospace Lab (Onera). This educational exchange program will thus reinforce the existing cooperation between Caltech and École Polytechnique and foster the development of a long-term partnership on basic research topics of interest to the aerospace and aeronautical sciences community.

"I am delighted by this program which further expands the relationship between two great institutions," said Caltech president Jean-Lou Chameau. Yves Demay, head of École Polytechnique, adds: "École Polytechnique is developing strategic partnerships worldwide with a few selected institutions, chosen for their unique scientific expertise, such as Caltech. Students and scientists from both institutions will benefit from this double degree program, which will ensure regular and reciprocal exchanges between our research center and Caltech's.

The program spans two academic years; students spend a year at each institution. Along with the two partnering institutions, the program has also been supported by the Partner University Fund. Upon completion of the program, students will receive both a master's degree in Mechanics from École Polytechnique and a master's degree in Aeronautics or Space Engineering from Caltech. The institutions will share equally in covering the students' expenses (tuition, mandatory fees, and stipend).

Writer: Caltech EAS Communications Office

Donald Coles

1924 – 2013

Donald Earl Coles (MS '48, PhD '53), professor of aeronautics, emeritus, passed away on May 2 at age 89. Coles was a master instrument builder famed for his ingeniously designed and precisely executed experiments. His doctoral dissertation provided the first comprehensive set of data on supersonic boundary layers. A few years later, he made an important addition to the theory of turbulent boundary layers that he christened "the law of the wake."

Coles was born on February 8, 1924, in St. Paul, Minnesota. His father, Courtney, was a streetcar driver, and his mother, Lorna, was a teacher. "I think she taught him his work ethic, and the value of education," says his son Kenneth (BS, MS '79), an associate professor of geoscience at the Indiana University of Pennsylvania. "Like any kid of his generation, he grew up dying to fly." A meticulous craftsman even in adolescence, Coles built exquisite airplane models from balsa wood and tissue paper and traded them to local pilots in exchange for flying lessons. He soloed at 16 and got his pilot's license before he learned to drive, Ken says. In his senior year of high school, his model-building prowess won him an engineering scholarship to the Boeing School of Aeronautics in Oakland, California. However, his path to a degree would itself encounter considerable turbulence.

Coles entered the Boeing School in 1941. The following year, it was pressed into service for military training. Civilian students had to leave, so Coles hired on as a detail draftsman with the Lockheed Aircraft Corporation in Burbank. When the draft age was lowered to 18, he entered the Army, and eventually "he signed up as a combat engineer, because he heard that they would send you to college," says Ken. "His whole mission in life was to get a college education."

The Army put Coles through a six-month crash course at the University of Michigan before sending him overseas in early 1944. According to Ken, "He got appendicitis just in time to not go in with his unit [the 291st Engineer Combat Battalion], which wound up having to shoot its way out of the Battle of the Bulge." However, he did catch up with the 291st in time to help build a tank-worthy pontoon bridge across the Rhine River at Remagen, allowing the Allies to enter the German heartland. The bridge, more than 1,100 feet long, was built in only 32 hours despite fierce opposition—including being "on the receiving end of a couple of V-2s," Ken says.

Coles finally earned his bachelor's degree in aeronautical engineering from the University of Minnesota in 1947, acquiring an aircraft engine mechanic's license that same year. He also married Ellen Searight, an editor at the University of Minnesota Press. By then part owner of a Cessna, Coles "courted my mom in the airplane," says Ken, adding that they once got lost over the endless fields of Wisconsin—on the trip to meet her folks. "He flew on until he found a railroad line, then turned and followed the tracks until he came to a water tower with the name of the town painted on it."

Minnesota aeronautics professor Jean Piccard urged Coles to "do something with himself" and apply to graduate school at Caltech, Ken says. (The Swiss-born Piccard twins were Star Trek creator Gene Roddenberry's inspiration for Captain Jean-Luc Picard; Jean was a pioneering high-altitude balloonist, while his brother Auguste invented the bathyscaphe—a sort of underwater zeppelin used for deep-ocean diving.) The newlyweds drove cross-country to Pasadena that summer, and Coles joined aeronautics professor Hans Liepmann's research group at the then Guggenheim Aeronautical Laboratory at the California Institute of Technology (GALCIT).

In his final three years as a grad student, Coles also worked full-time as a senior research engineer at Caltech's Jet Propulsion Lab. He used JPL's supersonic wind tunnel for his doctoral research, collecting data on turbulence in the so-called boundary layer at flow speeds ranging up to four and a half times the speed of sound.

Whenever a fluid flows past a solid—be it air over a wing, or oil in a pipeline—the molecules adjoining the surface tend to stick to it. Thus the flow velocity right at the wall is zero. How fast the flow increases as you begin to move away from the wall depends on the fluid's viscosity. This region is called the sublayer, and the flow within it smoothly follows the wall's contours. The boundary layer, where turbulence reigns, lies just beyond the sublayer. "Turbulence flowing along a surface is very complicated," Coles remarked in a recent interview, "because the surface changes the character of the turbulence. A large fraction of the field deals with this subject." Even so, a universal theory of turbulence remains elusive. "There is no theory," Coles said. "There is no truth about turbulence except experimental truth."

Coles drily pointed this out in his PhD thesis, which begins, "A contemporary Texan, J. Frank Dobie, has said in another context that research is frequently a process of moving old bones from one graveyard to another. Those who have tried to find their way recently in the formidable literature of boundary layers may agree that the metaphor is apt enough." Coles's dissertation won the 1953 Lawrence Sperry Award from the Institute of the Aeronautical Sciences (now the American Institute of Aeronautics and Astronautics) "for fundamental contributions to the understanding of supersonic skin friction."

Absent a grand unified theory, fluid mechanics relies on experimentally derived equations called "similarity laws." The first of these, the law of the wall, stems from work done at the turn of the 20th century by Ludwig Prandtl at the University of Göttingen and was published in 1930 by his student Theodore von Kármán, GALCIT's founding director. "The law of the wall says that the flow velocity in the boundary layer varies logarithmically with distance from the sublayer outward," Coles explained with a chuckle. "That logarithm has exercised a lot of people. There is no widely accepted theory that lets you derive it. It's just there. You do an experiment, and there it is."
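In modern notation (a standard textbook form, not a formula quoted from Coles), the law of the wall reads

$$\frac{u}{u_\tau} = \frac{1}{\kappa}\ln\!\left(\frac{y\,u_\tau}{\nu}\right) + B,$$

where \(u\) is the mean velocity at height \(y\) above the wall, \(u_\tau\) is the friction velocity set by the wall shear stress, \(\nu\) is the kinematic viscosity, \(\kappa \approx 0.41\) is the von Kármán constant, and \(B \approx 5.0\) is an empirical constant—the logarithm that, as Coles said, is "just there."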

In 1954, Francis Clauser (BS '34, MS '35, PhD '37), then at Johns Hopkins University, developed a fluid-mechanic equivalent of a lab rat—a special class of easily reproducible flows whose parameters greatly simplified point-by-point calculations within them. Said Coles, "As a postdoc, I looked at Clauser's flows and proposed a new similarity law, which I called the law of the wake." The law of the wake applies where the law of the wall leaves off. As a flow proceeds downstream, its boundary layer expands in a wedge like the wake of a passing speedboat. Within this wedge, fluid from beyond the boundary layer mixes into the flow in large, turbulent swirls. Here the flow rate at any point no longer depends on its distance from the wall, but on the distribution of angular momentum among the eddies. The resulting paper, titled "The law of the wake in the turbulent boundary layer," was published in the very first volume of the Journal of Fluid Mechanics in 1956.
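In one common modern form (conventions for the wake function vary), Coles's result extends the law of the wall with an additive wake term:

$$\frac{u}{u_\tau} = \frac{1}{\kappa}\ln\!\left(\frac{y\,u_\tau}{\nu}\right) + B + \frac{2\Pi}{\kappa}\sin^{2}\!\left(\frac{\pi}{2}\,\frac{y}{\delta}\right),$$

where \(\delta\) is the boundary-layer thickness and \(\Pi\), the wake parameter, measures how strongly the outer, wake-like region departs from the logarithm.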

Coles then turned his attention to a "lab rat" of his own—the Couette flow. Introduced in the late 1800s by French physicist Maurice Marie Alfred Couette, the apparatus consists of two concentric, independently rotating vertical cylinders. Setting one or both of them in motion causes the fluid filling the narrow space between them to circulate round and round. Couette flows are now widely used to study the transition between steady and turbulent flows in pipes and channels.
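For slow rotation, the flow between the cylinders has a simple closed-form profile (a standard textbook result, included here for orientation): the azimuthal velocity is

$$u_\theta(r) = Ar + \frac{B}{r}, \qquad A = \frac{\Omega_2 r_2^{2} - \Omega_1 r_1^{2}}{r_2^{2} - r_1^{2}}, \qquad B = \frac{(\Omega_1 - \Omega_2)\,r_1^{2} r_2^{2}}{r_2^{2} - r_1^{2}},$$

where \(r_1\) and \(r_2\) are the radii of the inner and outer cylinders and \(\Omega_1\) and \(\Omega_2\) their angular speeds. The rich behavior Coles studied begins where this laminar solution loses its stability.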

An elegant theoretical analysis of Couette flows had been published by Sir Geoffrey Taylor in 1923, says Anatol Roshko (MS '47, PhD '52), the Theodore von Kármán Professor of Aeronautics, Emeritus, and "Don built a very beautiful experiment to look into that further. Very well thought out and executed." Coles's version consisted of two eight-inch-tall concentric cylinders made of carefully polished glass. The half-inch gap between them contained clear oil in which tiny aluminum flecks were suspended. A lightbulb inside the cylinders illuminated the flecks so that their motions could be recorded. The flecks would arrange themselves into an impressive variety of patterns that depended on the two cylinders' speeds and their relative directions of rotation, but one set of patterns held a particular fascination.

When the outer cylinder was locked down and the inner one gradually spun faster and faster, paired sets of parallel bright and dark bands would suddenly fill the cylinder from top to bottom. Each pair of bands represented the top of a convection cell—the fluid near the spinning inner cylinder was flung outward by centrifugal force; meanwhile, the outermost fluid would drift inward, slowed down by the drag exerted on it by the stationary outer cylinder. The situation is similar to the large convective cells of hot air rising and cold air sinking that drive our planet's weather systems; in fact, both are called Taylor cells. As the inner cylinder's speed slowly increased, the bands began to undulate in waves traveling in the same direction at about one-third of the cylinder's speed. Then, as the cylinder revved ever faster, a second set of independent waves would suddenly burst into being, superimposing itself on the first set. This "doubly periodic flow," Coles would later write, had "a fascinating peculiarity"—"the flow pattern was observed to change abruptly, discontinuously, and irreversibly from one state to another at certain well defined and repeatable critical speeds."

Each state consisted of a specific number of Taylor cells undulating an integral number of times around the apparatus's circumference. While the number of cells and the number of waves in each state could be predicted from Taylor's equations, the transitions from one state to another could not. Coles cataloged 74 distinct transitions, and by plotting the order in which they occurred, he discovered that "at any specified speed . . . there exists a variety of possible operating states, sometimes more than 20 in number, among which the one actually observed is determined by the whole previous history of the experiment." Subtle differences in initial conditions could produce markedly different results, yet every pathway was repeatable. Coles summarized a decade's worth of work on Couette flows in a 40-page, lavishly illustrated paper that appeared in the Journal of Fluid Mechanics in 1965. The paper, which continues to be cited more than 1,000 times a year, has led to advances in the mathematics of group theory as well as in fluid mechanics.

Coles designed several of GALCIT's experimental facilities, including the 17-inch-diameter shock tube—in essence, a giant cannon. Built in the early years of the Space Age to study the shock waves encountered by ballistic missiles and space capsules as they reentered Earth's atmosphere, the tube was designed to operate at very low pressures—an unconventional approach that allowed researchers to put the shock waves under a magnifying glass, as it were. At normal atmospheric pressures, a shock wave is a few millionths of an inch thick. The shock waves broaden as the air gets thinner, until, at a pressure equivalent to an altitude of roughly 60 miles, they become about half an inch thick—enough to make precision measurements of their internal structure. The tube's unusually large diameter and its precision-machined, mirror-smooth interior were designed to minimize any distortions to the shock wave from the tube itself, which is made of stainless-steel pipe half an inch thick.

The shock tube is divided into two unequal lengths by a metal diaphragm—an eighth-inch-thick aluminum sheet. The tube's "driver" section is filled with an inert gas and pressurized until the diaphragm bulges into the "test" section like a balloon. The diaphragm presses up against an X-shaped set of knife edges and eventually ruptures; the pent-up gas behind it blasts into the test section, creating a shock wave that travels at up to eight times the speed of sound.

Coles introduced several innovations that are now standard practice. For example, "there's a flange on the low-pressure side, a flange on the high-pressure side, and the diaphragm in the middle," Roshko explains. "Shock tubes used to be built with a dozen or more bolts that you put through the flanges and tightened, but he designed a clamp that went around the whole thing. You just grab the flanges with a great big caliper, and a wedge inside squeezes them together." Swapping the broken diaphragm for a new one takes about 30 seconds, greatly reducing the time between shots, and the whole facility can be run by a single person.

The tube is 80 feet long, or about half the length of Guggenheim itself. It runs straight down what was the middle of Liepmann's third-floor lab, suspended overhead from a track bolted to the ceiling. This arrangement absorbs the recoil and allows the tube to be opened easily while keeping the floor clear for other important apparatus, including the Ping-Pong table "used for unsponsored research in low-speed aerodynamics." The initial studies were wrapped up by the end of the 1960s, and the shock tube "has been used for many things since then," Roshko says. "It's been very useful, because it's so easy to operate."

Coles inculcated his innovative style into generations of GALCIT's grad students via a course called Ae 104, Experimental Methods. Says Roshko, "He tackled some really sophisticated experiments," posing a question and setting the students to design and build the apparatus needed to answer it. "It couldn't be something very large scale, as it had to be built within the quarter and some results obtained," Roshko observes. Coming up with a research problem challenging enough to be worthwhile, yet doable in the allotted time, takes a real knack, but "he was very good at getting that done. And in a few cases, they developed into thesis topics."

In Ae 104, with his own grad students, and to his fellow faculty members, says Roshko, Coles's "innovations and detailed designs were very helpful. He was completely devoted to Caltech, and to GALCIT, and to a life of research. And that's important, because he himself was kind of daunting. It wasn't shyness, just a deep reservedness. Once he accepted you, he was incredibly generous with his time."

Coles's legendary perfectionism set the standard around GALCIT. "It pained him to see anything not done absolutely as well as it could possibly be done," Roshko says. "When I added an innovation to the 17-inch shock tube that saved a lot of machining, he was impressed, and he told me. That made me feel really good, coming from him—much more so than it would have, had it come from somebody else."

Coles retired in 1996 and began writing a definitive work on turbulent shear flow, based on the course notes and data he'd compiled over his career. The book, which was very nearly finished at the time of his death, will be published posthumously.

Coles was a member of the National Academy of Engineering, and a fellow of the American Institute of Aeronautics and Astronautics (AIAA), the American Physical Society (APS), and the American Association for the Advancement of Science. He received the AIAA's 1985 Hugh L. Dryden Medal and the APS's 1996 Otto Laporte Award for his body of work. In 2000, the Donald Coles Prize in Aeronautics was established; it is awarded at Commencement to the aero PhD "whose thesis displays the best design of an experiment or the best design for a piece of experimental equipment." And in 2011, GALCIT created the Donald Coles Lectureship in Aerospace in his honor.

Coles is survived by his wife, Ellen; by his four children, Christopher, Elizabeth, Kenneth, and Janet; by his sister, Marjorie Schlaegel; and by two grandchildren.

Plans for a memorial service will be announced at a later date. 

Writer: Douglas Smith

Fifty Years of Clearing the Skies

A Milestone in Environmental Science

Ringed by mountains and capped by a temperature inversion that traps bad air, Los Angeles has had bouts of smog since the turn of the 20th century. An outbreak in 1903 rendered the skies so dark that many people mistook it for a solar eclipse. Angelenos might now be living in a state of perpetual midnight—assuming we could live here at all—were it not for the work of Caltech Professor of Bio-organic Chemistry Arie Jan Haagen-Smit. How he did it is told here largely in his own words, excerpted from Caltech's Engineering & Science magazine between 1950 and 1962. (See "Related Links" for the original articles.)

Old timers, which in California means people who have lived here some 25 years, will remember the invigorating atmosphere of Los Angeles, the wonderful view of the mountains, and the towns surrounded by orange groves. Although there were some badly polluted industrial areas, it was possible to ignore them and live in more pleasant locations, especially the valleys . . . Just 20 years ago, the community was disagreeably surprised when the atmosphere was filled with a foreign substance that produced a strong irritation of the eyes. Fortunately, this was a passing interlude which ended with the closing up of a wartime synthetic rubber plant. (November 1962)

Alas, the "interlude" was an illusion. In the years following World War II, visibility often fell to a few blocks. The watery-eyed citizenry established the Los Angeles County Air Pollution Control District (LACAPCD) in 1947, the first such body in the nation. The obvious culprits—smoke-belching power plants, oil refineries, steel mills, and the like—were quickly regulated, yet the problem persisted. Worse, this smog was fundamentally different from air pollution elsewhere—the yellow, sulfur-dioxide-laced smog that killed 20 people in the Pennsylvania steel town of Donora in 1948, for example, or London's infamous pitch-black "pea-soupers," where the burning of low-grade, sulfur-rich coal added soot to the SO2. (The Great Smog of 1952 would carry off some 4,000 souls in four days.) By contrast, L.A.'s smog was brown and had an acrid odor all its own.

Haagen-Smit had honed his detective skills isolating and identifying the trace compounds responsible for the flavors of pineapples and fine wines, and in 1948 he began to turn his attention to smog.

Chemically, the most characteristic aspect of smog is its strong oxidizing action . . . The amount of oxidant can readily be determined through a quantitative measurement of iodine liberated from potassium iodide solution, or of the red color formed in the oxidation of phenolphthalin to the well-known acid-base indicator, phenolphthalein. To demonstrate these effects, it is only necessary to bubble a few liters of smog air through the colorless solutions. (December 1954)

His chief suspect was ozone, a highly reactive form of oxygen widely used as a bleach and a disinfectant. It's easy to make—a spark will suffice—and it's responsible for that crisp "blue" odor produced by an overloaded electric motor. But there was a problem:

During severe smog attacks, ozone concentrations of 0.5 ppm [parts per million], twenty times higher than in [clean] country air, have been measured. From such analyses the quantity of ozone present in the [Los Angeles] basin at that time is calculated to be about 500 tons.

Since ozone is subject to a continuous destruction in competition with its formation, we can estimate that several thousand tons of ozone are formed during a smog day. It is obvious that industrial sources or occasional electrical discharges do not release such tremendous quantities of ozone. (December 1954)
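Those tonnage figures check out at the back of an envelope. Taking a mixed layer 300 meters deep over roughly 3,000 square kilometers of basin (dimensions assumed for this estimate, not taken from Haagen-Smit):

$$m_{\mathrm{O_3}} \approx \underbrace{(3\times10^{9}\ \mathrm{m^2})(300\ \mathrm{m})(1.2\ \mathrm{kg/m^3})}_{\approx 10^{12}\ \mathrm{kg\ of\ air}} \times \underbrace{0.5\times10^{-6}}_{\mathrm{ppm\ by\ volume}} \times \underbrace{\tfrac{48}{29}}_{\mathrm{O_3/air\ molar\ masses}} \approx 9\times10^{5}\ \mathrm{kg},$$

the same order as the 500 tons quoted—as close as an estimate with an assumed mixing depth can be expected to get.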

If ozone really was to blame, where was it coming from? An extraordinary challenge lay ahead:

The analysis of air contaminants has some special features, due to the minute amounts present in a large volume of air. The state in which these pollutants are present—as gases, liquids and solid particles of greatly different sizes—presents additional difficulties. The small particles of less than one micron diameter do not settle out, but are in a stable suspension and form so-called aerosols.

The analytical chemist has devoted a great deal of effort to devising methods for the collection of this heterogeneous material. Most of these methods are based on the principle that the particles are given enough speed to collide with each other or with collecting surfaces . . . A sample of Los Angeles' air shows numerous oily droplets of a size smaller than 0.5 micron, as well as crystalline deposits of metals and salts . . . When air is passed through a filter paper, the paper takes on a grey appearance, and extraction with organic solvents gives an oily material. (December 1950)

Haagen-Smit suspected that this oily material, a complex brew of organic acids and other partially oxidized hydrocarbons, was smog's secret ingredient. In 1950, he took a one-year leave of absence from Caltech to prove it, working full-time in a specially equipped lab set up for him by the LACAPCD. By the end of the year, he had done so.

Through investigations initiated at Caltech, we know that the main source of this smog is due to the release of two types of material. One is organic material—mostly hydrocarbons from gasoline—and the other is a mixture of oxides of nitrogen. Each one of these emissions by itself would be hardly noticed. However, in the presence of sunlight, a reaction occurs, resulting in products which give rise to the typical smog symptoms. The photochemical oxidation is initiated by the dissociation of NO2 into NO and atomic oxygen. This reactive oxygen attacks organic material, resulting in the formation of ozone and various oxidation products . . . The oxidation reactions are generally accompanied by haze or aerosol formation, and this combination aggravates the nuisance effects of the individual components of the smog complex. (November 1962)
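Spelled out as the now-textbook reaction scheme, the cycle Haagen-Smit describes is

$$\mathrm{NO_2} + h\nu \rightarrow \mathrm{NO} + \mathrm{O}, \qquad \mathrm{O} + \mathrm{O_2} + M \rightarrow \mathrm{O_3} + M, \qquad \mathrm{O_3} + \mathrm{NO} \rightarrow \mathrm{NO_2} + \mathrm{O_2},$$

where M is any bystander molecule that carries off excess energy. Left alone, these three reactions form a closed loop that accumulates little ozone; hydrocarbon radicals break the loop by converting NO back to NO2 without consuming O3, and the ozone piles up.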

Professor of Plant Physiology Frits Went was also on the case. Went ran Caltech's Earhart Plant Research Laboratory, which he proudly called the "phytotron," by analogy to the various "trons" operated by particle physicists. (Phyton is the Greek word for plant.) "Caltech's plant physiologists happen to believe that the phytotron is as marvellously complicated as any of the highly-touted 'atom-smashing' machines," Went wrote in E&S in 1949. "[It] is the first laboratory in the world in which plants can be grown under every possible climatic condition. Light, temperature, humidity, gas content of the air, wind, rain, and fog—all these factors can be simultaneously and independently controlled. The laboratory can create Sacramento Valley climate in one room and New England climate in another." Most of Los Angeles was still orchards and fields instead of tract houses, and the smog was hurting the produce. Went, the LACAPCD, and the UC Riverside agricultural station tested five particularly sensitive crops in the phytotron, Haagen-Smit wrote.

The smog indicator plants include spinach, sugar beet, endive, alfalfa and oats. The symptoms on the first three species are mainly silvering or bronzing of the underside of the leaf, whereas alfalfa and oats show bleaching effects. Some fifty compounds possibly present in the air were tested on their ability to cause smog damage—without success. However, when the reaction products of ozone with unsaturated hydrocarbons were tried, typical smog damage resulted. (December 1950)

And yet a third set of experiments was under way. Rubber tires were rotting from the smog at an alarming rate, cracking as they flexed while rolling along the road. Charles E. Bradley, a research associate in biology, turned this distressing development into a cheap and effective analytical tool by cutting rubber bands by the boxful into short segments. The segments—folded double, secured with a twist of wire, and set outside—would start to fall apart almost before one could close the window. "During severe smog initial cracking appears in about four minutes, as compared to an hour or more required on smog-free days, or at night," Haagen-Smit wrote in the December 1954 E&S.

The conclusion that airborne gasoline and nitrogen oxides (another chief constituent of automobile exhaust) were to blame for smog was not well received by the oil refineries, which hired their own expert to prove Haagen-Smit wrong. Abe Zarem (MS '40, PhD '44), the manager and chairman of physics research for the Stanford Research Institute, opined that stratospheric ozone seeping down through the inversion layer was to blame. But seeing (or smelling) is believing, so Haagen-Smit fought back by giving public lectures in which he would whip up flasks full of artificial smog before the audience's eyes, which would soon be watering—especially if they were seated in the first few rows. By the end of his talk, the smog would fill the hall, and he became known throughout the Southland as Arie Haagen-Smog.

By 1954, he and Frits Went had carried the day.

[Plant] fumigations with the photochemical oxidation products of gasoline and nitrogen dioxide (NO2) was the basis of one of the most convincing arguments for the control of hydrocarbons by the oil industry. (December 1954)

It probably didn't hurt that an outbreak that October closed schools and shuttered factories for most of the month, and that angry voters were turning up at protest meetings wearing gas masks. By then, there were some two million cars on the road in the metropolitan area, spewing a thousand tons of hydrocarbons daily.

Incomplete combustion of gasoline allows unburned and partially burned fuel to escape from the tailpipe. Seepage of gasoline, even in new cars, past piston rings into the crankcase, is responsible for 'blowby' or crankcase vent losses. Evaporation from carburetor and fuel tank are substantial contributions, especially on hot days. (November 1962)

Haagen-Smit was a founding member of California's Motor Vehicle Pollution Control Board, established in 1960. One of the board's first projects was testing positive crankcase ventilation (PCV) systems, which sucked the blown-by hydrocarbons out of the crankcase and recirculated them through the engine to be burned on the second pass. PCV systems were mandated on all new cars sold in California as of 1963. The blowby problem was thus easily solved—but, as Haagen-Smit noted in that same article, it was only the second-largest source, representing about 30 percent of the escaping hydrocarbons.

The preferred method of control of the tailpipe hydrocarbon emission is a better combustion in the engine itself. (The automobile industry has predicted the appearance of more efficiently burning engines in 1965. It is not known how efficient these will be, nor has it been revealed whether there will be an increase or decrease of oxides of nitrogen.) Other approaches to the control of the tailpipe gases involve completing the combustion in muffler-type afterburners. One type relies on the ignition of gases with a sparkplug or pilot-burner; the second type passes the gases through a catalyst bed which burns the gases at a lower temperature than is possible with the direct-flame burners. (November 1962)

Installing an afterburner in the muffler has some drawbacks, not the least of which is that the notion of tooling around town with an open flame under the floorboards might give some people the willies. Instead, catalytic converters became required equipment on California cars in 1975.

In 1968, the Motor Vehicle Pollution Control Board became the California Air Resources Board, with Haagen-Smit as its chair. He was a member of the 1969 President's Task Force on Air Pollution, and the standards he helped those two bodies develop would eventually be adopted by the Environmental Protection Agency, established in 1970—the year that also saw the first celebration of Earth Day. It was also the year when ozone levels in the Los Angeles basin peaked at 0.58 parts per million, nearly five times the 0.12 parts per million that the EPA would declare to be safe for human health. This reading even exceeded the 0.5 ppm that Haagen-Smit had measured back in 1954, but it was a triumph nonetheless—the number of cars in L.A. had doubled, yet the smog was little worse than it had always been. That was the year we turned the corner, in fact, and our ozone levels have been dropping ever since—despite the continued influx of cars and people to the region.

Haagen-Smit retired from Caltech in 1971 as the skies began to clear, but continued to lead the fight for clean air until his death in 1977—of lung cancer, ironically, after a lifetime of cigarettes. Today, his intellectual heirs, including professors Richard Flagan, Mitchio Okumura, John Seinfeld, and Paul Wennberg, use analytical instruments descended from ones Haagen-Smit would have recognized and computer models sophisticated beyond his wildest dreams to carry the torch—a clean-burning one, of course—forward.

Writer: Douglas Smith

Oscar Bruno Named SIAM Fellow

The Society for Industrial and Applied Mathematics (SIAM) named Oscar P. Bruno, professor of applied and computational mathematics at Caltech, as a member of its 2013 Class of Fellows.

Bruno is one of 33 fellows selected by SIAM for "exemplary research as well as outstanding service to the community," according to the organization. "Through their contributions, the 2013 Class of Fellows is helping advance the fields of applied mathematics and computational science," the organization stated in a March 29 press release.

"SIAM is an organization that includes many of the leading applied mathematicians from around the world, so I am honored to have been selected for their 2013 Class of Fellows," Bruno says.

Bruno's research group at Caltech focuses on the development of accurate high-performance numerical partial differential equation solvers applicable to realistic scientific and engineering configurations. His research interests include numerical analysis, multiphysics modeling and simulation, and mathematical physics.

Bruno graduated from the Courant Institute of Mathematical Sciences at New York University in 1989, receiving the Friedrichs Prize for an outstanding dissertation in mathematics. He became an associate professor at Caltech in 1995 and a professor of applied and computational mathematics in 1998. He has served as executive officer of Caltech's Applied and Computational Mathematics department, and he is the recipient of a Young Investigator Award from the National Science Foundation and a Sloan Foundation Fellowship.

Bruno is a member of the SIAM Council, and he serves on the editorial boards of the SIAM Journal on Scientific Computing and the SIAM Journal on Applied Mathematics.

Writer: Brian Bell
