India Establishes Caltech Aerospace Fellowship

The Indian Department of Space / Indian Space Research Organisation (ISRO) has established a fellowship at the California Institute of Technology (Caltech) in the name of Satish Dhawan (1920–2002), a Caltech alumnus (Eng '49, PhD '51) and a pioneer of India's space program.

The Satish Dhawan Fellowship enables one aerospace engineering graduate per year from the Indian Institute of Space Science and Technology (IIST) to study at the Graduate Aerospace Laboratories at Caltech (GALCIT), as Dhawan himself did more than 60 years ago, when GALCIT was the Guggenheim Aeronautical Laboratory. Dhawan went on to serve as the director of the Indian Institute of Science (IISc), chairman of the Indian Space Commission and the ISRO, and president of the Indian Academy of Sciences. In 1969 he was named a Caltech Distinguished Alumnus.

Chaphalkar Aaditya Nitin, an IIST graduate with a Bachelor of Technology degree in aerospace engineering, has been named the first student to receive the fellowship and will start classes at GALCIT in October.

According to GALCIT director Guruswami (Ravi) Ravichandran, the John E. Goode, Jr., Professor of Aerospace and professor of mechanical engineering, ISRO established the fellowship to create a permanent pipeline of aerospace engineering leaders who will guide India's space program into the future.

"India has a very strong domestically grown space program," explains Ravichandran. "The ISRO is hoping to maintain its momentum by training students in much the same way that Dhawan was trained when he went through GALCIT decades ago."

The first three directors of what is now India's National Aerospace Laboratories were GALCIT alumni—Parameswar Nilakantan (MS '42), S. R. Valluri (MS '50, PhD '54), and Roddam Narasimha (PhD '61).

"The ISRO is honoring Dhawan and Caltech with this fellowship, and it is also recognizing the historical connections between engineers and scientists in the United States and India," says Ares Rosakis, Caltech's Theodore von Kármán Professor of Aeronautics and Mechanical Engineering and Otis Booth Leadership Chair, Division of Engineering and Applied Science. "It is an endorsement of GALCIT's fundamental research approach and rigorous curriculum.

"Most academic fellowships come from private philanthropy. It is extremely rare for a government institution to endow a fellowship intended for a private research institution such as Caltech," he says.

"GALCIT has had impact on aeronautics and aerospace development in the United States and abroad, not by training engineers in large numbers but by training engineering leaders," Rosakis says. "GALCIT graduates include CEOs of aerospace companies and the heads of departments at places like MIT, Georgia Tech, and the University of Illinois. If we were to create one von Kármán every half century, as well as a few aerospace CEOs, and a few Dhawans, we would be happy."

Writer: Brian Bell

Figuring Out Flow Dynamics

Engineers gain insight into turbulence formation and evolution in fluids

Turbulence is all around us—in the patterns that natural gas makes as it swirls through a transcontinental pipeline or in the drag that occurs as a plane soars through the sky. Reducing such turbulence on, say, an airplane wing would cut down on the amount of power the plane has to put out just to get through the air, thereby saving fuel. But in order to reduce turbulence—a very complicated phenomenon—you need to understand it, a task that has proven to be quite a challenge.

Since 2006, Beverley McKeon, professor of aeronautics and associate director of the Graduate Aerospace Laboratories at the California Institute of Technology (Caltech), and collaborator Ati Sharma, a senior lecturer in aerodynamics and flight mechanics at the University of Southampton in the U.K., have been working together to build models of turbulent flow. Recently, they developed a new and improved way of looking at the composition of turbulence near walls, the type of flow that dominates our everyday life.

Their research could lead to significant fuel savings, as a large amount of energy is consumed by ships and planes, for example, to counteract turbulence-induced drag. Finding a way to reduce that turbulence by 30 percent would save the global economy billions of dollars in fuel costs and associated emissions annually, says McKeon, a coauthor of a study describing the new method published online in the Journal of Fluid Mechanics on July 8.

"This kind of turbulence is responsible for a large amount of the fuel that is burned to move humans, freight, and fluids such as water, oil, and natural gas, around the world," she says. "[Caltech physicist Richard] Feynman described turbulence as 'one of the last unsolved problems of classical physics,' so it is also a major academic challenge."

Wall turbulence develops when fluids—liquid or gas—flow past solid surfaces at anything but the slowest flow rates. Progress in understanding and controlling wall turbulence has been somewhat incremental because of the massive range of scales of motion involved—from the width of a human hair to the height of a multi-floor building in relative terms—says McKeon, who has been studying turbulence for 16 years. Her latest work, however, now provides a way of analyzing a large-scale flow by breaking it down into discrete, more easily analyzed bits. 

McKeon and Sharma devised a new method of looking at wall turbulence by reformulating the equations that govern the motion of fluids—called the Navier-Stokes equations—into an infinite set of smaller, simpler subequations, or "blocks," with the characteristic that they can be simply added together to introduce more complexity and eventually get back to the full equations. But the benefit comes in what can be learned without needing the complexity of the full equations. Calling the results from analysis of each one of those blocks a "response mode," the researchers have shown that commonly observed features of wall turbulence can be explained by superposing, or adding together, a very small number of these response modes, even as few as three. 

In 2010, McKeon and Sharma showed that analysis of these blocks can be used to reproduce some of the characteristics of the velocity field, like the tendency of wall turbulence to favor eddies of certain sizes and distributions. Now, the researchers also are using the method to capture coherent vortical structure, caused by the interaction of distinct, horseshoe-shaped spinning motions that occur in turbulent flow. Increasing the number of blocks included in an analysis increases the complexity with which the vortices are woven together, McKeon says. With very few blocks, things look a lot like the results of an extremely expensive, real-flow simulation or a full laboratory experiment, she says, but the mathematics are simple enough to be performed, mode-by-mode, on a laptop computer.
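The superposition itself is simple enough to sketch. The toy Python fragment below builds a velocity-like field by adding together three invented traveling-wave modes; the mode shapes, wave speeds, and amplitudes are placeholders chosen for illustration, not the actual response modes McKeon and Sharma derive from the Navier-Stokes equations.

    import numpy as np

    x = np.linspace(0, 2 * np.pi, 256)   # streamwise coordinate
    y = np.linspace(0, 1, 128)           # wall-normal coordinate
    X, Y = np.meshgrid(x, y)

    def response_mode(kx, c, t):
        """A toy traveling wave with wavenumber kx and speed c, pinned at the walls."""
        return np.sin(np.pi * Y) * np.cos(kx * (X - c * t))

    # Superpose just three modes, echoing the finding that a handful of
    # response modes reproduces commonly observed features of wall turbulence.
    t = 0.0
    field = (1.0 * response_mode(1, 0.5, t)
             + 0.5 * response_mode(3, 0.8, t)
             + 0.25 * response_mode(6, 1.0, t))
    print(field.shape)   # a (128, 256) field, cheap enough for any laptop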

"We now have a low-cost way of looking at the 'skeleton' of wall turbulence," says McKeon, explaining that similar previous experiments required the use of a supercomputer. "It was surprising to find that turbulence condenses to these essential building blocks so easily. It's almost like discovering a lens that you can use to focus in on particular patterns in turbulence."

Using this lens helps to reduce the complexity of what the engineers are trying to understand, giving them a template that can be used to try to visually—and mathematically—identify order from flows that may appear to be chaotic, she says. Scientists had proposed the existence of some of the patterns based on observations of real flows; using the new technique, these patterns now can be derived mathematically from the governing equations, allowing researchers to verify previous models of how turbulence works and improve upon those ideas.

Understanding how the formulation can capture the skeleton of turbulence, McKeon says, will allow the researchers to modify turbulence in order to control flow and, for example, reduce drag or noise.

"Imagine being able to shape not just an aircraft wing but the characteristics of the turbulence in the flow over it to optimize aircraft performance," she says. "It opens the doors for entirely new capabilities in vehicle performance that may reduce the consumption of even renewable or non-fossil fuels."

Funding for the research outlined in the Journal of Fluid Mechanics paper, titled "On coherent structure in wall turbulence," was provided by the Air Force Office of Scientific Research. The paper is the subject of a "Focus on Fluids" feature article, written by Joseph Klewicki of the University of New Hampshire, that will appear in an upcoming print issue of the same journal.

Writer: Katie Neith

Pushing Microscopy Beyond Standard Limits

Caltech engineers show how to make cost-effective, ultra-high-performance microscopes

Engineers at the California Institute of Technology (Caltech) have devised a method to convert a relatively inexpensive conventional microscope into a billion-pixel imaging system that significantly outperforms the best available standard microscope. Such a system could greatly improve the efficiency of digital pathology, in which specialists need to review large numbers of tissue samples. By making it possible to produce robust microscopes at low cost, the approach also has the potential to bring high-performance microscopy capabilities to medical clinics in developing countries.

"In my view, what we've come up with is very exciting because it changes the way we tackle high-performance microscopy," says Changhuei Yang, professor of electrical engineering, bioengineering and medical engineering at Caltech.  

Yang is senior author on a paper that describes the new imaging strategy, which appears in the July 28 early online version of the journal Nature Photonics.

Until now, the physical limitations of microscope objectives—their optical lenses—have posed a challenge in terms of improving conventional microscopes. Microscope makers tackle these limitations by using ever more complicated stacks of lens elements in microscope objectives to mitigate optical aberrations. Even with these efforts, these physical limitations have forced researchers to decide between high resolution and a small field of view on the one hand, or low resolution and a large field of view on the other. That has meant that scientists have either been able to see a lot of detail very clearly but only in a small area, or they have gotten a coarser view of a much larger area.

"We found a way to actually have the best of both worlds," says Guoan Zheng, lead author on the new paper and the initiator of this new microscopy approach from Yang's lab. "We used a computational approach to bypass the limitations of the optics. The optical performance of the objective lens is rendered almost irrelevant, as we can improve the resolution and correct for aberrations computationally."

Indeed, using the new approach, the researchers were able to improve the resolution of a conventional 2X objective lens to the level of a 20X objective lens. Therefore, the new system combines the field-of-view advantage of a 2X lens with the resolution advantage of a 20X lens. The final images produced by the new system contain 100 times more information than those produced by conventional microscope platforms. And building upon a conventional microscope, the new system costs only about $200 to implement.
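The factor of 100 is just the squared ratio of the two resolutions; a one-line back-of-the-envelope check (our reading of the numbers, not a calculation from the paper):

    # A 10x gain in linear resolution (2X -> 20X) applies along both image
    # dimensions, so the gain in resolvable pixels is 10 * 10 = 100.
    linear_gain = 20 / 2
    print(linear_gain ** 2)   # 100.0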

"One big advantage of this new approach is the hardware compatibility," Zheng says, "You only need to add an LED array to an existing microscope. No other hardware modification is needed. The rest of the job is done by the computer."  

The new system acquires about 150 low-resolution images of a sample, each corresponding to one LED element in the LED array, so that in each image the sample is illuminated by light coming from a different, known direction. A novel computational approach, termed Fourier ptychographic microscopy (FPM), is then used to stitch together these low-resolution images to form the high-resolution intensity and phase information of the sample—a much more complete picture of the entire light field of the sample.
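The stitching step can be sketched in code. The fragment below is a heavily simplified, generic Fourier-ptychography recovery loop written for illustration only: the function name, the pixel offsets standing in for LED illumination angles, and the binary pupil mask are all our assumptions, and the published FPM algorithm includes refinements not shown here.

    import numpy as np

    def fpm_reconstruct(images, offsets, hr_shape, pupil, n_iter=10):
        """Toy FPM: stitch low-res intensity images (one per LED) into a
        high-res complex field. offsets give each image's sub-region, in
        pixels, within the centered high-res spectrum."""
        m, n = images[0].shape
        spectrum = np.zeros(hr_shape, dtype=complex)   # high-res Fourier estimate
        for _ in range(n_iter):
            for img, (cx, cy) in zip(images, offsets):
                region = (slice(cx, cx + m), slice(cy, cy + n))
                sub = spectrum[region] * pupil                    # what this LED "sees"
                low = np.fft.ifft2(np.fft.ifftshift(sub))         # low-res field estimate
                low = np.sqrt(img) * np.exp(1j * np.angle(low))   # force measured intensity, keep phase
                update = np.fft.fftshift(np.fft.fft2(low))
                spectrum[region] = spectrum[region] * (1 - pupil) + update * pupil
        return np.fft.ifft2(np.fft.ifftshift(spectrum))           # high-res intensity and phase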

Yang explains that when we look at light from an object, we are only able to sense variations in intensity. But light varies in terms of both its intensity and its phase, which is related to the angle at which light is traveling.

"What this project has developed is a means of taking low-resolution images and managing to tease out both the intensity and the phase of the light field of the target sample," Yang says. "Using that information, you can actually correct for optical aberration issues that otherwise confound your ability to resolve objects well."

The very large field of view that the new system can image could be particularly useful for digital pathology applications, where the typical process of using a microscope to scan the entirety of a sample can take tens of minutes. Using FPM, a microscope does not need to scan over the various parts of a sample—the whole thing can be imaged all at once. Furthermore, because the system acquires a complete set of data about the light field, it can computationally correct errors—such as out-of-focus images—so samples do not need to be rescanned.

"It will take the same data and allow you to perform refocusing computationally," Yang says.

The researchers say that the new method could have wide applications not only in digital pathology but also in everything from hematology to wafer inspection to forensic photography. Zheng says the strategy could also be extended to other imaging methodologies, such as X-ray imaging and electron microscopy.

The paper is titled "Wide-field, high-resolution Fourier ptychographic microscopy." Along with Yang and Zheng, Caltech graduate student Roarke Horstmeyer is also a coauthor. The work was supported by a grant from the National Institutes of Health.

Writer: Kimm Fesenmaier

Otis Booth Leadership Chair Established at Caltech

$10 Million Will Accelerate Innovation

With a $10 million gift, the Los Angeles–based Otis Booth Foundation has created and endowed the Otis Booth Leadership Chair for the Division of Engineering and Applied Science (EAS) at Caltech. This endowment will provide a permanent source of funds to foster novel research ventures and educational programs.

The $10 million endowment will accelerate the invention of technologies with the potential to benefit society by giving the EAS division chair flexible funds to invest in promising ideas that arise within the division. Caltech has had the highest subject-specific rankings for engineering and technology in the world for three years running (according to the Times Higher Education World University Rankings).

"The first funds from the endowment will support time-sensitive research that is too high risk for most traditional grants," says Ares Rosakis, the inaugural holder of the Booth Leadership Chair, Chair of the Division of Engineering and Applied Science, and the Theodore von Kármán Professor of Aeronautics and Professor of Mechanical Engineering. He also plans to use the endowment to support teaching innovations—including future online courses co-taught by EAS faculty and JPL scientists—and to increase funds for student aid, faculty start-up packages, and cutting-edge research equipment.

"EAS faculty and students make an impact by driving fast-growing fields like medical engineering and addressing urgent challenges such as the development of affordable renewable energy," Rosakis says. "The Booth Leadership Chair will help me and future division chairs support potentially transformational research and educational efforts that could not go forward without early funds. This gift has the potential to benefit the world by fostering EAS's development of vital new technologies and even more vital new young leaders. I am thankful to the Booth family and want to acknowledge their tremendous vision."

"I am excited to see what inventions and ideas become realities as Dr. Rosakis and his successors at the helm of EAS use this endowment now and far into the future," says Lynn Booth, president of the Otis Booth Foundation, a Caltech trustee, and a prominent Los Angeles philanthropist. Booth's late husband, Franklin Otis Booth Jr., established the foundation in 1967. He became an investor, newspaper executive, rancher, and philanthropist after graduating from Caltech in 1944 with a BS in electrical engineering. 

"My husband held Caltech in high regard," says Booth. "I am thrilled that this gift in his honor will connect his name with the pioneering work in EAS forever and give Caltech greater financial flexibility to respond to special opportunities and unforeseen challenges."

"The Booth Leadership Chair will help fuel groundbreaking research and education in engineering and applied sciences, as well as support the development of novel technologies," says President Jean-Lou Chameau. "Our trustee, friend, and visionary partner Lynn Booth and her family and friends at the Otis Booth Foundation are investing in changing the world—the example they set is inspiring."

Over the years, six faculty members in EAS have won the National Medal of Technology and Innovation and two have won the National Medal of Science. Of the 96 professors currently in the division, 30 percent are members of the National Academy of Engineering, 11 percent are members of the National Academy of Sciences, and 16 percent are fellows of the American Academy of Arts and Sciences.

 


Michael Roukes Honored by the French Republic

The French Republic honored Michael Roukes, Robert M. Abbey Professor of Physics, Applied Physics, and Bioengineering at the California Institute of Technology (Caltech), with the Chevalier de l'Ordre des Palmes Académiques (Knight of the Order of Academic Palms).

Roukes received his medal and diploma in a luncheon ceremony June 11 on the Caltech campus. Attending the ceremony were representatives from the French government, including Axel Cruau, French consul general in Los Angeles, who presented Roukes with the award; Caltech president Jean-Lou Chameau; and Roukes's family and Institute colleagues.

Emperor Napoleon Bonaparte established the Palmes Académiques awards in 1808 to honor faculty from the University of Paris. During the following two centuries, the French Republic extended eligibility for the award to include other citizens and foreigners who have made significant contributions to French education and culture. There are three grades of Palmes Académiques medals: commandeur, officier, and chevalier.

"I am honored to receive the Order of Academic Palms from the French Republic," Roukes says. "Through the cooperation and collaboration we have fostered, French and American scientists greatly increase their chances of making new discoveries that benefit both nations."

Roukes, who came to Caltech in 1992, is an expert in the field of quantum measurement and applied biotechnology. He was the founding director of the Kavli Nanoscience Institute at Caltech from 2004 to 2006 and codirector from 2008 to 2013. He has led numerous cross-disciplinary collaborations at Caltech and with researchers from other institutions that explore the frontiers of nanoscience. In 2010, Roukes received the Director's Pioneer Award from the National Institutes of Health for his work developing nanoscale tools for use in medical research.

In his presentation remarks, Cruau cited Roukes's collaborations with French research institutions, in particular the Laboratoire d'Electronique et des Technologies de l'Information (LETI) in Grenoble, France. Roukes and LETI CEO Laurent Malier cofounded the Alliance for Nanosystems VLSI, which focuses on creating complex nanosystems-based tools for science and industry. Cruau also noted Roukes's instrumental role in helping to start the Brain Research through Advancing Innovative Neurotechnologies, or BRAIN, Initiative, a national program proposed by President Barack Obama to build a comprehensive map of activity in the human brain.

Other Palmes Académiques recipients at Caltech are Ares J. Rosakis, von Kármán Professor of Aeronautics and professor of mechanical engineering, and chair of the Division of Engineering and Applied Science, who was named a Commandeur de l'Ordre des Palmes Académiques in 2012, and Guruswami Ravichandran, John E. Goode, Jr., Professor of Aerospace and professor of mechanical engineering, and director of the Graduate Aerospace Laboratories, who received a Chevalier de l'Ordre des Palmes Académiques in 2011.

Writer: Brian Bell

Seeing Data

More data are being created, consumed, and transported than ever before, and in all areas of society, including business, government, health care, and science. The hope and promise is that this influx of information—known as big data—will be transformative: armed with insightful information, businesses will make more money while providing improved customer service, governments will be more efficient, medical science will be better able to prevent and treat diseases, and science will make discoveries that otherwise would not be possible.

But to do all that, people have to be able to make sense of the data. Scientists and engineers are employing new computational techniques and algorithms to do just that, but sometimes, to gain significant insights from data, you have to see them—and as the data become increasingly complex and bountiful, traditional bar graphs and scatter plots just won't do. And so, not only will scientists and engineers have to overcome the technical challenges of processing and analyzing such large amounts of complex data—often in real time, as the data are being collected—but they also will need new ways to visualize and interact with that information.

In a recent symposium hosted at Caltech in collaboration with the Jet Propulsion Laboratory (JPL) and Art Center College of Design in Pasadena, computer scientists, artists, and designers gathered to discuss what they called the "emerging science of big-data visualization." The speakers laid out their vision for the potential of data visualization and demonstrated its utility, power, and beauty with myriad examples.

Data visualization represents a natural intersection of art and science, says Hillary Mushkin, a visiting professor of art and design in mechanical and civil engineering at Caltech and one of the co-organizers of the symposium. Artists and scientists alike pursue endeavors that involve questions, research, and creativity, she explains. "Visualization is another entry point to the same practice—another kind of inquiry that we are already engaged in."

Traditionally, data visualization tends to be reductionist, said self-described data artist Jer Thorp in his talk at the symposium. Charts and graphs are usually used to distill complex data into a simpler message or idea. But the promise of big data is that it contains hidden insight and knowledge. To gain that deeper understanding, he explained, we must embrace the inherent complexity of data. Data visualization, therefore, should be revelatory instead of just reductionist—not simply a way to convey information or find answers, Thorp said, but a way to generate and cultivate questions. Or, as he put it, the goal is question farming rather than answer farming.

For example, Thorp used Twitter to generate a model of worldwide travel—a model that, he said, was inspired by the desire to describe how viruses spread. He searched for tweets that included the phrase "just landed in" and recorded the tweeted destinations. Combining that information with the home locations of the travelers, as listed in their Twitter profiles, Thorp created an animated graphic depicting air travel. Since one way that diseases spread globally is through air travel, the graphic—while rudimentary, he admitted—could be a starting point for epidemiological models of disease outbreaks.
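As a rough illustration of that pipeline (our sketch, with made-up records standing in for the live Twitter data Thorp collected):

    import re
    from collections import Counter

    # (profile_location, tweet_text) pairs; placeholders for real tweets
    tweets = [
        ("Pasadena, CA", "Just landed in Tokyo!"),
        ("Tokyo", "just landed in London."),
        ("London", "Just landed in Pasadena."),
    ]

    pattern = re.compile(r"just landed in ([\w ,]+)[!.]", re.IGNORECASE)

    routes = Counter()
    for origin, text in tweets:
        match = pattern.search(text)
        if match:
            routes[(origin, match.group(1).strip())] += 1   # one origin -> destination trip

    for (origin, dest), count in routes.items():
        print(origin, "->", dest, count)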

Data visualization also may be interactive, allowing users to manipulate the data and peel back multiple layers of information. Thorp created such a graphic to represent the first four months of data from NASA's Kepler mission—the space telescope that has discovered thousands of possible planets. A user not only can visualize all the planet candidates from the data set but also can reorganize the planets in terms of size, temperature, or other variables.

Artist Golan Levin demonstrated how art and data can be used to provoke thoughts and ideas. One example is a project called The Secret Life of Numbers, in which he counted the number of pages that Google returned when searching for each integer from 0 to 1,000,000. He used that data to create an interactive graphic that shows interesting trends—the popularity of certain numbers like 911, 1234, or 90210, for example.

Anja-Silvia Goeing, a lecturer in history at Caltech, described the history of data visualization, highlighting centuries-old depictions of data, including maps of diseases in London; drawings and etchings of architecture, mechanical devices, and human anatomy; letter collections and address books as manifestations of social networks; and physical models of crystals and gemstones.

Data visualization, Goeing noted, has been around for many generations. What's new now is the need to visualize lots of complex data, and that, the symposium speakers argued, means we need to change how we think about data visualization. The idea that it is simply a technique or a tool is limiting, said designer Eric Rodenbeck during his presentation. Instead, we must think of visualization as a medium through which data can be explored, understood, and communicated.

This summer, Mushkin and the symposium's other co-organizers—Scott Davidoff, the manager of the human interfaces group at JPL, and Maggie Hendrie, the chair of interaction design at Art Center—are mentoring undergraduate students from around the country who are working on data-visualization research projects at Caltech and JPL. One group of students will work with two Caltech researchers—Ralph Adolphs, Bren Professor of Psychology and Neuroscience and professor of biology, and director of the Caltech Brain Imaging Center, and Mike Tyszka, associate director of the Caltech Brain Imaging Center—to visualize neural interactions deep inside the brain. Another will work with Caltech professor of aeronautics Beverley McKeon to visualize how fluid flows over walls. The third project involves systems engineering at JPL, challenging students to create a tool to visualize the complex process by which space missions are designed. Mushkin, Davidoff, and Hendrie also plan to invite more speakers to Caltech to talk about data visualization later in the summer.

You can watch all of the symposium talks on Caltech's YouTube channel.

 

Writer: Marcus Woo

Caltech's Unique Wind Projects Move Forward

Caltech fluid-mechanics expert John Dabiri has some big plans for a high school in San Pedro, military bases in California, and a small village on Bristol Bay, Alaska—not to mention for the future of wind power generation, in general.

Back in 2009, Dabiri, a professor of aeronautics and bioengineering, was intrigued by the pattern of spinning vortices that trail fish as they swim. Curious, he assigned some graduate students to find out what would happen to a wind farm's power output if its turbines were spaced like those fishy vortices. In simulations, energy production jumped by a factor of 10. To prove that the same effect would occur under real-world conditions, Dabiri and his students established a field site in the California desert with 24 turbines. Data gathered from the site proved that placing turbines in a particular orientation in relation to one another profoundly improves their energy-generating efficiency.

The turbines Dabiri has been investigating aren't the giant pinwheels with blades like propellers—known as horizontal-axis wind turbines (HAWTs)—that most people envision when they think about wind power. Instead, Dabiri's group uses much shorter turbines that look something like egg beaters sticking out of the ground. Dabiri and his colleagues believe that with further development, these so-called vertical-axis wind turbines (VAWTs) could dramatically decrease the cost, footprint, and environmental impact of wind farms.

"We have been able to demonstrate that using wind turbines that are 30 feet tall, as opposed to 300 feet tall, could generate sufficient power for wind-farm applications," Dabiri says. "That's important for us because our approach to getting to lower-cost energy is through the use of smaller vertical-axis wind turbines that are simpler—for example, they have no gearbox and don't need to be pointed in the direction of the oncoming wind—and whose performance can be optimized by arranging them properly."

Even as Dabiri and his group continue to study the physics of the wind as it moves through their wind farm and to develop computer models that will help them to predict optimal configurations for turbines in different areas, they are now beginning several pilot projects to test their concept.

"One of the areas where these smaller turbines can have an immediate impact is in the military," says Dabiri. Indeed, the Department of Defense is one of the largest energy consumers in the country and is interested in using renewable methods to meet some of that need. However, one challenge with the use of wind energy is that large HAWTs can interfere with helicopter operations and radar signatures. Therefore, the Office of Naval Research is funding a three-year project by Dabiri's group to test the smaller VAWTs and to further develop software tools to determine the optimal placement of turbines. "We believe that these smaller turbines provide the opportunity to generate renewable power while being complementary to the ongoing activities at the base," Dabiri says.

A second pilot project, funded by the Los Angeles Unified School District, will create a small wind farm that will help to power a new school while teaching its students about wind power. San Pedro High School's John M. and Muriel Olguin Campus, which opened in August 2012, was designed to be one of the greenest schools ever built, with solar panels, artificial turf, and a solar-heated pool—and the plan has long included the use of wind turbines.

"Here, the challenge is that you have a community nearby, and so if you used the very large horizontal-axis wind turbines, you would have the potential issue of the visual signature, the noise, and so on," Dabiri says. "These smaller turbines will be a demonstration of an alternative that's still able to generate wind energy but in a way that might be more agreeable to these communities."

That is one of the major benefits of VAWTs: being smaller, they fit into a landscape far more seamlessly than would 100-meter-tall horizontal-axis wind turbines. Because VAWTs can also be placed much closer to one another, many more of them can fit within a given area, allowing them to tap into more of the wind energy available in that space than is typically possible. What this all means is that a very productive wind farm can be built that has a lower environmental impact than previously possible.

That is especially appealing in roadless areas such as Alaska's Bristol Bay, located at the eastern edge of the Bering Sea. The villages around the bay—a crucial ecosystem for sockeye salmon—face particular challenges when it comes to meeting their energy needs. The high cost of transporting diesel fuel to the region to generate power creates a significant barrier to sustainable economic development. However, the region also has strong wind resources, and that's where Dabiri comes in.

With funding from the Gordon and Betty Moore Foundation, Dabiri and his group, in collaboration with researchers at the University of Alaska Fairbanks, will be starting a three-year project this summer to assess the performance of a VAWT wind farm in a village called Igiugig. The team will start by testing a few different VAWT designs. Among them is a new polymer rotor, designed by Caltech spinoffs Materia and Scalable Wind Solutions, which may withstand icing better than standard aluminum rotors.

"Once we've figured out which components from which vendors are most effective in that environment, the idea is to expand the project next year, to have maybe a dozen turbines at the site," Dabiri says. "To power the entire village, we'd be talking somewhere in the 50- to 70-turbine range, and all of those could be on an acre or two of land. That's one of the benefits—we're trying to generate the power without changing the landscape. It's pristine, beautiful land. You wouldn't want to completely change the landscape for the sake of producing energy."

Video and images of Dabiri's field site in the California desert can be found at http://dabiri.caltech.edu/research/wind-energy.html.

 

The Gordon and Betty Moore Foundation, established in 2000, seeks to advance environmental conservation and scientific research around the world and improve the quality of life in the San Francisco Bay Area. The Foundation's Science Program aims to make a significant impact on the development of provocative, transformative scientific research, and increase knowledge in emerging fields.

Writer: Kimm Fesenmaier

Senior Spotlight: Raymond Jimenez

Tech-savvy student headed to SpaceX

Caltech's Class of 2013 is a group of passionate, curious, and creative individuals who have spent their undergraduate years advancing research and challenging both conventional thinking and one another. They have thrived in a rigorous and unique academic environment and built the kinds of skills in both leadership and partnership that will support them as they pursue their biggest and best ideas well into the future.

Over the next few weeks, the stories of just a few of these remarkable graduates will be featured here. Watch as they and their peers are honored at Caltech's 119th Commencement on June 14 at 10 a.m. If you can't be in Pasadena, the ceremony will be live-streamed at http://www.ustream.tv/caltech.

Talented, curious, fun, innovative, and wildly intelligent. Although these adjectives describe many Caltech students, they come to mind instantly when talking to senior Raymond Jimenez. The electrical engineering major is brimming with ideas and technical know-how. During his years at Caltech, for example, he developed new circuit designs for neural probes and figured out how to vastly increase the computing power available to students in Dabney, his undergraduate house; in his free time he has been designing a small-scale particle accelerator. Jimenez even helped improve the displays on the Tokyo Metro while interning for Mitsubishi last summer.

Jimenez, a native of Duarte, began his association with Caltech when he was a high-school junior at Polytechnic School (across the street from the Institute) and had the chance to work in the lab of Paul Bellan, a professor of applied physics. From the outset, Jimenez was struck by the freedom Caltech students were given to figure out unique solutions to research problems. Bellan gave Jimenez a spending allowance and suggested that he figure out how to make a smaller version of an experimental setup the group had been using to produce laboratory versions of the sun's coronal bursts. "There was a lot of trust put in me," Jimenez says. "I got to design my own hardware. I helped make a mock-up of the experiment using normal, basically everyday materials, instead of a bunch of fancy big-science things."

That's something Jimenez loves—showing that individuals can do science that is typically restricted to huge research teams with millions of dollars. He hopes that the room-sized particle accelerator for which he is currently drawing up plans will give students first-hand experience with such an instrument. "It's one thing to say you know that the paths of electrons can be bent using a magnetic field and another to actually go into the lab and then be able to say that you've bent the paths of electrons using a magnetic field," says Jimenez, who hopes to build the accelerator in the next few years. "A big dream of mine is to actually see if there's a way for interested people to not have to go through the usual channels to do science."

According to Jimenez, if you want to do science on a modest budget, it helps to know how to build things yourself. One of his favorite classes at Caltech was APh/EE 9, Solid-State Electronics for Integrated Circuits—a course then taught by Oskar Painter, also a professor of applied physics. In the class, students learn to make transistors for computer chips. "You might think that transistors are only available to really well-stocked companies with big research budgets, so it was really cool to be able to make one and participate in that hands-on way," he says.

After taking APh/EE 9, Jimenez completed a Summer Undergraduate Research Fellowship (SURF) project with Axel Scherer, the Bernard Neches Professor of Electrical Engineering, Applied Physics and Physics, in which he helped design circuitry for a computer chip that may eventually be used in tiny implantable neural probes that measure human brain activity.

Scherer describes Jimenez as "one of the most capable undergraduates whom I have had the pleasure of working with over my past 20 years at Caltech," adding that he has "extraordinary" abilities. "Raymond brought tremendous enthusiasm, talent, and insight to our neural probe project," Scherer says. "It was fun working with him on our research projects, and I think of him more as a scientific collaborator than as a student."

Jimenez has also been active outside of the lab and classroom. He has particularly enjoyed Caltech's unique housing system for undergraduate students and served as the vice president of Dabney House during his junior year. He was especially innovative as Dabney's system administrator, shifting available funding to create a scientific computing cluster with a hundred terabytes of storage for students to use for their classes and projects.

Sound as if Jimenez has been spread a bit thin? He says that one of the most significant skills he has gained from his time at Caltech is the ability to manage his priorities and his time. "The workload at Caltech showed me what I can achieve in a given time period," Jimenez says. "If you're pushed and you know what your total output can be, then you have a baseline standard to compare yourself to."

Starting in July, Jimenez will take that knowledge with him to the Hawthorne-based commercial space-transport company SpaceX, where he will work on the avionics team.

Writer: Kimm Fesenmaier

Notes from the Back Row: "Quantum Entanglement and Quantum Computing"

John Preskill, the Richard P. Feynman Professor of Theoretical Physics, is hooked on quanta. He was applying quantum theory to black holes back in 1994 when mathematician Peter Shor (BS '81), then at Bell Labs, showed that a quantum computer could factor a very large number in a very short time. Much of the world's confidential information is protected by codes whose security depends on numerical "keys" large enough to not be factorable in the lifetime of your average evildoer, so, Preskill says, "When I heard about this, I was awestruck." The longest number ever factored by a real computer had 193 digits, and it took "several months for a network of hundreds of workstations collaborating over the Internet," Preskill continues. "If we wanted to factor a 500-digit number instead, it would take longer than the age of the universe." And yet, a quantum computer running at the same processor speed could polish off 193 digits in one-tenth of a second, he says. Factoring a 500-digit number would take all of two seconds.
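The months-to-eons jump tracks the superpolynomial cost of the best known classical factoring method. A back-of-the-envelope check (our sketch, using the standard heuristic complexity of the general number field sieve):

    import math

    def gnfs_cost(digits):
        # Heuristic running time of the general number field sieve:
        # L(n) = exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3))
        ln_n = digits * math.log(10)
        return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3)
                        * math.log(ln_n) ** (2 / 3))

    # Going from 193 to 500 digits multiplies the work by a factor of order
    # 10^11, turning "several months" into far more than the universe's age.
    print(gnfs_cost(500) / gnfs_cost(193))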

While an ordinary computer chews through a calculation one bite at a time, a quantum computer arrives at its answer almost instantaneously because it essentially swallows the problem whole. It can do so because quantum information is "entangled," a state of being that is fundamental to the quantum world and completely foreign to ours. In the world we're used to, the two socks in a pair are always the same color. It doesn't matter who looks at them, where they are, or how they're looked at. There's no such independent reality in the quantum world, where the act of opening one of a matched pair of quantum boxes determines the contents of the other one—even if the two boxes are at opposite ends of the universe—but only if the other box is opened in exactly the same way. "Quantum boxes are not like soxes," Preskill says. (If entanglement sounds like a load of hooey to you, you're not alone. Preskill notes that Albert Einstein famously derided it back in the 1930s. "He called it 'spooky action at a distance,' and that sounds even more derisive when you say it in German—'Spukhafte Fernwirkungen!'")
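The box analogy can be made quantitative. Here is a small numpy calculation (ours, not from the talk) for the Bell state (|00> + |11>)/sqrt(2): opening both boxes "in exactly the same way" (measuring in the same basis) always yields matching contents, while opening them in different ways does not.

    import numpy as np

    def match_probability(theta_a, theta_b):
        """Probability that the two halves of (|00> + |11>)/sqrt(2) give the
        same outcome when measured in bases rotated by theta_a and theta_b."""
        bell = np.array([1.0, 0.0, 0.0, 1.0]).reshape(2, 2) / np.sqrt(2)
        def basis(theta):   # rows are the two rotated measurement vectors
            return np.array([[np.cos(theta), np.sin(theta)],
                             [-np.sin(theta), np.cos(theta)]])
        A, B = basis(theta_a), basis(theta_b)
        probs = np.abs(np.einsum("ix,jy,xy->ij", A, B, bell)) ** 2
        return probs[0, 0] + probs[1, 1]   # both boxes "empty" or both "full"

    print(match_probability(0.3, 0.3))      # 1.0: same basis, outcomes always agree
    print(match_probability(0.0, np.pi/4))  # 0.5: different bases, no correlation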

An ordinary computer processes "bits," which are units of information encoded in batches of electrons, patches of magnetic field, or some other physical form. The "qubits" of a quantum computer are encoded by their entanglement, and these entanglements come with a big Do Not Disturb sign. Because the informational content of a quantum "box" is unknown until you open it and look inside, qubits exist only in secret, making them ideal for spies and high finance. However, this impenetrable security is also the quantum computer's downfall. Such a machine would be morbidly sensitive—the slightest encroachment from the outside world would demolish the entanglement and crash the system.

Ordinary computers cope with errors by storing information in triplicate. If one copy of a bit gets corrupted, it will no longer match the other two; error-detecting software constantly checks the three copies against one another and returns the flipped bit to its original state. Fixing flipped bits when you're not allowed to look at them seems an impossible challenge on the face of it, but after reading Shor's paper Preskill decided to give it a shot. Over the next few years, he and his grad student Daniel Gottesman (PhD '97) worked on quantum error correction, eventually arriving at a mathematical procedure by which indirectly measuring the states of five qubits would allow an error in any one of them to be fixed.
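The classical version of the trick is easy to show. A minimal sketch of triplicate storage with majority-vote correction (the five-qubit procedure described above works differently, since qubits cannot be copied or read directly):

    def encode(bit):
        return [bit, bit, bit]      # store three copies of the bit

    def correct(copies):
        # majority vote: a single flipped copy is outvoted by the other two
        return max(set(copies), key=copies.count)

    word = encode(1)
    word[0] ^= 1                    # corrupt one copy with a bit flip
    print(correct(word))            # prints 1: the original bit is recovered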

This changed the barriers facing practical quantum computation from insurmountable to merely incredibly difficult. The first working quantum computers, built in several labs in the early 2000s, were based on lasers interacting with what Preskill describes as "a handful" of trapped ions to perform "a modest number of [logic] operations." An ion trap is about the size of a thermos bottle, but the laser systems and their associated electronics take up several hundred square feet of lab space. With several million logic gates on a typical computer chip, scaling up this technology is a really big problem. Is there a better way? Perhaps. According to Preskill, his colleagues at Caltech's Institute for Quantum Information and Matter are working out the details of a "potentially transformative" approach that would allow quantum computers to be made using the same silicon-based technologies as ordinary ones.

 "Quantum Entanglement and Quantum Computing" is available for download in HD from Caltech on iTunesU. (Episode 19)

Writer: Douglas Smith
