Ravichandran Receives Award for Solid Mechanics

Guruswami (Ravi) Ravichandran, the John E. Goode, Jr., Professor of Aerospace and professor of mechanical engineering, has received the Warner T. Koiter Medal from the American Society of Mechanical Engineers (ASME).

The award, established in 1996, is given annually to individuals who make distinguished contributions to the field of solid mechanics.

Ravichandran, who is also the director of the Graduate Aerospace Laboratories of the California Institute of Technology (GALCIT), was noted for his "outstanding scientific, engineering, and mentoring contributions in the areas of ultra-high strain rate mechanics of ceramics and metals, and pioneering and innovative experiments to advance our understanding of coupled phenomena in the fields of smart materials and cellular mechanics."

"I am greatly honored to receive the Warner T. Koiter Medal from the ASME. I am pleased that my work is being recognized, but this would not be possible without the contributions of many talented students and postdoctoral scholars in my group," says Ravichandran. "I have also been very fortunate to collaborate with a number of colleagues on campus who have given me impetus to investigate interdisciplinary problems."

Ravichandran works to discover fundamental insights into the way that materials deform, are damaged, and fail. His primary aim is to develop new experimental methods to study these and other phenomena in solid mechanics.

Ravichandran received the award at the 2014 International Mechanical Engineering Congress and Exposition, held in Montreal in November. There he gave the 2014 Koiter Lecture, titled "Peeling of Heterogeneous Adhesive Tapes."

Writer: 
Exclude from News Hub: 
No
News Type: 
In Our Community

Four Caltech Professors Elected to National Academy of Inventors

Caltech professors Frances Arnold, David Baltimore, Carver Mead, and Axel Scherer have been named fellows of the National Academy of Inventors (NAI).

Election as an NAI fellow is an honor bestowed upon academic innovators and inventors who have "demonstrated a prolific spirit of innovation in creating or facilitating outstanding inventions and innovations that have made a tangible impact on quality of life, economic development, and the welfare of society." Fellows are named inventors on U.S. patents and were each nominated by their peers for their contributions to innovation.

According to its website, the NAI "was founded in 2010 to recognize and encourage inventors with patents issued from the U.S. Patent and Trademark Office, enhance the visibility of academic technology and innovation, encourage the disclosure of intellectual property, educate and mentor innovative students, and translate the inventions of its members to benefit society."

Frances Arnold is the Dick and Barbara Dickinson Professor of Chemical Engineering, Bioengineering and Biochemistry, and director of the Donna and Benjamin M. Rosen Bioengineering Center at Caltech. Her group has pioneered methods of "directed evolution" to engineer new proteins in the lab. The method is now widely used to create catalysts for use in industrial processes, including the production of fuels and chemicals from renewable resources. The Arnold group also uses the results of laboratory evolution experiments to elucidate principles of biological design. Arnold is a coinventor on more than 40 U.S. patents. She was inducted into the National Inventors Hall of Fame in 2014 and received the National Medal of Technology and Innovation in 2013.

David Baltimore is the Robert Andrews Millikan Professor of Biology, and President Emeritus of Caltech. His research group focuses on two major areas: investigation of the development and functioning of the mammalian immune system, and translational studies of the viral transfer of genes into immune cells to increase their ability to fight disease and resist cancer. He was awarded the Nobel Prize in Physiology or Medicine in 1975.

Carver Mead (BS '56, MS '57, PhD '60) is the Gordon and Betty Moore Professor of Engineering and Applied Science, Emeritus, in computation and neural systems. Mead has significantly advanced the technology of integrated circuits by developing a method called very-large-scale integration (VLSI) that allows engineers to combine thousands of transistors onto a single microchip, thus exponentially expanding computer processing power. He received the National Medal of Technology and Innovation in 2002.

Axel Scherer is the Bernard Neches Professor of Electrical Engineering, Medical Engineering, Applied Physics and Physics, and the director of the Caltech Global Health Initiative. Holding more than 100 patents in the area of nanofabrication and device design, Scherer has most recently developed ways to integrate optics, electronics, and fluidics into sensor systems. Much of his work is currently focused on systems for medical diagnosis and health monitoring through molecular pathology and wireless implants. Over the past decades, technology from his group has been commercialized through several start-up ventures in telecommunications and health care devices.

Arnold, Baltimore, Mead, and Scherer will join Vice Provost Morteza Gharib, the Hans W. Liepmann Professor of Aeronautics and Bioinspired Engineering, and Robert H. Grubbs, the Victor and Elizabeth Atkins Professor of Chemistry, as Caltech's Fellows of the NAI. They will be inducted during the fourth annual conference of the National Academy of Inventors, which will be held at Caltech on March 20, 2015.


Fred Raichlen

1932–2014

Fredric ("Fred") Raichlen, professor emeritus of civil and mechanical engineering at Caltech, passed away on December 13, 2014. He was 82 years old. Raichlen was an expert on the mechanics of tsunamis, the waves created by underwater earthquakes, volcanic eruptions, and other geologic events.

Most waves propagate through the water at and just below the surface. A tsunami is fundamentally different. Because it is driven by a movement in the earth's crust, the wave extends from the seafloor all the way up to the surface. As the tsunami approaches land, the transition from deep to shallow water concentrates the wave's energy. The local topography off- and onshore focuses the onrushing wall of water, and utter devastation can follow. (The tsunami caused by the 2011 Tohoku earthquake reached six miles inland in some places and killed more than 18,000 people.)
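The deep-to-shallow speedup described above can be illustrated with the textbook shallow-water approximation, in which a tsunami travels at c = √(gh) for water depth h (a standard formula offered here for illustration, not taken from Raichlen's models):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed(depth_m):
    """Shallow-water wave speed c = sqrt(g*h), valid when the wavelength
    is much longer than the water depth -- as it is for tsunamis."""
    return math.sqrt(G * depth_m)

# In the open ocean (~4000 m deep) a tsunami moves at jet-airliner speed;
# near shore it slows dramatically, so the wave piles up and gains height.
print(round(tsunami_speed(4000)))   # ~198 m/s, roughly 713 km/h
print(round(tsunami_speed(10), 1))  # ~9.9 m/s close to shore
```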

Raichlen's wave-tank experiments enabled him to develop three-dimensional computer models of how tsunamis originate, propagate through the open ocean, and eventually run up on land. He simulated the run-up in a tank 31 feet long by 15 feet wide that was big enough to hold a scale model of an entire harbor. These experiments allowed him to design features such as breakwaters to protect a specific port or to determine where not to site vulnerable structures such as railroad tracks and oil tanks. In other experiments he determined how fast different regions within a wave move as the wave breaks, which in turn allowed him to calculate the force of the wave's impact. He also investigated the effects of the waves on floating structures such as ships moored in the harbor.

Raichlen earned his bachelor's degree in engineering from the Johns Hopkins University in 1953, and his master's and doctoral degrees at MIT in 1955 and 1962. He came to Caltech as an assistant professor of civil engineering in 1962, and he was promoted to associate professor in 1967 and to professor in 1972. In 1969, he became one of the founding faculty members of Caltech's doctoral program in environmental engineering science. He was appointed professor of civil and mechanical engineering in 1997 and professor emeritus in 2001.

Raichlen was a member of the National Academy of Engineering, and in 1994 he received the American Society of Civil Engineers' John G. Moffatt–Frank E. Nichol Harbor and Coastal Engineering Award.

The funeral will be Thursday, December 18th at 11:30 a.m. at Mt. Sinai Memorial Park, 5950 Forest Lawn Drive, Los Angeles.

A full obituary will be published at a later date.

Writer: 
dsmith

Controlling Light on a Chip at the Single-Photon Level

Watson Lecture Preview

Integrating optics and electronics into systems such as fiber-optic data links has revolutionized how we transmit information. A second revolution awaits as researchers seek to develop chips in which individual atoms control the movement of light through on-chip optical "wires," and in which photons could replace electrons as the vehicle for performing computations. Andrei Faraon (BS '04), an assistant professor of applied physics and materials science in the Division of Engineering and Applied Science, presents a preview of this revolution at 8 p.m. on Wednesday, December 17, in Caltech's Beckman Auditorium. Admission is free.

 

Q: What do you do?

A: My field is nanophotonics, which means that I build ultrasmall devices to control light. Electronic devices use silicon chips to control the flow of electrical charge and to store it. We're trying to do the same thing with light, using single atoms or small ensembles of atoms that act like the transistors in electronic circuits. We're building devices out of complex oxides that are similar to glass, or to quartz, but contain atoms called lanthanides that have very interesting optical properties. Lanthanide atoms can absorb light and then release it on demand at some later time. So we're making a memory device. We want to take a single photon, put it in this device, and then push a button and release that photon back into a larger optical circuit.

These devices look like the world's smallest Toblerone bars, except that each tiny triangle is attached to its neighbors at the tip instead of the base. Each bar is about ten microns long, which is about the diameter of a red blood cell, and we make them using techniques similar to those used to make computer chips. The only difference is that we're not using silicon, but these complex oxides containing lanthanide atoms.

In order to use these devices, we would have them connected to one another by on-chip optical waveguides, or to a larger system by optical fibers. The signal would come in through the fiber, get stored in our Toblerone bar for a while, and then be released back into the fiber at the appropriate time. We dream of someday having billions of components on the same chip, but even one bit of optical memory can be useful.

 

Q: What would you use a one-bit memory for?

A: It can be used in systems that transmit information with absolute security by what's called quantum communication. You can create an absolutely secure cryptographic key by sending a sequence of single photons, one at a time. Then you use the key to encrypt your data, which you can now send openly over the Internet. The systems that generate these keys over long distances are based on one party sending a series of single photons from point A to point C, while another party sends a second series from point B to point C. At point C you perform an operation on the photons, you combine them in some way, and that is used to establish the key.

However, optical fibers are not 100 percent efficient, especially over long distances. When you transmit photons through hundreds of kilometers of cable, some of them are going to get absorbed by the fiber. This isn't a big problem when each bit of information is encoded in a big pulse of light, but quantum communication depends on being able to interfere two paired single photons with each other. Many times when the photon from point A arrives at point C, you will have to store it until the photon from point B comes in. It might take a few tries before the second photon arrives, and until then the other one just waits for it.
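The loss Faraon describes compounds exponentially with distance. A rough illustration, assuming a typical single-mode telecom-fiber attenuation of about 0.2 dB/km at 1550 nm (a standard figure, not stated in the interview):

```python
ATTENUATION_DB_PER_KM = 0.2  # typical single-mode fiber loss at 1550 nm

def photon_survival(length_km):
    """Probability that a single photon makes it through the fiber."""
    return 10 ** (-ATTENUATION_DB_PER_KM * length_km / 10)

# Survival drops exponentially: over hundreds of kilometers almost every
# photon is absorbed, which is why a memory that can hold one photon
# while waiting for its partner is so valuable.
for km in (50, 100, 300):
    print(f"{km} km: {photon_survival(km):.6f}")
```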

 

Q: How did you get into this line of work?

A: When I was an undergraduate here, I really liked quantum physics. But I also had a passion for doing things that have some applications, so I went into applied physics for my PhD at Stanford. Transmitting information via cryptographic keys is one of the applications of quantum physics, so I got into this field for my PhD, and continued it during my postdoc at HP Laboratories, and now as a faculty member here.

 

Q: Bonus question: What's it like coming back as a professor after having been here as a student?

A: It feels great. Coming back to Caltech is a privilege. As a faculty member you have the satisfaction of working with all these extremely smart kids. And my wife [Assistant Professor of Biology Viviana Gradinaru (BS '05)] and I go to many undergraduate events where we connect with the students, so it feels good.

 

Named for the late Caltech professor Earnest C. Watson, who founded the series in 1922, the Watson Lectures present Caltech and JPL researchers describing their work to the public. Many past Watson Lectures are available online at Caltech's iTunes U site.

 

Writer: 
Douglas Smith

Quantum Code-Cracking: An Interview with Thomas Vidick

Quantum computers, looked to as the next generation of computing technology, are expected to one day vastly outperform conventional computers. Using the laws of quantum mechanics—the physics that governs the behavior of matter and light at the atomic and subatomic scales—these computers will allow us to move and analyze enormous amounts of information with unprecedented speed. Although engineers have yet to actually build such a machine, Assistant Professor of Computing and Mathematical Sciences Thomas Vidick is figuring out how some of the principles of quantum computing can be applied right now, using today's technology.

Originally from Belgium, Vidick received his BS from École Normale Supérieure in Paris in 2007 and his master's degree from Université Paris Diderot, also in 2007. He earned his doctorate from UC Berkeley in 2011. Vidick joined the Division of Engineering and Applied Science at Caltech in June from MIT, where he was a postdoctoral associate.

This fall, he spoke with us about quantum methods for encrypting information, what he's looking forward to at Caltech, and his ongoing search for the best croissants in Los Angeles.

 

What are your research interests?

My area is quantum computing, so it's the computer science of quantum physics. Classical computers—like the computer on my desk—work based on the laws of classical mechanics. They just manipulate bits and do various operations. However, in the 1970s people started to wonder what kinds of computational processes could be realized using quantum-mechanical systems. They ended up discovering algorithms that in some cases can be more efficient, or that can implement certain tasks that were not possible with classical computers.

In my research, I look at two things. One, what are the kinds of procedures that you can implement more efficiently using quantum computers? And, two, what kinds of cryptographic systems—ways to encrypt information securely—can you come up with using quantum systems that could be more secure than classical systems? It's all about this exploration of what quantum systems allow us to do that classical systems didn't or wouldn't.

 

Quantum computers haven't been invented yet, so how do you do this work?

That's a good question, and there are several different answers. Some of my research is very theoretical, and it's just about saying, "If we had a quantum computer, what could we do with it?" We don't have a quantum computer yet because it's very hard to manipulate and control quantum systems on a large scale. But that is just an engineering problem, so what people say is that yes it's very hard, but in 10 years, we'll get to it. And the theory is also very hard, so we might as well get started right now.

That's one answer. But the better answer is that a lot of what I do and a lot of what I'm interested in doesn't require or depend on whether we can actually build a quantum computer or not. For instance, the cryptographic aspects of quantum computing are already being implemented. There are start-ups that already sell quantum cryptographic systems on the Internet because these systems only require the manipulation of very-small-scale quantum systems.

We can also do some computations about properties of quantum-mechanical systems on a classical computer. One branch of my research has to do with how you can come up with classical algorithms for computing the properties of systems that are described by the laws of quantum mechanics. The most natural way to understand the systems would be to have a quantum computer and then use the quantum computer to simulate the evolution of the quantum-mechanical system. Since we don't have a quantum computer, we have to develop these algorithms using a classical computer and our understanding of the quantum-mechanical system.

 

Can you give a real-world example of how this work might affect the ways in which an average person uses a computer in the future?

One of the most basic ways that quantum cryptographic tasks are used is to come up with a secret key or passcode to encrypt communication. For instance, the two of us, we trust one another, but we're far away from each other. We want to come up with a secret key—just some sort of passcode that we're going to use to encrypt our communication later. I could dream up the passcode and then tell it to you over the phone, but if someone listens to the line, it's not secure. Someone might always be listening in on the line, so classically there is no procedure for us to exchange a secret key, unless we meet up in person.

However, it is known that if we are able to send quantum messages, then actually we could do it. How this works is that, instead of sending you a passcode of my choice, I would send you a bunch of photons, which are quantum particles, prepared in a completely random state. There is then a whole quantum protocol in which you need to measure the photons, but the main point is that at the end, we'll each be able to extract the exact same passcode: me from the way the photons were prepared, and you from the measurement results. The code will be random, but we'll both know it.

And because of the laws of quantum mechanics, if anyone has been listening on the line—intercepting the photons—we'll be able to tell. The reason is that any disturbance of the photons' quantum-mechanical states shows up in the correlations between the measurement outcomes and the initial states of the photons. This is called an "entropy-disturbance tradeoff": if an eavesdropper perturbs the photons, the outcome distribution you observe is affected in a way that can be checked. This is a uniquely quantum phenomenon, and it allows distant parties to establish a secret key, or passcode, between them in a perfectly secure way.
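The sifting step of a BB84-style exchange, the protocol family Vidick is describing, can be sketched in a toy classical simulation (all names here are illustrative, and a real implementation requires actual quantum hardware; this only models the bookkeeping):

```python
import random

def bb84_sift(n, seed=0):
    """Toy BB84-style simulation: Alice sends n photons, each encoding a
    random bit in a randomly chosen basis ('+' or 'x'); Bob measures each
    in his own random basis. They keep only the positions where their
    bases happened to match, which yields identical sifted keys."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("+x") for _ in range(n)]
    bob_bases   = [rng.choice("+x") for _ in range(n)]
    # When Bob's basis matches Alice's, he reads her bit exactly;
    # otherwise quantum mechanics gives him a random outcome.
    bob_bits = [bit if ab == bb else rng.randint(0, 1)
                for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]

alice_key, bob_key = bb84_sift(200)
print(len(alice_key), alice_key == bob_key)  # ~100 shared bits, keys agree
```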

 

How does your work address this?

This system of sending quantum messages was discovered in the '80s, and, as I said before, people are already implementing it. But there is one big drawback to quantum cryptography, and that's that you need quantum equipment to do it—and this quantum equipment tends to be really clunky. It's very hard to come up with a machine that sends photons one by one, and since single photons can be easily lost, it's also hard to make accurate measurements. Also, you need a machine that can generate single photons and a machine that can detect single photons for the message to be secure.

In practice, we don't have such machines. We have these huge clunky machines that can sort of do it, but they're never perfect. My work tries to bypass the need for these machines, with cryptographic protocols and proofs of security that are secure even if you can't make or see the quantum part of the protocol. To do this, we model the quantum equipment just as a black box. So my work has been to try to get these strong proofs of security into a model where we only really trust the interactions we can see in the classical world. It's a proof of security that holds independently of whether the quantum part of the device works in the way that we think it does.

 

How did you get into this field?

I was doing math originally. I was doing number theory as an undergrad and I liked it a lot. But then I did an internship, and I realized that I couldn't tell anyone why I was asking the questions I was asking. So I thought, "I need a break from this. Whatever I do for my life, I need to know why I'm doing it." The best alternative I could think of was computer science, because it seemed more concrete. And this was when I learned that quantum computing existed—I didn't know before. I think what's most interesting about it is that you're talking about the world—because the world is quantum mechanical. Physics describes the world.

That's what I really like, because from my point of view everything I do is still very theoretical work and I like doing theoretical work. I like the beauty of it. I like the abstractness of it. I like that you have well-posed problems and you can give well-defined answers. But I also like the fact that in the end you are talking about or trying to talk about real-world physics. So every time I think "Why am I doing this?" or "What should I do?" I try to think of how I can connect it to a real, concrete question.

 

How did you get interested in math and computer science when you were a kid?

My dad was a chemist but he worked as an engineer, and he would come home from work and would bring home different experiments with liquid nitrogen or whatever.

I guess he gave me sort of a scientific mind, but then why did I do math problems? Probably like most people good at math, I was just good at it for some reason and it was just easy. Math is so beautiful when you understand it. Throughout middle school and high school, I just enjoyed it so much. But then, as I said, eventually I stretched my limits in math a little bit.

 

What are you excited about in terms of coming to Caltech?

I really like the Computing and Mathematical Sciences department here—it's a young department and it's a small department. For me it's very unique in that there's a very strong group in quantum information—especially the physics side of quantum information, like my neighbor here, John Preskill. Caltech has a very strong group in quantum information and also has a very strong group in computer science. And so, from the point of view of my research, this is just the perfect place.

And then there are the mountains. I love the mountains—they're just beautiful. This is how I abandoned the smoky Paris cafes. I had to think about the mountains. You can't beat the view from my office, and I can go hike up there.

 

Other than hiking, do you have any hobbies or interests that are outside your research?

I also like to bike up mountains. I did that a lot when I came here, but then I fractured my collarbone while biking. It's almost better now, but I still haven't gotten back on the bike yet. Another thing that is an investment of time—and I'm really worried about that one—is croissant hunting. I really like croissants and chocolates. I'm from Belgium, and Belgium is pretty big on chocolate. I've already been to a lot of famous croissant and chocolate places in L.A., but I haven't found something that has lived up to my standards yet. I haven't done everything though, so I'm open to recommendations.


New Technique Could Harvest More of the Sun's Energy

As solar panels become less expensive and capable of generating more power, solar energy is becoming a more commercially viable alternative source of electricity. However, the photovoltaic cells now used to turn sunlight into electricity can only absorb and use a small fraction of that light, and that means a significant amount of solar energy goes untapped.

A new technology created by researchers from Caltech, and described in a paper published online October 30 in Science Express, represents a first step toward harnessing that lost energy.

Sunlight is composed of many wavelengths of light. In a traditional solar panel, silicon atoms are struck by sunlight and the atoms' outermost electrons absorb energy from some of these wavelengths of sunlight, causing the electrons to get excited. Once the excited electrons absorb enough energy to jump free from the silicon atoms, they can flow independently through the material to produce electricity. This is called the photovoltaic effect—a phenomenon that takes place in a solar panel's photovoltaic cells.

Although silicon-based photovoltaic cells can absorb light wavelengths that fall in the visible spectrum—light that is visible to the human eye—longer wavelengths such as infrared pass right through the silicon and never get converted to electricity. In the case of infrared, that energy is normally lost as unwanted heat.
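Why infrared slips through follows from silicon's band gap of roughly 1.12 eV (a standard textbook value, not stated in the article): a photon with less energy than the gap cannot free an electron, and photon energy falls as wavelength grows, so there is a cutoff wavelength beyond which silicon is transparent.

```python
H_C_EV_NM = 1239.84   # Planck's constant times the speed of light, in eV*nm
SI_BANDGAP_EV = 1.12  # approximate band gap of silicon at room temperature

# Longest wavelength a silicon cell can absorb: lambda = h*c / E_gap
cutoff_nm = H_C_EV_NM / SI_BANDGAP_EV
print(round(cutoff_nm))  # ~1107 nm: near-infrared and longer passes through
```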

"The silicon absorbs only a certain fraction of the spectrum, and it's transparent to the rest. If I put a photovoltaic module on my roof, the silicon absorbs that portion of the spectrum, and some of that light gets converted into power. But the rest of it ends up just heating up my roof," says Harry A. Atwater, the Howard Hughes Professor of Applied Physics and Materials Science; director, Resnick Sustainability Institute, who led the study.

Now, Atwater and his colleagues have found a way to absorb and make use of these infrared waves with a structure composed not of silicon, but entirely of metal.

The new technique they've developed is based on a phenomenon observed in metallic structures known as plasmon resonance. Plasmons are coordinated waves, or ripples, of electrons that exist on the surfaces of metals at the point where the metal meets the air.

While the plasmon resonances of metals are predetermined in nature, Atwater and his colleagues found that those resonances are capable of being tuned to other wavelengths when the metals are made into tiny nanostructures in the lab.

"Normally in a metal like silver or copper or gold, the density of electrons in that metal is fixed; it's just a property of the material," Atwater says. "But in the lab, I can add electrons to the atoms of metal nanostructures and charge them up. And when I do that, the resonance frequency will change."

"We've demonstrated that these resonantly excited metal surfaces can produce a potential"—an effect very similar to rubbing a glass rod with a piece of fur: you deposit electrons on the glass rod. "You charge it up, or build up an electrostatic charge that can be discharged as a mild shock," he says. "So similarly, exciting these metal nanostructures near their resonance charges up those metal structures, producing an electrostatic potential that you can measure."

This electrostatic potential is a first step in the creation of electricity, Atwater says. "If we can develop a way to produce a steady-state current, this could potentially be a power source." He envisions a solar cell using the plasmoelectric effect someday being used in tandem with photovoltaic cells to harness both visible and infrared light for the creation of electricity.

Although such solar cells are still on the horizon, the new technique could even now be incorporated into new types of sensors that detect light based on the electrostatic potential.

"Like all such inventions or discoveries, the path of this technology is unpredictable," Atwater says. "But any time you can demonstrate a new effect to create a sensor for light, that finding has almost always yielded some kind of new product."

This work was published in a paper titled "Plasmoelectric Potentials in Metal Nanostructures." Other coauthors include first author Matthew T. Sheldon, a former postdoctoral scholar at Caltech; Ana M. Brown, an applied physics graduate student at Caltech; and Jorik van de Groep and Albert Polman from the FOM Institute AMOLF in Amsterdam. The study was funded by the Department of Energy, the Netherlands Organization for Scientific Research, and an NSF Graduate Research Fellowship.


New Center Supports Data-Driven Research

With the advanced capabilities of today's computer technologies, researchers can now collect vast amounts of information with unprecedented speed. However, gathering information is only one half of a scientific discovery, as the data also need to be analyzed and interpreted. A new center on campus aims to hasten such data-driven discoveries by making expertise and advanced computational tools available to Caltech researchers in many disciplines within the sciences and the humanities.

The new Center for Data-Driven Discovery (CD3), which became operational this fall, is a hub for researchers to apply advanced data exploration and analysis tools to their work in fields such as biology, environmental science, physics, astronomy, chemistry, engineering, and the humanities.

The Caltech center will also complement the resources available at JPL's Center for Data Science and Technology, says director of CD3 and professor of astronomy George Djorgovski.

"Bringing together the research, technical expertise, and respective disciplines of the two centers to form this joint initiative creates a wonderful synergy that will allow us opportunities to explore and innovate new capabilities in data-driven science for many of our sponsors," adds Daniel Crichton, director of the Center for Data Science and Technology at JPL.

At the core of the Caltech center are staff members who specialize in both computational methodology and various domains of science, such as biology, chemistry, and physics. Faculty-led research groups from each of Caltech's six divisions and JPL will be able to collaborate with center staff to find new ways to get the most from their research data. Resources at CD3 will range from data storage and cataloguing that meet the highest "housekeeping" standards, to custom data-analysis methods that combine statistics with machine learning—the development of algorithms that can "learn" from data. The staff will also help develop new research projects that could benefit from large amounts of existing data.

"The volume, quality, and complexity of data are growing such that the tools that we used to use—on our desktops or even on serious computing machines—10 years ago are no longer adequate. These are not problems that can be solved by just buying a bigger computer or better software; we need to actually invent new methods that allow us to make discoveries from these data sets," says Djorgovski.

Rather than turning to off-the-shelf data-analysis methods, Caltech researchers can now collaborate with CD3 staff to develop new customized computational methods and tools that are specialized for their unique goals. For example, astronomers like Djorgovski can use data-driven computing in the development of new ways to quickly scan large digital sky surveys for rare or interesting targets, such as distant quasars or new kinds of supernova explosions—targets that can be examined more closely with telescopes, such as those at the W. M. Keck Observatory, he says.

Mary Kennedy, the Allen and Lenabelle Davis Professor of Biology and a coleader of CD3, says that the center will serve as a bridge between the laboratory-science and computer-science communities at Caltech. In addition to matching up Caltech faculty members with the expertise they will need to analyze their data, the center will also minimize the gap between those communities by providing educational opportunities for undergraduate and graduate students.

"Scientific development has moved so quickly that the education of most experimental scientists has not included the techniques one needs to synthesize or mine large data sets efficiently," Kennedy says. "Another way to say this is that 'domain' sciences—biology, engineering, astronomy, geology, chemistry, sociology, etc.—have developed in isolation from theoretical computer science and mathematics aimed at analysis of high-dimensional data. The goal of the new center is to provide a link between the two."

Work in Kennedy's laboratory focuses on understanding what takes place at the molecular level in the brain when neuronal synapses are altered to store information during learning. She says that methods and tools developed at the new center will assist her group in creating computer simulations that can help them understand how synapses are regulated by enzymes during learning.

"The ability to simulate molecular mechanisms in detail and then test predictions of the simulations with experiments will revolutionize our understanding of highly interconnected control mechanisms in cells," she says. "To some, this seems like science fiction, but it won't stay fictional for long. Caltech needs to lead in these endeavors."

Assistant Professor of Biology Mitchell Guttman says that the center will also be an asset to groups like his that are trying to make sense out of big sets of genomic data. "Biology is becoming a big-data science—genome sequences are available at an unprecedented pace. Whereas it took more than $1 billion to sequence the first genome, it now costs less than $1,000," he says. "Making sense of all this data is a challenge, but it is the future of biomedical research."

In his own work, Guttman studies lncRNAs (long noncoding RNAs), a new class of genes that he discovered largely through computational methods like those available at the new center. "I am excited about the new CD3 center because it represents an opportunity to leverage the best ideas and approaches across disciplines to solve a major challenge in our own research," he says.

But the most valuable findings from the center could be those that stem not from a single project, but from the multidisciplinary collaborations that CD3 will enable, Djorgovski says. "To me, the most interesting outcome is to have successful methodology transfers between different fields—for example, to see if a solution developed in astronomy can be used in biology," he says.

In fact, one such crossover method has already been identified, says Matthew Graham, a computational scientist at the center. "One of the challenges in data-rich science is dealing with very heterogeneous data—data of different types from different instruments," says Graham. "Using the experience and the methods we developed in astronomy for the Virtual Observatory, I worked with biologists to develop a smart data-management system for a collection of expression and gene-integration data for genetic lines in zebrafish. We are now starting a project along similar methodology transfer lines with Professor Barbara Wold's group on RNA genomics."

And, through the discovery of more tools and methods like these, "the center could really develop new projects that bridge the boundaries between different traditional fields through new collaborations," Djorgovski says.


Converting Data Into Knowledge: An Interview with Yisong Yue

When a movie-streaming service recommends a new film you might like, sometimes that recommendation becomes a new favorite; other times, the computer's suggestion really misses the mark. Yisong Yue, assistant professor of computing and mathematical sciences, is interested in how systems like these can better "learn" from human behavior as they turn raw data into actionable knowledge—a concept called machine learning.

Yue joined the Division of Engineering and Applied Science at Caltech in September after spending a year as a research scientist at Disney Research. Born in Beijing and raised in Chicago, Yue earned a bachelor's degree at the University of Illinois in 2005 and a doctorate at Cornell University in 2010, and completed a postdoctoral appointment at Carnegie Mellon University in 2013.

Recently he spoke with us about his research interests, his hobbies, and what he is looking forward to here at Caltech.

 

What is your main area of research?

My main research interests are in machine learning. Machine learning is the study of how computers can take raw data or annotated data and convert that into knowledge and actionable items, ideally in a fully automated way—because it's one thing to just have a lot of data, but it's another thing to have knowledge that you can derive from that data.
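As a toy illustration of that idea—turning annotated data into a rule the computer can act on, automatically—here is a minimal Python sketch. The data and the simple threshold model are invented for illustration, not taken from the interview.

```python
# Fit a 1-D threshold classifier to labeled data, then use the
# learned rule to act on new inputs. This is "machine learning" in
# miniature: raw annotated data in, an actionable decision rule out.

def fit_threshold(points):
    """Pick the threshold that best separates label 0 from label 1."""
    xs = sorted(x for x, _ in points)
    candidates = [(a + b) / 2 for a, b in zip(xs, xs[1:])]
    def accuracy(t):
        return sum((x > t) == bool(y) for x, y in points) / len(points)
    return max(candidates, key=accuracy)

# Raw annotated data: (measurement, label) pairs.
data = [(0.2, 0), (0.9, 0), (1.1, 0), (2.3, 1), (3.0, 1), (3.4, 1)]
threshold = fit_threshold(data)

def predict(x):
    """The derived knowledge: a rule that acts on unseen inputs."""
    return int(x > threshold)
```

The learned threshold lands between the two classes, so new measurements can be classified without any human judgment per item.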

 

Is machine learning a general concept that can be applied to many different fields?

That's right. Machine learning is becoming a more and more general tool as we become a more digital society. In the past, some of my research has been applied in areas such as data-driven animation, sports analytics, personalized recommender systems, and adaptive urban transportation systems.

 

What application of this work are you most excited about right now?

This is tough because I'm excited about all of them, really, but if I had to just pick one, it would be human-in-the-loop machine learning. The idea is that although we would love to have computers that can derive knowledge from data in a fully automated way, oftentimes the problem is too difficult or it would take too long. So machine learning with humans in the loop acknowledges that we can learn from how humans behave in a system.

I think that we are entering a society where we depend on digital systems for basically everything we do. And that means we have an opportunity to learn from humans how to optimize our daily lives. Because human interaction with digital systems is so ubiquitous, I think learning with humans in the loop is a very compelling research agenda moving forward.

 

Can you give an example of human-in-the-loop machine learning that we experience on a daily basis?

One example of human-in-the-loop machine learning that we experience fairly regularly is a personalized recommender system. Many websites have a recommendation system built into them, and the system would like to provide personalized recommendations that maximize the user's engagement with the system. However, when there is a brand-new user, the system doesn't really understand that user's interests. What the system can do is recommend some stuff and see if the user likes it or not, and the response—thumbs up, thumbs down, or whatever—is an indicator of the topics or content the user is interested in. You see this sort of closed loop between a machine learning system that's trying to learn how best to personalize to a user and a user who's using the system and providing feedback on the fly.
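The closed loop Yue describes can be sketched as a simple explore/exploit simulation. The epsilon-greedy strategy below is one standard way to implement such a loop, not necessarily the method any particular website uses; the item names, the simulated user's hidden preferences, and the parameters are all invented for illustration.

```python
import random

# The system recommends items, a simulated user gives thumbs up/down,
# and the system updates its estimate of what the user likes.
random.seed(0)

items = ["comedy", "drama", "documentary"]
true_like_prob = {"comedy": 0.8, "drama": 0.3, "documentary": 0.1}  # hidden from the system

shows = {item: 0 for item in items}
clicks = {item: 0 for item in items}

def empirical_rate(item):
    return clicks[item] / shows[item] if shows[item] else 0.0

def recommend(epsilon=0.1):
    """Mostly exploit the best-known item; occasionally explore."""
    if random.random() < epsilon or all(s == 0 for s in shows.values()):
        return random.choice(items)
    return max(items, key=empirical_rate)

def user_feedback(item):
    """Simulated user: thumbs up with the item's hidden probability."""
    return random.random() < true_like_prob[item]

for _ in range(2000):
    item = recommend()
    shows[item] += 1
    if user_feedback(item):
        clicks[item] += 1

best = max(items, key=empirical_rate)
```

After enough feedback, the system's best guess converges on the item the simulated user actually prefers, even though the true preferences were never revealed directly.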

 

You also mentioned animation. How is your work applied in that field?

Before I came to Caltech, I spent one year as a research scientist at Disney Research. I worked on both sports analytics and data-driven animation. With regard to the animation, the basic idea is as follows: you take data about how humans talk in a natural sentence-speaking setting, and then you try to automatically generate natural lip movements or facial movements that correspond to the types of sentences that people would normally say. This is something that people at Disney Research have been working on for a while, so they have a lot of expertise here.

One of the things that you notice many times with animation is that either the character's lip movements are fairly unrealistic—like their mouths just open and close—or in the big-budget movies, it takes a team of artists to manually animate the character's lips. An interesting in-between technology would be to have fairly realistic automatically generated lip movements and facial movements to any type of sentence.

 

What are you looking forward to now that you're at Caltech?

Here I have a combination of research independence, talented colleagues, and support for my research endeavors—and a great culture of intellectual curiosity.

It's such a tight-knit community. It's one of the smallest institutions that I'm familiar with, and what that implies is that basically everyone knows everyone else. The great thing about that is that if you have a question about something that you may not be so knowledgeable about, it's really not that big of a deal to go down the block to talk to someone who works in that field, and you can get information and insight from that person.

 

Have you already begun collaborating with any of your new colleagues?

I'm starting a collaboration with Professor Pietro Perona [Allen E. Puckett Professor of Electrical Engineering] and Professor Frederick Eberhardt [Professor of Philosophy]. In that collaboration, we'll be addressing a problem that biologists and neuroscientists at Caltech face in assessing how genes affect behavior. These researchers modify the genes of animals—such as fruit flies—and then record video of the animals' resulting behaviors. The problem is that researchers don't have time to manually inspect hours upon hours of video to find the particular behavior they're interested in. Professor Perona has been working on this challenge for the past few years, and I was recently brought in because I work on machine learning and big-data analysis.

The goal is to develop a way to take raw video data of animals under various conditions and try to automatically digest, process, and summarize the significant behaviors in that video data, such as an aggressive attack or attempt to mate.

 

Tell us a little bit about your background.

It is a bit all over the place. I was born in Beijing. I moved to Chicago when I was fairly young, and I spent most of my childhood in Chicago and the surrounding areas. But my parents actually moved out of Chicago after my sister and I left for college, and so I really don't have any relatives or strong ties to Chicago anymore. Where I call home is … I don't really know where I call home. I guess Pasadena is my home.

 

Do you have any hobbies outside of your research?

I like hiking and photography, and I'm really excited to try some of the hiking trails in the area and to bring my camera and my tripod with me.

I have a few other hobbies, although I don't really have the time to do them as much now. I was part of an improv group in high school, and I did a fair amount of comedic acting. I wasn't very good at it, so it's not something I can really brag about, but it was fun. I am also an avid eSports fan. For instance, I love watching and playing StarCraft.

