Caltech Physicists Uncover Novel Phase of Matter

Finding could have implications for high-temperature superconductivity

A team of physicists led by Caltech's David Hsieh has discovered an unusual form of matter—not a conventional metal, insulator, or magnet, but something entirely different. This phase, characterized by an unusual ordering of electrons, offers possibilities for new electronic device functionalities and could hold the solution to a long-standing mystery in condensed matter physics having to do with high-temperature superconductivity—the ability of some materials to conduct electricity without resistance, even at "high" temperatures approaching –100 degrees Celsius.

"The discovery of this phase was completely unexpected and not based on any prior theoretical prediction," says Hsieh, an assistant professor of physics, who previously was on a team that discovered another form of matter called a topological insulator. "The whole field of electronic materials is driven by the discovery of new phases, which provide the playgrounds in which to search for new macroscopic physical properties."

Hsieh and his colleagues describe their findings in the November issue of Nature Physics, and the paper is now available online. Liuyan Zhao, a postdoctoral scholar in Hsieh's group, is lead author on the paper.

The physicists made the discovery while testing a laser-based measurement technique that they recently developed to look for what is called multipolar order. To understand multipolar order, first consider a crystal with electrons moving throughout its interior. Under certain conditions, it can be energetically favorable for these electrical charges to pile up in a regular, repeating fashion inside the crystal, forming what is called a charge-ordered phase. The building block of this type of order, namely charge, is simply a scalar quantity—that is, it can be described by just a numerical value, or magnitude.

In addition to charge, electrons also have a degree of freedom known as spin. When spins line up parallel to each other (in a crystal, for example), they form a ferromagnet—the type of magnet you might use on your refrigerator and that is used in the strip on your credit card. Because spin has both a magnitude and a direction, a spin-ordered phase is described by a vector.

Over the last several decades, physicists have developed sophisticated techniques to look for both of these types of phases. But what if the electrons in a material are not ordered in one of those ways? In other words, what if the order were described not by a scalar or vector but by something with more dimensionality, like a matrix? This could happen, for example, if the building block of the ordered phase were a pair of oppositely pointing spins—one pointing north and one pointing south—described by what is known as a magnetic quadrupole. Such examples of multipolar-ordered phases of matter are difficult to detect using traditional experimental probes.
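
The jump from scalar to vector to matrix order parameters can be made concrete with a small numerical sketch (an illustration of the general idea, not a calculation from the paper; the spin and position values are made up): a pair of oppositely pointing spins has zero net spin, so a vector probe sees nothing, yet its quadrupole, a matrix built from both spins and positions, is nonzero.

```python
import numpy as np

# Toy order parameters (illustrative values, not from the paper):
# a pair of oppositely pointing spins at opposite positions.
s = np.array([0.0, 0.0, 1.0])   # spin pointing "north"
r = np.array([1.0, 0.0, 0.0])   # its position in the unit cell

spins = [s, -s]                 # the oppositely pointing pair
positions = [r, -r]

# Vector (dipole) order parameter: the net spin -- it cancels exactly.
net_spin = sum(spins)

# Matrix (quadrupole) order parameter: built from spins AND positions.
quadrupole = sum(np.outer(si, ri) for si, ri in zip(spins, positions))

print(net_spin)     # [0. 0. 0.]
print(quadrupole)   # nonzero 3x3 matrix: the "hidden" order
```

A probe sensitive only to the net spin would report nothing here, which is why multipolar phases can hide from conventional measurements.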

As it turns out, the new phase that the Hsieh group identified is precisely this type of multipolar order.  

To detect multipolar order, Hsieh's group utilized an effect called optical harmonic generation, which is exhibited by all solids but is usually extremely weak. Typically, when you look at an object illuminated by a single frequency of light, all of the light that you see reflected from the object is at that frequency. When you shine a red laser pointer at a wall, for example, your eye detects red light. However, for all materials, there is a tiny amount of light bouncing off at integer multiples of the incoming frequency. So with the red laser pointer, there will also be some blue light bouncing off of the wall. You just do not see it because it is such a small percentage of the total light. These multiples are called optical harmonics.
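
The emergence of harmonics can be sketched numerically (a generic illustration of nonlinear optical response, not the group's actual analysis; the frequencies and the size of the nonlinearity are arbitrary): feed a wave at frequency f through a response with a small quadratic term, and the output spectrum gains a faint component at 2f.

```python
import numpy as np

# Generic sketch of optical harmonic generation (not the experiment itself):
# a response with a tiny quadratic term turns input light at frequency f
# into output that also contains a small component at 2f.
fs = 1000                            # samples per unit time
t = np.arange(0, 1, 1 / fs)
f = 50                               # incoming "laser" frequency (arbitrary)
E_in = np.cos(2 * np.pi * f * t)

E_out = E_in + 0.01 * E_in**2        # linear response + weak nonlinearity

spectrum = np.abs(np.fft.rfft(E_out)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

print(spectrum[freqs == f][0])       # 0.5: the fundamental dominates
print(spectrum[freqs == 2 * f][0])   # 0.0025: the faint second harmonic
```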

The Hsieh group's experiment exploited the fact that changes in the symmetry of a crystal will affect the strength of each harmonic differently. Since the emergence of multipolar ordering changes the symmetry of the crystal in a very specific way—a way that can be largely invisible to conventional probes—their idea was that the optical harmonic response of a crystal could serve as a fingerprint of multipolar order.   

"We found that light reflected at the second harmonic frequency revealed a set of symmetries completely different from those of the known crystal structure, whereas this effect was completely absent for light reflected at the fundamental frequency," says Hsieh. "This is a very clear fingerprint of a specific type of multipolar order."

The specific compound that the researchers studied was strontium-iridium oxide (Sr2IrO4), a member of the class of synthetic compounds broadly known as iridates. Over the past few years, there has been a lot of interest in Sr2IrO4 owing to certain features it shares with copper-oxide-based compounds, or cuprates. Cuprates are the only family of materials known to exhibit superconductivity at high temperatures—exceeding 100 Kelvin (–173 degrees Celsius). Structurally, iridates and cuprates are very similar. And like the cuprates, iridates are electrically insulating antiferromagnets that become increasingly metallic as electrons are added to or removed from them through a process called chemical doping.

A high enough level of doping will transform cuprates into high-temperature superconductors, and as cuprates evolve from being insulators to superconductors, they first transition through a mysterious phase known as the pseudogap, where an additional amount of energy is required to strip electrons out of the material. For decades, scientists have debated the origin of the pseudogap and its relationship to superconductivity—whether it is a necessary precursor to superconductivity or a competing phase with a distinct set of symmetry properties. If that relationship were better understood, scientists believe, it might be possible to develop materials that superconduct at temperatures approaching room temperature.

Recently, a pseudogap phase also has been observed in Sr2IrO4—and Hsieh's group has found that the multipolar order they have identified exists over a doping and temperature window where the pseudogap is present. The researchers are still investigating whether the two overlap exactly, but Hsieh says the work suggests a connection between multipolar order and pseudogap phenomena.

"There is also very recent work by other groups showing signatures of superconductivity in Sr2IrO4 of the same variety as that found in cuprates," he says. "Given the highly similar phenomenology of the iridates and cuprates, perhaps iridates will help us resolve some of the longstanding debates about the relationship between the pseudogap and high-temperature superconductivity."

Hsieh says the finding emphasizes the importance of developing new tools to try to uncover new phenomena. "This was really enabled by a simultaneous technique advancement," he says.

Furthermore, he adds, these multipolar orders might exist in many more materials. "Sr2IrO4 is the first thing we looked at, so these orders could very well be lurking in other materials as well, and that's exactly what we are pursuing next."

Additional Caltech authors on the paper, "Evidence of an odd-parity hidden order in a spin–orbit coupled correlated iridate," are Darius H. Torchinsky, Hao Chu, and Vsevolod Ivanov. Ron Lifshitz of Tel Aviv University, Rebecca Flint of Iowa State University, and Tongfei Qi and Gang Cao of the University of Kentucky are also coauthors. The work was supported by funding from the Army Research Office, the National Science Foundation (NSF), and the Institute for Quantum Information and Matter, an NSF Physics Frontiers Center with support from the Gordon and Betty Moore Foundation.


Building a Microscope to Search for Signs of Life on Other Worlds

In March of this year, a team of bioengineers from Caltech, JPL, and the University of Washington spent a week in Greenland, using snowmobiles to haul their scientific equipment, waiting out windstorms, and spending hours working on the ice. Now the same researchers are planning a trip to California's Mojave Desert, where they will study Searles Lake, a dry, extremely salty basin that is naturally full of harsh chemicals like arsenic and boron. The researchers are testing a holographic microscope that they have designed and built for the purpose of observing microbes that thrive in such extreme environments. The ultimate goal? To send the microscope on a spacecraft to search for biosignatures—signs of life—on other worlds such as Mars or Saturn's icy moon Enceladus.

"Our big overarching hypothesis is that motility is a good biosignature," explains Jay Nadeau, a scientific researcher at Caltech and one of the investigators on the holographic microscope project, dubbed SHAMU (Submersible Holographic Astrobiology Microscope with Ultraresolution). "We suspect that if we send back videos of bacteria swimming, that is going to be a better proof of life than pretty much anything else."

Think, she says, of Antonie van Leeuwenhoek, the father of microbiology, who used simple microscopes in the 17th and 18th centuries to observe protozoa and bacteria. "He immediately recognized that they were living things based on the way they moved," Nadeau says. Indeed, when Leeuwenhoek wrote about observing samples of the plaque between his teeth, he described seeing "many very little animalcules, very prettily a-moving." And Nadeau adds, "No one doubted Leeuwenhoek once they saw them moving for themselves."

In order to capture images of microbes "a-moving" on another world, Nadeau and her colleagues, including Mory Gharib, the Hans W. Liepmann Professor of Aeronautics and Bioinspired Engineering and a vice provost at Caltech, had the idea to use digital holography rather than conventional microscopy.

Holography is a method for recording holistic information about the light bouncing off a sample so that a 3-D image can be reconstructed later. Compared with conventional microscopy, which typically uses multiple lenses to focus on a single shallow plane of a sample (on a slide, for example), holography offers the advantages of focusing over a relatively large volume and of capturing high-resolution images, without moving parts that could break in extreme environments or during a launch or landing, if the instrument were sent into space.

Standard photography records only the intensity of the light (related to its amplitude) that reaches a camera lens after scattering off an object. But as a wave, light has both an amplitude and a phase, a separate property that can be used to tell how far the light travels once it is scattered. Holography is a technique that captures both—something that makes it possible to re-create a three-dimensional image of a sample.

To understand the technique, first imagine dropping a pebble in a pond and watching ripples emanate from that spot. Now imagine dropping a second pebble in a new spot, producing a second set of ripples. If the ripples interact with an object on the surface, such as a rock, they are diffracted or scattered by the object, changing the pattern of the waves—an effect that can be detected. Holography is akin to dropping two pebbles in a pond simultaneously, with the pebbles being two laser beams: one a reference beam that travels unaffected by the sample, the other an object beam that runs into the sample and gets diffracted or scattered. A detector measures the combination, or superposition, of the ripples from the two beams, which is known as the interference pattern. By knowing how the waves propagate and by analyzing the interference pattern, a computer can reconstruct what the object beam encountered as it traveled.
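
The two-beam picture can be sketched in a few lines (a schematic 1-D toy, not SHAMU's actual reconstruction pipeline; the wavenumbers and phase value are made up): record only the intensity of the superposed reference and object beams, and the fringe pattern still encodes the phase the sample imprinted on the object beam.

```python
import numpy as np

# Schematic 1-D hologram (toy, not SHAMU's pipeline): the camera records
# only intensity, yet the fringes preserve the object beam's phase.
x = np.linspace(0, 1, 2000, endpoint=False)
k_obj = 2 * np.pi * 5        # object beam arrives at a slight tilt
phase = 1.2                  # phase imprinted on it by the sample

reference = np.ones_like(x, dtype=complex)       # plane reference beam
obj = 0.3 * np.exp(1j * (k_obj * x + phase))     # weak scattered beam

intensity = np.abs(reference + obj) ** 2         # all the camera records

# The fringes sit at spatial frequency 5, and their offset is the object
# phase; a Fourier transform reads it straight back out.
recovered = np.angle(np.fft.rfft(intensity)[5])
print(recovered)             # ~1.2: the phase survived the measurement
```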

"We can take an interference pattern and use that to reconstruct all of the images in different planes in a volume," explains Chris Lindensmith, a systems engineer at JPL and an investigator on the project. "So we can just go and reconstruct whatever plane we are interested in after the fact and look and see if there's anything in there."

That means that a single image captures all the microbes in a sample—whether there is one bacterium or a thousand. And by taking a series of such images over time, the researchers can reconstruct the path that each bacterium took as it swam in the sample.

That would be virtually impossible with conventional microscopy, says Lindensmith. With microscopy, you need to focus in real time, meaning that someone would have to turn a dial to move the sample closer to or farther from the microscope's lenses in order to keep a particular microbe in focus. During that time, they would miss the movements of any other microbes in the sample, because the in-focus region is so small.

All of the advantages that the holographic microscope offers over microscopy make it appealing for studies elsewhere in the solar system. And there are a number of worlds that scientists are eager to study in close-up detail to search for signs of life. In 2008, using data from the Phoenix Mars lander, scientists determined that there is water ice just below the surface in the northern plains of the Red Planet, making the locale a candidate for follow-up sampling studies. In addition, both the jovian moon Europa and the saturnian moon Enceladus are thought to harbor liquid oceans beneath their icy surfaces. Therefore, the SHAMU group says, a compact, robust microscope like the one the Caltech team is developing could be a highly desirable component of an instrument suite on a lander to any one of those locations.

Nadeau says the group's prototype performed well during the team's field-testing trip to Greenland. At each testing site, the researchers drilled a hole into the sea ice, submerged the microscope to a depth where some of the salty liquid water trapped inside the ice, called brine, was able to seep into the device's sample area, and collected holographic images. "We know that things live in the water and we know what they do and how they swim," says Nadeau. "But believe it or not, nobody knew what kinds of microorganisms live in sea-ice brine or if they can swim."

That is because typical techniques for counting, labeling, and observing microbes rely on fragile instrumentation and often require large amounts of power, making them unusable in extreme environments like the Arctic. As a result, "nobody had ever looked at sea-ice organisms immediately after collection like we did," says Stephanie Rider, a staff scientist at Caltech who went on the Greenland trip as part of the project. Previously, other teams have collected samples and taken them back to a lab where the samples have been stored in a freezer, sometimes for weeks at a time. "Who knows how much the samples have been warmed up and cooled down by the time someone studies them?" Rider says. "The samples could be totally different at that point."


Video caption: When samples are returned to the laboratory, fed rich growth medium, and warmed to +4 degrees Celsius, swimming speeds are greatly increased.
Credit: Jay Nadeau/Caltech

During the Greenland trip, the SHAMU group successfully collected images that have been used to construct videos of bacteria and algae that live in the sea-ice brine. They also brought samples back to a lab in Nuuk, Greenland, warmed them overnight, and fed them bacterial growth medium—duplicating the standard conditions under which microorganisms from sea ice have been studied in the past. The researchers found that under those conditions, "everything starts zipping around like crazy," says Nadeau, indicating that in order to be accurate, observations do need to be made in place on the ice rather than back in a lab.

The team is particularly excited about what the successful measurements from Greenland could mean in the context of Mars. "We know from this that we can tell that things are alive when you take them straight out of ice," says Nadeau. "If we can see life in there on Earth, then it's possible there might be life in pockets of ice on Mars as well. Perhaps you don't have to have a big liquid ocean to find living organisms; there's a possibility that things can live just in pockets of ice."

The three-year SHAMU project began in January 2014 with funding from the Gordon and Betty Moore Foundation. In the coming months, the engineers hope to improve the microscope's sample chamber and to scale down the entire device. They believe they will have a launch-ready instrument by the end of the funding period.

As a first test in space, they would like to send the instrument to the International Space Station not only to see how it behaves in space but also to observe microbial samples under zero-gravity conditions. Beyond that, they hope to include SHAMU on a Mars lander as part of a NASA Discovery mission aimed at searching for biosignatures in the frozen northern plains of Mars. The Caltech team is partnering with Honeybee Robotics, a company that has built drills and sampling systems for numerous NASA missions (including the Phoenix Mars lander), to integrate the holographic microscope on a drill that would bore down about three feet into the martian ground ice.

In addition to Nadeau, Gharib, and Lindensmith, Jody Deming of the University of Washington's School of Oceanography is also an investigator on the SHAMU project.

Writer: 
Kimm Fesenmaier

Probing the Mysterious Perceptual World of Autism

New research looks at what people with autism spectrum disorder pay attention to in the real world.

The perceptual world of a person with autism spectrum disorder (ASD) is unique. Beginning in infancy, people who have ASD observe and interpret images and social cues differently than others. Caltech researchers now have new insight into just how this occurs, research that eventually may help doctors diagnose, and more effectively treat, the various forms of the disorder. The work is detailed in a study published in the October 22 issue of the journal Neuron.

Symptoms of ASD include impaired social interaction, compromised communication skills, restricted interests, and repetitive behaviors. Research suggests that some of these behaviors are influenced by how an individual with ASD senses, attends to, and perceives the world.

The new study investigated how visual input is interpreted in the brain of someone with ASD. In particular, it examined the validity of long-standing assumptions about the condition, including the belief that those with ASD often miss facial cues, contributing to their inability to respond appropriately in social situations.

"Among other findings, our work shows that the story is not as simple as saying 'people with ASD don't look normally at faces.' They don't look at most things in a typical way," says Ralph Adolphs, the Bren Professor of Psychology and Neuroscience and professor of biology, in whose lab the study was done. Indeed, the researchers found that people with ASD attend more to nonsocial images, to simple edges and patterns in those images, than to the faces of people.

To reach these determinations, Adolphs and his lab teamed up with Qi Zhao, an assistant professor of electrical and computer engineering at the National University of Singapore and the senior author on the paper, who had developed a detailed, model-based method for quantifying visual saliency: which features of an image draw a viewer's attention. The researchers showed 700 images to 39 subjects. Twenty of the subjects were high-functioning individuals with ASD, and 19 were control, or "neurotypical," subjects without ASD. The two groups were matched for age, race, gender, educational level, and IQ. Each subject viewed each image for three seconds while an eye-tracking device recorded their attention patterns on objects depicted in the images.

Unlike the abstract representations of single objects or faces that have been commonly used in such studies, the images that Adolphs and his team presented contained combinations of more than 5,500 real-world elements—common objects like people, trees, and furniture as well as less common items like knives and flames—in natural settings, mimicking the scenes that a person might observe in day-to-day life.

"Complex images of natural scenes were a big part of this unique approach," says first author Shuo Wang (PhD '14), a postdoctoral fellow at Caltech. The images were shown to subjects in a rich semantic context, "which simply means showing a scene that makes sense," he explains. "I could make an equally complex scene with Photoshop by combining some random objects such as a beach ball, a hamburger, a Frisbee, a forest, and a plane, but that grouping of objects doesn't have a meaning—there is no story demonstrated. Having objects that are related in a natural way and that show something meaningful provides the semantic context. It is a real-world approach."

In addition to validating previous studies that showed, for example, that individuals with ASD are less drawn to faces than control subjects, the new study found that these subjects were strongly attracted to the center of images, regardless of the content placed there. Similarly, they tended to focus their gaze on objects that stood out—for example, due to differences in color and contrast—rather than on faces. Take, for example, one image from the study showing two people talking with one facing the camera and the other facing away so that only the back of their head is visible. Control subjects concentrated on the visible face, whereas ASD subjects attended equally to the face and the back of the other person's head.
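
The kind of region-of-interest comparison described above can be sketched with toy data (the fixation coordinates and face box are hypothetical, not the study's data): given eye-tracker fixation points and a bounding box around a face, compute the fraction of fixations that land on the face.

```python
import numpy as np

# Toy region-of-interest analysis (hypothetical coordinates, not the
# study's data): what fraction of fixations land inside a "face" box?
def roi_fraction(fixations, box):
    """fixations: (N, 2) array of (x, y); box: (xmin, ymin, xmax, ymax)."""
    x, y = fixations[:, 0], fixations[:, 1]
    xmin, ymin, xmax, ymax = box
    inside = (x >= xmin) & (x <= xmax) & (y >= ymin) & (y <= ymax)
    return inside.mean()

face_box = (100, 50, 200, 150)   # hypothetical face location, in pixels

# Made-up fixation points for a control viewer and an ASD viewer.
control = np.array([[150, 100], [160, 90], [155, 110], [400, 300]])
asd = np.array([[150, 100], [320, 240], [400, 300], [500, 120]])

print(roi_fraction(control, face_box))  # 0.75: gaze concentrated on the face
print(roi_fraction(asd, face_box))      # 0.25: gaze spread over the scene
```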

"The study is probably most useful for informing diagnosis," Adolphs says. "Autism is many things. Our study is one initial step in trying to discover what kinds of different autisms there actually are. The next step is to see if all people with ASD show the kind of pattern we found. There are probably differences between individual people with ASD, and those differences could relate to differences in diagnosis, for instance, revealing subtypes of autism. Once we have identified those subtypes, we can begin to ask if different kinds of treatment might be best for each kind of subtype."

Adolphs plans to continue this type of research using functional magnetic resonance imaging scans to track the brain activity of people with ASD while they are viewing images in laboratory settings similar to what was used in this study.

The paper, "Atypical Visual Saliency in Autism Spectrum Disorder Quantified through Model-Based Eye Tracking," was coauthored by Shuo Wang and Ralph Adolphs at Caltech; Ming Jiang and Qi Zhao from the National University of Singapore; Xavier Morin Duchesne and Daniel P. Kennedy of Indiana University, Bloomington; and Elizabeth A. Laugeson from UCLA.

The research was supported by a postdoctoral fellowship from the Autism Science Foundation, a Fonds de Recherche du Québec en Nature et Technologies predoctoral fellowship, a National Institutes of Health Grant and National Alliance for Research on Schizophrenia and Depression Young Investigator Grant, a grant from the National Institute of Mental Health to the Caltech Conte Center for the Neurobiology of Social Decision Making, a grant from the Simons Foundation Autism Research Initiative, and Singapore's Defense Innovative Research Program and the Singapore Ministry of Education's Academic Research Fund Tier 2.


Astronomers Peer Inside Stars, Finding Giant Magnets

Astronomers have for the first time probed the magnetic fields in the mysterious inner regions of stars, finding they are strongly magnetized.

Using a technique called asteroseismology, the scientists were able to calculate the magnetic field strengths in the fusion-powered hearts of dozens of red giants, stars that are evolved versions of our sun.

"In the same way medical ultrasound uses sound waves to image the interior of the human body, asteroseismology uses sound waves generated by turbulence on the surface of stars to probe their inner properties," says Caltech postdoctoral researcher Jim Fuller, who co-led a new study detailing the research.

The findings, published in the October 23 issue of Science, will help astronomers better understand the life and death of stars. Magnetic fields likely determine the interior rotation rates of stars; such rates have dramatic effects on how the stars evolve.

Until now, astronomers have been able to study the magnetic fields of stars only on their surfaces, and have had to use supercomputer models to simulate the fields near the cores, where the nuclear-fusion process takes place. "We still don't know what the center of our own sun looks like," Fuller says.

Red giants have a different physical makeup from so-called main-sequence stars such as our sun—one that makes them ideal for asteroseismology (a field that was born at Caltech in 1962, when the late physicist and astronomer Robert Leighton discovered the solar oscillations using the solar telescopes at Mount Wilson). The cores of red-giant stars are much denser than those of younger stars. As a consequence, sound waves do not reflect off the cores, as they do in stars like our sun. Instead, the sound waves are transformed into another class of waves, called gravity waves: buoyancy-driven waves in the stellar interior, not to be confused with the gravitational waves of general relativity.

"It turns out the gravity waves that we see in the red giants do propagate all the way to the center of these stars," says co-lead author Matteo Cantiello, a specialist in stellar astrophysics from UC Santa Barbara's Kavli Institute for Theoretical Physics (KITP).

This conversion from sound waves to gravity waves has major consequences for the tiny shape changes, or oscillations, that red giants undergo. "Depending on their size and internal structure, stars oscillate in different patterns," Fuller says. In one form of oscillation pattern, known as the dipole mode, one hemisphere of the star becomes brighter while the other becomes dimmer. Astronomers observe these oscillations in a star by measuring how its light varies over time.

When strong magnetic fields are present in a star's core, the fields can disrupt the propagation of gravity waves, causing some of the waves to lose energy and become trapped within the core. Fuller and his coauthors have coined the term "magnetic greenhouse effect" to describe this phenomenon because it works similarly to the greenhouse effect on Earth, in which greenhouse gases in the atmosphere help trap heat from the sun. The trapping of gravity waves inside a red giant causes some of the energy of the star's oscillation to be lost, and the result is a smaller than expected dipole mode.
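
The observational signature, a dipole mode that is smaller than expected, can be sketched with a toy light curve (illustrative only, not the paper's model; the mode frequencies, amplitudes, and trapped fraction are made up): damping the dipole mode's amplitude by the energy fraction trapped in the core shrinks its peak in the oscillation power spectrum while leaving other modes untouched.

```python
import numpy as np

# Toy dipole-mode suppression (illustrative, not the paper's model):
# brightness variations are a sum of modes; energy trapped in the core
# damps the dipole mode's power but leaves the radial mode untouched.
t = np.linspace(0, 100, 10000, endpoint=False)

radial = 1.0 * np.sin(2 * np.pi * 0.30 * t)          # unaffected mode
trapped_fraction = 0.9                               # made-up energy loss
dipole_full = 0.8 * np.sin(2 * np.pi * 0.35 * t)
dipole_damped = np.sqrt(1 - trapped_fraction) * dipole_full

def mode_power(lightcurve, freq):
    spec = np.abs(np.fft.rfft(lightcurve)) ** 2
    freqs = np.fft.rfftfreq(len(t), t[1] - t[0])
    return spec[np.argmin(np.abs(freqs - freq))]

normal = radial + dipole_full
magnetized = radial + dipole_damped

ratio = mode_power(magnetized, 0.35) / mode_power(normal, 0.35)
print(ratio)   # ~0.1: the dipole peak shrinks by the trapped fraction
```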

In 2013, NASA's Kepler space telescope, which can measure stellar brightness variations with incredibly high precision, detected dipole-mode damping in several red giants. Dennis Stello, an astronomer at the University of Sydney, brought the Kepler data to the attention of Fuller and Cantiello. Working in collaboration with KITP director Lars Bildsten and Rafael Garcia of France's Alternative Energies and Atomic Energy Commission, the scientists showed that the magnetic greenhouse effect was the most likely explanation for dipole-mode damping in the red giants. Their calculations revealed that the internal magnetic fields of the red giants were as much as 10 million times stronger than Earth's magnetic field.

"This is exciting, as internal magnetic fields play an important role for the evolution and ultimate fate of stars," says Professor of Theoretical Astrophysics Sterl Phinney, Caltech's executive officer for astronomy, who was not involved in the study.

A better understanding of the interior magnetic fields of stars could also help settle a debate about the origin of powerful magnetic fields on the surfaces of certain neutron stars and white dwarfs, two classes of stellar corpses that form when stars die.

"The magnetic fields that they find in the red-giant cores are comparable to those of the strongly magnetized white dwarfs," Phinney says. "The fact that only some of the red giants show the dipole suppression, which indicates strong core fields, may well be related to why only some stars leave behind remnants with strong magnetic fields after they die."

The asteroseismology technique the team used to probe red giants probably will not work with our sun. "However," Fuller says, "stellar oscillations are our best probe of the interiors of stars, so more surprises are likely."

The paper is entitled "Asteroseismology can reveal strong internal magnetic fields in red giant stars." In addition to Fuller, Cantiello, Garcia, and Bildsten, the other coauthor is Dennis Stello from the University of Sydney. Jim Fuller was supported by the National Science Foundation and a Lee A. DuBridge Postdoctoral Fellowship at Caltech.

This work was written collaboratively on the web. An Open Science version of the published paper can be found on Authorea, including a layperson's summary.


Toward a Smarter Grid

Steven Low, professor of computer science and electrical engineering at Caltech, says we are on the cusp of a historic transformation—a restructuring of the energy system similar to the reimagining and revamping that communication and computer networks experienced over the last two decades, a process that made them layered, with distributed and interconnected intelligence everywhere.

The power network of the future—aka the smart grid—will have to be much more dynamic and responsive than the current electric grid, handling tremendous loads while incorporating intermittent energy production from renewable resources such as wind and solar, all while ensuring that when you or I flip a switch at home or work, the power still comes on without fail.

The smart grid will also be much more distributed than the current network, which controls a relatively small number of generators to provide power to millions of passive endpoints—the computers, machines, buildings, and more that simply consume energy. In the future, thanks to inexpensive sensors and computers, many of those endpoints will become active and intelligent: loads such as smart devices, or distributed generators such as solar panels and wind turbines. These endpoints will be able to generate, sense, communicate, compute, and respond.

Given these trends, Low says, it is only reasonable to conclude that in the coming decades, the electrical system is likely to become "the largest and most complex cyberphysical system ever seen." And that presents both a risk and an opportunity. On the one hand, if the larger, more active system is not controlled correctly, blackouts could be much more frequent. On the other hand, if properly managed, it could greatly improve efficiency, security, robustness, and sustainability.

At Caltech, Low and an interdisciplinary group of engineers, economists, mathematicians, and computer scientists pulled together by the Resnick Sustainability Institute, along with partners like Southern California Edison and the Department of Energy, are working to develop the devices, systems, theories, and algorithms to help guide this historic transformation and make sure that it is properly managed.

In 2012, the Resnick Sustainability Institute issued a report titled Grid 2020: Towards a Policy of Renewable and Distributed Energy Resources, which focused on some of the major engineering, economic, and policy issues of the smart grid. That report led to a discussion series and working sessions that in turn led to the publication in 2014 of another report called More Than Smart: A Framework to Make the Distribution Grid More Open, Efficient and Resilient.

"One thing that makes the smart grid problem particularly appealing for us is that you can't solve it just as an engineer, just as a computer scientist, just as a control theorist, or just as an economist," says Adam Wierman, professor of computer science and Executive Officer for the Computing and Mathematical Sciences Department. "You actually have to bring to bear tools from all of these areas to solve the problem."

For example, he says, consider the problem of determining how much power various parts of the grid should generate at a particular time. The goal is to generate an amount of power that matches, or closely approximates, the electricity customers demand. Currently this involves predicting electricity demand a day in advance, updating that prediction several hours before the power is needed, and then figuring out how much power nuclear, natural-gas, and coal plants will produce to meet the demand. That determination is made through markets: in California, the California Independent System Operator runs a day-ahead electricity market in which utility companies and power plants buy and sell power generation for the following day. Any small errors in the prediction are then fixed at the last minute by engineers in a control office, with markets completely out of the picture.
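The two-stage process described above (market-based scheduling a day ahead, then engineered corrections in real time) can be sketched in a few lines. Every function name and number here is invented for illustration; real unit commitment and dispatch are vastly more involved.

```python
def day_ahead_schedule(forecast_mw):
    """Commit generation to match the day-ahead demand forecast."""
    return forecast_mw  # the market clears so scheduled supply == forecast

def real_time_correction(scheduled_mw, actual_mw):
    """Last-minute adjustment made by engineers, outside the market."""
    return actual_mw - scheduled_mw  # positive -> ramp reserves up

forecast = [900, 950, 1100]   # MW, predicted a day ahead (invented numbers)
actual   = [910, 940, 1150]   # MW, what customers actually drew

scheduled   = [day_ahead_schedule(f) for f in forecast]
corrections = [real_time_correction(s, a) for s, a in zip(scheduled, actual)]
print(corrections)  # imbalances the control room must absorb: [10, -10, 50]
```

The point of the sketch is the division of labor: markets handle the large, predictable first term, while engineered control absorbs the small residual, and renewables make that residual much larger and less predictable.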

"So you have a balance between the robustness and certainty provided by engineered control and the efficiency provided by markets and economic control," says Wierman. "But when renewable energy comes onto the table, all of a sudden the predictions of energy production are much less accurate, so the interaction between the markets and the engineering is up in the air, and no one knows how to handle this well." This, he says, is the type of problem the Caltech team, with its interdisciplinary approach, is uniquely equipped to address.

Indeed, the Caltech smart grid team is working on projects on the engineering side, projects on the markets side, and projects at the interface.

On the engineering side, a major project has revolved around a complex mathematical problem called optimal power flow that underlies many questions dealing with power system operations and planning. "Optimal power flow can tell you when things should be on or conserving energy, how to stabilize the voltage in the network as solar or wind generation fluctuates, or how to set your thermostat so that you maintain comfort in your building while stabilizing the voltage on the grid," explains Mani Chandy, the Simon Ramo Professor of Computer Science, Emeritus. "The problem has been around for 50 years but is extremely difficult to solve."

Chandy worked with Low; John Doyle, the Jean-Lou Chameau Professor of Control and Dynamical Systems, Electrical Engineering, and Bioengineering; and a number of Caltech students to devise a clever way to solve the problem, allowing them, for the first time, to compute a solution and then check whether that solution is globally optimal.

"We said, let's relax the constraints and optimize the cost over a bigger set that we can design to be solvable," explains Low. For example, if a customer is consuming electricity at a single location, the original problem might specify exactly how much electricity that individual is consuming; a relaxation would say only that the person is consuming no more than a certain amount—a way of adding flexibility to a tightly constrained problem. "Almost magically, it turns out that if I design my physical set in a clever way, the solution for this larger, simpler set turns out to be the same as it would be for the original set."
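Low's relax-and-check strategy can be illustrated on a toy problem that has nothing to do with power flow: minimize a convex cost over a nonconvex set by enlarging the set, then test whether the relaxed optimum happens to satisfy the original constraint, which certifies it globally optimal for the hard problem. This sketch is purely illustrative and is not the actual optimal power flow formulation.

```python
def solve_relaxation(cost, lo, hi, steps=100001):
    """Grid-search the relaxation: minimize cost over lo <= x <= hi."""
    return min((lo + (hi - lo) * i / (steps - 1) for i in range(steps)),
               key=cost)

cost = lambda x: (x - 2.0) ** 2      # convex objective
# Original (nonconvex) constraint: x**2 == 1, so x is -1 or +1.
# Relaxation: x**2 <= 1, i.e. the interval [-1, 1], which is easy to search.
x_star = solve_relaxation(cost, -1.0, 1.0)

# Check: does the relaxed optimum satisfy the original constraint?
exact = abs(x_star ** 2 - 1.0) < 1e-9
print(x_star, exact)  # 1.0 True -> the relaxation is exact here
```

When the check fails, the relaxed solution is only a lower bound on the true cost; the Caltech result identifies network conditions under which the relaxation is provably exact.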

The new approach produces a feasible solution for almost all distribution systems—the low-voltage networks that take power from larger substations and ultimately deliver it to the houses, buildings, street lights, and so on in a region. "That's important because many of the innovations in the energy sector in the coming decade will happen on distribution systems," says Low.

Another Caltech project attempts to predict how many home and business owners are likely to adopt rooftop solar panels over the next 5, 10, 20, or 30 years. In Southern California, the number of solar installations has increased steadily for several years. For planning purposes, utility companies need to anticipate whether that growth will continue and at what pace. For example, Low says, if the network is eventually going to comprise 15 or 20 percent renewables, then the current grid is robust enough. "But if we are going to have 50 or 80 percent renewables," he says, "then the grid will need huge changes in terms of both engineering and market design."

Working with Chandy, graduate students Desmond Cai and Anish Agarwal (BS '13, MS '15) developed a new model for predicting how many homes and businesses will install rooftop solar panels. The model has proven highly accurate. Researchers believe that whether or not people "go solar" depends largely on two factors: how much money they will save and their confidence in the new technology. The Caltech model, completed in 2012, indicates that the amount of money that people can save by installing rooftop solar has a huge influence on whether they will adopt the technology. Based on their research, the team has also developed a web-based tool that predicts how many people will install solar panels using a utility company's data. Southern California Edison's planning department is actively using the tool.
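The article does not describe the model's mathematical form, so the following is a purely hypothetical sketch of an adoption model driven by the two factors it mentions, projected savings and confidence in the technology; the logistic form and every coefficient below are invented.

```python
import math

def adoption_probability(annual_savings_usd, confidence,
                         a=0.004, b=2.0, c=-4.0):
    """Hypothetical logistic model: the probability a household goes solar
    rises with projected annual savings (USD) and confidence (0 to 1).
    All coefficients are illustrative, not from the Caltech model."""
    score = c + a * annual_savings_usd + b * confidence
    return 1.0 / (1.0 + math.exp(-score))

low  = adoption_probability(annual_savings_usd=200,  confidence=0.2)
high = adoption_probability(annual_savings_usd=1500, confidence=0.9)
print(round(low, 3), round(high, 3))  # savings push adoption up sharply
```

Even a caricature like this shows why savings dominate: a plausible range of dollar savings swings the score far more than the bounded confidence term can.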

On the markets side, Caltech researchers are doing theoretical work looking at the smart grid and the network of markets it will produce. Electricity markets can be both complicated and interesting to study because unlike a traditional market—a single place where people go to buy and sell something—the electricity "market" actually consists of many networked marketplaces interacting in complicated ways.

One potential problem with this system and the introduction of more renewables, Wierman says, is that it opens the door for firms to manipulate prices by turning off generators. Whereas the operational status of a normal generator can be monitored, with solar and wind power, it is nearly impossible to verify how much power should have been produced because it is difficult to know whether it was windy or sunny at a certain time. "For example, you can significantly impact prices by pushing—or not pushing—solar energy from your solar farm," Wierman says. "There are huge opportunities for strongly manipulating market structure and prices in these environments. We are beginning to look at how to redesign markets so that this isn't as powerful or as dangerous."

An area of smart grid research where the Caltech team takes full advantage of its multidisciplinary nature is at the interface of engineering and markets. One example is a concept known as demand response, in which a mismatch between energy supply and demand can be addressed from the demand side (that is, by involving consumers), rather than from the power-generation side.

As an example of demand response, some utilities have started programs where participants, who have smart thermostats installed in their homes in exchange for some monetary reward, allow the company to turn off their air conditioners for a short period of time when it is necessary to reduce the demand on the grid. In that way, household air conditioners become "shock absorbers" for the system.

"But the economist says wait a minute, that's really inefficient. You might be turning the AC off for people who desperately want it on and leaving it on for people who couldn't care less," says John Ledyard, the Allen and Lenabelle Davis Professor of Economics and Social Sciences. A counterproposal, called Prices to Devices, has the utility send price signals to devices such as thermostats in homes and offices, and customers decide whether they want to pay for power at those prices. Ledyard says that while this achieves efficient rationing in equilibrium, it introduces a delay between the consumer and the utility, creating an instability in the dynamics of the system.

The Caltech team has devised an intermediate proposal that removes the delay in the system. Rather than sending a price and having consumers react to it, their program has consumers enter their sensitivity to various prices ahead of time, right on their smart devices—this can be done with a single number. Those devices then deliver that information to the algorithm that operates the network. For example, a consumer might program his or her smart thermostat to say, in effect: "If a kilowatt-hour of power costs $1 and the temperature outside is 90 degrees, keep the air conditioner on; if the price is $5 and the temperature outside is 80 degrees, go ahead and turn it off."

"The consumer's response is handled by the algorithm, so there's no lag," says Ledyard.
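A minimal sketch of how such a pre-registered rule might work. The 75-degree comfort point, the sensitivity units, and the function name are all invented here; the article does not specify the Caltech program's actual rule.

```python
def ac_should_run(price_per_kwh, outdoor_temp_f, sensitivity):
    """Run the AC only while scaled discomfort outweighs the price.

    `sensitivity` is the single number the consumer programs ahead of
    time: dollars per kWh they are willing to pay per degree above a
    75 F comfort point (an invented constant for this sketch).
    """
    discomfort = max(0.0, outdoor_temp_f - 75.0)
    return price_per_kwh <= sensitivity * discomfort

# Mirrors the thermostat example above: cheap power on a hot day keeps
# the AC on; expensive power on a milder day shuts it off.
hot_cheap   = ac_should_run(price_per_kwh=1.0, outdoor_temp_f=90, sensitivity=0.10)
mild_pricey = ac_should_run(price_per_kwh=5.0, outdoor_temp_f=80, sensitivity=0.10)
print(hot_cheap, mild_pricey)  # True False
```

Because the preference is stored on the device, the network algorithm can evaluate it instantly at whatever price prevails, with no round trip to the consumer.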

Currently, the Caltech smart grid team is working closely with Southern California Edison to set up a pilot test in Orange County involving several thousand households. The homes will be equipped with various distributed energy resources, including rooftop solar panels, electric vehicles, smart thermostats for air conditioners, and pool pumps. The team's new approaches to optimal power flow and demand response will be tested to see whether they can keep a miniature version of the future smart grid stable.

Such experiments are crucial for preparing for the major changes to the electrical system that are certainly coming down the road, Low says. "The stakes are high. In the face of this historic transformation, we need to do all that we can to minimize the risk and make sure that we realize the full potential."

Writer: Kimm Fesenmaier

Long-Term Contraception in a Single Shot

Caltech biologists have developed a nonsurgical method to deliver long-term contraception to both male and female animals with a single shot. The technique—so far used only in mice—holds promise as an alternative to spaying and neutering feral animals.

The approach was developed in the lab of Bruce Hay, professor of biology and biological engineering at Caltech, and is described in the October 5 issue of Current Biology. The lead author on the paper is postdoctoral scholar Juan Li.

Hay's team was inspired by work conducted in recent years by David Baltimore and others showing that an adeno-associated virus (AAV)—a small, harmless virus that cannot replicate on its own and that has proven useful in gene-therapy trials—can be used to deliver DNA sequences to muscle cells, causing them to produce specific antibodies known to fight infectious diseases such as HIV, malaria, and hepatitis C.

Li and her colleagues thought the same approach could be used to produce infertility. They used an AAV to deliver a gene that directs muscle cells to produce an antibody that neutralizes gonadotropin-releasing hormone (GnRH) in mice. GnRH is what the researchers refer to as a "master regulator of reproduction" in vertebrates—it stimulates the release of two hormones from the pituitary that promote the formation of eggs, sperm, and sex steroids. Without it, an animal is rendered infertile.

In the past, other teams have tried neutralizing GnRH through vaccination. However, the loss of fertility that was seen in those cases was often temporary. In the new study, Hay and his colleagues saw that the mice—both male and female—were unable to conceive after about two months, and the majority remained infertile for the remainder of their lives.

"Inhibiting GnRH is an ideal way to inhibit fertility and behaviors caused by sex steroids, such as aggression and territoriality," says Hay. He notes that in the study, his team also shows that female mice can be rendered infertile using a different antibody that targets a binding site for sperm on the egg. "This target is ideal when you want to inhibit fertility but want to leave the individual otherwise completely normal in terms of reproductive behaviors and hormonal cycling."

Hay's team has dubbed the new approach "vectored contraception" and says that there are many other proteins that are thought to be important for reproduction that might also be targeted by this technique.

The researchers are particularly excited about the possibility of replacing spay–neuter programs with single injections. "Spaying and neutering of animals to control fertility, unwanted behavior, and population numbers of feral animals is costly and time consuming, and therefore often doesn't happen," says Hay. "There is a strong desire in many parts of the world for quick, nonsurgical approaches to inhibiting fertility. We think vectored contraception provides such an approach."

As a next step, Hay's team is working with Bill Swanson, director of animal research at the Cincinnati Zoo's Center for Conservation and Research of Endangered Wildlife, to try this approach in female domestic cats. Swanson's team spends much of its time working to promote fertility in endangered cat species, but it is also interested in developing humane ways of managing populations of feral domestic cats through inhibition of fertility, as these animals are often otherwise trapped and euthanized.

Additional Caltech authors on the paper, "Vectored antibody gene delivery mediates long-term contraception," are Alejandra I. Olvera, Annie Moradian, Michael J. Sweredoski, and Sonja Hess. Omar S. Akbari is also a coauthor on the paper and is now at UC Riverside. Some of the work was completed in the Proteome Exploration Laboratory at Caltech, which is supported by the Gordon and Betty Moore Foundation, the Beckman Institute, and the National Institutes of Health. Olvera was supported by a Gates Millennium Scholar Award.

Writer: Kimm Fesenmaier

New Polymer Creates Safer Fuels

Before embarking on a transcontinental journey, jet airplanes fill up with tens of thousands of gallons of fuel. In the event of a crash, such large quantities of fuel increase the severity of an explosion upon impact. Researchers at Caltech and JPL have discovered a polymeric fuel additive that can reduce the intensity of postimpact explosions that occur during accidents and terrorist acts. Furthermore, preliminary results show that the additive can provide this benefit without adversely affecting fuel performance.

The work is published in the October 2 issue of the journal Science.

Jet engines compress air and combine it with a fine spray of jet fuel. Ignition of the mixture of air and jet fuel by an electric spark triggers a controlled explosion that thrusts the plane forward. Jet airplanes are powered by thousands of these tiny explosions. However, the process that distributes the spray of fuel for ignition—known as misting—also causes fuel to rapidly disperse and easily catch fire in the event of an impact.

The additive, created in the laboratory of Julia Kornfield (BS '83), professor of chemical engineering, is a type of polymer—a long molecule made up of many repeating subunits—capped at each end by units that act like Velcro. The individual polymers spontaneously link into ultralong chains called "megasupramolecules."

Megasupramolecules, Kornfield says, have an unprecedented combination of properties that allows them to control fuel misting, improve the flow of fuel through pipelines, and reduce soot formation. Megasupramolecules inhibit misting under crash conditions and permit misting during fuel injection in the engine.

Other polymers have shown these benefits but have deficiencies that limit their usefulness. For example, ultralong polymers tend to break irreversibly when passing through pumps, pipelines, and filters, and as a result they lose their useful properties. This is not an issue with megasupramolecules, however. Although megasupramolecules also come apart into smaller pieces as they pass through a pump, the process is reversible: the Velcro-like units at the ends of the individual chains simply reconnect when they meet, effectively "healing" the megasupramolecules.


High-speed video showing untreated jet fuel (upper half) and jet fuel treated with 0.3% Caltech polymer (lower half) after a 140 mph projectile impact disperses fuel mist over continuously burning propane torches. The fireball formed by jet fuel is absent for fuel treated with Caltech polymer.
Credit: Caltech/JPL

When added to fuel, megasupramolecules dramatically affect the flow behavior even when the polymer concentration is too low to influence other properties of the liquid. For example, the additive does not change the energy content, surface tension, or density of the fuel. In addition, the power and efficiency of engines that use fuel with the additive is unchanged—at least in the diesel engines that have been tested so far.

When an impact occurs, the supramolecules spring into action. They spend most of their time coiled up in a compact conformation, but when the fluid suddenly elongates, the polymer molecules stretch out and resist further elongation. This stretching allows them to inhibit the breakup of droplets under impact conditions—thus reducing the size of explosions—as well as to reduce turbulence in pipelines.

"The idea of megasupramolecules grew out of ultralong polymers," says research scientist and co–first author Ming-Hsin "Jeremy" Wei (PhD '14). "In the late 1970s and early 1980s, polymer scientists were very enthusiastic about adding ultralong polymers to fuel in order to make postimpact explosions of aircraft less intense." The concept was tested in a full-scale crash test of an airplane in 1984. The plane was briefly engulfed in a fireball, generating negative headlines and causing ultralong polymers to quickly fall out of favor, Wei says.

In 2002, Virendra Sarohia (PhD '75) at JPL sought to revive research on mist control in hopes of preventing another attack like that of 9/11. "He reached out to me and convinced me to design a new polymer for mist control of jet fuel," says Kornfield, the corresponding author on the new paper. The first breakthrough came in 2006 with the theoretical prediction of megasupramolecules by Ameri David (PhD '08), then a graduate student in her lab. David designed individual chains that are small enough to eliminate prior problems and that dynamically associate together into megasupramolecules, even at low concentrations. He suggested that these assemblies might provide the benefits of ultralong polymers, with the new feature that they could pass through pumps and filters unharmed.

When Wei joined the project in 2007, he set out to create these theoretical molecules. Producing polymers of the desired length with sufficiently strong "molecular Velcro" on both ends proved to be a challenge. With the help of a catalyst developed by Robert Grubbs, the Victor and Elizabeth Atkins Professor of Chemistry and winner of the 2005 Nobel Prize in Chemistry, Wei developed a method to precisely control the structure of the molecular Velcro and put it in the right place on the polymer chains.

Integration of science and engineering was the key to success. Simon Jones, an industrial chemist now at JPL, helped Wei develop practical methods to produce longer and longer chains with the Velcro-like end groups. Co–first author and Caltech graduate student Boyu Li helped Wei explore the physics behind the exciting behavior of these new polymers. Joel Schmitigal, a scientist at the U.S. Army Tank Automotive Research Development and Engineering Center (TARDEC) in Warren, Michigan, performed essential tests that put the polymer on the path toward approval as a new fuel additive.

"Looking to the future, if you want to use this additive in thousands of gallons of jet fuel, diesel, or oil, you need a process to mass-produce it," Wei says. "That is why my goal is to develop a reactor that will continuously produce the polymer—and I plan to achieve it less than a year from now."

"Above all," Kornfield says, "we hope these new polymers will save lives and minimize burns that result from postimpact fuel fires."

The findings are published in a paper titled "Megasupramolecules for safer, cleaner fuel by end association of long telechelic polymers." The work was funded by TARDEC, the Federal Aviation Administration, the Schlumberger Foundation, and the Gates Grubstake Fund.


Flowing Electrons Help Ocean Microbes Gulp Methane

Good communication is crucial to any relationship, especially when partners are separated by distance. This also holds true for microbes in the deep sea that need to work together to consume large amounts of methane released from vents on the ocean floor. Recent work at Caltech has shown that these microbial partners can still accomplish this task, even when not in direct contact with one another, by using electrons to share energy over long distances.

This is the first time that direct interspecies electron transport—the movement of electrons from a cell, through the external environment, to another cell type—has been documented in microorganisms in nature.

The results were published in the September 16 issue of the journal Nature.

"Our lab is interested in microbial communities in the environment and, specifically, the symbiosis—or mutually beneficial relationship—between microorganisms that allows them to catalyze reactions they wouldn't be able to do on their own," says Professor of Geobiology Victoria Orphan, who led the recent study. For the last two decades, Orphan's lab has focused on the relationship between a species of bacteria and a species of archaea that live in symbiotic aggregates, or consortia, within deep-sea methane seeps. The organisms work together in syntrophy (which means "feeding together") to consume up to 80 percent of methane emitted from the ocean floor—methane that might otherwise end up contributing to climate change as a greenhouse gas in our atmosphere.

Previously, Orphan and her colleagues contributed to the discovery of this microbial symbiosis, a cooperative partnership between methane-oxidizing archaea called anaerobic methanotrophs (or "methane eaters") and sulfate-reducing bacteria (organisms that can "breathe" sulfate instead of oxygen) that allows these organisms to consume methane using sulfate from seawater. However, it was unclear how these cells share energy and interact within the symbiosis to perform this task.

Because these microorganisms grow slowly (reproducing only four times per year) and live in close contact with each other, it has been difficult for researchers to isolate them from the environment and grow them in the lab. So the Caltech team used the research submersible Alvin to collect samples containing the methane-oxidizing microbial consortia from deep-ocean methane seep sediments and then brought them back to the laboratory for analysis.

The researchers used different fluorescent DNA stains to mark the two types of microbes and view their spatial orientation in consortia. In some consortia, Orphan and her colleagues found the bacterial and archaeal cells were well mixed, while in other consortia, cells of the same type were clustered into separate areas.

Orphan and her team wondered if the variation in the spatial organization of the bacteria and archaea within these consortia influenced their cellular activity and their ability to cooperatively consume methane. To find out, they applied a stable isotope "tracer" to evaluate the metabolic activity. The amount of the isotope taken up by individual archaeal and bacterial cells within their microbial "neighborhoods" in each consortium was then measured with a high-resolution instrument called nanoscale secondary ion mass spectrometry (nanoSIMS) at Caltech. This allowed the researchers to determine how active the archaeal and bacterial partners were relative to their distance from one another.

To their surprise, the researchers found that the spatial arrangement of the cells in consortia had no influence on their activity. "Since this is a syntrophic relationship, we would have thought the cells at the interface—where the bacteria are directly contacting the archaea—would be more active, but we don't really see an obvious trend. What is really notable is that there are cells that are many cell lengths away from their nearest partner that are still active," Orphan says.

To find out how the bacteria and archaea were partnering, co-first authors Grayson Chadwick (BS '11), a graduate student in geobiology at Caltech and a former undergraduate researcher in Orphan's lab, and Shawn McGlynn, a former postdoctoral scholar, employed spatial statistics to look for patterns in cellular activity for multiple consortia with different cell arrangements. They found that populations of syntrophic archaea and bacteria in consortia had similar levels of metabolic activity; when one population had high activity, the associated partner microorganisms were also equally active—consistent with a beneficial symbiosis. However, a close look at the spatial organization of the cells revealed that no particular arrangement of the two types of organisms—whether evenly dispersed or in separate groups—was correlated with a cell's activity.

To determine how these metabolic interactions were taking place even over relatively long distances, postdoctoral scholar and coauthor Chris Kempes, a visitor in computing and mathematical sciences, modeled the predicted relationship between cellular activity and the distance between syntrophic partners when the partnership depends on molecular diffusion of a substrate. He found that conventional metabolites previously predicted to be involved in this syntrophic consumption of methane, such as hydrogen, were inconsistent with the spatial activity patterns observed in the data. Revised models indicated, however, that electrons could likely make the trip from cell to cell across greater distances.

"Chris came up with a generalized model for the methane-oxidizing syntrophy based on direct electron transfer, and these model results were a better match to our empirical data," Orphan says. "This pointed to the possibility that these archaea were directly transferring electrons derived from methane to the outside of the cell, and those electrons were being passed to the bacteria directly."
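The model comparison can be caricatured as follows; the functional forms, length scale, and values below are invented for illustration, and the real models are far more detailed.

```python
import math

def diffusion_activity(distance_um, decay_um=2.0):
    """Metabolite-diffusion model: predicted activity decays with the
    distance to the nearest syntrophic partner (decay scale invented)."""
    return math.exp(-distance_um / decay_um)

def electron_activity(distance_um):
    """Direct electron transfer through conductive proteins: predicted
    activity is roughly independent of partner distance."""
    return 1.0

# Predicted activity for cells 0.5, 2, and 8 micrometers from a partner.
for d in [0.5, 2.0, 8.0]:
    print(d, round(diffusion_activity(d), 2), electron_activity(d))
# A flat measured profile, like the nanoSIMS data, favors the electron model.
```

The logic of the inference is simply that only the second column of predictions falls off with distance, so distance-independent measurements discriminate between the two hypotheses.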

Guided by this information, Chadwick and McGlynn looked for independent evidence to support the possibility of direct interspecies electron transfer. Cultured bacteria, such as those from the genus Geobacter, are model organisms for the direct electron transfer process. These bacteria use large proteins, called multi-heme cytochromes, on their outer surface that act as conductive "wires" for the transport of electrons.

Using genome analysis—along with transmission electron microscopy and a stain that reacts with these multi-heme cytochromes—the researchers showed that these conductive proteins were also present on the outer surface of the archaea they were studying. And that finding, Orphan says, can explain why the spatial arrangement of the syntrophic partners does not seem to affect their relationship or activity.

"It's really one of the first examples of direct interspecies electron transfer occurring between uncultured microorganisms in the environment. Our hunch is that this is going to be more common than is currently recognized," she says.

Orphan notes that the information they have learned about this relationship will help to expand how researchers think about interspecies microbial interactions in nature. In addition, the microscale stable isotope approach used in the current study can be used to evaluate interspecies electron transport and other forms of microbial symbiosis occurring in the environment.

These results were published in a paper titled, "Single cell activity reveals direct electron transfer in methanotrophic consortia." The work was funded by the Department of Energy Division of Biological and Environmental Research and the Gordon and Betty Moore Foundation Marine Microbiology Initiative.


Advanced LIGO to Begin Operations

Advanced LIGO begins operations this week, after seven years of enhancement.

The Advanced LIGO Project, a major upgrade of the Laser Interferometer Gravitational-Wave Observatory, is completing its final preparations before the initiation of scientific observations, scheduled to begin in mid-September. Designed to observe gravitational waves—ripples in the fabric of space and time—LIGO, which was designed and is operated by Caltech and MIT with funding from the National Science Foundation (NSF), consists of identical detectors in Livingston, Louisiana, and Hanford, Washington.

"The LIGO scientific and engineering team at Caltech and MIT has been leading the effort over the past seven years to build Advanced LIGO, the world's most sensitive gravitational-wave detector," says David Reitze, the executive director of the LIGO program at Caltech. Groups from the international LIGO Scientific Collaboration also contributed to the design and construction of the Advanced LIGO detector.

Gravitational waves were predicted by Albert Einstein in 1916 as a consequence of his general theory of relativity, and are emitted by violent events in the universe such as exploding stars and colliding black holes. These waves carry information not only about the objects that produce them, but also about the nature of gravity in extreme conditions that cannot be obtained by other astronomical tools.

"Experimental attempts to find gravitational waves have been ongoing for over 50 years, and they haven't yet been found. They're both very rare and possess signal amplitudes that are exquisitely tiny," Reitze says.

Although earlier LIGO runs revealed no detections, Advanced LIGO, also funded by the NSF, increases the sensitivity of the observatories by a factor of 10, resulting in a thousandfold increase in observable candidate objects. "The first Advanced LIGO science run will take place with interferometers that can 'see' events more than three times further than the initial LIGO detector," adds David Shoemaker, the MIT Advanced LIGO project leader, "so we'll be probing a much larger volume of space."

Each of the 4-kilometer-long L-shaped LIGO interferometers uses a laser beam split into two beams that travel back and forth through the long arms, within tubes from which the air has been evacuated. The beams are used to monitor the distance between precisely configured mirrors. According to Einstein's theory, the relative distance between the mirrors will change very slightly when a gravitational wave passes by.

The original configuration of LIGO was sensitive enough to detect a change in the lengths of the 4-kilometer arms by a distance one-thousandth the diameter of a proton; this is like accurately measuring the distance from Earth to the nearest star—over four light-years—to within the width of a human hair. Advanced LIGO, which will utilize the infrastructure of LIGO, is much more powerful.
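A back-of-the-envelope check of these figures, using rough textbook values for the proton diameter and the width of a human hair:

```python
arm_length_m  = 4_000.0               # each LIGO arm is 4 km
proton_diam_m = 1.7e-15               # approximate proton diameter
delta_L_m = proton_diam_m / 1_000     # one-thousandth of a proton
strain = delta_L_m / arm_length_m
print(f"{strain:.1e}")                # dimensionless strain, ~1e-22 scale

# The analogy: measuring the Earth-to-nearest-star distance (about
# 4.2 light-years) to within the width of a human hair (~0.1 mm).
lightyear_m  = 9.46e15
star_dist_m  = 4.2 * lightyear_m
hair_width_m = 1e-4
ratio = hair_width_m / star_dist_m
print(f"{ratio:.1e}")                 # a comparably tiny fraction
```

Both fractional changes come out within an order of magnitude of each other, which is why the hair-width analogy is a fair one.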

While earlier LIGO observing runs did not confirm the existence of gravitational waves, the influence of such waves has been measured indirectly via observations of a binary system called PSR B1913+16. The system consists of two objects, both neutron stars—the compact cores of dead stars—that orbit a common center of mass. The orbits of these two stellar bodies have been observed to be slowly contracting due to the energy that is lost to gravitational radiation. Binary star systems such as these that are in the very last stages of evolution—just before and during the inevitable collision of the two objects—are key targets of the planned observing schedule for Advanced LIGO.

"Ultimately, Advanced LIGO will be able to see 10 times as far as initial LIGO and, based on theoretical predictions, should detect many binary neutron star mergers per year," Reitze says.

The improved instruments will be able to look at the last minutes of the life of pairs of massive black holes as they spiral closer together, coalesce into one larger black hole, and then vibrate much like two soap bubbles becoming one. Advanced LIGO also will be able to pinpoint periodic signals from the many known pulsars that radiate in the range of 10 to 1,000 hertz (frequencies that roughly correspond to low and high notes on an organ). In addition, Advanced LIGO will be used to search for the gravitational cosmic background, allowing tests of theories about the development of the universe only 10⁻³⁵ seconds after the Big Bang.

"We expect it will take five years to fully optimize the detector performance and achieve our design sensitivity," Reitze says. "It has been a long road, and we're very excited to resume the hunt for gravitational waves."

Writer: 
Rod Pyle

Bar-Coding Technique Opens Up Studies Within Single Cells

The cells in a particular tissue sample are not necessarily all the same—they can vary widely in terms of genetic content, composition, and function. Yet many studies and analytical techniques aimed at understanding how biological systems work at the cellular level treat all of the cells in a tissue sample as identical, averaging measurements over the entire cellular population. It is easy to see why this happens. With the cell's complex matrix of organelles, signaling chemicals, and genetic material—not to mention its minuscule scale—zooming in to differentiate what is happening within each individual cell is no trivial task.

"But being able to do single-cell analysis is crucial to understanding a lot of biological systems," says Long Cai, assistant professor of chemistry at Caltech. "This is true in brains, in biofilms, in embryos . . . you name it."

Now Cai's lab has developed a method for simultaneously imaging and identifying dozens of molecules within individual cells. This technique could offer new insight into how cells are organized and interact with each other and could eventually improve our understanding of many diseases.

The imaging technique that Cai and his colleagues have developed allows researchers not only to resolve a large number of molecules—such as messenger RNA species (mRNAs)—within a single cell, but also to systematically label each type of molecule with its own unique fluorescent "bar code" so it can be readily identified and measured without damaging the cell.

"Using this technique, there is essentially no limit on how many different types of molecules you can detect within a single cell," explains Cai.

The new method uses an innovative sequential bar-coding scheme that takes fluorescence in situ hybridization (FISH), a well-known procedure for detecting specific sequences of DNA or RNA in a sample, to the next level. Cai and his colleagues have dubbed their technique FISH Sequential Coding anALYSis (FISH SCALYS). 

FISH makes use of molecular probes—short fragments of DNA bound to fluorescent dyes, or fluorophores. These probes bind, or hybridize, to DNA or RNA with complementary sequences. When a hybridized sample is imaged with microscopy, the fluorophore lights up, pinpointing the target molecule's location.

There are a handful of fluorophores that can be used in these probes, and researchers typically use them to identify only a few different genes. For example, they will use a red dye to label all of the probes that target a specific type of mRNA. And when they image the sample, they will see a bunch of red dots in the cell. Then they will take another set of probes that target a different type of mRNA, label them with a blue fluorophore, and see glowing blue spots. And so on.

But what if a researcher wants to image more types of molecules than there are fluorophores? In the past, researchers have tried mixing dyes together—making both red and blue probes for a particular gene so that, when both probes bind, the resulting dot looks purple. This was an imperfect solution, however, and could still label only about 30 different types of molecules.

Cai's team realized that the same handful of fluorophores could be used in sequential rounds of hybridization to create thousands of unique fluorescent bar codes that could clearly identify many types of molecules (see graphic at right).

"With our technique, each tagged molecule remains just one single color in each round but we build up a bar code through multiple rounds, so the colors remain distinguishable. Using additional colors and extra rounds of hybridization, you can scale up easily to identify tens of thousands of different molecules," says Cai.

The number of bar codes available is potentially immense: F^N, where F is the number of fluorophores and N is the number of rounds of hybridization. So with four dyes and eight rounds of hybridization, scientists would have more than enough bar codes (4^8 = 65,536) to cover all of the approximately 20,000 RNA molecules in a cell.
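The counting argument above can be sketched in a few lines: treat each bar code as an ordered tuple of colors, one per hybridization round, and assign one tuple per RNA species. The four-color palette and gene names below are illustrative placeholders, not details from the paper:

```python
from itertools import product

# Illustrative palette and gene list (hypothetical, not from the paper).
fluorophores = ["red", "green", "blue", "yellow"]   # F = 4 dyes
rounds = 8                                          # N = 8 hybridization rounds

# Every bar code is one ordered choice of color per round: F**N in total.
all_barcodes = list(product(fluorophores, repeat=rounds))
print(len(all_barcodes))                            # 65536 = 4**8

# Assign each RNA species its own unique bar code.
genes = ["geneA", "geneB", "geneC"]
barcode_of = dict(zip(genes, all_barcodes))
print(barcode_of["geneB"])   # e.g. a tuple giving that gene's color per round
```

Because each species keeps a single color per round, the colors stay distinguishable in every image; the identity is only decoded across rounds.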

Cai says FISH SCALYS could be used to determine molecular identities of various types of cells, including embryonic stem cells. "One subset of genes will be turned on for one type of cell and off for another," he explains. It could also provide insight into the way that diseases alter cells, allowing researchers to compare the expression differences for a large number of genes in normal tissue versus diseased tissue.

Cai has recently been funded by the McKnight Endowment Fund for Neuroscience to adapt the technique to identify different types of neurons in samples from the hippocampus, a part of the brain associated with memory and learning.

Cai is also leading a program through Caltech's Beckman Institute that is helping other researchers on campus apply the imaging method to diverse biological questions.

Cai and his team describe the technique in a Nature Methods paper titled "Single-cell in situ RNA profiling by sequential hybridization." Caltech graduate student Eric Lubeck and postdoctoral scholar Ahmet Coskun are lead authors on the paper. Additional coauthors include Timur Zhiyentayev, a former Caltech graduate student, and Mubhij Ahmad, a former research technician in the Cai lab. The work has been funded by the National Institutes of Health's Single Cell Analysis Program.

Writer: 
Kimm Fesenmaier
