A new technique developed at Caltech that uses gas-filled microbubbles for focusing light inside tissue could one day provide doctors with a minimally invasive way of destroying tumors with lasers, and lead to improved diagnostic medical imaging.
The primary challenge with focusing light inside the body is that biological tissue is optically opaque. Unlike transparent glass, the cells and proteins that make up tissue scatter and absorb light. "Our tissues behave very much like dense fog as far as light is concerned," says Changhuei Yang, professor of electrical engineering, bioengineering, and medical engineering. "Just like we cannot focus a car's headlight through fog, scientists have always had difficulty focusing light through tissues."
To get around this problem, Yang and his team turned to microbubbles, commonly used in medicine to enhance contrast in ultrasound imaging.
The gas-filled microbubbles are encapsulated by thin protein shells and have an acoustic refractive index—a property that affects how sound waves propagate through a medium—different from that of living tissue. As a result, they respond differently to sound waves. "You can use ultrasound to make microbubbles rapidly contract and expand, and this vibration helps distinguish them from surrounding tissue because it causes them to reflect sound waves more effectively than biological tissue," says Haowen Ruan, a postdoctoral scholar in Yang's lab.
In addition, the optical refractive index of microbubbles is not the same as that of biological tissue. The optical refractive index is a measure of how much light rays bend when transitioning from one medium (a liquid, for example) to another (a gas).
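The bending described here is governed by Snell's law, n₁ sin θ₁ = n₂ sin θ₂. As a minimal illustration of the index mismatch (the index values below are typical textbook figures, not measurements from the study):

```python
import math

def refraction_angle(n1, n2, incident_deg):
    """Angle of the refracted ray from Snell's law, n1*sin(t1) = n2*sin(t2).

    Returns None when the ray is totally internally reflected.
    """
    s = n1 * math.sin(math.radians(incident_deg)) / n2
    if abs(s) > 1.0:
        return None  # no refracted ray: total internal reflection
    return math.degrees(math.asin(s))

# Illustrative indices: watery tissue (~1.36) to a gas-filled bubble (~1.0).
print(refraction_angle(1.36, 1.0, 20.0))  # ~27.7 degrees: bends away from the normal
print(refraction_angle(1.36, 1.0, 60.0))  # None: totally internally reflected
```

A ray crossing from tissue into a gas bubble bends sharply, and at steep angles is reflected entirely; it is this optical mismatch, alongside the acoustic one, that the technique exploits.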
Yang, Ruan, and graduate student Mooseok Jang developed a novel technique called time-reversed ultrasound microbubble encoded (TRUME) optical focusing that utilizes the mismatch between the acoustic and optical refractive indexes of microbubbles and tissue to focus light inside the body. First, microbubbles injected into tissue are ruptured with ultrasound waves. By measuring the difference in light transmission before and after such an event, the Caltech researchers can modify the wavefront of a laser beam so that it focuses on the original locations of the microbubbles. The result, Yang explains, "is as if you're searching for someone in a dark field, and suddenly the person lets off a flare. For a brief moment, the person is illuminated and you can home in on their location."
In a new study, published online November 24, 2015, in the journal Nature Communications, the team showed that their TRUME technique could be used as an effective "guidestar" to focus laser beams on specific locations in a biological tissue. A single, well-placed microbubble was enough to successfully focus the laser; multiple popping bubbles located within the general vicinity of a target functioned as a map for the light.
"Each popping event serves as a road map for the twisting light trajectories through the tissue," Yang says. "We can use that road map to shape light in such a way that it will converge where the bubbles burst."
If TRUME is shown to work effectively inside living tissue—without, for example, any negative effects from the bursting microbubbles—it could enable a range of research and medical applications. For example, by combining the microbubbles with an antibody probe engineered to seek out biomarkers associated with cancer, doctors could target and then destroy tumors deep inside the body or detect malignant growths much sooner.
"Ultrasound and X-ray techniques can only detect cancer after it forms a mass," Yang says. "But with optical focusing, you could catch cancerous cells while they are undergoing biochemical changes but before they undergo morphological changes."
The technique could also take the place of other diagnostic screening methods. For instance, it could be used to measure the concentration of a compound called bilirubin in infants to determine their risk for jaundice. "Currently, this procedure requires a blood draw, but with TRUME, we could shine a light into an infant's body and look for the unique absorption signature of the bilirubin molecule," Ruan says.
In combination with existing techniques that allow scientists to activate individual neurons in lab animals using light, TRUME could help neuroscientists better understand how the brain works. "Currently, neuroscientists are confined to superficial layers of the brain," Yang says. "But our method of optical focusing could allow for a minimally invasive way of probing deeper regions of the brain."
When certain massive stars use up all of their fuel and collapse onto their cores, explosions 10 to 100 times brighter than the average supernova occur. Exactly how this happens is not well understood. Astrophysicists from Caltech, UC Berkeley, the Albert Einstein Institute, and the Perimeter Institute for Theoretical Physics have used the National Science Foundation's Blue Waters supercomputer to perform three-dimensional computer simulations to fill in an important missing piece of our understanding of what drives these blasts.
The researchers report their findings online on November 30 in advance of publication in the journal Nature. The lead author on the paper is Philipp Mösta, who started the work while a postdoctoral scholar at Caltech and is now a NASA Einstein Fellow at UC Berkeley.
The extremely bright explosions come in two varieties—some are a type of energetic supernovae called hypernovae, while others are gamma-ray bursts (GRBs). Both are driven by focused jets formed in some collapsed stellar cores. In the case of GRBs, the jets themselves escape the star at close to the speed of light and emit strong beams of extremely energetic light called gamma rays. The necessary ingredients to create such jets are rapid rotation and a magnetic field that is a million billion times stronger than Earth's own magnetic field.
In the past, scientists have simulated the evolution of massive stars from their collapse to the production of these jet-driven explosions by factoring unrealistically large magnetic fields into their models—without explaining how they could be generated in the first place. But how could magnetic fields strong enough to power the explosions exist in nature?
"That's what we were trying to understand with this study," says Luke Roberts, a NASA Einstein Fellow at Caltech and a coauthor on the paper. "How can you start with the magnetic field you might expect in a massive star that is about to collapse—or at least an initial magnetic field that is much weaker than the field required to power these explosions—and build it up to the strength that you need to collimate a jet and drive a jet-driven supernova?"
For more than 20 years, theory has suggested that the magnetic field of the innermost region of a massive star that has collapsed, also known as a proto-neutron star, could be amplified by an instability in the flow of its plasma if the core is rapidly rotating, causing its outer edge to rotate faster than its center. However, no previous model could show that this process strengthens a magnetic field to the extent needed to collimate a jet, largely because the simulations lacked the resolution to capture the region where the flow becomes unstable.
Image: Magnetic field amplification in hypernovae. Supercomputer visualization of the toroidal magnetic field in a collapsed, massive star, showing how in a span of 10 milliseconds the rapid differential rotation revs up the star's magnetic field to a million billion times that of our sun (yellow is positive, light blue is negative). Red and blue represent weaker positive and negative magnetic fields, respectively. Credit: Philipp Mösta
Mösta and his colleagues developed a simulation of a rapidly rotating collapsed stellar core and scaled it so that it could run on Blue Waters, a powerful NSF-funded supercomputer located at the National Center for Supercomputing Applications at the University of Illinois. Blue Waters is known for its ability to provide sustained high-performance computing for problems that produce large amounts of information. The team's highest-resolution simulation took 18 days of around-the-clock computing by about 130,000 computer processors to simulate just 10 milliseconds of the core's evolution.
In the end, the researchers were able to simulate the so-called magnetorotational instability responsible for the amplification of the magnetic field. They saw—as theory predicted—that the instability creates small patches of an intense magnetic field distributed in a chaotic way throughout the core of the collapsed star.
"Surprisingly, we found that a dynamo process connects these patches to create a larger, ordered structure," explains David Radice, a Walter Burke Fellow at Caltech and a coauthor on the paper. An early type of electrical generator known as a dynamo produced a current by rotating electromagnetic coils within a magnetic field. Similarly, astrophysical dynamos generate currents when hydromagnetic fluids in stellar cores rotate under the influence of their magnetic fields. Those currents can then amplify the magnetic fields.
"We find that this process is able to create large-scale fields—the kind you would need to power jets," says Radice.
The researchers also note that the magnetic fields they created in their simulations are similar in strength to those seen in magnetars—neutron stars (a type of stellar remnant) with extremely strong magnetic fields. "It takes thousands or millions of years for a proto-neutron star to become a neutron star, and we have not yet simulated that. But if you could transport this thing thousands or millions of years forward in time, you would have a strong enough magnetic field to explain magnetar field strengths," says Roberts. "This might explain some fraction of magnetars or a particular class of very bright supernovae that are thought to be powered by a spinning magnetar at their center."
Additional authors on the paper, "A large-scale dynamo and magnetoturbulence in rapidly rotating core-collapse supernovae," are Christian Ott, professor of theoretical astrophysics; Erik Schnetter of the Perimeter Institute for Theoretical Physics, the University of Guelph, and Louisiana State University; and Roland Haas of the Max Planck Institute for Gravitational Physics in Potsdam-Golm, Germany. The work was partially supported by the Sherman Fairchild Foundation, by grants from the NSF, by NASA Einstein Fellowships, and by an award from the Natural Sciences and Engineering Research Council of Canada.
Caltech and JPL scientists suggest the fingerprints of early photochemistry provide a solution to the long-standing mystery
Mars is blanketed by a thin, mostly carbon dioxide atmosphere—one that is far too thin to prevent large amounts of water on the surface of the planet from subliming or evaporating. But many researchers have suggested that the planet was once shrouded in an atmosphere many times thicker than Earth's. For decades that left the question, "Where did all the carbon go?"
Now a team of scientists from Caltech and JPL thinks they have a possible answer. The researchers suggest that 3.8 billion years ago, Mars might have had only a moderately dense atmosphere. They have identified a photochemical process that could have helped such an early atmosphere evolve into the current thin one without creating the problem of "missing" carbon and in a way that is consistent with existing carbon isotopic measurements.
The scientists describe their findings in a paper that appears in the November 24 issue of the journal Nature Communications.
"With this new mechanism, everything that we know about the martian atmosphere can now be pieced together into a consistent picture of its evolution," says Renyu Hu, a postdoctoral scholar at JPL, a visitor in planetary science at Caltech, and lead author on the paper.
When considering how the early martian atmosphere might have transitioned to its current state, there are two possible mechanisms for the removal of excess carbon dioxide (CO2). Either the CO2 was incorporated into minerals in rocks called carbonates or it was lost to space.
A separate recent study coauthored by Bethany Ehlmann, assistant professor of planetary science and a research scientist at JPL, used data from several Mars-orbiting satellites to inventory carbonate rocks, showing that there are not enough carbonates in the upper kilometer of crust to contain the missing carbon from a very thick early atmosphere that might have existed about 3.8 billion years ago.
To study the escape-to-space scenario, scientists examine the ratio of carbon-12 and carbon-13, two stable isotopes of the element carbon that have the same number of protons in their nuclei but different numbers of neutrons, and thus different masses. Because various processes can change the relative amounts of those two isotopes in the atmosphere, "we can use these measurements of the ratio at different points in time as a fingerprint to infer exactly what happened to the martian atmosphere in the past," says Hu.
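Isotope geochemists conventionally express such ratios in "delta" notation: the per-mil (parts-per-thousand) deviation of a sample's carbon-13/carbon-12 ratio from that of a reference standard. A minimal sketch of the arithmetic (the ratio values are illustrative placeholders, not data from the mission):

```python
def delta_13c(ratio_sample, ratio_standard):
    """Per-mil deviation of a sample's 13C/12C ratio from a standard's."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

# Illustrative numbers: a sample whose 13C/12C ratio is 2% higher than
# the standard's is "enriched" by about 20 per mil.
print(delta_13c(0.01122, 0.01100))
```

Processes that preferentially remove the lighter carbon-12 drive this delta value upward over time, which is why the enrichment serves as a fingerprint of atmospheric escape.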
To establish a starting point, the researchers used measurements of the carbon isotope ratio in martian meteorites that contain gases that originated deep in the planet's mantle. Because atmospheres are produced by outgassing of the mantle through volcanic activity, these measurements provide insight into the isotopic ratio of the original martian atmosphere.
The scientists then compared those values to isotopic measurements of the current martian atmosphere recently collected by NASA's Curiosity rover. Those measurements show the atmosphere to be unusually enriched in carbon-13.
Previously, researchers thought the main way that martian carbon would be ejected into space was through a process called sputtering, which involves interactions between the solar wind and the upper atmosphere. Sputtering causes some particles—slightly more of the lighter carbon-12 than the heavier carbon-13—to escape entirely from Mars, but this effect is small. So there had to be some other process at work.
That is where the new mechanism comes in. In the study, the researchers describe a process that begins with a particle of ultraviolet light from the sun striking a molecule of CO2 in the upper atmosphere. That molecule absorbs the photon's energy and divides into carbon monoxide (CO) and oxygen. Then another ultraviolet particle hits the CO, causing it to dissociate into atomic carbon (C) and oxygen. Some carbon atoms produced in this way have enough energy to escape the atmosphere, and the new study shows that carbon-12 is far more likely to escape than carbon-13.
Modeling the long-term effects of this ultraviolet photodissociation mechanism coupled with volcanic gas release, loss via sputtering, and loss to carbonate rock formation, the researchers found that it was very efficient at enriching the atmosphere in carbon-13. Using the isotopic constraints, they were then able to calculate that under most scenarios the atmosphere 3.8 billion years ago had a surface pressure comparable to Earth's or lower.
"The efficiency of this new mechanism shows that there is in fact no discrepancy between Curiosity's measurements of the modern enriched value for carbon in the atmosphere and the amount of carbonate rock found on the surface of Mars," says Ehlmann, also a coauthor on the new study. "With this mechanism, we can describe an evolutionary scenario for Mars that makes sense of the apparent carbon budget, with no missing processes or reservoirs."
The authors conclude their work by pointing out several tests and refinements for the model. For example, future data from the ongoing Mars Atmosphere and Volatile EvolutioN (MAVEN) mission could provide the isotope fractionation of presently ongoing atmospheric loss to space and improve the extrapolation to early Mars.
Hu emphasizes that the work is an excellent example of multidisciplinary effort. On the one hand, he says, the team looked at the atmospheric chemistry—the isotopic signature, the escape processes, and the enrichment mechanism. On the other, they used geological evidence and remote sensing of the martian surface. "By putting these together, we were able to come up with a summary of evolutionary scenarios," says Hu. "I feel that Caltech/JPL is a unique place where we have the multidisciplinary capability and experience to make this happen."
Neural prosthetic devices, which include small electrode arrays implanted in the brain, can allow paralyzed patients to control the movement of a robotic limb, whether that limb is attached to the individual or not. In May 2015, researchers at Caltech, USC, and Rancho Los Amigos National Rehabilitation Center reported the first successful clinical trial of such an implant in a part of the brain that translates intention—the goal to be accomplished through a movement (for example, "I want to reach to the water bottle for a drink")—into the smooth and fluid motions of a robotic limb. Now, the researchers, led by Richard Andersen, the James G. Boswell Professor of Neuroscience, report that individual neurons in that brain region, known as the posterior parietal cortex (PPC), encode entire hand shapes, both those used for grasping—as when shaking someone's hand—and those not directly related to grasping, such as the gestures people make when speaking.
Most neuroprostheses are implanted in the motor cortex, the part of the brain that controls limb motion. But the movement of these robotic arms is jerky, probably because of the complicated mechanics of controlling muscle movement. Eliminating that problem by implanting the device in the PPC, the brain region that encodes intent, led Andersen and his colleagues to investigate further the role specific neurons play in this part of the brain.
The research appears in the November 18 issue of the Journal of Neuroscience.
"The human hand has the ability to do numerous complex operations beyond just grasping," says Christian Klaes, a postdoctoral fellow at Caltech and first author of the paper. "We gesture when we speak, we manipulate objects, we use sign language to communicate with the hearing impaired. Tetraplegic patients rate hand and arm function to be of the highest importance to have better control over their environment. So our ultimate goal is to improve the range of neuroprostheses using control signals from the PPC.
"The more precisely we can identify individual neurons involved with hand movements, the better the capability these robotic devices will provide. Ultimately, we hope to mimic in a robotic hand the same freedom of movement of the human hand."
In the study, the researchers used the rock-paper-scissors game and a variation, rock-paper-scissors-lizard-Spock. The game, says Andersen, is "perfect" for this kind of research. "The addition of a lizard, depicted as a cartoon image of a lizard, and Spock—a picture of Leonard Nimoy in character—was to increase the repertoire of possible hand shapes available to our tetraplegic participant, Erik G. Sorto, whose limbs are completely paralyzed. We assigned a pinch gesture for the lizard and a spherical shape for Mr. Spock."
The game was played in two phases, first rock-paper-scissors and then the expanded game with the lizard and Spock. In the task, Sorto was briefly shown an object on a screen that corresponded to one of the hand shapes—for example, a picture of a rock or Mr. Spock. The image was followed by a blank screen, and then text appeared instructing Sorto to imagine making the corresponding hand shape with his right hand—a fist for the rock, an open hand for paper, a scissors gesture for scissors, a pinch for the lizard, and a spherical shape (loosely analogous to the Vulcan salute) for Spock—and to say which visual image he had seen, as the neuroprosthetic device recorded the activity of neurons in the PPC.
The researchers were able to identify single neurons in the PPC that fired when Sorto was presented with an image of an object to be grasped—a rock, say—and identified a nearly completely separate class of neurons that responded when Sorto engaged in motor imagery (the mental planning and imagined execution of a movement without the subject actually trying to move the limb).
"We found two mostly separate populations of neurons in the PPC that show either visual responses or motor-imagery responses during the task, the former when Erik identified a cue and the latter when he imagined performing a corresponding hand shape," says Andersen.
The researchers discovered that individual neurons in the PPC also responded to hand shapes that did not directly correspond to a grasp-related visual stimulus. The paper shape can be related to the initial opening of the hand before a grasp, and the rock shape to the closing of the hand around an object—indeed, Sorto used these imagined hand shapes to open a robotic hand by imagining paper and to close it around an object by imagining rock. Scissors, lizard, and Spock, however, call for imagined hand gestures that are more abstract and iconic than those needed to grasp the corresponding visual objects. This suggests, says Andersen, that this area of the brain may also be involved in more general hand gestures, such as the ones we use when talking or for sign language.
The results of the trial were published in a paper titled, "Hand Shape Representations in the Human Posterior Parietal Cortex." In addition to Andersen and Klaes, other authors on the study are Spencer Kellis, Tyson Aflalo, and Kelsie Pejsa from Caltech; Brian Lee, Christi Heck, and Charles Liu from USC; and Kathleen Shanfield, Stephanie Hayes-Jackson, and Mindy Aisen from Rancho Los Amigos National Rehabilitation Center.
Dark matter is called "dark" for a good reason. Although it outweighs regular matter in the universe by more than a factor of five, dark matter is elusive: its existence is inferred from its gravitational influence on galaxies, but no one has ever directly observed signals from its particles. Now, by measuring the mass of a nearby dwarf galaxy called Triangulum II, Assistant Professor of Astronomy Evan Kirby may have found the highest concentration of dark matter in any known galaxy.
Triangulum II is a small, faint galaxy at the edge of the Milky Way, made up of only about 1,000 stars. Kirby measured the mass of Triangulum II by examining the velocity of six stars whipping around the galaxy's center. "The galaxy is challenging to look at," he says. "Only six of its stars were luminous enough to see with the Keck telescope." By measuring these stars' velocity, Kirby could infer the gravitational force exerted on the stars and thereby determine the mass of the galaxy.
"The total mass I measured was much, much greater than the mass of the total number of stars—implying that there's a ton of densely packed dark matter contributing to the total mass," Kirby says. "The ratio of dark matter to luminous matter is the highest of any galaxy we know. After I had made my measurements, I was just thinking—wow."
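The logic of Kirby's inference can be sketched with a standard order-of-magnitude relation: for a system in dynamic equilibrium, the mass enclosed within radius r scales roughly as σ²r/G, where σ is the stars' velocity dispersion. A rough sketch with placeholder numbers, not the paper's measured values:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
PARSEC = 3.086e16  # metres per parsec

def dynamical_mass(sigma_km_s, radius_pc):
    """Order-of-magnitude enclosed mass, M ~ sigma^2 * r / G, in solar masses."""
    sigma = sigma_km_s * 1e3   # km/s -> m/s
    r = radius_pc * PARSEC     # pc -> m
    return sigma**2 * r / G / M_SUN

# Placeholder values: a ~5 km/s dispersion within ~30 pc implies a dynamical
# mass of order 10^5 solar masses -- far more than ~1,000 stars could supply,
# pointing to a large dark-matter contribution.
print(f"{dynamical_mass(5.0, 30.0):.2e} solar masses")
```

The gap between this dynamical mass and the mass visible in starlight is what sets the dark-to-luminous ratio Kirby describes.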
Triangulum II could thus become a leading candidate for efforts to directly detect the signatures of dark matter. Certain particles of dark matter, called supersymmetric WIMPs (weakly interacting massive particles), will annihilate one another upon colliding and produce gamma rays that can then be detected from Earth.
While current theories predict that dark matter is producing gamma rays almost everywhere in the universe, detecting these particular signals amid other galactic noise, such as the gamma rays emitted by pulsars, is a challenge. Triangulum II, on the other hand, is a very quiet galaxy. It lacks the gas and other material necessary to form stars, so it isn't forming new stars—astronomers call it "dead." Any gamma-ray signals coming from colliding dark matter particles would theoretically be clearly visible.
It hasn't been definitively confirmed, though, that what Kirby measured is actually the total mass of the galaxy. Another group, led by researchers from the University of Strasbourg in France, measured the velocities of stars just outside Triangulum II and found that they are actually moving faster than the stars closer to the galaxy's center—the opposite of what's expected. This could suggest that the little galaxy is being pulled apart, or "tidally disrupted," by the Milky Way's gravity.
"My next steps are to make measurements to confirm that other group's findings," Kirby says. "If it turns out that those outer stars aren't actually moving faster than the inner ones, then the galaxy could be in what's called dynamic equilibrium. That would make it the most excellent candidate for detecting dark matter with gamma rays."
A paper describing this research appears in the November 17 issue of the Astrophysical Journal Letters. Judith Cohen (PhD '71), the Kate Van Nuys Page Professor of Astronomy, is a Caltech coauthor.
New research identifies possible sites of frozen, watery deposits.
Jupiter's moon Europa is believed to possess a large salty ocean beneath its icy exterior, and that ocean, scientists say, has the potential to harbor life. Indeed, a mission recently suggested by NASA would visit the icy moon's surface to search for compounds that might be indicative of life. But where is the best place to look? New research by Caltech graduate student Patrick Fischer; Mike Brown, the Richard and Barbara Rosenberg Professor and Professor of Planetary Astronomy; and Kevin Hand, an astrobiologist and planetary scientist at JPL, suggests that it might be within the scarred, jumbled areas that make up Europa's so-called "chaos terrain."
A paper about the work has been accepted to The Astronomical Journal.
"We have known for a long time that Europa's fresh icy surface, which is covered with cracks and ridges and transform faults, is the external signature of a vast internal salty ocean," Brown says. The areas of chaos terrain show signatures of vast ice plates that have broken apart, shifted position, and been refrozen. These regions are of particular interest, because water from the oceans below may have risen to the surface through the cracks and left deposits there.
"Directly sampling Europa's ocean represents a major technological challenge and is likely far in the future," Fischer says. "But if we can sample deposits left behind in the chaos areas, it could reveal much about the composition and dynamics of the ocean below." That ocean is thought to be as deep as 100 kilometers.
"This could tell us much about activity at the boundary of the rocky core and the ocean," Brown adds.
In a search for such deposits, the researchers took a new look at data from observations made in 2011 at the W. M. Keck Observatory in Hawaii using the OSIRIS spectrograph. Spectrographs spread light out into its component wavelengths and measure the intensity at each one. Each chemical compound has unique light-absorbing characteristics, called spectral or absorption bands. By observing reflected sunlight, the spectral patterns resulting from absorption at particular wavelengths can be used to identify the chemical composition of Europa's surface minerals.
The OSIRIS instrument measures spectra in infrared wavelengths. "The minerals we expected to find on Europa have very distinct spectral fingerprints in infrared light," Fischer says. "Combine this with the extraordinary abilities of the adaptive optics in the Keck telescope, and you have a very powerful tool." Adaptive optics mechanisms reduce blurring caused by turbulence in the earth's atmosphere by measuring the image distortion of a bright star or laser and mechanically correcting it.
The OSIRIS observations produced spectra from 1600 individual spots on Europa's surface. To make sense of this collection of data, Fischer developed a new technique to sort and identify major groupings of spectral signatures.
"Patrick developed a very clever new mathematical tool that allows you to take a collection of spectra and automatically, and with no preconceived human biases, classify them into a number of distinct spectra," Brown says. The software was then able to correlate these groups of readings with a surface map of Europa from NASA's Galileo mission, which mapped the Jovian moon beginning in the late 1990s. The resulting composite provided a visual guide to the composition of the regions the team was interested in.
Three compositionally distinct categories of spectra emerged from the analysis. The first was water ice, which dominates Europa's surface. The second category includes chemicals formed when ionized sulfur and oxygen—thought to originate from volcanic activity on the neighboring moon Io—bombard the surface of Europa and react with the native ices. These findings were consistent with results of previous work done by Brown, Hand, and others in identifying Europa's surface chemistry.
But the third grouping of chemical indicators was more puzzling. It did not match either set of ice or sulfur groupings, nor was it an easily identified set of salt minerals such as they might have expected from previous knowledge of Europa. Magnesium is thought to reside on the surface but has a weak spectral signature, and this third set of readings did not match that either. "In fact, it was not consistent with any of the salt materials previously associated with Europa," Brown says.
When this third group was mapped to the surface, it overlaid the chaos terrain. "I was looking at the maps of the third grouping of spectra, and I noticed that it generally matched the chaos regions mapped with images from Galileo. It was a stunning moment," Fischer says. "The most important result of this research was understanding that these materials are native to Europa, because they are clearly related to areas with recent geological activity."
The composition of the deposits is still unclear. "Unique identification has been difficult," Brown says. "We think we might be looking at salts left over after a large amount of ocean water flowed out onto the surface and then evaporated away." He compares these regions to their earthly cousins. "They may be like the large salt flats in the desert regions of the world, in which the chemical composition of the salt reflects whatever materials were dissolved in the water before it evaporated."
Similar deposits on Europa could provide a view into the oceans below, according to Brown. "If you had to suggest an area on Europa where ocean water had recently melted through and dumped its chemicals on the surface, this would be it. If we can someday sample and catalog the chemistry found there, we may learn something of what's happening on the ocean floor of Europa and maybe even find organic compounds, and that would be very exciting."
Finding could have implications for high-temperature superconductivity
A team of physicists led by Caltech's David Hsieh has discovered an unusual form of matter—not a conventional metal, insulator, or magnet, for example, but something entirely different. This phase, characterized by an unusual ordering of electrons, offers possibilities for new electronic device functionalities and could hold the solution to a long-standing mystery in condensed matter physics having to do with high-temperature superconductivity—the ability for some materials to conduct electricity without resistance, even at "high" temperatures approaching –100 degrees Celsius.
"The discovery of this phase was completely unexpected and not based on any prior theoretical prediction," says Hsieh, an assistant professor of physics, who previously was on a team that discovered another form of matter called a topological insulator. "The whole field of electronic materials is driven by the discovery of new phases, which provide the playgrounds in which to search for new macroscopic physical properties."
Hsieh and his colleagues describe their findings in the November issue of Nature Physics, and the paper is now available online. Liuyan Zhao, a postdoctoral scholar in Hsieh's group, is lead author on the paper.
The physicists made the discovery while testing a laser-based measurement technique that they recently developed to look for what is called multipolar order. To understand multipolar order, first consider a crystal with electrons moving around throughout its interior. Under certain conditions, it can be energetically favorable for these electrical charges to pile up in a regular, repeating fashion inside the crystal, forming what is called a charge-ordered phase. The building block of this type of order, namely charge, is simply a scalar quantity—that is, it can be described by just a numerical value, or magnitude.
In addition to charge, electrons also have a degree of freedom known as spin. When spins line up parallel to each other (in a crystal, for example), they form a ferromagnet—the type of magnet you might use on your refrigerator and that is used in the strip on your credit card. Because spin has both a magnitude and a direction, a spin-ordered phase is described by a vector.
Over the last several decades, physicists have developed sophisticated techniques to look for both of these types of phases. But what if the electrons in a material are not ordered in one of those ways? In other words, what if the order were described not by a scalar or vector but by something with more dimensionality, like a matrix? This could happen, for example, if the building block of the ordered phase was a pair of oppositely pointing spins—one pointing north and one pointing south—described by what is known as a magnetic quadrupole. Such examples of multipolar-ordered phases of matter are difficult to detect using traditional experimental probes.
As it turns out, the new phase that the Hsieh group identified is precisely this type of multipolar order.
To detect multipolar order, Hsieh's group utilized an effect called optical harmonic generation, which is exhibited by all solids but is usually extremely weak. Typically, when you look at an object illuminated by a single frequency of light, all of the light that you see reflected from the object is at that frequency. When you shine a red laser pointer at a wall, for example, your eye detects red light. However, for all materials, there is a tiny amount of light bouncing off at integer multiples of the incoming frequency. So with the red laser pointer, there will also be some blue light bouncing off the wall. You just do not see it because it is such a small percentage of the total light. These multiples are called optical harmonics.
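The frequency relationship described above is easy to state numerically: the n-th harmonic appears at n times the incoming frequency, or equivalently at 1/n of its wavelength. A quick back-of-the-envelope sketch (the 650 nm fundamental is an assumed, typical red-laser value, not a number from the study):

```python
# Optical harmonics: a material driven at frequency f re-emits a small amount
# of light at integer multiples n*f, i.e. at wavelengths lambda/n.
# The 650 nm fundamental below is an assumed, typical red-laser value.

C = 299_792_458  # speed of light, m/s

def harmonic_wavelength_nm(fundamental_nm: float, n: int) -> float:
    """Wavelength of the n-th optical harmonic of a given fundamental."""
    return fundamental_nm / n

red_nm = 650.0
for n in (1, 2, 3):
    wl = harmonic_wavelength_nm(red_nm, n)    # harmonic 2 of 650 nm -> 325 nm
    freq = C / (wl * 1e-9)                    # nm to m, then to Hz
    print(f"harmonic {n}: {wl:.0f} nm, {freq:.2e} Hz")
```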
The Hsieh group's experiment exploited the fact that changes in the symmetry of a crystal will affect the strength of each harmonic differently. Since the emergence of multipolar ordering changes the symmetry of the crystal in a very specific way—a way that can be largely invisible to conventional probes—their idea was that the optical harmonic response of a crystal could serve as a fingerprint of multipolar order.
"We found that light reflected at the second harmonic frequency revealed a set of symmetries completely different from those of the known crystal structure, whereas this effect was completely absent for light reflected at the fundamental frequency," says Hsieh. "This is a very clear fingerprint of a specific type of multipolar order."
The specific compound that the researchers studied was strontium-iridium oxide (Sr2IrO4), a member of the class of synthetic compounds broadly known as iridates. Over the past few years, there has been a lot of interest in Sr2IrO4 owing to certain features it shares with copper-oxide-based compounds, or cuprates. Cuprates are the only family of materials known to exhibit superconductivity at high temperatures—exceeding 100 Kelvin (–173 degrees Celsius). Structurally, iridates and cuprates are very similar. And like the cuprates, iridates are electrically insulating antiferromagnets that become increasingly metallic as electrons are added to or removed from them through a process called chemical doping. A high enough level of doping will transform cuprates into high-temperature superconductors. As cuprates evolve from insulators to superconductors, however, they first pass through a mysterious phase known as the pseudogap, in which an additional amount of energy is required to strip electrons out of the material. For decades, scientists have debated the origin of the pseudogap and its relationship to superconductivity—whether it is a necessary precursor to superconductivity or a competing phase with a distinct set of symmetry properties. If that relationship were better understood, scientists believe, it might be possible to develop materials that superconduct at temperatures approaching room temperature.
Recently, a pseudogap phase also has been observed in Sr2IrO4—and Hsieh's group has found that the multipolar order they have identified exists over a doping and temperature window where the pseudogap is present. The researchers are still investigating whether the two overlap exactly, but Hsieh says the work suggests a connection between multipolar order and pseudogap phenomena.
"There is also very recent work by other groups showing signatures of superconductivity in Sr2IrO4 of the same variety as that found in cuprates," he says. "Given the highly similar phenomenology of the iridates and cuprates, perhaps iridates will help us resolve some of the longstanding debates about the relationship between the pseudogap and high-temperature superconductivity."
Hsieh says the finding emphasizes the importance of developing new tools to try to uncover new phenomena. "This was really enabled by a simultaneous technique advancement," he says.
Furthermore, he adds, these multipolar orders might exist in many more materials. "Sr2IrO4 is the first thing we looked at, so these orders could very well be lurking in other materials as well, and that's exactly what we are pursuing next."
In March of this year, a team of bioengineers from Caltech, JPL, and the University of Washington spent a week in Greenland, using snowmobiles to haul their scientific equipment, waiting out windstorms, and spending hours working on the ice. Now the same researchers are planning a trip to California's Mojave Desert, where they will study Searles Lake, a dry, extremely salty basin that is naturally full of harsh chemicals like arsenic and boron. The researchers are testing a holographic microscope that they have designed and built for the purpose of observing microbes that thrive in such extreme environments. The ultimate goal? To send the microscope on a spacecraft to search for biosignatures—signs of life—on other worlds such as Mars or Saturn's icy moon Enceladus.
"Our big overarching hypothesis is that motility is a good biosignature," explains Jay Nadeau, a scientific researcher at Caltech and one of the investigators on the holographic microscope project, dubbed SHAMU (Submersible Holographic Astrobiology Microscope with Ultraresolution). "We suspect that if we send back videos of bacteria swimming, that is going to be a better proof of life than pretty much anything else."
Think, she says, of Antonie van Leeuwenhoek, the father of microbiology, who used simple microscopes in the 17th and 18th centuries to observe protozoa and bacteria. "He immediately recognized that they were living things based on the way they moved," Nadeau says. Indeed, when Leeuwenhoek wrote about observing samples of the plaque between his teeth, he described seeing "many very little animalcules, very prettily a-moving." And Nadeau adds, "No one doubted Leeuwenhoek once they saw them moving for themselves."
In order to capture images of microbes "a-moving" on another world, Nadeau and her colleagues, including Mory Gharib, the Hans W. Liepmann Professor of Aeronautics and Bioinspired Engineering and a vice provost at Caltech, had the idea to use digital holography rather than conventional microscopy.
Holography is a method for recording holistic information about the light bouncing off a sample so that a 3-D image can be reconstructed at some later time. Compared to conventional microscopy, which uses multiple lenses to focus on a thin plane within a sample (on a slide, for example), holography offers the advantages of focusing over a relatively large volume and of capturing high-resolution images, without the moving parts that could break in extreme environments or during a launch or landing, if the instrument were sent into space.
Standard photography records only the intensity of the light (related to its amplitude) that reaches a camera lens after scattering off an object. But as a wave, light has both an amplitude and a phase, a separate property that can be used to tell how far the light travels once it is scattered. Holography is a technique that captures both—something that makes it possible to re-create a three-dimensional image of a sample.
To understand the technique, first imagine dropping a pebble in a pond and watching ripples emanate from that spot. Now imagine dropping a second pebble in a new spot, producing a second set of ripples. If the ripples interact with an object on the surface, such as a rock, the ripples are diffracted or scattered by the object, changing the pattern of the waves—an effect that can be detected. Holography is akin to dropping two pebbles in a pond simultaneously, with the pebbles being two laser beams—one a reference beam that shines unaffected by the sample, and an object beam that runs into the sample and gets diffracted or scattered. A detector measures the combination, or superposition, of the ripples from the two beams, which is known as the interference pattern. By knowing how the waves propagate and by analyzing the interference pattern, a computer can reconstruct what the object beam encountered as it traveled.
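The superposition the detector records can be illustrated with a toy one-dimensional calculation. This is a generic sketch of two-beam interference, not the SHAMU instrument's actual geometry; the wavelength, beam amplitudes, and phases are all invented for illustration:

```python
import cmath
import math

# Toy 1-D sketch of two-beam interference, the principle behind holography:
# the detector records |reference + object|^2, so the object beam's phase
# is encoded in where the bright and dark fringes fall.

WAVELENGTH = 0.5                 # arbitrary units
K = 2 * math.pi / WAVELENGTH     # wavenumber

def intensity_at(x: float, object_phase: float) -> float:
    reference = cmath.exp(1j * K * x)          # plane reference beam
    obj = 0.5 * cmath.exp(1j * object_phase)   # weaker object beam
    return abs(reference + obj) ** 2           # what the camera measures

# Shifting the object beam's phase shifts the fringe positions; that shift
# carries the phase information that plain photography throws away.
for phase in (0.0, math.pi):
    fringes = [round(intensity_at(x / 20, phase), 2) for x in range(8)]
    print(phase, fringes)
```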
"We can take an interference pattern and use that to reconstruct all of the images in different planes in a volume," explains Chris Lindensmith, a systems engineer at JPL and an investigator on the project. "So we can just go and reconstruct whatever plane we are interested in after the fact and look and see if there's anything in there."
That means that a single image captures all the microbes in a sample—whether there is one bacterium or a thousand. And by taking a series of such images over time, the researchers can reconstruct the path that each bacterium took as it swam in the sample.
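Linking per-frame detections into swimming paths can be done with a simple nearest-neighbour scheme. The sketch below is a generic illustration of that idea, not the team's actual tracking pipeline; the detection coordinates are made up and it assumes each microbe is detected in every frame:

```python
import math

# Minimal nearest-neighbour track linker: given detected particle positions
# per frame, chain each point to the closest detection in the next frame.

def link_tracks(frames):
    """frames: list of lists of (x, y) detections, one list per time step."""
    tracks = [[p] for p in frames[0]]
    for detections in frames[1:]:
        remaining = list(detections)
        for track in tracks:
            last = track[-1]
            nearest = min(remaining, key=lambda p: math.dist(last, p))
            track.append(nearest)
            remaining.remove(nearest)   # each detection joins only one track
    return tracks

frames = [
    [(0.0, 0.0), (5.0, 5.0)],
    [(0.4, 0.1), (5.1, 4.8)],
    [(0.9, 0.3), (5.3, 4.5)],
]
for track in link_tracks(frames):
    print(track)
```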
That would be virtually impossible with conventional microscopy, says Lindensmith. With microscopy, you need to focus in real time, meaning that someone would have to turn a dial to move the sample closer to or farther from the microscope's lenses in order to keep a particular microbe in focus. During that time, they would miss the movements of any other microbes in the sample because the plane of focus is so shallow.
All of the advantages that the holographic microscope offers over microscopy make it appealing for studies elsewhere in the solar system. And there are a number of worlds that scientists are eager to study in close-up detail to search for signs of life. In 2008, using data from the Phoenix Mars lander, scientists determined that there is water ice just below the surface in the northern plains of the Red Planet, making the locale a candidate for follow-up sampling studies. In addition, both the jovian moon Europa and the saturnian moon Enceladus are thought to harbor liquid oceans beneath their icy surfaces. Therefore, the SHAMU group says, a compact, robust microscope like the one the Caltech team is developing could be a highly desirable component of an instrument suite on a lander to any one of those locations.
Nadeau says the group's prototype performed well during the team's field-testing trip to Greenland. At each testing site, the researchers drilled a hole into the sea ice, submerged the microscope to a depth where some of the salty liquid water trapped inside the ice, called brine, was able to seep into the device's sample area, and collected holographic images. "We know that things live in the water and we know what they do and how they swim," says Nadeau. "But believe it or not, nobody knew what kinds of microorganisms live in sea-ice brine or if they can swim."
That is because typical techniques for counting, labeling, and observing microbes rely on fragile instrumentation and often require large amounts of power, making them unusable in extreme environments like the Arctic. As a result, "nobody had ever looked at sea-ice organisms immediately after collection like we did," says Stephanie Rider, a staff scientist at Caltech who went on the Greenland trip as part of the project. Previously, other teams have collected samples and taken them back to a lab where the samples have been stored in a freezer, sometimes for weeks at a time. "Who knows how much the samples have been warmed up and cooled down by the time someone studies them?" Rider says. "The samples could be totally different at that point."
When samples are returned to the laboratory, fed rich medium, and warmed to +4 degrees Celsius, swimming speeds are greatly increased. Credit: Jay Nadeau/Caltech
During the Greenland trip, the SHAMU group successfully collected images that have been used to construct videos of bacteria and algae that live in the sea-ice brine. They also brought samples back to a lab in Nuuk, Greenland, warmed them overnight, and fed them bacterial growth medium—duplicating the standard conditions under which microorganisms from sea ice have been studied in the past. The researchers found that under those conditions, "everything starts zipping around like crazy," says Nadeau, indicating that in order to be accurate, observations do need to be made in place on the ice rather than back in a lab.
The team is particularly excited about what the successful measurements from Greenland could mean in the context of Mars. "We know from this that we can tell that things are alive when you take them straight out of ice," says Nadeau. "If we can see life in there on Earth, then it's possible there might be life in pockets of ice on Mars as well. Perhaps you don't have to have a big liquid ocean to find living organisms; there's a possibility that things can live just in pockets of ice."
The three-year SHAMU project began in January 2014 with funding from the Gordon and Betty Moore Foundation. In the coming months, the engineers hope to improve the microscope's sample chamber and to scale down the entire device. They believe they will have a launch-ready instrument by the end of the funding period.
As a first test in space, they would like to send the instrument to the International Space Station not only to see how it behaves in space but also to observe microbial samples under zero-gravity conditions. Beyond that, they hope to include SHAMU on a Mars lander as part of a NASA Discovery mission aimed at searching for biosignatures in the frozen northern plains of Mars. The Caltech team is partnering with Honeybee Robotics, a company that has built drills and sampling systems for numerous NASA missions (including the Phoenix Mars lander), to integrate the holographic microscope on a drill that would bore down about three feet into the martian ground ice.
In addition to Nadeau, Gharib, and Lindensmith, Jody Deming of the University of Washington's School of Oceanography is also an investigator on the SHAMU project.
New research looks at what people with Autism Spectrum Disorder pay attention to in the real world.
The perceptual world of a person with autism spectrum disorder (ASD) is unique. Beginning in infancy, people who have ASD observe and interpret images and social cues differently than others. Caltech researchers now have new insight into just how this occurs, research that eventually may help doctors diagnose, and more effectively treat, the various forms of the disorder. The work is detailed in a study published in the October 22 issue of the journal Neuron.
Symptoms of ASD include impaired social interaction, compromised communication skills, restricted interests, and repetitive behaviors. Research suggests that some of these behaviors are influenced by how an individual with ASD senses, attends to, and perceives the world.
The new study investigated how visual input is interpreted in the brain of someone with ASD. In particular, it examined the validity of long-standing assumptions about the condition, including the belief that those with ASD often miss facial cues, contributing to their inability to respond appropriately in social situations.
"Among other findings, our work shows that the story is not as simple as saying 'people with ASD don't look normally at faces.' They don't look at most things in a typical way," says Ralph Adolphs, the Bren Professor of Psychology and Neuroscience and professor of biology, in whose lab the study was done. Indeed, the researchers found that people with ASD attend more to nonsocial images, and to simple edges and patterns within those images, than to the faces of people.
To reach these determinations, Adolphs and his lab teamed up with Qi Zhao, an assistant professor of electrical and computer engineering at the National University of Singapore and the senior author on the paper, who had developed a detailed method for quantifying where subjects direct their attention within images. The researchers showed 700 images to 39 subjects. Twenty of the subjects were high-functioning individuals with ASD, and 19 were control, or "neurotypical," subjects without ASD. The two groups were matched for age, race, gender, educational level, and IQ. Each subject viewed each image for three seconds while an eye-tracking device recorded their attention patterns on objects depicted in the images.
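One simple way to turn raw eye-tracking data into attention measures is to score each fixation against labelled regions of the image and compare dwell fractions between groups. The sketch below is a hypothetical illustration of that general idea, not Zhao's published method; the region names, coordinates, and fixation points are all invented:

```python
# Score eye-tracking fixations against labelled image regions.
# Each fixation is assigned to the first region whose bounding box
# contains it; anything unmatched counts as background.

def dwell_fractions(fixations, regions):
    """fixations: list of (x, y); regions: name -> (x0, y0, x1, y1)."""
    counts = {name: 0 for name in regions}
    counts["background"] = 0
    for x, y in fixations:
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
                break
        else:
            counts["background"] += 1
    total = len(fixations)
    return {name: c / total for name, c in counts.items()}

regions = {"face": (100, 50, 200, 150), "object": (300, 200, 400, 300)}
fixations = [(150, 100), (160, 90), (350, 250), (10, 10)]
print(dwell_fractions(fixations, regions))
```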
Unlike the abstract representations of single objects or faces that have been commonly used in such studies, the images that Adolphs and his team presented contained combinations of more than 5,500 real-world elements—common objects like people, trees, and furniture as well as less common items like knives and flames—in natural settings, mimicking the scenes that a person might observe in day-to-day life.
"Complex images of natural scenes were a big part of this unique approach," says first author Shuo Wang (PhD '14), a postdoctoral fellow at Caltech. The images were shown to subjects in a rich semantic context, "which simply means showing a scene that makes sense," he explains. "I could make an equally complex scene with Photoshop by combining some random objects such as a beach ball, a hamburger, a Frisbee, a forest, and a plane, but that grouping of objects doesn't have a meaning—there is no story demonstrated. Having objects that are related in a natural way and that show something meaningful provides the semantic context. It is a real-world approach."
In addition to validating previous studies that showed, for example, that individuals with ASD are less drawn to faces than control subjects, the new study found that these subjects were strongly attracted to the center of images, regardless of the content placed there. Similarly, they tended to focus their gaze on objects that stood out—for example, due to differences in color and contrast—rather than on faces. Take, for example, one image from the study showing two people talking with one facing the camera and the other facing away so that only the back of their head is visible. Control subjects concentrated on the visible face, whereas ASD subjects attended equally to the face and the back of the other person's head.
"The study is probably most useful for informing diagnosis," Adolphs says. "Autism is many things. Our study is one initial step in trying to discover what kinds of different autisms there actually are. The next step is to see if all people with ASD show the kind of pattern we found. There are probably differences between individual people with ASD, and those differences could relate to differences in diagnosis, for instance, revealing subtypes of autism. Once we have identified those subtypes, we can begin to ask if different kinds of treatment might be best for each kind of subtype."
Adolphs plans to continue this type of research using functional magnetic resonance imaging scans to track the brain activity of people with ASD while they are viewing images in laboratory settings similar to what was used in this study.
The research was supported by a postdoctoral fellowship from the Autism Science Foundation, a Fonds de Recherche du Québec en Nature et Technologies predoctoral fellowship, a National Institutes of Health Grant and National Alliance for Research on Schizophrenia and Depression Young Investigator Grant, a grant from the National Institute of Mental Health to the Caltech Conte Center for the Neurobiology of Social Decision Making, a grant from the Simons Foundation Autism Research Initiative, and Singapore's Defense Innovative Research Program and the Singapore Ministry of Education's Academic Research Fund Tier 2.
Astronomers have for the first time probed the magnetic fields in the mysterious inner regions of stars, finding they are strongly magnetized.
Using a technique called asteroseismology, the scientists were able to calculate the magnetic field strengths in the fusion-powered hearts of dozens of red giants, stars that are evolved versions of our sun.
"In the same way medical ultrasound uses sound waves to image the interior of the human body, asteroseismology uses sound waves generated by turbulence on the surface of stars to probe their inner properties," says Caltech postdoctoral researcher Jim Fuller, who co-led a new study detailing the research.
The findings, published in the October 23 issue of Science, will help astronomers better understand the life and death of stars. Magnetic fields likely determine the interior rotation rates of stars; such rates have dramatic effects on how the stars evolve.
Until now, astronomers have been able to study the magnetic fields of stars only on their surfaces, and have had to use supercomputer models to simulate the fields near the cores, where the nuclear-fusion process takes place. "We still don't know what the center of our own sun looks like," Fuller says.
Red giants have a different physical makeup from so-called main-sequence stars such as our sun—one that makes them ideal for asteroseismology (a field that was born at Caltech in 1962, when the late physicist and astronomer Robert Leighton discovered the solar oscillations using the solar telescopes at Mount Wilson). The cores of red-giant stars are much denser than those of younger stars. As a consequence, sound waves do not reflect off the cores, as they do in stars like our sun. Instead, the sound waves are transformed into another class of waves, called gravity waves.
"It turns out the gravity waves that we see in the red giants do propagate all the way to the center of these stars," says co-lead author Matteo Cantiello, a specialist in stellar astrophysics from UC Santa Barbara's Kavli Institute for Theoretical Physics (KITP).
This conversion from sound waves to gravity waves has major consequences for the tiny shape changes, or oscillations, that red giants undergo. "Depending on their size and internal structure, stars oscillate in different patterns," Fuller says. In one form of oscillation pattern, known as the dipole mode, one hemisphere of the star becomes brighter while the other becomes dimmer. Astronomers observe these oscillations in a star by measuring how its light varies over time.
When strong magnetic fields are present in a star's core, the fields can disrupt the propagation of gravity waves, causing some of the waves to lose energy and become trapped within the core. Fuller and his coauthors have coined the term "magnetic greenhouse effect" to describe this phenomenon because it works similarly to the greenhouse effect on Earth, in which greenhouse gases in the atmosphere help trap heat from the sun. The trapping of gravity waves inside a red giant causes some of the energy of the star's oscillation to be lost, and the result is a smaller than expected dipole mode.
In 2013, NASA's Kepler space telescope, which can measure stellar brightness variations with incredibly high precision, detected dipole-mode damping in several red giants. Dennis Stello, an astronomer at the University of Sydney, brought the Kepler data to the attention of Fuller and Cantiello. Working in collaboration with KITP director Lars Bildsten and Rafael Garcia of France's Alternative Energies and Atomic Energy Commission, the scientists showed that the magnetic greenhouse effect was the most likely explanation for dipole-mode damping in the red giants. Their calculations revealed that the internal magnetic fields of the red giants were as much as 10 million times stronger than Earth's magnetic field.
"This is exciting, as internal magnetic fields play an important role for the evolution and ultimate fate of stars," says Professor of Theoretical Astrophysics Sterl Phinney, Caltech's executive officer for astronomy, who was not involved in the study.
A better understanding of the interior magnetic fields of stars could also help settle a debate about the origin of powerful magnetic fields on the surfaces of certain neutron stars and white dwarfs, two classes of stellar corpses that form when stars die.
"The magnetic fields that they find in the red-giant cores are comparable to those of the strongly magnetized white dwarfs," Phinney says. "The fact that only some of the red giants show the dipole suppression, which indicates strong core fields, may well be related to why only some stars leave behind remnants with strong magnetic fields after they die."
The asteroseismology technique the team used to probe red giants probably will not work with our sun. "However," Fuller says, "stellar oscillations are our best probe of the interiors of stars, so more surprises are likely."