Evidence for a Martian Ocean

Researchers at the California Institute of Technology (Caltech) have discovered evidence for an ancient delta on Mars where a river might once have emptied into a vast ocean.

This ocean, if it existed, could have covered much of Mars's northern hemisphere—stretching over as much as a third of the planet.

"Scientists have long hypothesized that the northern lowlands of Mars are a dried-up ocean bottom, but no one yet has found the smoking gun," says Mike Lamb, an assistant professor of geology at Caltech and a coauthor of the paper describing the results. The paper was published online in the July 12 issue of the Journal of Geophysical Research.

Although the new findings are far from proof of the existence of an ancient ocean, they provide some of the strongest support yet, says Roman DiBiase, a postdoctoral scholar at Caltech and lead author of the paper.

Most of the northern hemisphere of Mars is flat and at a lower elevation than the southern hemisphere, and thus appears similar to the ocean basins found on Earth. The border between the lowlands and the highlands would have been the coastline for the hypothetical ocean.

The Caltech team used new high-resolution images from the Mars Reconnaissance Orbiter (MRO) to study a 100-square-kilometer area that sits right on this possible former coastline. Previous satellite images have shown that this area—part of a larger region called Aeolis Dorsa, which is about 1,000 kilometers away from Gale Crater, where the Curiosity rover is now roaming—is covered in ridge-like features called inverted channels.

These inverted channels form when coarse materials like large gravel and cobbles are carried along rivers and deposited at their bottoms, building up over time. After the river dries up, the finer material—such as smaller grains of clay, silt, and sand—around the river erodes away, leaving behind the coarser stuff. This remaining sediment appears as today's ridge-like features, tracing the former river system.

When looked at from above, the inverted channels appear to fan out, a configuration that suggests one of three possible origins: the channels could have once been a drainage system in which streams and creeks flowed down a mountain and converged to form a larger river; the water could have flowed in the other direction, creating an alluvial fan, in which a single river channel branches into multiple smaller streams and creeks; or the channels are actually part of a delta, which is similar to an alluvial fan except that the smaller streams and creeks empty into a larger body of water such as an ocean.

To figure out which of these scenarios was most likely, the researchers turned to satellite images taken by the HiRISE camera on MRO. By taking pictures from different points in its orbit, the spacecraft was able to make stereo images that have allowed scientists to determine the topography of the martian surface. The HiRISE camera can pick out features as small as 25 centimeters across on the surface, and the topographic data can distinguish changes in elevation at a resolution of 1 meter.

Using this data, the Caltech researchers analyzed the stratigraphic layers of the inverted channels, piecing together the history of how sediments were deposited along these ancient rivers and streams. The team was able to determine the slopes of the channels back when water was still coursing through them. Such slope measurements can reveal the direction of water flow—in this case, showing that the water was spreading out instead of converging, meaning the channels were part of an alluvial fan or a delta.
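
To make the method concrete, here is a minimal sketch in Python of how a flow direction can be read off an along-channel elevation profile; the numbers are hypothetical, and this illustrates the general idea rather than the team's actual pipeline.

```python
# Minimal sketch: infer paleo-flow direction from an along-channel
# elevation profile. The profile values are hypothetical illustrative
# data, not measurements from the study.
import numpy as np

distance_m = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0])  # along channel
elevation_m = np.array([12.0, 9.6, 7.9, 5.2, 3.1])           # stereo-derived DEM

slope, _ = np.polyfit(distance_m, elevation_m, 1)  # mean gradient, m/m
print(f"mean slope: {slope:.4f} m/m")

# Water runs downhill: a negative gradient means flow toward
# increasing distance along the profile.
direction = "toward increasing distance" if slope < 0 else "toward decreasing distance"
print("inferred flow direction:", direction)
```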

But the researchers also found evidence for an abrupt increase in slope of the sedimentary beds near the downstream end of the channels. That sort of steep slope is most common when a stream empties into a large body of water—suggesting that the channels are part of a delta and not an alluvial fan.

Scientists have discovered martian deltas before, but most are inside a geological boundary, like a crater. Water therefore would have most likely flowed into a lake enclosed by such a boundary and so did not provide evidence for an ocean.

But the newly discovered delta is not inside a crater or other confining boundary, suggesting that the water likely emptied into a large body of water like an ocean. "This is probably one of the most convincing pieces of evidence of a delta in an unconfined region—and a delta points to the existence of a large body of water in the northern hemisphere of Mars," DiBiase says. This large body of water could be the ocean that has been hypothesized to have covered a third of the planet. At the very least, the researchers say, the water would have covered the entire Aeolis Dorsa region, which spans about 100,000 square kilometers.

Of course, there are still other possible explanations. It is plausible, for instance, that at one time there was a confining boundary—such as a large crater—that was later erased, Lamb adds. But that would require a rather substantial geological process and would mean that the martian surface was more geologically active than has been previously thought.

The next step, the researchers say, is to continue exploring the boundary between the southern highlands and northern lowlands—the hypothetical ocean coastline—and analyze other sedimentary deposits to see if they yield more evidence for an ocean. 

"In our work and that of others—including the Curiosity rover—scientists are finding a rich sedimentary record on Mars that is revealing its past environments, which include rain, flowing water, rivers, deltas, and potentially oceans," Lamb says. "Both the ancient environments on Mars and the planet's sedimentary archive of these environments are turning out to be surprisingly Earth-like."

The title of the Journal of Geophysical Research paper is "Deltaic deposits at Aeolis Dorsa: Sedimentary evidence for a standing body of water on the northern plains of Mars." In addition to DiBiase and Lamb, the other authors of the paper are graduate students Ajay Limaye and Joel Scheingross, and Woodward Fischer, assistant professor of geobiology. This research was supported by the National Science Foundation, NASA, and Caltech.

Writer: Marcus Woo

New Research Sheds Light on M.O. of Unusual RNA Molecules

The genes that code for proteins—more than 20,000 in total—make up only about 1 percent of the complete human genome. That entire thing—not just the genes, but also genetic junk and all the rest—is coiled and folded up in any number of ways within the nucleus of each of our cells. Think, then, of the challenge that a protein or other molecule, like RNA, faces when searching through that material to locate a target gene.

Now a team of researchers, led by newly arrived biologist Mitchell Guttman of the California Institute of Technology (Caltech) and Kathrin Plath of UCLA, has figured out how some RNA molecules take advantage of their position within the three-dimensional mishmash of genomic material to home in on targets. The research appears in the current issue of Science Express.

The findings suggest a unique role for a class of RNAs, called lncRNAs, which Guttman and his colleagues at the Broad Institute of MIT and Harvard first characterized in 2009. Until then, these lncRNAs—short for long, noncoding RNAs and pronounced "link RNAs"—had been largely overlooked because they lie in between the genes that code for proteins. Guttman and others have since shown that lncRNAs scaffold, or bring together and organize, key proteins involved in the packaging of genetic information to regulate gene expression—controlling cell fate in some stem cells, for example.

In the new work, the researchers found that lncRNAs can easily locate and bind to nearby genes. Then, with the help of proteins that reorganize genetic material, the molecules can pull in additional related genes and move to new sites, building up a "compartment" where many genes can be regulated all at once.

"You can now think about these lncRNAs as a way to bring together genes that are needed for common function into a single physical region and then regulate them as a set, rather than individually," Guttman says. "They are not just scaffolds of proteins but actual organizers of genes."

The new work focused on Xist, a lncRNA molecule that has long been known to be involved in turning off one of the two X chromosomes in female mammals (something that must happen in order for the genome to function properly). Quite a bit has been uncovered about how Xist achieves this silencing act. We know, for example, that it binds to the X chromosome; that it recruits a chromatin regulator to help it organize and modify the structure of the chromatin; and that certain distinct regions of the RNA are necessary to do all of this work. Despite this knowledge, it had been unknown at the molecular level how Xist actually finds its targets and spreads across the X chromosome.

To gain insight into that process, Guttman and his colleagues at the Broad Institute developed a method called RNA Antisense Purification (RAP) that, by sequencing DNA at high resolution, gave them a way to map out exactly where different lncRNAs go. Then, working with Plath's group at UCLA, they used their method to watch in high resolution as Xist was activated in undifferentiated mouse stem cells, and the process of X-chromosome silencing proceeded.

"That's where this got really surprising," Guttman says. "It wasn't that somehow this RNA just went everywhere, searching for its target. There was some method to its madness. It was clear that this RNA actually used its positional information to find things that were very far away from it in genome space, but all of those genes that it went to were really close to it in three-dimensional space."

Before Xist is activated, X-chromosome genes are all spread out. But, the researchers found, once Xist is turned on, it quickly pulls in genes, forming a cloud. "And it's not just that the expression levels of Xist get higher and higher," Guttman says. "It's that Xist brings in all of these related genes into a physical nuclear structure. All of these genes then occupy a single territory."

The researchers found that a specific region of Xist called the A-repeat domain, which is known to be vital for the lncRNA's ability to silence X-chromosome genes, is also needed to pull in all of the genes that Xist must silence. When the researchers deleted the domain, the X chromosome did not become inactivated, because the silencing compartment did not form.

One of the most exciting aspects of the new research, Guttman says, is that it has implications beyond just explaining how Xist works. "In our paper, we talk a lot about Xist, but these results are likely to be general to other lncRNAs," he says. He adds that the work provides one of the first direct pieces of evidence to explain what makes lncRNAs special. "LncRNAs, unlike proteins, really can use their genomic information—their context, their location—to act, to bring together targets," he says. "That makes them quite unique."  

The new paper is titled "The Xist lncRNA exploits three-dimensional genome architecture to spread across the X-chromosome." Along with Guttman and Plath, additional coauthors are Jesse M. Engreitz, Patrick McDonel, Alexander Shishkin, Klara Sirokman, Christine Surka, Sabah Kadri, Jeffrey Xing, Alon Goren, and Eric Lander of the Broad Institute of Harvard and MIT, as well as Amy Pandya-Jones of UCLA. The work was funded by an NIH Director's Early Independence Award, the National Human Genome Research Institute Centers of Excellence in Genomic Sciences, the California Institute for Regenerative Medicine, and funds from the Broad Institute and from UCLA's Eli and Edythe Broad Center for Regenerative Medicine and Stem Cell Research.

Writer: Kimm Fesenmaier

Psychology Influences Markets

When it comes to economics versus psychology, score one for psychology.

Economists argue that markets usually reflect rational behavior—that is, the dominant players in a market, such as the hedge-fund managers who make billions of dollars' worth of trades, almost always make well-informed and objective decisions. Psychologists, on the other hand, say that markets are not immune from human irrationality, whether that irrationality is due to optimism, fear, greed, or other forces.

Now, a new analysis published the week of July 1 in the online issue of the Proceedings of the National Academy of Sciences (PNAS) supports the latter case, showing that markets are indeed susceptible to psychological phenomena. "There's this tug-of-war between economics and psychology, and in this round, psychology wins," says Colin Camerer, the Robert Kirby Professor of Behavioral Economics at the California Institute of Technology (Caltech) and the corresponding author of the paper.

Indeed, it is difficult to claim that markets are immune to apparent irrationality in human behavior. "The recent financial crisis really has shaken a lot of people's faith," Camerer says. Despite the faith of many that markets would organize allocations of capital in ways that are efficient, he notes, the government still had to bail out banks, and millions of people lost their homes.

In their analysis, the researchers studied an effect called partition dependence, in which breaking down—or partitioning—the possible outcomes of an event in great detail makes people think that those outcomes are more likely to happen. The reason, psychologists say, is that providing specific scenarios makes them more explicit in people's minds. "Whatever we're thinking about seems more likely," Camerer explains.

For example, if you are asked to predict the next presidential election, you may say that a Democrat has a 50/50 chance of winning and a Republican has a 50/50 chance of winning. But if you are asked about the odds that a particular candidate from each party might win—for example, Hillary Clinton versus Chris Christie—you are likely to envision one of them in the White House, causing you to overestimate his or her odds.

The researchers looked for this bias in a variety of prediction markets, in which people bet on future events. In these markets, participants buy and sell claims on specific outcomes, and the prices of those claims—as set by the market—reflect people's beliefs about how likely it is that each of those outcomes will happen. Say, for example, that the price for a claim that the Miami Heat will win 16 games during the NBA playoffs is $6.50 for a $10 return. That means that, in the collective judgment of the traders, Miami has a 65 percent chance of winning 16 games.
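
In other words, the implied probability is just the price divided by the payout. A minimal sketch in Python, using the article's example numbers (real markets also embed fees and traders' risk preferences):

```python
# The market's implied probability is the price paid per dollar of payout.
def implied_probability(price: float, payout: float) -> float:
    return price / payout

print(implied_probability(6.50, 10.00))  # 0.65, i.e., a 65 percent chance
```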

The researchers created two prediction markets via laboratory experiments and studied two others in the real world. In one lab experiment, which took place in 2006, volunteers traded claims on how many games an NBA team would win during the 2006 playoffs and how many goals a team would score in the 2006 World Cup. The volunteers traded claims on 16 teams each for the NBA playoffs and the World Cup.

In the basketball case, one group of volunteers was asked to bet on whether the Miami Heat would win 4–7 playoff games, 8–11 games, or some other range. Another group was given a range of 4–11 games, which combined the two intervals offered to the first group. Then, the volunteers traded claims on each of the intervals within their respective groups. As with all prediction markets, the price of a traded claim reflected the traders' estimations of whether the total number of games won by the Heat would fall within a particular range.

Economic theory says that the first group's perceived probability of the Heat winning 4–7 games and its perceived probability of winning 8–11 games should add up to a total close to the second group's perceived probability of the team winning 4–11 games. But when the researchers added up the numbers, they found instead that the first group's combined estimate for the team winning 4–7 or 8–11 games was higher than the second group's estimate for the team winning 4–11 games. All of this suggests that framing the possible outcomes in terms of more specific intervals caused people to think that those outcomes were more likely.
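
A toy version of that comparison makes the logic explicit; the price-implied probabilities below are made up to illustrate the direction of the bias, not taken from the study's data.

```python
# Additive (rational) beliefs require P(4-7) + P(8-11) to match P(4-11).
# These price-implied probabilities are hypothetical, chosen only to
# show the direction of the measured bias.
p_4_to_7 = 0.38    # first group's claim price for 4-7 wins
p_8_to_11 = 0.35   # first group's claim price for 8-11 wins
p_4_to_11 = 0.60   # second group's claim price for 4-11 wins

partitioned_sum = p_4_to_7 + p_8_to_11
print(f"partitioned sum: {partitioned_sum:.2f} vs combined: {p_4_to_11:.2f}")
print(f"excess from finer framing: {partitioned_sum - p_4_to_11:+.2f}")  # +0.13
```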

The researchers observed similar results in a second, similar lab experiment, and in two studies of natural markets—one involving a series of 153 prediction markets run by Deutsche Bank and Goldman Sachs, and another involving long-shot horses in horse races.

People tend to bet more money on a long-shot horse because of its higher potential payoff, and they also tend to overestimate the chance that such a horse will win. Statistically, however, a horse's chance of winning a particular race is the same regardless of how many other horses it's racing against—a horse that habitually wins just five percent of the time will continue to do so whether it is racing against fields of 5 or of 11. But when the researchers looked at horse-race data from 1992 through 2001—a total of 6.3 million starts—they found that bettors were subject to the partition bias, believing that long-shot horses had higher odds of winning when they were racing against fewer horses.
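
One plausible way to picture the bias (a sketch with an assumed 5 percent base rate, not a model from the paper): framing a race as a list of N runners invites something like a one-in-N default estimate, which overstates a long shot's chances more in smaller fields.

```python
# Partition bias in betting terms: a 1/N default per runner overstates
# a habitual long shot's chances, and more so in smaller fields.
# The 5 percent base rate is assumed for illustration.
BASE_RATE = 0.05

for field_size in (5, 11):
    naive = 1.0 / field_size
    print(f"{field_size:2d}-horse field: 1/N = {naive:.3f}, "
          f"record = {BASE_RATE:.3f}, overestimate = {naive - BASE_RATE:+.3f}")
```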

While partition dependence has been looked at in the past in specific lab experiments, it hadn't been studied in prediction markets, Camerer says. What makes this particular analysis powerful is that the researchers observed evidence for this phenomenon in a wide range of studies—short, well-controlled laboratory experiments; markets involving intelligent, well-informed traders at major financial institutions; and nine years of horse-racing data.

The title of the PNAS paper is "How psychological framing affects economic market prices in the lab and field." In addition to Camerer, the other authors are Ulrich Sonnemann and Thomas Langer at the University of Münster, Germany, and Craig Fox at UCLA. Their research was supported by the German Research Foundation, the National Science Foundation, the Gordon and Betty Moore Foundation, and the Human Frontier Science Program.

Writer: Marcus Woo

A Stepping-Stone for Oxygen on Earth

Caltech researchers find evidence of an early manganese-oxidizing photosystem

For most terrestrial life on Earth, oxygen is necessary for survival. But the planet's atmosphere did not always contain this life-sustaining substance, and one of science's greatest mysteries is how and when oxygenic photosynthesis—the process responsible for producing oxygen on Earth through the splitting of water molecules—first began. Now, a team led by geobiologists at the California Institute of Technology (Caltech) has found evidence of a precursor photosystem involving manganese that predates cyanobacteria, the first group of organisms to release oxygen into the environment via photosynthesis.  

The findings, outlined in the June 24 early edition of the Proceedings of the National Academy of Sciences (PNAS), strongly support the idea that manganese oxidation—which, despite the name, is a chemical reaction that does not have to involve oxygen—provided an evolutionary stepping-stone for the development of water-oxidizing photosynthesis in cyanobacteria.

"Water-oxidizing or water-splitting photosynthesis was invented by cyanobacteria approximately 2.4 billion years ago and then borrowed by other groups of organisms thereafter," explains Woodward Fischer, assistant professor of geobiology at Caltech and a coauthor of the study. "Algae borrowed this photosynthetic system from cyanobacteria, and plants are just a group of algae that took photosynthesis on land, so we think with this finding we're looking at the inception of the molecular machinery that would give rise to oxygen."

Photosynthesis is the process by which energy from the sun is used by plants and other organisms to split water and carbon dioxide molecules to make carbohydrates and oxygen. Manganese is required for water splitting to work, so when scientists began to wonder what evolutionary steps may have led up to an oxygenated atmosphere on Earth, they started to look for evidence of manganese-oxidizing photosynthesis prior to cyanobacteria. Since oxidation simply involves the transfer of electrons to increase the charge on an atom—and this can be accomplished using light or O2—it could have occurred before the rise of oxygen on this planet.

"Manganese plays an essential role in modern biological water splitting as a necessary catalyst in the process, so manganese-oxidizing photosynthesis makes sense as a potential transitional photosystem," says Jena Johnson, a graduate student in Fischer's laboratory at Caltech and lead author of the study.

To test the hypothesis that manganese-based photosynthesis occurred prior to the evolution of oxygenic cyanobacteria, the researchers examined drill cores (newly obtained by the Agouron Institute) from 2.415-billion-year-old South African marine sedimentary rocks with large deposits of manganese.

Manganese is soluble in seawater. Indeed, if there are no strong oxidants around to accept electrons from the manganese, it will remain aqueous, Fischer explains, but the second it is oxidized, or loses electrons, manganese precipitates, forming a solid that can become concentrated within seafloor sediments.

"Just the observation of these large enrichments—16 percent manganese in some samples—provided a strong implication that the manganese had been oxidized, but this required confirmation," he says.

To prove that the manganese was originally part of the South African rock and not deposited there later by hydrothermal fluids or some other phenomena, Johnson and colleagues developed and employed techniques that allowed the team to assess the abundance and oxidation state of manganese-bearing minerals at a very tiny scale of 2 microns.

"And it's warranted—these rocks are complicated at a micron scale!" Fischer says. "And yet, the rocks occupy hundreds of meters of stratigraphy across hundreds of square kilometers of ocean basin, so you need to be able to work between many scales—very detailed ones, but also across the whole deposit to understand the ancient environmental processes at work."

Using these multiscale approaches, Johnson and colleagues demonstrated that the manganese was original to the rocks and first deposited in sediments as manganese oxides, and that manganese oxidation occurred over a broad swath of the ancient marine basin during the entire timescale captured by the drill cores.

"It's really amazing to be able to use X-ray techniques to look back into the rock record and use the chemical observations on the microscale to shed light on some of the fundamental processes and mechanisms that occurred billions of years ago," says Samuel Webb, coauthor on the paper and beam line scientist at the SLAC National Accelerator Laboratory at Stanford University, where many of the study's experiments took place. "Questions regarding the evolution of the photosynthetic pathway and the subsequent rise of oxygen in the atmosphere are critical for understanding not only the history of our own planet, but also the basics of how biology has perfected the process of photosynthesis."

Once the team confirmed that the manganese had been deposited as an oxide phase when the rock was first forming, they checked to see if these manganese oxides were actually formed before water-splitting photosynthesis or if they formed after as a result of reactions with oxygen. They used two different techniques to check whether oxygen was present. It was not—proving that water-splitting photosynthesis had not yet evolved at that point in time. The manganese in the deposits had indeed been oxidized and deposited before the appearance of water-splitting cyanobacteria. This implies, the researchers say, that manganese-oxidizing photosynthesis was a stepping-stone for oxygen-producing, water-splitting photosynthesis.

"I think that there will be a number of additional experiments that people will now attempt to try and reverse engineer a manganese photosynthetic photosystem or cell," Fischer says. "Once you know that this happened, it all of a sudden gives you reason to take more seriously an experimental program aimed at asking, 'Can we make a photosystem that's able to oxidize manganese but doesn't then go on to split water? How does it behave, and what is its chemistry?' Even though we know what modern water splitting is and what it looks like, we still don't know exactly how it works. There is still a major discovery to be made to find out exactly how the catalysis works, and now knowing where this machinery comes from may open new perspectives into its function—an understanding that could help target technologies for energy production from artificial photosynthesis. "

Next up in Fischer's lab, Johnson plans to work with others to try to mutate cyanobacteria to "go backwards" and perform manganese-oxidizing photosynthesis. The team also plans to investigate a set of rocks from western Australia that are similar in age to the samples used in the current study and may also contain beds of manganese. If their current study results are truly an indication of manganese-oxidizing photosynthesis, they say, there should be evidence of the same processes in other parts of the world.

"Oxygen is the backdrop on which this story is playing out on, but really, this is a tale of the evolution of this very intense metabolism that happened once—an evolutionary singularity that transformed the planet," Fischer says. "We've provided insight into how the evolution of one of these remarkable molecular machines led up to the oxidation of our planet's atmosphere, and now we're going to follow up on all angles of our findings."

Funding for the research outlined in the PNAS paper, titled "Manganese-oxidizing photosynthesis before the rise of cyanobacteria," was provided by the Agouron Institute, NASA's Exobiology Branch, the David and Lucile Packard Foundation, and the National Science Foundation Graduate Research Fellowship program. Joseph Kirschvink, Nico and Marilyn Van Wingen Professor of Geobiology at Caltech, also contributed to the study along with Katherine Thomas and Shuhei Ono from the Massachusetts Institute of Technology.

Writer: Katie Neith

Beauty and the Brain: Electrical Stimulation of the Brain Makes You Perceive Faces as More Attractive

Findings may lead to promising ways to treat and study neuropsychiatric disorders

Beauty is in the eye of the beholder, and—as researchers have now shown—in the brain as well.

The researchers, led by scientists at the California Institute of Technology (Caltech), have used a well-known, noninvasive technique to electrically stimulate a specific region deep inside the brain previously thought to be inaccessible. The stimulation, the scientists say, caused volunteers to judge faces as more attractive than before their brains were stimulated.

Being able to effect such behavioral changes means that this electrical stimulation tool could be used to noninvasively manipulate deep regions of the brain—and, therefore, that it could serve as a new approach to study and treat a variety of deep-brain neuropsychiatric disorders, such as Parkinson's disease and schizophrenia, the researchers say.

"This is very exciting because the primary means of inducing these kinds of deep-brain changes to date has been by administering drug treatments," says Vikram Chib, a postdoctoral scholar who led the study, which is being published in the June 11 issue of the journal Translational Psychiatry. "But the problem with drugs is that they're not location-specific—they act on the entire brain." Thus, drugs may carry unwanted side effects or, occasionally, won't work for certain patients—who then may need invasive treatments involving the implantation of electrodes into the brain.

So Chib and his colleagues turned to a technique called transcranial direct-current stimulation (tDCS), which, Chib notes, is cheap, simple, and safe. In this method, an anode and a cathode are placed at two different locations on the scalp. A weak electrical current—which can be powered by a nine-volt battery—runs from the cathode, through the brain, and to the anode. The electrical current is a mere 2 milliamps—10,000 times less than the 20 amps typically available from wall sockets. "All you feel is a little bit of tingling, and some people don't even feel that," he says.

"There have been many studies employing tDCS to affect behavior or change local neural activity," says Shinsuke Shimojo, the Gertrude Baltimore Professor of Experimental Psychology and a coauthor of the paper. For example, the technique has been used to treat depression and to help stroke patients rehabilitate their motor skills. "However, to our knowledge, virtually none of the previous studies actually examined and correlated both behavior and neural activity," he says. These studies also targeted the surface areas of the brain—not much more than a centimeter deep—which were thought to be the physical limit of how far tDCS could reach, Chib adds.

The researchers hypothesized that they could exploit known neural connections and use tDCS to stimulate deeper regions of the brain. In particular, they wanted to access the ventral midbrain—the center of the brain's reward-processing network, and about as deep as you can go. It is thought to be the source of dopamine, a chemical whose deficiency has been linked to many neuropsychiatric disorders.

The ventral midbrain is part of a neural circuit that includes the dorsolateral prefrontal cortex (DLPFC), which is located just above the temples, and the ventromedial prefrontal cortex (VMPFC), which is behind the forehead. Decreasing activity in the DLPFC boosts activity in the VMPFC, which in turn bumps up activity in the ventral midbrain. To manipulate the ventral midbrain, therefore, the researchers decided to try using tDCS to deactivate the DLPFC and activate the VMPFC.

To test their hypothesis, the researchers asked volunteers to judge the attractiveness of groups of faces both before and after the volunteers' brains had been stimulated with tDCS. Judging facial attractiveness is one of the simplest, most primal tasks that can activate the brain's reward network, and difficulty in evaluating faces and recognizing facial emotions is a common symptom of neuropsychiatric disorders. The study participants rated the faces while inside a functional magnetic resonance imaging (fMRI) scanner, which allowed the researchers to evaluate any changes in brain activity caused by the stimulation.

A total of 99 volunteers participated in the tDCS experiment and were divided into six stimulation groups. In the main stimulation group, composed of 19 subjects, the DLPFC was deactivated and the VMPFC activated with a stimulation configuration that the researchers theorized would ultimately activate the ventral midbrain. The other groups were used to test different stimulation configurations. For example, in one group, the placement of the cathode and anode was switched so that the DLPFC was activated and the VMPFC was deactivated—the opposite of the main group. Another was a "sham" group, in which the electrodes were placed on volunteers' heads, but no current was run.

Those in the main group rated the faces presented after stimulation as more attractive than those they saw before stimulation. There were no differences in the ratings from the control groups. This change in ratings in the main group suggests that tDCS is indeed able to activate the ventral midbrain, and that the resulting changes in brain activity in this deep-brain region are associated with changes in the evaluation of attractiveness.

In addition, the fMRI scans revealed that tDCS strengthened the correlation between VMPFC activity and ventral midbrain activity. In other words, stimulation appeared to enhance the neural connectivity between the two brain areas. And for those who showed the strongest connectivity, tDCS led to the biggest change in attractiveness ratings. Taken together, the researchers say these results show that tDCS is causing those shifts in perception by manipulating the ventral midbrain via the DLPFC and VMPFC.

"The fact that we haven't had a way to noninvasively manipulate a functional circuit in the brain has been a fundamental bottleneck in human behavioral neuroscience," Shimojo says. This new work, he adds, represents a big first step in removing that bottleneck.

Using tDCS to study and treat neuropsychiatric disorders hinges on the assumption that the technique directly influences dopamine production in the ventral midbrain, Chib explains. But because fMRI can't directly measure dopamine, this study was unable to make that determination. The next step, then, is to use methods that can—such as positron emission tomography (PET) scans.

More work also needs to be done to see how tDCS may be used for treating disorders and to precisely determine the duration of the stimulation effects—as a rule of thumb, the influence of tDCS lasts for twice the exposure time, Chib says. Future studies will also be needed to see what other behaviors this tDCS method can influence. Ultimately, clinical tests will be needed for medical applications.

In addition to Chib and Shimojo, the other authors of the paper are Kyongsik Yun, a former postdoctoral scholar at Caltech who is now at the Korea Advanced Institute of Science and Technology (KAIST), and Hidehiko Takahashi of the Kyoto University Graduate School of Medicine. The title of the Translational Psychiatry paper is "Noninvasive remote activation of the ventral midbrain by transcranial direct current stimulation of prefrontal cortex." This work was funded by the Exploratory Research for Advanced Technology (ERATO) and CREST programs of the Japan Science and Technology Agency (JST); the Caltech-Tamagawa gCOE (Global Center of Excellence) program; and a Japan-U.S. Brain Research Cooperative Program grant.

Writer: Marcus Woo

Keeping Stem Cells Strong

Caltech biologists show that an RNA molecule protects stem cells during inflammation

When infections occur in the body, stem cells in the blood often jump into action by multiplying and differentiating into mature immune cells that can fight off illness. But repeated infections and inflammation can deplete these cell populations, potentially leading to the development of serious blood conditions such as cancer. Now, a team of researchers led by biologists at the California Institute of Technology (Caltech) has found that, in mouse models, the molecule microRNA-146a (miR-146a) acts as a critical regulator and protector of blood-forming stem cells (called hematopoietic stem cells, or HSCs) during chronic inflammation, suggesting that a deficiency of miR-146a may be one important cause of blood cancers and bone marrow failure.

The team came to this conclusion by developing a mouse model that lacks miR-146a. RNA is a polymer structured like DNA, the chemical that makes up our genes. MicroRNAs, as the name implies, are a class of very short RNAs that can interfere with or regulate the activities of particular genes. When subjected to a state of chronic inflammation, mice lacking miR-146a showed a decline in the overall number and quality of their HSCs; normal mice producing the molecule, in contrast, were better able to maintain their levels of HSCs despite long-term inflammation. The researchers' findings are outlined in the May 21 issue of the new journal eLife.

"This mouse with genetic deletion of miR-146a is a wonderful model with which to understand chronic-inflammation-driven tumor formation and hematopoietic stem cell biology during chronic inflammation," says Jimmy Zhao, the lead author of the study and an MD/PhD student in the Caltech laboratory of David Baltimore, the Robert Andrews Millikan Professor of Biology. "It was surprising that a single microRNA plays such a crucial role. Deleting it produced a profound and dramatic pathology, which clearly highlights the critical and indispensable function of miR-146a in guarding the quality and longevity of HSCs."

The study findings provide, for the first time, a detailed molecular connection between chronic inflammation and bone marrow failure and diseases of the blood. These findings could lead to the discovery and development of anti-inflammatory molecules that could be used as therapeutics for blood diseases. In fact, the researchers believe that miR-146a itself may ultimately become a very effective anti-inflammatory molecule, once RNA molecules or mimetics can be delivered more efficiently to the cells of interest.

The new mouse model, Zhao says, also mimics important aspects of human myelodysplastic syndrome (MDS)—a form of pre-leukemia that often causes severe anemia, can require frequent blood transfusions, and usually leads to acute myeloid leukemia. Further study of the model could lead to a better understanding of the condition and therefore potential new treatments for MDS.

"This study speaks to the importance of keeping chronic inflammation in check and provides a good rationale for broad use of safer and more effective anti-inflammatory molecules," says Baltimore, who is a coauthor of the study. "If we can understand what cell types and proteins are critically important in chronic-inflammation-driven tumor formation and stem cell exhaustion, we can potentially design better and safer drugs to intervene."

Funding for the research outlined in the eLife paper, titled "MicroRNA-146a acts as a guardian of the quality and longevity of hematopoietic stem cells in mice," was provided by the National Institute of Allergy and Infectious Disease; the National Heart, Lung, and Blood Institute; and the National Cancer Institute. Yvette Garcia-Flores, the lead technician in Baltimore's lab, also contributed to the study along with Dinesh Rao from UCLA and Ryan O'Connell from the University of Utah. eLife, a new open-access, high-impact journal, is backed by three of the world's leading funding agencies: the Howard Hughes Medical Institute, the Max Planck Society, and the Wellcome Trust.

Writer: Katie Neith

Birth of a Black Hole

A new kind of cosmic flash may reveal something never seen before: the birth of a black hole.

When a massive star exhausts its fuel, it collapses under its own gravity and produces a black hole, an object so dense that not even light can escape its gravitational grip. According to a new analysis by an astrophysicist at the California Institute of Technology (Caltech), just before the black hole forms, the dying star may generate a distinct burst of light that will allow astronomers to witness the birth of a new black hole for the first time.

Tony Piro, a postdoctoral scholar at Caltech, describes this signature light burst in a paper published in the May 1 issue of the Astrophysical Journal Letters. While some dying stars that result in black holes explode as gamma-ray bursts, which are among the most energetic phenomena in the universe, those cases are rare, requiring exotic circumstances, Piro explains. "We don't think most run-of-the-mill black holes are created that way." In most cases, according to one hypothesis, a dying star produces a black hole without a bang or a flash: the star would seemingly vanish from the sky—an event dubbed an unnova. "You don't see a burst," he says. "You see a disappearance."

But, Piro hypothesizes, that may not be the case. "Maybe they're not as boring as we thought," he says.

According to well-established theory, when a massive star dies, its core collapses under its own weight. As it collapses, the protons and electrons that make up the core merge and produce neutrons. For a few seconds—before it ultimately collapses into a black hole—the core becomes an extremely dense object called a neutron star, which is as dense as the sun would be if squeezed into a sphere with a radius of about 10 kilometers (roughly 6 miles). This collapsing process also creates neutrinos, which are particles that zip through almost all matter at nearly the speed of light. As the neutrinos stream out from the core, they carry away a lot of energy—representing about a tenth of the sun's mass (since energy and mass are equivalent, per E = mc²).
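
Both figures in that description can be checked with rounded physical constants; the back-of-envelope below is my own arithmetic, not a calculation from the paper.

```python
# Back-of-envelope checks (rounded constants, illustrative arithmetic):
# (1) the density of a solar mass packed into a 10 km sphere, and
# (2) the energy equivalent of a tenth of a solar mass via E = mc^2.
import math

M_SUN = 1.989e30   # kg
C = 2.998e8        # m/s
R = 1.0e4          # m (the quoted 10 km radius)

density = M_SUN / ((4.0 / 3.0) * math.pi * R**3)
print(f"density: {density:.2e} kg/m^3")           # ~4.7e17, roughly nuclear density

energy = 0.1 * M_SUN * C**2
print(f"neutrino energy budget: {energy:.2e} J")  # ~1.8e46 J
```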

According to a little-known paper written in 1980 by Dmitry Nadezhin of the Alikhanov Institute for Theoretical and Experimental Physics in Russia, this rapid loss of mass means that the gravitational strength of the dying star's core would abruptly drop. When that happens, the outer gaseous layers—mainly hydrogen—still surrounding the core would rush outward, generating a shock wave that would hurtle through the outer layers at about 1,000 kilometers per second (more than 2 million miles per hour).

Using computer simulations, two astronomers at UC Santa Cruz, Elizabeth Lovegrove and Stan Woosley, recently found that when the shock wave strikes the outer surface of the gaseous layers, it would heat the gas at the surface, producing a glow that would shine for about a year—a potentially promising signal of a black-hole birth. Although about a million times brighter than the sun, this glow would be relatively dim compared to other stars. "It would be hard to see, even in galaxies that are relatively close to us," says Piro.

But now Piro says he has found a more promising signal. In his new study, he examines in more detail what might happen at the moment when the shock wave hits the star's surface, and he calculates that the impact itself would make a flash 10 to 100 times brighter than the glow predicted by Lovegrove and Woosley. "That flash is going to be very bright, and it gives us the best chance for actually observing that this event occurred," Piro explains. "This is what you really want to look for."

Such a flash would be dim compared to exploding stars called supernovae, for example, but it would be luminous enough to be detectable in nearby galaxies, he says. The flash, which would shine for 3 to 10 days before fading, would be very bright in optical wavelengths—and at its very brightest in ultraviolet wavelengths.
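
For a sense of scale, here is a rough detectability estimate using standard magnitude formulas and the luminosities quoted above; the bright-end luminosity and the 10-megaparsec distance are assumptions for illustration, not numbers from the paper.

```python
# Rough detectability estimate for the flash. Assumptions: the shock glow
# is ~1e6 L_sun (from the paragraph above), the flash is up to 100x
# brighter, and the host galaxy sits 10 Mpc away. Magnitudes are treated
# as bolometric for simplicity.
import math

M_SUN_ABS = 4.83                      # sun's absolute magnitude
L_flash = 100 * 1.0e6                 # bright end: 1e8 L_sun

abs_mag = M_SUN_ABS - 2.5 * math.log10(L_flash)
dist_pc = 10.0e6                      # 10 Mpc (assumed nearby galaxy)
app_mag = abs_mag + 5.0 * math.log10(dist_pc / 10.0)

print(f"absolute magnitude: {abs_mag:.1f}")            # about -15.2
print(f"apparent magnitude at 10 Mpc: {app_mag:.1f}")  # about 14.8, well
# within reach of transient surveys like PTF, whose limits reach roughly
# 20th magnitude.
```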

Piro estimates that astronomers should be able to see one of these events per year on average. Surveys that watch the skies for flashes of light like supernovae—surveys such as the Palomar Transient Factory (PTF), led by Caltech—are well suited to discover these unique events, he says. The intermediate Palomar Transient Factory (iPTF), which improves on the PTF and just began surveying in February, may be able to find a couple of these events per year.

Neither survey has observed any black-hole flashes as of yet, says Piro, but that does not rule out their existence. "Eventually we're going to start getting worried if we don't find these things." But for now, he says, his expectations are perfectly sound.

With Piro's analysis in hand, astronomers should be able to design and fine-tune additional surveys to maximize their chances of witnessing a black-hole birth in the near future. In 2015, the next generation of PTF, called the Zwicky Transient Facility (ZTF), is slated to begin; it will be even more sensitive, improving by several times the chances of finding those flashes. "Caltech is therefore really well-positioned to look for transient events like this," Piro says.

Within the next decade, the Large Synoptic Survey Telescope (LSST) will begin a massive survey of the entire night sky. "If LSST isn't regularly seeing these kinds of events, then that's going to tell us that maybe there's something wrong with this picture, or that black-hole formation is much rarer than we thought," he says.

The Astrophysical Journal Letters paper is titled "Taking the 'un' out of unnovae." This research was supported by the National Science Foundation, NASA, and the Sherman Fairchild Foundation.

Writer: Marcus Woo

Astronomers Discover Massive Star Factory in Early Universe

Star-forming galaxy is the most distant ever found

PASADENA, Calif.—Smaller begets bigger.

Such is often the case for galaxies, at least: the first galaxies were small, then eventually merged together to form the behemoths we see in the present universe.

Those smaller galaxies produced stars at a modest rate; only later—when the universe was a couple of billion years old—did the vast majority of larger galaxies begin to form and accumulate enough gas and dust to become prolific star factories. Indeed, astronomers have observed that these star factories—called starburst galaxies—became prevalent a couple of billion years after the Big Bang.

But now a team of astronomers, which includes several from the California Institute of Technology (Caltech), has discovered a dust-filled, massive galaxy churning out stars when the cosmos was a mere 880 million years old—making it the earliest starburst galaxy ever observed.

The galaxy is about as massive as our Milky Way, but produces stars at a rate 2,000 times greater, a rate as high as that of any galaxy in the universe. Generating the mass equivalent of 2,900 suns per year, the galaxy is especially prodigious—prompting the team to call it a "maximum-starburst" galaxy.
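
Those two figures are mutually consistent, as a quick check shows (my own arithmetic, with rounded numbers):

```python
# Consistency check: 2,900 solar masses per year at 2,000 times the
# Milky Way's rate implies a Milky Way star-formation rate of about
# 1.45 solar masses per year, in line with common estimates of one to two.
print(2900 / 2000)  # 1.45
```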

"Massive, intense starburst galaxies are expected to only appear at later cosmic times," says Dominik Riechers, who led the research while a senior research fellow at Caltech. "Yet, we have discovered this colossal starburst just 880 million years after the Big Bang, when the universe was at little more than 6 percent of its current age." Now an assistant professor at Cornell, Riechers is the first author of the paper describing the findings in the April 18 issue of the journal Nature.

While the discovery of this single galaxy isn't enough to overturn current theories of galaxy formation, finding more galaxies like this one could challenge those theories, the astronomers say. At the very least, theories will have to be modified to explain how this galaxy, dubbed HFLS3, formed, Riechers says.

"This galaxy is just one spectacular example, but it's telling us that extremely vigorous star formation was possible early in the universe," says Jamie Bock, professor of physics at Caltech and a coauthor of the paper.

The astronomers found HFLS3 chock full of molecules such as carbon monoxide, ammonia, hydroxide, and even water. Because most of the elements in the universe—other than hydrogen and helium—are fused in the nuclear furnaces of stars, such a rich and diverse chemical composition is indicative of active star formation. And indeed, Bock says, the chemical composition of HFLS3 is similar to those of other known starburst galaxies that existed later in cosmic history.

Last month, a Caltech-led team of astronomers—a few of whom are also authors on this newer work—discovered dozens of similar galaxies that were producing stars as early as 1.5 billion years after the Big Bang. But none of them existed as early as HFLS3, which has been studied in much greater detail.

Those previous observations were made possible by gravitational lensing, in which large foreground galaxies act as cosmic magnifying glasses, bending the light of the starburst galaxies and making their detection easier. HFLS3, however, is only weakly lensed, if at all. The fact that it was detectable without the help of lensing means that it is intrinsically a bright galaxy in far-infrared light—nearly 30 trillion times as luminous as the sun and 2,000 times more luminous than the Milky Way.

Because the galaxy is enshrouded in dust, it's very faint in visible light. The galaxy's stars, however, heat up the dust, causing it to radiate in infrared wavelengths. The astronomers were able to find HFLS3 as they sifted through data taken by the European Space Agency's Herschel Space Observatory, which studies the infrared universe. The data was part of the Herschel Multi-tiered Extragalactic Survey (HerMES), an effort co-coordinated by Bock to observe a large patch of the sky (roughly 1,300 times the size of the moon) with Herschel.

Amid the thousands of galaxies detected in the survey, HFLS3 appeared as just a faint dot—but a particularly red one. That caught the attention of Darren Dowell, a visiting associate at Caltech who was analyzing the HerMES data. The object's redness meant that its light was being substantially stretched toward longer (and redder) wavelengths by the expansion of the universe. The more distant an object, the more its light is stretched, and so a very red source would be very far away. The only other possibility would be that—because cooler objects emit light at longer wavelengths—the object might be unusually cold; the astronomers' analysis, however, ruled out that possibility. Because it takes the light billions of years to travel across space, seeing such a distant object is equivalent to looking deep into the past. "We were hoping to find a massive starburst galaxy at vast distances, but we did not expect that one would even exist that early in the universe," Riechers says.
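
The stretching follows the standard cosmological redshift relation; the sketch below uses HFLS3's redshift of 6.34 (quoted later from the paper's title) and an illustrative rest-frame wavelength.

```python
# Standard redshift relation: observed wavelength = (1 + z) * emitted
# wavelength. z = 6.34 is HFLS3's redshift (from the paper's title);
# Lyman-alpha is just an illustrative line.
z = 6.34
rest_nm = 121.6                                   # hydrogen Lyman-alpha
print(f"observed: {(1 + z) * rest_nm:.0f} nm")    # ~893 nm, shifted into the infrared

# Consistency check on the quoted age (rounded cosmology: 13.8 Gyr):
print(f"fraction of current age: {0.88 / 13.8:.1%}")  # ~6.4 percent
```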

To study HFLS3 further, the astronomers zoomed in with several other telescopes. Using the Combined Array for Research in Millimeter-Wave Astronomy (CARMA)—a series of telescope dishes that Caltech helps operate in the Inyo Mountains of California—as well as the Z-Spec instrument on the Caltech Submillimeter Observatory on Mauna Kea in Hawaii, the team was able to study the chemical composition of the galaxy in detail—in particular, the presence of water and carbon monoxide—and measure its distance. The researchers also used the 10-meter telescope at the W. M. Keck Observatory on Mauna Kea to determine to what extent HFLS3 was gravitationally lensed.

This galaxy is the first such object in the HerMES survey to be analyzed in detail. This type of galaxy is rare, the astronomers say, but to determine just how rare, they will pursue more follow-up studies to see if they can find more of them lurking in the HerMES data. These results also hint at what may soon be discovered with larger infrared observatories, such as the new Atacama Large Millimeter/submillimeter Array (ALMA) in Chile and the planned Cerro Chajnantor Atacama Telescope (CCAT), of which Caltech is a partner institution.

The title of the Nature paper is "A Dust-Obscured Massive Maximum-Starburst Galaxy at a Redshift of 6.34." In addition to Riechers, Bock, and Dowell, the other Caltech authors of the paper are visiting associates in physics Matt Bradford, Asantha Cooray, and Hien Nguyen; postdoctoral scholars Carrie Bridge, Attila Kovacs, Joaquin Vieira, Marco Viero, and Michael Zemcov; staff research scientist Eric Murphy; and Jonas Zmuidzinas, the Merle Kingsley Professor of Physics and the Chief Technologist at NASA's Jet Propulsion Laboratory (JPL). There are a total of 64 authors. Bock, Dowell, and Nguyen helped build the Spectral and Photometric Imaging Receiver (SPIRE) instrument on Herschel.

Herschel is a European Space Agency cornerstone mission, with science instruments provided by consortia of European institutes and with important participation by NASA. NASA's Herschel Project Office is based at JPL in Pasadena, California. JPL contributed mission-enabling technology for two of Herschel's three science instruments. The NASA Herschel Science Center, part of the Infrared Processing and Analysis Center at Caltech in Pasadena, supports the U.S. astronomical community. Caltech manages JPL for NASA.

The W. M. Keck Observatory operates the largest, most scientifically productive telescopes on Earth. The two 10-meter optical/infrared telescopes on the summit of Mauna Kea on the island of Hawaii feature a suite of advanced instruments including imagers, multi-object spectrographs, high-resolution spectrographs, integral-field spectrographs, and a world-leading laser guide-star adaptive optics system. The observatory is operated by a private 501(c)(3) nonprofit organization and is a scientific partnership of the California Institute of Technology, the University of California, and NASA.

Writer: Marcus Woo

Picking Apart Photosynthesis

New insights from Caltech chemists could lead to better catalysts for water splitting

PASADENA, Calif.—Chemists at the California Institute of Technology (Caltech) and the Lawrence Berkeley National Laboratory believe they can now explain one of the remaining mysteries of photosynthesis, the chemical process by which plants convert sunlight into usable energy and generate the oxygen that we breathe. The finding suggests a new way of approaching the design of catalysts that drive the water-splitting reactions of artificial photosynthesis.

"If we want to make systems that can do artificial photosynthesis, it's important that we understand how the system found in nature functions," says Theodor Agapie, an assistant professor of chemistry at Caltech and principal investigator on a paper in the journal Nature Chemistry that describes the new results.

One of the key pieces of biological machinery that enables photosynthesis is a conglomeration of proteins and pigments known as photosystem II. Within that system lies a small cluster of atoms, called the oxygen-evolving complex, where water molecules are split and molecular oxygen is made. Although this oxygen-producing process has been studied extensively, the role that various parts of the cluster play has remained unclear. 

The oxygen-evolving complex performs a reaction that requires the transfer of electrons, making it an example of what is known as a redox, or oxidation-reduction, reaction. The cluster can be described as a "mixed-metal cluster" because in addition to oxygen, it includes two types of metals—one that is redox active, or capable of participating in the transfer of electrons (in this case, manganese), and one that is redox inactive (calcium).

"Since calcium is redox inactive, people have long wondered what role it might play in this cluster," Agapie says.

It has been difficult to solve that mystery in large part because the oxygen-evolving complex is just a cog in the much larger machine that is photosystem II; it is hard to study the smaller piece because there is so much going on with the whole. To get around this, Agapie's graduate student Emily Tsui prepared a series of compounds that are structurally related to the oxygen-evolving complex. She built upon an organic scaffold in a stepwise fashion, first adding three manganese centers and then attaching a fourth metal. By varying that fourth metal to be calcium and then different redox-inactive metals, such as strontium, sodium, yttrium, and zinc, Tsui was able to compare the effects of the metals on the chemical properties of the compound.

"When making mixed-metal clusters, researchers usually mix simple chemical precursors and hope the metals will self-assemble in desired structures," Tsui says. "That makes it hard to control the product. By preparing these clusters in a much more methodical way, we've been able to get just the right structures."

It turns out that the redox-inactive metals affect the way electrons are transferred in such systems. To make molecular oxygen, the manganese atoms must activate the oxygen atoms connected to the metals in the complex. In order to do that, the manganese atoms must first transfer away several electrons. Redox-inactive metals that tug more strongly on the electrons of the oxygen atoms make it more difficult for manganese to do this. But calcium does not draw electrons strongly toward itself. Therefore, it allows the manganese atoms to transfer away electrons and activate the oxygen atoms that go on to make molecular oxygen.

A number of the catalysts that are currently being developed to drive artificial photosynthesis are mixed-metal oxide catalysts. It has again been unclear what role the redox-inactive metals in these mixed catalysts play. The new findings suggest that the redox-inactive metals affect the way the electrons are transferred. "If you pick the right redox-inactive metal, you can tune the reduction potential to bring the reaction to the range where it is favorable," Agapie says. "That means we now have a more rational way of thinking about how to design these sorts of catalysts because we know how much the redox-inactive metal affects the redox chemistry."

The paper in Nature Chemistry is titled "Redox-inactive metals modulate the reduction potential in heterometallic manganese-oxido clusters." Along with Agapie and Tsui, Rosalie Tran and Junko Yano of the Lawrence Berkeley National Laboratory are also coauthors. The work was supported by the Searle Scholars Program, an NSF CAREER award, and the NSF Graduate Research Fellowship Program. X-ray spectroscopy work was supported by the NIH and the DOE Office of Basic Energy Sciences. Synchrotron facilities were provided by the Stanford Synchrotron Radiation Lightsource, operated by the DOE Office of Biological and Environmental Research. 

Writer: Kimm Fesenmaier

Counting White Blood Cells at Home

Caltech engineers lead development of a new portable counter

PASADENA, Calif.—White blood cells, or leukocytes, are the immune system's warriors. So when an infection or disease attacks the body, the system typically responds by sending more white blood cells into the fray. This means that checking the number of these cells is a relatively easy way to detect and monitor such conditions.

Currently, most white blood cell counts are performed with large-scale equipment in central clinical laboratories. If a physician collects blood samples from a patient in the office—usually requiring a full vial of blood for each test—it can take days to get the results. But now engineers at the California Institute of Technology (Caltech), working with a collaborator from the Jerusalem-based company LeukoDx, have developed a portable device to count white blood cells that needs less than a pinprick's worth of blood and takes just minutes to run.

"The white blood cell counts from our new system closely match the results from tests conducted in hospitals and other central clinical settings," says Yu-Chong Tai, professor of electrical engineering and mechanical engineering at Caltech and the project's principal investigator. "This could make point-of-care testing possible for the first time."

Portable white blood cell counters could improve outpatient monitoring of patients with chronic conditions such as leukemia or other cancers. The counters could be used in combination with telemedicine to bring medical care to remote areas. The devices could even enable astronauts to evaluate their long-term exposure to radiation while they are still in space. The researchers describe the work in the April 7 issue of the journal Lab on a Chip.

There are five subtypes of white blood cells, and each serves a different function, which means it's useful to know the count for all of them. In general, lymphocytes use antibodies to attack certain viruses and bacteria; neutrophils are especially good at combating bacteria; eosinophils target parasites and certain infections; monocytes respond to inflammation and replenish white blood cells within bodily tissue; and basophils, the rarest of the subtypes, attack certain parasites.

"If we can give you a quick white blood cell count right in the doctor's office," says Wendian Shi, a graduate student in Tai's lab and lead author of the new paper, "you can know right away if you're dealing with a viral infection or a bacterial infection, and the doctor can prescribe the right medication."

The prototype device is able to count all five subtypes of white blood cells within a sample. It provides an accurate differential of the four major subtypes—lymphocytes, monocytes, eosinophils, and neutrophils. In addition, it could be used to flag an abnormally high level of the fifth subtype, basophils, which are normally too rare (representing less than one percent of all white blood cells) for accurate detection in clinical tests.

The entire new system fits in a small suitcase (12" x 9" x 5") and could easily be made into a handheld device, the engineers say.

A major development reported in the new paper is the creation of a detection assay that uses three dyes to stain white blood cells so that they emit light, or fluoresce, brightly in response to laser light. Blood samples are treated with this dye assay before measurement in the new device. The first dye binds strongly to the DNA found in the nucleus of white blood cells, making it simple to distinguish between white blood cells and the red blood cells that surround and outnumber them. The other two dyes help differentiate between the subtypes.

The heart of the new device is a 50-micrometer-long transparent channel made out of a silicone material with a cross section of only 32 micrometers by 28 micrometers—small enough to ensure that only one white blood cell at a time can flow through the detection region. The stained blood sample flows through this microfluidic channel to the detection region, where it is illuminated with a laser, causing it to fluoresce. The resulting emission of the sample is then split by a mirror into two beams, representing the green and red fluorescence.

Thanks to the dye assay, the white blood cell subtypes emit characteristic amounts of red and green light. Therefore, by determining the intensity of the emissions for each detected cell, the device can generate highly accurate differential white blood cell counts.
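
Schematically, the counting logic amounts to two-color gating; in the sketch below, the thresholds and sample intensities are hypothetical stand-ins, not the device's calibrated values.

```python
# Schematic of two-color gating: each detected cell yields a red and a
# green fluorescence intensity, and the four major subtypes occupy
# distinct regions of that plane. Thresholds and data are hypothetical.
from collections import Counter

def classify(red: float, green: float) -> str:
    if red > 0.7 and green < 0.3:
        return "eosinophil"
    if red > 0.5:
        return "neutrophil"
    if green > 0.6:
        return "monocyte"
    return "lymphocyte"

# (red, green) intensities for detected cells -- made-up sample stream
cells = [(0.8, 0.2), (0.6, 0.5), (0.2, 0.7), (0.1, 0.2), (0.15, 0.25)]
print(Counter(classify(r, g) for r, g in cells))
# Counter({'lymphocyte': 2, 'eosinophil': 1, 'neutrophil': 1, 'monocyte': 1})
```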

Shi says his ultimate goal is to develop a portable device that can help patients living with chronic diseases at home. "For these patients, who struggle to find a balance between their treatment and their normal quality of life, we would like to offer a device that will help them monitor their conditions at home," he says. "It would be nice to limit the number of trips they need to make to the hospital for testing."

The Lab on a Chip paper is titled "Four-part leukocyte differential count based on sheathless microflow cytometer and fluorescent dye assay." In addition to Tai and Shi, the coauthors on the paper are Luke Guo, a graduate student at MIT who worked on the project as an undergraduate student at Caltech, and Harvey Kasdan of LeukoDx Inc. in Jerusalem, Israel. The work was supported by the National Space Biomedical Research Institute under a NASA contract.

Writer: Kimm Fesenmaier
