Caltech Researchers Announce Invention of the Optofluidic Microscope

PASADENA, Calif.—The old optical microscopes that everyone used in high-school biology class may be a step closer to the glass heap. Researchers at the California Institute of Technology have announced their invention of an optofluidic microscope that uses no lens elements and could revolutionize the diagnosis of certain diseases such as malaria.

Reporting in the journal Lab on a Chip, Changhuei Yang, Caltech assistant professor of electrical engineering, and his coauthors describe the novel device, which combines chip technology with microfluidics. Although similar in resolution and magnifying power to a conventional top-quality optical microscope, the optofluidic microscope chip is only the size of a quarter, and the entire device—imaging screen and all—will be about the size of an iPod.

"This is a new way of doing microscopy," says Yang, who also has a dual appointment in bioengineering at Caltech. "Its imaging principle is similar to the way we see floaters in our eyes. If you can see it in a conventional microscope and it can flow in a microfluidic channel, we can image it with this tiny chip."

That list of target objects includes many pathogens that are most dangerous to human life and health, including the organism that causes malaria. The typical method of diagnosing malaria is to draw a blood sample and send it to a lab where the sample can be inspected for malaria parasites. A high-powered optical microscope with lens elements is far too big and cumbersome for inspection of samples in the field.

With a palm-sized optofluidic microscope, however, a doctor would be able to draw a drop of blood from the patient and analyze it immediately. This process would be much simpler and faster than the current method, and the equipment would be far cheaper and more readily available to physicians in third-world countries.

The device works by literally flowing a target sample across a tiny fluid pathway. Normally, the resulting image would be low in resolution, because each point of the target interrupts the light over an entire sensor pixel, limiting the resolution to the pixel size.

However, the researchers have avoided this limitation by attaching an opaque metal film to a microfluidic chip. The film contains an etched array of submicron apertures that are spaced in such a way that adjacent line scans overlap and all parts of the target are imaged.
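To illustrate the idea, here is a minimal sketch, in Python, of how a row of apertures skewed across the flow direction can be turned into a two-dimensional image: each aperture records a time trace of transmitted light, and the traces are aligned according to each aperture's offset along the flow. The function name and parameters are purely hypothetical, not the authors' actual reconstruction code.

    import numpy as np

    def reconstruct_image(traces, aperture_offsets_um, flow_speed_um_per_s, dt_s):
        # traces: array of shape (n_apertures, n_samples); one light-intensity
        # time series per aperture, sampled every dt_s seconds.
        # aperture_offsets_um: each aperture's position along the flow axis.
        n_apertures, _ = traces.shape
        image = np.empty_like(traces)
        for i in range(n_apertures):
            # Convert the aperture's spatial offset into a time delay, then shift
            # that aperture's trace so every row lines up on the same sample feature.
            shift = int(round(aperture_offsets_um[i] / (flow_speed_um_per_s * dt_s)))
            image[i] = np.roll(traces[i], -shift)
        # Rows now index cross-flow position (one per aperture); columns index
        # position along the flow, so the stacked traces form a 2-D picture.
        return image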

The new optofluidic microscope is one of the first major accomplishments to come out of Caltech's Center for Optofluidic Integration, which was begun in 2004 with funding from the federal Defense Advanced Research Projects Agency (DARPA) for development of a new generation of small-scale, highly adaptable, and innovative optical devices.

"The basic idea of the center is to build optical devices for imaging, fiber optics, communications, and other applications, and to transcend some of the limitations of optical devices made out of traditional materials like glass," says Demetri Psaltis, who is the Myers Professor of Electrical Engineering at Caltech and a coauthor of the paper. "This is probably the most important result so far showing how we can build very unique devices that can have a broad impact."

Xin Heng, a graduate student in electrical engineering at Caltech, performed most of the experiments reported in the paper. The other Caltech authors are David Erickson, a former postdoctoral scholar who is now a mechanical-engineering professor at Cornell University; L. Ryan Baugh, a postdoctoral scholar in biology; Zahid Yaqoob, a postdoctoral scholar in electrical engineering; and Paul W. Sternberg, the Morgan Professor of Biology.

Writer: Robert Tindol

Xena Awarded "Dwarf Planet" Status, IAU Rules; Solar System Now Has Eight Planets

PASADENA, Calif.—The International Astronomical Union (IAU) today downgraded the status of Pluto to that of a "dwarf planet," a designation that will also be applied to the spherical body discovered last year by California Institute of Technology planetary scientist Mike Brown and his colleagues. The decision means that only the rocky worlds of the inner solar system and the gas giants of the outer system will hereafter be designated as planets.

The ruling effectively settles a year-long controversy about whether the spherical body announced last year and informally named "Xena" would rise to planetary status. Somewhat larger than Pluto, the body has been informally known as Xena since the formal announcement of its discovery on July 29, 2005, by Brown and his co-discoverers, Chad Trujillo of the Gemini Observatory and David Rabinowitz of Yale University. Xena will now be known as the largest dwarf planet.

"I'm of course disappointed that Xena will not be the tenth planet, but I definitely support the IAU in this difficult and courageous decision," said Brown. "It is scientifically the right thing to do, and is a great step forward in astronomy.

"Pluto would never be considered a planet if it were discovered today, and I think the fact that we've now found one Kuiper-belt object bigger than Pluto underscores its shaky status."

Pluto was discovered in 1930. Because of its size and distance from Earth, astronomers had no idea of its composition or other characteristics at the time. But having no reason to think that many other similar bodies would eventually be found in the outer reaches of the solar system—or that a new type of body even existed in the region—they assumed that designating the new discovery as the ninth planet was a scientifically accurate decision.

However, about two decades later, the famed astronomer Gerard Kuiper postulated that a region in the outer solar system could house a gigantic number of comet-like objects too faint to be seen with the telescopes of the day. The Kuiper belt, as it came to be called, was demonstrated to exist in the 1990s, and astronomers have been finding objects of varying size in the region ever since.

Few if any astronomers had previously called for the Kuiper-belt objects to be called planets, because most were significantly smaller than Pluto. But the announcement of Xena's discovery raised a new need for a more precise definition of which objects are planets and which are not.

According to Brown, the decision will pose a difficulty for a public that has been accustomed to thinking for the last 75 years that the solar system has nine planets.

"It's going to be a difficult thing to accept at first, but we will accept it eventually, and that's the right scientific and cultural thing to do," Brown says.

In fact, the public has had some experience with the demotion of a planet in the past, although not in living memory. Astronomers discovered the asteroid Ceres on January 1, 1801—literally at the turn of the 19th century. Having no reason to suspect that a new class of celestial object had been found, scientists designated it the eighth planet (Uranus having been discovered some 20 years earlier).

Soon several other asteroids were discovered, and these, too, were summarily designated as newly found planets. But when astronomers continued finding numerous other asteroids in the region (there are thought to be hundreds of thousands), the astronomical community in the early 1850s demoted Ceres and the others and coined the new term "minor planet."

Xena was discovered on January 8, 2005, at Palomar Observatory with the NASA-funded 48-inch Samuel Oschin Telescope. Xena is about 2,400 kilometers in diameter. A Kuiper-belt object like Pluto, but slightly less reddish-yellow, Xena is currently visible in the constellation Cetus to anyone with a top-quality amateur telescope.

Brown and his colleagues in late September announced that Xena has at least one moon. This body has been nicknamed Gabrielle, after Xena's sidekick on the television series.

Xena is currently about 97 astronomical units from the sun (an astronomical unit is the distance between the sun and Earth), which means that it is some nine billion miles away at present. Xena is on a highly elliptical 560-year orbit, sweeping in as close to the sun as 38 astronomical units. Currently, however, it is nearly as far away as it ever gets.

Pluto's own elliptical orbit takes it as far away as 50 astronomical units from the sun during its 250-year revolution. This means that Xena is sometimes much closer to Earth than Pluto—although never closer than Neptune.
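For readers who want to check the distances quoted above, the conversion from astronomical units is straightforward. A rough check, using the standard value of about 93 million miles per AU:

    AU_MILES = 92.96e6                      # one astronomical unit, in miles
    print(97 * AU_MILES / 1e9)              # Xena today: ~9.0 billion miles
    print(38 * AU_MILES / 1e9)              # Xena at its closest approach to the sun: ~3.5 billion miles
    print(50 * AU_MILES / 1e9)              # Pluto at its farthest: ~4.6 billion miles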

Gabrielle is about 250 kilometers in diameter and reflects only about 1 percent of the sunlight that its parent reflects. Because of its small size, Gabrielle could be oddly shaped.

Brown says that the study of Gabrielle's orbit around Xena hasn't yet been fully completed. But once it is, the researchers will be able to derive the mass of Xena itself from Gabrielle's orbit. This information will lead to new insights on Xena's composition.

Based on spectral data, the researchers think Xena is covered with a layer of methane that has seeped from the interior and frozen on the surface. As in the case of Pluto, the methane has undergone chemical transformations, probably due to the faint solar radiation, that have caused the methane layer to redden. But the methane surface on Xena is somewhat more yellowish than the reddish-yellow surface of Pluto, perhaps because Xena is farther from the sun.

Brown and Trujillo first photographed Xena with the 48-inch Samuel Oschin Telescope on October 31, 2003. However, the object was so far away that its motion was not detected until they reanalyzed the data in January of 2005.

The search for new planets and other bodies in the Kuiper belt is funded by NASA. For more information on the program, see the Samuel Oschin Telescope's website at http://www.astro.caltech.edu/palomarnew/sot.html.

For more information on Mike Brown's research, see http://www.gps.caltech.edu/~mbrown.

Writer: Robert Tindol

NSF-Funded Wireless Network Leads Palomar Observatory Astronomers to Major Discoveries

For the past three years, astronomers at the California Institute of Technology's Palomar Observatory in Southern California have been using the High Performance Wireless Research and Education Network (HPWREN) as the data transfer cyberinfrastructure to further our understanding of the universe. Recent applications include the study of some of the most cataclysmic explosions in the universe, the hunt for extrasolar planets, and the discovery of our solar system's tenth planet. The data for all this research is transferred via HPWREN from the remote mountain observatory to college campuses hundreds of miles away.

Funded by the National Science Foundation, HPWREN provides Palomar Observatory with a high-speed network connection that helps enable new ways of undertaking astronomy research consistent with the data demands of today's scientists. Specifically, the HPWREN bandwidth allows astronomers to transfer a 100MB image from a telescope camera at Palomar to their campus laboratories in less than 30 seconds.
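That 30-second figure is easy to verify with back-of-the-envelope arithmetic, ignoring protocol overhead; the numbers below are simply those quoted above:

    image_megabytes = 100                    # size of one camera image
    link_megabits_per_second = 45            # HPWREN backbone speed
    transfer_seconds = image_megabytes * 8 / link_megabits_per_second
    print(round(transfer_seconds, 1))        # ~17.8 s, comfortably under 30 seconds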

"The Palomar Observatory is by far our most bandwidth-demanding partner," says Hans-Werner Braun, HPWREN principal investigator, a research scientist with the San Diego Supercomputer Center at UC San Diego. "Palomar is able to run the 45 megabits-per-second HPWREN backbone flat out and will be able to utilize substantially more bandwidth in the future. The current plan is to upgrade critical links that support the observatory to 155 Mbps and create a redundant 45 Mbps path for a combined 200 megabits-per-second access speed at the observatory."

Last summer astronomers making use of the Palomar 48-inch Samuel Oschin Telescope announced the discovery of what some are calling our solar system's tenth planet. The object has been confirmed to be larger than Pluto. The telescope uses a 161-million-pixel camera, one of the largest and most capable in the world. HPWREN enables a large volume of data to be moved off the mountain to each astronomer's home base. Modern digital technology, with pipeline processing of the data produced, enables astronomers to detect very faint moving and transient objects.

To find these objects, the telescope takes a relatively short exposure of a section of the sky. It then goes off and images a pre-arranged sequence of such target fields. After a period of time it comes back and repeats the sequence. Then it does it again after another interval. Any objects that are visible in all three images, but move consistently with respect to the background star field, are solar system objects such as asteroids, comets or Kuiper Belt objects. Because of the large amount of data, pipeline processing is used both to detect such objects and to calculate their preliminary orbits from the initial triplet data. Sedna and the tenth planet, 2003UB313, were found using this technique, as were a large number of Near Earth Asteroids by the Jet Propulsion Laboratory's Near-Earth Asteroid Tracking (NEAT) program.
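The triplet logic can be sketched in a few lines of Python. This is only an illustration of the principle, with made-up function names and a made-up matching tolerance, not the pipeline actually run at Palomar or JPL: a candidate is kept if it appears in all three epochs and its three positions are consistent with motion at a constant rate and direction against the fixed star field.

    def find_movers(epoch1, epoch2, epoch3, times, tol=1.0):
        # Each epoch is a list of (x, y) source positions (e.g., in arcseconds)
        # extracted from one exposure; times are the three exposure midpoints.
        t1, t2, t3 = times
        movers = []
        for p1 in epoch1:
            for p2 in epoch2:
                # Velocity implied by the first pair of detections.
                vx = (p2[0] - p1[0]) / (t2 - t1)
                vy = (p2[1] - p1[1]) / (t2 - t1)
                # Predict where that object should sit in the third image ...
                pred = (p2[0] + vx * (t3 - t2), p2[1] + vy * (t3 - t2))
                # ... and keep the triplet only if a third detection matches.
                for p3 in epoch3:
                    if abs(p3[0] - pred[0]) < tol and abs(p3[1] - pred[1]) < tol:
                        movers.append((p1, p2, p3))
        return movers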

The Nearby Supernova Factory piggybacks its hunt for a certain type of exploding star, known as Type Ia supernovae, on the data collected by the NEAT program, and then uses the observations of these supernovae as "standard candles" for measuring the accelerating expansion of the universe. To date the survey has discovered about 350 supernovae, including 90 Type Ia supernovae. Greg Aldering of the University of California's Lawrence Berkeley Laboratory says, "The recent discovery that the expansion of the universe is speeding up has turned the fields of cosmology and fundamental physics on their heads. The QUEST camera and the speedy HPWREN link are giving us an unprecedented sample of supernovae for pursuing this exciting discovery. The Palomar supernovae will be compared with supernovae from the Hubble Space Telescope and other telescopes to try to determine what is causing this acceleration."

Gamma-ray bursts (GRBs) are among the universe's most mysterious and explosive events. They are briefly bright enough to be visible billions of light years away, but they are difficult to study because they are very short-lived and take place at seemingly random locations and times.

Astronomers rely on satellites such as Swift, which detects a GRB and immediately relays the information to observers worldwide via the Gamma-Ray Burst Coordinates Network.

If a gamma-ray burst occurs when it is dark and clear at Palomar, the observatory's robotic 60-inch telescope immediately slews to the coordinates provided and images the fading optical glow of the explosion. "The rapid response by the Palomar 60-inch telescope is possible only because of HPWREN. With it we have observed and categorized some of the most distant and energetic explosions in the universe," remarks Shri Kulkarni, MacArthur Professor of Astronomy and Planetary Science and director of the Caltech Optical Observatories. These observations have allowed astronomers to reach new frontiers by classifying the bursts and theorizing about their origins.

For the last decade astronomers have been using indirect methods and giant telescopes (like the Keck in Hawaii) to make their first discoveries of planets outside our solar system (called exoplanets). The smallest telescope at the Palomar Observatory is performing its own search for exoplanets. With a small telescope it is possible to detect a giant Jupiter-sized world that lies close to its parent star. By looking at a great many stars each night the HPWREN-powered Sleuth Telescope hopes to catch such a planet in the act of passing directly in front of its star. Such an eclipse, known as a transit, dims the light of the star by about one percent.
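The one-percent figure follows from simple geometry: the transit depth is roughly the ratio of the planet's disk area to the star's. A quick check with textbook radii for Jupiter and the sun, not numbers from the Sleuth survey itself:

    R_JUPITER_KM = 71_492                    # equatorial radius of Jupiter
    R_SUN_KM = 696_000                       # radius of the sun
    depth = (R_JUPITER_KM / R_SUN_KM) ** 2   # fraction of the star's light blocked
    print(f"{depth:.1%}")                    # ~1.1 percent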

Sleuth is an automated telescope, capable of observing target areas of the night sky without much human interaction. All the required actions are scripted in advance, and a computer running this script is placed in charge of the telescope. The observer can then get a good night's sleep and receive the data in the morning. The automated nature of this procedure allows for remote observing, so the observer need not even be on the mountain.

"Living in the modern age of astronomy has made observing much more efficient. Every night we transfer about 4 gigabytes of data via HPWREN from Sleuth to Caltech in Pasadena. It is on my computer and analyzed before I arrive at work in the morning," says Caltech graduate student Francis O'Donovan. "The ability to process the previous night's data enables us to quickly check the quality of that data. We can then ensure the telescope is operational before beginning the next night's observations."

"The current HPWREN funding supports research into an understanding, prioritization, and policy-based re-routing of data network traffic, something the bursty and predominantly night-time high-volume observatory traffic is very useful for," explains Braun. "This being alongside other research and education traffic, also including continuous low-volume sensor data with tight real-time requirements, creates an ideal testbed for this network research as well."

The High Performance Wireless Research and Education Network program is an interdisciplinary and multi-institutional UC San Diego research program led by principal investigator Hans-Werner Braun at the San Diego Supercomputer Center and co-principal investigator Frank Vernon at the Scripps Institution of Oceanography. HPWREN is based on work funded by the National Science Foundation. The HPWREN web site is at http://hpwren.ucsd.edu/

More information on the tenth planet and how it was found can be seen at: http://www.gps.caltech.edu/~mbrown/planetlila/index.html and http://www.astro.caltech.edu/palomar/survey.html

For more on Palomar's gamma-ray burst research, go to: http://www.astro.caltech.edu/palomar/grb.html and http://www.astro.caltech.edu/palomar/exhibits/grb/

For more information on Sleuth: http://www.astro.caltech.edu/~ftod/tres/sleuth.html

Contact: Scott Kardel (760) 742-2111 wsk@astro.caltech.edu

Writer: SK

Researchers Announce New Way to Assess How Buildings Would Stand Up in Big Quakes

PASADENA, Calif.—How much damage will certain steel-frame, earthquake-resistant buildings located in Southern California sustain when a large temblor strikes? It's a complicated, multifaceted question, and researchers from the California Institute of Technology, the University of California, Santa Barbara, and the University of Pau, France, have answered it with unprecedented specificity using a new modeling protocol.

The results, which involve supercomputer simulations of what could happen to specific areas of greater Los Angeles in specific earthquake scenarios, were published in the latest issue of the Bulletin of the Seismological Society of America, the premier scientific journal dedicated to earthquake research.

"This study has brought together state-of-the-art 3-D-simulation tools used in the fields of earthquake engineering and seismology to address important questions that people living in seismically active regions around the world worry about," says Swaminathan Krishnan, a postdoctoral scholar in geophysics at Caltech and lead author of the study.

"What if a large earthquake occurred on a nearby fault? Would a particular building withstand the shaking? This prototype study illustrates how, with the help of high-performance computing, 3-D simulations of earthquakes can be combined with 3-D nonlinear analyses of buildings to provide realistic answers to these questions in a quantitative manner."

The publication of the paper is an ambitious attempt by the researchers to enhance and improve the methodology used to assess building integrity, says Jeroen Tromp, the McMillan Professor of Geophysics and director of the Seismological Laboratory at Caltech. "We are trying to change the way in which seismologists and engineers approach this difficult interdisciplinary problem," Tromp says.

The research simulates the effects that two different 7.9-magnitude San Andreas earthquakes would have on two hypothetical 18-story steel frame buildings located at 636 sites on a grid that covers the Los Angeles and San Fernando basins. An earthquake of this magnitude occurred on the San Andreas on January 9, 1857, and seismologists generally agree that the fault has the potential for such an event every 200 to 300 years. To put this in context, the much smaller January 17, 1994, Northridge earthquake of 6.7 magnitude caused 57 deaths and economic losses of more than $40 billion.

The simulated earthquakes "rupture" a 290-kilometer section of the San Andreas fault between Parkfield in the Central Valley and Southern California, one earthquake with rupture propagating southward and the other with rupture propagating northward. The first building is a model of an actual 18-story, steel moment-frame building located in the San Fernando Valley. It was designed according to the 1982 Uniform Building Code (UBC) standards yet suffered significant damage in the 1994 Northridge earthquake due to fracture of the welds connecting the beams to the columns. The second building is a model of the same San Fernando Valley structure redesigned to the stricter 1997 UBC standards.

Using a high-performance PC cluster, the researchers simulated both earthquakes and the damage each would cause to the two buildings at each of the 636 grid sites. They assessed the damage to each building based on "peak interstory drift."

Interstory drift is the difference between the roof and floor displacements of any given story as the building sways during the earthquake, normalized by the story height. For example, for a 10-foot high story, an interstory drift of 0.10 indicates that the roof is displaced one foot in relation to the floor below.

The greater the drift, the greater the likelihood of damage. Peak interstory drift values larger than 0.06 indicate severe damage, while values larger than 0.025 indicate damage significant enough to pose a threat to human safety. Values in excess of 0.10 indicate probable building collapse.
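As an illustration only, peak interstory drift and the damage categories quoted above could be computed along these lines; the displacement histories and story heights are hypothetical inputs, not data from the study.

    def peak_interstory_drift(floor_displacements, story_heights):
        # floor_displacements: list of displacement time histories, one per floor
        # level from bottom to top; story_heights: height of each story, in the
        # same units as the displacements.
        peak = 0.0
        for i, h in enumerate(story_heights):
            drifts = [abs(top - bottom) / h
                      for top, bottom in zip(floor_displacements[i + 1],
                                             floor_displacements[i])]
            peak = max(peak, max(drifts))
        return peak

    def damage_class(drift):
        # Thresholds as quoted in the article.
        if drift > 0.10:
            return "probable collapse"
        if drift > 0.06:
            return "severe damage"
        if drift > 0.025:
            return "threat to human safety"
        return "limited damage"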

The study's conclusions include the following:

o A 7.9-magnitude San Andreas rupture from Parkfield to Los Angeles results in greater damage to both buildings than a rupture from Los Angeles to Parkfield. This difference is due to the effects of directivity and slip distribution controlling the ground-motion intensity. In the north-to-south rupture scenario, peak ground displacement is two meters in the San Fernando Valley and one meter in the Los Angeles basin; for the south-to-north rupture scenario, ground displacements are 0.6 meters and 0.4 meters respectively.

o In the north-to-south rupture scenario, peak drifts in the model of the existing building far exceed 0.10 in the San Fernando Valley, Santa Monica, and West Los Angeles, Baldwin Park and its neighboring cities, Compton and its neighboring cities, and Seal Beach and its neighboring cities. Peak drifts are in the 0.06-0.08 range in Huntington Beach, Santa Ana, Anaheim, and their neighboring cities, whereas the values are in the 0.04-0.06 range for the remaining areas, including downtown Los Angeles.

o The results for the redesigned building are better than for the existing building. Although the peak drifts in some areas in the San Fernando Valley still exceed 0.10, they are in the range of 0.04-0.06 for most cities in the Los Angeles basin.

o In the south-to-north rupture, the peak drifts in both the existing and redesigned building models are in the range of 0.02-0.04, suggesting that there is no significant danger of collapse. However, this is indicative of damage significant enough to warrant building closures and compromise human safety in some instances.

Such hazard analyses have numerous applications, Krishnan says. They could be performed on specific existing and proposed buildings in particular areas for a range of types of earthquakes, providing information that developers, building owners, city planners, and emergency managers could use to make better, more informed decisions.

"We have shown that these questions can be answered, and they can be answered in a very quantitative way," Krishnan says.

The research paper is "Case Studies of Damage to Tall Steel Moment-Frame Buildings in Southern California during Large San Andreas Earthquakes," by Swaminathan Krishnan, Chen Ji, Dimitri Komatitsch, and Jeroen Tromp. Online movies of the earthquakes and building-damage simulations can be viewed at http://www.ce.caltech.edu/krishnan.

Contact: Jill Perry (626) 395-3226 jperry@caltech.edu

Writer: RT

Interdisciplinary Team Demonstrates New Technique for Manipulation of "Light Beams"

PASADENA, Calif.—It may be surprising that a laser beam, when shot to the moon and returned by one of the mirrors the Apollo astronauts left behind, is a couple of miles in diameter at the end of its half-million-mile round trip. This spread is mostly due to atmospheric distortions, but it nonetheless underscores the problems posed to those who wish to keep laser beams from diverging or focusing to a point as light travels through a medium.

Now a team of physicists, mathematicians, and electrical engineers from the California Institute of Technology and the University of Massachusetts at Amherst has figured out a trick to keep light pulses from diverging or focusing. Using a multi-layer sandwich of glass plates alternating with air, the scientists have provided the first experimental demonstration of a procedure called "nonlinearity management." This technique wouldn't do anything for light traveling all the way to the moon, but could be useful in future generations of devices involving optical switching and optical information processing, for which precise control of laser pulses will be advantageous.

Reporting in the July 21, 2006, issue of Physical Review Letters, the researchers demonstrate that a laser beam passing through multiple layers of glass and air can be made to last much longer than if it had passed through only one type of medium. This procedure exploits a phenomenon known as the "Kerr effect," which causes the refractive index of an individual material to change if the light energy is sufficiently intense.
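For reference, the Kerr effect is conventionally written as an intensity-dependent refractive index; in the standard textbook form (not a formula quoted from the paper itself),

    n(I) = n0 + n2 I,

where n0 is the ordinary refractive index of the material, n2 is its Kerr coefficient, and I is the light intensity. Alternating between media with different effective nonlinearity is what allows the pulse to converge and diverge repeatedly while its average width stays roughly constant, which is the sense in which the nonlinearity is being "managed."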

When light is propagated only through glass, one obtains a focused beam so intense that it generates a plasma in the medium, stripping away its electrons. Using a multi-layer "Kerr sandwich" of glass and air, however, keeps the plasma from being created, because the different refractive indices of the media cause the light beam to diverge and converge several times.

"The idea is for the beam size on average to stay constant," says team member Mason Porter, a postdoctoral scholar in Caltech's Center for the Physics of Information.

The experimental setup was the work of Martin Centurion, also a postdoctoral researcher in the Center for the Physics of Information. According to Centurion, the laboratory apparatus consists of nine normal microscope slides, each about one millimeter thick, that are aligned parallel to each other at one-millimeter spacings. An intense femtosecond laser pulse is sent into the slides; the pulse converges while in the glass medium, but then diverges again while traversing the air gaps. The end result is a beam that is the same diameter when it emerges from the apparatus as it was when it entered, although it is slightly weaker due to reflection of a fraction of the energy at each interface.

The researchers say that the setup they used is intended to demonstrate that nonlinearity management can be performed, and it is not by any means the final version of a practical apparatus.

"This is focusing in space," Porter says. "If you could combine both space and time, you'd have a 'light bullet'-that is, a pulse that stays the same all the time."

Various devices in the future could be possible through nonlinearity management, adds Centurion, "but this is a demonstration that is pretty far from any applications."

"There are potential applications of the tight beams provided by the technique such as optical lithography and sensors," says Demetri Psaltis, the Myers Professor of Electrical Engineering at Caltech and another author of the paper.

The other author is Panayotis Kevrekidis, an associate professor of mathematics at the University of Massachusetts at Amherst.

The title of the paper is "Nonlinearity Management in Optics: Experiment, Theory, and Simulation."

Writer: Robert Tindol

Structural Biologists Get First Picture of Complete Bacterial Flagellar Motor

PASADENA, Calif.—When it comes to tiny motors, the flagella used by bacteria to get around their microscopic worlds are hard to beat. Composed of several tens of different types of protein, a flagellum (that's the singular) rotates about in much the same way that a rope would spin if mounted in the chuck of an electric drill, but at much higher speeds, about 300 revolutions per second.

Biologists at the California Institute of Technology have now succeeded for the first time in obtaining a three-dimensional image of the complete flagellum assembly using a new technology called electron cryotomography. Reporting in Nature, the scientists show in unprecedented detail both the rotor of the flagellum and the stator, or protein assembly that not only attaches the rotor to the cell wall, but also generates the torque that serves to rotate it.

The accomplishment is a tour de force within the field of structural biology, through which scientists seek to understand how cells work by determining the shapes and configurations of the proteins that make them up. The results could lead to better-designed nanomachines.

"Rotors have been isolated and studied in detail," explains lead author Grant Jensen, an assistant professor of biology at Caltech. "But in the past, researchers have been forced to break the motor into pieces and/or rip it out of the cell before they could observe it in the microscope. It was like trying to understand a car engine by looking through salvaged parts. Here we were able to see the whole motor intact, like an engine still under the hood and attached to the drive train."

In terms of basic science, Jensen says, the motor is intrinsically interesting because it is such a marvelous and complex "nanomachine." But the results of studying it may also one day help engineers, who might want to use its structure to design useful things.

"The process of taking science to practical applications goes from the observation of interesting phenomena, to mechanistic understanding, to exploitation," Jensen says. "Right now, we're somewhere between observation and the beginning of mechanistic understanding of this wonderful motor."

The bacterium used in the study was isolated from the hindguts of termites. Although beneficial to the termite host, the bacterium, belonging to a group of organisms known as spirochetes, is closely related to the causative agents of syphilis, Lyme disease, and several organisms thought to play a role in gum disease. In all these cases, swimming motility is implicated as a possible determinant in disease.

The article is titled "In situ structure of the complete Treponema primitia flagellar motor." It is available as an advance online publication of Nature at http://www.nature.com/nature/journal/vaop/ncurrent/full/nature05015.html.

The other authors are Gavin Murphy, a Caltech graduate student in biochemistry and molecular biophysics, and Jared R. Leadbetter, an associate professor of environmental microbiology at Caltech.

Photo caption: This image shows the three-dimensional reconstruction of the bacterial flagellar motor, as generated by electron cryotomography for the study. The rotor in the center (red) revolves at up to 300 revolutions per second, driven by the stator assembly (yellow) that is embedded in the cell wall.

Photo by the authors (Gavin Murphy, Jared Leadbetter, and Grant Jensen, Caltech)

Writer: Robert Tindol

Study of 8.7-Magnitude Earthquake Lends New Insight into Post-Shaking Processes

PASADENA, Calif.—Although the magnitude 8.7 Nias-Simeulue earthquake of March 28, 2005, was technically an aftershock, the temblor nevertheless killed more than 2,000 people in an area that had been devastated just three months earlier by the magnitude 9.1 earthquake of December 2004. Now, data returned from instruments in the field provide constraints on the behavior of dangerous faults in subduction zones, fueling a new understanding of the basic mechanics controlling slip on faults and, in turn, improved estimates of regional seismic risk.

In the June 30 issue of the journal Science, a team including Ya-Ju Hsu, Mark Simons, and others from the California Institute of Technology's new Tectonics Observatory and the University of California, San Diego, reports that its analysis of Global Positioning System (GPS) data taken at the time of the earthquake and during the following 11 months provides insights into how fault slippage and aftershock production are related.

"In general, the largest earthquakes occur in subduction zones, such as those offshore of Indonesia, Japan, Alaska, Cascadia, and South America," says Hsu, a postdoctoral researcher at the Tectonics Observatory and lead author of the paper. "Of course, these earthquakes can be extremely damaging either directly, or by the resulting tsunami.

"Therefore, understanding what causes the rate of production of aftershocks is clearly important to earthquake physics and disaster response," Hsu adds.

The study finds that the regions on the fault surrounding the area that slipped during the 8.7 earthquake experienced accelerated rates of slip following the March shock. The region dividing the area that slipped during the earthquake, and that which has slipped after the earthquake, is clearly demarcated by a band of intense aftershocks.

A primary conclusion of the paper is that there is a strong relationship between the production of aftershocks and post-earthquake fault slip; in other words, the frequency and location of aftershocks on a subduction megathrust are related to the amount and location of fault slip in the months following the main earthquake. Hsu and her colleagues believe that the aftershocks are controlled by the rate of aseismic fault slip after the earthquake.

"One conjecture is that, if the aseismic fault slip occurs quickly, then lots of aftershocks are produced," says Simons, an associate professor of geophysics at Caltech. "But there are other arguments suggesting that both the aftershocks and the post-earthquake aseismic fault slip are caused by some third underlying process."

In any case, Simons and Hsu say that the study demonstrates that the placing of additional remote sensors in subduction zones leads to better modeling of earthquake hazards. In particular, the study shows that the rheology, or mechanical properties, of the region can be inferred from the accumulation of postseismic data.

A map of the region constructed from the GPS data reveals that certain areas slip in different manners than others because some parts of the fault seem to be more "sticky." Because of the nature of seismic waves, the manner in which the fault slips in the months following a large earthquake has huge implications for human habitation.

"An important question is how slip on a fault varies as a function of time," Simons explains. "The extent to which an area slips is related to the risk, because you have a finite budget. Whether all the stress is released during earthquakes or whether it creeps is important for us to know. We would be very happy if all faults slipped as a slow creep, although I guess seismologists would be out of work."

The fact that the postseismic slip following the March 28, 2005, Nias-Simeulue earthquake can be modeled so intricately shows that other subduction zones can also be modeled, Hsu says. "In general, understanding the whole seismic cycle is very important. Most of the expected hazards of earthquakes occur in subduction zones."

The Tectonics Observatory is establishing a network of sensors in areas of active plate-boundary deformation such as Chile and Peru, the Kuril Islands off Japan, and Nepal. The observatory is supported by the Gordon and Betty Moore Foundation.

The other authors of the paper are Jean-Philippe Avouac, a professor of geology at Caltech and director of the Tectonics Observatory; Kerry Sieh, the Sharp Professor of Geology at Caltech; John Galetzka, a professional staff member at Caltech; Mohamed Chlieh, a postdoctoral scholar at Caltech; Danny Natawidjaja of the Indonesian Institute of Sciences; and Linette Prawirodirdjo and Yehuda Bock, both of the University of California at San Diego's Institute of Geophysics and Planetary Sciences.

Writer: Robert Tindol

Physicists Devise New Technique for Detecting Heavy Water

PASADENA, Calif.—Scientists at the California Institute of Technology have created a new method of detecting heavy water that is 30 times more sensitive than any other existing method. The detection method could be helpful in the fight against international nuclear proliferation.

In the June 15 issue of the journal Optics Letters, Caltech doctoral student Andrea Armani and her advisor, Professor Kerry Vahala, report that a special type of tiny optical device can be configured to detect heavy water. Called an optical microresonator, the device is shaped something like a mushroom and was originally designed three years ago to store light for future opto-electronic applications. With a diameter smaller than that of a human hair, the microresonator is made of silica and is coupled with a tunable laser.

The technique works because of the difference between the molecular composition of heavy water and that of regular water. An H2O molecule has two hydrogen atoms, each built of a single proton and a single electron. A D2O molecule, by contrast, has two atoms of a hydrogen isotope known as deuterium, each of which carries a neutron in addition to a proton and an electron. This makes a heavy-water molecule significantly more massive than a regular water molecule.
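The size of that mass difference is easy to put a number on, using standard atomic masses; these are textbook values, not figures taken from the Optics Letters paper:

    H, D, O = 1.008, 2.014, 15.999             # atomic masses, grams per mole
    m_h2o = 2 * H + O                          # ~18.0 g/mol for ordinary water
    m_d2o = 2 * D + O                          # ~20.0 g/mol for heavy water
    print(f"{m_d2o / m_h2o - 1:.0%} heavier")  # heavy water is ~11% more massive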

"Heavy water isn't a misnomer," says Armani, who is finishing up her doctorate in applied physics and will soon begin a two-year postdoctoral appointment at Caltech. Armani says that heavy water looks just like regular water to the naked eye, but an ice cube made of the stuff will sink if placed in regular water because of its added density. This difference in masses, in fact, is what makes the detection of heavy water possible in Armani and Vahala's new technique. When the microresonator is placed in heavy water, the difference in optical absorption results in a change in the "Q factor," which is a number used to measure how efficiently an optical resonator stores light. If a higher Q factor is detected than one would see for normal water, then more heavy water is present than the typical one-in-6,400 water molecules that exists normally in nature.

The technique is so sensitive that one heavy-water molecule in 10,000 can be detected, Armani says. Furthermore, the Q factor changes steadily as the heavy-water concentration is varied.

The results are good news for those who worry about the escalation of nuclear weapons, because heavy water is typically found wherever someone is trying to control a nuclear chain reaction. As a nuclear moderator, heavy water can be used to control the way neutrons bounce around in fissionable material, thereby making it possible for a fission reactor to be built.

The ongoing concern with heavy water is exemplified by the fact that Armani and Vahala have received funding for their new technique from the Defense Advanced Research Projects Agency, or DARPA. The federal agency provides grants to university research that has potential applications for U.S. national defense.

"This technique is 30 times better than the best competing detection technique, and we haven't yet tried to reduce noise sources," says Armani. "We think even greater sensitivities are possible."

The paper is entitled "Heavy Water Detection Using Ultra-High-Q Microcavities" and is available online at http://ol.osa.org/abstract.cfm?id=90020.

Writer: Robert Tindol

Palomar Observes Broken Comet

PALOMAR MOUNTAIN, Calif.—Astronomers have recently been enjoying front-row seats to a spectacular cometary show. Comet 73P/Schwassmann-Wachmann 3 is in the act of splitting apart as it passes close to Earth. The breakup is providing a firsthand look at the death of a comet.

Eran Ofek of the California Institute of Technology and Bidushi Bhattacharya of Caltech's Spitzer Science Center have been following the comet's tragic tale with the Palomar Observatory's 200-inch Hale Telescope. Their view is helping them and other scientists learn the secrets of comets and why they break up.

The comet was discovered by Arnold Schwassmann and Arno Arthur Wachmann 76 years ago, and it broke into four fragments just a decade ago. It has since split further into dozens, if not hundreds, of pieces.

"We've learned that Schwassmann-Wachmann 3 presents a very dynamic system, with many smaller fragments than previously thought," says Bhattacharya. In all, 16 new fragments were discovered as a part of the Palomar observations.

A sequence of images showing the piece of the comet known as fragment R has been assembled into a movie. The movie shows the comet in the foreground against distant stars and galaxies, which appear to streak across the images: because the comet was moving across the sky at a different rate than the stellar background, the telescope tracked the comet's motion rather than that of the stars. Fragment R and many smaller fragments of the comet are visible as nearly stationary objects in the movie.

"Seeing the many fragments was both an amazing and sobering experience," says a sleepy Eran Ofek, who has been working non-stop to produce these images and a movie of the comet's fragments.

The images used to produce the movie were taken over a period of about an hour and a half when the comet was approximately 17 million kilometers (10.6 million miles) from Earth. Astronomically speaking, the comet is making a close approach to Earth this month, giving astronomers their front-row seat to the comet's breakup. The closest approach by any fragment of the comet occurs on May 12, when a fragment will be just 5.5 million miles from Earth. This is more than 20 times the distance to the moon. There is no chance that the comet will hit Earth.
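The distances quoted above are easy to double-check, using the average Earth-moon distance of roughly 239,000 miles and 1.609 kilometers per mile:

    KM_PER_MILE = 1.609
    MOON_MILES = 238_900                     # average Earth-moon distance
    print(17e6 / KM_PER_MILE / 1e6)          # ~10.6 million miles
    print(5.5e6 / MOON_MILES)                # ~23, i.e., more than 20 lunar distances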

"It is very impressive that a telescope built more than 50 years ago continues to contribute to forefront astrophysics, often working in tandem with the latest space missions and biggest ground-based facilities," remarks Shri Kulkarni, MacArthur Professor of Astronomy and Planetary Science and director of the Caltech Optical Observatories.

The Palomar observations were coordinated with observations acquired through the Spitzer Space Telescope, which imaged the comet's fragments in the infrared. The infrared images, combined with the visible-light images obtained using the Hale Telescope, will give astronomers a more complete understanding of the comet's breakup.

Additional support for the observations and data analysis came from Caltech postdoc Arne Rau and grad student Alicia Soderberg.

Images of the comet and a time-lapse movie can be found at:

http://www.astro.caltech.edu/palomar/images/73p/

Contact:

Scott Kardel Palomar Public Affairs Director (760) 742-2111 wsk@astro.caltech.edu

Writer: RT

Biologists Uncover New Details of How Neural Crest Forms in the Early Embryonic Stages

PASADENA, Calif.—There's a time soon after conception when the stem cells in a tiny area of the embryo called the neural crest are working overtime to build such structures as the dorsal root ganglia, various neurons of the nervous system, and the bones and cartilage of the skull. If things go wrong at this stage, deformities such as cleft palates can occur.

In an article in this week's issue of Nature, a team of biologists from the California Institute of Technology announce that they have determined that neural crest precursors can be identified at surprisingly early stages of development. The work could lead to better understanding of molecular mechanisms in embryonic development that could, in turn, lead to therapeutic interventions when prenatal development goes wrong.

According to Marianne Bronner-Fraser, the Ruddock Professor of Biology at Caltech, the findings provide new information about how stem cells eventually form many and diverse cell types in humans and other vertebrates.

"We've always assumed that the precursor cells that form the neural crest arise at a time when the presumptive brain and spinal cord are first visible," she says. "But our work shows that these cells arise much earlier in development than previously thought, and well before overt signs of the other neural structures.

"We also show that a DNA binding protein called Pax7 is essential for formation of the neural crest, since removal of this protein results in absence of neural crest cells."

The work involves chicken embryos, which are especially amenable to the advanced imaging techniques utilized at Caltech's Biological Imaging Center. The results showed that interfering with the Pax7 protein also interfered with normal neural crest development.

"Because neural crest cells are a type of stem cell able to form cell types as diverse as neurons and pigment cells, understanding the molecular mechanisms underlying their formation may lead to therapeutic means of generating these precursors," Bronner-Fraser explains. "It may also help treat diseases of neural crest derivatives, like melanocytes, that can become cancerous in the form of melanoma."

The work was funded by the NIH and performed at Caltech by Martin Garcia-Castro, a former postdoctoral researcher who is currently an assistant professor at Yale University, and Martin Basch, a former Caltech graduate student who is currently a postdoctoral fellow at the House Ear Institute.

The paper appears in the May 11 issue of Nature. The title of the article is "Specification of the neural crest occurs during gastrulation and requires Pax7."

Writer: Robert Tindol
