Researchers Announce New Way to Assess How Buildings Would Stand Up in Big Quakes

PASADENA, Calif.—How much damage will certain steel-frame, earthquake-resistant buildings located in Southern California sustain when a large temblor strikes? It's a complicated, multifaceted question, and researchers from the California Institute of Technology, the University of California, Santa Barbara, and the University of Pau, France, have answered it with unprecedented specificity using a new modeling protocol.

The results, which involve supercomputer simulations of what could happen to specific areas of greater Los Angeles in specific earthquake scenarios, were published in the latest issue of the Bulletin of the Seismological Society of America, the premier scientific journal dedicated to earthquake research.

"This study has brought together state-of-the-art 3-D-simulation tools used in the fields of earthquake engineering and seismology to address important questions that people living in seismically active regions around the world worry about," says Swaminathan Krishnan, a postdoctoral scholar in geophysics at Caltech and lead author of the study.

"What if a large earthquake occurred on a nearby fault? Would a particular building withstand the shaking? This prototype study illustrates how, with the help of high-performance computing, 3-D simulations of earthquakes can be combined with 3-D nonlinear analyses of buildings to provide realistic answers to these questions in a quantitative manner."

The paper represents an ambitious attempt by the researchers to improve the methodology used to assess building integrity, says Jeroen Tromp, the McMillan Professor of Geophysics and director of the Seismological Laboratory at Caltech. "We are trying to change the way in which seismologists and engineers approach this difficult interdisciplinary problem," Tromp says.

The research simulates the effects that two different 7.9-magnitude San Andreas earthquakes would have on two hypothetical 18-story steel frame buildings located at 636 sites on a grid that covers the Los Angeles and San Fernando basins. An earthquake of this magnitude occurred on the San Andreas on January 9, 1857, and seismologists generally agree that the fault has the potential for such an event every 200 to 300 years. To put this in context, the much smaller January 17, 1994, Northridge earthquake of 6.7 magnitude caused 57 deaths and economic losses of more than $40 billion.

The simulated earthquakes "rupture" a 290-kilometer section of the San Andreas fault between Parkfield, in central California, and Southern California, one earthquake with rupture propagating southward and the other with rupture propagating northward. The first building is a model of an actual 18-story, steel moment-frame building located in the San Fernando Valley. It was designed according to the 1982 Uniform Building Code (UBC) standards, yet it suffered significant damage in the 1994 Northridge earthquake when the welds connecting its beams to its columns fractured. The second building is a model of the same San Fernando Valley structure redesigned to the stricter 1997 UBC standards.

Using a high-performance PC cluster, the researchers simulated both earthquakes and the damage each would cause to the two buildings at each of the 636 grid sites. They assessed the damage to each building based on "peak interstory drift."

Interstory drift is the difference between the roof and floor displacements of any given story as the building sways during the earthquake, normalized by the story height. For example, for a 10-foot-high story, an interstory drift of 0.10 indicates that the roof is displaced one foot relative to the floor below.

The greater the drift, the greater the likelihood of damage. Peak interstory drift values larger than 0.025 indicate damage serious enough to threaten human safety, values larger than 0.06 indicate severe damage, and values in excess of 0.10 indicate probable building collapse.
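As a rough illustration of how these thresholds translate into numbers, the short Python sketch below computes interstory drift from the displacements described above and bins it into the damage categories quoted in this article. The function names and category labels are ours, for illustration only; this is not the researchers' analysis code.

def interstory_drift(roof_disp_ft, floor_disp_ft, story_height_ft):
    """Peak interstory drift: relative displacement of a story's roof and floor,
    normalized by the story height (all lengths in the same units)."""
    return abs(roof_disp_ft - floor_disp_ft) / story_height_ft

def damage_category(drift):
    """Bin a peak drift value using the thresholds quoted in the article."""
    if drift > 0.10:
        return "probable collapse"
    if drift > 0.06:
        return "severe damage"
    if drift > 0.025:
        return "damage serious enough to threaten safety"
    return "lighter damage"

# The example from the article: a 10-foot story whose roof moves one foot
# relative to the floor below gives a drift of 0.10.
d = interstory_drift(roof_disp_ft=1.0, floor_disp_ft=0.0, story_height_ft=10.0)
print(d, "->", damage_category(d))   # 0.1 -> severe damage (right at the collapse threshold)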

The study's conclusions include the following:

o A 7.9-magnitude San Andreas rupture from Parkfield to Los Angeles results in greater damage to both buildings than a rupture from Los Angeles to Parkfield. This difference is due to the effects of directivity and slip distribution, which control the ground-motion intensity. In the north-to-south rupture scenario, peak ground displacement is two meters in the San Fernando Valley and one meter in the Los Angeles basin; in the south-to-north rupture scenario, ground displacements are 0.6 meters and 0.4 meters, respectively.

o In the north-to-south rupture scenario, peak drifts in the model of the existing building far exceed 0.10 in the San Fernando Valley; Santa Monica and West Los Angeles; Baldwin Park and its neighboring cities; Compton and its neighboring cities; and Seal Beach and its neighboring cities. Peak drifts are in the 0.06-0.08 range in Huntington Beach, Santa Ana, Anaheim, and their neighboring cities, whereas the values are in the 0.04-0.06 range for the remaining areas, including downtown Los Angeles.

o The results for the redesigned building are better than for the existing building. Although peak drifts in some areas of the San Fernando Valley still exceed 0.10, they are in the 0.04-0.06 range for most cities in the Los Angeles basin.

o In the south-to-north rupture scenario, peak drifts in both the existing and redesigned building models are in the 0.02-0.04 range, suggesting no significant danger of collapse. Even so, drifts at this level indicate damage significant enough to warrant building closures and, in some instances, compromise human safety.

Such hazard analyses have numerous applications, Krishnan says. They could be performed on specific existing and proposed buildings in particular areas for a range of types of earthquakes, providing information that developers, building owners, city planners, and emergency managers could use to make better, more informed decisions.

"We have shown that these questions can be answered, and they can be answered in a very quantitative way," Krishnan says.

The research paper is "Case Studies of Damage to Tall Steel Moment-Frame Buildings in Southern California during Large San Andreas Earthquakes," by Swaminathan Krishnan, Chen Ji, Dimitri Komatitsch, and Jeroen Tromp. Online movies of the earthquakes and building-damage simulations can be viewed at http://www.ce.caltech.edu/krishnan.

Contact: Jill Perry, (626) 395-3226, jperry@caltech.edu

Writer: 
RT

Interdisciplinary Team Demonstrates New Technique for Manipulation of "Light Beams"

PASADENA, Calif.—It may be surprising that a laser beam, when shot to the moon and returned by one of the mirrors the Apollo astronauts left behind, is a couple of miles in diameter at the end of its half-million-mile round trip. This spread is mostly due to atmospheric distortions, but it nonetheless underscores the problems posed to those who wish to keep laser beams from diverging or focusing to a point as light travels through a medium.

Now a team of physicists, mathematicians, and electrical engineers from the California Institute of Technology and the University of Massachusetts at Amherst has figured out a trick to keep light pulses from diverging or focusing. Using a multi-layer sandwich of glass plates alternating with air, the scientists have provided the first experimental demonstration of a procedure called "nonlinearity management." This technique wouldn't do anything for light traveling all the way to the moon, but could be useful in future generations of devices involving optical switching and optical information processing, for which precise control of laser pulses will be advantageous.

Reporting in the July 21, 2006, issue of Physical Review Letters, the researchers demonstrate that a laser beam passing through multiple layers of glass and air can be made to last much longer than if it had passed through only one type of medium. This procedure exploits a phenomenon known as the "Kerr effect," which causes the refractive index of an individual material to change if the light energy is sufficiently intense.

When light propagates through glass alone, it focuses into a beam so intense that it generates a plasma in the medium, stripping away electrons. Using a multi-layer "Kerr sandwich" of glass and air, however, keeps the plasma from being created, because the different refractive indices of the media cause the light beam to diverge and converge several times.

"The idea is for the beam size on average to stay constant," says team member Mason Porter, a postdoctoral scholar in Caltech's Center for the Physics of Information.

The experimental setup was the work of Martin Centurion, also a postdoctoral researcher in the Center for the Physics of Information. According to Centurion, the laboratory apparatus consists of nine normal microscope slides, each about one millimeter thick, aligned parallel to one another at one-millimeter spacings. An intense femtosecond laser pulse is sent into the slides; the pulse converges while in the glass and then diverges again while crossing the air gaps. The end result is a beam that is the same diameter when it emerges from the apparatus as when it entered, although it is slightly weaker because a fraction of the energy is reflected at each interface.

The researchers say that the setup they used is intended to demonstrate that nonlinearity management can be performed, and it is not by any means the final version of a practical apparatus.

"This is focusing in space," Porter says. "If you could combine both space and time, you'd have a 'light bullet'-that is, a pulse that stays the same all the time."

Various devices in the future could be possible through nonlinearity management, adds Centurion, "but this is a demonstration that is pretty far from any applications."

"There are potential applications of the tight beams provided by the technique such as optical lithography and sensors," says Demetri Psaltis, the Myers Professor of Electrical Engineering at Caltech and another author of the paper.

The other author is Panayotis Kevrekidis, an associate professor of mathematics at the University of Massachusetts at Amherst.

The title of the paper is "Nonlinearity Management in Optics: Experiment, Theory, and Simulation."


Writer: 
Robert Tindol

Structural Biologists Get First Picture of Complete Bacterial Flagellar Motor

PASADENA, Calif.-When it comes to tiny motors, the flagella used by bacteria to get around their microscopic worlds are hard to beat. Composed of several tens of different types of protein, a flagellum (that's the singular) rotates in much the same way that a rope would spin if mounted in the chuck of an electric drill, but at much higher speeds: about 300 revolutions per second.

Biologists at the California Institute of Technology have now succeeded for the first time in obtaining a three-dimensional image of the complete flagellum assembly using a new technology called electron cryotomography. Reporting in Nature, the scientists show in unprecedented detail both the rotor of the flagellum and the stator, or protein assembly that not only attaches the rotor to the cell wall, but also generates the torque that serves to rotate it.

The accomplishment is a tour de force within the field of structural biology, through which scientists seek to understand how cells work by determining the shapes and configurations of the proteins that make them up. The results could lead to better-designed nanomachines.

"Rotors have been isolated and studied in detail," explains lead author Grant Jensen, an assistant professor of biology at Caltech. "But in the past, researchers have been forced to break the motor into pieces and/or rip it out of the cell before they could observe it in the microscope. It was like trying to understand a car engine by looking through salvaged parts. Here we were able to see the whole motor intact, like an engine still under the hood and attached to the drive train."

In terms of basic science, Jensen says, the motor is intrinsically interesting because it is such a marvelous and complex "nanomachine." But the results of studying it may also one day help engineers, who might want to use its structure to design useful things.

"The process of taking science to practical applications goes from the observation of interesting phenomena, to mechanistic understanding, to exploitation," Jensen says. "Right now, we're somewhere between observation and the beginning of mechanistic understanding of this wonderful motor."

The bacterium used in the study was isolated from the hindguts of termites. Although beneficial to its termite host, the bacterium, which belongs to a group of organisms known as spirochetes, is closely related to the causative agents of syphilis and Lyme disease, as well as to several organisms thought to play a role in gum disease. In all these cases, swimming motility is implicated as a possible determinant in disease.

The article is titled "In situ structure of the complete Treponema primitia flagellar motor." It is available as an advance online publication of Nature at http://www.nature.com/nature/journal/vaop/ncurrent/full/nature05015.html.

The other authors are Gavin Murphy, a Caltech graduate student in biochemistry and molecular biophysics, and Jared R. Leadbetter, an associate professor of environmental microbiology at Caltech.

Photo caption: This image shows the three-dimensional reconstruction of the bacterial flagellar motor, as generated by electron cryotomography for the study. The rotor in the center (red) revolves up to 300 times per second, driven by the stator assembly (yellow) that is embedded in the cell wall.

Photo by the authors (Gavin Murphy, Jared Leadbetter, and Grant Jensen, Caltech)


Writer: 
Robert Tindol

Study of 8.7-Magnitude Earthquake Lends New Insight into Post-Shaking Processes

PASADENA, Calif.—Although the magnitude 8.7 Nias-Simeulue earthquake of March 28, 2005, was technically an aftershock, the temblor nevertheless killed more than 2,000 people in an area that had been devastated just three months earlier by the magnitude 9.1 earthquake of December 2004. Now, data returned from instruments in the field provide constraints on the behavior of dangerous faults in subduction zones, fueling a new understanding of the basic mechanics controlling slip on faults and, in turn, improved estimates of regional seismic risk.

In the June 30 issue of the journal Science, a team including Ya-Ju Hsu, Mark Simons, and others from the California Institute of Technology's new Tectonics Observatory and the University of California, San Diego, reports that its analysis of Global Positioning System (GPS) data taken at the time of the earthquake and during the following 11 months provides insights into how fault slippage and aftershock production are related.

"In general, the largest earthquakes occur in subduction zones, such as those offshore of Indonesia, Japan, Alaska, Cascadia, and South America," says Hsu, a postdoctoral researcher at the Tectonics Observatory and lead author of the paper. "Of course, these earthquakes can be extremely damaging either directly, or by the resulting tsunami.

"Therefore, understanding what causes the rate of production of aftershocks is clearly important to earthquake physics and disaster response," Hsu adds.

The study finds that the regions on the fault surrounding the area that slipped during the 8.7 earthquake experienced accelerated rates of slip following the March shock. The boundary between the area that slipped during the earthquake and the area that has slipped since is clearly demarcated by a band of intense aftershocks.

A primary conclusion of the paper is that there is a strong relationship between the production of aftershocks and post-earthquake fault slip; in other words, the frequency and location of aftershocks in a subduction megathrust are related to the amount and location of fault slip in the months following the main earthquake. Hsu and her colleagues believe that the aftershocks are controlled by the rate of aseismic fault slip after the earthquake.

"One conjecture is that, if the aseismic fault slip occurs quickly, then lots of aftershocks are produced," says Simons, an associate professor of geophysics at Caltech. "But there are other arguments suggesting that both the aftershocks and the post-earthquake aseismic fault slip are caused by some third underlying process."

In any case, Simons and Hsu say the study demonstrates that placing additional remote sensors in subduction zones leads to better modeling of earthquake hazards. In particular, the study shows that the rheology, or mechanical properties, of the region can be inferred from the accumulation of postseismic data.

A map of the region constructed from the GPS data reveals that certain areas slip in different manners than others because some parts of the fault seem to be more "sticky." Because of the nature of seismic waves, the manner in which the fault slips in the months following a large earthquake has huge implications for human habitation.

"An important question is how slip on a fault varies as a function of time," Simons explains. "The extent to which an area slips is related to the risk, because you have a finite budget. Whether all the stress is released during earthquakes or whether it creeps is important for us to know. We would be very happy if all faults slipped as a slow creep, although I guess seismologists would be out of work."

The fact that the postseismic slip following the Nias-Simeulue earthquake can be modeled so intricately shows that other subduction zones can also be modeled, Hsu says. "In general, understanding the whole seismic cycle is very important. Most of the expected hazards of earthquakes occur in subduction zones."

The Tectonics Observatory is establishing a network of sensors in areas of active plate-boundary deformation such as Chile and Peru, the Kuril Islands off Japan, and Nepal. The observatory is supported by the Gordon and Betty Moore Foundation.

The other authors of the paper are Jean-Philippe Avouac, a professor of geology at Caltech and director of the Tectonics Observatory; Kerry Sieh, the Sharp Professor of Geology at Caltech; John Galetzka, a professional staff member at Caltech; Mohamed Chlieh, a postdoctoral scholar at Caltech; Danny Natawidjaja of the Indonesian Institute of Sciences; and Linette Prawirodirdjo and Yehuda Bock, both of the Institute of Geophysics and Planetary Physics at the University of California, San Diego.

Writer: 
Robert Tindol

Physicists Devise New Technique for Detecting Heavy Water

PASADENA, Calif.—Scientists at the California Institute of Technology have created a new method of detecting heavy water that is 30 times more sensitive than any other existing method. The detection method could be helpful in the fight against international nuclear proliferation.

In the June 15 issue of the journal Optics Letters, Caltech doctoral student Andrea Armani and her professor Kerry Vahala report that a special type of tiny optical device can be configured to detect heavy water. Called an optical microresonator, the device is shaped something like a mushroom and was originally designed three years ago to store light for future opto-electronic applications. With a diameter smaller than that of a human hair, the microresonator is made of silica and is coupled with a tunable laser.

The technique works because of the difference between the molecular composition of heavy water and that of regular water. An H2O molecule has two hydrogen atoms, each built of a single proton and a single electron. A D2O molecule, by contrast, has two atoms of a hydrogen isotope known as deuterium, each of which carries a neutron in addition to a proton and an electron. This makes a heavy-water molecule significantly more massive than a regular water molecule.

"Heavy water isn't a misnomer," says Armani, who is finishing up her doctorate in applied physics and will soon begin a two-year postdoctoral appointment at Caltech. Armani says that heavy water looks just like regular water to the naked eye, but an ice cube made of the stuff will sink if placed in regular water because of its added density. This difference in masses, in fact, is what makes the detection of heavy water possible in Armani and Vahala's new technique. When the microresonator is placed in heavy water, the difference in optical absorption results in a change in the "Q factor," which is a number used to measure how efficiently an optical resonator stores light. If a higher Q factor is detected than one would see for normal water, then more heavy water is present than the typical one-in-6,400 water molecules that exists normally in nature.

The technique is so sensitive that one heavy-water molecule in 10,000 can be detected, Armani says. Furthermore, the Q factor changes steadily as the heavy-water concentration is varied.
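The arithmetic behind the mass difference and the abundance figures quoted above can be sketched in a few lines of Python; the rounded atomic masses are standard values and are not taken from the paper.

# Approximate atomic masses in unified atomic mass units (standard rounded values).
m_H, m_D, m_O = 1.008, 2.014, 15.999

m_H2O = 2 * m_H + m_O   # about 18.0 u
m_D2O = 2 * m_D + m_O   # about 20.0 u
print(f"D2O is roughly {100 * (m_D2O / m_H2O - 1):.0f}% heavier than H2O")  # ~11%

# Abundance figures quoted in the article: heavy water occurs naturally at about
# one molecule in 6,400, while the microresonator can resolve one in 10,000.
natural_fraction = 1 / 6_400
detection_limit = 1 / 10_000
print(detection_limit < natural_fraction)  # True: the method resolves changes finer than natural levels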

The results are good news for those who worry about the escalation of nuclear weapons, because heavy water is typically found wherever someone is trying to control a nuclear chain reaction. As a nuclear moderator, heavy water can be used to control the way neutrons bounce around in fissionable material, thereby making it possible for a fission reactor to be built.

The ongoing concern with heavy water is exemplified by the fact that Armani and Vahala have received funding for their new technique from the Defense Advanced Research Projects Agency, or DARPA. The federal agency provides grants for university research that has potential applications for U.S. national defense.

"This technique is 30 times better than the best competing detection technique, and we haven't yet tried to reduce noise sources," says Armani. "We think even greater sensitivities are possible."

The paper is entitled "Heavy Water Detection Using Ultra-High-Q Microcavities" and is available online at http://ol.osa.org/abstract.cfm?id=90020.

Writer: 
Robert Tindol

Palomar Observes Broken Comet

PALOMAR MOUNTAIN, Calif.—Astronomers have recently been enjoying front-row seats to a spectacular cometary show. Comet 73P/Schwassmann-Wachmann 3 is in the act of splitting apart as it passes close to Earth. The breakup is providing a firsthand look at the death of a comet.

Eran Ofek of the California Institute of Technology and Bidushi Bhattacharya of Caltech's Spitzer Science Center have been observing the comet's tragic tale with the Palomar Observatory's 200-inch Hale Telescope. Their view is helping them and other scientists learn the secrets of comets and why they break up.

The comet was discovered by Arnold Schwassmann and Arno Arthur Wachmann 76 years ago, and it broke into four fragments just a decade ago. It has since split further into dozens, if not hundreds, of pieces.

"We've learned that Schwassmann-Wachmann 3 presents a very dynamic system, with many smaller fragments than previously thought," says Bhattacharya. In all, 16 new fragments were discovered as a part of the Palomar observations.

A sequence of images showing the piece of the comet known as fragment R has been assembled into a movie. The movie shows the comet in the foreground against distant stars and galaxies, which appear to streak across the images. The streaks appear because the telescope was tracking the comet, which moves across the sky at a different rate than the stellar background, rather than tracking the stars. Fragment R and many smaller fragments of the comet are visible as nearly stationary objects in the movie.

"Seeing the many fragments was both an amazing and sobering experience," says a sleepy Eran Ofek, who has been working non-stop to produce these images and a movie of the comet's fragments.

The images used to produce the movie were taken over a period of about an hour and a half when the comet was approximately 17 million kilometers (10.6 million miles) from Earth. Astronomically speaking, the comet is making a close approach to Earth this month, giving astronomers their front-row seat to the comet's breakup. The closest approach of any fragment occurs on May 12, when one fragment will be just 5.5 million miles from Earth. This is more than 20 times the distance to the moon, and there is no chance that the comet will hit Earth.
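For scale, the comparison with the lunar distance works out as in the short Python check below; the average Earth-moon distance of roughly 239,000 miles is a standard figure, not a number from the observations themselves.

# Quick check of the distance comparison quoted above.
closest_approach_miles = 5.5e6      # nearest comet fragment on May 12
earth_moon_miles = 2.39e5           # average Earth-moon distance (standard rounded value)

print(closest_approach_miles / earth_moon_miles)   # about 23, i.e. more than 20 lunar distances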

"It is very impressive that a telescope built more than 50 years ago continues to contribute to forefront astrophysics, often working in tandem with the latest space missions and biggest ground-based facilities," remarks Shri Kulkarni, MacArthur Professor of Astronomy and Planetary Science and director of the Caltech Optical Observatories.

The Palomar observations were coordinated with observations acquired through the Spitzer Space Telescope, which imaged the comet's fragments in the infrared. The infrared images, combined with the visible-light images obtained using the Hale Telescope, will give astronomers a more complete understanding of the comet's breakup.

Additional support for the observations and data analysis came from Caltech postdoc Arne Rau and grad student Alicia Soderberg.

Images of the comet and a time-lapse movie can be found at:

http://www.astro.caltech.edu/palomar/images/73p/

Contact: Scott Kardel, Palomar Public Affairs Director, (760) 742-2111, wsk@astro.caltech.edu

Writer: 
RT

Biologists Uncover New Details of How Neural Crest Forms in the Early Embryonic Stages

PASADENA, Calif.—There's a time soon after conception when the stem cells in a tiny area of the embryo called the neural crest are working overtime to build such structures as the dorsal root ganglia, various neurons of the nervous system, and the bones and cartilage of the skull. If things go wrong at this stage, deformities such as cleft palates can occur.

In an article in this week's issue of Nature, a team of biologists from the California Institute of Technology announce that they have determined that neural crest precursors can be identified at surprisingly early stages of development. The work could lead to better understanding of molecular mechanisms in embryonic development that could, in turn, lead to therapeutic interventions when prenatal development goes wrong.

According to Marianne Bronner-Fraser, the Ruddock Professor of Biology at Caltech, the findings provide new information about how stem cells eventually form many and diverse cell types in humans and other vertebrates.

"We've always assumed that the precursor cells that form the neural crest arise at a time when the presumptive brain and spinal cord are first visible," she says. "But our work shows that these cells arise much earlier in development than previously thought, and well before overt signs of the other neural structures.

"We also show that a DNA binding protein called Pax7 is essential for formation of the neural crest, since removal of this protein results in absence of neural crest cells."

The work involves chicken embryos, which are especially amenable to the advanced imaging techniques utilized at Caltech's Biological Imaging Center. The results showed that interfering with the Pax7 protein also interfered with normal neural crest development.

"Because neural crest cells are a type of stem cell able to form cell types as diverse as neurons and pigment cells, understanding the molecular mechanisms underlying their formation may lead to therapeutic means of generating these precursors," Bronner-Fraser explains. "It may also help treat diseases of neural crest derivatives, like melanocytes, that can become cancerous in the form of melanoma."

The work was funded by the NIH and performed at Caltech by Martin Garcia-Castro, a former postdoctoral researcher who is currently an assistant professor at Yale University, and Martin Basch, a former Caltech graduate student who is currently a postdoctoral fellow at the House Ear Institute.

The paper appears in the May 11 issue of Nature. The title of the article is "Specification of the neural crest occurs during gastrulation and requires Pax7."

Writer: 
Robert Tindol

Aerospace Engineers and Biologists Solve Long-Standing Heart Development Mystery

PASADENA, Calif.—An engineer comparing the human adult heart and the embryo heart might never guess that the former developed from the latter. While the adult heart is a fist-shaped organ with chambers and valves, the embryo heart looks more like a tube attached to smaller tubes. Physicians and researchers have assumed for years, in fact, that the embryonic heart pumps through peristaltic movements, much as material flows through the digestive system.

But new results in this week's issue of Science from an international team of biologists and engineers show that the embryonic vertebrate heart tube is indeed a dynamic suction pump. In other words, blood flows by a dynamic suction action (similar to the action of the mature left ventricle) that arises from wave motions in the tube. The findings could lead to new treatments of certain heart diseases that arise from congenital defects.

According to Mory Gharib, the Liepmann Professor of Aeronautics and Bioengineering at the California Institute of Technology, the new results show once and for all that "the embryonic heart doesn't work the way we were taught.

"The morphologies of embryonic and adult hearts look like two different engineers designed them separately," says Gharib, who has worked for years on the mechanical and dynamical nature of the heart. "This study allows you to think about the continuity of the pumping mechanism."

Scott Fraser, the Rosen Professor at Caltech and director of the MRI Center, adds that the study shows the promise of advanced biological imaging techniques for the future of medicine. "The reason this mechanism of pumping has not been noticed in the heart tube is because of the limitations of imaging," he says. "But now we have a device that is 100 times faster than the old microscopes, allowing us to see things that previously would have been a blur. Now we can see the motion of blood and the motions of vascular walls at very high resolutions."

The lead author of the paper is Gharib's graduate student Arian Forouhar. He and the other researchers used confocal microscopes in the Beckman Institute's biological imaging center on campus to do time-lapse photography of embryonic zebrafish. According to Fraser, embryonic zebrafish were chosen because they are essentially transparent, thus allowing for easy viewing, and since they develop completely in only a few days.

The time-lapse photography showed that the pumping mechanism is not peristalsis, an action similar to squeezing a tube of toothpaste, but rather a form of valveless pumping known as "hydroelastic impedance pumping." In this model, fewer active cells are required to sustain circulation.

Contraction of a small collection of myocytes, usually situated near the entrance of the heart tube, initiates a series of forward-traveling elastic waves that eventually reflect back after impinging on the end of the heart tube. At a specific range of contraction frequencies, these waves can constructively interact with the preceding reflected waves to generate an efficient dynamic-suction region at the outflow tract of the heart tube.

"Now there is a new paradigm that allows us to reconsider how embryonic cardiac mechanics may lead to anomalies in the adult heart, since impairment of diastolic suction is common in congestive heart-failure patients," says Gharib.

"The heart is one of the only things that makes itself while it's working," Fraser adds. "We often think of the heart as a thing the size of a fist, but it likely began forming its structures when it was a tiny tube with the diameter of a human hair."

"One of the most intriguing features of this model is that only a few contractile cells are necessary to provide mechanical stimuli that may guide later stages of heart development," says Forouhar. According to Gharib, this simplicity in construction will allow us to think of potential biomimicked mechanical counterparts for use in applications where delicate transport of blood, drugs, or other biological fluids are desired.

In addition to Forouhar, Gharib, and Fraser, the authors are Michael Liebling, a postdoctoral scholar in the Beckman Institute's biological imaging center; Anna Hickerson (BS '00; PhD '05) and Abbas Nasiraei Moghaddam, graduate students in bioengineering at Caltech; Huai-Jen Tsai of National Taiwan University's Institute of Molecular and Cellular Biology; Jay Hove of the University of Cincinnati's Genome Research Institute; and Mary Dickinson of the Baylor College of Medicine.

The article is titled "The Embryonic Vertebrate Heart Tube is a Dynamic Suction Pump," and appears in the May 5 issue of Science.

Writer: 
Robert Tindol

Letters and Symbols Originated Across Cultures to Mimic Natural Scenes, Study Says

PASADENA, Calif.—If a tree falls in the forest and a caveman sees it lying next to a standing tree, what does he do? New evidence suggests that he may proceed to invent the letter "L."

According to a new study in The American Naturalist, the shapes of letters and symbols used throughout history by the world's many cultures may have arisen to take advantage of the way human vision has evolved to see common structures and shapes in nature. Mark Changizi, a theoretical neurobiologist at the California Institute of Technology, says the evidence suggests that letters and symbols have their particular shapes because "these are what we are good at seeing."

In essence, this means that the letters of all writing systems (Chinese, Latin, Persian, and 97 other systems that have been used through the years) are visual repetitions of common sights, just as onomatopoeias such as "bow wow" are aural repetitions of common sounds.

"Evolution has shaped our visual system to be good at seeing the structures we commonly encounter in nature, and culture has apparently selected our writing systems and visual signs to have these same shapes," says Changizi, the lead author of the paper.

Changizi says he got the initial insight for the hypothesis after reviewing the history of computer vision. Engineers have known for some time that the best way to create a system to allow for object recognition is to focus on the junctions of objects. In other words, a robot navigating a room sees the conglomeration of contours in a corner by its "Y" shape, and sees a wall because of its "L" junction with the floor.

"It struck me that these junctions are typically named with letters, such as 'L,' 'T,' 'Y,' 'K,' and 'X,' and that it may not be a coincidence that the shapes of these letters look like the things they really are in nature."

Changizi then developed an ecological hypothesis of why letters have their shapes, reducing the letters of various writing systems and the symbols of other visual sign systems to their basic topological contours. By this he means that a basic shape like an "L" counts as the same contour as a "V," or as any other form it can be bent into, so long as the shape is not cut.

He ended up with a catalog of 36 shapes employing two or three contours, and then ranked them according to how frequently they occur in the objects that primitive people would have seen millions of years ago, in pictures across many cultures that he took from National Geographic, and in computer-generated architectural forms.

It turns out that the common contour configurations are precisely those forms that frequently show up in the letters of various writing systems, as well as in company logos and in symbolic systems such as musical notation. The forms found less frequently in nature, by contrast, do not show up as often in writing systems or symbolic representations.
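The kind of comparison described above can be sketched as a rank correlation between how often each contour configuration appears in natural scenes and how often it appears in a writing system. The counts below are made-up placeholders, not Changizi's data, and the use of a Spearman correlation here is our illustrative choice rather than the paper's exact statistic.

from scipy.stats import spearmanr

# Hypothetical frequency counts for a handful of the 36 contour configurations
# (e.g. "L," "T," and "Y" junctions) -- placeholder numbers only.
natural_scene_counts = [120, 95, 80, 40, 25, 10]      # how often each shape appears in nature
writing_system_counts = [300, 240, 210, 90, 50, 15]   # how often it appears in letters and signs

rho, p_value = spearmanr(natural_scene_counts, writing_system_counts)
print(f"rank correlation = {rho:.2f} (p = {p_value:.3f})")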

"We tested the hypothesis of whether cultures have selected visual signs and letter shapes to possess the shapes occurring in nature, and the answer is yes," Changizi says. "It's also striking that the systems that are intended to be seen have high correlations to natural forms. Company logos, for example, are meant to be recognized, and we found that logos have a high correlation. Shorthand systems, which are meant to give a note-taker speed at the expense of a commonly recognizable system of symbols, do not.

"So the figures we use in symbolic systems and writing systems seem to be selected because they are easy to see rather than easy to write," he concludes. "They're for the eye."

In addition to Changizi, the authors are Shinsuke Shimojo, a professor of biology at Caltech who specializes in psychobiology; and Qiong Zhang and Hao Ye, both undergraduate students at Caltech.

The title of the paper is "Structures of Letters and Symbols Throughout Human History Are Selected to Match Those Found in Objects in Natural Scenes." The paper is downloadable on the journal's webpage at http://www.journals.uchicago.edu/AN/journal/issues/v167n5/41010/41010.html.

Writer: 
Robert Tindol

Caltech Researchers Create New Proteins by Recombining the Pieces of Existing Proteins

PASADENA, Calif.—An ongoing challenge in biochemistry is getting a handle on protein folding: the way that DNA sequences determine the unique structure and functions of proteins, which then act as "biology's workhorses." Gaining mastery over the construction of proteins will someday lead to breakthroughs in medicine and pharmaceuticals.

One method for studying the determinants of a protein's structure and function is to analyze numerous proteins with similar structure and function (a protein family) as a group. By studying families of natural proteins, researchers can tease out many of the fundamental interactions responsible for a given property.

A team of chemical engineers, chemists, and biochemists at the California Institute of Technology has now managed to create a large number of proteins that are very different in sequence yet retain similar structures. The scientists use computational tools to analyze protein structures and pinpoint locations at which the proteins can be broken apart and then reassembled, like Lego pieces. Each new construction is a protein with new functions and new potential enzyme actions.

Reporting in the April 10 issue of the journal Public Library of Science (PLoS) Biology, Caltech graduate student Christopher Otey and his colleagues show that they have taken three proteins from nature, broken each into eight pieces, and successfully reassembled the pieces to form many new proteins. According to Otey, the potential number of new proteins from just three parents is three raised to the eighth power, or 6,561, since each protein is divided into eight segments. "The result is an artificial protein family," Otey explains. "In this single experiment, we've been able to make about 3,000 new proteins."
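The combinatorics behind the 6,561 figure is straightforward to sketch in Python; the parent labels and block handling below are placeholders, not the actual P450 sequences or the team's computational tools.

from itertools import product

# Three parent proteins, each divided into eight blocks at the same crossover
# points; a chimera picks one parent for each block position.
parents = ["A", "B", "C"]
n_blocks = 8

chimeras = list(product(parents, repeat=n_blocks))
print(len(chimeras))       # 3**8 = 6561 possible block combinations

# Each chimera is simply a choice of parent for every block, for example:
print(chimeras[100])       # a mix of blocks drawn from more than one parent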

About half of the 6,561 proteins are viable, having an average of about 72 sequence changes. "The benefit is that you can use the new proteins and new sequence information to learn new things about the original proteins," Otey adds. "For example, if a certain protein function depends on one amino acid that never changes, then the protein apparently must have that particular amino acid."

The proteins the team has been using are called cytochromes P450, which play critical roles in drug metabolism, hormone synthesis, and the biodegradation of many chemicals. Using computational techniques, the researchers predict how to break up this roughly 460-amino-acid protein into individual blocks of about 60 to 70 amino acids.

Otey says the result is significant when set against the traditional way of obtaining protein sequences: over the past 40 years, researchers have fully determined 4,500 natural P450 sequences, whereas the Caltech team needed only a few months to create 3,000 new ones.

"Our goal in the lab is to be able to create a bunch of proteins very quickly," Otey says, "but the overall benefit is an understanding of what makes a protein do what it does and potentially the production of new pharmaceuticals, new antibiotics, and such.

"During evolution, nature conserves protein structure, which we do with the computational tools, while changing protein sequence which can lead to proteins with new functions," he says. "And new functions can ultimately result in new treatments."

In addition to Otey, the other authors of the paper are Frances Arnold, the corresponding author, who is the Dickinson Professor of Chemical Engineering and Biochemistry at Caltech and Otey's supervising professor; Marco Landwehr, a postdoctoral scholar in biochemistry; Jeffrey B. Endelman, a recent Caltech graduate in bioengineering; Jesse Bloom, a graduate student in chemistry; and Kaori Hiraga, a Caltech postdoctoral scholar who is now at the New York State Department of Health.

The title of the article is "Structure-Guided Recombination Creates an Artificial Family of Cytochromes P450."


Writer: 
Robert Tindol