Wednesday, April 1, 2015
Center for Student Services 360 (Workshop Space)

Head TA Network

Wednesday, January 7, 2015
Center for Student Services 360 (Workshop Space)

Head TA Network

Thursday, September 25, 2014
Moore 139

Head TA Network

Friday, April 3, 2015
Center for Student Services 360 (Workshop Space)

TA Training

Wednesday, November 5, 2014
Center for Student Services 360 (Workshop Space)

HALF TIME: A Mid-Quarter Meetup for TAs

Friday, April 10, 2015
Center for Student Services 360 (Workshop Space)

Ombudsperson Training

Friday, January 16, 2015
Center for Student Services 360 (Workshop Space)

Ombudsperson Training

Friday, October 3, 2014
Center for Student Services 360 (Workshop Space)

TA Training

Neuroeconomists Confirm Warren Buffett's Wisdom

Brain Research Suggests an Early Warning Signal Tips Off Smart Traders

Investment magnate Warren Buffett has famously suggested that investors should try to "be fearful when others are greedy and be greedy only when others are fearful."

That turns out to be excellent advice, according to the results of a new study by researchers at Caltech and Virginia Tech that looked at the brain activity and behavior of people trading in experimental markets where price bubbles formed. In such markets, where price far outpaces actual value, it appears that wise traders receive an early warning signal from their brains—a warning that makes them feel uncomfortable and urges them to sell, sell, sell.

"Seeing what's going on in people's brains when they are trading suggests that Buffett was right on target," says Colin Camerer, the Robert Kirby Professor of Behavioral Economics at Caltech.  

That is because in their experimental markets, Camerer and his colleagues found two distinct types of activity in the brains of participants—one that made a small fraction of participants nervous and prompted them to sell their experimental shares even as prices were on the rise, and another that was much more common and made traders behave in a greedy way, buying aggressively during the bubble and even after the peak. The lucky few who received the early warning signal got out of the market early, ultimately causing the bubble to burst, and earned the most money. The others displayed what former Federal Reserve chairman Alan Greenspan called "irrational exuberance" and lost their proverbial shirts.

A paper about the experiment and the team's findings appears this week in the journal Proceedings of the National Academy of Sciences. Alec Smith, the lead author on the paper, is a visiting associate at Caltech. Additional coauthors are from the Virginia Tech Carilion Research Institute.

The researchers set up a simple experimental market in which they were able to control the fundamental, or actual, value of a traded risky asset. In each of 16 sessions, about 20 participants were told how an on-screen trading market worked and were given 100 units of experimental currency and six shares of the risky asset. Then, over the course of 50 trading periods, the traders indicated by pressing keyboard buttons whether they wanted to buy, sell, or hold shares at various prices.  

Given the way the experiment was set up, the fundamental price of the risky asset was 14 currency units. Yet in many sessions, the traded price rose well above that—sometimes three to five times as high—creating bubble markets that eventually crashed.

During the experiment, two or three additional subjects per session also participated in the market while having their brains scanned by a functional magnetic resonance imaging (fMRI) machine. In fMRI, blood flow is monitored and used as a proxy for brain activation. If a brain region shows a relatively high level of blood oxygenation during a task, that region is thought to be particularly active.

At the end of the experiment, the researchers first sought to understand the behavioral data—the choices the participants made and the resulting market activity—before analyzing the fMRI scans.

"The first thing we saw was that even in an environment where you don't have squawking heads and all kinds of other information being fed to people, you can get bubbles just through pricing dynamics that occur naturally," says Camerer. This finding is at odds with what some economists have held—that bubbles are rare or are caused by misinformation or hype.

Next, the researchers divided the participants into three categories based on their earnings during their 50 trading periods—low, medium, and high earners. They found that the low earners tended to be momentum buyers who started buying as prices went up and then kept buying even as prices tanked. The middle-of-the-road folks didn't take many risks at all and, as a result, neither made nor lost the most money. And the traders who earned the most bought early and sold when prices were on the rise.
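The three-way split described above amounts to ranking traders by final earnings and cutting the ranking into terciles. A minimal sketch of that grouping (illustrative only; the function name and data are hypothetical, not the study's analysis code):

```python
# Illustrative sketch: split traders into low, medium, and high earners
# by final-earnings terciles. Names and numbers are made up.
def tercile_split(earnings):
    """earnings: dict mapping trader id -> final earnings."""
    ranked = sorted(earnings, key=earnings.get)  # trader ids, poorest first
    n = len(ranked)
    return {
        "low": ranked[: n // 3],
        "medium": ranked[n // 3 : 2 * n // 3],
        "high": ranked[2 * n // 3 :],
    }

groups = tercile_split({"A": 80, "B": 140, "C": 95, "D": 210, "E": 60, "F": 120})
```

With the toy numbers above, traders E and A land in the low group, C and F in the middle, and B and D in the high group.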

"The high-earning traders are the most interesting people to us," Camerer says. "Emotionally, they have to do something really hard: sell into a rising market. We thought that something must be going on in their brains that gives them an early warning signal."

To reveal what was actually occurring in the brains of the subjects—and the nature of that warning signal—Camerer and his colleagues analyzed the fMRI scans. Using these data, the researchers first looked for an area of the brain that was unusually active when the results screen appeared and told participants their outcome for the previous trading period. It turned out that a region called the nucleus accumbens (NAcc) lit up at that time in all participants, showing more activity when shares were bought or sold. The NAcc is associated with reward processing—it lights up when people are given expected rewards such as money, juice, or a smile. So it was not particularly surprising to see the NAcc activated when traders found out how their gambles paid off.

What was surprising, though, was that low earners were very sensitive to activity in the NAcc: when they experienced the most activity in the NAcc, they bought a lot of the risky asset. "That is a correlation we can call irrational exuberance," Camerer says. "Exuberance is the brain signal, and the irrational part is buying so many shares. The people who make the most money have low sensitivity to the same brain signal. Even though they're having the same mental reaction, they're not translating it into buying as aggressively."

Returning to the question of the high earners and their early warning signal, the researchers hypothesized that a part of the brain called the insular cortex, or insula, might be serving as that bellwether. The insula was a good candidate because previous studies had linked it to financial uncertainty and risk aversion. It is also known to reflect negative emotions associated with bodily sensations such as being shocked or smelling something disgusting, or even with feelings of social discomfort like those that come with being treated unfairly or being excluded.

Looking at the brain data of the high earners, the researchers found that insula activity did indeed increase shortly before the traders switched from buying to selling. And again, Camerer notes, "The prices were still going up at that time, so they couldn't be making pessimistic predictions just based on the recent price trend. We think this is a real warning signal."

Meanwhile, in the low earners, insula activity actually decreased, perhaps allowing their irrational exuberance to continue unchecked.  

Read Montague, director of the Human Neuroimaging Laboratory at the Virginia Tech Carilion Research Institute and one of the paper's senior authors, emphasizes the importance of group dynamics, or group thinking, in the study. "Individual human brains are indeed powerful alone, but in groups we know they can build bridges, spacecraft, microscopes, and even economic systems," he says. "This is one of the next frontiers in neuroscience—understanding the social mind."

Additional coauthors on the paper, "Irrational exuberance and neural warning signals during endogenous experimental market bubbles," include Terry Lohrenz and Justin King of Virginia Tech Carilion Research Institute in Roanoke, Virginia. Montague is also a professor at the Wellcome Trust Centre for Neuroimaging at University College London. The work was supported by the National Science Foundation, the Betty and Gordon Moore Foundation, and the Lipper Family Foundation.

Writer: Kimm Fesenmaier
News Type: Research News

Frederick B. Thompson

1922–2014


Frederick Burtis Thompson, professor of applied philosophy and computer science, emeritus, passed away on May 27, 2014. The research that Thompson began in the 1960s helped pave the way for today's "expert systems" such as IBM's supercomputer Jeopardy! champ Watson and the interactive databases used in the medical profession. His work provided quick and easy access to the information stored in such systems by teaching the computer to understand human language, rather than forcing the casual user to learn a programming language.

Indeed, Caltech's Engineering & Science magazine reported in 1981 that "Thompson predicts that within a decade a typical professional [by which he meant plumbers as well as doctors] will carry a pocket computer capable of communication in natural language."

"Natural language," otherwise known as everyday English, is rife with ambiguity. As Thompson noted in that same article, "Surgical reports, for instance, usually end with the statement that 'the patient left the operating room in good condition.' While doctors would understand that the phrase refers to the person's condition, some of us might imagine the poor patient wielding a broom to clean up."

Thompson cut through these ambiguities by paring "natural" English down to "formal" sublanguages that applied only to finite bodies of knowledge. While a typical native-born English speaker knows the meanings of 20,000 to 50,000 words, Thompson realized that very few of these words are actually used in any given situation. Instead, we constantly shift between sublanguages—sometimes from minute to minute—as we interact with other people.

Thompson's computer-compatible sublanguages had vocabularies of a few thousand words—some of which might be associated with pictures, audio files, or even video clips—and a simple grammar with a few dozen rules. In the plumber's case, this language might contain the names and functions of pipe fittings, vendors' catalogs, maps of the city's water and sewer systems, sets of architectural drawings, and the building code. So, for example, a plumber at a job site could type "I need a ¾ to ½ brass elbow at 315 South Hill Avenue," and, after some back-and-forth to clarify the details (such as threaded versus soldered, or a 90-degree elbow versus a 45), the computer would place the order and give the plumber directions to the store.
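A closed vocabulary and a handful of grammar rules are what make such a sublanguage machine-checkable: requests that fit the pattern are understood, and everything else is bounced back for clarification. A toy sketch of the idea (the pattern and vocabulary here are hypothetical illustrations, not taken from DEACON or REL):

```python
import re

# Hypothetical "plumber" sublanguage: a tiny closed vocabulary of fittings
# and a single request pattern. Illustrative only.
VOCAB = {"elbow", "tee", "coupling"}

REQUEST = re.compile(
    r"I need a (?P<size>[\d/]+ to [\d/]+) (?P<material>\w+) "
    r"(?P<fitting>\w+) at (?P<address>.+)"
)

def parse(utterance):
    """Return the request's fields, or None if it falls outside the sublanguage."""
    m = REQUEST.match(utterance)
    if m is None or m.group("fitting") not in VOCAB:
        return None  # outside the sublanguage: ask the user to rephrase
    return m.groupdict()

order = parse("I need a 3/4 to 1/2 brass elbow at 315 South Hill Avenue")
```

Anything outside the pattern or vocabulary—say, "please reticulate splines"—returns None, which is the cue for the back-and-forth clarification the article describes.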

Born on July 26, 1922, Thompson served in the Army and worked at Douglas Aircraft during World War II before earning bachelor's and master's degrees in mathematics at UCLA in 1946 and 1947, respectively. He then moved to UC Berkeley to work with logician Alfred Tarski, whose mathematical definitions of "truth" in formal languages would set the course of Thompson's later career.

On getting his PhD in 1951, Thompson joined the RAND (Research ANd Development) Corporation, a "think tank" created within Douglas Aircraft during the war and subsequently spun off as an independent organization. It was the dawn of the computer age—UNIVAC, the first commercial general-purpose electronic data-processing system, went on sale that same year. Unlike previous machines built to perform specific calculations, UNIVAC ran programs written by its users. Initially, these programs were limited to simple statistical analyses; for example, the first UNIVAC was bought by the U.S. Census Bureau. Thompson pioneered a process called "discrete event simulation" that modeled complex phenomena by breaking them down into sequences of simple actions that happened in a specified order, both within each sequence and in relation to actions in other, parallel sequences.
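In its modern form, discrete event simulation keeps a time-ordered queue of pending events, and each action, when it fires, may schedule follow-up events in its own sequence or in parallel ones. A minimal sketch (the scenario and names are mine for illustration, not RAND's code):

```python
import heapq
import itertools

def simulate(initial_events):
    """Minimal discrete event simulation. Events are (time, name, action)
    triples; each action runs at its scheduled time and may return
    follow-up events to schedule."""
    counter = itertools.count()  # tie-breaker so actions are never compared
    queue = [(t, next(counter), name, act) for t, name, act in initial_events]
    heapq.heapify(queue)
    log = []
    while queue:
        t, _, name, act = heapq.heappop(queue)
        log.append((t, name))
        for new_t, new_name, new_act in act(t):
            heapq.heappush(queue, (new_t, next(counter), new_name, new_act))
    return log

# Two parallel sequences: a ship arrives and unloads two time units later,
# while a crane inspection happens independently.
def arrive(t):
    return [(t + 2, "unload", lambda _t: [])]

def inspect_crane(t):
    return []

history = simulate([(0, "arrive", arrive), (1, "inspect_crane", inspect_crane)])
```

The log comes out in global time order—arrival at 0, inspection at 1, unloading at 2—even though the unloading was only scheduled while the simulation ran.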

Thompson also helped model a thermonuclear attack on America's major cities in order to help devise an emergency services plan. According to Philip Neches (BS '73, MS '77, PhD '83), a Caltech trustee and one of Thompson's students, "When the team developed their answer, Fred was in tears: the destruction would be so devastating that no services would survive, even if a few people did. . . . This kind of hard-headed analysis eventually led policy makers to a simple conclusion: the only way to win a nuclear war is to never have one." Refined versions of these models were used in 2010 to optimize the deployment of medical teams in the wake of the magnitude-7.0 Haiti earthquake, according to Neches. "The models treated the doctors and supplies as the bombs, and calculated the number of people affected," he explains. "Life has its ironies, and Fred would be the first to appreciate them."

In 1957, Thompson joined General Electric Corporation's computer department. By 1960 he was working at GE's TEMPO (TEchnical Military Planning Operation) in Santa Barbara, where his natural-language research began. "Fred's first effort to teach English to a computer was a system called DEACON [for Direct English Access and CONtrol], developed in the early 1960s," says Neches.

Thompson arrived at Caltech in 1965 with a joint professorship in engineering and the humanities. "He advised the computer club as a canny way to recruit a small but dedicated cadre of students to work with him," Neches recalls. In 1969, Thompson began a lifelong collaboration with Bozena Dostert, a senior research fellow in linguistics who died in 2002. The collaboration was personal as well as professional; their wedding was the second marriage for each.

Although Thompson's and Dostert's work was grounded in linguistic theory, they moved beyond the traditional classification of words into parts of speech to incorporate an operational approach similar to computer languages such as FORTRAN. And thus they created REL, for Rapidly Extensible Language. REL's data structure was based on "objects" that not only described an item or action but allowed the user to specify the interval for which the description applied. For example:

    Object: Mary Ann Summers
    Attribute: driver's license
    Value: yes
    Start time: 1964
    End time: current

"This foreshadowed today's semantic web representations," according to Peter Szolovits (BS '70, PhD '75), another of Thompson's students.
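The record above can be read as a time-qualified fact: an attribute-value pair that holds only over a stated interval. A rough sketch of that idea (class and field names are hypothetical, reconstructed from the description here rather than from REL itself):

```python
# Hypothetical sketch of a REL-style time-qualified fact.
from dataclasses import dataclass

CURRENT = 9999  # sentinel year meaning "still true today"

@dataclass
class Fact:
    obj: str        # e.g. "Mary Ann Summers"
    attribute: str  # e.g. "driver's license"
    value: object   # e.g. True
    start: int      # year the description begins to apply
    end: int = CURRENT

    def holds_at(self, year: int) -> bool:
        # A fact applies only within its stated interval.
        return self.start <= year <= self.end

license_fact = Fact("Mary Ann Summers", "driver's license", True, start=1964)
```

Queried at 1970 the fact holds; queried at 1950, before its start time, it does not—which is exactly the interval-scoped lookup the record describes.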

In a uniquely experimental approach, the Thompsons tested REL on complex optimization problems such as figuring out how to load a fleet of freighters—making sure the combined volumes of the assorted cargoes didn't exceed the capacities of the holds, distributing the weights evenly fore and aft, planning the most efficient itineraries, and so forth. Volunteers worked through various strategies by typing questions and commands into the computer. The records of these human-computer interactions were compared to transcripts of control sessions in which pairs of students attacked the same problem over a stack of paperwork face-to-face or by communicating with each other from separate locations via teletype machines. Statistical analysis of hundreds of hours' worth of seemingly unstructured dialogues teased out hidden patterns. These patterns included a five-to-one ratio between complete sentences—which had a remarkably invariant average length of seven words—and three-word sentence fragments. Similar patterns are heard today in the clipped cadences of the countdown to a rocket launch.

The "extensible" in REL referred to the ease with which new knowledge bases—vocabulary lists and the relationships between their entries—could be added. In the 1980s, the Thompsons extended REL to POL, for Problem Oriented Language, which could work out the meanings of words not in its vocabulary and cope with such human frailties as poor spelling, bad grammar, and errant punctuation—all on a high-end desktop computer at a time when other natural-language processors ran on room-sized mainframe machines.

"Fred taught both the most theoretical and the most practical computer science courses at the Institute long before Caltech had a formal computer science department. In his theory class, students proved the equivalence of a computable function to a recursive language to a Turing machine. In his data analysis class, students got their first appreciation of the growing power of the computer to handle volumes of data in novel and interesting ways," Neches says. "Fred and his students pioneered the arena of 'Big Data' more than 50 years ahead of the pack." Thompson co-founded Caltech's official computer science program along with professors Carver Mead (BS '56, MS '57, PhD '60) and Ivan Sutherland (MS '60) in 1976.

Adds Remy Sanouillet (MS '82, PhD '94), Thompson's last graduate student, "In terms of vision, Fred 'invented' the Internet well before Al Gore did. He saw, really saw, that we would be asking computers questions that could only be answered by fetching pieces of information stored on servers all over the world, putting the pieces together, and presenting the result in a universally comprehensible format that we now call HTML."

Thompson was a member of the scientific honorary society Sigma Xi, the Association for Symbolic Logic, and the Association for Computing Machinery. He wrote or coauthored more than 40 unclassified papers—and an unknown number of classified ones.

Thompson is survived by his first wife, Margaret Schnell Thompson, and his third wife, Carmen Edmond-Thompson; two children by his first marriage, Mary Ann Thompson Arildsen and Scott Thompson; and four grandchildren.

Plans for a celebration of Thompson's life are pending.

Writer: Douglas Smith
News Type: In Our Community
