Functional MRI as a Lie Detector

Psychiatric Times, Vol 24, No 5

In the past few years, a great deal of information has been learned about how the brain processes ambiguous information. Data exist that allow us to view what the brain looks like when we are deliberately trying to deceive someone. In response, a number of corporations have been established that use these data--and the imaging technologies that gave them to us--to create brain-based lie detectors.

In this article, I will explore whether such efforts are worth all the trouble. I will begin with a brief history of the use of brain technologies in lie detection, including that old technology, the polygraph; describe a few promising experiments; and end with several noteworthy objections to interpretations of the data.

Polygraphs

The use of physiological measures--especially those related to brain biology--has a long and checkered history in the American courtroom. One of the first attempts to apply neuroscience to assess a suspect's truthfulness involved the well-known polygraph test, which measures a suite of physical reactions while a person, who may be a suspect in a crime, is questioned. The premise is simple: persons who are being interrogated about criminal activities are thought to show increases in pulse, blood pressure, sweat level, and breathing rate when attempting to deceive.

Invented in the 1920s, the technique has never been without controversy. Advocates say improvements in technology have made the technique virtually bulletproof. Critics state that the technology is inherently flawed: polygraphy at its best can only measure interior emotion and never interior knowledge. Some of these fears have been borne out. That guilty persons can train themselves to "fool" detectors is not only possible, it has been demonstrated. Equally troubling is that some persons become easily flustered during examination and have been wrongfully accused, not because they were guilty but because they were confused. The judicial system seems to agree about this unreliability, and the results of polygraph tests are rarely admitted in court. In 1988, Congress passed legislation outlawing the use of lie detection devices in job screening.

Nevertheless, any attempt to conceal things must involve the brain and nervous system at some level. Researchers have exploited 2 overall brain-based strategies in an attempt to detect when a person is lying and when he or she is telling the truth. Earlier efforts focused on detecting event-related changes in brain wave patterns recorded at the scalp. Newer efforts go a bit deeper, using noninvasive imaging technologies such as functional MRI (fMRI).

Activities at the surface

Traditional electroencephalograms mainly focus on detecting changes in brain wave patterns and linking them to externally experienced activities. From sleep signals to reactions to stress, surface eavesdropping has uncovered a wealth of interesting, although not always well understood, neural behaviors.

A great deal of research energy has focused on small "burps" of electrical activity that occur anywhere from 300 to 800 milliseconds after the recognition of a familiar stimulus. The burp is often called a P300. Researchers in the mid-1980s began asking whether P300 spikes could be used to discern whether a person was telling the truth. If the presence of the P300 was associated with familiarity, and the tester were to ask a person if he was familiar with something, the brain might still "spike" in this millisecond range, even if the person denied knowing it.

Similar to polygraphy, these tests worked with 3 classes of stimuli (Figure). Targets, as they are known, are stimuli with which a person is already familiar (eg, a person might be shown a picture of a family friend). This creates a baseline in the positive direction. Irrelevants are stimuli with which the person is unfamiliar (eg, a person might be shown a picture of a stranger). This creates a baseline in the negative direction. The probe is the topic on which the person's truthfulness is being tested.
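To make that logic concrete, the following is a minimal sketch, in Python, of how such a comparison might be scored. The sampling rate, the 300- to 800-millisecond window, and the decision rule of asking whether the probe response sits closer to the target baseline or the irrelevant baseline are illustrative assumptions, not any laboratory's published protocol.

```python
import numpy as np

def mean_amplitude(erp, sfreq=1000, window=(0.300, 0.800)):
    """Average amplitude of an averaged ERP waveform in the P300 window.

    erp   : 1-D array of voltages, one sample per time point
    sfreq : sampling rate in Hz (assumed, for illustration, to be 1000)
    """
    start, stop = int(window[0] * sfreq), int(window[1] * sfreq)
    return erp[start:stop].mean()

def classify_probe(target_erp, irrelevant_erp, probe_erp, sfreq=1000):
    """Decide whether the probe response looks 'familiar' (target-like)
    or 'unfamiliar' (irrelevant-like) based on P300-window amplitude."""
    target_amp = mean_amplitude(target_erp, sfreq)          # positive baseline
    irrelevant_amp = mean_amplitude(irrelevant_erp, sfreq)  # negative baseline
    probe_amp = mean_amplitude(probe_erp, sfreq)
    # Familiarity is inferred if the probe amplitude sits closer to the
    # target baseline than to the irrelevant baseline.
    if abs(probe_amp - target_amp) < abs(probe_amp - irrelevant_amp):
        return "familiar"
    return "unfamiliar"
```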

Several years ago, a number of research papers were published that claimed to have achieved results of 87% accuracy (ie, they could accurately predict if a person was lying 87% of the time) using electrical activity at the surface. With refinement, researchers claimed to put that figure close to 100%.

Noninvasive imaging

More recently, research scientists have explored the use of newer technologies such as fMRI in an attempt to detect whether a person is concealing information. A number of laboratories have attempted to interpret fMRI scans by essentially inverting the normal analytical process using pattern-classification algorithms. To explain what those are, I will discuss how the machines have been traditionally used.

For many years, the analysis of fMRI data attempted to associate the activity in discrete regions of the brain with external regressors, such as a given task condition. One might show a person a picture of a face and then try to find which area of the brain becomes activated. The machines measure increases in blood flow to particular regions, detecting what is termed a BOLD (blood oxygen level-dependent) signal. Large patterns of activity are usually generated in the initial brain scan efforts, and much computer time is spent separating relevant from irrelevant signals. Eventually, an interior structure/function map is generated that associates an external stimulus with changes in localized blood flow.
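A stripped-down version of that voxel-by-voxel approach might look like the sketch below, which fits every voxel's time series against a single task regressor and keeps the voxels with the strongest task-related coefficients. The array shapes, the plain least-squares fit, and the 1% cutoff are illustrative stand-ins for what a full analysis package (with hemodynamic modeling, motion correction, and proper statistics) actually does.

```python
import numpy as np

def localize_activation(bold, task_regressor, top_fraction=0.01):
    """Mass-univariate sketch: relate every voxel's BOLD time series to a
    task regressor and return the most strongly task-related voxels.

    bold           : array of shape (n_timepoints, n_voxels)
    task_regressor : array of shape (n_timepoints,), e.g. 1 while a face
                     is on screen and 0 otherwise (a real analysis would
                     convolve this with a hemodynamic response function)
    """
    # Least-squares fit of each voxel against the regressor plus an intercept.
    design = np.column_stack([task_regressor, np.ones_like(task_regressor)])
    betas, *_ = np.linalg.lstsq(design, bold, rcond=None)
    task_betas = betas[0]                      # one task coefficient per voxel
    cutoff = np.quantile(task_betas, 1 - top_fraction)
    return np.where(task_betas >= cutoff)[0]   # indices of "activated" voxels
```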

This more traditional approach has been compared to the historical role a cartographer might have played on 16th- and 17th-century ships during the age of exploration--providing useful maps as explorers discovered new territory. Perhaps less charitably, this emphasis on localization has sometimes been called phrenology that uses magnets instead of fingertips, which simply allows the view to go a few centimeters deeper than the scalp.

Whatever one feels about imaging technology, the pattern-classification schemes reverse the analysis stream. A given signal at a localized area is not expected to discriminate between various cognitive states. That task is given to the observed pattern of activity that is spread across many brain regions. This more multivariate approach ends with the creation of pattern vectors associated with specific cognitive states. A flesh-and-blood classifier is then trained to predict a given cognitive state simply by observing these complex patterns. The observer does not ask what a specific brain region is doing in response to a stimulus. The observer looks at changes in whole-brain patterns and then predicts what cognitive state the brain is experiencing.
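In practice the classifier at this stage is usually an algorithm rather than a human observer, and a generic machine version is easy to sketch. The example below uses scikit-learn's logistic regression as an arbitrary stand-in: it trains on labeled whole-brain patterns and estimates how well the learned pattern vectors predict the cognitive state of scans the classifier has not seen. The data shapes, labels, and cross-validation scheme are placeholders, not any published pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_cognitive_state(patterns, states, folds=5):
    """Train a whole-brain pattern classifier and estimate its accuracy.

    patterns : array of shape (n_scans, n_voxels), one activation pattern
               per scan
    states   : array of shape (n_scans,), the cognitive-state label for
               each pattern (e.g. "lie" vs "truth")
    """
    classifier = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    # Cross-validation: repeatedly train on most of the scans and test the
    # classifier's predictions on the scans it has not yet seen.
    scores = cross_val_score(classifier, patterns, states, cv=folds)
    return scores.mean()
```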

The value of these techniques lies in the ease with which the brain can be observed in response to more real-world stimuli rather than through the isolated perspective of the research laboratory. And it has been successfully tested. The University of Pittsburgh even held a contest in which participants were given the brain scan patterns of persons who had viewed 2 short movie clips under imaging conditions. Patterns corresponding to 12 separable features of the viewing experience (music, emotion, presence of a specific actor, and so on) were generated. Participants were then given a blind test. They received a set of fMRI scans with patterns derived from persons watching the same 2 movie clips, but whose features were not identified. The job of participants was to predict what feature of the movie the person was experiencing based solely on the observed test pattern.

The results seemed very encouraging. The winners achieved correlations as robust as 0.86. A few groups were even able to predict which actor a person was watching based solely on that person's brain scan pattern!
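Presumably the correlations being reported compare the predicted feature ratings with the ratings actually recorded during viewing, so a value of 0.86 means the two time courses moved nearly in lockstep. A minimal sketch of that kind of scoring, with hypothetical rating series, might look like this.

```python
import numpy as np

def score_feature_prediction(predicted_ratings, actual_ratings):
    """Pearson correlation between a predicted feature time course (e.g.
    moment-to-moment 'emotion' ratings decoded from the fMRI pattern) and
    the ratings actually reported while watching the clip. A value near 1
    means the prediction tracked the real experience closely."""
    predicted = np.asarray(predicted_ratings, dtype=float)
    actual = np.asarray(actual_ratings, dtype=float)
    return np.corrcoef(predicted, actual)[0, 1]
```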

The idea that neuroimaging technologies can be used to decode the mental states of humans is now a research reality. The only real question is: How far can you push the envelope, given the state of the art? Are the technologies mature enough to be used in a courtroom? This question is being debated in many corners because commercial companies are being set up that purport to use imaging analysis for lie detection purposes. But do these technologies really work? And if so, are they any more reliable than conventional lie detection machinery?

fMRI as a lie detector

Some of the earliest attempts to answer these questions came from experiments with undergraduate students at the University of Pennsylvania. Students were given sealed envelopes that contained a playing card and a $20 bill. Each subject was told he could pocket the money if he could successfully conceal which card the envelope contained when questioned while being scanned by an fMRI machine. The results showed that certain areas in the prefrontal cortex became more active when students were lying than when they were telling the truth. Interestingly, these regions are involved in the prefrontal cortex's signature activity--executive function--recruiting regions that control error detection as well as the ability to inhibit specific behaviors.

This early work showed that deception could be mapped in the brain, but the subtle differences in the activity could only be shown using pooled sets of data from groups of test participants. When individuals were examined, the signal-to-noise ratio often obscured the deception-specific signal. Such individual differences are a common problem in imaging technology. Reliable effects often emerge only when the sample size is large enough to support valid statistical claims.
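The statistical point is easy to demonstrate with purely synthetic numbers. In the sketch below, a small deception-related shift is buried in trial-to-trial noise; the effect size, noise level, and sample sizes are invented for illustration only, but they show why a single subject's scans may look inconclusive while the pooled group comparison is clear.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Purely synthetic illustration: a small deception-related shift buried in
# trial-to-trial noise. The effect size, noise level, and sample sizes are
# invented numbers chosen only to show why pooling across subjects matters.
n_subjects, n_trials, effect, noise = 20, 40, 0.2, 1.0
lie_trials = effect + rng.normal(0.0, noise, size=(n_subjects, n_trials))
truth_trials = rng.normal(0.0, noise, size=(n_subjects, n_trials))

# Single-subject test: compare lie vs truth trials within one person.
single = stats.ttest_ind(lie_trials[0], truth_trials[0])

# Group test: compare each subject's mean lie and truth responses.
group = stats.ttest_rel(lie_trials.mean(axis=1), truth_trials.mean(axis=1))

print(f"one subject:  p = {single.pvalue:.3f}")   # often well above 0.05
print(f"pooled group: p = {group.pvalue:.4f}")    # typically far below 0.05
```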

Later work attempted to refine these initial studies. In one experiment, 30 participants were given jewelry (a ring or a watch) and were instructed to hide it in a closet. They were then placed in a brain scanner and asked to lie about the identity of the object they had just hidden. From the results of the experiment, the researchers went to work creating a filtering algorithm that focused on specific "deception regions" of the brain. The algorithm was supposed to be able to calculate the shifts in activity of these regions that occurred when the participants were lying.

The researchers then tested the efficacy of this algorithm by examining another 30 students using a similar experimental protocol. Could their detection system discriminate between persons who were lying and those who were telling the truth? The researchers claimed that they could detect persons who were lying in 90% of the test cases.
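The logic of that test (build the detector on one cohort, then count how often it labels a second, unseen cohort correctly) can be sketched as a simple held-out evaluation. The linear support vector classifier below is a generic stand-in; the researchers' actual "deception region" filtering and feature extraction are not reproduced here.

```python
from sklearn.metrics import accuracy_score
from sklearn.svm import LinearSVC

def evaluate_on_new_cohort(train_patterns, train_labels, test_patterns, test_labels):
    """Fit a lie/truth classifier on scans from one cohort of participants
    and report how often it correctly labels scans from a second,
    previously unseen cohort (the kind of figure reported as '90% of
    test cases')."""
    detector = LinearSVC()                          # generic stand-in classifier
    detector.fit(train_patterns, train_labels)      # e.g. the first 30 participants
    predictions = detector.predict(test_patterns)   # e.g. the next 30 participants
    return accuracy_score(test_labels, predictions)
```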

But is the work mature enough for courtrooms?

Although at first blush these results might seem encouraging, they have actually been met with a firestorm of controversy. Large swaths of the neuroscience community have greeted with open skepticism the claim that noninvasive imaging can say anything about concealing the truth. I have organized some of the objections into 4 categories.

One of the strongest objections concerns the lack of a standardized definitional framework. To date, there is no systematized agreement about what deception actually means to the human brain. There are many ways to deceive people and there are many degrees of deception. However, exactly how the brain responds to such ways and degrees remains an open question and may be individually expressed. It is Brain Science 101 to know that every brain is wired differently, the great consequence of allowing natural selection to give us plastic neurons. Given the extraordinary variability of brain wiring and the extraordinary sensitivity of the machines involved, these are not trivial questions when applied to activities such as lie detection.

Other objections seem to have been borne out experimentally. Even for researchers who have measured deception using traditional interpretative approaches, the brain data only become clear if you pool the findings from groups of test subjects. Even then, much of the data barely meet statistical thresholds of significance. If all you have are individual scans using current technology, it is very difficult to judge who is concealing something and who is telling the truth. And individual scans are all you are likely to have in a courtroom situation.

Another objection arises from examining the experimental protocols of the researchers who gathered the initial data. The most solid work was done under conditions in which participants were told to lie. The very fact of forcing an artificial condition onto persons may affect the activation patterns in their brains. Remember that in pattern-classification protocols, researchers are not attempting to discover whether a functionally well-characterized brain region is turning on or off. They are simply looking at changes in complex activation patterns between lie and no-lie situations. Such experimental conditions hardly mimic the real-world life-and-death situation of the modern courtroom. And, given the stakes, it will be important that they do.

A fourth objection arises from doubts normally aimed at polygraphy. There are very few data regarding whether a brain scan technique could be beaten by cognitive countermeasures. Nor do these technologies discriminate between persons who believe they are telling the truth and those who have greater knowledge (once again, a reaction vs content argument). Moreover, the technology is not advanced enough to distinguish whether a person is a pathological liar, is delusional, or is simply confused.

So where does one land? From mentalists to science fiction shows, mind reading has always been part of the popular imagination. And it is fair to say that a great deal of progress has been made in understanding human behavior by simply looking at BOLD signals. As researchers begin to learn how to decode the mental states of suspects, undoubtedly things will improve. But right now, from this researcher's perspective, the technology seems just a bit too primitive for the rough-and-tumble precision of the modern American courtroom.

From definitions of the phenomenon to the way the initial data on brains and deception were obtained, many more experiments need to be performed before it is ready for Perry Mason. In terms of the scientific validity of fMRI scans for lie detection, the jury in my opinion is still way out.
