Category Archives: Mechanical methods

Posts on methods of lie-detection that rely on machines

A few deception tweets from recent days

  • Insurance “claim fraudsters think too much”. Some great Portsmouth Uni research covered by Irish Independent http://retwt.me/1P8R0
  • “If You Want to Catch a Liar, Make Him Draw” David DiSalvo @Neuronarrative on more great Portsmouth Uni research http://retwt.me/1P8ZB
  • fMRI scans of people with schizophrenia show they have same functional anatomical distinction between truth telling & deception as others http://bit.ly/aO5cI2 via @Forpsych
  • In press: Promising to tell truth makes 8–16-year-olds more honest (but lectures on morality don’t). Beh Sciences & Law http://is.gd/fCa7X

Quick deception links for the last few weeks

Gah, Twitter update widget broken. Here are the deception-relevant tweets from the last few weeks:

Polygraph and similar:

  • Detecting concealed information w/ reaction times: Validity & comparison w/ polygraph App Cog Psych 24(7) http://is.gd/fhPMW
  • Important (rare) study on polygraph w/ UK sex offenders: leads to more admissions; case mgrs perceive increased risk http://is.gd/eoW4Q

fMRI and other brain scanning:

  • If Brain Scans Really Detected Deception, Who Would Volunteer to be Scanned? J Forensic Sci http://is.gd/eiz2o
  • FMRI & deception: “The production and detection of deception in an interactive game” in Neuropsychologia http://is.gd/eUMO3
  • In the free access PLoS1: fMRI study indicates neural activity associated with deception is valence-related. PLoS One 5(8). http://is.gd/f6IaM

Verbal cues:

  • Distinguishing truthful from invented accounts using reality monitoring criteria – http://ht.ly/2z8FC
  • Detecting Deceptive Discussions in Conference Calls. Linguistic analysis method 50-65% accuracy. SSRN via http://is.gd/eI0bA
  • Effect of suspicion & liars’ strategies on reality monitoring Gnisci, Caso & Vrij in App Cog Psy 24:762–773 http://is.gd/eCFyA

Applied contexts:

  • A new Canadian study on why sex offenders confess during police interrogation (no polygraph necessary) http://is.gd/eoWl7
  • Can fabricated evidence induce false eyewitness testimony? App Cog Psych 24(7) http://is.gd/fhPDd Free access
  • In press, B J Soc Psy Cues to deception in context. http://is.gd/fhPcY Apparently ‘context’ = ‘Jeremy Kyle Show’. Can’t wait for the paper!
  • Can people successfully feign high levels of interrogative suggestibility & compliance when given instructions to malinger? http://ht.ly/2z8Wz

Kids fibbing:

  • Eliciting cues to children’s deception via strategic disclosure of evidence App Cog Psych 24(7) http://is.gd/fhPIS
  • Perceptions about memory reliability and honesty for children of 3 to 18 years old – http://ht.ly/2z8O1

And some other links of interest:

Stress and Deception in Speech: Evaluating Layered Voice Analysis

Hot off the press in the Journal of Forensic Sciences (hat tip Mind Hacks), a study in which a Layered Voice Analysis system was independently tested and found to perform at chance level. In other words, you might as well flip a coin.

Here’s the abstract:

This study was designed to evaluate commonly used voice stress analyzers—in this case the layered voice analysis (LVA) system. The research protocol involved the use of a speech database containing materials recorded while highly controlled deception and stress levels were systematically varied. Subjects were 24 each males/females (age range 18–63 years) drawn from a diverse population. All held strong views about some issue; they were required to make intense contradictory statements while believing that they would be heard/seen by peers. The LVA system was then evaluated by means of a double blind study using two types of examiners: a pair of scientists trained and certified by the manufacturer in the proper use of the system and two highly experienced LVA instructors provided by this same firm. The results showed that the “true positive” (or hit) rates for all examiners averaged near chance (42–56%) for all conditions, types of materials (e.g., stress vs. unstressed, truth vs. deception), and examiners (scientists vs. manufacturers). Most importantly, the false positive rate was very high, ranging from 40% to 65%. Sensitivity statistics confirmed that the LVA system operated at about chance levels in the detection of truth, deception, and the presence of high and low vocal stress states.
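For a sense of what “about chance levels” means in signal-detection terms, here’s a quick back-of-envelope sketch (mine, not the authors’ analysis) that converts hit and false-alarm rates like those reported above into the standard sensitivity index d′, where zero means no ability to discriminate at all:

```python
# Back-of-envelope sketch (mine, not the paper's analysis): converting
# hit and false-alarm rates into the signal-detection index d', where
# d' = 0 means the examiner cannot discriminate at all. Rates below are
# picked from the ranges quoted in the abstract.

from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

for hits, false_alarms in [(0.42, 0.40), (0.56, 0.65), (0.50, 0.50)]:
    print(f"hits={hits:.0%}, false alarms={false_alarms:.0%} -> "
          f"d' = {d_prime(hits, false_alarms):+.2f}")

# d' hovers around zero (and even dips negative): the LVA verdicts carry
# essentially no information about stress or deception -- a coin flip.
```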

You’ll find more on Layered Voice Analysis in the voice analysis category on this blog.

fMRI Lie Detection enters the courtroom

UPDATE! The request to admit a No Lie MRI report in a California case has been withdrawn – Stanford Center for Law & the Biosciences Blog, 25 March 09.

So depressing.

Voodoo science in fMRI and voice analysis to detect deception: compare and contrast

Controversy and debate are the drivers of scientific progress. They force us to re-examine our assumptions, scrutinise our methods and think hard about the meaning of data. Of course, there is another way of dealing with controversy…

Voodoo science in fMRI

If you’re involved or simply interested in fMRI research you’ll already be well aware of the ongoing debate about Voodoo Correlations in Social Neuroscience [pdf]. If not, you’ll find the detail in coverage all over the psych and neuroblogs by googling the title or simply “voodoo correlations”.

Here’s how it went:

1. Edward Vul, Christine Harris, Piotr Winkielman, and Harold Pashler wrote a critique of a series of recent research studies exploring the neural correlates of various social psychological issues. Their paper was accepted by a peer-reviewed journal and will be published later this year.

2. Authors of those criticised research papers wrote careful defences of their work and pointed out problems in Vul et al’s arguments (here and here).

3. Vul et al. responded to the criticisms here.

And the debate continues – watching from the sidelines you get a sense of the passion and the intellect involved, with the process of open debate resulting in further clarification and some concessions on both sides. Ultimately, this debate will result in better understanding of some important issues and better scrutiny of new research. Scientific progress, in other words.

Voodoo science in deception detection

Compare this to another recent controversy that started in the research literature (hat tip to Mind Hacks).

1. In 2007, the International Journal of Speech Language and the Law (a peer reviewed journal) published a critique by Anders Eriksson and Francisco Lacerda of mechanical methods of deception detection that claim to use ‘voice stress analysis’ or ‘layered voice analysis’ to detect deception. It is more pointed and more personal than the Vul et al. critique (commenting on the companies and the individuals involved in developing and marketing such machines), but the authors nevertheless examine the scientific literature carefully and raise some significant problems with the technology as it is marketed.

2. One of the companies named, Nemesysco, threatened to sue.

3. The publishers of IJSLL withdrew the paper (though, this being the age of the internet, you can access it here).

Rather than publish the potentially ground-breaking scientific evidence underpinning their technique, respond to the criticisms or engage in debate, a company uses legal threats to silence criticism. The result is that we have no chance to hear both sides of the story, little chance of increasing our understanding of the techniques or their theoretical basis, further polarisation of the pro- and anti- camps, and bugger all scientific progress. Shame.

Of course, Nemesysco’s actions do mean that a lot more of us know and are talking about the criticism of their technology than had they let the journal article lie (no pun intended).

Research Round-up 5: Polygraphy

Part 5 in the rapid research round-up for 2008 includes some of the articles to appear over the last year relating to physiological detection of deception.

The first paper here is the most interesting to me, particularly because there are rather few published research findings relating to what happens when people are polygraphed in their non-native language, but the others are probably only really of interest to hard-core psychophysiologists. If these all seem pretty heavy then I’d recommend heading over to a delightful post about William Moulton Marston, one of the early pioneers of the polygraph, written by Romeo Vitelli at the Providentia blog, for some light relief.

Bilingual speakers frequently report experiencing greater emotional resonance in their first language compared to their second. In Experiment 1, Turkish university students who had learned English as a foreign language had reduced skin conductance responses (SCRs) when listening to emotional phrases in English compared to Turkish, an effect which was most pronounced for childhood reprimands. A second type of emotional language, reading out loud true and false statements, was studied in Experiment 2… Results suggest that two factors influence the electrodermal activity elicited when bilingual speakers lie in their two languages: arousal due to emotions associated with lying, and arousal due to anxiety about managing speech production in non-native language. Anxiety and emotionality when speaking a non-native language need to be better understood to inform practices ranging from bilingual psychotherapy to police interrogation of suspects and witnesses.

The effects of the state of guilt and the context in which critical information was received on the accuracy of the Concealed Information Test (CIT) were examined in a between-subjects mock crime experiment… Results indicated that accomplices were more effectively detected than innocent participants, although both were given the same critical information. Information gathered in the crime context yielded stronger orientation to the critical items than similar information gathered in a neutral context.

The present mock-crime study concentrated on the validity of the Guilty Actions Test (GAT) and the role of the orienting response (OR) for differential autonomic responding. N = 105 female subjects were assigned to one of three groups: a guilty group, members of which committed a mock-theft; an innocent-aware group, members of which witnessed the theft; and an innocent-unaware group… For informed participants (guilty and innocent-aware), relevant items were accompanied by larger skin conductance responses and heart rate decelerations whereas irrelevant items elicited HR accelerations. Uninformed participants showed a non-systematic response pattern.

Following the idea that response inhibition processes play a central role in concealing information, the present study investigated the influence of a Go/No-go task as an interfering mental activity, performed parallel to the Concealed Information Test (CIT), on the detectability of concealed information… No physiological evidence for an interaction between the parallel task and sub-processes of deception (e.g. inhibition) was found. Subjects’ performance in the Go/No-go parallel task did not contribute to the detection of concealed information.

The Concealed Information Test (CIT) requires the examinee to deceptively deny recognition of known stimuli and to truthfully deny recognition of unknown stimuli. Because deception and orienting are typically coupled, it is unclear how exactly these sub-processes affect the physiological responses measured in the CIT…The present study aimed at separating the effects of deception from those of orienting…The findings further support the notion that psychophysiological measures elicited by a modified CIT may reflect different mental processes involved in orienting and deception.
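As an aside for readers new to the CIT/GAT paradigm these studies use: the classic way of scoring such a test is Lykken’s ranking method. Here’s a minimal sketch (illustrative only – the studies above use their own psychophysiological measures and analyses, and the item names below are invented):

```python
# Minimal sketch of Lykken-style CIT scoring (illustrative only; the
# studies above use their own measures, e.g. SCR amplitudes and heart
# rate changes, and the item names here are invented).

def lykken_score(questions: list) -> int:
    """For each question, award 2 points if the crime-relevant 'probe'
    item drew the largest skin conductance response, 1 point if the
    second largest, 0 otherwise."""
    total = 0
    for q in questions:
        ranked = sorted(q["scrs"], key=q["scrs"].get, reverse=True)
        total += max(0, 2 - ranked.index(q["probe"]))
    return total

exam = [
    {"probe": "kitchen", "scrs": {"kitchen": 0.9, "garage": 0.2, "attic": 0.1, "porch": 0.3}},
    {"probe": "knife",   "scrs": {"knife": 0.7, "rope": 0.6, "gun": 0.8, "club": 0.1}},
]
score = lykken_score(exam)
# Rule of thumb: concealed knowledge is indicated when the total exceeds
# the number of questions; an unknowing suspect orients to the probe no
# more than to the irrelevant items, so their score stays low.
print(score, "-> knowledge indicated" if score > len(exam) else "-> not indicated")
```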

The final part of this research round-up includes papers on children’s deception, and on technotreachery.

Research round-up 2: New technologies and deception detection

Part two of the Deception Blog round-up of “all those articles I haven’t had a chance to blog about”. Part one was about catching liars via non-mechanical techniques. This post covers articles and discussion about new technologies to detect deception, including fMRI and measurement of Event-Related Potentials.

fMRI and deception: discussion on the journal pages

It’s been quite a year for advances in neuroscience and deception detection, so much so that in a recent paper in the Journal of the American Academy of Psychiatry & Law, Daniel Langleben and Frank Dattilio suggested that a new discipline of “forensic MRI” was emerging. One interesting exchange appeared recently in the same journal:

…The new approach promises significantly greater accuracy than the conventional polygraph—at least under carefully controlled laboratory conditions. But would it work in the real world? Despite some significant concerns about validity and reliability, fMRI lie detection may in fact be appropriate for certain applications. This new ability to peer inside someone’s head raises significant questions of ethics. Commentators have already begun to weigh in on many of these questions. A wider dialogue within the medical, neuroscientific, and legal communities would be optimal in promoting the responsible use of this technology and preventing abuses.

…The present article concludes that the use of functional imaging to discriminate truth from lies does not meet the Daubert criteria for courtroom testimony.

…we update and interpret the data described by Simpson, from the points of view of an experimental scientist and a forensic clinician. We conclude that the current research funding and literature are prematurely skewed toward discussion of existing findings, rather than generation of new fMRI data on deception and related topics such as mind-reading, consciousness, morality, and criminal responsibility. We propose that further progress in brain imaging research may foster the emergence of a new discipline of forensic MRI.

Earlier this year Kamila Sip and colleagues challenged proponents of neuroimaging for deception detection to take more account of the real world context in which deception occurs, which led to a robust defence from John-Dylan Haynes and an equally robust rebuttal from Sip et al. It all happened in the pages of Trends in Cognitive Sciences:

With the increasing interest in the neuroimaging of deception and its commercial application, there is a need to pay more attention to methodology. The weakness of studying deception in an experimental setting has been discussed intensively for over half a century. However, even though much effort has been put into their development, paradigms are still inadequate. The problems that bedevilled the old technology have not been eliminated by the new. Advances will only be possible if experiments are designed that take account of the intentions of the subject and the context in which these occur.

In their recent article, Sip and colleagues raise several criticisms that question whether neuroimaging is suitable for lie detection. Here, two of their points are critically discussed. First, contrary to the view of Sip et al., the fact that brain regions involved in deception are also involved in other cognitive processes is not a problem for classification-based detection of deception. Second, I disagree with their proposition that the development of lie-detection requires enriched experimental deception scenarios. Instead, I propose a data-driven perspective whereby powerful statistical techniques are applied to data obtained in real-world scenarios.

…Valid experimental paradigms for eliciting deception are still required, and such paradigms will be particularly difficult to apply in real-life settings… We agree with Haynes, however, that there are important ethical issues at stake for researchers in this field. In our opinion, one of the most important of these is careful consideration of how results derived from highly controlled laboratory settings compare with those obtained from real-life scenarios, and if and when imaging technology should be transferred from the laboratory to the judicial system.

fMRI and deception: new research findings

Of course discussion is worth nothing if you don’t have research results to discuss. Shawn Christ and colleagues delved deeper into the cognitive processes associated with deception:

Previous neuroimaging studies have implicated the prefrontal cortex (PFC) and nearby brain regions in deception. This is consistent with the hypothesis that lying involves the executive control system….Our findings support the notion that executive control processes, particularly working memory, and their associated neural substrates play an integral role in deception. This work provides a foundation for future research on the neurocognitive basis of deception.

Meanwhile, two groups of researchers reported that fMRI techniques can differentiate mistakes and false memories from deliberate deception, with Tatia Lee and colleagues showing that, in the case of feigning memory impairment, deception “is not only more cognitively demanding than making unintentional errors but also utilizes different cognitive processes”.

fMRI and deception in the blogosphere

Commentary and discussion of fMRI was not limited to the pages of scholarly journals, however. A terrific post by Vaughan over at Mind Hacks on the limitations of fMRI studies zipped around the blogosphere (and rightly so) and is well worth a read if you are interested in becoming a more critical consumer of fMRI deception detection studies (see also Neurophilosophy’s post MRI: What is it good for?).

There’s a detailed write-up by Hank Greely of the conference on Law and Neuroscience held at the University of Akron Law School in September, which covers the science, the practicalities and the ethics of using neuroscience in forensic contexts (see also his summary of a presentation at an earlier conference on ‘neurolaw’). Judges, too, are “waking up to the potential misuse of brain-scanning technologies”, with a recent judges’ summit in the US to “discuss protecting courts from junk neuroscience”, reports New Scientist.

Nevertheless, purveyors of MRI lie-detection technology continue to push their wares. For instance, the Antipolygraph Blog picked up a radio discussion on commercial fMRI-based lie detection in June (the audio is still available as an mp3 download).

ERP and deception: the controversial BEOS test

Earlier this year I and many others blogged about the disturbing use of brain scanning in a recent murder trial in India. The technique is known as the Brain Electrical Oscillations Signature test and is based on measuring Event-Related Potentials (electrical activity across the brain). Neurologica blog and Neuroethics and Law have write-ups and links for those who wish to know more.

Neuroethics and Law blog links to a pdf of the judge’s opinion in the case, where pages 58-64 include a summary of the judge’s understanding of the BEOS procedure and what it ‘revealed’ in this case. Most disturbing is the apparent certainty of the judge that the tests were appropriate, scientifically robust and applied correctly by “Sunny Joseph who is working as Assistant Chemical Analyser in Forensic Science Laboratory, Mumbai” (p.55-56):

…competency of this witness to conduct the Test is not seriously challenged. His evidence also reveals that he was working as Clinical Psychologist in National Institute of Mental Health and Neuro Sciences at Bangalore and he has experience in the field of Neuro psychology since last 6 years and in forensic technique since last 1½ years. He has himself conducted approximately 15 Polygraph Tests and has been associated with almost 100 Polygraph Tests. He has conducted 16 BEOS Tests and has been associated in conducting of about 12 Neuro Psychology Tests. Therefore his expertise in my opinion, can in no way be challenged and nothing is brought on record in his cross examination to show that the Tests conducted were not proper and requisite procedure was not followed (p.62).

On a happier note, my hot tip for the New Year is to keep your eye on Social Neuroscience – there are several articles on neural correlates of deception in press there which they are saving up for a special issue in 2009.

More soon – part 3 covers the 2008 flurry of interest in deception and magic!

Could brain scans ever be safe evidence?

New Scientist (3 Oct) asks: Could brain scans ever be safe evidence?

DONNA insists that she met friends for lunch on the afternoon of 25 January 2008 and did not violate a restraining order against Marie. But Marie told police that Donna broke the terms of the order by driving up to her car while it was stopped at a traffic light, yelling and cursing, and then driving off.

A polygraph test proved unsatisfactory: every time Marie’s name was mentioned Donna’s responses went sky-high. But when Donna approached Cephos of Tyngsboro, Massachusetts, for an fMRI scan, which picks up changes in blood flow and oxygenation in the brain, it was a different story.

“Her results indicated that she was telling the truth about the January 25 incident,” says Steven Laken of Cephos, who maintains that when people are lying, more areas of the brain “light up” than when they are telling the truth (see the scans of Donna’s brain).

Unfortunately the rest of the article is restricted to subscribers.

Deception in the news

I’ve been lax posting on the blogs recently, I know (real life interferes with blogging). Consider this a catch-up post with some of the deception-related issues hitting the news stands over the last few weeks.

Polygraphing sex offenders gains momentum in the UK: The Times (20 Sept 2008) reports on a new pilot scheme to polygraph sex offenders to see if they “are a risk to the public or are breaking the terms of their release from jail”.

Brain fingerprinting in the news again: Brain test could be next polygraph (Seattle Post-Intelligencer, 14 Sept):

A Seattle scientist who has developed an electronic brain test that he says could improve our ability to force criminals to reveal themselves, identify potential terrorists and free those wrongly convicted may have finally broken through the bureaucratic barriers that he believes have served to stifle adoption of the pioneering technique.

“There seems to be a renewed surge of interest in this by the intelligence agencies and the military,” said Larry Farwell, neuroscientist and founder of Brain Fingerprinting Laboratories based at the Seattle Science Foundation.

Not-brain-fingerprinting deception detection brain scan procedure isn’t scientific, according to a well-qualified panel (The Hindu, 8 Sept). India’s Directorate of Forensic Sciences chooses not to accept the panel’s findings:

The Directorate of Forensic Sciences, which comes under the Union Ministry of Home, will not accept the findings of the six-member expert committee that looked into brain mapping and its variant “brain electrical oscillation signature (BEOS) profiling” on the ground that the committee has been dissolved.

The six-member technical peer review committee, headed by National Institute of Mental Health and Neuro Sciences (NIMHANS) Director D. Nagaraja, started work in May 2007. The panel had concluded that the two procedures were unscientific and had recommended against their use as evidence in court or as an investigative tool.

  • See also: more on “BEOS profiling” in The Times of India (21 July) which claims that “This brain test maps the truth”.

Perhaps the answer can be found with infrared light? New Scientist (22 Sept) reports on a patent application to develop a new type of brain scanning lie detection technology:

Scott Bunce, at Drexel University’s College of Medicine in Philadelphia, thinks a better solution [to the problem of detecting lies] is to send near-infrared light through the scalp and skull into the brain and see how much is reflected back. And he has designed a special headband that does just that. The amount of reflected light is dependent on the levels of oxygen in the blood, which in turn depends on how active the brain is at that point.

This, he says, gives a detailed picture of real-time activity within the brain that can be used to determine whether the subject is lying. The technique is both cheaper and easier to apply than fMRI and gives a higher resolution than an EEG. …Of course, nobody knows whether brain activity can reliably be decoded to reveal deception, but that’s another question.
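For the curious: although the article doesn’t give the patent’s details, devices of this kind standardly turn reflected near-infrared light into oxygenation estimates via the modified Beer–Lambert law, solving for oxy- and deoxy-haemoglobin changes from measurements at two wavelengths. A rough sketch, with placeholder coefficients rather than real calibration values:

```python
# Generic fNIRS arithmetic, not Bunce's actual design: the modified
# Beer-Lambert law relates changes in optical density (dOD) at each
# wavelength to concentration changes of oxy- and deoxy-haemoglobin.
# Extinction coefficients and path length below are placeholders.

import numpy as np

E = np.array([[1.0, 3.0],   # ~760 nm: deoxy-Hb dominates absorption
              [2.5, 1.5]])  # ~850 nm: oxy-Hb dominates absorption
PATH_LENGTH = 20.0          # effective optical path through tissue (cm)

def haemoglobin_changes(d_od: np.ndarray) -> np.ndarray:
    """Solve dOD = (E * L) @ dC for the concentration changes dC."""
    return np.linalg.solve(E * PATH_LENGTH, d_od)

d_hbo2, d_hb = haemoglobin_changes(np.array([-0.01, 0.03]))
print(f"delta HbO2 = {d_hbo2:+.5f}, delta Hb = {d_hb:+.5f}")
# Rising oxy-Hb with falling deoxy-Hb is the classic signature of
# increased local brain activity -- the raw signal behind the headband.
# Whether that signal says anything about lying is, as the article
# notes, another question entirely.
```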

Polygraph reasoning applied to spotting terrorists…

Remember that the rationale behind the polygraph is that (with an appropriate questioning regime) guilty people are assumed to have physiological responses that differ from innocents’? Well, the new “anxiety-detecting machines” that the DHS hopes might one day spot terrorists seem to work on the same basis. Here’s the report from USA Today (18 Sept):

A scene from the airport of the future: A man’s pulse races as he walks through a checkpoint. His quickened heart rate and heavier breathing set off an alarm. A machine senses his skin temperature jumping. Screeners move in to question him. Signs of a terrorist? Or simply a passenger nervous about a cross-country flight?

It may seem Orwellian, but on Thursday, the Homeland Security Department showed off an early version of physiological screeners that could spot terrorists. The department’s research division is years from using the machines in an airport or an office building— if they even work at all. But officials believe the idea could transform security by doing a bio scan to spot dangerous people.

Critics doubt such a system can work. The idea, they say, subjects innocent travelers to the intrusion of a medical exam.

According to the news report, there is some effort going into testing the equipment, though if the details in the news report are to be believed it sounds like the research is still at a very early stage:

To pinpoint the physiological reactions that indicate hostile intent, researchers… recruited 140 local people with newspaper and Internet ads seeking testers in a “security study.” Each person receives $150.

On Thursday, subjects walked one by one into a trailer with a makeshift checkpoint. A heat camera measured skin temperature. A motion camera watched for tiny skin movements to measure heart and breathing rates. As a screener questioned each tester, five observers in another trailer looked for sharp jumps on the computerized bands that display the person’s physiological characteristics.

Some subjects were instructed in advance to try to cause a disruption when they got past the checkpoint, and to lie about their intentions when being questioned. Those people’s physiological responses are being used to create a database of reactions that signal someone may be planning an attack. More testing is planned for the next year.

The questioning element does make it sound like what is being developed is a ‘remote’ polygraph.

Hat tip to Crim Prof Blog.

UPDATE: Lots of places picking this up all over the www. New Scientist has a post on the same topic here, and an earlier article on the system here. The Telegraph’s report adds some new information.

India’s Novel Use of Brain Scans in Courts Is Debated

According to a report in the New York Times (14 Sept), an Indian judge has taken the results of a brain scan as “proof that the [murder] suspect’s brain held ‘experiential knowledge’ about the crime that only the killer could possess”, and passed a life sentence.

The Brain Electrical Oscillations Signature test, or BEOS, was developed by Champadi Raman Mukundan, a neuroscientist who formerly ran the clinical psychology department of the National Institute of Mental Health and Neuro Sciences in Bangalore. His system builds on methods developed at American universities by other scientists, including Emanuel Donchin, Lawrence A. Farwell and J. Peter Rosenfeld.

Neuroethics and Law Blog comments, as does Dr Lawrence Farwell (inventor of the controversial ‘Brain Fingerprinting’ technique, which bears a passing resemblance to the BEOS test used in India).

Scary stuff.

Lie Detector Technology in court – seduced by neuroscience?

Jeffrey Bellin from the California Courts of Appeal has a paper forthcoming in Temple Law Review on the legal issues involved in deploying new lie detection technology – specifically fMRI technology – in real-world courtroom settings (hat tip to the Neuroethics and Law blog).

Bellin examines the ‘scientific validity’ requirements and argues that the research has progressed to the point where fMRI evidence in deception detection issues will soon reach the standard required to be admissible under the Daubert criteria. However, Bellin’s key issue with using fMRI evidence in court is not on scientific but on legal grounds: he claims that fMRI evidence would fall foul of the hearsay prohibition. He explains that “The hearsay problem arises because lie detector evidence consists of expert analysis of out-of-court statements offered for their truth (i.e., hearsay) and is consequently inadmissible under Federal Rule of Evidence 801 absent an applicable hearsay exception” (p.102).

I am not a lawyer so can’t really comment on the hearsay issue raised by Bellin, except to say that it’s an interesting observation and not one I’ve heard before. I feel better placed to assess his analysis that fMRI technology is only a small step from reaching the Daubert standard. In this Bellin is – in my judgement – way off-beam. His argument runs something like this:

1. The US Government has poured lots of money into lie detection technologies (Bellin quotes a Time magazine guess-timate of “tens of millions to hundreds of millions of dollars” – an uncorroborated rumour, not an established fact).

2. fMRI is “the most promising of the emerging new lie detection technologies” (p.106) because “brain activities will be more difficult to suppress than typical stress reactions measured by traditional polygraph examinations, [so] new technologies like fMRI show great promise for the development of scientifically valid lie detectors” (p.106).

3. Thus, “The infusion of money and energy into the science of lie detection coupled with the pace of recent developments in that science suggest that it is only a matter of time before lie detector evidence meets the Daubert threshold for scientific validity.” (p.107).

And the references he provides for this analysis? Steve Silberman’s “Don’t Even Think About Lying” in Wired Magazine from 2006, and a piece in Time magazine the same year, entitled “How to Spot a Liar“, by Jeffrey Kluger and Coco Masters. Now both of these articles are fine pieces of journalism, but they hardly constitute good grounds for Bellin’s assertion that fMRI technology is almost ready to be admitted in court. (And if you’re going to use journalistic pieces as references, can I recommend, as a much better source, an excellent article: “Duped: Can brain scans uncover lies?” by Margaret Talbot from The New Yorker [July 2, 2007].)

Let’s just remind ourselves of the Daubert criteria. To paraphrase the comprehensive Wikipedia page, before expert testimony can be entered into evidence it must be relevant to the case at hand, and the expert’s conclusions must be scientific. This latter condition means that a judge deciding on whether to admit expert testimony based on a technique has to address five points:

1. Has the technique been tested in actual field conditions (and not just in a laboratory)?
2. Has the technique been subject to peer review and publication?
3. What is the known or potential rate of error? Is it zero, or low enough to be close to zero?
4. Do standards exist for the control of the technique’s operation?
5. Has the technique been generally accepted within the relevant scientific community?

As far as fMRI for lie detection is concerned I think the answers are:

  1. No, with only a couple of exceptions.
  2. Yes, though there is a long way to go before the technique has been tested in relevant conditions.
  3. In some lab conditions, accuracy rates reach 95%. But what about in real-life situations? We don’t have enough research to say (see the sketch after this list for why the lab figure alone settles little).
  4. There are no published or agreed standards for undertaking deception detection fMRI scans.
  5. No, the arguments are still raging!
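To unpack point 3 a little: a laboratory accuracy figure says little about the “known or potential rate of error” in the field, because the error rate among the test’s verdicts depends on the base rate of lying in the population being examined. A quick illustration (my own arithmetic, with invented base rates, not figures from Bellin or the fMRI literature):

```python
# Illustrative arithmetic only: how a 95%-accurate test performs in the
# field depends on the base rate of lying among examinees. Sensitivity
# and specificity are both set to 0.95; the base rates are invented.

def false_alarm_share(sensitivity: float, specificity: float, base_rate: float) -> float:
    """Of everyone the scanner flags as 'deceptive', what fraction is truthful?"""
    flagged_liars = sensitivity * base_rate
    flagged_truthful = (1 - specificity) * (1 - base_rate)
    return flagged_truthful / (flagged_liars + flagged_truthful)

for base_rate in (0.5, 0.1, 0.01):
    share = false_alarm_share(0.95, 0.95, base_rate)
    print(f"liars are {base_rate:.0%} of examinees -> "
          f"{share:.0%} of 'deceptive' verdicts hit truthful people")

# At a 50% base rate only ~5% of 'deceptive' verdicts are false alarms,
# but if just 1% of examinees are lying, over 80% of them are -- exactly
# the kind of 'rate of error' question a Daubert judge has to weigh.
```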

As an example of 5, one of the crucial arguments is over the interpretation of the results of fMRI experiments (Logothetis, 2008). Mind Hacks had a terrific article a few weeks ago in which they summarise the key issue:

It starts with this simple question: what is fMRI measuring? When we talk about imaging experiments, we usually say it measures ‘brain activity’, but you may be surprised to know that no-one’s really sure what this actually means.

And as Jonah Lehrer points out more recently:

[...T]he critical flaw of such studies is that they neglect the vast interconnectivity of the brain… Because large swaths of the cortex are involved in almost every aspect of cognition – even a mind at rest exhibits widespread neural activity – the typical fMRI image, with its highly localized spots of color, can be deceptive. The technology makes sense of the mind by leaving lots of stuff out – it attempts to sort the “noise” from the “signal” – but sometimes what’s left out is essential to understanding what’s really going on.

Bellin is not alone in perhaps being seduced by the fMRI myth, as two recent studies (McCabe & Castel, 2007; Weisberg et al., 2008) demonstrate very nicely. McCabe and Castel showed that participants judged news stories as ‘more scientific’ when accompanied by images of brain scans than without, and Weisberg et al.’s participants rated bad explanations of psychological phenomena as more scientifically sound when they included a spurious neuroscience reference. Why are people so beguiled by the blobs in the brain? Here are McCabe and Castel, quoted in the BPS Research Blog:

McCabe and Castel said their results show people have a “natural affinity for reductionistic explanations of cognitive phenomena, such that physical representations of cognitive processes, like brain images, are more satisfying, or more credible, than more abstract representations, like tables or bar graphs.”


Deception research across the blogosphere

The physiology of lying by exaggerating: Over at the BPS Research Digest Blog, a summary of research that has caused ripples around the media – lying by exaggeration doesn’t seem to cause the typical physiological arousal effects that some associate with liars:

Telling lies about our past successes can sometimes be self-fulfilling, at least when it comes to exam performance. That’s according to the New York Times, which reports on studies by Richard Gramzow at the University of Southampton and colleagues.

Their research has shown that, when asked, many students exaggerate their past exam performance, and that those students who do this tend to go on to perform better in the future.

What’s more, a study published in February showed that when these exaggerators are interviewed about their past academic performance, they don’t show any of the physiological hallmarks associated with lying, but rather their bodies stay calm. It’s almost as though this is a different kind of lying, aimed more at the self, with the hope of encouraging improved future performance.

More commentary on this research over at Deric Bownds’ Mind Blog.

Two popular articles on deception: Via the Situationist Blog (7 April), a link to an article in Forbes on “how to sniff out a liar” (which doesn’t include any hints for olfactory detection of deceivers!). And hat tip to the Antipolygraph Blog (16 April) for pointing us to The Lie of Lie Detectors by Rob Shmerling:

Recently, two studies announced effective ways to determine whether a person was telling the truth — one used a brain scan while the other detected heat around the face. Since you probably tell the truth all of the time, it is likely that these reports will have no direct bearing on you. But, for those who perform lie detector tests or for those who might be asked to submit to one, these techniques could someday change how these tests are performed.

The Pentagon’s “Porta-Poly”: The news that the Pentagon is trialling a ‘pocket lie detector’ known as the Preliminary Credibility Assessment Screening System (PCASS) for soldiers has been picked up and commented upon by a number of sources including Bruce Schneier and the Anti-Polygraph Blog, but don’t skip the original MSN story which is well worth reading.

Update: Missed one: Over at Practical Ethics, Fighting Absenteeism with Voice Analysis (16 May). The news that some companies are apparently considering using this discredited technology to check up on workers calling in sick is chilling.

Why the spurious link to deception?

From New Scientist (4 April):

Our skin may contain millions of tiny “antennas” in the form of microscopic sweat ducts, say researchers in Israel. In experiments, they found evidence that signals produced by bouncing electromagnetic waves off the tiny tubes might reveal a person’s physical and emotional state from a distance.

So far so good, but then:

The research might eventually result in lie detectors that require no physical contact with the subject.

Why the spurious link to deception? The original article doesn’t mention it – the authors’ comment about the possible application of the technique is this:

This phenomenon can be used as the basis for a generic remote sensing technique for providing a spatial map of the sweat gland activity of the examined subjects. As the mental state and sweat gland activity are correlated it has the potential to become a method for providing by remote sensing information regarding some physiological parameters and the mental state of the patients.

I guess that just isn’t as sexy as “hey, what about this as a lie detector!”.

As several erudite commenters on Slashdot have noted, despite the common misconception, lying does not necessarily lead to a stress reaction in the deceiver. And people can have stress reactions when they are telling the truth. So machines that measure stress can be very unreliable detectors of deceit.

Voice Stress Analysis: Only 15 Percent of Lies About Drug Use Detected in Field Test

The latest issue of the National Institute of Justice journal (NIJ Journal No. 259, March 2008) features a great article by Kelly Damphousse summarising recent research on voice stress analysis (VSA). Here’s an extract:

According to a recent study funded by the National Institute of Justice (NIJ), two of the most popular VSA programs in use by police departments across the country are no better than flipping a coin when it comes to detecting deception regarding recent drug use. The study’s findings also noted, however, that the mere presence of a VSA program during an interrogation may deter a respondent from giving a false answer.

The findings of our study revealed:

  • Deceptive respondents. Fifteen percent who said they had not used drugs—but who, according to their urine tests, had—were correctly identified by the VSA programs as being deceptive.
  • Nondeceptive respondents. Eight and a half percent who were telling the truth—that is, their urine tests were consistent with their statements that they had or had not used drugs—were incorrectly classified by the VSA programs as being deceptive.

Using these percentages to determine the overall accuracy rates of the two VSA programs, we found that their ability to accurately detect deception about recent drug use was about 50 percent.

Based solely on these statistics, it seems reasonable to conclude that these VSA programs were not able to detect deception about drug use, at least to a degree that law enforcement professionals would require—particularly when weighed against the financial investment. We did find, however, that arrestees who were questioned using the VSA instruments were less likely to lie about illicit drug use compared to arrestees whose responses were recorded by the interviewer with pen and paper.
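It’s worth seeing how a 15 per cent hit rate and an 8.5 per cent false-alarm rate wash out to roughly coin-flip overall accuracy. Here’s a quick reconstruction of the arithmetic (mine, not the NIJ study’s own computation – the split between deceptive and truthful respondents is assumed for illustration):

```python
# Reconstructing the round numbers (an illustration, not the NIJ study's
# own computation). The share of respondents who actually lied is an
# assumption made for the example.

HIT_RATE = 0.15           # deceptive respondents correctly flagged
FALSE_ALARM_RATE = 0.085  # truthful respondents wrongly flagged

def overall_accuracy(deceptive_share: float) -> float:
    correct_on_liars = HIT_RATE * deceptive_share
    correct_on_truthful = (1 - FALSE_ALARM_RATE) * (1 - deceptive_share)
    return correct_on_liars + correct_on_truthful

for share in (0.3, 0.5, 0.7):
    print(f"if {share:.0%} of respondents lied: "
          f"overall accuracy = {overall_accuracy(share):.0%}")

# With about half the sample deceptive, accuracy lands in the low 50s --
# "no better than flipping a coin". And note where the errors fall: the
# programs miss 85% of the very liars they are bought to catch.
```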

Damphousse concludes:

It is important to look at both “hard” and “hidden” costs when deciding whether to purchase or maintain a VSA program. The monetary costs are substantial: it can cost up to $20,000 to purchase LVA. The average cost of CVSA® training and equipment is $11,500. Calculating the current investment nationwide—more than 1,400 police departments currently use CVSA®, according to the manufacturer—the total cost is more than $16 million not including the manpower expense to use it.

The hidden costs are, of course, more difficult to quantify. As VSA programs come under greater scrutiny—due, in part, to reports of false confessions during investigations that used VSA—the overall value of the technology continues to be questioned.

Quick round up of deception news

Sorry for the slow posting recently – real life is getting in the way of blogging at the moment, and is likely to continue to do so for some time yet, so please bear with me. Perhaps some of these items will give you your deception research fix in the meantime.

If you’d like something to listen to during the daily commute why not download an interview with John F. Sullivan, author of Gatekeeper: Memoirs of a CIA Polygraph Examiner (h/t Antipolygraph Blog).

Alternatively, try a short NPR Morning Edition segment on the neuropsychology of lying (h/t and see also The Frontal Cortex).

The ever-interesting BPS Research Digest discusses a study of how toddlers tell a joke from a mistake. According to the researchers, Elena Hoicka and Merideth Gattis:

…the ability to recognise humorous intent comes after the ability to recognise jokes, but before the ability to recognise pretense and lies. “We propose that humour understanding is an important step toward understanding that human actions can be intentional not just when actions are right, but even when they are wrong,” they concluded.

Karen Franklin has a terrific commentary on the Wall Street Journal’s discussion of a subscale of the MMPI, which claims to detect malingerers but which, according to critics, results in a large number of false positives (i.e., labelling truthful test-takers as malingerers). (See also a short commentary by Steven Erikson).

There are two articles by Jeremy Dean of the glorious PsyBlog on false memories (here and here).

And finally, Kai Chang at Overcoming Bias reports on an unusual teaching technique which involves asking students to spot the Lie of the Day.

Simple test improves accuracy of polygraph results

A press release from Blackwell Publishing (28 Nov) highlights a new study coming out in the next issue of the journal Psychophysiology.

In order to prevent false positive results in polygraph examinations, testing is set to err on the side of caution. This protects the innocent, but increases the chances that a guilty suspect will go unidentified. A new study published in Psychophysiology finds that the use of a written test, known as Symptom Validity Testing (SVT), in conjunction with polygraph testing may improve the accuracy of results.

SVT is an independent measure that tests an entirely different psychological mechanism than polygraph examinations. It is based on the rationale that, when presented with both real and plausible-but-unrelated crime information, innocent suspects will show a random pattern of results when asked questions about the crime. SVT has previously been shown as effective in detecting post-traumatic stress disorder, amnesia and other perceptual deficits for specific events.

The study finds that SVT is also an easy and cost-effective method for determining whether or not a suspect is concealing information. In simulated cases of mock crime questioning and feigned amnesia, it accurately detected when a participant was lying.

Furthermore, when used in combination with the preexisting but relatively uncommon concealed information polygraph test (CIT), test accuracy is found to be higher than when either technique is used alone.

“We showed that the accuracy of a Concealed Information Test can be increased by adding a simple pencil and paper test,” says lead author Ewout Meijer of Maastricht University. “When ‘guilty’ participants were forced to choose one answer for each question, a substantial proportion did not succeed in producing the random pattern that can be expected from ‘innocent’ participants.”
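To make the logic concrete: the “random pattern that can be expected from ‘innocent’ participants” is just the binomial distribution, so a score markedly below chance betrays concealed knowledge. A minimal sketch of that reasoning (illustrative only – the question counts and cut-offs here are invented, not Meijer et al.’s protocol):

```python
# Illustrative sketch of the SVT rationale (question counts and cut-offs
# invented, not Meijer et al.'s protocol). Each forced-choice question
# pairs one crime-relevant detail with plausible alternatives; an
# unknowing suspect picks at random, so their correct answers follow a
# binomial distribution. Scoring far *below* chance suggests the suspect
# knew which answers to avoid.

from math import comb

def below_chance_p(correct: int, questions: int, options: int = 2) -> float:
    """Exact binomial probability of scoring this low or lower by chance."""
    p = 1 / options
    return sum(
        comb(questions, k) * p**k * (1 - p) ** (questions - k)
        for k in range(correct + 1)
    )

# 15 two-option questions; random guessing yields about 7-8 correct.
for correct in (8, 4, 1):
    print(f"{correct}/15 correct: "
          f"P(this low by chance) = {below_chance_p(correct, 15):.4f}")

# 8/15 is unremarkable (p ~ 0.70); 1/15 would happen by luck about 5
# times in 10,000 -- strong evidence of concealed crime knowledge.
```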


NPR on lie detection

Hat tip to blog.bioethics.net (a great blog associated with the American Journal of Bioethics):

This past week NPR’s Morning Edition carried a three-part series about lie detection reported by Dina Temple-Raston. (The segments are posted as both audio and text, so they’re easy to scan if you can’t listen.) The series covers the questionable accuracy of polygraphs, the emerging field of lie detection by fMRI, and the examination of facial “micro expressions” for hints of lies.

Head over to blog.bioethics.net for some commentary, or go straight to the NPR site for more details.

Computer voice stress analyzer tests debated

Another TV exposé of the use of Computer Voice Stress Analyzers in the USA, this time from Colorado’s 9 News (1 November):

A device used by Colorado law enforcement agencies to identify when someone is lying, may not work and may be costing taxpayers money. Computer Voice Stress Analyzers (CVSAs) claim to measure changes in a person’s voice that indicate a lie. However, three recent studies say the device does not accurately tell the difference between a person lying and a person telling the truth. CVSAs have been used by 21 law enforcement agencies in Colorado.

Applying fMRI to the question of guilt versus innocence – on TV and then in an academic journal…

A press release (2 Nov) heralds the publication of a new study by Professor Sean Spence from the University of Sheffield, who claims the research shows that fMRI “could be used alongside other factors to address questions of guilt versus innocence”. It’s an interesting study on two counts: one, it appears to be the first time that fMRI lie-detection research has been carried out using a real-world case (as opposed to contrived experiments), and two, the research was funded by a TV company and featured in a TV documentary earlier this year. The study is currently in press in the journal European Psychiatry (reference below).

The press release gives a summary of the findings:

An academic at the University of Sheffield has used groundbreaking technology to investigate the potential innocence of a woman convicted of poisoning a child in her care. Professor Sean Spence, who has pioneered the use of functional Magnetic Resonance Imaging (fMRI) to detect lies, carried out groundbreaking experiments on the woman who, despite protesting her innocence, was sentenced to four years in prison. ….Using the technology, Professor Spence examined the woman’s brain activity as she alternately confirmed her account of events and that of her accusers. The tests demonstrated that when she agreed with her accusers’ account of events she activated extensive regions of her frontal lobes and also took significantly longer to respond – these findings have previously been found to be consistent with false or untrue statements.

In the acknowledgements section of the paper the authors reveal that the study “was funded by Quickfire Media in association with Channel Four Television”. The case Spence et al. describe as that of “Woman X” was featured in Channel 4’s Lie Lab series (and if you’re really interested, you can easily identify X in a couple of clicks). Although unusual, this isn’t the first time that research featured on TV has found its way into academic journals: see, for example, Haslam and Reicher’s academic publications based on their controversial televised replication of Zimbardo’s Stanford Prison Experiment.

In theory, I am not sure it necessarily matters if a study is done for TV, provided the study is carried out in an ethical and scientific way and the subsequent article(s) meet rigorous standards of peer review. Nor does it always matter if the academic research then receives wider publicity as a result. In this case, however, I hope that anyone picking up and reporting further on this story reads the actual paper, in which Spence and his co-authors carefully consider the implications of the study and the caveats that should be applied to the results:

To our knowledge, this is the first case described where fMRI or any other form of functional neuroimaging has been used to study truths and lies derived from a genuine ‘real-life’ scenario, where the events described pertain to a serious forensic case. All the more reason then for us to remain especially cautious while interpreting our findings and to ensure that we make explicit their limitations: the weaknesses of our approach (p.4).

The authors go on to discuss alternative interpretations of their results: Perhaps X had told her story so many times that her responses were automatic? Perhaps the emotive nature of the subject under discussion (poisoning a child) gave rise to the observed pattern of activation? Maybe X used countermeasures (such as moving her head or using cognitive distractions)? Perhaps she “has ‘convinced herself’ of her innocence … she answered sincerely though ‘incorrectly’”? In this case, perhaps the researchers have “merely imaged ‘self-deception’” (p.5)? For each argument, the authors discuss the pros and cons, remaining careful not to claim too much for their results, and pointing out that further empirical enquiry is needed.

These cautions are also echoed in Spence’s comments at the end of the press release:

“This research provides a fresh opportunity for the British legal system as it has the potential to reduce the number of miscarriages of justice. However, it is important to note that, at the moment, this research doesn’t prove that this woman is innocent. Instead, what it clearly demonstrates is that her brain responds as if she were innocent.”

See also:

  • Mind Hacks discusses an article in which Raymond Tallis “laments the rise of ‘neurolaw’ where brain scan evidence is used in court in an attempt to show that the accused was not responsible for their actions”.
  • Deception Blog posts on brain scanning and deception
