- Insurance claim fraudsters “think too much”. Some great Portsmouth Uni research covered by Irish Independent http://retwt.me/1P8R0
- “If You Want to Catch a Liar, Make Him Draw” David DiSalvo @Neuronarrative on more great Portsmouth Uni research http://retwt.me/1P8ZB
- fMRI scans of people with schizophrenia show they have same functional anatomical distinction between truth telling & deception as others http://bit.ly/aO5cI2 via @Forpsych
- In press: Promising to tell the truth makes 8–16-year-olds more honest (but lectures on morality don’t). Beh Sciences & Law http://is.gd/fCa7X
Gah, Twitter update widget broken. Here are the deception-relevant tweets from the last few weeks:
Polygraph and similar:
- Detecting concealed information w/ reaction times: Validity & comparison w/ polygraph App Cog Psych 24(7) http://is.gd/fhPMW
- Important (rare) study on polygraph w/ UK sex offenders: leads to more admissions; case mgrs perceive increased risk http://is.gd/eoW4Q
fMRI and other brain scanning:
- If Brain Scans Really Detected Deception, Who Would Volunteer to be Scanned? J Forensic Sci http://is.gd/eiz2o
- FMRI & deception: “The production and detection of deception in an interactive game” in Neuropsychologia http://is.gd/eUMO3
- In the free access PLoS1: fMRI study indicates neural activity associated with deception is valence-related. PLoS One 5(8). http://is.gd/f6IaM
- Distinguishing truthful from invented accounts using reality monitoring criteria – http://ht.ly/2z8FC
- Detecting Deceptive Discussions in Conference Calls. Linguistic analysis method 50-65% accuracy. SSRN via http://is.gd/eI0bA
- Effect of suspicion & liars’ strategies on reality monitoring Gnisci, Caso & Vrij in App Cog Psy 24:762–773 http://is.gd/eCFyA
- A new Canadian study on why sex offenders confess during police interrogation (no polygraph necessary) http://is.gd/eoWl7
- Can fabricated evidence induce false eyewitness testimony? App Cog Psych 24(7) http://is.gd/fhPDd Free access
- In press, B J Soc Psy Cues to deception in context. http://is.gd/fhPcY Apparently ‘context’ = ‘Jeremy Kyle Show’. Can’t wait for the paper!
- Can people successfully feign high levels of interrogative suggestibility & compliance when given instructions to malinger? http://ht.ly/2z8Wz
- Eliciting cues to children’s deception via strategic disclosure of evidence App Cog Psych 24(7) http://is.gd/fhPIS
- Perceptions about memory reliability and honesty for children of 3 to 18 years old – http://ht.ly/2z8O1
And some other links of interest:
- “How to Catch a Terrorist: Read His Brainwaves-ORLY?” Wired Danger Room is sceptical about P300 tests as CT measure http://is.gd/f5JFT
- RT @vaughanbell: Good piece on the attempts to get dodgy fMRI lie detection technology introduced to the courtroom. http://is.gd/eSdP6
- Robots learn how to deceive http://bit.ly/bTPCHh
UPDATE! Request to admit No Lie MRI report in California case is withdrawn (Stanford Center for Law & the Biosciences Blog, 25 March 09).
So depressing. Here’s the coverage so far:
- Stanford Center for Law & the Biosciences Blog, it appears, broke the story (14 Mar)
- Brief comments from the Neuroethics and Law Blog (15 Mar)
- Detailed report from Wired Science in MRI Lie Detection to Get First Day in Court (16 Mar)
- Karen Franklin’s In the News Blog offers further thoughts and links to previous posts on the limitations of fMRI for lie detection (16 Mar)
Part two of the Deception Blog round-up of “all those articles I haven’t had a chance to blog about”. Part one was about catching liars via non-mechanical techniques. This post covers articles and discussion about new technologies to detect deception, including fMRI and measurement of Event-Related Potentials.
fMRI and deception: discussion on the journal pages
It’s been quite a year for advances in neuroscience and deception detection, so much so that in a recent paper in the Journal of the American Academy of Psychiatry & Law, Daniel Langleben and Frank Dattilio suggested that a new discipline of “forensic MRI” was emerging. One interesting exchange appeared recently in the same journal:
Joseph R. Simpson (2008). Functional MRI Lie Detection: Too Good to be True? Journal of the American Academy of Psychiatry & Law 36(4):491-498
…The new approach promises significantly greater accuracy than the conventional polygraph—at least under carefully controlled laboratory conditions. But would it work in the real world? Despite some significant concerns about validity and reliability, fMRI lie detection may in fact be appropriate for certain applications. This new ability to peer inside someone’s head raises significant questions of ethics. Commentators have already begun to weigh in on many of these questions. A wider dialogue within the medical, neuroscientific, and legal communities would be optimal in promoting the responsible use of this technology and preventing abuses.
James R. Merikangas (2008). Commentary: Functional MRI Lie Detection. Journal of the American Academy of Psychiatry & Law 36(4): 499-501
…The present article concludes that the use of functional imaging to discriminate truth from lies does not meet the Daubert criteria for courtroom testimony.
Daniel D. Langleben and Frank M. Dattilio (2008). Commentary: The Future of Forensic Functional Brain Imaging. Journal of the American Academy of Psychiatry & Law 36(4): p. 502-504
…we update and interpret the data described by Simpson, from the points of view of an experimental scientist and a forensic clinician. We conclude that the current research funding and literature are prematurely skewed toward discussion of existing findings, rather than generation of new fMRI data on deception and related topics such as mind-reading, consciousness, morality, and criminal responsibility. We propose that further progress in brain imaging research may foster the emergence of a new discipline of forensic MRI.
Earlier this year Kamila Sip and colleagues challenged proponents of neuroimaging for deception detection to take more account of the real world context in which deception occurs, which led to a robust defence from John-Dylan Haynes and an equally robust rebuttal from Sip et al. It all happened in the pages of Trends in Cognitive Sciences:
- Kamila E Sip, Andreas Roepstorff, William McGregor and Chris D Frith (2008). Detecting deception: the scope and limits. Trends in Cognitive Sciences 12(2):48-53
With the increasing interest in the neuroimaging of deception and its commercial application, there is a need to pay more attention to methodology. The weakness of studying deception in an experimental setting has been discussed intensively for over half a century. However, even though much effort has been put into their development, paradigms are still inadequate. The problems that bedevilled the old technology have not been eliminated by the new. Advances will only be possible if experiments are designed that take account of the intentions of the subject and the context in which these occur.
John-Dylan Haynes (2008). Detecting deception from neuroimaging signals – a data-driven perspective. Trends in Cognitive Sciences 12(4):126-127
In their recent article, Sip and colleagues raise several criticisms that question whether neuroimaging is suitable for lie detection. Here, two of their points are critically discussed. First, contrary to the view of Sip et al., the fact that brain regions involved in deception are also involved in other cognitive processes is not a problem for classification-based detection of deception. Second, I disagree with their proposition that the development of lie-detection requires enriched experimental deception scenarios. Instead, I propose a data-driven perspective whereby powerful statistical techniques are applied to data obtained in real-world scenarios.
Kamila E. Sip, Andreas Roepstorff, William McGregor and Chris D. Frith (2008). Response to Haynes: There’s more to deception than brain activity. Trends in Cognitive Sciences 12(4):127-128
…Valid experimental paradigms for eliciting deception are still required, and such paradigms will be particularly difficult to apply in real-life settings… We agree with Haynes, however, that there are important ethical issues at stake for researchers in this field. In our opinion, one of the most important of these is careful consideration of how results derived from highly controlled laboratory settings compare with those obtained from real-life scenarios, and if and when imaging technology should be transferred from the laboratory to the judicial system.
fMRI and deception: new research findings
Of course discussion is worth nothing if you don’t have research results to discuss. Shawn Christ and colleagues delved deeper into the cognitive processes associated with deception:
Shawn E Christ, David C Van Essen, Jason M Watson, Lindsay E Brubaker, and Kathleen B McDermott (in press). The Contributions of Prefrontal Cortex and Executive Control to Deception: Evidence from Activation Likelihood Estimate Meta-analyses. Cerebral Cortex Advance Access published online on November 2, 2008
Previous neuroimaging studies have implicated the prefrontal cortex (PFC) and nearby brain regions in deception. This is consistent with the hypothesis that lying involves the executive control system….Our findings support the notion that executive control processes, particularly working memory, and their associated neural substrates play an integral role in deception. This work provides a foundation for future research on the neurocognitive basis of deception.
Meanwhile, two groups of researchers reported that fMRI techniques can differentiate mistakes and false memories from deliberate deception, with Tatia Lee and colleagues showing that in the case of feigning memory impairment, deception “is not only more cognitively demanding than making unintentional errors but also utilizes different cognitive processes”:
Nobuhito Abe, Jiro Okuda, Maki Suzuki, Hiroshi Sasaki, Tetsuya Matsuda, Etsuro Mori, Minoru Tsukada, and Toshikatsu Fujii (2008). Neural Correlates of True Memory, False Memory, and Deception. Cerebral Cortex 18(12):2811-2819
Tatia M C Lee, Ricky K C Au, Ho-Ling Liu, K H Ting, Chih-Mao Huang, and Chetwyn C H Chan (in press). Are errors differentiable from deceptive responses when feigning memory impairment? An fMRI study. Brain and Cognition, published online 18 Oct 2008.
fMRI and deception in the blogosphere
Commentary and discussion of fMRI was not limited to the pages of scholarly journals, however. A terrific post by Vaughan over at Mind Hacks on the limitations of fMRI studies zipped around the blogosphere (and rightly so) and is well worth a read if you are interested in becoming a more critical consumer of fMRI deception detection studies (see also Neurophilosophy’s post MRI: What is it good for?).
There’s a detailed write-up by Hank Greely of the University of Akron Law School’s conference on Law and Neuroscience held in September, which covers the science, the practicalities and the ethics of using neuroscience in forensic contexts (see also his summary of a presentation at an earlier conference on ‘neurolaw’). Judges, too, are “waking up to the potential misuse of brain-scanning technologies”, with a recent judges’ summit in the US to “discuss protecting courts from junk neuroscience”, reports New Scientist.
Nevertheless, purveyors of MRI lie-detection technology continue to push their wares. For instance, the Antipolygraph Blog picked up a radio discussion on commercial fMRI-based lie detection in June (the audio is still available as an mp3 download).
ERP and deception: the controversial BEOS test
Earlier this year I and many others blogged about the disturbing use of brain scanning in a recent murder trial in India. The technique is known as the Brain Electrical Oscillations Signature test and is based on measuring Event-Related Potentials (electrical activity across the brain). Neurologica blog and Neuroethics and Law have write-ups and links for those who wish to know more.
Neuroethics and Law blog links to a pdf of the judge’s opinion in the case, where pages 58-64 include a summary of the judge’s understanding of the BEOS procedure and what it ‘revealed’ in this case. Most disturbing is the apparent certainty of the judge that the tests were appropriate, scientifically robust and applied correctly by “Sunny Joseph who is working as Assistant Chemical Analyser in Forensic Science Laboratory, Mumbai” (p.55-56):
…competency of this witness to conduct the Test is not seriously challenged. His evidence also reveals that he was working as Clinical Psychologist in National Institute of Mental Health and Neuro Sciences at Bangalore and he has experience in the field of Neuro psychology since last 6 years and in forensic technique since last 1½ years. He has himself conducted approximately 15 Polygraph Tests and has been associated with almost 100 Polygraph Tests. He has conducted 16 BEOS Tests and has been associated in conducting of about 12 Neuro Psychology Tests. Therefore his expertise in my opinion, can in no way be challenged and nothing is brought on record in his cross examination to show that the Tests conducted were not proper and requisite procedure was not followed (p.62).
On a happier note, my hot tip for the New Year is to keep your eye on Social Neuroscience – there are several articles on neural correlates of deception in press there which they are saving up for a special issue in 2009.
More soon – part 3 covers the 2008 flurry of interest in deception and magic!
New Scientist (3 Oct) asks: Could brain scans ever be safe evidence?
DONNA insists that she met friends for lunch on the afternoon of 25 January 2008 and did not violate a restraining order against Marie. But Marie told police that Donna broke the terms of the order by driving up to her car while it was stopped at a traffic light, yelling and cursing, and then driving off.
A polygraph test proved unsatisfactory: every time Marie’s name was mentioned Donna’s responses went sky-high. But when Donna approached Cephos of Tyngsboro, Massachusetts, for an fMRI scan, which picks up changes in blood flow and oxygenation in the brain, it was a different story.
“Her results indicated that she was telling the truth about the January 25 incident,” says Steven Laken of Cephos, who maintains that when people are lying, more areas of the brain “light up” than when they are telling the truth (see the scans of Donna’s brain).
Unfortunately the rest of the article is restricted to subscribers.
I’ve been lax posting on the blogs recently, I know (real life interferes with blogging). Consider this a catch-up post with some of the deception-related issues hitting the news stands over the last few weeks.
Polygraphing sex offenders gains momentum in the UK: a new pilot scheme will polygraph test sex offenders to see if they “are a risk to the public or are breaking the terms of their release from jail”, according to The Times (20 Sept 2008).
Brain fingerprinting in the news again: Brain test could be next polygraph (Seattle Post-Intelligencer, 14 Sept):
A Seattle scientist who has developed an electronic brain test that he says could improve our ability to force criminals to reveal themselves, identify potential terrorists and free those wrongly convicted may have finally broken through the bureaucratic barriers that he believes have served to stifle adoption of the pioneering technique.
“There seems to be a renewed surge of interest in this by the intelligence agencies and the military,” said Larry Farwell, neuroscientist and founder of Brain Fingerprinting Laboratories based at the Seattle Science Foundation.
Not-brain-fingerprinting deception detection brain scan procedure isn’t scientific, according to a well-qualified panel (The Hindu, 8 Sept). India’s Directorate of Forensic Sciences chooses not to accept the panel’s findings:
The Directorate of Forensic Sciences, which comes under the Union Ministry of Home, will not accept the findings of the six-member expert committee that looked into brain mapping and its variant “brain electrical oscillation signature (BEOS) profiling” on the ground that the committee has been dissolved.
The six-member technical peer review committee, headed by National Institute of Mental Health and Neuro Sciences (NIMHANS) Director D. Nagaraja, started work in May 2007. The panel had concluded that the two procedures were unscientific and had recommended against their use as evidence in court or as an investigative tool.
See also: more on “BEOS profiling” in The Times of India (21 July) which claims that “This brain test maps the truth”.
Scott Bunce, at Drexel University’s College of Medicine in Philadelphia, thinks a better solution [to the problem of detecting lies] is to send near-infrared light through the scalp and skull into the brain and see how much is reflected back. And he has designed a special headband that does just that. The amount of reflected light is dependent on the levels of oxygen in the blood, which in turn depends on how active the brain is at that point.
This, he says, gives a detailed picture of real-time activity within the brain that can be used to determine whether the subject is lying. The technique is both cheaper and easier to apply than fMRI and gives a higher resolution than an EEG. …Of course, nobody knows whether brain activity can reliably be decoded to reveal deception, but that’s another question.
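For readers curious about the arithmetic behind this, the standard way to turn reflected-light measurements into oxygenation estimates is the modified Beer–Lambert law: measure the change in optical density at two wavelengths, then solve a small linear system for changes in oxy- and deoxy-haemoglobin. A minimal sketch follows — the extinction coefficients, path length and pathlength factor are rough illustrative values I've chosen for the example, not Bunce's actual parameters:

```python
import numpy as np

# Illustrative extinction coefficients (1/(mM*cm)) for [HbO, HbR] at two
# wavelengths typical of near-infrared imaging. Rough values for the sketch.
EPSILON = np.array([[0.38, 1.67],   # ~760 nm: deoxy-haemoglobin absorbs more
                    [1.06, 0.69]])  # ~850 nm: oxy-haemoglobin absorbs more

def hb_concentration_changes(i0, i, path_cm=3.0, dpf=6.0):
    """Modified Beer-Lambert law: convert baseline (i0) and current (i)
    detected light intensities at two wavelengths into changes in oxy-
    (HbO) and deoxy-haemoglobin (HbR) concentration, in mM."""
    i0, i = np.asarray(i0, float), np.asarray(i, float)
    delta_od = -np.log10(i / i0)  # change in optical density per wavelength
    # delta_od = EPSILON @ [dHbO, dHbR] * path * dpf -> solve the 2x2 system
    return np.linalg.solve(EPSILON * path_cm * dpf, delta_od)

# Less light returning at ~850 nm (strong HbO absorption) with little change
# at ~760 nm suggests more oxygenated blood, i.e. increased local activity.
d_hbo, d_hbr = hb_concentration_changes(i0=[1.0, 1.0], i=[0.99, 0.90])
```

With those inputs the solver reports a rise in oxy-haemoglobin and a small fall in deoxy-haemoglobin — the classic signature of increased regional brain activity. Whether such signatures map reliably onto deception is, as the quote notes, another question entirely.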
According to a report in the New York Times (14 Sept), an Indian judge has taken the results of a brain scan as “proof that the [murder] suspect’s brain held ‘experiential knowledge’ about the crime that only the killer could possess”, and passed a life sentence.
The Brain Electrical Oscillations Signature test, or BEOS, was developed by Champadi Raman Mukundan, a neuroscientist who formerly ran the clinical psychology department of the National Institute of Mental Health and Neuro Sciences in Bangalore. His system builds on methods developed at American universities by other scientists, including Emanuel Donchin, Lawrence A. Farwell and J. Peter Rosenfeld.
Sorry for the slow posting recently – real life is getting in the way of blogging at the moment, and is likely to continue to do so for some time yet, so please bear with me. Perhaps some of these items will give you your deception research fix in the meantime.
The ever-interesting BPS Research Digest discusses a study of how toddlers tell a joke from a mistake. According to the researchers, Elena Hoicka and Merideth Gattis:
…the ability to recognise humorous intent comes after the ability to recognise jokes, but before the ability to recognise pretense and lies. “We propose that humour understanding is an important step toward understanding that human actions can be intentional not just when actions are right, but even when they are wrong,” they concluded.
- Reference: Elena Hoicka and Merideth Gattis (2008). Do the wrong thing: How toddlers tell a joke from a mistake. Cognitive Development 23(1):180-190
Karen Franklin has a terrific commentary on the Wall Street Journal’s discussion of a subscale of the MMPI, which claims to detect malingerers but which, according to critics, results in a large number of false positives (i.e., labelling truthful test-takers as malingerers). (See also a short commentary by Steven Erikson).
Hat tip to blog.bioethics.net (a great blog associated with the American Journal of Bioethics):
This past week NPR’s Morning Edition carried a three-part series about lie detection reported by Dina Temple-Raston. (The segments are posted as both audio and text, so they’re easy to scan if you can’t listen.) The series covers the questionable accuracy of polygraphs, the emerging field of lie detection by fMRI, and the examination of facial “micro expressions” for hints of lies.
A press release (2 Nov) heralds the publication of a new study by Professor Sean Spence from the University of Sheffield, who claims the research shows that fMRI “could be used alongside other factors to address questions of guilt versus innocence”. It’s an interesting study on two counts: one, it appears to be the first time that fMRI lie-detection research has been carried out using a real world case (as opposed to contrived experiments), and two, the research was funded by a TV company and featured on a TV documentary earlier this year. The study is currently in press in the journal European Psychiatry (reference below).
The press release gives a summary of the findings:
An academic at the University of Sheffield has used groundbreaking technology to investigate the potential innocence of a woman convicted of poisoning a child in her care. Professor Sean Spence, who has pioneered the use of functional Magnetic Resonance Imaging (fMRI) to detect lies, carried out groundbreaking experiments on the woman who, despite protesting her innocence, was sentenced to four years in prison. …Using the technology, Professor Spence examined the woman’s brain activity as she alternately confirmed her account of events and that of her accusers. The tests demonstrated that when she agreed with her accusers’ account of events she activated extensive regions of her frontal lobes and also took significantly longer to respond – these findings have previously been found to be consistent with false or untrue statements.
In the acknowledgements section of the paper the authors reveal that the study “was funded by Quickfire Media in association with Channel Four Television”. The case Spence et al. describe as that of “Woman X” was featured in Channel 4’s Lie Lab series (and if you’re really interested, you can easily identify X in a couple of clicks). Although unusual, this isn’t the first time that research featured on TV has found its way into academic journals: see, for example, Haslam and Reicher’s academic publications based on their controversial televised replication of Zimbardo’s Stanford Prison Experiment.
In theory, I am not sure it necessarily matters if a study is done for the TV, if the study is carried out in an ethical and scientific way, and the subsequent article(s) meet rigorous standards of peer review. Nor does it always matter if the academic research then receives wider publicity as a result. In this case, however, I hope that anyone picking up and reporting further on this story reads the actual paper, in which Spence and his co-authors consider carefully the implications of the study and the caveats that should be applied to the results:
To our knowledge, this is the first case described where fMRI or any other form of functional neuroimaging has been used to study truths and lies derived from a genuine ‘real-life’ scenario, where the events described pertain to a serious forensic case. All the more reason then for us to remain especially cautious while interpreting our findings and to ensure that we make explicit their limitations: the weaknesses of our approach (p.4).
The authors go on to discuss alternative interpretations of their results: Perhaps X had told her story so many times that her responses were automatic? Perhaps the emotive nature of the subject under discussion (poisoning a child) gave rise to the observed pattern of activation? Maybe X used countermeasures (such as moving her head or using cognitive distractions)? Perhaps she “has ‘convinced herself’ of her innocence … she answered sincerely though ‘incorrectly’”? In this case, perhaps the researchers have “merely imaged ‘self-deception’” (p.5)? For each argument, the authors discuss the pros and cons, remaining careful not to claim too much for their results, and pointing out that further empirical enquiry is needed.
These cautions are also echoed in Spence’s comments at the end of the press release:
“This research provides a fresh opportunity for the British legal system as it has the potential to reduce the number of miscarriages of justice. However, it is important to note that, at the moment, this research doesn’t prove that this woman is innocent. Instead, what it clearly demonstrates is that her brain responds as if she were innocent.”
- Sean A. Spence, Catherine J. Kaylor-Hughes, Martin L. Brook, Sudheer T. Lankappa and Iain D. Wilkinson (in press). ‘Munchausen’s syndrome by proxy’ or a ‘miscarriage of justice’? An initial application of functional neuroimaging to the question of guilt versus innocence. European Psychiatry.
See also:
- Mind Hacks discusses an article in which Raymond Tallis “laments the rise of ‘neurolaw’ where brain scan evidence is used in court in an attempt to show that the accused was not responsible for their actions”.
- Deception Blog posts on brain scanning and deception
Abstract below the fold.
Detailed commentary from Patrick Barkham in the Guardian (18 Sept), exploring the use of ‘lie detecting’ machines in the UK. He covers the use of voice stress analysis in benefit offices and insurance companies, and polygraphy for sex offenders. Interesting stuff, and well worth reading in full over on the Guardian site. Here’s a flavour:
[Harrow] council prefers the phrase “voice risk analysis” and Capita calls its combination of software, special scripts and training for handlers the “Advanced Validation Solution”. Just don’t say it’s a lie detector. “Please don’t call it that. We’re not happy with that. It’s an assessment,” says Fabio Esposito, Harrow’s assistant benefit manager.
… Voice stress analysis systems have been used for more than five years in the British insurance industry but have yet to really catch on, according to the Association of British Insurers. There was an initial flurry of publicity when motor insurance companies introduced the technology in 2001 but it is still “the exception rather than the norm,” says Malcolm Tarling of the ABI. “Not many companies use it and those that do use it in very controlled circumstances. They never use the results of a voice risk analysis alone because the technology is not infallible.”
… Next year, in a pilot study, the government will introduce a mandatory polygraph for convicted sex offenders in three regions. … Professor Don Grubin, a forensic psychiatrist at Newcastle University… admits he was initially sceptical but argues that polygraphs are a useful tool. “We were less concerned about accuracy per se than with the disclosures and the changes in behaviour it encourages these guys to make,” he says. “It should not be seen as a lie detector but as a truth facilitator. What you find is you get markedly increased disclosures. You don’t get the full story but you get more than you had.”
…critics argue that most kinds of lie-detector studies are lab tests, which can never replicate the high stakes of real lies and tend to test technology on healthy individuals (usually students) of above-average intelligence. Children, criminals, the psychotic, the stupid and even those not speaking in their first language (a common issue with benefit claimants) are rarely involved in studies.
ABC News (30 Aug) is the latest media outlet to get on the MRI lie-detection bandwagon. “See a Lie Inside the Brain – Researchers Detect the Truth and Find Lies With an FMRI” is their breathless headline. How exciting! But Don Q Blogger points out it’s mostly uncritical puff for the commercial companies offering fMRI lie detection tests.
Meanwhile, Mind Hacks, Boing Boing and The Neurocritic all weigh in on a recent New York Times article on the growing commercialisation of fMRI technology for lie detection, pain control and a host of other purposes.
Hat tip to Prof Peter Tillers for pointing us to a paper from Charles Keckler, George Mason University School of Law, on admissibility in court of neuroimaging evidence of deception. Here’s the abstract:
The last decade has seen remarkable progress in understanding ongoing psychological processes at the neurobiological level, progress that has been driven technologically by the spread of functional neuroimaging devices, especially magnetic resonance imaging, that have become the research tools of a theoretically sophisticated cognitive neuroscience. As this research turns to specification of the mental processes involved in interpersonal deception, the potential evidentiary use of material produced by devices for detecting deception, long stymied by the conceptual and legal limitations of the polygraph, must be re-examined.
Although studies in this area are preliminary, and I conclude they have not yet satisfied the foundational requirements for the admissibility of scientific evidence, the potential for use – particularly as a devastating impeachment threat to encourage factual veracity – is a real one that the legal profession should seek to foster through structuring the correct incentives and rules for admissibility. In particular, neuroscience has articulated basic memory processes to a sufficient degree that contemporaneously neuroimaged witnesses would be unable to feign ignorance of a familiar item (or to claim knowledge of something unfamiliar). The brain implementation of actual lies, and deceit more generally, is of greater complexity and variability. Nevertheless, the research project to elucidate them is conceptually sound, and the law cannot afford to stand apart from what may ultimately constitute profound progress in a fundamental problem of adjudication.
- Charles N. W. Keckler (2005) Cross-Examining The Brain: A Legal Analysis of Neural Imaging for Credibility Impeachment. bepress Legal Series. Working Paper 568.
Hat tip to Mind Hacks (25 June) for alerting us to the fact that the organisers of the conference on The Law and Ethics of Brain Scanning: Coming soon to a courtroom near you?, held in Arizona in April, have uploaded both the PowerPoint presentations and MP3s of most of the lectures to the conference website.
A feast of interesting material here that should keep you going, even on the longest commute, including:
- “Brain Imaging and the Mind: Pseudoscience or Science?” – William Uttal, Arizona State University
- “Overview of Brain Scanning Technologies” – John J.B. Allen, Department of Psychology, University of Arizona
- “Brain Scanning and Lie Detection” – Steven Laken, Founder and CEO, Cephos Corporation
- “Brain Scanning in the Courts: The Story So Far” – Gary Marchant, Center for the Study of Law, Science, & Technology Sandra Day O’Connor College of Law
- “Legal Admissibility of Neurological Lie Detection Evidence” – Archie A. Alexander, Health Law & Policy Institute, University of Houston Law Center
- “Demonstrating Brain Injuries with Brain Scanning” – Larry Cohen, The Cohen Law Firm
- “Harm and Punishment: An fMRI Experiment” – Owen D. Jones, Vanderbilt University School of Law & Department of Biological Sciences
- “Through a Glass Darkly: Transdisciplinary Brain Imaging Studies to Predict and Explain Abnormal Behavior” – James H. Fallon, Department of Psychiatry and Human Behavior, University of California, Irvine
- “Authenticity, Bluffing, and the Privacy of Human Thought: Ethical Issues in Brain Scanning” – Emily Murphy, Stanford Center for Biomedical Ethics
- “Health, Disability, and Employment Law Implications of MRI” – Stacey Tovino, Hamline University School of Law
From a deception researcher’s point of view, the chance to hear from Steven Laken of commercial fMRI deception detection company Cephos will be particularly interesting.
Mind Hacks also notes that ABC Radio National’s All in the Mind on 23 June featured many of the speakers from this conference in a discussion of neuroscience, criminality and the courtroom. The webpage accompanying this programme has a great reference list. For those interested in deception research, I particularly recommend Wolpe, Foster & Langleben (2005) for an informative overview of the potential uses and dangers of neurotechnologies and deception detection.
- Paul Root Wolpe, Kenneth R Foster, Daniel D Langleben (2005). Emerging Neurotechnologies for Lie-Detection: Promises and Perils. The American Journal of Bioethics 5(2): 39-49
Wow. Mind Hacks is right. A great article from the New Yorker on fMRI and deception detection. Here’s a little snippet but as the article is freely available online you should really head on over there and read the whole thing:
To date, there have been only a dozen or so peer-reviewed studies that attempt to catch lies with fMRI technology, and most of them involved fewer than twenty people. Nevertheless, the idea has inspired a torrent of media attention, because scientific studies involving brain scans dazzle people, and because mind reading by machine is a beloved science-fiction trope, revived most recently in movies like “Minority Report” and “Eternal Sunshine of the Spotless Mind.” Many journalistic accounts of the new technology—accompanied by colorful bitmapped images of the brain in action—resemble science fiction themselves.
And later, commenting on University of Pennsylvania psychiatrist Daniel Langleben’s studies that kicked off the current fMRI-to-detect-deception craze:
Nearly all the volunteers for Langleben’s studies were Penn students or members of the academic community. There were no sociopaths or psychopaths; no one on antidepressants or other psychiatric medication; no one addicted to alcohol or drugs; no one with a criminal record; no one mentally retarded. These allegedly seminal studies look exclusively at unproblematic, intelligent people who were instructed to lie about trivial matters in which they had little stake. An incentive of twenty dollars can hardly be compared with, say, your freedom, reputation, children, or marriage—any or all of which might be at risk in an actual lie-detection scenario.
- Duped: Can brain scans uncover lies? by Margaret Talbot (The New Yorker, July 2, 2007)
- Mind Hacks comments on the article (4 July)
I’ve been out of the country for the last couple of weeks and missed the start of what looks to be an interesting series from the UK’s Channel 4 on lie detection. Luckily the trusty Mind Hacks is on hand to pick it up!
Lie Lab is a three-part TV series where they use the not-very-accurate brain scan lie detection method to test high profile people who have been accused of lying.
If, like me, you’ve missed the start of the series, UK/Eire viewers can use the Channel 4 ‘on demand’ feature to catch up over the internet.
This is the question asked in the May 2007 issue of The Scientist, which discusses the recent commercialisation of fMRI for lie detection, and concludes with a good summary of the persistent problems using this technology in forensic contexts:
[…] in reality, a nonconsensual test-taker need only move his or her head slightly to render the results useless. And there are other challenges. For one, individuals with psychopathologies or drug use (overrepresented in the criminal defendant population) may have very different brain responses to lying, says [New York University Psychology prof Elizabeth] Phelps. They might lack the sense of conflict or guilt used to detect lying in other individuals. […]
If a person actually believes an untruth, it’s not clear if a machine could ever identify it as such. Researchers including Phelps are still debating whether the brain can distinguish true from false memory in the first place. […]
Jed Rakoff, US District Judge for the Southern District of New York, says he doubts that fMRI tests will meet the courtroom standards for scientific evidence (reliability and acceptance within the scientific community) anytime in the near future, or that the limited information they provide will have much impact on the stand.
[…] According to Rakoff, the best way to get at the truth in the courtroom is still “plain old cross-examination.” And in the national security sphere, there’s “much more to detecting spies than the perfect gadget,” [Marcus Raichle, professor at the Washington University in St. Louis School of Medicine] agrees. “There’s some plain old-fashioned footwork that needs to be done.”
- Hat tip to Mind Hacks (11 May), which has a detailed commentary on the article.
Anywhere near Arizona in a couple of weeks? Arizona State University is running a one day conference on Friday, April 13, entitled The Law and Ethics of Brain Scanning: Coming soon to a courtroom near you? The conference is free but you must pre-register.
The conference has four consecutive sessions, on Brain Scanning Technologies; Brain Scanning in the Courts; Specific Applications of Brain Scanning Technologies; and Ethical Aspects of Brain Scanning.
The full line-up of speakers and talks is here. Most of the day looks interesting, but from a deception point of view two presentations in particular stand out:
- “Brain Scanning and Lie Detection” – Daniel Langleben, University of Pennsylvania School of Medicine
- “Legal Admissibility of Neurological Lie Detection Evidence” – Archie A. Alexander, Health Law & Policy Institute, University of Houston Law Center
Hat tip to the Neuroethics and Law Blog for bringing this to our attention!
… asks Ronald Bailey on Reason Online (23 Feb):
[…] Deception arises in our brains. The utility of finding a way to look under the hood directly for the source of deception is undeniable. Not surprisingly, a number of researchers have been trying to find correlates in the brain for truth and lies. […] Now a couple of American companies are claiming to be able to do just that. No Lie MRI in Tarzana, Calif., and Cephos Corporation in Pepperell, Mass. use fMRI scanning to uncover deception. No Lie MRI asserts that its technology, “represents the first and only direct measure of truth verification and lie detection in human history.” Both companies say that their technology can distinguish lies from truth with an accuracy rate of 90 percent.
[…] What evidence do No Lie MRI and Cephos Corporation offer for their assertion of 90 percent accuracy in detecting lies? A look at the studies cited on No Lie MRI’s website is not reassuring. The company links to one done using 26 right-handed male undergraduates; to another with 22 right-handed male undergraduates; and to a third one with 23 right-handed participants (11 men and 12 women).
Cephos links to just three fMRI studies, one using a total of 61 subjects (29 male and 32 female of whom 52 were right-handed); another using 14 right-handed adults who did not smoke or drink coffee; and a third one that tested 8 men. So adding up the studies cited by these two companies, we get a total of 154 subjects whose brains have been probed for lying in controlled laboratory settings.
[…] Right now its accuracy has not yet been proven beyond a reasonable doubt. Or as Stanford law professor Hank Greeley succinctly put it: “I want proof before this gets used, and proof is not three studies of 40 college students lying about whether or not they are holding the three of spades.”
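Incidentally, Bailey’s arithmetic checks out: a quick tally of the per-study subject counts he quotes from the two companies’ websites does give 154. A minimal sketch (the list names are mine, the numbers are his):

```python
# Subject counts from the studies each company cites (per Bailey's article)
no_lie_mri_studies = [26, 22, 23]  # three studies linked by No Lie MRI
cephos_studies = [61, 14, 8]       # three studies linked by Cephos

total = sum(no_lie_mri_studies) + sum(cephos_studies)
print(total)  # 154 subjects across all six laboratory studies
```

Which is a strikingly small evidence base for a commercial claim of 90 percent accuracy.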
From Science Daily, 19 Feb, a report on the recent symposium Is There Science Underlying Truth Detection? sponsored by the American Academy of Arts and Sciences and the McGovern Institute for Brain Research at MIT. It does a good job at summarising some of the practical, legal, ethical and theoretical issues surrounding the use of fMRI for deception detection. Here’s an excerpt, but it’s worth reading in full:
The symposium explored whether functional magnetic resonance imaging (fMRI), which images brain regions at work, can detect lying. “There are some bold claims regarding the potential to use functional MRI to detect deception, so it’s important to learn what is known about the science,” said Emilio Bizzi, president of the American Academy of Arts and Sciences, an investigator at MIT’s McGovern Institute for Brain Research and one of the organizers of the event.
[…] In 2005, two separate teams of researchers announced that their algorithms had been able to reliably identify “neural signatures” that indicated when a subject was lying. But the research, conducted on only a handful of subjects, was flawed, Kanwisher said. Subjects were asked to lie about whether they were holding a certain card or whether they had “stolen” certain items. These are not actually lies, she pointed out, because subjects were asked to make such statements. “What does this have to do with real-world lie detection? Making a false response when instructed isn’t a lie.
[…] In addition, the subject may not want to cooperate. “FMRI results are garbage if the subject is moving even a little bit. A subject can completely mess up the data by moving his tongue in his mouth or performing mental arithmetic,” she said. Testing also poses problems. To ensure accurate results, fMRIs would have to be tested on a wide variety of people, some guilty and some innocent, and they would need to believe that the data would have real consequences on their lives. The work would need to be published in peer-reviewed journals and replicated without conflicts of interest.
In short, Kanwisher said, “There’s no compelling evidence fMRIs will work for lie detection in the real world.”