Category Archives: Courtroom

Research round-up 2: New technologies and deception detection

Part two of the Deception Blog round-up of “all those articles I haven’t had a chance to blog about”. Part one was about catching liars via non-mechanical techniques. This post covers articles and discussion about new technologies to detect deception, including fMRI and the measurement of event-related potentials (ERPs).

fMRI and deception: discussion on the journal pages

It’s been quite a year for advances in neuroscience and deception detection, so much so that in a recent paper in the Journal of the American Academy of Psychiatry & Law, Daniel Langleben and Frank Dattilio suggested that a new discipline of “forensic MRI” was emerging. One interesting exchange appeared recently in the same journal:

…The new approach promises significantly greater accuracy than the conventional polygraph—at least under carefully controlled laboratory conditions. But would it work in the real world? Despite some significant concerns about validity and reliability, fMRI lie detection may in fact be appropriate for certain applications. This new ability to peer inside someone’s head raises significant questions of ethics. Commentators have already begun to weigh in on many of these questions. A wider dialogue within the medical, neuroscientific, and legal communities would be optimal in promoting the responsible use of this technology and preventing abuses.

…The present article concludes that the use of functional imaging to discriminate truth from lies does not meet the Daubert criteria for courtroom testimony.

…we update and interpret the data described by Simpson, from the points of view of an experimental scientist and a forensic clinician. We conclude that the current research funding and literature are prematurely skewed toward discussion of existing findings, rather than generation of new fMRI data on deception and related topics such as mind-reading, consciousness, morality, and criminal responsibility. We propose that further progress in brain imaging research may foster the emergence of a new discipline of forensic MRI.

Earlier this year Kamila Sip and colleagues challenged proponents of neuroimaging for deception detection to take more account of the real world context in which deception occurs, which led to a robust defence from John-Dylan Haynes and an equally robust rebuttal from Sip et al. It all happened in the pages of Trends in Cognitive Sciences:

With the increasing interest in the neuroimaging of deception and its commercial application, there is a need to pay more attention to methodology. The weakness of studying deception in an experimental setting has been discussed intensively for over half a century. However, even though much effort has been put into their development, paradigms are still inadequate. The problems that bedevilled the old technology have not been eliminated by the new. Advances will only be possible if experiments are designed that take account of the intentions of the subject and the context in which these occur.

In their recent article, Sip and colleagues raise several criticisms that question whether neuroimaging is suitable for lie detection. Here, two of their points are critically discussed. First, contrary to the view of Sip et al., the fact that brain regions involved in deception are also involved in other cognitive processes is not a problem for classification-based detection of deception. Second, I disagree with their proposition that the development of lie-detection requires enriched experimental deception scenarios. Instead, I propose a data-driven perspective whereby powerful statistical techniques are applied to data obtained in real-world scenarios.

…Valid experimental paradigms for eliciting deception are still required, and such paradigms will be particularly difficult to apply in real-life settings… We agree with Haynes, however, that there are important ethical issues at stake for researchers in this field. In our opinion, one of the most important of these is careful consideration of how results derived from highly controlled laboratory settings compare with those obtained from real-life scenarios, and if and when imaging technology should be transferred from the laboratory to the judicial system.
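As an aside for readers wondering what “classification-based detection of deception” (Haynes’s phrase above) actually involves: instead of asking whether a single brain region ‘lights up’ when someone lies, a statistical classifier is trained on the whole pattern of activity across many labelled trials and then tested on trials it has never seen. Below is a minimal sketch of that logic in Python, using synthetic numbers in place of real trial-wise fMRI voxel patterns – an illustration of the general approach only, not any research group’s actual pipeline:

```python
# Illustrative sketch only: synthetic data stands in for trial-wise
# fMRI voxel patterns. This is NOT any published group's pipeline.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

n_trials, n_voxels = 120, 500              # assumed sizes, for illustration
labels = np.repeat([0, 1], n_trials // 2)  # 0 = truthful trial, 1 = deceptive trial

# Simulate a weak signal spread over a subset of voxels: the same regions
# are active in both conditions, but the *pattern* differs slightly.
patterns = rng.normal(size=(n_trials, n_voxels))
signal_voxels = rng.choice(n_voxels, size=50, replace=False)
lie_rows = np.where(labels == 1)[0]
patterns[np.ix_(lie_rows, signal_voxels)] += 0.3

# Cross-validated decoding: accuracy reliably above chance (50%) means the
# activity pattern carries information about whether the trial was a lie,
# even though no single region is uniquely 'the lying area'.
scores = cross_val_score(LinearSVC(), patterns, labels, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```

The catch, as Sip and colleagues argue, is that a classifier trained on laboratory ‘lies’ has only been shown to decode laboratory lies; nothing in the statistics guarantees it transfers to a defendant with real stakes.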

fMRI and deception: new research findings

Of course discussion is worth nothing if you don’t have research results to discuss. Shawn Christ and colleagues delved deeper into the cognitive processes associated with deception:

Previous neuroimaging studies have implicated the prefrontal cortex (PFC) and nearby brain regions in deception. This is consistent with the hypothesis that lying involves the executive control system….Our findings support the notion that executive control processes, particularly working memory, and their associated neural substrates play an integral role in deception. This work provides a foundation for future research on the neurocognitive basis of deception.

Meanwhile, two groups of researchers reported that fMRI techniques can differentiate between mistakes and false memories vs deliberate deception, with Tatia Lee and colleagues showing that in the case of feigning memory impairment, deception “is not only more cognitively demanding than making unintentional errors but also utilizes different cognitive processes”.

fMRI and deception in the blogosphere

Commentary and discussion of fMRI was not limited to the pages of scholarly journals, however. A terrific post by Vaughan over at Mind Hacks on the limitations of fMRI studies zipped around the blogosphere (and rightly so) and is well worth a read if you are interested in becoming a more critical consumer of fMRI deception detection studies (see also Neurophilosophy’s post MRI: What is it good for?).

Hank Greely has a detailed write-up of the University of Akron Law School’s conference on Law and Neuroscience, held in September, which covers the science, the practicalities and the ethics of using neuroscience in forensic contexts (see also his summary of a presentation at an earlier conference on ‘neurolaw’). Judges, too, are “waking up to the potential misuse of brain-scanning technologies”, with a recent judges’ summit in the US to “discuss protecting courts from junk neuroscience”, reports New Scientist.

Nevertheless, purveyors of MRI lie-detection technology continue to push their wares. For instance, the Antipolygraph Blog picked up a radio discussion on commercial fMRI-based lie detection in June (the audio is still available as an mp3 download).

ERP and deception: the controversial BEOS test

Earlier this year I and many others blogged about the disturbing use of brain scanning in a recent murder trial in India. The technique is known as the Brain Electrical Oscillations Signature (BEOS) test and is based on measuring event-related potentials (electrical activity across the brain). Neurologica blog and Neuroethics and Law have write-ups and links for those who wish to know more.
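For readers new to the underlying measure: an event-related potential is extracted by averaging many EEG segments time-locked to repeated presentations of a stimulus, so that random background activity averages away and the stimulus-locked response remains. The sketch below illustrates just that averaging step with simulated data – the BEOS procedure itself is proprietary and unpublished, so this is the textbook idea only:

```python
# Textbook illustration of ERP extraction by trial averaging.
# Simulated EEG only; the BEOS algorithm itself is proprietary.
import numpy as np

rng = np.random.default_rng(42)
n_trials, sfreq = 100, 256          # assumed: 100 one-second epochs at 256 Hz
times = np.arange(sfreq) / sfreq

# Each epoch = random background EEG plus a small stimulus-locked
# deflection (here a P300-like positive wave peaking ~300 ms post-stimulus).
noise = rng.normal(scale=5.0, size=(n_trials, sfreq))          # microvolts
p300 = 2.0 * np.exp(-((times - 0.3) ** 2) / (2 * 0.05 ** 2))   # Gaussian bump
epochs = noise + p300

# Averaging across trials shrinks the noise by ~1/sqrt(n_trials),
# leaving the time-locked component - the ERP - visible.
erp = epochs.mean(axis=0)
print(f"ERP peak of {erp.max():.2f} uV at {times[erp.argmax()] * 1000:.0f} ms")
```

The forensic controversy is not over this averaging step, which is standard neuroscience, but over the unvalidated inference from an ERP to “experiential knowledge” of a crime.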

Neuroethics and Law blog links to a pdf of the judge’s opinion in the case, where pages 58-64 include a summary of the judge’s understanding of the BEOS procedure and what it ‘revealed’ in this case. Most disturbing is the apparent certainty of the judge that the tests were appropriate, scientifically robust and applied correctly by “Sunny Joseph who is working as Assistant Chemical Analyser in Forensic Science Laboratory, Mumbai” (pp. 55-56):

…competency of this witness to conduct the Test is not seriously challenged. His evidence also reveals that he was working as Clinical Psychologist in National Institute of Mental Health and Neuro Sciences at Bangalore and he has experience in the field of Neuro psychology since last 6 years and in forensic technique since last 1½ years. He has himself conducted approximately 15 Polygraph Tests and has been associated with almost 100 Polygraph Tests. He has conducted 16 BEOS Tests and has been associated in conducting of about 12 Neuro Psychology Tests. Therefore his expertise in my opinion, can in no way be challenged and nothing is brought on record in his cross examination to show that the Tests conducted were not proper and requisite procedure was not followed (p.62).

On a happier note, my hot tip for the New Year is to keep your eye on Social Neuroscience – there are several articles on neural correlates of deception in press there which they are saving up for a special issue in 2009.

More soon – part 3 covers the 2008 flurry of interest in deception and magic!

Adults easily fooled by children’s false denials

University of California – Davis press release (17 August):

Adults are easily fooled when a child denies that an actual event took place, but do somewhat better at detecting when a child makes up information about something that never happened, according to new research from the University of California, Davis….

“The large number of children coming into contact with the legal system – mostly as a result of abuse cases – has motivated intense scientific effort to understand children’s true and false reports,” said UC Davis psychology professor and study author Gail S. Goodman. “The seriousness of abuse charges and the frequency with which children’s testimony provides central prosecutorial evidence makes children’s eyewitness memory abilities important considerations. Arguably even more important, however, are adults’ abilities to evaluate children’s reports.”

In an effort to determine if adults can discern children’s true from false reports, Goodman and her co-investigators asked more than 100 adults to view videotapes of 3- and 5-year-olds being interviewed about “true” and “false” events. For true events, the children either accurately confirmed that the event had occurred or inaccurately denied that it had happened. For “false” events – ones that the children had not experienced – they either truthfully denied having experienced them or falsely reported that they had occurred.

Afterward, the adults were asked to evaluate each child’s veracity. The adults were relatively good at detecting accounts of events that never happened. But the adults were apt to mistakenly believe children’s denials of actual events.

“The findings suggest that adults are better at detecting false reports than they are at detecting false denials,” Goodman said. “While accurately detecting false reports protects innocent people from false allegations, the failure to detect false denials could mean that adults fail to protect children who falsely deny actual victimization.”

India’s Novel Use of Brain Scans in Courts Is Debated

According to a report in the New York Times (14 Sept), an Indian judge has taken the results of a brain scan as “proof that the [murder] suspect’s brain held ‘experiential knowledge’ about the crime that only the killer could possess”, and passed a life sentence.

The Brain Electrical Oscillations Signature test, or BEOS, was developed by Champadi Raman Mukundan, a neuroscientist who formerly ran the clinical psychology department of the National Institute of Mental Health and Neuro Sciences in Bangalore. His system builds on methods developed at American universities by other scientists, including Emanuel Donchin, Lawrence A. Farwell and J. Peter Rosenfeld.

Neuroethics and Law Blog comments, as does Dr Lawrence Farwell (inventor of the controversial ‘Brain Fingerprinting’ technique, which bears a passing resemblance to the BEOS test used in India).

Scary stuff.

Lie Detector Technology in court – seduced by neuroscience?

Jeffrey Bellin of the California Courts of Appeal has a paper forthcoming in Temple Law Review on the legal issues involved in deploying new lie detection technology – specifically fMRI – in real-world courtroom settings (hat tip to the Neuroethics and Law blog).

Bellin examines the ‘scientific validity’ requirements and argues that the research has progressed to the point where fMRI evidence of deception will soon reach the standard required to be admissible under the Daubert criteria. However, Bellin’s key objection to using fMRI evidence in court is not scientific but legal: he claims that fMRI evidence would fall foul of the hearsay prohibition. He explains that “The hearsay problem arises because lie detector evidence consists of expert analysis of out-of-court statements offered for their truth (i.e., hearsay) and is consequently inadmissible under Federal Rule of Evidence 801 absent an applicable hearsay exception” (p.102).

I am not a lawyer so can’t really comment on the hearsay issue raised by Bellin, except to say that it’s an interesting observation and not one I’ve heard before. I feel better placed to assess his analysis that fMRI technology is only a small step from reaching the Daubert standard. In this Bellin is – in my judgement – way off-beam. His argument runs something like this:

1. The US Government has poured lots of money into lie detection technologies (Bellin quotes a Time magazine guess-timate of “tens of millions to hundreds of millions of dollars” – an uncorroborated rumour, not an established fact).

2. fMRI is “the most promising of the emerging new lie detection technologies” (p.106) because “brain activities will be more difficult to suppress than typical stress reactions measured by traditional polygraph examinations, [so] new technologies like fMRI show great promise for the development of scientifically valid lie detectors” (p.106).

3. Thus, “The infusion of money and energy into the science of lie detection coupled with the pace of recent developments in that science suggest that it is only a matter of time before lie detector evidence meets the Daubert threshold for scientific validity.” (p.107).

And the references he provides for this analysis? Steve Silberman’s “Don’t Even Think About Lying” in Wired Magazine from 2006, and a piece in Time magazine the same year, entitled “How to Spot a Liar“, by Jeffrey Kluger and Coco Masters. Now both of these articles are fine pieces of journalism, but they hardly constitute good grounds for Bellin’s assertion that fMRI technology is almost ready to be admitted in court. (And if you’re going to use journalistic pieces as references, can I recommend, as a much better source, an excellent article: “Duped: Can brain scans uncover lies?” by Margaret Talbot from The New Yorker [July 2, 2007].)

Let’s just remind ourselves of the Daubert criteria. To paraphrase the comprehensive Wikipedia page, before expert testimony can be entered into evidence it must be relevant to the case at hand, and the expert’s conclusions must be scientific. This latter condition means that a judge deciding on whether to admit expert testimony based on a technique has to address five points:

1. Has the technique been tested in actual field conditions (and not just in a laboratory)?
2. Has the technique been subject to peer review and publication?
3. What is the known or potential rate of error? Is it zero, or low enough to be close to zero?
4. Do standards exist for the control of the technique’s operation?
5. Has the technique been generally accepted within the relevant scientific community?

As far as fMRI for lie detection is concerned I think the answers are:

  1. No, with only a couple of exceptions.
  2. Yes, though there is a long way to go before the technique has been tested in relevant conditions.
  3. In some lab conditions, accuracy rates reach 95%. But what about in real-life situations? We don’t have enough research to say (a worked example follows this list).
  4. There are no published or agreed standards for undertaking deception detection fMRI scans.
  5. No, the arguments are still raging!
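A worked example on point 3, because the gap between ‘lab accuracy’ and forensic usefulness is easy to state precisely: if most examinees are truthful, even a test with impressive sensitivity and specificity will flag many innocent people. The numbers below are assumptions chosen purely for illustration, not findings from any study:

```python
# Why "95% accuracy" in the lab is not enough by itself: a quick
# base-rate calculation. All numbers here are illustrative assumptions.
sensitivity = 0.95   # P(test says "lie" | person is lying)
specificity = 0.95   # P(test says "truth" | person is truthful)
base_rate = 0.10     # assume only 10% of examinees are actually lying

p_flagged = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
ppv = sensitivity * base_rate / p_flagged   # P(lying | test says "lie")
print(f"Probability a flagged examinee is actually lying: {ppv:.0%}")
# -> roughly 68% under these assumptions
```

At these assumed rates, roughly a third of the people the test calls liars would in fact be truthful – exactly the kind of known-error-rate question a Daubert inquiry is supposed to probe.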

As an example of point 5, one of the crucial arguments is over the interpretation of the results of fMRI experiments (Logothetis, 2008). Mind Hacks had a terrific article a few weeks ago in which they summarise the key issue:

It starts with this simple question: what is fMRI measuring? When we talk about imaging experiments, we usually say it measures ‘brain activity’, but you may be surprised to know that no-one’s really sure what this actually means.

And as Jonah Lehrer points out more recently:

[...T]he critical flaw of such studies is that they neglect the vast interconnectivity of the brain… Because large swaths of the cortex are involved in almost every aspect of cognition – even a mind at rest exhibits widespread neural activity – the typical fMRI image, with its highly localized spots of color, can be deceptive. The technology makes sense of the mind by leaving lots of stuff out – it attempts to sort the “noise” from the “signal” – but sometimes what’s left out is essential to understanding what’s really going on.

Bellin is not alone in perhaps being seduced by the fMRI myth, as two recent studies (McCabe & Castel, 2007; Weisberg et al., 2008) demonstrate very nicely. McCabe and Castel showed that participants judged news stories as ‘more scientific’ when accompanied by images of brain scans than without, and Weisberg et al.’s participants rated bad explanations of psychological phenomena as more scientifically sound when they included a spurious neuroscience reference. Why are people so beguiled by the blobs in the brain? Here are McCabe and Castel, quoted in the BPS Research Blog:

McCabe and Castel said their results show people have a “natural affinity for reductionistic explanations of cognitive phenomena, such that physical representations of cognitive processes, like brain images, are more satisfying, or more credible, than more abstract representations, like tables or bar graphs.”


When Jurors Lie

Anne Reed has posted a couple of thoughtful pieces on deceptive jurors over at her Deliberations Blog.

In part I of “When Jurors Lie” Anne highlights the extent of the problem of potential jurors lying to get onto a jury. For example:

One of the jurors who convicted Martha Stewart, “by far the most outspoken juror on the panel,” failed to disclose an arrest and three lawsuits against him on a jury questionnaire that reportedly asked for this information. Just in recent months on Deliberations, we’ve had the football fan who faithfully responded when asked who his favorite team was, but said nobody asked him whether he hated the plaintiff Oakland Raiders, which he did; the juror in the California Peregrine trial who at first said he wasn’t reading media coverage of the trial, but later admitted he had; and the blogging lawyer juror who bragged that he’d slipped through voir dire by describing himself as a “project manager,” leaving out the lawyer part.

In part II Anne considers how to recognise a lying juror, pointing out how difficult detecting deceit can be. But she offers some interesting – and above all practical – tips for trying to weed out deceptive jurors during the voir dire process, including asking about experiences and behaviours (rather than about attitudes and beliefs), being aware of your own stereotypes, paying attention to the language jurors use under questioning, and leveraging peer pressure by asking the whole group of potential jurors to buy into the importance of voir dire.

Anne also wonders why lawyers get so twitchy about deceptive jurors. She suggests that it’s

partly because a lying juror puts a lawyer personally at risk, and we take it personally… But I want to suggest there’s something else at work here too, a deep belief we share with all those other jurors who don’t lie. People believe the courtroom is a place where justice happens.

And no one likes having a deep-seated belief shaken.

Photo credit: plemeljr, Creative Commons Licence

Cross-Examining The Brain

Hat tip to Prof Peter Tillers for pointing us to a paper from Charles Keckler, George Mason University School of Law, on the admissibility in court of neuroimaging evidence of deception. Here’s the abstract:

The last decade has seen remarkable progress in understanding ongoing psychological processes at the neurobiological level, progress that has been driven technologically by the spread of functional neuroimaging devices, especially magnetic resonance imaging, that have become the research tools of a theoretically sophisticated cognitive neuroscience. As this research turns to specification of the mental processes involved in interpersonal deception, the potential evidentiary use of material produced by devices for detecting deception, long stymied by the conceptual and legal limitations of the polygraph, must be re-examined.

Although studies in this area are preliminary, and I conclude they have not yet satisfied the foundational requirements for the admissibility of scientific evidence, the potential for use – particularly as a devastating impeachment threat to encourage factual veracity – is a real one that the legal profession should seek to foster through structuring the correct incentives and rules for admissibility. In particular, neuroscience has articulated basic memory processes to a sufficient degree that contemporaneously neuroimaged witnesses would be unable to feign ignorance of a familiar item (or to claim knowledge of something unfamiliar). The brain implementation of actual lies, and deceit more generally, is of greater complexity and variability. Nevertheless, the research project to elucidate them is conceptually sound, and the law cannot afford to stand apart from what may ultimately constitute profound progress in a fundamental problem of adjudication.


The Law and Ethics of Brain Scanning – audio material online

Hat tip to Mind Hacks (25 June) for alerting us to the fact that the organisers of the conference on The Law and Ethics of Brain Scanning: Coming soon to a courtroom near you?, held in Arizona in April, have uploaded both the powerpoint presentations and MP3s of most of the lectures to the conference website.

A feast of interesting material here that should keep you going, even on the longest commute, including:

  • “Brain Imaging and the Mind: Pseudoscience or Science?” – William Uttal, Arizona State University
  • “Overview of Brain Scanning Technologies” – John J.B. Allen, Department of Psychology, University of Arizona
  • “Brain Scanning and Lie Detection” – Steven Laken, Founder and CEO, Cephos Corporation
  • “Brain Scanning in the Courts: The Story So Far” – Gary Marchant, Center for the Study of Law, Science, & Technology, Sandra Day O’Connor College of Law
  • “Legal Admissibility of Neurological Lie Detection Evidence” – Archie A. Alexander, Health Law & Policy Institute, University of Houston Law Center
  • “Demonstrating Brain Injuries with Brain Scanning” – Larry Cohen, The Cohen Law Firm
  • “Harm and Punishment: An fMRI Experiment” – Owen D. Jones, Vanderbilt University School of Law & Department of Biological Sciences
  • “Through a Glass Darkly: Transdisciplinary Brain Imaging Studies to Predict and Explain Abnormal Behavior” – James H. Fallon, Department of Psychiatry and Human Behavior, University of California, Irvine
  • “Authenticity, Bluffing, and the Privacy of Human Thought: Ethical Issues in Brain Scanning” – Emily Murphy, Stanford Center for Biomedical Ethics
  • “Health, Disability, and Employment Law Implications of MRI” – Stacey Tovino, Hamline University School of Law

From a deception researcher’s point of view, the chance to hear from Steven Laken of commercial fMRI deception detection company Cephos will be particularly interesting.

Mind Hacks also notes that ABC Radio National’s All in the Mind on 23 June featured many of the speakers from this conference in a discussion of neuroscience, criminality and the courtroom. The webpage accompanying this programme has a great reference list. For those interested in deception research, I particularly recommend Wolpe, Foster & Langleben (2005) for an informative overview of the potential uses and dangers of neurotechnologies and deception detection.


Discriminating fact from fiction in recovered memories of childhood sexual abuse

A press release from the Association for Psychological Science (13 June) draws attention to research by Elke Geraerts, a psychology postdoc at Harvard and Maastricht Universities. Geraerts and her colleagues have a paper coming out next month in Psychological Science, presenting results of research on the accuracy of ‘recovered memories’, described in the press release as “one of the most contentious issues in the fields of psychology and psychiatry”.

The press release explains:

A decade or so ago, a spate of high profile legal cases arose in which people were accused, and often convicted, on the basis of “recovered memories.” These memories, usually recollections of childhood abuse, arose years after the incident occurred and often during intensive psychotherapy. [...]

[...] Recovered memories are inherently tricky to validate for several reasons, most notably because the people who hold them are thoroughly convinced of their authenticity. Therefore, to maneuver around this obstacle Geraerts and her colleagues attempted to corroborate the memories through outside sources.

The researchers recruited a sample of people who reported being sexually abused as children and divided them based on how they remembered the event. [...] The results [...] showed that, overall, spontaneously recovered memories were corroborated about as often (37% of the time) as continuous memories (45%). Thus, abuse memories that are spontaneously recovered may indeed be just as accurate as memories that have persisted since the time the incident took place. Interestingly, memories that were recovered in therapy could not be corroborated at all.

The July issue of Psychological Science isn’t online yet, but the paper is available as a pdf on Geraerts’ website.


Photo credit: HarlanH, Creative Commons License

Juries and deception

Deliberations, a blog about “Law, news, and thoughts on juries and jury trials”, has been keeping my attention since its launch in February.

Anne Reed, a trial lawyer and jury consultant from Wisconsin, posts regularly on research and news relating to juries and court cases. If you have an interest in the psychology of juries – or even just a more general interest in forensic psychology – it’s well worth adding to your list of required reading.

Last week Anne published two posts on deception with specific reference to trials: Deceived about deception and How to expose a lie, and was nice enough to make some kind comments about this blog. Thanks Anne!

Photo credit: Duo de Hale, Creative Commons License

Most lies told with best of intentions, psychologist says

Shankar Vedantam comments on the Scooter Libby trial, from The Washington Post (19 Feb):

The perjury trial of Lewis “Scooter” Libby goes to the jury this week. The case speaks to several issues – how the Bush administration deals with critics of the war in Iraq, and the games that Washington’s reporters and politicians play with each other. As far as the jury is concerned, however, the case is about only one thing: lying.

One particularly well-qualified witness on this subject was not called by either the prosecution or the defense, so today we cross-examine Robert Feldman ourselves. Feldman is a social psychologist at the University of Massachusetts who studies lying in everyday life, and his findings are just the kind of thing that Libby’s lawyers could have pounced on.

Note: the WP article requires registration, but you can read it in full, without logging in, on some other sites, including this one.

Brain scans used in trial in India

Via OmniBrain (28 March 06), a rather troubling report that “brain scans” have been used in the trial of an alleged rapist in India.

The results of the brain-mapping and polygraph (lie-detector) tests conducted on rape accused Abhishek Kasliwal have come out in favour of the prosecution. The Mumbai Police had conducted the tests on March 19 at the Central Forensic Science Laboratory in Bangalore.

Sadly, no further details are available, but OmniBrain author J. Stephen Higgins is on the case. Keep an eye on the comments there to see how far he gets.

UPDATE (3 April)

Sandra at Neurofuture is also on the case, and has posted some more detail and links. One link in particular is illuminating, revealing that the Indian police use the polygraph, EEG and sodium pentothal. [Which is apparently fine, because:

“Narcoanalysis is a very scientific and a humane approach in dealing with an accused’s psychological expressions, definitely better than third degree treatment to extract truth from an accused,” affirms Dr Malini.

Well, definitely better than the 'third degree', to be sure. However, here's Wikipedia on the topic:

While fictional accounts of intelligence interrogation give these drugs near magical abilities, information obtained by publicly-disclosed truth drugs has been shown to be highly unreliable, with subjects apparently freely mixing fact and fantasy. Much of the claimed effect relies on the belief of the subject that they cannot tell a lie while under the influence of the drug.]

I digress. According to the 2004 Deccan Herald article that Sandra found, “EEG” appears to refer to Larry Farwell’s controversial Brain Fingerprinting Technique. I might come back to that when I have more time. Meanwhile, keep up to speed with Sandra’s detective work via del.icio.us.

Using Witness Confidence Can Impair the Ability to Detect Deception

Veronica S. Tetterton and Amye R. Warren
Criminal Justice and Behavior, 32(4), 433-451, August 2005

Prior research has shown that jurors rely on confidence in discriminating between accurate and inaccurate testimonies despite the weak relationship between the two. The purpose of this study is to learn if truth seekers also use confidence in judging truthfulness. In two studies, participants were either not given instructions regarding witness confidence or were told not to use witness confidence, and then they were asked to rate the believability of the videotaped testimony of four witnesses who varied in confidence and truthfulness. Regardless of the instructions, participants did rely on confidence and rated highly confident testimonies as more believable. They also rated false testimonies as significantly more believable than true statements.