Category Archives: Ethics

Research round-up 2: New technologies and deception detection

Part two of the Deception Blog round-up of “all those articles I haven’t had a chance to blog about”. Part one was about catching liars via non-mechanical techniques. This post covers articles and discussion about new technologies to detect deception, including fMRI and measurement of Event-Related Potentials.

fMRI and deception: discussion on the journal pages

It’s been quite a year for advances in neuroscience and deception detection – so much so that in a recent paper in the Journal of the American Academy of Psychiatry & Law, Daniel Langleben and Frank Dattilio suggested that a new discipline of “forensic MRI” was emerging. One interesting exchange appeared recently in the same journal:

…The new approach promises significantly greater accuracy than the conventional polygraph—at least under carefully controlled laboratory conditions. But would it work in the real world? Despite some significant concerns about validity and reliability, fMRI lie detection may in fact be appropriate for certain applications. This new ability to peer inside someone’s head raises significant questions of ethics. Commentators have already begun to weigh in on many of these questions. A wider dialogue within the medical, neuroscientific, and legal communities would be optimal in promoting the responsible use of this technology and preventing abuses.

…The present article concludes that the use of functional imaging to discriminate truth from lies does not meet the Daubert criteria for courtroom testimony.

…we update and interpret the data described by Simpson, from the points of view of an experimental scientist and a forensic clinician. We conclude that the current research funding and literature are prematurely skewed toward discussion of existing findings, rather than generation of new fMRI data on deception and related topics such as mind-reading, consciousness, morality, and criminal responsibility. We propose that further progress in brain imaging research may foster the emergence of a new discipline of forensic MRI.

Earlier this year Kamila Sip and colleagues challenged proponents of neuroimaging for deception detection to take more account of the real world context in which deception occurs, which led to a robust defence from John-Dylan Haynes and an equally robust rebuttal from Sip et al. It all happened in the pages of Trends in Cognitive Sciences:

With the increasing interest in the neuroimaging of deception and its commercial application, there is a need to pay more attention to methodology. The weakness of studying deception in an experimental setting has been discussed intensively for over half a century. However, even though much effort has been put into their development, paradigms are still inadequate. The problems that bedevilled the old technology have not been eliminated by the new. Advances will only be possible if experiments are designed that take account of the intentions of the subject and the context in which these occur.

In their recent article, Sip and colleagues raise several criticisms that question whether neuroimaging is suitable for lie detection. Here, two of their points are critically discussed. First, contrary to the view of Sip et al., the fact that brain regions involved in deception are also involved in other cognitive processes is not a problem for classification-based detection of deception. Second, I disagree with their proposition that the development of lie-detection requires enriched experimental deception scenarios. Instead, I propose a data-driven perspective whereby powerful statistical techniques are applied to data obtained in real-world scenarios.

…Valid experimental paradigms for eliciting deception are still required, and such paradigms will be particularly difficult to apply in real-life settings… We agree with Haynes, however, that there are important ethical issues at stake for researchers in this field. In our opinion, one of the most important of these is careful consideration of how results derived from highly controlled laboratory settings compare with those obtained from real-life scenarios, and if and when imaging technology should be transferred from the laboratory to the judicial system.
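For readers unfamiliar with the “classification-based detection” Haynes refers to, here is a minimal sketch of the idea in Python. Everything in it is invented: synthetic voxel patterns and a toy nearest-centroid classifier stand in for real fMRI data and the far more sophisticated statistical machinery used in practice. The point is only to show the core move – train on labelled multi-voxel patterns, then classify held-out trials.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_train, n_test = 50, 40, 20

# Invented "class signal": the average difference between lie and truth
# patterns. Real effects are far subtler and noisier than this.
signal = 0.5 * rng.normal(0, 1, n_voxels)

def simulate_trials(n, lying):
    """Simulate n multi-voxel activity patterns for one condition."""
    base = signal if lying else -signal
    return base + rng.normal(0, 1, (n, n_voxels))

X_truth = simulate_trials(n_train, lying=False)
X_lie = simulate_trials(n_train, lying=True)

# Nearest-centroid classifier: label a new pattern by whichever
# class mean it lies closer to in voxel space.
c_truth, c_lie = X_truth.mean(axis=0), X_lie.mean(axis=0)

def predict(pattern):
    d_truth = np.linalg.norm(pattern - c_truth)
    d_lie = np.linalg.norm(pattern - c_lie)
    return "lie" if d_lie < d_truth else "truth"

# Evaluate on held-out trials the classifier has never seen.
held_out = [(x, "truth") for x in simulate_trials(n_test, lying=False)]
held_out += [(x, "lie") for x in simulate_trials(n_test, lying=True)]
accuracy = np.mean([predict(x) == label for x, label in held_out])
print(f"held-out accuracy: {accuracy:.2f}")
```

Note that even a toy classifier can score well on data drawn from the same distribution it was trained on – which is precisely why Sip and colleagues’ question about transfer from laboratory scenarios to real-world deception is the important one.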

fMRI and deception: new research findings

Of course, discussion is worth nothing if you don’t have research results to discuss. Shawn Christ and colleagues delved deeper into the cognitive processes associated with deception:

Previous neuroimaging studies have implicated the prefrontal cortex (PFC) and nearby brain regions in deception. This is consistent with the hypothesis that lying involves the executive control system….Our findings support the notion that executive control processes, particularly working memory, and their associated neural substrates play an integral role in deception. This work provides a foundation for future research on the neurocognitive basis of deception.

Meanwhile, two groups of researchers reported that fMRI techniques can differentiate between mistakes or false memories and deliberate deception, with Tatia Lee and colleagues showing that in the case of feigning memory impairment, deception “is not only more cognitively demanding than making unintentional errors but also utilizes different cognitive processes”.

fMRI and deception in the blogosphere

Commentary and discussion of fMRI was not limited to the pages of scholarly journals, however. A terrific post by Vaughan over at Mind Hacks on the limitations of fMRI studies zipped around the blogosphere (and rightly so) and is well worth a read if you are interested in becoming a more critical consumer of fMRI deception detection studies (see also Neurophilosophy’s post MRI: What is it good for?).

There’s a detailed write-up by Hank Greely of the conference on Law and Neuroscience held at the University of Akron Law School in September, which covers the science, the practicalities and the ethics of using neuroscience in forensic contexts (see also his summary of a presentation at an earlier conference on ‘neurolaw’). Judges, too, are “waking up to the potential misuse of brain-scanning technologies”, with a recent judges’ summit in the US to “discuss protecting courts from junk neuroscience”, reports New Scientist.

Nevertheless, purveyors of MRI lie-detection technology continue to push their wares. For instance, the Antipolygraph Blog picked up a radio discussion on commercial fMRI-based lie detection in June (the audio is still available as an mp3 download).

ERP and deception: the controversial BEOS test

Earlier this year I and many others blogged about the disturbing use of brain scanning in a recent murder trial in India. The technique, known as the Brain Electrical Oscillations Signature (BEOS) test, is based on measuring Event-Related Potentials (electrical activity across the brain). Neurologica blog and Neuroethics and Law have write-ups and links for those who wish to know more.

Neuroethics and Law blog links to a pdf of the judge’s opinion in the case, where pages 58-64 include a summary of the judge’s understanding of the BEOS procedure and what it ‘revealed’ in this case. Most disturbing is the apparent certainty of the judge that the tests were appropriate, scientifically robust and applied correctly by “Sunny Joseph who is working as Assistant Chemical Analyser in Forensic Science Laboratory, Mumbai” (p.55-56):

…competency of this witness to conduct the Test is not seriously challenged. His evidence also reveals that he was working as Clinical Psychologist in National Institute of Mental Health and Neuro Sciences at Bangalore and he has experience in the field of Neuro psychology since last 6 years and in forensic technique since last 1½ years. He has himself conducted approximately 15 Polygraph Tests and has been associated with almost 100 Polygraph Tests. He has conducted 16 BEOS Tests and has been associated in conducting of about 12 Neuro Psychology Tests. Therefore his expertise in my opinion, can in no way be challenged and nothing is brought on record in his cross examination to show that the Tests conducted were not proper and requisite procedure was not followed (p.62).

On a happier note, my hot tip for the New Year is to keep your eye on Social Neuroscience – there are several articles on neural correlates of deception in press there which they are saving up for a special issue in 2009.

More soon – part 3 covers the 2008 flurry of interest in deception and magic!

fMRI and deception: report on a recent symposium

From Science Daily, 19 Feb, a report on the recent symposium Is There Science Underlying Truth Detection? sponsored by the American Academy of Arts and Sciences and the McGovern Institute for Brain Research at MIT. It does a good job of summarising some of the practical, legal, ethical and theoretical issues surrounding the use of fMRI for deception detection. Here’s an excerpt, but it’s worth reading in full:

The symposium explored whether functional magnetic resonance imaging (fMRI), which images brain regions at work, can detect lying. “There are some bold claims regarding the potential to use functional MRI to detect deception, so it’s important to learn what is known about the science,” said Emilio Bizzi, president of the American Academy of Arts and Sciences, an investigator at MIT’s McGovern Institute for Brain Research and one of the organizers of the event.

[...] In 2005, two separate teams of researchers announced that their algorithms had been able to reliably identify “neural signatures” that indicated when a subject was lying. But the research, conducted on only a handful of subjects, was flawed, Kanwisher said. Subjects were asked to lie about whether they were holding a certain card or whether they had “stolen” certain items. These are not actually lies, she pointed out, because subjects were asked to make such statements. “What does this have to do with real-world lie detection? Making a false response when instructed isn’t a lie.”

[...] In addition, the subject may not want to cooperate. “FMRI results are garbage if the subject is moving even a little bit. A subject can completely mess up the data by moving his tongue in his mouth or performing mental arithmetic,” she said. Testing also poses problems. To ensure accurate results, fMRIs would have to be tested on a wide variety of people, some guilty and some innocent, and they would need to believe that the data would have real consequences on their lives. The work would need to be published in peer-reviewed journals and replicated without conflicts of interest.

In short, Kanwisher said, “There’s no compelling evidence fMRIs will work for lie detection in the real world.”

Revealing secret intentions in the brain

Press release from the Max Planck Institute (8 Feb):

Our secret intentions remain concealed until we put them into action – so we believe. Now researchers have been able to decode these secret intentions from patterns of brain activity. They let subjects freely and covertly choose between two possible tasks – to either add or subtract two numbers. They were then asked to hold in mind their intention for a while until the relevant numbers were presented on a screen. The researchers were able to recognize the subjects’ intentions with 70% accuracy based on their brain activity alone – even before the participants had seen the numbers and had started to perform the calculation.

[...] The work of Haynes and his colleagues goes far beyond simply confirming previous theories. It has never before been possible to read out of brain activity how a person has decided to act in the future.

This press release prompted a piece in the UK Guardian (9 Feb) that explored both the research and the possible applications of this knowledge:

The latest work reveals the dramatic pace at which neuroscience is progressing, prompting the researchers to call for an urgent debate into the ethical issues surrounding future uses for the technology. If brain-reading can be refined, it could quickly be adopted to assist interrogations of criminals and terrorists, and even usher in a “Minority Report” era (as portrayed in the Steven Spielberg science fiction film of that name), where judgments are handed down before the law is broken on the strength of an incriminating brain scan. “These techniques are emerging and we need an ethical debate about the implications, so that one day we’re not surprised and overwhelmed and caught on the wrong foot by what they can do. These things are going to come to us in the next few years and we should really be prepared,” Professor Haynes told the Guardian.


  • John-Dylan Haynes, Katsuyuki Sakai, Geraint Rees, Sam Gilbert, Chris Frith, Dick Passingham (2007). Reading hidden intentions in the human brain. Current Biology, February 20th, 2007 (online: February 8th). PDF and HTML full text freely available (as of 9 Feb 07)

Symposium: Is There Science Underlying Truth Detection?

If you’re in Cambridge MA next week you might be interested in a symposium on brain imaging and deception detection, to be held at the American Academy of Arts & Sciences on 2 February, from 2-5pm:

The American Academy of Arts and Sciences, the McGovern Institute for Brain Research at MIT, and Harvard University are holding a symposium on the science, law, and ethics of using brain imaging technology to detect deception. The program will focus on the status of the science behind detecting deception using fMRI. Presenters will also consider the legal, ethical, and public policy implications of using brain imaging for lie detection.

The symposium is free, but advance registration is required (more details here).

More on truth serums

Shelley at the neuroscience blog Retrospectacle mused on truth serums last week (9 Jan):

Let’s just assume for a moment that there existed some potion that extracted the truth from people, rendered them unable to lie when questioned. Wouldn’t that negate free will?

Not so much in the religious sense of the word, but rather in the sense of information or confessions being freely given. It would certainly change our judicial system, where criminals are seen as repentant if they confess their crimes and parole boards would be pretty much moot. My feeling is that our secrets are part of what defines us. I don’t mean secrets like cheating on a spouse or child abuse or something, but rather the thoughts and small actions that we choose to keep to ourselves. [...]

There’s an interesting conversation going on in the comments to Shelley’s post, including a discussion of how truth serums work (or don’t work) and the ethics of using them for criminal investigations, as the Indian police are doing.

Time Magazine wonders how to spot a liar

A lengthy piece in last week’s Time Magazine (20 August) rakes over familiar ground:

[...] In the post-9/11 world, where anyone with a boarding pass and a piece of carry-on is a potential menace, the need is greater than ever for law enforcement’s most elusive dream: a simple technique that can expose a liar as dependably as a blood test can identify DNA or a Breathalyzer can nail a drunk. Quietly over the past five years, Department of Defense agencies and the Department of Homeland Security have dramatically stepped up the hunt. Though the exact figures are concealed in the classified “black budget,” tens of millions to hundreds of millions of dollars are believed to have been poured into lie-detection techniques as diverse as infrared imagers to study the eyes, scanners to peer into the brain, sensors to spot liars from a distance, and analysts trained to scrutinize the unconscious facial flutters that often accompany a falsehood.

The article goes on to discuss research on deception using fMRI, electroencephalograms, eye scans and microexpressions, and concludes:

For now, the new lie-detection techniques are likely to remain in the same ambiguous ethical holding area as so many other privacy issues in the twitchy post-9/11 years. We’ll give up a lot to keep our cities, airplanes and children safe. But it’s hard to say in the abstract when “a lot” becomes “too much.” We can only hope that we’ll recognize it when it happens.

Poll Finds White Lies a Necessary Evil

Earlier this month the Associated Press sponsored an opinion poll on lying, publishing the results on 11 July.

Apparently white lies are an acceptable, even necessary, part of many lives – even though we dislike the idea of lying.

Nearly two-thirds of Americans agree. In the AP-Ipsos poll, 65 percent of those questioned said it was sometimes OK to lie to avoid hurting someone’s feelings, even though 52 percent said lying, overall, was never justified.

The article discusses the philosophy and ethics of lying, and brings in the work of noted expert in the psychology of deception, Bella DePaulo, who is quoted thus:

“People who say lying is wrong are often thinking in the abstract,” DePaulo says. “In our real lives, we can’t always pick honesty without compromising some other value that might be as important” – like maintaining a happy relationship.

FOI request from the ACLU aims to expose whether US Government agencies are using brain scanning technology to detect deception

The Internet is buzzing with the news that the American Civil Liberties Union has filed a Freedom of Information request seeking information about Government use of brain scanners in interrogations.

According to the ACLU press release, the organisation has filed the request because:

“There are certain things that have such powerful implications for our society — and for humanity at large — that we have a right to know how they are being used so that we can grapple with them as a democratic society,” said Barry Steinhardt, Director of the ACLU’s Technology and Liberty Project. “These brain-scanning technologies are far from ready for forensic uses and if deployed will inevitably be misused and misunderstood.”

[…] “These brain-scanning technologies have potentially far-reaching implications, yet uncertain results and effectiveness,” said Steinhardt. “And we are still in our infancy when it comes to understanding the underlying processes of the brain that the scanners have begun to reveal. We do not want to see our government yet again deploying a potentially momentous technology unilaterally and in secret, before Americans have had a chance to figure out how it fits in with our values as a nation.”

Earlier in June the ACLU sponsored a forum featuring experts discussing the use of fMRI as a “lie detector”, the video of which can be downloaded here.

Other coverage that goes beyond reprinting the ACLU press release:

More on fMRI to detect deception on this blog here.

Nature focuses on ethics of brain scanning to detect deception

BrainEthics Blog discusses two articles in the latest issue of Nature:

[...] this week’s issue of Nature caught me surprised with the release of two articles on ethical aspects of neuroscience. It really demonstrates how hot and important this issue is. Basically, both articles are on the application of brain scanners to detect lies.

The articles are in Nature Volume 441 Number 7096, on page 207:

  • Neuroethics needed: Researchers should speak out on claims made on behalf of their science.

and page 918:

  • Lure of lie detectors spooks ethicists: US companies are planning to profit from lie-detection technology that uses brain scans, but the move to commercialize a little-tested method is ringing ethical and scientific alarm bells. Helen Pearson reports.

Regardless of whether you can access the Nature articles, I urge you to go and take a look at the BrainEthics post. It does a great job of summarising the key issues.

Call me picky, but Brain Fingerprinting and fMRI are not the same thing…

The All About Forensic Psychology Blog (28 May) picked up on an article by Robert Ellman (Intrepid Liberal Journal) in Political Cortex about the implications of brain scanning for lie detection in forensic settings. AAFP author David Webb writes:

Fascinating article addressing the potential use and implications of Functional Magnetic Resonance Imaging (FMRI) i.e. brain fingerprinting. [...] Given the concerns regarding the documented unreliability of the polygraph (lie detector), Silberman contends that subject to further empirical testing, Functional Magnetic Resonance Imaging (FMRI) may prove effective in solving crimes or preventing terrorism.

I’m enjoying the AAFP Blog a lot: David often finds interesting FP articles that I wouldn’t otherwise come across. But, at the risk of being picky, I need to correct a few things in David’s post about brain scanning. To be fair, his post reflects confusion in the original article about the differences between brain fingerprinting, brain scanning and fMRI. They are not the same thing. Let’s try to sort this out, without getting too technical:

Brain scanning is a generic term used in many different contexts by many different authors usually to refer to any technique that measures brain activity.

Brain Fingerprinting is the name given by Lawrence Farwell to a technique for detecting guilty knowledge using EEG. EEG measures electrical activity across a subject’s scalp; from it, researchers extract the brain’s responses to specific stimuli, known as Event-Related Potentials (ERPs). Farwell claims that Brain Fingerprinting can detect from these signals whether a stimulus is novel or familiar to the subject.

fMRI is a technique for measuring brain activity via changes in the concentration of oxygenated haemoglobin in the brain. In deception detection, researchers suggest, fMRI can detect changes in the pattern of subjects’ brain activity when they are lying compared with when they are telling the truth.

None of these techniques constitutes a ‘lie detector’. Nor is the polygraph (sorry, David, I’m being picky again). The techniques simply measure changes in physiology or brain activity in response to a stimulus. The decision as to whether that change is a result of deception depends on the stimulus that caused the change (e.g. the question being asked, or, perhaps, the photo being shown) and how that change is interpreted.
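To make the ERP idea concrete, here is a toy simulation in Python of the logic behind guilty-knowledge tests such as Brain Fingerprinting. The waveform, noise levels and decision threshold are all invented for illustration: the point is simply that averaging many noisy single-trial epochs reveals a stimulus-locked response (here, a P300-like deflection to a familiar ‘probe’) that is invisible in any single trial.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n_trials, epoch_len = 250, 60, 250   # 1-second epochs sampled at 250 Hz
t = np.arange(epoch_len) / fs

# Invented P300-like positive deflection peaking ~400 ms after stimulus
# onset, present (in this toy) only for the familiar "probe" stimulus.
p300 = 5.0 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))

def simulate_epochs(has_p300):
    """Simulate n_trials noisy single-trial EEG epochs for one stimulus type."""
    base = p300 if has_p300 else np.zeros(epoch_len)
    return base + rng.normal(0, 10, (n_trials, epoch_len))

probe = simulate_epochs(has_p300=True)        # familiar stimulus
irrelevant = simulate_epochs(has_p300=False)  # novel stimulus

# Averaging across trials cancels the random noise and leaves the
# stimulus-locked ERP.
avg_probe = probe.mean(axis=0)
avg_irrelevant = irrelevant.mean(axis=0)

# Compare mean amplitude in a P300 window (300-500 ms post-stimulus).
window = (t >= 0.3) & (t <= 0.5)
diff = avg_probe[window].mean() - avg_irrelevant[window].mean()
familiar = diff > 2.0   # invented decision threshold
print(f"P300-window difference: {diff:.1f} (familiar: {familiar})")
```

The controversy, of course, is not whether averaging reveals ERPs – it does – but whether a larger response to crime-related probes licenses the inference that the subject has guilty knowledge, which is exactly the interpretive leap at issue in the paragraph above.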

For anyone new to this area – or even with a bit of knowledge of the field – I’d recommend a great article that appeared in the American Journal of Bioethics last year on the promises and perils of lie detection. The paper, by Paul Root Wolpe, Kenneth R Foster and Daniel L Langleben (all at the University of Pennsylvania) does a terrific job of setting out clearly some of the problems in deploying Emerging Neurotechnologies for Lie Detection and is also a good source for summaries of the theoretical, ethical and practical issues involved in deploying EEG measurements and fMRI for deception detection.

As far as the authors know, Farwell’s Brain Fingerprinting is the only one of the new neurotechnologies that has been used in real-life forensic settings, and only once (according to Wolpe et al.) has it been used in court in the US. As Wolpe et al. explain, in Harrington v. Iowa (a 2003 post-conviction relief hearing), Farwell submitted Brain Fingerprinting evidence that Harrington had no knowledge of the crime scene. Harrington was freed, but Wolpe et al. contend that the Brain Fingerprinting evidence was essentially irrelevant to the court’s decision. They add,

as the State of Iowa complained in its brief against Brain Fingerprinting in Harrington, the most critical problem with admission of Brain Fingerprinting evidence is the lack of any track record establishing its reliability (p47).

Despite this, it appears that the Indian police are using Brain Fingerprinting in real-life cases where suspects’ liberty is at stake. This blog – and OmniBrain and Neurofuture – has mentioned these concerns before, and Robert Ellman picks up on more evidence that this untested technology is being used by the Indian police. He provides links to articles in the New Kerala newspaper highlighting how Brain Fingerprinting (not fMRI, as Ellman suggests) is being used, for instance:

The court had earlier awarded life imprisonment to Javed and six others [...] for rioting and murdering a man on November 11, 2003 [...] The judge awarded the sentence after considering the results of the brain fingerprinting tests performed on the accused, among other facts in this case.

Ellman also highlights concerns that ‘cognitive profiling’ might one day be applied to

[...] search suspects for brain waves that suggest a propensity toward violence — a sort of cognitive profiling. ‘You can do an FMRI scan showing that the structures in the brain responsible for impulse control and empathy are underactive and the parts of the brain responsible for aggression and more animalistic, violent activities are overactive,’ Snead explained. ‘Maybe with these nascent technologies, we’ll be able to develop some kind of profile for a terrorist.’ Suspects who show a propensity for violence might be detained indefinitely as enemy combatants even though they committed no crimes.

Minority Report here we come.


UPDATE (16 June):

Sandra helpfully points out in the comments below that the ‘Brain Fingerprinting’ being used in India is not the same as the BF developed by Lawrence Farwell, though it seems to be based on the same principle.

The ethics of fMRI for deception detection

Thank you to Enrica Dente’s Lie-Detection list for bringing a new article in the Stanford Report (May 3) to our attention. The article summarises a recent talk by ethicist and law professor Hank Greely about the ethics of using fMRI for deception detection:

Greely [...] discussed his concerns about the new lie detection technology at a campus Science, Technology and Society seminar April 14. Greely said he is excited by the potential for improved lie detection but concerned that it could lead to personal-privacy violations and a host of legal problems—especially if the techniques prove unreliable.

[...] “Deception is not a very clear-cut, well-defined thing,” Greely said. “We know people can remember things that never happened. How does that show up on an fMRI lie detection test?”

Access the full article here.

Reading Minds: Lie Detection, Neuroscience, Law, and Society

If you are in California this Friday, 10 March (and how I wish I was!), this would be a very interesting way to spend your day. Stanford Law School is putting on a one-day conference on lie detection and neuroscience. Here’s the blurb:

A revolution in neuroscience has vastly expanded our understanding of the human brain and its operations. Our increasing ability to monitor the brain’s operations holds the possibility of being able to detect directly a person’s mental state. One of the most interesting possible applications is using neuroscientific methods to provide reliable lie detection. Several scientists, and several companies, claim that this use has arrived. The morning session of the conference will examine the scientific plausibility of reliable lie detection through neuroscientific methods, discussing different methods and assessing their likely success. The afternoon session will assume that at least one of those methods is established as reliable and will then explore what social and legal ramifications will follow. This conference is free and open to the public.

There’s a link to the full agenda on the conference website, but just look at the line-up:

Wow. Thank you to Neuroethics & Law Blog for the highlight!