Category Archives: Applications

Applications of deception detection methods

A few deception tweets from recent days

  • Insurance “claim fraudsters think too much”. Some great Portsmouth Uni research covered by Irish Independent http://retwt.me/1P8R0
  • “If You Want to Catch a Liar, Make Him Draw” David DiSalvo @Neuronarrative on more great Portsmouth Uni research http://retwt.me/1P8ZB
  • fMRI scans of people with schizophrenia show they have same functional anatomical distinction between truth telling & deception as others http://bit.ly/aO5cI2 via @Forpsych
  • In press: Promising to tell truth makes 8- to 16-year-olds more honest (but lectures on morality don’t). Beh Sciences & Law http://is.gd/fCa7X

Quick deception links for the last few weeks

Gah, Twitter update widget broken. Here are the deception-relevant tweets from the last few weeks:

Polygraph and similar:

  • Detecting concealed information w/ reaction times: Validity & comparison w/ polygraph App Cog Psych 24(7) http://is.gd/fhPMW
  • Important (rare) study on polygraph w/ UK sex offenders: leads to more admissions; case mgrs perceive increased risk http://is.gd/eoW4Q

fMRI and other brain scanning:

  • If Brain Scans Really Detected Deception, Who Would Volunteer to be Scanned? J Forensic Sci http://is.gd/eiz2o
  • FMRI & deception: “The production and detection of deception in an interactive game” in Neuropsychologia http://is.gd/eUMO3
  • In the free access PLoS1: fMRI study indicates neural activity associated with deception is valence-related. PLoS One 5(8). http://is.gd/f6IaM

Verbal cues:

  • Distinguishing truthful from invented accounts using reality monitoring criteria – http://ht.ly/2z8FC
  • Detecting Deceptive Discussions in Conference Calls. Linguistic analysis method 50-65% accuracy. SSRN via http://is.gd/eI0bA
  • Effect of suspicion & liars’ strategies on reality monitoring Gnisci, Caso & Vrij in App Cog Psy 24:762–773 http://is.gd/eCFyA

Applied contexts:

  • A new Canadian study on why sex offenders confess during police interrogation (no polygraph necessary) http://is.gd/eoWl7
  • Can fabricated evidence induce false eyewitness testimony? App Cog Psych 24(7) http://is.gd/fhPDd Free access
  • In press, B J Soc Psy Cues to deception in context. http://is.gd/fhPcY Apparently ‘context’ = ‘Jeremy Kyle Show’. Can’t wait for the paper!
  • Can people successfully feign high levels of interrogative suggestibility & compliance when given instructions to malinger? http://ht.ly/2z8Wz

Kids fibbing:

  • Eliciting cues to children’s deception via strategic disclosure of evidence App Cog Psych 24(7) http://is.gd/fhPIS
  • Perceptions about memory reliability and honesty for children of 3 to 18 years old – http://ht.ly/2z8O1

And some other links of interest:

Lie-detection biases among male police interrogators, prisoners, and laypersons

I know, I’ve been away a long time, finishing off my doctorate and working hard, so no time for blogging. The doctorate is finally out of the way but I still don’t have masses of spare time. When I can I’ll update these blogs with studies that catch my eye, though I don’t think I’ll be able to comment in depth on many of them in the way that I used to. That’s partly a time issue, but also I haven’t got access to as many full text articles as I did when I was registered at a university. I’ll do what I can.

Here’s a study that sounds like an interesting addition to the literature on what people think of their own lie-detection abilities:

Beliefs of 28 male police interrogators, 30 male prisoners, and 30 male laypersons about their skill in detecting lies and truths told by others, and in telling lies and truths convincingly themselves, were compared. As predicted, police interrogators overestimated their lie-detection skills. In fact, they were affected by stereotypical beliefs about verbal and nonverbal cues to deception. Prisoners were similarly affected by stereotypical misconceptions about deceptive behaviors but were able to identify that lying is related to pupil dilation. They assessed their lie-detection skill as similar to that of laypersons, but less than that of police interrogators. In contrast to interrogators, prisoners tended to rate their lie-telling skill lower than did the other groups. Results were explained in terms of anchoring and self-assessment bias. Practical aspects of the results for criminal interrogation were discussed.

The full text is behind a paywall – I can’t find a direct link so you have to get there by going to the publisher’s website and searching their e-journals.

Research round-up 6: And finally, kids’ lies, online lies and my deception book of the year

Happy new year! Here is the final part of the 2008 deception research round-up, put together to make amends for having neglected this blog over the past few months. This post includes bits and pieces of deception research that didn’t fit too well into the first five round-up posts. Hope you’ve enjoyed them all!

Children

First, a couple of articles about how children learn to lie:

Eye gaze plays a pivotal role during communication. When interacting deceptively, it is commonly believed that the deceiver will break eye contact and look downward. We examined whether children’s gaze behavior when lying is consistent with this belief. …Younger participants (7- and 9-year-olds) broke eye contact significantly more when lying compared with other conditions. Also, their averted gaze when lying differed significantly from their gaze display in other conditions. In contrast, older participants did not differ in their durations of eye contact or averted gaze across conditions. Participants’ knowledge about eye gaze and deception increased with age. This knowledge significantly predicted their actual gaze behavior when lying. These findings suggest that with increased age, participants became increasingly sophisticated in their use of display rule knowledge to conceal their deception.

The relation between children’s lie-telling and their social and cognitive development was examined. Children (3-8 years) were told not to peek at a toy. Most children peeked and later lied about peeking. Children’s subsequent verbal statements were not always consistent with their initial denial and leaked critical information revealing their deceit. Children’s conceptual moral understanding of lies, executive functioning, and theory-of-mind understanding were also assessed. Children’s initial false denials were related to their first-order belief understanding and their inhibitory control. Children’s ability to maintain their lies was related to their second-order belief understanding. Children’s lying was related to their moral evaluations. These findings suggest that social and cognitive factors may play an important role in children’s lie-telling abilities.

Technotreachery – lying via CMC

It’s a popular topic and the literature is growing all the time. Here’s some of the new research published in 2008 about lying in computer-mediated communication:

This study aimed to elaborate the relationships between sensation-seeking, Internet dependency, and online interpersonal deception. Of the 707 individuals recruited to this study, 675 successfully completed the survey. The results showed high sensation-seekers and high Internet dependents were more likely to engage in online interpersonal deception than were their counterparts.

Deception research has been primarily studied from a Western perspective, so very little is known regarding how other cultures view deception… this study proposes a framework for understanding the role Korean and American culture plays in deceptive behavior for both face-to-face (FTF) and computer-mediated communication (CMC). … Korean respondents exhibited greater collectivist values, lower levels of power distance, and higher levels of masculine values than Americans. Furthermore, deceptive behavior was greater for FTF communication than for CMC for both Korean and American respondents. In addition to a significant relationship between culture and deception, differences were found between espoused cultural values and deceptive behavior, regardless of national culture. These results indicate the need for future research to consider cultural differences when examining deceptive behavior.

This study set out to investigate the type of media in which individuals are more likely to tell self-serving and other-oriented lies, and whether this varied according to the target of the lie. One hundred and fifty participants rated on a Likert scale how likely they would be to tell a lie. Participants were more likely to tell self-serving lies to people not well-known to them. They were more likely to tell self-serving lies in email, followed by phone, and finally face-to-face. Participants were more likely to tell other-oriented lies to individuals they felt close to, and this did not vary according to the type of media. Participants were more likely to tell harsh truths to people not well-known to them via email.

Detecting deception

OK, I know this probably could have gone into an earlier post. However, it does involve a bit of machinery so it didn’t fit in part 1, but the machinery has been in use for several decades so it couldn’t really fit in post 2.

An increasing number of researchers are exploring variations of the Concealed Knowledge Test (CKT) as alternatives to traditional ‘lie-detector’ tests. For example, the response times (RT)-based CKT has been previously shown to accurately detect participants who possess privileged knowledge. Although several studies have reported successful RT-based tests, they have focused on verbal stimuli despite the prevalence of photographic evidence in forensic investigations. Related studies comparing pictures and phrases have yielded inconsistent results. The present work compared an RT-CKT using verbal phrases as stimuli to one using pictures of faces. This led to equally accurate and efficient tests using either stimulus type. Results also suggest that previous inconsistent findings may be attributable to study procedures that led to better memory for verbal than visual items. When memory for verbal phrases and pictures were equated, we found nearly identical detection accuracies.
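For readers curious how an RT-based CKT is typically scored, here is a minimal sketch of the general idea, not the authors' actual procedure: the subject classifies probe items (crime-relevant details only a knowledgeable person would recognize) and matched irrelevant items, and markedly slower responses to probes are taken as evidence of concealed knowledge. The reaction times, threshold, and effect-size measure below are all illustrative assumptions.

```python
from statistics import mean, stdev

def ckt_score(probe_rts, irrelevant_rts, threshold=0.6):
    """Crude RT-based CKT score: how far mean probe RT sits above
    mean irrelevant RT, in units of irrelevant-RT standard deviations.
    A large positive effect is consistent with concealed knowledge."""
    effect = (mean(probe_rts) - mean(irrelevant_rts)) / stdev(irrelevant_rts)
    return effect, effect > threshold

# Hypothetical reaction times in milliseconds
probes = [720, 690, 750, 710, 735]        # crime-relevant items
irrelevants = [610, 640, 595, 625, 630]   # matched control items

effect, flagged = ckt_score(probes, irrelevants)
```

Real studies use many more trials per item type and proper significance tests; the point of the sketch is only that the test statistic is a simple contrast between two RT distributions.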

Deception book of the year

And finally, an important publication in 2008 was the second edition of Aldert Vrij’s Detecting Lies and Deceit: Pitfalls and Opportunities. The first edition (published in 2000) has been one of my key references for scholarly research on deception, along with Paul Ekman’s Telling Lies: Clues to Deceit in the Marketplace, Politics and Marriage and Granhag and Strömwall’s edited volume on The Detection of Deception in Forensic Contexts. Not surprising then that Vrij’s second edition is already one of the most frequently consulted volumes on my deception bookshelf.

Vrij says that he did not originally envisage updating his 2000 book until at least 2010, but felt with the increasing amount of new research in this area, and increasing interest from law enforcement and security agencies in detecting deception that he could not wait that long. The result is a volume that is substantially updated with research published up to about the middle of 2007. The book has been completely rewritten and there are several new chapters covering recent developments in mechanical methods of deception detection, including brain scanning technologies (e.g., fMRI, P300 brain waves), thermal imaging and voice stress analysis. Vrij also adds a helpful chapter on how professionals can become better lie detectors.

It’s not perfect – I’d welcome more detail on understanding the reasons why people lie (the book is mostly about catching liars), more on creating a context in which someone is more likely to tell the truth, and more discussion of cross-cultural differences in deception (though to be fair there is shockingly little research in this area to discuss). But despite these criticisms, Vrij’s new book remains a ‘must have’ reference for academics and professionals interested in up-to-date research on deception detection. Practitioners in particular should heed Vrij’s warning about over-hyped techniques for ‘deception detection’: as Vrij says, the best way to avoid falling for the hype is by keeping up to date with the independent, objective research on deception detection. This book is a great tool for giving yourself a grounding in that research.

Phew. Six months’ blogging in 6 days. Hope you enjoyed it!

Research round-up 4: When people lie

On to part 4 of this series on research published in 2008 that I didn’t get a chance to blog about when it came out, where we take a peek at some of the new research on circumstances in which people lie and what makes them seem credible.

Part 1: Catching liars
Part 2: New technologies
Part 3: Magic

First, lying in an extreme situation: Harpster and her colleagues reported results of a study that suggests that detailed linguistic analysis of calls made to the emergency services can help determine whether the caller might have committed the homicide they are reporting:

This study examined verbal indicators to critically analyze 911 homicide statements for predictive value in determining the caller’s innocence or guilt regarding the offense. One hundred audio recordings and transcripts of 911 homicide telephone calls obtained from police and sheriffs departments throughout the United States provided the database for the study. Using qualitative approaches for formulating the linguistic attributes of these communications and appropriate quantitative analyses of the resulting variables, the likelihood of guilt or innocence of the 911 callers in these adjudicated cases was examined. The results suggest that the presence or absence of as many as 18 of the variables are associated with the likelihood of the caller’s guilt or innocence regarding the offense of homicide. These results are suggestive of up to six distinct linguistic dimensions that may be useful for examination of all homicide calls to support effective investigations of these cases by law enforcement.

Staying in the forensic realm, Tess Neal and Stanley Brodsky wondered how expert witnesses can enhance their credibility. They reported results indicating that eye contact with the lawyer cross-questioning them and with mock jurors enhances the credibility of male experts, though it does not seem to have an impact on female experts’ credibility:

The effect of eye contact on credibility was examined via a 3 (low, medium, high eye contact) x 2 (male, female) between-groups design with 232 undergraduate participants. A trial transcript excerpt about a defendant’s recidivism likelihood was utilized as the experts’ script. A main effect was found: Experts with high eye contact had higher credibility ratings than in the medium and low conditions. Although a confound precluded comparisons between the genders, results indicated that males with high eye contact were more credible than males with medium or low eye contact. The female experts’ credibility was not significantly different regardless of eye contact. Eye contact may be especially important for males: Male experts should maintain eye contact for maximum credibility.

If you’re a rape victim, however, police investigators believe you’re more credible when you cry or show despair whilst giving your evidence:

Credibility judgments by police investigators were examined. Sixty-nine investigators viewed one of three video-recorded versions of a rape victim’s statement where the role was played by a professional actress. The statements were given in a free recall manner with identical wording, but differing in the emotions displayed, termed congruent, neutral and incongruent emotional expressions. Results showed that emotions displayed by the rape victim affected police officers’ judgments of credibility. The victim was judged as most credible when crying and showing despair, and less credible when being neutral or expressing more positive emotions. This result indicates stereotypic beliefs about rape victim behavior among police officers, similar to those found for lay persons. Results are discussed in terms of professional expertise.

From detecting lying by the police to police deception: Geoffrey Alpert and Jeffrey Noble published a discussion piece in Police Quarterly in which they consider the circumstances, nature and impact of conscious, unconscious, ‘acceptable’ and unacceptable lying by police officers:

Police officers often tell lies; they act in ways that are deceptive, they manipulate people and situations, they coerce citizens, and are dishonest. They are taught, encouraged, and often rewarded for their deceptive practices. Officers often lie to suspects about witnesses and evidence, and they are deceitful when attempting to learn about criminal activity. Most of these actions are sanctioned, legal, and expected. Although they are allowed to be dishonest in certain circumstances, they are also required to be trustworthy, honest, and maintain the highest level of integrity. The purpose of this article is to explore situations when officers can be dishonest, some reasons that help us understand the dishonesty, and circumstances where lies may lead to unintended consequences such as false confessions. The authors conclude with a discussion of how police agencies can manage the lies that officers tell and the consequences for the officers, organizations, and the criminal justice system.

In everyday life, when do people think it’s ok to lie? Beverly McLeod and Randy Genereux’s results suggest that your personality traits influence what sorts of lying you find acceptable, and when:

The present study investigated the role of individual differences in the perceived acceptability and likelihood of different types of lies. Two-hundred and eighty seven college students completed scales assessing six personality variables (honesty, kindness, assertiveness, approval motivation, self-monitoring, and Machiavellianism) and rated 16 scenarios involving lies told for four different motives (altruistic, conflict avoidance, social acceptance, and self-gain lies). Our central hypothesis that the perceived acceptability and likelihood of lying would be predicted by interactions between personality characteristics of the rater and the type of lie being considered was supported. For each type of lie, a unique set of personality variables significantly predicted lying acceptability and likelihood.

What is the impact of lying? Robert Lount and his colleagues warned that it’s difficult to recover from an early breach of trust in a relationship:

Few interpersonal relationships endure without one party violating the other’s expectations. Thus, the ability to build trust and to restore cooperation after a breach can be critical for the preservation of positive relationships. Using an iterated prisoner’s dilemma, this article presents two experiments that investigated the effects of the timing of a trust breach—at the start of an interaction, after 5 trials, after 10 trials, or not at all. The findings indicate that getting off on the wrong foot has devastating long-term consequences. Although later breaches seemed to limit cooperation for only a short time, they still planted a seed of distrust that surfaced in the end.

And finally, a couple outside the psychology/criminology literature that may be of interest:

Next round up (part 5): research on the psychophysiology of lying.

Research round-up 2: New technologies and deception detection

Part two of the Deception Blog round-up of “all those articles I haven’t had a chance to blog about”. Part one was about catching liars via non-mechanical techniques. This post covers articles and discussion about new technologies to detect deception, including fMRI and measurement of Event-Related Potentials.

fMRI and deception: discussion on the journal pages

It’s been quite a year for advances in neuroscience and deception detection, so much so that in a recent paper in the Journal of the American Academy of Psychiatry & Law, Daniel Langleben and Frank Dattilio suggested that a new discipline of “forensic MRI” was emerging. One interesting exchange appeared recently in the Journal of the American Academy of Psychiatry & Law:

…The new approach promises significantly greater accuracy than the conventional polygraph—at least under carefully controlled laboratory conditions. But would it work in the real world? Despite some significant concerns about validity and reliability, fMRI lie detection may in fact be appropriate for certain applications. This new ability to peer inside someone’s head raises significant questions of ethics. Commentators have already begun to weigh in on many of these questions. A wider dialogue within the medical, neuroscientific, and legal communities would be optimal in promoting the responsible use of this technology and preventing abuses.

…The present article concludes that the use of functional imaging to discriminate truth from lies does not meet the Daubert criteria for courtroom testimony.

…we update and interpret the data described by Simpson, from the points of view of an experimental scientist and a forensic clinician. We conclude that the current research funding and literature are prematurely skewed toward discussion of existing findings, rather than generation of new fMRI data on deception and related topics such as mind-reading, consciousness, morality, and criminal responsibility. We propose that further progress in brain imaging research may foster the emergence of a new discipline of forensic MRI.

Earlier this year Kamila Sip and colleagues challenged proponents of neuroimaging for deception detection to take more account of the real world context in which deception occurs, which led to a robust defence from John-Dylan Haynes and an equally robust rebuttal from Sip et al. It all happened in the pages of Trends in Cognitive Sciences:

With the increasing interest in the neuroimaging of deception and its commercial application, there is a need to pay more attention to methodology. The weakness of studying deception in an experimental setting has been discussed intensively for over half a century. However, even though much effort has been put into their development, paradigms are still inadequate. The problems that bedevilled the old technology have not been eliminated by the new. Advances will only be possible if experiments are designed that take account of the intentions of the subject and the context in which these occur.

In their recent article, Sip and colleagues raise several criticisms that question whether neuroimaging is suitable for lie detection. Here, two of their points are critically discussed. First, contrary to the view of Sip et al., the fact that brain regions involved in deception are also involved in other cognitive processes is not a problem for classification-based detection of deception. Second, I disagree with their proposition that the development of lie-detection requires enriched experimental deception scenarios. Instead, I propose a data-driven perspective whereby powerful statistical techniques are applied to data obtained in real-world scenarios.

…Valid experimental paradigms for eliciting deception are still required, and such paradigms will be particularly difficult to apply in real-life settings… We agree with Haynes, however, that there are important ethical issues at stake for researchers in this field. In our opinion, one of the most important of these is careful consideration of how results derived from highly controlled laboratory settings compare with those obtained from real-life scenarios, and if and when imaging technology should be transferred from the laboratory to the judicial system.

fMRI and deception: new research findings

Of course discussion is worth nothing if you don’t have research results to discuss. Shawn Christ and colleagues delved deeper into the cognitive processes associated with deception:

Previous neuroimaging studies have implicated the prefrontal cortex (PFC) and nearby brain regions in deception. This is consistent with the hypothesis that lying involves the executive control system….Our findings support the notion that executive control processes, particularly working memory, and their associated neural substrates play an integral role in deception. This work provides a foundation for future research on the neurocognitive basis of deception.

Meanwhile, two groups of researchers reported that fMRI techniques can differentiate between mistakes and false memories vs deliberate deception, with Tatia Lee and colleagues showing that in the case of feigning memory impairment, deception “is not only more cognitively demanding than making unintentional errors but also utilizes different cognitive processes”.

fMRI and deception in the blogosphere

Commentary and discussion of fMRI was not limited to the pages of scholarly journals, however. A terrific post by Vaughan over at Mind Hacks on the limitations of fMRI studies zipped around the blogosphere (and rightly so) and is well worth a read if you are interested in becoming a more critical consumer of fMRI deception detection studies (see also Neurophilosophy’s post MRI: What is it good for? ).

There’s a detailed write-up by Hank Greely of the University of Akron Law School’s conference on Law and Neuroscience held in September, which covers the science, the practicalities and the ethics of using neuroscience in forensic contexts (see also his summary of a presentation at an earlier conference on ‘neurolaw’). Judges, too, are “waking up to the potential misuse of brain-scanning technologies”, with a recent judges’ summit in the US to “discuss protecting courts from junk neuroscience”, reports New Scientist.

Nevertheless, purveyors of MRI lie-detection technology continue to push their wares. For instance, the Antipolygraph Blog picked up a radio discussion on commercial fMRI-based lie detection in June (the audio is still available as an mp3 download).

ERP and deception: the controversial BEOS test

Earlier this year I and many others blogged about the disturbing use of brain scanning in a recent murder trial in India. The technique is known as the Brain Electrical Oscillations Signature test and is based on measuring Event-Related Potentials (electrical activity across the brain). Neurologica blog and Neuroethics and Law have write-ups and links for those who wish to know more.

Neuroethics and Law blog links to a pdf of the judge’s opinion in the case, where pages 58-64 include a summary of the judge’s understanding of the BEOS procedure and what it ‘revealed’ in this case. Most disturbing is the apparent certainty of the judge that the tests were appropriate, scientifically robust and applied correctly by “Sunny Joseph who is working as Assistant Chemical Analyser in Forensic Science Laboratory, Mumbai” (p.55-56):

…competency of this witness to conduct the Test is not seriously challenged. His evidence also reveals that he was working as Clinical Psychologist in National Institute of Mental Health and Neuro Sciences at Bangalore and he has experience in the field of Neuro psychology since last 6 years and in forensic technique since last 1½ years. He has himself conducted approximately 15 Polygraph Tests and has been associated with almost 100 Polygraph Tests. He has conducted 16 BEOS Tests and has been associated in conducting of about 12 Neuro Psychology Tests. Therefore his expertise in my opinion, can in no way be challenged and nothing is brought on record in his cross examination to show that the Tests conducted were not proper and requisite procedure was not followed (p.62).

On a happier note, my hot tip for the New Year is to keep your eye on Social Neuroscience – there are several articles on neural correlates of deception in press there which they are saving up for a special issue in 2009.

More soon – part 3 covers the 2008 flurry of interest in deception and magic!

True and False Memories – a new paper

No time to blog properly, but just wanted to draw your attention to a new paper (download via SSRN) on separating true from false memories. Here’s the abstract:

Many people believe that emotional memories (including those that arise in therapy) are particularly likely to represent true events because of their emotional content. But is emotional content a reliable indicator of memory accuracy? The current research assessed the emotional content of participants’ pre-existing (true) and manipulated (false) memories for childhood events. False memories for one of three emotional childhood events were planted using a suggestive manipulation and then compared, along several subjective dimensions, with other participants’ true memories. On most emotional dimensions (e.g., how emotional was this event for you?), true and false memories were indistinguishable. On a few measures (e.g., intensity of feelings at the time of the event), true memories were more emotional than false memories in the aggregate, yet true and false memories were equally likely to be rated as uniformly emotional. These results suggest that even substantial emotional content may not reliably indicate memory accuracy.

Reference:

Hat tip to the ever-interesting Neuroethics and Law Blog.

New research: Outsmarting the Liars: The Benefit of Asking Unanticipated Questions

In press in the journal Law and Human Behavior, Aldert Vrij and colleagues test a method of questioning that (in lab situations) exposes liars with an up to 80% success rate. Here’s the abstract:

We hypothesised that the responses of pairs of liars would correspond less with each other than would responses of pairs of truth tellers, but only when the responses are given to unanticipated questions. Liars and truth tellers were interviewed individually about having had lunch together in a restaurant. The interviewer asked typical opening questions which we expected the liars to anticipate, followed by questions about spatial and/or temporal information which we expected suspects not to anticipate, and also a request to draw the layout of the restaurant. The results supported the hypothesis, and based on correspondence in responses to the unanticipated questions, up to 80% of liars and truth tellers could be correctly classified, particularly when assessing drawings.

Reference:

At the moment you can download the article for free here (pdf).

Facial expressions and verbal cues to deception

Hat tip to Neuroethics and Law blog for pointing us towards an article in New Scientist (17 Sept) about lies and spin in the current US Presidential campaign.

NS briefly touches on Paul Ekman’s work on microfacial expressions before devoting more attention to the work of David Skillicorn:

Skillicorn has been watching out for verbal “spin”. He has developed an algorithm that evaluates word usage within the text of a conversation or speech to determine when a person “presents themselves or their content in a way that does not necessarily reflect what they know to be true”.
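Skillicorn's actual model isn't spelled out in the article, but the general approach behind LIWC-style analysis (on which his work draws) can be sketched: score a text by the rate at which words from predefined psychological categories appear. In Pennebaker's published work, deceptive language tends to show, for example, fewer first-person pronouns and exclusive words and more negative-emotion words. The tiny word lists below are illustrative stand-ins; the real LIWC dictionaries are proprietary and far larger.

```python
import re

# Illustrative categories loosely in the spirit of LIWC-style analysis;
# these word lists are made up for the example, not taken from LIWC.
CATEGORIES = {
    "first_person": {"i", "me", "my", "mine"},
    "exclusive": {"but", "except", "without"},
    "negative_emotion": {"hate", "worthless", "enemy", "angry"},
}

def category_rates(text):
    """Return each category's share of the total word count."""
    words = re.findall(r"[a-z']+", text.lower())
    return {name: sum(w in vocab for w in words) / len(words)
            for name, vocab in CATEGORIES.items()}

rates = category_rates("I did my best, but my enemies never stopped.")
```

A real system would then feed rates like these into a statistical model trained on texts of known veracity, rather than applying them directly.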

NS then turns to Branka Zei Pollermann, who combines voice and facial analysis:

“The voice analysis profile for McCain looks very much like someone who is clinically depressed,” says Pollermann… [who] uses auditory analysis software to map seven parameters of a person’s speech, including pitch modulation, volume and fluency, to create a voice profile. She then compares that profile with the speaker’s facial expressions, using as a guide a set of facial expressions mapped out by Ekman, called the Facial Action Coding System, to develop an overall picture of how they express themselves.

This story prompted quite a flurry of comments on the website (some of which are worth reading!).

Skillicorn has posted more about his research and its theoretical basis (James Pennebaker’s LIWC technique; pdf here) at his blog Finding Bad Guys in Data.

Deception in the news

I’ve been lax about posting on the blogs recently, I know (real life interferes with blogging). Consider this a catch-up post with some of the deception-related issues hitting the news stands over the last few weeks.

Polygraphing sex offenders gains momentum in the UK: A new pilot scheme to polygraph test sex offenders to see if they “are a risk to the public or are breaking the terms of their release from jail”, according to The Times (20 Sept 2008).

Brain fingerprinting in the news again: Brain test could be next polygraph (Seattle Post-Intelligencer, 14 Sept):

A Seattle scientist who has developed an electronic brain test that he says could improve our ability to force criminals to reveal themselves, identify potential terrorists and free those wrongly convicted may have finally broken through the bureaucratic barriers that he believes have served to stifle adoption of the pioneering technique.

“There seems to be a renewed surge of interest in this by the intelligence agencies and the military,” said Larry Farwell, neuroscientist and founder of Brain Fingerprinting Laboratories based at the Seattle Science Foundation.

Meanwhile, a different deception detection brain scan procedure (not brain fingerprinting) isn’t scientific, according to a well-qualified panel (The Hindu, 8 Sept). India’s Directorate of Forensic Sciences chooses not to accept the panel’s findings:

The Directorate of Forensic Sciences, which comes under the Union Ministry of Home, will not accept the findings of the six-member expert committee that looked into brain mapping and its variant “brain electrical oscillation signature (BEOS) profiling” on the ground that the committee has been dissolved.

The six-member technical peer review committee, headed by National Institute of Mental Health and Neuro Sciences (NIMHANS) Director D. Nagaraja, started work in May 2007. The panel had concluded that the two procedures were unscientific and had recommended against their use as evidence in court or as an investigative tool.

  • See also: more on “BEOS profiling” in The Times of India (21 July) which claims that “This brain test maps the truth”.

Perhaps the answer can be found with infrared light? New Scientist (22 Sept) reports on a patent application to develop a new type of brain scanning lie detection technology:

Scott Bunce, at Drexel University’s College of Medicine in Philadelphia, thinks a better solution [to the problem of detecting lies] is to send near-infrared light through the scalp and skull into the brain and see how much is reflected back. And he has designed a special headband that does just that. The amount of reflected light is dependent on the levels of oxygen in the blood, which in turn depends on how active the brain is at that point.

This, he says, gives a detailed picture of real-time activity within the brain that can be used to determine whether the subject is lying. The technique is both cheaper and easier to apply than fMRI and gives a higher resolution than an EEG. …Of course, nobody knows whether brain activity can reliably be decoded to reveal deception, but that’s another question.

Adults easily fooled by children’s false denials

University of California – Davis press release (17 August):

Adults are easily fooled when a child denies that an actual event took place, but do somewhat better at detecting when a child makes up information about something that never happened, according to new research from the University of California, Davis….

“The large number of children coming into contact with the legal system – mostly as a result of abuse cases – has motivated intense scientific effort to understand children’s true and false reports,” said UC Davis psychology professor and study author Gail S. Goodman. “The seriousness of abuse charges and the frequency with which children’s testimony provides central prosecutorial evidence makes children’s eyewitness memory abilities important considerations. Arguably even more important, however, are adults’ abilities to evaluate children’s reports.”

In an effort to determine if adults can discern children’s true from false reports, Goodman and her co-investigators asked more than 100 adults to view videotapes of 3- and 5-year-olds being interviewed about “true” and “false” events. For true events, the children either accurately confirmed that the event had occurred or inaccurately denied that it had happened. For “false” events – ones that the children had not experienced – they either truthfully denied having experienced them or falsely reported that they had occurred.

Afterward, the adults were asked to evaluate each child’s veracity. The adults were relatively good at detecting accounts of events that never happened. But the adults were apt to mistakenly believe children’s denials of actual events.

“The findings suggest that adults are better at detecting false reports than they are at detecting false denials,” Goodman said. “While accurately detecting false reports protects innocent people from false allegations, the failure to detect false denials could mean that adults fail to protect children who falsely deny actual victimization.”

Polygraph reasoning applied to spotting terrorists…

Remember that the rationale behind the polygraph is that (with an appropriate questioning regime) guilty people are assumed to have physiological responses that differ from innocents’? Well, the new “anxiety-detecting machines” that the DHS hopes might one day spot terrorists seem to work on the same basis. Here’s the report from USA Today (18 Sept):

A scene from the airport of the future: A man’s pulse races as he walks through a checkpoint. His quickened heart rate and heavier breathing set off an alarm. A machine senses his skin temperature jumping. Screeners move in to question him. Signs of a terrorist? Or simply a passenger nervous about a cross-country flight?

It may seem Orwellian, but on Thursday, the Homeland Security Department showed off an early version of physiological screeners that could spot terrorists. The department’s research division is years from using the machines in an airport or an office building— if they even work at all. But officials believe the idea could transform security by doing a bio scan to spot dangerous people.

Critics doubt such a system can work. The idea, they say, subjects innocent travelers to the intrusion of a medical exam.

According to the news report, there is some effort going into testing the equipment, though if the details in the news report are to be believed it sounds like the research is still at a very early stage:

To pinpoint the physiological reactions that indicate hostile intent, researchers… recruited 140 local people with newspaper and Internet ads seeking testers in a “security study.” Each person receives $150.

On Thursday, subjects walked one by one into a trailer with a makeshift checkpoint. A heat camera measured skin temperature. A motion camera watched for tiny skin movements to measure heart and breathing rates. As a screener questioned each tester, five observers in another trailer looked for sharp jumps on the computerized bands that display the person’s physiological characteristics.

Some subjects were instructed in advance to try to cause a disruption when they got past the checkpoint, and to lie about their intentions when being questioned. Those people’s physiological responses are being used to create a database of reactions that signal someone may be planning an attack. More testing is planned for the next year.

The questioning element does make it sound like what is being developed is a ‘remote’ polygraph.

Hat tip to Crim Prof Blog.

UPDATE: Lots of places picking this up all over the www. New Scientist has a post on the same topic here, and an earlier article on the system here. The Telegraph’s report adds some new information.

India’s Novel Use of Brain Scans in Courts Is Debated

According to a report in the New York Times (14 Sept), an Indian judge has taken the results of a brain scan as “proof that the [murder] suspect’s brain held ‘experiential knowledge’ about the crime that only the killer could possess”, and passed a life sentence.

The Brain Electrical Oscillations Signature test, or BEOS, was developed by Champadi Raman Mukundan, a neuroscientist who formerly ran the clinical psychology department of the National Institute of Mental Health and Neuro Sciences in Bangalore. His system builds on methods developed at American universities by other scientists, including Emanuel Donchin, Lawrence A. Farwell and J. Peter Rosenfeld.

Neuroethics and Law Blog comments, as does Dr Lawrence Farwell (inventor of the controversial ‘Brain Fingerprinting’ technique, which bears a passing resemblance to the BEOS test used in India).

Scary stuff.

Lie Detector Technology in court – seduced by neuroscience?

Jeffrey Bellin from the California Courts of Appeal has a paper forthcoming in Temple Law Review on the legal issues involved in deploying new lie detection technology – specifically fMRI technology – in real-world courtroom settings (hat tip to the Neuroethics and Law blog).

Bellin examines the ‘scientific validity’ requirements and argues that the research has progressed to the point where fMRI evidence in deception detection issues will soon reach the standard required to be admissible under the Daubert criteria. However, Bellin’s key issue with using fMRI evidence in court is not on scientific but on legal grounds: he claims that fMRI evidence would fall foul of the hearsay prohibition. He explains that “The hearsay problem arises because lie detector evidence consists of expert analysis of out-of-court statements offered for their truth (i.e., hearsay) and is consequently inadmissible under Federal Rule of Evidence 801 absent an applicable hearsay exception” (p.102).

I am not a lawyer so can’t really comment on the hearsay issue raised by Bellin, except to say that it’s an interesting observation and not one I’ve heard before. I feel better placed to assess his analysis that fMRI technology is only a small step from reaching the Daubert standard. In this Bellin is – in my judgement – way off-beam. His argument runs something like this:

1. The US Government has poured lots of money into lie detection technologies (Bellin quotes a Time magazine guess-timate of “tens of millions to hundreds of millions of dollars” – an uncorroborated rumour, not an established fact).

2. fMRI is “the most promising of the emerging new lie detection technologies” (p.106) because “brain activities will be more difficult to suppress than typical stress reactions measured by traditional polygraph examinations, [so] new technologies like fMRI show great promise for the development of scientifically valid lie detectors” (p.106).

3. Thus, “The infusion of money and energy into the science of lie detection coupled with the pace of recent developments in that science suggest that it is only a matter of time before lie detector evidence meets the Daubert threshold for scientific validity.” (p.107).

And the references he provides for this analysis? Steve Silberman’s “Don’t Even Think About Lying” in Wired Magazine from 2006, and a piece in Time magazine from the same year, entitled “How to Spot a Liar“, by Jeffrey Kluger and Coco Masters. Now both of these articles are fine pieces of journalism, but they hardly constitute good grounds for Bellin’s assertion that fMRI technology is almost ready to be admitted in court. (And if you’re going to use journalistic pieces as references, can I recommend, as a much better source, an excellent article: “Duped: Can brain scans uncover lies?” by Margaret Talbot from The New Yorker [July 2, 2007].)

Let’s just remind ourselves of the Daubert criteria. To paraphrase the comprehensive Wikipedia page, before expert testimony can be entered into evidence it must be relevant to the case at hand, and the expert’s conclusions must be scientific. This latter condition means that a judge deciding on whether to admit expert testimony based on a technique has to address five points:

1. Has the technique been tested in actual field conditions (and not just in a laboratory)?
2. Has the technique been subject to peer review and publication?
3. What is the known or potential rate of error? Is it zero, or low enough to be close to zero?
4. Do standards exist for the control of the technique’s operation?
5. Has the technique been generally accepted within the relevant scientific community?

As far as fMRI for lie detection is concerned I think the answers are:

  1. No, with only a couple of exceptions.
  2. Yes, though there is a long way to go before the technique has been tested in relevant conditions.
  3. In some lab conditions, accuracy rates reach 95%. But what about in real life situations? We don’t have enough research to say.
  4. There are no published or agreed standards for undertaking deception detection fMRI scans.
  5. No, the arguments are still raging!

As an example of point 5, one of the crucial arguments is over the interpretation of the results of fMRI experiments (Logothetis, 2008). Mind Hacks had a terrific article a few weeks ago in which they summarise the key issue:

It starts with this simple question: what is fMRI measuring? When we talk about imaging experiments, we usually say it measures ‘brain activity’, but you may be surprised to know that no-one’s really sure what this actually means.

And as Jonah Lehrer points out more recently :

[...T]he critical flaw of such studies is that they neglect the vast interconnectivity of the brain… Because large swaths of the cortex are involved in almost every aspect of cognition – even a mind at rest exhibits widespread neural activity – the typical fMRI image, with its highly localized spots of color, can be deceptive. The technology makes sense of the mind by leaving lots of stuff out – it attempts to sort the “noise” from the “signal” – but sometimes what’s left out is essential to understanding what’s really going on.

Bellin is not alone in perhaps being seduced by the fMRI myth, as two recent studies (McCabe & Castel, 2007; Weisberg et al., 2008) demonstrate very nicely. McCabe and Castel showed that participants judged news stories as ‘more scientific’ when accompanied by images of brain scans than without, and Weisberg et al.’s participants rated bad explanations of psychological phenomena as more scientifically sound when they included a spurious neuroscience reference. Why are people so beguiled by the blobs in the brain? Here are McCabe and Castel, quoted in the BPS Research Blog:

McCabe and Castel said their results show people have a “natural affinity for reductionistic explanations of cognitive phenomena, such that physical representations of cognitive processes, like brain images, are more satisfying, or more credible, than more abstract representations, like tables or bar graphs.”

References:

See also:

Deceptive Self-Presentation in Online Dating Profiles

In the latest issue of Personality and Social Psychology Bulletin, Catalina Toma and colleagues consider how people lie in online dating profiles, and what they lie about. Here’s the abstract:

This study examines self-presentation in online dating profiles using a novel cross-validation technique for establishing accuracy. Eighty online daters rated the accuracy of their online self-presentation. Information about participants’ physical attributes was then collected (height, weight, and age) and compared with their online profile, revealing that deviations tended to be ubiquitous but small in magnitude. Men lied more about their height, and women lied more about their weight, with participants farther from the mean lying more. Participants’ self-ratings of accuracy were significantly correlated with observed accuracy, suggesting that inaccuracies were intentional rather than self-deceptive. Overall, participants reported being the least accurate about their photographs and the most accurate about their relationship information. Deception patterns suggest that participants strategically balanced the deceptive opportunities presented by online self-presentation (e.g., the editability of profiles) with the social constraints of establishing romantic relationships (e.g., the anticipation of future interaction).

Reference:

See also:

Increasing Cognitive Load to Facilitate Lie Detection: The Benefit of Recalling an Event in Reverse Order

Continuing with their research on the ‘cognitive load hypothesis’, Aldert Vrij and colleagues from Portsmouth University report on a technique for facilitating lie detection – telling the story in reverse order. This article appears in the latest issue of Law and Human Behavior, although the study featured extensively in the press a few months ago (see here).

Here’s the abstract:

In two experiments, we tested the hypotheses that (a) the difference between liars and truth tellers will be greater when interviewees report their stories in reverse order than in chronological order, and (b) instructing interviewees to recall their stories in reverse order will facilitate detecting deception. In Experiment 1, 80 mock suspects told the truth or lied about a staged event and did or did not report their stories in reverse order. The reverse order interviews contained many more cues to deceit than the control interviews. In Experiment 2, 55 police officers watched a selection of the videotaped interviews of Experiment 1 and made veracity judgements. Requesting suspects to convey their stories in reverse order improved police observers’ ability to detect deception and did not result in a response bias.

Reference:

Secrets – Their Use and Abuse in Organizations

In the latest issue of Journal of Management Inquiry, Carl Keane from Queen’s University, Canada considers organisational secrets. Here’s the abstract:

Organizational scholars, and most social scientists for that matter, have rarely examined the use of the secret in controlling organizational behavior. On one hand, organizational secrets are necessary for the survival of the organization; on the other hand, organizational secrets are often used to hide unethical and illegal behavior. In this essay, the author examines the phenomenon of the secret as part of organizational life, from both a functional and dysfunctional perspective. Specifically, the author illustrates how from a functional point of view, secrets can legally protect organizational vulnerabilities, whereas from a dysfunctional point of view, secrets control organizational members and prevent the communication of knowledge to others. Both processes occur through the construction of social and cognitive boundaries as a form of social control.

Reference:

Learning to lie

From New York Magazine (10 Feb), a detailed article on how kids learn to lie:

Kids lie early, often, and for all sorts of reasons—to avoid punishment, to bond with friends, to gain a sense of control. But now there’s a singular theory for one way this habit develops: They are just copying their parents.

… In the last few years, a handful of intrepid scholars have decided it’s time to try to understand why kids lie. For a study to assess the extent of teenage dissembling, Dr. Nancy Darling… recruited a special research team of a dozen undergraduate students, all under the age of 21… “They began the interviews saying that parents give you everything and yes, you should tell them everything,” Darling observes. By the end of the interview, the kids saw for the first time how much they were lying and how many of the family’s rules they had broken. Darling says 98 percent of the teens reported lying to their parents.

… For two decades, parents have rated “honesty” as the trait they most wanted in their children. Other traits, such as confidence or good judgment, don’t even come close. On paper, the kids are getting this message. In surveys, 98 percent said that trust and honesty were essential in a personal relationship. Depending on their ages, 96 to 98 percent said lying is morally wrong.

So when do the 98 percent who think lying is wrong become the 98 percent who lie?

Full article here.

See also:

Quick round up of deception news

Sorry for the slow posting recently – real life is getting in the way of blogging at the moment, and is likely to continue to do so for some time yet, so please bear with me. Perhaps some of these items will give you your deception research fix in the meantime.

If you’d like something to listen to during the daily commute why not download an interview with John F. Sullivan, author of Gatekeeper: Memoirs of a CIA Polygraph Examiner (h/t Antipolygraph Blog).

Alternatively, try a short NPR Morning Edition segment on the neuropsychology of lying (h/t and see also The Frontal Cortex).

The ever-interesting BPS Research Digest discusses a study of how toddlers tell a joke from a mistake. According to the researchers, Elena Hoicka and Merideth Gattis:

…the ability to recognise humorous intent comes after the ability to recognise jokes, but before the ability to recognise pretense and lies. “We propose that humour understanding is an important step toward understanding that human actions can be intentional not just when actions are right, but even when they are wrong,” they concluded.

Karen Franklin has a terrific commentary on the Wall Street Journal’s discussion of a subscale of the MMPI, which claims to detect malingerers but which, according to critics, results in a large number of false positives (i.e., labelling truthful test-takers as malingerers). (See also a short commentary by Steven Erikson).

There are two articles by Jeremy Dean of the glorious PsyBlog on false memories (here and here).

And finally, Kai Chang at Overcoming Bias reports on an unusual teaching technique which involves asking students to spot the Lie of the Day.

To lie or not to lie: To whom and under what circumstances.

Here’s an interesting article that I missed from last year on how teenagers judge the acceptability of lying in different situations. The abstract explains:

This research examined adolescents’ judgments about lying to circumvent directives from parents or friends in the moral, personal, and prudential domains. One hundred and twenty-eight adolescents (12.1-17.3 years) were presented with situations in which an adolescent avoids a directive through deception. The majority of adolescents judged some acts as acceptable, including deception regarding parental directives to engage in moral violations and to restrict personal activities. Other acts of deception were judged as unacceptable, including deception of parents regarding prudential acts, as well as deception of friends in each domain. In addition, lying to conceal a misdeed was negatively evaluated. Most adolescents thought that directives from parents and friends to engage in moral violations or to restrict personal acts were not legitimate, whereas parental directives concerning prudential acts were seen as legitimate. Results indicate that adolescents value honesty, but sometimes subordinate it to moral and personal concerns in relationships of inequality.

Reference: