Category Archives: Lie-catchers

Studies on accuracy and beliefs of lie-catchers

The Truth About Truth Serum

… from Damn Interesting (30 August):

Popular culture makes gratuitous use of powerful lie-repelling agents known as Truth Serums. They are usually depicted as injected drugs which strongly inhibit a subject’s ability to lie, causing him or her to mechanically recite the truth to an interviewer upon questioning.

[…] But are these truth serums effective? Do they produce any useful results?

The short answer is, no. The long answer is “Noooooooooooo!” while running in slow-motion.

Nice. Read the whole thing.

ABC programme on lying

The Antipolygraph Blog pointed us towards an Australian Broadcasting Corporation TV special on deception entitled To Catch a Liar that aired back in July.

A transcript of the show is available (with some good links at the end if you scroll down). The programme included comments from Steve Van Aperen, Maureen O’Sullivan and Andrew Ryan of DoDPI.

More details on the Antipolygraph Blog, where John Furedy (University of Toronto) has posted a review in the comments.

Police officers’ ability to detect deception in high stakes situations

There’s another study from Aldert Vrij and his colleagues at Portsmouth University in the latest issue of Applied Cognitive Psychology. The rationale for the study is interesting and relevant to anyone who is interested in real-life deception detection. In previous studies, participants have tended to be shown just one (or one set) of clips on one occasion. Conclusions are drawn about individual differences on the basis of such tests, but Vrij and his colleagues wondered how meaningful these conclusions actually are. After all:

The problem is that participants are typically tested only once. It therefore cannot be ruled out that a particular good or bad performance on the single lie detection task was just a matter of luck. This is a particularly relevant point in studies where attempts are made to unravel the strategies used by good lie detectors.

The materials were clips from real suspect interviews where ground truth was known (and the stakes were high), and the participants were real, experienced police officers (an average of 6.9 years in police service). A welcome step forward from the usual studies in which students judge other students, and well done to the UK police force concerned for facilitating the research. The officers’ task was to judge four sets of clips of liars and truth-tellers on four different occasions. The results:

1. The total accuracy (four tests combined) was 72%. This is an improvement on the usual 50-60% hit rate typically found in lab studies of deception detection (DePaulo et al., 2003).

2. Vrij and colleagues “did not find consistency in officers’ performance over the four tests, suggesting that performance on individual tests is, as we predicted, partly caused by luck” (see the simulation sketch at the end of this post).

3. They found that officers were equally good at detecting truth (70% accuracy) and lies (73%).

4. They found that “officers were overly modest about, rather than overconfident in, their performance”. On average officers tended to believe (both before and after the tests) that they had only performed at chance level.

Neat.
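The ‘luck’ point is easy to see with a toy simulation. Below is a minimal sketch (not from the paper: the pool of judges, the ten-clips-per-test figure and the use of the overall 72% hit rate as every judge’s true per-clip probability are all illustrative assumptions) showing how widely single-test scores scatter even when every judge has identical underlying skill, and how averaging over four tests shrinks that scatter.

```python
import random

random.seed(1)

N_JUDGES = 1000        # hypothetical pool of judges (not the study's sample)
CLIPS_PER_TEST = 10    # hypothetical number of clips per test
N_TESTS = 4
TRUE_ACCURACY = 0.72   # illustrative per-clip hit probability for every judge

def test_score(p, n_clips):
    """Proportion correct on one test for a judge whose true hit rate is p."""
    return sum(random.random() < p for _ in range(n_clips)) / n_clips

# Score each judge once on a single test, and once averaged over four tests.
single_scores = [test_score(TRUE_ACCURACY, CLIPS_PER_TEST) for _ in range(N_JUDGES)]
averaged_scores = [
    sum(test_score(TRUE_ACCURACY, CLIPS_PER_TEST) for _ in range(N_TESTS)) / N_TESTS
    for _ in range(N_JUDGES)
]

def mean_sd(xs):
    mean = sum(xs) / len(xs)
    sd = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    return mean, sd

print("single test : mean %.2f, sd %.2f" % mean_sd(single_scores))
print("four tests  : mean %.2f, sd %.2f" % mean_sd(averaged_scores))
# Typical result: sd of roughly 0.14 for a single test versus roughly 0.07 for
# the four-test average, even though every simulated judge has identical skill.
```

With only ten clips, a judge whose true hit rate is 72% can score anywhere from roughly 50% to 90% on a single test purely by chance, which is why the repeated-testing design Vrij and colleagues used matters.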

How to detect bullshit

An entertaining piece by Scott Berkun (9 Aug) on how to detect lies and BS.

[…] One particularly troublesome kind of lie is known as Bullshit (BS). These are unnecessary deceptions, committed in the gray area between polite white lies and complete malicious fabrications. BS is usually defined as inventions made in ignorance of the facts, where the primary goal is to protect oneself. The aim of BS isn’t to harm another person, although that often happens collaterally. For a variety of reasons BS can be hard to detect […]

Hat tip to Lifehacker for the link!

Time Magazine wonders how to spot a liar

A lengthy piece in last week’s Time Magazine (20 August) rakes over familiar ground:

[…] In the post-9/11 world, where anyone with a boarding pass and a piece of carry-on is a potential menace, the need is greater than ever for law enforcement’s most elusive dream: a simple technique that can expose a liar as dependably as a blood test can identify DNA or a Breathalyzer can nail a drunk. Quietly over the past five years, Department of Defense agencies and the Department of Homeland Security have dramatically stepped up the hunt. Though the exact figures are concealed in the classified “black budget,” tens of millions to hundreds of millions of dollars are believed to have been poured into lie-detection techniques as diverse as infrared imagers to study the eyes, scanners to peer into the brain, sensors to spot liars from a distance, and analysts trained to scrutinize the unconscious facial flutters that often accompany a falsehood.

The article goes on to discuss research on deception using fMRI, electroencephalograms, eye scans and microexpressions. It concludes:

For now, the new lie-detection techniques are likely to remain in the same ambiguous ethical holding area as so many other privacy issues in the twitchy post-9/11 years. We’ll give up a lot to keep our cities, airplanes and children safe. But it’s hard to say in the abstract when “a lot” becomes “too much.” We can only hope that we’ll recognize it when it happens.

Weeding Out Terrorists: Officials Turn To Behavior Profiling To Find Would-Be Attackers

A reader (who prefers to remain anonymous) kindly pointed us towards this interesting article, which appeared on the CBS news site on 15 August. It highlights the work of psychology professor Mark Frank, who is working with the US TSA, along with his former supervisor Paul Ekman, to develop methods of behavioural profiling.

While X-ray machines and explosive scanners focus on weapons, Transportation Security chief Kip Hawley said Tuesday that screeners are studying passengers for signs of nervousness. “It is involuntary muscular behaviors that are across the board, that doesn’t matter what you look like. You don’t have to look like a terrorist to exhibit these involuntary behaviors,” he says. It’s the first step in so-called behavioral profiling. The next step can be seen inside a lab at the University of Buffalo, where a research suspect is about to tell a lie.

[…] Frank, who’s developing the profiling technique for the Department of Homeland Security, claims he can spot a liar 90 percent of the time. “I think at this point it is at least as accurate as a polygraph,” he says. Frank says the kind of behavioral analysis he’s doing in his lab can be taught to screeners in the real world with as little as 30 minutes training. Hundreds of faces have convinced him the science is solid in identifying people who might be lying.

There are two videos on the site where you can see Frank discussing his work in more detail.

New-age lie detector takes a different tack

There’s an interview with Dr Britton Chance, Professor Emeritus of biophysics at the University of Pennsylvania, in the latest issue of the RCMP Gazette (Vol 68, No 2) entitled “Detecting deception” in which Chance outlines his team’s work to develop “a new-generation lie detector that measures deception by detecting sudden spikes in the brain’s bloodflow”. Here’s an extract:

How does this technology measure deceit?

Dr Chance: Deceit usually involves a decision to tell a lie instead of a decision to tell the truth. We can “image” this thought pattern before it’s articulated since it causes an increase in bloodflow to the cerebral cortex, or the brain’s decision-making centre. Users of the cognoscope detect changes in bloodflow through a red spot that appears on the computed images. I observed this relationship between bloodflow and deception in my work, as well as the work of my colleague, Dr. Daniel Langleben, an Assistant Professor of Psychiatry at the University of Pennsylvania.

[…]

There are many questions around the accuracy of conventional lie-detection techniques such as the polygraph. Could the use of near-infrared light sensors on the brain serve to boost the accuracy of lie detection techniques?

Dr Chance: Preliminary experiments with the cognoscope at the U.S. Department of Defense’s Polygraph Institute suggest the brain’s frontal cortex gives reliable signals. We have also proposed non-contact sensing of prefrontal activation, thus our optical method is one of the few, if not the only, technique that can be used, under proper ethical considerations, for remote sensing of brain functional activity. It is therefore suited for advanced government security tests, such as baggage handling checkpoints at airports. In this case, users could detect deception in passengers who are taken aside and asked if anyone else has handled their bags, etc.

Conclusions about the science of such technology are one thing, but implying that this sort of ‘brain scanning’ technology might be used for “advanced government security tests” at airports is, I believe, pretty irresponsible. It’s not suited to such an application, not now and not any time soon. I’ve written about the overhyping of brain imaging techniques many times before, so I’ll try not to repeat myself. But this article is in an official publication, which will be read by law enforcement officers throughout Canada and beyond. It’s a highly technical issue, and with no discussion of the limitations of such technology and no mention of the practical problems of ‘brain scanning’ suspicious individuals, how are readers with limited or no scientific background supposed to judge how useful this technology really will be?

Pages on deception on WikiHow

WikiHow, a collaborative writing project to build the world’s largest how-to manual, has a few pages relevant to deception and deception detection.

On 2 August their page on How to Cheat a Polygraph Test (Lie Detector) was a featured article. It’s an extensive and detailed set of suggestions for how to beat the polygraph, and also includes a summary of the polygraph procedure.

WikiHow also has an entertaining page on How to Lie which isn’t bad. But the page on How to Detect Lies contains some suggestions for detecting lies which have absolutely no empirical support, such as:

Notice the person’s eye movements. Someone who is lying will be more reluctant than usual to make direct eye contact.

Nope. Countless studies have indicated that gaze aversion is one of the least reliable ‘deception clues’, despite being one of the most commonly cited (DePaulo et al., 2003; Vrij, 2000; Bond, 2006).

Liars also tend to blink more often.

Nervous people blink more often. People who are thinking hard blink less often. So far the studies suggest that liars tend on balance to show more signs of cognitive load (thinking hard) than nerves. (Mann et al., 2003; Vrij et al., 2006)

A typical right-handed person tends to look towards his right when remembering something that actually happened and towards their left when they’re making something up.

An NLP-based cue, widely cited as a reliable sign of deception. There is no empirical evidence that eye movements are connected to deception, and it’s worth noting that the founders of NLP never claimed that eye movements could be used to detect deception (see also Vrij & Lochun, 1997).

Check for sweating. People tend to sweat more when they lie.

Nervous people sweat more often. Not a reliable sign of deception.

Watch their hands, arms and legs, which tend to be limited, stiff, and self-directed when the person is lying.

Not sure about the self-directed bit, but the empirical evidence DOES suggest that liars tend to make fewer hand and foot movements than truth-tellers. Again, this is a sign of cognitive load (DePaulo et al., 2003; Vrij, 2000; Vrij et al., 2006).

The hands may touch or scratch their face, nose or behind an ear, but are not likely to touch their chest or heart with an open hand.

The empirical evidence indicates that liars are no more likely to touch their faces than truth-tellers (DePaulo et al, 2003). I’ve not seen any research relating to touching of the chest.

Training law enforcement officers to detect deception

This article from the latest issue of Police Quarterly offers a good overview of the literature on deception detection training, along with some sensible suggestions for improving training.

Abstract:

The current study surveyed a random sample of Texas law enforcement officers (N = 109) about their training in detecting deception. Texas officers reported that their training entailed the equivalent of a 2-day, lecture-style workshop in the kinesic interview technique or Reid technique, two popular police training modules, with subsequent training more often the exception than the rule. The authors examine these results in light of previous social science research regarding officers’ accuracy in detecting deception and make suggestions for future training programs for police officers in this area.

Recommendations for training include:

  • Draw on the research literature (“In general, training appeared to neglect any discussion of social science research findings…” [p285])
  • Address myths about deception detection (current courses apparently focus on cues but not erroneous beliefs)
  • Give students plenty of practice with statements and video clips (currently lectures seem to predominate)
  • Give consistent and timely feedback on accuracy
  • Spend time on credibility assessment for victims and witnesses (current courses seem almost exclusively focused on suspect interviewing)

Reference:

How long to decide whether someone is trustworthy?

The BPS Research Digest has a post this week (20 July) on a recently published study indicating that people make snap judgements of trustworthiness based on facial appearance.

“These findings suggest that minimal exposure to faces is sufficient for people to form trait impressions, and that additional exposure time can simply boost confidence in these impressions. That is, additional encounters with a person may only serve to justify quick, initial, on-line judgments”, the researchers said.

BPS-RD commentary on the article here.

Confounding influences on police detection of suspiciousness

Published online last week, this article by Richard Johnson (Washburn University, KS, USA) in the Journal of Criminal Justice reports a study of how police officers make decisions about ‘suspiciousness’, in particular looking at how US officers interacted with Caucasian, Hispanic and African American members of the public. Here’s the abstract:

The social psychological literature had shown wide acceptance by the police of the use of nonverbal behaviors such as smiles, speech disruptions, gaze aversion, and hand gestures as cues to deceptive or suspicious activity by criminal suspects. Current police investigative training also reinforces these beliefs. The present study analyzed the influence of race and emotional agitation level on the frequency with which these ‘suspicious’ nonverbal behaviors are displayed. Reviewing 120 videotaped police-citizen interactions of a noncriminal nature involving law-abiding citizens, the results suggested that level of emotional agitation had a weak but significant influence on the frequency with which two of these nonverbal behaviors were displayed. Race also had a significant influence that ranged from moderate to strong, as African-American and Hispanic citizens displayed significantly higher levels than Caucasians of behaviors thought of as ‘suspicious’ by police officers.

If you have a subscription to Journal of Criminal Justice you can read the article in press via the Science Direct pages for the journal.

We have evolved to lie because it is an effective strategy for human survival, says a British psychologist

Ubiquitous British psychologist Richard Wiseman gave a talk earlier this month in Kuala Lumpur, organised by the British Council, entitled “How to Catch a Liar”, reports Malaysian news site Sun2Surf.com (10 July).

Although lying is actually difficult to do convincingly, we’ve evolved to lie because it is an effective strategy for human survival, he said. […] “Some lying helps bond society together, although some people may manipulate it,” he said.

[…] Wiseman’s research, conducted over 12 years, has found visual signals to be the least revealing about when a person is lying because there is a decrease in gestures and body movement. He says the linguistic approach is the most accurate way to detect a liar. When somebody is lying, there is an increase in pauses, speech errors and response latency and a decrease in speech rate and emotional involvement.

Testing the Behavior Analysis Interview

The Reid Technique of Interviewing and Interrogation is probably the most widely known interrogation technique – or, to be accurate, collection of techniques – and is used by thousands of law enforcement officers throughout the USA and beyond (though not in the UK). However, the method is controversial: many claim that the potentially coercive nature of the questioning may prompt false confessions by innocent but suggestible interviewees (see, for example, the work of Saul Kassin, Gisli Gudjonsson, Richard Leo and Richard Ofshe).

The latest issue of Law and Human Behavior 30(3) includes an article by Aldert Vrij, Samantha Mann and Ronald Fisher, who present the results of their study to test the Reid Technique’s Behaviour Analysis Interview. According to the Reid Institute website:

The BAI consists of a series of investigative questions that are specifically developed for each case, and a series of behavior provoking questions that elicit verbal and nonverbal responses which serve to help identify those persons who should be eliminated from suspicion, and those who are most likely involved in committing the act under investigation. During the BAI the subject is also asked a series of questions to determine whether or not they have the propensity to commit the act in question.

Vrij and his colleagues set out to test whether the BAI does indeed provoke responses that reliably differentiate liars from truth-tellers.

Paraverbal indicators of deception: a meta-analytic synthesis

In the latest edition of Applied Cognitive Psychology, Siegfried Sporer and Barbara Schwandt present a meta-analysis of paraverbal cues to deception. The article also serves as a pretty good critique of previous deception studies. As the authors explain, meaningful meta-analyses are not as easy to do as they perhaps should be, because of the huge variation in experimental paradigms in deception studies. In particular “very little is still known about high stakes lies” and “very few researchers were successful in or even thought about creating unsanctioned lie conditions” (p442). From the research that has been done, the authors conclude that “there are considerable differences in behaviour when individuals lie with or without permission and when they are motivated to deceive successfully or not” (p441).

Reference:

Follow the link for the abstract on the publisher’s site.
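As an aside for readers unfamiliar with the mechanics: the core of a meta-analytic synthesis like this is a precision-weighted average of per-study effect sizes. Here is a minimal sketch of a fixed-effect combination of Cohen’s d values, with entirely invented numbers (nothing below comes from Sporer and Schwandt’s data).

```python
# Fixed-effect synthesis of standardized mean differences (Cohen's d).
# All effect sizes and sample sizes are invented for illustration only.
studies = [
    {"d": 0.30, "n1": 20, "n2": 20},   # hypothetical study 1 (liars vs truth-tellers)
    {"d": 0.10, "n1": 35, "n2": 35},   # hypothetical study 2
    {"d": -0.05, "n1": 15, "n2": 15},  # hypothetical study 3
    {"d": 0.25, "n1": 50, "n2": 50},   # hypothetical study 4
]

def d_variance(d, n1, n2):
    """Approximate sampling variance of Cohen's d for two independent groups."""
    return (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))

# Weight each study by the inverse of its sampling variance (its precision).
weights = [1.0 / d_variance(s["d"], s["n1"], s["n2"]) for s in studies]
pooled_d = sum(w * s["d"] for w, s in zip(weights, studies)) / sum(weights)
se = (1.0 / sum(weights)) ** 0.5

print(f"pooled d = {pooled_d:.2f} "
      f"(95% CI {pooled_d - 1.96 * se:.2f} to {pooled_d + 1.96 * se:.2f})")
```

The heterogeneity problem Sporer and Schwandt highlight shows up when the individual effect sizes disagree with each other by more than their sampling error would predict, which is what you would expect when studies differ so much in stakes, sanctioning and motivation.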

Detecting Lies in Children and Adults

In the latest issue of Law and Human Behavior, an article by Gail S. Goodman and her colleagues reports the results of a study exploring whether observers can detect children’s lies. The authors tested adults’ ability to detect lies told by both children and adults, with some interesting findings, notably that

  • observers detected children’s lies more accurately than adults’ lies
  • observers were more likely to detect adults’ truthful statements than children’s truthful statements
  • observers who were highly accurate in detecting children’s lies were similarly accurate in detecting adults’ lies
  • observers were biased toward judging adults’ but not children’s statements as truthful

In other words, the results suggest that we might be biased towards believing adults and disbelieving children. This has potentially important implications in forensic settings. For instance, might investigators and jurors be biased to believe that children are telling lies in abuse allegations? At the moment, of course, we cannot know, but it looks like an important and worthwhile area for further study.

Reference:

Follow the link above for the abstract on the publisher’s website.

Lying is exposed by micro-expressions we can’t control

Mark Frank moved up to the University of Buffalo last year to continue the deception research he had been doing at Rutgers. Now Buffalo has issued a press release (5 May) highlighting some of the interesting facets of Frank’s research. Much of Frank’s research continues the pioneering work of Paul Ekman (Frank’s former teacher and ongoing collaborator) on identifying facial microexpressions of emotion:

[…Frank’s] revolutionary research on human facial expressions in situations of high stakes deception debunks myths that have permeated police and security training for decades. His work has come to be recognized by security officials in the U.S. and abroad as very useful tool in the identification and interrogation of terrorism suspects.

[…] “Fleeting facial expressions are expressed by minute and unconscious movements of facial muscles like the frontalis, corrugator and risorius,” Frank says, “and these micro-movements, when provoked by underlying emotions, are almost impossible for us to control.”

But Frank doesn’t think that understanding microexpressions is the ‘silver bullet’ for deception detection:

“I want to make it clear that one micro-expression or collection of them is not proof of anything,” Frank says. “They have meaning only in the context of other behavioral cues, and even then are not an indictment of an individual, just very good clues.”

Links:

  • Link to press release from Buffalo
  • Link to more on Frank’s work (hosted at Rutgers)
  • Link to New Yorker article by Malcolm Gladwell about facial microexpressions (thank you to the Thinking Meat blog for reminding me of this!)
  • Link to more on facial expressions
  • Link to news item on Mark Frank’s work with TSA

Detecting deception by manipulating cognitive load

The latest edition of Trends in Cognitive Sciences carries a short article from European deception researcher Aldert Vrij and his colleagues, in which they suggest manipulating the cognitive load for a liar as a means to detect deceit [1].

Most methods for detecting deception are predicated on the theory that lying is associated with emotional arousal: for instance, fear, guilt, nervousness about consequences or excitement at getting away with a lie (the phenomenon Paul Ekman describes as ‘duping delight’). Machines are designed, or humans taught, to detect signs of emotional arousal to help judge whether the subject’s response may be deceitful. It’s how the polygraph works, and is also the basis of the theory that teaching lie-catchers to spot nonverbal behaviours can help in detecting deceit (although no reputable researcher or trainer will claim that a particular nonverbal cue is in itself a sign of deception).

Detecting changes in the level of emotional arousal when a suspected liar answers particular questions can be an effective method of detecting deceit, but only when combined with a hypothesis testing approach: an individual may show changes in emotional arousal for many different reasons, only one of which is because they are lying. And indeed, if a liar does not feel emotionally aroused by lying (or if their arousal is already extremely high, generating too much ‘noise’ to spot any signals) then this approach is not particularly effective.

In their TiCS article, Ron Fisher, Vrij and Vrij’s colleagues Samantha Mann and Sharon Leal propose another approach. Previous research has suggested that, regardless of their level of emotional arousal, liars may also find lying cognitively demanding [2]. Making up and sustaining a lie can often be very difficult, particularly if the liar was not expecting a particular line of questioning and/or had not prepared their lie. Previous work [3] by Vrij suggested that telling lie-catchers to ask themselves whether or not a suspected liar was ‘thinking hard’ (rather than asking ‘are they lying?’) led to a better hit rate in catching liars. Based on this research, the current authors suggest that rather than attempting to detect emotional arousal in response to particular questions, lie-catchers should instead use a questioning strategy that raises cognitive load for the suspected liar. Their suggestion – and their current line of research – is that as well as instructing lie-catchers to ask themselves “is s/he thinking hard”,

Lie detection could be enhanced further by using interview techniques strategically to increase interviewees’ cognitive demand; for example, by requiring interviewees to perform a concurrent secondary task (‘time-sharing’) while being interviewed. Liars, whose cognitive resources will already be partially depleted by the act of lying, should find this additional, concurrent task particularly debilitating.

At the moment, of course, this is still just a theory. And, as with the ’emotional arousal’ approach, practitioners attempting to detect deceit by spotting signs of cognitive load would still need to use a hypothesis-testing approach – just as with emotional arousal, there are many reasons why an interviewee might have to think hard about their answer.

But I like this line of research. It gets away from the common (and unreliable) “if he’s nervous he must be lying” approach, but, more importantly, Vrij et al.’s proposal that researchers should explore different interview protocols gets to the heart of an issue that bugs me a lot: there is no point in teaching lie-catchers to spot changes in behaviours (verbal or non-verbal, signs of arousal or cognitive load) if they ask bad questions in the first place. Yet few researchers test the efficacy of different interview methods. Indeed, there aren’t many recommended interview protocols for detecting deception, and one of the few, the Reid Technique’s Behavior Analysis Interview, was found to be unreliable when Vrij and his colleagues tested it in the laboratory [4]. When it comes to training practitioners, it is good interview technique that ultimately exposes liars, not just being able to spot behaviours.

References:

[1] Vrij, A., Fisher, R., Mann, S. & Leal, S. (2006). Detecting deception by manipulating cognitive load. Trends in Cognitive Sciences, 10(4), 141–142.

[2] Mann, S., Vrij, A. & Bull, R. (2002). Suspects, lies and videotape: an analysis of authentic high-stakes liars. Law and Human Behavior, 26, 365–376.

[3] Vrij, A. (2004). Why professionals fail to catch liars and how they can improve. Legal and Criminological Psychology, 9, 159–183.

[4] Vrij, A. & Mann, S. An Empirical Test of the Behaviour Analysis Interview. Presented at the 6th International Conference of the Society for Applied Memory and Cognition, January 2005. (I believe that this has now been submitted for publication, but can’t track it down at the moment.)

New book about lying

Another new book on lying was published last month. The Truth About Lies by Andy Shea and Steve Van Aperen doesn’t seem to be available in the UK or US, but I have it on order from the publisher and may review it here in due course. (If you’ve read it, please let us all know what you thought via the comments.)

According to the publisher’s blurb:

Ranging from medieval witch-ducking to state-of-the-art truth serums, Andy Shea and Steve Van Aperen use examples from history and from modern-day celebrity cases to spin a tale about lies and lie detection through the ages. They pull apart written and spoken words to show how lies are so hard to carry off because our bodies betray us and, if you know what to look for, how easy they are to spot. The Truth About Lies provides compelling insight into why people lie — and how to make sure you don’t get taken for a ride.

And about the authors:

Andy Shea is a former London police officer turned writer and journalist. Steve Van Aperen was a […] homicide detective but is now a deceptive behaviour expert and FBI-trained polygraph examiner.

The Sydney Morning Herald (20 March) reviews the book here. The SMH review discusses (in very broad terms) lying and deception as they apply to political decisions, although the book blurb suggests that it is another ‘how to’ guide to spotting liars.

Details:
The Truth About Lies
Andy Shea and Steve Van Aperen
Publisher: ABC Books
ISBN: 0733317030

New article on global beliefs about deception

An interesting paper on worldwide stereotypes of liars has been published by the Global Deception Research Team, a group of 90 international researchers recruited via e-mail by Charles F. Bond, Jr.

A World of Lies appears in the January 2006 issue of the Journal of Cross-Cultural Psychology, vol. 37, no. 1, pp. 60–74. Abstract:

This article reports two worldwide studies of stereotypes about liars. These studies are carried out in 75 different countries and 43 different languages. In Study 1, participants respond to the open-ended question “How can you tell when people are lying?” In Study 2, participants complete a questionnaire about lying. These two studies reveal a dominant pan-cultural stereotype: that liars avert gaze. The authors identify other common beliefs and offer a social control interpretation.

Training professional groups and lay persons to use CBCA to detect deception

Applied Cognitive Psychology, Volume 18, Issue 7, Pages 877–891

The effects of training professional groups and lay persons to use criteria-based content analysis to detect deception
Lucy Akehurst, Ray Bull, Aldert Vrij, Günter Köhnken

This experiment was designed to assess, for the first time, the effects of training police officers, social workers and students in Criteria-Based Content Analysis (CBCA) in an attempt to increase lie detection accuracy. A within-subjects design was implemented. Participants rated the truthfulness of a maximum of four statements before training in CBCA and rated the truthfulness of a different set of four statements after training. The raters were only exposed to the written transcripts of the communicators. Two thirds of the statements utilized were truthful and one third were based on fabrications.

Before training, there were no significant differences in detection accuracy between the police officers (66% accuracy), the social workers (72% accuracy) and the students (56% accuracy). After training, the social workers were 77% accurate and significantly more accurate than the police officers (55%) and the students (61%). However, none of the three groups of raters significantly improved their lie detection accuracy after training, in fact, the police officers performed significantly poorer. Overall, police officers were significantly more confident than social workers and lay persons regardless of accuracy. Further, participants were most confident when labelling a statement truthful regardless of whether or not this was the correct decision.