Category Archives: Lie-catchers

Studies on accuracy and beliefs of lie-catchers

Lie-detection biases among male police interrogators, prisoners, and laypersons

I know, I’ve been away a long time, finishing off my doctorate and working hard, so no time for blogging. The doctorate is finally out of the way, but I still don’t have masses of spare time. When I can, I’ll update this blog with studies that catch my eye, though I don’t think I’ll be able to comment in depth on many of them in the way that I used to. That’s partly a time issue, but also because I no longer have access to as many full-text articles as I did when I was registered at a university. I’ll do what I can.

Here’s a study that sounds like an interesting addition to the literature on what people think of their own lie-detection abilities:

Beliefs of 28 male police interrogators, 30 male prisoners, and 30 male laypersons about their skill in detecting lies and truths told by others, and in telling lies and truths convincingly themselves, were compared. As predicted, police interrogators overestimated their lie-detection skills. In fact, they were affected by stereotypical beliefs about verbal and nonverbal cues to deception. Prisoners were similarly affected by stereotypical misconceptions about deceptive behaviors but were able to identify that lying is related to pupil dilation. They assessed their lie-detection skill as similar to that of laypersons, but less than that of police interrogators. In contrast to interrogators, prisoners tended to rate lower their lie-telling skill than did the other groups. Results were explained in terms of anchoring and self-assessment bias. Practical aspects of the results for criminal interrogation were discussed.

The full text is behind a paywall – I can’t find a direct link, so you have to get there by going to the publisher’s website and searching their e-journals.

Research round-up 6: And finally, kids’ lies, online lies and my deception book of the year

Happy new year! Here is the final part of the 2008 deception research round-up, put together to make amends for having neglected this blog over the past few months. This post includes bits and pieces of deception research that didn’t fit too well into the first five round-up posts. Hope you’ve enjoyed them all!

Children

First, a couple of articles about how children learn to lie:

Eye gaze plays a pivotal role during communication. When interacting deceptively, it is commonly believed that the deceiver will break eye contact and look downward. We examined whether children’s gaze behavior when lying is consistent with this belief. …Younger participants (7- and 9-year-olds) broke eye contact significantly more when lying compared with other conditions. Also, their averted gaze when lying differed significantly from their gaze display in other conditions. In contrast, older participants did not differ in their durations of eye contact or averted gaze across conditions. Participants’ knowledge about eye gaze and deception increased with age. This knowledge significantly predicted their actual gaze behavior when lying. These findings suggest that with increased age, participants became increasingly sophisticated in their use of display rule knowledge to conceal their deception.

The relation between children’s lie-telling and their social and cognitive development was examined. Children (3-8 years) were told not to peek at a toy. Most children peeked and later lied about peeking. Children’s subsequent verbal statements were not always consistent with their initial denial and leaked critical information revealing their deceit. Children’s conceptual moral understanding of lies, executive functioning, and theory-of-mind understanding were also assessed. Children’s initial false denials were related to their first-order belief understanding and their inhibitory control. Children’s ability to maintain their lies was related to their second-order belief understanding. Children’s lying was related to their moral evaluations. These findings suggest that social and cognitive factors may play an important role in children’s lie-telling abilities.

Technotreachery – lying via CMC

It’s a popular topic and the literature is growing all the time. Here’s some of the new research published in 2008 about lying in computer-mediated communication:

This study aimed to elaborate the relationships between sensation-seeking, Internet dependency, and online interpersonal deception. Of the 707 individuals recruited to this study, 675 successfully completed the survey. The results showed high sensation-seekers and high Internet dependents were more likely to engage in online interpersonal deception than were their counterparts.

Deception research has been primarily studied from a Western perspective, so very little is known regarding how other cultures view deception… this study proposes a framework for understanding the role Korean and American culture plays in deceptive behavior for both face-to-face (FTF) and computer-mediated communication (CMC). … Korean respondents exhibited greater collectivist values, lower levels of power distance, and higher levels of masculine values than Americans. Furthermore, deceptive behavior was greater for FTF communication than for CMC for both Korean and American respondents. In addition to a significant relationship between culture and deception, differences were found between espoused cultural values and deceptive behavior, regardless of national culture. These results indicate the need for future research to consider cultural differences when examining deceptive behavior.

This study set out to investigate the type of media in which individuals are more likely to tell self-serving and other-oriented lies, and whether this varied according to the target of the lie. One hundred and fifty participants rated on a Likert-point scale how likely they would be to tell a lie. Participants were more likely to tell self-serving lies to people not well-known to them. They were more likely to tell self-serving lies in email, followed by phone, and finally face-to-face. Participants were more likely to tell other-oriented lies to individuals they felt close to, and this did not vary according to the type of media. Participants were more likely to tell harsh truths to people not well-known to them via email.

Detecting deception

OK, I know this probably could have gone into an earlier post. However, it does involve a bit of machinery, so it didn’t fit in part 1, but the machinery has been in use for several decades, so it couldn’t really fit in part 2 either.

An increasing number of researchers are exploring variations of the Concealed Knowledge Test (CKT) as alternatives to traditional ‘lie-detector’ tests. For example, the response times (RT)-based CKT has been previously shown to accurately detect participants who possess privileged knowledge. Although several studies have reported successful RT-based tests, they have focused on verbal stimuli despite the prevalence of photographic evidence in forensic investigations. Related studies comparing pictures and phrases have yielded inconsistent results. The present work compared an RT-CKT using verbal phrases as stimuli to one using pictures of faces. This led to equally accurate and efficient tests using either stimulus type. Results also suggest that previous inconsistent findings may be attributable to study procedures that led to better memory for verbal than visual items. When memory for verbal phrases and pictures were equated, we found nearly identical detection accuracies.
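
If you haven’t come across response time-based concealed knowledge tests before, here’s a minimal sketch of the underlying logic in Python – my own toy illustration with made-up numbers, not the procedure or stimuli used in the study quoted above:

```python
# A toy illustration of the logic behind an RT-based CKT (all numbers invented):
# someone who recognises the crime-relevant 'probe' item tends to respond more
# slowly to it than to matched irrelevant items.
import statistics

probe_rts = [612, 598, 640, 655, 603]          # ms: responses to the probe item
irrelevant_rts = [540, 525, 560, 548, 533,     # ms: responses to irrelevant items
                  551, 529, 544, 562, 538]

probe_mean = statistics.mean(probe_rts)
irrelevant_mean = statistics.mean(irrelevant_rts)
irrelevant_sd = statistics.stdev(irrelevant_rts)

# Standardised probe-irrelevant difference: large positive values suggest that
# the probe item is being recognised, i.e. possible concealed knowledge.
z_score = (probe_mean - irrelevant_mean) / irrelevant_sd

print(f"Probe mean: {probe_mean:.0f} ms, irrelevant mean: {irrelevant_mean:.0f} ms")
print(f"Standardised difference: {z_score:.2f}")
```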

Deception book of the year

And finally, an important publication in 2008 was the second edition of Aldert Vrij’s Detecting Lies and Deceit: Pitfalls and Opportunities. The first edition (published in 2000) has been one of my key references for scholarly research on deception, along with Paul Ekman’s Telling Lies: Clues to Deceit in the Marketplace, Politics and Marriage and Granhag and Strömwall’s edited volume The Detection of Deception in Forensic Contexts. Not surprising, then, that Vrij’s second edition is already one of the most frequently consulted volumes on my deception bookshelf.

Vrij says that he did not originally envisage updating his 2000 book until at least 2010, but felt that, with the increasing amount of new research in this area and the growing interest from law enforcement and security agencies in detecting deception, he could not wait that long. The result is a volume that is substantially updated with research published up to about the middle of 2007. The book has been completely rewritten, and there are several new chapters covering recent developments in mechanical methods of deception detection, including brain scanning technologies (e.g., fMRI, P300 brain waves), thermal imaging and voice stress analysis. Vrij also adds a helpful chapter on how professionals can become better lie detectors.

It’s not perfect – I’d welcome more detail on understanding the reasons why people lie (the book is mostly about catching liars), more on creating a context in which someone is more likely to tell the truth, and more discussion of cross-cultural differences in deception (though, to be fair, there is shockingly little research in this area to discuss). But despite these criticisms, Vrij’s new book remains a ‘must have’ reference for academics and professionals interested in up-to-date research on deception detection. Practitioners in particular should heed Vrij’s warning about over-hyped techniques for ‘deception detection’: as Vrij says, the best way to avoid falling for the hype is to keep up to date with the independent, objective research on deception detection. This book is a great tool for giving yourself a grounding in that research.

Phew. Six months’ blogging in 6 days. Hope you enjoyed it!

Research round-up 4: When people lie

On to part 4 of this series on research published in 2008 that I didn’t get a chance to blog about when it came out. This time we take a peek at some of the new research on the circumstances in which people lie and on what makes people seem credible.

Part 1: Catching liars
Part 2: New technologies
Part 3: Magic

First, lying in an extreme situation: Harpster and her colleagues reported results of a study that suggests that detailed linguistic analysis of calls made to the emergency services can help determine whether the caller might have committed the homicide they are reporting:

This study examined verbal indicators to critically analyze 911 homicide statements for predictive value in determining the caller’s innocence or guilt regarding the offense. One hundred audio recordings and transcripts of 911 homicide telephone calls obtained from police and sheriffs departments throughout the United States provided the database for the study. Using qualitative approaches for formulating the linguistic attributes of these communications and appropriate quantitative analyses of the resulting variables, the likelihood of guilt or innocence of the 911 callers in these adjudicated cases was examined. The results suggest that the presence or absence of as many as 18 of the variables are associated with the likelihood of the caller’s guilt or innocence regarding the offense of homicide. These results are suggestive of up to six distinct linguistic dimensions that may be useful for examination of all homicide calls to support effective investigations of these cases by law enforcement.

Staying in the forensic realm, Tess Neal and Stanley Brodsky wondered how expert witnesses can enhance their credibility. They reported results indicating that eye contact with the lawyer cross-questioning them and with mock jurors enhances the credibility of male experts, though it does not seem to have an impact on female experts’ credibility:

The effect of eye contact on credibility was examined via a 3 (low, medium, high eye contact) x 2 (male, female) between-groups design with 232 undergraduate participants. A trial transcript excerpt about a defendant’s recidivism likelihood was utilized as the experts’ script. A main effect was found: Experts with high eye contact had higher credibility ratings than in the medium and low conditions. Although a confound precluded comparisons between the genders, results indicated that males with high eye contact were more credible than males with medium or low eye contact. The female experts’ credibility was not significantly different regardless of eye contact. Eye contact may be especially important for males: Male experts should maintain eye contact for maximum credibility.

If you’re a rape victim, however, police investigators believe you’re more credible when you cry or show despair whilst giving your evidence:

Credibility judgments by police investigators were examined. Sixty-nine investigators viewed one of three video-recorded versions of a rape victim’s statement where the role was played by a professional actress. The statements were given in a free recall manner with identical wording, but differing in the emotions displayed, termed congruent, neutral and incongruent emotional expressions. Results showed that emotions displayed by the rape victim affected police officers’ judgments of credibility. The victim was judged as most credible when crying and showing despair, and less credible when being neutral or expressing more positive emotions. This result indicates stereotypic beliefs about rape victim behavior among police officers, similar to those found for lay persons. Results are discussed in terms of professional expertise.

From the police detecting lies to the police telling them: Geoffrey Alpert and Jeffrey Noble published a discussion piece in Police Quarterly in which they consider the circumstances, nature and impact of conscious, unconscious, ‘acceptable’ and unacceptable lying by police officers:

Police officers often tell lies; they act in ways that are deceptive, they manipulate people and situations, they coerce citizens, and are dishonest. They are taught, encouraged, and often rewarded for their deceptive practices. Officers often lie to suspects about witnesses and evidence, and they are deceitful when attempting to learn about criminal activity. Most of these actions are sanctioned, legal, and expected. Although they are allowed to be dishonest in certain circumstances, they are also required to be trustworthy, honest, and maintain the highest level of integrity. The purpose of this article is to explore situations when officers can be dishonest, some reasons that help us understand the dishonesty, and circumstances where lies may lead to unintended consequences such as false confessions. The authors conclude with a discussion of how police agencies can manage the lies that officers tell and the consequences for the officers, organizations, and the criminal justice system.

In everyday life, when do people think it’s ok to lie? Beverly McLeod and Randy Genereux’s results suggest that your personality traits influence what sorts of lying you find acceptable, and when:

The present study investigated the role of individual differences in the perceived acceptability and likelihood of different types of lies. Two-hundred and eighty seven college students completed scales assessing six personality variables (honesty, kindness, assertiveness, approval motivation, self-monitoring, and Machiavellianism) and rated 16 scenarios involving lies told for four different motives (altruistic, conflict avoidance, social acceptance, and self-gain lies). Our central hypothesis that the perceived acceptability and likelihood of lying would be predicted by interactions between personality characteristics of the rater and the type of lie being considered was supported. For each type of lie, a unique set of personality variables significantly predicted lying acceptability and likelihood.

What is the impact of lying? Robert Lount and his colleagues warned that it’s difficult to recover from an early breach of trust in a relationship:

Few interpersonal relationships endure without one party violating the other’s expectations. Thus, the ability to build trust and to restore cooperation after a breach can be critical for the preservation of positive relationships. Using an iterated prisoner’s dilemma, this article presents two experiments that investigated the effects of the timing of a trust breach—at the start of an interaction, after 5 trials, after 10 trials, or not at all. The findings indicate that getting off on the wrong foot has devastating long-term consequences. Although later breaches seemed to limit cooperation for only a short time, they still planted a seed of distrust that surfaced in the end.

And finally, a couple outside the psychology/criminology literature that may be of interest:

Next round-up (part 5): research on the psychophysiology of lying.

Research round-up 1: Catching liars

I know I’ve really neglected this blog over the past few months (pressure of work and a doctorate to finish). Over the next few posts I’ll share with you all the articles and stories I hoped I’d have time to comment on this year but just didn’t. I’d like to promise to be better in 2009, but for the first half at least I’m going to struggle. Hang in there, eventually I’ll get back to being a better blogger!

The second post in this series deals with research and commentary on new technologies, like fMRI, for lie detection. But first I round up some recent research on deception detection methods which don’t require a $1 million giant magnet or wiring your subject up to a polygraph or brain scanner.

Who can catch a liar?

Let’s start with a bit of back and forth that began with an article by Charles Bond and Bella DePaulo which appeared in Psychological Bulletin this year. Bond and DePaulo’s analysis suggested that individual differences in lie-detection ability are vanishingly small. Accuracy in lie detection, they argue, has more to do with how good the liar is at lying than with how good individuals are at detecting deceit.

The authors report a meta-analysis of individual differences in detecting deception… Although researchers have suggested that people differ in the ability to detect lies, psychometric analyses of 247 samples reveal that these ability differences are minute. In terms of the percentage of lies detected, measurement-corrected standard deviations in judge ability are less than 1%. In accuracy, judges range no more widely than would be expected by chance, and the best judges are no more accurate than a stochastic mechanism would produce. When judging deception, people differ less in ability than in the inclination to regard others’ statements as truthful. People also differ from one another as lie- and truth-tellers. They vary in the detectability of their lies. Moreover, some people are more credible than others whether lying or truth-telling. Results reveal that the outcome of a deception judgment depends more on the liar’s credibility than any other individual difference.
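
To get a feel for what “range no more widely than would be expected by chance” means, here’s a minimal simulation sketch (my illustration, not Bond and DePaulo’s method): give every judge exactly the same underlying 54% accuracy, and binomial noise alone still produces what looks like a spread of ‘good’ and ‘bad’ lie detectors.

```python
# Simulate many judges who all share the same true accuracy. Any spread in
# their observed scores is pure binomial noise, yet it can look substantial.
import random

random.seed(1)
TRUE_ACCURACY = 0.54        # identical underlying skill for every judge
JUDGMENTS_PER_JUDGE = 40
N_JUDGES = 1000

observed = [
    sum(random.random() < TRUE_ACCURACY for _ in range(JUDGMENTS_PER_JUDGE))
    / JUDGMENTS_PER_JUDGE
    for _ in range(N_JUDGES)
]

# Typically the 'best' judges land somewhere around 75-80% and the 'worst'
# around 30-35%, even though every judge has exactly the same ability.
print(f"Best judge: {max(observed):.0%}, worst judge: {min(observed):.0%}")
```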

This is a direct challenge to, in particular, the work of psychologists Maureen O’Sullivan and Paul Ekman, who have been investigating people they claim are extraordinarily accurate at deception detection – people they have dubbed ‘wizards’ of deception detection. So here comes O’Sullivan, right back at Bond and Depaulo:

…[Bond and Depaulo's] conclusions are based principally on studies with college students as lie detectors and lie scenarios of dubious ecological validity. When motivated professional groups have been shown either high stakes lie scenarios or scenarios involving appropriate liars and truth-tellers, average accuracies significantly above chance have been found for 7 different professional groups reported by 12 researchers in 3 countries. The replicated and predicted performance of extremely accurate individual lie detectors (“truth wizards”) also undermines the claim of no individual differences in lie detection accuracy…

Therese Pigott and Meng-Jia Wu also weigh in, highlighting some methodological problems with Bond and DePaulo’s novel meta-analytic technique:

…[Bond and Depaulo] have presented a creative solution to the problem of estimating the standard deviation of deception judgments in the literature. Their article raises methodological questions about how to synthesize a measure of variation across studies. Although the standard deviation presents a number of problems as an effect size measure, more methodological research is needed to address directly the question raised by Bond and DePaulo (p.500).

Of course Bond and DePaulo get a right to reply. They repeat their analyses using the suggestions made by Pigott and Wu, and come to the same conclusion. They also take on O’Sullivan’s criticism and analyse data on experience to see what differences exist between college students and judges with ‘professional experience’. They conclude: “In moderator analyses, we looked separately at inexperienced and experienced judges. The individual differences in lie-detection accuracy were actually smaller among experienced judges than inexperienced ones” (p. 503).

Some empirical research that appears to support Bond and DePaulo’s claim was published online earlier this year, in advance of print in Law and Human Behavior:

We examined whether individuals’ ability to detect deception remained stable over time. In two sessions, held one week apart, university students viewed video clips of individuals and attempted to differentiate between the lie-tellers and truth-tellers. Overall, participants had difficulty detecting all types of deception. When viewing children answering yes-no questions about a transgression (Experiments 1 and 5), participants’ performance was highly reliable. However, rating adults who provided truthful or fabricated accounts did not produce a significant alternate forms correlation (Experiment 2). This lack of reliability was not due to the types of deceivers (i.e., children versus adults) or interviews (i.e., closed-ended questions versus extended accounts) (Experiment 3). Finally, the type of deceptive scenario (naturalistic vs. experimentally-manipulated) could not account for differences in reliability (Experiment 4). Theoretical and legal implications are discussed.

But just in case you think it’s all over for the ‘wizards’, here’s a study from Gary Bond which suggests they can’t be dismissed quite yet. Out of more than 200 participants (law enforcement officers and college students), G. Bond identified eleven (all law enforcement officers) who could detect deception with greater than 80% accuracy, and from these, two potential ‘wizards’ who could maintain this accuracy rate over time. All of which indicates that there may indeed be ‘wizards’, but that they are (as O’Sullivan has always recognised) very rare.

…Two experiments sought to (a) identify expert(s) in detection and assess them twice with four tests, and (b) study their detection behavior using eye tracking. Paroled felons produced videotaped statements that were presented to students and law enforcement personnel. Two experts were identified, both female Native American BIA correctional officers. Experts were over 80% accurate in the first assessment, and scored at 90% accuracy in the second assessment. In Signal Detection analyses, experts showed high discrimination, and did not evidence biased responding. They exploited nonverbal cues to make fast, accurate decisions. These highly-accurate individuals can be characterized as experts in deception detection.
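
For anyone unfamiliar with the Signal Detection terms in that abstract, here’s a minimal sketch of how discrimination (d′) and response bias (the criterion, c) are computed from a judge’s hit and false-alarm rates. The rates below are invented for illustration – they are not data from the study.

```python
# Signal detection summary of a lie-catcher: hits = lies correctly called lies,
# false alarms = truths incorrectly called lies. Illustrative numbers only.
from statistics import NormalDist

def sdt_measures(hit_rate: float, false_alarm_rate: float):
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)              # discrimination
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))   # bias; ~0 means unbiased
    return d_prime, criterion

# e.g. a judge who catches 90% of lies but wrongly accuses 15% of truth-tellers
d, c = sdt_measures(0.90, 0.15)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")
```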

How do you catch a liar?

What about the rest of us who aren’t ‘wizards’? Another discussion piece, which appeared earlier this year in a special issue of Criminal Justice and Behavior focusing on scientific and pseudoscientific practices in law enforcement, was by Aldert Vrij, who issued a challenge to law enforcement professionals to focus on verbal rather than non-verbal cues to lie detection:

…deception research has revealed that many verbal cues are more diagnostic cues to deceit than nonverbal cues. Paying attention to nonverbal cues results in being less accurate in truth/lie discrimination, particularly when only visual nonverbal cues are taken into account. Also, paying attention to visual nonverbal cues leads to a stronger lie bias (i.e., indicating that someone is lying). The author recommends a change in police practice and argues that for lie detection purposes it may be better to listen carefully to what suspects say.

Signs of lying

Continuing with their programme of research on the ‘cognitive load’ hypothesis, Vrij, Sharon Leal and their colleagues published some psychophysiological evidence that lying does indeed increase mental effort, and that this increased effort can be detected by studying blink rates and skin conductance:

Previous research has shown that suspects in real-life interviews do not display stereotypical signs of nervous behaviours, even though they may be experiencing high detection anxiety. We hypothesised that these suspects may have experienced cognitive load when lying and that this cognitive load reduced their tonic arousal, which suppressed signs of nervousness. We conducted two experiments to test this hypothesis. Tonic electrodermal arousal and blink rate were examined during task-induced (Experiment 1) and deception-induced cognitive load (Experiment 2). Both increased cognitive difficulty and deception resulted in decreased tonic arousal and blinking. This demonstrated for the first time that when lying results in heightened levels of cognitive load, signs of nervousness are decreased. We discuss implications for detecting deception and more wide-ranging phenomena related to emotional behaviour.

And finally in this round-up, an article published in Psychology and Aging this year which tested older adults’ ability to detect deceit, and whether any impairments were related to a lesser ability to recognise facial emotion expressed by the lie-teller.

Facial expressions of emotion are key cues to deceit (M. G. Frank & P. Ekman, 1997). Given that the literature on aging has shown an age-related decline in decoding emotions, we investigated (a) whether there are age differences in deceit detection and (b) if so, whether they are related to impairments in emotion recognition. Young and older adults (N = 364) were presented with 20 interviews (crime and opinion topics) and asked to decide whether each interview subject was lying or telling the truth. There were 3 presentation conditions: visual, audio, or audiovisual. In older adults, reduced emotion recognition was related to poor deceit detection in the visual condition for crime interviews only.

Next round-up will cover recent research on new technologies for lie detection!

New research: Outsmarting the Liars: The Benefit of Asking Unanticipated Questions

In press in the journal Law and Human Behavior, Aldert Vrij and colleagues test a method of questioning that (in lab situations) exposes liars with up to an 80% success rate. Here’s the abstract:

We hypothesised that the responses of pairs of liars would correspond less with each other than would responses of pairs of truth tellers, but only when the responses are given to unanticipated questions. Liars and truth tellers were interviewed individually about having had lunch together in a restaurant. The interviewer asked typical opening questions which we expected the liars to anticipate, followed by questions about spatial and/or temporal information which we expected suspects not to anticipate, and also a request to draw the layout of the restaurant. The results supported the hypothesis, and based on correspondence in responses to the unanticipated questions, up to 80% of liars and truth tellers could be correctly classified, particularly when assessing drawings.

Reference:

At the moment you can download the article for free here (pdf).

Lie acceptability

When do people think it might be ok to lie? Susanna Robinson Ning and Angela M. Crossman from John Jay College of Criminal Justice in New York have just published the results of an interesting study of lie acceptability.

The authors start off with a good summary of the literature on lie acceptability, and age, gender, cultural and religious differences in attitudes to different types of lie. Different types of lie may be more or less acceptable, depending on the motivation for telling them and the context in which they are told. Broadly, lies which are told for personal gain or to harm others – so-called ‘antisocial lies’ – are generally considered less acceptable than those told to help another or for politeness – ‘prosocial lies’.

Ning and Crossman set out to explore how perceptions of lie acceptability vary across situations and by different cultural or subcultural groups at a detailed level. They argue that:

These issues are important, as one’s perceptions of the acceptability of lies may relate to the frequency with which one lies and to the facility with which one lies (e.g., whether or not one provides obvious cues that give away deceptive attempts out of discomfort with the act of lying). (p.2131)

The authors also note that understanding how liars and deceived people feel about the acceptability of particular lies may also be important once a liar has been found out: “perceptions of lie acceptability predict reactions to the discovery that one has been deceived (DePaulo et al., 2004), which could be relevant to issues such as relationship stability in the wake of such a discovery” (p.2131).

In this study, individuals from various religious denominations, including 44% from the Church of Jesus Christ of Latter-Day Saints (LDS), 28% claiming to be atheist or agnostic, and 20% claiming to be non-LDS Christian, rated the acceptability of lies in 12 different scenarios in which the type of lie and the context were varied. There’s a lot in the results, but briefly:

  • Unsurprisingly, prosocial lies were considered by all to be more acceptable than anti-social lies.
  • “Lies told to strangers were generally considered more acceptable than were lies told to spouses” (p.2147)
  • “Women in this study tended to rate both self- and other-oriented lies as more acceptable than did men, particularly for lies told to strangers” (p.2148)
  • Lie acceptability decreased overall with age and “age was negatively associated with [acceptability of] lies told to avoid conflict in a spousal relationship, but was not related to perceptions of such lies told to a stranger” (p.2149)
  • “As predicted, LDS participants consistently rated lies as significantly less acceptable overall than did non-LDS participants, regardless of lie motivation, relationship category, or participant sex”. (p. 2149)

There are some important caveats, notably that the sample was biased towards females (77%) and was relatively young (mean age approximately 27 years). Furthermore, the manner of recruiting the LDS sample, via a church listserv (the others were recruited via a secular listserv), may have led to an increase in socially desirable responding. As the authors acknowledge, this study does little more than highlight some interesting areas for more detailed research.

Reference:

Abstract below the fold

Photo credit: simpologist, Creative Commons License


Microexpressions and deception detection

A Newsweek article (16 Aug) on TSA behaviour detection officers in airports and their training in spotting microexpressions stirred up some blog commentary. Reporter Patti Davis commented:

In the study of “micro-expressions”—yes, it is actually a field of study and there are some who are arrogant enough to call it a science—it has been decided that when people wish to conceal emotions, the truth of their feelings is revealed in facial flashes. These experts have determined that fear and disgust are the key things to look for because they can hint of deception…. Let’s see, fear and disgust in an airport? I’m frightened and disgusted weeks before I have to show up at an airport.

Eyes for Lies rightly takes Davis to task for contradicting herself: It’s not about spotting people who have emotional expressions that are consistent with experience at an airport. If you have a genuine fear of flying or disgust at the state of the washrooms, for instance, in most cases you won’t show microexpressions because in most cases you won’t be actively trying to conceal these emotions. Instead, EfL says, “Someone who sees microexpressions will be looking for the guy who is showing inconsistencies in emotions and behavior. For example, he will look for a guy who is acting jovial, yet strangely preoccupied and flashes an expression of disgust or fear across his face simultaneously.”

However, as Mind Hacks points out, published research that supports the notion that the ability to spot microexpressions is associated with the ability to detect deception is limited.

Dr X links to the piece and reminds us of Paul Ekman’s notion of ‘duper’s delight’ – the sheer pleasure that some people get out of fooling others.

Want to see some microexpressions for yourself? See Paul Ekman discussing microexpressions, together with a classic example, on YouTube. And if you’d like to test your ability to spot them, try your luck here.

Photo credit: jacampos, Creative Commons License

Verbal and Nonverbal Behaviour as a Basis for Credibility Attribution

The Journal of Experimental Social Psychology has given subscribers a sneak preview of an article on lie detection from Marc-Andre Reinhard and Siegfried Sporer that has just been accepted for publication. The researchers were interested in the impact on credibility assessment when people were not highly motivated to detect deception, and/or were distracted with other tasks.

Reinhard and Sporer showed that when participants were not highly motivated or were distracted they tended to rely on non-verbal cues to decide whether someone’s story was credible. When participants were motivated and could concentrate on the task, they used both non-verbal and verbal cues to judge the plausibility and credibility of the story.

The theory that we process information superficially in conditions of low motivation or high cognitive load, and more carefully when we are motivated and have more cognitive capacity, is the basis of several so-called dual-process models. We know from research on dual-process models that when we have a lot of other things to think about, when we’re rushed, or when we don’t really care about the task, we tend to base our judgments on superficial cues. Under such circumstances we tend not to bother with thinking too hard about what we are doing. Advertisers and salespeople know very well that in low-motivation conditions we won’t think too much about the ‘buy this’ message if it comes from someone who is good-looking or authoritative, and that when we see the words ‘last chance to buy’ we’ll use this as an automatic cue that the product is valuable. (Robert Cialdini has many other examples of how easily we are swayed in his classic text Influence.) But this is the first time that dual-process theories have been applied to lie detection. As Reinhard and Sporer note:

our studies have shown that dual-process theories can help us to understand the process of lie detection. These theories also allow predictions about the conditions under which people are worse or better lie detectors.

The researchers did not check whether or not these participants were actually more accurate at detecting deception; they just explored what strategies were used to assess credibility. (I guess that’s a study for another day.) But we do know that there is no single “Pinocchio’s nose”-type cue for detecting lies. You stand a better chance of catching a liar if you use multiple sources of information and use them to test alternative explanations for what you are hearing. The results of this study suggest that people who are distracted or have low motivation will tend to rely on the “Pinocchio” approach. So people who are trying to assess credibility (for instance, law enforcement officers interviewing suspects or eyewitnesses, or jurors assessing the quality of witness testimony) need to concentrate on the task at hand to give themselves the best chance of using all the relevant cues to deception.
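
To make the ‘multiple cues’ point concrete, here’s a toy sketch – entirely my own illustration with invented cue weights, not Reinhard and Sporer’s model – of how pooling several weakly diagnostic cues beats leaning on a single ‘Pinocchio’s nose’ signal.

```python
# Pool several cues, each carrying a little evidential weight, into a single
# judgement. Weights are invented log-likelihood ratios for illustration only.
import math

CUE_WEIGHTS = {
    "story_contradicts_evidence": 1.2,
    "few_verifiable_details": 0.5,
    "inconsistent_across_retellings": 0.8,
    "gaze_aversion": 0.0,   # carries essentially no diagnostic weight on its own
}

def lie_probability(observed_cues, prior_lie=0.5):
    """Combine the observed cues into a posterior probability of lying."""
    log_odds = math.log(prior_lie / (1 - prior_lie))
    log_odds += sum(CUE_WEIGHTS[cue] for cue in observed_cues)
    return 1 / (1 + math.exp(-log_odds))

print(lie_probability(["gaze_aversion"]))                      # ~0.50: tells you nothing
print(lie_probability(["story_contradicts_evidence",
                       "inconsistent_across_retellings"]))     # ~0.88: evidence adds up
```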

And if you’re a kid trying to fib to your parents, wait until they are busy with other tasks before you tell them your lie.

Reference:

Abstract under the fold.

Photo credit: Anne in Beziers, Creative Commons License


The uncomfortable truth about liars

The UK Guardian newspaper runs a regular column entitled “Improbable research”, and the most recent (12 June) was about deception research. Marc Abrahams highlights research published last year (he says ‘recently published’, I say ‘is Jan 2006 still recent??’) in the Journal of Cross-Cultural Psychology, from a group of researchers calling themselves the Global Deception Research Team:

The team is big. It has 91 members, spread all around the world. Their stated goal: “studying stereotypes about liars”. They ask someone, “How can you tell when people are lying?”, then follow this up with 10 simple questions about liars.

Abrahams lists the questions, and gives details of the countries in which these questions were asked. The results?

Here is their pithy distillation: “[There are] common stereotypes about the liar, and these should not be ignored. Liars shift their posture, they touch and scratch themselves, liars are nervous, and their speech is flawed. These beliefs are common across the globe. Yet in prevalence, these stereotypes are dwarfed by the most common belief about liars: ‘they can’t look you in the eye’.”

That is their great discovery. And it accords with previous discoveries by other researchers.

It’s not really a very good article, which is a shame, because the original research is pretty interesting. What Abrahams doesn’t explain is that most research on deception indicates that gaze aversion and fidgeting are not reliable signs of lying, though there is some evidence that linguistic cues can be useful in discriminating lies from truths. The authors of the ‘world of lies’ study explore some possible explanations for the near-universal belief that liars can’t look you in the eye, and I think their hypothesis is an intriguing one:

If stereotypes about lying do not reflect observations of deceptive behavior, how do they arise? Let us propose an answer to this question. Stereotypes about lying are designed to discourage lies. They are not intended to be descriptive; rather, they embody a worldwide norm. Children should be ashamed when they lie to their parents, and liars should feel bad. Lying should not pay, and liars should be caught. Stereotypes of the liar capture and promote these prescriptions. As vehicles for social control, these stereotypes are transmitted from one generation to the next. Worldwide, socialization agents face a common challenge. They cannot always be present and must control misbehavior that occurs in their absence. If the ultimate goal of socialization is to inculcate a wide set of norms, children must first learn to report their misdeeds. Thus, caregivers have an incentive to pass along the usual lore: that lying will make the child feel bad, that the child’s lies will be transparent, and that deceit will be more severely punished than any acknowledged transgression. The hope is that lying will be deterred or (at least) that the caregiver’s prophesies of shame will be self-fulfilling. By vilifying deception, stereotypes of the liar are designed to extend the reach of societal norms to actions that go unwitnessed (pp. 69-70).

You should read the original article rather than rely on Abrahams’ effort – the reference is below, and the link takes you to a free pdf download of the full article.

There’s a strange and sad post-script (or should that be ‘prescript’, since the events happened before the Guardian article was published) to this story: Charles F Bond, the lead author and one of very few researchers who have taken the trouble to conduct research on deception in non-Western populations, has been in the news recently after being arrested for allegedly threatening colleagues.

Reference:

  • Global Deception Research Team (2006). A World of Lies [pdf]. Journal of Cross-Cultural Psychology, 37(1), 60-74.

Photo credit: Allison_mc, Creative Commons License

New interview technique could help police spot deception

… according to a press release from the Economic and Social Research Council (7 June):

Shifting uncomfortably in your seat? Stumbling over your words? Can’t hold your questioner’s gaze? Police interviewing strategies place great emphasis on such visual and speech-related cues, although new research funded by the Economic and Social Research Council and undertaken by academics at the University of Portsmouth casts doubt on their effectiveness. However, the discovery that placing additional mental stress on interviewees could help police identify deception has attracted interest from investigators in the UK and abroad.

[...] A series of experiments involving over 250 student ‘interviewees’ and 290 police officers, the study saw interviewees either lie or tell the truth about staged events. Police officers were then asked to tell the liars from the truth tellers using the recommended strategies. Those paying attention to visual cues proved significantly worse at distinguishing liars from those telling the truth than those looking for speech-related cues.

[...] However, the picture changed when researchers raised the ‘cognitive load’ on interviewees by asking them to tell their stories in reverse order. Professor Aldert Vrij explained: “Lying takes a lot of mental effort in some situations, and we wanted to test the idea that introducing an extra demand would induce additional cues in liars. Analysis showed significantly more non-verbal cues occurring in the stories told in this way and, tellingly, police officers shown the interviews were better able to discriminate between truthful and false accounts.”

Asking an interviewee to tell their story in reverse order is not a new interview technique – it’s one of the techniques used in the Cognitive Interview, more usually deployed to get maximum detail in statements from victims and witnesses.

There are also detailed articles in the UK Times and Daily Telegraph newspapers based on (and building on) this press release.

More details, and links to downloadable reports, are available on the ESRC website via this link.

References:

Photo credit: Bingo_little, Creative Commons License

Deception links from around the web

Some quick deception-related links from around the blogosphere:

PsyBlog presents the “Top 3 Myths, Top 5 Proven Factors” on lie detection (12 May).

Wired (10 May) picks up on the UK government trial of voice stress analysis for alleged benefit cheats.

The Psychjourney Podcast for 27 April is on Malingering and PTSD (mp3).

If podcasts are your thing, you can also listen to an interview with Ken Alder, author of a new book on the polygraph, on the Bat Segundo show (mp3). As the Anti-Polygraph Blog points out, you have to sit through a little silliness first…

Photo credit: mklingo, Creative Commons License

The truth about lying and laughing

From media darling, psychologist Prof Richard Wiseman, writing in this weekend’s Guardian Magazine (21 April):

[...] A few years ago I carried out a national survey into lying, focusing on adults. Only 8% of respondents claimed never to have lied. Other work has invited people to keep a detailed diary of every conversation that they have, and of all of the lies that they tell, over a two-week period. The results suggest that most people tell about two important lies each day, that a third of conversations involve some form of deception, that four in five lies remain undetected, that more than 80% of people have lied to secure a job, and that more than 60% of the population have cheated on their partners at least once.

[...Can we catch liars?] Psychologists have been exploring this question for 30 years. The research has studied the lying behaviour of salespeople, shoppers, students, drug addicts and criminals. Some of my own work in this area has involved showing people video tapes of instances in which people have made high-profile public appeals for information about a murder, only later to confess and be convicted of the crime themselves.

The results have been remarkably consistent – when it comes to lie detection, the public might as well simply toss a coin. It doesn’t matter if you are male or female, young or old; very few people are able reliably to detect deception.

Photo credit: jasoneppink, Creative Commons License

Children’s prepared and unprepared lies: can adults see through their strategies?

Just to show how bad people are at detecting lies, even 11- to 13-year-old kids can easily pull the wool over our eyes! Leif Strömwall and colleagues in Sweden found that adults could do no better than 46% accuracy when children had a chance to prepare their lies. Even when the lies were not prepared, adults only got 57% correct:

We investigated adults’ ability to detect children’s prepared and unprepared lies and truths. Furthermore, we examined children’s strategies when lying. Thirty children (11-13 years) were interviewed about one self-experienced and one invented event each. Half had prepared their statements, the other half not. Sixty adult observers assessed the veracity of 10 videotaped statements each. Overall deception detection accuracy (51.5%) was not better than chance. The adults showed higher accuracy for unprepared statements (56.6%), than prepared statements (46.1%). The adults reported to have used more verbal than nonverbal cues to deception, especially the Detail criterion. The most frequent verbal strategy reported by the children was to use real-life components (e.g. own or others’ experiences); the most frequent nonverbal strategy was to stay calm. Arguably, the low accuracy is due to adults’ failure to see through the lying children’s strategies. Copyright © 2006 John Wiley & Sons, Ltd.

Reference:

How people cope with uncertainty due to chance or deception

In making social judgments people process effects caused by humans differently from effects caused by non-human agencies. We assume that when they have to predict outcomes that are attributed to non-human causes, people acknowledge their ignorance and try to focus on what is most diagnostic. However, when events are attributed to human agency, they believe that nothing is arbitrary and that one can understand the decision situation well enough to eliminate error. If so, then people should behave differently when an uncertainty is attributed to chance (a non-human agency) or to deception (a human agency). We tested this prediction using the probability-matching paradigm and found reasonable support for our analysis in four experiments. Individuals who attributed uncertainty to deception were less likely to adopt the optimal rule-based strategy than those who attributed it to chance. Indeed, only when the former were prevented from thinking about and elaborating the outcomes (the high-interference condition in Experiment 3) was their performance comparable to the level of individuals in the chance condition.

Reference:

What Influences Believing Child Sexual Abuse Disclosures?

A University of Oregon press release (13 Feb) highlights research that explores some of the influences on whether someone is sceptical of a disclosure about child sexual abuse:

A University of Oregon study has found that young men who have never been traumatized are the least likely population to believe a person’s recounting of child sexual abuse. The study – published in the March issue of the journal Psychology of Women Quarterly – also finds that males with high sexism beliefs also tend to believe that such incidents, if they happened at all, were not harmful to the victim.

Some 80,000 cases of child sexual abuse are reported annually in the United States, according to federal statistics. Jennifer Freyd, a UO professor of psychology and co-author of the new study [...has] been studying the factors that may explain why some people don’t believe that such abuses occur, a phenomenon that discourages victims from speaking out and allows perpetrators to escape unpunished and possibly repeat such crimes.

Reference and abstract:

This vignette study investigated factors that influence believing child sexual abuse disclosures. College student participants (N= 318) in a university human subject pool completed measures about their own trauma history and responded to questions about sexist attitudes. Participants then read vignettes in which an adult disclosed a history of child sexual abuse, rated disclosures for accuracy and believability, and judged the level of abusiveness. Continuous memories were believed more than recovered memories. Men believed abuse reports less than did women, and people who had not experienced trauma were less likely to believe trauma reports. Gender and personal history interacted such that trauma history did not impact women’s judgments but did impact men’s judgments. Men with a trauma history responded similarly to women with or without a trauma history. High sexism predicted lower judgments of an event being abusive. Hostile sexism was negatively correlated with believing abuse disclosures. Results are considered in light of myths about child sexual abuse.

10 Ways to Catch a Liar

WebMD’s 23 November article on catching liars is remarkable. It’s the first of its kind that I’ve come across where I think I agree with every one of the tips. No assertion that gaze aversion is a cue to deception! No suggestion that liars fidget or look nervous! No claim that NLP eye access cues can help you spot a liar! Why can’t all articles on lie detection be like this?

In brief, here are the ten tips. But do go and read the whole article, which has more detail on each tip.

1: Inconsistencies: [...] “When you want to know if someone is lying, look for inconsistencies in what they are saying,” says [JJ] Newberry, who was a federal agent for 30 years and a police officer for five. [...]

2: Ask the Unexpected: [...] “Watch them carefully,” says Newberry. “And then when they don’t expect it, ask them one question that they are not prepared to answer to trip them up.” [...]

3: Gauge Against a Baseline: [...] The trick, explains [Maureen O'Sullivan, PhD, a professor of psychology at the University of San Francisco], is to gauge their behavior against a baseline. Is a person’s behavior falling away from how they would normally act? If it is, that could mean that something is up. [...]

4: Look for Insincere Emotions: [...] “Most people can’t fake smile,” says O’Sullivan. “The timing will be wrong, it will be held too long, or it will be blended with other things. [...]

5: Pay Attention to Gut Reactions: [...] “People say, ‘Oh, it was a gut reaction or women’s intuition,’ but what I think they are picking up on are the deviations of true emotions,” O’Sullivan tells WebMD. [...]

6: Watch for Microexpressions: [...] “A microexpression is a very brief expression, usually about a 25th of a second, that is always a concealed emotion,” says [Paul] Ekman, PhD, professor emeritus of psychology at the University of California Medical School in San Francisco. [...]

7: Look for Contradictions: [...] “The general rule is anything that a person does with their voice or their gesture that doesn’t fit the words they are saying can indicate a lie,” says Ekman. [...]

Tips 1 to 7 are all sensible and well-founded. I have a couple of caveats about tips 8 and 9:

8: A Sense of Unease: [...] “When someone isn’t making eye contact and that’s against how they normally act, it can mean they’re not being honest,” says Jenn Berman, PhD, a psychologist in private practice. [...]

Emphasis added – lack of eye contact in itself is not a reliable indicator of deception, but if it is out of character then you might have to ask yourself: why the change in behaviour? All of which underlines the importance of establishing baseline behaviour (tip 3).

9: Too Much Detail: [...] Too much detail could mean they’ve put a lot of thought into how they’re going to get out of a situation and they’ve crafted a complicated lie as a solution.

Caveat: unless they’re the sort of person who always provides excessive detail in stories (some people are just like that!).

10: Don’t Ignore the Truth: “It’s more important to recognize when someone is telling the truth than telling a lie because people can look like they’re lying but be telling truth,” says Newberry.

Hat tip to Enrica Dente’s Lie Detection list for the link!

Ten Ways To Tell If Someone Is Lying To You

…according to a recent article on Forbes.com (3 Nov):

In business, politics and romance, it would be nice to know when we’re being lied to. Unfortunately humans aren’t very good at detecting lies. Our natural tendency is to trust others, and for day-to-day, low-stakes interactions, that makes sense. We save time and energy by taking statements like “I saw that movie” or “I like your haircut” at face value. But while it would be too much work to analyze every interaction for signs of deception, there are times when we really need to know if we’re getting the straight story. Maybe a crucial negotiation depends on knowing the truth, or we’ve been lied to and want to find out if it’s part of a pattern.

The article has ten accompanying slides, with suggestions for the would-be lie catcher. Among the sensible suggestions – like monitoring pauses, seeking detail and asking the person to repeat their story – other slides suggest that gaze aversion, sweating and fidgeting are all signs of deception, despite the fact that there is no scientific evidence for such behaviours being more common in liars than truth-tellers. They also suggest:

Look for dilated pupils and a rise in vocal pitch. Psychologists DePaulo and Morris found that both phenomena were more common in liars than truth-tellers.

Both pupil dilation and pitch changes are indications of changes in arousal level (stress cues), and can often be very subtle. Probably not the best cues for a lie-detector to rely on. The Forbes article concludes:

Psychologists who study deception, though, are quick to warn that there is no foolproof method. [...] It’s tough to tell the difference between a liar and an honest person who happens to be under a lot of stress.

Accuracy of Deception Judgments

Hot off the press in Personality and Social Psychology Review, Charles Bond and Bella DePaulo analyse the accuracy of deception judgements. From the abstract:

We analyze the accuracy of deception judgments, synthesizing research results from 206 documents and 24,483 judges. In relevant studies, people attempt to discriminate lies from truths in real time with no special aids or training. In these circumstances, people achieve an average of 54% correct lie-truth judgments, correctly classifying 47% of lies as deceptive and 61% of truths as nondeceptive. Relative to cross-judge differences in accuracy, mean lie-truth discrimination abilities are nontrivial, with a mean accuracy d of roughly .40. This produces an effect that is at roughly the 60th percentile in size, relative to others that have been meta-analyzed by social psychologists. Alternative indexes of lie-truth discrimination accuracy correlate highly with percentage correct, and rates of lie detection vary little from study to study. Our meta-analyses reveal that people are more accurate in judging audible than visible lies, that people appear deceptive when motivated to be believed, and that individuals regard their interaction partners as honest. We propose that people judge others’ deceptions more harshly than their own and that this double standard in evaluating deceit can explain much of the accumulated literature.
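
A quick bit of arithmetic shows how those headline figures hang together, assuming (as in most of the underlying studies) that judges see an even mix of lies and truths:

```python
# Working through the numbers reported in the abstract, assuming a 50/50 mix
# of lies and truths (the typical design in this literature).
lie_accuracy = 0.47     # lies correctly classified as deceptive
truth_accuracy = 0.61   # truths correctly classified as nondeceptive

overall_accuracy = (lie_accuracy + truth_accuracy) / 2
judged_truthful = ((1 - lie_accuracy) + truth_accuracy) / 2   # the 'truth bias'

print(f"Overall accuracy: {overall_accuracy:.0%}")            # 54%, as reported
print(f"Statements judged truthful: {judged_truthful:.0%}")   # 57%: judges lean towards 'truth'
```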

Reference:

When training to detect deception works

In the October 2006 issue of Law and Human Behavior (vol 30, issue 5), Maria Hartwig and her colleagues present research on training law enforcement officers to detect deception:

Research on deception detection in legal contexts has neglected the question of how the use of evidence can affect deception detection accuracy. In this study, police trainees (N=82) either were or were not trained in strategically using the evidence when interviewing lying or truth telling mock suspects (N=82). The trainees’ strategies as well as liars’ and truth tellers’ counter-strategies were analyzed. Trained interviewers applied different strategies than did untrained. As a consequence of this, liars interviewed by trained interviewers were more inconsistent with the evidence compared to liars interviewed by untrained interviewers. Trained interviewers created and utilized the statement-evidence consistency cue, and obtained a considerably higher deception detection accuracy rate (85.4%) than untrained interviewers (56.1%).

Check out that accuracy rate – 85%!
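
For a rough sense of how big that gap is, here’s a back-of-the-envelope comparison. I’m assuming the 82 interviews split evenly between trained and untrained interviewers – the paper reports its own analyses, so treat this purely as an illustration:

```python
# Two-proportion z-test comparing trained vs untrained accuracy, assuming
# 41 interviews per condition (an assumption; see the paper for the real analyses).
from math import sqrt
from statistics import NormalDist

hits_trained, n_trained = 35, 41       # ~85.4% correct
hits_untrained, n_untrained = 23, 41   # ~56.1% correct

p1, p2 = hits_trained / n_trained, hits_untrained / n_untrained
p_pool = (hits_trained + hits_untrained) / (n_trained + n_untrained)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_trained + 1 / n_untrained))
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(z))   # two-tailed

print(f"z = {z:.2f}, p = {p_value:.4f}")  # roughly z = 2.9, p < .01
```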

Reference:

US police officers’ knowledge regarding behaviors indicative of deception

Lori Colwell and colleagues from Sam Houston State University have published in the latest issue of Psychology, Crime and Law:

The current study surveyed a random sample of Texas law enforcement officers (n=109) about their knowledge regarding behaviors indicative of deception. The officers were not highly knowledgeable about this topic, overall performing at a chance level in assessing how various behavioral cues relate to deception. Confidence in one’s skill was unrelated to accuracy, and officers who reported receiving the most training and utilizing these skills more often were more confident but no more accurate in their knowledge of the behaviors that typically betray deception. The authors compare these results to previous studies that have examined officers’ beliefs in other countries and discuss the implication of these results in terms of developing future training programs that may debunk the common misconceptions that officers possess.

Lori H. Colwell, Holly A. Miller, Rowland S. Miller, & Phillip M. Lyons, Jr. (2006). US police officers’ knowledge regarding behaviors indicative of deception: Implications for eradicating erroneous beliefs through training. Psychology, Crime and Law, 12(5), 489-503.

See also:

  • Training law enforcement officers to detect deception, which reported on the same study, presented differently (Lori H. Colwell, Holly A. Miller, Phillip M. Lyons, Jr., & Rowland S. Miller (2006). The Training of Law Enforcement Officers in Detecting Deception: A Survey of Current Practices and Suggestions for Improving Accuracy. Police Quarterly, 9(3), 275-290)