Category Archives: Non-verbal behaviour

A few deception tweets from recent days

  • Insurance claim fraudsters “think too much”. Some great Portsmouth Uni research covered by the Irish Independent http://retwt.me/1P8R0
  • “If You Want to Catch a Liar, Make Him Draw” David DiSalvo @Neuronarrative on more great Portsmouth Uni research http://retwt.me/1P8ZB
  • fMRI scans of people with schizophrenia show they have same functional anatomical distinction between truth telling & deception as others http://bit.ly/aO5cI2 via @Forpsych
  • In press: Promising to tell the truth makes 8- to 16-year-olds more honest (but lectures on morality don’t). Beh Sciences & Law http://is.gd/fCa7X

The Impact of Lie to Me on Viewers’ Actual Ability to Detect Deception

Wonderful!

Timothy R. Levine, Kim B. Serota, & Hillary C. Shulman (in press). The Impact of Lie to Me on Viewers’ Actual Ability to Detect Deception. Communication Research, first published on June 17, 2010. doi:10.1177/0093650210362686

The new television series Lie to Me portrays a social scientist solving crimes through his ability to read nonverbal communication. Promotional materials claim the content is based on actual science. Participants (N = 108) watched an episode of Lie to Me, a different drama, or no program and then judged a series of honest and deceptive interviews. Lie to Me viewers were no better at distinguishing truths from lies but were more likely than control participants to misidentify honest interviewees as deceptive. Watching Lie to Me decreases truth bias thereby increasing suspicion of others while at the same time reducing deception detection ability.

Hat tip to Karen Franklin.

“Everyone likes to bust a liar”

If you’re based in the US and you’re interested in deception, you can’t have missed the launch of the new TV drama series “Lie to Me”, based on the research of Paul Ekman.

Professor Ekman has a long and distinguished record of research on emotions and on lying. In the last few years he has focused on applying his work to practical problems of law enforcement and national security, including developing training packages for professionals who want to become better lie detectors. Ekman is well known in the psychological and, increasingly, the security and law enforcement communities, but the TV drama looks set to make him famous. This may be good for public understanding of the myths and realities of deception research: Ekman writes a commentary on each episode, explaining the science behind the drama.

There’s a lot of coverage and comment across the web, including a profile of Ekman in the New York Times (20 Jan), commentary and links on the blogs Neuronarrative and Eyes for Lies, and a heap of news articles reviewing and commenting on the series (such as this one from the Calgary Herald).

Malcolm Gladwell’s 2002 New Yorker article, which inspired the TV producer Brian Grazer to develop the idea behind the series, can be read here.

As the NYT profile concludes:

… the combination of crime-solving and insight into how to recognize liars may prove to have potent appeal, Mr. Grazer said. “Everyone likes to bust a liar,” he said.

Research round-up 1: Catching liars

I know I’ve really neglected this blog over the past few months (pressure of work and a doctorate to finish). Over the next few posts I’ll share with you all the articles and stories I hoped I’d have time to comment on this year but just didn’t. I’d like to promise to be better in 2009, but for the first half at least I’m going to struggle. Hang in there, eventually I’ll get back to being a better blogger!

The second post in this series deals with research and commentary on new technologies, like fMRI, for lie detection. But first I round up some recent research on deception detection methods which don’t require a $1 million giant magnet or wiring your subject up to a polygraph or brain scanner.

Who can catch a liar?

Let’s start with a bit of back and forth that began with an article by Charles Bond and Bella DePaulo, which appeared in Psychological Bulletin this year. Bond and DePaulo’s analysis suggested that individual differences in lie detection ability are vanishingly small. Accuracy in lie detection, they argue, has more to do with how good the liar is at lying than with how good the judge is at detecting deceit.

The authors report a meta-analysis of individual differences in detecting deception… Although researchers have suggested that people differ in the ability to detect lies, psychometric analyses of 247 samples reveal that these ability differences are minute. In terms of the percentage of lies detected, measurement-corrected standard deviations in judge ability are less than 1%. In accuracy, judges range no more widely than would be expected by chance, and the best judges are no more accurate than a stochastic mechanism would produce. When judging deception, people differ less in ability than in the inclination to regard others’ statements as truthful. People also differ from one another as lie- and truth-tellers. They vary in the detectability of their lies. Moreover, some people are more credible than others whether lying or truth-telling. Results reveal that the outcome of a deception judgment depends more on the liar’s credibility than any other individual difference.
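To get a feel for what a “measurement-corrected” standard deviation is doing here, the back-of-the-envelope Python sketch below applies the basic logic to made-up summary numbers (my illustration, not Bond and DePaulo’s actual procedure, which is considerably more sophisticated). The spread you observe in judges’ accuracy scores mixes real differences in ability with plain binomial measurement noise, so you subtract the noise you would expect given the number of judgements each judge made.

```python
# Illustration only: hypothetical summary numbers, not data from the meta-analysis.
n_items = 50          # truth/lie judgements made by each judge
p_bar = 0.54          # average accuracy across judges
sd_observed = 0.071   # raw standard deviation of judges' accuracy scores

var_noise = p_bar * (1 - p_bar) / n_items              # binomial sampling variance per judge
var_corrected = max(sd_observed**2 - var_noise, 0.0)   # spread left after removing that noise

print(f"raw SD of accuracy:       {sd_observed:.1%}")
print(f"measurement-corrected SD: {var_corrected**0.5:.2%}")
```

With numbers in this ballpark, nearly all of the visible spread turns out to be measurement noise, which is essentially Bond and DePaulo’s point.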

This is a direct challenge to, in particular, the work of psychologists Maureen O’Sullivan and Paul Ekman, who have been investigating people they claim are extraordinarily accurate at detecting deception – people they have dubbed ‘wizards’ of deception detection. So here comes O’Sullivan, right back at Bond and DePaulo:

…[Bond and Depaulo’s] conclusions are based principally on studies with college students as lie detectors and lie scenarios of dubious ecological validity. When motivated professional groups have been shown either high stakes lie scenarios or scenarios involving appropriate liars and truth-tellers, average accuracies significantly above chance have been found for 7 different professional groups reported by 12 researchers in 3 countries. The replicated and predicted performance of extremely accurate individual lie detectors (“truth wizards”) also undermines the claim of no individual differences in lie detection accuracy…

Therese Pigott and Meng-Jia Wu also weigh in, highlighting some methodological problems with Bond and DePaulo’s novel meta-analytic technique:

…[Bond and Depaulo] have presented a creative solution to the problem of estimating the standard deviation of deception judgments in the literature. Their article raises methodological questions about how to synthesize a measure of variation across studies. Although the standard deviation presents a number of problems as an effect size measure, more methodological research is needed to address directly the question raised by Bond and DePaulo (p.500).

Of course, Bond and DePaulo get a right to reply. They repeat their analyses using the suggestions made by Pigott and Wu, and come to the same conclusion. They also take on O’Sullivan’s criticism and analyse data on experience to see what differences exist between college students and judges with ‘professional experience’. They conclude: “In moderator analyses, we looked separately at inexperienced and experienced judges. The individual differences in lie-detection accuracy were actually smaller among experienced judges than inexperienced ones” (p. 503).

Some empirical research that appears to support Bond and DePaulo’s claim was published online earlier this year, in advance of print in Law and Human Behavior:

We examined whether individuals’ ability to detect deception remained stable over time. In two sessions, held one week apart, university students viewed video clips of individuals and attempted to differentiate between the lie-tellers and truth-tellers. Overall, participants had difficulty detecting all types of deception. When viewing children answering yes-no questions about a transgression (Experiments 1 and 5), participants’ performance was highly reliable. However, rating adults who provided truthful or fabricated accounts did not produce a significant alternate forms correlation (Experiment 2). This lack of reliability was not due to the types of deceivers (i.e., children versus adults) or interviews (i.e., closed-ended questions versus extended accounts) (Experiment 3). Finally, the type of deceptive scenario (naturalistic vs. experimentally-manipulated) could not account for differences in reliability (Experiment 4). Theoretical and legal implications are discussed.
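For anyone unfamiliar with the jargon, the “alternate forms correlation” here is essentially test-retest reliability: correlate each judge’s accuracy in session one with the same judge’s accuracy in session two. A minimal sketch with made-up scores (not the study’s data):

```python
import numpy as np

# Hypothetical per-judge accuracy scores from two sessions held a week apart.
session1 = np.array([0.55, 0.60, 0.45, 0.50, 0.65, 0.40, 0.55, 0.50])
session2 = np.array([0.50, 0.45, 0.60, 0.55, 0.50, 0.55, 0.45, 0.60])

# If lie-detection ability were a stable trait, judges who did well in session 1
# should also do well in session 2, giving a clearly positive correlation.
r = np.corrcoef(session1, session2)[0, 1]
print(f"alternate forms correlation: r = {r:.2f}")
```

A correlation close to zero, as in the adult-account experiment, means that this week’s ‘good lie detectors’ are not reliably the same people as next week’s.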

But just in case you think it’s all over for the ‘wizards’, here’s a study from Gary Bond which suggests the idea can’t be dismissed quite yet. Out of more than 200 participants (law enforcement officers and college students), G. Bond identified eleven (all LEOs) who could detect deception with greater than 80% accuracy, and from these, two potential ‘wizards’ who could maintain this accuracy rate over time. All of which indicates that there may indeed be ‘wizards’, but that they are (as O’Sullivan has always recognised) very rare.

…Two experiments sought to (a) identify expert(s) in detection and assess them twice with four tests, and (b) study their detection behavior using eye tracking. Paroled felons produced videotaped statements that were presented to students and law enforcement personnel. Two experts were identified, both female Native American BIA correctional officers. Experts were over 80% accurate in the first assessment, and scored at 90% accuracy in the second assessment. In Signal Detection analyses, experts showed high discrimination, and did not evidence biased responding. They exploited nonverbal cues to make fast, accurate decisions. These highly-accurate individuals can be characterized as experts in deception detection.
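The “Signal Detection analyses” mentioned in the abstract boil down to two standard numbers: d′ (how well a judge separates lies from truths) and the criterion c (whether the judge leans towards calling things lies). Here is a minimal sketch of those textbook formulas, with hypothetical hit and false-alarm rates rather than anything from G. Bond’s data:

```python
from scipy.stats import norm

def sdt_measures(hit_rate, false_alarm_rate):
    """Textbook signal-detection indices for a lie/truth judgement task.

    A "hit" is calling a lie a lie; a "false alarm" is calling a truthful
    statement a lie.
    """
    z_h, z_fa = norm.ppf(hit_rate), norm.ppf(false_alarm_rate)
    d_prime = z_h - z_fa              # discrimination: separation of lies from truths
    criterion = -0.5 * (z_h + z_fa)   # response bias: 0 = neutral, negative = prone to say "lie"
    return d_prime, criterion

# Hypothetical rates in the ballpark of a 90%-accurate judge.
d, c = sdt_measures(hit_rate=0.90, false_alarm_rate=0.10)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")   # high d' with c near zero: accurate and unbiased
```

High discrimination combined with a near-zero criterion is exactly the profile the abstract describes for the two experts.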

How do you catch a liar?

What about the rest of us who aren’t ‘wizards’? Another discussion piece, which appeared earlier this year in a special issue of Criminal Justice and Behavior focusing on scientific and pseudoscientific practices in law enforcement, was by Aldert Vrij, who challenged law enforcement professionals to focus on verbal rather than non-verbal cues to deception:

…deception research has revealed that many verbal cues are more diagnostic cues to deceit than nonverbal cues. Paying attention to nonverbal cues results in being less accurate in truth/lie discrimination, particularly when only visual nonverbal cues are taken into account. Also, paying attention to visual nonverbal cues leads to a stronger lie bias (i.e., indicating that someone is lying). The author recommends a change in police practice and argues that for lie detection purposes it may be better to listen carefully to what suspects say.

Signs of lying

Continuing with their programme of research on the ‘cognitive load’ hypothesis, Vrij, Sharon Leal and their colleagues published some psychophysiological evidence that lying does indeed increase mental effort, and that this increased effort can be detected by studying blink rates and skin conductance:

Previous research has shown that suspects in real-life interviews do not display stereotypical signs of nervous behaviours, even though they may be experiencing high detection anxiety. We hypothesised that these suspects may have experienced cognitive load when lying and that this cognitive load reduced their tonic arousal, which suppressed signs of nervousness. We conducted two experiments to test this hypothesis. Tonic electrodermal arousal and blink rate were examined during task-induced (Experiment 1) and deception-induced cognitive load (Experiment 2). Both increased cognitive difficulty and deception resulted in decreased tonic arousal and blinking. This demonstrated for the first time that when lying results in heightened levels of cognitive load, signs of nervousness are decreased. We discuss implications for detecting deception and more wide-ranging phenomena related to emotional behaviour.

And finally in this round-up, an article published in Psychology and Aging this year which tested older adults’ ability to detect deceit, and whether any impairments were related to a reduced ability to recognise the facial emotions expressed by the lie-teller.

Facial expressions of emotion are key cues to deceit (M. G. Frank & P. Ekman, 1997). Given that the literature on aging has shown an age-related decline in decoding emotions, we investigated (a) whether there are age differences in deceit detection and (b) if so, whether they are related to impairments in emotion recognition. Young and older adults (N = 364) were presented with 20 interviews (crime and opinion topics) and asked to decide whether each interview subject was lying or telling the truth. There were 3 presentation conditions: visual, audio, or audiovisual. In older adults, reduced emotion recognition was related to poor deceit detection in the visual condition for crime interviews only.

Next round-up will cover recent research on new technologies for lie detection!

Facial expressions and verbal cues to deception

Hat tip to Neuroethics and Law blog for pointing us towards an article in New Scientist (17 Sept) about lies and spin in the current US Presidential campaign.

NS briefly touches on Paul Ekman’s work on facial microexpressions before devoting more attention to the work of David Skillicorn:

Skillicorn has been watching out for verbal “spin”. He has developed an algorithm that evaluates word usage within the text of a conversation or speech to determine when a person “presents themselves or their content in a way that does not necessarily reflect what they know to be true”.

NS then turns to Branka Zei Pollermann, who combines voice and facial analysis:

“The voice analysis profile for McCain looks very much like someone who is clinically depressed,” says Pollermann… [who] uses auditory analysis software to map seven parameters of a person’s speech, including pitch modulation, volume and fluency, to create a voice profile. She then compares that profile with the speaker’s facial expressions, using as a guide a set of facial expressions mapped out by Ekman, called the Facial Action Coding System, to develop an overall picture of how they express themselves.
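To make “mapping seven parameters of a person’s speech” a little more concrete, here is a toy sketch of how two of them, pitch modulation and volume, might be pulled out of a waveform. This is a deliberately crude illustration on a synthetic signal, using a bare-bones autocorrelation pitch tracker; it is not Pollermann’s software, and real systems are far more robust:

```python
import numpy as np

def split_frames(x, frame_len, hop):
    n = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def f0_autocorr(frame, sr, fmin=75, fmax=400):
    """Crude pitch estimate: lag of the autocorrelation peak within the voice range."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    return sr / (lo + np.argmax(ac[lo:hi]))

sr = 16_000
t = np.arange(sr) / sr                               # one second of fake "speech"
inst_f = 150 + 10 * np.sin(2 * np.pi * 3 * t)        # pitch wobbling around 150 Hz
y = np.sin(2 * np.pi * np.cumsum(inst_f) / sr) * (0.5 + 0.3 * np.sin(2 * np.pi * t))

frames = split_frames(y, frame_len=640, hop=320)     # 40 ms frames, 20 ms hop
f0 = np.array([f0_autocorr(f, sr) for f in frames])
rms = np.sqrt((frames ** 2).mean(axis=1))

voice_profile = {
    "mean_pitch_hz": round(float(f0.mean()), 1),
    "pitch_modulation_hz": round(float(f0.std()), 1),   # spread of pitch over time
    "mean_volume_rms": round(float(rms.mean()), 3),
    "volume_variability": round(float(rms.std()), 3),
}
print(voice_profile)
```

The resulting dictionary is the flavour of ‘voice profile’ the article is describing: summary statistics of pitch and loudness over time, which can then be set against facial coding.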

This story prompted quite a flurry of comments on the website (some of which are worth reading!).

Skillicorn has posted more about his research and its theoretical basis (James Pennebaker’s LIWC technique, pdf here) at his blog Finding Bad Guys in Data.
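The core of the LIWC-style approach that Skillicorn builds on is disarmingly simple: count how often words from a handful of psychologically meaningful categories occur, relative to the length of the text. The sketch below is a toy version with tiny hand-picked word lists, purely to show the mechanics; the real LIWC dictionaries are far larger (and proprietary), and Skillicorn adds a statistical model on top:

```python
import re

# Tiny, hand-picked stand-ins for LIWC-style categories (illustration only).
CATEGORIES = {
    "first_person_singular": {"i", "me", "my", "mine", "myself"},
    "exclusive_words": {"but", "except", "without", "although", "unless"},
    "negative_emotion": {"hate", "worthless", "enemy", "afraid", "angry"},
    "motion_verbs": {"go", "went", "move", "moves", "moved", "walk", "walked", "run", "ran"},
}

def category_rates(text):
    """Rate per 1,000 words of each word category in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    return {cat: 1000 * sum(w in vocab for w in words) / total
            for cat, vocab in CATEGORIES.items()}

speech = """I went to the factory and I spoke to the workers myself,
but without their support nothing moves forward."""
print(category_rates(speech))
```

In the Newman-Pennebaker deception model that this work draws on, texts with unusually low rates of first-person singular pronouns and exclusive words (and elevated rates of negative-emotion and motion words) score towards the ‘spin’ end of the scale.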

Polygraph reasoning applied to spotting terrorists…

Remember that the rationale behind the polygraph is that (with an appropriate questioning regime) guilty people are assumed to have physiological responses that differ from those of innocents? Well, the new “anxiety-detecting machines” that the DHS hopes might one day spot terrorists seem to work on the same basis. Here’s the report from USA Today (18 Sept):

A scene from the airport of the future: A man’s pulse races as he walks through a checkpoint. His quickened heart rate and heavier breathing set off an alarm. A machine senses his skin temperature jumping. Screeners move in to question him. Signs of a terrorist? Or simply a passenger nervous about a cross-country flight?

It may seem Orwellian, but on Thursday, the Homeland Security Department showed off an early version of physiological screeners that could spot terrorists. The department’s research division is years from using the machines in an airport or an office building— if they even work at all. But officials believe the idea could transform security by doing a bio scan to spot dangerous people.

Critics doubt such a system can work. The idea, they say, subjects innocent travelers to the intrusion of a medical exam.

According to the report, some effort is going into testing the equipment, though if the details are to be believed the research is still at a very early stage:

To pinpoint the physiological reactions that indicate hostile intent, researchers… recruited 140 local people with newspaper and Internet ads seeking testers in a “security study.” Each person receives $150.

On Thursday, subjects walked one by one into a trailer with a makeshift checkpoint. A heat camera measured skin temperature. A motion camera watched for tiny skin movements to measure heart and breathing rates. As a screener questioned each tester, five observers in another trailer looked for sharp jumps on the computerized bands that display the person’s physiological characteristics.

Some subjects were instructed in advance to try to cause a disruption when they got past the checkpoint, and to lie about their intentions when being questioned. Those people’s physiological responses are being used to create a database of reactions that signal someone may be planning an attack. More testing is planned for the next year.

The questioning element does make it sound like what is being developed is a ‘remote’ polygraph.
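At its simplest, the logic the report describes (watch a person’s own baseline, then flag ‘sharp jumps’ during questioning) looks something like the sketch below. This is my illustration of the general idea, with hypothetical heart-rate numbers; it bears no particular relation to whatever the DHS system actually does:

```python
import numpy as np

def flag_jumps(baseline, during_questioning, z_threshold=3.0):
    """Flag readings that jump well above the subject's own resting baseline."""
    mu, sigma = np.mean(baseline), np.std(baseline) + 1e-9   # epsilon avoids division by zero
    z = (np.asarray(during_questioning) - mu) / sigma
    return z > z_threshold

# Hypothetical heart-rate samples (beats per minute).
resting = [72, 74, 71, 73, 75, 72, 74]
questioning = [76, 92, 88, 74]

print(flag_jumps(resting, questioning))   # [False  True  True False]
```

Which makes the familiar problem obvious: a passenger who is simply terrified of flying trips exactly the same flag as the hypothetical attacker, just as critics of the polygraph have always pointed out.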

Hat tip to Crim Prof Blog.

UPDATE: Lots of places picking this up all over the www. New Scientist has a post on the same topic here, and an earlier article on the system here. The Telegraph’s report adds some new information.

Increasing Cognitive Load to Facilitate Lie Detection: The Benefit of Recalling an Event in Reverse Order

Continuing with their research on the ‘cognitive load hypothesis’, Aldert Vrij and colleagues from Portsmouth University report on a technique for facilitating lie detection – telling the story in reverse order. This article appears in the latest issue of Law and Human Behavior, although the study featured extensively in the press a few months ago (see here).

Here’s the abstract:

In two experiments, we tested the hypotheses that (a) the difference between liars and truth tellers will be greater when interviewees report their stories in reverse order than in chronological order, and (b) instructing interviewees to recall their stories in reverse order will facilitate detecting deception. In Experiment 1, 80 mock suspects told the truth or lied about a staged event and did or did not report their stories in reverse order. The reverse order interviews contained many more cues to deceit than the control interviews. In Experiment 2, 55 police officers watched a selection of the videotaped interviews of Experiment 1 and made veracity judgements. Requesting suspects to convey their stories in reverse order improved police observers’ ability to detect deception and did not result in a response bias.

Reference:

Deception research across the blogosphere

The physiology of lying by exaggerating: over at the BPS Research Digest Blog, there’s a summary of research that has caused ripples around the media. Lying by exaggeration doesn’t seem to cause the typical physiological arousal effects that some associate with liars:

Telling lies about our past successes can sometimes be self-fulfilling, at least when it comes to exam performance. That’s according to the New York Times, which reports on studies by Richard Gramzow at the University of Southampton and colleagues.

Their research has shown that, when asked, many students exaggerate their past exam performance, and that those students who do this tend to go on to perform better in the future.

What’s more, a study published in February showed that when these exaggerators are interviewed about their past academic performance, they don’t show any of the physiological hallmarks associated with lying, but rather their bodies stay calm. It’s almost as though this is a different kind of lying, aimed more at the self, with the hope of encouraging improved future performance.

More commentary on this research over at Deric Bownds’ Mind Blog.

Reference:

Two popular articles on deception: via the Situationist Blog (7 April), a link to an article in Forbes on “how to sniff out a liar” (which doesn’t include any hints for olfactory detection of deceivers!). And hat tip to the Antipolygraph Blog (16 April) for pointing us to The Lie of Lie Detectors by Rob Shmerling:

Recently, two studies announced effective ways to determine whether a person was telling the truth — one used a brain scan while the other detected heat around the face. Since you probably tell the truth all of the time, it is likely that these reports will have no direct bearing on you. But, for those who perform lie detector tests or for those who might be asked to submit to one, these techniques could someday change how these tests are performed.

The Pentagon’s “Porta-Poly”: The news that the Pentagon is trialling a ‘pocket lie detector’ known as the Preliminary Credibility Assessment Screening System (PCASS) for soldiers has been picked up and commented upon by a number of sources including Bruce Schneier and the Anti-Polygraph Blog, but don’t skip the original MSN story which is well worth reading.

Update: Missed one. Over at Practical Ethics, see Fighting Absenteeism with Voice Analysis (16 May). The news that some companies are apparently considering using this discredited technology to check up on workers calling in sick is chilling.

NPR on lie detection

Hat tip to blog.bioethics.net (a great blog associated with the American Journal of Bioethics):

This past week NPR’s Morning Edition carried a three-part series about lie detection reported by Dina Temple-Raston. (The segments are posted as both audio and text, so they’re easy to scan if you can’t listen.) The series covers the questionable accuracy of polygraphs, the emerging field of lie detection by fMRI, and the examination of facial “micro expressions” for hints of lies.

Head over to blog.bioethics.net for some commentary, or go straight to the NPR site for more details.

Investigating the Features of Truthful and Fabricated Reports of Traumatic Experiences

Stephen Porter and colleagues have a paper in the April 2007 issue of the Canadian Journal of Behavioural Science exploring the differences between truthful and fabricated accounts of traumatic experiences.

They examined students’ written accounts of fabricated and genuine traumatic events and found that:

… narratives based on false and genuine traumatic events showed several qualitative differences, some contrasting our predictions. Whereas we predicted that participants would be able to produce fabricated events that appeared to be as credible as truthful accounts, we found that fabricated events were rated lower on plausibility by coders with no knowledge of their actual veracity. This suggests that mistakes in the courtroom may result from liars who are able to effectively distract attention from their stories by manipulating their demeanour and speech (e.g., tone) (p.88).

In other words, lie catchers need to focus on what is being said, and try to avoid being misled by non-verbal behaviour.

In addition, attention to specific types of details in the narratives helped to discriminate honesty from deception. When relating a fabricated experience, participants were unable to provide the same level of contextual information as when relating a genuine experience. They provided fewer time and location details and their reports were abbreviated overall, despite our prediction that they may be more detailed in an attempt to make their trauma stories more credible and to elicit sympathy (p.88).

As far as I can see, the following, from the instructions for the study, is the only attempt to motivate participants:

Your goal in this section is to provide a believable (but fabricated) traumatic memory report. These reports will be shown to legal professionals and students (if you consented to this aspect of the study) in future research for them to determine how credible your experience appears (p.83).

It doesn’t appear from the description of the method that participants had much time to prepare their truthful or fabricated accounts. Perhaps it is not surprising, then, that the results did not conform to the researchers’ predictions? Perhaps real-life malingerers, with the outcome of a court case at stake and time to practise their account, might try harder to make their stories credible, and be better at it?

Participants also completed three widely used measures: the Revised Impact of Event Scale, which measures the level of traumatic stress associated with a traumatic experience; the Trauma Symptom Inventory, which measures trauma and posttraumatic stress disorder symptoms; and the Post-Traumatic Stress Disorder Checklist, which also screens for the presence of PTSD symptomatology. Analysis of the results suggested that

…genuine and fabricated reports of trauma could be differentiated based on the patterns of traumatic stress or symptoms reported. It was anticipated that symptoms on the three measures of traumatic stress would be exaggerated when participants were fabricating. The results provided strong evidence for this hypothesis (p.88).

Abstract below the fold.

Reference:

Photo credit: aussie_patches, Creative Commons License


Microexpressions and deception detection

A Newsweek article (16 Aug) on TSA behaviour detection officers in airports and their training in spotting microexpressions stirred up some blog commentary. Reporter Patti Davis commented:

In the study of “micro-expressions”—yes, it is actually a field of study and there are some who are arrogant enough to call it a science—it has been decided that when people wish to conceal emotions, the truth of their feelings is revealed in facial flashes. These experts have determined that fear and disgust are the key things to look for because they can hint of deception…. Let’s see, fear and disgust in an airport? I’m frightened and disgusted weeks before I have to show up at an airport.

Eyes for Lies rightly takes Davis to task for contradicting herself: it’s not about spotting people whose emotional expressions are consistent with the experience of being at an airport. If you have a genuine fear of flying or disgust at the state of the washrooms, for instance, in most cases you won’t show microexpressions, because you won’t be actively trying to conceal those emotions. Instead, EfL says, “Someone who sees microexpressions will be looking for the guy who is showing inconsistencies in emotions and behavior. For example, he will look for a guy who is acting jovial, yet strangely preoccupied and flashes an expression of disgust or fear across his face simultaneously.”

However, as Mind Hacks points out, there is limited published research supporting the notion that the ability to spot microexpressions is associated with the ability to detect deception.

Dr X links to the piece and reminds us of Paul Ekman’s notion of ‘duper’s delight’ – the sheer pleasure that some people get out of fooling others.

Want to see some microexpressions for yourself? See Paul Ekman discussing microexpressions, together with a classic example, on YouTube. And if you’d like to test your ability to spot them, try your luck here.

Photo credit: jacampos, Creative Commons License

Cues to Deception and Ability to Detect Lies as a Function of Police Interview Styles

If you were a police officer, what sort of interview style would offer you the best chance of detecting whether or not your interviewee was telling lies? Aldert Vrij and his colleagues ran a study to find out:

In Experiment 1, we examined whether three interview styles used by the police, accusatory, information-gathering and behaviour analysis, reveal verbal cues to deceit, measured with the Criteria-Based Content Analysis (CBCA) and Reality Monitoring (RM) methods. A total of 120 mock suspects told the truth or lied about a staged event and were interviewed by a police officer employing one of these three interview styles. The results showed that accusatory interviews, which typically result in suspects making short denials, contained the fewest verbal cues to deceit. Moreover, RM distinguished between truth tellers and liars better than CBCA. Finally, manual RM coding resulted in more verbal cues to deception than automatic coding of the RM criteria utilising the Linguistic Inquiry and Word Count (LIWC) software programme.

In Experiment 2, we examined the effects of the three police interview styles on the ability to detect deception. Sixty-eight police officers watched some of the videotaped interviews of Experiment 1 and made veracity and confidence judgements. Accuracy scores did not differ between the three interview styles; however, watching accusatory interviews resulted in more false accusations (accusing truth tellers of lying) than watching information-gathering interviews. Furthermore, only in accusatory interviews, judgements of mendacity were associated with higher confidence. We discuss the possible danger of conducting accusatory interviews.

In the discussion, Vrij and colleagues summarise:

The present experiment revealed that style of interviewing did not have an effect on overall accuracy (ability to distinguish between truths or lies) or on lie detection accuracy (ability to correctly identify liars). In fact, the overall accuracy rates were low and did not differ from the level of chance. This study, like so many previous studies (Vrij, 2000), thus shows the difficulty police officers face when discerning truths from lies by observing the suspect’s verbal and nonverbal behaviours.

In other words, if law enforcement officers want to increase their chances of detecting deception, they need to make sure interviewers use an information gathering approach. But simply watching that interview (live or on tape) might not help them decide whether or not the suspect is telling the truth – they may need to subject a transcript to linguistic analysis to give themselves the best chance.

Even if it doesn’t result in better ‘live’ judgements of veracity, an information gathering approach has another advantage for the law enforcement officer: it maximises the number of checkable facts elicited from the suspect, and being able to check a fact against the truth is pretty much the most effective means of uncovering false information. Of course, someone can provide false information without deliberately lying: if they have misremembered something, for instance, or are passing on something that someone else lied to them about. But then the point of any law enforcement interview is to get to the truth, which is a higher goal than simply uncovering a liar, in my opinion.

As always with lab-based studies, there are some limitations. Vrij et al., for instance, acknowledge that “in practice elements of all three styles may well be incorporated in one interview” but explain that “we distinguished between the three styles in our experiments because we can only draw conclusions about the effects of such styles only by examining them in their purest form”.

Further problems, which are difficult to overcome in structured lab settings, are caused because participants were assigned randomly to ‘guilty’ (liars) or ‘innocent’ (truth tellers) conditions. In the real world, individuals who are prepared to put themselves in a situation in which they might later have to lie may differ in their ability to lie effectively from those who try to stay out of such situations. And real guilty suspects make a decision about whether they are going to lie (a few confess from the start, others will offer partial or whole untruths). It’s an issue that is open to empirical test: let participants choose whether they want to be in the ‘guilty’ or ‘innocent’ conditions (or have four conditions: guilty choice/guilty no choice/innocent choice/innocent no choice).

Also, in this study the liars were told what lie to tell (as opposed to being able to make one up). Real guilty suspects who decide to lie will presumably choose a lie that they think they stand a good chance of being able to get away with. In real-world conditions, the perception by the guilty individual of what sort of situation they’re in, the evidence against them, the plausible story they can tell to explain away the evidence, and their ability to lie effectively are probably all important.

Reference:

Photo credit: scottog, Creative Commons License

The uncomfortable truth about liars

The UK Guardian newspaper runs a regular column entitled “Improbable research”, and the most recent (12 June) was about deception research. Marc Abrahams highlights research published last year (he says ‘recently published’, I say ‘is Jan 2006 still recent??’) in the Journal of Cross-Cultural Psychology, from a group of researchers calling themselves the Global Deception Research Team:

The team is big. It has 91 members, spread all around the world. Their stated goal: “studying stereotypes about liars”. They ask someone, “How can you tell when people are lying?”, then follow this up with 10 simple questions about liars.

Abrahams lists the questions, and gives details of the countries in which these questions were asked. The results?

Here is their pithy distillation: “[There are] common stereotypes about the liar, and these should not be ignored. Liars shift their posture, they touch and scratch themselves, liars are nervous, and their speech is flawed. These beliefs are common across the globe. Yet in prevalence, these stereotypes are dwarfed by the most common belief about liars: ‘they can’t look you in the eye’.”

That is their great discovery. And it accords with previous discoveries by other researchers.

It’s not really a very good article, which is a shame, because the original research is pretty interesting. What Abrahams doesn’t explain is that most research on deception indicates that gaze aversion and fidgeting are not reliable signs of lying, though there is some evidence that linguistic cues can be useful in discriminating truths from lies. The authors of the ‘world of lies’ study explore some possible explanations for the near-universal belief that liars can’t look you in the eye, and I think their hypothesis is an intriguing one:

If stereotypes about lying do not reflect observations of deceptive behavior, how do they arise? Let us propose an answer to this question. Stereotypes about lying are designed to discourage lies. They are not intended to be descriptive; rather, they embody a worldwide norm. Children should be ashamed when they lie to their parents, and liars should feel bad. Lying should not pay, and liars should be caught. Stereotypes of the liar capture and promote these prescriptions. As vehicles for social control, these stereotypes are transmitted from one generation to the next.

Worldwide, socialization agents face a common challenge. They cannot always be present and must control misbehavior that occurs in their absence. If the ultimate goal of socialization is to inculcate a wide set of norms, children must first learn to report their misdeeds. Thus, caregivers have an incentive to pass along the usual lore: that lying will make the child feel bad, that the child’s lies will be transparent, and that deceit will be more severely punished than any acknowledged transgression. The hope is that lying will be deterred or (at least) that the caregiver’s prophesies of shame will be self-fulfilling. By vilifying deception, stereotypes of the liar are designed to extend the reach of societal norms to actions that go unwitnessed (pp. 69-70).

You should read the original article rather than rely on Abrahams’ effort – the reference is below, and the link takes you to a free pdf download of the full article.

There’s a strange and sad postscript (or should that be ‘prescript’, since the events happened before the Guardian article was published) to this story: Charles F. Bond, the lead author and one of very few researchers who have taken the trouble to conduct research on deception in non-Western populations, has been in the news recently after being arrested for allegedly threatening colleagues.

Reference:

  • Global Deception Research Team (2006). A World of Lies [pdf]. Journal of Cross-Cultural Psychology, 37(1), 60-74.

Photo credit: Allison_mc, Creative Commons License

New interview technique could help police spot deception

… according to a press release from the Economic and Social Research Council (7 June):

Shifting uncomfortably in your seat? Stumbling over your words? Can’t hold your questioner’s gaze? Police interviewing strategies place great emphasis on such visual and speech-related cues, although new research funded by the Economic and Social Research Council and undertaken by academics at the University of Portsmouth casts doubt on their effectiveness. However, the discovery that placing additional mental stress on interviewees could help police identify deception has attracted interest from investigators in the UK and abroad.

[…] A series of experiments involving over 250 student ‘interviewees’ and 290 police officers, the study saw interviewees either lie or tell the truth about staged events. Police officers were then asked to tell the liars from the truth tellers using the recommended strategies. Those paying attention to visual cues proved significantly worse at distinguishing liars from those telling the truth than those looking for speech-related cues.

[…] However, the picture changed when researchers raised the ‘cognitive load’ on interviewees by asking them to tell their stories in reverse order. Professor Aldert Vrij explained: “Lying takes a lot of mental effort in some situations, and we wanted to test the idea that introducing an extra demand would induce additional cues in liars. Analysis showed significantly more non-verbal cues occurring in the stories told in this way and, tellingly, police officers shown the interviews were better able to discriminate between truthful and false accounts.”

Asking an interviewee to tell their story in reverse order is not a new interview technique – it’s one of the techniques used in the Cognitive Interview, more usually deployed to get maximum detail in statements from victims and witnesses.

There are also detailed articles in the UK Times and Daily Telegraph newspapers based on (and building on) this press release.

More details, and links to downloadable reports, are available on the ESRC website via this link.

References:

Photo credit: Bingo_little, Creative Commons License

Moderators of Nonverbal Indicators of Deception

A new meta-analysis of non-verbal indicators of deception from Siegfried Sporer and Barbara Schwandt at the Justus-Liebig-University of Giessen in Germany:

[…] This meta-analysis investigated directly observable nonverbal correlates of deception as a function of different moderator variables. Although lay people and professionals alike assume that many nonverbal behaviors are displayed more frequently while lying, of 11 different behaviors observable in the head and body area, only 3 were reliably associated with deception. Nodding, foot and leg movements, and hand movements were negatively related to deception in the overall analyses weighted by sample size. Most people assume that nonverbal behaviors increase while lying; however, these behaviors decreased, whereas others showed no change. There was no evidence that people avoid eye contact while lying, although around the world, gaze aversion is deemed the most important signal of deception. Most effect sizes were found to be heterogeneous. Analyses of moderator variables revealed that many of the observed relationships varied as a function of content, motivation, preparation, sanctioning of the lie, experimental design, and operationalization. Existing theories cannot readily account for the heterogeneity in findings. Thus, practitioners are cautioned against using these indicators in assessing the truthfulness of oral reports. (PsycINFO Database Record (c) 2007 APA, all rights reserved)
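Two bits of meta-analytic jargon in that abstract are easy to unpack. ‘Weighted by sample size’ means bigger studies count for more when the per-study effect sizes are averaged; ‘heterogeneous’ means the studies disagree with one another more than sampling error alone would explain, conventionally tested with Cochran’s Q. Here is a minimal sketch with invented study-level correlations, using the standard Fisher z approach; Sporer and Schwandt’s actual procedure will differ in its details:

```python
import numpy as np
from scipy.stats import chi2

# Invented per-study summaries: correlation between a nonverbal cue
# (say, hand movements) and deception, plus each study's sample size.
r = np.array([-0.35, 0.00, 0.15, -0.40, -0.10])
n = np.array([40, 120, 60, 35, 200])

z = np.arctanh(r)     # Fisher's z transform of each correlation
w = n - 3             # weight = inverse of var(z) = 1 / (n - 3)

z_bar = np.sum(w * z) / np.sum(w)        # weighted mean effect size
r_bar = np.tanh(z_bar)

Q = np.sum(w * (z - z_bar) ** 2)         # Cochran's Q heterogeneity statistic
p = chi2.sf(Q, df=len(r) - 1)            # Q ~ chi-square(k-1) if effects are homogeneous

print(f"weighted mean r = {r_bar:.3f}")
print(f"Q = {Q:.2f}, p = {p:.3f}  (small p: effects differ across studies)")
```

A significant Q is a warning that a single average effect glosses over real differences between studies, which is exactly why the moderator analyses (content, motivation, preparation and so on) matter here.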

Reference:

See also:

Psychopathy and Nonverbal Indicators of Deception in Offenders

Lying and deceit are common features of psychopathy, yet few studies have explored the behaviours of psychopaths while they lie. In an in-press article to appear in Law and Human Behavior, Jessica R. Klaver, Zina Lee and Stephen D. Hart from Simon Fraser University in Canada write:

Extant research suggests that, contrary to what might be expected, psychopathy is generally unrelated to a greater capacity for successful lying behavior. However, it may be that psychopathic offenders are able to deceive successfully by making use of interpersonal skills that are not captured by relatively structured assessments. Their ability to captivate and appear genuine while conning and manipulating others may be enhanced by an effective nonverbal behavioral presentation.

Klaver and her colleagues tested 45 offenders, some of whom were classified as psychopaths on the basis of the Hare Psychopathy Checklist – Revised. Participants were videotaped talking about the circumstances of the offence for which they were incarcerated (true condition) and telling a false story about an offence they didn’t commit (false condition). The authors summarise the results in the abstract:

[…] Interpersonal features of psychopathy were associated with inflated views of lying ability, verbosity, and increases in blinking, illustrator use, and speech hesitations. While lying, the more psychopathic offenders spoke faster and demonstrated increases in blinking and head movements. Indicators of deception in offenders were somewhat different from those typically observed in non-offender populations. These findings indicate that personality factors may have an impact on nonverbal indicators of deception in criminal justice settings where the detection of deception is of utmost concern.

Reference:

10 Ways to Catch a Liar

WebMD’s 23 November article on catching liars is remarkable. It’s the first of its kind that I’ve come across where I think I agree with every one of the tips. No assertion that gaze aversion is a cue to deception! No suggestion that liars fidget or look nervous! No claim that NLP eye access cues can help you spot a liar! Why can’t all articles on lie detection be like this?

In brief, here are the ten tips. But do go and read the whole article, which has more detail on each tip.

1: Inconsistencies: […] “When you want to know if someone is lying, look for inconsistencies in what they are saying,” says [JJ] Newberry, who was a federal agent for 30 years and a police officer for five. […]

2: Ask the Unexpected: […] “Watch them carefully,” says Newberry. “And then when they don’t expect it, ask them one question that they are not prepared to answer to trip them up.” […]

3: Gauge Against a Baseline: […] The trick, explains [Maureen O’Sullivan, PhD, a professor of psychology at the University of San Francisco], is to gauge their behavior against a baseline. Is a person’s behavior falling away from how they would normally act? If it is, that could mean that something is up. […]

4: Look for Insincere Emotions: […] “Most people can’t fake smile,” says O’Sullivan. “The timing will be wrong, it will be held too long, or it will be blended with other things. […]

5: Pay Attention to Gut Reactions: […] “People say, ‘Oh, it was a gut reaction or women’s intuition,’ but what I think they are picking up on are the deviations of true emotions,” O’Sullivan tells WebMD. […]

6: Watch for Microexpressions: […] “A microexpression is a very brief expression, usually about a 25th of a second, that is always a concealed emotion,” says [Paul] Ekman, PhD, professor emeritus of psychology at the University of California Medical School in San Francisco. […]

7: Look for Contradictions: […] “The general rule is anything that a person does with their voice or their gesture that doesn’t fit the words they are saying can indicate a lie,” says Ekman. […]

Tips 1 to 7 are all sensible and well-founded. I have a couple of caveats about 8 and 9:

8: A Sense of Unease: […] “When someone isn’t making eye contact and that’s against how they normally act, it can mean they’re not being honest,” says Jenn Berman, PhD, a psychologist in private practice. […]

Emphasis added – lack of eye contact in itself is not a reliable indicator of deception, but if it is out of character then you might have to ask yourself: why the change in behaviour? All of which underlines the importance of establishing baseline behaviour (tip 3).

9: Too Much Detail: […] Too much detail could mean they’ve put a lot of thought into how they’re going to get out of a situation and they’ve crafted a complicated lie as a solution.

Caveat: unless they’re the sort of person who always provides excessive detail in stories (some people are just like that!).

10: Don’t Ignore the Truth: “It’s more important to recognize when someone is telling the truth than telling a lie because people can look like they’re lying but be telling truth,” says Newberry.

Hat tip to Enrica Dente’s Lie Detection list for the link!

Ten Ways To Tell If Someone Is Lying To You

…according to a recent article on Forbes.com (3 Nov):

In business, politics and romance, it would be nice to know when we’re being lied to. Unfortunately humans aren’t very good at detecting lies. Our natural tendency is to trust others, and for day-to-day, low-stakes interactions, that makes sense. We save time and energy by taking statements like “I saw that movie” or “I like your haircut” at face value. But while it would be too much work to analyze every interaction for signs of deception, there are times when we really need to know if we’re getting the straight story. Maybe a crucial negotiation depends on knowing the truth, or we’ve been lied to and want to find out if it’s part of a pattern.

The article has ten accompanying slides, with suggestions for the would-be lie catcher. Alongside sensible suggestions – like monitoring pauses, seeking detail and asking the person to repeat their story – other slides suggest that gaze aversion, sweating and fidgeting are all signs of deception, despite the fact that there is no scientific evidence that such behaviours are more common in liars than in truth-tellers. They also suggest:

Look for dilated pupils and a rise in vocal pitch. Psychologists DePaulo and Morris found that both phenomena were more common in liars than truth-tellers.

Both pupil dilation and pitch changes are indications of changes in arousal level (stress cues), and both can be very subtle. They are probably not the best cues for a would-be lie detector to rely on. The Forbes article concludes:

Psychologists who study deception, though, are quick to warn that there is no foolproof method. […] It’s tough to tell the difference between a liar and an honest person who happens to be under a lot of stress.

More on microexpressions

Those of you interested in Paul Ekman’s article on behavioural profiling at Logan Airport can read more about his work on microexpressions in the latest issue of Scientific American Mind (October 2006).

As soon as we observe another person, we try to read his or her face for signs of happiness, sorrow, anxiety, anger. Sometimes we are right, sometimes we are wrong, and errors can create some sticky personal situations. Yet Paul Ekman is almost always right. The psychology professor emeritus at the University of California, San Francisco, has spent 40 years studying human facial expressions. He has catalogued more than 10,000 possible combinations of facial muscle movements that reveal what a person is feeling inside. And he has taught himself how to catch the fleeting involuntary changes, called microexpressions, that flit across even the best liar’s face, exposing the truth behind what he or she is trying to hide.

This article is free to view, though another in the same issue, Exposing Lies, is behind a paywall.

How to Spot a Terrorist on the Fly

Psychologist Paul Ekman has an article in today’s Washington Post (29 Oct) on his experience with the behavioural profiling TSA team at Boston Logan airport.

Critics of the controversial new security program I was taking stock of — known as SPOT, for Screening Passengers by Observational Techniques — have said that it is an unnecessary invasion of privacy, based on an untested method of observation, that is unlikely to yield much in the way of red-handed terrorists set on blowing up a plane or flying it into a building, but would violate fliers’ civil rights.

I disagree. I’ve participated in four decades’ worth of research into deception and demeanor, and I know that researchers have amassed enough knowledge about how someone who is lying looks and behaves that it would be negligent not to use it in the search for terrorists. Along with luggage checks, radar screening, bomb-sniffing dogs and the rest of our security arsenal, observational techniques can help reduce risks — and potentially prevent another deadly assault like the attacks of Sept. 11, 2001.

Read the whole article online here.