Category Archives: Verbal behaviour

Quick deception links for the last few weeks

Gah, Twitter update widget broken. Here are the deception-relevant tweets from the last few weeks:

Polygraph and similar:

  • Detecting concealed information w/ reaction times: Validity & comparison w/ polygraph App Cog Psych 24(7)
  • Important (rare) study on polygraph w/ UK sex offenders: leads to more admissions; case mgrs perceive increased risk

fMRI and other brain scanning:

  • If Brain Scans Really Detected Deception, Who Would Volunteer to be Scanned? J Forensic Sci
  • FMRI & deception: “The production and detection of deception in an interactive game” in Neuropsychologia
  • In the free access PLoS1: fMRI study indicates neural activity associated with deception is valence-related. PLoS One 5(8).

Verbal cues:

  • Distinguishing truthful from invented accounts using reality monitoring criteria –
  • Detecting Deceptive Discussions in Conference Calls. Linguistic analysis method 50-65% accuracy. SSRN via
  • Effect of suspicion & liars’ strategies on reality monitoring Gnisci, Caso & Vrij in App Cog Psy 24:762–773

Applied contexts:

  • A new Canadian study on why sex offenders confess during police interrogation (no polygraph necessary)
  • Can fabricated evidence induce false eyewitness testimony? App Cog Psych 24(7) Free access
  • In press, B J Soc Psy Cues to deception in context. Apparently ‘context’ = ‘Jeremy Kyle Show’. Can’t wait for the paper!
  • Can people successfully feign high levels of interrogative suggestibility & compliance when given instructions to malinger?

Kids fibbing:

  • Eliciting cues to children’s deception via strategic disclosure of evidence App Cog Psych 24(7)
  • Perceptions about memory reliability and honesty for children of 3 to 18 years old –

And some other links of interest:

Research round-up 6: And finally, kids’ lies, online lies and my deception book of the year

Happy new year! Here is the final part of the 2008 deception research round-up, put together to make amends for having neglected this blog over the past few months. This post includes bits and pieces of deception research that didn’t fit too well into the first five round-up posts. Hope you’ve enjoyed them all!


First, a couple of articles about how children learn to lie:

Eye gaze plays a pivotal role during communication. When interacting deceptively, it is commonly believed that the deceiver will break eye contact and look downward. We examined whether children’s gaze behavior when lying is consistent with this belief. …Younger participants (7- and 9-year-olds) broke eye contact significantly more when lying compared with other conditions. Also, their averted gaze when lying differed significantly from their gaze display in other conditions. In contrast, older participants did not differ in their durations of eye contact or averted gaze across conditions. Participants’ knowledge about eye gaze and deception increased with age. This knowledge significantly predicted their actual gaze behavior when lying. These findings suggest that with increased age, participants became increasingly sophisticated in their use of display rule knowledge to conceal their deception.

The relation between children’s lie-telling and their social and cognitive development was examined. Children (3-8 years) were told not to peek at a toy. Most children peeked and later lied about peeking. Children’s subsequent verbal statements were not always consistent with their initial denial and leaked critical information revealing their deceit. Children’s conceptual moral understanding of lies, executive functioning, and theory-of-mind understanding were also assessed. Children’s initial false denials were related to their first-order belief understanding and their inhibitory control. Children’s ability to maintain their lies was related to their second-order belief understanding. Children’s lying was related to their moral evaluations. These findings suggest that social and cognitive factors may play an important role in children’s lie-telling abilities.

Technotreachery – lying via CMC

It’s a popular topic and the literature is growing all the time. Here’s some of the new research published in 2008 about lying in computer-mediated communication:

This study aimed to elaborate the relationships between sensation-seeking, Internet dependency, and online interpersonal deception. Of the 707 individuals recruited to this study, 675 successfully completed the survey. The results showed high sensation-seekers and high Internet dependents were more likely to engage in online interpersonal deception than were their counterparts.

Deception research has been primarily studied from a Western perspective, so very little is known regarding how other cultures view deception… this study proposes a framework for understanding the role Korean and American culture plays in deceptive behavior for both face-to-face (FTF) and computer-mediated communication (CMC). … Korean respondents exhibited greater collectivist values, lower levels of power distance, and higher levels of masculine values than Americans. Furthermore, deceptive behavior was greater for FTF communication than for CMC for both Korean and American respondents. In addition to a significant relationship between culture and deception, differences were found between espoused cultural values and deceptive behavior, regardless of national culture. These results indicate the need for future research to consider cultural differences when examining deceptive behavior.

This study set out to investigate the type of media via which individuals are more likely to tell self-serving and other-oriented lies, and whether this varied according to the target of the lie. One hundred and fifty participants rated on a Likert-type scale how likely they would be to tell a lie. Participants were more likely to tell self-serving lies to people not well-known to them. They were more likely to tell self-serving lies in email, followed by phone, and finally face-to-face. Participants were more likely to tell other-oriented lies to individuals they felt close to, and this did not vary according to the type of media. Participants were more likely to tell harsh truths to people not well-known to them via email.

Detecting deception

OK, I know this probably could have gone into an earlier post. It involves a bit of machinery, so it didn’t fit in part 1, but the machinery has been in use for several decades, so it couldn’t really fit in part 2 either.

An increasing number of researchers are exploring variations of the Concealed Knowledge Test (CKT) as alternatives to traditional ‘lie-detector’ tests. For example, the response times (RT)-based CKT has been previously shown to accurately detect participants who possess privileged knowledge. Although several studies have reported successful RT-based tests, they have focused on verbal stimuli despite the prevalence of photographic evidence in forensic investigations. Related studies comparing pictures and phrases have yielded inconsistent results. The present work compared an RT-CKT using verbal phrases as stimuli to one using pictures of faces. This led to equally accurate and efficient tests using either stimulus type. Results also suggest that previous inconsistent findings may be attributable to study procedures that led to better memory for verbal than visual items. When memory for verbal phrases and pictures were equated, we found nearly identical detection accuracies.
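The RT-CKT logic described in the abstract is simple enough to sketch: an examinee who possesses the concealed knowledge responds more slowly to the crime-relevant ‘probe’ item than to matched irrelevant foils. Below is a toy per-examinee score built on that assumption. The trial data and the scoring choice (a standardised probe-minus-foil mean difference) are invented for illustration and are not taken from the paper:

```python
from statistics import mean, stdev

# Toy reaction-time records (ms) for one examinee. The "probe" item is
# crime-relevant; the "irrelevant" items are matched foils. All numbers
# are invented for illustration.
trials = [
    ("probe", 742), ("irrelevant", 615), ("irrelevant", 598),
    ("probe", 760), ("irrelevant", 640), ("irrelevant", 623),
    ("probe", 731), ("irrelevant", 610), ("irrelevant", 634),
]

def ckt_score(trials):
    """Standardised probe-vs-foil RT difference: larger positive values
    suggest slower probe responses, i.e. possible concealed knowledge."""
    probes = [rt for kind, rt in trials if kind == "probe"]
    foils = [rt for kind, rt in trials if kind == "irrelevant"]
    pooled_sd = stdev(probes + foils)
    return (mean(probes) - mean(foils)) / pooled_sd

score = ckt_score(trials)
print(f"CKT effect size: {score:.2f}")  # clearly positive = slower probe responses
```

In a real RT-CKT the same comparison would be made per examinee across many probe/foil sets, with a decision threshold validated against known-innocent controls; this sketch only shows the core contrast.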

Deception book of the year

And finally, an important publication in 2008 was the second edition of Aldert Vrij’s Detecting Lies and Deceit: Pitfalls and Opportunities. The first edition (published in 2000) has been one of my key references for scholarly research on deception, along with Paul Ekman’s Telling Lies: Clues to Deceit in the Marketplace, Politics and Marriage and Granhag and Strömwall’s edited volume The Detection of Deception in Forensic Contexts. Not surprising, then, that Vrij’s second edition is already one of the most frequently consulted volumes on my deception bookshelf.

Vrij says that he did not originally envisage updating his 2000 book until at least 2010, but felt with the increasing amount of new research in this area, and increasing interest from law enforcement and security agencies in detecting deception that he could not wait that long. The result is a volume that is substantially updated with research published up to about the middle of 2007. The book has been completely rewritten and there are several new chapters covering recent developments in mechanical methods of deception detection, including brain scanning technologies (e.g., fMRI, P300 brain waves), thermal imaging and voice stress analysis. Vrij also adds a helpful chapter on how professionals can become better lie detectors.

It’s not perfect – I’d welcome more detail on understanding the reasons why people lie (the book is mostly about catching liars), more on creating a context in which someone is more likely to tell the truth, and more discussion of cross-cultural differences in deception (though to be fair there is shockingly little research in this area to discuss). But despite these criticisms, Vrij’s new book remains a ‘must have’ reference for academics and professionals interested in up-to-date research on deception detection. Practitioners in particular should heed Vrij’s warning about over-hyped techniques for ‘deception detection’: as Vrij says, the best way to avoid falling for the hype is by keeping up to date with the independent, objective research on deception detection. This book is a great tool for giving yourself a grounding in that research.

Phew. Six months’ blogging in 6 days. Hope you enjoyed it!

New research: Recording lying, cheating, and defiance in an Internet Based Simulated Environment

Over in the latest issue of Computers in Human Behavior, Sara Russell and Lawrence James report on research on lying and cheating in a virtual environment. This paper is less about lying and cheating per se, however, and more about a new method for eliciting and recording such behaviour. Why does this matter? The authors explain:

… most psychological research being conducted on the Internet includes simply changing the delivery method of questionnaires from paper-and-pencil to an electronic equivalence. Although this transition will offer advantages over traditional methods and is a step forward, it does not embrace the full capabilities of an Internet environment as a tool. The authors found no reports of Internet research being conducted in which behavior was both elicited and recorded in a pre-defined, controlled Internet environment…

Other than highly complex and expensive software creations of virtual realities such as flight training simulations, research is lacking in utilizing electronically generated environment such as an Internet Based Simulated Environment (IBSE) to mimic experiences that can elicit and record specific pre-defined behaviors of interest to scientists. More specifically, research utilizing such technology to record behavioral manifestations of human personality traits is needed. This need for an electronic environment such as an IBSE will be driven by the inherent difficulty of utilizing direct observation in a real or laboratory setting which often leads researchers to rely on self-reports of behavior. (p.2015)

The researchers asked participants to complete (on paper) the Conditional Reasoning Test of Aggression (CRT-A), a test that measures the tendency to “respond to frustrating situations in an aggressive way” (p.2016) and then directed them to an online quiz that was designed “to initially and continually cause frustrating situations to occur from the start to the end of user experience” (p.2017). These included patronising instructions (“use your keyboard to type your username”), and random ‘errors’ that were not the participant’s fault (“Page cannot be found, you performed an incorrect operation. Click here to retrieve your quiz”). (The authors said that when constructing the test they drew inspiration from their own experiences – I think many of us will identify with the sort of situations they created!)

In response, participants were given the opportunity to lie (about whether or not they had read a detailed set of instructions) or cheat (when a link they clicked took them apparently to an administrator site that allowed them to change their scores). Defiant participants (those who, for instance, truthfully said they had not read the instructions but nevertheless started the quiz) were also logged.

In all, a quarter of the 191 participants in this study cheated, around 11% were defiant and 7% lied; just under 40% performed at least one of these behaviours. Those who received high scores on the CRT-A were significantly more likely to engage in one or more of these behaviours.

So what? Well, the researchers rightly note that there are problems with this research, the most significant being that the method cannot measure offline behaviours such as “yelling, cursing, or possibly physical aggression towards the computer” (p.2023), and that it does not provide an objective – or even subjective – measure of ‘frustration’ (we all have different thresholds and experience different levels of frustration). But the research does show that it is possible to conduct internet-based research in a more creative and productive way than simply transposing questionnaires from paper to a website. In particular, the study demonstrated a method for getting away from reliance on self-reports and instead easily (and covertly) measuring actual occurrences of particular (potentially socially undesirable) behaviours, which has applications in a range of scenarios, not just deception-related ones.
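The core of the IBSE method – eliciting predefined behaviours and covertly logging them rather than asking about them afterwards – can be sketched as a minimal event log. Everything here (the class name, the event labels, the rate calculation) is illustrative and is not the authors’ actual software:

```python
import time

class BehaviourLog:
    """Minimal covert event log in the spirit of the IBSE method:
    record predefined behaviours (lie / cheat / defiance) with timestamps
    instead of relying on participants' self-reports."""

    def __init__(self):
        self.events = []

    def record(self, participant, behaviour, detail=""):
        # Called server-side whenever a predefined behaviour is observed,
        # e.g. starting the quiz after admitting the instructions were unread.
        self.events.append({
            "participant": participant,
            "behaviour": behaviour,   # e.g. "lie", "cheat", "defiance"
            "detail": detail,
            "t": time.time(),
        })

    def rate(self, behaviour, n_participants):
        """Proportion of participants who showed the behaviour at least once."""
        actors = {e["participant"] for e in self.events
                  if e["behaviour"] == behaviour}
        return len(actors) / n_participants

log = BehaviourLog()
log.record("p01", "cheat", "edited score via fake admin page")
log.record("p02", "lie", "claimed to have read instructions")
log.record("p01", "cheat", "second score edit")
print(log.rate("cheat", 10))  # 0.1
```

The design point is that `rate` counts distinct participants, not raw events, which is what lets per-behaviour percentages like those reported in the study fall out directly from the log.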


Facial expressions and verbal cues to deception

Hat tip to Neuroethics and Law blog for pointing us towards an article in New Scientist (17 Sept) about lies and spin in the current US Presidential campaign.

NS briefly touches on Paul Ekman’s work on microfacial expressions before devoting more attention to the work of David Skillicorn:

Skillicorn has been watching out for verbal “spin”. He has developed an algorithm that evaluates word usage within the text of a conversation or speech to determine when a person “presents themselves or their content in a way that does not necessarily reflect what they know to be true”.

NS then turns to Branka Zei Pollermann, who combines voice and facial analysis:

“The voice analysis profile for McCain looks very much like someone who is clinically depressed,” says Pollermann… [who] uses auditory analysis software to map seven parameters of a person’s speech, including pitch modulation, volume and fluency, to create a voice profile. She then compares that profile with the speaker’s facial expressions, using as a guide a set of facial expressions mapped out by Ekman, called the Facial Action Coding System, to develop an overall picture of how they express themselves.

This story prompted quite a flurry of comments on the website (some of which are worth reading!).

Skillicorn has posted more about his research and its theoretical basis (James Pennebaker’s LIWC technique – pdf here) at his blog Finding Bad Guys in Data.
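Pennebaker’s LIWC approach boils down to counting how often words from predefined psychological categories occur in a text. The sketch below shows that general idea with tiny invented category lists; the real LIWC dictionaries are licensed and far larger, and Skillicorn’s spin model weights the categories in ways not reproduced here:

```python
import re
from collections import Counter

# Toy LIWC-style category dictionaries. These word lists are invented for
# illustration and are NOT the licensed LIWC dictionaries.
CATEGORIES = {
    "first_person": {"i", "me", "my", "mine"},
    "negative_emotion": {"hate", "worthless", "angry", "sad"},
    "exclusive": {"but", "except", "without"},
}

def category_rates(text):
    """Return each category's share of total words, as LIWC-style percentages."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for w in words:
        for cat, vocab in CATEGORIES.items():
            if w in vocab:
                counts[cat] += 1
    total = len(words)
    return {cat: 100 * counts[cat] / total for cat in CATEGORIES}

rates = category_rates("I never said I was angry, but my statement stands.")
print(rates)  # first-person pronouns dominate this sample
```

Spin-detection models built on such counts compare a speaker’s category rates against a baseline corpus; a single sentence like the one above is far too short for that in practice.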

Ways for repairing trust breakdowns in one-off online interactions

What can you do if you’ve unintentionally offended someone by being or appearing deceptive online? Here’s a recent article on restoring trust online, from the June 2008 issue of International Journal of Human-Computer Studies:

Online offences are generally considered as frequent and intentional acts performed by a member with the aim to deceive others. However, an offence may also be unintentional or exceptional, performed by a benevolent member of the community. This article examines whether a victim’s decrease in trust towards an unintentional or occasional offender can be repaired in an online setting, by designing and evaluating systems to support forgiveness. We study which of three systems enable the victim of a trust breakdown to fairly assess this kind of offender. The three systems are: (1) a reputation system, (2) a reputation system with a built-in apology forum that may display the offender’s apology to the victim and (3) a reputation system with a built-in apology forum that also includes a “forgiveness” component. The “forgiveness” component presents the victim with information that demonstrates the offender’s trustworthiness as judged by the system. We experimentally observe that systems (2) and (3), endorsing apology and supporting forgiveness, allow victims to recover their trust after online offences. An apology from the offender restores the victim’s trust only if the offender cooperates in a future interaction; it does not alleviate the trust breakdown immediately after it occurs. By contrast, the “forgiveness” component restores the victim’s trust directly after the offence and in a subsequent interaction. The applicability of these findings for extending reputation systems is discussed.


Deceptive Self-Presentation in Online Dating Profiles

In the latest issue of Personality and Social Psychology Bulletin, Catalina Toma and colleagues consider how people lie in online dating profiles, and what they lie about. Here’s the abstract:

This study examines self-presentation in online dating profiles using a novel cross-validation technique for establishing accuracy. Eighty online daters rated the accuracy of their online self-presentation. Information about participants’ physical attributes was then collected (height, weight, and age) and compared with their online profile, revealing that deviations tended to be ubiquitous but small in magnitude. Men lied more about their height, and women lied more about their weight, with participants farther from the mean lying more. Participants’ self-ratings of accuracy were significantly correlated with observed accuracy, suggesting that inaccuracies were intentional rather than self-deceptive. Overall, participants reported being the least accurate about their photographs and the most accurate about their relationship information. Deception patterns suggest that participants strategically balanced the deceptive opportunities presented by online self-presentation (e.g., the editability of profiles) with the social constraints of establishing romantic relationships (e.g., the anticipation of future interaction).
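The cross-validation technique in the abstract amounts to comparing what daters report in their profiles with what is independently measured. A minimal sketch of that comparison, with invented numbers:

```python
# Toy version of the paper's cross-validation idea: compare self-reported
# profile attributes with independently measured values. All data invented.
daters = [
    ("A", 180, 175),  # (dater, reported height in cm, measured height in cm)
    ("B", 172, 171),
    ("C", 178, 170),
]

def deviations(records):
    """Signed reported-minus-measured discrepancy per dater
    (positive = height inflated in the profile)."""
    return {who: reported - measured for who, reported, measured in records}

devs = deviations(daters)
mean_inflation = sum(devs.values()) / len(devs)
print(devs, mean_inflation)
```

The “ubiquitous but small” pattern the authors report corresponds to most signed deviations being non-zero but modest, with the direction (inflation vs. deflation) differing by attribute and gender.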


See also:

Psychopathy and verbal indicators of deception in offenders

A new article from Zina Lee, Jessica R. Klaver and Stephen D. Hart reminds us to be careful about assuming that promising results from lie-detection studies of people without serious psychopathology will generalise to a forensic context.

Lee et al wondered whether a tool commonly used for assessing credibility of verbal or written statements could be used to discriminate lying from truth-telling psychopaths. It’s been estimated that up to about 2% of the general population and between 15 and 25% of incarcerated criminals meet the criteria for psychopathy. One of the characteristics of psychopaths is their ability and willingness to deceive others – they are pathological liars who think nothing of manipulating and deceiving others for their own gain. This pathological lying, coupled with superficial charm and inability to feel guilt or remorse, makes a psychopath a particularly dangerous and unpleasant individual.

Previous studies of psychopaths’ deceptive behaviour have reported mixed results, with some suggesting that psychopaths are effective at deceiving others, whilst others report no differences between psychopathic and non-psychopathic individuals. When it comes to verbal behaviour, there is some evidence that psychopaths’ deceptive verbal behaviour may differ from that of non-psychopaths’, being less coherent and less cohesive. Lee et al’s study is, however, the first to investigate psychopathy and verbal indicators of deception in a systematic fashion, using Criteria Based Content Analysis (CBCA).

The researchers asked 45 randomly selected prisoners to tell the truth about the crime for which they had been convicted and to lie about a theft they did not commit. In summary, the authors “found fewer, and different, distinguishing features between true and false accounts among psychopathic and non-psychopathic offenders” (p.81). The results included:

  • More appropriate details provided by psychopathic offenders compared to nonpsychopathic offenders when lying (but no difference when telling the truth)
  • No difference in narrative length between the true and false conditions among psychopathic offenders, and for both groups, truthful narratives were longer than false narratives
  • For psychopathic offenders, spontaneous corrections more frequent when lying compared to telling the truth. This is opposite to the finding with non-criminal populations – according to CBCA, the presence of spontaneous corrections is thought to be associated with credibility.
  • Psychopathic offenders judged less credible than non-psychopathic offenders, even when telling the truth – seven times less likely to be judged credible, to be precise.
  • Narratives produced by psychopathic offenders were judged to be less coherent overall than narratives produced by non-psychopathic offenders.

The study has limitations, the most important being the relatively small sample size, the lack of stakes (the participants had no particular motivation to lie) and the fact that participants were given very little time to prepare their lies. The authors also wonder whether the fact that participants gave uninterrupted narratives might have given an unrealistic impression of psychopaths’ lying ability:

It may be that during an interaction, psychopathic individuals are able to pick up on subtle cues or adjust their speech and presentation based on feedback from the listener. Future studies examining individual variables within the listener (e.g. naive or gullible) or situational factors associated with the interaction (e.g. greater distractions in the environment) may provide further insight into how psychopaths successfully manipulate and deceive others.


See also:

Photo credit: kenchanayo, Creative Commons License

Abstract below the fold.

Continue reading Psychopathy and verbal indicators of deception in offenders

Investigating the Features of Truthful and Fabricated Reports of Traumatic Experiences

Stephen Porter and colleagues have a paper in the April 2007 issue of Canadian Journal of Behavioural Science exploring the differences between truthful and fabricated accounts of traumatic experiences.

They examined the written accounts of students fabricating and giving truthful accounts of traumatic events and found that:

… narratives based on false and genuine traumatic events showed several qualitative differences, some contrasting our predictions. Whereas we predicted that participants would be able to produce fabricated events that appeared to be as credible as truthful accounts, we found that fabricated events were rated lower on plausibility by coders with no knowledge of their actual veracity. This suggests that mistakes in the courtroom may result from liars who are able to effectively distract attention from their stories by manipulating their demeanour and speech (e.g., tone) (p.88).

In other words, lie catchers need to focus on what is being said, and try to avoid being misled by non-verbal behaviour.

In addition, attention to specific types of details in the narratives helped to discriminate honesty from deception. When relating a fabricated experience, participants were unable to provide the same level of contextual information as when relating a genuine experience. They provided fewer time and location details and their reports were abbreviated overall, despite our prediction that they may be more detailed in an attempt to make their trauma stories more credible and to elicit sympathy (p.88).

As far as I can see, the following, from the instructions for the study, is the only attempt to motivate participants:

Your goal in this section is to provide a believable (but fabricated) traumatic memory report. These reports will be shown to legal professionals and students (if you consented to this aspect of the study) in future research for them to determine how credible your experience appears (p.83).

It doesn’t appear from the description of the method that participants had much time to prepare their truthful or fabricated accounts. Perhaps it is not surprising, then, that the results did not conform to the researchers’ predictions. Perhaps real-life malingerers, with the outcome of a court case at stake and time to practise their account, might try harder to make their stories credible, and be better at it?

Participants also completed three widely used measures: the Revised Impact of Event Scale, which measures the level of traumatic stress associated with traumatic experience, the Trauma Symptom Inventory, which measures trauma and posttraumatic stress disorder symptoms, and the Post-Traumatic Stress Disorder Checklist, which also screens for the presence of PTSD symptomology. Analysis of the results suggested that

…genuine and fabricated reports of trauma could be differentiated based on the patterns of traumatic stress or symptoms reported. It was anticipated that symptoms on the three measures of traumatic stress would be exaggerated when participants were fabricating. The results provided strong evidence for this hypothesis (p.88).

Abstract below the fold.


Photo credit: aussie_patches, Creative Commons License

Continue reading Investigating the Features of Truthful and Fabricated Reports of Traumatic Experiences

Deception in cyberspace

An interesting article in the September 2007 issue of International Journal of Human-Computer Studies explores various aspects of deception in online chat.

The researchers were particularly interested in the use of avatars (“a virtual representation of oneself that other users can see or interact with in a virtual environment”, p.770) in deception. Are people influenced in their choice of avatar when they have deception in mind? Are people hiding behind avatars perceived as more trustworthy in conversation than when they are engaged in text-to-text chat without avatars? Does the use of an avatar as a ‘mask’ help liars reduce anxiety about deceiving?

In the study, student participants were randomly assigned to truth telling or lying conditions and some were allowed to choose avatars. They then conducted conversations with each other in text-only or avatar-supported chat rooms.

The researchers found that participants who had been assigned to the ‘deception’ condition were more likely than truth-tellers to choose avatars that looked different from themselves. The authors suggest that “by selecting an avatar that is different from oneself (i.e., ‘putting on a mask’), the deceiver may perceive a greater distance from their conversation partner and a reduced likelihood that the deception can be detected” (p.778).

Supportive of this, the researchers found that in text-to-text chat, deceivers had higher (self-reported) anxiety levels than truth-tellers, but the same effect was not found in the avatar-supported chat. This suggests that hiding one’s identity behind an avatar ‘mask’ may help relieve any anxiety about deceiving your communication partner.

But there was no difference in ratings of trustworthiness of conversation partner, regardless of the use of avatars or whether the partner was in fact a deceiver. In other words, these participants were not able to pick up cues to deception in either conversation environment. Using an avatar may make the deceiver feel better but doesn’t necessarily mean they’ll be perceived as any more – or indeed less – trustworthy than someone who is genuinely telling the truth.

More on the Deception Blog about deception in online communication here, and some links to other scholarly work on ‘techno-treachery’ here.


Photo credit: gexplorer, Creative Commons License

Cues to Deception and Ability to Detect Lies as a Function of Police Interview Styles

If you were a police officer, what sort of interview style would offer you the best chance of detecting whether or not your interviewee was telling lies? Aldert Vrij and his colleagues ran a study to find out:

In Experiment 1, we examined whether three interview styles used by the police, accusatory, information-gathering and behaviour analysis, reveal verbal cues to deceit, measured with the Criteria-Based Content Analysis (CBCA) and Reality Monitoring (RM) methods. A total of 120 mock suspects told the truth or lied about a staged event and were interviewed by a police officer employing one of these three interview styles. The results showed that accusatory interviews, which typically result in suspects making short denials, contained the fewest verbal cues to deceit. Moreover, RM distinguished between truth tellers and liars better than CBCA. Finally, manual RM coding resulted in more verbal cues to deception than automatic coding of the RM criteria utilising the Linguistic Inquiry and Word Count (LIWC) software programme.

In Experiment 2, we examined the effects of the three police interview styles on the ability to detect deception. Sixty-eight police officers watched some of the videotaped interviews of Experiment 1 and made veracity and confidence judgements. Accuracy scores did not differ between the three interview styles; however, watching accusatory interviews resulted in more false accusations (accusing truth tellers of lying) than watching information-gathering interviews. Furthermore, only in accusatory interviews, judgements of mendacity were associated with higher confidence. We discuss the possible danger of conducting accusatory interviews.

In the discussion, Vrij and colleagues summarise:

The present experiment revealed that style of interviewing did not affect on overall accuracy (ability to distinguish between truths or lies) or on lie detection accuracy (ability to correctly identify liars). In fact, the overall accuracy rates were low and did not differ from the level of chance. This study, like so many previous studies (Vrij, 2000), thus shows the difficulty police officers face when discerning truths from lies by observing the suspect’s verbal and nonverbal behaviours.

In other words, if law enforcement officers want to increase their chances of detecting deception, they need to make sure interviewers use an information gathering approach. But simply watching that interview (live or on tape) might not help them decide whether or not the suspect is telling the truth – they may need to subject a transcript to linguistic analysis to give themselves the best chance.

Even if it doesn’t result in better ‘live’ judgements of veracity, an information gathering approach has another advantage for the law enforcement officer: it maximises the number of checkable facts elicited from the suspect, and being able to check a fact against the truth is pretty much the most effective means of uncovering false information. Of course, someone can provide false information without deliberately lying: if they have misremembered something, for instance, or are passing on something that someone else lied to them about. But then the point of any law enforcement interview is to get to the truth, which is a higher goal than simply uncovering a liar, in my opinion.

As always with lab-based studies, there are some limitations. Vrij et al., for instance, acknowledge that “in practice elements of all three styles may well be incorporated in one interview” but explain that “we distinguished between the three styles in our experiments because we can only draw conclusions about the effects of such styles only by examining them in their purest form”.

Further problems, which are difficult to overcome in structured lab settings, arise because participants were assigned randomly to ‘guilty’ (liars) or ‘innocent’ (truth tellers) conditions. In the real world, individuals who are prepared to put themselves in a situation in which they might later have to lie may differ in their ability to lie effectively from those who try to stay out of such situations. And real guilty suspects make a decision about whether they are going to lie (a few confess from the start, others will offer partial or whole untruths). It’s an issue that is open to empirical test: let participants choose whether they want to be in the ‘guilty’ or ‘innocent’ conditions (or have four conditions: guilty choice/guilty no choice/innocent choice/innocent no choice).

Also, in this study the liars were told what lie to tell (as opposed to being able to make one up). Real guilty suspects who decide to lie will presumably choose a lie that they think they stand a good chance of getting away with. In real world conditions, the guilty individual’s perception of the sort of situation they’re in, the evidence against them, the plausible story they can tell to explain away the evidence, and their ability to lie effectively are probably all important.


Photo credit: scottog, Creative Commons License

New interview technique could help police spot deception

…according to a press release from the Economic and Social Research Council (7 June):

Shifting uncomfortably in your seat? Stumbling over your words? Can’t hold your questioner’s gaze? Police interviewing strategies place great emphasis on such visual and speech-related cues, although new research funded by the Economic and Social Research Council and undertaken by academics at the University of Portsmouth casts doubt on their effectiveness. However, the discovery that placing additional mental stress on interviewees could help police identify deception has attracted interest from investigators in the UK and abroad.

[…] A series of experiments involving over 250 student ‘interviewees’ and 290 police officers, the study saw interviewees either lie or tell the truth about staged events. Police officers were then asked to tell the liars from the truth tellers using the recommended strategies. Those paying attention to visual cues proved significantly worse at distinguishing liars from those telling the truth than those looking for speech-related cues.

[…] However, the picture changed when researchers raised the ‘cognitive load’ on interviewees by asking them to tell their stories in reverse order. Professor Aldert Vrij explained: “Lying takes a lot of mental effort in some situations, and we wanted to test the idea that introducing an extra demand would induce additional cues in liars. Analysis showed significantly more non-verbal cues occurring in the stories told in this way and, tellingly, police officers shown the interviews were better able to discriminate between truthful and false accounts.”

Asking an interviewee to tell their story in reverse order is not a new interview technique – it’s one of the techniques used in the Cognitive Interview, more usually deployed to get maximum detail in statements from victims and witnesses.

There are also detailed articles in the UK Times and Daily Telegraph newspapers based on (and building on) this press release.

More details, and links to downloadable reports, are available on the ESRC website via this link.


Photo credit: Bingo_little, Creative Commons License

Lie detector software catches e-mail fibbers

From The Sunday Times, 25 Feb:

People who lie in their e-mails and text messages face being rumbled by new “truth detection” software being developed by researchers.

The academics have analysed tens of thousands of electronic messages and claim to have identified telltale signs that show if a person is being economical with the truth. […] The academics behind the software — which could be commercially available from next year — say it has also attracted the interest of law enforcement agencies.  Police believe it could help trap online fraudsters and make it easier to identify internet paedophiles who pose as youths to groom victims in chat-rooms or on social networking websites.

The lie-detection software is being developed by a team led by Jeff Hancock, director of the Computer-Mediated Communication Research Laboratory at Cornell University in New York state.

Hancock’s website has links to downloadable publications and more details of his research.

Workers ‘prefer lying by e-mail’

From BBC News Online (10 Jan):

Workers do not like lying to colleagues face-to-face and prefer the anonymity of the phone or e-mail, a study says.

About a third of all work communication involves some kind of deception, the study of North West firms found.

Withholding or distorting information and changing the subject of e-mails to confuse colleagues are among the most frequent tricks.

The results of the University of Central Lancashire research were being presented in Bristol on Wednesday.

Haven’t found anything more detailed about the actual study yet, but it does sound a bit more scientific than the Friends Provident study on the same topic that I mentioned a couple of weeks back…

Scientific and unscientific research on ‘techno-treachery’

Friends Provident (a financial services company) has garnered a fair amount of interest in the media with a pop survey of deception behaviour. Here’s how Reuters (28 Dec) covered it:
Gadgets seen as best way to tell white lies

More than four out of five people admit to telling little white lies at least once a day and the preferred way of being “economical with the truth” is to use technology such as cell phones, texts and e-mails, a survey on Thursday said.

The research by UK pollsters 72 Point found that “techno-treachery” was widespread with nearly 75 percent of people saying gadgets like Blackberrys made it easier to fib.

Just over half of respondents said using gadgets made them feel less guilty when telling a lie than doing it face to face, the study on behalf of financial services group Friends Provident found.

You can find the Friends Provident press release here. This seems to be becoming an annual adventure for FP – in December 2005 they announced another survey on lying in a press release entitled “Three in four Britons tell white lies at least once a day“. Some of the topics were the same in the 2005 study as in the 2006 one, and comparing the reported percentage agreements will give you a good idea of how ‘scientific’ these surveys are (or aren’t).

If you would like to read real scientific research on deception and computer-mediated communication, you could take a look at the work of Lina Zhou, who has been researching deception via gadgets and online for the last few years, or Adam Joinson, who has a book coming out in 2007 on “Truth, Trust and Lies on the Internet”. Or try Hancock et al.’s 2004 study of deception via email, phone and face-to-face communication. Here are some references to get you started:

* Hancock, J. T., Thom-Santelli, J., & Ritchie, T. (2004). Deception and Design: The Impact of Communication Technology on Lying Behavior. CHI Letters, 6(1). See also Tasty Research commentary.
* Joinson, A.N. and Dietz-Uhler, B. (2002). Explanations for the perpetration of and reactions to deception in a virtual community. [PDF full text] Social Science Computer Review, 20 (3), 275-289.
* Zhou, L. (2005). An empirical investigation of deception behavior in instant messaging. IEEE Transactions on Professional Communication, 48(2), 147-160.
* Zhou, L., Burgoon, J. K., Zhang, D. S., & Nunamaker, J. F. (2004). Language dominance in interpersonal deception in computer-mediated communication. Computers in Human Behavior, 20(3), 381-402.
* Zhou, L., Burgoon, J. K., & Twitchell, D. P. (2003). A longitudinal analysis of language behavior of deception in e-mail. In Intelligence and Security Informatics, Proceedings (Vol. 2665, pp. 102-110).
* Zhou, L., Burgoon, J. K., Twitchell, D. P., Qin, T. T., & Nunamaker, J. F. (2004). A comparison of classification methods for predicting deception in computer-mediated communication. [PDF full text] Journal of Management Information Systems, 20(4), 139-165.

Griping aside, I do like the term ‘techno-treachery’!

Ten Ways To Tell If Someone Is Lying To You

…according to a recent article on Forbes.com (3 Nov):

In business, politics and romance, it would be nice to know when we’re being lied to. Unfortunately humans aren’t very good at detecting lies. Our natural tendency is to trust others, and for day-to-day, low-stakes interactions, that makes sense. We save time and energy by taking statements like “I saw that movie” or “I like your haircut” at face value. But while it would be too much work to analyze every interaction for signs of deception, there are times when we really need to know if we’re getting the straight story. Maybe a crucial negotiation depends on knowing the truth, or we’ve been lied to and want to find out if it’s part of a pattern.

The article has ten accompanying slides, with suggestions for the would-be lie catcher. Among the sensible suggestions – like monitoring pauses, seeking detail and asking the person to repeat their story – other slides suggest that gaze aversion, sweating and fidgeting are all signs of deception, despite the fact that there is no scientific evidence for such behaviours being more common in liars than truth-tellers. They also suggest:

Look for dilated pupils and a rise in vocal pitch. Psychologists DePaulo and Morris found that both phenomena were more common in liars than truth-tellers.

Both pupil dilation and pitch changes are indications of changes in arousal level (stress cues), and can often be very subtle. Probably not the best cues for a lie-detector to rely on. The Forbes article concludes:

Psychologists who study deception, though, are quick to warn that there is no foolproof method. […] It’s tough to tell the difference between a liar and an honest person who happens to be under a lot of stress.

New research: Analysis of written statements made to police

The latest issue of the International Journal of Speech, Language and the Law carries an article by Susan H. Adams and John P. Jarvis, both from the Federal Bureau of Investigation Academy at Quantico, reporting the results of a study of veracity and deception in written statements to the police.

Various types of statement analysis are used fairly widely to support criminal investigations. Such techniques, which include Criteria Based Content Analysis, Scientific Content Analysis (SCAN) and Reality Monitoring, have rather variable support from empirical research. In particular, Adams and Jarvis argue:

Few definitive studies focusing on written statements provide empirical support for the identification of indicators of deception in realistic, stressful settings. This study responds to this void by examining written statements given to police regarding criminal incidents.

The authors analysed 60 written statements made by adult witnesses, victims or suspects in the course of real criminal investigations, focusing on indications of equivocation and negation; length of prologue; unique sensory details; emotional details and quoted discourse. The results:

Support was found for a positive relationship between deception and the attributes of equivocation, negation and relative length of the prologue. A positive relationship was also found between veracity and unique sensory details. Weak support was found for a relationship between veracity and emotions in the epilogues. Using a logistic regression model, 82.1 per cent of the statements were correctly classified as containing veracity or deception. The most significant predictor of veracity was unique sensory details, while the most significant predictor of deception was relative length of the prologues.
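The general approach Adams and Jarvis describe – count candidate indicators in each statement, then combine them via a logistic regression to classify it as truthful or deceptive – can be sketched roughly in code. Note that the word lists, feature set and coefficients below are invented purely for illustration; they are not the published model, and a real analysis would also code features a word list can’t capture (prologue length, unique sensory details, quoted discourse).

```python
import math
import re

# Toy indicator word lists (hypothetical, for illustration only).
EQUIVOCATION = {"maybe", "perhaps", "possibly", "somewhat", "believe", "think"}
NEGATION = {"not", "never", "didn't", "don't", "couldn't", "can't"}
SENSORY = {"saw", "heard", "smelled", "felt", "tasted"}

def features(statement: str) -> dict:
    """Score a statement on the rate of each indicator type."""
    words = re.findall(r"[a-z']+", statement.lower())
    n = max(len(words), 1)
    return {
        "equivocation": sum(w in EQUIVOCATION for w in words) / n,
        "negation": sum(w in NEGATION for w in words) / n,
        "sensory": sum(w in SENSORY for w in words) / n,
    }

def p_truthful(statement: str) -> float:
    """Logistic model: invented coefficients, signed to match the
    reported findings (sensory detail predicts veracity; equivocation
    and negation predict deception)."""
    f = features(statement)
    z = 0.5 + 8.0 * f["sensory"] - 10.0 * f["equivocation"] - 6.0 * f["negation"]
    return 1.0 / (1.0 + math.exp(-z))

hedged = "I think maybe someone possibly took the bag, I'm not sure."
direct = "I saw the man take the bag and heard him run down the stairs."
print(p_truthful(hedged) < p_truthful(direct))  # True: hedged scores lower
```

In the study itself, of course, the coefficients were fitted to the 60 real statements rather than hand-picked, which is what yields the reported 82.1 per cent classification rate.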


Criteria-Based Content Analysis: An empirical test of its underlying processes

The latest issue of Psychology, Crime and Law features an article by Aldert Vrij and Sam Mann from Portsmouth University (UK) on Criteria-Based Content Analysis.

Here’s the abstract:

Criteria-Based Content Analysis (CBCA) is a tool to assess the veracity of written statements, and is used as evidence in criminal courts in several countries in the world. CBCA scores are expected to be higher for truth tellers than for liars. The underlying assumption of CBCA is that (i) lying is cognitively more difficult than truth telling, and (ii) that liars are more concerned with the impression they make on others than truth tellers. However, these assumptions have not been tested to date. In the present experiment 80 participants (undergraduate students) lied or told the truth about an event. Afterwards, they completed a questionnaire measuring “cognitive load” and “tendency to control speech”. The interviews were transcribed and coded by trained CBCA raters. In agreement with CBCA assumptions, (i) truth tellers obtained higher scores than liars, (ii) liars experienced more cognitive load than truth tellers, and (iii) liars tried harder to control their speech. However, cognitive load and speech control were not correlated with CBCA scores in the predicted way.

Yes, I know I’m featuring rather a lot from Vrij and his colleagues, but they publish so darn frequently!


We have evolved to lie because it is an effective strategy for human survival, says a British psychologist

Ubiquitous British psychologist Richard Wiseman gave a talk earlier this month in Kuala Lumpur, organised by the British Council, entitled “How to Catch a Liar”, reports Malaysian news site (10 July).

Although lying is actually difficult to do convincingly, we’ve evolved to lie because it is an effective strategy for human survival, he said. […] “Some lying helps bond society together, although some people may manipulate it,” he said.

[…] Wiseman’s research, conducted over 12 years, has found visual signals to be the least revealing about when a person is lying because there is a decrease in gestures and body movement. He says the linguistic approach is the most accurate way to detect a liar. When somebody is lying, there is an increase in pauses, speech errors and response latency and a decrease in speech rate and emotional involvement.

Paraverbal indicators of deception: a meta-analytic synthesis

In the latest edition of Applied Cognitive Psychology, Siegfried Sporer and Barbara Schwandt present a meta-analysis of paraverbal cues to deception. The article also serves as a pretty good critique of previous deception studies. As the authors explain, meaningful meta-analyses are not as easy to do as they perhaps should be, because of the huge variation in experimental paradigms in deception studies. In particular “very little is still known about high stakes lies” and “very few researchers were successful in or even thought about creating unsanctioned lie conditions” (p442). From the research that has been done, the authors conclude that “there are considerable differences in behaviour when individuals lie with or without permission and when they are motivated to deceive successfully or not” (p441).


Follow the link for the abstract on the publisher’s site.

New article dealing with online deception

A Comparison of Deception Behavior in Dyad and Triadic Group Decision Making in Synchronous Computer-Mediated Communication
Lina Zhou and Dongsong Zhang
Published in the April 06 issue of Small Group Research 27(2)

Here’s an extract from the abstract:

This study is the first attempt to investigate whether deceivers behave differently in dyads and triadic groups in synchronous computer-mediated communication. […] The empirical results revealed that cues to deception were contingent on the group size. […] This study raises a broad yet critical issue of group effect on deception behavior. It has significant implications for deception detection in computer-mediated communication.