“The more sophisticated the animal, it seems, the more commonplace the con games”

New York Times reports on deception in the animal kingdom in A Highly Evolved Propensity for Deceit (22 Dec)

…Deceitful behavior has a long and storied history in the evolution of social life, and the more sophisticated the animal, it seems, the more commonplace the con games, the more cunning their contours.

In a comparative survey of primate behavior, Richard Byrne and Nadia Corp of the University of St. Andrews in Scotland found a direct relationship between sneakiness and brain size. The larger the average volume of a primate species’ neocortex — the newest, “highest” region of the brain — the greater the chance that the monkey or ape would pull a stunt like this one described in The New Scientist: a young baboon being chased by an enraged mother intent on punishment suddenly stopped in midpursuit, stood up and began scanning the horizon intently, an act that conveniently distracted the entire baboon troop into preparing for nonexistent intruders.

There’s much more, including tales of deception by dolphins, butterflies, chimps and college students.

Could brain scans ever be safe evidence?

New Scientist (3 Oct) asks: Could brain scans ever be safe evidence?

DONNA insists that she met friends for lunch on the afternoon of 25 January 2008 and did not violate a restraining order against Marie. But Marie told police that Donna broke the terms of the order by driving up to her car while it was stopped at a traffic light, yelling and cursing, and then driving off.

A polygraph test proved unsatisfactory: every time Marie’s name was mentioned Donna’s responses went sky-high. But when Donna approached Cephos of Tyngsboro, Massachusetts, for an fMRI scan, which picks up changes in blood flow and oxygenation in the brain, it was a different story.

“Her results indicated that she was telling the truth about the January 25 incident,” says Steven Laken of Cephos, who maintains that when people are lying, more areas of the brain “light up” than when they are telling the truth (see the scans of Donna’s brain).

Unfortunately the rest of the article is restricted to subscribers.

True and False Memories – a new paper

No time to blog properly, but just wanted to draw your attention to a new paper (download via SSRN) on separating true from false memories. Here’s the abstract:

Many people believe that emotional memories (including those that arise in therapy) are particularly likely to represent true events because of their emotional content. But is emotional content a reliable indicator of memory accuracy? The current research assessed the emotional content of participants’ pre-existing (true) and manipulated (false) memories for childhood events. False memories for one of three emotional childhood events were planted using a suggestive manipulation and then compared, along several subjective dimensions, with other participants’ true memories. On most emotional dimensions (e.g., how emotional was this event for you?), true and false memories were indistinguishable. On a few measures (e.g., intensity of feelings at the time of the event), true memories were more emotional than false memories in the aggregate, yet true and false memories were equally likely to be rated as uniformly emotional. These results suggest that even substantial emotional content may not reliably indicate memory accuracy.

Hat tip to the ever-interesting Neuroethics and Law Blog.

New research: Outsmarting the Liars: The Benefit of Asking Unanticipated Questions

In press in the journal Law and Human Behavior, Aldert Vrij and colleagues test a method of questioning that (in lab situations) exposes liars with an up to 80% success rate. Here’s the abstract:

We hypothesised that the responses of pairs of liars would correspond less with each other than would responses of pairs of truth tellers, but only when the responses are given to unanticipated questions. Liars and truth tellers were interviewed individually about having had lunch together in a restaurant. The interviewer asked typical opening questions which we expected the liars to anticipate, followed by questions about spatial and/or temporal information which we expected suspects not to anticipate, and also a request to draw the layout of the restaurant. The results supported the hypothesis, and based on correspondence in responses to the unanticipated questions, up to 80% of liars and truth tellers could be correctly classified, particularly when assessing drawings.
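The paper describes an interviewing strategy rather than an algorithm, but the scoring idea behind the 80% figure – measure how well a pair’s answers to the unanticipated questions correspond, and treat low correspondence as a sign of lying – is easy to sketch. Here is a minimal, purely illustrative Python sketch; the overlap measure and the threshold are my assumptions, not the authors’ method:

```python
import re

def correspondence(answer_a: str, answer_b: str) -> float:
    """Crude overlap between two interviewees' answers: the proportion of
    shared words (Jaccard similarity), from 0 (no overlap) to 1 (identical)."""
    words_a = set(re.findall(r"[a-z']+", answer_a.lower()))
    words_b = set(re.findall(r"[a-z']+", answer_b.lower()))
    if not words_a or not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)

def classify_pair(unanticipated_answers, threshold=0.3):
    """Label a pair 'truth tellers' if their answers to the unanticipated
    (spatial/temporal) questions correspond well enough, else 'liars'."""
    scores = [correspondence(a, b) for a, b in unanticipated_answers]
    return "truth tellers" if sum(scores) / len(scores) >= threshold else "liars"

# A pair who really had lunch together tend to agree even on details
# they could not have rehearsed, such as where things were in the room.
pair = [("we sat by the window near the till", "by the window, close to the till"),
        ("the waiter stood to my left", "the waiter was on his left")]
print(classify_pair(pair))  # -> 'truth tellers'
```

In the study the strongest results came from comparing the pairs’ drawings of the restaurant layout, which a word-overlap toy like the one above obviously does not capture.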

At the moment you can download the article for free here (pdf).

New research: Recording lying, cheating, and defiance in an Internet Based Simulated Environment

Over in the latest issue of Computers in Human Behavior, Sara Russell and Lawrence James report on a study of lying and cheating in a virtual environment. This paper is less about lying and cheating per se, however, and more about a new method for eliciting and recording such behaviour. Why does this matter? The authors explain:

… most psychological research being conducted on the Internet includes simply changing the delivery method of questionnaires from paper-and-pencil to an electronic equivalence. Although this transition will offer advantages over traditional methods and is a step forward, it does not embrace the full capabilities of an Internet environment as a tool. The authors found no reports of Internet research being conducted in which behavior was both elicited and recorded in a pre-defined, controlled Internet environment…

Other than highly complex and expensive software creations of virtual realities such as flight training simulations, research is lacking in utilizing electronically generated environment such as an Internet Based Simulated Environment (IBSE) to mimic experiences that can elicit and record specific pre-defined behaviors of interest to scientists. More specifically, research utilizing such technology to record behavioral manifestations of human personality traits is needed. This need for an electronic environment such as an IBSE will be driven by the inherent difficulty of utilizing direct observation in a real or laboratory setting which often leads researchers to rely on self-reports of behavior. (p.2015)

The researchers asked participants to complete (on paper) the Conditional Reasoning Test of Aggression (CRT-A), a test that measures the tendency to “respond to frustrating situations in an aggressive way” (p.2016) and then directed them to an online quiz that was designed “to initially and continually cause frustrating situations to occur from the start to the end of user experience” (p.2017). These included patronising instructions (“use your keyboard to type your username”), and random ‘errors’ that were not the participant’s fault (“Page cannot be found, you performed an incorrect operation. Click here to retrieve your quiz”). (The authors said that when constructing the test they drew inspiration from their own experiences – I think many of us will identify with the sort of situations they created!)

In response, participants were given the opportunity to lie (about whether or not they had read a detailed set of instructions) or cheat (when a link they clicked took them apparently to an administrator site that allowed them to change their scores). Defiant participants (those who, for instance, truthfully said they had not read the instructions but nevertheless started the quiz) were also logged.
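The methodological point is that these behaviours were elicited and logged automatically by the environment itself rather than self-reported afterwards. The paper does not include its implementation, but a minimal, hypothetical sketch of that kind of covert server-side event logging might look something like this (participant IDs and event names are invented for illustration):

```python
from datetime import datetime, timezone

# participant id -> list of event records, appended covertly as they occur
behaviour_log = {}

def log_event(participant_id, event, detail=""):
    """Record a behavioural event such as lying about having read the
    instructions, using the decoy 'admin' link to change a score, or
    starting the quiz after refusing to read the instructions."""
    behaviour_log.setdefault(participant_id, []).append({
        "event": event,
        "detail": detail,
        "time": datetime.now(timezone.utc).isoformat(),
    })

# Example: a participant claims to have read instructions the server knows
# were never opened, then clicks the fake 'administrator' link.
log_event("p042", "lied_about_instructions", "claimed read; page never requested")
log_event("p042", "cheated", "changed own score via decoy admin page")
print(behaviour_log["p042"])
```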

In all, a quarter of the 191 participants in this study cheated, around 11% were defiant and 7% lied; just under 40% performed at least one of these behaviours. Those who received high scores on the CRT-A were significantly more likely to engage in one or more of these behaviours.

So what? Well, the researchers rightly note that there are problems with this research, the most significant being that the method cannot measure offline behaviours such as “yelling, cursing, or possibly physical aggression towards the computer” (p.2023), and that it does not provide an objective – or even subjective – measure of ‘frustration’ (we all have different thresholds and experience different levels of frustration). But the research does show that it is possible to conduct internet-based research in a more creative and productive way than simply transposing questionnaires from paper to a website. In particular, the study demonstrated a method for moving away from reliance on self-reports and instead easily (and covertly) measuring actual occurrences of particular (potentially socially undesirable) behaviours, which has applications in a range of scenarios, not just deception-related ones.

Facial expressions and verbal cues to deception

Hat tip to Neuroethics and Law blog for pointing us towards an article in New Scientist (17 Sept) about lies and spin in the current US Presidential campaign.

NS briefly touches on Paul Ekman’s work on microfacial expressions before devoting more attention to the work of David Skillicorn:

Skillicorn has been watching out for verbal “spin”. He has developed an algorithm that evaluates word usage within the text of a conversation or speech to determine when a person “presents themselves or their content in a way that does not necessarily reflect what they know to be true”.

NS then turns to Branka Zei Pollermann, who combines voice and facial analysis:

“The voice analysis profile for McCain looks very much like someone who is clinically depressed,” says Pollermann… [who] uses auditory analysis software to map seven parameters of a person’s speech, including pitch modulation, volume and fluency, to create a voice profile. She then compares that profile with the speaker’s facial expressions, using as a guide a set of facial expressions mapped out by Ekman, called the Facial Action Coding System, to develop an overall picture of how they express themselves.

This story prompted quite a flurry of comments on the website (some of which are worth reading!).

Skillicorn has posted more about his research and its theoretical basis (James Pennebaker’s LIWC technique; pdf here) at his blog Finding Bad Guys in Data.
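Skillicorn’s actual model isn’t described in detail in the NS piece, but the Pennebaker-style starting point – count how often words from particular categories (first-person pronouns, ‘exclusive’ words such as “but” and “except”, negative-emotion words) occur, then combine the rates into a score – is straightforward to illustrate. A toy Python sketch; the word lists and weighting below are illustrative assumptions, not Skillicorn’s or Pennebaker’s published values:

```python
import re

# Tiny illustrative word categories (real LIWC dictionaries are far larger).
CATEGORIES = {
    "first_person": {"i", "me", "my", "mine", "we", "our"},
    "exclusive":    {"but", "except", "without", "exclude"},
    "negative":     {"hate", "worthless", "enemy", "fail"},
}

def category_rates(text):
    """Rate of each word category per 100 words of the text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    return {name: 100 * sum(w in vocab for w in words) / total
            for name, vocab in CATEGORIES.items()}

def spin_score(text):
    """Higher when first-person and exclusive words are scarce and
    negative-emotion words frequent (the weights here are arbitrary,
    chosen only to illustrate the direction of each cue)."""
    r = category_rates(text)
    return r["negative"] + (5 - r["first_person"]) + (5 - r["exclusive"])

print(spin_score("We will not fail; our plan has no enemy but doubt."))
```

The pattern the deception-modelling literature points to is fewer first-person pronouns, fewer exclusive words and more negative-emotion words in deceptive or ‘spun’ text, which is the direction the toy score above rewards.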

Deception in the news

I’ve been lax posting on the blogs recently, I know (real life interferes with blogging). Consider this a catch-up post with some of the deception-related issues hitting the news stands over the last few weeks.

Polygraphing sex offenders gains momentum in the UK: A new pilot scheme will polygraph test sex offenders to see if they “are a risk to the public or are breaking the terms of their release from jail”, according to The Times (20 Sept 2008).

Brain fingerprinting in the news again: Brain test could be next polygraph (Seattle Post-Intelligencer, 14 Sept):

A Seattle scientist who has developed an electronic brain test that he says could improve our ability to force criminals to reveal themselves, identify potential terrorists and free those wrongly convicted may have finally broken through the bureaucratic barriers that he believes have served to stifle adoption of the pioneering technique.

“There seems to be a renewed surge of interest in this by the intelligence agencies and the military,” said Larry Farwell, neuroscientist and founder of Brain Fingerprinting Laboratories based at the Seattle Science Foundation.

Not-brain-fingerprinting deception detection brain scan procedure isn’t scientific, according to a well-qualified panel (The Hindu, 8 Sept). India’s Directorate of Forensic Sciences chooses not to accept the panel’s findings:

The Directorate of Forensic Sciences, which comes under the Union Ministry of Home, will not accept the findings of the six-member expert committee that looked into brain mapping and its variant “brain electrical oscillation signature (BEOS) profiling” on the ground that the committee has been dissolved.

The six-member technical peer review committee, headed by National Institute of Mental Health and Neuro Sciences (NIMHANS) Director D. Nagaraja, started work in May 2007. The panel had concluded that the two procedures were unscientific and had recommended against their use as evidence in court or as an investigative tool.

  • See also: more on “BEOS profiling” in The Times of India (21 July) which claims that “This brain test maps the truth”.

Perhaps the answer can be found with infrared light? New Scientist (22 Sept) reports on a patent application to develop a new type of brain scanning lie detection technology:

Scott Bunce, at Drexel University’s College of Medicine in Philadelphia, thinks a better solution [to the problem of detecting lies] is to send near-infrared light through the scalp and skull into the brain and see how much is reflected back. And he has designed a special headband that does just that. The amount of reflected light is dependent on the levels of oxygen in the blood, which in turn depends on how active the brain is at that point.

This, he says, gives a detailed picture of real-time activity within the brain that can be used to determine whether the subject is lying. The technique is both cheaper and easier to apply than fMRI and gives a higher resolution than an EEG. …Of course, nobody knows whether brain activity can reliably be decoded to reveal deception, but that’s another question.
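For context (this is standard near-infrared spectroscopy background, not something from the article): techniques like Bunce’s typically relate the measured change in light attenuation to changes in oxy- and deoxyhaemoglobin concentration via the modified Beer-Lambert law, roughly:

```latex
\Delta OD_{\lambda} = \big(\varepsilon^{\lambda}_{\mathrm{HbO_2}}\,\Delta[\mathrm{HbO_2}]
                    + \varepsilon^{\lambda}_{\mathrm{HbR}}\,\Delta[\mathrm{HbR}]\big)\cdot d \cdot \mathrm{DPF}_{\lambda}
```

Here the left-hand side is the measured change in optical density at a given wavelength, the epsilon terms are the extinction coefficients of oxy- and deoxyhaemoglobin, d is the source-detector separation and DPF is the differential pathlength factor. Measuring at two or more wavelengths lets both concentration changes be estimated; that, roughly, is the ‘activity’ signal the headband would be picking up.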

Adults easily fooled by children’s false denials

University of California – Davis press release (17 August):

Adults are easily fooled when a child denies that an actual event took place, but do somewhat better at detecting when a child makes up information about something that never happened, according to new research from the University of California, Davis….

“The large number of children coming into contact with the legal system – mostly as a result of abuse cases – has motivated intense scientific effort to understand children’s true and false reports,” said UC Davis psychology professor and study author Gail S. Goodman. “The seriousness of abuse charges and the frequency with which children’s testimony provides central prosecutorial evidence makes children’s eyewitness memory abilities important considerations. Arguably even more important, however, are adults’ abilities to evaluate children’s reports.”

In an effort to determine if adults can discern children’s true from false reports, Goodman and her co-investigators asked more than 100 adults to view videotapes of 3- and 5-year-olds being interviewed about “true” and “false” events. For true events, the children either accurately confirmed that the event had occurred or inaccurately denied that it had happened. For “false” events – ones that the children had not experienced – they either truthfully denied having experienced them or falsely reported that they had occurred.

Afterward, the adults were asked to evaluate each child’s veracity. The adults were relatively good at detecting accounts of events that never happened. But the adults were apt to mistakenly believe children’s denials of actual events.

“The findings suggest that adults are better at detecting false reports than they are at detecting false denials,” Goodman said. “While accurately detecting false reports protects innocent people from false allegations, the failure to detect false denials could mean that adults fail to protect children who falsely deny actual victimization.”

Polygraph reasoning applied to spotting terrorists…

Remember that the rationale behind the polygraph is that (with an appropriate questioning regime) guilty people are assumed to have physiological responses that differ from innocents? Well, the new “anxiety-detecting machines” that the DHS hopes might one day spot terrorists seem to work on the same basis. Here’s the report from USA Today (18 Sept):

A scene from the airport of the future: A man’s pulse races as he walks through a checkpoint. His quickened heart rate and heavier breathing set off an alarm. A machine senses his skin temperature jumping. Screeners move in to question him. Signs of a terrorist? Or simply a passenger nervous about a cross-country flight?

It may seem Orwellian, but on Thursday, the Homeland Security Department showed off an early version of physiological screeners that could spot terrorists. The department’s research division is years from using the machines in an airport or an office building — if they even work at all. But officials believe the idea could transform security by doing a bio scan to spot dangerous people.

Critics doubt such a system can work. The idea, they say, subjects innocent travelers to the intrusion of a medical exam.

According to the news report, some effort is going into testing the equipment, though if the details are to be believed the research is still at a very early stage:

To pinpoint the physiological reactions that indicate hostile intent, researchers… recruited 140 local people with newspaper and Internet ads seeking testers in a “security study.” Each person receives $150.

On Thursday, subjects walked one by one into a trailer with a makeshift checkpoint. A heat camera measured skin temperature. A motion camera watched for tiny skin movements to measure heart and breathing rates. As a screener questioned each tester, five observers in another trailer looked for sharp jumps on the computerized bands that display the person’s physiological characteristics.

Some subjects were instructed in advance to try to cause a disruption when they got past the checkpoint, and to lie about their intentions when being questioned. Those people’s physiological responses are being used to create a database of reactions that signal someone may be planning an attack. More testing is planned for the next year.

The questioning element does make it sound like what is being developed is a ‘remote’ polygraph.
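Whatever one makes of the premise, the part the human observers were doing by eye – watching for “sharp jumps on the computerized bands” – is the easy bit to automate. A minimal, hypothetical sketch of flagging sudden jumps in a physiological trace by z-scoring each new sample against a recent baseline (the window size and threshold are arbitrary illustrative choices, not anything DHS has published):

```python
import statistics

def flag_jumps(samples, window=20, z_threshold=3.0):
    """Return indices where a sample jumps sharply above its recent baseline.
    `samples` can be any physiological trace (heart rate, skin temperature...)."""
    flagged = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mean = statistics.mean(baseline)
        sd = statistics.stdev(baseline) or 1e-9  # avoid dividing by zero
        if (samples[i] - mean) / sd > z_threshold:
            flagged.append(i)
    return flagged

# Example: a resting heart-rate trace with one abrupt jump at the end.
trace = [72, 71, 73, 72, 74, 73, 72, 71, 73, 72,
         74, 73, 72, 73, 72, 74, 73, 72, 73, 72, 95]
print(flag_jumps(trace))  # -> [20]
```

Whether such a jump signals hostile intent rather than ordinary pre-flight nerves is, of course, exactly the critics’ objection.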

Hat tip to Crim Prof Blog.

UPDATE: Lots of places picking this up all over the www. New Scientist has a post on the same topic here, and an earlier article on the system here. The Telegraph’s report adds some new information.

India’s Novel Use of Brain Scans in Courts Is Debated

According to a report in the New York Times (14 Sept), an Indian judge has taken the results of a brain scan as “proof that the [murder] suspect’s brain held ‘experiential knowledge’ about the crime that only the killer could possess”, and passed a life sentence.

The Brain Electrical Oscillations Signature test, or BEOS, was developed by Champadi Raman Mukundan, a neuroscientist who formerly ran the clinical psychology department of the National Institute of Mental Health and Neuro Sciences in Bangalore. His system builds on methods developed at American universities by other scientists, including Emanuel Donchin, Lawrence A. Farwell and J. Peter Rosenfeld.

Neuroethics and Law Blog comments, as does Dr Lawrence Farwell (inventor of the controversial ‘Brain Fingerprinting’ technique, which bears a passing resemblance to the BEOS test used in India).

Scary stuff.

Lie Detector Technology in court – seduced by neuroscience?

Jeffrey Bellin from the California Courts of Appeal has a paper forthcoming in Temple Law Review on the legal issues involved in deploying new lie detection technology – specifically fMRI technology – in real-world courtroom settings (hat tip to the Neuroethics and Law blog).

Bellin examines the ‘scientific validity’ requirements and argues that the research has progressed to the point where fMRI evidence in deception detection issues will soon reach the standard required to be admissible under the Daubert criteria. However, Bellin’s key issue with using fMRI evidence in court is not on scientific but on legal grounds: he claims that fMRI evidence would fall foul of the hearsay prohibition. He explains that “The hearsay problem arises because lie detector evidence consists of expert analysis of out-of-court statements offered for their truth (i.e., hearsay) and is consequently inadmissible under Federal Rule of Evidence 801 absent an applicable hearsay exception” (p.102).

I am not a lawyer so can’t really comment on the hearsay issue raised by Bellin, except to say that it’s an interesting observation and not one I’ve heard before. I feel better placed to assess his analysis that fMRI technology is only a small step from reaching the Daubert standard. In this Bellin is – in my judgement – way off-beam. His argument runs something like this:

1. The US Government has poured lots of money into lie detection technologies (Bellin quotes a Time magazine guess-timate of “tens of millions to hundreds of millions of dollars” – an uncorroborated rumour, not an established fact).

2. fMRI is “the most promising of the emerging new lie detection technologies” (p.106) because “brain activities will be more difficult to suppress than typical stress reactions measured by traditional polygraph examinations, [so] new technologies like fMRI show great promise for the development of scientifically valid lie detectors” (p.106).

3. Thus, “The infusion of money and energy into the science of lie detection coupled with the pace of recent developments in that science suggest that it is only a matter of time before lie detector evidence meets the Daubert threshold for scientific validity.” (p.107).

And the references he provides for this analysis? Steve Silberman’s “Don’t Even Think About Lying” in Wired Magazine from 2006, and a piece in Time magazine the same year entitled “How to Spot a Liar”, by Jeffrey Kluger and Coco Masters. Now both of these articles are fine pieces of journalism, but they hardly constitute good grounds for Bellin’s assertion that fMRI technology is almost ready to be admitted in court. (And if you’re going to use journalistic pieces as references, can I recommend, as a much better source, an excellent article: “Duped: Can brain scans uncover lies?” by Margaret Talbot from The New Yorker [July 2, 2007].)

Let’s just remind ourselves of the Daubert criteria. To paraphrase the comprehensive Wikipedia page, before expert testimony can be entered into evidence it must be relevant to the case at hand, and the expert’s conclusions must be scientific. This latter condition means that a judge deciding on whether to admit expert testimony based on a technique has to address five points:

1. Has the technique been tested in actual field conditions (and not just in a laboratory)?
2. Has the technique been subject to peer review and publication?
3. What is the known or potential rate of error? Is it zero, or low enough to be close to zero?
4. Do standards exist for the control of the technique’s operation?
5. Has the technique been generally accepted within the relevant scientific community?

As far as fMRI for lie detection is concerned I think the answers are:

  1. No, with only a couple of exceptions.
  2. Yes, though there is a long way to go before the technique has been tested in relevant conditions.
  3. In some lab conditions, accuracy rates reach 95%. But what about in real life situations? We don’t have enough research to say.
  4. There are no published or agreed standards for undertaking deception detection fMRI scans.
  5. No, the arguments are still raging!

As an example of point 5, one of the crucial arguments is over the interpretation of the results of fMRI experiments (Logothetis, 2008). Mind Hacks had a terrific article a few weeks ago in which they summarise the key issue:

It starts with this simple question: what is fMRI measuring? When we talk about imaging experiments, we usually say it measures ‘brain activity’, but you may be surprised to know that no-one’s really sure what this actually means.

And as Jonah Lehrer points out more recently:

[…T]he critical flaw of such studies is that they neglect the vast interconnectivity of the brain… Because large swaths of the cortex are involved in almost every aspect of cognition – even a mind at rest exhibits widespread neural activity – the typical fMRI image, with its highly localized spots of color, can be deceptive. The technology makes sense of the mind by leaving lots of stuff out – it attempts to sort the “noise” from the “signal” – but sometimes what’s left out is essential to understanding what’s really going on.

Bellin is not alone in perhaps being seduced by the fMRI myth, as two recent studies (McCabe & Castel, 2007; Weisberg et al., 2008) demonstrate very nicely. McCabe and Castel showed that participants judged news stories as ‘more scientific’ when accompanied by images of brain scans than without, and Weisberg et al.’s participants rated bad explanations of psychological phenomena as more scientifically sound when they included a spurious neuroscience reference. Why are people so beguiled by the blobs in the brain? Here are McCabe and Castel, quoted in the BPS Research Digest Blog:

McCabe and Castel said their results show people have a “natural affinity for reductionistic explanations of cognitive phenomena, such that physical representations of cognitive processes, like brain images, are more satisfying, or more credible, than more abstract representations, like tables or bar graphs.”

Ways for repairing trust breakdowns in one-off online interactions

What can you do if you’ve unintentionally offended someone by being or appearing deceptive online? Here’s a recent article on restoring trust online, from the June 2008 issue of International Journal of Human-Computer Studies:

Online offences are generally considered as frequent and intentional acts performed by a member with the aim to deceive others. However, an offence may also be unintentional or exceptional, performed by a benevolent member of the community. This article examines whether a victim’s decrease in trust towards an unintentional or occasional offender can be repaired in an online setting, by designing and evaluating systems to support forgiveness. We study which of three systems enable the victim of a trust breakdown to fairly assess this kind of offender. The three systems are: (1) a reputation system, (2) a reputation system with a built-in apology forum that may display the offender’s apology to the victim and (3) a reputation system with a built-in apology forum that also includes a “forgiveness” component. The “forgiveness” component presents the victim with information that demonstrates the offender’s trustworthiness as judged by the system. We experimentally observe that systems (2) and (3), endorsing apology and supporting forgiveness, allow victims to recover their trust after online offences. An apology from the offender restores the victim’s trust only if the offender cooperates in a future interaction; it does not alleviate the trust breakdown immediately after it occurs. By contrast, the “forgiveness” component restores the victim’s trust directly after the offence and in a subsequent interaction. The applicability of these findings for extending reputation systems is discussed.
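The three systems are full experimental platforms, but the basic design difference between them – a bare reputation score, a score plus a displayed apology, and a score plus apology plus system-generated evidence of trustworthiness – can be caricatured in a few lines of code. The class below is purely illustrative; the field names and rendering are my assumptions, not the authors’ implementation:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MemberRecord:
    ratings: List[int] = field(default_factory=list)  # e.g. +1 / -1 from past partners
    apology: Optional[str] = None                      # shown in systems (2) and (3)
    trust_evidence: Optional[str] = None               # shown only in system (3)

    def reputation(self) -> float:
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

    def render_profile(self, system: int) -> str:
        """What a prospective interaction partner sees under system 1, 2 or 3."""
        lines = [f"Reputation: {self.reputation():+.2f}"]
        if system >= 2 and self.apology:
            lines.append(f"Apology from member: {self.apology}")
        if system >= 3 and self.trust_evidence:
            lines.append(f"System assessment: {self.trust_evidence}")
        return "\n".join(lines)

offender = MemberRecord(
    ratings=[1, 1, 1, -1],  # one bad interaction among otherwise good ones
    apology="Sorry, I misread the deadline and sent the item late.",
    trust_evidence="47 of 48 previous exchanges completed as promised.",
)
print(offender.render_profile(system=3))
```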

Deceptive Self-Presentation in Online Dating Profiles

In the latest issue of Personality and Social Psychology Bulletin, Catalina Toma and colleagues consider how people lie in online dating profiles, and what they lie about. Here’s the abstract:

This study examines self-presentation in online dating profiles using a novel cross-validation technique for establishing accuracy. Eighty online daters rated the accuracy of their online self-presentation. Information about participants’ physical attributes was then collected (height, weight, and age) and compared with their online profile, revealing that deviations tended to be ubiquitous but small in magnitude. Men lied more about their height, and women lied more about their weight, with participants farther from the mean lying more. Participants’ self-ratings of accuracy were significantly correlated with observed accuracy, suggesting that inaccuracies were intentional rather than self-deceptive. Overall, participants reported being the least accurate about their photographs and the most accurate about their relationship information. Deception patterns suggest that participants strategically balanced the deceptive opportunities presented by online self-presentation (e.g., the editability of profiles) with the social constraints of establishing romantic relationships (e.g., the anticipation of future interaction).
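The cross-validation technique is conceptually simple: compare the profile claim with the measured value, then check whether bigger deviations go with lower self-rated accuracy (which is what would suggest the misreporting is deliberate rather than self-deceptive). A rough, invented-numbers illustration of that calculation in Python; the data and measures below are not the study’s:

```python
# Invented example data: (profile height, measured height, self-rated accuracy 1-5)
height_cm = [(183, 178, 3), (170, 169, 5), (175, 172, 4), (165, 165, 5)]

deviations = [abs(profile - measured) for profile, measured, _ in height_cm]
self_ratings = [rating for _, _, rating in height_cm]

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# A negative correlation here means bigger deviations come with lower
# self-rated accuracy, i.e. people know when their profile is off.
print(deviations, pearson_r(deviations, self_ratings))
```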

Increasing Cognitive Load to Facilitate Lie Detection: The Benefit of Recalling an Event in Reverse Order

Continuing with their research on the ‘cognitive load hypothesis’, Aldert Vrij and colleagues from Portsmouth University report on a technique for facilitating lie detection – telling the story in reverse order. This article appears in the latest issue of Law and Human Behavior, although the study featured extensively in the press a few months ago (see here).

Here’s the abstract:

In two experiments, we tested the hypotheses that (a) the difference between liars and truth tellers will be greater when interviewees report their stories in reverse order than in chronological order, and (b) instructing interviewees to recall their stories in reverse order will facilitate detecting deception. In Experiment 1, 80 mock suspects told the truth or lied about a staged event and did or did not report their stories in reverse order. The reverse order interviews contained many more cues to deceit than the control interviews. In Experiment 2, 55 police officers watched a selection of the videotaped interviews of Experiment 1 and made veracity judgements. Requesting suspects to convey their stories in reverse order improved police observers’ ability to detect deception and did not result in a response bias.

Secrets – Their Use and Abuse in Organizations

In the latest issue of Journal of Management Inquiry, Carl Keane from Queen’s University, Canada, considers organisational secrets. Here’s the abstract:

Organizational scholars, and most social scientists for that matter, have rarely examined the use of the secret in controlling organizational behavior. On one hand, organizational secrets are necessary for the survival of the organization; on the other hand, organizational secrets are often used to hide unethical and illegal behavior. In this essay, the author examines the phenomenon of the secret as part of organizational life, from both a functional and dysfunctional perspective. Specifically, the author illustrates how from a functional point of view, secrets can legally protect organizational vulnerabilities, whereas from a dysfunctional point of view, secrets control organizational members and prevent the communication of knowledge to others. Both processes occur through the construction of social and cognitive boundaries as a form of social control.

Deception research across the blogosphere

The physiology of lying by exaggerating: Over at the BPS Research Digest Blog, a summary of research that has caused ripples around the media: lying by exaggeration doesn’t seem to cause the typical physiological arousal effects that some associate with liars:

Telling lies about our past successes can sometimes be self-fulfilling, at least when it comes to exam performance. That’s according to the New York Times, which reports on studies by Richard Gramzow at the University of Southampton and colleagues.

Their research has shown that, when asked, many students exaggerate their past exam performance, and that those students who do this tend to go on to perform better in the future.

What’s more, a study published in February showed that when these exaggerators are interviewed about their past academic performance, they don’t show any of the physiological hallmarks associated with lying, but rather their bodies stay calm. It’s almost as though this is a different kind of lying, aimed more at the self, with the hope of encouraging improved future performance.

More commentary on this research over at Deric Bownds’ Mind Blog.

Two popular articles on deception: Via the Situationist Blog (7 April), a link to an article in Forbes on “how to sniff out a liar” (which doesn’t include any hints for olfactory detection of deceivers!). And hat tip to the Antipolygraph Blog (16 April) for pointing us to The Lie of Lie Detectors by Rob Shmerling:

Recently, two studies announced effective ways to determine whether a person was telling the truth — one used a brain scan while the other detected heat around the face. Since you probably tell the truth all of the time, it is likely that these reports will have no direct bearing on you. But, for those who perform lie detector tests or for those who might be asked to submit to one, these techniques could someday change how these tests are performed.

The Pentagon’s “Porta-Poly”: The news that the Pentagon is trialling a ‘pocket lie detector’ known as the Preliminary Credibility Assessment Screening System (PCASS) for soldiers has been picked up and commented upon by a number of sources including Bruce Schneier and the Anti-Polygraph Blog, but don’t skip the original MSN story which is well worth reading.

Update: Missed one: Over at Practical Ethics, see Fighting Absenteeism with Voice Analysis (16 May). The news that some companies are apparently considering using this discredited technology to check up on workers calling in sick is chilling.

Why the spurious link to deception?

From New Scientist (4 April):

Our skin may contain millions of tiny “antennas” in the form of microscopic sweat ducts, say researchers in Israel. In experiments, they found evidence that signals produced by bouncing electromagnetic waves off the tiny tubes might reveal a person’s physical and emotional state from a distance.

So far so good, but then:

The research might eventually result in lie detectors that require no physical contact with the subject.

Why the spurious link to deception? The original article doesn’t mention it – the authors’ comment about the possible application of the technique is this:

This phenomenon can be used as the basis for a generic remote sensing technique for providing a spatial map of the sweat gland activity of the examined subjects. As the mental state and sweat gland activity are correlated it has the potential to become a method for providing by remote sensing information regarding some physiological parameters and the mental state of the patients.

I guess that just isn’t as sexy as “hey, what about this as a lie detector!”.

As several erudite commenters on Slashdot have noted, despite the common misconception, lying does not necessarily lead to a stress reaction in the deceiver. And people can have stress reactions when they are telling the truth. So machines that measure stress can be very unreliable detectors of deceit.

Learning to lie

From New York Magazine (10 Feb), a detailed article on how kids learn to lie:

Kids lie early, often, and for all sorts of reasons—to avoid punishment, to bond with friends, to gain a sense of control. But now there’s a singular theory for one way this habit develops: They are just copying their parents.

… In the last few years, a handful of intrepid scholars have decided it’s time to try to understand why kids lie. For a study to assess the extent of teenage dissembling, Dr. Nancy Darling… recruited a special research team of a dozen undergraduate students, all under the age of 21… “They began the interviews saying that parents give you everything and yes, you should tell them everything,” Darling observes. By the end of the interview, the kids saw for the first time how much they were lying and how many of the family’s rules they had broken. Darling says 98 percent of the teens reported lying to their parents.

… For two decades, parents have rated “honesty” as the trait they most wanted in their children. Other traits, such as confidence or good judgment, don’t even come close. On paper, the kids are getting this message. In surveys, 98 percent said that trust and honesty were essential in a personal relationship. Depending on their ages, 96 to 98 percent said lying is morally wrong.

So when do the 98 percent who think lying is wrong become the 98 percent who lie?

Full article here.

Voice Stress Analysis: Only 15 Percent of Lies About Drug Use Detected in Field Test

The latest issue of the National Institute of Justice journal (NIJ Journal No. 259, March 2008) features a great article by Kelly Damphousse summarising recent research on voice stress analysis (VSA). Here’s an extract:

According to a recent study funded by the National Institute of Justice (NIJ), two of the most popular VSA programs in use by police departments across the country are no better than flipping a coin when it comes to detecting deception regarding recent drug use. The study’s findings also noted, however, that the mere presence of a VSA program during an interrogation may deter a respondent from giving a false answer.

The findings of our study revealed:

  • Deceptive respondents. Fifteen percent who said they had not used drugs—but who, according to their urine tests, had—were correctly identified by the VSA programs as being deceptive.
  • Nondeceptive respondents. Eight and a half percent who were telling the truth—that is, their urine tests were consistent with their statements that they had or had not used drugs—were incorrectly classified by the VSA programs as being deceptive.

Using these percentages to determine the overall accuracy rates of the two VSA programs, we found that their ability to accurately detect deception about recent drug use was about 50 percent.

Based solely on these statistics, it seems reasonable to conclude that these VSA programs were not able to detect deception about drug use, at least to a degree that law enforcement professionals would require—particularly when weighed against the financial investment. We did find, however, that arrestees who were questioned using the VSA instruments were less likely to lie about illicit drug use compared to arrestees whose responses were recorded by the interviewer with pen and paper.
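The “about 50 percent” figure comes from combining the two rates above with the proportion of arrestees who were actually being deceptive. A hedged worked example (the base rate below is an illustrative assumption, not a figure taken from the article):

```python
sensitivity = 0.15        # deceptive respondents correctly flagged by VSA
false_positive = 0.085    # truthful respondents wrongly flagged
specificity = 1 - false_positive

# Assumed, for illustration only: roughly half of respondents lied about drug use.
base_rate_deceptive = 0.5

accuracy = (sensitivity * base_rate_deceptive
            + specificity * (1 - base_rate_deceptive))
print(f"Overall accuracy: {accuracy:.1%}")   # ~53% with these assumptions
```

With a lower base rate of deception the headline accuracy rises, but only because most respondents are telling the truth and the programs rarely flag anyone, not because they are detecting lies.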

Damphousse concludes:

It is important to look at both “hard” and “hidden” costs when deciding whether to purchase or maintain a VSA program. The monetary costs are substantial: it can cost up to $20,000 to purchase LVA. The average cost of CVSA® training and equipment is $11,500. Calculating the current investment nationwide—more than 1,400 police departments currently use CVSA®, according to the manufacturer—the total cost is more than $16 million not including the manpower expense to use it.

The hidden costs are, of course, more difficult to quantify. As VSA programs come under greater scrutiny—due, in part, to reports of false confessions during investigations that used VSA—the overall value of the technology continues to be questioned.

Quick round up of deception news

Sorry for the slow posting recently – real life is getting in the way of blogging at the moment, and is likely to continue to do so for some time yet, so please bear with me. Perhaps some of these items will give you your deception research fix in the meantime.

If you’d like something to listen to during the daily commute why not download an interview with John F. Sullivan, author of Gatekeeper: Memoirs of a CIA Polygraph Examiner (h/t Antipolygraph Blog).

Alternatively, try a short NPR Morning Edition segment on the neuropsychology of lying (h/t and see also The Frontal Cortex).

The ever-interesting BPS Research Digest discusses a study of how toddlers tell a joke from a mistake. According to the researchers, Elena Hoicka and Merideth Gattis:

…the ability to recognise humorous intent comes after the ability to recognise jokes, but before the ability to recognise pretense and lies. “We propose that humour understanding is an important step toward understanding that human actions can be intentional not just when actions are right, but even when they are wrong,” they concluded.

Karen Franklin has a terrific commentary on the Wall Street Journal’s discussion of a subscale of the MMPI, which claims to detect malingerers but which, according to critics, results in a large number of false positives (i.e., labelling truthful test-takers as malingerers). (See also a short commentary by Steven Erikson).

There are two articles by Jeremy Dean of the glorious PsyBlog on false memories (here and here).

And finally, Kai Chang at Overcoming Bias reports on an unusual teaching technique which involves asking students to spot the Lie of the Day.