Category Archives: Mechanical methods

Posts on methods of lie-detection that rely on machines

Can lie detectors be trusted?

Detailed commentary from Patrick Barkham in the Guardian (18 Sept), exploring the use of ‘lie detecting’ machines in the UK. He covers the use of voice stress analysis in benefit offices and insurance companies, and polygraphy for sex offenders. Interesting stuff, and well worth reading in full over on the Guardian site. Here’s a flavour:

[Harrow] council prefers the phrase “voice risk analysis” and Capita calls its combination of software, special scripts and training for handlers the “Advanced Validation Solution”. Just don’t say it’s a lie detector. “Please don’t call it that. We’re not happy with that. It’s an assessment,” says Fabio Esposito, Harrow’s assistant benefit manager.

… Voice stress analysis systems have been used for more than five years in the British insurance industry but have yet to really catch on, according to the Association of British Insurers. There was an initial flurry of publicity when motor insurance companies introduced the technology in 2001 but it is still “the exception rather than the norm,” says Malcolm Tarling of the ABI. “Not many companies use it and those that do use it in very controlled circumstances. They never use the results of a voice risk analysis alone because the technology is not infallible.”

… Next year, in a pilot study, the government will introduce a mandatory polygraph for convicted sex offenders in three regions. … Professor Don Grubin, a forensic psychiatrist at Newcastle University… admits he was initially sceptical but argues that polygraphs are a useful tool. “We were less concerned about accuracy per se than with the disclosures and the changes in behaviour it encourages these guys to make,” he says. “It should not be seen as a lie detector but as a truth facilitator. What you find is you get markedly increased disclosures. You don’t get the full story but you get more than you had.”

…critics argue that most kinds of lie-detector studies are lab tests, which can never replicate the high stakes of real lies and tend to test technology on healthy individuals (usually students) of above-average intelligence. Children, criminals, the psychotic, the stupid and even those not speaking in their first language (a common issue with benefit claimants) are rarely involved in studies.

Using voice analysis to detect benefit cheats

The media is reporting that a pilot scheme in the UK to use voice stress analysis (or, more accurately, “voice risk analysis”) on benefit applicants is a success. The Observer headline proclaims “Technology set to be introduced nationwide after pilot saves £110,000” (2 September):

Benefit claimants and job seekers could be forced to take lie detector tests as early as next year after an early review of a pilot scheme exposed 126 benefit cheats in just three months, saving one local authority £110,000.

The news report also points out that many are skeptical:

Experts in America, where the most comprehensive scrutiny of the technology has taken place, warn that the technology is far from failsafe. David Ashe, chief deputy of the Virginia Board for Professional and Occupational Regulation, said: ‘The experience of being tested, or of claiming a benefit and being told that your voice is being checked for lies, is inherently stressful. Lie detector tests have a tendency to pass people for whom deception is a way of life and fail those who are scrupulously honest.’

Reading beyond the headlines, it’s clear that the pilot study is not finished, it hasn’t been properly evaluated, and no decision has yet been made. In “Lie detector beats benefit fraud”, silicon.com (3 Sept) reveals:

A spokesman for the Department for Work and Pensions (DWP) – which funded the pilot – told silicon.com the department will evaluate the technology when the trial is completed next May. He said the DWP will “look at the evaluation results and see if it’s viable, see if it’s something to work on and see if other councils are interested in doing it”. If the benefits are seen as sufficient, the system could potentially be rolled out across the country, although no firm plans are currently in place.

But this hasn’t stopped others jumping on the VSA bandwagon, as the Telegraph (9 Sept) and BBC Online (7 Sept) report that Birmingham Council is next in line to adopt the system.

More Deception Blog posts on this story here and here, and more generally on VSA here.

Photo credit: niznoz, Creative Commons License

More fMRI stuff and nonsense

ABC News (30 Aug) is the latest media outlet to get on the fMRI lie-detection bandwagon. “See a Lie Inside the Brain – Researchers Detect the Truth and Find Lies With an FMRI” is their breathless headline. How exciting! But Don Q Blogger points out it’s mostly uncritical puff for commercial companies offering fMRI lie detection tests.

Meanwhile, Mind Hacks, Boing Boing and The Neurocritic all weigh in on a recent New York Times article on the growing commercialisation of fMRI technology for lie detection, pain control and a host of other purposes.

UPDATE (5 Sept): further commentary on the ABC story over at Cognitive Daily and The Neurocritic.

Voice Stress Analysis – a new report (still doesn’t work)

Hat tip to the Anti-Polygraph Blog for alerting us to a new study to test the efficacy of Voice Stress Analysis. From the study’s abstract:

…The goal of this study was to test the validity and reliability of two popular VSA programs (LVA and CVSA) in a “real world” setting. Questions about recent drug use were asked of a random sample of arrestees in a county jail. Their responses and the VSA output were compared to a subsequent urinalysis to determine if the VSA programs could detect deception.

Both VSA programs show poor validity – neither program efficiently determined who was being deceptive about recent drug use. The programs were not able to detect deception at a rate any better than chance. The data also suggest poor reliability for both VSA products when we compared expert and novice interpretations of the output. …
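To see concretely what “no better than chance” means here, below is a minimal sketch of the kind of validity check the abstract describes: each arrestee’s VSA classification is compared against the urinalysis result as ground truth, and overall accuracy is then tested against the 50% hit rate you would expect from guessing. The counts are invented purely for illustration; they are not the study’s data.

```python
from math import comb

# Hypothetical confusion matrix: VSA "deceptive" call vs. urinalysis ground truth.
# These counts are invented for illustration -- they are NOT the study's data.
true_pos = 28   # deceptive about drug use (per urinalysis) and flagged by VSA
false_neg = 30  # deceptive but passed by VSA
false_pos = 27  # truthful but flagged by VSA
true_neg = 25   # truthful and passed by VSA

n = true_pos + false_neg + false_pos + true_neg
hits = true_pos + true_neg
accuracy = hits / n

# One-sided exact binomial test: how likely is doing at least this well
# if the software were classifying at chance (p = 0.5)?
p_value = sum(comb(n, k) for k in range(hits, n + 1)) / 2 ** n

print(f"accuracy = {accuracy:.1%} over {n} arrestees")
print(f"P(at least this many hits by chance) = {p_value:.3f}")
```

(Comparing against a flat 50% is itself a simplification: with unequal base rates of deception the relevant chance level shifts, which is one reason sensitivity and specificity are usually reported separately.)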

However, the researchers did find that arrestees who knew they were going to be VSA tested “were much less likely to be deceptive about recent drug use than arrestees in a non-VSA research project” (though they do admit that the non-VSA project was not carried out in exactly the same way as the VSA study). The authors suggest that regardless of its validity, a VSA device may produce a bogus pipeline effect, which is perhaps why so many law enforcement agencies believe that it works:

When police officers report that VSA programs “work,” they generally mean that they were able to obtain a confession from suspects by telling them that the computer “said they were lying.” The potential problem, of course, is with false confessions. Several high profile cases have emerged in the past decade that suggest impressionable suspects may confess to a crime that they did not commit because they believed the software. The rationalization is usually that they “must have forgotten” that they did it. Obviously, the bogus pipeline effect of VSA products has important positive and negative implications (p.86).

The authors also highlight the financial implications, estimating the cost of training just one person from each of the 1,400 law enforcement agencies that claim to use VSA at more than $16 million. The software and laptop come to nearly $10,000 per agency, and “computer upgrades can increase the cost to almost $13,000” (p.4). Yikes.
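For context, here is the back-of-the-envelope arithmetic implied by those figures (the per-person training cost is derived rather than quoted in the report, and the nationwide hardware total assumes every agency buys a full kit):

```python
agencies = 1400                 # law enforcement agencies claiming to use VSA
training_total = 16_000_000     # "more than $16 million" to train one person per agency
kit_per_agency = 10_000         # software plus laptop, roughly
kit_with_upgrades = 13_000      # "computer upgrades can increase the cost to almost $13,000"

print(f"Implied training cost per person: ~${training_total / agencies:,.0f}")
print(f"Hardware/software, nationwide:    ${agencies * kit_per_agency:,} "
      f"to ${agencies * kit_with_upgrades:,}")
print(f"Combined minimum outlay:          ${training_total + agencies * kit_per_agency:,}")
```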

Download the full text report as a pdf from the link below, and read more on the Deception Blog about VSA here and about the bogus pipeline effect here.

Reference

Photo credit: arimoore, Creative Commons License

Cross-Examining The Brain

Hat tip to Prof Peter Tillers for pointing us to a paper from Charles Keckler, George Mason University School of Law, on admissibility in court of neuroimaging evidence of deception. Here’s the abstract:

The last decade has seen remarkable progress in understanding ongoing psychological processes at the neurobiological level, progress that has been driven technologically by the spread of functional neuroimaging devices, especially magnetic resonance imaging, that have become the research tools of a theoretically sophisticated cognitive neuroscience. As this research turns to specification of the mental processes involved in interpersonal deception, the potential evidentiary use of material produced by devices for detecting deception, long stymied by the conceptual and legal limitations of the polygraph, must be re-examined.

Although studies in this area are preliminary, and I conclude they have not yet satisfied the foundational requirements for the admissibility of scientific evidence, the potential for use – particularly as a devastating impeachment threat to encourage factual veracity – is a real one that the legal profession should seek to foster through structuring the correct incentives and rules for admissibility. In particular, neuroscience has articulated basic memory processes to a sufficient degree that contemporaneously neuroimaged witnesses would be unable to feign ignorance of a familiar item (or to claim knowledge of something unfamiliar). The brain implementation of actual lies, and deceit more generally, is of greater complexity and variability. Nevertheless, the research project to elucidate them is conceptually sound, and the law cannot afford to stand apart from what may ultimately constitute profound progress in a fundamental problem of adjudication.

Reference:

The Law and Ethics of Brain Scanning – audio material online

Hat tip to Mind Hacks (25 June) for alerting us to the fact that the organisers of the conference on The Law and Ethics of Brain Scanning: Coming soon to a courtroom near you?, held in Arizona in April, have uploaded both the PowerPoint presentations and MP3s of most of the lectures to the conference website.

A feast of interesting material here that should keep you going, even on the longest commute, including:

  • “Brain Imaging and the Mind: Pseudoscience or Science?” – William Uttal, Arizona State University
  • “Overview of Brain Scanning Technologies” – John J.B. Allen, Department of Psychology, University of Arizona
  • “Brain Scanning and Lie Detection” – Steven Laken, Founder and CEO, Cephos Corporation
  • “Brain Scanning in the Courts: The Story So Far” – Gary Marchant, Center for the Study of Law, Science, & Technology Sandra Day O’Connor College of Law
  • “Legal Admissibility of Neurological Lie Detection Evidence” – Archie A. Alexander, Health Law & Policy Institute, University of Houston Law Center
  • “Demonstrating Brain Injuries with Brain Scanning” – Larry Cohen, The Cohen Law Firm
  • “Harm and Punishment: An fMRI Experiment” – Owen D. Jones, Vanderbilt University School of Law & Department of Biological Sciences
  • “Through a Glass Darkly: Transdisciplinary Brain Imaging Studies to Predict and Explain Abnormal Behavior” – James H. Fallon, Department of Psychiatry and Human Behavior, University of California, Irvine
  • “Authenticity, Bluffing, and the Privacy of Human Thought: Ethical Issues in Brain Scanning” – Emily Murphy, Stanford Center for Biomedical Ethics
  • “Health, Disability, and Employment Law Implications of MRI” – Stacey Tovino, Hamline University School of Law

From a deception researcher’s point of view, the chance to hear from Steven Laken of commercial fMRI deception detection company Cephos will be particularly interesting.

Mind Hacks also notes that ABC Radio National’s All in the Mind on 23 June featured many of the speakers from this conference in a discussion of neuroscience, criminality and the courtroom. The webpage accompanying this programme has a great reference list. For those interested in deception research, I particularly recommend Wolpe, Foster & Langleben (2005) for an informative overview of the potential uses and dangers of neurotechnologies and deception detection.

Reference:

Can brain scans uncover lies?

Wow. Mind Hacks is right. A great article from the New Yorker on fMRI and deception detection. Here’s a little snippet but as the article is freely available online you should really head on over there and read the whole thing:

To date, there have been only a dozen or so peer-reviewed studies that attempt to catch lies with fMRI technology, and most of them involved fewer than twenty people. Nevertheless, the idea has inspired a torrent of media attention, because scientific studies involving brain scans dazzle people, and because mind reading by machine is a beloved science-fiction trope, revived most recently in movies like “Minority Report” and “Eternal Sunshine of the Spotless Mind.” Many journalistic accounts of the new technology—accompanied by colorful bitmapped images of the brain in action—resemble science fiction themselves.

And later, commenting on University of Pennsylvania psychiatrist Daniel Langleben’s studies that kicked off the current fMRI-to-detect-deception craze:

Nearly all the volunteers for Langleben’s studies were Penn students or members of the academic community. There were no sociopaths or psychopaths; no one on antidepressants or other psychiatric medication; no one addicted to alcohol or drugs; no one with a criminal record; no one mentally retarded. These allegedly seminal studies look exclusively at unproblematic, intelligent people who were instructed to lie about trivial matters in which they had little stake. An incentive of twenty dollars can hardly be compared with, say, your freedom, reputation, children, or marriage—any or all of which might be at risk in an actual lie-detection scenario.

Links:

Photo credit: killermonkeys, Creative Commons License

The Comparison Question Test: Does It Work and If So How?

Heinz and Suzanne Offe have just published a paper in Law and Human Behavior, in which they present the results of a study exploring when and how the controversial Control Question Test works in polygraph testing.

The logic of the CQT is that innocent subjects will respond more strongly to Control Questions (CQs, which relate to previous history of – or inclination towards – wrong-doing) than to Relevant Questions (RQs, which relate to the particular offence being investigated). Guilty subjects, on the other hand, will, it is theorised, respond more strongly to RQs.
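One way to make that comparison logic concrete is as a differential score: each relevant question is paired with a control question, the physiological reactions are compared, and the sign of the summed differences drives the decision. The toy sketch below is only a schematic illustration of the idea; the reaction values, cutoffs and decision labels are invented and bear no relation to the numerical scoring procedures examiners actually use.

```python
# Toy illustration of CQT logic: compare reactions to control (CQ) and
# relevant (RQ) questions, pair by pair. Positive pair differences (CQ > RQ)
# point towards "truthful"; negative differences (RQ > CQ) point towards
# "deceptive". All numbers and thresholds here are invented.

def cqt_decision(pairs, cutoff=2.0):
    """pairs: list of (cq_reaction, rq_reaction) tuples, one per question pair."""
    total = sum(cq - rq for cq, rq in pairs)
    if total >= cutoff:
        return "no deception indicated"
    if total <= -cutoff:
        return "deception indicated"
    return "inconclusive"

# Hypothetical standardised reaction magnitudes for three question pairs:
innocent_like = [(1.8, 0.9), (1.5, 1.1), (2.0, 0.7)]   # reacts more to CQs
guilty_like   = [(0.8, 1.9), (1.0, 2.2), (0.6, 1.7)]   # reacts more to RQs

print(cqt_decision(innocent_like))  # -> no deception indicated
print(cqt_decision(guilty_like))    # -> deception indicated
```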

In order for this procedure to be effective, it is claimed, subjects need to be convinced that being judged ‘not guilty’ depends on them giving socially desirable responses to the CQs. Examiners will tell their subjects something along the following lines:

“I want to find out whether you are the sort of person capable of [the crime under investigation] based on your history. So the questions I am going to ask you about your history will allow me to make these judgements about you. Now, tell me if you have ever taken something that was not yours…”.

In reality the explanations are a lot more detailed than this, all designed to raise the anxiety an innocent subject might feel at the prospect of being accused of something they did not do. (Offe and Offe give a detailed example of how this is done in the first appendix to their study.)

However, as Offe and Offe point out, it is debatable whether or not this type of questioning actually does increase the salience of the CQs for subjects.

The researchers set out to test the workings of the CQT by giving a mix of students and law enforcement trainees the opportunity to steal some money. Participants were allowed to choose for themselves whether or not to steal, making the simulation more realistic. They were then polygraphed under various conditions, in which the researchers tested whether explaining the CQs in detail made a difference to the ability to discriminate between guilty and innocent subjects.

Read on for the results.

Photo credit: pauldwaite, Creative Commons License

Continue reading The Comparison Question Test: Does It Work and If So How?

Lie lab on Channel 4

I’ve been out of the country for the last couple of weeks and missed the start of what looks to be an interesting series on lie detection from the UK’s Channel 4. Luckily the trusty Mind Hacks is on hand to pick it up!

Lie Lab is a three-part TV series where they use the not-very-accurate brain scan lie detection method to test high profile people who have been accused of lying.

Read all about it on the Mind Hacks post here, or on the Channel 4 website here.

If, like me, you’ve missed the start of the series, UK/Eire viewers can use the Channel 4 ‘on demand’ feature to catch up over the internet.

Watching the Brain Lie: Can fMRI replace the polygraph?

This is the question asked in the May 2007 issue of The Scientist, which discusses the recent commercialisation of fMRI for lie detection, and concludes with a good summary of the persistent problems using this technology in forensic contexts:

[…] in reality, a nonconsensual test-taker need only move his or her head slightly to render the results useless. And there are other challenges. For one, individuals with psychopathologies or drug use (overrepresented in the criminal defendant population) may have very different brain responses to lying, says [New York University Psychology prof Elizabeth] Phelps. They might lack the sense of conflict or guilt used to detect lying in other individuals. […]

If a person actually believes an untruth, it’s not clear if a machine could ever identify it as such. Researchers including Phelps are still debating whether the brain can distinguish true from false memory in the first place. […]

Jed Rakoff, US District Judge for the Southern District of New York, says he doubts that fMRI tests will meet the courtroom standards for scientific evidence (reliability and acceptance within the scientific community) anytime in the near future, or that the limited information they provide will have much impact on the stand.

[…] According to Rakoff, the best way to get at the truth in the courtroom is still “plain old cross-examination.” And in the national security sphere, there’s “much more to detecting spies than the perfect gadget,” [Marcus Raichle, professor at the Washington University in St. Louis School of Medicine] agrees. “There’s some plain old-fashioned footwork that needs to be done.”

See also:

  • Hat tip to Mind Hacks (11 May), which has a detailed commentary on the article.

Deception links from around the web

Some quick deception-related links from around the blogosphere:

PsyBlog presents the “Top 3 Myths, Top 5 Proven Factors” on lie detection (12 May).

Wired (10 May) picks up on the UK government trial of voice stress analysis for alleged benefit cheats.

The Psychjourney Podcast for 27 April is on Malingering and PTSD (mp3).

If podcasts are your thing you can also listen to an interview with Ken Alder, author of a new book on the polygraph, on the Bat Segundo show (mp3). As the Anti-Polygraph Blog points out, you have to sit through a little silliness first…

Photo credit: mklingo, Creative Commons License

DoD researches high-tech ways to find liars

From the Associated Press (28 April):

An eerie image of a magenta, blue-green and yellow face glows on a screen as a government employee steps behind a heat-sensing camera on this sprawling U.S. Army base. Not far away, researchers are studying lasers’ ability to detect muscle contraction. Other technology tracks the movement of a person’s eyes.

Liars beware. The Defense Department facility that trains the people who run the government’s polygraph machines is looking to an even higher plane of technology in its quest to separate fact from fiction.

Hat tip to the Antipolygraph Blog for the link.

Photo credit: ATENCION, Creative Commons License

Don’t make benefits claimants take lie detector tests says TUC

The Trades Union Congress has called for the Department for Work and Pensions to abandon plans to use voice stress analysis in benefit centres (press release, 4 May). Quite right too. They say:

The Government should abandon plans to trial lie detector tests for people claiming benefits because the accuracy of the technology has not been scientifically proven, and individuals with genuine cases are likely to be discouraged from applying for the help they desperately need, says the TUC today (Friday).

[…] a TUC briefing ‘Lies, damned lies and lie detectors’ says that the science just isn’t there to back up the technology, and any use of the software when dealing with benefit claimants means that the innocent are just as likely to fall foul of the system as the genuinely guilty.

The TUC says that the problem with the lie detection technology that the DWP intends to use is that it cannot detect lies. Voice risk analysis and lie detectors can only detect, with varying accuracy, changes in the body, such as heart or breathing rate, or any changes in the tone, pitch or tremors in the voice.
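In signal-processing terms, “changes in the tone, pitch or tremors in the voice” means things like the fundamental frequency of the voice and how much it wobbles from moment to moment. The sketch below is a deliberately crude illustration of measuring that kind of quantity (an autocorrelation pitch estimate and its variability); it has nothing to do with the proprietary algorithms inside commercial voice risk analysis products, and measuring pitch wobble is of course not the same thing as measuring lies.

```python
import numpy as np

def frame_pitch(frame, sr, fmin=75.0, fmax=300.0):
    """Crude per-frame pitch estimate (Hz) via autocorrelation; None if 'unvoiced'."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag if ac[lag] > 0.3 * ac[0] else None  # weak peak -> skip frame

def pitch_variability(signal, sr, frame_len=0.04):
    """Coefficient of variation of pitch across 40 ms frames (a toy 'tremor' index)."""
    step = int(sr * frame_len)
    pitches = [frame_pitch(signal[i:i + step], sr)
               for i in range(0, len(signal) - step, step)]
    voiced = np.array([p for p in pitches if p is not None])
    return voiced.std() / voiced.mean() if len(voiced) else 0.0

# Synthetic demo: a steady 150 Hz tone vs. one with a 5 Hz, +/-10 Hz wobble added.
sr = 16_000
t = np.arange(sr * 2) / sr
steady = np.sin(2 * np.pi * 150 * t)
wobbly = np.sin(2 * np.pi * 150 * t + 2.0 * np.sin(2 * np.pi * 5 * t))

print("steady tone:", round(pitch_variability(steady, sr), 4))
print("wobbly tone:", round(pitch_variability(wobbly, sr), 4))
```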

The TUC has published a briefing note here (warning: Word document).

See also:

Hat tip to Enrica Dente’s Lie Detection email list for picking the story up first.

Increasing Honest Responding on Cognitive Distortions in Child Molesters: The Bogus Pipeline Revisited

An article in the March 2007 issue of Sexual Abuse: A Journal of Research and Treatment presents the results of an experimental comparison between child molesters’ responses on a questionnaire and their responses when attached to a fake lie detector known as a ‘bogus pipeline’. Here’s the abstract:

Questionnaires are relied upon by forensic psychologists, clinicians, researchers, and social services to assess child molesters’ (CMs’) offense-supportive beliefs (or cognitive distortions). In this study, we used an experimental procedure to evaluate whether extrafamilial CMs underreported their questionnaire-assessed beliefs. At time one, 41 CMs were questionnaire-assessed under standard conditions (i.e., they were free to impression manage). At time two, CMs were questionnaire-assessed again; 18 were randomly attached to a convincing fake lie detector (a bogus pipeline), the others were free to impression manage. The results showed that bogus pipeline CMs significantly increased cognitive distortion endorsements compared to their own previous endorsements, and their control counterparts’ endorsements. The findings are the first experimental evidence showing that CMs consciously depress their scores on transparent questionnaires.
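The design described in the abstract boils down to a between-group comparison of how endorsement scores change from the first (free to impression-manage) administration to the second. A minimal sketch of that style of analysis is below, with invented scores and a plain Welch t-test standing in for whatever the authors actually ran:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented endorsement scores (higher = more cognitive distortions endorsed).
# Time 1: standard administration for everyone. Time 2: bogus pipeline attached
# for 18 participants, a second standard administration for the other 23.
pipeline_t1 = rng.normal(30, 5, 18)
pipeline_t2 = pipeline_t1 + rng.normal(6, 3, 18)   # endorsements increase
control_t1 = rng.normal(30, 5, 23)
control_t2 = control_t1 + rng.normal(0, 3, 23)     # little change

pipeline_change = pipeline_t2 - pipeline_t1
control_change = control_t2 - control_t1

t, p = stats.ttest_ind(pipeline_change, control_change, equal_var=False)
print(f"mean change, bogus pipeline group: {pipeline_change.mean():+.1f}")
print(f"mean change, control group:        {control_change.mean():+.1f}")
print(f"Welch t = {t:.2f}, p = {p:.4f}")
```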

The article is interesting on many levels: let’s unpack it a little.

Continue reading Increasing Honest Responding on Cognitive Distortions in Child Molesters: The Bogus Pipeline Revisited

Benefit cheats face lie detectors

Oh please.

Here’s a great way to start your day: in the BBC headlines this morning (5 April), news that voice stress analysis will be used in job centres to test benefit claimants:

Lie detectors will be used to help root out benefit cheats, Work and Pensions Secretary John Hutton has said. So-called “voice-risk analysis software” will be used by council staff to help identify suspect claims. It can detect minute changes in a caller’s voice which give clues as to when they may be lying. The technology is already used by the insurance industry to combat fraud and will be trialled by Harrow Council, in north London, from May.

Said it before, and will no doubt say it again: voice stress analysis [pdf] is junk science. Relying on VSA in job centres will mean that genuine benefit claimants will be wrongly accused and fraudsters will continue to get away with it. So depressing to see the snake oil salespeople achieving success in the UK.

Photo credit: Patrick T Power, Creative Commons License

Conference announcement: The Law and Ethics of Brain Scanning

Anywhere near Arizona in a couple of weeks? Arizona State University is running a one-day conference on Friday, April 13, entitled The Law and Ethics of Brain Scanning: Coming soon to a courtroom near you? The conference is free but you must pre-register.

The conference has four consecutive sessions, on Brain Scanning Technologies; Brain Scanning in the Courts; Specific Applications of Brain Scanning Technologies; and Ethical Aspects of Brain Scanning.

The full line-up of speakers and talks is here. Most of the day looks interesting, but from a deception point of view two presentations in particular stand out:

  • Brain Scanning and Lie Detection from Daniel Langleben, University of Pennsylvania School of Medicine
  • Legal Admissibility of Neurological Lie Detection Evidence – Archie A. Alexander, Health Law & Policy Institute, University of Houston Law Center

Hat tip to the Neuroethics and Law Blog for bringing this to our attention!

Photo: R_Bish, Creative Commons License

How America became obsessed with the polygraph

In a Washington Monthly article (April 2007) entitled The Big Lie (How America became obsessed with the polygraph—even though it has never really worked), David Wallace-Wells reviews Ken Alder’s recently published book The Lie Detectors.

[…] The device has been derided by teams of experts as junk science, hardly more reliable than methods of pure chance, barred from the courts, a favorite tool of overzealous investigators and an instrument of state-sponsored vigilantism, a handmaiden to McCarthyism, an accomplice to the pink scare, and a nightmare vision of justice as arbitrary and expansive as the judgment of a totalitarian court, in a box no bigger or more conspicuous than the briefcase of a company man. And yet, as Ken Alder shows in his revealing, colloquial social history The Lie Detectors, by the time scientific scrutiny finally caught up to the scientistic ambition of the device in the late 1980s, generations of Americans had been seduced by it.

The article concludes with a charming quote from G. K. Chesterton:

“Who but a Yankee would think of proving anything from heart-throbs?” asked G. K. Chesterton’s fictional detective, Father Brown. “Why, they must be as sentimental as a man who thinks a woman is in love with him if she blushes.”

Photo credit: niznoz, Creative Commons License

Polygraph use by the Department of Energy

Via Bruce Schneier, a CRS report for Congress on Polygraph Use by the Department of Energy [pdf] is available on the Federation of American Scientists website.

Extract from the summary:

This report examines how DOE’s new polygraph screening policy has evolved and reviews certain scientific findings with regard to the polygraph’s accuracy. As part of its continuing oversight of DOE’s polygraph program, the 110th Congress could address several issues, including whether DOE’s new screening program is sufficiently focused on a small number of individuals occupying only the most sensitive positions; program implementation; the desirability of further research into scientific validity of the polygraph and possible alternatives to the polygraph; and whether to continue or discontinue polygraph screening.

Reference:

Photo credit: pauldwaite, Creative Commons license.

New research: Automation of a screening polygraph test increases accuracy

Charles Honts and Susan Amato have just published a study in Psychology Crime and Law that indicates that an automated polygraph test may lead to more accurate results than one administered by a human being. As Honts and Amato explain:

Much of the criticism of polygraph practice has focused on the polygraph examiners […who have been] criticized for a variety of reasons, including, but not limited to: poor training, bias, incompetence, inability to use statistically relevant information, and for being an uncontrollable and unquantifiable variable in the conduct of polygraph tests.

Participants were randomly assigned to lie (‘guilty’) or tell the truth (‘innocent’) conditions. The human version of the test was conducted by an experienced polygraph examiner, and in the automated version the participants were given their questions via audio tape recording.

In this study, around two-thirds of the guilty participants who had been tested by a human were correctly judged to be guilty, and 63% of the innocent participants were correctly judged innocent. In the automated version, however, correct ‘guilty’ decisions rose to 79% and correct ‘innocent’ decisions to 76%. It’s worth noting that the human examiner was not the one who made the decision about guilt or innocence – this was calculated in exactly the same way for both the ‘human’ and ‘automated’ conditions, via statistical analysis of the polygraph readings.
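Because the guilty/innocent call in both conditions came from the same statistical analysis of the charts rather than from the examiner’s judgement, the decision step is essentially a classifier applied to standardised physiological responses. The sketch below is a generic illustration of that kind of chart scoring; the channels, values and cutoffs are invented, and it is not Honts and Amato’s actual procedure.

```python
# Generic sketch of chart-based decision-making: express each channel's mean
# reaction to relevant vs. irrelevant items in within-subject standard-deviation
# units, pool the differences across channels, and apply symmetric cutoffs.
# Channels, values and cutoffs are illustrative only.

def score_charts(reactions, cutoff=1.0):
    """reactions: dict of channel -> (mean relevant reaction, mean irrelevant reaction)."""
    pooled = sum(rel - irr for rel, irr in reactions.values()) / len(reactions)
    if pooled >= cutoff:
        return "deceptive"       # reacts more strongly to the relevant items
    if pooled <= -cutoff:
        return "truthful"
    return "inconclusive"

example = {
    "electrodermal": (1.9, 0.3),
    "cardio": (1.5, 0.4),
    "respiration": (1.1, 0.6),
}
print(score_charts(example))  # pooled difference of about 1.07 -> "deceptive"
```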

Here’s the abstract:

The present study examined the effects of automating the Relevant-Irrelevant (RI) psychophysiological detection of deception test within a mock-screening paradigm. Eighty participants, recruited from the local community, took part in the study. Experimental design was a 2 (truthful/deceptive) by 2 (human/automation) factorial. Participants in the deceptive conditions attempted deception on two items of an employment application. Examinations conducted with the automated polygraph examination were significantly more accurate than examinations conducted by the human polygraph examiner. Statistical analyses revealed different patterns of physiological responses to deceptive items depending upon the automation condition. Those results have potentially interesting theoretical implications. The results of the present study are clearly supportive of additional efforts to develop a field application of an automated polygraph examination.

Reference:

Is commercial lie detection set to go?

… asks Ronald Bailey on Reason Online (23 Feb):

[…] Deception arises in our brains. The utility of finding a way to look under the hood directly for the source of deception is undeniable. Not surprisingly, a number of researchers have been trying to find correlates in the brain for truth and lies. […] Now a couple of American companies are claiming to be able to do just that. No Lie MRI in Tarzana, Calif., and Cephos Corporation in Pepperell, Mass. use fMRI scanning to uncover deception. No Lie MRI asserts that its technology, “represents the first and only direct measure of truth verification and lie detection in human history.” Both companies say that their technology can distinguish lies from truth with an accuracy rate of 90 percent.

[…] What evidence does No Lie MRI and Cephos Corporation offer for their assertion of 90 percent accuracy in detecting lies? A look at the studies cited on No Lie MRI’s website is not reassuring. The company links to one done using 26 right-handed male undergraduates; to another with 22 right-handed male undergraduates; and to a third one with 23 right-handed participants (11 men and 12 women).

Cephos links to just three fMRI studies, one using a total of 61 subjects (29 male and 32 female of whom 52 were right-handed); another using 14 right-handed adults who did not smoke or drink coffee; and a third one that tested 8 men. So adding up the studies cited by these two companies, we get a total of 154 subjects whose brains have been probed for lying in controlled laboratory settings.

[…] Right now its accuracy has not yet been proven beyond a reasonable doubt. Or as Stanford law professor Hank Greeley succinctly put it: “I want proof before this gets used, and proof is not three studies of 40 college students lying about whether or not they are holding the three of spades.”