Category Archives: Brain scans

Revealing secret intentions in the brain

Press release from the Max Planck Institute (8 Feb):

Our secret intentions remain concealed until we put them into action – or so we believe. Now researchers have been able to decode these secret intentions from patterns of brain activity. They let subjects freely and covertly choose between two possible tasks – to either add or subtract two numbers. The subjects were then asked to hold their intention in mind for a while until the relevant numbers were presented on a screen. The researchers were able to recognize the subjects' intentions with 70% accuracy based on their brain activity alone – even before the participants had seen the numbers and had started to perform the calculation.

[…] The work of Haynes and his colleagues goes far beyond simply confirming previous theories. It has never before been possible to read out of brain activity how a person has decided to act in the future.

This press release prompted a piece in the UK Guardian (9 Feb) that explored both the research and the possible applications of this knowledge:

The latest work reveals the dramatic pace at which neuroscience is progressing, prompting the researchers to call for an urgent debate into the ethical issues surrounding future uses for the technology. If brain-reading can be refined, it could quickly be adopted to assist interrogations of criminals and terrorists, and even usher in a “Minority Report” era (as portrayed in the Steven Spielberg science fiction film of that name), where judgments are handed down before the law is broken on the strength of an incriminating brain scan. “These techniques are emerging and we need an ethical debate about the implications, so that one day we’re not surprised and overwhelmed and caught on the wrong foot by what they can do. These things are going to come to us in the next few years and we should really be prepared,” Professor Haynes told the Guardian.


  • John-Dylan Haynes, Katsuyuki Sakai, Geraint Rees, Sam Gilbert, Chris Frith, Dick Passingham (2007). Reading hidden intentions in the human brain. Current Biology, February 20th, 2007 (online: February 8th). PDF and HTML full text freely available (as of 9 Feb 07)
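The press release doesn't spell out the analysis, but the "decoding" step in studies like this is multivariate pattern classification: a classifier is trained on brain-activity patterns labelled with the chosen task, then tested on held-out trials. Here's a minimal toy sketch of the idea – the simulated "voxels", the effect size, and the nearest-centroid classifier are all illustrative assumptions, not the authors' actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for fMRI patterns: 40 trials x 50 "voxels".
# Made-up assumption: each intention (add vs subtract) slightly shifts
# the mean activity of a small subset of voxels.
n_trials, n_voxels = 40, 50
labels = np.repeat([0, 1], n_trials // 2)          # 0 = add, 1 = subtract
signal = np.zeros((n_trials, n_voxels))
signal[labels == 1, :10] += 0.8                    # weak class-specific signal
patterns = signal + rng.normal(size=(n_trials, n_voxels))

def loo_nearest_centroid(X, y):
    """Leave-one-out accuracy of a nearest-centroid pattern classifier."""
    correct = 0
    for i in range(len(y)):
        train = np.ones(len(y), dtype=bool)
        train[i] = False                            # hold out trial i
        c0 = X[train & (y == 0)].mean(axis=0)       # centroid of class 0
        c1 = X[train & (y == 1)].mean(axis=0)       # centroid of class 1
        pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
        correct += pred == y[i]
    return correct / len(y)

acc = loo_nearest_centroid(patterns, labels)
print(f"decoding accuracy on held-out trials: {acc:.0%}")
```

The point of the cross-validation is that accuracy is measured on trials the classifier never saw; with two equally frequent choices, chance is 50%, which is what makes a figure like the reported 70% meaningful.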

Can a brain scan prove you’re telling the truth?

The latest issue of New Scientist (issue 2590, 10 Feb 07) has an article on how “the apparent emergence of an fMRI truth-telling industry in the US has come as something of a surprise”. Sadly, it’s behind a paywall. It begins:

The trouble began in 2003 when a fire gutted Harvey Nathan’s deli in Charleston, South Carolina. In the aftermath, Nathan fought off police charges of arson, but his insurers’ lingering doubts over his innocence have tied up a payout that could exceed $200,000.

Which is why, last December, Nathan travelled across the US and paid $1500 to have his brain scanned. “We provide a service for people who need to prove they are telling the truth,” says Joel Huizenga, a biologist turned entrepreneur and CEO of No Lie MRI of Tarzana, California. In what amounted to the world’s first commercial lie-detection test using functional magnetic resonance imaging (fMRI), technicians at No Lie mapped blood flow within Nathan’s brain while he answered a battery of questions about the deli fire and compared the results to control tests during which Nathan was asked to lie.

Symposium: Is There Science Underlying Truth Detection?

If you’re in Cambridge MA next week you might be interested in a symposium on brain imaging and deception detection, to be held at the American Academy of Arts & Sciences on 2 February, from 2-5pm:

The American Academy of Arts and Sciences, the McGovern Institute for Brain Research at MIT, and Harvard University are holding a symposium on the science, law, and ethics of using brain imaging technology to detect deception. The program will focus on the status of the science behind detecting deception using fMRI. Presenters will also consider the legal, ethical, and public policy implications of using brain imaging for lie detection.

The symposium is free, but advance registration is required (more details here).

Truth serums and “brain fingerprinting” used in Indian serial killer case

A serial killing case in India has caused quite a stir in recent weeks, with suspicions that the murders are linked to human organ trafficking operations and allegations of police incompetence in investigating the disappearances of the children. The Observer (UK, 7 Jan) explains:

Forty or more people, ranging from a boy aged 10 months to a 32-year-old mother of three, may have fallen victim to two of India’s most prolific serial killers as the authorities revealed their suspicion that murders may have been carried out to harvest body parts such as kidneys, livers and kneecaps.

[…] Yesterday, as police fought to control further riots by angry locals, the leader of India’s ruling coalition, Sonia Gandhi, made a surprise visit to the scene of the crime and harshly criticised the local police handling of the investigation. Responsibility for it has now been handed over to India’s top federal investigating agency, the Central Bureau of Investigation.

In the last week, six police officers have been suspended after it emerged that Pandher, the prime suspect in the case, was arrested 13 months ago following a series of complaints from local residents in the slum bordering his house who suspected his involvement in the disappearance of their children. But the suspect walked out of the police station the same night.

Two men have been arrested in the case, and CNN-IBN News (5 Jan) explains what lies in store:

The Forensic Science Laboratory (FSL) in Gandhinagar, Gujarat, is currently conducting a narco-analysis test on the two accused in the Nithari serial killings case – Moninder Singh Pandher and Surinder.

[…] An anesthetist, a forensic expert and two psychologists are all being given a comprehensive briefing by the Noida Police as to the questions that need to be posed to the accused once the truth serum has been administered.

[…] Assistant Director, FSL, Gandhinagar, V H Patel, says: “We inject drugs into a person, which makes his conscious mind relax. It is under the influence of these drugs that a person begins to speak out the things that he would normally try to hide.”

The chemical injected during the test is sodium pentothal, which is popularly known as the truth serum, for obvious reasons. […] The effect of the drug makes the person semi-conscious, restricting their ability to manipulate answers or use their imagination.

In addition to the narco-analysis test, the Nithari accused will have to undergo a Brain Finger Printing Test and a Lie Detection or Polygraph Test.

[…] Says an FSL official, Namrata Khopkar: “Once the sensors are placed, we show pictures to the accused and make them hear things. The way one’s brain reacts to these sounds can establish a lot of things.”

Both the use of sodium pentothal and “brain fingerprinting” techniques are highly controversial. Previous Deception Blog posts on ‘truth serums’ can be found here, and previous coverage of the use of brain scans by the Indian police can be found here and here.

P300-based concealed information tests for self-referring versus incidentally obtained information

A highly technical in-press article from Biological Psychology reports a study dealing with detection of concealed information via measurement of the P300 ERP. The authors explain:

P300 is an event-related EEG potential (ERP), which may be elicited by rare, meaningful items of information in a context of serial presentation of frequent items that lack meaningfulness (e.g., Johnson, 1988). Thus, if a subject watches a display screen upon which 10 names are repeatedly presented every 3 s in random order, one of which is the subject’s name and the others are irrelevant for him/her, the P300 ERP will be elicited by his/her own name. As summarized next, various investigators have taken advantage of this novel, brain activity-based, physiological marker of recognition (P300) by using it to signal a subject’s recognition of items of concealed information pertinent to a crime, or to other forensic situations, such as malingered memory loss.

Two versions of the P300-based Concealed Information Test (CIT), or Guilty Knowledge Test (GKT), have been published. Both protocols have pros and cons, but their detection efficiencies have never been formally compared, as we attempt to do here.
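The authors' actual protocols aren't reproduced in this excerpt, but the oddball logic they describe is easy to illustrate with simulated data: average the response to each of the ten names in the P300 time window, and the rare, self-referring "probe" should stand out against the irrelevant names. Everything below – amplitudes, noise level, trial count, and the simple max-comparison decision rule – is a made-up illustration; published P300 CITs use more careful bootstrapped amplitude comparisons:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ERP simulation of the oddball effect: 10 name stimuli presented
# repeatedly; only the subject's own name (the rare, meaningful probe)
# evokes a P300-like positive deflection around 300-600 ms.
n_presentations = 40
p300_amplitude = 10.0         # hypothetical probe deflection (microvolts)
noise_sd = 8.0                # single-trial EEG noise (microvolts)

def mean_peak(amplitude):
    """Average peak voltage in the P300 window across presentations."""
    trials = amplitude + rng.normal(0, noise_sd, n_presentations)
    return trials.mean()

probe = mean_peak(p300_amplitude)                  # subject's own name
irrelevants = [mean_peak(0.0) for _ in range(9)]   # the other nine names

# Crude CIT decision rule: flag recognition when the probe's averaged
# response exceeds every irrelevant item's averaged response.
recognised = probe > max(irrelevants)
print(f"probe {probe:.1f} uV vs max irrelevant {max(irrelevants):.1f} uV "
      f"-> recognised: {recognised}")
```

Averaging is what makes this work: single-trial noise swamps the effect, but the noise shrinks with the square root of the number of presentations while the P300 deflection does not.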


The scope of No Lie MRI

Neuroethics and Law Blog (8 Nov) highlights recent amendments to the No Lie MRI website:

No Lie MRI, which currently offers an fMRI-based “truth verification/lie detection” product […] recently amended its website to offer its services to: (1) individuals for “risk reduction in dating, trust issues in interpersonal relationships, and issues concerning the underlying topics of sex, power, and money”; (2) employers as a substitute for drug screenings, resume validation, and security background checks […] and (3) insurance companies for diminishing insurance fraud and lowering insurance premiums

No Lie MRI’s spiffy new front page features pretty pictures of brain scans to add a veneer of scientific respectability to their claims. Scariest of all, the company tells us that it “is presently working to have its testing allowed as evidence in U.S. and State courts”.


Brain on Fire

In the wake of news that No Lie MRI has started conducting commercial lie tests, the Washington Post (30 October) reports on brain scanning and lie detection. It starts with a detailed description of what it is like to take an MRI lie test:

You’re chambered into this dimly lit tunnel of truth like a shell into a shotgun. First you are instructed to twist plugs far into your ears. Then you lie on a gurney narrower than a stretcher. A woman in a lab coat slides a helmet over your head. It is not really like a Hannibal Lecter mask, although the researchers like to make that joke. Your nose barely clears the equipment, your eyes can only look up, and your head is cradled to discourage movement.

Into your hands the researchers place a box with two buttons. The left one, when punched, signifies a “yes” response to questions. The right one means “no.” When they slide you into the bore, it is barely wide enough for your shoulders. To your hip they’ve taped a bulb that you are supposed to squeeze if you have a panic attack, because there is the possibility that no one will hear you scream — when the machine goes to work, it pounds like a high-frequency jackhammer, except when it shrieks like the klaxon on a submarine when somebody shouts “Dive! Dive!”

But there are problems:

This commercialization is derided by many researchers as premature. It is not yet clear, they say, how well this technology identifies different kinds of lies, or how well it works across a great array of people, or how well it stands up to countermeasures.

[…] “It is a very deep problem,” [Antonio Damasio, the neurologist who is director of the Brain and Creativity Institute at the University of Southern California] says. “I don’t do any work on lie detection. But you are in essence having to detect a discrepancy between an overt behavior and an internal representation. It is complicated enough to find out what is going on when the idea and the behavior are consistent.”

[…] Damasio and other skeptics are concerned that [commercial companies are] engaging in nothing more than “neo-phrenology.” Phrenology is the discredited 19th-century idea that you can figure out a person’s character by examining the bumps on his head.

“It’s not a question of putting someone in a scanner and see what lights up,” says Damasio. “The idea of going immediately to the commercialization of a product identifying different mental states is premature.”

Great article, well worth a read.

Proceedings of the Workshop on the Use of Autonomic and Somatic Measures for Security Evaluations

The most recent issue of the free online Journal of Credibility Assessment and Witness Psychology (vol. 7, No. 2, published June 2006) is a special issue, containing the Proceedings of the Workshop on the Use of Autonomic and Somatic Measures for Security Evaluations.  The entire issue can be downloaded as a (very large!) PDF file, or you can access the papers (well, summaries of powerpoints, rather than papers) individually via the JCAWP page.


  • Polygraph Screening, Donald J. Krapohl
  • Issues In The Study Of Polygraph Screening Techniques, Michael Bradley
  • Using the Polygraph in Employment and National Security, David C. Raskin & Charles R. Honts
  • Emerging Technologies in Credibility Assessment, Andrew H. Ryan, Jr.
  • Toward a Neurocognitive Basis of Deception, Ray Johnson, Jr.
  • The Polygraph: One Machine, Two World Views, Stephen W. Porges
  • The Use of Voice in Security Evaluations, Harry Hollien & James Harnsberger
  • Voice Stress, James Meyerhoff
  • Evaluating Voice-Based Measures for Detecting Deception, Mitchell S. Sommers
  • Emerging Methods and Detecting Stress and Thermal Imaging, Dean Pollina
  • Body Odors as Biomarkers for Stress, Pamela Dalton
  • Radar Technology For Acquiring Biological Signals, Gene Greneker
  • The Physiology of Threat: Remote Assessment Using Laser Doppler Vibrometry, John W. Rohrbaugh, Erik J. Sirevaag, John A. Stern, & Andrew H. Ryan, Jr.
  • The Gaze Control System and Detection of Deception, John A. Stern
  • Eye Movement-Based Assessment of Concealed Knowledge, Frank M. Marchak
  • Multimethod Assessment of Deception on Personnel Tests: Reading, Writing, and Response Time Measures, Andrea K. Webb, Sean D. Kristjansson, Dahvyn Osher, Anne E. Cook, John C. Kircher, Douglas J. Hacker, & Dan J. Woltz

“We are going to the region of the brain which is actually formulating a [deceptive] response”

ScienCentral (12 Sept) rehashes earlier coverage (see here and here) to highlight the work of Temple University faculty Scott Faro and Feroze Mohamed:

[…] rather than focusing on the potential end-result of lying, [Faro and Mohamed] are developing a way to detect deception by looking directly at people’s brain activity using MRI brain scanners. “We are going to the source, we are going to the region of the brain which is actually formulating a response,” says Mohamed, the MRI physicist on the team.

[…] In this preliminary study, the researchers wanted to see whether brain scans can even pick up a significant difference between brain activity during lying versus when telling the truth. The researchers had six of eleven volunteers fire a gun, then lie and say they didn’t. The other five could truthfully say they didn’t fire the gun. All the volunteers were then given functional MRI and polygraph tests during which they denied having fired the gun.

As they reported in the journal Radiology, the brain scans revealed unique areas that only lit up during lying. However, the researchers point out that there is never going to be one telltale spot in the brain that automatically indicates a lie. “There really is no one lying center,” says Faro. “There are multiple areas in the brain that activate because there’s a lot of processes that have to take place.”

[…] In fact, Faro hopes that this technology will usher in a new era of accuracy in lie detection, which could be applied in areas from preventing insurance fraud to freeing falsely accused prisoners.

Reference: Feroze B. Mohamed, Scott H. Faro, Nathan J. Gordon, Steven M. Platek, Harris Ahmad, and J. Michael Williams (2006). Brain Mapping of Deception and Truth Telling about an Ecologically Valid Situation: Functional MR Imaging and Polygraph Investigation – Initial Experience. Radiology, 238(2).
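The paper's actual analysis pipeline isn't shown here, but the group comparison it reports – areas more active in the subjects who denied firing the gun – is conceptually a voxelwise two-sample t-test between the deceptive and truthful groups. Below is a toy sketch with simulated data; the group sizes match the study, while the voxel count, effect size, and threshold are made-up assumptions (a real analysis would also correct for multiple comparisons):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated activation maps: 6 "deceptive" and 5 "truthful" subjects,
# 1000 voxels each. Made-up assumption: a small cluster of voxels is
# more active when subjects deny having fired the gun.
n_lie, n_truth, n_voxels = 6, 5, 1000
lie = rng.normal(0, 1, (n_lie, n_voxels))
truth = rng.normal(0, 1, (n_truth, n_voxels))
lie[:, :20] += 2.5            # fabricated extra activation in 20 voxels

def t_map(a, b):
    """Voxelwise two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(axis=0, ddof=1)
           + (nb - 1) * b.var(axis=0, ddof=1)) / (na + nb - 2)
    return (a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(sp2 * (1 / na + 1 / nb))

t = t_map(lie, truth)
active = np.flatnonzero(t > 4.0)   # crude threshold, no multiple-comparison correction
print(f"{active.size} voxels exceed t > 4")
```

Even this toy version shows why small-sample imaging contrasts are fragile: with 6-versus-5 subjects and a thousand voxels, a naive threshold will both miss genuinely active voxels and occasionally flag noise.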

Time Magazine wonders how to spot a liar

A lengthy piece in last week’s Time Magazine (20 August) rakes over familiar ground:

[…] In the post-9/11 world, where anyone with a boarding pass and a piece of carry-on is a potential menace, the need is greater than ever for law enforcement’s most elusive dream: a simple technique that can expose a liar as dependably as a blood test can identify DNA or a Breathalyzer can nail a drunk. Quietly over the past five years, Department of Defense agencies and the Department of Homeland Security have dramatically stepped up the hunt. Though the exact figures are concealed in the classified “black budget,” tens of millions to hundreds of millions of dollars are believed to have been poured into lie-detection techniques as diverse as infrared imagers to study the eyes, scanners to peer into the brain, sensors to spot liars from a distance, and analysts trained to scrutinize the unconscious facial flutters that often accompany a falsehood.

The article goes on to discuss research on deception using fMRI, electroencephalograms, eye scans and microexpressions. It concludes:

For now, the new lie-detection techniques are likely to remain in the same ambiguous ethical holding area as so many other privacy issues in the twitchy post-9/11 years. We’ll give up a lot to keep our cities, airplanes and children safe. But it’s hard to say in the abstract when “a lot” becomes “too much.” We can only hope that we’ll recognize it when it happens.

New-age lie detector takes a different tack

There’s an interview with Dr Britton Chance, Professor Emeritus of biophysics at the University of Pennsylvania, in the latest issue of the RCMP Gazette (Vol 68, No 2) entitled “Detecting deception” in which Chance outlines his team’s work to develop “a new-generation lie detector that measures deception by detecting sudden spikes in the brain’s bloodflow”. Here’s an extract:

How does this technology measure deceit?

Dr Chance: Deceit usually involves a decision to tell a lie instead of a decision to tell the truth. We can “image” this thought pattern before it’s articulated since it causes an increase in bloodflow to the cerebral cortex, or the brain’s decision-making centre. Users of the cognoscope detect changes in bloodflow through a red spot that appears on the computed images. I observed this relationship between bloodflow and deception in my work, as well as the work of my colleague, Dr. Daniel Langleben, an Assistant Professor of Psychiatry at the University of Pennsylvania.


There are many questions around the accuracy of conventional lie-detection techniques such as the polygraph. Could the use of near-infrared light sensors on the brain serve to boost the accuracy of lie detection techniques?

Dr Chance: Preliminary experiments with the cognoscope at the U.S. Department of Defense’s Polygraph Institute suggest the brain’s frontal cortex gives reliable signals. We have also proposed non-contact sensing of prefrontal activation, thus our optical method is one of the few techniques, if not the only one, that can be used, under proper ethical considerations, for remote sensing of brain functional activity. It is therefore suited for advanced government security tests, such as baggage handling checkpoints at airports. In this case, users could detect deception in passengers who are taken aside and asked if anyone else has handled their bags, etc.

Conclusions about the science of such technology are one thing, but implying that this sort of ‘brain scanning’ technology might be used for “advanced government security tests” at airports is, I believe, pretty irresponsible. It’s not suited for such an application, not now and not any time soon. I’ve written about the overhyping of brain imaging techniques many times before so I’ll try not to repeat myself. But this article is in an official publication, which will be read by law enforcement officers throughout Canada and beyond. It’s a highly technical issue, but with no discussion of the limitations of such technology and no mention of the practical problems of ‘brain scanning’ suspicious individuals, how are readers with limited or no scientific background supposed to judge how useful this technology really will be?

Researchers say technology can show when and how a lie is created inside the brain

Hat tip to the Anti-Polygraph Blog for a link to the San Francisco Chronicle of 6 August, which carried a detailed story on the work of No Lie MRI, the commercial fMRI-for-deception-detection company:

Next week, a San Diego-area company with the crass-but-catchy name No Lie MRI will begin offering clients in California a new high-tech lie-detection service, based on neuroscience that is zeroing in on the “Pinocchio Reflex.” Ensconced in an MRI machine in Newport Beach, these customers will answer questions while a slew of images reveals when and where there is heightened activity in their brains — theoretically indicating the creation of deception. The company claims 50 prospective customers already, including wives who want to assure their husbands of their sexual fidelity, fathers fighting accusations of child molestation in child-custody disputes, and one California defendant the company won’t identify who faces the possibility of a death penalty unless he can convince a jury of his innocence. […]

Skeptics already complain that No Lie MRI and another company, Cephos Corp. of Massachusetts, are rushing to market with technology that has not been rigorously tested to know how reliable it is.

One of the skeptics is Harvard Psychology Professor Stephen Kosslyn:

Kosslyn told the New York Times Magazine earlier this year that trying to combat terrorism by seeking a lie zone in the brain is rather like trying to get to the moon by climbing a tree: Your progress upward creates the illusion of progress, but in the end you’re still in the tree and the moon is still in the sky.

There is also a podcast available on the SFC site, featuring Chronicle reporters Vicki Haddock and Jonathan Curiel talking about the ethical and legal issues involved. Download MP3 here.

Commercialisation of MRI for deception detection

The recent ACLU FOI request seems to have prompted several comments on the thorny issue of commercialisation of MRI for deception detection.

First, Brain Waves (20 June) reported that two companies are to begin offering “Lie Detection” using MRI. (Actually, this is pretty old news – Malcolm Ritter reported on the work of the Cephos Corporation and NoLieMRI back in January this year.) Mind Hacks (27 June) picked up on the Brain Waves story and added value with some good links here.

On 26 June USA Today ran an interesting piece on the commercialisation of MRI that opened:

Two companies plan to market the first lie-detecting devices that use magnetic resonance imaging (MRI) and say the new tests can spot liars with 90% accuracy.

[…] No Lie MRI plans to begin offering brain-based lie-detector tests in Philadelphia in late July or August, says Joel Huizenga, founder of the San Diego-based start-up. Cephos Corp. of Pepperell, Mass., will offer a similar service later this year using MRI machines at the Medical University of South Carolina in Charleston, says its president, Steven Laken.

One would hope that with the research on such technology in its infancy, we wouldn’t be seeing the technology in court any time soon. Think again:

[…] The start-up companies say the technology is ready now. Both say they will focus on winning acceptance in court for tests taken by customers. No Lie MRI already is working with a defendant in a California criminal case, Huizenga says.

Finally, also on 26 June, NPR’s Talk of the Nation featured a segment on The Future of Lie Detecting in which Daniel Langleben and Paul Root Wolpe discussed the use of MRI for lie detection (hat tip to Mind Hacks and the Antipolygraph Blog).

FOI request from the ACLU aims to expose whether US Government agencies are using brain scanning technology to detect deception

The Internet is buzzing with the news that the American Civil Liberties Union has filed a Freedom of Information request seeking information about Government use of brain scanners in interrogations.

According to the ACLU press release, the organisation has filed the request because:

“There are certain things that have such powerful implications for our society — and for humanity at large — that we have a right to know how they are being used so that we can grapple with them as a democratic society,” said Barry Steinhardt, Director of the ACLU’s Technology and Liberty Project. “These brain-scanning technologies are far from ready for forensic uses and if deployed will inevitably be misused and misunderstood.”

[…] “These brain-scanning technologies have potentially far-reaching implications, yet uncertain results and effectiveness,” said Steinhardt. “And we are still in our infancy when it comes to understanding the underlying processes of the brain that the scanners have begun to reveal. We do not want to see our government yet again deploying a potentially momentous technology unilaterally and in secret, before Americans have had a chance to figure out how it fits in with our values as a nation.”

Earlier in June the ACLU sponsored a forum featuring experts discussing the use of fMRI as a “lie detector”, the video of which can be downloaded here.


More on fMRI to detect deception on this blog here.

Nature focuses on ethics of brain scanning to detect deception

BrainEthics Blog discusses two articles in the latest issue of Nature:

[…] this week’s issue of Nature caught me surprised with the release of two articles on ethical aspects of neuroscience. It really demonstrates how hot and important this issue is. Basically, both articles are on the application of brain scanners to detect lies.

The articles are in Nature Volume 441 Number 7096, on page 207:

  • Neuroethics needed: Researchers should speak out on claims made on behalf of their science.

and page 918:

  • Lure of lie detectors spooks ethicists: US companies are planning to profit from lie-detection technology that uses brain scans, but the move to commercialize a little-tested method is ringing ethical and scientific alarm bells. Helen Pearson reports.

Regardless of whether you can access the Nature articles, I urge you to go and take a look at the BrainEthics post. It does a great job of summarising the key issues.

The ethics of fMRI for deception detection

Thank you to Enrica Dente’s Lie-Detection list for bringing a new article in the Stanford Report (May 3) to our attention. The article summarises a recent talk by ethicist and law professor Hank Greely about the ethics of using fMRI for deception detection:

Greely […] discussed his concerns about the new lie detection technology at a campus Science, Technology and Society seminar April 14. Greely said he is excited by the potential for improved lie detection but concerned that it could lead to personal-privacy violations and a host of legal problems—especially if the techniques prove unreliable.

[…] “Deception is not a very clear-cut, well-defined thing,” Greely said. “We know people can remember things that never happened. How does that show up on an fMRI lie detection test?”

Access the full article here.

Update on “brain fingerprinting” criminals in India

Last week, I highlighted recent blog activity discussing the use of brain scans by the Indian police and I said I might return to the contentious issue of “brain fingerprinting”. Well, I don’t need to, because Sandra at Neurofuture has done it for us. She has a great follow-up post on brain fingerprinting with loads of informative links and considered comments on the use of this unproven technology in court cases in India:

Nobody seems to be looking at its use in India since the US sold them the technology. It’s being misused, applied to the wrong cases, even though the source of the technology publicly warns against its use in sexual assault trials.

Sandra’s done a splendid detective job here and dug up some really interesting material. Not much left for me to add, apart from a comment on her statement that “brain fingerprinting was developed with the FBI…”. I don’t know if that’s true, but you might want to check out the comments of Sharon Smith from the open archives of the Forensic Linguistics email list. Smith is the (now retired) FBI officer who worked with Farwell in the early 2000s on a student project (she was working on a psychology qualification in her spare time):

Although I was the point of contact while I was employed by the FBI, whenever the media asked about its utility, I refused to advocate its use, however, I told people what I have told you, although in more detail, and let them make up their own minds.

Here is the journal article that she and Farwell published as a result, reprinted in full on Farwell’s site here.

Brain scans used in trial in India

Via OmniBrain (28 March 06), a rather troubling report that “brain scans” have been used in the trial of an alleged rapist in India.

The results of the brain-mapping and polygraph (lie-detector) tests conducted on rape accused Abhishek Kasliwal have come out in favour of the prosecution. The Mumbai Police had conducted the tests on March 19 at the Central Forensic Science Laboratory in Bangalore.

Sadly, no further details are available, but OmniBrain author J. Stephen Higgins is on the case. Keep an eye on the comments there to see how far he gets.

UPDATE (3 April)

Sandra at Neurofuture is also on the case, and has posted some more detail and links. One link in particular is illuminating, revealing that the Indian police use the polygraph, EEG and sodium pentothal. [Which is apparently fine, because:

“Narcoanalysis is a very scientific and a humane approach in dealing with an accused’s psychological expressions, definitely better than third degree treatment to extract truth from an accused,” affirms Dr Malini.

Well, definitely better than the ‘third degree’, to be sure. However, here’s Wikipedia on the topic:

While fictional accounts of intelligence interrogation give these drugs near-magical abilities, information obtained by publicly disclosed truth drugs has been shown to be highly unreliable, with subjects apparently freely mixing fact and fantasy. Much of the claimed effect relies on the belief of the subject that they cannot tell a lie while under the influence of the drug.]

I digress. According to the 2004 Deccan Herald article that Sandra found, “EEG” appears to refer to Larry Farwell’s controversial Brain Fingerprinting technique. I might come back to that when I have more time. Meanwhile, keep up to speed with Sandra’s detective work via Neurofuture.

Reading Minds: Lie Detection, Neuroscience, Law, and Society

If you are in California this Friday, 10 March (and how I wish I was!), this would be a very interesting way to spend your day. Stanford Law School is putting on a one-day conference on lie detection and neuroscience. Here’s the blurb:

A revolution in neuroscience has vastly expanded our understanding of the human brain and its operations. Our increasing ability to monitor the brain’s operations holds the possibility of being able to detect directly a person’s mental state. One of the most interesting possible applications is using neuroscientific methods to provide reliable lie detection. Several scientists, and several companies, claim that this use has arrived. The morning session of the conference will examine the scientific plausibility of reliable lie detection through neuroscientific methods, discussing different methods and assessing their likely success. The afternoon session will assume that at least one of those methods is established as reliable and will then explore what social and legal ramifications will follow. This conference is free and open to the public.

There’s a link to the full agenda on the conference website, but just look at the line-up:

Wow. Thank you to Neuroethics & Law Blog for the highlight!