Protecting Sacred Space: Real "Lie Detection" Would Threaten Human Rights As We Know Them

In previous posts, Dr. Vera Wilde detailed problems with the iBorderCtrl team’s claims about its secretive AI “lie detector” component and its supposed ability to detect “biomarkers of deceit,” as well as reasons why the tool is prone to automating discrimination; why, building on Ioannidis, the iBorderCtrl team’s past accuracy claims and future published findings are likely to be false; and why, building on Damasio, the emerging neuroscientific consensus against Cartesian dualism, particularly in the realm of decision science, suggests that building AI decision-making into processes that substantially impact security decisions, as iBorderCtrl does, may be a very bad idea. But who cares about any of this if better tech is coming soon? Could real “lie detection” solve all these problems? Imagine there’s no liberty of being, thought, or feeling prior to outside observation—no protected sacred space of personhood…

Inside every person, there is a force that may be evident yet is unseen. An internal spark of life some people would call spirit. A being some would call soul. Others would disdain the language and baggage of the whole religious realm, preferring to see and say things about the mind and perhaps the heart—but including perhaps a non-dualist understanding à la Damasio of multi-directional communication between the brain and body via the vagus nerve and other pathways as suggested by Porges and others. Still others would insist on talking about the brain alone, and its pride of metaphorical place as the operating system on the hardware of the body. Contemporary scholars of cognitive liberty have thus far not grappled much with the question of what exactly this force is. What is it that is sacred inside of us, where we are free to be, and from this being there can arise free thought and feeling—and from that free thought and feeling, free speech, religion, and association—and all the other civil rights and civil liberties to which we owe the quality of life in free societies that we might wish to better understand, to bolster, and to protect from future incursions by increasingly pervasive surveillance technologies, brutish authoritarianism, resurgent prejudice, and the latent traumas of climate and other disasters that seem likely to prune the burning bush of human civilization?

Science and the humanities cannot yet offer a definitive answer to this question. And yet we know quite well across both cultures, in C.P. Snow's parlance, the blessing and the curse of this sacred space. For in addition to being the uncelebrated basis of all other freedoms, this sacred space of being before will, knowing before articulation, becoming before conscious knowledge, and all the other dressings of the mind in the darkness of its full and still-mysterious workings—in addition to giving us freedom to be true, to live authentically, to express who we are—this sacred space also gives us the ability to lie. So religion, according to anthropologist Scott Atran, can be defined as: “(1) a community's hard-to-fake commitment (2) to a counterfactual and counterintuitive world of supernatural agents (3) who master people's existential anxieties, such as death and deception.”

Usually we hear about death and taxes (Benjamin Franklin), or death and freedom qua personal responsibility, isolation, and meaninglessness (Irvin D. Yalom), as the great existential anxieties. But here we have deception on par with death, and it makes sense. Not knowing whether people are telling the truth about their intentions can indeed be life-threatening. If your neighbor invites you over for a friendly dinner and then surprises you with a machete, or your wife prefers to pass on your best friend's genes over yours, then your genetic line might die out—a worse evolutionary loss than your own, inevitable death.

But we have yet to beat death or deception. Through incredible advances in hygiene, maternal healthcare, preventive medicine, the treatment of infectious disease, nutrition, basic infrastructure, and more, we have greatly prolonged lifespans and enhanced quality of life for billions of people (probably more than you think). The average lifespan now is about 70. People still die, but they die much later than they used to, having lived healthier lives in lots of ways. Similarly, people are still vulnerable to being lied to. There is no fountain of youth that will prevent death, and there is no “lie detector” that will determine when someone is lying or telling the truth. But of course there are fraudsters selling immortality, life extension, “credibility analysis,” and other bollocks. And not all of what they're doing is completely baseless—although it's been argued that when you look at how the math shakes out, the fact that there is signal in the noise actually makes currently dominant deception detection techniques like the polygraph “worse than worthless.”
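To see how that math can shake out, consider a minimal sketch of the base-rate problem in screening. The numbers below are hypothetical illustrations chosen for arithmetic clarity, not figures from any cited study:

    # Minimal sketch of the base-rate problem in deception screening.
    # All numbers are hypothetical illustrations, not figures from any study.

    def screening_outcomes(population, base_rate, sensitivity, specificity):
        """Return (true_positives, false_positives) for a screening test."""
        liars = population * base_rate
        truth_tellers = population - liars
        true_positives = liars * sensitivity                 # deceivers flagged
        false_positives = truth_tellers * (1 - specificity)  # honest people flagged
        return true_positives, false_positives

    # Suppose 10 real deceivers among 10,000 screened employees (0.1% base rate),
    # and a polygraph generously assumed to be 80% sensitive and 80% specific.
    tp, fp = screening_outcomes(10_000, 0.001, 0.80, 0.80)

    print(f"Deceivers caught: {tp:.0f}")       # 8
    print(f"Honest people flagged: {fp:.0f}")  # 1998
    print(f"Chance a flagged person is deceptive: {tp / (tp + fp):.1%}")  # 0.4%

On those charitable assumptions, over 99% of the people the test flags are innocent—which is the core of the argument that a little signal in the noise can still leave screening worse than worthless in practice.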

“Lie Detection” So Far Is Fraud

As this website has explored in greater depth in numerous other places (starting here), the history of “lie detection” has always been a history of fraud. That sounds quite harsh, though. Another way of saying it would be that societies have often developed rituals around distinguishing truthful from deceptive people—sometimes involving outright torture as in trial by ordeal, and at other times involving mere pseudoscience as in polygraphs. Just as the threat of death has partly driven the development of rituals around mortality, so too the threat of deception has partly driven the development of formerly religious and increasingly secular rituals around truth-telling. But these rituals are not yet really scientific.

The popular imagination may be impressed (or at least titillated) by polygraphs on seedy talk shows. But scientific consensus has been consistent for a hundred years that this stuff does not work. There is no unique “lie response” to detect. That the polygraph remains an American obsession, and lie detection a multi-billion dollar annual industry in the U.S., is both a comparative anomaly largely specific to the U.S. context, and a non-specific symptom of rampant corruption throughout American institutions and society.

The industry peddling polygraphs is corrupt, and its industry scientists do demonstrably bad science. The military and intelligence communities are deeply involved in perpetuating this scam. Polygraph programs epitomize government waste, fraud, and abuse.

Of course polygraphs aren't the only game in town. There are also next-generation incarnations, including border screening “lie detection” AI tools like iBorderCtrl, the EU-funded automated system this website opposes. These tools are similarly not based on credible science.

But maybe it could be done. Whether scientists might yet discover a reliable sign or collection of signs of deception remains an open question. And, as the National Academy of Sciences noted in its report on the scientific evidence on polygraphs, and as others have since reiterated as current scientific consensus, the most promising research in this direction looks directly at the brain rather than at its proxy measures in the body's autonomic nervous system. Neuroscience may yet—someday—give us something like a “psychic X-ray.”

A cynical view of this field might see the history of fraud and abuse in lie detection, and call it all likely to come to nothing of ethical use or scientific validity in the end. But a more charitable view might return to the analogy with the field of life extension. There seem to be ever-more anti-aging drugs, technologies, and lifestyle interventions—some of them based on real, if still early, medical and scientific advances.1) Are we on the verge of discovering the Holy Grail that banishes aging? Some people think so. Others think no, not now; but it's still possible that we'll get there someday. Hardly anyone is so satisfied by the historical reality of mortality as to say that we'll absolutely never achieve (additional) meaningful advances in life extension. Instead, there's a general consensus that it's not impossible that we'll find a new path or set of paths to a new normal, possibly even one that doubles average human lifespans. Even if we're not there now, the threshold to such a new way might exist, and we might be getting closer. Could it be the same with discovering a psychic X-ray?

Neuroscience and neurotechnology today do indeed seem to be bursting with advances in mind-reading of a sort—from assessing for the presence of so-called “guilty knowledge,” to manipulating preferences, memory, and other information at the neural level. Directly tapping the brain (whether to observe, use, or alter its contents or functioning) seems to be demonstrably possible. Whether it will become reliable for real-world purposes, or usable in cases where people don't want to cooperate, however, remains an open question—one that the existing legal regime around human rights is ill-prepared to confront.

Here are just a few examples of how far these advances are already progressing…

Psychic X-rays in the Making?

1. Helping Paralyzed People “Talk” and Hearing-Impaired People Listen

People who are “locked in”—unable to speak or move due to loss of motor function from paralysis from spinal cord injury, motor neuron disease, stroke, muscular dystrophy, or loss of limbs—might be able to regain some function in these areas through brain activity alone with the help of the thought-to-text neurotech device Stentrode2) (stent + electrode) and BrainOS software. In an early feasibility study with five patients set to begin in June 2019, doctors will use cerebral angiography instead of open brain surgery to implant the new device, a stent (metallic mesh tube) with electrode contacts, into a blood vessel of the brain overlying the motor cortex. This is the first human research into this new, less invasive method of implanting electrical sensors—in effect, brain surgery without brain surgery—so paralyzed patients can then use brain signals to control assistive technology with their thoughts.

Similarly, research into helping hearing-impaired people home in on the speaker they're trying to hear in a busy environment (the cocktail party or busy cafeteria problem) is progressing. Recently, scientists using a new speech separation algorithm were able to tell which speaker the listener was trying to hear based on the listener's brainwaves, and then automatically amplify only that speaker's voice to facilitate the selective hearing that real-world listening requires. The algorithm works by exploiting the fact that the auditory cortex represents the attended speaker more strongly than unattended sources. The automatic channel identification and amplification closes the gap between clean speech sounds in controlled settings and mixed speech sounds in real-world environments that has long hindered technological efforts to help hearing-impaired people functionally hear.3) This research lays the foundation for a brain-controlled hearing aid, much like Stentrode's brain-controlled talk box.
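Here is a minimal sketch of that selection step, assuming (hypothetically) that a speech separation stage has already split the mixture into candidate sources and that a linear “stimulus reconstruction” decoder has already been trained on data where the attended speaker was known. The published system uses deep neural networks for both stages; everything below, including the function names, is an illustrative simplification:

    import numpy as np

    # Illustrative sketch: pick the attended speaker by correlating a
    # brain-reconstructed speech envelope against each separated source.
    # The real system uses deep networks; this is a linear toy version.

    def reconstruct_envelope(eeg, decoder_weights):
        """Estimate the attended speech envelope from multichannel EEG.

        eeg: (n_samples, n_channels); decoder_weights: (n_channels,),
        assumed to have been learned from training data beforehand.
        """
        return eeg @ decoder_weights

    def pick_attended_source(eeg, source_envelopes, decoder_weights):
        """Return the index of the source the listener is attending to."""
        reconstructed = reconstruct_envelope(eeg, decoder_weights)
        scores = [np.corrcoef(reconstructed, env)[0, 1]
                  for env in source_envelopes]
        return int(np.argmax(scores))

    def remix(sources, attended_idx, gain_db=9.0):
        """Boost the attended source relative to the rest before playback."""
        gain = 10 ** (gain_db / 20)
        mix = sum(src * (gain if i == attended_idx else 1.0)
                  for i, src in enumerate(sources))
        return mix / np.max(np.abs(mix))  # normalize to avoid clipping

The design intuition is simple: because auditory cortex tracks the attended speech envelope more faithfully than unattended ones, the envelope reconstructed from brain activity should correlate best with the speaker the listener wants to hear.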

This type of thought-to-text tech for “locked-in” patients seems to date to early EEG brainwave research (in the mid-to-late 1980s) on P300 responses, or bursts of electrical activity in the brain about 300 milliseconds after the relevant stimulus. Initially, users had to wait for each letter to flash on a computer screen and let their brain activity select it, slowly spelling out what they wanted to say through a voice synthesizer. That has gotten much faster and better in tech like Stentrode, and it's also branched off into other use cases…
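A toy version of the classic P300 speller logic, for flavor (real spellers flash rows and columns of a letter grid many times and use trained classifiers; the fixed window and raw averaging below are illustrative simplifications):

    import numpy as np

    # Toy P300 speller selection. Real systems flash rows/columns of a
    # letter grid repeatedly and use trained classifiers; here we just
    # average each letter's epochs and compare amplitude around 300 ms.

    FS = 250  # sampling rate in Hz (assumed for illustration)
    WINDOW = (int(0.25 * FS), int(0.40 * FS))  # ~250-400 ms post-flash

    def p300_score(epochs):
        """Mean amplitude in the P300 window for one letter's epoch stack.

        epochs: (n_flashes, n_samples), time-locked to that letter's
        flashes; averaging across flashes suppresses background EEG.
        """
        averaged = epochs.mean(axis=0)
        lo, hi = WINDOW
        return averaged[lo:hi].mean()

    def select_letter(epochs_by_letter):
        """Pick the letter whose flashes evoked the strongest response."""
        scores = {letter: p300_score(ep)
                  for letter, ep in epochs_by_letter.items()}
        return max(scores, key=scores.get)

Because the P300 appears after a rare, task-relevant stimulus, the letter the user is silently counting flashes of should carry the biggest averaged response; everything else averages toward noise.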

2. Checking Brains for “Guilty Knowledge”—and Retraining Your Own Mood or Attention

The CIA gave Larry Farwell over a million dollars to develop “brain fingerprinting.” Building on lifelong polygraph critic David Lykken's guilty knowledge or concealed information test4), “brain fingerprinting” uses P300 responses to assess familiarity. It has been admitted as exculpatory scientific evidence in criminal court.

The world has changed since this type of research was getting started in the '80s. An off-the-shelf EEG headband you can use at home, with free apps and/or the software it comes with, now costs about €150-300 on Amazon. The idea is that by training self-regulation of brain activity, users can learn to manage neuropsychiatric disorders like depression, anxiety, and ADHD—or simply learn to meditate better for well-being.

3. Brain Stimulation for Mental Health

In the realm of treating mental illnesses such as depression and anxiety—as well as in the broader health realm including language/movement disorders, impaired cognition, and quality of life issues relating to Parkinson's disease, traumatic brain injury, stroke, and chronic pain—there are also a range of brain stimulation technologies, from (painless and portable) transcranial direct-current stimulation, to (the less comfortable and portable) transcranial magnetic stimulation, to (the more invasive, surgical) deep brain stimulation. It has a bad rap from its history of abuse in involuntary psychiatric treatment, as infamously relayed in Sylvia Plath's The Bell Jar, but electroconvulsive therapy (ECT, “shock therapy”) is also increasingly studied and used by some of the same brain stimulation experts for its (kinder, gentler, and more actively consensual, not to mention anesthetized) uses in contemporary depression treatment. So far, drugs like ketamine for depression and MDMA for PTSD seem to have much greater reliability, efficacy, and accessibility. But the prospects for advances in health through brain stimulation remain promising.

4. “Mind-Reading”

There are recurring headlines about advances in mind-reading, much as there are about advances in fusion. Except when it comes to fusion, we're always thirty years away from achieving it. Meanwhile, the level of precision in mind-reading has been advancing steadily—within limits.

If you look up mind-reading and a year some decades back, such as 1970, you're likely to find articles on the Amazing Kreskin and other entertainers who sold the public (and sometimes the government) on their abilities. What magicians did then through cold reading, researchers now attempt through technology. Using fMRI and machine learning, researchers have been able to reconstruct video clips people had watched based on their brain activity. Although at present the method still requires training data from each subject and the reconstructed clips are blurry, this is the sort of result that shows we are really getting somewhere when it comes to decoding brain activity.5)

Some of this decoding is about reconstructing what people have seen or guessing what they are about to do, but some of it is about decoding who they are in a broader way. For instance, liberals and conservatives seem to have significant differences in brain structure and activity that could be used to identify them. We're a long way from an airport scanner that only lets through people whose brains look like they're probably innately geared towards loyalty to the Party, but it's not an impossible future.

However, just because we might be able to tell something about people's innate political orientations, or movie clips they've just watched—or anything else relating to intrinsic propensities or factual content—from scanning their brains, doesn't mean that we can tell how they feel about it. That is one of neuroscientist Russell Poldrack's clearest conclusions from in-depth research on brain imaging techniques including fMRI: We can see what brain regions are active on imaging, but we can't infer from that what people are experiencing—or the sense they make of it in cognitive, emotional, or integrative terms.

So on one hand, neuroimaging tools like fMRI look more directly at the brain, offering a more direct pipeline to the soul than the polygraph and its next-generation analogues, which tend to look at proxy measures for autonomic nervous system arousal. But on the other hand, they're still running up against the same old wall. Just because you know someone experiences autonomic arousal, doesn't mean you know why (out of a broad range of internal experiences that could cause that arousal). And just because you know someone has a particular brain structure or activity or pattern of activation in response to stimuli, doesn't mean you know how they feel—or how they feel about how they feel—or what they choose to do about it. The internal experience remains… internal. The brain seems more direct, and fMRIs might literally look like psychic X-rays… But maybe brain imaging does not actually get us meaningfully closer to “reading” someone else's internal experiences than the old polygraph proxies of autonomic arousal, after all. Perhaps it can still be done someday, but neuroimaging may or may not be how we end up doing it.

The psychic X-ray in this sense remains more pipe dream than pipeline. That hasn't stopped next-generation lie detection researchers, like those behind the U.S. border security technology AVATAR, from claiming in the media that they are actively looking for ways to incorporate non-invasive brain measurements into their screening. Such a use would be insufficiently scientifically based for reliance in real-world contexts.

5. Mind-Changing

So neuroscience and neurotech are advancing—within limits—in the mind-reading realm. But that's nothing compared to advances in mind-changing. In the political realm, recent research by Colin Holbrook and colleagues shows that continuous theta burst transcranial magnetic stimulation applied to the posterior medial frontal cortex reduces the typical magnification of religious and nationalist ideological investment in response to mortality threat. That is, we know from a long tradition of research into authoritarianism that priming people with death, threat to their own mortality, or other threats to security usually increases right-wing authoritarianism, bias toward in-group and against out-group, beliefs in God and country, and the like. It is really remarkable that stimulation of a particular brain region significantly decreases that response, because the response is known in social psychology as a robust effect that is difficult to change.

A government could theoretically increase tolerance among right-wing terror groups or their criminal supporters by subjecting them to this type of treatment. At last, the open society has ways of making people tolerant! The utilitarian case for this is similar to that for treating other violent offenders against their will. It's fraught, but it exists—on both sides of the political spectrum. One could well imagine the right wanting to use this technique against would-be Islamist suicide bombers, the left wanting to use it against violent white nationalists or Nazis—and the state being happy to expand its power in either case.

Thus far, however, work like Holbrook et al.'s is just academic science for the sake of science. Corporations are also exploring new neuro-marketing methods for the sake of profit. The irony of advances in this field is that they are in some ways very much like pre-neurotech advances in marketing. There is a good chance that your favorite grocery store uses science to regularly trick you into unnecessary purchases that are in its interests and not yours. There's a decent chance that your favorite chocolate brand manipulates your emotions implicitly to get you to choose it. This stuff is unconscious, it's non-consensual, it arguably hurts consumers and society alike in contexts like the diabesity epidemic, and it's completely unregulated. As Trevor Paglen notes, corporations—and their clients and allies, who may be individuals, organizations, governments, or a combination of all these—literally have teams of advertising psychologists who know how to influence your choices and can individually target you with negligible effort, influencing your behavior in ways that other people will never know you've been influenced; but still you think—and your friends, colleagues, and the law think—that you have free will.6)

On one hand, it will shock no one to acknowledge that late capitalism is government by corporate interests. On the other hand, it is intuitively shocking to look at the history of advertising (to adults) and see that it has remained largely unregulated in the face of hugely harmful phenomena to which it contributes, such as diabesity and climate change. That corporations may legally manipulate unwitting consumers to act against their own interests (and society's) for concentrated private profit and power is a bit strange—unless you consider that the logical check against such manipulation is government, and governments are overwhelmingly bought in pertinent public policy terms by the same corporations.

This regulatory regime (or lack thereof) is important in the context of neuromarketing because it underscores how the myth of free will ironically prevails, even while the cognitive liberty underpinning freedom of thought and its ilk receives scant acknowledgment, articulation, and protection. Existing consumer protections in advertising largely do not address unconscious manipulation. And this lack matters in the private sphere beyond consumer relations, too. It is not only consumers who suffer at the hands of exploitative corporations, but also individuals in everyday relationships. As Stephen Porges writes:

“We live in a world that has a cognitive bias and assumes that our actions are voluntary. We are confronted with questions related to motivation and outcome. We are asked about costs, risks, and benefits. However, state shifts in the neural regulation of the autonomic nervous system are usually not voluntary, although the state shifts have profound impact on behavior.”7)

It is difficult to explain yourself to others, to be seen and understood, when you scarcely know why you respond the way you do, yourself. This is the problem “trigger warnings” are intended to address, but the relevance of neuroception (neural responses before the level of conscious perception that can shape emotion, thought, and behavior) is much broader and deeper than only trauma-triggering situations for traumatized people. In this sense, “trigger warnings” arguably misunderstand neuroception and do not help traumatized people or the allies who would include them in public life rather than contributing to the cycle of isolation characteristic of mental health problems like depression and PTSD. Everyone can be triggered into a state of greater autonomic arousal by environmental stimuli we cannot even perceive.

Despite the science showing how vulnerable we can all be to consciously imperceptible inputs, there is really a wild west of unregulated consumer manipulation that seems unlikely to meaningfully change anytime soon. In this sense, maybe states and other powerful actors don't even need a shiny new psychic X-ray for mind-reading or mind control purposes. They can already pretty well control enough individuals in enough of their real-world political and consumer behaviors to tip the balance of democratic processes, public and planetary health, and CEO bank accounts. Maybe the psychic X-ray is here, and it's your data.

Recap and Caveats

To recap, we have 100 years of lie detection not working, in the sense of lacking sufficient evidentiary basis for any identified real-world uses according to long-standing scientific consensus. The industry peddling it is corrupt and pseudoscientific. But the polygraph and its next-generation “lie detection” tools, such as lie detection AI projects including iBorderCtrl, are not the only game in town. They sit at the low end of a continuum whose high end is producing some fascinating advances in neuroscience and neurotech that have the potential to help—and hurt—a lot of people. We increasingly understand what's going on in the brain, giving us a more direct pipeline to the soul. It really looks like the future holds a better, albeit limited, view into the brain's underlying processes, intent, consciousness, opinion formation, memory… Mind. Maybe we can eventually see, hear, and even touch inside the sacred space.

This better view comes with all these possible caveats and pitfalls…

  • Maybe it's highly individually variable.
  • Maybe it's especially variable within small, diverse, but aggregate-large subgroups like the neurodiverse: variability that would take a lot of expensive, time-consuming research to identify, and that monied interests are not especially keen to fund.
  • Maybe it only works on a tiny percentage of people.
  • Maybe it requires cooperation. Think of needing to hold perfectly still in an MRI machine, where even the slightest movement can prevent accurate imaging. Are we going to restrain and/or temporarily paralyze criminals (or crime victims and other witnesses) to force compliance?
  • Maybe it produces lots of noise—there is a signal somewhere in there, but wading through the noise to find one in particular could remain too computationally intensive even for machine learning to conquer.
  • Maybe a lot of results that look impressive now will not stand the tests of time, new results, and attempted replication of the old results.

That last pitfall sounds the most historical and the least nitty-gritty scientific. But of all the possible Achilles' heels of recent neuroscience and neurotech advances, this one actually seems the most likely to bring the arc of cumulative progress crashing back down into rubble and ritual when it comes to mind-reading. Just look at the recent scandal in which Eklund and colleagues repeatedly showed that several commonly used fMRI analysis software tools hugely inflate false-positive rates (a common software package was also overestimating significance due to a separate bug, but that was fixed). This work fits in the context of the replicability crisis and Ioannidis' work on how most published research findings are likely to be false.
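For a feel of the underlying statistical problem, here is a toy simulation of uncorrected multiple testing on pure noise. It illustrates the general inflation mechanism only; it is not a re-analysis of Eklund et al.'s cluster-inference findings:

    import numpy as np

    # Toy illustration: uncorrected voxelwise tests on pure noise still
    # produce a steady stream of "significant" findings. Not a re-analysis
    # of Eklund et al.'s cluster-inference work, just the general principle.

    rng = np.random.default_rng(0)
    n_voxels, n_subjects = 10_000, 20
    T_CRIT = 2.09  # two-sided p < 0.05 critical t for df = 19

    # Null data: no true effect anywhere in the "brain."
    data = rng.standard_normal((n_voxels, n_subjects))

    # One-sample t-test per voxel against zero.
    means = data.mean(axis=1)
    sems = data.std(axis=1, ddof=1) / np.sqrt(n_subjects)
    t_values = means / sems

    hits = np.abs(t_values) > T_CRIT
    print(f"'Significant' voxels on pure noise: {hits.sum()} "
          f"(~{100 * hits.mean():.1f}% of 10,000; ~5% expected uncorrected)")

Proper correction for the tens of thousands of tests in a whole-brain analysis is supposed to tame exactly this; what Eklund and colleagues showed is that widely used cluster-based corrections were far leakier than their advertised 5% error rates.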

The history of lie detection does not really give us cause to be optimistic about its future. Neuroscience might change this. But the current meta-science does not really give us cause to be optimistic, either.

As-If…

Putting all that aside, suppose we can make something like a psychic X-ray in the future. Then what does that mean? Does it not mean we have a right not to be looked into like that? Doesn't this at least prospectively violate prohibitions against coerced self-incrimination (e.g., being tortured or otherwise forced to confess)? Or are brains that can be read just like DNA that can be sequenced, so police could request and a judge could order mind-reading whether you like it or not, given probable cause? Is there a sacred space, a being or part of consciousness, that is perhaps also bodily in some ways and yet qualitatively not like a body part or bit of blood or hair—so that in an era of a real psychic X-ray it might produce forensics we could rely on as a matter of science to determine the truth, to prove your guilt or innocence—but that we would not want to use as a society?

The implicit inner castle doctrine behind the U.S. Fourth and Fifth Amendment protections against unreasonable search and seizure (including specifically of your person) and against self-incrimination, as advised in the customary police Miranda warning8), goes something like this:

A man's mind is his castle.9) The privacy of that castle is protected to the highest degree among privacy rights. Sort of. At least, it's well-established in civil and human rights law that privacy matters for many other fundamental rights. In the U.S., its significance is enshrined in Constitutional law as “penumbra theory,” or the idea that the Bill of Rights only makes conceptual sense if we assume a privacy right cast by the shadows (penumbras) of its explicitly delineated rights. (Apparently, “shadow theory” would have sounded too creepy and ridiculous.) So it's at once an afterthought and a foundation. Privacy of thought, feeling, and other aspects of what we might call the internal experience (although they might also be bodily in various ways) is a logically necessary but not sufficient condition of the other, expressly articulated freedoms. For there can be no freedom of religion, speech, press, or assembly without freedom of being, thought, and feeling.

Why hasn't this been said, enshrined in law from the first constitution, and defended in court from the first civil rights case? Is privacy somehow a privileged twenty-first century concern that we only get to after greater external rights protections are entrenched (for many people in many places)? Or is the historical silence on this sacred space and its apparently foundational importance to civil rights a problem of imagination that technological advances force us to address in this moment for reasons of existential necessity, whereas before it was an optional point of conceptual clarity?

Neither ancient Stoics like Epictetus nor Reformation-era thinkers like Milton could imagine that will itself, the internal being of a person, could be manipulated and compelled in the direct ways we now know it may be: through older techniques and tools such as hypnosis and psychedelics, and newer ones such as the brain stimulation technologies discussed above. These new possibilities for invading the privacy of not only a man's physical castle (his home) but also his figurative one (his mind) blur the line between real or physical evidence (e.g., fingerprints and DNA) and testimonial evidence, a line previously drawn to square the then-new science of forensics with the old protection against self-incrimination. Historically, privacy rights are relative, not absolute. Courts have weighed costs and benefits to individuals and society when drawing these lines.

But conceiving of privacy rights as relative conflicts with conceiving of the right to remain silent as a fundamental right in itself, and as part of the equally fundamental right of due process. If your mind itself can be directly tapped against your will, then your right to privacy protecting against that must be absolute—or else these other rights can no longer exist. Thus Paul Wolpe suggests the privacy of the mind is unique, foundational, and must be absolute.10)

But, relative or absolute, that right does not yet have a firm and explicit legal, scientific, or philosophical basis. This raises new questions about how legal and ethical regimes can and should protect fundamental rights. These questions have been most notably explored by Ienca and Andorno, who suggest new human rights may be needed to protect existing human rights from the implications of emerging neurotechnology applications: the rights to cognitive liberty (aka mental self-determination per Bublitz 2013, controlling one's own thought process as a necessary condition of nearly every other freedom per Sententia 2004), mental privacy, mental integrity, and psychological continuity. While scholars push for the law to catch up with technology, lawyers at organizations like the Electronic Frontier Foundation bend the digital world toward fairness using pre-digital ideas, according to EFF Executive Director Cindy Cohn.11) In this context, typical civil rights proponents like scholars, NGOs, journalists, and activist groups have a rare opportunity to shape the legal regime pre-emptively, before neurotech has advanced in research and field use, and before some of its more dangerous potentials have even been developed (if indeed they can and will be). Now—before the technology actually exists to violate these ill-protected rights—is the time to consider these questions and try to generate a framework to protect privacy at the level of the mind.

Conclusion

We have cause to worry about the possibility that someone might one day develop something like a psychic X-ray. That doesn't mean anybody currently looking at this is on the right track. That goes especially for everything using autonomic nervous system proxy measures to quantify or estimate probabilities of deception as an internal state corresponding to arousal—like the control question test polygraph and its next-generation pupil dilation measurement tools. These techniques are insufficiently evidence-based for reliance in real-world contexts. Just because it could happen someday, doesn't mean anyone can currently read your mind.

The fact that we can't do this yet—but there's already a whole industry that ostensibly thinks that we can, and makes lots of money (especially government money) selling that lie—is itself a very dangerous state. It's a situation ripe for waste, fraud, and abuse. As Julia Angwin has noted, even inaccurate surveillance tools contribute to attempted compliance with illegitimate authorities, because they give people the illusion that the system can be gamed with good behavior: a perverse incentive to defect from the collective interest of fighting the power, and to go along to get along instead.12) And that situation is worsening as that industry continues growing. Meanwhile, someone may actually someday develop a lie detector, pipeline to the soul, psychic X-ray, or whatever, that really works. We should be ready for that well before it happens, or our human rights and societies will suffer.

If we define a right, and the tech that right guards against never develops to a point where it really works, is that so bad? Then the tech can't properly violate the right. Okay. It's still better to have defined the right that the tech turns out unable to violate. You don't have to be a believer in lie detection or anything like it in order to define this right. In fact, it's easier for a skeptic, because for a non-believer there's no cost to defining it. We're not losing the ability to capture terrorists or prevent mass murder, because we would never have gained it.

So if you believe in the power of technology, now or in the future, to more or less read minds, then you should be concerned about cognitive liberty. If you are, conversely, a non-believer in the current tech and its future potential alike, then you should advocate for this right more, not less. Because that would mean that the sacred space of being before conscious thought, feeling, and action, which underpins freedom of thought, speech, press, and all the rest of our most basic civil rights, is so unprotected at present that it is difficult even to find a common language in which to speak of it. And a right you cannot so much as name is one that can be most easily taken away.

1)
E.g., NAD+ and metformin—tools with potential in fighting aging, autoimmunity, and inflammation, but that we are just beginning to learn about.
2)
Of course the tech has been funded in part by DARPA and DOD.
3)
This research could also apply to normal-hearing subjects who would like to reduce listening effort in these busy channel-switching environments. Think of auditory processing disorder and its overlaps with ADHD and autism spectrum disorders. In the aggregate, this sort of problem is double-digit common. But, if the target market is people who will eventually want something like this implanted to continuously listen to their brainwaves, that market is probably closer to 2% than 20%.
4)
Lykken was a lifelong, vehement critic of the “control question test (CQT)” format polygraph interview and interrogation method that remains by far the most common type used in North America today. As part of his rigorous opposition to this method, he proposed a polygraph test format with scientific basis, the guilty knowledge or concealed information test (GKT or CIT). The latter type has supposedly since gained usage in military/intelligence and policing in Israel and Japan, although questions about the use of torture to obtain confessions in both countries remain commonplace and concerning. Lykken's intent was to discourage such abuse of power and to encourage scientific fact-finding in investigative and adjudicative processes by showing that the polygraph could be done in a valid way—albeit with far fewer possible use cases, and requiring more training, case knowledge, and planning on the part of the polygrapher—and was generally not.
5)
The most promising recent research specifically on fMRI and lie detection similarly requires training. That is, the computer model tries to learn a particular brain's patterns of lying versus truth-telling through studying examples. This facet, as well as the equipment's operating expense and non-portable nature, make it less interesting to governments and other entities that would like to develop lie detection for mass screening or specific-incident national security and criminal justice purposes. One needs extremely cooperative subjects for these techniques as yet, and those are arguably not the priority use cases for such technologies. At least, in cases where there is a clear and present danger to public safety, as in a planned terror attack, one could argue that it would be better to have a mind-reading tool as accurate and unconditional as DNA in the forensic realm. “Where is the bomb?” or “What is the pathogen's incubation period?” an interrogator asks—and the brain that knows can't help but answer.
6)
McSweeney's The End of Trust, p. 62.
8)
…and its equivalents in non-U.S. jurisdictions.
9)
A play on the ancient Roman and old English expression “A man's home is his castle.” With deference and gratitude to G for the poetic and apt suggestion.
10)
Ienca and Andorno: “Paul Root Wolpe has suggested that due to fears of government oppression, we should draw a bright line around the use of mind-reading technologies:
'The skull should be designated as a domain of absolute privacy. No one should be able to probe an individual’s mind against their will. We should not permit it with a court order. We should not permit it for military or national security. We should forgo the use of the technology under coercive circumstances even though using it may serve the public good' (Wolpe 2009).
Similarly, it has been argued that “nonconsensual mind reading is not something we should never [sic] engage in” (Stanley 2012). The claim is that mind-reading techniques constitute “a fundamental affront to human dignity” (Ibid). Consequently, “we must not let our civilization’s privacy principles degrade so far that attempting to peer inside a person’s own head against their will ever become [sic] regarded as acceptable” (Ibid).
11)
McSweeney's The End of Trust, p. 25.
12)
McSweeney's The End of Trust, pp. 60-61.