Protecting Sacred Space: Real "Lie Detection" Would Threaten Human Rights As We Know Them
In previous posts, Dr. Vera Wilde detailed problems with the iBorderCtrl team’s claims about its secretive AI “lie detector” component and its supposed ability to detect “biomarkers of deceit,” as well as reasons why the tool is prone to automating discrimination; argued, building on Ioannidis, that the iBorderCtrl team’s past accuracy claims and future published findings are likely to be false; and argued, building on Damasio, that the emerging neuroscientific consensus against Cartesian dualism, particularly in decision science, suggests that building AI decision-making into processes that substantially impact security decisions, as iBorderCtrl does, may be a very bad idea. But who cares about any of this if better tech is coming soon? Could real “lie detection” solve all these problems? Imagine there’s no liberty of being, thought, or feeling prior to outside observation—no protected sacred space of personhood…
Inside every person, there is a force that may be evident yet is unseen. An internal spark of life some people would call spirit. A being some would call soul. Others would disdain the language and baggage of the whole religious realm, preferring to see and say things about the mind and perhaps the heart—but including perhaps a non-dualist understanding à la Damasio of multi-directional communication between the brain and body via the vagus nerve and other pathways, as suggested by Porges and others. Still others would insist on talking about the brain alone, and its pride of metaphorical place as the operating system running on the hardware of the body. Contemporary scholars of cognitive liberty have thus far not grappled much with the question of what exactly this force is. What is it that is sacred inside of us, where we are free to be, and from this being there can arise free thought and feeling—and from that free thought and feeling, free speech, religion, and association, and all the other civil rights and civil liberties to which we owe the quality of life in free societies that we might wish to better understand, to bolster, and to protect from future incursions by increasingly pervasive surveillance technologies, brutish authoritarianism, resurgent prejudice, and the latent traumas of climate and other disasters that seem likely to prune the burning bush of human civilization?
Science and the humanities cannot yet offer a definitive answer to this question. And yet we know quite well across both cultures, in C.P. Snow's parlance, the blessing and the curse of this sacred space. For in addition to being the uncelebrated basis of all other freedoms, this sacred space of being before will, knowing before articulation, becoming before conscious knowledge, and all the other dressings of the mind in the darkness of its full and still-mysterious workings—in addition to giving us freedom to be true, to live authentically, to express who we are—this sacred space also gives us the ability to lie. It is no accident that religion, according to anthropologist Scott Atran, can be defined as: “(1) a community's hard-to-fake commitment (2) to a counterfactual and counterintuitive world of supernatural agents (3) who master people's existential anxieties, such as death and deception.”
Good Logic, Bad Life: The Neuroscience of Decision-Making Suggests Weaknesses in AI Decision-Making
We humans have tended to think we are sooo rational. Seventeenth-century French philosopher René Descartes's formulation of the cogito, or the proposition that we think, therefore we are (cogito, ergo sum)—that is, we cannot doubt our existence itself while we doubt—remains the best-known statement of the epistemological significance of human consciousness. But fast-forward to the late twentieth century… And scholars like neuroscientist Antonio Damasio suggest the Cartesian theory of mind-body dualism doesn't fit the scientific evidence.1) At all. For that evidence increasingly suggests that good, ethical, socially conscious, and rationally self-serving decision-making actually involves parts of the brain responsible for emotions. Without social learning and emotional processing, in other words, there is no active rationality—no great decision-making prowess comes to us from reasoning without feeling. And without bodily feeling, there is no emotion: physicality comes first. So the more disconnected from the organism as a whole (including the body) we try to make our minds, the less that chatterbox, intellectualizing, explaining brain can tell us about what it is we want, what we should then choose, or who we are. In this context, the anti-cogito seems more accurate: cogito, ergo sum falsus… I think (without feeling), therefore I am false/wrong. Or perhaps instead, as polyvagal theorist Stephen Porges suggests, "Je me sens, donc je suis"—I feel (myself), therefore I am. Either way, the limitations of human intelligence as a Cartesian construct bear on AI in the context of algorithmic decision-making… Making it an even worse idea in the realm of security than might at first seem obvious.
People Are Dumb
Humans are actually not that smart. Not compared to our evolutionary predecessors. Not on metrics of basic functioning. The paradox of human behavior is that we act dumber because we are (in some ways) more intelligent. No other species screws up its basic functions—eating, sleeping, socializing, mating, and co-existing in a sustainable ecosystem—half as much as we do. We are the idiot-savants of the animal kingdom.
Our unique intelligence leads us to make more mistakes than other species, and our mistakes can impact all other species. But the same characteristic also means we can learn from our mistakes through conscious thought. That is our gift. And our curse.
Gift: We’re learning more and more about the frontiers of consciousness. With this advancement come advances in neurotechnology. At present, there is still no known, unique lie response to detect—and therefore no such thing as a “lie detector.” Nor is there currently a machine that can see inside your head to determine your intent, read your thoughts, or divine your history. But we're getting closer and closer, such that one day it might be possible to have such a “psychic x-ray,” or a direct pipeline from the outside, shared representations of reality to what is going on inside your individual consciousness, be it in the realm of future intent, unvocalized internal monologue, memory, or something else. It could be helpful to know what you really want deep down when you're feeling consciously conflicted—or what your partner or colleague is really thinking when you can't understand their perspective. So this could happen, and it could be good in some ways, although it also carries significant dangers, to be discussed in a future post.
Why Most Published iBorderCtrl Research Findings Are Likely to Be False
In a previous post, Dr. Vera Wilde detailed problems with the iBorderCtrl team's repeated claims regarding the tool's accuracy, likely future accuracy, and what it is the tool does in the first place. That post detailed why iBorderCtrl is almost certainly not as accurate as it claims.2) Its accuracy numbers are themselves an inaccurate way of representing research findings of this type.3) This type of AI “lie detection” will probably never be as accurate as the iBorderCtrl team repeatedly claims theirs will be in the immediate future,4) and the tool is not doing what it claims to be doing.5) In another previous post, iBorderCtrl's susceptibility to a number of vulnerabilities that make its published research findings more likely to be false was briefly noted. This critique extends those two posts. All three criticize iBorderCtrl from a scientific perspective with a focus on methods. There are also important criticisms to consider from the standpoints of current legal and ethical norms, and of possible future threats to human rights from these sorts of technologies. Those will be explored in upcoming posts…
In 2005, physician and leading evidence-based medicine proponent John Ioannidis published a paper in the peer-reviewed open-access journal PLoS Medicine entitled “Why Most Published Research Findings Are False.”6) His argument is at least as true for most published “lie detection” research findings as it is for medical and psychology publications. Those publications tend to be subjected to greater scrutiny from various institutional sources, including typical university Institutional Review Board review for human subjects research and typical journal peer-review processes, than industry- and government-led lie detection research receives. This means we should probably expect an even greater likelihood that most published research findings on lie detection are false.
Ioannidis argues:
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.
Ioannidis identifies six risk factors that lessen a finding's likelihood of being true. iBorderCtrl has all six. Here's how.
Point One: “a research finding is less likely to be true when the studies conducted in a field are smaller”
The iBorderCtrl team has mentioned multiple times that its studies have used small samples of fewer than forty participants. Small samples mean low statistical power, and in Ioannidis's framework low power directly lowers the probability that a claimed "significant" finding is true, as the sketch below illustrates.
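To make the arithmetic concrete, here is a minimal Python sketch of the core formula behind Ioannidis's argument (omitting his bias term u for simplicity). All of the numbers are hypothetical, chosen only to illustrate how an underpowered study drags down the probability that a statistically significant result reflects a true relationship.

```python
# A minimal sketch of the arithmetic in Ioannidis (2005), bias term omitted.
# R is the pre-study odds that a probed relationship is true; alpha is the
# Type I error rate; power is 1 - beta. The positive predictive value of a
# claimed finding is PPV = (power * R) / (power * R + alpha).

def ppv(R, alpha, power):
    """Post-study probability that a 'significant' finding is true."""
    return (power * R) / (power * R + alpha)

# Illustrative values only: a study with fewer than forty participants
# chasing a subtle effect might have power near 0.2; suppose 1 in 10
# probed relationships is real (R = 0.1) and the conventional alpha = 0.05.
print(round(ppv(R=0.1, alpha=0.05, power=0.2), 2))  # 0.29: mostly false
print(round(ppv(R=0.1, alpha=0.05, power=0.8), 2))  # 0.62: still shaky
```

Under these assumed numbers, a "significant" result from the small study is more likely false than true, before any of the other five risk factors even enter the picture.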
iBorderCtrl Automates Discrimination
(and no amount of handwaving makes that go away)
Discrimination in Europe is not up for debate. Article 6 of the Treaty on European Union founds the EU on common principles of liberty, democracy, respect for human rights and fundamental freedoms, and the rule of law. Title III of the Charter of Fundamental Rights fills in these principles: everyone is equal before the law (Article 20), and so discrimination on the basis of sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation, as well as nationality within the scope of application of the Treaty establishing the European Community and of the Treaty on European Union, is prohibited (Article 21). This fundamental moral and legal consensus against discrimination has led the EU to adopt several directives to protect your rights. For instance, Council Directive 2000/43/EC of 29 June 2000, implementing the principle of equal treatment between persons irrespective of racial or ethnic origin, states:
The right to equality before the law and protection against discrimination for all persons constitutes a universal right recognised by the Universal Declaration of Human Rights, the United Nations Convention on the Elimination of all forms of Discrimination Against Women, the International Convention on the Elimination of all forms of Racial Discrimination and the United Nations Covenants on Civil and Political Rights and on Economic, Social and Cultural Rights and by the European Convention for the Protection of Human Rights and Fundamental Freedoms, to which all Member States are signatories.
As the EU's politically independent executive arm, the European Commission is supposed to protect your interests and enforce EU law by protecting you against discrimination. Yet it's the Commission itself that decided the EU would fund iBorderCtrl to the tune of €4,501,877. iBorderCtrl is an automated border security system currently being tested at EU borders. It involves subjecting non-EU citizens to biometric identification and “lie detection” AI—among other Orwellian components—that make automated discrimination likely.
In “Rise of the racist robots – how AI is learning all our worst impulses” (The Guardian, 8 Aug. 2017), Stephen Buranyi summarizes a collection of recent, well-publicized instances in which AI created rather than eliminated bias:
Programs developed by companies at the forefront of AI research have resulted in a string of errors that look uncannily like the darker biases of humanity: a Google image recognition program labelled the faces of several black people as gorillas; a LinkedIn advertising program showed a preference for male names in searches, and a Microsoft chatbot called Tay spent a day learning from Twitter and began spouting antisemitic messages.
But don't worry, says the iBorderCtrl consortium. There are no possible real-world consequences for subjects in their AI's test run! Because the tests are “encapsulated”! And AI could eliminate bias!
Stop. It is unclear what this encapsulation consists of, how it works, or why—as the consortium asserts—it would keep the system from causing problems for people using it in the real world. At best, the tool's developers seem to be unaware of the risks such a project entails. Because even under the best conditions and with good intentions on all sides, an AI decision-making support tool like iBorderCtrl is vulnerable to making prejudiced decisions look neutral and scientific by hiding bias in various ways, as the sketch below illustrates. And this lack of awareness is a very serious problem in and of itself, because it suggests the researchers are not set up to identify possible bias.
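Here is a minimal, hypothetical Python sketch of the mechanism. None of this is iBorderCtrl's actual code or data; the synthetic example only shows how a model trained on labels produced by prejudiced past decisions reproduces that prejudice as an apparently objective score.

```python
# Hypothetical illustration of bias laundering with synthetic data and
# scikit-learn: train a classifier on biased historical labels and the
# prejudice resurfaces inside the model's coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 10_000
group = rng.integers(0, 2, size=n)        # 0/1 proxy for a protected trait
behaviour = rng.normal(0.0, 1.0, size=n)  # the only legitimate signal

# Biased historical labels: members of group 1 were flagged as
# "deceptive" more often, regardless of their actual behaviour.
flagged = (behaviour + 1.5 * group + rng.normal(0.0, 1.0, size=n)) > 1.0

X = np.column_stack([group, behaviour])
model = LogisticRegression().fit(X, flagged)

# The model assigns a large weight to `group`: the historical prejudice
# is now encoded in the coefficients, hidden behind a numeric risk score.
print(dict(zip(["group", "behaviour"], model.coef_[0])))
```

The point of the sketch is that nothing in the pipeline looks malicious: the training step is standard, the output is a tidy number, and the discrimination lives quietly in the learned weights unless someone deliberately audits for it.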
Biomarkers of Scientific Deceit
The iBorderCtrl system contains a do-it-yourself “lie detector”: the traveller talks to an avatar on their own screen while secretive machine-learning AI in some server rack analyzes their facial expressions, captured through the traveller's own camera, to judge whether they are lying. Is that even real? In this blog post, Dr. Vera Wilde examines some of the claims in detail.
The iBorderCtrl consortium (led by European Dynamics) repeatedly talks about how its ADDS/Silent Talker “lie detector” component uses micro-expressions to detect deception, even going so far as to claim that the system will identify and classify people based on “biomarkers of deceit.” But that claim is (grossly) insufficiently evidence-based. And because it is such a central claim, it is a big red flag indicating that iBorderCtrl is engaging in pseudoscience for profit.
Just to be clear: there is absolutely no scientific basis for the assertion that unique “biomarkers of deceit” exist, or are about to be discovered after centuries of fruitless pursuit. Rather, the solid scientific consensus on physiological deception detection is that we can't do it. “Lie detection” doesn't exist, because there is no unique lie response to detect.
“This is Descartes' error: the abysmal separation between body and mind, between the sizable, dimensioned, mechanically operated, infinitely divisible body-stuff, on the one hand, and the unsizable, undimensioned, un-pushpullable, nondivisible mind stuff; the suggestion that reasoning, and moral judgment, and the suffering that comes from physical pain or emotional upheaval might exist separately from the body. Specifically: the separation of the most refined operations of mind from the structure and operation of a biological organism.” —Descartes' Error, pp. 249-250

As an alternative, creative, synthetic understanding of the contemporary scientific literature, Damasio suggests the hypotheses “that feelings are a powerful influence on reason, that the brain systems required by the former are enmeshed in those needed by the latter, and that such specific systems are interwoven with those which regulate the body” (p. 245).