Protecting Sacred Space: Real "Lie Detection" Would Threaten Human Rights As We Know Them

In previous posts, Dr. Vera Wilde detailed problems with the iBorderCtrl team’s claims about its secretive AI “lie detector” component and its supposed ability to detect “biomarkers of deceit,” as well as reasons why the tool is prone to automating discrimination; argued, building on Ioannidis, that the iBorderCtrl team’s past accuracy claims and future published findings are likely to be false; and suggested, building on Damasio, that the emerging neuroscientific consensus against Cartesian dualism, particularly in decision science, makes building AI decision-making into processes that substantially impact security decisions, as iBorderCtrl does, a very bad idea. But who cares about any of this if better tech is coming soon? Could real “lie detection” solve all these problems? Imagine there’s no liberty of being, thought, or feeling prior to outside observation—no protected sacred space of personhood…

Inside every person, there is a force that may be evident yet is unseen. An internal spark of life some people would call spirit. A being some would call soul. Others would disdain the language and baggage of the whole religious realm, preferring to see and say things about the mind and perhaps the heart—but including perhaps a non-dualist understanding à la Damasio of multi-directional communication between the brain and body via the vagus nerve and other pathways, as suggested by Porges and others. Still others would insist on talking about the brain alone, and its pride of metaphorical place as the operating system on the hardware of the body. Contemporary scholars of cognitive liberty have thus far not much grappled with the question of what exactly this force is. What is it that is sacred inside of us, where we are free to be, and from which there can arise free thought and feeling—and from that free thought and feeling, free speech, religion, and association, and all the other civil rights and civil liberties to which we owe the quality of life in free societies? These are the freedoms we might wish to better understand, to bolster, and to protect from future incursions by increasingly pervasive surveillance technologies, brutish authoritarianism, resurgent prejudice, and the latent traumas of climate and other disasters that seem likely to prune the burning bush of human civilization.

Science and the humanities cannot yet offer a definitive answer to this question. And yet we know quite well across both cultures, in C.P. Snow's parlance, the blessing and the curse of this sacred space. For in addition to being the uncelebrated basis of all other freedoms, this sacred space of being before will, knowing before articulation, becoming before conscious knowledge, and all the other dressings of the mind in the darkness of its full and still-mysterious workings—in addition to giving us freedom to be true, to live authentically, to express who we are—this sacred space also gives us the ability to lie. So religion, according to anthropologist Scott Atran, can be defined as: “(1) a community's hard-to-fake commitment (2) to a counterfactual and counterintuitive world of supernatural agents (3) who master people's existential anxieties, such as death and deception.”


Wed 19 Jun 2019 - 17:02

Good Logic, Bad Life: Decision-Making Neuroscience Suggests AI Decision-Making's Weaknesses

We humans have tended to think we are sooo rational. Seventeenth-century French philosopher René Descartes's formulation of the cogito, or the proposition that we think, therefore we are (cogito, ergo sum)—that is, that we cannot doubt our own existence while we doubt—remains the best-known statement of the epistemological significance of human consciousness. But fast-forward to the late twentieth century… and scholars like neuroscientist Antonio Damasio suggest the Cartesian theory of mind-body dualism doesn't fit the scientific evidence.1) At all. For that evidence increasingly suggests that good, ethical, socially conscious, and rationally self-serving decision-making actually involves the parts of the brain responsible for emotions. Without social learning and emotional processing, in other words, there is no active rationality—no great decision-making prowess comes to us from reasoning without feeling. And without bodily feeling, there is no emotion: physicality comes first. So the more we try to disconnect our minds from the organism as a whole (including the body), the less that chatterbox, intellectualizing, explaining brain can tell us about what we want, what we should therefore choose, or who we are. In this context, the anti-cogito seems more accurate: cogito, ergo sum falsus… I think (without feeling), therefore I am false/wrong. Or perhaps instead, as polyvagal theorist Stephen Porges suggests, "Je me sens, donc je suis"—I feel (myself), therefore I am. Either way, the limitations of human intelligence as a Cartesian construct bear on AI in the context of algorithmic decision-making… making it an even worse idea in the realm of security than might at first seem obvious.

People are Dumb

Humans are actively not that smart. Not as compared to our evolutionary predecessors. Not on metrics of basic functioning. The paradox of human behavior is that we act dumber because we are (in some ways) more intelligent. No other species screws up its basic functions—eating, sleeping, socializing, mating, and co-existing in a sustainable ecosystem—half as much as us. We are the idiot-savants of the animal kingdom.

Our unique intelligence leads us to make more mistakes than other species, and our mistakes can impact all other species. But the same characteristic makes it so that we can also learn from our mistakes through conscious thought. That is our gift. And our curse.

Gift: We’re learning more and more about the frontiers of consciousness. With this advancement come advances in neurotechnology. At present, there is still no known, unique lie response to detect—and therefore no such thing as a "lie detector." Nor is there currently a machine that can see inside your head to determine your intent, read your thoughts, or divine your history. But we're getting closer and closer, such that it might be possible one day to have such a “psychic x-ray,” or a direct pipeline from the outside, shared representations of reality, to what is going on inside your individual consciousness, be it in the realm of future intent, unvocalized internal monologue, memory, or something else. It could be helpful to know what you really want deep down when you're feeling consciously conflicted—or what your partner or colleague is really thinking when you can't understand their perspective. So this seems like it could happen and it could be good in some ways, although it also carries significant dangers to be discussed in a future post.


Tue 28 May 2019 - 18:15

Why Most Published iBorderCtrl Research Findings Are Likely to Be False

In a previous post, Dr. Vera Wilde detailed problems with the iBorderCtrl team's repeated claims regarding the tool's accuracy, its likely future accuracy, and what the tool does in the first place. That post detailed why iBorderCtrl is almost certainly not as accurate as it claims.2) Its accuracy numbers are not themselves an accurate way of representing research findings of this type.3) This type of AI “lie detection” will probably never be as accurate as the iBorderCtrl team repeatedly claims theirs will be in the immediate future,4) and it is not doing what it claims to be doing.5) Another previous post briefly noted iBorderCtrl's susceptibility to a number of vulnerabilities that make its published research findings more likely to be false. This critique extends those two. All three posts criticize iBorderCtrl from a scientific perspective with a focus on methods. There are also important criticisms to consider from the standpoints of current legal and ethical norms, and of future possible threats to human rights from these sorts of technologies. Those will be explored in upcoming posts…

In 2005, physician and leading evidence-based medicine proponent John Ioannidis published a paper in the peer-reviewed open-access journal PLoS Medicine entitled “Why most published research findings are false.”6) His argument is at least as true for most published “lie detection” research findings as it is for medical and psychology publications—publications that tend to be subjected to greater scrutiny from various institutional sources, including typical university Institutional Review Board scrutiny for human subjects research and typical journal peer-review processes, than industry- and government-led lie detection research is. This means we should probably expect an even greater likelihood that most published research findings on lie detection are false.

Ioannidis argues:

There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; when there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.
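The core of this argument can be expressed as a simple calculation. In Ioannidis's framework, if R is the pre-study odds that a probed relationship is true, α the Type I error rate, and β the Type II error rate, then the post-study probability that a claimed finding is true (its positive predictive value) is (1 − β)R / (R − βR + α). A minimal sketch, with illustrative parameter values not drawn from any particular study:

```python
def ppv(alpha, beta, R):
    """Post-study probability that a claimed finding is true (Ioannidis 2005).

    alpha: Type I error rate (false-positive rate, e.g. 0.05)
    beta:  Type II error rate (1 minus statistical power)
    R:     pre-study odds that a probed relationship is true
    """
    return ((1 - beta) * R) / (R - beta * R + alpha)

# A well-powered study of a plausible hypothesis:
print(round(ppv(alpha=0.05, beta=0.20, R=0.5), 2))   # → 0.89
# A small, underpowered study in a speculative field:
print(round(ppv(alpha=0.05, beta=0.70, R=0.05), 2))  # → 0.23
```

Low power and long-shot hypotheses, both typical of lie detection research, push the probability that a claimed finding is true well below a coin flip.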

Ioannidis identifies six risk factors that lessen a finding's likelihood of being true. iBorderCtrl has all six. Here's how.

Point One: “a research finding is less likely to be true when the studies conducted in a field are smaller”

iBorderCtrl's studies are small: a sample size of fewer than forty participants has been mentioned multiple times.


Fri 3 May 2019 - 17:56

iBorderCtrl Automates Discrimination

(and no amount of handwaving makes that go away)

Discrimination in Europe is not up for debate. Article 6 of the Treaty on European Union founds the EU on common principles of liberty, democracy, respect for human rights and fundamental freedoms, and the rule of law. Title III of the Charter of Fundamental Rights fills in these principles: everyone is equal before the law (Article 20), and discrimination on the basis of sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation, as well as nationality within the scope of application of the Treaty establishing the European Community and of the Treaty on European Union, is prohibited (Article 21). This fundamental moral and legal consensus against discrimination has led the European Commission to issue several directives to protect your rights. For instance, Council Directive 2000/43/EC of 29 June 2000, implementing the principle of equal treatment between persons irrespective of racial or ethnic origin, states:

The right to equality before the law and protection against discrimination for all persons constitutes a universal right recognised by the Universal Declaration of Human Rights, the United Nations Convention on the Elimination of all forms of Discrimination Against Women, the International Convention on the Elimination of all forms of Racial Discrimination and the United Nations Covenants on Civil and Political Rights and on Economic, Social and Cultural Rights and by the European Convention for the Protection of Human Rights and Fundamental Freedoms, to which all Member States are signatories.

As the EU's politically independent executive arm, the European Commission is supposed to protect your interests and enforce EU law by protecting you against discrimination. Yet it's the Commission itself that decided the EU would fund iBorderCtrl to the tune of €4,501,877. iBorderCtrl is an automated border security system currently being tested at EU borders. It involves subjecting non-EU citizens to biometric identification and “lie detection” AI—among other Orwellian components—that make automated discrimination likely.

In “Rise of the racist robots – how AI is learning all our worst impulses” (The Guardian, 8 Aug. 2017), Stephen Buranyi summarizes a collection of recent, well-publicized instances in which AI created rather than eliminated bias:

Programs developed by companies at the forefront of AI research have resulted in a string of errors that look uncannily like the darker biases of humanity: a Google image recognition program labelled the faces of several black people as gorillas; a LinkedIn advertising program showed a preference for male names in searches, and a Microsoft chatbot called Tay spent a day learning from Twitter and began spouting antisemitic messages.

But don't worry, says the iBorderCtrl consortium. There are no possible real-world consequences for subjects in their AI's test run! Because the tests are “encapsulated”! And AI could eliminate bias!

Stop. It is unclear what this encapsulation consists of, how it works, or why—as the consortium asserts—it would keep the system from causing problems for people using it in the real world. At best, the tool's developers seem to be unaware of the risks such a project entails. Because even under the best conditions and with good intentions on all sides, an AI decision-making support tool like iBorderCtrl is vulnerable to making prejudiced decisions look neutral and scientific by hiding bias in various ways. And this lack of awareness is a very serious problem in and of itself, because it suggests the researchers are not set up to identify possible bias.


Fri 18 Jan 2019 - 08:42

Biomarkers of Scientific Deceit

The iBorderCtrl system contains a do-it-yourself “lie detector”: the traveller talks to an avatar on their own screen while secretive machine-learning AI in some server rack watches their facial expressions through their own camera to judge whether they are lying. Is that even real? In this blog post, Dr. Vera Wilde examines some of the claims in detail.

The iBorderCtrl consortium (led by European Dynamics) repeatedly talks about how their ADDS/Silent Talker “lie detector” component is using micro-expressions to detect deception, even going so far as to claim that their system will identify and classify people based on “biomarkers of deceit.” But that claim is (grossly) insufficiently evidence-based. And because it is such a central claim, it is really a big red flag indicating that iBorderCtrl is engaging in pseudoscience for profit.

Just to be clear: there is absolutely no scientific basis for the assertion that unique “biomarkers of deceit” exist, or are about to be discovered after centuries of fruitless pursuit. Rather, the solid scientific consensus on physiological deception detection is that we can't do it. “Lie detection” doesn't exist, because there is no unique lie response to detect.


Fri 11 Jan 2019 - 08:06
Damasio summarizes his critique of Cartesian dualism thus:
“This is Descartes' error: the abysmal separation between body and mind, between the sizable, dimensioned, mechanically operated, infinitely divisible body-stuff, on the one hand, and the unsizable, undimensioned, un-pushpullable, nondivisible mind stuff; the suggestion that reasoning, and moral judgment, and the suffering that comes from physical pain or emotional upheaval might exist separately from the body. Specifically: the separation of the most refined operations of mind from the structure and operation of a biological organism.”—Descartes' Error, pp. 249–250
As an alternative creative synthetic understanding of the contemporary scientific literature, he suggests the hypotheses “that feelings are a powerful influence on reason, that the brain systems required by the former are enmeshed in those needed by the latter, and that such specific systems are interwoven with those which regulate the body” (p. 245).
As previously discussed in greater detail, existing micro-expressions and other lie detection research is insufficient to support its use in mass screenings. In addition to that general problem, iBorderCtrl accuracy figures specifically are probably artificially inflated by the use of different environments, people, and interactions than are relevant to field contexts.
One-number accuracy claims like the ones iBorderCtrl has been consistently throwing out (i.e., 76%) aren't generally used to report results in this type of research (e.g., micro-expressions research). Instead, confusion matrices are used, because the outcomes aren't binary—there are many more than two possible outcome variable values. And even if the outcomes of interest could be regrouped as binary—which would misrepresent the data, and is therefore not an option as a matter of good science—researchers would still need to report those outcomes in a 2×2 Bayesian table to accurately reflect what is going on with accuracy. For examples of such tables in this context, please see the ones in this essay.
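To illustrate why a single accuracy number misleads in a mass-screening context, here is a back-of-the-envelope 2×2 table. The sensitivity, specificity, and base rate below are invented for illustration (the 76% figure simply mirrors the one-number claim); none of them come from iBorderCtrl's own reporting:

```python
# Hypothetical screening scenario: a tool with 76% sensitivity and
# 76% specificity applied to 100,000 travellers, assuming (purely
# for illustration) that 1 in 1,000 travellers is actually lying.
travellers = 100_000
liars = travellers // 1_000              # 100 deceptive travellers
truthful = travellers - liars            # 99,900 honest travellers
sensitivity = specificity = 0.76

true_positives  = round(liars * sensitivity)            # 76 liars correctly flagged
false_negatives = liars - true_positives                # 24 liars slip through
false_positives = round(truthful * (1 - specificity))   # 23,976 honest travellers wrongly flagged
true_negatives  = truthful - false_positives            # 75,924 honest travellers cleared

flagged = true_positives + false_positives
print(f"Flagged: {flagged}, of whom actual liars: {true_positives}")
print(f"Chance a flagged traveller is actually lying: {true_positives / flagged:.1%}")
```

Even granting the tool its claimed accuracy, the rarity of liars means the overwhelming majority of flagged travellers are honest, which is exactly the information a lone accuracy percentage hides.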
The most obvious reason for this is that iBorderCtrl's lie detection component is based on a lie detection tool, Silent Talker, that was originally trained on disproportionately homogeneous samples skewed toward participants “of European origin.” We know from other AI, biometrics, and related research that results from a small, relatively homogeneous population don't usually generalize well to a larger, more diverse one. And that really matters in the case of iBorderCtrl, since it's ostensibly intended for use on non-EU citizens. So the target population is distinctly different from the population that the tool was trained on, and other research shows that will probably degrade accuracy in the actual population of interest.
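A toy simulation makes the generalization problem concrete. This is a sketch under invented assumptions (Gaussian "deception scores," a fixed decision threshold), not a model of Silent Talker: a classifier whose threshold was tuned on one population loses accuracy when the target population's score distribution shifts.

```python
import random

random.seed(0)

def screening_accuracy(mean_shift, n=10_000):
    """Accuracy of a fixed-threshold classifier on a simulated population.

    Honest subjects score ~ N(0 + shift, 1); deceptive ~ N(2 + shift, 1).
    The threshold (1.0) was "tuned" on the shift = 0 training population.
    """
    threshold = 1.0
    honest = [random.gauss(mean_shift, 1.0) for _ in range(n)]
    deceptive = [random.gauss(2.0 + mean_shift, 1.0) for _ in range(n)]
    correct = sum(x < threshold for x in honest) + sum(x >= threshold for x in deceptive)
    return correct / (2 * n)

acc_training_like = screening_accuracy(mean_shift=0.0)  # population resembling training data
acc_shifted = screening_accuracy(mean_shift=1.0)        # target population, scores shifted up
print(f"training-like population: {acc_training_like:.2f}")
print(f"shifted target population: {acc_shifted:.2f}")
```

With these parameters the classifier scores around 84% on a population resembling its training data and around 74% once the scores shift, despite nothing changing in the tool itself.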
The iBorderCtrl team repeatedly claims their tool identifies biomarkers of deception. But leading scientific experts on the matter have long agreed the core scientific problem with lie detection is that there's no such thing as biomarkers of deception—there is no unique lie response to detect. So unless this team has a big scientific discovery to announce, on the order of a Nobel-winning advance in psychophysiology, their representation of their research is itself fundamentally dishonest.
There are, of course, several Ioannidis-style heuristics out there for assessing what is good science. One of the best is Robert Abelson's MAGIC criteria from his Statistics as Principled Argument (1995). Abelson suggests the criteria for a persuasive statistical argument are Magnitude (bigger effect sizes are more compelling than smaller ones), Articulation (more precise statements are more compelling than imprecise ones), Generality (more general effects and applications that would interest a broad audience are more compelling than less general ones that wouldn't), Interestingness (more interesting and surprising effects are more compelling than less interesting or merely confirmatory ones), and Credibility (credible claims are more compelling than incredible ones). Obviously, the M here overlaps with Ioannidis's Point One. But after this, notice that the statistician (Abelson) is less concerned with the methods and more concerned with the humanity, while the doctor (Ioannidis) is more concerned with methods throughout. Is that because the practice of science and medicine so degraded between 1995 and 2005 that methodologists have to crack the whip harder to keep pace with what is at worst fraud or corruption and at best merely bad science? Or is there something about statistical methods that can seem paramount to outsiders, while statisticians tend to be more worried about things like the writing, whether the research is interesting, and other more aesthetic or social concerns?