iBorderCtrl Automates Discrimination
(and no amount of handwaving makes that go away)
Discrimination in Europe is not up for debate. Article 6 of the Treaty on European Union founds the EU on common principles of liberty, democracy, respect for human rights and fundamental freedoms, and the rule of law. Title III of the Charter of Fundamental Rights fills in these principles: Everyone is equal before the law (Article 20), and so discrimination on the basis of sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation, as well as nationality within the scope of application of the Treaty establishing the European Community and of the Treaty on European Union, is prohibited (Article 21). This fundamental moral and legal consensus against discrimination has led the EU to adopt several directives to protect your rights. For instance, Council Directive 2000/43/EC of 29 June 2000, implementing the principle of equal treatment between persons irrespective of racial or ethnic origin, states:
The right to equality before the law and protection against discrimination for all persons constitutes a universal right recognised by the Universal Declaration of Human Rights, the United Nations Convention on the Elimination of all forms of Discrimination Against Women, the International Convention on the Elimination of all forms of Racial Discrimination and the United Nations Covenants on Civil and Political Rights and on Economic, Social and Cultural Rights and by the European Convention for the Protection of Human Rights and Fundamental Freedoms, to which all Member States are signatories.
As the EU's politically independent executive arm, the European Commission is supposed to protect your interests and enforce EU law, including by protecting you against discrimination. Yet it's the Commission itself that decided the EU would fund iBorderCtrl to the tune of €4,501,877. iBorderCtrl is an automated border security system currently being tested at EU borders. It subjects non-EU citizens to biometric identification and “lie detection” AI, among other Orwellian components, in ways that make automated discrimination likely.
In “Rise of the racist robots – how AI is learning all our worst impulses” (The Guardian, 8 Aug. 2017), Stephen Buranyi summarizes a collection of recent, well-publicized instances in which AI created rather than eliminated bias:
Programs developed by companies at the forefront of AI research have resulted in a string of errors that look uncannily like the darker biases of humanity: a Google image recognition program labelled the faces of several black people as gorillas; a LinkedIn advertising program showed a preference for male names in searches, and a Microsoft chatbot called Tay spent a day learning from Twitter and began spouting antisemitic messages.
But don't worry, says the iBorderCtrl consortium. There are no possible real-world consequences for subjects in their AI's test run! Because the tests are “encapsulated”! And AI could eliminate bias!
Stop. It is unclear what this encapsulation consists of, how it works, or why—as the consortium asserts—it would keep the system from causing problems for people using it in the real world. At best, the tool's developers seem to be unaware of the risks such a project entails. Because even under the best conditions and with good intentions on all sides, an AI decision-making support tool like iBorderCtrl is vulnerable to making prejudiced decisions look neutral and scientific by hiding bias in various ways. And this lack of awareness is a very serious problem in and of itself, because it suggests the researchers are not set up to identify possible bias.
We looked at the iBorderCtrl FAQ1) and dissected it point by point, so you don't have to. We found no good basis for their claim that rights are not at risk in current piloting. What they mean by encapsulation is unclear, and what it seems to mean here is insufficient to actually protect people from discrimination. In other words, their claim that there's absolutely no reason to worry about bias in iBorderCtrl doesn't pass the smell test. As we conclude in the in-depth analysis of their FAQ, iBorderCtrl likely makes travelers vulnerable to discrimination at EU borders now. Here are a few highlights of why that is the case.
First, they say iBorderCtrl is a research project and not about product development. But that assertion lacks plausibility on its face. The lead developers have a financial stake in the patented deception detection component, ADDS/Silent Talker. And the research is not being documented in a manner that makes enough data public for meaningful public dialogue before it gets rolled out at EU borders. If this is purely about research, then why not involve independent researchers with no stake in the tech for testing and subsequent evaluation, instead of letting people with an obvious material conflict of interest run the show? And why should we expect people with such conflicts of interest to be their own best watchdogs when it comes to assessing their tech for bias?
Second, they keep saying travellers' fundamental rights won't be violated because the test pilots are encapsulated. But insofar as they describe what that means at all, it sounds more like the tech is totally pervasive in test subjects' border-crossing experience. Test subjects are to begin iBorderCtrl processing before they travel and end it after they have crossed the border. Thus these pilots are encapsulated like bread in a sandwich rather than like a substance in a capsule—which is to say, not at all in the sense that anyone would correctly understand the term “encapsulation.”
And finally, they say law enforcement agencies won't be able to access travellers' data from the piloting. But it seems like law enforcement agencies are already deeply involved. The Hungarian National Police has apparently had a lead role in iBorderCtrl development, and continues to lead its privacy management. Border guards in Greece, Latvia, and Hungary are at least involved in conducting tests on the ground. So it doesn't make sense to call law enforcement agencies third parties and say they won't have access to the data, when it's law enforcement agencies that are helping to generate and collect that data in the first place.
Automating Discrimination
Questions about bias plague iBorderCtrl. While its proponents assert bias couldn't affect its current EU piloting, that assertion is insufficiently supported by the available evidence. To the contrary, AI of this nature tends to be especially vulnerable to bias. So does research using very small and select study samples, like the ones the researchers here are using. And so does research with the many other characteristics detailed below.
iBorderCtrl developers say their pilot studies can't possibly violate people's rights or make biased judgments that affect people's lives. Their rationale for this assertion is that the current test run at EU borders is “encapsulated.” But that rationale is without logical or evidentiary basis. And this matters, because the system could introduce racial/ethnic, confirmation, and other biases. These biases would be wrong and should prevent adoption of such systems until their neutrality has been sufficiently supported by scientific evidence. Otherwise, we are requiring some of society's most vulnerable members to bear the costs of testing whether or not something discriminates against them. That ask itself violates equality before the law.
The burden of proof is on researchers to positively guard against doing harm. Researchers' first step in that protection is typically to disclose to their Institutional Review Boards, research subjects, and the inquiring public about what the risks of their research are, how they're mitigating or guarding against those risks, and why possible benefits to society outweigh possible costs. There's no indication that the iBorderCtrl developers have taken this first step.
AI like this in general, and in particular the AI on which iBorderCtrl's Automatic Deception Detection System (ADDS) is based (Silent Talker), is built on small and select samples under lab conditions. These small, selective samples and artificial conditions tend to generate biases. Three types of bias they're likely to generate are especially concerning: racial, confirmation, and neurodiversity bias.
Racial bias
As previously discussed on this blog, iBorderCtrl's ADDS (formerly Silent Talker) is based on research from a small sample of mostly European men. Generalizing from one racial/ethnic group to another does not really work in facial recognition (AI that resembles microexpression analysis in important ways, such as its reliance on facial landmark detection). Unless research uses more diverse study samples, its accuracy rates drop for racial/ethnic minorities—automating bias.
The developers already noted in previous research that this is a problem: They trained their AI on European men, so the system has had a higher accuracy rate for that group from the start.
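To illustrate the mechanism, here is a minimal simulation sketch in Python. This is not iBorderCtrl's code: the features, groups, and numbers are entirely hypothetical and synthetic. It shows only the general pattern at issue, namely that a classifier trained mostly on one group tends to be more accurate for that group.

```python
# Illustrative sketch (not iBorderCtrl's actual code): how training a
# classifier mostly on one demographic group can yield unequal accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Simulate measurement features whose relationship to the
    (hypothetical) 'deceptive' label differs between groups via `shift`."""
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + shift * X[:, 1] + rng.normal(scale=0.5, size=n)) > 0
    return X, y.astype(int)

# Training data: 90% group A, 10% group B, mirroring a skewed study sample.
Xa, ya = make_group(900, shift=0.0)   # group A: majority of training data
Xb, yb = make_group(100, shift=1.5)   # group B: underrepresented
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluate on fresh samples from each group.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    Xt, yt = make_group(5000, shift)
    print(name, "accuracy:", round(model.score(Xt, yt), 3))
# Typical output: group A accuracy well above group B's, because the
# model encodes the majority group's feature-label relationship.
```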
Despite this factual basis for suggesting bias is a problem in iBorderCtrl's “lie detection” system, it's difficult to say exactly who should be worried about what racial biases here. The reason is that non-transparency prevents us from knowing exactly what racial bias studies they have or haven't done on their system. Previous research suggests they know it is a problem, but we are currently prevented from knowing what (if anything) they've done—or are doing—about it.
It might raise additional ethical issues for current piloting if the test subjects it seeks are disproportionately racial/ethnic minorities, and their data is being specifically sought out in order to better tailor the AI for their racial/ethnic groups. The iBorderCtrl team has said that they're looking for a very small sample in their pilot, but not how they're selecting this sample. Would test subjects ignorant of a racial component in their selection be able to provide informed consent for this kind of data usage? Will they be informed at a future date if the purpose of the current piloting is to train the AI to better assess their racial group—and permitted to ask that their data be destroyed? Or will their data already be integrated into the AI in such a way that, if they are informed later of a racial profiling purpose of the piloting with which they disagree, they effectively can't take it back?
Of course, subjects of mixed racial/ethnic backgrounds will also face a range of dilemmas. How will they be identified and judged by the AI? Will they have an opportunity to specify their race, or will the AI automatically detect and judge them based on an attributed racial classification? How will race be decided if a subject identifies as one race and the AI would classify them as another? Will racial classifications made by the AI be shared with any other databases or agencies—including law enforcement partners in participating countries currently under fire for widespread and racialized police violence, such as Hungary?
There are many such possible problems in which iBorderCtrl (and other AI like it) could contribute to systematic racial biases due to invisible bias in the technology. But it could also contribute to bias for simpler, more human reasons. As German political commentator Fefe notes, the “black box” technology could be used to simply deny entry to foreigners with the “wrong” color skin, with no apparent consequences. It's a pilot; who knew it was biased? There is no apparent accountability mechanism in play here. This is one of the reasons you supposedly have a right in the EU to understand how decisions affecting your life are made. “Black box” decisions can institutionalize bias and venality with no apparent avenue for recourse. And this could work at a systematic level (e.g., entire border forces could refuse crossing to entire populations) or a more subtle and hard-to-measure individual one (e.g., individual border guards could disproportionately refuse crossing to some members of entire populations).
Some people will say that bias of the latter type already exists, and so we should judge new technologies like this not on the basis of whether they are perfectly neutral (they're not), but rather by some relative comparison metric to existing bias in the field. While it's true that bias already exists in the real world, it's hard to measure it at a baseline level for this type of comparison.
When it comes to race, the burden of proof must be on people who want to introduce new technologies to show that they are not automating discrimination, or at least that they improve on existing levels of bias. It cannot be on proponents of rule of law and equality to show that new technologies are biased, when we cannot see the software's algorithms, the researchers' data, or the consortium's ethics reports. And it certainly cannot be on vulnerable groups themselves to continuously check new state incursions on their tenuous equality before the law. Those are recipes for compromised scientists who benefit monetarily from their non-transparent work to get biased technology institutionalized in the mainstream before concerned citizens, including independent researchers working in the public interest, even have a chance to evaluate the facts. Science should serve the public interest, not ignore it for profit.
An AI that needs to know your race in order to (supposedly) accurately judge how likely you are to be lying is engaging in racial profiling. We have not yet had a public dialogue about whether that is ok in this particular context in Europe. There is, however, broad agreement that profiling is illegal and ineffective.
That's not the only type of profiling iBorderCtrl engages in, though. It also updates your risk score based on other background information, introducing possible confirmation bias.
Confirmation bias
Confirmation bias is the most common form of bias. We all tend to see evidence in the world around us that supports our preconceived notions or hypotheses about what's going on, what's true, and what's right. That way of seeing, however, is the opposite of the scientific method, which teaches us to seek disconfirming evidence of our hypotheses. That's how science, unlike any other method, helps us build a more accurate knowledge base about empirical reality.
Neural networks do not correct against, but rather replicate, the existing biases of the researchers who train them, the data they're fed, the circumstances of their testing, and other facets of how they're built. Thus neural networks like the type used in this AI are particularly susceptible to confirmation bias.
Psychophysiology and microexpressions deception detection research, too, is particularly susceptible to confirmation bias in the form of false positive results—or results that appear to confirm researchers' hypotheses by achieving statistical significance, while actually being due to chance. This is a more common problem than you might think.
As Ioannidis argues, most published research findings are false.
The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.
Of these six risk factors for false-positive research results, the first five apply to physiological deception detection research in general and to iBorderCtrl in particular. The studies are smaller: in the case of iBorderCtrl, a sample size of under 40 has been mentioned multiple times. Effect sizes are smaller: in physiology research, effect sizes are often so small that it's normal to statistically manipulate the data just to be able to analyze it meaningfully at all. Machine learning algorithms tend to use a greater number and lesser preselection of tested relationships. iBorderCtrl's predecessor, Silent Talker, provides a great example of this pitfall:
Each channel is an attribute being measured, such as an eye contact event. The first implementation of ST used up to 37 channels (e.g. direction of gaze, blinking, blushing/blanching, head turning, moving forwards/backwards etc.). Data from the whole set of channels is combined over some time interval (typically 3 seconds). It is actually the changing relationships between the channels during the time period which indicate whether a person is lying or telling the truth.
That's 37 channels' changing relationships over time. That's a lot of relationships.
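To put that in numbers, here is a back-of-the-envelope sketch in Python. The 5% false-positive rate is the conventional statistical significance threshold, not a figure from the consortium; the point is just the combinatorics.

```python
# How many "changing relationships" can 37 channels generate, and how
# easily do chance findings follow when that many relationships are probed?
from math import comb

channels = 37
pairs = comb(channels, 2)   # pairwise relationships alone, ignoring time lags
print(pairs)                # 666

# At a conventional 5% false-positive rate per tested relationship, the
# expected number of spurious "significant" findings among the pairs is:
print(0.05 * pairs)         # ~33 purely chance "discoveries"
```

And that is before counting each pair's changing relationship across every 3-second interval, which multiplies the number of tested relationships further.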
Machine learning algorithms also often rely on flexibility in definitions, outcomes, and analytical modes. There is also apparent financial and other interest in the case of the iBorderCtrl researchers and their patented deception detection tool. But at least the project doesn't have the sixth risk factor for false-positive results: There are not many teams involved in this scientific field in a chase of statistical significance, because the prevailing scientific consensus on “lie detection” for nearly a hundred years has been that it's pseudoscience.2)
Confirmation bias affects more than just algorithms and researchers. It could also affect border guards in iBorderCtrl's ongoing piloting at EU borders. What if the system flags a traveler as probably lying, suspicious, or high-risk… having been fed fake data from another dummy database for purposes of the test… and the border guard then sees that judgment, and is influenced by it to treat the traveler differently? How would we ever know if that happened? “Black-box” algorithms making non-transparent decisions could, even in piloting, generate confirmation bias that influences people's treatment at the border. Anyone who's been through interrogation knows it can be highly stressful, even (or perhaps especially) when you haven't done anything wrong and no formal harm results.
What if the system flags a traveler as probably lying, suspicious, or high-risk… having been fed entirely real data from another database for purposes of the test… and the border guard then sees that judgment, and is influenced by it to treat the traveler differently? What if the data were wrong? We know there are cases where, for example, a law-abiding individual has been erroneously placed on a watch list that has restricted his freedoms due to confusion between a terrorist's alias and his name (David Mayer). What if there were a similar confusion regarding something much more innocuous but relevant, like an overstayed visa? Anecdotally, it seems very difficult for people to learn about and correct erroneous derogatory information that has been filed away about them somewhere in the name of security. It would be even harder if the border guards using a “black-box” algorithm's judgment of risk were passing on that erroneous information in a nontransparent telephone game.
And what if the data were instead formally right, but provided by a government abusing its power for purposes of political repression? This might describe what happened recently in the case of Hakeem al-Araibi, a Bahraini refugee living permanently in Australia, and currently being held in Thailand pending Bahrain's extradition request on a carefully timed Interpol red notice that disrupted his holiday, to say the least. What if the data were entirely right but biased or irrelevant? And how would we ever know if any of these scenarios happened? “Black-box” algorithms making non-transparent decisions could, in wider use, generate confirmation bias that influences people's ability to cross the border at all.
In all such cases of confirmation bias, it's unclear how we would get data from people who experience undue scrutiny or unfair treatment, or who are altogether denied freedom of movement, in order to know whether or how confirmation bias (or any other sort of bias) was affecting them through iBorderCtrl. The difficulties of collecting data from false positives (people unfairly flagged as deceptive) as well as false negatives (people wrongly flagged as truthful) will bias data collection on the system in favor of making it look like a “success story”—just like traditional lie detection with the polygraph tends to have its accuracy vastly inflated and misrepresented by its proponents. And just as with those parochial American polygraph programs, too pseudoscientific to have ever really gotten a foothold in Europe, it will then be much more difficult to get rid of iBorderCtrl and similar systems once they are in place. The false belief in their proven accuracy and efficacy is highly likely to be self-fulfilling. This is likely to be compounded by the iBorderCtrl consortium's demonstrated nontransparency. AI lie detection's accuracy and efficacy are unproven, and claims to the contrary are erroneous at best.
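A base-rate calculation shows how inflated accuracy claims mislead in border settings, where actual liars are rare. The numbers below are hypothetical, and generous to the tool:

```python
# Why per-test "accuracy" misleads at the border: with rare liars, even a
# generously accurate detector produces almost exclusively false alarms.
sensitivity = 0.85   # hypothetical: P(flagged | lying)
specificity = 0.85   # hypothetical: P(not flagged | truthful)
base_rate = 0.001    # hypothetical: 1 in 1000 travelers is actually lying

# Bayes' theorem: probability that a flagged traveler is actually lying.
p_flag = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
p_liar_given_flag = sensitivity * base_rate / p_flag
print(round(p_liar_given_flag, 4))   # ~0.0056: over 99% of flags are wrong
```

Yet the people behind those 99%-plus false alarms are exactly the ones whose experiences are hardest to collect data on.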
Proponents often argue that the AI doesn't make a decision in iBorderCtrl and similar systems. It just gives an aggregate risk score, category, or probability of deception, rather than a binary recommendation about border crossing. Thus it can't institutionalize confirmation bias—or other forms of bias. But that argument is illogical. If the system assesses someone as probably lying, suspicious, or high-risk, can't explain that assessment, and the assessment goes on to affect a human border guard's decision, then it doesn't really matter whether we say the system or the guard has exhibited confirmation bias here.
What matters is that an inexplicable and unfair decision has been made that affects someone's human rights, and it's going to be well nigh impossible to gather empirical data on these decisions as opposed to guessing at how wrong they might be. That decision could in turn affect future decisions through confirmation bias as the result of information sharing: ensuring interoperability between iBorderCtrl and other information technology systems is likely to be a priority once the system is in broader use.
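Here is a minimal sketch of why the score/decision distinction collapses in practice. Every input, weight, and cutoff below is hypothetical, invented for illustration; the point is that any aggregate score becomes a de facto decision the moment someone applies a threshold to it:

```python
# Sketch of why "it only outputs a risk score" is cold comfort: a score
# becomes a de facto decision as soon as someone thresholds it.
# All names, weights, and inputs here are hypothetical, not iBorderCtrl's.
risk_inputs = {"adds_deception_score": 0.62,   # opaque AI output
               "document_anomalies": 0.10,
               "database_flags": 0.30}
weights = {"adds_deception_score": 0.5,        # hidden from the traveler
           "document_anomalies": 0.2,
           "database_flags": 0.3}

risk = sum(weights[k] * v for k, v in risk_inputs.items())
print(round(risk, 3))   # 0.42 -- "just an aggregate score", not a decision

# ...until a guard (or a default configuration value) applies a cutoff:
SECONDARY_SCREENING_CUTOFF = 0.4
print("flag for secondary screening:", risk >= SECONDARY_SCREENING_CUTOFF)
```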
Neurodiversity Bias
It's not only racial and confirmation bias that might be automated in iBorderCtrl. There is also a large family of predominantly psychiatric conditions, broadly associated with a range of nervous system abnormalities and correlated differences in social behavior, which might affect the measures on which iBorderCtrl's deception detection system relies. Several groups might be subject to bias on the basis of their “abnormal” facial, physiological, behavioral, or other responses to deception detection technologies like iBorderCtrl. But we might never know which ones…
There are several such conditions and differences, and bias effects in the field can be somewhat small and thus difficult to measure with statistical significance. It thus seems probable that there will be many associated bias effects, but that they will not be measured by iBorderCtrl's developers before (or if) the system or something like it goes into more widespread usage.
Yet, in the aggregate, by the same reasoning, a significant minority of the general population seems likely to be affected by these biases. In other words, because the likely affected groups are somewhat small individually, it may be difficult to accumulate definitive evidence of bias against any one group, despite the fact that in total, a sizable minority of the population as a whole is likely to be affected. (One possible solution to this problem would be to survey the people subjected to iBorderCtrl using qualitative interviewing techniques to look for problems we might consider practically significant as members of liberal democratic societies, regardless of whether we can measure them as statistically significant according to a certain quantitative test.)
The term “neurodiversity” (as opposed to neurotypicality) was coined by Australian sociologist and autist Judy Singer in her 1998 honors thesis. It thus originated both in scholarly work and from within the autism spectrum disorder community, as a way to describe people with autistic traits versus those without them. But these terms are increasingly applied more broadly, in recognition that many people seem to have different nervous system responses and therefore develop and exhibit different social behaviors, environmental preferences, and verbal and non-verbal responses, and that they may have different cognitive and emotional abilities and needs, without respect to intelligence per se.
Neurodiversity bias may affect iBorderCtrl's measures of facial microexpressions, head position, physiological responses, and other autonomic nervous system activity more broadly. The relevant differences break down into these often overlapping but distinct categories: heightened baseline arousal, hyperactivity in response to stressors, impaired normalization after such reactive (stress) responses, lowered baseline autonomic arousal, and/or dampened responses.
Conditions associated with differences in these measures as compared to the general population include:
- autism spectrum disorder (ASD)
- attention-deficit and hyperactivity disorder (ADHD)
- sensory processing disorder
- highly sensitive personality
- trauma history, post-traumatic stress disorder (PTSD), and panic disorder (PD)
- chronic pain conditions including fibromyalgia
- depression
- schizophrenia
These are some vulnerable groups. While sometimes stigmatized or feared, the available evidence suggests they tend to be more likely to be victims than perpetrators of crime on the basis of their limited or different abilities. Thus targeting them disproportionately for secondary screening, interrogation, or limited border crossing opportunities is inefficient for purposes of enhancing state security. More importantly, it's wrong. Liberal democratic societies protect vulnerable minorities from being scapegoated (unfairly blamed or disproportionately targeted by suspicion) precisely when external threats make the majority more fearful or mistrustful. Otherwise they are illiberal—or become illiberal in the process of attempting to defend liberal democracy from outside threats, a paradox of open societies much-discussed in political philosophy and very concretely relevant in the world today.
Specifics: Some Trees in the Forest
How might neurodiverse groups tend to exhibit different responses in ways that might lead a “lie detection” tool like iBorderCtrl to be biased against them?
Autism Spectrum Disorder
Children with autism spectrum disorder have measurably different head movements than children without ASD. They move their heads more and faster in a social setting. It is not known whether adults with ASD share this trait, or whether talking to a “lie detecting” border guard avatar works like a social condition or a non-social one. But if autism affects head movements, and iBorderCtrl measures them, then obviously autistic people might be measured as systematically more deceptive (or harder to judge as truthful) than others, when they're not any less honest—they're just different.
There are other reasons to suspect that other relevant measures might be different in autistic people, too. For instance, autism is associated with autonomic dysregulation and underarousal resulting in atypical social behavior and reduced physiological threat responses. Some research suggests parasympathetic (“rest and digest”) nervous system dominance in this population, which might lead to a lack of expected, normal stress responses. Conversely, other research suggests autistic children experience less habituation in autonomic nervous system responses to a direct gaze, such that relatively sustained sympathetic (“fight or flight”) nervous system activation, as if in response to threat, keeps this population from normal social interaction. And adults with autism seem to have an autonomic basis for tactile hypersensitivity in the form of greater skin conductance response to tactile stimuli, even though they don't report perceiving the stimuli differently from neurotypicals. Differences in related measures such as heart rate, galvanic skin response, and body temperature may be present, too. These differences could affect traditional lie detection as well as many next-generation tools that similarly use proxy measures of autonomic arousal, including iBorderCtrl. Autistic adolescents have been found to have reduced autonomic arousal in response to social threat, highlighting the possibility that expected responses to social aspects of lie detection, such as disapproval from the iBorderCtrl avatar, might differ between autists and neurotypicals.
We don't know exactly what's going on with differences in autonomic nervous system responses and autism. We just know there seem to be differences. There are signs of hypo- and hyper-arousal, sympathetic and parasympathetic dominance. This means we have reasons to suspect that relying on measures of physiological arousal to determine whether someone is being truthful or not may well work differently in the autistic population than in the general population—but we're not sure how or why.
Sensory processing sensitivities, common and associated with autonomic dysregulation in autism, are often also found in other, distinct or overlapping contexts. These include sensory processing disorder, the often co-occurring attention-deficit and hyperactivity disorder (ADHD), and the often co-occurring highly sensitive personality. Some people are just more reactive to their environment than others, and that reactivity seems to sometimes have neural, autonomic, and social and behavioral correlates. These sorts of differences could affect how lie detection technologies like iBorderCtrl read people, respond to them, and are responded to in turn. As a matter of science, there's an open vista here where we see a lot of human physiological variation that we understand poorly and want to know more about for all sorts of reasons. As a matter of public policy regarding making people's fundamental rights in any way contingent on their physiology, there's a big neon sign saying “Don't go here: we don't know what we're doing.”
Chronic Pain
A recent meta-analysis found that people with chronic pain conditions such as fibromyalgia tend to have lower heart rate variability (HRV), as we would expect given this measure's typically strong associations with overall health outcomes and nervous system regulation. Another study found that people with chronic pain specifically score lower on one HRV measure, the root mean square of successive differences (RMSSD), which reflects parasympathetic nervous system regulation of the heart. These findings are consistent with what we know about vagus nerve influence on the autonomic nervous system and pain, with heart rate variability proxying for vagal activity.
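For concreteness, RMSSD is a simple, standard computation over the intervals between successive heartbeats (RR intervals); lower values indicate less beat-to-beat variability. A minimal sketch with made-up interval data:

```python
# Minimal sketch of the HRV measure named above: RMSSD, the root mean
# square of successive differences between heartbeats (RR intervals).
import numpy as np

def rmssd(rr_ms):
    """RMSSD in milliseconds, from a sequence of RR intervals in ms."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return np.sqrt(np.mean(diffs ** 2))

# Hypothetical RR intervals (ms): a more variable heart vs. a less variable one.
print(round(rmssd([800, 850, 790, 860, 810, 845]), 1))  # ~54.3: higher RMSSD
print(round(rmssd([800, 805, 798, 806, 801, 804]), 1))  # ~5.9: lower RMSSD
```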
A range of lie detection tools and techniques rely on measurements of vagal activity-influenced autonomic nervous system arousal, especially sympathetic nervous system arousal cues such as pupil dilation (which is sometimes explicitly included among iBorderCtrl's channels),3) sweating (commonly relied on in polygraph chart interpretation and exclusively relied on in Scientology's e-meter measures), heart rate and blood pressure. It is not known whether people with chronic pain seem disproportionately “deceptive” based on such measures, but common polygraph countermeasures involve correctly timed pain. So it stands to reason that incorrectly timed pain, as might occur in the context of chronic pain, could cause false-positive and inconclusive lie detector results. It could also be the case that people with chronic pain (and commonly co-occurring functional gastrointestinal disorders) after traumatic stress exhibit threat-related autonomic dysfunction, causing irregular physiological responses that might similarly affect lie detector results.
There are thus multiple causal pathways through which chronic pain could bias lie detection against its sufferers. Will people who are in pain be subjected to harsher, more invasive and time-intensive border crossing treatment in the future? One might hope that the opposite would be the case: That people who are suffering due to illness would be helped through rather than kept waiting longer, questioned less rather than singled out for more questioning. But if automated “lie detection” that might unfairly implicate such people as deceptive is broadly rolled out, then we might hear anecdotal reports of exactly this sort of morally reprehensible, illegally unequal treatment—and yet still lack the data on how the tool works theoretically and empirically to properly evaluate the scientific basis of such reports.
Trauma
In college women, a history of childhood maltreatment has been associated with atypical autonomic regulation in response to physical and emotional stressors in terms of heart rate and respiratory sinus arrhythmia. People with PTSD and panic disorder, overlapping but distinct anxiety disorders with characteristic psychophysiological symptoms, also exhibit overlapping but distinct signs of autonomic dysregulation. Blunted parasympathetic response in PTSD is a consistent finding across studies, and new research building on this finding increasingly implicates vagal nerve involvement.
These findings build on the relatively recent polyvagal theory of autonomic function, especially the idea that the vagal nerve's functioning as a brake in social settings (to help regulate and balance the sympathetic and parasympathetic parts of the autonomic nervous system) is compromised by experiences of abuse. Subsequent research findings lend further support to this theory; for instance, the findings that people with PTSD show signs of decreased cardiac vagal tone and diminished tonic parasympathetic activity, as well as well-established sympathetic hyperactivation (so, more “fight or flight” response to less stressful stimuli). What this means is that people with trauma histories tend to experience “fight or flight” more easily and then find it harder to normalize those physiological responses. For example, people with abuse histories cannot quickly recover through the normal route of automatically re-engaging vagal regulation after mild exercise to return to a calm physiological state.
Autonomic nervous system abnormalities in people with histories of trauma might matter in physiological deception detection, because there's evidence that the measured responses would seem to be abnormal both at baseline and in response to stressors of various types. This might cause more false-positives and inconclusives in this group. These sorts of abnormalities might especially matter in border crossing contexts, because the sorts of questions likely to be asked could easily be triggers for recalling stressful events or circumstances. For example, questions about identity are likely to involve place of origin, a topic which could trigger stress in a victim of domestic abuse, discrimination, or gang violence. Similarly, questions about reason for travel could trigger stress in someone fleeing persecution or harassment. Not everyone moving away from something stressful becomes a refugee or seeks asylum formally, and no one should be denied freedom of movement on the basis of physiological signs of stress.
Depression
Autonomic dysregulation is so characteristic of depression that one of its cardiovascular symptoms (multi-lag tone-entropy analysis of heart rate variability from ECG) can predict suicidal ideation in depressed people. Currently and formerly depressed adolescents have greater pupillary responses to others' facial expressions regardless of the emotion being expressed, suggesting greater autonomic reactivity that could directly affect systems like iBorderCtrl that seem to include pupil dilation among their channels for detecting deception. Depressed adults similarly show biomarkers of stress in response to a simple lab challenge (a time-constrained cognitive task) that others don't exhibit.
It's not just physiological stress responses that are different in depressed people, though. It's also the lack of (normal, timely) responses in the form of psychomotor retardation, a poorly understood but widely recognized process that can dull movement, thought, and speech in depression. This phenomenon may explain why depressed people show less facial muscle movement than non-depressed people when imagining happy and sad situations.
From proxies of autonomic arousal such as pupil dilation to facial muscle movements, depressed people seem to exhibit different physiological responses than non-depressed people in ways that could adversely affect their interactions with automated lie detection systems like iBorderCtrl. The sad irony of this vulnerability, like that of many others, is that it is borne by a group of people who are particularly unlikely to have the resources (cognitive, emotional, and otherwise) to identify, analyze, articulate, and successfully contest unfair treatment such as automated discrimination on the basis of neurodiversity. Depressed people denied freedom of movement, or subjected to additional screening on the basis of abnormal physiological responses or facial expressions resulting from depression, seem more likely to quietly go away than to organize for change, since depression tends to correlate with political, social, and physical inactivity and isolation.
Schizophrenia
As in several other neurodiverse populations, people with schizophrenia seem to have diminished parasympathetic activity consistent with decreased vagal tone, and thus less ability to recover from stress responses. While other groups (e.g., the severely depressed) may also experience it, blunted affect (a lack of normal and normally responsive facial expressions) is a more recognizably schizotypal symptom, and it can affect gestures as well as facial and vocal expressions. Movement abnormalities such as tics are also seen in psychosis spectrum disorders, although some question whether these are part of the organic disease process or iatrogenic drug responses.
From physiological stress responses that don't normalize the way other populations' do, to facial and vocal expressions and gestures that don't look normal, schizophrenics are at distinctly high risk of being incorrectly classified as deceptive by physiological deception detection tools that may rely on all these measures, or of otherwise failing to comport with those tools' expectations. Yet they are also disproportionately likely not to be believed if and when they report being unfairly targeted, since paranoid delusions are also a well-known part of the disease.
Neurodiversity in Sum
Here we have a motley crew of people with different psychiatric conditions: autism, trauma history and associated anxiety disorders, chronic pain, depression, and schizophrenia. They all might present differently in terms of facial expressions, posture, and/or physiological responses due to a disability or medical condition that should not affect their treatment in border crossing contexts. Taken individually, these groups might seem small and even inconsequential; their population incidence at any given time is in the single digits. But the lifetime prevalences of anxiety and depression alone are generally thought to reach double digits. So in the aggregate, these neurodiverse groups represent a sizeable minority of the general population. That's a lot of people.
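A back-of-the-envelope sketch makes the point. The prevalence figures below are hypothetical placeholders, not epidemiological claims; under either a no-overlap or an independence assumption, the aggregate lands around a fifth of the population:

```python
# Aggregating small hypothetical point prevalences into one rough estimate.
prevalences = {"ASD": 0.01, "ADHD": 0.05, "PTSD/panic": 0.04,
               "chronic pain": 0.08, "depression": 0.05, "schizophrenia": 0.005}

# Upper bound: simple sum (assumes the groups don't overlap at all).
print(round(sum(prevalences.values()), 3))   # 0.235

# Union under an independence assumption (allows for overlap).
p_none = 1.0
for p in prevalences.values():
    p_none *= (1.0 - p)
print(round(1.0 - p_none, 3))                # 0.215
```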
Making freedom of movement increasingly technology-mediated in ways that might discriminate against neurodiverse people on the basis of physiological responses related to neurodiversity affects society as a whole. Among other things, it makes screening processes, and the decisions that may result from them, irrational and discriminatory when they are supposed to be rational and fair. It places an undue burden on vulnerable groups. And by doing these things, it makes the rule of law appear weak in Europe, fostering mistrust in state and other institutions. We know that lower trust correlates with worse societal outcomes across a range of areas. Trust is the fabric of society, and rule of law nurtures that trust. Moreover, equality before the law is, well, the law.
Overall, we just don't know enough about human physiological variation at the population level to be submitting people to mass security screenings involving these responses, as iBorderCtrl lays the foundation for doing. Differences in postural responses such as head movement and position, as well as in physiological responses such as pupil dilation, heart rate variability, and skin conductance, are found in several neurodiverse groups. But we need to know a lot more than we do today to know for sure how that matters, for whom, when, and why, in lie detection contexts.
On a human level, it may well be that these groups also tend to seem more nervous or awkward than the rest. Something may seem “off,” causing them to undergo more secondary screening than other people without something like iBorderCtrl in place. But then again, it's also possible that human border guards know more than AI when it comes to coding anomalies that the guard can't necessarily diagnose and the AI hasn't been explicitly trained for either. In the aggregate, border guards might have cumulative social knowledge from life that lets them correct for possible biases that machines can't correct for because they don't have that knowledge. People can be biased, but they can also be smart and resilient. They can learn, listen, and empathize. Machine-learning AIs like iBorderCtrl can do the first two, but not the third, and that matters in human interactions.
In Sum
New technologies can be risky, and one of the big risks of AI like iBorderCtrl is that it's likely automating discrimination. Its history, the history of AI like it, and the theory and empirics of what it's measuring and how it all works—as far as we can tell based on limited publicly available data from the consortium itself—are hugely concerning when it comes to the fundamental European principle of equality before the law. Whether it's racial and/or gender bias in favor of European men, confirmation bias in its various forms, or bias against neurodiverse groups who might present differently in terms of affect, physiology, and posture, the discrimination that iBorderCtrl could be automating at EU borders now is wrong and illegal.
Of course, existing border-crossing screening procedures are already vulnerable to all sorts of bias. But society's most vulnerable members shouldn't bear the burden of finding out the hard way whether new technologies solve our age-old bias problems or make them worse. We have an obligation as a society to take care of our weak, not single them out for discrimination.