iBorderCtrl's FAQ

This in-depth analysis looks closely at the iBorderCtrl FAQ and finds that the system could make travellers vulnerable to discrimination at EU borders now. It's part of this longer analysis of what forms of bias the tech is probably most vulnerable to, and why. In particular, the system seems likely to automate racial, confirmation, and neurodiversity discrimination.

The origin of this analysis is simple: We started to write a promised post about possible vulnerability to bias… and realized the iBorderCtrl consortium addresses it in their FAQ. But they address it, oddly, by saying there's no need to worry about bias or real-world consequences at all, since their current pilot tests are “encapsulated.” When you dig into what they say and what it all seems to mean, that just doesn't hold up. What they mean by encapsulation is unclear, and what it appears to mean here is insufficient to actually protect people from discrimination. In other words, their claim that there's absolutely no reason to worry about bias in iBorderCtrl doesn't pass the smell test. Here's why…

Point-by-Point FAQ Analysis

1. The iBorderCtrl system is not used to perform border checks

iBorderCtrl is a research project, researching and developing new technologies. As the system is still in development, it cannot be used for actual border checks.

However, the system needs to be tested to validate whether the developed technologies are functioning properly. To achieve this, test pilots are required. To simulate real conditions, these take place at selected border crossing points. Travellers are being invited to voluntarily join and test the system. However, the test pilots are an encapsulated process which takes place after the person has officially crossed the border, to avoid any negative consequences. Also, the iBorderCtrl system is not linked with any law enforcement databases or live systems.

The test pilots take place at border crossings, and thus may have negative consequences for travellers until proven otherwise. Simply asserting that there can be no negative consequences does not show why that would be so. For example, the fact that travellers will supposedly have officially crossed the border already does not mean they cannot suffer negative consequences if a computer flags them to border security as deceptive or high-risk.

Moreover, if it's true that there are no possible consequences for “failing” your iBorderCtrl interview, and travellers and/or border guards know this, that hurts the research's external validity. Other work shows that it matters whether subjects in lie detection studies believe they might suffer negative consequences as a result of being judged deceptive. People who want to be believed tend to look more like they're lying, even if they're telling the truth.1) And their physiological responses as measured in polygraphs show the same trend: people who are more realistically motivated to pass (because they fear the consequences of failure) are actually more likely to fail.2) So either the encapsulation is so thorough that it is likely to artificially inflate accuracy rates by judging more people as truthful than the system would under real-world conditions, or travellers in the study are led to really believe they have reason to fear being judged deceptive during the piloting (and ideally the guards helping administer the study are led to believe the same thing, so that expectancy effects don't undermine the study's internal validity). The researchers cannot have it both ways. Either people think they could experience negative repercussions from failing the lie detector, or they know that's not the case, and so these aren't realistic tests.

Furthermore, iBorderCtrl seems to be linked with dummy data from other databases/systems for the test, which might result in innocent people being associated with derogatory background information. We don't know, because they don't say, but arrest warrants and previously overstayed visas are the sorts of things you would expect to see in such background information. That would mean people who have done nothing wrong might experience confirmation bias during their border crossing as a result of iBorderCtrl piloting. Again, it's not clear how this works in practice to balance (on one hand) minimizing artificiality in order to really test under real-world conditions, and (on the other) minimizing the risk of an untested AI causing unwarranted problems for innocent travellers. Balancing these risks would be challenging enough, but iBorderCtrl claims to be achieving both ideals: realistic real-world testing through field piloting, and no risk of bias through encapsulation. That probably over-promises on both counts.

At best, this is a nontransparency problem: We need more information about how this works. At worst, the piloting is too artificial to tell us much about how the tech would work in the real world, and it's placing travellers to the EU at risk now.

2. The iBorderCtrl system is used on travellers, but on a voluntary basis

As outlined above, testing the system is required to validate the research conducted in iBorderCtrl. However, travellers are not obliged to use the system when crossing the border. As the system is not used for actual border checks anyway (but just for simulations), travellers are invited to voluntarily join, without any obligation to do so. Data collected in the test pilots will be either deleted or anonymised at the end of the project in August 2019.

Non-EU citizens wishing to enter the EU cannot give consent as freely as EU citizens can. Yet non-EU citizens' stakes are also higher in this setting, since they can be denied freedom of movement. So on one hand, for ecological validity, you do want to use non-EU citizens as subjects in this sort of field study. But on the other hand, their consent cannot be fully voluntary, because they are in a powerless position with respect to border security and might feel pressure to participate as a result.

Apparently, participants are not told whether their data will be deleted or anonymized. This suggests that at least some participants' data will not be anonymized before Aug. 2019, leaving participants potentially and unnecessarily vulnerable to privacy breaches. It should be standard practice in dealing with any potentially sensitive information (such as allegedly lying to border security or being flagged as high-risk by a test system) to anonymize data early and often, even if you want to keep unique identifiers in place in order to be able to follow up with people later (one common way of doing that is sketched below). Otherwise you don't know where your data may end up, how long it may be kept there, or how it may be used. Especially when you are closely collaborating with partners like the Hungarian National Police, privacy concerns demand immediate anonymization unless there is a clear and pressing reason not to do so, whose benefits (to society) outweigh the potential costs to your participants. Otherwise, you make your participants unnecessarily vulnerable to confirmation bias, for instance when dealing with border security forces that might be able to ascertain that they were flagged as deceptive or high-risk in an earlier interaction.
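To make that concrete, here is a minimal sketch of one common pattern for doing exactly that: replacing direct identifiers with keyed pseudonyms early on, while a separately held key preserves the ability to follow up with participants or honor deletion requests. This is purely illustrative and assumes nothing about iBorderCtrl's actual data handling; the field names and key management shown are hypothetical.

```python
# Illustrative sketch only: a generic early-pseudonymization pattern,
# not iBorderCtrl's actual data pipeline. All field names are hypothetical.
import hmac
import hashlib
import secrets

# Secret key held separately (e.g., by a data protection officer),
# never stored alongside the research data.
PSEUDONYM_KEY = secrets.token_bytes(32)

def pseudonymize(participant_id: str) -> str:
    """Replace a direct identifier with a keyed hash (pseudonym)."""
    return hmac.new(PSEUDONYM_KEY, participant_id.encode(), hashlib.sha256).hexdigest()

# A record as it might look at collection time (hypothetical fields).
raw_record = {"passport_no": "X1234567", "risk_flag": "deceptive"}

# What gets stored for analysis: the sensitive flag is linked only to a
# pseudonym. Re-identification requires the separately held key.
stored_record = {
    "participant": pseudonymize(raw_record["passport_no"]),
    "risk_flag": raw_record["risk_flag"],
}
print(stored_record)
```

The design point is that sensitive flags never sit next to a direct identifier, and destroying the key at the end of the project effectively anonymizes whatever remains, at essentially no cost to the research.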

3. Fundamental rights of travellers might be violated, but safeguards ensure they are not

Of course, also a test pilot can have an impact on the fundamental rights of travellers. However, as outlined above, the test pilots are encapsulated; issues with respect to discrimination, human dignity, etc. therefore cannot occur.  As for the right to privacy, participants have to provide their informed consent prior to participating in the test pilots. Before doing so, they are informed about both the data processing and their rights as data subjects. Participants can also withdraw from the test pilot at any time and ask for their data to be deleted. In any case, data collected in the test pilots will not be shared with any 3rd parties (i.e. law enforcement agencies) and will be either deleted or anonymised at the end of the project in August 2019. As it is closely monitored that no fundamental rights are being violated, there is no reason to further restrict research on this matter (also considering art. 13 of the European charter of fundamental rights) [sic]

It is unclear how this “encapsulation” prevents bias, especially given that iBorderCtrl affects participants throughout their border-crossing process: before they get to the border, and again in the zone where they interact with border security during the crossing (albeit ostensibly after legally crossing the border). This engagement would seem to be pervasive rather than encapsulated. Therefore issues with respect to discrimination certainly can occur, for instance if iBorderCtrl flags someone as deceptive or high-risk and border security treats them accordingly.

It also does not comport with iBorderCtrl's apparent close collaboration with law enforcement agencies to call them “third parties.” For example, the Hungarian National Police seems to have had a lead role in development that extends into current privacy management, making them a partner in the project's development rather than a third party. Indeed, it is inconsistent to state on one hand that the project privacy officer is a Hungarian police major and that border guards are specially selected and trained to help run the pilot, and on the other hand that no law enforcement agencies will have access to test pilot data. Again, if law enforcement agencies have access to the data, this introduces possible confirmation bias into their dealings with people the system has flagged as deceptive or high-risk.

Given the close relationship between law enforcement agencies collaborating on these pilots and iBorderCtrl developers, we need to know who is doing this close monitoring for fundamental rights violations. Is it the developers and their law enforcement partners themselves who are doing the monitoring? Or is there some independent agency involved that is not getting paid by iBorderCtrl and has no financial stake in the project? If not, why not? And how can we expect the monitoring to correct for expectancy and other probable biases in the researchers running this project who have a financial stake in the tech?

It is incredibly out of place, and downright distasteful, to throw in a passing reference to Article 13 of the EU Charter of Fundamental Rights at the end of this section of the FAQ. Art. 13 is about academic freedom. No one is restricting your academic freedom by saying you can't do research on vulnerable populations with public money to see whether pseudoscientific tech “works” for some nontransparent value of working, while you stand to make a fortune if you can sell it and no one can see the algorithm, data, or ethics reports… Academic freedom in the context of scientific research is the freedom to do good science, balancing risk and benefit with participants' consent in the public interest. But that doesn't seem to be what's going on here. There is no enshrined academic freedom to put human rights at risk through pseudoscience for profit.

4. The system might not be mature enough to be used at the border, but that is the reason for doing research

The Deception Detection system currently has an accuracy of 75%. However, iBorderCtrl is a research project, and not about product development. Therefore, the core task of the project is to research and improve technology, as well as to evaluate its benefit for society and its impact on privacy and fundamental rights of individuals.

Also, the technology is not applied for border checks, but only for testing purposes as explained above.

It's possible that all these statements are true and that there is simply not enough public information at this point to understand why. But at present, given the available information, all of these statements lack face plausibility.

If the system is not mature enough to be used at the border, why not perform more ecologically valid lab experiments before moving to the field? If iBorderCtrl is not about product development, why do the lead developers have a financial stake in the patented deception detection component, ADDS/Silent Talker? If the core task of the project is to evaluate its benefit for society and its impact on privacy and fundamental rights, why not conduct the research in a properly documented manner and make enough data public for there to be meaningful public dialogue before it gets rolled out at EU borders? Maybe even involve some scientists who are not also developers and shareholders? And if the technology is not applied for border checks, why have it in use throughout border crossings? None of this makes sense.

5. The EU might use the system at the border - if they decide to do so

As of now, iBorderCtrl is only a research project funded by the EU under the H2020 programme. If the system, or parts of it, will be used at the border in the future is unclear. It should be also noted that some technologies are not covered by the existing legal framework, meaning that they could not be implemented without a political decision establishing a legal basis (including proper safeguards) first.

EU borders can be jurisdictional gray zones, creating problems for NGOs, activists, and others seeking documents and data under freedom of information laws, as well as opportunities for state entities (both EU and national) to act without clear oversight. So on one hand, it would be correct to say that “some technologies” (such as lie detection) would need a legal basis to be used at the border, and that that legal basis is currently lacking. On the other hand, that formulation leaves out the big question: Why pay for something (as the EU is paying for iBorderCtrl's development) unless you are willing to create the laws to make its use legal?

6. Artificial Intelligence-based systems will be implemented at the border - if they offer benefits

The capabilities of Artificial Intelligence (AI) and the huge potentials it offers are subject to intensive research, not only in iBorderCtrl, but across the whole academic world and various disciplines. However, it is most likely that such a system will only be used at the border if it provides better results than the current system, solely relying on human beings. In fact, an AI-based system with high accuracy might even decrease the risk of discrimination and other fundamental rights issues if implemented properly. Examining the strength and weaknesses of such system is part of the iBorderCtrl project.

This is classic technocrat daydreaming. While AI could theoretically decrease bias, in reality we know it often doesn't. In fact, this overall assessment of the future of AI (by proponents with a financial stake in it) is overly optimistic relative to that of most technology pioneers, innovators, developers, business and policy leaders, researchers and activists. A recent Pew Research Center study of nearly 1,000 tech experts found considerable concern about “black box” AI systems like iBorderCtrl's (1, 2). While most expressed hope about potential applications in healthcare and public health contexts, they also feared a loss of human agency as people get no input into, and no understanding of, how the tools used to make important decisions actually work, sacrificing autonomy and becoming more vulnerable to data abuse at the hands of companies and governments that want power.

On the issue of bias, Andrew McLaughlin, executive director of the Center for Innovative Thinking at Yale University, previously deputy chief technology officer of the United States for President Barack Obama and global public policy lead for Google, said:

My sense is that innovations like the internet and networked AI have massive short-term benefits, along with long-term negatives that can take decades to be recognizable. AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment.

It is unclear from the iBorderCtrl developers' overly optimistic set of assertions who will assess whether iBorderCtrl offers benefits (or costs?), has high accuracy (or not?), or decreases discrimination (rather than increasing it, among other fundamental rights issues). If the assessors are the same researchers who are running the project, who have a financial stake in the technology and views on the future of AI that differ vastly from those of most people in their field, and if the current level of nontransparency continues, then their assessment will not have the hallmarks of open science. Rather, making open examination of the strengths and weaknesses of the system part of the project now, as iBorderCtrl is tested in three EU states, is what we are trying to do. We must do this precisely because it does not appear to have been done by the developers responsible. Yet we fear that, as McLaughlin suggests, it could take decades after the widespread introduction of automated deception detection technology for independent researchers to even have the data to test for bias. In fact, as transparency goes, even that could be an overly optimistic assessment; data on federal polygraph programs in the U.S. has never been obtained to independently test for bias or efficacy.

In this context, it's deeply concerning just how biased the iBorderCtrl consortium seems to be in favor of the expectation that their own tool won't have any biases. Quite apart from the fact that looking for disconfirming evidence of your hypothesis is what defines the scientific method… if they're not set up to find bias, then they probably won't find it. And they're apparently not set up to find bias.

7. The Deception Detection system (Avatar and deception detection) is not tested at the border - it is a web application

The proposed iBorderCtrl system follows of [sic] a two-stage procedure, consisting of a pre-registration and the actual border crossing. The pre-registration, which can be done at home (it is a web-based or mobile application), shall help to reduce the time required for the checks at the border, making the border crossing as comfortable as possible for travellers.

The Deception Detection system is only used in the pre-registration phase (after travellers provided their informed consent, see no. 3) to verify traveller's identification information prior to their arrival at the border. While it would be theoretically possible, no questions are being asked on the traveller's suitcase, etc. The Deception Detection system as such is not used at the border, as there is no need to verify identity information through an avatar, as human border guards are available at the border. However, border guards will be provided with the risk score calculated based on all checks performed during the pre-registration and the border crossing phases including the one from the avatar interview. However, as explained above, the test pilots are only a simulation without any effect on the border crossing of participants.

It seems untenable to claim the system is not tested at the border when the testing includes participant engagement before and after the border crossing, when the deception detection system's output (the risk score) is used beyond the pre-registration phase (that's the whole point), and when the system is intended to help decide who can travel into the EU. The deception detection system might be prone to bias, and its developers have not established why that bias cannot affect travellers in the pilot.

8. The system is not used at various/all border crossing points of the EU

The test pilots are being done at one selected border crossing point in each Greece, Hungary and Latvia. As outlined above, the reason for this is to simulate real conditions in various scenarios, while the test pilots are fully encapsulated from actual border checks. Therefore, the iBorderCtrl test pilots should not be confused with large-scales test with millions of travellers. In fact, due to the limited scale of the test pilots, only very few people will actually be able to voluntarily participate.

How will the “only very few people” allowed to participate in the test pilots be selected? Is there a racial/ethnic component to this selection, to help lessen the bias built into the AI by its foundational research, which disproportionately focused on, and developed accuracy for, assessing European males? If so, are participants informed of that component? How are researchers accounting for possible sampling bias that could limit the ecological validity and generalizability of the field study (arguably defeating its purpose)?

And speaking of selection effects, how will the border guards running the tests be selected? The Hungarian Pilot page mentions that the Hungarian National Police iBorderCtrl testing will be done by specially selected and trained border control officers. The Latvian Pilot page mentions the piloting will be done “by qualified Border Guard officers with experience guided over the years [sic].” And the Greek Pilot page makes no mention of who on the police side will help with the piloting, in terms of their experience level and training or otherwise. So is this border guard selection, as it sounds, inconsistent across pilots because the police forces wanted control and the researchers let them have it? Does the consortium actually theorize that experience makes no relevant difference to border guards' expertise whatsoever? Or can we all agree that experienced people might do the job differently, and that researchers should keep variables that might matter consistent across study sites when possible?

The FAQ also specifies above that pilots are “being done at one selected border crossing point in each Greece, Hungary and Latvia.” But the individual country-level pilot pages referenced above indicate otherwise. The Hungarian page mentions:

To test the proposed system in a relevant environment, the Hungarian National Police will offer two BCPs having significant traffic to properly test and validate the capabilities of the system in a pilot deployment. Testing will be carried out by border control officers selected and trained for testing at the BCPs as well as volunteers recruited via the Internet.

It sounds like the Hungarians will use two border crossing points in piloting. The Greeks may also be testing at multiple pilot locations. It's unclear what the Latvians are doing. There is just so little information out there on what is really being done in the name of security, and the public information that does exist is incomplete, insufficient, and inconsistent.

iBorderCtrl Likely Makes Travellers Vulnerable to Discrimination at EU Borders Now

The iBorderCtrl team claims their current field uses of the tech are somehow “encapsulated.” It is unclear how this encapsulation works or what it really means. They use this umbrella to assert, without fleshing out an argument, that rights violations, especially with respect to discrimination and human dignity, “cannot occur” in current piloting. However, this assertion does not logically follow from the information provided on the same FAQ page and examined in detail above. Taking the statements on that page one by one raises troubling questions about how exactly this encapsulation works without either making travellers vulnerable to bias, or rendering the piloting so unrealistic as to be severely limited in what it can tell us about how the system would actually work in practice.

In scientific terms, that tension is perfectly normal and expected. The researchers leading the current iBorderCtrl piloting at EU border crossings have to balance two competing values. On one hand, the tech needs to be tested under realistic conditions on (and by) the sorts of people on whom it would be used (and who would be using it) in the real world. That sort of test is essential to assessing the tool's field accuracy more realistically than lab studies can; existing lab studies probably vastly over-estimated iBorderCtrl's accuracy, for reasons explored in a previous post here. For field studies to minimize artificiality, and thus extend the generalizability of their results, they need to be conducted under conditions as close to real-world use as possible.

On the other hand, ethics and the apparatus that enforces them (such as university Institutional Review Boards and real-world courts) need to protect human subjects from harm. This need is what limits how realistic the testing conditions for any theory, technology, drug, or device can be. Field research always struggles to balance these competing values: minimizing artificiality to enhance generalizability and build on what science knows, while protecting people participating in research (and others who might be affected) from possible harm. Maximally protecting people from any possible harm would require never testing anything new under realistic conditions where you don't really control the outcomes. Testing new things requires risk. And that's fine.

What's odd here is not the balance between risk and ethics. It's the iBorderCtrl team's insistence that you can have the reward without the risk. If someone tells you that you can have the reward (new scientific knowledge) without the risk (that harm might result), especially in a context where real rights, like freedom of movement, are at stake, then you should be suspicious. It's more typical, and more credible, to speak of minimizing the risks… not of using a nontransparent process such as “encapsulation” to negate them altogether.

1)
As Spencer Ackerman writes in The Guardian (“TSA screening program risks racial profiling amid shaky science – study,” 8 Feb. 2017), the science is worse than insufficient to support current American uses of so-called lie detection based on micro-expressions. In fact, a 2007 article in the journal Law and Human Behavior cited by Ackerman notes that “[a] striking finding in the literature is that liars do not seem to show clear patterns of nervous behaviors such as gaze aversion and fidgeting,” and that “[a] 2006 review of the literature found that ‘people who are motivated to be believed look deceptive whether or not they are lying.’”
2)
This finding was cited in Iacono and Ben-Shakhar's 2018 Law and Human Behavior article “Current status of forensic lie detection with the comparison question technique: An update of the 2003 National Academy of Sciences report on polygraph testing.” As noted previously in the post “Biomarkers of Scientific Deceit,” a scientific critique of some of iBorderCtrl's claims about using micro-expressions to detect deception, the finding comes from Patrick and Iacono's (1989) constructive replication of a Raskin and Hare (1978) polygraph prison study. The original study had reported a hit rate of 96%; the replication added a threat manipulation and found a hit rate of only 72%. Those hit rates set aside the problem of false positives, describing only how false negatives increase as artificiality decreases. Patrick and Iacono also specify that while most guilty subjects (87%, excluding inconclusives) were correctly identified, innocents were identified with only 56% accuracy. So having a reason to actually care about the possibility of failing the polygraph, when you are telling the truth, makes you almost as likely to fail it as to pass it. Both were typical mock crime studies, of the sort used in iBorderCtrl/Silent Talker and other lie detection research. But in the replication, inmates were told that if more than 21% of them failed the polygraph, then no one would get the study payment of $20, and a list of who failed would be circulated so that they could be held accountable for everyone missing out on the money. Of course, all participants got paid and no such list was circulated, but the realistic contingency threat caused different physiological responses, such that deception detection accuracy plummeted. The lab studies used in iBorderCtrl's development could similarly have done a better job approximating real-life circumstances, and might have gotten different results if they had.
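To make those figures concrete, here is a back-of-the-envelope sketch. The per-group accuracies are the ones quoted above; the assumption of equal group sizes (with inconclusives excluded) is ours, added only to show how the innocent-subject figure translates into a false-positive rate and how it drags the overall hit rate down toward the reported 72%.

```python
# Rough arithmetic on the Patrick and Iacono (1989) figures quoted above.
# Assumptions (equal group sizes, inconclusives excluded) are ours, not the study's.
guilty_hit_rate = 0.87    # guilty subjects correctly judged deceptive
innocent_hit_rate = 0.56  # innocent subjects correctly judged truthful

# A truthful person's chance of failing anyway (false positive).
false_positive_rate = 1 - innocent_hit_rate
print(f"Innocent subjects wrongly judged deceptive: {false_positive_rate:.0%}")  # 44%

# With equal numbers of guilty and innocent subjects, the overall hit rate
# lands near the ~72% reported for the replication.
overall_hit_rate = (guilty_hit_rate + innocent_hit_rate) / 2
print(f"Overall hit rate under that assumption: {overall_hit_rate:.1%}")  # 71.5%
```

The point of the arithmetic is simply that a headline “hit rate” can hide a near coin-flip outcome for people who are telling the truth.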