iBorderCtrl Archive
On this page we will try to put all relevant publications. These can be media articles, blog posts, scientific papers, parliamentary questions, public letters, or anything else that is published anywhere and has direct relevance to iBorderCtrl.
Everything is in chronological order of publication date. Many boilerplate articles that merely repeated material this archive already contained were skipped, mostly due to time constraints. If you feel anything is missing, please mail us a link at admin~at~iBorderCtrl.no
Before 2016
28 Apr 2001 - Zuhair Ghani Al Bandar, David Anthony McLean, James Dominic O'Shea, Janet Alison Rothwell
There is disclosed a method for analyzing the behavior of a subject comprising the steps of: making one or more measurements or observations of the subject; coding the measurements or observations into a plurality of channels; and analyzing the channels using artificial intelligence, in order to output information relating to the psychology of the subject.
12 Apr 2006 - Janet Rothwell, Zuhair Bandar, James O'Shea, David McLean (in journal Applied Cognitive Psychology)
This paper presents the development of a computerised, non‐invasive psychological profiling system, ‘Silent Talker’, for the analysis of non‐verbal behaviour. Nonverbal signals hold rich information about mental, behavioural and/or physical states. Previous attempts to extract individual signals and to classify an overall behaviour have been time‐consuming, costly, biased, error‐prone and complex. Silent Talker overcomes these problems by the use of Artificial Neural Networks. The testing and validation of the system was undertaken by detecting processes associated with ‘deception’ and ‘truth’. In a simulated theft scenario thirty‐nine participants ‘stole’ (or didn't) money, and were interviewed about its location. Silent Talker was able to detect different behaviour patterns indicative of ‘deception’ and ‘truth’ significantly above chance. For example, when 15 European men had no prior knowledge of the exact questions, 74% of individual responses ( p < 0.001) and 80% ( p = 0.035) of interviews were classified correctly.
?? ??? 2006 - Rothwell, J., Bandar, Z., O'Shea, J., McLean, D. (in Neural Computing and Applications)
This paper describes the application of a backpropagation artificial neural network (ANN) for charting the behavioural state of previously unseen persons. In a simulated theft scenario participants stole or did not steal some money and were interviewed about the location of the money. A video of each interview was presented to an automatic system, which collected vectors containing nonverbal behaviour data. Each vector represented a participant’s nonverbal behaviour related to “deception” or “truth” for a short period of time. These vectors were used for training and testing a backpropagation ANN which was subsequently used for charting the behavioural state of previously unseen participants. Although behaviour related to “deception” or “truth” is charted the same strategy can be used to chart different psychological states over time and can be tuned to particular situations, environments and applications.
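The pipeline the abstract describes (nonverbal behaviour encoded as feature vectors, a backpropagation ANN trained to separate "deception" from "truth") can be sketched as follows. This is a minimal illustration only: the feature encoding, network size, labels and synthetic data below are invented, since the real Silent Talker channels and training data are not public.

```python
import math
import random

random.seed(0)

# Synthetic stand-in for the paper's "nonverbal behaviour vectors":
# each vector summarises channel activity over a short time slot and
# carries a label (1 = "deceptive" pattern, 0 = "truthful" pattern).
# Features and labels here are invented for illustration only.
N_FEATURES = 8

def make_vector(label):
    base = 0.7 if label else 0.3
    x = [min(1.0, max(0.0, random.gauss(base, 0.15))) for _ in range(N_FEATURES)]
    return x, label

data = [make_vector(i % 2) for i in range(200)]

# One-hidden-layer network with sigmoid activations, trained by
# plain backpropagation (stochastic gradient descent).
H = 4
w1 = [[random.uniform(-0.5, 0.5) for _ in range(N_FEATURES)] for _ in range(H)]
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    out = sigmoid(sum(w * h for w, h in zip(w2, hidden)))
    return hidden, out

def train(epochs=200, lr=0.5):
    for _ in range(epochs):
        for x, y in data:
            hidden, out = forward(x)
            d_out = (out - y) * out * (1 - out)  # output-layer delta
            for j in range(H):
                # hidden-layer delta uses the pre-update weight
                d_h = d_out * w2[j] * hidden[j] * (1 - hidden[j])
                w2[j] -= lr * d_out * hidden[j]
                for i in range(N_FEATURES):
                    w1[j][i] -= lr * d_h * x[i]

train()
accuracy = sum((forward(x)[1] > 0.5) == bool(y) for x, y in data) / len(data)
```

As in the paper, the same training strategy is agnostic to what the labels mean; relabelling the vectors would let the identical network chart a different psychological state over time.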
2016
26 Oct 2016 - Dr James O'Shea
I am currently a Senior Lecturer at Manchester Metropolitan University and a director of an AI startup company, Silent Talker Ltd. The opinions expressed here are entirely my own, but are based on my research, teaching and technology transfer experience. This includes DTI funded consultancy to industry during a similar period of technological change, the adoption of microelectronics during the early 1980s.
I am an investigator on the Horizon2020 iBorderCtrl project which has 13 partners across Europe (including 4 border agencies). This project is intended to speed up the crossing of 3rd party nationals and freight (e.g. the UK post-Brexit) into the Schengen Area. It uses a patented (Rothwell et al., 2002) AI based deception detection system (Silent Talker) developed by myself and colleagues, in a pre-travel interview. This is combined with other measures (Biometric, Document etc.) to assess risk and guide border guards in their dealings with travellers. The pre-travel interview uses questions nominated by actual border guards. Silent Talker is a specific application of a generic technique (adaptive psychological profiling) developed in my research group to assess a person’s internal mental state through non-verbal behaviour. […]
There are interesting possibilities for AI at the interface between the two camps. We are developing systems, which simulate elements of consciousness or emotion, along with more routine AI components to solve complex problems. For example, my group’s contribution to the Horizon 2020 iBorderCtrl project in which we are developing an Avatar (computer generated artificial person) who will present neutral, positive/encouraging or puzzled/skeptical emotions to the interviewee, depending on the degree of deception detected in answering pre-travel questions. […]
Negative effects will include the potential loss of civil liberties due to increased efficiency of mining personal data and monitoring people. Solutions could take different forms. We could counter the abilities of AI with more pro-active, intrusive and statutory personal data protection. Alternatively, we could make a cultural switch as a society to the position where more is known but less is cared about our personal lives.
A question I have been asked on TV and radio interviews is “Suppose your AI lie detection technology becomes generally available, what will happen in a society where no-one is able to lie?” If there are no more social white lies, bluffing and haggling in negotiations, face saving excuses etc. it is possible that we may have to radically change our society’s mores, to gloss over the unpalatable truths that AI may reveal to us about each other, simply to continue to function.
This prepares the ground for my recommendation that the committee should also seek evidence from philosophers – as an AI researcher I have found their work to be productively informing in the past. […]
Racial or cultural discrimination. Lack of consciousness and emotion works in favour of AI systems in their claim to robustness against discrimination. However, human developers need to take care not to build bias in at the training stage. This could be a particular weakness where Big Data is used and the data is not properly vetted, cleansed and balanced. Suppose I built a lie detection system trained from “white Europeans” and then operated the system on another ethnic group? If the second ethnic group had different cultural patterns of non-verbal behaviour, they might produce non-verbal indicators of deception when telling the truth and be unfairly labelled as liars. My current stance is that, in my work, specific AI classifiers should be developed for each ethnic group using the system, so that each is treated fairly. Part of the iBorderCtrl project involves testing whether such differences exist and if so at what level of granularity is ethnic / cultural partitioning most effective in producing fair classifications. […]
In what situations is a relative lack of transparency in artificial intelligence systems (so-called ‘black boxing’) acceptable? When should it not be permissible?
Using my experience with the iBorderCtrl project, described in paragraph 0.1 to frame an example, I see some complications. These arise from legislation (Directive (EU) 2016/680, which I presume will have some equivalent in UK law for the foreseeable future): “subjects of biometric decision making have a right to be informed of automatic decision making, that it is made transparent and that the subject has the right to express his/her point of view or the right to contest the decision.” These types of question need to be considered by your committee.
What responsibility is there on the AI system or its developers to explain how the system has reached its decision?
We use a form of AI called an Artificial Neural Network, which is a black box and effectively inexplicable to humans. We have experimented with producing a rule-based equivalent that produces over a thousand rules using collections of logic operators – again unsuitable for the average traveller.
What is the equivalent duty for a human? If a border guard suspects an interviewee of deception, what is the responsibility and degree to which the human must justify / explain the decision (human explanations of how they reach decisions may be instinctive and have no effective predictive explanation)?
Should you tell the traveller during a pre-travel interview if he / she is suspected of deception? If so, what is the mechanism for contesting the “decision”?
Should you answer questions on how to pass and so lead the traveller to believe that he / she is getting coaching on how to appear truthful?
My view is that “inconsequential” decisions by AI components (i.e. the traveller was truthful, no action needed) do not need to be explained to travellers or contested by them. Where a traveller is suspected of deception, the AI system should provide evidence to a human-in-the-loop, who will take the decision and comply with the traveller’s rights.
?? ??? 2016 - S. Zoltán
- The Intelligent Border Control System and the future of Integrated Security Management in Public-Private Cooperation (Cannot find link)
16 Dec 2016 - J. Stoklas (in ZD-Aktuell, Beck, Munich, Heft 21)
2017
08 Feb 2017 - Horizon - The EU Research & Innovation Magazine
Anastasia Garbi from European Dynamics in Luxembourg is working on a system that uses facial recognition to begin screening people before they leave home.
‘You can use what you have at home, a personal computer, without expert or specialised scanners,’ Garbi said.
The idea is that travellers take pictures of their passport, visa and proof of funds and upload them to a website. Then, using a web camera, they answer questions posed to them from a computer-animated avatar border guard for a few minutes.
That gives officials a series of facial images they can use to check against stored pictures from a passport or past entries and exits, a comparison that is much more difficult to copy than a fingerprint.
‘Even a fingerprint is easy to imitate and cheat,’ Garbi said. ‘If you have a video capture, and you have some questions along with this, you get pictures of the reactions of the face of the traveller. This is very difficult to copy.’
Research like this will help the EU make citizens safer by securing its borders effectively, without causing long queues and discomfort for travellers.
The avatar is even capable of using facial biometrics (micro-expressions) to analyse the non-verbal behaviour of the interviewee and determine if the passenger is lying. The site then sends the inputs the traveller gives to a secure back-end system which calculates an aggregated risk factor, based on a comparison with the stored data and the expression analysis.
Jun 2017 - J. Stoklas (in ZD-Aktuell 2017, 05684)
July 2017 - K. Crockett, J. O’Shea, S. Zoltán, Ł. Szklarski, A. Malamou, G. Boultadakis (in Biometrics Technology Today journal, Elsevier, Volume 2017, Issue 7)
In the current situation where terrorism has become a dire and global concern, daily border crossings are fluctuating, further checks are being reinforced at border control procedures, biometric data can potentially contribute to a faster, more secure and feasible verification of people’s identity; thus tackling any related threats and constraints rising from the aforementioned matters. In light of this, a survey has been conducted within the Horizon 2020 “Intelligent Portable ContROl SyStem - iCROSS” project, in close interaction with various European border authorities which aimed to extract any problems during their daily routines and how the system to be developed could assist them towards a more effective and less risky implementation of their duties.
28 Nov 2017 - S. Zoltán (at 2017 CEPOL Research and Science Conference ’Innovations in Law Enforcement)
- iBorderCtrl and BBA242: Examples for research and innovation in border security at European and national level (Cannot find paper)
2018
Mar 2018 - EU Agency for Fundamental Rights
Europe’s migration and security challenges have prompted the European Union (EU) to develop and enhance multiple large-scale information technology systems (IT systems). Policy and legal developments in this area are evolving rapidly. The European Commission has proposed amending the legal bases for Eurodac and the Schengen Information System (SIS II), and is expected to propose amending the Visa Information System (VIS) in 2018. In addition, four new systems are planned: the Entry-Exit System (EES), the European Travel Information and Authorisation System (ETIAS), the European Criminal Records Information System for Third-Country Nationals (ECRIS-TCN), and, most crucially, an IT system that seeks to ensure interoperability across existing and planned systems.
Such systems provide invaluable support to border management efforts, but also have wide-ranging fundamental rights implications. The persons affected – including both regular travellers and persons who may be in situations of vulnerability – typically do not fully understand the implications of the use of such systems.
Contents:
Key findings and FRA opinions The right to information when personal data are processed Respect for human dignity when taking fingerprints Access to and use of personal data stored Persons in need of international protection How data quality affects fundamental rights The right of access, correction and deletion of own data stored Best interests of the child – risks and opportunities
26 Mar 2018 - Keeley Crockett (at Pro Manchester Business Conference)
AI is used by a huge range of businesses, and is soon to be trialled by police with an innovative new lie detector called the ‘Silent Talker’. This project, worked on by keynote speaker Keeley Crockett of Manchester Metropolitan University, combines multiple channels to work out whether an individual is lying.
These channels are things outside our control, including non-verbal messages (playing with hair, fake smiles etc.) and micro-gestures (cheeks blushing, eyes moving left or right etc.), which together can indicate your mental and behavioural state.
The Silent Talker then monitors over 40 factors in the face to detect whether a person is lying or not. This forward-thinking technology is set to be trialled by both the police force and EU border control to see whether they can improve their processes and reduce criminal activity.
8 Jul 2018 - L. Rodriguez Carlos Roca, I. Hupont Torres and C. Fernandez Tena (at 2018 IEEE World Congress on Computational Intelligence)
This paper provides an overview of border control processes and how the inclusion of different biometric technologies contributes to their improvement. In particular, facial recognition is one of the latest biometric technologies to have been added to this list. The Face Matching Tool (FMT), a system designed to assist border guards in validating the identity of a travel document holder during the border crossing process, is presented in this paper. The system is built using advanced and high-performance deep learning models. Existing solutions for border control, such as the Automated Border Check gates (ABC gates), are possible thanks to the use of facial recognition as the main option for identity validation. These solutions reduce the time a traveler spends queuing to cross the border at airports around the globe. The FMT module, together with the rest of the iBorderCtrl system, reduces this waiting time while providing unconstrained, high facial recognition performance at land borders.
8 Jul 2018 - J. O'Shea, K. Crockett, W. Khan, P. Kindynis (at 2018 IEEE World Congress on Computational Intelligence)
In this paper an automatic deception detection system, which analyses participant deception risk scores from non-verbal behaviour captured during an interview conducted by an Avatar, is demonstrated. The system is built on a configuration of artificial neural networks, which are used to detect facial objects and extract non-verbal behaviour in the form of micro gestures over short periods of time. A set of empirical experiments was conducted based on a typical airport security scenario of packing a suitcase. Data was collected from 30 participants taking part in either truthful or deceptive scenarios while being interviewed by a machine-based border guard Avatar. Promising results were achieved using raw unprocessed data on un-optimized classifier neural networks. These indicate that a machine-based interviewing technique can elicit non-verbal interviewee behaviour which allows an automatic system to detect deception.
18 Sep 2018 - K. Crockett, J. Stoklas, J. O’Shea, T. Krügel and W. Khan (at IJCCI 2018, 10th International Joint Conference on Computational Intelligence)
* Adapted Psychological Profiling Verses [sic] the Right to an Explainable Decision (video)
This keynote will focus on how computational intelligence techniques can be used to automatically profile people psychologically. Silent Talker is a pioneering psychological profiling system which was developed by experts in Behavioural Neuroscience and Computational Intelligence. Designed for use in natural conversation, Silent Talker combines image processing and artificial intelligence to classify multiple visible non-verbal signals of the head and face during verbal communication. From analysis, the system produces an accurate and comprehensive time-based profile of a subject’s psychological state. The talk will give examples on how Silent Talker technology can be used. Firstly, to detect deception through providing risk scores to border guards in a border crossing scenario which is being developed as part of the European Union sponsored project known as iBorderCtrl. Secondly, to detect the comprehension level of a person in order to provide personalised and adaptable online learning within a conversational intelligent tutoring system. Ethical considerations will also be touched on in line with the GDPR and how it is important to have a “human in the loop” when developing such systems.
24 Oct 2018 - EU Commission website
An EU-funded project is developing a way to speed up traffic at the EU's external borders and ramp up security using an automated border-control system that will put travellers to the test using lie-detecting avatars. It is introducing advanced analytics and risk-based management at border controls.
27 Oct 2018 - Manchester Evening News (byline Rebecca Day)
Lie detectors could be used as a way to catch out criminals at border control - we put it to the test to see if we'd be able to sneak into a country on a fake passport […]
“[The avatar] is not subjective. It's based upon experiments, it's based upon capturing non-verbal behaviour. […]
If your features suggest that you are being deceptive, then the avatar actually changes its behaviour towards you. So if you seem cross, the avatar would be stricter than if you were happy and answering truthfully.
31 Oct 2018 - Gizmodo (byline Melanie Ehrenkranz)
A number of border control checkpoints in the European Union are about to get increasingly—and unsettlingly—futuristic.
In Hungary, Latvia, and Greece, travelers will be given an automated lie-detection test—by an animated AI border agent. The system, called iBorderCtrl, is part of a six-month pilot led by the Hungarian National Police at four different border crossing points.
31 Oct 2018 - New Scientist (byline Douglas Heaven)
What’s in your suitcase? If you open the suitcase and show me what is inside, will it confirm that your answers were true?
These are just two of the questions that an automated lie-detection system will ask travellers during a six-month pilot starting this month at four border crossing points in Hungary, Latvia and Greece with countries outside the European Union. It will be coordinated by the Hungarian National Police. The lie detector uses artificial intelligence and is part of a new tool called iBorderCtrl …
31 Oct 2018 - Engadget (byline Jon Fingas)
The pilot won't start with live tests. Instead, it'll begin with lab tests and will move on to “realistic conditions” along the borders. And there's a good reason for this: the technology is very much experimental. iBorderCtrl was just 76 percent accurate in early testing, and the team only expects to improve that to 85 percent. There are no plans to prevent people from crossing the border if they fail the initial AI screening.
31 Oct 2018 - The Verge (byline Dani Deahl)
The program is still considered highly experimental, and in its current state, will not prevent anyone from crossing over a border. Early testing of a previous iteration only had a 76 percent success rate, but a member of the iBorderCtrl team told New Scientist that they are “quite confident” that can be raised to 85 percent.
Even if that goal is reached, it leaves a large amount of room for error. But that’s not entirely surprising as studies have shown that many facial recognition algorithms have significant error rate issues and bias. These systems have also raised flags with civil liberties groups like the ACLU’s Border Litigation Project, who worry they might lead to more widespread surveillance.
01 Nov 2018 - Netzpolitik.org (byline Alexander Fanta)
The EU is funding the project with 4.5 million euros through the Horizon 2020 fund. The system is being co-developed in Germany and Austria. The Institute for Legal Informatics at the University of Hannover is assessing the technology’s privacy risks for the project. The researcher Bernhard Strobl of the Austrian Institute of Technology, who works on video surveillance technology, is coordinating the research.
The system could significantly strengthen the control regime at the borders in the future. The project website and the developers’ public statements leave open what is to happen with the biometric and other data the system collects. The EU is currently working on the Europe-wide consolidation of government databases holding personal data, fingerprints and photos. The newly created data pool is to provide access to the Schengen Information System, Europol fingerprints and other databases. iBorderCtrl is likely to feed this data collection further.
01 Nov 2018 - Felix von Leitner - Fefe's Blog
Great idea: AI lie detectors are now to be deployed at the EU’s external borders.
In Hungary, Latvia, and Greece, travelers will be given an automated lie-detection test—by an animated AI border agent. The system, called iBorderCtrl, is part of a six-month pilot led by the Hungarian National Police at four different border crossing points.
Per-fect! The Hungarians can simply stop letting anyone in at all, and in the end nobody is to blame. The AI was just rubbish. ¯\_(ツ)_/¯
No problem. Nobody wrote it. It was trained. So it was badly trained, but you can’t see that in advance!1!!
01 Nov 2018 - ZDNet (byline Charlie Osborne)
[…] The trials will begin in lab testing conditions before real-world scenario tests start. However, the accuracy rate of roughly 76 percent – based on a trial of 30 people – leaves much to be desired, and so concrete trials are a preferable option to an EU-wide rollout until the lie detection system has been improved.
02 Nov 2018 - The Guardian (byline Daniel Boffey)
The EU has been accused of promoting pseudoscience after announcing plans for a “smart lie-detection system” at its busiest borders in an attempt to identify illegal migrants. […]
Bruno Verschuere, a senior lecturer in forensic psychology at the University of Amsterdam, told the Dutch newspaper de Volkskrant he believed the system would deliver unfair outcomes. “Non-verbal signals, such as micro-expressions, really do not say anything about whether someone is lying or not,” he said. “This is the embodiment of everything that can go wrong with lie detection. There is no scientific foundation for the methods that are going to be used now. “Once these systems are put into use, they will not go away. The public will only hear the success stories and not the stories about those who have been wrongly stopped.” Verschuere said there was no evidence for the assumption that liars were stressed and that this translated into fidgeting or subtle facial movements.
Bennett Kleinberg, an assistant professor in data science at University College London, said: “This can lead to the implementation of a pseudoscientific border control.”
02 Nov 2018 - TV5Monde (byline Pascal Hérard)
A technology funded by the European Union since 2016 is to be tested over the next six months at at least three border posts of the Schengen area: a “lie detector” artificial intelligence named “Iborderctrl”. Analysis and explanations of the beginnings of European-style “algorithmic governance”.
02 Nov 2018 - Independent (byline Samuel Osborne)
The AI will then reportedly analyse their faces as they give their answers, looking at 38 micro-gestures to spot facial patterns which some say are associated with lying. Others disagree, however, with Bennett Kleinberg at the University of London telling the magazine the idea is controversial and has little evidence. […]
The trial of the tool, called iBorderCtrl, is being led by the Hungarian National Police. In an early test on 30 people, half of which were told to lie, the tool identified the liars with around 76 per cent accuracy. The team behind the system said they hoped to increase its accuracy by training it on a larger data set during the pilot.
02 Nov 2018 - CNN Travel (byline Rob Picheta)
But privacy groups have raised concerns about the trial. “This is part of a broader trend towards using opaque, and often deficient, automated systems to judge, assess and classify people,” said Frederike Kaltheuner, data program lead at Privacy International, who called the test “a terrible idea.”
The technology has been tested in its current form on only 32 people, and scientists behind the project are hoping to achieve an 85% success rate. Previous facial recognition algorithms have been found to have higher error rates when analyzing women and darker-skinned people, with an MIT study earlier this year finding that technology developed by companies including IBM and Microsoft contained biases.
“Traditional lie detectors have a troubling history of incriminating innocent people. There is no evidence that AI is going to fix that – especially a tool that has been tested in 32 people,” Kaltheuner added.
“Even seemingly small error rates mean that thousands of people now have to prove that they are honest people, just because some software said they are liars,” he [sic] added.
02 Nov 2018 - ORF.at (byline "pepr")
Meanwhile, the planned lie-detection system is raising eyebrows among experts. According to Bruno Verschuere of the Institute of Forensic Psychology at the University of Amsterdam, there is no scientific basis for the methods being applied here. Rather, “micro-expressions” do not really say anything about whether someone is lying, as Verschuere told the Guardian.
While the data scientist Bennett Kleinberg of University College likewise described the approach to the paper as “pseudoscientific”, Verschuere was also convinced that once put into operation, the system would stay in operation: “The public will only hear the success stories and not the stories about those who have been wrongly stopped.”
05 Nov 2018 - Homo Digitalis
On 05.11, Homo Digitalis submitted a Petition to the Greek Parliament (No. 4661) on the use of the IBORDERCTRL system at the Greek border, asking specific questions to the competent Minister.
05 Nov 2018 - EurActiv
An EU-funded project is developing an ‘intelligent control system’ to test third-country nationals who reach the EU’s external borders, including a sophisticated analysis of their facial gestures. The Intelligent Portable Border Control System, iBorderCtrl, is a series of multiple protocols and computer procedures which are meant to scan faces and flag ‘suspicious’ reactions of travellers who lie about their reasons for entering the Schengen area.
05 Nov 2018 - Süddeutsche Zeitung (byline Matthias Kolb)
See below…
05 Nov 2018 - Tages Anzeiger (byline Matthias Kolb)
The EU Commission makes no secret of its goals: checks are to become faster and illegal immigrants are to be identified more reliably. For the criminologist Bennett Kleinberg of University College London, however, the approach is “pseudoscientific” and problematic: “It is highly controversial to claim that there is a relationship between non-verbal micro-expressions, such as the twitching of an eyelid, and the telling of a lie.” Stress, Kleinberg told the BBC, is not a good indicator for establishing the truth.
Further scepticism stems from the fact that European Dynamics of Luxembourg tested the lie detector on only 30 employees and reports a success rate of 76 per cent. Half of the participants deliberately fibbed, which is why the software may have been fed flawed data, warns the expert Maja Pantic in New Scientist: “If you ask people not to tell the truth, they behave differently from someone who really lies in order to avoid going to prison.”
06 Nov 2018 - The Next Web (byline Tristan Greene)
Overzealous technocrats over-hyped iBorderCtrl in order to impress clients and the public. It is, however, able to collect massive amounts of data about you and add it to your permanent file. It is not a lie-detector machine. Calling the EU’s new border control AI a “lie detector” is like calling Brexit a minor disagreement among friends. […]
Depending on your views on privacy and immigration, this is either music to your ears or the beginning of a dystopian future straight out of an Orwellian nightmare. You’re wrong either way. For example, if you’re thinking “we could just have it ask everyone “are you a terrorist?” and make the EU safer for everyone” then you’re probably assuming there’s such a thing as an AI lie detector. There isn’t. Don’t worry, that’s a common mistake.
07 Nov 2018 - Sophia in 't Veld, MEP
to the Commission
Rule 130
Sophia in 't Veld (ALDE)
Subject: iBorderCtrl project
On 24 October, the Commission published an article on its website under the section ‘Success stories’ on the EU-funded iBorderCtrl project. This project, which costs EUR 4.5 million, sets up a ‘smart lie-detection system’, which profiles travellers on the basis of a computer-automated interview taken by the traveller’s webcam before the trip, and an artificial-intelligence-based analysis of 38 micro-gestures. It sets out to detect illegal immigrants and to prevent crime and terrorism. The project gave rise to a lot of criticism from civil society and experts.
Why does the Commission consider this system to be a ‘success story’? Has the High-Level Expert Group on Artificial Intelligence given recommendations regarding ethical guidelines for this system and on its impact on applying the Charter? If not, why not? Why does the Commission consider a trial of the automated lie-detection system on 34 people, with an accuracy rate of only 76 %, to be a sufficient basis to start ‘trials’ of this system in Greece, Hungary and Latvia, where the fundamental rights of many border-crossing travellers will be violated?
08 Nov 2018 - Security Today (byline Sydny Shepard)
You thought you'd seen it all at the airport security checkpoint. From automatic rotating bins to futuristic 3D bag screening, traveler screenings have become a bit of a grab bag when it comes to what you might find between the airport lobby and the secured gate areas. Now, especially across the pond, you can expect to be administered a lie detector test powered by artificial intelligence—in addition to normal screening routines. […]
Some experts have doubts about the experiment, arguing that passengers will simply be more mindful of their physical cues while continuing to lie during the process. “If you ask people to lie, they will do it differently and show very different behavioral cues than if they truly lie, knowing that they may go to jail or face serious consequences if caught,” Imperial College London's Maja Pantic told the New Scientist. “This is a known problem in psychology.”
08 Nov 2018 - Clubic (byline Alexandre Boero)
Under the guise of relieving border-service staff and strengthening security within the European Union, the lie-detecting AI may be nothing more than an alibi for data collection.
08 Nov 2018 - Opportunités Technos (byline Etienne Henri)
Every large-scale experiment proves it: AIs repeat and amplify human biases. Whether it is chatbots that turned Nazi within 24 hours of contact with internet users, or racist facial-recognition programs, it is currently impossible to create an AI that is absolutely neutral and fair (assuming those terms even have a universal definition). At best, we know how to create AIs that are biased the way their creators wanted, which, in matters of homeland security, is problematic.
It is therefore curious that Europe has decided, at this stage of AI progress, to entrust such a responsibility to them. What will happen when the software automatically lets a terrorist through because he was white, elderly and a good liar? What will stressed travellers think who, ill at ease in an interrogation conducted by a machine, are systematically labelled as suspects? iBorderCtrl's AI will not be able to shed its biases. The fact that it was trained on only about thirty subjects to form an idea of liars' micro-expressions will further reduce its reliability.
10 Nov 2018 - Privacy News Online
As the borders between nations have become increasingly sensitive from a political point of view, so the threats to privacy there have grown. Privacy News Online has already reported on the use of AI-based facial recognition systems as a way of tightening border controls. As software improves, and hardware becomes faster and cheaper, it’s likely that this will become standard. But it’s by no means the only application of AI systems at the border.
The European Commission has just announced trials in Hungary, Greece and Latvia of iBorderCtrl, a $5.2 million project that includes the use of an AI-based lie detection system to spot when visitors to the EU give false information about themselves and their reasons for entering the area.
11 Nov 2018 - HiTech Expert (byline Vadim Buriak)
Having received a special QR code allowing the passenger to continue moving, the passenger is guaranteed to get on board the aircraft. It is worth noting that the system is experimental, so the algorithm will not personally deny people the opportunity to cross the border. In previous tests, the accuracy of iBorderCtrl was 76%, but, according to one of the developers, this figure can be increased to 85%.
11 Nov 2018 - data-traction (a data-protection and AI consultancy)
For the first experiments in this project, a 76 percent success rate is reported. Considering that this success rate was achieved in a controlled experiment with a very small test set of only 32 individuals, that is a decidedly poor result. Above all, taking into account the fact that results achieved in controlled experiments almost always exceed those achieved in a real-world environment. The scientists involved in the project stated that they are confident they can improve the success rate to 85 percent. But is that enough? In 2015, more than 520,000 people received a Schengen visa, so these people very probably crossed an EU border. For this category of travellers alone, the iBorderCtrl system would, at an 85 percent success rate, have misclassified 80,000 people. […]
In both cases an automated decision is made: either to permit entry, or to question further. […]
If a traveller is questioned further, additional data about that person is automatically collected and the risk posed by the person is recalculated. This data includes biometric data such as fingerprints, facial features or palm-vein scans. Under the GDPR, biometric data is to be regarded as special-category data. The European Union Agency for Fundamental Rights has presented a report on data collection in the EU and the interoperability of EU information systems. Privacy advocates have already voiced their concerns about an EU-wide collection and processing of biometric data.
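The misclassification arithmetic above can be checked in a few lines (a minimal sketch; the traveller count and accuracy figures are the ones quoted in the excerpt, and the article rounds 78,000 up to 80,000):

```python
# Expected number of misclassified travellers at a given accuracy rate,
# using the figures quoted in the excerpt above.
def misclassified(travellers: int, accuracy: float) -> int:
    return round(travellers * (1 - accuracy))

print(misclassified(520_000, 0.85))  # 78000, which the article rounds to 80,000
print(misclassified(520_000, 0.76))  # 124800 at the initially reported 76% rate
```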
12 Nov 2018 - Inc. (byline Chris Matyszczyk)
Please imagine that you might land in, say, Budapest and be met by a body-less border guard.
Yes, an avatar that will ask you piercing questions about your travel.
Sample from Hungary: Do you agree with our great leader Viktor Orban that migrants are 'poison'?
No, I have no actual information that this will be one of the questions.
Indeed, Keeley Crockett, a computational intelligence academic who's involved in this fine scheme, told CNN:
It will ask the person to confirm their name, age and date of birth, (and) it will ask them things like what the purpose of their trip is and who is funding the trip.
But of course.
13 Nov 2018 - Jane's Airport 360 (byline Kylie Bull)
[…] Industry participants in the 14-member iBorderCtrl team include iTTi (Poland), the aerospace and defence division of NTT Data subsidiary Everis (Spain), BioSec Group (Hungary), and JAS Technologie (Poland).
13 Nov 2018 - Business Traveller (byline Robert Curley)
Some security experts expressed doubts about the system. “Traditional lie detectors have a troubling history of incriminating innocent people,” said Frederike Kaltheuner, data program lead at Privacy International. “There is no evidence that AI is going to fix that.”
14 Nov 2018 - Lonely Planet (byline Andrea Smith)
A new EU-funded project designed to ramp up security will put travellers from outside the European Union to the test by using lie-detecting technology. Countries participating in the project include Luxembourg, Greece, Cyprus, Poland, Spain, Hungary, Germany, Latvia and the UK.
14 Nov 2018 - L'ADN
For one year, Europe will test an artificial customs officer capable of detecting lies. For now the system requires travellers' consent, but that consent could become compulsory in future. Did someone say dystopia?
Smile, you're on file! From December 2018, the European Union will put in place a new screening system at the entrance to its territory: iBorderCTRL, a bot intended to lighten customs officers' workload. Tested in Hungary, Latvia and Greece, the system is notably equipped with an artificial intelligence capable of detecting lies. Its development cost more than 4 million euros. With it, the EU hopes to better manage the 700 million entries into its territory, but also to regain control over illegal immigration and to prevent crime and terrorism.
Concretely, how does it work? Travellers or candidates for immigration who are not European citizens can use the bot set up for the purpose from a computer equipped with a webcam. It asks for digital copies of identity papers and passports, then begins an interview in the traveller's language. The questions asked are the same as those of an ordinary customs officer: purpose and duration of the trip, place of residence, and so on. At the same time, an AI analyses the candidate's face and voice in order to capture biometric data and a vocal spectrum.
Detecting lies… with a margin of error. The facial-recognition algorithm also examines the face's micro-expressions in detail in order to predict whether the person is lying. To do so, the AI detects unconscious tics supposed to betray the stress of concealment. "We are not really looking for smiles or frowns," explains Dr Keeley Crockett of Manchester Metropolitan University, who is responsible for the technology. "We are trying to capture micro-movements triggered by the questions, such as eyes darting quickly from left to right."
Finally, iBorderCTRL carries another device every bit as intrusive as its lie detector. Starting from the individual's first and last name, it combs through all of their social-media accounts to check that they are not connected to a person wanted by the courts. The machine does not, however, decide on its own who may enter the European Union. Depending on the score obtained during the interview, travellers are placed in longer or shorter queues at the border crossing. If the machine judges that you did not lie, your queue will move faster. If, on the other hand, you fall into the category of "potential liars", you will be treated not only to a further, more thorough interrogation, but also to fingerprinting and a scan of the veins of your hand.
The system obviously raises many technical and ethical questions. The first concerns the reliability of the lie detector. According to the researchers who designed it, the reliability rate is estimated at only 75%. But they hope it can climb to 85% by the end of the trial, which is expected to last a year. With such a large margin of error, it is no surprise that going through iBorderCTRL is voluntary and subject to the traveller's consent.
16 Nov 2018 - World Economic Forum
A pilot project with Spain's participation
The European Union's objective is to speed up border checks, mainly in a situation where "security at checkpoints is growing rapidly, above all since the terrorist threats and the migration crisis", explains George Boultadakis, the project coordinator. By 2020, the 13 companies financing this project expect the European security market to reach a value of 128 million euros.
The boom in border security is attracting major companies looking to modernise checks. iBorderCtrl is one of those chosen for the Horizon 2020 project launched by the European Union.
The first tests were carried out in the laboratory, followed by locations with conditions similar to those at the border. Finally, over the past six months the pilot project has been run in Hungary, Latvia and Greece, a trial that will run until August 2019 and has the support of up to 13 countries, including Spain.
16 Nov 2018 - AI Hub Europe
[…] Border officials will use a hand-held device to automatically cross-check information, comparing the facial images captured during the pre-screening stage to passports and photos taken on previous border crossings. After the traveller’s documents have been reassessed, and fingerprinting, palm vein scanning and face matching have been carried out, the potential risk posed by the traveller will be recalculated. Only then does a border guard take over from the automated system.
At the start of the IBORDERCTRL project, researchers spent a lot of time learning about border crossings from border officials themselves, through interviews, workshops, site surveys, and by watching them at work.
16 Nov 2018 - Wired UK (byline Sanjana Varghese)
Refugees claiming asylum in Europe are subject to an extensive and intense bureaucracy. Starting next year, it may become even more difficult, as several EU member states are trialling a technology that claims to analyse travellers’ expressions for indications that they may be lying.
[…]
However, while this use of technology could be misplaced, the tools themselves are useful in a wider sense – for example, these kinds of neural networks are used effectively, often for life-saving purposes, in the detection of tumors. “In both cases, there is an assumption that a machine can make visible something that a human cannot see,” says Amoore. “But in one of the instances, the science is used in a way that is unethical.”
21 Nov 2018 - EDRi (byline Homo Digitalis)
Homo Digitalis is alarmed about the introduction of such artificial intelligence (AI) systems to different aspects of our lives, even on a voluntary basis. In the European Union, people enjoy a high level of human rights protection based on the provisions of the EU Treaties and the Charter of Fundamental Rights of the European Union. It is unlikely that an AI system could reliably and without errors detect deception based on facial expressions. In addition, if the technical reports and the legal/ethical evaluations that accompany this system are kept confidential, there is even more reason for doubt.
The petition filed by Homo Digitalis underlines the lack of transparency in the implementation of the technology and expresses mistrust regarding the true capabilities of the AI system used in the context of the iBorderCtrl project. In addition, the petition stresses that there is a high risk of discrimination against natural persons on the basis of special categories of personal data. Therefore, the petition demands that the Minister in charge state whether a Data Protection Impact Assessment and a consultation with the Greek Data Protection Authority (DPA) took place prior to the implementation of this pilot system at the Greek borders. It also requests clarification on why the technical reports and the legal and ethical evaluations accompanying the project are being kept confidential, even though iBorderCtrl is not an authorised law enforcement system.
21 Nov 2018 - Qubit (byline Rácz Johanna)
A FACIAL-RECOGNITION BORDER CONTROL SYSTEM STRAIGHT OUT OF BAD SCI-FI IS BEING TESTED AT THE HUNGARIAN FENCE
Imagine that before you travel to another corner of the world, you register with a digital border-control system: you photograph yourself with your phone's or computer's camera, upload your scanned documents in advance, and answer detailed questions, first on a web form and then in a webcam interview with an animated border guard. Do you then have anything to fear at the border? Maybe yes. If you lied.
22 Nov 2018 - Boing Boing (byline Seamus Bellamy)
If you're planning on traveling to the European Union in the near future, you'd best grease up as a new border security project is planning on sliding into your background, personal story and biometrics before you have a chance to step off of your plane. […]
As part of the project which was seemingly named by someone who's watched Hackers at least 90 times, iBorderCtrl will consist of two parts. The first is a creepy online component that visitors to countries enrolled in the program will have to endure before they leave home. Speaking to a virtual border guard, they'll be asked about their gender, ethnicity and to upload a photo of their passport in order to sort out their visa. The program will also inform travelers of their rights while they're in the EU. Did I mention that while all this is going on, the program will be using your computer's camera to monitor your micro expressions? Ah, good times. Anyway, if the program thinks that you're full of shit, you'll have more fun than most people when you arrive in the EU.
The second part of the iBorderCtrl process happens once you arrive at your destination. There, a border guard, presumably wearing a Babadook costume, will continue the privacy freak-out and pick up where your computer's probing left off. If the program detected what it believed to be a lie during the online application process, you'll face deeper interrogation than other passengers are subjected to. Using a PDA, the Babadook will cross-examine you on the answers you gave while you were still at home. Your face will be compared to the passport photos you sent along and, provided everything checks out, you'll be allowed into Europe… just in time for you to catch your return flight home.
I get that with so many people crossing into Europe on a daily basis, something has to be done to streamline the entry process and to increase security for EU citizens. But this all seems like a bit much and, in the end, hasn't been proven to keep the baddies from mingling with the EU's goodies. Time will tell whether or not the project is a success or just another way to erode more of the few rights, freedoms and the wee bit of privacy that some of us still possess.
23 Nov 2018 - El Pais (Retina section)
Although its creators insist that reliability is fairly high, it is still not enough. As Keeley Crockett, a member of the iBorderCtrl team, explained in remarks picked up by Gizmodo, the system reaches a reliability of 85%, which could produce a large number of false positives. To solve this problem, the artificial intelligence needs to keep learning from a large amount of data collected from border security officers, from their way of conducting interviews and from surveys.
The boom in border security is attracting major companies looking to modernise checks, as Xataka reports. iBorderCtrl is one of those chosen for the Horizon 2020 project launched by the European Union. "By then, the 13 companies financing this project expect the European security market to reach a value of 128 million euros." For now, the system is being field-tested in Hungary, Latvia and Greece, where the first tests are already under way; the countries participating in the project also include Poland, Germany and Spain.
26 Nov 2018 - epixeiro (byline Christos Kotsakas)
Giorgos Boulladakis, coordinator of iBorderCtrl and Senior Research Consultant of European Dynamics Luxembourg, speaks about the scheme and how it operates, responding to the criticism that the program has received. […]
There is a critique of such systems that characterizes them as “pseudoscientific”, and it is even said that they can lead to unfair results. What is your position on this issue?
This critique is about the lie detection system, which is part of the overall solution proposed by iBorderCtrl and based on a previous system developed before the iBorderCtrl program, called the “Silent Talker”.
For the development of the lie detection system, partners from Manchester Metropolitan University have taken steps to ensure that the system delivers credible results and will not “offend” a traveler. More specifically, a scientific finding is reached when scientists start from a hypothesis and follow the scientific method to determine whether or not it is true. Here, the hypothesis examined by the Manchester Metropolitan University scientists was: “Are there indications in a person's non-verbal behaviour that relate to whether the person is lying or not?” They then performed experiments to collect data that would support or refute this hypothesis.
28 Nov 2018 - ERC-funded project ‘Data Justice: Understanding datafication in relation to social justice’ (working paper by Javier Sánchez-Monedero)
This report provides an overview of the datafication of borders and the management of refugees within the context of the EU. It analyses different reports, papers and systems that are part of the data processes confronted by refugees and asylum seekers. The report is focused on existing systems used by the EU and the UNHCR, but also draws on further studies on the use of Big Data in the context of refugees.
The announcement of the ongoing tests of the ADDS in the EU has caused a public discussion in the media. The Guardian collected the opinions of several experts in forensic psychology and criminology who criticised the system as ‘pseudoscience’ and questioned the validity of facial micro-expressions as a measure of deception (Boffey, 2018). The New Scientist likewise reported that ’several independent experts contacted by New Scientist expressed strong reservations about the idea, questioning the accuracy of automated lie-detection systems in general’ (Heaven, 2018).
For this report we examined the scientific papers describing Silent Talker (Rothwell et al., 2006) and the ADDS (O'Shea et al., 2018). Apart from the claims of the experts in psychology and criminology, we found several concerns regarding the experimental setup and the quality of the machine learning models. First, the general setup of the experiments is questionable. It is difficult, if not impossible, to design an experiment to evaluate deception behaviour. In this case, the authors of ADDS asked some colleagues to perform different roles and scenarios, and their non-verbal activity was recorded to build a training dataset based exclusively on people performing a role. Second, the authors of the study claim that their system is sensitive to ethnic diversity; however, the system has been trained on the micro-expressions of only 32 persons. Third, the experimental validation consisted of a 10-fold cross-validation with a mean accuracy of 75.5% for truthful detection and 73.6% for deception detection, that is, the mean performance of ten runs over different train-test data folds. However, the variability of the prediction accuracy is not considered in the report. From the tables in the paper, we can calculate the standard error (24.3% and 34.3% for truthful and deception respectively), which suggests that the mean performance is not a robust statistical estimator of the actual performance. Last, the stratification of the data split into train and test sets is problematic. The experimental validation divides all the vectors extracted from all the participants between train and test sets, which means that data from one person can be present in both the training and the testing set. Therefore, data used for model fitting is also used for model validation.
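The train/test leakage described in this critique can be illustrated with a small sketch (illustrative only, not the paper's code; the 32 participants match the figure quoted above, while the 40 feature vectors per participant are an assumed round number):

```python
import random

# Why splitting at the sample level leaks information when each participant
# contributes many feature vectors: tag every vector with its participant,
# then compare a naive random split with a split by participant.
random.seed(0)
samples = [(pid, f"vec{j}") for pid in range(32) for j in range(40)]

# Naive sample-level split: shuffle all vectors, hold out 20% for testing.
shuffled = samples[:]
random.shuffle(shuffled)
cut = int(0.8 * len(shuffled))
train, test = shuffled[:cut], shuffled[cut:]
overlap = {p for p, _ in train} & {p for p, _ in test}
print(len(overlap))  # nearly all 32 participants appear in BOTH sets

# Participant-level split: hold out whole participants instead.
held_out = set(random.sample(range(32), 6))
train2 = [s for s in samples if s[0] not in held_out]
test2 = [s for s in samples if s[0] in held_out]
print({p for p, _ in train2} & {p for p, _ in test2})  # set() — no leakage
```

With the naive split, the model is evaluated on faces it has already seen during training, which inflates the reported accuracy; holding out whole participants avoids this.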
28 Nov 2018 - blog University of Oxford, faculty of law (by Samuel Singler)
An EU-funded border control pilot project, iBorderCtrl, to be trialed in Greece, Hungary, and Latvia, recently made headlines, attracting both praise and criticism. The project involves the deployment of Artificial Intelligence-based, computer-animated border agents to conduct lie-detector tests at the external borders of the EU. Despite the novelty of passengers directly interacting with virtual border agents, iBorderCtrl is in fact representative of a broader trend of technologizing border controls through the deployment of automated security technologies. The deployment of these systems to collect and analyze vast quantities of personal data is not simply envisaged as a tool for border management, but also as a key component of contemporary transnational surveillance and security practices carried out within and beyond those borders.
[…]
As recognized even by the European Commission itself, technologies first deployed at borders or in exceptional circumstances are likely to become normalized among the general population as well. This point was touched upon by a senior EU official, who expressed concern at the possibility of large-scale surveillance infrastructures diffusing inwards from the border to monitor and control EU citizens as well. Despite personally viewing this outcome as undesirable, and arguing it is still politically unpalatable, the official thought that eventual diffusion and ‘function creep’ were inevitably built into the operational logics of these technologies: ‘I personally think that we should not go there. I think for me that’s one step too many. […] It’s extremely sensitive, of course, but I think it will come. I think it will come.'
29 Nov 2018 - Magyar Narancs Online (byline Szedlák Ádám)
Protest against iBorderCtrl has begun, and once again it is being said of the polygraph that it does not work. Filtering out lies is not good business.
01 Dec 2018 - Transhumanisme et Intelligence Artificielle (byline Sandrine Aumercier)
We are used to talking about China as the champion of facial-recognition cameras. Since 2016, the European Commission has been developing a so-called “smart control” technology that will be tested for nine months, until August 2019, at three borders of the European Union (Greece, Hungary, Latvia) on a voluntary basis. It is financed by the European research and innovation programme Horizon 2020 to the tune of 4.5 million euros. As no national or European law currently authorises such a system, volunteers in this trial must sign an informed-consent form. The project, “aware of the ethical and legal dimensions”, works in “close proximity with an ethics advisor”.
02 Dec 2018 - Yapay Zeka ve Robotik (byline Sule Guner)
In the trials carried out so far, the system has been accurate 76 percent of the time. The officials developing iBorderCtrl aim to raise this rate to 85 percent. EU officials intend to use the project to make border controls more effective, but the fact that facial-recognition technologies do not give very precise results for people from different geographies is the system's biggest challenge. For example, algorithms tested mostly on white people may be less successful at detecting the faces of people of other races and skin colours.
03 Dec 2018 - Lawfare (byline Natalie Salmanowitz)
Over the next five months, travelers crossing external borders in Hungary, Latvia and Greece will have the opportunity to participate in the European Union’s latest effort to increase the security, efficiency and efficacy of its border checkpoints. The new system, “iBorderCtrl,” involves a voluntary two-step procedure. First, travelers register online, where an animated border agent asks a series of questions. As the traveler answers, an automated deception detection system measures “micro-gestures” on the traveler’s face and generates a risk assessment score.
04 Dec 2018 - Press release by the Allgemeiner Studierendenausschuss (AStA), Uni Hannover
Marie Forster, public-relations officer at the AStA of the University of Hannover, spells this out: "Scientists worldwide have already identified various gaps. For example, the very fundamental question of the scientific validity of lie detectors. Especially in combination with stress among those questioned, supposed indicators of truthfulness shift, and it remains unclear whether, for example, a twitch really betrays a lie. Moreover, the software must be strongly criticised from the perspective of personality rights, since it remains unclear whether the recordings are stored and by how many people they are examined. In addition, the criteria for classifying a person as 'dangerous' are not transparent."
This raises the further question of how people classified as "dangerous" are dealt with: will officials be able to remain unbiased and objective in further examinations despite the artificial intelligence's diagnosis?
Tjard Bornefeld, officer for higher-education policy networking, adds further points of criticism: "Such projects promote the expansion of constant surveillance, for example through the new police laws or 'predictive policing' programmes. In the latter, an artificial intelligence supposedly identifies dangerous areas or even persons. In all these cases the persecution of people increases, with racism playing an ever greater role and real accusations an ever smaller one. A new normality of reduced freedom and rights emerges. We see such projects as a declaration of war on people without an EU passport, because a general suspicion is created. Apparently the fight against people fleeing their homes is to be taken to yet another level."
Against this background we view and criticise a project like "iBorderCtrl". A critical engagement with "ethical and legal questions" should come to the conclusion that this project must be prevented, rather than legitimised. We call on the participating Institute for Legal Informatics of the LUH to stop supporting the project.
05 Dec 2018 - taz (byline Andrea Maestro)
"Leibniz Universität Hannover is helping the EU seal itself off" is the accusation that, printed on posters, now hangs all over the high-rise on the Conti campus in Hannover. On Tuesday, the university's General Students' Committee (AStA), supported by other groups such as the Flüchtlingsrat Niedersachsen, Campus Grün, Solinet and the Friedensbüro Hannover, handed over an open letter criticising the university for taking part in an EU project for more efficient border controls.
05 Dec 2018 - Remo Contro (byline Alessandro Fioroni)
The news was reported in early November by the specialist magazine New Scientist, but one only had to consult the European Union's website to understand that the old continent is using artificial intelligence to seal its borders. This is the meaning of the 'iBorderCtrl' project, a sort of virtual policeman, or an 'electronic Salvini' if you want to put it in political terms, that will screen those who intend to cross Europe's borders. This is not science fiction, because the trial is in its operational phase, and the first countries that will very shortly test its effectiveness are Greece, Latvia and Hungary. In August 2019 the results will be tallied and it will become clear what really happened.
05 Dec 2018 - Tiroler Tageszeitung (byline Philipp Schwartze)
Criticism comes from data-protection advocates and IT specialists. "So far too little is known about what happens to the data collected in the process," says Ines Janusch, managing director of the Viennese company Data Traction, which advises on data protection and artificial intelligence.
Many travellers are by now used to giving fingerprints, and to modern passports that store biometric data on a chip. But "iBorderCtrl" collects more.
"A palm-vein scan is added, and there are plans to link in social-media profiles in order to learn more about the person," says Janusch. It is the double-edged sword of security versus data protection: the digital border guard could once again help make citizens fully transparent.
The technical side is no less worrying. "In these machines we are now using algorithms that we as humans can no longer fully comprehend, because in part it is mathematically no longer possible," Janusch warns, pointing to the increasingly inscrutable logic of artificial intelligence.
06 Dec 2018 - Watson (byline Daniel Schurter)
What about data protection?
Big Brother says hello.
This is about amassing a huge trove of data: the biometric data of hundreds of millions, even billions of people. These are highly sensitive pieces of information with which every individual can be identified.
Whether the responsible agencies can process and store the resulting volumes of data securely remains an open question. What is certain is that there is no such thing as one-hundred-percent security.
What do we learn from this?
A lot of water will flow down the Rhine before lie detectors deliver what their name promises.
Until then, we should treat dubious research projects such as iBorderCtrl with healthy distrust.
17 Dec 2018 - Trends der Zukunft
In recent years, technical progress has already brought some improvements to border controls; biometric data and fingerprints, for instance, are now frequently checked. In future, the European Union also wants to use lie detectors to screen people entering the EU.[…]
What is more, the General Data Protection Regulation requires that, in the case of an automated individual decision, those affected must be informed about the logic behind it. With a self-learning AI this is anything but simple: the research field of explainable artificial intelligence is still in its infancy.
27 Dec 2018 - NRC Handelsblad (byline Marc Hijink)
* How does the computer see whether you are lying? (English)
They do not look very happy, George Boultadakis, Athos Antoniades and James O'Shea, standing at the high table of their improvised trade-fair stand. It is a Wednesday morning in December, and at the Kunstberg in Brussels the European research fund Horizon 2020 is presenting its "security success stories" for the first time.
Deception detection is only one element of their research, Boultadakis and Antoniades stress. One works at the software company EuroDynamics, the other specialises in data for medical applications. "Only the lie detection gets reported on, not the other components," they complain. And also: this technology is more mature than is being suggested.
Another ingredient of iBorderCtrl, at least as controversial as lie detection, was the analysis of travellers' social-media behaviour, supposed to give a good picture of someone's possible connections with criminals. This component has since been dropped. It is too sensitive, especially under the tightened European privacy rules.
A pity, by the way, that the media focus only on the "sexy" lie detection, Boultadakis and Antoniades sigh. "It may well be that lie detection does not make it into the final design," Antoniades explains once O'Shea has briefly left the stand. "When we conceived this project, we were looking for weird new technology to make the whole thing a bit sexier. I was the one who came up with Silent Talker."
It looks as if iBorderCtrl already regrets that choice.
2019
13 Jan 2019 - Mobility Mag (byline Marinela Potor)
Opinion: iBorderCtrl is dubious, flawed and belongs in the bin!
[…]
So this is what we can establish at the current stage of the project:
1. The method iBorderCtrl uses for lie detection has no solid scientific basis.
2. The data basis so far is far too small to allow any sound statements about the hit rate.
3. The algorithm is trained on fake lies.
4. There is no control mechanism.
Can someone then please explain to me why this pseudo-scientific pilot project has not long since been scrapped and ended up on the rubbish heap? And please not in the recycling, but in the hazardous waste!
14 Jan 2019 - Phys.org (byline CORDIS)
Traffic across the EU's external borders is on the rise, as is the threat posed by illegal immigration. With over 700 million people entering the EU each year, considerable pressure falls on border agencies that must adhere to strict security protocols while ensuring the smooth flow of traffic into the EU. Increased international trade and more sophisticated criminal activity make border checks even more challenging. Authorities therefore need to provide a fast and efficient border clearance process while also safeguarding the safety and security of checkpoints.
21 Jan 2019 - Frankfurter Allgemeine Zeitung (byline Constanze Kurz)
Establishing a new normality of automated surveillance
The iBorderCtrl programme exemplifies a worrying trend in Europe. Instead of investing in adequate staffing for the border authorities, large sums are being sunk into highly questionable, invasive technology. People's privacy is being undermined, more or less automatically and with no regard for the consequences. The EU's massive funding for such "security technologies" has opened a bonanza for all manner of dubious and in some cases dangerous projects. The trend towards comprehensive "intelligent" surveillance, often sold with arguments such as "user-friendliness" or "faster processing", has reached a provisional high point with the iBorderCtrl lie detector.
It is time such projects were put to the test and themselves subjected to sufficiently transparent scrutiny. For once they have been developed, piloted and paid for, the step to blanket deployment is a small one. That threat comes not only from expensive EU projects like iBorderCtrl: establishing a new normality of automated surveillance, with networked facial-recognition cameras at every railway station, is also a declared goal of German interior-policy politicians.
While the media here have in recent months reported, with justified revulsion, on the growing technologised surveillance in China, the parallel developments in the EU receive too little attention. It is high time, at any rate, to take a closer look at the rampant spread of surveillance technologies on our own doorstep and to ask critical questions that go beyond whether these systems work or make sense. The decision about what kind of society we want to live in must not be left solely to the salesmen, the technicians and the bureaucrats.
04 Feb 2019 - Blick.ch (byline Cornelia Eisenach)
The EU is paying for the development of an AI lie detector for use at the border. Yet the system lacks any scientific basis and is technically half-baked.