How the EU Wants to Protect Its Borders with Artificial Intelligence

The EU Commission has invested enormous sums of money in research on artificial intelligence over the past ten years — more than 3 billion euros in total. Among other things, Smart Borders are intended to reduce deaths in the Mediterranean and automate controls at the EU's external borders. AlgorithmWatch and ZDF Magazin Royale have investigated what some of these EU-funded projects are all about.

The Mediterranean is considered the deadliest border region in the world. Around 29,000 people have disappeared or lost their lives in the Mediterranean while trying to reach Europe over the past ten years.

At the same time, since 2015, calls for stronger EU external border protection have become louder. For example, a description of an EU-funded research project states that "In the last years irregular migration has dramatically increased, and is no longer manageable with existing systems."

One of the European Commission’s responses to these challenges is artificial intelligence. For years, the Commission has been conducting research into how border surveillance can become increasingly automated using drones, satellites, and other digital systems.

The EU invests billions in AI research

The research projects are funded with sums in the millions through EU programs like Horizon 2020 and Horizon Europe. The European Commission publishes some general information about the funded research projects. For example, a Commission website states that the research project ODYSSEUS aims to find out how using AI could help "reduc[e] the workload" of border officers and "improv[e] [their] productivity".

The EU project METICOS aimed to promote "social acceptance" of technologies designed to detect "abnormal behaviours” in humans.

Researchers have been observing a "securitization" of the migration debate for years. This means that societal issues such as forced migration and asylum are declared a security problem for all of us. The idea behind this is that if refugees are no longer portrayed as people seeking protection but as a security risk, citizens are more likely to accept harsh surveillance measures against them.

Migration is also treated as a security issue in EU projects. Many EU research projects on migration are listed in the research category "Secure Societies".

“Pre-frontier” border protection

The EU is not only focusing on protecting the external borders in the Mediterranean region but has been expanding border protection activities to a so-called "pre-frontier" area in recent years. According to EU regulation, this refers to "the geographical area beyond the external borders which is relevant for managing the external borders through risk analysis and situational awareness". Projects like NESTOR explicitly focus on expanding surveillance in this "pre-frontier" area using AI-supported applications. Considering the areas where the EU border agency Frontex now operates, the EU's "pre-frontier" seems to be broadly interpreted in practice: it extends across the Western Balkans and the southern Caucasus, to North Africa, and into the Sahel region.

EU countries like Germany are attempting to increase the number of deportations by signing migration agreements with countries in these regions, even against the local population's interests. In Tunisia, for example, we see how technology, some of it from the arsenal of the Federal Police, ends up with local border control authorities that disregard human rights.

Predicting migration in order to prevent it?

Several EU-funded projects are concerned with forecasting migration movements. One such system is the EuMigratTool. It provides "monthly predictions of asylum applications in the EU". Its developers claim that it is able to identify the potential risks of tensions between migrants and EU citizens.

The research project's own users board warned against misuse in a statement: the system could lead to the "closing of borders" and "instigating violence". The data on asylum applications could also be used to "gain support and consensus for an anti-migration policy." Nevertheless, the system was developed.

Another project used trends in Google searches to predict when and how migration movements from Romania to the United Kingdom might occur.
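
How such a forecast works can be illustrated with a small, purely hypothetical sketch. Neither the project's model nor its data are public, so the example below simply fits a linear regression of invented monthly arrival figures on invented search-interest values and extrapolates one month ahead; all numbers and variable names are made up for illustration.

```python
# Hypothetical sketch of search-trend-based migration forecasting.
# All values are synthetic; this is not the EU project's actual model or data.
import numpy as np

# Invented monthly series: search interest (0-100) and observed arrivals.
search_interest   = np.array([20, 25, 30, 45, 50, 65, 70, 80, 85, 90], dtype=float)
observed_arrivals = np.array([110, 130, 150, 210, 240, 300, 330, 380, 400, 430], dtype=float)

# Fit a simple linear model: arrivals ~ a * search_interest + b.
a, b = np.polyfit(search_interest, observed_arrivals, deg=1)

# "Forecast" next month's arrivals from the latest search-interest reading.
next_month_interest = 95.0
print(f"Predicted arrivals next month: {a * next_month_interest + b:.0f}")
```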

The EU-funded HumMingBird project even analyses metadata from phone calls to determine where calls are made and where migration movements might be occurring.
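
The HumMingBird pipeline itself is not public. Purely as an illustration of the underlying idea, the sketch below assumes anonymized call records that carry a month and a coarse region, counts calls per region and month, and flags regions where call volume jumps, which is the kind of shift such metadata analysis is meant to surface. The records, regions, and threshold are invented.

```python
# Hypothetical sketch of call-metadata aggregation; not HumMingBird's actual
# pipeline. Records, regions, and the doubling threshold are all invented.
from collections import Counter

# Each record: (year-month, coarse region where the call was routed).
call_records = [
    ("2023-01", "Region A"), ("2023-01", "Region A"), ("2023-01", "Region B"),
    ("2023-02", "Region A"), ("2023-02", "Region B"), ("2023-02", "Region B"),
    ("2023-02", "Region B"),
]

# Count calls per (month, region).
volume = Counter(call_records)

# Flag regions whose call volume at least doubles from one month to the next.
months = sorted({month for month, _ in call_records})
regions = sorted({region for _, region in call_records})
for region in regions:
    for prev, curr in zip(months, months[1:]):
        before, after = volume[(prev, region)], volume[(curr, region)]
        if after >= 2 * max(before, 1):
            print(f"{region}: calls rose from {before} to {after} ({prev} -> {curr})")
```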

Much remains a secret

The general project descriptions are published on the European Commission’s website. However, many documents — such as the project’s ethics reports — often remain unpublished. This makes it difficult to assess what threat a project may pose to fundamental rights or what exactly the researchers are planning. A democratic debate on the use of Artificial Intelligence at the EU's external borders is therefore not possible.

That's why investigators from AlgorithmWatch and ZDF Magazin Royale have requested unpublished documents on EU-funded AI research projects from the responsible European Research Executive Agency (REA). The REA is part of the European Commission and funds "high-quality research and innovation projects that generate knowledge for the benefit of society" on its behalf.

As an EU agency, the REA is legally obligated to grant the public access to documents from EU institutions. Therefore, AlgorithmWatch and ZDF Magazin Royale have requested access to the documents from the REA based on European laws granting public access to information (EU Regulations 1049/2001 and 1367/2006).

These EU regulations enable all EU citizens to access documents of the European Parliament, the Council, and the Commission. Article 4 of Regulation 1049/2001 lists exceptions under which the EU institutions may refuse to disclose documents. These include the protection of public security, data protection, and commercial interests.

Transparency of the European Commission: 169 of 177 pages redacted

However, the REA severely limited access to the documents. For example, for the research project NESTOR, we were instructed to select a maximum of ten documents out of a total of 88 due to the high amount of work involved. Some of these documents were in turn heavily redacted by the REA. One annex to the NESTOR grant agreement contained around 169 redacted pages.

When asked, the European Commission justified the large-scale redaction as follows: “While public interests are protected through the ethics processes in place, disclosure of non-public documents would make them accessible to competitors, within and outside the EU.”

In other words, the public is kept out on the grounds that internal ethics procedures already oversee the projects anyway.

Ethical oversight is also inadequate

The European Commission can require the involvement of Ethics Advisors in research projects that raise complex and significant ethical questions. These Ethics Advisors are required to be independent. As the Commission states in a list of dos and don'ts: "[Don't] Ask your best friend for a favor or recruit your EAB [Ethics Advisory Board] from the project partners or their home institutions."

The independent Ethics Advisor of iBorderCtrl was not independent.

In one instance, the REA apparently forgot to redact the name of the iBorderCtrl Ethics Advisor in the released documents. It was only due to this oversight that we were able to ascertain that the independent Ethics Advisor of iBorderCtrl was not independent.

ZDF Magazin Royale and AlgorithmWatch know the name of the Ethics Advisor. Since the advisor is not a person of public interest, the editorial team chose not to disclose the name. However, the question of how EU-funded projects are ethically supervised is indeed a matter of public interest.

One of the institutes responsible for developing iBorderCtrl was the Leibniz Universität Hannover. The university's "Institute for Legal Informatics" dealt with the project's legal and ethical questions. An independent Ethics Advisor was supposed to oversee the research contribution of Leibniz Universität Hannover.

According to research conducted by ZDF Magazin Royale and AlgorithmWatch, the independent Ethics Advisor of iBorderCtrl also worked as an assistant professor at Leibniz Universität Hannover at the same time — even at the “Institute for Legal Informatics”.

He taught a course there in the winter semester of 2015/2016, as well as in the winter semesters of 2016/2017, 2017/2018, and 2018/2019. Additionally, a document dated November 30, 2016 shows that the assistant professor at Leibniz Universität Hannover announced his commitment as an "independent external Ethics Advisor" for the research project.

The fact that the independent Ethics Advisor simultaneously worked for one of the project partners likely contradicts the EU Commission's requirements, which state: "Independence and freedom from any conflict of interests are requirements for the participation in these EABs (ethical advisory boards)."

In response to inquiries from ZDF Magazin Royale, the independent Ethics Advisor asserted that he was aware of this requirement: "Yes, this requirement was and is known to me, and it was fulfilled at all times with regard to my work as an 'Ethical Advisor.'"

Leibniz Universität Hannover likewise sees no issue with its assistant professor having served as an Ethics Advisor for iBorderCtrl: "No, an economic dependency is firmly rejected by LUH in terms of the number of hours and remuneration for the teaching assignments."

While Leibniz Universität Hannover was involved in the selection of the independent Ethics Advisor, it emphasized to ZDF Magazin Royale that it was "not the project leader." According to the university, this responsibility lay with the software company European Dynamics Luxembourg SA.

We asked European Dynamics to comment on who appointed iBorderCtrl’s independent Ethics Advisor. European Dynamics did not respond to ZDF Magazin Royale's press inquiry by the editorial deadline.

The European Commission, which covered 100 percent of the iBorderCtrl project costs, stated in response to ZDF Magazin Royale's inquiry that it would not comment on the independent Ethics Advisor of iBorderCtrl for data protection reasons.

tl;dr: iBorderCtrl raises challenging ethical questions. The requirements for the appointment of the independent Ethics Advisor were apparently not met. Leibniz Universität Hannover, responsible for ethical matters, says they were not responsible for appointing the Ethics Advisor. The allegedly responsible project coordinator European Dynamics did not respond to ZDF Magazin Royale's inquiries. And the European Commission will not comment on the matter due to data protection reasons.

These are the AI projects we investigated

  • Full title: An enhanced pre-frontier intelligence picture to Safeguard The EurOpean boRders
  • Project period: November 1, 2021 to April 30, 2023
  • EU funding: €4,999,578.13
  • Funded under: EU Horizon 2020: Secure societies – Protecting freedom and security of Europe and its citizens
  • Objective of the project: A comprehensive border surveillance system that will provide “pre-frontier situational awareness” beyond sea and land borders. Border control personnel will be equipped with smart glasses to monitor the territory in question.
  • Criticism of the project: The EU Commission has been silent about the actual risks this project poses and has redacted large portions of documents. The documents do acknowledge that the NESTOR research could be used for “crime or terrorism,” to “curtail human rights and civil liberties,” or “misused to stigmatise, discriminate against, harass or intimidate people.” However, information about these risks has largely been redacted. When asked, the EU Commission justified the redactions mainly on the grounds of the research team’s “commercial interests,” claiming that the requested documents could provide competitors with an opportunity to anticipate “the strategies and weaknesses of the partners of the consortia” or allow them “to copy or use” their intellectual property.

  • Full title: autonomous swarm of heterogeneous RObots for BORDER surveillance
  • Project period: May 1, 2017 to August 31, 2021
  • EU funding: €7,999,315.82
  • Funded under: EU Horizon 2020: Secure societies – Protecting freedom and security of Europe and its citizens
  • Objective of the project: ROBORDER aims to develop a border control system using unmanned robots – for "aerial, water surface, underwater, and ground vehicles." The robots will "operate both independently and in swarms" to automatically identify "suspicious persons" and thus prevent criminal activities, among other things.
  • Criticism of the project: As quoted in "The Intercept," robotics professor Noel Sharkey raised the concern that systems like ROBORDER would be easy to weaponize and warned against using them in "politically-charged border zones."
  • Researchers at AlgorithmWatch and ZDF Magazin Royale also questioned whether the focus of the research project was exclusively civil – a prerequisite for receiving funding from the EU under Horizon 2020. A market analysis of ROBORDER specified “military units” as among the organizations “who will ultimately use or is intended to ultimately use the ROBORDER system”. When asked by AlgorithmWatch and ZDF Magazin Royale, the EU Commission said it did not see any violations of its funding guidelines. It wrote: “The project activities had an exclusive civil application related to land and marine border surveillance. Military organisations are also involved in civil applications in this context (e.g. search and rescue). Therefore, the fact that military units were identified in a market analysis as possible end-users does not mean that the project activities did not have an exclusive focus on civil applications.”
  • Project documents that AlgorithmWatch and ZDF Magazin Royale reviewed also show that the ROBORDER project was presented to upcoming officers of the Hellenic Navy in June 2018. The EU Commission does not view this as a contradiction of the solely civil focus of the project either.
  • The EU-funded ROBORDER research project was completed in 2021. However, according to the Greek Migration Ministry, the current REACTION research project is based on ROBORDER. The Greek research center CERTH, which coordinated ROBORDER, is working on REACTION. When asked, CERTH declined to tell ZDF Magazin Royale and AlgorithmWatch which specific results or elements of ROBORDER were being used in REACTION. REACTION also receives funding from the EU via the Integrated Border Management Fund.

  • Full title: Intelligent Portable Border Control System
  • Project period: September 1, 2016 to August 31, 2019
  • EU funding: €4,501,877.50
  • Funded under: EU Horizon 2020: Secure societies – Protecting freedom and security of Europe and its citizens
  • Objective of the project: The vision of iBorderCtrl was automated border control. The project called for a two-stage border procedure. First, travelers would have to register prior to commencing travel. The actual control would then take place in the second stage at the border.
  • One of the project modules involved a lie detector known as "Silent Talker." "Silent Talker" was designed to detect, analyze and determine the veracity of statements on the basis of "micro-expressions," i.e., small and unconscious facial movements. According to the plan, an animated avatar representing a border control agent would interview travelers during the registration phase and analyze their facial movements. "Silent Talker" would assess this material, assign a risk rating based on the veracity of the responses, and provide the traveler with a QR code that they could access on their mobile phone. During stage 2, when the traveler crosses the border, a human border control agent would access the rating and conduct a risk assessment based on the data provided.
  • Criticism of the project: Detecting lies based on facial expressions? A paper published in April 2024 by psychologists Kristina Suchotzki and Matthias Gamer warned against this. "Outside of books and movies, Pinocchio's nose does not exist," the researchers write, adding that "[t]here are no valid behavioral cues that differentiate robustly between liars and truth-tellers, and no physiological or neural signature has been identified that can unambiguously be attributed to deception." In other words, there is no evidence that physical responses indicate whether someone is lying. "[...] it became evident that increased emotional arousal is neither necessary nor specific for deception, as it may similarly be observed during truth-telling," say Suchotzki and Gamer. The psychologists also commented specifically on iBorderCtrl and criticized the lack of sufficient testing of "Silent Talker" prior to the funding of the research project: "This reflects the high hopes that are put into AI-based security applications yet unfortunately also demonstrates that this often comes at the cost of basic scientific standards." The research documents reveal that the lack of scientific evidence was a secondary concern for the research consortium, which believed that AI would solve the problem itself: "They [sic!] key feature of ST [Silent Talker, editor's note] as a machine learning system is that it does not matter whether particular psychologists are correct about particular NVB [Non-Verbal Behaviour, editor's note] gestures, ST was given a set of candidate features and worked out for itself which interactions over time indicated lying." A purely illustrative sketch of this kind of feature-based classifier follows after this list.
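
To make the quoted claim more concrete: what the documents describe is a generic supervised classifier over counts of non-verbal events, whose output probability is then read as a "risk score". The sketch below is not the Silent Talker system (its features, model, and training data are not public); it trains a logistic regression on synthetic interview features with random "lie" labels, which also illustrates the psychologists' point that, absent a real behavioral signal, such scores carry no information.

```python
# Hypothetical sketch of a feature-based "deception" classifier of the kind the
# iBorderCtrl documents describe. This is NOT the Silent Talker system: its
# features, model, and training data are not public. All data here is synthetic,
# and the labels are random, so the resulting "risk scores" are meaningless.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row: counts of non-verbal events in one interview
# (e.g. blinks, smiles, gaze shifts). Labels: arbitrary 0/1 "lie" flags.
X = rng.poisson(lam=[5.0, 3.0, 4.0], size=(40, 3)).astype(float)
y = rng.integers(0, 2, size=40)

model = LogisticRegression().fit(X, y)

# "Risk score" for a new interview, read as the probability of the "lie" class.
new_interview = np.array([[6.0, 2.0, 5.0]])
risk_score = model.predict_proba(new_interview)[0, 1]
print(f"Risk score: {risk_score:.2f}")
```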

The next lie detector – TRESSPASS

iBorderCtrl is not the only EU research project to incorporate a lie detector. TRESSPASS monitored control measures at border crossings through November 2021 for "threats" like "irregular immigration." One of these control measures, according to publicly available research documents, was the use of a system called "MMCAT" (Multi Modal Communication Analysis Tool). MMCAT uses video to analyze "micro movements pertaining to facial expression, gesture and posture (smiling, blinking, hand movements, leaning etc.)." Border control agents would be able to use MMCAT in interviews to determine "if a traveller is telling the truth."

As with iBorderCtrl, the TRESSPASS research team started with the questionable premise that physical responses are a reliable way to tell whether someone is lying. "Just as there are no convincing empirical findings that indicate that microexpressions could help distinguish between lies and truth, there are no such findings for body postures either. In this regard, I would not expect any increase in reliability," explains legal psychologist Kristina Suchotzki.

Nevertheless, the MMCAT lie detector was tested on people in Amsterdam in February 2019, funded by the EU. According to documentation, another test was planned for the beginning of 2021. Public project documents describe the test as follows: "A few participants were asked to participate in a card game and answer some questions about the cards, sometimes telling the truth and sometimes lying. The scenes were video-recorded and the video footage analyzed in order to evaluate whether the MMCAT system could provide useful indications about the sincerity of the game participants."

"It is not uncommon to begin with scenarios that are easy to implement in the laboratory in order to check how the results manifest within that context," says psychologist Suchotzki. At the same time, she emphasizes: "However, one should definitely not be satisfied with this alone and must then – initially in the lab, because only there you have control and realistic feedback on who is telling the truth and who is lying – try to get closer to real-life scenarios step by step. Without such intermediate steps, I believe it is premature to transfer the results of such an artificial, very remote scenario to practice."

In another test, Polish border control agents used MMCAT to identify suspicious travelers. The TRESSPASS research team drew "lessons" from these tests. "There is the risk that the technology is not accurate (e.g., is biased), is misused or that sensitive data is created that can be leaked or reused for illegal purposes," is the conclusion drawn in one report, which does not, however, question the general objective of the research. Instead, the report notes that MMCAT "stimulates the (public) debate about the why, how and what of improving interviews at border control."
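
Suchotzki's point about "control and realistic feedback" can be made concrete with a short, entirely invented example: in a card-game test the true lie/truth labels are known, so an accuracy and a false-alarm rate can be computed at all, which is exactly what is missing once such a system is pointed at real travelers. The numbers below are made up and do not come from any TRESSPASS document.

```python
# Invented evaluation sketch for a card-game-style lab test; the predictions and
# ground truth are made up and are not taken from any TRESSPASS/MMCAT report.
truth   = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]   # 1 = participant lied, 0 = told the truth
flagged = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1]   # 1 = system flagged the answer as a lie

correct = sum(t == f for t, f in zip(truth, flagged))
false_alarms = sum(1 for t, f in zip(truth, flagged) if t == 0 and f == 1)
honest_total = truth.count(0)

print(f"Accuracy: {correct}/{len(truth)}")                                   # 7/10 here
print(f"False alarms among honest answers: {false_alarms}/{honest_total}")   # 2/5 here
```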

More transparency with the new AI Act?

The EU's recently adopted AI Regulation (also known as the AI Act) could have included restrictions on the use of AI-based systems for surveillance and control at our borders. The European Parliament has long campaigned to enshrine more transparency in the AI Act and to partially ban the use of AI-based systems in migration. However, most of these safeguards were rejected by the EU Member States and are no longer included in the final documents.

DOCUMENTS

Large parts of the documents we obtained were redacted by the responsible agency of the EU Commission. Names and personal data on the following pages have been redacted by AlgorithmWatch and ZDF Magazin Royale in order to protect the privacy of individuals:

  • d4-1firstversionoftheiborderctrlsoftwareplatform-redacted-compressed.pdf (pp. 103, 152)
  • d4-2secondversionoftheiborderctrlsoftwareplatform-redacted.pdf (pp. 36, 44, 46, 47)
  • d8-5periodicprogressreport2-redacted (p. 2)
  • d8-7annualreport2-redacted.pdf (p. 22)
  • d7-6standardizationandcollaborationwithotherprojects-redacted (p. 72)

ROBORDER

NESTOR

iBorderCtrl