Australia | Publication | December 2024
It has been one year since the publication of our article, Virtual Justice? Exploring AI’s impact on legal accessibility. We opened that article by stating that, in an era marked by accelerating technological advancement and innovation, it is inevitable that digital transformation (and AI in particular) will change how many industries operate. That statement is only more pertinent one year on.
The race to implement AI tools continues unabated across many facets of modern life. Governments, educational institutions, businesses, and charitable organisations alike are seeking efficiencies by incorporating AI into their systems and practices. The rapid adoption of AI has generated as many questions as it has answers.
Foremost amongst these is the extent to which AI presents a risk, or an opportunity, for the most vulnerable in society. The adoption of AI in the refugee space captures the complexities of this debate and can tell us much about the experiences of vulnerable groups in a world transformed by AI. In this article, we consider the “double-edged sword” of AI’s potential uses in the refugee space.
In 2024 we have seen even greater global and geopolitical uncertainty, arising in particular from the continued war in Ukraine, the conflict in Sudan, and the actions of the Israeli military in Gaza. Each of these conflicts, and many more, has produced an exodus of refugees seeking security and a more settled life. The United Nations High Commissioner for Refugees (UNHCR) found that, as at May 2024, there were 120 million people around the world who had been forcibly displaced from their homes due to persecution, conflict, violence, or other human rights violations.1
Moreover, climate change promises to act as a “threat multiplier” for refugees and forcibly displaced persons.2 With some commentators referring to global warming as, in fact, “global boiling”, island nations are increasingly raising questions about their future existence, and the prospect of a person seeking protection as a consequence of climate change is very real.3 Other problems faced by refugees include perilous journeys, inhospitable attitudes and political environments, and other difficulties associated with resettling.
Alongside these trends, the world has experienced a sharp growth in AI’s capabilities and, positively, there have been many innovative uses of AI to improve the lives of refugees. As AI becomes increasingly sophisticated, it is able to imitate human decision-making processes but on a larger scale, processing reams of data in a matter of seconds. As the technology becomes more advanced, it will also be able to undertake more complex functions, such as engaging in problem-solving and strategic planning.4
On its face, therefore, it is a technology well adapted to decision-making in migration matters, which involves synthesising and evaluating large volumes of information, but the use of such technology can have mixed results.5 For example, as highlighted in our 2023 article, AI can assist with research, but, at present, it can (and does) also generate harmful misinformation, intensifying negative attitudes towards asylum seekers.
In a 2020 report commissioned by the European Commission, consulting firm Deloitte identified five areas in asylum processing where AI could be implemented:
Key decision-makers in Australia have acknowledged that the adoption of AI is a task that should be approached with caution and foresight, whilst hailing the technology as an opportunity to increase efficiency and aid strategic planning.7 Michael Outram, the outgoing Australian Border Force Commissioner, has stated that “AI won’t supplant human judgement, accountability, and responsibility for decision-making; AI will augment it”.8
Highlighting the need for such caution, AI was trialled by European Union border authorities in a controversial project entitled ‘iBorderCtrl’, in which AI was used as a so-called ‘lie detector’ to test arrivals in the EU for outward signs of deception. The pilot program was heavily criticised by human rights experts as opaque, potentially discriminatory, and contrary to human rights. The pilot has been placed on indefinite hiatus; however, it provides a cogent indication of the possible future uses of AI in border security and asylum systems.
The UNHCR has successfully trialled several projects powered by artificial intelligence. In one ongoing example, Project Jetson, a predictive analytics experiment, processes large volumes of data to understand and predict large movements of people. This assists bodies like the UNHCR and other NGOs in proactive contingency planning, and in mobilising resources quickly and efficiently. In the context of the ongoing conflict in Somalia, researchers used the model to understand the possible internal flows of refugees in the event of a drought exacerbating displacement.
The UNHCR reported that crashing goat prices are a leading indicator of refugee flows in Somalia: as people prepare to leave, they sell their goats, which are unable to make long journeys.9 This is just one example of how on-the-ground micro inputs can be used by innovative, large, AI-powered models to predict future areas of humanitarian need.
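To make the leading-indicator idea concrete, the following is a minimal sketch, using invented figures and a simple linear fit rather than anything resembling the UNHCR’s actual Project Jetson methodology, of how a local price signal could be turned into a rough displacement forecast.

```python
# Illustrative sketch only: a toy leading-indicator model with invented data.
# Project Jetson's real models are far larger and combine many data sources.
import numpy as np

# Hypothetical monthly series: average market goat price (USD) and the number of
# displaced people recorded in the same region the following month.
goat_price = np.array([32, 31, 29, 24, 18, 15, 14, 12], dtype=float)
displaced_next_month = np.array([1200, 1300, 1500, 2400, 4100, 5200, 5600, 6300], dtype=float)

# A crashing price should correlate negatively with subsequent displacement.
correlation = np.corrcoef(goat_price, displaced_next_month)[0, 1]
print(f"Price vs. next-month displacement correlation: {correlation:.2f}")

# Fit a simple linear model so the indicator can be turned into a rough forecast.
slope, intercept = np.polyfit(goat_price, displaced_next_month, deg=1)
latest_price = 10.0  # hypothetical new observation from field monitors
print(f"Rough forecast for next month: {slope * latest_price + intercept:.0f} people")
```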
In Australia, AI could assist in expanding access to justice for refugees. By analysing the decision-making patterns of the Administrative Appeals Tribunal (AAT) (now the Administrative Review Tribunal), research by the UNSW Kaldor Centre for International Refugee Law found that refugees and asylum seekers who received representation before the AAT were nine times more likely to achieve a favourable outcome, and that 52% of applicants did not have representation before the AAT.10 The head of the Kaldor Centre, Professor Daniel Ghezelbash, has flagged the opportunity to use AI to analyse decision-making patterns in order to “better leverage data on how asylum cases are processed and reviewed to improve both individual decision-making and institutional reform”.11
There are also several pilots trialling the use of artificial intelligence to identify where refugees should be resettled. In 2020, Stanford University’s Immigration Policy Lab and ETH Zurich launched the pilot of a tool called ‘GeoMatch’, which uses machine learning and large amounts of historical economic, social, political, and environmental data to determine the locales that will be the most hospitable to refugees.12 Although full data from the pilot (including employment outcomes and other quantitative measures) are not yet available, not-for-profit and government agencies that have trialled the tool have reported increased efficiency in the lengthy resettlement process, freeing up resources to help with other aspects of resettlement.13
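By way of illustration only, the sketch below shows the basic shape of such a recommendation step, assuming a model has already produced (here, invented) integration scores for each candidate location; GeoMatch itself is trained on large historical datasets and uses a more sophisticated matching procedure.

```python
# Illustrative sketch only: a toy placement recommendation in the spirit of tools like
# GeoMatch. Scores, locations and capacity figures below are invented.

# Hypothetical model output: predicted probability that a given family will find
# employment within a year if resettled in each candidate location.
predicted_scores = {"Location A": 0.62, "Location B": 0.48, "Location C": 0.71}

# Remaining resettlement places in each location.
remaining_capacity = {"Location A": 14, "Location B": 30, "Location C": 0}

def recommend(scores, capacity):
    """Return the highest-scoring location that still has capacity, or None."""
    feasible = [location for location, places in capacity.items() if places > 0]
    if not feasible:
        return None
    return max(feasible, key=lambda location: scores.get(location, 0.0))

print(recommend(predicted_scores, remaining_capacity))  # prints "Location A"
```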
AI may also be useful on a mass scale for understanding public perceptions of asylum seekers in the wake of events like natural disasters and civil unrest. UNHCR’s Data Innovation program developed an AI-powered, open-source tool in 2016 to analyse ‘hate speech’ and anti-refugee rhetoric on the social media site Twitter (now ‘X’). The algorithm automatically categorised tweets as xenophobic or refugee-friendly.14
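As a purely illustrative sketch of that kind of labelling step, the snippet below applies a simple keyword rule to example posts; the UNHCR tool relied on a trained machine-learning classifier rather than fixed keyword lists, and every term and post below is invented.

```python
# Illustrative sketch only: a minimal keyword-based labeller, far simpler than a trained
# classifier. Keyword lists and example posts are invented for demonstration.
HOSTILE_TERMS = {"invasion", "send them back", "flood of migrants"}
SUPPORTIVE_TERMS = {"welcome refugees", "safe passage", "open our doors"}

def label_post(text):
    """Label a social media post as xenophobic, refugee-friendly, or neutral."""
    lowered = text.lower()
    if any(term in lowered for term in HOSTILE_TERMS):
        return "xenophobic"
    if any(term in lowered for term in SUPPORTIVE_TERMS):
        return "refugee-friendly"
    return "neutral"

posts = [
    "We should welcome refugees into our community.",
    "This is an invasion, close the border now.",
    "New report on asylum processing times released today.",
]
for post in posts:
    print(f"{label_post(post):16} | {post}")
```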
In another example, developed by Maurice Blackburn in collaboration with Bret Walker SC and Bernie Quinn QC, a project entitled ‘Exhibit A-i’ uses photo-realistic images generated by AI to produce a website accompanying statements of refugees who were imprisoned on Manus Island or Nauru, provided in the course of litigation against the Commonwealth. The aim of the project is to “make the invisible seen and restore humanity to thousands whose trauma has been hidden from view”, noting that the Australian public’s lack of access to the centres means that much of what happened there has been reduced to words only.
Use of AI in the refugee space entails significant risk, particularly in relation to Australia’s non-refoulement obligations under international law and the potential for AI to weaken natural justice requirements in the Australian asylum system. Signatories to the 1951 Convention relating to the Status of Refugees have an obligation to avoid returning refugees to the borders of territories where they may face a risk of harm. The principle of non-refoulement requires that the individual circumstances of asylum seekers be considered in any refugee status determination. The potential use of AI within asylum systems presents the risk that the (often opaque) processes and algorithms underlying automated decision-making could fail to take account of such individual circumstances, or could prioritise policy goals such as “border security” or “counter-terrorism” at the expense of international legal standards.15
This risk is heightened by the reality that some AI systems have been found to contain and reproduce “algorithmic racism”. Evidence from trials of large language models, facial recognition tools and natural language processing tools shows that these tools consistently perform worse in relation to people from non-white backgrounds or cultural minorities.16 Such biases may arise from unrepresentative data sets used to train the model, or from unconscious racial biases embedded in the algorithm itself, including in how the algorithm is trained to define “success”.17
Moreover, a generative AI model may reproduce patterns that are themselves caused by systemic racism or entrenched disadvantage. For example, a trial of AI technology in the United States to predict the likelihood of criminal recidivism found that the technology was far more likely to incorrectly classify individuals from ethnic minorities as being at high risk of reoffending, and therefore to recommend harsher custodial outcomes, than white individuals.18 Studies of facial recognition tools have likewise found that such models are more likely to produce false positives for people from ethnic minority backgrounds than for people with light skin.19 Such risks have reportedly also materialised on the US-Mexico border, where the facial recognition function on an app that asylum seekers are required to use to make their asylum applications is allegedly unable to recognise the faces of black asylum seekers, thereby precluding them from making asylum applications.20
The opacity of AI systems heightens the risks. Decisions about a refugee’s status and asylum claims are highly complex and nuanced, involving the consideration of a number of different factors.21 The parameters of these decisions are monitored by judicial review, to ensure a minimum standard of procedural fairness. However, in many systems, there is a significant backlog in processing applications for asylum, making the use of AI an attractive option to help reduce the strain, particularly in light of the complex and data-intensive nature of the decision-making process.
In the United States, the incoming Trump administration’s plans to use the military to effect “mass deportations” of undocumented immigrants, including potentially of ‘Dreamers’ who arrived in the country as children, also signal a new context for the use of technologies like AI in asylum systems, raising concern among immigration rights advocacy groups.22 The potential for AI to take over such determinations raises questions of opacity, fairness and accountability, and it is unclear to what extent courts will be able to review the decision-making process undertaken by the “black box” of AI to ensure that all relevant considerations have been taken into account.
Algorithms may also not be available for public scrutiny under intellectual property laws, meaning that the logic underlying an automated decision may be shielded from judicial review.23 When combined with the well-documented trend of racial bias in automated decision-making and the vulnerability of asylum seekers and refugees, the use of such technology in asylum systems raises significant concern.
The lack of human or departmental accountability for decisions made by AI has been identified as a major risk associated with the use of AI in asylum systems.24 Where there is a significant risk that the outcomes of AI algorithms may be biased, or that processes lack procedural safeguards, it becomes even more important to identify the person or entity responsible for a decision or breach, particularly for the purposes of public interest litigation and government accountability.
There are promising reforms globally that point to a way forward in regulating the use of AI in high-risk spaces like asylum systems. On 1 August 2024, the European Union’s AI Act (EU AI Act) entered into force. The EU AI Act sets out a comprehensive legal framework on the use of AI, categorising uses of the technology into four risk levels: unacceptable, high, limited, and minimal risk.25 The use of AI in immigration and asylum systems is classed as ‘high risk’ under the Act, meaning that such uses are subject to a number of restrictions and safeguards. These safeguards include obligations to:
The requirement for human oversight removes the possibility of fully automated decision-making in asylum systems. The Act also classes assessing the risk of an individual committing criminal offences based on profiling or personality traits as posing an “unacceptable risk”, meaning that the use of AI in these circumstances is prohibited. However, there are significant limitations to the EU AI Act in terms of human rights protections. Information about high-risk AI systems in migration, asylum and border management will be registered in a non-public section of the EU database, which raises concerns as to secrecy, transparency, accountability and access to redress for asylum seekers.26 Nevertheless, the EU AI Act is a promising first step towards global regulation of AI in high-risk settings like asylum systems.
Canada is on its way to implementing similar reforms through the proposed Artificial Intelligence and Data Act (AIDA), which adopts a risk-based approach to regulating uses of AI. Uses of AI systems defined as “high impact” include biometric identification and use by courts and administrative bodies.27 Steps are also being taken towards similar reform in the Australian context. On 5 September 2024, the Australian Government released a proposals paper for regulating the use of AI in high-risk settings, which was open for consultation until 4 October 2024. The proposals paper suggests that the Australian Government intends to follow the EU in implementing a risk-based approach to regulating AI (adopting differing levels of regulation for uses of AI classed at differing risk levels), with an emphasis on a preventative rather than remedial approach. To assist developers in identifying whether a proposed AI system is “high risk”, the first of the proposed principles for making that assessment is “the risk of adverse impacts to an individual’s rights recognised in Australian human rights law without justification, in addition to Australia’s international human rights obligations”.
Developers are also required under the principles to consider whether a system adversely affects the collective rights of cultural groups or the rule of law. Examples of such high-risk systems provided by the proposals paper include biometric identification systems that carry a risk of racial bias, and systems that attempt to predict criminal behaviour. The proposed mandatory guardrails for models falling within the “high-risk” category are focused on accountability, transparency, and testing. Importantly, the fifth mandatory guardrail proposes to require that developers ensure “human control or intervention in an AI system to achieve meaningful human oversight”.28
Technology is not neutral, and from what we increasingly understand of AI, it is clear that this technology’s outputs will reflect its inputs. The risks of unmitigated, wholesale adoption of AI are likely to fall most heavily on the most vulnerable, including refugees and asylum seekers. Importantly, AI that is created and implemented by a society that is hostile to refugees carries the risk that the technology will reflect this hostility rather than protect the rights of refugees under international law, such as the rights to protection, safety, life, and freedom. The risk of refoulement is a striking example of the “profound human rights ramifications and real impacts on human lives” presented by AI technology in all areas of life.29 The benefits of AI can only be realised where such systems are constructed with a view to protecting human rights and to furthering the moral obligations of states towards those in dire need of assistance.
Ibid.
Deloitte and Directorate-General for Migration and Home Affairs (European Commission), Opportunities and challenges for the use of artificial intelligence in border control, migration, and security (Report, 2020).
Michael Outram, ‘How the Australian Border Force can exploit AI’, The Australian Strategic Policy Institute (Web Page, 5 April 2024).
Ibid.
UNSW Law & Justice, ‘Leveraging AI-driven technologies improves access to justice for people seeking asylum’, UNSW (Web Page, 30 May 2023).
Ibid.
Immigration Policy Lab, ‘GeoMatch: Implementing the Algorithm’, Immigration Policy Lab (Web Page).
Beth Findley, ‘Why Racial Bias is Prevalent in Facial Recognition Technology’, Harvard Journal of Law and Technology (3 November 2020). Accessed 31 October 2024. See also Aylin Caliskan, ‘Detecting and mitigating bias in natural language processing’, Brookings Institution (10 May 2021). Accessed 30 October 2024.
Report of the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance (A/HRC/56/68, 3 June 2024).
Jeff Larson, Surya Mattu, Lauren Kirchner et al., ‘How We Analyzed the COMPAS Recidivism Algorithm’, ProPublica (23 May 2016). Accessed 30 October 2024.
Jane Bailey, Jacquelyn Burkell, and Valerie Steeves, ‘AI technologies – like police facial recognition – discriminate against people of colour’, The Conversation (25 August 2020). Accessed 20 November 2024.
Melissa del Bosque, ‘Facial recognition bias frustrates Black asylum applicants to US, advocates say’, The Guardian (8 February 2023). Accessed 30 October 2024.
Petra Molnar, ‘Technology on the Margins: AI and global migration management from a human rights perspective’, Cambridge International Law Journal (2019) 8(2), 305.
Avi Asher-Schapiro, ‘Surveillance technology could supercharge Trump’s plans, rights groups say’, The Japan Times (19 November 2024). Accessed 20 November 2024.
Ibid, 310.
EU AI Act, Annex III.
Ludivine Sarah Stewart, ‘The regulation of AI-based migration technologies under the EU AI Act: (Still) operating in the shadows?’, European Law Journal (2024) 30(1-2), 122. See also Article 49(4) of the EU AI Act.
Government of Canada, The Artificial Intelligence and Data Act (AIDA) – Companion document (2023).
Department of Industry, Science, and Resources, Introducing mandatory guardrails for AI in high-risk settings (Proposals paper, 5 September 2024).
Petra Molnar, ‘Technology on the Margins: AI and global migration management from a human rights perspective’, Cambridge International Law Journal (2019) 8(2), 305.