International arbitration report
In this edition, we focused on the Shanghai International Economic and Trade Arbitration Commission’s (SHIAC) new arbitration rules, which take effect January 1, 2024.
Australia | Publication | November 2023
In an era marked by accelerating technological advancements and innovation, it is inevitable that digital transformation (and AI in particular) will change how many industries operate. The legal profession is no exception.
In this article, we explore the types of innovation and AI that have been, and are being, developed to facilitate access to justice, their merits and potential risks, and how the legal industry may respond as these developments continue.
The gap in unmet legal need in Australia continues to grow. As a result, Community Legal Centres (CLCs), other legal non-profits, and the pro bono lawyers supporting them need to do ever more work in ever less time, frequently with stretched resources. There is therefore a strong case that the pro bono and community legal sector could benefit from new legal innovation products, including those that use AI.
Before delving further into the issue, it is important to highlight the distinction between generative and predictive AI.
Predictive AI relies on historical data to predict outcomes; it does not generate original content. Predictive AI has been favoured in the legal assistance sector as it can complete tasks which do not require emotional intelligence or creativity. Such uses include: e-discovery; legal research; pre-filling forms; judicial predictions (i.e. analysing previous decisions of a judge to predict the outcome in a particular case); translation; as well as proofreading, editing and reviewing.
In contrast, generative AI creates original content of its own. Much of today's generative AI is built on large language model (LLM) technology. LLMs vary in complexity, but in essence they process natural language inputs in order to generate new output.
A number of products are already under development, or have been launched. One example is a project that Norton Rose Fulbright is working on, together with not-for-profit legal service Justice Connect. The aim is to develop an automated natural language processing AI model that interprets the ‘everyday’ language used by clients in order to identify and correctly diagnose their legal problem. This tool is aimed at addressing the difficulties that individuals often face in deciphering legal jargon, understanding the nature of their legal issue, and identifying the type of lawyer or legal support they need to resolve that problem.
Justice Connect says: “Our hope is to share this technology at no cost with other legal service organisations across Australia, cutting down the time it takes to triage legal enquiries while also serving as an additional assistance tool for volunteers and lawyers alike. We hope by building and sharing this technology, our sector will be better placed to understand and meet peoples’ legal needs.”1
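The triage model described above is proprietary, but the general technique it relies on, mapping a help-seeker's plain-language enquiry to a legal problem category, can be illustrated with a deliberately simplified sketch. The categories, keyword lists and scoring below are hypothetical and are not Justice Connect's actual model, which uses trained natural language processing rather than keyword matching:

```python
# Minimal illustration of legal-issue triage: score a plain-language
# enquiry against keyword lists for each (hypothetical) problem category.
# Real triage systems use trained language models, not keyword matching.

TRIAGE_KEYWORDS = {
    "tenancy": ["landlord", "rent", "evict", "lease", "bond"],
    "employment": ["boss", "fired", "wages", "unfair dismissal", "workplace"],
    "debt": ["debt", "owe", "repayment", "collector", "bankrupt"],
}

def triage(enquiry: str) -> str:
    """Return the category whose keywords best match the enquiry."""
    text = enquiry.lower()
    scores = {
        category: sum(text.count(word) for word in words)
        for category, words in TRIAGE_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    # If nothing matched, flag for human review rather than guessing.
    return best if scores[best] > 0 else "needs human review"

print(triage("My landlord says he will evict me because I'm behind on rent"))
# tenancy
print(triage("I hurt my leg on holiday"))
# needs human review
```

Note the fallback in the last line of `triage`: routing unrecognised enquiries to a human rather than forcing a guess mirrors the role such tools are intended to play, as an assistance tool for volunteers and lawyers rather than a replacement for them.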
Other products have also been developed to assist help-seekers to navigate their legal issues directly through the use of AI, thereby freeing up CLCs for other legal assistance work. Some examples include:
As can be seen from these examples, integration of AI and innovation into the realm of social justice holds immense promise for revolutionising access to legal services and fostering a more equitable legal landscape.
Within the legal sector, there is also a growing recognition of the “missing middle” – individuals (as well as small and medium-sized enterprises) who cannot afford legal representation, but also do not qualify for legal aid assistance. We consider that AI and other technology innovation solutions provide a unique opportunity to bridge this gap by improving access to justice for the missing middle.
The optimism around AI in the pro bono and community legal sector is not without reservations.
One of the biggest and most topical risks is the potential errors and injustice that can result from what is termed ‘algorithmic bias’. Algorithmic bias can arise in many ways. It may stem from the design of the AI product itself or from the data set used to train the AI. Algorithmic bias has the potential to lead to a person being unfairly treated or suffering inequality and unlawful discrimination based on characteristics such as race, religion, age, gender, national or social origin, property or disability.
The Australian Human Rights Commission (AHRC) undertook a project which simulated a typical decision-making process and explored how algorithmic bias can ‘creep in’ to AI systems and, most importantly, how this problem can be addressed.2 The AHRC principles focus on eliminating bias in decision-making AI algorithms and ensuring that AI embeds human rights principles by design. This is aimed at protecting against a person being overlooked by the algorithm because they don’t tick the right boxes. If a tool assesses vulnerability using algorithms that omit, or fail to identify and prioritise, a minority group’s experiences, that group can become invisible to the algorithm. This ‘invisibility’ is reinforced by a narrative that treats digital assessments as accurate: what is not known to the algorithm is assumed not to exist, which is of course false.
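How a group becomes ‘invisible’ to an algorithm can be made concrete with a toy example. Here a vulnerability score is built only from indicators common to the majority of past applicants; the indicators, flags and records are all hypothetical and chosen purely to illustrate the mechanism the AHRC project describes:

```python
# Toy illustration of the 'invisibility' problem: a vulnerability score
# built only from indicators the designers thought of will miss people
# whose hardship presents differently. All indicators and records below
# are hypothetical.

INDICATORS = {"unemployed", "homeless", "debt"}  # what the algorithm 'knows'

def vulnerability_score(record: dict) -> int:
    """Count how many of the applicant's flags the algorithm recognises."""
    return sum(1 for flag in record["flags"] if flag in INDICATORS)

applicants = [
    {"name": "A", "flags": {"unemployed", "debt"}},
    # Applicant B's hardship (e.g. family violence, remoteness) is real,
    # but is recorded under flags the scoring rule was never designed
    # to recognise.
    {"name": "B", "flags": {"family_violence", "remote_community"}},
]

for applicant in applicants:
    print(applicant["name"], vulnerability_score(applicant))
# A 2
# B 0  <- B scores zero: invisible to the algorithm despite genuine need
```

The failure is silent: the score for applicant B is not an error, just a zero, which is precisely why a narrative that treats such assessments as accurate is dangerous.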
Applying the findings of that project, Justice Connect states that, in developing its automated natural language processing model, it actively incorporated the ethical AI and inclusive technology best practice principles released by the AHRC in its Technical Paper in order to reduce the risk of bias in its model. It is seeking to mitigate that risk by using natural language samples from a diverse range of people including the elderly, people with disability, people with mental health issues, First Nations people, people without tertiary qualifications and migrants (among others).3
Other pressing risks include:
Fundamentally, barriers to access to justice are about deeper structural inequities – race, age, gender, mental health and disability. Access to technology is merely a small part of this complex social problem. If the technology costs money, people on low incomes and minority communities may still be barred from access to legal assistance. Whilst some free or low-cost AI tools in the access to justice space have been released in Australia in recent years, there is still a large need for innovation in the legal assistance sector, developed and implemented in an ethical way.
As matters stand, AI lacks the essential human characteristics of emotional intelligence, discretion, and judgement based on the nuances of a help-seeker’s own personal situation. If AI is unable to develop these characteristics, can it ever truly deliver ‘justice’ in the way that modern society demands? Conversely, could AI eventually become better at avoiding unconscious bias than a human? Is AI an equaliser or an inhibitor of justice? These are difficult questions to grapple with. For now, human involvement remains necessary to ensure that AI tools do not place those seeking legal assistance at risk.
Another difficult question yet to be answered is whether AI legal technology can be classified as providing legal advice, thus constituting an unauthorised practice of law. This question will become more pertinent as more legal AI tools are developed that deliver ‘legal services’ to people, such as drafting wills, binding financial agreements and powers of attorney. The line between legal information and legal advice will become more blurred and regulation may become necessary.

Pro bono lawyers can advise CLCs and other not-for-profits on these complex issues by:
It is crucial to approach this transformation thoughtfully and ethically, ensuring that AI is developed and implemented with a commitment to transparency, fairness and accountability. Safeguarding against biases and promoting diversity and inclusion must be at the forefront of AI integration in the legal assistance sector. Collaboration between legal professionals, technologists, policymakers and communities will be essential to strike the right balance between innovation and responsible deployment of AI.
Ultimately, the successful integration of AI in this area has the potential to reshape the legal landscape, making justice more accessible, efficient and just for all members of society.
Legal innovation is a core focus area of the Norton Rose Fulbright pro bono practice. We look forward to working together with the sector as the range of AI products continues to expand.
Contributed to by Madison Colangelo
© Norton Rose Fulbright LLP 2023