In an era marked by accelerating technological advancements and innovation, it is inevitable that digital transformation (and AI in particular) will change how many industries operate. The legal profession is no exception.
In this article, we explore the types of innovation and AI that have been, and are being, developed to facilitate access to justice, their merits and potential risks, and how the legal industry may respond as these developments continue.
The merits of AI and innovation for access to justice
The gap of unmet legal need in Australia continues to grow. As a result, Community Legal Centres (CLCs), other legal non-profits, and the pro bono lawyers supporting them, need to do ever more work in ever less time, frequently with stretched resources. Given this, there is a strong business case for the pro bono and community legal sector to adopt new legal innovation products, including those that use AI.
Before delving into the issue further, it is important to highlight the distinction between predictive and generative AI.
Predictive AI relies on historical data to predict outcomes or classify inputs; it does not generate original content. Predictive AI has been favoured in the legal assistance sector because it can complete tasks that do not require emotional intelligence or creativity. Such uses include: e-discovery; legal research; pre-filling forms; judicial predictions (i.e. analysing previous decisions of a judge to predict the outcome in a particular case); translation; as well as proofreading, editing and reviewing.
In contrast, generative AI creates original content of its own. Much generative AI is built on large language model (LLM) technology. LLMs vary in complexity, but in essence they process natural language inputs and generate new natural language outputs.
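To make the distinction concrete, the sketch below shows a simple predictive classifier of the kind described above. It is a minimal, hypothetical example in Python using scikit-learn; the enquiries and labels are invented for illustration, and real systems would be trained on large, carefully curated datasets.

```python
# A minimal sketch of the predictive/generative distinction, assuming
# scikit-learn. The enquiries and labels below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Predictive AI: learns patterns from labelled historical examples and
# assigns a new input to a known category. No new content is created.
enquiries = [
    "my landlord wants to evict me because I am behind on rent",
    "the agent will not return my rental bond",
    "my employer has not paid my last two weeks of wages",
    "I was dismissed from my job without any notice",
]
labels = ["tenancy", "tenancy", "employment", "employment"]

vectoriser = TfidfVectorizer()
model = LogisticRegression().fit(vectoriser.fit_transform(enquiries), labels)

new_enquiry = ["I am two months behind on my rent"]
print(model.predict(vectoriser.transform(new_enquiry)))  # e.g. ['tenancy']

# Generative AI, by contrast, would compose new text in response to a
# prompt (for example via an LLM), rather than choosing from fixed labels.
```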
AI products that are being developed for the social justice sector
A number of products are already under development or have been launched. One example is a project that Norton Rose Fulbright is working on together with the not-for-profit legal service Justice Connect: an automated natural language processing (NLP) model that interprets the ‘everyday’ language used by clients in order to identify and correctly diagnose their legal problem. The tool is aimed at the difficulty individuals often face in deciphering legal jargon, understanding the nature of their legal issue, and working out the type of lawyer, or legal support, they need to resolve that problem.
Justice Connect says: “Our hope is to share this technology at no cost with other legal service organisations across Australia, cutting down the time it takes to triage legal enquiries while also serving as an additional assistance tool for volunteers and lawyers alike. We hope by building and sharing this technology, our sector will be better placed to understand and meet peoples’ legal needs.”1
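A simple illustration of the general idea of NLP-based triage is a zero-shot classifier that maps an everyday description onto candidate legal categories. The sketch below is hypothetical and is not Justice Connect’s actual model; it assumes the Hugging Face transformers library and its default zero-shot classification model.

```python
# A hypothetical sketch of NLP triage: mapping 'everyday' language onto
# candidate legal issue categories. This is NOT Justice Connect's model;
# it uses an off-the-shelf zero-shot classifier from Hugging Face.
from transformers import pipeline

classifier = pipeline("zero-shot-classification")  # downloads a default model

enquiry = "My boss stopped paying me and now I can't cover my rent"
categories = ["employment", "tenancy", "family law", "consumer debt"]

# multi_label=True allows one enquiry to raise more than one legal issue.
result = classifier(enquiry, candidate_labels=categories, multi_label=True)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")  # ranked legal issue categories
```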
Other products have also been developed to assist help-seekers to navigate their legal issues directly through the use of AI, thereby freeing up CLCs for other legal assistance work. Some examples include:
- a free self-help tool for renters that makes it easier for a person who is behind on their rent to understand their options.
- an app designed by family lawyers to guide people through the process of separation from beginning to end, including reaching settlements about parenting arrangements and division of property.
- a technology used to match refugees to a location based on their needs and skills and the resources and opportunities available there, with a view to optimising the employment and settlement of refugees.
- a low-cost product which provides individuals and small to medium sized businesses with online legal services including ‘do it yourself’ incorporation, completion of legal forms, legal document review and estate planning.
As can be seen from these examples, integration of AI and innovation into the realm of social justice holds immense promise for revolutionising access to legal services and fostering a more equitable legal landscape.
Within the legal sector, there is also a growing recognition of the “missing middle”: individuals (as well as small and medium-sized enterprises) who cannot afford legal representation, but also do not qualify for legal aid assistance. We consider that AI and other innovative technology solutions present a unique opportunity to bridge this gap by improving access to justice for the missing middle.
Risk factors and barriers
The optimism around AI in the pro bono and community legal sector is not without reservations.
One of the biggest and most topical risks is the potential for errors and injustice resulting from what is termed ‘algorithmic bias’. Algorithmic bias can arise in many ways: it may stem from the design of the AI product itself or from the data set used to train it. Algorithmic bias has the potential to lead to a person being unfairly treated, or suffering inequality and unlawful discrimination, based on characteristics such as race, religion, age, gender, national or social origin, property or disability.
The Australian Human Rights Commission (AHRC) undertook a project which simulated a typical decision-making process and explored how algorithmic bias can ‘creep in’ to AI systems and, most importantly, how the problem can be addressed.2 The AHRC’s principles focus on eliminating bias in decision-making AI algorithms and on embedding human rights principles in AI by design. The aim is to protect against a person being overlooked by an algorithm because they do not tick the right boxes. If a tool assesses vulnerability using algorithms that omit, or fail to identify and prioritise, a minority group’s experiences, that group can become invisible to the algorithm. This ‘invisibility’ is reinforced by a narrative that treats digital assessments as accurate: what is not known to the algorithm is assumed not to exist, which is of course false.
Applying the findings of that project, Justice Connect states that, in developing its automated natural language processing model, it actively incorporated the ethical AI and inclusive technology best practice principles released by the AHRC in its Technical Paper, with the aim of producing an unbiased model. It is seeking to mitigate the risk of bias by using natural language samples from a diverse range of people, including the elderly, people with disability, people with mental health issues, First Nations people, people without tertiary qualifications and migrants (among others).3
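One practical expression of these principles is a disaggregated evaluation: measuring a model’s performance separately for each demographic group rather than relying on a single aggregate figure. The sketch below is hypothetical, with invented groups and outcomes, using pandas.

```python
# A hypothetical sketch of a disaggregated bias check: compare a model's
# accuracy for each demographic group instead of one aggregate figure.
# The groups and outcomes below are invented for illustration.
import pandas as pd

results = pd.DataFrame({
    "group": ["migrant", "migrant", "migrant",
              "first_nations", "first_nations",
              "general", "general", "general"],
    "correct": [True, False, False, False, True, True, True, True],
})

# A large accuracy gap between groups suggests the model 'overlooks'
# some users: the 'invisibility' problem described above.
print(results.groupby("group")["correct"].mean())
```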
Other pressing risks include:
- AI lacks the inherent human characteristics necessary for the effective practice of law, including discretion, judgement based on the nuances of an individual’s personal situation, emotional intelligence and, crucially, trust. These qualities are all the more vital in the pro bono and community legal sector, especially where access to justice has been difficult or barred.
- AI cannot currently pick up on certain complexities of the law such as jurisdiction-specific nuances.
- Generative AI tools have fabricated information. A notable example occurred in June 2023, when two Manhattan lawyers filed court submissions that included non-existent judicial opinions, with fake quotes and citations, created by ChatGPT. U.S. District Judge P. Kevin Castel said in a written opinion that there was nothing “inherently improper” about using artificial intelligence to assist in legal work, but that lawyers had to ensure their filings were accurate.4 Mitigating the risk of AI-generated false information will require appropriate technological safeguards (a sketch of one possible safeguard appears after this list), in addition to regulation and to lawyers using predictive, rather than generative, AI in their work.
- Inputting client or other personal information into AI models poses serious ethical and confidentiality concerns, as some models can retain and re-use that information in the future (a minimal redaction sketch also appears after this list). There are sectors where this poses particular concerns, for example when advising refugees: “The digital transformation of international protection also means that of being visible and consenting to become a data subject – a refugee must physically come forward to register and agree to have a great deal of personal and sensitive data recorded, stored, and shared – have become requirements for being registered as a refugee, receiving aid (for example biometric banking), and having one’s claim for a durable solution processed.”5 The concern here is that AI technology used by lawyers could itself be a source of risk to refugees.
- AI tools hold sensitive client information (e.g. in relation to domestic violence victims and refugees seeking asylum), which could be susceptible to hacking and used to the detriment of the person involved. In contrast to the ‘invisibility’ problem referred to earlier, here it is the ‘visibility’ of such persons that could be dangerous.
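As a concrete illustration of the technological safeguards mentioned above, one approach is to verify every citation in AI-drafted text against a trusted authority before filing. The sketch below is hypothetical: the hard-coded set of cases stands in for an authoritative source such as a court registry or legal publisher database, and the fabricated citation is invented.

```python
# A hypothetical sketch of a hallucination safeguard: verify every
# citation in AI-drafted text against a trusted source. The hard-coded
# set below stands in for a court registry or legal publisher database.
KNOWN_CASES = {
    "Mabo v Queensland (No 2) (1992) 175 CLR 1",
    "Donoghue v Stevenson [1932] AC 562",
}

def unverified_citations(citations: list[str]) -> list[str]:
    """Return any citations that cannot be matched to a known authority."""
    return [c for c in citations if c not in KNOWN_CASES]

draft_citations = [
    "Mabo v Queensland (No 2) (1992) 175 CLR 1",
    "Smith v Imaginary Airlines [2019] FCA 999",  # an invented, fake case
]
print(unverified_citations(draft_citations))  # flags the fabricated case
```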
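Similarly, one minimal mitigation of the confidentiality risks above is to redact obvious identifiers before any client text reaches an external model. The patterns below are illustrative only; robust redaction would require named-entity recognition and human review.

```python
# A hypothetical sketch: redact obvious identifiers before client text is
# sent to an external AI model that may retain inputs. The regex patterns
# are illustrative; real redaction needs NER models and human review.
import re

PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.-]+",
    "PHONE": r"\b(?:\+?61|0)\d(?:[ ]?\d){7,9}\b",  # Australian-style numbers
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = re.sub(pattern, f"[{label}]", text)
    return text

print(redact("Call Jane on 0412 345 678 or email jane@example.com"))
# -> "Call Jane on [PHONE] or email [EMAIL]"
```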
The future of AI and innovation in the access to justice sector
Fundamentally, barriers to access to justice are about deeper structural inequities – race, age, gender, mental health and disability. Access to technology is merely a small part of this complex social problem. If the technology costs money, people on low incomes and minority communities may still be barred from access to legal assistance. Whilst some free or low-cost AI tools in the access to justice space have been released in Australia in recent years, there remains a significant need for innovation in the legal assistance sector that is developed and implemented ethically.
As matters stand, AI lacks the essential human characteristics of emotional intelligence, judgement based on the nuances of a help-seeker’s own personal situation, and discretion. If AI is unable to develop these characteristics, can it ever truly deliver ‘justice’ in the way that modern society demands? Conversely, could AI eventually become better at avoiding unconscious bias than a human? Is AI an equaliser or an inhibitor of justice? These are difficult questions to grapple with. For now, human involvement remains necessary to ensure that AI tools do not place those seeking legal assistance at risk.
Another difficult question yet to be answered is whether AI legal technology can be classified as providing legal advice, and therefore as the unauthorised practice of law. This question will become more pertinent as more legal AI tools are developed that deliver ‘legal services’ to people, such as drafting wills, binding financial agreements and powers of attorney. The line between legal information and legal advice will become increasingly blurred, and regulation may become necessary.
Pro bono lawyers can advise CLCs and other not-for-profits on these complex issues by:
- working with them to map out how to implement AI technology ethically, taking into account where formulaic outputs may be acceptable with less need for human oversight (for example, in tenancy matters) and where emotional intelligence, judgement and discretion are needed;
- developing policies around use of AI technology and advising on related laws and regulation as this (inevitably) develops;
- advising on the licensing and accessibility of AI tools. There are concerns that technology companies will predominantly develop products for entities with deep pockets, including large organisations and law firms, and will de-prioritise the access to justice sector. A further concern is that this could result in a two-tier system in which superior AI services are accessible only to those who can afford them, while those who cannot are left with inferior services, further widening the justice (or injustice) gap;
- advising on the distinction between legal ‘information’ and legal ‘advice’ in the context of automated legal tools; and
- advising on privacy concerns (particularly, but not exclusively, in relation to refugees and victims of domestic family violence).
It is crucial to approach this transformation thoughtfully and ethically, ensuring that AI is developed and implemented with a commitment to transparency, fairness and accountability. Safeguarding against biases and promoting diversity and inclusion must be at the forefront of AI integration in the legal assistance sector. Collaboration between legal professionals, technologists, policymakers and communities will be essential to strike the right balance between innovation and responsible deployment of AI.
Ultimately, the successful integration of AI in this area has the potential to reshape the legal landscape, making justice more accessible, efficient and just for all members of society.
Legal innovation is a core focus area of the Norton Rose Fulbright pro bono practice. We look forward to working together with the sector as the range of AI products continues to expand.
Contributed to by Madison Colangelo