South Africa | Publication | June 2024
“We accept that GenAI can have a massive impact, but we don’t know where to start” is a common sentiment among clients. This challenge is particularly pronounced in the legal sector, from which I hail and where the potential drawbacks of current-generation generative AI (GenAI) demand extremely careful navigation and implementation. But this mix of curiosity and concern exists in industries across the economy.
In the past few weeks, OpenAI has made the assistant version of its GPT-4 Omni (GPT-4o) model, widely recognised as the most capable GenAI model currently available, accessible to free-tier users of ChatGPT. I see this as an incredible move for the adoption of GenAI, because people have often prematurely dismissed the impact and capabilities of GenAI after interacting only with the free, less capable models. However, while this move has made advanced GenAI more accessible, it also feeds a growing worry: the rise of “Shadow AI”, the use of GenAI tools by employees without the knowledge, guidance, or regulation of their employers. Shadow AI poses significant risks to organisations, exposing them to instances where employees have misunderstood the capabilities of GenAI tools, used them for unsuitable tasks, or inputted private or confidential information.
The reality is that the rapid adoption of GenAI tools is undeniable. The 2024 Work Trend Index Annual Report by Microsoft and LinkedIn found that 75% of employees surveyed already use GenAI in their work, often without employer awareness, and that 46% of them only started doing so in the prior six months. The pool of employees using GenAI is likely broader than many leaders realise, encompassing both technical and non-technical roles: a McKinsey survey reported that 88% of respondents use GenAI as part of routine tasks.
This widespread but unregulated use of GenAI is concerning. Without proper oversight, employees might use these tools in ways that could compromise data security, lead to the dissemination of incorrect information, or violate compliance requirements. GenAI can significantly enhance productivity when used appropriately, but its misuse can lead to serious repercussions and reputational damage.
GenAI has the potential to alter the workplace by automating repetitive tasks, providing tailored guidance to more junior employees, and allowing employees to focus on higher-level cognitive work. McKinsey's own research highlights that GenAI can free employees from mundane tasks, enabling them to engage in more creative and strategic activities.
However, the benefits of GenAI come with significant risks if it is not managed properly. A magistrate’s court judgment last year starkly illustrates the potential pitfalls. In Parker v Forsyth N.O. & Others, the plaintiff’s attorneys relied solely on ChatGPT for legal research and failed to verify the AI-generated results, which led to the submission of inaccurate and false information. The court’s commentary deserves praise: it underscored that while these tools can be beneficial, their use heightens the importance of traditional due diligence and independent verification.
Following this case and others around the globe, and in recognition of the advantages that GenAI offers, my firm made training on the responsible and appropriate use of GenAI tools compulsory for all employees. Clients value not only high-quality work but, increasingly, innovative methods. Through this training, we encourage our employees (both lawyers and business services staff) to explore how GenAI can improve their work processes, while highlighting the importance of data privacy, professional privilege, and output validation, and informing them of the limitations of these tools (such as never relying on them as a source of legal knowledge).
Through such measures, the risks of Shadow AI can be mitigated while GenAI’s benefits are harnessed. Employers must establish comprehensive GenAI policies. This includes setting clear guidelines for ethical use, conducting regular training sessions, and implementing robust oversight mechanisms to monitor AI usage (one possible shape of such a mechanism is sketched below). Providing ongoing education on data privacy, output validation, and the limitations of AI tools is crucial for fostering a responsible AI culture. The UK judiciary’s recent guidance on AI use in litigation is an example of proactive measures in an environment where Shadow AI could have destructive consequences. The guidance emphasises that, while the use of generative AI is permissible, it must be done responsibly, with appropriate safeguards in place to ensure accuracy and confidentiality.
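By way of illustration only, the sketch below shows one form an oversight mechanism could take: an internal wrapper that screens and logs employee prompts before they reach a GenAI provider. It is a minimal sketch in Python, assuming the official OpenAI Python SDK; the screening patterns, log destination, and blocking policy are hypothetical placeholders, not a complete data-loss-prevention solution.

```python
import logging
import re

from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

logging.basicConfig(filename="genai_usage.log", level=logging.INFO)

# Hypothetical screening patterns; a real deployment would rely on proper
# data-loss-prevention tooling rather than simple keyword matching.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{13}\b"),                     # 13-digit strings, e.g. SA ID numbers
    re.compile(r"privileged|confidential", re.I),  # crude privilege/confidentiality screen
]

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def screened_completion(user: str, prompt: str) -> str:
    """Forward a prompt to the model only after screening and logging it."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            logging.warning("Blocked prompt from %s (matched %r)", user, pattern.pattern)
            raise ValueError("Prompt appears to contain sensitive information.")

    logging.info("Forwarding prompt from %s to the model", user)
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice; substitute the approved model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

In practice, screening of this kind would sit alongside, not replace, the training, confidentiality, and human-review safeguards discussed above.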
Creating a culture that promotes the responsible use of AI is crucial if companies are to see the benefits of GenAI. This involves not only providing technical training but also fostering an environment where employees feel empowered to experiment with AI tools while understanding their limitations. It is employees who will best be able to identify where and when GenAI can augment and improve workflows. And because GenAI’s capabilities change constantly, ongoing education programmes are needed to keep employees equipped as the tools evolve.
So where should an organisation start with GenAI? With its employees. Guide and train them in the basics of assistant-based GenAI, and they will help identify where further investment in GenAI could be valuable. This proactive approach not only enhances productivity and innovation but also mitigates one of the most significant emerging risks: Shadow AI.