Great (Data) Expectations
Global | Publication | January 2022
January 21, 2022
Introduction
The world’s most sophisticated ‘big data’ and technology companies are advised by law firms, and this will continue as organizations operate in increasingly complex regulatory regimes. An illustrative example is the European Court of Justice’s Schrems II ruling, decided in July 2020 and still being felt across the industry. The decision invalidated the EU-US Privacy Shield adequacy decision, triggering a widespread review of many major corporates’ data processing, hosting and related arrangements. With increasing levels of accountability for data governance, information security and operational resilience, this is an issue for all companies, not only technology businesses. Law firms are instrumental in helping them navigate their most pressing concerns in this area.
So, physician, heal thyself. If your lawyer is advising you on the legal implications of deploying AI, would you expect them to understand its technical application? What level of technical data literacy should clients expect of their lawyers? This prompts a discussion of how savvy the legal industry is at using data, what a data-driven law firm looks like today, and what it might look like in future.
Data drivers
Good progress has been made in discrete areas of many law firms for some time. A conventional example is eDiscovery / disclosure: the process of identifying, collecting and reviewing electronically stored information in litigation or investigations. This was probably the earliest deployment of machine learning in the legal industry, and the first to demonstrate the power of AI to help identify relevant data at scale, in a way lawyers and their clients cared about.
How about other data sources? What about raw financial data, operational information, information on current and closed cases, or other internal knowledge? Ideally this data would be immediately accessible, allowing informed decisions; that in turn requires individuals with clear ownership of each source. For data to yield tangible, actionable insights, it first needs to be processed into structured information. An abundance of data, however potentially valuable, is of no use unless it is structured and cleaned. This requires law firms to engage technical expertise covering computer science and data analytics. Cleaning historical data is a considerable expense, often laborious and of course non-billable, which raises the practical concern of whether the effort is justified by the reward. For example, what value will the data create, and what problems can the combination of technology and data address?
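To make the structuring-and-cleaning step concrete, the sketch below shows the kind of normalisation involved: reconciling inconsistent date formats, stripping currency formatting, and deduplicating records. All field names and values here are hypothetical, invented purely for illustration; real matter data would be messier and the cleaning rules firm-specific.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical raw records as they might arrive from disparate systems:
# inconsistent date formats, currency strings, exact duplicates.
raw_records = [
    {"matter_id": "M-001", "opened": "2021-03-14", "fees": "£12,500"},
    {"matter_id": "M-002", "opened": "14/03/2021", "fees": "9,800"},
    {"matter_id": "M-001", "opened": "2021-03-14", "fees": "£12,500"},  # duplicate
]

@dataclass(frozen=True)  # frozen => hashable, so duplicates can be detected via a set
class Matter:
    matter_id: str
    opened: datetime
    fees: float

def parse_date(value: str) -> datetime:
    """Try each date format seen in the raw data."""
    for fmt in ("%Y-%m-%d", "%d/%m/%Y"):
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    raise ValueError(f"Unrecognised date: {value!r}")

def parse_fees(value: str) -> float:
    """Strip the currency symbol and thousands separators."""
    return float(value.replace("£", "").replace(",", ""))

def clean(records: list[dict]) -> list[Matter]:
    seen: set[Matter] = set()
    cleaned: list[Matter] = []
    for rec in records:
        matter = Matter(rec["matter_id"], parse_date(rec["opened"]), parse_fees(rec["fees"]))
        if matter not in seen:  # drop exact duplicates
            seen.add(matter)
            cleaned.append(matter)
    return cleaned

matters = clean(raw_records)
```

Once records are in this structured form, the downstream questions about value become answerable, because the data can actually be queried and compared.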
Diamonds in the rough
When thinking about data in legal work itself, our team have found its value is impossible to predict. But we have rarely collected it and not found some value. For instance, working on a mass claims portfolio for one client, we had to build an industrialised process to handle tens of thousands of claims efficiently. Because the work ran in a closed system, collecting data was fairly straightforward. The original data use case was just process efficiency and client reporting – but then we started seeing patterns. Some claims were arriving in groups that were implausible and anomalous – even though they were partially disguised and being channelled through several different organisations. This triggered some digging, which revealed some of the purported claimants weren’t even alive. This was followed by a Norwich Pharmacal order (an order requiring information be disclosed), and the striking out of a large volume of claims, which the supposed claimants often had no idea were being made (especially the dead claimants).
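The pattern-spotting described above can be sketched very simply: once claims are captured as structured records, a crude first pass is to count how often near-identical claims recur across submitting channels and flag implausibly frequent combinations. The records, channel names and threshold below are entirely hypothetical – real anomaly detection on a claims portfolio would use far richer features – but the shape of the idea is the same.

```python
from collections import Counter

# Hypothetical claim records: (claimant_id, submitting_channel, amount)
claims = [
    ("C1", "OrgA", 5000), ("C2", "OrgA", 5000), ("C3", "OrgA", 5000),
    ("C4", "OrgB", 1200), ("C5", "OrgC", 5000), ("C6", "OrgA", 5000),
]

def flag_anomalous_groups(claims, min_group=3):
    """Flag (channel, amount) pairs that recur implausibly often –
    a crude proxy for templated claims channelled through one source."""
    counts = Counter((channel, amount) for _, channel, amount in claims)
    return {key for key, n in counts.items() if n >= min_group}

suspicious = flag_anomalous_groups(claims)
```

Flags like these are only a starting point for the human digging the anecdote describes; the point is that none of it is possible unless the data was collected and structured in the first place.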
Disputes analytics
In the business of law, the value of data can be better anticipated. A key question when contemplating whether to proceed with a piece of litigation is how much it will cost. The costs of pursuing litigation are often weighed against the potential damages that may be recovered. For example, how much will it cost to get to trial, or, more specifically, how much will the disclosure (eDiscovery) exercise cost? In turn this enables creative fee arrangements, which clients are beginning to expect. Mining historical matters and case financial data to assemble internal estimates or client pricing might seem complex, but it’s not hugely different from looking at your financial statements to determine your previous spending habits. The facilitator is data that is coded and structured, enabling insights and comparisons.
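A minimal sketch of that kind of pricing baseline: given consistently coded historical matters, take the median cost of a given phase for a given matter type. The matter types, phase labels and figures below are invented for illustration only; a real model would segment far more finely and account for matter complexity.

```python
from statistics import median

# Hypothetical coded history: (matter_type, phase, cost_gbp)
history = [
    ("commercial", "disclosure", 42000),
    ("commercial", "disclosure", 38000),
    ("commercial", "disclosure", 55000),
    ("employment", "disclosure", 9000),
]

def phase_estimate(history, matter_type, phase):
    """Median historical cost for a phase of a given matter type,
    or None if there is no comparable history."""
    costs = [c for t, p, c in history if t == matter_type and p == phase]
    return median(costs) if costs else None

est = phase_estimate(history, "commercial", "disclosure")
```

The median is deliberately used here rather than the mean, since a single runaway matter would otherwise skew the estimate – the same reasoning a pricing team would apply.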
This illustrates the importance of strong data governance practices deployed firm-wide with consistent taxonomies. The Cambridge Dictionary defines ‘taxonomy’ as “a system for naming and organizing things.” In her essay ‘Legal Taxonomy’, Emily Sherwin helpfully explains the role of taxonomy in legal analysis, stating that “to think intelligently about law, one must sort legal rules and decisions into categories and generalize about fields of law.” From a practical perspective, legal search engines probably come to mind, providing access to sophisticated online search services covering case law, journals and regulations. Lex Machina, which also uses machine learning and natural language processing, provides quantified insights about courts, law firms, judges and other parties. Insights include the median time to trial and time to termination for cases before a particular judge, which law firms most often represent defendants against a particular organization, and many other interesting insights.