Gaming and artificial intelligence
Global | Publication | April 2024
Gaming raises a multitude of legal issues for businesses. In this series of publications, Gaming and law: What businesses need to know, we cover a number of the most topical issues.
In this Part 4, Gaming and artificial intelligence, Lara White, Shiv Daddar and Rosie Nance consider how the regulation of AI in the EU will impact the use of AI in gaming.
For many, gaming is one of the first applications that comes to mind when thinking about implementations of AI. Games provide an opportunity for people to be transported to a virtual world, with people, dialogue and other features of the real world replicated on-screen.
Traditionally, this has all been done by developers manually. Each character was manually modelled and dressed and had its movement scripted. Each building was individually designed and placed, and even rays of light would be ‘baked into’ in-game textures. As processing power for game engines has increased, we have seen greater automation of these processes. Games like Assassin's Creed and Grand Theft Auto were early examples of how the aesthetics of characters can be procedurally generated (i.e. generated algorithmically rather than manually) to better mimic the diversity of people in the real world. Similarly, games like Minecraft brought procedurally generated worlds into the mainstream, automatically generating landscapes and other interesting features for players to explore.
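To make the procedural generation idea concrete, here is a minimal sketch in which a seeded random number generator deterministically assembles varied NPC appearances from fixed parts lists; all names, parts and values are illustrative assumptions rather than anything drawn from a real game engine.

```python
# Minimal, illustrative sketch of seed-based procedural generation.
# The parts lists and field names below are assumptions for the example.
import random

HAIRSTYLES = ["short", "long", "curly", "braided"]
OUTFITS = ["tunic", "armour", "robes", "leathers"]
HEIGHTS_CM = range(150, 201)

def generate_npc(seed: int) -> dict:
    """Derive one NPC's appearance deterministically from a seed."""
    rng = random.Random(seed)  # the same seed always yields the same NPC
    return {
        "hair": rng.choice(HAIRSTYLES),
        "outfit": rng.choice(OUTFITS),
        "height_cm": rng.choice(HEIGHTS_CM),
    }

# A crowd of varied NPCs, reproducible from the same base seeds on every load.
crowd = [generate_npc(seed) for seed in range(1000, 1010)]
```

Because each appearance is derived deterministically from its seed, a diverse crowd can be recreated identically on every load without hand-authoring or storing each character.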
While games have come a long way, there is sometimes still a sense of things not quite feeling real. You may hear two different-looking characters use the same dialogue. You may see a building that you swore you saw on the other side of town.
As developers push to create bigger, more diverse and more immersive worlds, AI promises to be a solution to many of the challenges they face. AI can be used to create characters that fit into a scene while remaining diverse. It can generate buildings that are appropriate in a city and it can even allow you to talk to characters as you would a real person.
Already, it is easy to see the amount of data that could be captured and analysed by this technology and the opportunities (and challenges) that this creates for businesses.
The EU is adopting the first comprehensive framework on AI worldwide. On 8 December 2023, the European Parliament and the Council reached political agreement on the Regulation setting out harmonised rules for the development and deployment of AI (the AI Act). The European Parliament approved the AI Act on 13 March 2024, leaving only approval by the Council and publication in the Official Journal before it becomes law.
The EU AI Act
The AI Act will enter into force 20 days after it is published. Organisations will have two years to prepare to comply before most provisions that are likely to be applicable to gaming become enforceable. However, the rules applicable to prohibited AI systems will become enforceable six months after the AI Act formally enters into force.
The AI Act will apply to any system designed to ‘operate with varying levels of autonomy’ which ‘infers… how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’. As the AI Act expressly brings impact on ‘virtual environments’ within scope, gaming applications could well be covered.
The AI Act categorises AI systems into four risk types: unacceptable risk (prohibited), high risk, limited risk and minimal risk.
Most obligations under the AI Act fall on ‘providers’ and ‘importers’ (those developing the models or bringing them into the EU), as they are generally best placed to build in safeguards. Some obligations fall on ‘deployers’, i.e. those rolling out the AI system.
Under the AI Act, certain AI use cases are considered to pose an unacceptable level of risk to EU citizens and are, therefore, prohibited.
Those most relevant to gaming include AI systems that:
- deploy subliminal, purposefully manipulative or deceptive techniques that materially distort a person's behaviour; or
- exploit the vulnerabilities of a person due to their age, disability or social or economic situation, with the effect of materially distorting their behaviour.
In both cases, the prohibition only applies where this causes or is likely to cause significant harm.
This would apply most to player-facing elements of AI in video games, such as NPCs (i.e. non-player characters, such as bots) able to communicate with a player. Those implementing AI systems in their games will therefore need measures to verify who their audiences are, and strict controls over what their AI system is allowed to do and say. A sketch of such a control follows.
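By way of illustration only, such a control might resemble the following sketch: a filter sitting between a generative dialogue model and the player, blocking and logging lines that touch prohibited topics. The topic list, function name and refusal line are assumptions for the example, not a recommended rule set.

```python
# Illustrative sketch of a guardrail between an AI dialogue model and
# the player. The blocklist below is a placeholder, not legal guidance.
import logging

BLOCKED_TOPICS = ("self-harm", "gambling", "real-money purchases")

def guarded_npc_reply(raw_model_output: str) -> str:
    """Show the model's line only if it passes the content check."""
    lowered = raw_model_output.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        logging.warning("NPC line blocked: %r", raw_model_output)  # audit trail
        return "I'd rather not talk about that, traveller."
    return raw_model_output
```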
Psychological manipulation or coercion
Relevant gaming use cases include the use of non-fungible tokens (NFTs) in video games to inject an artificial sense of scarcity into digital worlds for the benefit of an investor class, to the clear detriment of the gamer.
Are businesses always aware of the involvement of AI in their operations?
Games developers and investors may not be aware of all the AI in their systems. In addition to the games themselves, developers may use AI-enabled tools as part of the development process that are procured from external vendors. Auditing systems to identify those which use any of the methods defined as AI by the EU AI Act is important for good governance of these systems. Such audits require a mix of technical, legal and regulatory, and commercial skills. Compared with traditional IT systems, there are many new challenges in assessing the risk of using AI, for example assessing whether systems operate without bias or discrimination, and whether they can explain their decisions. AI audits are ultimately about surfacing AI risk, particularly, though not exclusively, in the areas covered by the AI Act.
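As an illustration of the audit point above, an organisation might keep its AI inventory in a simple machine-readable form so that each system's supplier and provisional risk category can be reviewed. The record structure and labels below are assumptions for the sketch, not terminology prescribed by the AI Act.

```python
# Illustrative sketch of an AI system inventory record for audit purposes.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str                        # e.g. "NPC dialogue engine"
    supplier: str                    # internal team or external vendor
    purpose: str                     # what the system does in the game
    risk_category: str               # "prohibited" | "high" | "limited" | "minimal"
    uses_emotion_recognition: bool   # flags a potentially high-risk feature

inventory = [
    AISystemRecord("crowd generator", "in-house", "procedural NPCs",
                   "minimal", uses_emotion_recognition=False),
]
```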
Under the AI Act, ‘high risk’ AI systems are:
- AI systems used as safety components of products (or which are themselves products) covered by the EU product safety legislation listed in Annex I; and
- AI systems deployed in the use cases listed in Annex III.
Annex III of the AI Act covers use cases in areas such as access to and enjoyment of essential private and public services, and employment, which would not generally apply to gaming (outside employment use cases). However, AI systems using emotion recognition are classified as high risk.
‘Limited risk’ is a label applied by the AI Act to AI systems caught by certain transparency obligations. These transparency requirements apply to systems designed to interact directly with people (e.g. chatbots), AI systems generating synthetic audio, image, video or text content, and emotion recognition and biometric categorisation systems, generally regardless of the risks arising from the systems. The most relevant to gaming are likely to be the following obligations (illustrated in the sketch after this list):
- informing people that they are interacting with an AI system, unless this is obvious from the context (relevant to AI-driven chatbots and NPCs); and
- marking AI-generated audio, image, video or text content as artificially generated or manipulated, in a machine-readable format.
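A minimal sketch of how a developer might approach these two obligations follows; the disclosure wording and metadata fields are illustrative assumptions, and a real implementation would need to satisfy the AI Act's detailed technical requirements.

```python
# Illustrative sketch only: disclosing an AI chatbot and attaching a
# machine-readable marker to generated content. Field names are assumptions.
import json

def disclose_ai_chat(first_message: str) -> str:
    """Prefix an AI character's opening line with a disclosure."""
    return "[You are chatting with an AI character] " + first_message

def mark_synthetic(content: str) -> str:
    """Wrap AI-generated content with a machine-readable label."""
    return json.dumps({"ai_generated": True, "content": content})
```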
‘Minimal risk’ covers most applications of AI under the AI Act, e.g. AI used for product or content recommendation, inventory management systems and spam filters. AI systems in this category can be developed and used subject to existing legislation, without additional legal obligations. Providers of those systems may choose to apply the requirements for trustworthy AI and adhere to voluntary codes of conduct.
‘General purpose’ AI models under the AI Act are those that show significant generality and are competent to perform a wide range of tasks. Such models are often deployed within downstream AI systems to produce sophisticated, tailored output, e.g. chatbots in games.
Providers of general purpose AI models have a range of obligations, including drawing up and maintaining technical documentation (covering, among other things, training and testing processes and an evaluation of the model's energy consumption), putting in place a policy to respect EU copyright law, and publishing a summary of the content used to train the model.
Such obligations may be relevant in gaming in some contexts, as a ‘provider’ is anyone who develops a model and places it on the market or puts it into service under its own name or trademark. In some scenarios, customising (or ‘fine tuning’) another provider's model can bring an organisation within scope of the provider obligations.
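Purely as an illustration, a provider could track these obligations internally in a structured record like the sketch below; every field name is an assumption made for the example rather than AI Act terminology.

```python
# Illustrative sketch of an internal compliance record for one
# general purpose AI model. All fields are assumptions for the example.
from dataclasses import dataclass, field

@dataclass
class ModelComplianceRecord:
    model_name: str
    technical_docs_uri: str          # where the technical documentation lives
    training_data_summary_uri: str   # published summary of training content
    copyright_policy_uri: str        # policy for respecting EU copyright law
    energy_kwh_training: float       # evaluated training energy consumption
    test_reports: list = field(default_factory=list)  # evaluation results
```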
More stringent obligations apply to general purpose AI models considered to pose systemic risk, defined by reference to ‘high impact capabilities’ evaluated on the basis of appropriate technical tools and methodologies, or designated as such by a decision of the Commission.
There is currently no AI-specific legislation in the UK. The UK government recently confirmed its proposed approach of governing AI through existing law, setting out cross-sectoral principles to be interpreted and applied by existing regulators.