Loss of confidentiality in the information used as the Prompt for a generative AI system
Global | Publications | July 2024
A concern relating to the use of public deployments of generative AI systems is that the Prompts that Users enter into the system can be reused by the Provider or Developer without restriction. As a result, control over any confidential information entered into the system is likely to be lost, and the information may cease to be confidential.
The position is analogous to that of queries entered into a publicly available internet search engine, or text entered into a publicly available language translation website. Just as organizations should have policies and training programs controlling their staff’s use of confidential information on such websites, so too should those policies and training programs extend to the use of public generative AI systems. For more information, see our blog, Everyone is using ChatGPT: What does my organisation need to watch out for?
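To illustrate how such a policy might be enforced in practice, the sketch below screens a prompt for confidential markers before it is allowed to leave the organization. This is a minimal illustration under stated assumptions: the pattern list and the “Project Falcon” codename are hypothetical examples, and a real deployment would typically rely on an enterprise data loss prevention (DLP) tool, with patterns agreed with legal and compliance teams, rather than a hand-rolled filter.

```python
import re

# Hypothetical markers of confidential content; a real policy would define
# these with legal and compliance input, most likely via a DLP platform.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"(?i)\bconfidential\b"),
    re.compile(r"(?i)\bprivileged\b"),
    re.compile(r"(?i)\bproject\s+falcon\b"),  # hypothetical internal codename
]

def is_safe_to_submit(prompt: str) -> bool:
    """Return True only if no confidential marker appears in the prompt."""
    return not any(p.search(prompt) for p in CONFIDENTIAL_PATTERNS)

if __name__ == "__main__":
    # Example: one routine prompt is allowed; one referencing confidential
    # material is blocked before it reaches any public generative AI system.
    for prompt in (
        "Summarise this public press release.",
        "Summarise this confidential client memo.",
    ):
        print(prompt, "->", "allowed" if is_safe_to_submit(prompt) else "blocked")
```

A gate of this kind only reduces, and does not eliminate, the risk: it cannot recognize confidential information that carries no obvious marker, which is why staff training remains the primary control.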
Reviewing supply agreement restrictions

Private or Enterprise deployments should contain terms in the relevant supply agreement that restrict the Provider from reusing or disclosing the Inputs (and from reusing or disclosing any of the Deployer’s data that is used to further train or fine-tune the model). Clearly, the exact scope of these restrictions needs careful review and consideration, as do any terms that exclude or limit the Provider’s liability for breach of those terms, and the governing law and jurisdiction of enforcement. In principle at least, the risks for an organization here are analogous to the risks of using a cloud service provider such as AWS, Microsoft Azure, Google Cloud Platform, IBM Cloud or Alibaba Cloud.