At the IAPP Global Summit, there was broad acknowledgement that cross-regulator co-ordination in the management of privacy and AI, at both local and global levels, is needed and is increasing.
Regulators from the US, UK and EU demonstrated further alignment on the need for cross-sector and cross-jurisdictional collaboration in the regulation of privacy and AI. That need is becoming more urgent as AI becomes more prevalent and embedded in all aspects of our lives, and remains inherently borderless.
At Norton Rose Fulbright, we have observed that disjointed regulatory regimes are particularly problematic for clients when supply chains for products and services are international.
In the age of connected devices, online shopping from anywhere, cloud services and AI, supply chains are almost inevitably cross-border, and data sharing can be essential for products or services to work or to be supplied. Greater alignment across regulatory regimes will assist in privacy protection and support consumer confidence in the efficiencies and quality that international supply chains can bring.
Comprising the state Attorneys General of California, Colorado, Connecticut, Delaware, Indiana, New Jersey and Oregon, this consortium of US privacy regulators was created to formalise coordination and the sharing of information and resources in investigations and enforcement efforts.
While the fragmentation of regulations across the United States is likely to continue (despite the Trump Administration’s commitment to de-regulation), regulators in the United States are increasingly collaborating on shared priorities to offer a more robust approach to enforcement.
On 12 July 2024, the EU published the world’s first binding and comprehensive AI-specific law in its Regulation (EU) 2024/1689 (AI Act).
Unlike the EU’s privacy legislation (the General Data Protection Regulation), the EU’s AI Act was never intended to be the ‘gold standard’. It is instead intended to have minimal application and is based on EU product safety laws. Indeed, most deployers of AI will not be covered. However, there is overlap between ‘providers’ of AI systems using general-purpose AI models and ‘deployers’ using AI systems in business – this will be an area of uncertainty and contest when it comes to enforcement.
The definition of ‘AI system’ under the AI Act has been adopted from the (arguably now outdated) OECD definition. It is broad and omits any reference to generative AI. Under Article 3, an ‘AI system’ means:
‘A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.’
However, the AI Act takes a risk-based approach to AI regulation, distinguishing the applicable requirements according to a system’s potential to cause harm and the severity of that harm. Each risk-based category of AI system is subject to different requirements. For instance, ‘high-risk’ AI systems are subject to stricter rules due to their potential to harm health, safety, fundamental rights, democracy and the rule of law. Given the hefty fines associated with non-compliance, organisations will need a clear understanding of their obligations under the AI Act. This will require an assessment of, amongst other things, whether their tools would be considered an AI system, the category of risk associated with their systems, and the applicability of other laws intertwined with the AI Act.
The artefacts required to demonstrate compliance are undefined, making conformity assessments a further area of uncertainty. The ‘black box’ problem, and the associated difficulty of explaining how an AI system produces its outputs, is an area of significant concern.
The requirement in Article 4 for staff of both providers and deployers to have ‘AI literacy’ is a further area of uncertainty, as it is unclear what standard will be, or perhaps should be, applied to evidence sufficient literacy.
The AI regulatory landscape continues to expand as we observe the developing approach of regulators, see the remaining parts of the EU’s AI Act come into effect, and await the establishment of harmonised standards. These developments will be a key consideration for nations as they begin to roll out their own legislation.
Do we need to establish a ‘proof of human’ test to protect human-machine interactions, but not machine-machine interactions?
ChatGPT creator Sam Altman spoke at the IAPP Global Summit about the need to identify when humans are interacting with bots, in order to preserve the dignity of human-AI interactions.
Further, there appears to be a growing trend, especially among young people, of using generative AI chatbots for deeply personal conversations akin to those they might have with therapists, doctors or lawyers. In response, Altman shared his thinking on a new form of confidentiality or privacy that protects identity when those interactions are more intimate – like the sorts of interactions protected by equitable duties of confidentiality in certain human-to-human communications and by the fiduciary duties between doctor and patient or lawyer and client.
He suggested a concept like ‘AI privilege’ might be appropriate, and did not seem to think ‘privacy’ was the right fit, as ‘digital de-identification of humans’ could be enough. The absence of such a privilege was considered a barrier to user adoption.
However, when a doctor or lawyer breaches their duties, there are consequences; Altman did not suggest that a remedy should be available to humans when AI breaches our confidence or privacy. If such a relationship is recognised at law, it seems appropriate for it to be user-centric. Legal development with users in mind may build greater public trust in rapidly advancing technology and reduce users’ tendency to self-censor, ultimately benefitting both the user and the training of the AI model.
Whether or not ‘AI privilege’, or a concept of ‘AI communication confidence’, is recognised in the future, a uniform position and framework adopted across the globe would avoid the creation and potential abuse of ‘privilege havens’. In the meantime, it remains best practice to avoid inputting anything into an AI chatbot that you would not want out in the open.