Artificial Intelligence – regulatory guidance on ‘agentic’ AI

With the meteoric rise of artificial intelligence in just about every area of business and industry, its dangers are being increasingly highlighted both within the tech world and beyond.

The UK is the third largest AI market in the world, behind the US and China. But there are risks which are not being adequately addressed. Attention is increasingly focused on the latest form of generative AI – “agentic AI” – which can be fully autonomous but carries heightened risks.

For instance, a new Treasury Select Committee report warns that the Bank of England, the FCA and the Treasury are exposing the public and the financial system to potentially serious harm through their current ‘wait-and-see’ positions on the use of AI in financial services. Separately, an Oxford University study has warned of the risks to the public of using AI chatbots for medical advice.

Businesses using, or planning to use, agentic AI should particularly note that the Competition and Markets Authority (CMA) has just published guidance on how businesses can use agentic AI while still complying with consumer protection law. It warns of the need to use agentic AI responsibly and within the law.

What is agentic AI?

Agentic AI is an advanced, sophisticated form of AI focused on autonomous decision-making and action with minimal human intervention (for example, an ‘intelligent’, ‘thinking’ chatbot).

In the business setting, agentic AI systems (often referred to as AI agents) are used to address customer queries, deal with refunds, recommend products and manage marketing campaigns.

Guidance

The guidance makes clear that when dealing with customers, the same rules apply whether you are using AI or human agents. This means taking into account a consumer’s rights under, for example, the Consumer Rights Act 2015, the Consumer Contracts (Information, Cancellation and Additional Charges) Regulations 2013 and the Digital Markets, Competition and Consumers Act 2024.

The CMA plainly addresses the risks and warns that a business is responsible for what an AI agent does in the same way it is responsible for what an employee does – even if the agentic AI is designed or provided on your behalf by a third party.

The risk is that if you break consumer protection law, enforcement action could result in a fine (up to 10% of your worldwide turnover) and potentially having to compensate your affected consumers - not to mention the reputational damage.

The CMA advises businesses who use agentic AI to:

· Avoid confusion and build trust by clearly telling customers if you use AI agents rather than humans.

· Train your AI agents to comply with consumer law. Testing is crucial to training AI agents and ensuring you are not breaking the law.

· Regularly monitor how your AI agents are performing – making sure a human is involved. The CMA points out that some AI models can misinterpret data and ‘hallucinate’ nonsensical or inaccurate results.

· Refine the AI agent quickly if a problem is identified, particularly if there is a risk of non-compliance. Prompt action is even more vital where an AI agent interacts with large numbers of people or vulnerable customers.

The guidance also includes practical examples, such as using AI agents to run marketing campaigns and respond to customer service queries.

If you would like us to cover an issue in the next NGM Tax Law Newsletter, we would be pleased to hear from you.