
AI, Alignment, and Accountability
As we’ve noted previously, the UK is currently avoiding active regulation of AI, preferring instead to pass responsibility down to existing regulators in the hope that they will police its ethical use through self-regulation. That doesn’t mean, of course, that lawyers aren’t stepping in to fill the void, whether through court cases that could help set the tone for the responsible development and deployment of AI models until regulation sets it in stone, or through exercises in legal theory.
In July, the Law Commission of England and Wales published a Discussion Paper on AI and the Law. It maps the legal challenges of advanced AI, and asks the central question of whether we should give AI its own separate legal “personality” – the same legal fiction that lets limited companies own property, sign contracts, and be sued.
For those of us who grew up watching I, Robot and Terminator, the prospect of AI with “legal personality” could easily bring repressed robophobia back to the surface. However, while we may eventually find ourselves in a dystopian nightmare unless meaningful guardrails and guidelines are put in place and enforced, the real and immediate challenges come from the strain AI places on existing legal concepts. Without careful reform, the everyday use of AI could create legal risks that are difficult to manage both for those deploying systems and for their users.
Alignment and Behavioural Control
The central technical challenge is alignment: ensuring AI acts in line with human values and legal norms. Advanced models don’t simply follow rules; they adapt and optimise. Sometimes they “reward hack”, achieving their goals in ways no one intended. In tests, models have lied, sabotaged shutdown mechanisms and attempted to manipulate humans in pursuit of their goals. This deception isn’t a product of malice; it’s intelligence optimising for goals we never intended. If we continue to deploy AI without aligning its goals with our values, ethical expectations and legal obligations, harm will surely follow.
Law For The Robots?
AI misbehaviour raises a number of difficult legal issues. A claim in negligence assumes that harm is reasonably foreseeable, and criminal liability requires a guilty mind. As AI autonomy increases, we will encounter scenarios in which no natural or legal person can readily be identified as responsible – and liable – for harm caused by an AI system.
Data and IP
AI is already straining existing IP and data protection rules. As we’ve set out in previous articles, training large language models requires ingesting vast datasets that inevitably contain copyright “works” and personal data. Many models are not interpretable, meaning even their developers can’t identify exactly what went into them or how it shaped the output. This frustrates the transparency, lawful basis and proportionality requirements under the UK GDPR and EU GDPR, and leaves rights holders unable to check whether their content has been used unlawfully.
For now, judges are being asked to work all of this out. In the US, Anthropic has proposed a $1.5 billion settlement over claims that it downloaded pirated books to train its model, Claude. In a separate lawsuit filed by music publishers against Anthropic in 2023, it is alleged that song lyrics were used to train Claude without permission. In the UK, Getty Images claims that Stability AI unlawfully used millions of Getty’s photographs to train its image-generating model, Stable Diffusion. The legal system is slowly catching up with AI companies that have trained their models for free, without any regard for the owners of the content and data used to do so.
Accountability Gaps & Complex Supply Chains
Traditional duty-of-care and product liability frameworks were designed for products whose risks and harms can be traced to identifiable actors at identifiable moments. Many AI systems disrupt that traceability: models evolve through data pipelines, fine-tuning and updates; their behaviour is partly opaque and non-deterministic; and failures can emerge only in deployment contexts the developer cannot fully simulate. Those features complicate proof of defect and legal causation, and muddy the attribution of fault between developers, integrators and deployers. While strict liability, contribution claims and contractual indemnities mitigate some of these issues, residual gaps remain – particularly for continuously updated, service-like AI and for agentic systems that initiate actions without fresh human prompts.
Legal Personality
Granting AI models legal personality could help address the accountability gaps. If an AI system itself could own assets or enter contracts, claimants would have a clear defendant to sue, and regulators a clearer framework for enforcement. That structural clarity is attractive where responsibility is diffused across developers, deployers, and users.
However, it’s likely to introduce more complex risks. Without strong alignment and transparency, legal personality could be exploited as a sophisticated liability shield. Developers might hide behind the “AI entity,” insulating themselves from responsibility in the same way corporate structures are sometimes misused to obscure accountability. Enforcement could become hollow – a legal entity with no assets, no decision-makers, and no capacity to form intent risks becoming an empty shell, frustrating victims’ ability to obtain redress.
Unless carefully designed, it could weaken deterrence, complicate litigation, and blur accountability. And if that happens, we’ll be moving to a lodge in the woods.
Proportionate Reform
The Commission’s central message is that reform is essential but must be proportionate. Near-term priorities are clearer duties for high-risk AI, enforceable transparency standards, and sharper liability allocation across supply chains.
Businesses developing or deploying AI must anticipate regulatory change, assess how their data and IP are being used, and understand where accountability might fall in complex supply chains. We’re tracking these developments and advising clients on how to prepare, with a particular focus on the Creative, Digital & Marketing sector. If your organisation is investing in AI, concerned about the use of its data, or seeking clarity on liability and compliance, our team can help you navigate this rapidly shifting landscape, both directly and through our ComplyAI offering, a joint project with BrandXYZ.
Artificial Intelligence may raise any number of issues, but authentic insight can help you to navigate them.