Getty Images v Stability AI – the UK judgment in practical terms
On 4 November 2025, the High Court handed down Getty Images v Stability AI [2025] EWHC 2863 (Ch). It didn’t answer the headline question – whether UK law requires permission to use copyrighted works for model training – because Getty’s primary UK “training” claims were abandoned once it became clear the relevant training wasn’t shown to occur in the UK. Copyright is territorial: no UK act, no UK copyright claim.
Getty’s back-up theory also failed. The Court rejected the idea that Stable Diffusion is itself an “infringing copy” under ss.22–23 CDPA. In plain English: if the model doesn’t store or reproduce the underlying works, simply having or supplying the model in the UK isn’t secondary infringement.
Trademarks were the only real win for Getty, and even then, the infringement was narrow and historic. The Court found infringement where older models produced images with watermark artefacts (e.g., the GETTY logo). For newer models, there wasn’t evidence that UK users were responsible for those infringements, so those claims failed. The s.10(3) “reputation” route went nowhere, and passing off added nothing useful.
What it means in practice
For developers (model builders and platforms):
- Primary risk sits in outputs. If your system emits logos/watermarks, expect trademark pain.
- Data provenance must be designed into the product. Keep a source registry and licence ledger for any commercial training/fine-tuning, and tag checkpoints with data lineage (a minimal sketch follows this list).
- Be audit-ready. Log prompts/outputs, model/version and filters applied. Ship model cards and release notes showing brand-artefact testing.
- Design for policy drift. Build switches for opt-outs and fast migration to licensed corpora. Tomorrow’s rules should be a config change, not a rewrite.
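Purely as an illustration of what a source registry entry and checkpoint lineage tag might look like, here is a minimal sketch in Python. The field names, file format and function names (SourceRecord, tag_checkpoint, the ".lineage.json" sidecar) are assumptions for the example, not a prescribed or standard schema.

```python
# Illustrative only: one licence-ledger record per data source, plus a
# lineage "sidecar" file written alongside a model checkpoint.
from dataclasses import dataclass, asdict
from datetime import date
import hashlib
import json

@dataclass
class SourceRecord:
    """One line in a source registry / licence ledger (illustrative fields)."""
    source_url: str        # where the material came from
    licence: str           # e.g. an open licence ID or a commercial licence reference
    licence_evidence: str  # pointer to the agreement or terms snapshot on file
    jurisdiction: str      # where collection/processing took place (territoriality)
    date_ingested: str     # ISO date the data entered the training pipeline

def tag_checkpoint(checkpoint_name: str, sources: list) -> str:
    """Write a JSON lineage sidecar for a checkpoint and return a content hash."""
    lineage = {"checkpoint": checkpoint_name,
               "sources": [asdict(s) for s in sources]}
    payload = json.dumps(lineage, indent=2, sort_keys=True)
    with open(checkpoint_name + ".lineage.json", "w") as f:
        f.write(payload)
    # The hash gives a tamper-evident fingerprint to quote in release notes.
    return hashlib.sha256(payload.encode()).hexdigest()

if __name__ == "__main__":
    ledger = [SourceRecord(
        source_url="https://example.com/licensed-dataset",  # hypothetical source
        licence="commercial licence (reference on file)",
        licence_evidence="contracts/dataset-licence.pdf",
        jurisdiction="US",
        date_ingested=str(date(2025, 1, 15)),
    )]
    print(tag_checkpoint("finetune-v2.ckpt", ledger))
```

The point is less the code than the discipline: every checkpoint you ship should be able to answer "what went in, under what licence, and where" without an archaeology exercise.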
For deployers (brands, agencies and enterprises using AI tools):
- Focus on what the public sees. Your immediate exposure is messy outputs (logos, watermarks, confusing lookalikes). Keep a human review gate for public assets.
- Buy on evidence, not hype. Choose vendors who can demonstrate where training took place, what was included, and how brand safety is tested. Ask for logs, model cards and a provenance ledger.
- Contract for hygiene. Require watermark/logo suppression, prompt/output logging, speedy takedown, disclosure of training geography, and warranties of lawful access with sensible indemnities.
- Document the why. Keep a DPIA and a short “training map” on file, together with proof of licences/lawful access for any fine-tuning you commission.
- Rehearse the fix. If something slips through, be able to pull the asset, trace the prompt/version, notify stakeholders, and ship a clean replacement the same day (see the logging sketch after this list).
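To make the “trace the prompt/version” point concrete, here is a minimal sketch of the kind of prompt/output audit log a deployer might ask a vendor for, and a lookup that traces a published asset back to the prompt and model version behind it. The CSV format, column names and function names are assumptions for illustration, not a vendor standard.

```python
# Illustrative only: append-only generation log plus an asset lookup,
# so an offending output can be pulled, traced and replaced quickly.
import csv
import os
from datetime import datetime, timezone

LOG_PATH = "generation_log.csv"
FIELDS = ["timestamp", "model_version", "prompt", "asset_id", "filters_applied"]

def log_generation(model_version: str, prompt: str, asset_id: str,
                   filters_applied: list) -> None:
    """Append one generation event to the audit log."""
    new_file = not os.path.exists(LOG_PATH)
    with open(LOG_PATH, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "prompt": prompt,
            "asset_id": asset_id,
            "filters_applied": ";".join(filters_applied),
        })

def trace_asset(asset_id: str):
    """Find the log entry for a given asset so it can be pulled and replaced."""
    with open(LOG_PATH, newline="") as f:
        for row in csv.DictReader(f):
            if row["asset_id"] == asset_id:
                return row
    return None

if __name__ == "__main__":
    log_generation("image-model-v3", "poster of a city skyline at dusk",
                   "asset-0001", ["watermark_check", "logo_check"])
    print(trace_asset("asset-0001"))
```

However the log is implemented, what matters contractually is that it exists, that you can get at it quickly, and that it links each public asset to a prompt, a model version and the filters that were applied.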
In the UK, immediate AI risk clusters around three things: outputs (avoid logos/watermarks and lookalike branding), territoriality (where training actually occurred), and provenance (be able to show lawful access/licences). The core “permission to train” question remains undecided in UK law, and UK GDPR still applies whenever personal data is involved.
Contact Us
For professional assistance in implementing these controls and contractual protections, feel free to contact the Creative, Digital and Marketing team.
