AI in Creative, Digital and Business Contracts
AI now sits at the heart of how modern businesses operate. It drafts documents, designs visuals, analyses data, forecasts sales, and writes marketing copy before you’ve finished your coffee. But for all its productivity gains, it also brings a new breed of contractual risk.
When an AI tool produces something infringing, biased, or just plain wrong, the familiar questions arrive: who owns it, who’s liable, and who pays the bill when it goes public?
Contracts drafted in the pre-AI era were never designed with prompts and models in mind, and consequently leave key areas unaddressed.
Know Who the “Author” Is
Under section 9(3) of the Copyright, Designs and Patents Act 1988, the “author” of a computer-generated work is the person by whom the arrangements necessary for its creation are undertaken. That may have seemed logical enough at the time, but it was drafted in an era before the typical home PC even shipped with simple games like Minesweeper.
In today’s landscape, “arrangements” can mean anything from prompting and training to integrating AI into workflows. That could involve internal teams, third-party developers, SaaS providers, or external partners. To stay ahead:
- Spell out who owns the AI outputs and who can adapt, license, or monetise them.
- Deal with moral rights early. Will they be waived, shared, or retained?
- Address joint ownership when several contributors shape the input or model.
- Read the fine print. Many AI platforms limit commercial use, require you to give credit, or update their licence terms frequently. If you miss those changes, what feels like harmless use today can turn into a copyright claim overnight.
- Consider including a clause dealing with “model drift” – the risk that, after the AI vendor retrains its model, outputs that were previously acceptable begin to resemble third-party intellectual property. The contract should specify who is responsible if that happens and what remedial steps must be taken.
For businesses co-developing AI systems or datasets with partners, Joint Development Agreements (JDAs) should clarify ownership, licensing, and commercialisation rights before any code or content is created.
Managing Brand and Reputational Risks
AI can supercharge performance or damage reputations. Whether it’s a chatbot with attitude, an AI that fabricates customer data, or an automated design tool that borrows too much “inspiration”, brand damage can be swift.
Regulators have made it clear that automation does not dilute accountability. If an AI system misleads consumers, publishes false information, or mishandles data, the business deploying it remains responsible.
Contracts should therefore:
- Require human review of AI outputs before publication or implementation (a minimal sketch of such a gate follows this list).
- Include brand-protection clauses and indemnities for reputational harm.
- Set out crisis-management obligations: who acts, who apologises, who fixes it.
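By way of illustration, here is a minimal Python sketch of what a “human review before publication” gate might look like in an automated content pipeline. Every name in it (`AIOutput`, `publish`, the reviewer field) is hypothetical rather than drawn from any particular platform, and a real control would sit alongside a documented editorial policy, not replace it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIOutput:
    text: str
    model: str                         # vendor model identifier (illustrative)
    approved_by: Optional[str] = None  # named human reviewer, if any

def publish(output: AIOutput) -> None:
    """Refuse to release AI-generated content without a recorded human
    sign-off: a simple contractual 'human review' gate."""
    if output.approved_by is None:
        raise PermissionError("AI output requires human review before publication")
    print(f"Publishing content approved by {output.approved_by}")

# Usage: publication fails until a named reviewer signs off.
draft = AIOutput(text="New product announcement...", model="vendor-model-v2")
draft.approved_by = "J. Smith, Marketing"
publish(draft)
```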
The Advertising Standards Authority (ASA) and CAP Code continue to apply across industries. Misleading AI-generated materials, from social media posts to automated product recommendations, can breach advertising and consumer protection law regardless of intent.
Transparency: The New Differentiator
AI-driven operations rely on vast datasets. With the UK GDPR and the Data (Use and Access) Act 2025 (DUAA) tightening rules on data sharing and AI transparency, businesses can’t simply point to the black box and say “the algorithm did it.”
Articles 13–15 and 22 of the UK GDPR continue to apply, requiring transparency whenever personal data is used in automated decision-making. The ICO’s 2025 AI Guidance emphasises three core principles: document your logic, minimise the personal data you use, and keep a human in the loop.
Contracts should require that:
- AI vendors comply with current ICO AI Guidance.
- Data minimisation, pseudonymisation, and deletion protocols are implemented (see the sketch after this list).
- Special category data is excluded from AI profiling unless its use is strictly necessary and properly justified.
- Humans review all automated outputs with material impact.
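To make the minimisation point concrete, the sketch below shows one way personal identifiers might be pseudonymised before a prompt leaves the business. The regular expression and the `pseudonymise` helper are simplified assumptions for illustration; they are not a complete or legally sufficient control.

```python
import hashlib
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymise(text: str, salt: str) -> str:
    """Swap email addresses for salted hashes before the text is sent to
    an external AI service: the vendor never sees the raw identifier,
    but records stay linkable internally via the salt."""
    def replace(match: re.Match) -> str:
        digest = hashlib.sha256((salt + match.group(0)).encode()).hexdigest()
        return f"<user:{digest[:12]}>"
    return EMAIL.sub(replace, text)

# Usage: only the pseudonymised text leaves the business.
prompt = "Summarise the complaint from jane.doe@example.com about delivery."
print(pseudonymise(prompt, salt="internal-secret"))
```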
For businesses deploying or using AI, transparency isn’t about exposing every parameter – it’s about being able to show, on paper and in practice, how your systems make decisions. If you can map where AI sits in your workflows, explain when it uses personal data, and back that up with the right contractual and technical controls, transparency becomes a source of competitive advantage rather than a compliance burden.
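One practical way to evidence that mapping is a simple AI system register. The entry below is a hypothetical sketch of the kind of fields worth recording; it is not an ICO-prescribed format, and the vendor name and values are invented.

```python
# A hypothetical AI system register entry; the fields are illustrative,
# not a prescribed regulatory format.
ai_register_entry = {
    "system": "Marketing copy generator",
    "vendor": "Example AI Ltd",          # assumed vendor name
    "model_version": "v3.2",
    "uses_personal_data": True,
    "lawful_basis": "legitimate interests",
    "human_review_required": True,
    "last_contract_review": "2025-06-01",
}
```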
Future-Proofing AI Contracts
The biggest contracting mistake is treating the agreement as static. AI evolves monthly, not annually, and your contracts should keep pace.
Build in:

- Annual AI risk and compliance reviews.
- Automatic reassessment when vendors retrain or replace models (a monitoring sketch follows this list).
- Internal governance connecting legal, technical, and data teams.
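As a rough sketch of how that reassessment trigger could be operationalised, the code below records the vendor’s model identifier and flags a review whenever it changes. The `get_current_model_id` function is a stand-in for whatever version information your vendor actually exposes (API metadata, release notes, or contractual notice); it is an assumption, not a real API.

```python
import json
from pathlib import Path

STATE_FILE = Path("last_known_model.json")

def get_current_model_id() -> str:
    # Placeholder: in practice, read the model/version identifier your
    # vendor actually publishes.
    return "vendor-model-v3.2"

def check_for_model_change() -> None:
    """Compare the vendor's current model identifier with the last one we
    reviewed; a mismatch should trigger the contractual reassessment."""
    current = get_current_model_id()
    if STATE_FILE.exists():
        last = json.loads(STATE_FILE.read_text())["model_id"]
        if current != last:
            print(f"Model changed ({last} -> {current}): trigger legal review")
    STATE_FILE.write_text(json.dumps({"model_id": current}))

check_for_model_change()
```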
You wouldn’t let a financial forecast go unreviewed for a year, so don’t let your AI contracts gather dust.
The CMA’s 2024 Foundation Models Report also urges companies to monitor supplier transparency and competition risks in AI procurement. It’s another reason to keep clauses flexible and forward-looking.
The Bottom Line
AI has redrawn the commercial landscape. To keep innovation lawful and profitable, contracts need to catch up. The savviest businesses aren’t just using AI; they’re contracting for it – allocating ownership, defining liability, protecting data, and planning exits before the system misbehaves.
At Glaisyers ETL, our Creative, Digital & Marketing team helps businesses across sectors to future-proof their contracts and manage AI-driven risk. We help ensure your innovation stays bold, compliant, and commercially sound.
In the age of AI, the smartest move is still a well-drafted clause.