
Denmark Deepfake & Personality‑rights Law 2026: Compliance Checklist for Businesses Using Faces, Voices & AI

By Global Law Experts

Denmark’s 2026 reforms to its copyright and personality‑rights framework have made it one of the first countries in the world to grant individuals copyright‑style protection over their facial features, voice, and other distinguishing personal characteristics when those characteristics are reproduced in AI‑generated content. For any business that uses real people’s likenesses in advertising, trains machine‑learning models on image or audio datasets, or publishes AI‑generated content depicting identifiable individuals, Denmark’s deepfake‑law reforms demand immediate operational changes. This guide translates the legislation into a practitioner‑level compliance checklist covering contracts, consent workflows, AI training‑data governance and ad‑production pipelines, so that in‑house counsel, marketing teams, AI product owners and creative agencies can act now rather than wait for the first enforcement action.

Key compliance priorities at a glance:

  • Map every use. Audit where your organisation creates, stores or distributes content featuring identifiable faces, voices or bodies.
  • Update consents and contracts. Standard model releases drafted before 2026 are unlikely to cover the new copyright‑style rights; renegotiate or supplement them.
  • Govern your AI training data. Provenance logs, vendor warranties and exclusion‑list processes are no longer merely best practice; they are a legal necessity.

What Denmark’s 2026 Deepfake Law Changes: A Legal Summary

The reforms, which amend Denmark’s existing copyright framework and introduce complementary provisions on personality rights, create a new category of protection that industry observers have described as “copyright for likeness.” According to analysis published by the European Parliament Research Service (EPRS), Denmark’s approach is the most far‑reaching in the EU, granting natural persons an exclusive, enforceable right over the commercial and non‑commercial reproduction of their personal characteristics, including facial geometry, voice timbre, gait and other biometric markers, when that reproduction is generated or substantially assisted by artificial intelligence.

Nature of the right: copyright‑style protections and personality rights in Denmark

The reform does not fold likeness into traditional copyright (which protects original creative works). Instead, it creates a sui generis right, modelled on copyright‑style remedies, that attaches automatically to every natural person without registration. The practical effect is that the right‑holder, the person whose face, voice or body is reproduced, can authorise or refuse reproduction, distribution, public communication and adaptation of AI‑generated depictions of their personal characteristics, mirroring the bundle of exclusive rights available to authors under the Danish Copyright Act (Ophavsretsloven).

What is protected: faces, voices, bodies and biometric traits

Denmark has historically protected personality rights through general tort law and the Marketing Practices Act (Markedsføringsloven). The 2026 reforms significantly expand the scope by explicitly listing the following protected characteristics:

  • Facial appearance and geometry
  • Voice, including tonal quality and speech patterns
  • Physical appearance and body movements (gait, gestures)
  • Other biometric or physiological identifiers that render a person recognisable

Crucially, protection extends to synthetic reproductions, meaning a fully AI‑generated image or voice clone that is recognisably derived from a real individual falls within scope, even if it was never directly copied from a photograph or recording.

Timeline of key legislative dates

  • June 2025: The Danish government announces its proposal to grant copyright‑style likeness protections. Practical implication: businesses are put on notice of upcoming compliance obligations.
  • Late 2025: Parliamentary passage of the amending legislation. Practical implication: the final text of the reforms is confirmed; businesses should begin contract and workflow review.
  • Early 2026: The reforms enter into force. Practical implication: all covered uses of faces, voices and biometric characteristics must comply, and enforcement begins.

Who Is Affected by the Denmark Deepfake Law: Businesses, Platforms, Creators and Vendors

The scope of the reforms is intentionally broad. Any entity that creates, commissions, distributes or facilitates AI‑generated content reproducing a person’s protected characteristics can be liable. The following table maps the most common entity types to their primary obligations under Denmark’s 2026 framework.

  • Brand / Advertiser. Key obligations: ensure a lawful licence or consent before creating or distributing any likeness or voice; preserve proof of authorisation. Compliance steps: update talent releases; integrate legal sign‑off into every ad creative; remove content promptly on request.
  • AI model trainer / vendor. Key obligations: obtain licences for training data that includes identifiable likenesses; maintain provenance logs. Compliance steps: audit all datasets; add contract warranties and indemnities; implement exclusion lists for opted‑out individuals.
  • Platform / Publisher. Key obligations: take reasonable steps to remove infringing AI deepfakes on notice; cooperate with takedown orders. Compliance steps: implement a rapid takedown process and record‑keeping; update terms of service to reflect the new obligations.
  • Creative agency. Key obligations: verify that every likeness used in client work is properly licensed under the 2026 rules. Compliance steps: insert upstream representations from talent; carry professional indemnity insurance covering likeness claims.
  • HR / ID‑verification provider. Key obligations: biometric data used for verification may trigger the reforms if reproduced or stored as AI‑generated content. Compliance steps: review data‑processing agreements (DPAs); limit retention; align with GDPR and personality‑rights requirements.

Even businesses headquartered outside Denmark may be caught. Industry observers expect enforcement to follow a “targeting” test similar to GDPR: if your AI‑generated content depicts a person whose protected characteristics are recognisable and you target Danish consumers, the reforms are likely to apply regardless of where your servers or legal entity are located.

The 12‑Point Compliance Checklist: How to Comply with the Deepfake Law in Denmark

This operational checklist is the core of the compliance playbook. Each item addresses a discrete obligation or risk area. Businesses should work through the list systematically, assigning ownership to specific roles and tracking completion against deadlines.

  • 1. Inventory and mapping of uses. Catalogue every instance where your organisation creates, stores, distributes or commissions content featuring identifiable faces, voices or bodies. Include advertising assets, social‑media content, product imagery, internal training materials and AI model outputs. Without a complete map, compliance gaps are invisible.
  • 2. Data provenance and training logs. For every dataset used to train or fine‑tune AI models, document the source, licensing status, and whether it contains identifiable personal characteristics. Provenance logging is essential both for demonstrating compliance and for responding to individual access or deletion requests.
  • 3. Consent and licensing updates. Review every existing model release, talent agreement and stock‑image licence. Pre‑2026 consents rarely cover the new Danish copyright‑style likeness right explicitly. Where gaps exist, obtain supplemental consent or negotiate a new licence that expressly references the right to reproduce personal characteristics in AI‑generated content.
    Sample clause (template, review with counsel): “The Model grants the Licensee a non‑exclusive, worldwide licence to reproduce, adapt and distribute AI‑generated depictions of the Model’s facial appearance, voice and physical likeness for the purposes and duration specified in Schedule A, in compliance with Danish copyright and personality‑rights legislation as amended.”
  • 4. Model and vendor due diligence. If you procure AI‑generated content or pre‑trained models from third parties, your vendor contracts must include warranties that all training data is lawfully sourced and that the vendor will indemnify you against likeness‑rights claims. Request and verify the vendor’s provenance documentation before onboarding.
  • 5. Contract drafting, sample clauses. Insert personality‑rights compliance clauses into all new agreements involving content creation. At a minimum, cover: scope of licence (which characteristics, which media, which territories); duration and renewal; right to withdraw consent; indemnification; and data‑deletion obligations upon expiry.
    Sample vendor warranty (template, review with counsel): “The Vendor represents and warrants that all training data used in the deliverables has been obtained with the informed, specific consent of the data subjects concerned, or under a valid licence, and that no protected personal characteristic has been reproduced without authorisation.”
  • 6. Ad production workflows. Integrate a legal sign‑off gate into your creative production pipeline. No advertisement or campaign asset depicting an identifiable person, whether photographed, illustrated or AI‑generated, should be approved for distribution without confirmed documentation of a valid licence or consent under the 2026 rules.
  • 7. Notice and takedown processes. Establish a clear, documented procedure for receiving and responding to takedown requests from individuals whose likeness has been used without authorisation. Assign a responsible team, set response‑time targets (ideally within 48 hours of receipt), preserve evidence, and log every action taken.
  • 8. Privacy and DPA alignment. The personality‑rights reforms overlap with GDPR obligations. Ensure your data‑processing agreements address both regimes, particularly around biometric data, purpose limitation, and data‑subject rights. The European Data Protection Supervisor (EDPS) has emphasised that deepfake detection and personality‑rights enforcement should be integrated into existing data‑protection impact assessments.
  • 9. Retention and deletion. Define and enforce retention periods for AI‑generated content and the underlying datasets containing personal characteristics. When a licence expires or consent is withdrawn, you must be able to identify and delete all affected assets, including derivative outputs of AI models trained on the data.
  • 10. Employee and endorsement agreements. Update employment contracts, freelancer agreements and influencer partnership terms to address the commercial rights to likeness in Denmark. Employees and endorsers should be informed of how their personal characteristics may be used, the duration of use, and their right to object or withdraw consent.
    Sample employee clause (template, review with counsel): “The Employee consents to the Company’s use of the Employee’s photographic likeness and voice recordings in internal and external communications for the duration of employment and for [X] months thereafter, subject to the Employee’s right to withdraw consent on [notice period] written notice.”
  • 11. Risk scoring and approvals. Implement a traffic‑light risk‑scoring system for every project that involves AI‑generated likenesses. Low‑risk uses (fully synthetic, non‑identifiable faces) may proceed with minimal review; high‑risk uses (recognisable voice clones, celebrity likenesses, deepfake video) should require senior legal approval and board‑level sign‑off where appropriate.
  • 12. Insurance and incident response. Review your professional indemnity and media‑liability insurance policies to confirm they cover claims arising under the 2026 reforms. Prepare an incident‑response plan for scenarios where infringing content is discovered or a takedown demand is received, including escalation paths, external counsel contacts and forensic evidence preservation procedures.

AI Training and Datasets: Consent, Exceptions and Cross‑Border Issues

The question of what consent Denmark now requires for AI training is among the most commercially significant aspects of the reforms. Businesses that train generative models on image, video or audio datasets must evaluate whether those datasets contain identifiable personal characteristics and, if they do, whether they hold a valid licence or consent to reproduce them.

Consent vs licensing: model releases and data agreements

A traditional photographic model release typically grants the licensee permission to reproduce a specific image. Under the 2026 reforms, however, the act of training an AI model on that image, and subsequently generating new synthetic depictions, constitutes a separate reproduction of the person’s protected characteristics. This means a standard release may be insufficient. Businesses should seek an express, forward‑looking licence covering AI training, synthetic generation and distribution, with clear territorial and temporal scope.

De‑identification and technical controls

One compliance strategy is to de‑identify training data before ingestion, stripping or blurring faces, pitch‑shifting voices, and removing metadata that could link data points to identifiable individuals. While de‑identification can reduce risk, it does not eliminate it: if the resulting AI output is still capable of generating content that is recognisably derived from a specific person, the personality‑rights obligation may still apply. Technical controls should therefore be combined with legal safeguards, not treated as a substitute.

Cross‑border training and enforcement: jurisdictional risk

Many AI training pipelines operate across borders. A dataset assembled in the United States, processed in Ireland and used to generate content distributed in Denmark can trigger the Danish reforms if the output reproduces the protected characteristics of an identifiable person. The European Parliament’s EPRS analysis notes that Denmark’s approach could serve as a model for future EU‑wide harmonisation, increasing the likelihood that similar obligations will apply across the single market. Businesses should plan for the strictest applicable standard and build compliance processes that travel with the data.

Advertising, Marketing and Influencer Content: Using Faces and Voices in Denmark

For marketing teams, the reforms transform how campaigns are produced. Using faces in ads in Denmark, whether through photography, AI‑assisted retouching or fully synthetic generation, now requires documented authorisation that covers the specific personality‑rights obligations introduced by the 2026 reforms.

When a standard model release suffices

A pre‑2026 model release is likely sufficient only where the content involves a traditional photograph used in its original form, with no AI‑generated adaptation or synthetic extension. The moment an AI tool is used to alter, animate or extend the depiction, even subtly, such as generating additional poses from a single photograph, the enhanced personality‑rights protections are engaged and supplemental authorisation is needed.

Voice cloning and endorsement rules

Whether voice cloning is legal in Denmark depends entirely on authorisation. Cloning a person’s voice to generate synthetic speech for a commercial without their express consent is now actionable under the reforms. Endorsement agreements should specify whether the endorser’s voice may be cloned, the permitted uses, and the endorser’s right to approve or reject individual outputs. This is especially relevant for podcast advertising, automated customer‑service agents and interactive media that use voice synthesis.

Ad agency pre‑publish approval flow

A three‑step approval process is recommended for every campaign asset:

  1. Asset audit. The creative team confirms whether the asset depicts an identifiable person (face, voice or body) and flags the AI tools used in production.
  2. Legal clearance. The legal or compliance team verifies that a valid licence or consent exists, covers the intended use and territory, and is documented in the project file.
  3. Distribution approval. A senior authoriser (marketing director or legal counsel) signs off before the asset enters the distribution pipeline, confirming all personality‑rights and data‑protection requirements have been met.

Liability, Enforcement and Remedies Under Denmark’s Deepfake Law

Understanding the enforcement landscape is essential for calibrating risk. Denmark’s deepfake‑law reforms provide a layered enforcement structure combining private civil remedies with administrative sanctions and, in serious cases, criminal liability under existing penal provisions.

  • Civil remedies. Mechanism: injunctions, takedown orders, damages (compensatory and in some cases punitive), account of profits. Practical exposure: private litigation initiated by the rights‑holder; costs and reputational risk can be significant.
  • Administrative sanctions. Mechanism: regulatory orders via the Danish Patent and Trademark Office (DKPTO) or the relevant authority; potential fines for non‑compliance with takedown orders. Practical exposure: administrative fines and orders; platforms face escalating penalties for repeated non‑compliance.
  • Criminal liability. Mechanism: existing penal provisions on identity fraud, harassment and image‑based abuse may apply where deepfakes are used maliciously. Practical exposure: individual criminal prosecution; applies alongside civil and administrative routes.

Platforms and publishers bear particular exposure. If a platform receives a valid takedown notice and fails to act within a reasonable time, early indications suggest that Danish authorities are prepared to treat continued hosting as a separate infringement. Businesses should preserve forensic evidence (timestamps, metadata, access logs) from the moment a complaint is received, both to demonstrate good‑faith compliance and to defend against claims of wilful infringement.

Practical Templates and Wording: Sample Clauses

The following templates are provided as starting points. Each must be reviewed and adapted by qualified legal counsel before use in any binding agreement.

  • Model release, face and voice (template, review with counsel). “The Model hereby grants the Licensee a [non‑exclusive / exclusive] licence to create, reproduce and distribute AI‑generated content depicting the Model’s facial appearance and voice for the purposes described in Schedule A, in all media now known or hereafter devised, for a period of [duration]. The Model retains the right to withdraw consent on [notice period] written notice.”
  • AI training DPA addendum (template, review with counsel). “The Processor shall not use any personal data containing protected personal characteristics (as defined in the Danish Copyright Act, as amended) for the training, fine‑tuning or evaluation of AI models without the prior written consent of the Controller and, where required, the informed consent of the data subject.”
  • Vendor audit clause (template, review with counsel). “The Vendor shall, upon reasonable notice, provide the Client with access to records demonstrating the provenance and licensing status of all training data containing identifiable personal characteristics used in the deliverables.”
  • Takedown notice template (template, review with counsel). “I, [name], am the rights‑holder of the personal characteristics depicted in [description of content / URL]. This content has been created and/or distributed without my consent. I request its immediate removal pursuant to the Danish copyright and personality‑rights framework, as amended. [Contact details, date, signature.]”
  • Indemnity clause (template, review with counsel). “The Vendor shall indemnify and hold harmless the Client against all claims, losses and expenses arising from the infringement of any person’s personality rights or copyright‑style likeness protections under Danish law in connection with the deliverables.”
  • Employee image assignment (template, review with counsel). “For the duration of this employment and for [X] months thereafter, the Employee consents to the Employer’s use of photographic and AI‑generated depictions of the Employee’s likeness in corporate communications, subject to the Employee’s right to withdraw consent by written notice.”
  • Influencer endorsement rider (template, review with counsel). “The Influencer consents to the Brand’s creation of AI‑generated adaptations of the Influencer’s face and voice for use in [specified campaign]. The Brand shall obtain the Influencer’s prior written approval before publishing any individual AI‑generated output.”
  • Dataset exclusion‑list clause (template, review with counsel). “The Vendor shall maintain and enforce an exclusion list of individuals who have withdrawn consent or objected to the use of their personal characteristics in AI training, and shall certify quarterly that no excluded data has been used.”

Risk Matrix and Decision Flow for Marketing and AI Teams

The following risk matrix helps teams quickly assess the compliance posture of common use cases involving AI‑generated content under Denmark’s deepfake‑law reforms.

  • Fully synthetic, non‑identifiable faces generated from noise. Risk level: low. Recommended control: standard production sign‑off; document that the output is non‑identifiable; retain model provenance logs.
  • AI‑enhanced photographs (retouching, background extension) of consented models. Risk level: medium. Recommended control: verify that the existing release covers AI adaptation; obtain supplemental consent if not; legal review.
  • Voice cloning of a known individual for commercial content. Risk level: high. Recommended control: obtain express written consent covering cloning, specific outputs and distribution channels; senior legal approval required.
  • Training a generative model on a dataset containing identifiable faces/voices. Risk level: high. Recommended control: full dataset audit; obtain licences or consents; implement an exclusion‑list process; vendor due diligence; legal sign‑off.
  • Distributing AI‑generated video depicting a recognisable public figure. Risk level: very high. Recommended control: do not proceed without explicit, documented consent from the individual; board‑level approval; specialist legal advice essential.

Decision flow (simplified): (1) Does the content depict an identifiable person? If no → proceed with standard review. If yes → (2) Do you hold a valid, 2026‑compliant licence or consent? If yes → (3) Does the intended use fall within the scope of that consent? If yes → proceed with legal sign‑off. At any point where the answer is “no” or “uncertain,” escalate to legal counsel before proceeding.

Next Steps for Businesses: Governance, Training and Legal Review

Compliance is not a one‑time exercise. The following 30/60/90‑day roadmap translates the checklist into an implementation plan.

  • Days 1–30: Assess and assign. Complete the inventory of all uses of personal characteristics across the organisation. Assign a cross‑functional compliance owner (legal, marketing, data engineering). Brief senior leadership on the scope of the reforms and the business’s current risk exposure.
  • Days 31–60: Remediate and draft. Update all model releases, vendor contracts and employee agreements to reflect the 2026 requirements. Implement provenance logging for AI training datasets. Establish the takedown‑response workflow and designate a responsible team.
  • Days 61–90: Test, train and insure. Conduct a tabletop exercise simulating a takedown request and an enforcement complaint. Train all relevant staff (marketing, creative, data science, legal) on the new workflows. Review insurance coverage and engage external counsel for ongoing advisory support.

Industry observers expect Danish enforcement to intensify through the remainder of 2026 as rights‑holders and advocacy groups test the new framework. Early compliance is not merely a legal obligation; it is a competitive advantage. Businesses that can demonstrate robust personality‑rights governance will be better positioned to secure talent partnerships, negotiate favourable data‑licensing terms and avoid costly enforcement actions.

Conclusion: Denmark’s Deepfake Law Demands Action Now

Denmark’s 2026 personality‑rights and deepfake law reforms represent a paradigm shift for any business that uses people’s faces, voices or biometric characteristics in content creation, advertising or AI development. The legislative intent is clear: individuals must have meaningful control over how their personal characteristics are reproduced by AI, and businesses that fail to secure lawful consent or licensing face civil claims, regulatory sanctions and reputational harm. In‑house counsel and compliance leaders should treat this as a board‑level priority, work through the 12‑point checklist without delay, and engage specialist intellectual property counsel with cross‑border experience to ensure their policies, contracts and operational workflows meet the standard that Denmark’s deepfake‑law reforms now require.

Explore the international intellectual property guide or consult the Global Law Experts lawyer directory to connect with qualified advisors.

Need Legal Advice?

This article was produced by Global Law Experts. For specialist advice on this topic, contact Kim Larsen, a member of the Global Law Experts network.

Sources

  1. Global Law Experts, Denmark Deepfake Law 2026
  2. European Parliament / EPRS, The Danish Approach to Copyright and Deepfakes
  3. The Guardian, Denmark to Tackle Deepfakes by Giving People Copyright to Their Own Likeness
  4. Plesner, Danish IP Law Firm Bulletin
  5. Communia Association, NGO Commentary on Copyright and Personality Rights
  6. KPMG, Protect Against Deepfakes
  7. EDPS, Deepfake Detection Technology Assessment
  8. Regula Forensics, AI and Deepfake Laws Overview

FAQs

What does Denmark's new law protect: my face, voice or AI‑generated likeness?
Denmark’s 2026 reforms extend copyright‑style protection to personal characteristics, including faces, voices, bodies and other biometric identifiers, when they are reproduced in AI‑generated content. Non‑consensual creation or distribution is potentially actionable.

Is this traditional copyright or a new kind of right?
The reform creates a sui generis right modelled on copyright, granting individuals exclusive control over the AI‑assisted reproduction and distribution of their personal characteristics. Practical interpretation by Danish courts is still evolving.

Do I need consent to train AI models on data containing likenesses?
Training on data that reproduces protected personal characteristics likely requires express consent or a licence. Businesses should maintain provenance logs, conduct dataset audits and include vendor warranties addressing EU AI Act and copyright compliance obligations.

How can my business comply with the new rules?
Map all uses of personal characteristics, update releases and DPAs, audit training datasets, insert vendor clauses, establish takedown workflows and train teams. The 12‑point checklist above provides the full operational framework.

Are all deepfakes now illegal in Denmark?
Not categorically. However, creating or distributing deepfakes that infringe the new personality‑rights protections, or that violate other laws covering privacy, harassment or identity fraud, can give rise to civil, administrative or criminal liability.

Are there exceptions for AI training, such as text‑and‑data mining?
Exceptions under Danish and EU copyright law are narrow and generally require specific conditions to be met. Do not assume that AI training automatically qualifies for an exemption; seek legal review and prefer licensed or consented datasets.

What should I do if I receive a takedown request?
Act immediately. Preserve all evidence (timestamps, metadata, access logs), acknowledge the request, and remove or disable access to the content where appropriate. Non‑compliance with valid takedown demands can increase platform liability and lead to escalating sanctions.

Does the law apply to businesses based outside Denmark?
Early indications suggest a “targeting” test similar to GDPR applies: if your AI‑generated content is directed at Danish consumers and depicts a person whose characteristics are protected, the reforms are likely to apply regardless of where your entity is incorporated.