AI-Generated Indecent Images: Legal Challenges and Emerging Frameworks

1. Understanding the Threat

AI tools capable of generating indecent images, particularly of minors, have proliferated rapidly. The Internet Watch Foundation (IWF) reported a sharp rise in AI-generated Child Sexual Abuse Material (CSAM): 245 reports in 2024, up from just 51 in 2023, an increase of nearly 380%.[1] Alarmingly, over 190 of these images were so lifelike that they had to be treated as ‘real’ images under the law.[2]

These AI tools are being misused to “nudify” children, manipulate real images, and even produce entirely synthetic yet disturbing imagery, often used to blackmail or groom victims.[3][4]

2. Existing UK Legislation

Current legislation already encompasses a range of offences relating to indecent photographs and pseudo-photographs:

  • Protection of Children Act 1978 criminalises making or distributing indecent photographs or pseudo‑photographs of children.
  • Coroners and Justice Act 2009 (s.62) criminalises possession of prohibited images (non‑photographic but sexual depictions).
  • Criminal Justice Act 1988 (s.160) covers indecent photographs or pseudo‑photographs.
  • Serious Crime Act 2015 (s.69) targets “paedophile manuals”: materials offering advice or guidance on committing child sexual abuse.

Moreover, the Online Safety Act 2023 imposes a statutory duty of care on online platforms to remove illegal content, including CSAM, with Ofcom empowered to enforce compliance.

3. Legal Gaps and Emerging Cases

A recent UK case dramatically illustrates a legal breakthrough: Hugh Nelson, who used AI to transform real images of children into sexualised content, was sentenced to 18 years in prison.[5] This case confirms that such content, even if AI‑generated, can be prosecuted as indecent images when tied to real victims.[6]

However, broader issues remain:

  • The law may not specifically criminalise creation (making) of non‑realistic AI-generated indecent imagery, especially involving fictitious children, in certain jurisdictions (particularly Scotland).[7]
  • Clarity is lacking on whether existing “paedophile manual” legislation applies as technology evolves.

4. Legislative Responses: Crime and Policing Bill

The UK government has responded decisively:

  • New criminal offences will prohibit creation, possession, or distribution of AI models or content designed for generating CSAM. Offenders could face up to five years’ imprisonment.[8][9]
  • The law expands the definition of CSAM to explicitly include AI-generated content and extends “paedophile manuals” to cover AI‑generated instructional material.[10]
  • Border Force powers will be broadened to seize digital devices and compel access for inspection.
  • Administrators of online spaces dedicated to distributing CSAM will face criminal liability, with penalties up to ten years in prison.[11][12]

5. Policy and Advocacy

Public and political calls have intensified:

  • Children’s Commissioner Dame Rachel de Souza urged a ban on AI “nudification” apps and the introduction of an AI Bill that treats deepfake sexual abuse as violence against women and girls.[13]
  • Labour MP Jess Phillips labelled the increase in AI‑generated CSAM a “national emergency,” urging tech firms to embed children’s safety by design and reinforcing the new legislative deterrents.[14]
  • IWF continues to advocate for outright banning of such apps, and for robust regulatory oversight, especially for open-source AI models.[15]

6. International Collaboration

The UK’s approach is not isolated:

  • In 2023, the UK and US committed to joint action against AI-generated CSAM, including sharing investigative tools and best practices.[16]
  • A House of Lords debate emphasised cooperation across borders and enforcement under the Online Safety Act—even when content is hosted internationally.[17]
  • In the US, while federal laws on non-consensual explicit deepfakes remain limited, proposed bills such as the DEFIANCE Act and the SHIELD Act aim to provide recourse for victims.[18]

7. Conclusion

AI’s capacity to create hyper-realistic, digitised abuse imagery confronts existing legal frameworks with urgent challenges. The UK has responded with:

  • Prosecution under existing laws (e.g. Nelson case),
  • Expansion of offences via the Crime and Policing Bill (targeting AI models and manuals),
  • Platform accountability through the Online Safety Act,
  • Heightened enforcement measures at borders and online.

Nonetheless, gaps persist—particularly around synthetic, non-realistic imagery and ensuring consistency across jurisdictions like Scotland and Northern Ireland.

Going forward, continued legislative refinement, international cooperation, and platform-level safeguards – including age verification and content moderation – are critical to protecting children from evolving AI-enabled threats.

Contact Our Serious Crime Lawyers

Naomi Debidin, a highly talented paralegal, works alongside Unan Choudhury, Partner, at our East London office.

For further advice, please contact our expert Serious Crime team on 020 7387 2032 or get in touch via our online enquiry form.

[1] https://www.iwf.org.uk/news-media/news/new-ai-child-sexual-abuse-laws-announced-following-iwf-campaign/

[2] https://www.iwf.org.uk/news-media/news/new-ai-child-sexual-abuse-laws-announced-following-iwf-campaign/

[3] https://www.theguardian.com/technology/2025/feb/01/ai-tools-used-for-child-sexual-abuse-images-targeted-in-home-office-crackdown

[4] https://apnews.com/article/ai-artificial-intelligence-child-sexual-abuse-c8f17de56d41f05f55286eb6177138d2

[5] https://www.cps.gov.uk/cps/news/man-who-used-ai-technology-create-child-sexual-abuse-images-jailed

[6] https://www.college.police.uk/article/evolution-ai-child-sexual-abuse-material

[7] https://www.pure.ed.ac.uk/ws/portalfiles/portal/535227053/5-EYES-REPORT_USA.pdf

[8] https://www.reuters.com/technology/artificial-intelligence/uk-makes-use-ai-tools-create-child-abuse-material-crime-2025-02-01/

[9] https://www.theguardian.com/technology/2025/feb/01/ai-tools-used-for-child-sexual-abuse-images-targeted-in-home-office-crackdown

[10] https://www.reuters.com/technology/artificial-intelligence/uk-makes-use-ai-tools-create-child-abuse-material-crime-2025-02-01/

[11] https://www.thetimes.com/uk/technology-uk/article/yvette-cooper-home-secretary-tech-online-paedophiles-n0l9hxwh8

[12] https://www.iwf.org.uk/news-media/news/new-ai-child-sexual-abuse-laws-announced-following-iwf-campaign/

[13] https://www.theguardian.com/society/2025/apr/28/commissioner-calls-for-ban-on-apps-that-make-deepfake-nude-images-of-children

[14] https://www.thesun.co.uk/news/35107081/rise-ai-child-abuse-pics-jess-phillips/

[15] https://www.iwf.org.uk/news-media/news/new-ai-child-sexual-abuse-laws-announced-following-iwf-campaign/

[16] https://www.gov.uk/government/news/uk-and-us-pledge-to-combat-ai-generated-images-of-child-abuse

[17] https://hansard.parliament.uk/Lords/2025-04-30/debates/A33CA2D2-0435-4204-AE1C-153D13758BB0/AIChildSexualAbuseMaterial

[18] https://www.ft.com/content/e2fa34b2-6987-494d-a81a-1bdb6693671f
