UK schools remove student photos after AI deepfake extortion

Schools across the UK are taking down student photos after criminals used scraped images and AI to create deepfake child sexual abuse material and extort families.

Criminals have scraped student images from school websites and social channels, fed them into AI tools to create deepfake child sexual abuse material (CSAM), and demanded payments from families. In response, schools across the UK are removing or altering the photos they publish, and authorities and child protection groups have urged them to limit identifiable images online.

The National Crime Agency, the Internet Watch Foundation (IWF) and the Early Warning Working Group (EWWG) reported the activity. Late last year an unnamed secondary school was contacted by extortionists; the IWF identified 150 images it classified as CSAM under UK law and generated digital fingerprints so platforms could block reuploads. Investigators are not naming the school or the police force involved and do not believe the incident was isolated.
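The fingerprinting step is, in effect, hash-based blocklisting: a platform computes a compact digest of each upload and rejects anything that matches a list of known abuse imagery. The sketch below illustrates the idea with a plain SHA-256 digest; this is a simplification, since production systems such as the IWF's hash list rely on perceptual hashes that also match resized or re-encoded copies, whereas the digest shown here only catches byte-identical files.

```python
# Minimal sketch of hash-based re-upload blocking. A plain SHA-256
# digest is used for illustration only: it flags byte-identical
# copies, whereas real deployments use perceptual hashes that
# survive resizing and re-encoding.
import hashlib

# Hypothetical blocklist of fingerprints of known abuse imagery,
# as distributed to platforms by a body such as the IWF.
KNOWN_FINGERPRINTS: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest identifying this exact file."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_block_upload(image_bytes: bytes) -> bool:
    """True if the uploaded file matches a known fingerprint."""
    return fingerprint(image_bytes) in KNOWN_FINGERPRINTS
```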

The EWWG warned it is “only a matter of time” before similar attacks increase. UK safeguarding minister Jess Phillips called the problem a “deeply worrying emerging threat.” The groups advise schools to act quickly to reduce the risk of further exploitation.

The current pattern is an evolution of sextortion, where intimate images are used to blackmail victims. The FBI’s Internet Crime Complaint Center recorded more than 16,000 sextortion complaints in the first half of 2021, with reported losses exceeding $8 million. By mid-2023 the bureau warned that criminals were using ordinary photos taken from social media to create fake explicit images and target minors. Childline has received reports of children being sent convincing deepfake nudes built from their social media photos, including victims who had no prior contact with the attackers.

The IWF recorded 426 reports of AI-generated CSAM by November 2025, up from 199 over the same period a year earlier, with girls accounting for 94% of identified victims. Reported cases have included very young children, including infants and toddlers. Researchers have also found large collections of this material exposed in the open; one unsecured cloud storage bucket tied to a “nudify” app contained more than 93,000 generated images along with the prompts used to create them.

The EWWG recommends that schools stop publishing close-up, identifiable photos and instead use distant shots, blurred images, or photos taken from behind. The group advises removing full names from captions, auditing existing online galleries, and asking parents to re-sign consent forms. Some schools have removed recognizable student images from their websites. The Information Commissioner’s Office expects schools to offer a parental opt-out when publishing identifiable photos, but it notes opt-out procedures are not the same as legal consent.
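For the audit step, even a small script can help staff see what is already exposed. The sketch below, which assumes a placeholder gallery address rather than any real site, fetches a page the school itself controls and lists every image URL with its alt text, so close-up photos and captions containing full names can be flagged for review.

```python
# Hypothetical audit helper for a school's own gallery pages: lists
# every <img> URL and its alt text so staff can spot identifiable
# photos and captions that include full names.
from html.parser import HTMLParser
from urllib.request import urlopen

class ImageLister(HTMLParser):
    """Collects the src and alt attributes of every <img> tag."""
    def __init__(self) -> None:
        super().__init__()
        self.images: list[tuple[str, str]] = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            a = dict(attrs)
            self.images.append((a.get("src") or "", a.get("alt") or ""))

def audit_gallery(url: str) -> None:
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = ImageLister()
    parser.feed(html)
    for src, alt in parser.images:
        print(f"{src}  | alt: {alt}")

# Placeholder address; point this at pages the school controls.
audit_gallery("https://www.example-school.sch.uk/gallery")
```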

In the United States, student privacy is governed by federal and state rules. Under the Family Educational Rights and Privacy Act, many schools treat identifiable photos as directory information that can be published unless a guardian opts out. Schools must notify guardians of that choice, but the protections may not apply indefinitely after a student leaves, so images can remain accessible online long after families assume they have been removed.

Childline’s Report Remove service handled 394 blackmail reports from under-18s over the past year, up about a third on 2024. The UK government is amending the Crime and Policing Bill to require platforms to remove flagged intimate images within 48 hours or face fines of up to 10% of global revenue.

Authorities noted that attackers often still locate and assemble images manually, but warned the process could be automated to scrape names and photos from school websites and social media at scale. Parents and guardians have been advised to limit how many identifiable images of their children appear online across schools, sports clubs, extracurricular groups and personal social accounts to reduce the risk of exploitation.
