The mother of a girl whose image was used in an AI-generated nude photo said hundreds of parents have told her their children were victims too.
Miriam al-Adib’s daughter was one of several children in a Spanish village whose fully clothed photos were used to create suggestive images. She said parents around the world were reporting that their children had also been targeted.
A teacher in Wales says schools have a role to play in explaining the risks of AI to children. The Internet Watch Foundation said it is “not surprising” that the practice is so widespread.
The village of Almendralejo was in the news last September after sexually explicit AI-generated images of more than two dozen girls aged between 11 and 17 were shared without their knowledge. Ms al-Adib is part of a group of parents who have set up a support group for victims, which she said has led to many other parents contacting her with their concerns.
“I’ve had hundreds of people write to me saying, ‘How lucky you are [to have support], because the same thing happened to my daughter, or to me, but I didn’t have that support’,” she told Wales Live.
Spanish authorities have launched an investigation into the images. According to al-Adib, the mothers and fathers of the victims in her village formed a group to support each other and their children. She added: “It has helped a lot of girls to come forward and talk about what happened. That’s important, because a lot of girls can’t talk about it with their parents.” She said the combination of social media, pornography and access to artificial intelligence was a “weapon of destruction”.
Government’s Commitment
Last week, the UK’s first AI Safety Summit heard Home Secretary Suella Braverman pledge to tackle child sexual abuse material generated by AI. The UK government said: “Content created by artificial intelligence to sexually exploit or abuse children is illegal, whether it features real children or not.”
“Online safety laws require companies to take proactive measures against all forms of child sexual abuse that occur online, including grooming, live streaming, child sexual abuse material and prohibited images of children, or face significant fines.”
Susie Hargreaves, CEO of the Internet Watch Foundation, said AI-generated child sexual abuse content needed to be tackled “urgently”. She said she feared there could be a “tsunami” of such images in the future.
In an October 2023 report, the foundation said more than 20,000 AI-generated images had been found in a single month on one forum sharing child sexual abuse material. In the comments, the creators were congratulated on how realistic the images looked, and users noted that some had been created from photos of children taken in parks.
School’s Role in AI Education
Dr. Tamasina Pres, head of health and wellbeing at Bryntirion Comprehensive School in Bridgend, said social media and smartphones had changed her role “immeasurably” since she started teaching. She said it was “important that schools play a key role” in working with children on topics such as the risks of AI.
Wales Live showed her an advert for an app’s nude photo creation feature, which she described as “heartbreaking”. “As adults, we can approach these topics in a safe way, rather than making them taboo and leaving children to discuss them among themselves or share the wrong information,” she added.
The Lucy Faithfull Foundation, which works with offenders to tackle child sexual abuse, said it was bracing for an “explosion” of child sexual abuse material generated by artificial intelligence.
What is AI and how does it work?
Artificial intelligence (AI) allows computers to learn and solve problems in ways that resemble human thinking. AI systems are trained on large amounts of information and learn to identify patterns in it, enabling them to perform tasks such as holding human-like conversations or predicting what products online shoppers will buy.
The technology is behind voice-activated virtual assistants such as Siri and Alexa, and helps Facebook and X (formerly Twitter) decide which social media posts to show users. Many experts are surprised by the speed at which AI is developing and worry that its rapid growth could be dangerous. Some have even said that AI research should be stopped. Last October, the British government published a report saying that artificial intelligence could soon help hackers launch cyber attacks or help terrorists plan chemical attacks.
What regulations currently apply to AI?
In the EU, the AI Act, if it becomes law, will impose stricter controls on high-risk AI systems.
The British government has previously declined to create a dedicated AI regulator. But Prime Minister Rishi Sunak wants the UK to be a leader in AI safety, and hosted a global summit at Bletchley Park to discuss how businesses and governments can tackle the risks of the technology.