In a poignant appeal to the government, Denise Fergus, the mother of murdered toddler James Bulger, has called for a new legal framework to regulate artificial intelligence (AI) content online, particularly videos depicting murder victims, and child victims above all. Her appeal is rooted in her own experience of distressing AI-generated content on social media platforms, notably TikTok, featuring digital likenesses of her son narrating the events of his abduction and death.
James Bulger was only two years old when he was abducted from a shopping center in Merseyside, UK, on February 12, 1993. The crime was carried out by two ten-year-old boys, Jon Venables and Robert Thompson, who led James away from the shopping center, tortured him, and murdered him. The shocking nature of the case not only captured the nation's attention but also became a pivotal moment in debates over child safety and criminal justice. Decades later, Fergus faces the renewed trauma of seeing her son's identity manipulated through technology in a manner she has described as "absolutely disgusting."
Fergus says that despite her requests to TikTok to remove the offending videos, her pleas have largely gone unanswered. The government has stated that such content violates the existing Online Safety Act, which is designed to protect individuals from harmful material online. TikTok responded that it had removed AI videos breaching its community guidelines, but Fergus criticizes these measures as insufficient, arguing that current law does not compel social media platforms to act promptly against harmful content.
In her conversations with government officials, including Justice Secretary Shabana Mahmood, Fergus emphasized the need for stronger regulatory measures. She articulated her frustration, suggesting current legislative actions were merely “words” without tangible results. By emphasizing the psychological toll of viewing these AI-generated representations of her son, Fergus highlighted a broader ethical issue: the need to protect the dignity of victims and their families in the digital age.
The disturbing content surrounding James Bulger's story is not unique; Fergus revealed that numerous similar videos exist across platforms including YouTube and Instagram, where animated avatars retell the grim stories of child murder victims. These videos often aim for sensationalism, with accounts dedicated to generating traffic through morbid narratives. For instance, a YouTube channel known as Hidden Stories was terminated for violating platform standards against simulating deceased individuals describing their own deaths.
The nature of AI-generated content raises complex legal and ethical questions, especially regarding its relationship to existing laws aimed at preventing the exploitation of vulnerable narratives for profit or sensationalism. Notably, Kym Morris, chair of the James Bulger Memorial Trust, asserted that the government should amend current legislation to encompass specific protections against AI misuse. According to Morris, measures must be put in place to establish clear definitions and accountability for content that exploits tragic events involving real victims.
The situation is further complicated by ongoing discussions surrounding the Online Safety Act, which was passed by the previous Conservative government in 2023. The act was aimed at safeguarding individuals against illegal and harmful content online. However, it does not grant Ofcom—the regulatory body overseeing compliance—the authority to remove specific content directly but does allow for enforcement actions against non-compliant platforms.
While Fergus's call for action reflects a deep desire to protect her son's memory and to change how society treats digital legacies, it also underscores the urgent need for legislative frameworks that can keep pace with fast-evolving technologies. As digital platforms come under growing scrutiny, it is critical for lawmakers to strengthen the rules governing the use of AI so that it serves society positively and ethically.