The growing use of social media among children in Australia is raising significant concerns among regulators, particularly the eSafety Commissioner. Recent research reveals that more than 80% of children aged eight to twelve are engaging with social media platforms and messaging services, many of which require users to be at least thirteen years old. This trend has prompted discussions about the need for stricter regulation and intervention to ensure the safety of young internet users.
As Australia prepares to implement a total ban on social media for children under the age of sixteen, anticipated by the end of the current year, the findings of this study underscore the urgency of such measures. According to eSafety's report, YouTube, TikTok, and Snapchat are the platforms most frequently used by this age group. This points to a growing culture of digital interaction among younger users, despite the protective measures outlined by the social media companies themselves.
The eSafety regulator has been vocal in its criticism, pointing out that these platforms exhibit a “lack of robust interventions” to verify the ages of their users effectively. Despite the minimum age requirements set by the companies surveyed (Discord, Google's YouTube, Meta's Facebook and Instagram, Reddit, Snap, TikTok, and Twitch), those requirements do not appear to be enforced in practice. The platforms' failure to restrict access for users under the age of thirteen has sparked a critical dialogue about the responsibilities of social media companies in safeguarding children.
Notably, while Google's YouTube offers tools such as Family Link, which allows children's accounts to operate under parental supervision, the report chose not to include YouTube Kids in its analysis, focusing instead on the broader social media landscape that children navigate. Julie Inman Grant, the eSafety Commissioner, emphasized the importance of these findings in shaping the next steps for children's online safety. Australia's proactive stance on social media regulation is being closely watched by other countries, including the UK, which may consider similar bans in response to these revelations.
The study evaluated the social media habits of more than 1,500 children across Australia, revealing that 84% had used at least one platform since the start of the previous year. More than half of these children reportedly accessed the services through accounts managed by their parents or guardians, while 33% confirmed having accounts of their own, often created with an adult's help. Alarmingly, only 13% of the children had their accounts deactivated by the respective companies for being underage.
The report also sheds light on inconsistencies across the industry in how platforms verify users' ages. A common weakness is the account sign-up process itself, which allows children to misrepresent their age or birthdate without meaningful checks. In their responses, platforms such as Snapchat, TikTok, and Twitch noted that they apply age verification after an account has been created, but these measures are often reactive rather than proactive, leaving a critical gap in which young users may be exposed to risks before any age-checking mechanism is applied.
The growing body of evidence of widespread underage social media use, paired with calls for better age-verification processes, presents significant challenges for policymakers and regulators. As discussions of effective regulatory measures continue, the welfare of children navigating a complex digital landscape remains at the forefront of the dialogue. Ultimately, the findings of the eSafety report not only reflect current challenges but also highlight the pressing need for comprehensive strategies to protect the youngest internet users from potential online harms.