Meta has blocked more than 540,000 social media accounts in the first days of complying with Australia’s landmark ban on children under 16 using major social media platforms. The enforcement move follows the rollout of new legislation that requires Meta-owned platforms, including Instagram, Facebook, and Threads, to prevent young Australians from holding accounts.
The scale and speed of the account removals underline how serious Australia has become about regulating children’s online lives. At the same time, the move has intensified a global debate over whether strict bans protect young people or risk driving them into less safe corners of the internet.
The scale of Meta’s enforcement
Meta said it blocked 330,639 Instagram accounts, 173,497 Facebook accounts, and 39,916 Threads accounts during its first week of compliance, a combined total of 544,052. The company described the action as necessary to meet its Australian legal obligations, while also stressing that it does not believe blanket bans offer the best long-term solution for online child safety.
The numbers highlight the challenge platforms face in enforcing age limits at scale. Hundreds of thousands of accounts were identified as belonging to users under 16, raising questions about how widespread underage use had become before the ban took effect.
Why Australia introduced the ban
Australia’s government argued that social media platforms expose children to harmful content, addictive recommendation algorithms, cyberbullying, and unrealistic social pressures. Campaigners said existing safeguards failed to keep pace with platform designs that reward engagement over wellbeing.
Lawmakers framed the ban as a public health measure rather than a moral judgment. They said delaying children’s exposure to social media could reduce anxiety, depression, and social comparison during critical developmental years. The law applies to the largest platforms and places responsibility squarely on companies to prevent underage access.
What makes Australia’s law different
Many governments have explored limits on children’s social media use. Some U.S. states, including Florida, have experimented with age restrictions or parental consent rules. The European Union has also tested stronger age verification requirements under its digital regulations.
Australia goes further than any of them. The minimum age is 16, not 13 or 14. Crucially, the law does not allow parental consent exemptions, meaning parents cannot override the ban even if they approve their child’s use. This combination makes Australia’s policy the strictest of its kind anywhere in the world.
The law’s popularity among Australian parents has drawn attention overseas. In the United Kingdom, the Conservative Party has pledged to introduce a similar ban if it wins the next general election, which must be held before 2029.
Meta’s objections and alternative proposals
Meta says it agrees that more must be done to protect young people online. However, the company argues that outright bans oversimplify a complex problem. In a blog update, Meta urged the Australian government to engage more constructively with industry.
The company advocates for age verification at the app store or operating system level, rather than forcing each platform to build its own system. Meta argues this approach would create consistent, industry-wide protections and reduce the compliance burden on both regulators and companies.
Meta also supports exemptions based on parental approval. It says parents should retain some control over when and how their children access social media, especially as teens approach adulthood. Australia’s refusal to allow such exemptions remains one of the law’s most controversial elements.
Concerns about circumvention and unintended effects
Despite its strict design, experts warn the ban may prove easy to bypass. Children can often trick age verification systems using false information, borrowed IDs, or shared devices. Digital policy researchers caution that enforcement will always lag behind the ingenuity of tech-savvy teenagers.
Others worry the ban could push young people toward smaller or less regulated platforms. These spaces may lack robust moderation, safety teams, or reporting tools. Critics argue that driving children away from mainstream platforms could expose them to greater risks rather than fewer.
Impact on children and vulnerable groups
Some mental health advocates and young people have voiced concern that the ban removes important sources of connection. Teenagers from rural areas, neurodivergent children, and members of LGBTQ+ communities often rely on online spaces to find support, identity, and friendship.
Young critics argue that banning access does not prepare them for the realities of digital life. They say education, digital literacy, and stronger platform accountability would better equip children to navigate online spaces safely as they grow older.
Others, however, say many children feel relieved by the ban. Some parents and teens report reduced pressure to perform online, fewer distractions, and more time for offline activities. Early anecdotal evidence suggests the policy’s impact may vary widely depending on individual circumstances.
A global test case in online regulation
Australia’s social media ban has become a test case for governments worldwide. Meta’s removal of more than half a million accounts shows the law carries real enforcement power rather than merely symbolic intent.
Over the coming months, researchers and policymakers will study whether the ban reduces harm, improves mental wellbeing, or simply reshapes where young people spend time online. Other countries will watch closely to see whether Australia’s approach delivers measurable benefits or reveals new risks.
As the global debate over children, technology, and regulation continues, Australia’s experiment may shape the future of how societies balance digital freedom with child protection in an increasingly connected world.