Opinion

Looking at both sides of the Australian government’s plan to ban under-16s from social media, it’s more complicated than I first thought as a parent

November 12, 2024
Australia wants to ban children from social media.

The proposed legislation would make platforms such as Instagram, TikTok, and Snapchat off limits to anyone under sixteen, putting Australia at the forefront of regulating children's access to social media.

While backed by social change movements such as 36 Months, which delivered a petition with more than 125,000 signatures to Prime Minister Anthony Albanese, the proposed ban also has its challengers.

Full disclosure: my 16-year-old is not on social media.

It has been her choice, based on what she has read and seen (including the documentary The Social Dilemma, featuring Tristan Harris and Jonathan Haidt), on what she has observed in the experiences of friends both here and around the world, and on knowing herself well enough to realise the likely impact it would have on her own health and wellbeing.

To say I am grateful for her decision is an understatement.

So my immediate response to the proposed ban is “good”.

While this response stems from my personal experience as a parent, I believe there are broader societal reasons to support such regulation.

In the absence of any meaningful action by technology and social media companies, it is long overdue that government has stepped in. To be honest, I’m surprised that the Australian Government is first to do so – I always expected it to be the European Union.

We all know the arguments for why social media can be harmful to underage users: the endless comparison, the addictive algorithms, the exposure to cyberbullying, the pressure to present a flawless version of themselves, and the risks of depression, anxiety, even self-harm. But what about the counterarguments, and how do they stack up (at least for me)?

Opponents believe that government shouldn’t be in the business of dictating how families manage social media and that such policies set a precedent for more invasive control measures.

Yes, governments getting involved in family life is a delicate line. But platforms have global influence and a proven track record of failing to self-regulate on known issues.

Without intervention, we're essentially leaving it to tech companies to shape what's acceptable, and they've repeatedly shown that their "regulation" means "whatever drives engagement" and, ultimately, "whatever drives profit". And before anyone mentions Meta's recently introduced policies and safer accounts for teens, let's call it as it is: too little, too late from a company that has had the tools to make these changes all along, yet only acted now in a thinly veiled, last-minute attempt to ward off government intervention.

And if protecting kids from unchecked social media influence isn’t a case for government action, then what is?

We readily accept regulations on gun ownership (yes, American readers, we really do so in Australia), alcohol, and tobacco to keep the community safe—is this really so different?

Critics argue that it should be up to parents—not the government—to decide how their children use social media.

In an ideal world, sure, this is on parents. But think about the realities here. Kids are online with their friends, in schools and in their bedrooms, on devices that make monitoring nearly impossible.

This isn't a knock on parents; it's an acknowledgment that they're fighting an uphill battle against trillion-dollar companies with weapons-grade technology and algorithms designed to keep users jacked into the matrix. Maybe we can give parents a break (and even a helping hand) instead of blaming them.

Opponents claim the law is an attack on freedom of speech.

Freedom of expression matters, but it’s not an absolute free-for-all—especially when it comes to young users. We accept limits on what young people can legally do (drinking, voting, driving) because certain responsibilities require a level of maturity. Giving platforms a free pass in the name of free speech feels to me like putting ideals over reality. Young people deserve a healthy relationship with online spaces, but unfettered exposure isn’t it.

Critics believe that banning kids from social media is futile and will only push them toward other, potentially riskier, corners of the internet.

True, the internet is a slippery beast, and bans don’t always stick. But the point here isn’t an absolute airtight restriction; it’s about creating a hurdle.

Raising the barrier to entry for young people means fewer of them will get drawn in by the algorithm before they’re ready.

Will some slip through? Definitely. But does that mean we throw our hands up and accept that anyone, of any age, should have unrestricted access?

Opponents argue that equipping kids with media literacy and critical thinking skills is far more effective than restricting their access.


Education is a great idea—no argument there. But media literacy isn’t a magical fix. Implementing meaningful programs is also a decades-long project and will not help the current generation of young users.

Until media literacy and digital ethics are actually in place and effective, reducing access to certain platforms buys society some time to get it right. Education is a long-term fix; boundaries are a short-term guardrail.

Critics worry that enforcing the ban requires intrusive age verification, which could put young people’s personal data at risk.

Privacy advocates have a point—no one wants platforms gathering more data on kids. But let’s weigh the options.

Right now, platforms are already collecting kids’ data, often without meaningful oversight. A policy that prioritises verifiable age controls could actually force platforms to be more transparent about how they’re handling data for young users.

Opponents believe restricting access to mainstream platforms will drive kids to less-regulated, potentially dangerous online spaces.

True, if you ban young people from one corner of the internet, some will inevitably find the next corner. Some kids will always try to find ways around the rules—but that doesn’t mean we throw up our hands and let them wander unprotected through the internet’s harshest spaces.

A responsible approach isn't about enforcing an impossible blackout; it's about setting meaningful boundaries that make these platforms harder for young people to access.

Critics believe the legislation unfairly paints young people as incapable of navigating the digital world, reinforcing stereotypes rather than respecting their agency and potential for responsible online engagement.

Nobody’s saying young people can’t handle the internet. But let’s not ignore that social media companies treat young users as a commodity—they’re not fostering thoughtful conversation; they’re building a pipeline of loyal, engaged users.

Encouraging kids to engage with technology in healthier ways doesn’t “other” them; it respects their development and gives them space to learn without relentless influence from whatever trend or rabbit hole the algorithm serves up.

Opponents argue that restricting access to social media for under-16s will economically harm youth-focused brands and young creators.

Should the social media economy depend on young people who might not be ready for the pressures and pitfalls of a public online presence?

Adjusting the ecosystem might shift the landscape for creators, but it could also encourage platforms to rethink revenue models that aren’t centred on minors. Also, if a brand’s survival hinges on access to under-16s, that brand might need to rethink its own business model and standards.

Critics claim the laws will stifle young people’s voices, denying them a place to explore and express their identities online.

This is, perhaps, the hardest and most heart-wrenching part of this entire conversation.

For countless young people, social media is more than just an app—it’s a refuge. It’s where kids facing isolation, bullying, or grappling with questions of identity and sexuality can find community, support, and understanding that they may not have anywhere else.

For kids who feel misunderstood or alone, online spaces can feel like a lifeline.

And it’s true—this kind of connection matters. For those navigating deep pain or alienation, these social communities provide acceptance, validation, and a chance to connect with others who genuinely get it. This is a need, not a luxury, and any policy that risks severing it should be weighed very, very carefully.

Kids need safety, but they also need community and compassion.

It’s a balance we haven’t yet figured out.

In the end, there’s no perfect answer here. Social media can be both a sanctuary and a minefield for young people, and navigating this tension responsibly is no small task.

Australia's proposed legislation may not be flawless; no single policy will capture the complexity of young people's digital lives. But it raises the right questions and seeks to put pressure on Big Tech to finally take younger users' wellbeing seriously. It will probably not be the ban itself that creates a safer environment, but rather the commercial pressure on tech companies and social platforms that delivers real change.

It’s a call for all of us—parents, educators, governments, and tech platforms alike—to keep working toward an online ecosystem that protects young people without isolating them from the connection and support they may desperately need.

And if we haven’t figured it out yet, then the least we can do is try.