Does a social media ban for under-16s work?
Australia recently became the first country to ban social media use for children under 16, a landmark policy aimed at protecting kids from mental health risks, cyberbullying, and harmful or addictive content.
Earlier this month, my WhatsApp groups and dinner table conversations in Bengaluru quickly turned to whether India should do the same after a horrific event. According to preliminary police reports, three young sisters — just 12, 14, and 16 years old — died by suicide following a parental dispute over access to a mobile phone.
As a parent, I’m deeply worried about the harm that social media can cause to children. As a journalist who has monitored the effects — and inefficiencies — of tech-related bans, I’m also skeptical of how far such measures can go in keeping children safe.
Increasingly, my larger question is: Why does the burden of “fixing” social media fall on children and parents instead of on the companies designing these systems?
Tech lawyer Apar Gupta captured this tension well: “Beneath the fury lies a dangerous impulse: to solve a complex problem with a blunt instrument that absolves platforms of accountability while stripping young people of their digital rights.”
That line stays with me because I cannot comprehend why policymaking isn’t more focused on holding Big Tech accountable. Companies that make billions from young users — and constantly promote how advanced their AI systems are — surely have the technical capacity to design safer feeds, stronger age protections, and less addictive algorithms.
If AI can recommend the next video with eerie precision, why can’t it flag grooming patterns, throttle harmful content spirals, or detect when a teenager is in distress?
Early signals from Australia point to the policy's limits and the frustrations it has generated. A recent survey found that only 6% of Australians believe online spaces have become safer and more age-appropriate since the ban. Many expect young users to simply find ways around the restrictions.
I phoned an old school friend in Sydney to ask how the ban is unfolding in real life. Her children, aged 10 and 12, are still below the social media threshold, but her teenage nephew had to quit Snapchat after the rules took effect. He and his friends quickly improvised.
“They are using WhatsApp groups like Snapchat,” she told me. Unable to maintain streaks on the banned platform, he and his friends now exchange daily photos and videos in WhatsApp group chats to keep their records of consecutive days alive.
My friend found other virtual spaces to worry about, too. She said her own children play multiplayer online games like Fortnite and Roblox, where a large number of anonymous players interact simultaneously in shared virtual worlds. These games have become hunting grounds for pedophiles and other criminals in recent years, leading to lawsuits, investigations, and arrests.
Is a ban on children's online gaming next?
Social media today is more than dance videos and memes. For many people, it’s the primary source of news, community, and information. For some, it’s also a source of livelihood. Restricting access comes with trade-offs.
The harder — and more necessary — conversation is this: Instead of repeatedly fretting over how to keep children away from platforms, we should be demanding safer platforms.
Until companies are required to redesign systems that reward outrage, addiction, and endless engagement, bans risk becoming a cycle of ineffective whack-a-mole.
Surely, the real test of policymaking isn’t how effectively it can block apps — it’s how it pushes the world’s most powerful tech firms to build better ones. -Itika Sharma Punit, Deputy Editor, https://restofworld.org