Is banning social media the best way to protect kids online?


A recent Guardian headline captured the frustration felt by many Australian parents: “Australia’s teen social media ban is a flop. But there’s no joy in ‘I told you so’.” Sold by legislators as a potential silver bullet – one that would keep children safer online and improve youth mental health – the ban has proven far more effective in theory than in practice.

While it’s clear that Australia’s social media ban isn’t delivering what it promised, it’s not hard to understand why many parents wanted something from the government to help them better keep their kids healthy and safe in the digital age. Parents around the world are watching their kids’ usage of social media apps with justified concern and bemusement, fuelling intense debate about social media’s effects on youth mental health and the best ways to keep children safe online.

Here in Canada, we lack a coherent, evidence-based national framework to protect people online, especially for children and youth.

That needs to change. The question is, what should that framework look like?

It’s an important issue that the Canadian government hasn’t gotten right yet. While it’s unfortunate that it has waited this long to act sensibly, legislators now have the benefit of data and results from other attempts around the world, especially in Australia and the UK. There is clear evidence of what works and what probably doesn’t, and of what the first principles of any intervention should be.

On that point, the emerging consensus seems to be that the end goal of any intervention should be to protect children from online activity designed to exploit or harm them at their developmental stage. At the same time, those interventions shouldn’t ignore the data, or the digital world and economy that our children live in, including their right to freely speak, connect, and share ideas in digital forums. The design of any policy therefore demands rigorous scrutiny and must be weighed carefully against different policy alternatives to see what will serve Canada best.

(Note: This is a long read, but it’s important. What I’m sharing here are the factors I’m considering as a legislator, and it’s designed to help you contribute to the debate.)

Let’s begin.

What does the data say?

When parents look to the government for help with their kids’ online health, they often state two no-brainer goals. They want to keep their kids safe from predators and predatory actors in the broadest sense of the term (safe from exploitation), and they want to protect their child’s mental health. Legislators undoubtedly have a duty to help parents with these objectives.

On the point of safeguarding child mental health, Columbia University recently argued that the jury is still out on the efficacy of a social media ban as a primary legislative intervention to protect youth mental health from online activity. They weren’t the only ones. While large-scale reviews find some correlations between heavy social media use and distress, causation is far from settled, so the value of a full-on ban isn’t really backed up by peer-reviewed evidence. In fact, the push for bans, as opposed to different types of tools, might presently be driven more by activism than by a solid body of empirical evidence.

Illustrating this point is the case of Jonathan Haidt, American social psychologist and author of The Anxious Generation, a popular book which discusses the impact of social media on youth mental health. While Haidt has produced a body of thoughtful and widely respected research, his rationale on the benefits of prohibitions on social media for minors has drawn sharp methodological criticism. Researchers argue he overstates causation by selectively highlighting correlational trends (e.g. post-2012 rises in youth anxiety alongside smartphone adoption) while downplaying contradictory evidence and failing to adequately control for confounding factors such as shifting diagnostic norms, economic pressures, and pre-existing vulnerabilities.

Data on the broader impact of social media on human behaviour is similarly jumbled, and so parents (and legislators) are left to determine what to do based on their own lived experience. For example, some studies have shown that the algorithmic structures of some social media platforms do elicit addiction-like behaviours in some people. But other studies have shown that many users engage with these platforms without developing addiction, and gain benefits when they use them for work, social connection, or information.

Somewhere in the middle of these points is a general acknowledgement among the public that there are problems, but there’s a lack of certainty on the best way to address them.

And so, rather than jumping headlong into the deep end of implementing a social media ban as a primary intervention, other top researchers are now calling for legislators to balance action with evidence as they examine ways to protect kids online. When it comes to social media bans, they warn that outright bans may drive youth toward riskier, unmoderated spaces (as compared to big platforms that have robust child protection measures in place) where harms are greater and safeguards entirely nonexistent, potentially causing more harm to kids and more problems for parents.

This problem – driving kids to the nether regions of the web – means that other interventions are needed to prevent abhorrent behaviours like luring, child sextortion, the distribution of child sexual abuse material, and attempts to sell youth harmful substances or push them toward harmful behaviour, across all online operators, not just social media platforms.

This is why some legislators are starting to move beyond social media bans, and are now pursuing measures that ensure greater transparency and stronger safeguards to prevent addictive platform designs and algorithms from harming children. Said differently, some legislators are looking at banning the harmful design features of online platforms and adding watertight safeguards, as opposed to outright banning access (which may not be feasible to enforce, anyway). Or, put even more simply, legislators should probably weigh the benefits of regulating platforms against those of regulating parents and children.

But an outright social media ban would be an easy way to limit kids’ time online and keep them safe, right?

Building on that point, as every parent knows, young people are remarkably sneaky – and Australia’s social media ban has shown just how adept they are at circumventing age restrictions. Teens have quickly turned to VPNs, shared family accounts, and age falsification to keep using the platforms. Australia’s under-16 ban has already demonstrated widespread non-compliance with little measurable reduction in overall screen time. Research has shown that activity has simply migrated to unregulated corners of the internet, other platforms (Roblox, anyone?), or other devices, with parents now expressing concern about this new and unforeseen problem.

This is particularly true as AI systems like Claude allow virtually anyone to design and launch any type of app. Legislation that isn’t platform-agnostic won’t be able to keep up.

This lack of real enforceability has also compounded these problems for the Australians. The Australian Human Rights Commission recently raised concerns that a meaningful ban requires age verification systems that demand invasive identification, biometrics, or government-linked databases. This means handing over sensitive personal data to private companies, and potentially to regulators, which creates risks of breaches and identity theft. Australian privacy advocates have warned that such measures could lay the groundwork for a national digital ID regime. The Council of Europe Commissioner for Human Rights has recently raised similar concerns.

It seems reasonable to expect that youth online safety shouldn’t come at the cost of this type of loss of privacy, and any tool developed to keep kids safe online should be designed to guard against digital ID systems, potentially using already available technology to estimate age based on user behaviour instead. This is especially true as states with histories of suppressing speech rush to copy and paste social media bans.

But there are literally no good reasons for kids to be on Snapchat or Instagram, right?

For some parents, that may well be true. But the answer also very much depends on any given youth’s personal circumstances, where they live, and what age they’ve reached.

A comprehensive framework that protects children from exploitation is urgently needed in Canada. At the same time, data shows that with appropriate safeguards and supervision, and at the right developmental stage (ages 5-8 are far different from ages 13-16, for example), social media may also provide genuine benefits, particularly for marginalized teens and those in remote or rural communities.

These young people may depend on online platforms for peer support, cultural connection, and access to resources unavailable in their offline environments. Systematic academic reviews confirm that active, purposeful use can actually enhance self-esteem, reduce isolation, and even guide users toward mental health resources.

Also, young people living in non-democratic nations with histories of speech suppression may find themselves cut off from technology that allows them to anonymously express the need for change.

Any Canadian framework for online protection must therefore preserve vital social lifelines for those who rely upon them while fostering the digital citizenship skills essential for success in a modern economy, and making sure that kids are safe if they use them.

Okay, but shouldn’t parents have more control over their kids’ online activities?

Yes, for sure. But legislators also have to make sure they don’t inadvertently design a framework that produces the opposite effect.

For example, serious debates over when and how the state should usurp the role of parents have recently come to the forefront of Parliament. While there undoubtedly is a role for the state in keeping children safe, a parent’s ability to make decisions for their child’s upbringing is also a fundamental right. Where the line between the two lies is often fraught with contention.

Major civil liberties organizations have noted that if designed the wrong way, age-based outright bans can fail constitutional scrutiny and could amount to unconstitutional censorship of protected speech. They may also undermine young people’s rights to privacy, expression, and participation in digital spaces that are protected under international human rights law. And the ACLU recently argued that “the government can’t protect minors by censoring the world around them.”

If the free-speech argument doesn’t concern you as a parent, consider this. While there is a role for the government in providing tools to parents and putting far more onus on platforms to maintain safe spaces, observers of the Australian ban note that an outright prohibition sidesteps building digital literacy, digital citizenship and self-regulation skills, and by itself doesn’t offer the broader tools needed to truly keep kids safe online. This could lead to other, more negative social outcomes, and could pave the way for even more intrusive speech and content regulation for youth.

Augmented interventions could include a duty of care for online operators coupled with more tools to provide parents easier ways to manage and be aware of their children’s online activities, as well as educational resources for parents and youth.


Wait. Are you saying there’s a potential for a slippery slope on speech and privacy to consider?

For a decade, the federal Liberals have asked Canadians to surrender pieces of their civil liberties in the name of “protection” or other laudable objectives, but that trade-off should never have to be made. And politically justifying greater censorship under the guise of child-protection measures is an ancient tactic, as human rights and free speech advocate Jacob Mchangama recently observed in the Wall Street Journal. History shows that once governments establish the principle that speech can be curtailed for the “greater good” of children, the restrictions rarely stop with minors.

The Canadian federal Liberal government is no different. The Liberals primarily sold much-maligned Bill C-63, the Online Harms Act, as a way to protect kids. But it drew broad criticism (including from noted Canadian author Margaret Atwood) for its vague speech restrictions and potential for “thoughtcrime” enforcement for the general public.

And so, an outright social media ban for youth may offer the Liberals a fresh angle with which to attempt to justify their speech restrictions, particularly now that they’ve orchestrated a majority government.

Additionally, the issue of the right to be anonymous online is pertinent here. For many people, including youth, anonymous online speech is part of digital culture. For others, it is the only way whistleblowers can bring critical information to light. Age-verification laws that require digital-IDs or other invasive measures designed to detect individual identity have also been criticized by experts.

That said, there is also need for improved safeguards to ensure that perpetrators of crime are not allowed to get away with it due to online anonymity. What is illegal in real life should be illegal online, and law enforcement officials should have better tools to prevent crimes already in the Criminal Code like online criminal harassment, stalking and extortion. But there are ways to update laws that both bring justice to victims and preserve due process, and legislators should be focused on those as well.

Any online harms intervention must be carefully designed to safeguard against injurious censorship. The government must clearly demonstrate that such measures are Charter-compliant and fully protect Canadians’ right to privacy. At the same time, authorities need to strengthen enforcement so that perpetrators of online crimes are effectively brought to justice.

So, if action is needed and there are potential major concerns with an outright ban, are there better alternatives for lawmakers to explore?

That is a question Canadian legislators are wrestling with right now.

For my part, two years ago, I wrote a bill that would create a duty of care for all online operators, not just social media platforms, to keep kids safe online. This set of consistent rules would impose major legislated safeguards for youth online, would give parents more tools to assist with and monitor their child’s online life (including legislated requirements for online operators to offer parents ways to cut their kids off from certain sites), and wouldn’t require a digital ID to operationalize. It would also impose the possibility of civil litigation for operators that don’t comply, and provide better tools for law enforcement to address crimes like online criminal harassment, stalking and extortion.

Very recently, the Council of Europe’s Commissioner for Human Rights favoured this type of approach over a simple social media ban for youth. In a clear statement titled “Regulate online platforms, not children,” he urged European governments to exercise caution before imposing sweeping social media bans on minors. He also argued that instead of shifting responsibility onto children through blanket restrictions and invasive age verification, states should impose clear legal duties on platforms to respect human rights by design and default, with independent oversight, algorithmic transparency, and meaningful accountability.

As the Commissioner noted, the current online ecosystem is failing children because of platform design choices and business models, not necessarily because children are simply using the tools. This is why regulating the platforms, not the children, in a way that respects civil liberties may be a better approach.

While my bill is a first stab at this concept, it’s an approach that could be improved upon, and it should be measured against the imposition of potentially less effective (or less constitutional) instruments like a social media ban. Food for legislative thought as Parliament once again enters this debate, anyway.

So what comes next?

Many advocates for restricting youth access to social media have their hearts in the right place: they are genuinely seeking stronger protections for children online.

To make sure the best possible public policy emerges, legislators and the Canadian public have a responsibility to carefully examine all of the data and positions, and to develop the best possible policy for Canadians.

What’s your take?

Share this post; it’s important.

