Some are hailing Manitoba’s promised ban on youth use of artificial intelligence chatbots and social media, while noting how hard it will be to pull off effectively.
Premier Wab Kinew announced this weekend that Manitoba would be the first jurisdiction in Canada to prohibit youth under 16 from using Instagram and TikTok, among other social media platforms.
Unlike the bans in jurisdictions such as Australia, which recently outlawed youth use of social media, Manitoba’s planned ban goes further by including ChatGPT, Claude and other chatbots.
“Previous approaches to this problem have simply replaced one risk with an even bigger one,” said tech analyst and journalist Carmi Levy, who is based in London, Ont.
“By including AI in the list of technologies that would be targeted by this ban, the province recognizes that, yeah, this is a new and emerging threat, our legislation needs to cast a far wider net.”
Some jurisdictions considering bans have looked to what emerging science suggests about harms to the developing brain, primarily from social media use.
It’s also worth putting restrictions on chatbot access and developing reporting requirements for big tech companies to flag any troubling use of their technologies, said Emma Duerden, Canada Research Chair in Neuroscience and Learning Disorders.
Duerden said Manitoba’s decision to ban AI chatbots for youth could be a “world-first.”
She pointed to the Tumbler Ridge, B.C., mass killing involving a teenage shooter who had been banned from ChatGPT months before the massacre due to queries that included gun violence.
OpenAI, the company behind ChatGPT, “could have potentially flagged it, potentially warned authorities in advance,” said Duerden, an associate professor in the faculty of education at Western University.
“I think this really shines a light on what Canada can be doing right now.”
Duerden said her work and that of others suggests social media use is associated with mental health risks to the developing brain. She said children are four to seven times more likely than adults to become addicted to a behaviour or a substance.
Duerden’s team studied 500 teens in Canada and the U.S. and found passive scrolling, compared with creating content and connecting with others, was associated with increased levels of clinical anxiety. That work points to benefits of capping passive scrolling at two hours a day, she said.
“[These platforms] are not neutral … they can have effects on the developing brain, on attention, on learning and memory, mental health,” Duerden said.
“We can’t place the blame on parents. We can’t place the blame on a teacher. We can’t place the blame on individual children. What we need is actual government legislation to limit access at this time point.”
Computer science expert Azfar Adib specializes in age-verification tech. He’d prefer a Canada-wide implementation strategy but says, “Unfortunately in Canada, at the federal level, the legislation [has] been going pretty slow.”
Some of the first widely used age-verification methods employed elsewhere involved getting users to upload government identification such as a driver’s licence or health card, Adib said.
LinkedIn is among the platforms that prompt users for selfies, which AI scans to estimate their age range, he said.

There are also emerging technologies that use voice recordings, and hand and arm movements, to determine age.
Adib is working on an innovation that uses electrocardiogram technology to take a user’s pulse and estimate their age from it, since heartbeat patterns change with age.
“There are new solutions coming up in what we call ‘privacy by design,'” he said.
“Because all over the world, whenever we talk about age check, the biggest concern is privacy because not everyone wants to share their ID document or even their selfie.”
There are intersecting legal concerns, too.
“It fundamentally changes the nature of surveillance infrastructure that’s going to exist, expands it significantly, and it opens up all kinds of new vectors for attack and social engineering,” said Michael Karanicolas, an associate professor of law at Dalhousie University in Halifax.
“If there was one piece of advice that I could give to the government … it’s talk to the information security community, talk to cybersecurity experts and ensure that you have a proper and robust understanding of the impacts of these technologies on digital privacy and digital surveillance, both among young people and the population as a whole.”

Karanicolas said a system that requires some form of government ID isn’t foolproof, though he noted it would be more effective than the honour system in place on some apps.
“But of course, if you’re … making these platforms or AI companies gather passport and identification information on all of their users, that is a significant and very sensitive amount of personally identifiable information that these platforms are going to be collecting,” he said.
Levy, the tech analyst and journalist, noted some kids will find ways to bypass verification, but said that shouldn’t stop governments from trying.
“At least it puts us on the road map toward having … a law in place that protects them against the worst kinds of abuses,” he said. “Anything is better than nothing.”