(844) 627-8267 | Info@NationalCyberSecurity

New online safety rules to cover generative AI and algorithms to better protect children

The federal government has released its plans to strengthen obligations for online companies to protect children and other users from harmful material, and to place an onus on them to ensure future artificial intelligence (AI) technologies can’t be used to generate deepfake intimate content.

Communications Minister Michelle Rowland proposed several amendments to the government’s expectations of online service providers, including social media services.

The proposed changes come after child safety advocates accused the federal government of not doing enough to keep children safe online.

The minister proposed that the rules extend to cover generative AI and algorithms, and ensure that the best interests of children are a primary consideration in the design of any service used by or accessible to them.

There are no civil penalties for a failure to comply with those expectations.

In a speech to the National Press Club on Wednesday, Ms Rowland will outline why the best interests of children should be prioritised in the design of services accessible to them.

“We know children are particularly susceptible to some types of online harms and it is critical that their best interests are treated as a priority throughout the life cycle of a service’s design and deployment,” Ms Rowland said.

In August, eSafety commissioner Julie Inman Grant called for regulatory scrutiny of the industry to ensure safety is integrated into its products, and said her office was receiving complaints of children using image generators to create sexual images of their peers to bully them.

Part of the government’s proposed expectations would call for service providers to consider user safety and incorporate safety features to minimise any risks during the development, implementation and maintenance of generative AI capabilities.

It outlined that service providers must take “reasonable steps” to proactively minimise the extent to which generative AI capabilities produce material or facilitate activity that is unlawful or harmful.

“This would cover, for example, the production of ‘deepfake’ intimate images or videos, class 1 material such as child sexual exploitation or abuse material, or the generation of images, video, audio or text to facilitate cyber abuse or hate speech,” the consultation paper outlined.

Michelle Rowland and Julie Inman Grant are putting the onus on online companies to better protect children. (ABC News: Adam Kennedy)

Age verification

The eSafety commissioner previously warned that stronger age verification processes were needed on sites used by children to help prevent them from being coerced into making sexual abuse material.

Analysis from the commissioner of more than 1,300 child sexual abuse material reports showed that one in eight were “self-generated”, with predators grooming and coercing kids into filming and photographing themselves performing sexually explicit acts.
