The federal government has released its plans to strengthen obligations for online companies to protect children and other users from harmful material, and to place an onus on them to ensure future artificial intelligence (AI) technologies can’t be used to generate deepfake intimate content.
- The proposed new rules will extend to cover generative AI and algorithms
- The changes also call for service providers to incorporate safety features to minimise the risks posed by generative AI capabilities
- And more “appropriate” age assurance mechanisms would be required to prevent access by children to class 2 material
Communications Minister Michelle Rowland proposed several amendments to the government’s expectations of online service providers, including social media services.
The proposed changes come after child safety advocates accused the federal government of not doing enough to keep children safe online.
The minister proposed that the rules extend to cover generative AI, algorithms and ensure that the best interests of children are a primary consideration in designing any service used by or accessible to them.
There are no civil penalties for a failure to comply with those expectations.
In a speech to the National Press Club on Wednesday, Ms Rowland will outline why children’s best interests should be prioritised in the design of services and content accessible to them.
“We know children are particularly susceptible to some types of online harms and it is critical that their best interests are treated as a priority throughout the life cycle of a service’s design and deployment,” Ms Rowland said.
In August, eSafety commissioner Julie Inman Grant called for regulatory scrutiny of the industry to ensure safety is integrated into its products, and said her office was receiving complaints of children using image generators to create sexual images of their peers to bully them.
Part of the government’s proposed expectations would call for service providers to consider user safety and incorporate safety features to minimise any risks during the development, implementation and maintenance of generative AI capabilities.
It outlined that service providers must take “reasonable steps” to proactively minimise the extent to which generative AI capabilities produce material or facilitate activity that is unlawful or harmful.
“This would cover, for example, the production of ‘deepfake’ intimate images or videos, class 1 material such as child sexual exploitation or abuse material, or the generation of images, video, audio or text to facilitate cyber abuse or hate speech,” the consultation paper outlined.
The eSafety commissioner previously warned that stronger age verification processes were needed on sites used by children to help prevent them from being coerced into making sexual abuse material.
Analysis from the commissioner of more than 1,300 child sexual abuse material reports showed that one in eight were “self-generated”, with predators grooming and coercing kids into filming and photographing themselves performing sexually explicit acts.
The proposed changes would set an expectation to protect children under privacy and safety settings, to implement “appropriate” age assurance mechanisms, and improve processes for preventing access by children to class 2 material.
Many products currently have simple age-verification methods such as putting in a birthday, but this change could require some services to do more than self-report age and instead establish age with “a greater level of certainty”.
The proposed changes would also place new emphasis on companies that use algorithms to recommend content, requiring them to prioritise user safety through impact assessments and safety reviews, and to enable users to make complaints or inquiries related to algorithms.
Earlier this year, Ms Inman Grant recommended the federal government trial stronger age-verification for pornography websites — which is in place in the UK — to stop kids accessing adult content.
Ms Rowland did not take up the suggestion, saying the technology was still too new for a trial.
The decision attracted the ire of child safety advocates and campaigners who accused the federal government of capitulating to the porn industry and flagged they planned to campaign to change Labor’s decision.
Consultation on the proposal will close in February next year.