Using AI to Assess App Reviews for Child Safety

Brian Levine, a computer scientist at the University of Massachusetts Amherst, has developed a computational model that evaluates customer reviews of social apps to help parents make informed decisions about their children’s app usage. Levine and his team built a searchable website called the App Danger Project, which uses artificial intelligence to analyze reviews for mentions of inappropriate behavior and child sexual abuse. The website provides safety assessments of social networking apps, highlighting reviews that mention sexual abuse and tallying user reports about sexual predators.
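The article does not describe the App Danger Project's actual model. As a rough illustration of the general approach it reports on, the sketch below flags reviews containing safety-related language and tallies flagged reviews per app. The function names, the keyword list, and the sample data are all hypothetical; a real system would use a trained classifier rather than simple keyword matching.

```python
# Illustrative sketch only: a keyword-based review flagger with a per-app tally.
# This is NOT the App Danger Project's actual method, just the general idea of
# scanning app-store reviews for mentions of unsafe behavior.
from collections import Counter

# Hypothetical keyword list; a production system would use a trained model.
FLAG_TERMS = ("predator", "groomed", "inappropriate messages", "unsafe for kids")

def is_flagged(review: str) -> bool:
    """Return True if the review text mentions any flagged term."""
    text = review.lower()
    return any(term in text for term in FLAG_TERMS)

def tally_flags(reviews_by_app: dict[str, list[str]]) -> Counter:
    """Count how many reviews of each app were flagged."""
    return Counter(
        {app: sum(is_flagged(r) for r in reviews)
         for app, reviews in reviews_by_app.items()}
    )

if __name__ == "__main__":
    # Hypothetical sample data for demonstration.
    reviews = {
        "ExampleChat": ["Fun app!", "A predator contacted my daughter here."],
        "OtherApp": ["Great filters", "Nice UI"],
    }
    print(tally_flags(reviews))
```

A tally like this could then be used the way the article describes: surfacing apps whose reviews repeatedly mention abuse so they can be investigated further.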

Predators are increasingly using apps and online services to exploit children, with incidents of coerced explicit image sharing and subsequent blackmail on the rise during the pandemic. However, because Apple’s and Google’s app stores do not offer keyword searches of reviews, it can be difficult for parents to find warnings of inappropriate conduct. Levine hopes that the App Danger Project will complement existing services and help identify apps that are not doing enough to protect users. The website is free to use, and Levine encourages donations to offset its costs.

Levine’s investigation, conducted with a team of computer scientists, found that around one-fifth of the social networking apps distributed by Apple and Google had multiple complaints of child sexual abuse material, and 81 apps had seven or more such reviews. In recent years, reports have emerged about apps with complaints of unwanted sexual interactions, leading to the removal of some apps from app stores. However, Apple and Google still distribute apps that generate significant revenue despite multiple user reports of sexual abuse.

Apple and Google say they regularly scan user reviews and investigate allegations of child sexual abuse. Violating apps are removed, and both companies offer tools to help app developers police child sexual abuse material. However, critics argue that more needs to be done and question why problematic apps remain available in app stores when technology exists to identify them. Apple and Google state that user reviews alone are not reliable enough to justify action against an app but serve as a trigger for further investigation.

The App Danger Project highlighted the social networking app Hoop as having a significant number of reviews suggesting it was unsafe for children. Hoop’s new management has implemented a content moderation system to enhance user safety. The situation has reportedly improved since the original founders struggled to deal with bots and malicious users.
