SINGAPORE – Two-thirds of Singapore Internet users have encountered harmful content online, but nearly half of those who experienced harm did not block the offending content or users, or report it to the hosting platforms, a survey by the Ministry of Communications and Information (MCI) found.
Even among those who reported the harm, the majority faced issues with the reporting process offered by tech platforms such as Facebook, HardwareZone, Instagram, TikTok, X (formerly known as Twitter) and YouTube.
In the survey conducted online in May 2023, 2,107 Singapore users aged above 15 were asked whether they had encountered harm in the previous six months.
It found that the most common types of harmful content were related to cyber bullying, sexual content, illegal activities, racial or religious disharmony, violence and self-harm.
Nearly half of those who experienced harm online said that they did nothing about it because it did not occur to them to do so, or they were unconcerned about the content.
Most of the harmful content was hosted on social media platforms and online forums, followed by messaging apps, search engines and e-mails.
Among users who reported harmful online content to the platforms, more than three-quarters indicated that they faced issues with the reporting process. The main issues highlighted by users included the platform not taking down the harmful content or disabling the account responsible; taking too long to act; and a lack of updates on their reports.
The survey noted that 88 per cent of the respondents were aware of at least one privacy tool that could be used on social media services, with the highest awareness of tools that allowed users to control access to their profile information or their content, as well as to block other users from finding or contacting them.
The survey, which included 515 parent respondents, found that half of them had used parental controls to restrict the types of content that could be accessed by their children, but usage was lower for other child safety tools. These include parent-child linked accounts that allow parents to monitor their children’s online activity, kids-only accounts that come with restricted content, and filtering tools offered by Internet service providers to block access to age-restricted sites.
To help parents manage the harms their children face online, MCI launched an Online Safety Digital Toolkit in March in partnership with Google, Meta, ByteDance and X. The toolkit recommends parental controls, privacy and reporting tools, as well as self-help resources for individuals and parents to manage their own or their children’s safety online.
An inter-ministry toolkit is being developed by MCI, the Ministry of Education and the Ministry of Social and Family Development, and is expected to be launched in phases from early 2024.
Laws have also been tightened over the past few months to tackle online harms.
In July, the Online Criminal Harms Act was passed in Parliament to allow the Government to tell individuals, entities, online and Internet service providers, and app stores to remove or block access to content it suspects is being used to commit crimes.
A new code of practice for app stores will address risks associated with harmful content in online games, possibly with the use of a classification system for them. The Republic will also address how children’s personal data is collected and how data can be used in artificial intelligence (AI) systems.
The code of practice for app stores will complement the Code of Practice for Online Safety, which took effect in July. Under the code, social media firms with significant reach, such as Instagram and Facebook, must put in place systems to limit Singapore users’ exposure to egregious content, including content that promotes terrorism or cyber bullying, or incites racial or religious tensions.