Take action to control cybersecurity risks in AI




Evaluate current policies


Start by asking:

Do we have the right policies, standards and procedures in place to tackle AI-related security and privacy risks?


Acceptable use policies typically address how users should use computing resources like networks, systems and software. These policies are meant to ensure that people use these resources in a responsible, ethical and legal manner. Explicitly include GenAI or other AI technologies in these policies, alongside existing use provisions for websites, social media, email and communications, to emphasize the potential risks involved.


Revisit your third-party data sharing policy, which typically outlines the types of data that can be shared, the parties that can receive the data, the purposes for which the data can be used, and the methods to ensure security and privacy of the data. These policies should either prohibit or limit the types of data that may be shared during conversations and interactions with AI-driven solutions.
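One way to enforce such a limitation technically is to scrub sensitive data from prompts before they leave the organization. The sketch below is a minimal, hypothetical illustration: the pattern names and the `scrub_prompt` function are assumptions for this example, and a real deployment would rely on a dedicated DLP tool with patterns tuned to the organization's data classification policy rather than a handful of regular expressions.

```python
import re

# Hypothetical patterns for illustration only; real deployments should use
# a DLP engine with rules matching the organization's data classification.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Redact sensitive data before the prompt is sent to an external AI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(scrub_prompt("Contact jane.doe@example.com about SSN 123-45-6789"))
# → Contact [REDACTED EMAIL] about SSN [REDACTED SSN]
```

A gateway of this kind sits between users and the AI service, so the policy's data limits are applied consistently instead of depending on each user remembering the rules.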


Finally, conduct security training and awareness campaigns that address the risks of GenAI and other AI technologies, covering appropriate uses, how to identify and respond to potential security and data breaches, and whom to contact in the event of an incident.


