
New generative AI use cases are as limitless as the technology itself—and so are the security and data privacy impacts. Despite warnings from scientists, tech luminaries and policymakers to proceed with caution, a tsunami of generative AI is about to wash over the workplace.
Swift action must be taken to make sure that AI serves the business securely, safely and correctly, now and in the future. We can’t let the advancement of AI outpace our ability to control it. Companies need to pause to understand AI’s inherent risks and how it may be abused by bad actors. Clarity is the foundation of a strong AI governance framework that effectively balances security against business objectives.
Rogue AI
While an all-out AI apocalypse sounds fictional, the threat of rogue AI should be taken seriously. We don’t yet fully understand the possibilities of a conscienceless technology that is smarter and faster than its creators. Machines are autonomously learning to perform tasks through black box models that even their designers cannot fully explain. In some cases, AI models are known to generate unexpected outputs. For instance, AI hallucinations happen when models generate text that appears plausible but is completely fictional.
Superhuman AI technologies paired with virtual reality will soon have the power to predict behaviors and exploit human psychology—at speeds that will make it impossible for people to distinguish what is real. Unleashing these capabilities without a governing framework is like giving an immoral, fearless genius the power of nuclear fusion and expecting nothing to go wrong. You can’t assume highly automated, intelligent technology will follow a set of ethical rules that are not yet fully written, globally adopted or proven unbreakable.
New Threat Vectors
Employees are bringing AI tools for personal and business use cases into the corporate environment without security controls. Threat actors are eager to use the boundary-pushing nature of AI to exploit any weaknesses. Online forums are already full of new ways to weaponize ChatGPT and circumvent its safety controls to accomplish in minutes what previously took days or even months. Some of the anticipated AI-enabled exploits include:
• Tricking victims. Generative AI can be used to create convincing imitations of almost anything, including text, videos, voice recordings and images. With easy access to ChatGPT-type tools, nation-state adversaries and criminals can quickly create custom-tailored communications in native English with high-quality artwork. Improving the sophistication of already effective scams like phishing, identity theft and deepfakes will make it much easier for attackers to trick victims into unwittingly helping them accomplish their goals.
• Building better, more sophisticated malware. AI-powered tools will help adversaries automate tasks, mimic human behavior and learn from previous attacks. These capabilities will make it more difficult to detect and defend against malware. For example, a hacker could use AI to overwhelm a security system with false positives, then take it by surprise with a real cyberattack. Also, the coding skills of AI chatbots will make it easier for attackers to make small changes with potentially big effects. By using code-generating AI to modify existing malware, hackers may be able to quickly change the characteristics and behavior of their code so that scanning tools fail to recognize and flag the new iteration.
• Exploiting application vulnerabilities. We don’t yet fully understand the scope and implications of what backdoors will be opened by using AI-generated code. But if it is possible to build AI tools that can track known exploits within the code they generate, we should assume the same algorithms could be weaponized to generate an adaptive attack that won’t be easily contained with traditional tools.
Data Privacy Risks
Most generative AI bots do not guarantee data privacy. The lack of model training transparency and data controls has sparked criticism and backlash from regulators and companies alike. Italian regulators temporarily banned ChatGPT over privacy concerns. OpenAI’s privacy policies have since been updated, but European regulators are still considering whether the changes are enough to satisfy General Data Protection Regulation (GDPR) requirements.
When a user enters sensitive data or proprietary information, AI models sometimes use the data in ways the business didn’t anticipate. For example, some enterprises have found that code or data entered into a chatbot by one user can be revealed to a different user in the form of an answer. Essentially, the data fed into the chatbot becomes part of its learning process.
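One practical control worth illustrating is screening prompts before they leave the organization. The sketch below is a minimal, hypothetical Python example, assuming a simple pattern-based filter; the patterns, placeholder labels and sample values are illustrative assumptions rather than a vetted data loss prevention rule set, and a real deployment would pair such a gate with dedicated DLP tooling and policy.

import re

# Illustrative sketch only: redact obvious sensitive patterns from a prompt
# before it is sent to an external chatbot. The patterns below are assumed
# examples, not a complete or production-grade rule set.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with a labeled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Contact jane.doe@example.com, key sk-abc123def456ghi789 attached."
    print(redact(raw))
    # Prints: Contact [REDACTED EMAIL], key [REDACTED API_KEY] attached.

Even a simple gate like this forces the organization to decide what data is allowed to reach an external model, which is exactly the governance question many teams have not yet answered.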
Microsoft is reportedly working on a new version of ChatGPT for select users and organizations with privacy concerns regarding the use of the platform, but the risks posed by generative AI and corporate liability for privacy violations are far from solved.
Conclusion
In the past six months, the ubiquity of chatbots powered by large language models, like OpenAI’s ChatGPT, has turned decades of AI research and development into a tangible reality for people everywhere. From companies seeking to capture a competitive advantage to threat actors looking to weaponize ChatGPT, it suddenly feels like everyone is rushing to exploit the capabilities of AI.
Before jumping on the AI bandwagon, it’s important for companies to understand the risks the technology will unleash so they can formulate the right response. Many people within the cybersecurity community are racing to promote and harmonize best practices, standards and frameworks for AI and related technologies. HISPI, NIST, AI Squared, Forward Edge-AI and Lynx Technology Partners are just a few of the resources security operations teams can lean on to help protect the organizations they serve from AI threats and make sure the business leverages generative AI to achieve its goals securely.