#cybersecurity | #hackerspace

Survey: SMBs Plan to Embrace AI but Don’t Know the Risks

Is 2020 the year that AI technology takes hold in SMBs? According to a study from Zix-AppRiver, the answer is yes. Nearly 9 out of 10 SMBs report a high interest in adopting AI this year; for businesses with more than 150 employees, that interest jumps to 99%. For many, this is the logical next step in their digital transformation, adding automation to tasks ranging from data analysis and inventory management to human resources assistance and cybersecurity.

When you add new digital technologies, you also increase your chance of a cyberattack. AI, even when it is designed to sniff out risks and protect your network and data, is not immune to vulnerabilities. Yet the Zix-AppRiver study revealed that nearly 7 in 10 respondents were unaware of the risks involved with AI adoption. Even when alerted to the potential risk, however, most said they would add AI anyway, believing that the benefits outweigh the threats.

“Since AI is such a rapidly evolving field, security risks and vulnerabilities will continue to be discovered over time,” said David Pickett, senior cybersecurity analyst at AppRiver. “From a high-level perspective, overreliance on AI solutions without maintaining proper oversight, backup processes and controls to override in an emergency scenario or adequately provide redundancy could be disastrous to an organization.”

How AI Adds Risk

Every organization’s risk profile is in constant flux because every organization depends on computers in one form or another, and we’re only beginning to understand how computer technology creates trouble for our businesses, explained Trevor Pott, technical security lead and product marketing director at Juniper Networks. So it’s entirely reasonable that SMB owners—or anyone else—don’t feel they have a complete handle on the security implications of business applications.

That includes the AI used as security tools. When relying on AI to discover dangerous bots, cyberattacks, malware or anomalous behaviors of web applications, it is easy to forget that even this AI deployment can add vulnerability to your network. “Most of the best tools used by researchers and security teams for AI are open source and can be easily co-opted by the Black Hats to try to recognize security measures and subvert them,” explained Ido Safruti, co-founder and CTO of PerimeterX. “A cyberattacker could access not just the software but a ready-baked infrastructure to perform machine learning and build models, all at a very modest cost.”

Monitoring can also be challenging, given that in many systems ML models can be quite difficult to interpret, AppRiver Manager of Security Research Troy Gill pointed out. “Training sets and models themselves often contain a large amount of private data that needs to be closely guarded as it could potentially provide an attacker with a trove of sensitive data.”

Recognizing the Vulnerabilities

There is no single, reliable way to know whether your AI is vulnerable to cyberattack. Risks will vary depending on the industry and the way the technology is used. It’s also important to remember that risk can never be eliminated entirely, but the damage can be reduced when you know what to look for in determining whether your AI makes your company more vulnerable to cyberattacks.

“From an efficacy, risk and potential vulnerability standpoint with AI, external testing with additional datasets and different processes could help prevent the blindness paradox that can lead to biased output and ineffective outcomes,” Pickett said.
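Pickett’s point about external testing can be made concrete. A minimal, illustrative sketch (the datasets, toy model and 10-point accuracy threshold below are assumptions for demonstration, not a prescribed methodology) flags a model whose accuracy on an external dataset falls well short of its internal results—one symptom of the “blindness paradox” he describes:

```python
# Hedged sketch: validating a model against an external dataset to catch
# strong internal accuracy that masks poor real-world behavior.
# The threshold, toy model and datasets are illustrative assumptions.

def accuracy(model, dataset):
    """Fraction of (features, label) pairs the model classifies correctly."""
    correct = sum(1 for features, label in dataset if model(features) == label)
    return correct / len(dataset)

def external_validation(model, internal, external, max_gap=0.10):
    """Flag the model if its accuracy drops sharply on external data."""
    internal_acc = accuracy(model, internal)
    external_acc = accuracy(model, external)
    return {
        "internal": internal_acc,
        "external": external_acc,
        "flagged": internal_acc - external_acc > max_gap,
    }

# Toy model: predicts class 1 whenever the first feature exceeds 0.5.
model = lambda features: 1 if features[0] > 0.5 else 0

internal = [((0.9,), 1), ((0.1,), 0), ((0.8,), 1), ((0.2,), 0)]
external = [((0.6,), 0), ((0.4,), 1), ((0.7,), 1), ((0.3,), 0)]

report = external_validation(model, internal, external)
# Perfect internally (1.0), coin-flip externally (0.5) -> flagged.
```

In practice the “external” set would come from a different source or process than the training data, which is exactly what makes it useful for exposing biased or ineffective output.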

AI deployment, even when it is used for security systems, must be treated with the same security processes as any other technology. Leadership needs to invest in security training and education, and remember that the human element is a valuable part of any security solution.

Organizations should also avoid giving most of the decision-making power to an AI system that could become a single point of failure. “We’ve seen examples where this type of system has had negative consequences, especially in the healthcare system where care for some of the sickest patients is determined by algorithms that have been shown to systematically grant privilege to certain patients over others based on race,” said Gill.

Proper oversight should be put in place to ensure the system is performing as intended, he added, and data sets utilized by AI should be heavily guarded to avoid unintended exposure.
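One common pattern for that kind of oversight is keeping a human in the loop so the AI never becomes a single point of failure. The sketch below is a minimal illustration of the idea; the confidence threshold and the escalation rule are assumptions for demonstration, not a recommended configuration:

```python
# Hedged sketch: a human-in-the-loop gate in front of an AI decision.
# Low-confidence or high-impact decisions are escalated to a person
# rather than acted on automatically. Thresholds are illustrative.

def route_decision(score, confidence, high_impact, threshold=0.85):
    """Accept the model's decision only when its confidence is high and
    the stakes are low; otherwise escalate to a human reviewer."""
    if high_impact or confidence < threshold:
        return "escalate_to_human"
    return "accept" if score >= 0.5 else "reject"

# Routine, high-confidence call: the system acts on its own.
print(route_decision(score=0.9, confidence=0.95, high_impact=False))
# Same decision, but high stakes: a person reviews it first.
print(route_decision(score=0.9, confidence=0.95, high_impact=True))
```

The design choice here is that the override path exists before an emergency, echoing Pickett’s warning about maintaining backup processes and controls rather than bolting them on later.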

AI will transform the way SMBs conduct business, mostly in a good way. But as AI adoption increases, leadership needs to be aware of AI’s weaknesses and vulnerabilities to ensure that the technology doesn’t become a new gateway for bad actors to access your network and data.
