Unveiling the Power and Pitfalls of AI in Cybersecurity: A Coffee Talk Series


To take some of the uncertainty out of the topics of artificial intelligence (AI) and the world of cybersecurity, Votiro Founder & CTO Aviv Grafi sits down with VP of Product Eric Avigdor in this first of a three-part “Coffee Talk” series that explores the different facets of AI and its impact on cybersecurity. 

In this first chat, Aviv and Eric introduce the basics of AI, machine learning (ML), and large language models (LLMs), and discuss the massive opportunities created by the rapid adoption of generative AI. The two differentiate the better-known consumer generative AI, like ChatGPT and Bard, from generative AI used for security, such as SOC assistants that help ensure data integrity and privacy and enhance security analytics. They also discuss generative AI for enterprises, used to train AI models on enterprise data, improve digital business processes, and enhance business analytics for better overall decision-making.

Clearly, AI is here to stay – for better and for worse. Watch the episode and read the recap below:

The discussion then turns to cybercrime. Once, cybercrime was reserved for highly specialized threat actors with significant skill sets and know-how. No longer. AI has become a commodity that anyone can use to commit a cybercrime. The Votiro team describes the availability of Cyberware-as-a-Service, where threat actors create malware and sell it on the Dark Web to anyone who wants to wreak havoc on another entity. They also explain the concept of Cybercrime-as-a-Service – where hackers offer malicious services to anyone willing to pay their price.

For example, AI-enabled Ransomware-as-a-Service can be found for sale on the Dark Web. For the right price, buyers get a white-glove package deal that includes the techniques to install the ransomware, the methods to monetize the attack, a service to facilitate payments from the victim, and more.

While ransomware is the most prevalent attack – involved in 24% of breaches across organizations of all sizes – it is by no means the only option available to malicious actors. AI-leveraged botnets are available for purchase to take down a competitor’s website, and AI-enabled phishing emails can ensnare even the most astute employee. For example, ChatGPT can help write a flawless phishing email – no more grammatical or spelling errors to arouse suspicion – or generate a perfectly worded ransom note in 28 different languages with one click.

Next, Grafi and Avigdor turn their attention to the dangers of relying too heavily on AI. While AI and ML can accomplish tasks we never would have thought possible, the two security experts point out that AI and ML are only as good as their datasets. Training an AI model takes a great deal of data; if that data is not comprehensive enough, the results will suffer. For example, a model trained on insufficient data for recognizing phishing emails will likely produce many false positives. And even with perfect data enabling the model to detect phishing emails with known parameters, what about a zero-day or yet-unknown threat?
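To make that point concrete, here is a minimal, hypothetical sketch – the emails, labels, and choice of model are all invented for illustration and are not from the talk – of how a classifier trained on too small and unrepresentative a dataset misfires on legitimate mail:

```python
# Hypothetical sketch: a phishing classifier trained on far too little data.
# All emails and labels below are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = [
    "urgent: verify your account to avoid suspension",  # phishing
    "your invoice payment is overdue, click now",       # phishing
    "team lunch on friday, see you there",              # benign
    "meeting notes attached from yesterday",            # benign
]
train_labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_texts)
model = MultinomialNB().fit(X, train_labels)

# A perfectly legitimate finance email shares vocabulary with the phishing
# samples, so the under-trained model is likely to flag it: a false positive.
legit = ["please verify the invoice and send payment by friday"]
print(model.predict(vectorizer.transform(legit)))  # likely [1] (phishing)
```

With only four training samples, words like “verify” and “invoice” become phishing signals on their own – exactly the kind of shallow pattern an insufficient dataset produces.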

Even worse is the concept of poisoned data. The expert duo discusses the phenomenon of tricking a learning model using natural language. For example, if you tell the model that the earth is flat, that input becomes “fact”; the next time someone poses the question, the poisoned model will respond accordingly.

The same holds true for any organization looking to train an AI model to make its business or a product more efficient. The first step is to collect a vast dataset and train the model to make decisions based on that data. But what if one of the files in that dataset is poisoned – corrupted or injected with malware? Poisoned data files produce a poisoned model that can – possibly maliciously – lead the organization to the wrong decisions. Data repositories can also mistakenly – or maliciously – include privacy-related data, such as credit card information or HIPAA-protected health data, all of which can eventually be abused.
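A hedged sketch of what that kind of poisoning can look like in practice: a handful of deliberately mislabeled records slipped into the training pipeline is enough to flip a toy classifier’s verdict. Again, all data and names here are invented:

```python
# Hypothetical sketch of training-data poisoning: a few mislabeled records
# slipped into the pipeline flip the model's verdict.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

clean_texts = [
    "reset your password immediately or lose access",  # phishing
    "wire transfer required, respond urgently",        # phishing
    "quarterly report attached for review",            # benign
    "reminder: all-hands meeting at noon",             # benign
]
clean_labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

# Poisoned records: genuine phishing text deliberately labeled as benign.
poison_texts = ["urgent wire transfer, reset your password now"] * 5
poison_labels = [0] * 5

vec = CountVectorizer()
X = vec.fit_transform(clean_texts + poison_texts)
model = LogisticRegression().fit(X, clean_labels + poison_labels)

probe = ["urgent: wire transfer needed, reset your password"]
print(model.predict(vec.transform(probe)))  # likely [0]: phishing now passes as benign
```

Five bad files out of nine were enough here; at enterprise scale, a quietly poisoned repository can skew a model the same way while remaining far harder to spot.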

Votiro’s innovative Zero Trust approach to content security helps companies deliver safe, fully functional files to their users and applications – wherever and however files enter. Unlike detection-based security solutions, which scan for suspicious elements and catch only some malicious files, Votiro’s technology singles out only the safe elements of each file, ensuring that every file entering your organization is free of threats and other poisons that could skew your data model.
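As a conceptual illustration only – this is not Votiro’s actual implementation – the allow-list (“positive selection”) idea can be sketched in a few lines of Python that rebuild a document from known-safe elements instead of trying to detect the bad ones:

```python
# Conceptual illustration only, NOT Votiro's implementation: rebuild content
# keeping only allow-listed elements, rather than detecting malicious ones.
from html.parser import HTMLParser

SAFE_TAGS = {"p", "b", "i", "ul", "li", "br"}  # hypothetical allow-list

class AllowListSanitizer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []
        self.skip_depth = 0  # >0 while inside stripped containers like <script>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip_depth += 1
        elif tag in SAFE_TAGS and self.skip_depth == 0:
            self.out.append(f"<{tag}>")  # attributes (e.g. onclick=...) are dropped

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip_depth = max(0, self.skip_depth - 1)
        elif tag in SAFE_TAGS and self.skip_depth == 0:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if self.skip_depth == 0:
            self.out.append(data)  # plain text survives

untrusted = '<p onclick="evil()">Report<script>steal()</script> attached</p>'
s = AllowListSanitizer()
s.feed(untrusted)
print("".join(s.out))  # -> <p>Report attached</p>
```

The design point is the inversion: nothing passes because it looked clean to a scanner; content passes only because it is on the known-safe list.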

Grafi and Avigdor reveal how, over the last few years, Votiro has trained its own AI models on thousands of macros and files – both malicious and benign – to ensure that its technology can confidently identify a safe macro with the highest accuracy. They challenge the audience to consider where their organization’s files exist: downloaded by employees, uploaded by partners, saved anywhere on cloud infrastructure, and more. Votiro gives organizations the peace of mind that their pipelines are secure, so when data is fed into their AI models, it is clean and safe to use. And that’s a win-win for everyone.

Contact us today to learn more about Votiro and how we can help your organization create a secure foundation. And if you’re ready to try Votiro for yourself, start today with a free 30-day trial.

*** This is a Security Bloggers Network syndicated blog from Votiro authored by Votiro. Read the original post at: https://votiro.com/blog/unveiling-the-power-and-pitfalls-of-ai-in-cybersecurity-a-coffee-talk-series/
