Hackers Armed with Generative AI Pose a Greater Challenge to Businesses

BIZTECH: What was your initial reaction to the news of breakthroughs around generative AI such as ChatGPT?

Farshchi: My first thought was, “Man, this is really cool! There’s a lot of opportunity here.” But as a security guy, my next thought was that it’s going to generate some risks for us. Social engineering is going to get really bad, and I think there will be a whole host of additional risks that will manifest themselves over time.

Titus: To be honest with you, it struck fear in my heart. The threat actors have gotten more sophisticated and better at what they’re doing. This is going to take them to a new level. Our industry has gotten a lot better about reacting very quickly to changes in the behavior of threat actors. So, I’m interested to see what will come out of the cybersecurity startup community.

Green: The generative AI stuff pops up and it’s like, “Oh, man! We’ve got to work on that, but the financial services sector can’t take as long to think about it as we did with cloud.”

The speed of availability to everybody, that was a surprise. And then, “Wow, we’ve got to get in front of this so that we can use it appropriately.” There’s a lot of goodness that comes with it, but then there’s also the opportunity for badness.

READ MORE: How cybercriminals use AI in their attacks, and why you should use it for defense.

BIZTECH: What threats does AI pose to your company’s network security?

Green: First, how will bad actors intentionally use it for evil? We’re seeing the bad guys use it, not in ways where their machines attack our machines, but in ways that make attacks such as phishing even more difficult for humans to discern. A lot of the things that we point out — grammar errors, spelling errors, urgency — generative AI smooths all that out for bad actors.
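The cues Green lists — spelling mistakes, clumsy grammar, manufactured urgency — are exactly what simple phishing triage keys on. A minimal sketch of such a surface-level scorer (the cue lists here are hypothetical illustrations, not any real product's rules) shows why fluent, AI-polished text sails right past it:

```python
import re

# Hypothetical surface cues that naive phishing triage often keys on.
URGENCY_PHRASES = ["act now", "immediately", "account suspended"]
COMMON_MISSPELLINGS = ["recieve", "acount", "verifcation", "securty"]

def phishing_cue_score(message: str) -> int:
    """Count naive surface-level cues; higher means more suspicious."""
    text = message.lower()
    score = 0
    # Urgency language
    score += sum(phrase in text for phrase in URGENCY_PHRASES)
    # Telltale misspellings
    score += sum(word in text for word in COMMON_MISSPELLINGS)
    # Crude grammar/tone cue: repeated punctuation like "!!" or "??"
    score += len(re.findall(r"[!?]{2,}", text))
    return score

clumsy = "URGENT!! Your acount is suspended, act now to verifcation."
fluent = "Hello, please review the attached invoice at your earliest convenience."

print(phishing_cue_score(clumsy))  # trips several cues
print(phishing_cue_score(fluent))  # reads clean, scores zero
```

A model-generated lure reads like the second message: grammatically clean and calmly worded, so every cue above scores zero even though the intent is identical.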

Second, as we take advantage of the opportunity, what might we do to cause inadvertent lapses or weaknesses in our security?

When you first adopt stuff, you don’t know quite what you’re doing. And then, because of that, you’ve created a big old hole. There are a number of rules that we put in place. We want our people to experiment with it, try it, but we’re not going to let them run wild and throw it into production.

You need to be smart about it but not too slow. In the next 12 months, I think you’ll see the financial services sector come up with the guardrails that we’ll need to enable practical business uses around generative AI.

