The truths about AI hacking that every CISO needs to know (Q&A)


That means the defender’s job is to change the battlefield in real time. If you’re running cloud-based infrastructure and a cloud instance thinks it’s compromised, it should turn itself off. If someone is trying to abuse an account privilege, say in a service account, and it starts to do something anomalous, don’t let it: the permissions should simply be turned off.

We’re going to have to put these intelligent reasoning systems behind real-time decision making and disrupt the attacker’s decision making on the ground, without causing reliability problems. Maybe you need human approval. Or you shut down one instance and spin up another.

There are options other than just the on/off switch, but we have to start reasoning about real-time disruption or degradation capabilities, and use the whole information-operations playbook to change the battlefield and confuse AI attackers. Particularly because they’re stumbling around in the dark a little and may be less resilient than human attackers.
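To make that concrete, here is a minimal sketch of the automated-response pattern Adkins describes: revoke a misbehaving service account’s permissions automatically when the signal is strong, and fall back to human approval for borderline calls. The anomaly score, thresholds, and the `iam` and `pager` clients are illustrative assumptions, not a real API.

```python
# Sketch of real-time containment (all names and thresholds illustrative):
# a strongly anomalous service-account action gets its permissions pulled
# immediately; a borderline one is routed to a human for approval.

from dataclasses import dataclass


@dataclass
class Event:
    account: str
    action: str
    anomaly_score: float  # 0.0 = clearly normal .. 1.0 = clearly hostile


APPROVAL_THRESHOLD = 0.7  # below this, do nothing; above it, ask a human
AUTO_THRESHOLD = 0.95     # above this, act without waiting


def respond(event: Event, iam, pager) -> None:
    """Change the battlefield in real time, without hurting reliability."""
    if event.anomaly_score >= AUTO_THRESHOLD:
        # High confidence: turn the permissions off now.
        iam.suspend(event.account)
    elif event.anomaly_score >= APPROVAL_THRESHOLD:
        # Borderline: a wrong call could cause an outage, so ask a human.
        pager.request_approval(event.account, event.action)
```

The split between automatic action and human approval is the point: full automation everywhere risks the reliability problems Adkins warns about, while requiring approval everywhere is too slow to disrupt a machine-speed attacker.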

Tim Peacock: AI is getting good at vulnerability discovery, so how does this impact the software supply chain, open source, maintainers, and so on?

Heather Adkins: Long term, the hope is to have AI assistants early in the software development lifecycle, available to open source developers, commercial developers, students, and hobbyists. They’ll catch and prevent most classes of vulnerabilities from reaching production environments. So if you’re coding and you try to create an integer overflow, the assistant will tell you it’s a bad idea to ship that code.
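As a minimal sketch of the kind of guard such an assistant might suggest, consider code that computes a buffer size destined for a 32-bit field; the function and constant names here are illustrative, not drawn from any particular tool.

```python
# Illustrative guard an AI coding assistant might propose: Python integers
# never wrap, but a size handed to C code or a wire format using int32 will.

INT32_MAX = 2**31 - 1


def alloc_buffer(item_count: int, item_size: int) -> bytearray:
    """Allocate item_count * item_size bytes, refusing 32-bit overflow."""
    total = item_count * item_size
    if total > INT32_MAX:
        # Shipping this value into a 32-bit field would overflow; stop here.
        raise OverflowError(f"buffer size {total} exceeds INT32_MAX")
    return bytearray(total)
```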

This will take lots of trial and error and new norm-setting. Commercial software development houses may simply adopt these tools, but open source developers will have to figure out what this means for their community and who will support it.

Anton Chuvakin: What makes me nervous is that the science is there, but what about organizational readiness?

Heather Adkins: There’s always a natural tension inside a business between velocity and safety. These conversations come down to what the solution is. Once those solutions are on the market, developers will come out of school having used them, and they’ll demand them.

This can be cost-efficient and a win-win for business, because no CIO wants to ship unsafe code. We just haven’t had great tools for it yet, and once there are enough public proof points, case studies, and wins, you’ll see this change.

Plus, over the long term, you might start to see insurance companies and regulators nudging the market.

Anton Chuvakin: Going back to the EDR [endpoint detection and response] days, I remember selling it in a certain region, and companies said that because the EDR sent data back to a U.S. cloud, they couldn’t use it. So I asked: if the cloud product were 10% better, or even 10 times better, than the next-best on-premises EDR, would they go with regulation or with risk reduction? Everyone said they’d go with regulation. How do you think this concern will play out with AI?

Heather Adkins: We’re trained to look not just at the risk in front of us, but at risk across the rest of the business too. In regulated environments, ecosystems, and sectors, the regulatory risk may be higher than the cyber risk, because the regulator decides whether you exist or not.

We have to avoid being so focused on cyber risk that we forget that businesses have other kinds of risks. Our job is to balance those out for them. Don’t let chasing compliance be the death of a good internal security posture, but also don’t forget that in this fast-moving space, compliance will develop faster than we realize.

Governments are interested in how AI is used in society, and regulators, as we are seeing, are already deeply engaged. Ultimately that means we need open conversations with regulators so that we can use the technology we need to defend ourselves, yet do so in a way that is responsible. I’d hate to find anyone in a situation where they can’t use the best model to keep hackers out because of a regulatory concern. But that also has to be balanced with some realities. As with most complex societal issues, dialogue is key.

Anton Chuvakin: What do businesses need to change if AI innovation outpaces legislation?

Heather Adkins: It’s really important that they talk through these issues with their regulators. As these solutions become available and relevant, and we can put the pieces together, we may be able to move the regulators. Of course, that’s not always appropriate or possible. The world is highly fragmented and very complicated, and there are no easy answers. These are tough societal issues, and the only way to deal with them is to talk them out.

Tim Peacock: What would you recommend people read to better understand these dynamics? And do you have any advice for CISOs facing them or for early career folks wondering how these issues will change the security industry?

Heather Adkins: First, read newsletters like Daniel Miessler’s Unsupervised Learning, and listen to things like the Google Cloud Security podcast.

Pretend you don’t know anything about tech and learn it all over again, because it’s changing so quickly. CISOs and teams wanting to try out AI should start with small pilots and proof points. Go slow and don’t try to boil the ocean. Pick a few places where you see industry peers being successful and talk to them.

Also, make sure you’ve got good governance over how it’s being used inside your company. The workforce, especially the early career workforce, will drive a lot of the innovation because they expect to use these tools.

All this requires an incredible amount of curiosity and critical thinking and really challenging what you’re looking at. It’s going to be a very interesting few years as we see all this develop.
