The Cyber Security Hub™


Transcript

I’ve seen a lot of discussion about Project Glasswing / Clawed my Toes, and I wanted to pick up on a point that I haven’t really seen anyone discuss. It’s in that first paragraph, where they talk about the 27-year-old BSD bug they found. They mention it cost them less than $20,000 to find. Now, we don’t know exactly how much it cost, but we know it’s close to $20,000, because nobody says “less than $20,000” when they mean $2.

The vulnerability is something known as a null pointer dereference, which, for those of you not familiar with vulnerability research, is a class of bug that is almost never exploitable for remote code execution. The best you can usually get is crashing a process or crashing an operating system; in 99.9% of cases you can’t get remote code execution out of one. And Anthropic said this themselves: they were only able to get the BSD kernel to crash.

Something I wanted to highlight is that $20,000 figure. We don’t really know how they arrived at that number, but later in the article they mention API pricing, so we can probably infer they mean it cost $20,000 worth of tokens at the price they currently sell them for. Now, as many of you know, tokens are largely subsidized by venture capital. VCs are pouring money into these AI companies to build out infrastructure, data centers, and GPUs, so we don’t actually know what a token really costs in terms of computational power; a lot of it is still being subsidized. My personal belief is that once the VC money dries up and these companies have to sell tokens for what they’re truly worth, prices are going to go up by a lot. So that $20,000 might not be $20,000; it might be more like $40,000 or even more. Which raises the question: who is actually footing the bill for this? BSD is a nonprofit; it’s an open source project.
The reason these bugs aren’t getting found is that no one is being paid to find them. So our solution is an AI model that you can pay to find them. But that doesn’t solve the underlying economic incentive: who is paying $20,000 for a single vulnerability? Are companies going to pool their money to socialize cybersecurity? And if that were going to happen, why hasn’t it happened already? Why didn’t we fix cybersecurity by socializing the cost? Essentially, nothing has changed about the economic incentives. Bugs aren’t going unpatched because no one can find them; they’re going unpatched because, for the majority of software, no one is being paid to find them.

So two questions I might pose: first, could a human vulnerability researcher have found that bug for less money? And second, who is footing the bill? Whether it’s a human or an AI, someone still has to pay to have the code audited, and the entire problem in the first place is economic incentives. No one is getting paid to audit this code, so very few people are doing it. Sure, there are people who do vulnerability research for free, and massive hat tip to them, but there aren’t enough of them in the world to audit all of the code that exists. We’ve essentially just shifted the economic burden elsewhere: rather than paying human vulnerability researchers, we’d be paying an AI. But who is footing the bill? $20,000 for one bug is a lot of money, and even if bugs were being found for $200 or $20, that is still far more than most companies are currently paying. At the end of the day, someone has to foot the bill for vulnerability research, AI-automated or not. It’s not going to be the nonprofits with their open source software. It’s unlikely to be the government, and it’s unlikely to be the billion-dollar corporations, because if it were, they would have already socialized cybersecurity.
So I don’t think we’ve changed either the defense landscape or the threat landscape. It honestly feels like people have been pretending that networks were fortresses no one could ever break into, that this is some revolutionary new hacking technique, and that suddenly attackers will be able to get in. The whole time, attackers have been getting into networks left and right via social engineering, malspam, and all the old techniques. There never really was a perimeter that has suddenly been broken. So I don’t feel like a whole lot about the cyber landscape has changed. While I’m very optimistic about a future in which it costs next to nothing to audit your code, I don’t think we’re at that future; I don’t think we’re anywhere close. I’m not even convinced these AI models cost less per vulnerability than a human researcher. Attackers are still going to get into networks, as they have since networks existed. Vulnerability researchers are still going to find vulnerabilities wherever there’s enough money, regardless of whether that money goes to a human or an AI. But for the rest of us, I don’t think it changes anything.
