SJA hears exclusively from Marshall Erwin, Chief Information Security Officer at Fastly, about the 'AI speed tax'.
Companies accelerating AI adoption are moving faster than they can secure themselves.
Despite slower uptake in the UK than the US, adoption is picking up faster than cybersecurity strategies are evolving.
At the same time, the way organisations build and operate technology is changing. AI introduces new dependencies, new integrations and new behaviours that traditional security thinking was not designed to handle.
To function properly, AI systems and agents need access to reams of sensitive company data, along with the ability to manipulate it. This challenges security models that rely on tightly managing identity and privileges.
This leaves businesses exposed to greater cyber risks as AI use grows.
Fastly’s annual Global Security Research Report found that, at this early stage of mass adoption, self-described AI-first organisations are showing significant vulnerabilities compared to their peers – an ‘AI speed tax’.
The AI-first conundrum
An ‘AI-first organisation’ is defined here as one that has implemented AI across workflows and incorporates AI in new projects by default.
These advanced adopters are showing weaker resilience to attacks and slower incident response.
AI-first organisations face recovery timelines of almost seven months, a full 80 days longer than their peers, and each major incident costs them 135% more.
This points to a fundamental issue with AI adoption – legacy security tools simply aren’t providing enough cover at this early stage.
The impact is likely to be most pronounced in sectors like finance and retail that are highly reliant on digital services.
These prominent sectors for the UK economy have faced high-profile attacks in the last year due to the sensitive customer data they hold.
AI use is becoming standard practice for many of the largest players, but security postures aren’t yet adapting to the new threat landscape at a sufficient pace.
The new attack surface
Longer recovery timelines are only part of the reason for such a stark difference in outcomes between AI-first organisations and their peers.
Cyber-criminals are directly targeting AI infrastructure, with AI-first organisations again bearing the brunt of the damage. Nearly half (44%) of them saw AI directly exploited in an attack last year, compared to just 6% of others.
AI systems introduce new layers like agentic workflows and decentralised data flows, offering new entry points for bad actors.
The complexity of overhauling infrastructure for AI is causing teething issues as security teams get used to new tools and ways of working.
AI contributed to security oversights at around a third (34%) of AI-first organisations.
Using new tools without appropriate controls leaves more gaps for cyber-criminals to exploit and potential for accidental incidents to spiral into critical issues.
Outside of security teams, this problem is being exacerbated by unauthorised use of tools by employees.
Shadow AI use among employees runs 31% higher at AI-first organisations as the drive to encourage AI-driven innovation picks up across markets.
In the US, nearly six in ten organisations now actively encourage employees to use AI tools, according to The Times, and despite a lag in the UK we can expect cultures to shift as the government pushes to make the UK the fastest AI adopter in the G7.
Security in an AI-driven future
It’s clear that AI adoption is going to continue rising in the UK, and in many cases is not optional for enterprises that want to compete with their US counterparts.
In the same way that AI-first businesses are making AI central to operations, security has to be a consideration from the outset of projects, not retrofitted after the fact.
The key steps to securely scaling in today’s threat landscape are:
- Make systems secure by design
Building security architecture into systems from the beginning enables teams to move faster with confidence, as shown by the 81% of organisations that say resilience investments have safely allowed them to pick up the pace of innovation.
Making organisations reliant on AI ‘secure by design’ can be more complicated. Business leaders are still getting a handle on governance frameworks and accountability when incidents occur.
That’s why now more than ever security leaders need a seat at the table when broader strategy is being formed.
Treating AI systems as privileged infrastructure and clearly defining ownership for incidents from day one adds a layer of protection that ensures the long-term value of adopting AI isn’t outweighed by the risk.
- Improve visibility
Visibility also needs to improve to give organisations a clearer picture of where AI is being used.
Mapping AI use and ensuring each employee is aware of their role in protecting the business prevents blind spots and individual oversights from hampering progress.
This includes understanding not just sanctioned tools, but also the extent of shadow AI across the organisation.
Without that visibility, security teams are left reacting to incidents rather than proactively managing risk.
- Protect the new perimeter
AI systems are powered by web applications and APIs, which are now priority targets for attackers.
Ensuring you have the right security monitoring and alerting in place is essential to maintaining control over both internal systems and external access points.
- Establish who’s in charge
More than half of AI-first businesses lack clear ownership of incident response, compared to just 23% of others.
AI is blurring traditional boundaries between teams, making it harder to define responsibility when incidents occur.
Upskilling existing teams and establishing clear accountability will help organisations respond more effectively when issues arise.
Clarity of ownership is particularly important in AI environments, where the origin of incidents may be more difficult to identify.
The AI mindset shift
Fast adopters of AI don’t need to slow down; they need to recognise that security is the foundation of success.
Organisations rapidly changing their cultures to encourage AI innovation can scale with confidence, provided security leaders have a say in AI strategy from the outset and every employee understands their role in the overall security posture.
Encouragingly, there are signs that organisations are starting to adapt.
Many are investing in post-incident reviews and automation to improve response times, and recovery timelines overall are beginning to stabilise.
Those that get this balance right will be better positioned to realise the benefits of AI without absorbing unnecessary risk.
Those that don’t will continue to pay the AI speed tax through longer recovery times and increasing exposure.
