AI Is Changing Cybersecurity Fast and Most Analysts Aren’t Ready


Conventional thinking about AI and automation is that they require a great deal of upfront training and human input to be successful. This is undoubtedly true, especially for genAI and large language models (LLMs). The more knowledge and care humans put into training an AI model in its early stages, the more automation, decision-making, and tasks it can take on later.

This is not a one-way street. AI can also be a powerful learning tool for humans. In enterprise cybersecurity, security analysts can leverage AI to acquire new skills, refresh existing ones, and stay ahead of evolving threats. AI is advancing so rapidly that it may now be the answer to cybersecurity’s decades-long skills shortage, which has put defenders at an unfair disadvantage when facing off against attackers.

Debunking AI Myths

The current narrative in the tech trades is that AI and the autonomous security operations center (SOC) will eventually displace the role of the junior security analyst. The claim is that AI will be intelligent enough to generate threat reports all on its own — a job currently done by junior analysts.

However, the reality is much more complicated. Autonomous SOCs and human analysts are interdependent. The ideal outcome of a well-built autonomous SOC is to have mutual checks and balances with the human analyst always making the final decision. AI can indeed do much of the heavy lifting when it comes to detecting and analyzing threats, but there will always need to be a human verifying the findings and conclusions drawn by AI. A junior security analyst is still vital for checking the logic behind the data that forms the prescriptive basis of the AI model.

In fact, without a human verifying the high-level output of an AI model, the process becomes even slower than it was pre-automation. A high-level narrative about the data without the evidence supporting it makes the senior security analyst’s job harder, not easier.

The role of AI in the autonomous SOC is to provide junior analysts with more supporting evidence and context rather than replace them. Junior analysts retrace the same steps the AI assistant took to determine whether its conclusion is sound.

For example, in an autonomous SOC, a junior analyst can type a query into an AI-powered chatbot such as, “Show me the top IP source logins from outside the United States.” The chatbot then conducts a quick search that pulls up a list of the top login results from outside the U.S., allowing the junior analyst to view the IP addresses and the number of times users from other countries have successfully accessed an enterprise’s systems. This may be a very normal behavior in some organizations, but for others, it could be a red flag. It may also depend on the time of day, the user’s IP address, or the number of times they log in. The junior analyst can then use the full context of this data provided by AI to decide whether to investigate further or escalate a threat.
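The query above can be sketched in plain Python. This is an illustrative stand-in for a real SIEM search, not a specific product API; the log records and field names such as `src_ip` and `country` are assumptions.

```python
# Minimal sketch: summarize successful non-US logins per source IP,
# analogous to "show me the top IP source logins from outside the US".
from collections import Counter

# Illustrative, hand-made login events; a real SOC would pull these
# from a SIEM or identity provider's audit log.
login_events = [
    {"src_ip": "203.0.113.7", "country": "DE", "user": "alice"},
    {"src_ip": "198.51.100.2", "country": "US", "user": "bob"},
    {"src_ip": "203.0.113.7", "country": "DE", "user": "alice"},
    {"src_ip": "192.0.2.99", "country": "BR", "user": "carol"},
]

def top_foreign_logins(events, home="US"):
    """Count logins whose source country differs from the home country."""
    counts = Counter(e["src_ip"] for e in events if e["country"] != home)
    return counts.most_common()

print(top_foreign_logins(login_events))
# [('203.0.113.7', 2), ('192.0.2.99', 1)]
```

The analyst still supplies the judgment call: whether two logins from a German IP are routine remote work or an early sign of credential theft.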

While AI can provide a high-level overview, human analysis is still required for a comprehensive understanding of the threat landscape and an effective detection strategy. Verifying data and reviewing AI-generated conclusions helps train junior analysts, builds their critical thinking skills, and makes them more efficient, ultimately positioning them for promotion.

Optimizing the Human Experience

Just like any interdependent relationship, the human analyst and the AI each have strengths and weaknesses. AI is better at quickly analyzing large datasets, recognizing patterns, and spotting anomalies, and it is faster at writing scripts and code. This makes sense: AI is powered by massive knowledge bases and can process data far faster than any human. Conversely, human analysts are better at making decisions based on experience and at understanding nuanced situations. Where AI tends to see only black and white, humans can interpret the gray areas that matter most when making judgment calls.

Humans can teach AI models these traits, just as AI can help human analysts boost their pattern recognition and anomaly detection skills. The idea is that humans and machines work collaboratively toward the same goal, improving each other’s performance along the way.

The human analyst can further train and refine AI through feedback loops. Each time an AI agent completes a task, the human can provide feedback on how well it performed and how it can improve the next time. Continuous improvement is how AI eventually evolves and surpasses humans in specific processes.
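The feedback loop described above can be sketched as a simple record of analyst ratings per task type. The task names and the 1–5 rating scale here are assumptions for illustration, not a specific product’s API.

```python
# Minimal sketch of a human-in-the-loop feedback record: each time the
# AI agent completes a task, the analyst logs a rating and a note.
from dataclasses import dataclass, field

@dataclass
class FeedbackLog:
    """Accumulates analyst ratings for each AI-completed task type."""
    scores: dict = field(default_factory=dict)

    def record(self, task: str, rating: int, note: str = "") -> None:
        # rating: 1 (poor) .. 5 (excellent), judged by the human analyst
        self.scores.setdefault(task, []).append((rating, note))

    def average(self, task: str) -> float:
        ratings = [r for r, _ in self.scores.get(task, [])]
        return sum(ratings) / len(ratings) if ratings else 0.0

log = FeedbackLog()
log.record("triage_alert", 4, "correct verdict, weak evidence summary")
log.record("triage_alert", 2, "missed the lateral-movement indicator")
print(log.average("triage_alert"))  # 3.0
```

In practice, aggregated ratings like these become the signal used to retune prompts, detection rules, or the model itself.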

So, how can AI do the same for humans?

Here are six ways AI can help human security analysts tighten up their skills:

  1. Learning Through Automated Security Frameworks: It is now possible to train AI on specific cybersecurity frameworks, such as MITRE ATT&CK, a globally accessible knowledge base of adversary tactics, techniques, and procedures (TTPs). MITRE ATT&CK helps security professionals better understand how attackers behave and develop counter-defenses. By automating these TTPs and breaking them down into steps, AI can help security personnel, as well as other IT professionals, improve their cybersecurity posture and be better prepared against adversaries.
  2. Personalized Learning and Playbooks: AI agents can create custom training playbooks tailored to an analyst’s skill level, drawing on existing materials and the analyst’s training history. These playbooks adapt as the analyst progresses and can be supplemented by AI chatbots and assistants for on-demand tutoring. Virtual labs and AI-driven simulations offer hands-on experience.
  3. Threat Intelligence and Real-World Scenarios: AI can analyze data from real-world attacks and generate dynamic threat simulations to help train security analysts to respond more effectively. AI-driven cyberattack simulations can mimic network breaches and other attacks, enabling analysts to practice responding to emerging threats and understand new attack vectors.
  4. Automated Research and Skill Expansion: AI can easily summarize hundreds of pages of research and complex threat intelligence reports. Additionally, natural language processing (NLP) tools can track hacker activity more quickly than human analysts: they can recognize language patterns hackers use in social engineering attacks and even generate reports on those patterns for junior analysts. AI can also suggest relevant training courses based on an analyst’s career goals, helping analysts stay motivated and keep their skills sharp and current.
  5. Hands-On Coding and Automation: AI-powered coding assistants can help analysts learn various scripting skills for security automation and generate custom security scripts, improving efficiency and skill development.
  6. Security Gamification and Competitions: AI-powered agents can serve as tutors and create fun, personalized capture-the-flag (CTF) challenges that track performance in cybersecurity competitions, suggesting areas for improvement.
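The first item above, training on the MITRE ATT&CK framework, can be sketched as a lookup that annotates observed behaviors with technique IDs. The tiny mapping table below is illustrative only; a real deployment would draw on the full ATT&CK knowledge base.

```python
# Minimal sketch: tag observed behaviors with MITRE ATT&CK technique IDs
# so a junior analyst can connect an alert to a known adversary TTP.
# These three entries are a hand-picked subset for illustration.
ATTACK_TECHNIQUES = {
    "T1110": "Brute Force",
    "T1566": "Phishing",
    "T1059": "Command and Scripting Interpreter",
}

# Illustrative events; field names are assumptions, not a real schema.
observed = [
    {"host": "web-01", "behavior": "many failed logins", "technique": "T1110"},
    {"host": "mail-02", "behavior": "credential-harvesting email", "technique": "T1566"},
]

def annotate(events):
    """Attach the human-readable ATT&CK technique name to each event."""
    return [
        {**e, "technique_name": ATTACK_TECHNIQUES.get(e["technique"], "Unknown")}
        for e in events
    ]

for e in annotate(observed):
    print(f'{e["host"]}: {e["technique"]} ({e["technique_name"]})')
```

Seeing alerts framed in ATT&CK terms is exactly the kind of supporting context that turns each triage into a small training exercise.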

By integrating AI into their learning, security analysts can accelerate their skill development, stay ahead of threats, and grow into expert cybersecurity professionals.

Security professionals shouldn’t be worried about job obsolescence due to AI. Instead, they should be concerned about missing out on job opportunities because of a lack of AI knowledge. It’s now easier than ever for security analysts to upskill in AI. We live in a time when there is an abundance of open source tools, active AI communities, and free AI training content. It’s never too late to start learning.

