“If cyber crime were a country, it would have the third-largest GDP”


When it comes to cyber crime, the numbers are stark:

  • It currently costs the world an estimated $9.2 trillion a year
  • On average, it takes a threat actor just 72 minutes to gain access to user data, and that figure keeps falling
  • About 20% of data breaches today are the result of insiders
  • As the world’s largest security company, Microsoft tracks 7,000 password attacks each second. That’s 600 million attacks a day
  • The number of unique attackers Microsoft tracks, such as nation-state actors and financial crime actors, has grown from an average of around 300 to 1,500

These were some of the eye-opening statistics Microsoft’s Corporate Vice President of Security, Vasu Jakkal, underlined in her keynote address on day two of TiEcon 2025.

TiEcon 2025, the world’s largest tech conference and the biggest edition in its 32-year history, took place this May in the heart of Silicon Valley. The conference brought together more than 3,000 entrepreneurs, investors, and industry leaders from around the globe. With over 180 speakers and this year’s theme, ‘AiVerse’, it showcased the transformative power of innovation. Under the leadership of TiE Silicon Valley President Anita Manwani, TiEcon continues to drive a culture of transformational change, fostering new ideas, connections, and opportunities for the next wave of global entrepreneurs.

In keeping with the AiVerse theme, Jakkal underscored the importance of security as a foundation for AI. Microsoft’s $20 billion security business processes 84 trillion signals every day, giving it a unique vantage point on emerging threats such as wallet abuse, prompt injection, and large language model (LLM) poisoning. Other highlights from the keynote included:

How agentic AI can bolster security
Agentic AI, designed to make decisions autonomously and accomplish goals with minimal human supervision, is already addressing challenges in healthcare, education, transportation, and security. In the near future, both individuals and organisations could have agentic AI in the form of unique, interactive personas: think an agent that helps with deep research for your startup, an analyst agent that converts raw data into insights, a chief-of-staff agent that manages your schedule, or even a home companion agent that can tutor children and plan family trips.


As such agents become digital colleagues and thought partners, the question is: what risks does their prevalence pose to us? This is where critical security considerations come in. The questions to ask (see the sketch after this list) are:


  • What is your identity strategy?
  • What permissions do such agents have?
  • How are you protecting your data?
  • Do you have the right data leakage policies?
  • If agents are working across teams, companies, or homes, what are the privacy considerations?
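
The keynote posed these questions without prescribing answers, but one concrete way to think about them is an explicit, auditable policy attached to each agent. The sketch below is a minimal illustration only; the `AgentPolicy` structure, scope names, and data classifications are assumptions, not any vendor's actual model:

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Hypothetical declaration of what an agent may do and see."""
    agent_id: str                       # identity strategy: the agent gets its own identity
    allowed_scopes: set[str]            # permissions, e.g. {"calendar.read", "crm.read"}
    data_classes_readable: set[str]     # data protection: which classifications it may touch
    may_share_externally: bool = False  # data leakage: may output leave the organisation?
    audit_log: list[str] = field(default_factory=list)

    def check(self, scope: str, data_class: str, external: bool) -> bool:
        """Evaluate one requested action against the policy and record the decision."""
        allowed = (
            scope in self.allowed_scopes
            and data_class in self.data_classes_readable
            and (self.may_share_externally or not external)
        )
        self.audit_log.append(f"{scope}/{data_class}/external={external} -> {allowed}")
        return allowed

# Example: a research agent may read internal docs but not export confidential data.
research_agent = AgentPolicy(
    agent_id="research-agent-01",
    allowed_scopes={"web.search", "docs.read"},
    data_classes_readable={"public", "internal"},
)
print(research_agent.check("docs.read", "confidential", external=True))  # False
```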

As agents become pervasive, human defences will need to match the speed and scale of AI, which is why we need to think about agents for security, and AI for security in general. In 2023, Microsoft began building security-focused AI with the launch of the GPT-4-based Security Copilot. It takes open-source models, grounds them on the trillions of security signals and data in its repository, and refines them on security skills. The result is faster and more accurate threat prevention.
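
The keynote did not go into implementation detail, but the grounding step described here resembles retrieval-augmented generation: relevant telemetry is fetched from a signal store and attached to the prompt before the model reasons about an alert. Below is a minimal sketch of that pattern; `Signal`, `SignalStore`, and `llm_complete` are illustrative stand-ins, not Microsoft APIs:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str       # e.g. an identity or endpoint product (hypothetical labels)
    timestamp: str
    summary: str

class SignalStore:
    """Hypothetical index over security telemetry, standing in for a real signal repository."""

    def __init__(self, signals: list[Signal]):
        self.signals = signals

    def search(self, query: str, k: int = 5) -> list[Signal]:
        # Toy relevance: keyword overlap. A production system would use embeddings or vector search.
        words = query.lower().split()
        scored = sorted(self.signals,
                        key=lambda s: -sum(w in s.summary.lower() for w in words))
        return scored[:k]

def triage_alert(alert: str, store: SignalStore, llm_complete) -> str:
    """Ground the model on retrieved telemetry before asking it to assess an alert."""
    context = store.search(alert)
    prompt = (
        "You are a security analyst assistant.\n"
        "Relevant telemetry:\n"
        + "\n".join(f"- [{s.source} {s.timestamp}] {s.summary}" for s in context)
        + f"\n\nAlert: {alert}\nAssess severity and recommend next steps."
    )
    return llm_complete(prompt)  # llm_complete is whatever chat/completion call you use
```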

How agentic AI can address gaps in security
Around 4.6 million security jobs remain unfilled globally. In this context, AI agents can help new and existing talent develop the required competencies.

Security today is largely reactive. Agentic AI can predict and stop novel attacks before they happen. For example, agents can identify data risks as an organisation puts new data structures in place, and they can autonomously apply identity and access policies so the right people have access to the right things at the right time, for the right reasons. Such policies can also be adjusted dynamically.
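
As a rough illustration of what a dynamically adjusted access policy could look like, here is a minimal sketch of a context-aware check. The `AccessRequest` fields, risk scores, and thresholds are assumptions chosen for the example, not Microsoft's actual policy engine:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    resource: str
    purpose: str        # "the right reasons"
    risk_score: float   # 0.0 (benign) to 1.0 (high risk), e.g. from sign-in analytics
    hour_utc: int       # "the right time"

def evaluate(request: AccessRequest, risk_threshold: float = 0.7) -> str:
    """Return "allow", "step_up" (require extra verification), or "deny".

    An agent could tune risk_threshold or the working-hours window on the fly
    as the threat landscape changes, instead of relying on a static rule set.
    """
    if request.risk_score >= risk_threshold:
        return "deny"
    outside_hours = not (6 <= request.hour_utc <= 20)
    if outside_hours or request.risk_score >= risk_threshold / 2:
        return "step_up"
    return "allow"

# Example: a low-risk, in-hours request is allowed outright.
print(evaluate(AccessRequest("alice", "payroll-db", "monthly report", 0.1, 14)))  # allow
```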

In workplaces, such agents can be embedded in SaaS AI apps or built as custom enterprise agents on platforms such as Azure AI Foundry, Amazon Bedrock, or Google Vertex AI.

What more Microsoft is doing to secure the future of AI
In November 2023, Microsoft launched the Secure Future Initiative, a multi-year cybersecurity effort that shapes how it designs, builds, tests, and operates products and services to meet security standards. Beyond running what it describes as the largest security initiative in the world, Microsoft ties executive compensation to security and has 14 deputy Chief Information Security Officers (CISOs) overseeing security engineering teams. Employees across the company also go through a security skilling academy.

“We review our security updates with Satya [Nadella, CEO] every other week and send a report every week. And we have a meeting with the board, of course, every quarter. The first meeting starts with security,” Vasu Jakkal shared.

“Security is a team sport. It deeply matters and turbocharges our product flywheel of defence, because we use all these learnings from security to build better products.”

TiEcon 2025, which ran from April 30 to May 2, featured other eminent tech executives as grand keynote speakers. In case you missed it, here are the takeaways from Satya Nadella’s discussion on what makes a generational company in the AI age.


