Info@NationalCyberSecurity

Legit Security unveils cybersecurity's first AI discovery features


Legit Security, an application security posture management (ASPM) platform, has released the cybersecurity industry's first AI discovery capabilities. The technology allows Chief Information Security Officers (CISOs) and AppSec teams to discover where and when AI-generated code is used, providing the control and broader visibility needed to secure application delivery whilst maintaining momentum in software development.

As developers rapidly adopt AI and large language models (LLMs) to build and deploy new capabilities, a variety of new risks emerge. These include AI-generated code harbouring unknown vulnerabilities or flaws that could put the entire application at risk. Furthermore, AI-generated code may create legal issues if it reproduces material subject to copyright restrictions.

A further risk is the improper implementation of AI features, which could result in data exposure. Despite these threats, security teams often have little insight into where AI-generated code is used, creating security blind spots that affect both the organisation and the software supply chain.

“There’s a significant disconnect between what CISOs and their teams believe to be true and what is actually occurring in development,” comments Dr. Gary McGraw, co-founder of the Berryville Institute of Machine Learning (BIML) and author of Software Security. “This gap in understanding is particularly acute regarding why, when, and how AI technology is being employed by developers.”

The recent BIML publication ‘An Architectural Risk Analysis of Large Language Models’ identifies 81 specific LLM risks, including a critical top ten. These risks, says Dr. McGraw, cannot be mitigated without a comprehensive understanding of where AI is being used.

Legit Security’s platform gives security leaders, including CISOs, product security leaders, and security architects, a comprehensive view of risk across the full development pipeline. With this clear view of the development lifecycle, clients can be confident that their code is secure, compliant, and traceable. The new AI code discovery capabilities enhance the platform by closing a visibility gap, allowing security teams to act preventively and reduce legal exposure while maintaining compliance.

“AI offers tremendous potential for developers to deliver and innovate faster, but there’s a need to understand the risks that may be introduced,” remarks Liav Caspi, co-founder and Chief Technology Officer at Legit Security. “Our aim is to make sure nothing hinders developers, while providing them with the confidence that comes with visibility and control over the application of AI and LLMs. When we showed one of our clients how and where AI was being used, it was a revelation.”

Legit’s AI code discovery capabilities offer a range of benefits, chief among them complete visibility of the development environment: repositories using LLMs, MLOps services, and code-generation tools. The platform can detect LLM and GenAI development and enforce organisational security policies, such as requiring all AI-generated code to be reviewed by a human.
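The article does not describe how Legit performs this detection. As an illustration only, one simple approach to discovering LLM development in a codebase is to scan dependency manifests for AI-related packages. The sketch below assumes Python repositories with `requirements*.txt` files and uses a hypothetical marker list; it is not Legit Security's actual method.

```python
import re
from pathlib import Path

# Hypothetical marker list: package names commonly associated with
# LLM/GenAI development (illustrative only).
AI_PACKAGES = {"openai", "anthropic", "langchain", "transformers", "llama-cpp-python"}

def find_ai_dependencies(repo_root: str) -> dict[str, set[str]]:
    """Scan requirements files under repo_root for AI/LLM-related dependencies.

    Returns a mapping of manifest path -> set of AI package names found.
    """
    hits: dict[str, set[str]] = {}
    for req in Path(repo_root).rglob("requirements*.txt"):
        found = set()
        for line in req.read_text(encoding="utf-8").splitlines():
            # Strip version specifiers, extras, and comments,
            # e.g. "openai==1.3.0  # pinned" -> "openai"
            name = re.split(r"[=<>!~\[;#\s]", line.strip(), maxsplit=1)[0].lower()
            if name in AI_PACKAGES:
                found.add(name)
        if found:
            hits[str(req)] = found
    return hits
```

A real ASPM platform would draw on far richer signals (imported modules, API endpoints called, commit metadata), but the output of even this naive scan gives a CISO a first inventory of where LLM tooling has entered the codebase.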

Other features include real-time notifications of newly introduced GenAI code, providing greater transparency and accountability, and guardrails to prevent vulnerable code from being deployed to production. Legit can also alert on LLM risks by scanning LLM application code for security issues.
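To make the "human review of AI-generated code" guardrail concrete, here is a minimal sketch of a policy check a CI pipeline could run. The commit model (`sha`, `message`, `reviewers` fields) and the AI trailer strings are assumptions for illustration; real systems would obtain this data from their source-control API, and this is not Legit's implementation.

```python
def enforce_ai_review_policy(commits: list[dict], approved_reviewers: set[str]) -> list[str]:
    """Return violation messages for AI-assisted commits lacking human review.

    Each commit dict is assumed to carry 'sha', 'message', and 'reviewers'
    keys (a simplified model of what a source-control API would return).
    """
    # Hypothetical trailers that mark a commit as AI-assisted.
    ai_trailers = ("co-authored-by: github copilot", "generated-by:")
    violations = []
    for c in commits:
        ai_assisted = any(t in c["message"].lower() for t in ai_trailers)
        human_reviewed = bool(set(c.get("reviewers", [])) & approved_reviewers)
        if ai_assisted and not human_reviewed:
            violations.append(f"commit {c['sha']}: AI-assisted change lacks human review")
    return violations
```

A pipeline would fail the build whenever this returns a non-empty list, blocking unreviewed AI-generated changes before they reach production.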

