Lasso Security Ltd., a startup that helps enterprises ensure their workers use large language models securely, launched today with $6 million in funding.
The capital was provided by Entrée Capital, the lead investor in the round, and Samsung Electronics Co. Ltd.’s Next venture capital arm. The raise brings Lasso Security’s total outside funding to just over $7.5 million.
Since the launch of ChatGPT last year, millions of knowledge workers have adopted OpenAI LP’s chatbot and competing artificial intelligence tools. Not all of those workers are using the software with their companies’ permission. Tel Aviv-based Lasso Security has developed a platform that can detect when workers use generative AI products in an unauthorized manner, as well as mitigate the associated cybersecurity risks.
According to Lasso Security, its platform can analyze a company’s technology environment and automatically map out all the generative AI tools that its employees are using. The software then lists those tools in a dashboard that also provides information on which worker is using what service and how. Additionally, administrators have access to controls for regulating AI application usage.
Lasso Security spots AI prompts that contain sensitive data such as customers’ credit card numbers. Using the company’s platform, administrators can prevent workers from entering such input into external LLMs. When workers attempt to include sensitive information in an AI prompt, Lasso Security displays a pop-up panel that asks them to replace the information with placeholder data before continuing.
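Lasso Security has not published how its detection works, but the kind of filtering described above can be sketched generically. The snippet below is an illustrative example only, not Lasso Security's implementation: it scans a prompt for 13- to 16-digit sequences, validates them with the Luhn checksum used by payment cards, and substitutes a placeholder token, mirroring the "replace sensitive information with placeholder data" flow the article describes.

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated by spaces or dashes.
CARD_RE = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")


def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0


def redact_prompt(prompt: str) -> tuple[str, bool]:
    """Replace Luhn-valid card numbers with a placeholder; report if any were found."""
    flagged = False

    def replace(match: re.Match) -> str:
        nonlocal flagged
        if luhn_valid(match.group()):
            flagged = True
            return "<CARD_NUMBER>"  # hypothetical placeholder token
        return match.group()

    return CARD_RE.sub(replace, prompt), flagged
```

A production filter would cover many more data types (names, account numbers, secrets) and would sit inline between the worker's client and the external LLM; this sketch shows only the pattern-match-and-substitute step.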
The software maker says that its platform flags cybersecurity issues not only in the input that users enter into AI models but also in those models’ output. In particular, Lasso Security can detect when an AI-powered programming tool generates insecure code that developers shouldn’t incorporate into their company’s software. Lasso Security also spots other issues, such as situations where an AI application might output copyrighted material in response to user prompts.
A third area of focus for the company is helping enterprises protect their internally developed AI applications from hacking.
According to Lasso Security, its platform can detect and block so-called prompt injection attacks. Those are cyberattacks in which hackers use malicious prompts to tamper with an AI model or trick it into leaking sensitive data. Lasso Security can likewise spot denial-of-service campaigns that attempt to render AI models inoperable.
“Lasso’s solution directly addresses the vulnerabilities associated with LLMs,” said Entrée Capital General Partner Eran Bielski. “This enables businesses to confidently integrate large language models into their products or internal tools, while safeguarding their commercial data and information.”
Lasso Security will use its newly closed seed round to hire more workers and enhance its platform. The company, which launched earlier this year, says that its software is already used by multiple organizations to support their LLM cybersecurity initiatives.