AI security startup Mindgard has raised $8 million in funding and appointed a new Head of Product and VP of Marketing.
Many AI products are launched without adequate security assurances, leaving organisations vulnerable to risks such as LLM prompt injection and jailbreaks, which exploit the probabilistic and opaque nature of AI systems and only manifest at runtime. Mitigating these risks, which are unique to AI models and toolchains, requires a fundamentally new approach.
Spun out of Lancaster University, Mindgard’s Dynamic Application Security Testing for AI (DAST-AI) solution identifies and resolves AI-specific vulnerabilities that can only be detected during runtime. For organisations adopting AI or establishing guardrails, continuous security testing is essential for gaining risk visibility across the AI lifecycle.
“All software has security risks, and AI is no exception,” said Dr Peter Garraghan, CEO of Mindgard and Professor at Lancaster University:
“The challenge is that the way these risks manifest within AI is fundamentally different from other software.
“Drawing on our 10 years of experience in AI security research, Mindgard was created to tackle this challenge. We’re proud to lead the charge toward creating a safer, more secure future for AI.”
Mindgard’s solution integrates into existing automation, empowering security teams, developers, AI red teamers and pentesters to secure AI without disrupting established workflows.
.406 Ventures led the funding, with participation from Atlantic Bridge, Willowtree Investments and existing investors IQ Capital and Lakestar. The new executives, Dave Ganly, a former Director of Product at Twilio, and Fergal Glynn, who most recently served as CMO at Next DLP (acquired by Fortinet), will play a critical role in the company’s product development and will lead Mindgard’s expansion into the North American market with a leadership presence in Boston.
According to Greg Dracon, Partner at .406 Ventures, the rapid adoption of AI has introduced new and complex security risks that traditional tools cannot address:
“Mindgard’s approach, born out of the distinct challenges of securing AI, equips security teams and developers with the tools they need to deliver secure AI systems.”
Lead image: Mindgard. Photo: uncredited.