firsttime

Applications open

LASR: Validate


Apply now

There is a strong demand signal for the UK to adopt AI systems, but as these systems become more sophisticated and embedded across digital sectors, they also become more vulnerable to malicious attacks or unintended misuse. To have confidence in AI systems we need a foundation of trust underpinned by AI security; protecting and safeguarding AI systems, models, infrastructure, and data from cyber threats and malicious attacks.

Plexal’s innovation programme, as part of the Laboratory for AI Security Research (LASR), is designed to empower startups to develop and deploy advanced AI security capabilities. The programme offers support through tailored and competitive sub-stages designed to help SMEs validate, build, and test new AI security capabilities. We will provide participants with the tools, mentorship, and resources needed to explore the evolving landscape of AI and cyber security.

Enhance AI Security
Innovation

Foster the development of new products to address pressing AI security challenges

Accelerate Market
Readiness

Reduce time-to-market for AI security solutions by providing technical, regulatory, and sales support

Promote Education
and Awareness

Raise awareness of AI security risks and best practices

Validate

We are pleased to open applications for the first programme stage, Validate, which will support SMEs to explore and design the next generation of AI security products. The 6-week programme will fast-track industry validation of AI security product opportunities with a focus on sectors such as Financial Services, Telecoms and Defence as well as national security. Participants will have the opportunity to dive into key AI security challenge areas, and, with the support of industry and LASR stakeholders, validate potential product or capability opportunities.

We will encourage SMEs and innovators from different technology domains to collaborate, where there are synergies and relevant opportunities to do so. At the end of the 6-weeks participants will deliver a Proof of Concept (PoC) proposal for a capability that addresses a specific AI security challenge. A panel of judges will select which proposals get through to the follow-on programme, where participants will design out that PoC with a relevant partner.

What to expect from VALIDATE:

Masterclasses on IP creation & regulation

Networking with LASR ecosystem

Introduction to AI security challenges

Exploration of AI security use cases

Free desk-space in a LASR Hub

Focus areas

We have identified some key opportunity areas within AI security and are looking for SMEs who are interested in exploring specific use cases that align to challenges within these areas:

Guarding against adversarial manipulation at the use-interface level

Adversarial manipulation at the user-interface level poses significant challenges to the security and integrity of AI systems. Manipulations can lead to the dissemination of misinformation, unauthorised data access, and other malicious activities, underscoring the need for robust defences. We want to explore:

• How can real-time detection methods identify malicious inputs like prompt injections or poisoned queries?
• How can user inputs be sanitised or filtered without degrading task performance or impacting usability?
• Can adaptive filtering mechanisms detect and learn from emerging adversarial behaviours?

Monitoring & Protecting AI models deployed in user-facing systems

Implementing robust measures to prevent unauthorised access, misuse, and model extraction is crucial for safeguarding AI systems. We want to explore:

• Preventing unauthorised access to AI systems, including inference APIs, to stop adversaries from exploiting or extracting sensitive information
• Can we detect and mitigate "model extraction" attacks, where adversaries attempt to recreate models by probing them with repeated queries?
• How do we monitor deployed AI systems for unauthorised usage patterns or model misuse in real-time?

Identifying subtle interference introduced during training or fine-tuning

Adversaries can introduce subtle interferences, such as data poisoning and backdoor attacks, which compromise model performance and security. We want to explore:

• How can we systematically detect subtle model interference, such as backdoors or parameter tampering?
• How can testing methodologies ensure fine-tuning does not introduce vulnerabilities into previously secure models?
• Can tools be developed to monitor and verify model integrity across the entire development pipeline?

Analysing model loss surfaces to predict and mitigate areas of high vulnerability

The loss surface represents how a model's error changes in response to variations in input or parameters. Regions with steep gradients, or sharp minima, indicate areas where small input perturbations can lead to significant changes in output, making the model susceptible to adversarial examples. We want to explore:

• How can the geometry of loss surfaces reveal "weak spots" susceptible to adversarial attacks?
• Can we quantify connections between sharp or flat loss regions and adversarial robustness?
• How can testing and probing techniques identify loss-surface vulnerabilities during training?

Wildcard

We don’t know what we don’t know. If after reading through the focus areas described you’re still interested and have an idea but it doesn’t completely fit then let us know!

Timeline

  • 20th Dec to 20th Jan

    Applications open

  • 29th Jan

    Successful applicants notified

  • 19th Feb

    Programme commences

  • 3rd Apr

    Programme showcase at LASR event in London

Entry requirements

  • Innovators with an interest in learning about industry AI security challenges and opportunities
  • Open to potential collaboration
  • Open to all technology domains – we are not just looking for existing AI security products. We are particularly interested in cyber security, AI model builders, platform solutions or tools, as well as tech companies looking at quantitative analysis, risk or exposure management
  • Willing to attend in-person programme activities within the UK

Need more help?
Contact us

Contact us