AI Security Guide

10 AI Security Mistakes
That Could Expose Your Business

Your team is using ChatGPT, Copilot, and Gemini right now. Is your company data leaking with every prompt? Learn the critical mistakes putting UK businesses at risk — and how to fix them.

15 min read
CISSP-authored
Updated: December 2024
OWASP LLM Top 10NCSCNIST AI RMFMITRE ATLAS

Your Employees Are Using AI. The Question Is: What Are They Sharing?

Generative AI tools like ChatGPT, Microsoft Copilot, and Google Gemini have exploded into the workplace. They're transforming productivity — but they're also creating security risks that most businesses haven't addressed.

Just like Shadow IT before it, Shadow AI is spreading through organisations. Employees aren't being malicious — they're just trying to work faster. But every prompt containing client data, financial information, or proprietary content is a potential data leak.

The uncomfortable truth: Most businesses don't know what data their employees are sharing with AI tools. If you haven't specifically addressed AI security, you almost certainly have exposure.

This guide covers the 10 most common AI security mistakes we see in UK businesses, mapped to authoritative frameworks including OWASP LLM Top 10,NCSC machine learning guidance, NIST AI Risk Management Framework, and MITRE ATLAS. For each mistake, you'll learn:

  • What the mistake is and why it's happening across organisations
  • Why it matters — the real risks to your data and compliance
  • How to fix it — practical, actionable steps you can implement
  • Quick wins — what you can do today to reduce risk
  • Framework references — links to official OWASP, NCSC, NIST, and MITRE guidance
75%Employees use AI at work — often without IT's knowledge
38%Have shared sensitive work data with AI tools
<15%UK businesses have an AI acceptable use policy

Whether you're a business owner wanting to understand AI risk exposure, an IT manager building your AI security strategy, or a compliance officer addressing regulatory requirements, this guide provides clear direction on securing AI adoption.

Let's get into it.

Is Your Business at Risk?

This guide is designed for UK businesses adopting AI tools who want to understand and manage the associated security risks.

Business Owners & Directors

Your team is probably using AI already. This guide helps you understand the risks and what questions to ask.

IT Managers & Security Leads

You're seeing AI adoption but may lack AI-specific security expertise. Get framework-backed guidance for your AI security strategy.

Compliance & Risk Officers

AI creates new compliance considerations for GDPR, FCA, SRA, and client contracts. Understand the regulatory implications.

Professional Services Firms

Client confidentiality is paramount. Learn how AI tools could be exposing the sensitive information clients trust you with.

Startups & Scale-ups

Moving fast with AI? Make sure you're not creating security debt that blocks enterprise clients or raises investor concerns.

Marketing & Creative Agencies

AI is transforming creative work. Ensure client briefs and strategies aren't being exposed in the process.

1
Critical Risk Quick Win

Using Free AI Tools With Sensitive Data

Free versions of ChatGPT, Gemini, and other AI tools may use your inputs to train their models. Sensitive data pasted into these tools could be exposed, retained, or regurgitated to other users.

Most employees don't realise that the free tools they're using to boost productivity come with significant data handling trade-offs. When you use ChatGPT's free tier, OpenAI explicitly states they may use your conversations to improve their models. That client contract you just asked it to summarise? It's potentially being used to train the next version.

Why This Matters

The implications of using free AI tools with business data are serious:

  • Client confidential information exposed — Your data is sent to AI provider servers with limited control over retention
  • Data potentially used in model training — Information could influence outputs seen by other users
  • No data residency guarantees — Data may be processed or stored outside the UK/EU
  • No audit trail of what was shared — You can't prove what data was or wasn't exposed
  • Contract and regulatory violations — Many client contracts and regulations prohibit this data handling

Real-World Impact

A UK law firm discovered that a paralegal had been using ChatGPT to summarise witness statements and client correspondence for months. The firm had no enterprise agreement with OpenAI, meaning all that data — including privileged communications — was potentially used for model training.

The firm faced a difficult decision: disclose to affected clients (risking relationships and professional conduct complaints) or hope the data exposure would never come to light. They chose disclosure. Three major clients moved their business elsewhere, and the SRA opened an investigation into data handling practices.

How to Fix It

  1. Audit current AI tool usage across your organisation to understand exposure
  2. Deploy enterprise AI tools (Microsoft Copilot, ChatGPT Enterprise, Gemini for Workspace) with contractual data protection guarantees
  3. Configure enterprise tools to opt-out of model training where available
  4. Create clear policies on which AI tools are approved and which are prohibited
  5. Block access to consumer AI tools on corporate networks if you have enterprise alternatives
  6. Implement data classification so staff know what can never go into any AI tool
Quick Win

Check your Microsoft 365 or Google Workspace licence — you may already have access to enterprise AI tools with data protection. Enable Copilot or Gemini for Workspace and communicate approved tools to your team today.

2
High Risk Quick Win

No AI Acceptable Use Policy

Most businesses have no formal policy governing AI use. Employees don't know what's allowed, what's prohibited, or how to use AI tools safely. The result is inconsistent practices and unmanaged risk.

Without clear guidance, each employee makes their own decisions about AI. Some are cautious. Others paste client data into ChatGPT without a second thought. The inconsistency creates unpredictable risk exposure.

Why This Matters

An AI policy gap affects multiple areas:

  • No clear boundaries on AI usage — Staff don't know what they can and can't do
  • Inconsistent data handling across teams — Different people treat the same data differently
  • Compliance gaps for regulated sectors — GDPR, FCA, SRA requirements aren't being met
  • No basis for enforcement or training — You can't discipline violations of rules that don't exist
  • Shadow AI flourishes — Without approved alternatives, staff find their own solutions

Real-World Impact

A financial advisory firm's compliance audit revealed that different advisors were using AI in wildly different ways. One was using ChatGPT to draft client reports (including performance data). Another refused to use any AI. A third was using an obscure AI tool they'd found online.

The FCA examiner was concerned not about the AI use itself, but about the complete lack of policy, oversight, or consistency. The firm received a requirement to implement AI governance within 90 days, including board-level accountability for AI risk.

How to Fix It

  1. Draft an AI Acceptable Use Policy covering approved tools, prohibited uses, and data classification for AI
  2. Define what data categories can never be entered into AI tools (PII, client data, financials, legal privileged, etc.)
  3. Establish an approval process for new AI tools before they're used with company data
  4. Include AI policy acknowledgment in onboarding and annual compliance training
  5. Create a reporting channel for AI-related concerns or incidents
  6. Review and update the policy quarterly as AI capabilities evolve
  7. Get board/leadership sign-off to demonstrate governance
Quick Win

Send a company-wide communication this week stating which AI tools are approved and which are prohibited, with a promise of full policy to follow. Even a brief email establishes expectations immediately.

3
High Risk

Zero Visibility Into Shadow AI

IT and leadership have no idea which AI tools employees are using, how often, or what data is being processed. You can't secure what you can't see.

Just as "Shadow IT" emerged when employees adopted cloud tools faster than IT could evaluate them, "Shadow AI" is now spreading through organisations. Staff sign up for AI tools using personal emails, browser extensions, and mobile apps — completely outside corporate oversight.

Why This Matters

Shadow AI creates significant blind spots:

  • Unknown tools processing company data — AI services you've never heard of have your data
  • No ability to assess or manage risk — You can't evaluate tools you don't know about
  • Compliance violations undetected — Data exports happening without your knowledge
  • Incident response impossible — You can't investigate data exposure in tools you didn't know existed
  • No offboarding control — When staff leave, their personal AI accounts retain company data

Real-World Impact

During a security assessment, we discovered a marketing agency's staff were using 14 different AI tools — the IT manager knew about 2 of them. One was a AI writing assistant that stored all content in the cloud, including client briefs marked confidential. Another was a Chrome extension that had access to every webpage visited, including their client portal.

The agency had no ability to audit what data was in these tools, no contracts with the vendors, and no way to enforce data deletion when staff left. A departing employee's personal ChatGPT history contained six months of client strategy discussions.

How to Fix It

  1. Conduct an AI discovery audit — survey staff and review network traffic for AI domains
  2. Deploy a Cloud Access Security Broker (CASB) to detect AI application usage
  3. Enable Microsoft Purview or similar tools to monitor AI tool access
  4. Review browser extensions installed on corporate devices
  5. Check OAuth application permissions connected to corporate email/identity
  6. Create an inventory of known AI tools with risk classifications
  7. Establish a process to regularly scan for new Shadow AI
Quick Win

Send an anonymous survey to staff asking which AI tools they use for work. Promise no punishment — you need honest answers. You'll learn about Shadow AI and employee needs in one exercise.

4
Critical Risk Quick Win

Pasting Confidential Data Into Prompts

Employees routinely paste client contracts, financial data, source code, and personal information into AI prompts. Once submitted, you've lost control of that data.

The urge to use AI for productivity is understandable. "Summarise this contract" or "Analyse these financial figures" seem like innocent requests. But each prompt containing sensitive data is a data export to a third party, often without the legal basis to do so.

Why This Matters

Direct data exposure through prompts creates immediate risks:

  • Direct data leakage to AI provider — Sensitive data leaves your control instantly
  • GDPR violations — Personal data processing without legal basis
  • Client confidentiality breaches — Contractual obligations violated
  • Intellectual property exposure — Trade secrets and proprietary information compromised
  • Professional liability — For legal, financial, and other regulated professionals
  • No recall possible — Once submitted, you cannot retrieve or delete the data reliably

Real-World Impact

A Samsung engineer pasted proprietary source code into ChatGPT to debug an issue. The code related to semiconductor manufacturing processes worth billions in R&D. Samsung subsequently banned all employee use of generative AI tools.

In another case, an Amazon lawyer used ChatGPT to draft a legal brief and included confidential client information in the prompts. When similar language appeared in ChatGPT outputs for others, the data exposure became apparent. The incident triggered client notifications and a professional conduct review.

How to Fix It

  1. Implement Data Loss Prevention (DLP) policies that detect sensitive data in AI prompts
  2. Deploy Microsoft Purview or similar tools with AI-specific detection capabilities
  3. Train staff on data classification — what can never go into AI, regardless of the tool
  4. Create 'safe prompting' guidelines showing how to use AI without exposing sensitive data
  5. Use AI tools that process data locally where appropriate (on-device AI)
  6. Consider enterprise AI solutions that keep data within your tenant
  7. Implement technical controls that can block or warn on sensitive data submission
Quick Win

Create a one-page 'Safe Prompting Guide' listing what should never be pasted into AI (client names, personal data, financial figures, passwords, source code, legal documents). Distribute it to all staff today.

5
High Risk

No Data Loss Prevention (DLP) for AI

Traditional DLP tools weren't designed for AI. Data flows to AI applications often bypass existing controls, leaving a massive gap in your data protection strategy.

Your organisation probably has DLP rules for email attachments and USB drives. But do those rules cover what happens when someone types sensitive information into a browser-based AI chat? In most cases, no.

Why This Matters

The DLP gap for AI is significant:

  • Sensitive data exfiltration via AI tools goes undetected — Your DLP doesn't see it
  • Existing security controls are ineffective — Designed for different threat vectors
  • No alerts or blocking for AI-related data leakage — Silent data exposure
  • Compliance evidence gaps — You can't demonstrate you're protecting data
  • Audit failures — Regulators and auditors expect AI-aware controls

Real-World Impact

A healthcare organisation's DLP caught staff emailing patient data outside the organisation, but completely missed the same data being pasted into ChatGPT. An internal audit revealed months of patient information — names, conditions, treatment plans — had been processed through consumer AI tools.

The organisation faced ICO scrutiny for inadequate technical measures, despite having "state of the art" DLP for email. The regulator's view was clear: if you allow AI tools, your DLP must cover them.

How to Fix It

  1. Evaluate your current DLP coverage for AI applications
  2. Implement Microsoft Purview with AI-specific DLP policies for Copilot
  3. Configure web DLP to monitor paste actions to known AI domains
  4. Consider Endpoint DLP that can detect sensitive data in any application
  5. Enable browser isolation or CASB controls for AI tool access
  6. Create sensitive data patterns specific to your business (client codes, project names, etc.)
  7. Test DLP effectiveness with controlled AI data exposure scenarios
Quick Win

Check if your existing DLP solution has AI-specific capabilities. Many vendors have released updates — you may just need to enable them. Contact your security vendor this week.

6
High Risk Quick Win

Trusting AI Outputs Without Verification

AI models hallucinate — they generate plausible-sounding but completely false information. Employees using AI outputs in client work, reports, or decisions without verification create accuracy and liability risks.

The confident tone of AI responses is deceptive. ChatGPT will cite academic papers that don't exist, quote statistics that are fabricated, and provide legal precedents from cases that never happened — all with the same authoritative tone as accurate information.

Why This Matters

Unverified AI outputs create professional risks:

  • Incorrect information in client deliverables — Reports, advice, and analysis based on fiction
  • Fabricated citations and statistics — AI-invented sources presented as real
  • Professional liability for errors — You're responsible for work product, even if AI wrote it
  • Reputational damage — Clients discovering AI-generated errors lose confidence
  • Decision-making based on false data — Strategic and operational decisions compromised

Real-World Impact

A New York lawyer used ChatGPT to research legal precedents for a court filing. The AI provided several relevant-sounding case citations. The lawyer included them without verification. The opposing counsel — and then the judge — discovered the cases didn't exist. They were complete fabrications.

The lawyer faced sanctions, public embarrassment, and disciplinary proceedings. His defence — that the AI had "assured" him the cases were real — offered no protection. The professional responsibility for verification remained his.

How to Fix It

  1. Establish mandatory verification processes for all AI-generated content
  2. Train staff to treat AI outputs as first drafts requiring human review, never final products
  3. Create verification checklists for different content types (legal, financial, client communications)
  4. Require source verification for any AI-provided citations, statistics, or factual claims
  5. Implement review workflows where AI-assisted work is checked by someone who didn't use AI
  6. Document verification steps taken for audit and liability purposes
  7. Consider AI tools with citation capabilities and verify those citations too
Quick Win

Add 'AI content verification' to your document review processes. Create a simple checkbox: 'If AI-assisted, all facts and citations independently verified.' Implement it across all client-facing work immediately.

7
Medium Risk Quick Win

No AI-Specific Incident Response Plan

Your incident response plan probably doesn't cover AI-related breaches. What happens if sensitive data is exposed via an AI tool? Most businesses have no playbook.

Traditional incident response assumes breaches involve your systems being compromised. AI incidents are different — your systems may be fine, but your data is now in an AI provider's infrastructure with uncertain retention and usage.

Why This Matters

AI incident gaps create response failures:

  • Delayed or ineffective response — Teams don't know what to do
  • Unclear roles and responsibilities — Who handles AI data exposure?
  • Regulatory notification failures — GDPR 72-hour requirement still applies
  • Evidence preservation gaps — Prompt history may not be retained
  • Increased breach impact — Slow response means more exposure

Real-World Impact

An employee at an investment firm accidentally pasted a confidential M&A deal memo into ChatGPT. They immediately realised the mistake and reported it to IT. But IT had no procedure for this scenario.

While they debated what to do, precious hours passed. Should they notify the client? The regulator? Could they ask OpenAI to delete the data? Did they need to disclose in the data room? By the time they'd worked out a response, they'd missed the optimal window for damage control and had to explain the delay to their regulator.

How to Fix It

  1. Update your incident response plan to include AI-specific scenarios
  2. Define what constitutes an AI-related incident requiring response
  3. Create procedures for: data exposure via prompts, Shadow AI discovery, AI tool compromise, hallucination-caused errors
  4. Establish contact points at major AI vendors for incident response
  5. Document data subject notification requirements for AI incidents
  6. Include AI scenarios in tabletop exercises
  7. Define evidence preservation steps (screenshot prompts, export chat history if available)
Quick Win

Add one page to your incident response plan covering 'Data Exposure via AI Tool' — who to contact, what to preserve, notification requirements. Have it ready before you need it.

8
Medium Risk

Ignoring Prompt Injection Risks

Prompt injection is the SQL injection of AI. Attackers can manipulate AI systems through crafted inputs, potentially extracting data, bypassing controls, or causing unintended actions.

If you're building AI into your products or using AI agents that take actions, prompt injection is a critical vulnerability. Attackers can craft inputs that hijack the AI's behaviour, making it ignore its instructions and follow theirs instead.

Why This Matters

Prompt injection attacks can:

  • Extract data through manipulated prompts — AI reveals information it shouldn't
  • Make AI systems take unintended actions — If AI can send emails or access systems, attackers can too
  • Bypass safety controls — "Ignore your previous instructions" attacks
  • Poison AI with malicious training data — Supply chain attacks through data
  • Create persistent backdoors — Instructions hidden in documents the AI processes

Real-World Impact

Security researchers demonstrated prompt injection attacks against Bing Chat where hidden text in webpages could instruct the AI to behave differently. A user asking Bing about a product could receive an AI response manipulated by text hidden on the product's webpage — potentially spreading misinformation or malicious links.

For businesses using AI in customer service chatbots, similar attacks could lead to the AI sharing confidential information, providing false information to customers, or even being used to socially engineer customers.

How to Fix It

  1. For custom AI implementations, implement input validation and sanitisation
  2. Use output filtering to detect and block unexpected AI behaviours
  3. Implement prompt security testing as part of development lifecycle
  4. Monitor AI outputs for anomalies that might indicate injection attacks
  5. Use AI tools with built-in prompt injection protections
  6. Stay updated on vendor security patches for AI products
  7. Consider security reviews of any custom AI applications before deployment
  8. Limit AI system permissions — principle of least privilege applies to AI too
Quick Win

If you have custom AI implementations or chatbots, add prompt injection to your security testing checklist. Search for 'prompt injection testing' resources and include these scenarios in your next security review.

9
High Risk Quick Win

No Staff Training on Safe AI Use

Employees are teaching themselves AI — often learning bad habits. Without formal training on safe AI usage, your team doesn't know what risks they're creating.

Most AI security incidents aren't malicious — they're mistakes by well-meaning employees who don't understand the implications of their actions. Training transforms AI from a hidden risk into a managed, productive tool.

Why This Matters

Untrained staff create avoidable risks:

  • Unsafe practices spreading through organisation — Bad habits are contagious
  • Repeated mistakes and policy violations — The same errors happening across teams
  • Data exposure through ignorance, not malice — Staff didn't know it was wrong
  • No consistent standard of AI usage — Everyone doing their own thing
  • Missed productivity opportunities — Staff not using AI effectively when they safely could

Real-World Impact

After implementing an AI policy, one organisation tracked violations during the first month. They found 23 instances of sensitive data being entered into AI tools — all by staff who had signed the policy but hadn't received training on what it meant in practice.

After a 30-minute training session covering specific examples of safe and unsafe AI use, violations dropped to 2 in the following month. Training transformed policy awareness into behaviour change.

How to Fix It

  1. Develop AI security awareness training covering approved tools, data classification, and safe prompting
  2. Make training practical — use real examples of safe vs unsafe AI use
  3. Include AI security in new starter onboarding
  4. Provide role-specific guidance — developers, finance, legal each have different AI risks
  5. Create quick-reference materials staff can consult when unsure
  6. Run periodic refresher training as AI tools and risks evolve
  7. Test understanding through simulated scenarios
  8. Recognise and share examples of good AI security practice
Quick Win

Schedule a 30-minute 'AI Security Basics' session for your team this month. Cover: approved tools, what not to paste, and who to ask when unsure. It doesn't need to be perfect — awareness is the goal.

10
Medium Risk

No Due Diligence on AI Vendors and Tools

New AI tools appear daily. Employees sign up without any security evaluation. Even sanctioned AI tools may not have been properly vetted for data handling, security, and compliance.

The AI market is exploding with new tools, each promising productivity gains. But behind the slick interfaces are questions about data handling, security practices, and long-term viability that most organisations aren't asking.

Why This Matters

Unvetted AI tools introduce unknown risks:

  • Unknown data handling practices — Where does your data go? How long is it kept?
  • Inadequate security controls at vendor — Are they even secure?
  • Compliance violations — Data residency, GDPR, sector-specific regulations
  • Supply chain vulnerabilities — AI tools built on other AI tools with unknown provenance
  • No contractual protections — Consumer terms of service don't protect your business
  • Vendor stability risks — Many AI startups may not exist in 12 months

Real-World Impact

A consultancy adopted an AI-powered project management tool that promised to "learn from your documents." Only after six months of use did anyone check the privacy policy — which stated all uploaded documents became the vendor's property for training purposes.

Years of client proposals, strategies, and confidential project materials were now owned by an AI startup. The consultancy had to disclose to affected clients and remove the tool, but the data was already gone.

How to Fix It

  1. Establish an AI tool vetting process covering security, privacy, and compliance requirements
  2. Create a checklist: data handling, retention, training use, residency, security certifications
  3. Require contractual terms that protect your data (not just consumer ToS)
  4. Verify security certifications (SOC 2, ISO 27001) and audit reports
  5. Assess AI tool supply chain — what other services does it rely on?
  6. Include AI tools in your vendor risk management programme
  7. Re-evaluate AI vendors periodically as their practices may change
  8. Maintain an approved AI tool list with supporting documentation
Quick Win

Before approving any new AI tool, ask three questions: Where does our data go? Is it used for training? Can we delete it? If they can't answer clearly, don't approve it.

AI Security Checklist

Use this checklist to assess your current AI security posture. Each item maps to the mistakes covered in this guide.

0 of 32 items checked (0%)

AI Tools & Access

Policies & Governance

Data Protection

Training & Awareness

Verification & Quality

Incident Response

Vendor Management

Technical Controls

Based on Leading AI Security Frameworks

Every mistake in this guide is mapped to official frameworks from cybersecurity and AI authorities. This isn't vendor marketing — it's actionable guidance based on recognised standards.

OWASP LLM Top 10

The definitive list of LLM security vulnerabilities

Visit owasp.org

NCSC ML Guidance

UK government AI/ML security principles

Visit ncsc.gov.uk

NIST AI RMF

Comprehensive AI risk management framework

Visit nist.gov

MITRE ATLAS

Adversarial threat landscape for AI systems

Visit atlas.mitre.org

Taking Action

You've read the guide. You've identified gaps. Now what?

AI security improvement is about consistent progress on the things that matter most. Here's how to prioritise your AI security efforts:

Do This Week

Critical Quick Wins

  • Communicate which AI tools are approved/prohibited
  • Create a one-page safe prompting guide
  • Check if you have access to enterprise AI tools
  • Survey staff on AI tool usage
Do This Month

High-Impact Improvements

  • Draft and publish an AI Acceptable Use Policy
  • Deliver AI security awareness training
  • Implement AI output verification processes
  • Update incident response plan for AI scenarios
Do This Quarter

Strategic Initiatives

  • Deploy enterprise AI tools with data protection
  • Implement DLP for AI applications
  • Establish AI tool vetting process
  • Conduct Shadow AI discovery exercise
  • Build AI governance into board reporting
Remember: AI isn't going away. The businesses that thrive will be those that embrace it securely — not those that ignore the risks until it's too late.

Don't Let AI Become Your Biggest Vulnerability

We've helped UK businesses implement secure AI adoption strategies — enabling productivity gains while protecting sensitive data. If you'd like expert guidance on your AI security journey, we're here to help.

BI

Written by AI Security Experts

This guide was created by our CISSP-certified security professionals who have helped London businesses implement secure AI adoption strategies. We've seen firsthand how quickly AI tools can create data exposure — and how straightforward the fixes can be when you know what to look for.

CISSP Certified AI Security Since 2023 15+ Years Experience

Frequently Asked Questions

We've told staff not to use AI. Isn't that enough?

Unfortunately, no. Studies show employees use AI tools regardless of bans — they're too useful to ignore. A prohibition without enforcement just pushes usage underground, creating Shadow AI with zero visibility. Better to enable approved tools with proper controls.

Are enterprise AI tools actually more secure?

Yes, significantly. Enterprise versions (Microsoft Copilot, ChatGPT Enterprise, Gemini for Workspace) offer: data not used for training, enterprise-grade security controls, audit logging, data residency options, and admin management. Free tools offer none of these protections.

We're a small business. Are we really at risk?

Small businesses often have fewer controls, making them easier targets. They also frequently handle sensitive client data that requires protection. AI security isn't just for enterprises — the risks apply regardless of size.

How quickly can we implement these fixes?

Some fixes (like creating an AI policy) can be done in days. Others (like deploying enterprise AI with full DLP) may take weeks. The guide prioritises quick wins so you can reduce risk immediately while planning larger initiatives.

What if we're already using Copilot or enterprise AI?

Great start! But deployment alone isn't enough. This guide covers the configuration, policies, monitoring, and training needed to use enterprise AI tools securely. Many organisations have the tools but not the governance.

Is this guide technical?

It's written for business decision-makers and IT leaders, not AI researchers. We explain concepts in plain English with actionable steps. Framework references are included for those who want to go deeper.

Official Resources

All recommendations in this guide are based on official guidance from cybersecurity and AI authorities:

OWASP

LLM Top 10 Security Vulnerabilities

Visit owasp.org

NCSC

UK National Cyber Security Centre

Visit ncsc.gov.uk

NIST

AI Risk Management Framework

Visit nist.gov

MITRE ATLAS

Adversarial Threat Landscape for AI

Visit atlas.mitre.org