Blog

OWASP Agentic AI Vulnerabilities – Quick Overview
This video goes through the basic vulnerabilities specific to Agentic AI systems. https://youtu.be/mgbIzg-wFA4

Burp Suite & AI: The Future of Vulnerability Analysis, Bounty Prompt Burp Extension
Burp Suite and AI are powerful tools on their own, but together they become a...

Multi-vector attack against an MCP server – Demo
This challenge demonstrates a sophisticated multi-vector attack against an MCP server. It requires chaining multiple...

Malicious Code Execution in an MCP server – Demo
This challenge demonstrates a malicious code execution vulnerability in an MCP server. The MCP server...

Token Theft vulnerability in an MCP server – Demo
This challenge demonstrates a token theft vulnerability in an MCP server. The MCP server stores...

Excessive permission scope in an MCP server – Demo
This challenge demonstrates the dangers of excessive permission scope in an MCP server. The MCP...

Agentic AI Guardrails Playground (Invariant Labs)
Invariant Explorer, accessible at explorer.invariantlabs.ai, is an open-source observability tool designed to help developers visualize,...

Claude executing script via MCP server leading to exfiltration of bash shell (RCE – Remote Code Execution)
Claude executing a script via the MCP (Model Context Protocol) server demonstrates a critical Remote...

MCP Tool poisoning demo. Are you sure your MCP servers are not malicious?
Model Context Protocol poisoning is an emerging AI attack vector where adversaries manipulate the structured...

Promptfoo a very powerful and free LLM security scanner
Promptfoo is an open-source platform designed to help developers test, evaluate, and secure large language...

Claude Desktop with Desktop Commander MCP to control your machine via AI
Claude Desktop, when integrated with Desktop Commander MCP, enables seamless AI-driven control of your local...

Scan your MCP servers for vulnerabilities specific to agentic AI
The mcp-scan project by Invariant Labs is a security auditing tool designed to analyze Model...

Image prompt injection to invoke MCP tools
Visual prompt injection targeting the Model Context Protocol (MCP) is particularly dangerous because it allows...

Indirect Prompt Injection into coding assistants like GitHub Copilot or Cursor
Indirect prompt injection via instruction files in tools like GitHub Copilot or Cursor occurs when...

Agentic Radar – free agentic code scanning
Agentic Radar is a security scanner designed to analyze and assess agentic systems, providing developers,...

Burp MCP Server with Claude Desktop – Revolution in App Penetration Testing
The MCP Server is a Burp Suite extension that enables integration with AI clients via...

LM Report – Burp AI features
This video explains the new LM Report AI Feature https://youtu.be/8dJLTqAZGNY

AI HTTP Analyzer – Burp AI features
This video explains the new AI HTTP Analyzer AI Feature https://youtu.be/76pDTPCMj6s

Shadow Repeater – Burp AI features
This video explains the new Shadow Repeater AI Feature https://youtu.be/5np5U3Ta8h8

Finding broken access control with AI – Burp AI features
This video explains the new Finding broken access control with AI Feature https://youtu.be/vDII1ak5zro

Record login sequence with AI – Burp AI features
This video explains the new Record login sequence with AI Feature https://youtu.be/4JQOLUGnSWE

Explainer Feature – Burp AI features
This video explains the new Burp AI Explainer Feature https://youtu.be/6EH5sCwwXi4

Explore Feature – Burp AI features
This video explains the new Burp AI Explore Feature https://youtu.be/6_ihpVajwiM

Using LLM models to jailbreak LLM models (Jailbreak to Jailbreak)
The J2 Playground by Scale AI is an interactive platform designed to test the resilience...

Tensortrust AI – Prompt Injection and Prompt Hardening Game
Tensor Trust is an online game developed by researchers at UC Berkeley to study prompt...

Prompt Map – free tool to test for Prompt Leakage, AI Security Expert
PromptMap is a specialized LLM security scanner designed to detect and analyze prompt leaks—instances where a...

Quick overview of Garak – a free LLM vulnerability scanner
The Garak LLM vulnerability scanner is an open-source tool developed by NVIDIA to assess security...

Prompt Injection into terminals / IDEs via ANSI escape code characters
Prompt injection threats in terminals and IDEs via ANSI escape characters exploit the ability of...

AI Agent Denial of Service (DoS), Rabbit R1, AI Security Expert
When AI agents autonomously browse websites and encounter tasks that are intentionally unsolvable or computationally...

AI Agent Data Exfiltration, Rabbit R1, AI Security Expert
AI agents that autonomously browse the web introduce significant security risks, particularly related to data...

OWASP Top 10 LLM10:2025 Unbounded Consumption
Unbounded Consumption refers to scenarios where Large Language Models (LLMs) are subjected to excessive and...

OWASP Top 10 LLM09:2025 Misinformation
Misinformation refers to the generation of false or misleading information by Large Language Models (LLMs),...

OWASP Top 10 LLM08:2025 Vector and Embedding Weaknesses
Vector and Embedding Weaknesses refers to security vulnerabilities in Large Language Models (LLMs) that arise...

OWASP Top 10 LLM07:2025 System Prompt Leakage
System Prompt Leakage refers to the risk that system prompts—internal instructions guiding the behavior of...

OWASP Top 10 LLM06:2025 Excessive Agency
Excessive Agency refers to the vulnerability arising when Large Language Models (LLMs) are granted more...

OWASP Top 10 LLM05:2025 Improper Output Handling
Improper Output Handling refers to the inadequate validation and sanitization of outputs generated by Large...

OWASP Top 10 LLM04:2025 Data and Model Poisoning
Data and Model Poisoning refers to the deliberate manipulation of an LLM's training data or...

OWASP Top 10 LLM03:2025 Supply Chain
Supply Chain refers to vulnerabilities in the development and deployment processes of Large Language Models...

OWASP Top 10 LLM02:2025 Sensitive Information Disclosure
Sensitive Information Disclosure refers to the unintended exposure of confidential data—such as personal identifiable information...

OWASP Top 10 LLM01:2025 Prompt Injection
Prompt Injection refers to a security vulnerability where adversarial inputs manipulate large language models (LLMs)...

Prompt injection via audio or video file
Audio and video prompt injection risks involve malicious manipulation of inputs to deceive AI systems...

LLM image misclassification and the consequences
Misclassifying images in multimodal AI systems can lead to unintended or even harmful actions, especially...

LLMs reading CAPTCHAs – threat to agent systems?
LLMs with multimodal capabilities can be leveraged to read and solve CAPTCHAs in agentic setups,...

Indirect conditional prompt injection via documents
Conditional indirect prompt injection is an advanced attack where hidden instructions in external content—such as...

Indirect Prompt Injection with documents
Indirect prompt injection with documents is an attack technique where adversarial instructions are embedded within...

LLM01: Visual Prompt Injection | Image based prompt injection
Multi-modal prompt injection with images is a sophisticated attack that exploits the integration of visual...

LLM01: Indirect Prompt Injection | Exfiltration to attacker
Data exfiltration from a large language model (LLM) can be performed using markdown formatting and...

Prompt Airlines – AI Security Challenge – Flag 5
In this video we take a look at solving the promptairlines.com challenge (Flag 5) https://youtu.be/MPUwxjWGBQE

Prompt Airlines – AI Security Challenge – Flag 4
In this video we take a look at solving the promptairlines.com challenge (Flag 4) https://youtu.be/jDlTlWLdmaw

Prompt Airlines – AI Security Challenge – Flag 3
In this video we take a look at solving the promptairlines.com challenge (Flag 3) https://youtu.be/qrNFMPwJ9FQ

Prompt Airlines – AI Security Challenge – Flag 1 and 2
In this video we take a look at solving the promptairlines.com challenge (Flag 1 and...

Prompt leakage and indirect prompt injections in Grok X AI
In this video we will take a look at various prompt injection issues in Grok...

myllmbank.com Walkthrough Flag 3
In this video we will take a look at flag 3 of myllmbank.com https://youtu.be/dqXDV-mW0aA

myllmbank.com Walkthrough Flag 2
In this video we will take a look at flag 2 of myllmbank.com https://youtu.be/_PIoWmPlxYQ

myllmbank.com Walkthrough Flag 1
In this video we will take a look at flag 1 of myllmbank.com https://youtu.be/WWaRZR3dv4U

SecOps Group AI/ML Pentester Mock Exam 2
This is a walkthrough of the SecOps Group AI/ML Pentester Mock Exam 2 https://youtu.be/zX2GtI4Fj_Y

SecOps Group AI/ML Pentester Mock Exam 1
This is a walkthrough of SecOps Group AI/ML Pentester Mock Exam 1 https://youtu.be/UoSAjlUUiPs

CSRF potential in LLMs
Cross-Site Request Forgery (CSRF) via prompt injection through a GET request is a potential attack...

Prompt Injection via clipboard
Prompt injection via clipboard copy/paste is a security concern where malicious text, copied into a...

Hero AI Bot
This project is a proof of concept for a Hackbot, an AI-driven system that autonomously...

KONTRA OWASP LLM Top 10 Playground
ONTRA offers an interactive training module titled "OWASP Top 10 for Large Language Model (LLM)...

Pokebot Health Agent to practice prompt injection
A simple Health Agent to practice prompt injection https://youtu.be/dLS5a_fWBjw

Certified AI/ML Penetration Tester
The Certified AI/ML Pentester (C-AI/MLPen) is an intermediate-level certification offered by The SecOps Group, designed...

Image Prompt injection and double instructions
Prompt injection via images involves embedding hidden or overt textual commands within visual elements to...

OpenAI Playground
The OpenAI Playground is an interactive web-based platform that allows users to experiment with OpenAI's...

Prompt injection and exfiltration in Chats apps
Data exfiltration in messaging apps through unfurling exploits the feature where apps automatically generate previews...

Gandalf – AI bot to practice prompt injections
Gandalf AI, developed by Lakera, is an interactive online game designed to educate users about...

Google Colab Playground for LLMs
Google Colaboratory, commonly known as Google Colab, is a cloud-based Jupyter notebook environment that facilitates...

STRIDE GPT – Threat Modeling with LLMs
STRIDE GPT is an AI-powered threat modeling tool that leverages Large Language Models (LLMs) to...

OS Command Injection in LLMs
OS command injection in Large Language Models (LLMs) involves exploiting the model's ability to generate...

Hallucinations in LLMs
Hallucination in AI refers to the phenomenon where a model generates information that appears plausible...

Prompt Injection – Prompt Leakage
Prompt leakage refers to the unintended exposure of sensitive or proprietary prompts used to guide...

HTML Injection in LLMs
HTML injection in Large Language Models (LLMs) involves embedding malicious HTML code within prompts or...

RAG data poisoning via documents in ChatGPT
RAG (Retrieval-Augmented Generation) poisoning occurs when a malicious or manipulated document is uploaded to influence...

RAG data poisoning in ChatGPT
RAG (Retrieval-Augmented Generation) poisoning from a document uploaded involves embedding malicious or misleading data into...

Deleting ChatGPT memories via prompt injection
Deleting memories in AI refers to the deliberate removal of stored information or context from...

Updating ChatGPT memories via prompt injection
Injecting memories into AI involves deliberately embedding specific information or narratives into the system's retained...

Putting ChatGPT into maintenance mode
Prompt injection to manipulate memories involves crafting input that exploits the memory or context retention...

Voice prompting in ChatGPT
Voice prompt injection is a method of exploiting vulnerabilities in voice-activated AI systems by embedding...

Use AI to extract code from images
Using AI to extract code from images involves leveraging Optical Character Recognition (OCR) technology and...

Generating images with embedded prompts
Prompt injection via images is a sophisticated technique where malicious or unintended commands are embedded...

Access LLMs from the Linux CLI
The llm project by Simon Willison, available on GitHub, is a command-line tool designed to interact with...

AI/LLM automated Penetration Testing Bots
Autonomous AI/LLM Penetration Testing bots are a cutting-edge development in cybersecurity, designed to automate the...

Prompt injection to generate content which is normally censored
Prompt injection is a technique used to manipulate AI language models by inserting malicious or...

Creating hidden prompts
Hidden or transparent prompt injection is a subtle yet potent form of prompt injection that...

Data Exfiltration with markdown in LLMs
Data exfiltration through markdown in LLM chatbots is a subtle but dangerous attack vector. When...

Prompt Injection with ASCII to Unicode Tags
ASCII to Unicode tag conversion is a technique that can be leveraged to bypass input...

LLM Expert Prompting Framework – Fabric
Fabric is an open-source framework for augmenting humans using AI. It provides a modular framework...

LLMs, datasets and playgrounds (Huggingface)
Hugging Face is a prominent company in the field of artificial intelligence and natural language...

Free LLMs on replicate.com
Replicate.com is a platform designed to simplify the deployment and use of machine learning models....

GitHub repos with prompt injection samples
This video is a walkthrough some of the GitHub repos which have prompt injection samples....

Prompt Injection with encoded prompts
Prompt injection with encoded prompts involves using various encoding methods (such as Base64, hexadecimal, or...

Voice Audio Prompt Injection
Prompt injection via voice and audio is a form of attack that targets AI systems...

Prompt injection to generate any image
Prompt injection in image generation refers to the manipulation of input text prompts to produce...

LLM system prompt leakage
Large Language Model (LLM) prompt leakage poses a significant security risk as it can expose...

ChatGPT assumptions made
ChatGPT, like many AI models, operates based on patterns it has learned from a vast...

Jailbreaking to generate undesired images
Direct prompt injection and jailbreaking are two techniques often employed to manipulate large language models...

Indirect Prompt Injection with Data Exfiltration
Indirect prompt injection with data exfiltration via markdown image rendering is a sophisticated attack method...

Direct Prompt Injection / Information Disclosure
Direct Prompt Injection is a technique where a user inputs specific instructions or queries directly into...

LLM Prompting with emojis
Prompting via emojis is a communication technique that uses emojis to convey ideas, instructions, or...

Prompt Injection via image
In this video I will explain prompt injection via an image. The LLM is asked...

AI Security Expert Blog
Welcome. In this blog we will regularly publish blog articles around Penetration Testing and Ethical...