Blog

LM Report – Burp AI features
This video explains the new LM Report AI Feature https://youtu.be/8dJLTqAZGNY

AI HTTP Analyzer – Burp AI features
This video explains the new AI HTTP Analyzer AI Feature https://youtu.be/76pDTPCMj6s

Shadow Repeater – Burp AI features
This video explains the new Shadow Repeater AI Feature https://youtu.be/5np5U3Ta8h8

Finding broken access control with AI – Burp AI features
This video explains the new Finding broken access control with AI Feature https://youtu.be/vDII1ak5zro

Record login sequence with AI – Burp AI features
This video explains the new Record login sequence with AI Feature https://youtu.be/4JQOLUGnSWE

Explainer Feature – Burp AI features
This video explains the new Burp AI Explainer Feature https://youtu.be/6EH5sCwwXi4

Explore Feature – Burp AI features
This video explains the new Burp AI Explore Feature https://youtu.be/6_ihpVajwiM

Using LLM models to jailbreak LLM models (Jailbreak to Jailbreak)
The J2 Playground by Scale AI is an interactive platform designed to test the resilience...

Tensortrust AI – Prompt Injection and Prompt Hardening Game
Tensor Trust is an online game developed by researchers at UC Berkeley to study prompt...

Prompt Map – free tool to test for Prompt Leakage, AI Security Expert
PromptMap is a specialized LLM security scanner designed to detect and analyze prompt leaks—instances where a...

Quick overview of Garak – a free LLM vulnerability scanner
The Garak LLM vulnerability scanner is an open-source tool developed by NVIDIA to assess security...

Prompt Injection into terminals / IDEs via ANSI escape code characters
Prompt injection threats in terminals and IDEs via ANSI escape characters exploit the ability of...

AI Agent Denial of Service (DoS), Rabbit R1, AI Security Expert
When AI agents autonomously browse websites and encounter tasks that are intentionally unsolvable or computationally...

AI Agent Data Exfiltration, Rabbit R1, AI Security Expert
AI agents that autonomously browse the web introduce significant security risks, particularly related to data...

OWASP Top 10 LLM10:2025 Unbounded Consumption
Unbounded Consumption refers to scenarios where Large Language Models (LLMs) are subjected to excessive and...

OWASP Top 10 LLM09:2025 Misinformation
Misinformation refers to the generation of false or misleading information by Large Language Models (LLMs),...

OWASP Top 10 LLM08:2025 Vector and Embedding Weaknesses
Vector and Embedding Weaknesses refers to security vulnerabilities in Large Language Models (LLMs) that arise...

OWASP Top 10 LLM07:2025 System Prompt Leakage
System Prompt Leakage refers to the risk that system prompts—internal instructions guiding the behavior of...

OWASP Top 10 LLM06:2025 Excessive Agency
Excessive Agency refers to the vulnerability arising when Large Language Models (LLMs) are granted more...

OWASP Top 10 LLM05:2025 Improper Output Handling
Improper Output Handling refers to the inadequate validation and sanitization of outputs generated by Large...

OWASP Top 10 LLM04:2025 Data and Model Poisoning
Data and Model Poisoning refers to the deliberate manipulation of an LLM's training data or...

OWASP Top 10 LLM03:2025 Supply Chain
Supply Chain refers to vulnerabilities in the development and deployment processes of Large Language Models...

OWASP Top 10 LLM02:2025 Sensitive Information Disclosure
Sensitive Information Disclosure refers to the unintended exposure of confidential data—such as personal identifiable information...

OWASP Top 10 LLM01:2025 Prompt Injection
Prompt Injection refers to a security vulnerability where adversarial inputs manipulate large language models (LLMs)...

Prompt injection via audio or video file
Audio and video prompt injection risks involve malicious manipulation of inputs to deceive AI systems...

LLM image misclassification and the consequences
Misclassifying images in multimodal AI systems can lead to unintended or even harmful actions, especially...

LLMs reading CAPTCHAs – threat to agent systems?
LLMs with multimodal capabilities can be leveraged to read and solve CAPTCHAs in agentic setups,...

Indirect conditional prompt injection via documents
Conditional indirect prompt injection is an advanced attack where hidden instructions in external content—such as...

Indirect Prompt Injection with documents
Indirect prompt injection with documents is an attack technique where adversarial instructions are embedded within...

LLM01: Visual Prompt Injection | Image based prompt injection
Multi-modal prompt injection with images is a sophisticated attack that exploits the integration of visual...

LLM01: Indirect Prompt Injection | Exfiltration to attacker
Data exfiltration from a large language model (LLM) can be performed using markdown formatting and...

Prompt Airlines – AI Security Challenge – Flag 5
In this video we take a look at solving the promptairlines.com challenge (Flag 5) https://youtu.be/MPUwxjWGBQE

Prompt Airlines – AI Security Challenge – Flag 4
In this video we take a look at solving the promptairlines.com challenge (Flag 4) https://youtu.be/jDlTlWLdmaw

Prompt Airlines – AI Security Challenge – Flag 3
In this video we take a look at solving the promptairlines.com challenge (Flag 3) https://youtu.be/qrNFMPwJ9FQ

Prompt Airlines – AI Security Challenge – Flag 1 and 2
In this video we take a look at solving the promptairlines.com challenge (Flag 1 and...

Prompt leakage and indirect prompt injections in Grok X AI
In this video we will take a look at various prompt injection issues in Grok...

myllmbank.com Walkthrough Flag 3
In this video we will take a look at flag 3 of myllmbank.com https://youtu.be/dqXDV-mW0aA

myllmbank.com Walkthrough Flag 2
In this video we will take a look at flag 2 of myllmbank.com https://youtu.be/_PIoWmPlxYQ

myllmbank.com Walkthrough Flag 1
In this video we will take a look at flag 1 of myllmbank.com https://youtu.be/WWaRZR3dv4U

SecOps Group AI/ML Pentester Mock Exam 2
This is a walkthrough of the SecOps Group AI/ML Pentester Mock Exam 2 https://youtu.be/zX2GtI4Fj_Y

SecOps Group AI/ML Pentester Mock Exam 1
This is a walkthrough of SecOps Group AI/ML Pentester Mock Exam 1 https://youtu.be/UoSAjlUUiPs

CSRF potential in LLMs
Cross-Site Request Forgery (CSRF) via prompt injection through a GET request is a potential attack...

Prompt Injection via clipboard
Prompt injection via clipboard copy/paste is a security concern where malicious text, copied into a...

Hero AI Bot
This project is a proof of concept for a Hackbot, an AI-driven system that autonomously...

KONTRA OWASP LLM Top 10 Playground
ONTRA offers an interactive training module titled "OWASP Top 10 for Large Language Model (LLM)...

Pokebot Health Agent to practice prompt injection
A simple Health Agent to practice prompt injection https://youtu.be/dLS5a_fWBjw

Certified AI/ML Penetration Tester
The Certified AI/ML Pentester (C-AI/MLPen) is an intermediate-level certification offered by The SecOps Group, designed...

Image Prompt injection and double instructions
Prompt injection via images involves embedding hidden or overt textual commands within visual elements to...

OpenAI Playground
The OpenAI Playground is an interactive web-based platform that allows users to experiment with OpenAI's...

Prompt injection and exfiltration in Chats apps
Data exfiltration in messaging apps through unfurling exploits the feature where apps automatically generate previews...