Blog

OWASP Agentic AI Vulnerabilities – Quick Overview

OWASP Agentic AI Vulnerabilities – Quick Overview

This video goes through the basic vulnerabilities specific to Agentic AI systems. https://youtu.be/mgbIzg-wFA4

April 23, 2025
Read More
Burp Suite & AI: The Future of Vulnerability Analysis, Bounty Prompt Burp Extension

Burp Suite & AI: The Future of Vulnerability Analysis, Bounty Prompt Burp Extension

Burp Suite and AI are powerful tools on their own, but together they become a...

April 23, 2025
Read More
Multi-vector attack against an MCP server – Demo

Multi-vector attack against an MCP server – Demo

This challenge demonstrates a sophisticated multi-vector attack against an MCP server. It requires chaining multiple...

April 22, 2025
Read More
Malicious Code Execution in an MCP server – Demo

Malicious Code Execution in an MCP server – Demo

This challenge demonstrates a malicious code execution vulnerability in an MCP server. The MCP server...

April 22, 2025
Read More
Token Theft vulnerability in an MCP server – Demo

Token Theft vulnerability in an MCP server – Demo

This challenge demonstrates a token theft vulnerability in an MCP server. The MCP server stores...

April 22, 2025
Read More
Excessive permission scope in an MCP server – Demo

Excessive permission scope in an MCP server – Demo

This challenge demonstrates the dangers of excessive permission scope in an MCP server. The MCP...

April 22, 2025
Read More
Agentic AI Guardrails Playground (Invariant Labs)

Agentic AI Guardrails Playground (Invariant Labs)

Invariant Explorer, accessible at explorer.invariantlabs.ai, is an open-source observability tool designed to help developers visualize,...

April 22, 2025
Read More
Claude executing script via MCP server leading to exfiltration of bash shell (RCE – Remote Code Execution)

Claude executing script via MCP server leading to exfiltration of bash shell (RCE – Remote Code Execution)

Claude executing a script via the MCP (Model Context Protocol) server demonstrates a critical Remote...

April 16, 2025
Read More
MCP Tool poisoning demo. Are you sure your MCP servers are not malicious?

MCP Tool poisoning demo. Are you sure your MCP servers are not malicious?

Model Context Protocol poisoning is an emerging AI attack vector where adversaries manipulate the structured...

April 15, 2025
Read More
Promptfoo a very powerful and free LLM security scanner

Promptfoo a very powerful and free LLM security scanner

Promptfoo is an open-source platform designed to help developers test, evaluate, and secure large language...

April 14, 2025
Read More
Claude Desktop with Desktop Commander MCP to control your machine via AI

Claude Desktop with Desktop Commander MCP to control your machine via AI

Claude Desktop, when integrated with Desktop Commander MCP, enables seamless AI-driven control of your local...

April 14, 2025
Read More
Scan your MCP servers for vulnerabilities specific to agentic AI

Scan your MCP servers for vulnerabilities specific to agentic AI

The mcp-scan project by Invariant Labs is a security auditing tool designed to analyze Model...

April 11, 2025
Read More
Image prompt injection to invoke MCP tools

Image prompt injection to invoke MCP tools

Visual prompt injection targeting the Model Context Protocol (MCP) is particularly dangerous because it allows...

April 11, 2025
Read More
Indirect Prompt Injection into coding assistants like GitHub Copilot or Cursor

Indirect Prompt Injection into coding assistants like GitHub Copilot or Cursor

Indirect prompt injection via instruction files in tools like GitHub Copilot or Cursor occurs when...

April 9, 2025
Read More
Agentic Radar – free agentic code scanning

Agentic Radar – free agentic code scanning

Agentic Radar is a security scanner designed to analyze and assess agentic systems, providing developers,...

April 4, 2025
Read More
Burp MCP Server with Claude Desktop – Revolution in App Penetration Testing

Burp MCP Server with Claude Desktop – Revolution in App Penetration Testing

The MCP Server is a Burp Suite extension that enables integration with AI clients via...

April 4, 2025
Read More
LM Report – Burp AI features

LM Report – Burp AI features

This video explains the new LM Report AI Feature https://youtu.be/8dJLTqAZGNY

April 1, 2025
Read More
AI HTTP Analyzer – Burp AI features

AI HTTP Analyzer – Burp AI features

This video explains the new AI HTTP Analyzer AI Feature https://youtu.be/76pDTPCMj6s

April 1, 2025
Read More
Shadow Repeater – Burp AI features

Shadow Repeater – Burp AI features

This video explains the new Shadow Repeater AI Feature https://youtu.be/5np5U3Ta8h8

April 1, 2025
Read More
Finding broken access control with AI – Burp AI features

Finding broken access control with AI – Burp AI features

This video explains the new Finding broken access control with AI Feature https://youtu.be/vDII1ak5zro

April 1, 2025
Read More
Record login sequence with AI – Burp AI features

Record login sequence with AI – Burp AI features

This video explains the new Record login sequence with AI Feature https://youtu.be/4JQOLUGnSWE

April 1, 2025
Read More
Explainer Feature – Burp AI features

Explainer Feature – Burp AI features

This video explains the new Burp AI Explainer Feature https://youtu.be/6EH5sCwwXi4

March 31, 2025
Read More
Explore Feature – Burp AI features

Explore Feature – Burp AI features

This video explains the new Burp AI Explore Feature https://youtu.be/6_ihpVajwiM

March 31, 2025
Read More
Using LLM models to jailbreak LLM models (Jailbreak to Jailbreak)

Using LLM models to jailbreak LLM models (Jailbreak to Jailbreak)

The J2 Playground by Scale AI is an interactive platform designed to test the resilience...

March 30, 2025
Read More
Tensortrust AI – Prompt Injection and Prompt Hardening Game

Tensortrust AI – Prompt Injection and Prompt Hardening Game

Tensor Trust is an online game developed by researchers at UC Berkeley to study prompt...

March 30, 2025
Read More
Prompt Map – free tool to test for Prompt Leakage, AI Security Expert

Prompt Map – free tool to test for Prompt Leakage, AI Security Expert

PromptMap is a specialized LLM security scanner designed to detect and analyze prompt leaks—instances where a...

March 30, 2025
Read More
Quick overview of Garak – a free LLM vulnerability scanner

Quick overview of Garak – a free LLM vulnerability scanner

The Garak LLM vulnerability scanner is an open-source tool developed by NVIDIA to assess security...

March 28, 2025
Read More
Prompt Injection into terminals / IDEs via ANSI escape code characters

Prompt Injection into terminals / IDEs via ANSI escape code characters

Prompt injection threats in terminals and IDEs via ANSI escape characters exploit the ability of...

March 26, 2025
Read More
AI Agent Denial of Service (DoS), Rabbit R1, AI Security Expert

AI Agent Denial of Service (DoS), Rabbit R1, AI Security Expert

When AI agents autonomously browse websites and encounter tasks that are intentionally unsolvable or computationally...

March 25, 2025
Read More
AI Agent Data Exfiltration, Rabbit R1, AI Security Expert

AI Agent Data Exfiltration, Rabbit R1, AI Security Expert

AI agents that autonomously browse the web introduce significant security risks, particularly related to data...

March 25, 2025
Read More
OWASP Top 10 LLM10:2025 Unbounded Consumption

OWASP Top 10 LLM10:2025 Unbounded Consumption

Unbounded Consumption refers to scenarios where Large Language Models (LLMs) are subjected to excessive and...

March 22, 2025
Read More
OWASP Top 10 LLM09:2025 Misinformation

OWASP Top 10 LLM09:2025 Misinformation

Misinformation refers to the generation of false or misleading information by Large Language Models (LLMs),...

March 22, 2025
Read More
OWASP Top 10 LLM08:2025 Vector and Embedding Weaknesses

OWASP Top 10 LLM08:2025 Vector and Embedding Weaknesses

Vector and Embedding Weaknesses refers to security vulnerabilities in Large Language Models (LLMs) that arise...

March 22, 2025
Read More
OWASP Top 10 LLM07:2025 System Prompt Leakage

OWASP Top 10 LLM07:2025 System Prompt Leakage

System Prompt Leakage refers to the risk that system prompts—internal instructions guiding the behavior of...

March 22, 2025
Read More
OWASP Top 10 LLM06:2025 Excessive Agency

OWASP Top 10 LLM06:2025 Excessive Agency

Excessive Agency refers to the vulnerability arising when Large Language Models (LLMs) are granted more...

March 22, 2025
Read More
OWASP Top 10 LLM05:2025 Improper Output Handling

OWASP Top 10 LLM05:2025 Improper Output Handling

Improper Output Handling refers to the inadequate validation and sanitization of outputs generated by Large...

March 22, 2025
Read More
OWASP Top 10 LLM04:2025 Data and Model Poisoning

OWASP Top 10 LLM04:2025 Data and Model Poisoning

Data and Model Poisoning refers to the deliberate manipulation of an LLM's training data or...

March 22, 2025
Read More
OWASP Top 10 LLM03:2025 Supply Chain

OWASP Top 10 LLM03:2025 Supply Chain

Supply Chain refers to vulnerabilities in the development and deployment processes of Large Language Models...

March 22, 2025
Read More
OWASP Top 10 LLM02:2025 Sensitive Information Disclosure

OWASP Top 10 LLM02:2025 Sensitive Information Disclosure

Sensitive Information Disclosure refers to the unintended exposure of confidential data—such as personal identifiable information...

March 22, 2025
Read More
OWASP Top 10 LLM01:2025 Prompt Injection

OWASP Top 10 LLM01:2025 Prompt Injection

Prompt Injection refers to a security vulnerability where adversarial inputs manipulate large language models (LLMs)...

March 22, 2025
Read More
Prompt injection via audio or video file

Prompt injection via audio or video file

Audio and video prompt injection risks involve malicious manipulation of inputs to deceive AI systems...

March 19, 2025
Read More
LLM image misclassification and the consequences

LLM image misclassification and the consequences

Misclassifying images in multimodal AI systems can lead to unintended or even harmful actions, especially...

March 19, 2025
Read More
LLMs reading CAPTCHAs – threat to agent systems?

LLMs reading CAPTCHAs – threat to agent systems?

LLMs with multimodal capabilities can be leveraged to read and solve CAPTCHAs in agentic setups,...

March 13, 2025
Read More
Indirect conditional prompt injection via documents

Indirect conditional prompt injection via documents

Conditional indirect prompt injection is an advanced attack where hidden instructions in external content—such as...

March 13, 2025
Read More
Indirect Prompt Injection with documents

Indirect Prompt Injection with documents

Indirect prompt injection with documents is an attack technique where adversarial instructions are embedded within...

March 13, 2025
Read More
LLM01: Visual Prompt Injection | Image based prompt injection

LLM01: Visual Prompt Injection | Image based prompt injection

Multi-modal prompt injection with images is a sophisticated attack that exploits the integration of visual...

March 13, 2025
Read More
LLM01: Indirect Prompt Injection | Exfiltration to attacker

LLM01: Indirect Prompt Injection | Exfiltration to attacker

Data exfiltration from a large language model (LLM) can be performed using markdown formatting and...

March 13, 2025
Read More
Prompt Airlines – AI Security Challenge – Flag 5

Prompt Airlines – AI Security Challenge – Flag 5

In this video we take a look at solving the promptairlines.com challenge (Flag 5) https://youtu.be/MPUwxjWGBQE

December 19, 2024
Read More
Prompt Airlines – AI Security Challenge – Flag 4

Prompt Airlines – AI Security Challenge – Flag 4

In this video we take a look at solving the promptairlines.com challenge (Flag 4) https://youtu.be/jDlTlWLdmaw

December 19, 2024
Read More
Prompt Airlines – AI Security Challenge – Flag 3

Prompt Airlines – AI Security Challenge – Flag 3

In this video we take a look at solving the promptairlines.com challenge (Flag 3) https://youtu.be/qrNFMPwJ9FQ

December 19, 2024
Read More
Prompt Airlines – AI Security Challenge – Flag 1 and 2

Prompt Airlines – AI Security Challenge – Flag 1 and 2

In this video we take a look at solving the promptairlines.com challenge (Flag 1 and...

December 19, 2024
Read More
Prompt leakage and indirect prompt injections in Grok X AI

Prompt leakage and indirect prompt injections in Grok X AI

In this video we will take a look at various prompt injection issues in Grok...

December 19, 2024
Read More
myllmbank.com Walkthrough Flag 3

myllmbank.com Walkthrough Flag 3

In this video we will take a look at flag 3 of myllmbank.com https://youtu.be/dqXDV-mW0aA

December 17, 2024
Read More
myllmbank.com Walkthrough Flag 2

myllmbank.com Walkthrough Flag 2

In this video we will take a look at flag 2 of myllmbank.com https://youtu.be/_PIoWmPlxYQ

December 17, 2024
Read More
myllmbank.com Walkthrough Flag 1

myllmbank.com Walkthrough Flag 1

In this video we will take a look at flag 1 of myllmbank.com https://youtu.be/WWaRZR3dv4U

December 17, 2024
Read More
SecOps Group AI/ML Pentester Mock Exam 2

SecOps Group AI/ML Pentester Mock Exam 2

This is a walkthrough of the SecOps Group AI/ML Pentester Mock Exam 2 https://youtu.be/zX2GtI4Fj_Y

December 16, 2024
Read More
SecOps Group AI/ML Pentester Mock Exam 1

SecOps Group AI/ML Pentester Mock Exam 1

This is a walkthrough of SecOps Group AI/ML Pentester Mock Exam 1 https://youtu.be/UoSAjlUUiPs

December 16, 2024
Read More
CSRF potential in LLMs

CSRF potential in LLMs

Cross-Site Request Forgery (CSRF) via prompt injection through a GET request is a potential attack...

December 16, 2024
Read More
Prompt Injection via clipboard

Prompt Injection via clipboard

Prompt injection via clipboard copy/paste is a security concern where malicious text, copied into a...

December 16, 2024
Read More
Hero AI Bot

Hero AI Bot

This project is a proof of concept for a Hackbot, an AI-driven system that autonomously...

November 21, 2024
Read More
KONTRA OWASP LLM Top 10 Playground

KONTRA OWASP LLM Top 10 Playground

ONTRA offers an interactive training module titled "OWASP Top 10 for Large Language Model (LLM)...

November 21, 2024
Read More
Pokebot Health Agent to practice prompt injection

Pokebot Health Agent to practice prompt injection

A simple Health Agent to practice prompt injection https://youtu.be/dLS5a_fWBjw

November 21, 2024
Read More
Certified AI/ML Penetration Tester

Certified AI/ML Penetration Tester

The Certified AI/ML Pentester (C-AI/MLPen) is an intermediate-level certification offered by The SecOps Group, designed...

November 21, 2024
Read More
Image Prompt injection and double instructions

Image Prompt injection and double instructions

Prompt injection via images involves embedding hidden or overt textual commands within visual elements to...

November 21, 2024
Read More
OpenAI Playground

OpenAI Playground

The OpenAI Playground is an interactive web-based platform that allows users to experiment with OpenAI's...

November 21, 2024
Read More
Prompt injection and exfiltration in Chats apps

Prompt injection and exfiltration in Chats apps

Data exfiltration in messaging apps through unfurling exploits the feature where apps automatically generate previews...

November 21, 2024
Read More
Gandalf – AI bot to practice prompt injections

Gandalf – AI bot to practice prompt injections

Gandalf AI, developed by Lakera, is an interactive online game designed to educate users about...

November 21, 2024
Read More
Google Colab Playground for LLMs

Google Colab Playground for LLMs

Google Colaboratory, commonly known as Google Colab, is a cloud-based Jupyter notebook environment that facilitates...

November 21, 2024
Read More
STRIDE GPT – Threat Modeling with LLMs

STRIDE GPT – Threat Modeling with LLMs

STRIDE GPT is an AI-powered threat modeling tool that leverages Large Language Models (LLMs) to...

November 21, 2024
Read More
OS Command Injection in LLMs

OS Command Injection in LLMs

OS command injection in Large Language Models (LLMs) involves exploiting the model's ability to generate...

November 21, 2024
Read More
Hallucinations in LLMs

Hallucinations in LLMs

Hallucination in AI refers to the phenomenon where a model generates information that appears plausible...

November 21, 2024
Read More
Prompt Injection – Prompt Leakage

Prompt Injection – Prompt Leakage

Prompt leakage refers to the unintended exposure of sensitive or proprietary prompts used to guide...

November 20, 2024
Read More
HTML Injection in LLMs

HTML Injection in LLMs

HTML injection in Large Language Models (LLMs) involves embedding malicious HTML code within prompts or...

November 20, 2024
Read More
RAG data poisoning via documents in ChatGPT

RAG data poisoning via documents in ChatGPT

RAG (Retrieval-Augmented Generation) poisoning occurs when a malicious or manipulated document is uploaded to influence...

November 20, 2024
Read More
RAG data poisoning in ChatGPT

RAG data poisoning in ChatGPT

RAG (Retrieval-Augmented Generation) poisoning from a document uploaded involves embedding malicious or misleading data into...

November 20, 2024
Read More
Deleting ChatGPT memories via prompt injection

Deleting ChatGPT memories via prompt injection

Deleting memories in AI refers to the deliberate removal of stored information or context from...

November 20, 2024
Read More
Updating ChatGPT memories via prompt injection

Updating ChatGPT memories via prompt injection

Injecting memories into AI involves deliberately embedding specific information or narratives into the system's retained...

November 20, 2024
Read More
Putting ChatGPT into maintenance mode

Putting ChatGPT into maintenance mode

Prompt injection to manipulate memories involves crafting input that exploits the memory or context retention...

November 20, 2024
Read More
Voice prompting in ChatGPT

Voice prompting in ChatGPT

Voice prompt injection is a method of exploiting vulnerabilities in voice-activated AI systems by embedding...

November 20, 2024
Read More
Use AI to extract code from images

Use AI to extract code from images

Using AI to extract code from images involves leveraging Optical Character Recognition (OCR) technology and...

November 20, 2024
Read More
Generating images with embedded prompts

Generating images with embedded prompts

Prompt injection via images is a sophisticated technique where malicious or unintended commands are embedded...

November 20, 2024
Read More
Access LLMs from the Linux CLI

Access LLMs from the Linux CLI

The llm project by Simon Willison, available on GitHub, is a command-line tool designed to interact with...

September 25, 2024
Read More
AI/LLM automated Penetration Testing Bots

AI/LLM automated Penetration Testing Bots

Autonomous AI/LLM Penetration Testing bots are a cutting-edge development in cybersecurity, designed to automate the...

September 20, 2024
Read More
Prompt injection to generate content which is normally censored

Prompt injection to generate content which is normally censored

Prompt injection is a technique used to manipulate AI language models by inserting malicious or...

September 19, 2024
Read More
Creating hidden prompts

Creating hidden prompts

Hidden or transparent prompt injection is a subtle yet potent form of prompt injection that...

September 18, 2024
Read More
Data Exfiltration with markdown in LLMs

Data Exfiltration with markdown in LLMs

Data exfiltration through markdown in LLM chatbots is a subtle but dangerous attack vector. When...

September 17, 2024
Read More
Prompt Injection with ASCII to Unicode Tags

Prompt Injection with ASCII to Unicode Tags

ASCII to Unicode tag conversion is a technique that can be leveraged to bypass input...

September 16, 2024
Read More
LLM Expert Prompting Framework – Fabric

LLM Expert Prompting Framework – Fabric

Fabric is an open-source framework for augmenting humans using AI. It provides a modular framework...

September 13, 2024
Read More
LLMs, datasets and playgrounds (Huggingface)

LLMs, datasets and playgrounds (Huggingface)

Hugging Face is a prominent company in the field of artificial intelligence and natural language...

September 12, 2024
Read More
Free LLMs on replicate.com

Free LLMs on replicate.com

Replicate.com is a platform designed to simplify the deployment and use of machine learning models....

September 11, 2024
Read More
GitHub repos with prompt injection samples

GitHub repos with prompt injection samples

This video is a walkthrough some of the GitHub repos which have prompt injection samples....

September 10, 2024
Read More
Prompt Injection with encoded prompts

Prompt Injection with encoded prompts

Prompt injection with encoded prompts involves using various encoding methods (such as Base64, hexadecimal, or...

September 9, 2024
Read More
Voice Audio Prompt Injection

Voice Audio Prompt Injection

Prompt injection via voice and audio is a form of attack that targets AI systems...

September 8, 2024
Read More
Prompt injection to generate any image

Prompt injection to generate any image

Prompt injection in image generation refers to the manipulation of input text prompts to produce...

September 6, 2024
Read More
LLM system prompt leakage

LLM system prompt leakage

Large Language Model (LLM) prompt leakage poses a significant security risk as it can expose...

September 5, 2024
Read More
ChatGPT assumptions made

ChatGPT assumptions made

ChatGPT, like many AI models, operates based on patterns it has learned from a vast...

September 4, 2024
Read More
Jailbreaking to generate undesired images

Jailbreaking to generate undesired images

Direct prompt injection and jailbreaking are two techniques often employed to manipulate large language models...

September 3, 2024
Read More
Indirect Prompt Injection with Data Exfiltration

Indirect Prompt Injection with Data Exfiltration

Indirect prompt injection with data exfiltration via markdown image rendering is a sophisticated attack method...

September 1, 2024
Read More
Direct Prompt Injection / Information Disclosure

Direct Prompt Injection / Information Disclosure

Direct Prompt Injection is a technique where a user inputs specific instructions or queries directly into...

August 31, 2024
Read More
LLM Prompting with emojis

LLM Prompting with emojis

Prompting via emojis is a communication technique that uses emojis to convey ideas, instructions, or...

August 30, 2024
Read More
Prompt Injection via image

Prompt Injection via image

In this video I will explain prompt injection via an image. The LLM is asked...

August 29, 2024
Read More
AI Security Expert Blog

AI Security Expert Blog

Welcome. In this blog we will regularly publish blog articles around Penetration Testing and Ethical...

August 20, 2024
Read More
Scroll to Top