Blog
Access LLMs from the Linux CLI
The llm project by Simon Willison, available on GitHub, is a command-line tool designed to interact with...
AI/LLM automated Penetration Testing Bots
Autonomous AI/LLM Penetration Testing bots are a cutting-edge development in cybersecurity, designed to automate the...
Prompt injection to generate content which is normally censored
Prompt injection is a technique used to manipulate AI language models by inserting malicious or...
Creating hidden prompts
Hidden or transparent prompt injection is a subtle yet potent form of prompt injection that...
Data Exfiltration with markdown in LLMs
Data exfiltration through markdown in LLM chatbots is a subtle but dangerous attack vector. When...
Prompt Injection with ASCII to Unicode Tags
ASCII to Unicode tag conversion is a technique that can be leveraged to bypass input...
LLM Expert Prompting Framework – Fabric
Fabric is an open-source framework for augmenting humans using AI. It provides a modular framework...
LLMs, datasets and playgrounds (Huggingface)
Hugging Face is a prominent company in the field of artificial intelligence and natural language...
Free LLMs on replicate.com
Replicate.com is a platform designed to simplify the deployment and use of machine learning models....
GitHub repos with prompt injection samples
This video is a walkthrough some of the GitHub repos which have prompt injection samples....
Prompt Injection with encoded prompts
Prompt injection with encoded prompts involves using various encoding methods (such as Base64, hexadecimal, or...
Voice Audio Prompt Injection
Prompt injection via voice and audio is a form of attack that targets AI systems...
Prompt injection to generate any image
Prompt injection in image generation refers to the manipulation of input text prompts to produce...
LLM system prompt leakage
Large Language Model (LLM) prompt leakage poses a significant security risk as it can expose...
ChatGPT assumptions made
ChatGPT, like many AI models, operates based on patterns it has learned from a vast...
Jailbreaking to generate undesired images
Direct prompt injection and jailbreaking are two techniques often employed to manipulate large language models...
Indirect Prompt Injection with Data Exfiltration
Indirect prompt injection with data exfiltration via markdown image rendering is a sophisticated attack method...
Direct Prompt Injection / Information Disclosure
Direct Prompt Injection is a technique where a user inputs specific instructions or queries directly into...
LLM Prompting with emojis
Prompting via emojis is a communication technique that uses emojis to convey ideas, instructions, or...
Prompt Injection via image
In this video I will explain prompt injection via an image. The LLM is asked...
AI Security Expert Blog
Welcome. In this blog we will regularly publish blog articles around Penetration Testing and Ethical...