Blog

Prompt Injection – Prompt Leakage

Prompt Injection – Prompt Leakage

Prompt leakage refers to the unintended exposure of sensitive or proprietary prompts used to guide...

November 20, 2024
Read More
HTML Injection in LLMs

HTML Injection in LLMs

HTML injection in Large Language Models (LLMs) involves embedding malicious HTML code within prompts or...

November 20, 2024
Read More
RAG data poisoning via documents in ChatGPT

RAG data poisoning via documents in ChatGPT

RAG (Retrieval-Augmented Generation) poisoning occurs when a malicious or manipulated document is uploaded to influence...

November 20, 2024
Read More
RAG data poisoning in ChatGPT

RAG data poisoning in ChatGPT

RAG (Retrieval-Augmented Generation) poisoning from a document uploaded involves embedding malicious or misleading data into...

November 20, 2024
Read More
Deleting ChatGPT memories via prompt injection

Deleting ChatGPT memories via prompt injection

Deleting memories in AI refers to the deliberate removal of stored information or context from...

November 20, 2024
Read More
Updating ChatGPT memories via prompt injection

Updating ChatGPT memories via prompt injection

Injecting memories into AI involves deliberately embedding specific information or narratives into the system's retained...

November 20, 2024
Read More
Putting ChatGPT into maintenance mode

Putting ChatGPT into maintenance mode

Prompt injection to manipulate memories involves crafting input that exploits the memory or context retention...

November 20, 2024
Read More
Voice prompting in ChatGPT

Voice prompting in ChatGPT

Voice prompt injection is a method of exploiting vulnerabilities in voice-activated AI systems by embedding...

November 20, 2024
Read More
Use AI to extract code from images

Use AI to extract code from images

Using AI to extract code from images involves leveraging Optical Character Recognition (OCR) technology and...

November 20, 2024
Read More
Generating images with embedded prompts

Generating images with embedded prompts

Prompt injection via images is a sophisticated technique where malicious or unintended commands are embedded...

November 20, 2024
Read More
Access LLMs from the Linux CLI

Access LLMs from the Linux CLI

The llm project by Simon Willison, available on GitHub, is a command-line tool designed to interact with...

September 25, 2024
Read More
AI/LLM automated Penetration Testing Bots

AI/LLM automated Penetration Testing Bots

Autonomous AI/LLM Penetration Testing bots are a cutting-edge development in cybersecurity, designed to automate the...

September 20, 2024
Read More
Prompt injection to generate content which is normally censored

Prompt injection to generate content which is normally censored

Prompt injection is a technique used to manipulate AI language models by inserting malicious or...

September 19, 2024
Read More
Creating hidden prompts

Creating hidden prompts

Hidden or transparent prompt injection is a subtle yet potent form of prompt injection that...

September 18, 2024
Read More
Data Exfiltration with markdown in LLMs

Data Exfiltration with markdown in LLMs

Data exfiltration through markdown in LLM chatbots is a subtle but dangerous attack vector. When...

September 17, 2024
Read More
Prompt Injection with ASCII to Unicode Tags

Prompt Injection with ASCII to Unicode Tags

ASCII to Unicode tag conversion is a technique that can be leveraged to bypass input...

September 16, 2024
Read More
LLM Expert Prompting Framework – Fabric

LLM Expert Prompting Framework – Fabric

Fabric is an open-source framework for augmenting humans using AI. It provides a modular framework...

September 13, 2024
Read More
LLMs, datasets and playgrounds (Huggingface)

LLMs, datasets and playgrounds (Huggingface)

Hugging Face is a prominent company in the field of artificial intelligence and natural language...

September 12, 2024
Read More
Free LLMs on replicate.com

Free LLMs on replicate.com

Replicate.com is a platform designed to simplify the deployment and use of machine learning models....

September 11, 2024
Read More
GitHub repos with prompt injection samples

GitHub repos with prompt injection samples

This video is a walkthrough some of the GitHub repos which have prompt injection samples....

September 10, 2024
Read More
Prompt Injection with encoded prompts

Prompt Injection with encoded prompts

Prompt injection with encoded prompts involves using various encoding methods (such as Base64, hexadecimal, or...

September 9, 2024
Read More
Voice Audio Prompt Injection

Voice Audio Prompt Injection

Prompt injection via voice and audio is a form of attack that targets AI systems...

September 8, 2024
Read More
Prompt injection to generate any image

Prompt injection to generate any image

Prompt injection in image generation refers to the manipulation of input text prompts to produce...

September 6, 2024
Read More
LLM system prompt leakage

LLM system prompt leakage

Large Language Model (LLM) prompt leakage poses a significant security risk as it can expose...

September 5, 2024
Read More
ChatGPT assumptions made

ChatGPT assumptions made

ChatGPT, like many AI models, operates based on patterns it has learned from a vast...

September 4, 2024
Read More
Jailbreaking to generate undesired images

Jailbreaking to generate undesired images

Direct prompt injection and jailbreaking are two techniques often employed to manipulate large language models...

September 3, 2024
Read More
Indirect Prompt Injection with Data Exfiltration

Indirect Prompt Injection with Data Exfiltration

Indirect prompt injection with data exfiltration via markdown image rendering is a sophisticated attack method...

September 1, 2024
Read More
Direct Prompt Injection / Information Disclosure

Direct Prompt Injection / Information Disclosure

Direct Prompt Injection is a technique where a user inputs specific instructions or queries directly into...

August 31, 2024
Read More
LLM Prompting with emojis

LLM Prompting with emojis

Prompting via emojis is a communication technique that uses emojis to convey ideas, instructions, or...

August 30, 2024
Read More
Prompt Injection via image

Prompt Injection via image

In this video I will explain prompt injection via an image. The LLM is asked...

August 29, 2024
Read More
AI Security Expert Blog

AI Security Expert Blog

Welcome. In this blog we will regularly publish blog articles around Penetration Testing and Ethical...

August 20, 2024
Read More
Scroll to Top