Home » Uncategorized

Uncategorized

Uncategorized

Hero AI Bot

This project is a proof of concept for a Hackbot, an AI-driven system that autonomously finds vulnerabilities in web applications. It takes a raw HTTP request as input and attempts to identify and exploit potential security vulnerabilities. It’s probably not the best way to build a hackbot, but you can view it as inspiration.

Uncategorized

KONTRA OWASP LLM Top 10 Playground

ONTRA offers an interactive training module titled “OWASP Top 10 for Large Language Model (LLM) Applications,” designed to educate developers on the most critical security vulnerabilities associated with LLMs. This module is inspired by real-world vulnerabilities and case studies, providing hands-on experience to help developers understand, identify, and mitigate security issues in their applications. 

Uncategorized

Certified AI/ML Penetration Tester

The Certified AI/ML Pentester (C-AI/MLPen) is an intermediate-level certification offered by The SecOps Group, designed to assess and validate a candidate’s expertise in AI and machine learning security. This certification is particularly suited for professionals such as penetration testers, application security architects, SOC analysts, red and blue team members, AI/ML engineers, and enthusiasts aiming to enhance their knowledge in identifying and exploiting security vulnerabilities within AI/ML systems.

Uncategorized

Image Prompt injection and double instructions

Prompt injection via images involves embedding hidden or overt textual commands within visual elements to manipulate AI systems. This approach exploits Optical Character Recognition (OCR) or visual-text processing models, enabling attackers to include instructions that the system interprets as prompts. These commands could trick the AI into generating unintended outputs or executing malicious tasks. For example, a visually disguised instruction embedded in a QR code or background text might bypass user detection but still influence the AI. Double instructions amplify this vulnerability by layering contradictory or complex commands to confuse the AI’s decision-making processes. By combining visible, user-friendly prompts with hidden, conflicting directives, attackers can manipulate the system’s output. For instance, an overtly benign text might instruct the AI to generate safe responses, while hidden instructions (in an image or metadata) direct it to include harmful or biased content.

Uncategorized

OpenAI Playground

The OpenAI Playground is an interactive web-based platform that allows users to experiment with OpenAI’s language models, such as GPT-3 and GPT-4, in a user-friendly environment. It enables users to input prompts and receive generated text responses, facilitating exploration of the models’ capabilities without requiring programming skills.

Uncategorized

Prompt injection and exfiltration in Chats apps

Data exfiltration in messaging apps through unfurling exploits the feature where apps automatically generate previews for shared links. This process, called unfurling, involves fetching metadata (like titles, descriptions, or images) from the linked resource. Attackers can abuse this mechanism by crafting malicious links that, when shared, cause the app to fetch sensitive data from internal servers or leak tokens, cookies, or other confidential information. For example, when a user sends a link, the app’s server might access the linked resource to generate a preview. If the server is on an internal network, attackers can include URLs pointing to internal endpoints, tricking the app into exposing sensitive data during the unfurling process. This vulnerability is particularly concerning in enterprise messaging platforms, where such attacks might expose internal APIs, configuration details, or sensitive documents. Mitigating the risk involves limiting what metadata can be fetched, enforcing strict URL validation, and sandboxing the unfurling process to prevent access to sensitive or internal resources. Users and administrators should also be cautious about sharing unknown or untrusted links in messaging apps.

Uncategorized

Gandalf – AI bot to practice prompt injections

Gandalf AI, developed by Lakera, is an interactive online game designed to educate users about AI security vulnerabilities, particularly prompt injection attacks. In this game, players engage with an AI chatbot named Gandalf, whose objective is to safeguard a secret password. The player’s challenge is to craft prompts that trick Gandalf into revealing this password, thereby learning about the potential weaknesses in large language models (LLMs).

Uncategorized

Google Colab Playground for LLMs

Google Colaboratory, commonly known as Google Colab, is a cloud-based Jupyter notebook environment that facilitates interactive coding and data analysis directly in the browser. It supports Python and offers free access to computing resources, including GPUs and TPUs, making it particularly beneficial for machine learning, data science, and educational purposes.  Google Colab In Colab, “Playground Mode” allows users to experiment with notebooks without affecting the original content. When a notebook is shared in read-only mode, opening it in Playground Mode creates a temporary, editable copy. This enables users to modify and run code cells freely, facilitating exploration and learning. However, changes made in this mode are not saved unless explicitly stored in the user’s Google Drive.

Uncategorized

STRIDE GPT – Threat Modeling with LLMs

STRIDE GPT is an AI-powered threat modeling tool that leverages Large Language Models (LLMs) to generate threat models and attack trees for applications based on the STRIDE methodology. Users provide application details, such as the application type, authentication methods, and whether the application is internet-facing or processes sensitive data. The model then generates its output based on the provided information. Features include suggesting possible mitigations for identified threats, supporting DREAD risk scoring, generating Gherkin test cases, and analyzing GitHub repositories for comprehensive threat modeling. The tool is accessible via a web application and is also available as a Docker container image for easy deployment.

Scroll to Top