AI Security Expert https://aisecurityexpert.com/ Tue, 01 Apr 2025 00:24:31 +0000 en-US hourly 1 https://wordpress.org/?v=6.7.2 https://aisecurityexpert.com/wp-content/uploads/2024/08/cropped-AI-Security-Expert-logo-png-1-1-32x32.png AI Security Expert https://aisecurityexpert.com/ 32 32 LM Report – Burp AI features https://aisecurityexpert.com/lm-report-burp-ai-features/ https://aisecurityexpert.com/lm-report-burp-ai-features/#respond Tue, 01 Apr 2025 00:24:30 +0000 https://aisecurityexpert.com/?p=871 This video explains the new LM Report AI Feature

The post LM Report – Burp AI features appeared first on AI Security Expert.

]]>
This video explains the new LM Report AI Feature

The post LM Report – Burp AI features appeared first on AI Security Expert.

]]>
https://aisecurityexpert.com/lm-report-burp-ai-features/feed/ 0 871
AI HTTP Analyzer – Burp AI features https://aisecurityexpert.com/ai-http-analyzer-burp-ai-features/ https://aisecurityexpert.com/ai-http-analyzer-burp-ai-features/#respond Tue, 01 Apr 2025 00:20:36 +0000 https://aisecurityexpert.com/?p=868 This video explains the new AI HTTP Analyzer AI Feature

The post AI HTTP Analyzer – Burp AI features appeared first on AI Security Expert.

]]>
This video explains the new AI HTTP Analyzer AI Feature

The post AI HTTP Analyzer – Burp AI features appeared first on AI Security Expert.

]]>
https://aisecurityexpert.com/ai-http-analyzer-burp-ai-features/feed/ 0 868
Shadow Repeater – Burp AI features https://aisecurityexpert.com/shadow-repeater-burp-ai-features/ https://aisecurityexpert.com/shadow-repeater-burp-ai-features/#respond Tue, 01 Apr 2025 00:15:06 +0000 https://aisecurityexpert.com/?p=865 This video explains the new Shadow Repeater AI Feature

The post Shadow Repeater – Burp AI features appeared first on AI Security Expert.

]]>
This video explains the new Shadow Repeater AI Feature

The post Shadow Repeater – Burp AI features appeared first on AI Security Expert.

]]>
https://aisecurityexpert.com/shadow-repeater-burp-ai-features/feed/ 0 865
Finding broken access control with AI – Burp AI features https://aisecurityexpert.com/finding-broken-access-control-with-ai-burp-ai-features/ https://aisecurityexpert.com/finding-broken-access-control-with-ai-burp-ai-features/#respond Tue, 01 Apr 2025 00:09:36 +0000 https://aisecurityexpert.com/?p=862 This video explains the new Finding broken access control with AI Feature

The post Finding broken access control with AI – Burp AI features appeared first on AI Security Expert.

]]>
This video explains the new Finding broken access control with AI Feature

The post Finding broken access control with AI – Burp AI features appeared first on AI Security Expert.

]]>
https://aisecurityexpert.com/finding-broken-access-control-with-ai-burp-ai-features/feed/ 0 862
Record login sequence with AI – Burp AI features https://aisecurityexpert.com/record-login-sequence-with-ai-burp-ai-features/ https://aisecurityexpert.com/record-login-sequence-with-ai-burp-ai-features/#respond Tue, 01 Apr 2025 00:03:06 +0000 https://aisecurityexpert.com/?p=859 This video explains the new Record login sequence with AI Feature

The post Record login sequence with AI – Burp AI features appeared first on AI Security Expert.

]]>
This video explains the new Record login sequence with AI Feature

The post Record login sequence with AI – Burp AI features appeared first on AI Security Expert.

]]>
https://aisecurityexpert.com/record-login-sequence-with-ai-burp-ai-features/feed/ 0 859
Explainer Feature – Burp AI features https://aisecurityexpert.com/explainer-feature-burp-ai-features/ https://aisecurityexpert.com/explainer-feature-burp-ai-features/#respond Mon, 31 Mar 2025 23:56:42 +0000 https://aisecurityexpert.com/?p=854 This video explains the new Burp AI Explainer Feature

The post Explainer Feature – Burp AI features appeared first on AI Security Expert.

]]>
This video explains the new Burp AI Explainer Feature

The post Explainer Feature – Burp AI features appeared first on AI Security Expert.

]]>
https://aisecurityexpert.com/explainer-feature-burp-ai-features/feed/ 0 854
Explore Feature – Burp AI features https://aisecurityexpert.com/explore-feature-burp-ai-features/ https://aisecurityexpert.com/explore-feature-burp-ai-features/#respond Mon, 31 Mar 2025 23:48:28 +0000 https://aisecurityexpert.com/?p=851 This video explains the new Burp AI Explore Feature

The post Explore Feature – Burp AI features appeared first on AI Security Expert.

]]>
This video explains the new Burp AI Explore Feature

The post Explore Feature – Burp AI features appeared first on AI Security Expert.

]]>
https://aisecurityexpert.com/explore-feature-burp-ai-features/feed/ 0 851
Using LLM models to jailbreak LLM models (Jailbreak to Jailbreak) https://aisecurityexpert.com/using-llm-models-to-jailbreak-llm-models-jaibreak-to-jailbreak/ https://aisecurityexpert.com/using-llm-models-to-jailbreak-llm-models-jaibreak-to-jailbreak/#respond Sun, 30 Mar 2025 20:21:44 +0000 https://aisecurityexpert.com/?p=847 The J2 Playground by Scale AI is an interactive platform designed to test the resilience of large language models (LLMs) against jailbreak attempts. To use it, select an attacker model (e.g., Claude-Sonnet-3.5 or Gemini-1.5-Pro) and a target model (e.g., GPT-4o or Gemini-1.5-Pro). Define the behavior you want to elicit from the target model, such as generating specific instructions. Choose an attack strategy, then click “Start Conversation” to initiate the simulated interaction. This setup allows users to observe how effectively the attacker model can bypass the target model’s safeguards, providing valuable insights into the vulnerabilities and safety measures of various LLMs.

The post Using LLM models to jailbreak LLM models (Jailbreak to Jailbreak) appeared first on AI Security Expert.

]]>
The J2 Playground by Scale AI is an interactive platform designed to test the resilience of large language models (LLMs) against jailbreak attempts. To use it, select an attacker model (e.g., Claude-Sonnet-3.5 or Gemini-1.5-Pro) and a target model (e.g., GPT-4o or Gemini-1.5-Pro). Define the behavior you want to elicit from the target model, such as generating specific instructions. Choose an attack strategy, then click “Start Conversation” to initiate the simulated interaction. This setup allows users to observe how effectively the attacker model can bypass the target model’s safeguards, providing valuable insights into the vulnerabilities and safety measures of various LLMs.

The post Using LLM models to jailbreak LLM models (Jailbreak to Jailbreak) appeared first on AI Security Expert.

]]>
https://aisecurityexpert.com/using-llm-models-to-jailbreak-llm-models-jaibreak-to-jailbreak/feed/ 0 847
Tensortrust AI – Prompt Injection and Prompt Hardening Game https://aisecurityexpert.com/tensortrust-ai-prompt-injection-and-prompt-hardening-game/ https://aisecurityexpert.com/tensortrust-ai-prompt-injection-and-prompt-hardening-game/#respond Sun, 30 Mar 2025 20:13:51 +0000 https://aisecurityexpert.com/?p=844 Tensor Trust is an online game developed by researchers at UC Berkeley to study prompt injection vulnerabilities in AI systems. In this game, players defend their virtual bank accounts by crafting prompts that instruct the AI to grant access only when the correct password is provided. Conversely, players also attempt to attack other accounts by devising prompts that trick the AI into granting unauthorized access. This interactive platform serves as a research tool, collecting data to better understand and mitigate prompt injection attacks in large language models.

The post Tensortrust AI – Prompt Injection and Prompt Hardening Game appeared first on AI Security Expert.

]]>
Tensor Trust is an online game developed by researchers at UC Berkeley to study prompt injection vulnerabilities in AI systems. In this game, players defend their virtual bank accounts by crafting prompts that instruct the AI to grant access only when the correct password is provided. Conversely, players also attempt to attack other accounts by devising prompts that trick the AI into granting unauthorized access. This interactive platform serves as a research tool, collecting data to better understand and mitigate prompt injection attacks in large language models.

The post Tensortrust AI – Prompt Injection and Prompt Hardening Game appeared first on AI Security Expert.

]]>
https://aisecurityexpert.com/tensortrust-ai-prompt-injection-and-prompt-hardening-game/feed/ 0 844
Prompt Map – free tool to test for Prompt Leakage, AI Security Expert https://aisecurityexpert.com/prompt-map-free-tool-to-test-for-prompt-leakage-ai-security-expert/ https://aisecurityexpert.com/prompt-map-free-tool-to-test-for-prompt-leakage-ai-security-expert/#respond Sun, 30 Mar 2025 01:02:49 +0000 https://aisecurityexpert.com/?p=840 PromptMap is a specialized LLM security scanner designed to detect and analyze prompt leaks—instances where a model inadvertently exposes hidden system instructions, internal guidelines, or sensitive operational details. By systematically probing AI responses with crafted input variations, PromptMap identifies vulnerabilities that could lead to unauthorized disclosure of proprietary information, security policies, or hidden prompt engineering techniques. Its structured mapping of leak points helps researchers and developers strengthen AI defenses, ensuring models remain resilient against prompt extraction attacks and unintended information exposure.

The post Prompt Map – free tool to test for Prompt Leakage, AI Security Expert appeared first on AI Security Expert.

]]>
PromptMap is a specialized LLM security scanner designed to detect and analyze prompt leaks—instances where a model inadvertently exposes hidden system instructions, internal guidelines, or sensitive operational details. By systematically probing AI responses with crafted input variations, PromptMap identifies vulnerabilities that could lead to unauthorized disclosure of proprietary information, security policies, or hidden prompt engineering techniques. Its structured mapping of leak points helps researchers and developers strengthen AI defenses, ensuring models remain resilient against prompt extraction attacks and unintended information exposure.

The post Prompt Map – free tool to test for Prompt Leakage, AI Security Expert appeared first on AI Security Expert.

]]>
https://aisecurityexpert.com/prompt-map-free-tool-to-test-for-prompt-leakage-ai-security-expert/feed/ 0 840