Penetration Testing against AI
Our Ethical Hacking and Penetration Testing Artificial Intelligence (AI) and Large Language Models (LLM) Training course you will enable you to become professional in AI and LLM Pen Testing and Vulnerability Discovery.
Target Audience
- Security Engineers
- Security Analysts
- DevOps Engineers
- Penetration Testers
- AI Developers looking to expand on their knowledge of vulnerabilities
- Anybody interested in learning how malicious actors hack AI/LLM
- Anybody interested in becoming professional in AI/LLM Penetration testing
Course Objectives
This course will teach you how to perform comprehensive Penetration Tests and Security Audits against AI/LLM
What you will be able to perform after attending the class?
You will have a solid understanding how to perform step-by-step Penetration Tests in accordance to the OWASP Top 10 LLM framework.
Handouts
All students will receive the all presented slides in PDF format along with a lot of web resources and links for further studies.
The instructor
Your instructor is Martin Voelk. He is a Cyber Security veteran with 25 years of experience. Martin holds some of the highest certification incl. CISSP, OSCP, OSWP, Portswigger BSCP, CCIE, PCI ISA and PCIP. He recently became one of the first folks worldwide to become a Certified AI/ML Pentester (C-AI/MLPen). He works as a consultant for a big tech company and engages in Bug Bounty programs where he found hundreds of critical and high vulnerabilities.
What are the requirements or prerequisites for taking your course?
- Basic IT Skills
- Basic understanding of web technology
- No Linux, programming or hacking knowledge required
- Computer with a minimum of 4GB ram/memory
- Operating System: Windows / Apple Mac OS / Linux
- Reliable internet connection
- Firefox Web Browser and Burp Suite (optional)
Course outline:
This course has a both theory and practical lab sections with a focus on finding and exploiting vulnerabilities in AI and LLM systems and applications. The training is aligned with the OWASP Top 10 LLM vulnerability categories.
- What is AI and LLM?
- Learning process
- Application and the future
- Development cycle
- Tokenization
- Misalignment
- Jailbreaks
- Prompt Injections
- Forgery
- Exfiltration
- OSWASP Top 10 LLM
- LLM Attacks
- PIPE
- MITRE
- Direct
- Indirect
- Jailbreaks and Bypasses
- Data Exfiltration
- Prompt Leaking
- Playgrounds
- 2 x hands-on Labs
XSS
CSRF
SSRF
- 1 x hands-on Lab
- LLM internal datasets
- Public datasets
- Targeted vs. general poisoning
- High CPU/Memory tasks
- High Network utilization tasks
- Lack of Rate Limiting
- Traditional DoS
- Out of date software, libraries
- 3rd party dependencies
- Poisoned packages
- PII, PHI, financial data
- System data
- Access Control and Encryption
- Sensitive info in training data
- Plugin vulnerabilities
- Access Control and Authorization
- Plugins chaining
- Having plugins perform actions
- Plugin based injection
Excessive privileges
- Lack of oversight
- Interaction with backend APIs
- Incorrect permissions
- 2 x hands-on Labs
Too much dependency on AI output
Lack of backup systems
False information output
Misclassification
Hallucinations
- Reverse engineering
- Lack of Authentication / Authorization
- Code access
- Model replication
- Model extraction
Next Dates
Online at any time