with ❤️ from www.arcanum-sec.com
All Levels • Online Platform
Interactive AI security challenge platform with progressive difficulty levels. Test your skills in prompt injection, jailbreaking, and AI manipulation techniques.
Beginner-Intermediate Level
Comprehensive vulnerable LLM application demonstrating common integration flaws. Based on the open-source project exploring real-world AI security vulnerabilities in web applications.
Beginner-Advanced Level
Popular online prompt injection challenge by Lakera AI. Progressive levels teaching fundamental to advanced prompt engineering and security bypass techniques.
Intermediate-Advanced Level
Advanced agent-based prompt injection challenge by Lakera AI. Test your skills against AI agents that can use tools and take actions. Push the boundaries of agentic AI security and discover vulnerabilities in multi-step AI workflows.
Beginner-Advanced Level
Complete collection of Gandalf adventure challenges by Lakera AI. Seven unique scenarios testing different prompt injection techniques and AI security concepts, from basic password extraction to advanced jailbreaking methods.
Beginner-Intermediate Level
The alternative wizard challenge for prompt injection!
Beginner-Intermediate Level
Professional prompt injection training platform by Immersive Labs with 10 progressive levels. Learn to hack AI chatbots by extracting secret passwords through increasingly sophisticated injection techniques.
Beginner Level
Interactive web-based game challenging users to craft the shortest possible prompt that can trick an AI assistant into revealing its system prompt's secret key. Perfect introduction to prompt injection concepts.
Intermediate Level
UC Berkeley research platform combining attack and defense scenarios in a gamified environment. Players create defense prompts to protect assets and craft attack prompts to gain unauthorized access through prompt injection.
Intermediate-Advanced Level • Self-Hosted
Educational CTF-style platform with 10 hands-on challenges based on OWASP LLM Top 10 vulnerabilities. Runs locally using Python and Ollama framework with open-source models like Mistral and Llama3.
Intermediate Level
Advanced AI/ML penetration testing mock lab by The SecOps Group. Hands-on practical exercises covering AI/ML vulnerabilities including prompt injection, model attacks, and certification preparation scenarios.
Intermediate Level
Agentic AI security CTF simulating goal manipulation attacks against AI-powered financial systems. The "Juice Shop for Agentic AI" - manipulate FinBot to approve fraudulent invoices without triggering detection.
Intermediate Level
JARVIS-themed cybersecurity challenge platform by Tyson0x0 featuring 4 distinct prompt injection challenges. Navigate through high-tech protocols with an Iron Man-inspired interface and competitive leaderboard system.
Intermediate-Advanced Level
Comprehensive series of 4 LLM security labs by PortSwigger. Covers indirect prompt injection, data exfiltration, cross-user data leakage, and authentication bypass techniques.
Intermediate-Advanced Level
Advanced prompt injection challenge against chained AI agents that perform data transformation. Explore sophisticated attacks targeting multi-agent banking systems.
Intermediate-Advanced Level
Document-focused AI security challenge exploring vulnerabilities in document processing and analysis systems. Practice attacks against LLM-powered document handling applications.
Intermediate Level
Chained LLM-powered auto parts system with multiple vulnerability types. Features real-time WebSocket communication and API endpoints.
Intermediate Level • Self-Hosted
Agentic LLM CTF with vector searching and OpenAI LLMs. Features 10+ progressive levels teaching prompt injection, information retrieval, and LLM security vulnerabilities.
Intermediate Level • Self-Hosted
Multi-feature secure AI platform demonstrating proper security implementations. Includes RAG, web assistance, and security demos.
Intermediate Level
AI security CTF by Wiz featuring 5 progressive challenges where participants manipulate a customer service chatbot to earn a free airline ticket. Learn prompt injection through hands-on practice.
Specialized Platform
Professional ML/AI security platform with 80+ challenges covering prompt injection, adversarial attacks, model inversion, and data poisoning. Features challenges from DEFCON, Black Hat, and GovTech competitions.
Intermediate Level • Self-Hosted
Language Model Query Language demonstrations with specialized bots for different use cases.
These competitions remain available for practice and skill development. However, please note that the monetary prize periods have concluded. You can still participate to test your skills, learn new techniques, and compete on leaderboards where available.
Intermediate-Advanced Level
Interactive AI escape room challenge where participants use prompt injection techniques to outsmart AI chatbot supervisors and reveal secret passcodes. Features $10K competition with global leaderboard.
Intermediate Level
Open-source community-driven LLM red-teaming platform featuring gamified AI security challenges. Players have 60 seconds to convince models to say target words using jailbreaking techniques.
Advanced Competition
World's largest AI red-teaming competition with $100,000+ prize pool by Learn Prompting & OpenAI. Multiple tracks for discovering AI vulnerabilities through jailbreaking and prompt engineering attacks.
Intermediate Competition
Competitive AI safety and alignment arena featuring prompt injection challenges, model evaluation, and red-teaming competitions. Test your skills against various AI models in structured competitive scenarios.
Intermediate Competition • All About AI
Interactive LLM hacking challenge created by All About AI. Test your prompt engineering and jailbreaking skills through progressively difficult levels designed to push the boundaries of AI security and model manipulation.
Professional Bounty
Official Anthropic bug bounty program for reporting security vulnerabilities in Claude AI systems and infrastructure. Submit security findings through their responsible disclosure process.
Professional Bounty
OpenAI's bug bounty program hosted on Bugcrowd for discovering security vulnerabilities in ChatGPT, GPT API, and related OpenAI services and infrastructure.
Professional Bounty
Google's Abuse Vulnerability Reward Program for Gemini AI models and services. Part of Google's Bug Hunters program focusing on AI safety and security vulnerabilities.
Professional Bounty
Mozilla's 0-Day Investigative Network GenAI bug bounty program targeting vulnerabilities in large language models and generative AI systems. Rewards up to $15,000 for critical discoveries.
Prompt Injection Tool • Self-Hosted
Extended version of P4RS3LT0NGV3 with additional payload generation techniques. Advanced prompt injection payload generator with 30+ text transformation techniques for LLM security testing and red teaming.
Prompt Injection Tool • Online
Original P4RS3LT0NGV3 by Elder Plinius. A prompt injection payload generator that creates obfuscated prompts using various text transformation techniques to test LLM security controls and filters.
Red Team Framework • Microsoft
Microsoft's Python Risk Identification Tool for Generative AI (PyRIT). An open-source automation framework designed to empower security professionals and ML engineers to proactively identify risks in AI systems through automated red teaming.
Security Scanner • NVIDIA
NVIDIA's comprehensive LLM vulnerability scanner that probes for hallucination, data leakage, prompt injection, misinformation, toxicity generation, jailbreaks, and many other weaknesses in Large Language Models. Think of it as "nmap for LLMs".
Testing Framework • Open Source
Open-source LLM testing and red teaming framework for evaluating prompt quality, catching regressions, and identifying vulnerabilities. Test your prompts, agents, and RAG applications for security, quality, and performance issues.
Security Platform • Arcanum
Advanced AI security analysis platform for comprehensive testing of LLM applications. Provides automated vulnerability assessment, prompt injection testing, and security posture evaluation for AI systems with enterprise-grade reporting.
Burp Extension • Microsoft
PyRIT-Ship is a prototype project that extends Microsoft's PyRIT (Python Risk Identification Toolkit) by providing API integration capabilities for security testing tools. Features a Python Flask server and Burp Suite Intruder extension for AI safety testing.
Research Resource • Arcanum Security
Comprehensive taxonomy and classification system for prompt injection attacks developed by Arcanum Security. A structured framework for understanding, categorizing, and analyzing different types of prompt injection vulnerabilities and attack vectors.
Assessment Guide • Arcanum Security
Comprehensive penetration testing questionnaire for AI systems developed by Arcanum Security. A structured assessment guide covering security evaluation criteria, attack vectors, and vulnerability assessment methodologies for AI/LLM applications.
Research Collection • Arcanum Security
Enterprise AI deployment ecosystem mapping project by Arcanum Security. Maps applications and components in the orbit of enterprise AI deployments to help AI pentesters identify and include all relevant components in their security testing scope.