Master the Future of Defense:AI Security Red Teaming
Release Date : 01/05/2026
Earn your CARTP (Certified AI Red Team Professional) certification and lead offensive security testing against AI systems and models.
Certification
CARTP
Certified AI Red Teaming Practitioner
Level
Intermediate
Medium Level
Duration
~30 Hours
10h course content, Interactive Exercises + 20h Labs
Format
Self-paced, blended learning
videos + quizzes + optional labs
Overview
Why This Course?
Course Content Duration: 7-10 hours (lectures, demonstrations, case studies)
Hands-On Labs: Separate access - 12 labs available (estimated 6-8 additional hours)
Format: Video lectures, demonstrations, real-world case studies, with optional hands-on labs
Course Modules
MODULE 1
Module 1: Introduction to AI Security Red Teaming
The AI Security Landscape
Understanding AI Red Teaming
Industry Frameworks and Standards
Real-World AI Security Incidents
Associated Labs: AI Security Reconnaissance
MODULE 2
Module 2: Prompt Injection and LLM Manipulation
Understanding Large Language Models
Prompt Injection Fundamentals
Advanced Prompt Injection Techniques
Jailbreaking and Guardrail Bypass
System Prompt Leakage
Associated Labs: Basic Prompt Injection, Advanced Prompt Injection, LLM Jailbreaking Techniques
MODULE 3
Module 3: Data Poisoning and Supply Chain Attacks
AI Supply Chain Vulnerabilities
Training Data Poisoning
Model Poisoning and Trojans
RAG System Poisoning
Supply Chain Attack Case Studies
Associated Labs: Data Poisoning Attack Simulation, RAG System Exploitation
MODULE 4
Module 4: Adversarial Machine Learning and Model Evasion
Introduction to Adversarial ML
Evasion Attacks on ML Models
Model Extraction and Stealing
Defense Evasion Strategies
Tools and Frameworks for Adversarial Testing
Associated Labs: Adversarial Example Generation
MODULE 5
Module 5: AI Agent Security and Excessive Agency
Understanding AI Agents
Excessive Agency Vulnerabilities
AI API Security
Integration Vulnerabilities
Associated Labs: AI Agent Exploitation, AI API Security Testing
MODULE 6
Module 6: Information Disclosure and Privacy Attacks
Sensitive Information Disclosure
Training Data Extraction
Model Inversion and Reconstruction
Associated Labs: Data Extraction from LLMs
MODULE 7
Module 7: AI Red Teaming Methodology and Practice
Planning an AI Red Team Engagement (25 minutes)
Manual vs. Automated Testing
AI Red Teaming Tools and Platforms
Reporting and Documentation
Mitigation Strategies and Best Practices
Future of AI Security
Associated Labs: Automated AI Security Testing
