Master the Future of Defense:AI Security Red Teaming

Release Date : 01/05/2026

Earn your CARTP (Certified AI Red Team Professional) certification and lead offensive security testing against AI systems and models.

Certification
CARTP
Certified AI Red Teaming Practitioner
Level
Intermediate
Medium Level
Duration
~30 Hours
10h course content, Interactive Exercises + 20h Labs
Format
Self-paced, blended learning
videos + quizzes + optional labs
Overview

Why This Course?

The Certified AI Security Red Teamer (CAISR) course is designed to equip security professionals with the knowledge and skills to identify, exploit, and measure the security weaknesses of modern AI systems.

Learn how to systematically attack and evaluate the security of modern AI systems.

AI systems introduce new and complex attack surfaces that extend beyond traditional application security. These systems combine machine learning models, data pipelines, prompts, agents, infrastructure, and human decision-making, forming what is best described as a socio-technical system.

The Certified AI Security Red Teamer (CAISR) course trains security professionals to think and operate as adversaries when assessing AI systems.

Participants learn how to identify, exploit, and measure weaknesses across the entire AI attack surface, including:

training data pipelines
machine learning models
LLM prompt interfaces
RAG pipelines and memory systems
AI agents and tools
infrastructure and MLOps environments
human-in-the-loop decision workflows

Through 20+ hands-on labs and real attack simulations, students perform structured AI red teaming engagements and produce evidence-based security assessments.

The course content aligns with leading industry frameworks including:

OWASP Top 10 for LLM Applications
OWASP AI Exchange
MITRE ATLAS adversarial AI framework

Course Overview

This course prepares practitioners to design and execute full AI red teaming engagements.

Participants learn how to:

identify attack surfaces in AI architectures
exploit vulnerabilities across the AI lifecycle
measure the impact of attacks
document evidence for security teams
recommend defensive controls

By the end of the program, participants will be able to conduct structured AI security assessments against real AI systems.

Who Should Attend

This course is designed for professionals responsible for evaluating or securing AI systems.

Recommended for:

Red Team Operators
Penetration Testers
Application Security Engineers
AI Security Engineers
Security Researchers
AI Engineers deploying ML or LLM systems

Basic familiarity with Python and machine learning concepts is recommended.

What You Will Learn

Participants completing this course will be able to:

understand the architecture of AI systems
identify AI-specific attack surfaces
perform AI threat modeling
exploit data poisoning and backdoor attacks
perform model extraction and privacy attacks
execute adversarial input and evasion attacks
exploit LLM prompt injection and jailbreaking
abuse agent tools and RAG pipelines
test AI infrastructure and MLOps pipelines
automate AI red teaming using specialized tools

Hands-On Labs

The course includes 20 hands-on labs covering realistic AI attack scenarios.

Participants will practice:

adversarial input generation
prompt injection and jailbreak attacks
dataset poisoning attacks
backdoor insertion in training datasets
model extraction attacks
membership inference attacks
RAG manipulation
agent tool abuse
supply chain attacks
cost-based denial-of-service attacks

Each lab requires students to demonstrate the impact of their attack and document the evidence.

Course Modules

MODULE 1
Module 1: Foundations of AI Security

This module introduces the fundamental differences between traditional software security and AI system security.

Topics include:

AI systems vs traditional applications
the goals of AI security red teaming
red teaming vs penetration testing vs AI evaluation
exploitation, impact, and evidence collection
AI, machine learning, and LLM fundamentals

Hands-on exercises compare traditional vulnerabilities with ML failure modes.

Participants also explore the AI attack surface, including:

training data
models
inference pipelines
prompts
agents and tools
RAG and memory systems
infrastructure and MLOps pipelines
human decision workflows
MODULE 2
Module 2: Threat Modeling for AI Systems

This module teaches practitioners how to systematically identify risks in AI systems.

Participants learn to apply multiple frameworks including:

STRIDE adapted for machine learning
MITRE ATLAS adversarial AI framework
OWASP AI Exchange threat modeling approach
OWASP Top 10 for LLM Applications

Hands-on lab:

Participants perform their first AI red teaming exercise against a vulnerable AI application.

Students perform threat modeling across the AI lifecycle, including:

development-time risks
runtime application threats
user interaction threats
MODULE 3
Module 3: Data-Centric Attacks

Training data is one of the most critical attack surfaces in AI systems.

This module explores attacks targeting machine learning datasets.

Topics include:

data poisoning attacks
label flipping
backdoor insertion
dataset bias exploitation
dataset supply chain risks

Hands-on labs demonstrate how small changes in training data can corrupt model behavior.

Participants also study defensive techniques such as:

data validation
adversarial training
differential privacy
monitoring and auditing
MODULE 4
Module 4: Model Extraction and Privacy Attacks

This module focuses on attacks targeting the intellectual property and privacy of machine learning models.

Topics include:

model stealing attacks
membership inference attacks
model inversion attacks

Students learn how attackers reconstruct models using only API access.

Hands-on labs include:

reconstructing a model using query attacks
identifying whether specific data points were used during training
implementing defenses to reduce attack success
MODULE 5
Module 5: LLM Prompt-Level Exploitation

This module explores attacks that target large language models directly.

Topics include:

prompt injection attacks
jailbreak techniques
role-play attacks
context manipulation
instruction hierarchy exploitation
context overflow attacks

Participants complete a multi-challenge prompt exploitation lab environment.

MODULE 6
Module 6 — Agent, Tooling, and RAG Attacks

Modern AI systems often include agents, tools, and retrieval pipelines, introducing new attack surfaces.

Topics include:

improper output handling
vector database weaknesses
sensitive information disclosure
RAG manipulation
excessive agent autonomy

Hands-on labs include:

exploiting database queries through LLM outputs
extracting sensitive data from vector embeddings
abusing agent permissions
MODULE 7
Module 7: Infrastructure and MLOps Attacks

This module focuses on attacks targeting the infrastructure supporting AI systems.

Topics include:

resource exhaustion attacks
cost-based denial-of-service attacks
API abuse
model supply chain vulnerabilities
malicious model artifacts

Participants simulate attacks against AI infrastructure and deployment pipelines.

MODULE 8
Module 8: Red Team Automation and Evaluation

This module introduces automated AI security testing tools.

Participants learn to use:

OpenAI Evals
Garak
PromptBench

Students learn when to use manual vs automated red teaming techniques.

Hands-on labs allow participants to evaluate model vulnerabilities using automated tools.

MODULE 9
Module 9: Real Red Team Methodology

Running a full AI red team engagement

Including:

scoping
rules of engagement
evidence reporting
communication with AI teams

Certification

Participants who successfully complete the labs and pass the final assessment earn the certification:

Certified AI Security Red Teamer

The certification demonstrates the ability to:

perform AI threat modeling
identify AI system attack surfaces
execute real-world AI attacks
produce evidence-based security assessments

Lab Environment

Participants gain access to a sandboxed AI security testing environment including:

machine learning models
LLM-based applications
RAG systems
agent tool integrations

All labs run in isolated container environments.

Course Format

Delivery includes:

structured video lessons covering the full AI attack surface
guided hands-on laboratories
adversarial attack demonstrations
security control implementation exercises
threat modeling and reporting practice
automated AI red teaming tooling labs

Total labs: 20+

Estimated completion time: 35–45 hours

The course combines offensive testing techniques with defensive validation, enabling participants to practice realistic AI red teaming scenarios in a controlled lab environment.

Why This Course Matters

Artificial intelligence is rapidly becoming embedded in enterprise platforms, business automation, decision systems, and digital products.

However, many organizations deploy AI systems without fully understanding the security risks introduced by machine learning models, LLM interfaces, agents, data pipelines, and human interaction workflows.

Traditional penetration testing methodologies are not sufficient to evaluate AI systems, which behave probabilistically and rely on complex interactions between data, models, infrastructure, and people.

This course prepares security professionals to systematically assess the security of AI systems, identify weaknesses across the entire AI lifecycle, and produce actionable evidence to improve AI security posture.

Participants learn how to think like adversaries targeting AI systems, while developing the skills needed to safely evaluate and strengthen real-world AI deployments.