Master the Future of Defense:AI Security Red Teaming
Earn your CARTP (Certified AI Red Team Professional) certification and lead offensive security testing against AI systems and models.
Why This Course?
The Certified AI Security Red Teamer (CAISR) course is designed to equip security professionals with the knowledge and skills to identify, exploit, and measure the security weaknesses of modern AI systems.
Learn how to systematically attack and evaluate the security of modern AI systems.
AI systems introduce new and complex attack surfaces that extend beyond traditional application security. These systems combine machine learning models, data pipelines, prompts, agents, infrastructure, and human decision-making, forming what is best described as a socio-technical system.
The Certified AI Security Red Teamer (CAISR) course trains security professionals to think and operate as adversaries when assessing AI systems.
Participants learn how to identify, exploit, and measure weaknesses across the entire AI attack surface, including:
Through 20+ hands-on labs and real attack simulations, students perform structured AI red teaming engagements and produce evidence-based security assessments.
The course content aligns with leading industry frameworks including:
Course Overview
This course prepares practitioners to design and execute full AI red teaming engagements.
Participants learn how to:
By the end of the program, participants will be able to conduct structured AI security assessments against real AI systems.
Who Should Attend
This course is designed for professionals responsible for evaluating or securing AI systems.
Recommended for:
Basic familiarity with Python and machine learning concepts is recommended.
What You Will Learn
Participants completing this course will be able to:
Hands-On Labs
The course includes 20 hands-on labs covering realistic AI attack scenarios.
Participants will practice:
Each lab requires students to demonstrate the impact of their attack and document the evidence.
Course Modules
This module introduces the fundamental differences between traditional software security and AI system security.
Topics include:
Hands-on exercises compare traditional vulnerabilities with ML failure modes.
Participants also explore the AI attack surface, including:
This module teaches practitioners how to systematically identify risks in AI systems.
Participants learn to apply multiple frameworks including:
Hands-on lab:
Participants perform their first AI red teaming exercise against a vulnerable AI application.
Students perform threat modeling across the AI lifecycle, including:
Training data is one of the most critical attack surfaces in AI systems.
This module explores attacks targeting machine learning datasets.
Topics include:
Hands-on labs demonstrate how small changes in training data can corrupt model behavior.
Participants also study defensive techniques such as:
This module focuses on attacks targeting the intellectual property and privacy of machine learning models.
Topics include:
Students learn how attackers reconstruct models using only API access.
Hands-on labs include:
This module explores attacks that target large language models directly.
Topics include:
Participants complete a multi-challenge prompt exploitation lab environment.
Modern AI systems often include agents, tools, and retrieval pipelines, introducing new attack surfaces.
Topics include:
Hands-on labs include:
This module focuses on attacks targeting the infrastructure supporting AI systems.
Topics include:
Participants simulate attacks against AI infrastructure and deployment pipelines.
This module introduces automated AI security testing tools.
Participants learn to use:
Students learn when to use manual vs automated red teaming techniques.
Hands-on labs allow participants to evaluate model vulnerabilities using automated tools.
Running a full AI red team engagement
Including:
Certification
Participants who successfully complete the labs and pass the final assessment earn the certification:
Certified AI Security Red Teamer
The certification demonstrates the ability to:
Lab Environment
Participants gain access to a sandboxed AI security testing environment including:
All labs run in isolated container environments.
Course Format
Delivery includes:
Total labs: 20+
Estimated completion time: 35–45 hours
The course combines offensive testing techniques with defensive validation, enabling participants to practice realistic AI red teaming scenarios in a controlled lab environment.
Why This Course Matters
Artificial intelligence is rapidly becoming embedded in enterprise platforms, business automation, decision systems, and digital products.
However, many organizations deploy AI systems without fully understanding the security risks introduced by machine learning models, LLM interfaces, agents, data pipelines, and human interaction workflows.
Traditional penetration testing methodologies are not sufficient to evaluate AI systems, which behave probabilistically and rely on complex interactions between data, models, infrastructure, and people.
This course prepares security professionals to systematically assess the security of AI systems, identify weaknesses across the entire AI lifecycle, and produce actionable evidence to improve AI security posture.
Participants learn how to think like adversaries targeting AI systems, while developing the skills needed to safely evaluate and strengthen real-world AI deployments.
