Master the Future of AI:AI Agentic Security Practitioner Course

Release Date : 01/04/2026

Earn your CASP (Certified AI Security Professional) certification and design, implement, and manage secure AI systems at scale.

Certification
CASP
Certified Agentic Security Practitioner
Level
Intermediate
Medium Level
Duration
~30 Hours
10h course + 20h Labs
Format
Self-paced, blended learning
videos, quizzes, exercises
Overview

Why This Course?

Master the security of autonomous AI systems through hands-on exploitation and defense.

Agentic AI systems are transforming enterprise automation, enabling AI agents to interact with tools, APIs, databases, and other agents. However, these systems introduce entirely new attack surfaces that traditional application security frameworks were never designed to handle.

The Certified Agentic Security Practitioner (CASP-AI) program equips security professionals with the knowledge and practical skills required to analyze, exploit, and secure agentic AI architectures used in modern enterprises.

Through hands-on labs and realistic attack scenarios, participants learn how attackers compromise agent workflows through prompt injection, memory poisoning, tool abuse, multi-agent trust failures, and AI supply-chain attacks.

Participants will build, attack, and defend real agentic systems using n8n workflows, LangChain agents, MCP integrations, and Python-based AI applications.

Course Overview

This practitioner-level course provides a deep understanding of the security risks introduced by autonomous AI systems and teaches professionals how to evaluate and secure agentic architectures.

Participants will:

Understand how agentic AI systems operate
Identify security vulnerabilities in AI agent workflows
Exploit real-world agentic attack scenarios
Implement defensive controls for secure deployment

The course culminates in a capstone lab simulating the compromise of a real enterprise AI agent system.

Who Should Attend

This course is designed for professionals responsible for building, securing, or assessing AI systems.

Recommended for:

Application Security Engineers
AI Security Engineers
Security Architects
Red Team Professionals
AI Engineers deploying agent-based systems
DevSecOps professionals working with AI infrastructure

No prior experience with agentic AI systems is required, although familiarity with LLMs or machine learning concepts is beneficial.

What You Will Learn

Participants completing this course will be able to:

Understand the architecture of agentic AI systems
Identify security risks introduced by memory, tools, and agent orchestration
Perform prompt injection and indirect injection attacks
Exploit memory poisoning and persistent agent manipulation
Manipulate tool execution and function calling interfaces
Exploit RAG pipelines and knowledge poisoning attacks
Identify trust failures in multi-agent architectures
Assess risks in MCP ecosystems and AI supply chains
Implement security controls to protect agentic systems

Hands-On Labs

This course is highly practical and includes 14 hands-on attack and defense labs designed to simulate real-world AI system vulnerabilities.

Participants will practice exploiting vulnerabilities including:

System prompt extraction
Prompt injection bypass techniques
Persistent memory poisoning
Dormant agent backdoors
Unauthorized tool invocation
Function call injection attacks
Command execution through AI tools
RAG knowledge poisoning
Denial-of-wallet attacks
Agent impersonation
Multi-agent collusion attacks
Cross-agent privilege escalation
MCP registry exploitation
Malicious tool injection

Each lab challenges participants to retrieve hidden secrets from vulnerable AI systems.

Course Modules

MODULE 1
Module 0: Agentic AI Security Fundamentals

Participants learn how autonomous agents operate and why they introduce new security challenges.

Hands-on exercise: Build your first autonomous agent. (1 Labs)

Topics include:

Agentic AI architecture
LLM reasoning engines
Agent memory systems
Tool interfaces and APIs
The agentic threat landscape
The “Lethal Trifecta” risk model
MODULE 2
Module 1: Prompt Attacks Foundations

Learn how attackers exploit the confusion between instructions and untrusted data in AI systems.

Hands-on lab: Extract secrets from a protected system prompt (3+ labs).

Topics include:

Prompt injection attacks
Direct vs indirect prompt injection
Output manipulation risks
Downstream exploitation (XSS, SSRF, command execution)
MODULE 3
Module 2: Memory Poisoning & Persistence

Understand how attackers implant long-term behavioral changes into AI agents.

Hands-on lab: Poison a memory system and trigger hidden behaviors (4+ labs).

Topics include:

Agent memory architectures
Persistent manipulation techniques
Trigger-based activation
Memory governance controls
MODULE 4
Module 3: Tool Abuse & Excessive Agency

Agents interacting with tools introduce serious security risks.

Hands-on lab: Exploit tool misuse through a compromised agent workflow (6+ labs).

Topics include:

Tool permission models
Function call manipulation
Tool selection coercion
Unsafe plugin architectures
MODULE 5
Module 4: RAG Attacks & Economic Abuse

Retrieval Augmented Generation systems introduce vulnerabilities across the data pipeline.

Hands-on lab: Poison a knowledge base and manipulate model responses (6+ labs).

Topics include:

Knowledge poisoning
Retrieval manipulation
Ranking attacks
Token flooding attacks
Denial-of-wallet exploitation
MODULE 6
Module 5: Multi-Agent Trust Failures

When agents collaborate, trust boundaries can break down.

Hands-on lab: Exploit weaknesses in multi-agent workflows (6+ labs).

Topics include:

Agent impersonation
Message spoofing
Colluding agents
Privilege escalation between agents
MODULE 7
Module 6: MCP & AI Supply Chain Attacks

Agent ecosystems relying on external tools introduce supply chain vulnerabilities.

Hands-on lab: Exploit a vulnerable MCP registry (4+ labs).

Topics include:

MCP architecture
Tool discovery risks
Registry poisoning
Malicious tool injection
MODULE 8
Module 7: Capstone: Autonomous System Compromise

Participants perform a full attack chain against a complex agentic system.

Students use automated AI security testing tools (2+ labs).

Attack phases include:

Reconnaissance
Initial compromise
Persistence
Privilege escalation
Secret extraction
MODULE 9
Module 8: Defending Agentic AI Systems

This module focuses on defensive architecture and operational security.

Participants implement controls to mitigate attacks performed earlier in the course.

Topics include:

Secure agent design patterns
Prompt boundary enforcement
Tool execution sandboxing
Identity and authorization for agents
Monitoring and anomaly detection
Cost anomaly detection
Runtime policy enforcement

Certification

Participants who complete the labs and pass the final assessment earn the certification:

Certified Agentic Security Practitioner

This certification demonstrates the ability to:

analyze agentic AI architectures
identify AI-specific attack surfaces
exploit vulnerabilities in AI agents
design secure agentic systems for enterprise environments

Lab Environment

Participants receive access to a sandboxed training environment including:

n8n automation workflows
LangChain agent frameworks
MCP servers and tool ecosystems
Python-based AI applications

All labs run in isolated containers, allowing participants to safely practice offensive AI security techniques.

Course Format

Delivery includes:

Participants will practice exploiting vulnerabilities including:

video lessons
guided labs
attack walkthroughs
capstone challenge

Total labs: 25+

Estimated completion time:

Course Content: 10 hours
Labs: 40+ hours

Why This Course Matters

AI agents are rapidly becoming the operating system of modern automation.

However, most organizations deploy agentic systems without understanding the security risks they introduce.

This course prepares security professionals to evaluate, exploit, and secure agentic AI systems before attackers do.