Master the Future of AI: AI Security Essentials
Released!
Earn your CAISE (Certified AI Security Essentials) certification and lead the secure deployment of artificial intelligence.
Certification
CAISE
Certified AI Security Essentials
Level
Beginner
Duration
~30 Hours
10 hours course + 20 hours labs + quizzes
Format
Self-paced, blended learning
videos + quizzes + optional labs
Overview
Why This Course?
Artificial Intelligence is reshaping every aspect of business, from automation and healthcare diagnostics to creative generation and decision-making. But as organizations adopt AI, they inherit new and complex risks: model manipulation, data leakage, Shadow AI, deepfakes, and unregulated agentic systems.
This foundational course introduces learners to the core principles of AI security: how AI works, where vulnerabilities arise, and how to build and govern AI systems responsibly. It bridges the technical, governance, and ethical layers of AI for engineers, security professionals, and policy leaders.
Course Modules
Module 1: Foundations of Artificial Intelligence
ML101 - Quiz
LLM101 - Quiz
Agentic101 - Quiz
Module 2: AI Threat Landscape
Quizzes
Module 3: OWASP ML Security Top 10 (2023)
Dataset Poisoning (Spam Filtering)
Federated Learning Attack (Byzantine Attack)
Dataset Poisoning (Rogue Reviewer)
Module 4: OWASP LLM Security Top 10 (2024)
LLM vulnerabilities (Direct Textual Prompt Injection)
LLM vulnerabilities (Indirect Prompt Injection)
Hidden Prompt Injection (Invisible HTML elements)
Module 5: OWASP Agentic AI Security (2024–2025)
RAG Shadow Cache
Agentic vulnerabilities (Excessive Agency – Mission 4)
Agentic vulnerabilities (Excessive Agency – Mission 5)
Module 6: Secure AI Development Lifecycle (SAIDLC)
ML: Secure Data Preprocessing
Module 7: Deepfakes and Disinformation
Deepfake detection challenge (1 challenge, 3 levels)
Module 8: Shadow AI and Governance
Detect unauthorized AI tools in a simulated network
Build an internal AI governance policy
Module 9: AI Red Teaming
LLM vulnerabilities (Improper Output Handling)
ML: Model Stealing
Agentic Prompt Injection in ReAct loop (Mission 7)
