Adversarial AI Attacks, Mitigations, and Defense Strategies

(AI-ATCK-DEF.AJ1)
Lessons
Lab
AI Tutor (Add-on)
Get A Free Trial

Skills You’ll Get

1

Preface

  • Who this course is for
  • What this course covers
  • To get the most out of this course
2

Getting Started with AI

  • Understanding AI and ML
  • Types of ML and the ML life cycle
  • Key algorithms in ML
  • Neural networks and deep learning
  • ML development tools
  • Summary
3

Building Our Adversarial Playground

  • Technical requirements
  • Setting up your development environment
  • Hands-on basic baseline ML
  • Developing our target AI service with CNNs
  • ML development at scale
  • Summary
4

Security and Adversarial AI

  • Technical requirements
  • Security fundamentals
  • Securing our adversarial playground
  • Securing code and artifacts
  • Bypassing security with adversarial AI
  • Summary
5

Poisoning Attacks

  • Basics of poisoning attacks
  • Staging a simple poisoning attack
  • Backdoor poisoning attacks
  • Hidden-trigger backdoor attacks
  • Clean-label attacks
  • Advanced poisoning attacks
  • Mitigations and defenses
  • Summary
6

Model Tampering with Trojan Horses and Model Reprogramming

  • Injecting backdoors using pickle serialization
  • Injecting Trojan horses with Keras Lambda layers
  • Trojan horses with custom layers
  • Neural payload injection
  • Attacking edge AI
  • Model hijacking
  • Summary
7

Supply Chain Attacks and Adversarial AI

  • Traditional supply chain risks and AI
  • AI supply chain risks
  • Data poisoning
  • AI/ML SBOMs
  • Summary
8

Evasion Attacks against Deployed AI

  • Fundamentals of evasion attacks
  • Perturbations and image evasion attack techniques
  • NLP evasion attacks with BERT using TextAttack
  • Universal Adversarial Perturbations (UAPs)
  • Black-box attacks with transferability
  • Defending against evasion attacks
  • Summary
9

Privacy Attacks – Stealing Models

  • Understanding privacy attacks
  • Stealing models with model extraction attacks
  • Defenses and mitigations
  • Summary
10

Privacy Attacks – Stealing Data

  • Understanding model inversion attacks
  • Types of model inversion attacks
  • Example model inversion attack
  • Understanding inference attacks
  • Attribute inference attacks
  • Example attribute inference attack
  • Membership inference attacks
  • Summary
11

Privacy-Preserving AI

  • Privacy-preserving ML and AI
  • Simple data anonymization
  • Advanced anonymization
  • Differential privacy (DP)
  • Federated learning (FL)
  • Split learning
  • Advanced encryption options for privacy-preserving ML
  • Advanced ML encryption techniques in practice
  • Applying privacy-preserving ML techniques
  • Summary
12

Generative AI – A New Frontier

  • A brief introduction to generative AI
  • Using GANs
  • Using pre-trained GANs
  • Summary
13

Weaponizing GANs for Deepfakes and Adversarial Attacks

  • Use of GANs for deepfakes and deepfake detection
  • Using GANs in cyberattacks and offensive security
  • Defenses and mitigations
  • Summary
14

LLM Foundations for Adversarial AI

  • A brief introduction to LLMs
  • Developing AI applications with LLMs
  • Hello LLM with Python
  • Hello LLM with LangChain
  • Bringing your own data
  • How LLMs change Adversarial AI
  • Summary
15

Adversarial Attacks with Prompts

  • Adversarial inputs and prompt injection
  • Direct prompt injection
  • Automated gradient-based prompt injection
  • Risks from bringing your own data
  • Indirect prompt injection
  • Data exfiltration with prompt injection
  • Privilege escalation with prompt injection
  • RCE with prompt injection
  • Defenses and mitigations
  • Summary
16

Poisoning Attacks and LLMs

  • Poisoning embeddings in RAG
  • Poisoning attacks on fine-tuning LLMs
  • Summary
17

Advanced Generative AI Scenarios

  • Supply-chain attacks in LLMs
  • Privacy attacks and LLMs
  • Model inversion and training data extraction attacks on LLMs
  • Inference attacks on LLMs
  • Model cloning with LLMs using a secondary model
  • Defenses and mitigations for privacy attacks
  • Summary
18

Secure by Design and Trustworthy AI

  • Secure by design AI
  • Building our threat library
  • Industry AI threat taxonomies
  • AI threat taxonomy mapping
  • Threat modeling for AI
  • Threat modelling in action
  • Enhanced FoodieAI threat model
  • Risk assessment and prioritization
  • Security design and implementation
  • Testing and verification
  • Shifting left – embedding security into the AI life cycle
  • Live operations
  • Beyond security – Trustworthy AI
  • Summary
19

AI Security with MLSecOps

  • The MLSecOps imperative
  • Toward an MLSecOps 2.0 framework
  • Building a primary MLSecOPs platform
  • MLSecOps in action
  • Integrating MLSecOps with LLMOps
  • Advanced MLSecOps with SBOMs
  • Summary
20

Maturing AI Security

  • Enterprise security AI challenges
  • Foundations of enterprise AI security
  • Protecting AI with enterprise security
  • Operational AI security
  • Iterative enterprise security
  • Summary

1

Building Our Adversarial Playground

2

Security and Adversarial AI

3

Poisoning Attacks

4

Model Tampering with Trojan Horses and Model Reprogramming

5

Supply Chain Attacks and Adversarial AI

6

Evasion Attacks against Deployed AI

7

Privacy Attacks – Stealing Models

8

Privacy Attacks – Stealing Data

9

Privacy-Preserving AI

10

Generative AI – A New Frontier

11

Weaponizing GANs for Deepfakes and Adversarial Attacks

12

LLM Foundations for Adversarial AI

13

Adversarial Attacks with Prompts

14

Poisoning Attacks and LLMs

15

Advanced Generative AI Scenarios

16

Secure by Design and Trustworthy AI

  • Understanding Secure Design, Threats, and Trustworthy AI
17

AI Security with MLSecOps

18

Maturing AI Security

  • Strengthening Enterprise AI Security Maturity

Any questions?
Check out the FAQs

Still have unanswered questions and need to get in touch?

Contact Us Now

Adversarial AI Attacks, Mitigations, and Defense Strategies

$167.99

Buy Now

Related Courses

All Courses
scroll to top