AI Deception: Security and Privacy of AI Agents

This three-day hands-on masterclass starts with an overview of Artificial Intelligence (AI) algorithms and explores the threat model space of different AI techniques. It then provides an in-depth analysis of AI adversarial attacks, including adversarial poisoning attacks, adversarial evasion attacks, adversarial attacks against malware detection, and differential privacy and model theft techniques. The course concludes with defense mechanisms that protect AI engines from adversarial attacks and reviews testing techniques for identifying the vulnerabilities of AI engines to adversarial learning.

IDENTIFY the threat model space of different Artificial Intelligence (AI) techniques.

UNDERSTAND and ANALYSE different AI adversarial attacks.

LEARN defense mechanisms to protect AI engines from attacks.

REVIEW different testing techniques to identify vulnerabilities of different AI engines.

IDENTIFY critical AI usages within your organisation and in your roadmap.

EXECUTE security modelling for AI usages (threats, adversaries, attack vectors).

GAIN a better understanding of how to securely incorporate AI technology.

  • CEOs
  • CTOs
  • COOs
  • Chief Data Officers
  • Chief Information Officers
  • Chief Innovation Officers
  • Chief Digital Officers
  • CxOs (Analytics, Data, Information, Innovation, Technology)
  • Algorithm Engineers/Scientists
  • Scientists/Engineers
  • Developers
  • Software Engineers
  • VPs, Directors, Heads of:
    • AI
    • Machine Learning
    • Data Science
    • Technology Innovation
    • Applied AI

Despite growing concerns about the security and privacy vulnerabilities of Artificial Intelligence (AI) systems, these vulnerabilities remain poorly understood within the cybersecurity community. AI algorithms are developed for stationary environments, yet intelligent and adaptive adversaries can carefully craft input data to bypass AI-based cybersecurity systems. An adversary may therefore probe an AI system for potential vulnerabilities and build adversarial payloads that target the vulnerabilities it detects. Participate in this workshop to learn how you can protect your AI systems.

Day 1

Overview of Machine Learning Tasks

  • Supervised Learning
  • Unsupervised Learning
  • Reinforcement Learning
  • Deep Learning
  • Real-world applications of Machine Learning
  • Lab 1: Deep Learning for face recognition
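
The labs assume working Python. As a reference point, here is a minimal supervised-learning sketch using scikit-learn (assumed available); Lab 1 trains a deep network for face recognition, but the fit/predict/score workflow is the same one shown here on a toy digits dataset.

```python
# Minimal supervised-learning sketch (scikit-learn assumed installed).
# Lab 1 trains a deep network for face recognition; the same
# fit/predict/score workflow is shown here on a small digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)            # 8x8 digit images, flattened
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)                      # supervised training
print("test accuracy:", clf.score(X_test, y_test))
```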

Machine Learning in Cybersecurity

  • Machine Learning empirical process
  • Theoretical model of Machine Learning
  • Application of Machine Learning in Cybersecurity
  • Lab 2: Support Vector Machine (SVM) for IoT malware threat hunting
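
To fix ideas before Lab 2, here is a sketch of the same pattern on synthetic data. The ten-dimensional "IoT malware features" are hypothetical stand-ins for the statistics (opcode counts, traffic features, etc.) a real pipeline would extract per sample; only the pipeline shape matters.

```python
# SVM malware-classification sketch. The feature vectors are synthetic
# stand-ins for real IoT malware features.
import numpy as np
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
benign = rng.normal(loc=0.0, scale=1.0, size=(500, 10))
malware = rng.normal(loc=1.5, scale=1.0, size=(500, 10))   # shifted cluster
X = np.vstack([benign, malware])
y = np.array([0] * 500 + [1] * 500)            # 0 = benign, 1 = malware

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
svm = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print(classification_report(y_te, svm.predict(X_te)))
```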

The Machine Learning Threat Model

  • The ML attack surface
  • Adversarial capabilities
  • Adversarial objectives
  • ML threat modelling
  • Lab 3: Identifying the attack surface and threat modelling of a Deep Learning agent for traffic sign detection
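
One way to make the surface/capabilities/objectives breakdown concrete is to record it as structured data. The sketch below is purely illustrative; the field values are examples for the traffic-sign scenario, not the lab's taxonomy.

```python
# Illustrative only: an ML threat model recorded as structured data,
# following the attack-surface / capabilities / objectives breakdown above.
from dataclasses import dataclass, field

@dataclass
class MLThreatModel:
    system: str
    attack_surface: list = field(default_factory=list)  # where inputs enter
    capabilities: list = field(default_factory=list)    # what the attacker can do
    objectives: list = field(default_factory=list)      # what the attacker wants

tm = MLThreatModel(
    system="traffic-sign detector",
    attack_surface=["camera input", "training-data supply chain", "model file"],
    capabilities=["perturb physical signs", "query the model", "poison labels"],
    objectives=["misclassify stop signs", "degrade overall accuracy"],
)
for entry_point in tm.attack_surface:
    print(f"{tm.system}: entry point -> {entry_point}")
```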

Adversarial Machine Learning

  • ML training in adversarial settings
  • ML inference in adversarial settings
  • ML differential privacy and model theft
  • Lab 4: Data exfiltration from a trained Decision Tree model
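
The intuition behind Lab 4 can be seen in a few lines of scikit-learn: a fully grown decision tree memorises its training set, and that memorisation leaks. The simple "gap" heuristic below guesses that correctly classified points were training members; it is a toy baseline, not the lab's exact exfiltration procedure.

```python
# Membership-leakage sketch: an unpruned decision tree memorises its
# training data, so the train/test accuracy gap reveals membership.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_in, y_in)  # no pruning

print("accuracy on training members:", tree.score(X_in, y_in))    # ~1.00
print("accuracy on unseen points   :", tree.score(X_out, y_out))  # much lower
# The gap between the two is exactly what a membership-inference
# adversary exploits to decide who was in the training data.
```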

Day 2

Penetration Testing of ML Agents

  • Penetration testing methodology of ML engines
  • White-box testing
  • Black-box testing
  • Payload injection attacks
  • Lab 5: Black-box testing of the Inception deep learning model for image recognition and evaluation of the adversarial attack
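
A sketch of the black-box setting: the tester may only call predict(), with no access to weights or gradients. Inception itself is far too heavy for a snippet, so a small local classifier stands in for the remote oracle; the query-only loop is the point.

```python
# Black-box probing sketch: only predict() is available (no gradients).
# A small scikit-learn model stands in for the remote Inception oracle.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
oracle = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                       random_state=0).fit(X, y)

x = X[0].copy()
label = oracle.predict([x])[0]                 # the label we try to flip
rng = np.random.default_rng(1)

for step in range(1, 1001):
    scale = 0.1 * step                         # grow the noise until it flips
    candidate = np.clip(x + rng.normal(0, scale, x.shape), 0, 16)
    if oracle.predict([candidate])[0] != label:
        print(f"label flipped after {step} queries (noise scale {scale:.1f})")
        break
```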

Adversarial Poisoning Attacks

  • Adversarial poisoning attacks methodology
  • Poisoning attacks techniques
  • Lab 6: Poisoning attacks against a text-recognition Deep Learning agent
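
Label flipping is the simplest poisoning technique and shows the mechanics: corrupt a fraction of the training labels and the victim's test accuracy typically drops. Toy tabular data stands in here for Lab 6's text-recognition agent.

```python
# Label-flipping poisoning sketch: flip 40% of the training labels and
# compare the clean and poisoned models on the same untouched test set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
y_poisoned = y_tr.copy()
idx = rng.choice(len(y_tr), size=int(0.4 * len(y_tr)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]          # the attacker's label flips

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
print("clean model accuracy   :", clean.score(X_te, y_te))
print("poisoned model accuracy:", poisoned.score(X_te, y_te))
```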

Adversarial Evasion Attacks

  • Adversarial evasion attacks methodology
  • Evasion attacks techniques
  • Lab 7: Evading an object recognition Deep Learning agent
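
The classic evasion technique is the fast gradient sign method (FGSM): step each input feature in the direction that increases the model's loss. On a linear model the input gradient has the closed form (p - y) * w, which keeps the sketch dependency-free; Lab 7 applies the same idea to a deep model via backpropagation.

```python
# FGSM-style evasion sketch against a logistic-regression victim, where
# the gradient of the loss w.r.t. the input is simply (p - y) * w.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

idx = int(np.where(clf.predict(X) == y)[0][0])  # a correctly classified point
x, label = X[idx], y[idx]
p = 1.0 / (1.0 + np.exp(-(w @ x + b)))          # predicted P(class 1)
grad = (p - label) * w                          # dLoss/dx for the true label

for eps in (0.1, 0.25, 0.5, 1.0, 2.0):
    x_adv = x + eps * np.sign(grad)             # FGSM: step up the loss
    if clf.predict([x_adv])[0] != label:
        print(f"prediction flipped at eps = {eps}")
        break
else:
    print("no flip within the tested budgets")
```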

Adversarial Attacks on Malware Detection Systems

  • Machine learning for malware analysis
  • Poisoning and evasion attacks against ML-based malware detection systems
  • Lab 8: Evading a deep convolutional neural network agent for malware detection
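
Malware evasion has a domain constraint worth previewing: with binary "has this API/string" features, an attacker can usually only add content (0 to 1), never remove it. The mimicry-style sketch below injects the features a model associates most strongly with benign files; a linear model stands in for Lab 8's convolutional network, and the data is synthetic.

```python
# Mimicry-style evasion sketch for an ML malware detector under an
# additive-only constraint: the attacker flips benign-weighted features
# from 0 to 1 until the sample is classified benign.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 1000, 30
X = rng.integers(0, 2, size=(n, d)).astype(float)
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(int)               # 1 = malicious (synthetic rule)

clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[y == 1][0].copy()                        # one "malicious" sample
order = np.argsort(clf.coef_[0])               # most benign-weighted first
for f in order:
    if clf.coef_[0][f] >= 0 or clf.predict([x])[0] == 0:
        break
    x[f] = 1.0                                 # inject a benign-looking feature
print("detector evaded:", clf.predict([x])[0] == 0)
```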

Day 3

ML Differential Privacy and Model Theft

  • Foundations of differential privacy
  • Data inference and model theft attacks
  • Lab 9: Stealing credit card data from an ML agent
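
For the foundations part, the textbook construction is the Laplace mechanism: answer a query with noise scaled to sensitivity/epsilon. A count query has sensitivity 1 (adding or removing one record changes it by at most 1). The column of ages below is a hypothetical stand-in for a sensitive dataset.

```python
# The Laplace mechanism: an epsilon-differentially-private counting query.
# Noise scale is sensitivity / epsilon, and a count has sensitivity 1.
import numpy as np

def dp_count(values, predicate, epsilon, rng):
    true_count = int(sum(predicate(v) for v in values))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(0)
ages = rng.integers(18, 90, size=10_000)       # hypothetical sensitive column
for eps in (0.01, 0.1, 1.0):
    noisy = dp_count(ages, lambda a: a > 65, epsilon=eps, rng=rng)
    print(f"epsilon={eps:<5} noisy count of over-65s: {noisy:10.1f}")
print("true count:", int((ages > 65).sum()))
```

Smaller epsilon means stronger privacy and noisier answers; the loop makes that trade-off visible.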

Adversarial Attacks Detection

  • Adversarial attack detection methodology
  • Anomaly detection in adversarial settings
  • Lab 10: Feature-based anomaly detection on deep neural networks
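
The feature-based idea can be sketched without a deep network: fit a detector on the feature representations of clean inputs, then flag inputs whose features fall outside that distribution. IsolationForest stands in for the lab's detector, and the synthetic "features" play the role of hidden-layer activations.

```python
# Feature-based anomaly detection sketch: train a detector on clean
# feature vectors, then flag out-of-distribution (attacked) ones.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
clean_features = rng.normal(0.0, 1.0, size=(1000, 16))  # e.g. layer activations
adv_features = rng.normal(3.0, 1.0, size=(50, 16))      # shifted by an attack

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(clean_features)

print("flagged among clean inputs      :",
      (detector.predict(clean_features) == -1).mean())
print("flagged among adversarial inputs:",
      (detector.predict(adv_features) == -1).mean())
```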

Adversarial Attacks Prevention

  • Adversarial attack prevention methodology
  • Prevention techniques for adversarial attacks
  • Evaluating the robustness and confidence of ML models
  • Input sanitization of ML models
  • Lab 11: Input sanitization of an ML image recognition system
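
One well-known sanitization technique, in the spirit of "feature squeezing", is to reduce an image's colour depth before classification, destroying the low-amplitude structure many adversarial perturbations rely on. The sketch assumes pixel values scaled to [0, 1]; the image and perturbation are random stand-ins.

```python
# Input-sanitization sketch: bit-depth squeezing removes small-amplitude
# adversarial perturbations at most pixels.
import numpy as np

def squeeze_bit_depth(image, bits=4):
    """Quantise pixel values in [0, 1] to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(image * levels) / levels

rng = np.random.default_rng(0)
image = rng.random((28, 28))                   # stand-in for a real input
adversarial = np.clip(image + rng.normal(0, 0.01, image.shape), 0, 1)

changed_raw = (adversarial != image).mean()
changed_sq = (squeeze_bit_depth(adversarial) != squeeze_bit_depth(image)).mean()
print(f"pixels perturbed before squeezing: {changed_raw:.0%}")
print(f"pixels still perturbed afterwards: {changed_sq:.0%}")
```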

Adversarial Attacks Defense-in-Depth Framework

  • An adversarial learning defense reference model
  • Preserving privacy of ML models
  • The ML Defense-in-Depth (ML-DiD) framework
  • Lab 12: Architecting a defense-in-depth model in an AI-centered enterprise
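
The defense-in-depth idea is to chain independent layers so that no single failure defeats the system. The sketch below mirrors a sanitise, detect, predict, verify flow; the layer names, thresholds, and toy model are illustrative, not the ML-DiD framework's actual components.

```python
# Defense-in-depth sketch: independent layers in front of the model.
import numpy as np

def sanitise(x):
    return np.round(x * 15) / 15                    # layer 1: input squeezing

def looks_anomalous(x, mean, std, z_max=4.0):
    return bool(np.any(np.abs((x - mean) / std) > z_max))  # layer 2: detection

def defended_predict(predict_proba, x, mean, std, min_conf=0.8):
    x = sanitise(x)
    if looks_anomalous(x, mean, std):
        return "rejected: anomalous input"
    probs = predict_proba(x)
    if probs.max() < min_conf:                      # layer 3: abstain when unsure
        return "rejected: low confidence"
    return f"class {int(probs.argmax())}"

def toy_predict_proba(x):                           # stand-in two-class model
    d0, d1 = np.linalg.norm(x - 0.2), np.linalg.norm(x - 0.8)
    p0 = np.exp(-d0) / (np.exp(-d0) + np.exp(-d1))
    return np.array([p0, 1.0 - p0])

mean, std = 0.5, 0.3
print(defended_predict(toy_predict_proba, np.full(8, 0.2), mean, std))  # class 0
print(defended_predict(toy_predict_proba, np.full(8, 5.0), mean, std))  # rejected
```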
November 27 to November 29, 2019, Singapore