This three-day hands-on masterclass starts with an overview of Artificial Intelligence (AI) algorithms and explores the threat model space of different AI techniques. It then provides an in-depth analysis of AI adversarial attacks, including poisoning attacks, evasion attacks, attacks against malware detection, differential-privacy attacks, and model-theft techniques. The course concludes with defense mechanisms that protect AI engines from adversarial attacks and reviews testing techniques for identifying the vulnerabilities of AI engines to adversarial learning.
—IDENTIFY the threat model space of different Artificial Intelligence (AI) techniques.
—UNDERSTAND and ANALYSE different AI adversarial attacks.
—LEARN defense mechanisms to protect AI engines from attacks.
—REVIEW different testing techniques to identify vulnerabilities of different AI engines.
—IDENTIFY critical AI usages within your organisation and map them onto your roadmap.
—EXECUTE security modeling for AI usages (threats, adversaries, attack vectors).
—GAIN a better understanding of how to incorporate AI technology securely.
Despite growing concerns about the security and privacy vulnerabilities of Artificial Intelligence (AI) systems, there is very little understanding of these vulnerabilities within the cybersecurity community. AI algorithms are developed for stationary environments, but intelligent and adaptive adversaries can carefully craft input data to reliably bypass AI-based cybersecurity systems. An adversary may therefore probe an AI system for potential vulnerabilities and build adversarial payloads that target the vulnerabilities it detects. Participate in this workshop to learn how you can protect your AI systems.
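To make the idea of "carefully crafted input data" concrete, the sketch below shows an FGSM-style evasion attack against a toy linear classifier. The weights, bias, and sample are illustrative assumptions, not material from the course: for a linear model, the gradient of the decision score with respect to the input is simply the weight vector, so perturbing each feature by a small step against the sign of its weight is enough to flip the classifier's decision.

```python
# Minimal sketch of an adversarial evasion attack (FGSM-style) on a toy
# linear classifier. All weights and the sample are hypothetical values
# chosen for illustration only.

W = [1.0, -2.0, 0.5]   # hypothetical model weights
B = 0.1                # hypothetical bias

def score(x):
    """Linear decision score: w . x + b."""
    return sum(wi * xi for wi, xi in zip(W, x)) + B

def predict(x):
    """Classify as 1 (e.g. "malicious") if the score is positive, else 0."""
    return int(score(x) > 0)

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def evade(x, eps):
    """For a linear model the gradient of the score w.r.t. the input is
    the weight vector itself, so stepping each feature against sign(w)
    lowers the score while changing no feature by more than eps."""
    return [xi - eps * sign(wi) for xi, wi in zip(x, W)]

sample = [0.5, -0.4, 0.2]          # detected as class 1 by the toy model
adversarial = evade(sample, 0.6)   # small per-feature perturbation
print(predict(sample), predict(adversarial))  # prints: 1 0
```

Real attacks apply the same principle to deep models by computing the gradient through the network (e.g. the Fast Gradient Sign Method), but the bounded per-feature perturbation that flips the decision is the essence of evasion.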