Step into the realm of machine learning, where adversarial attacks are a growing concern. In each of the liveProjects in this series, you'll play the role of either an attacker penetrating a classification model or a cybersecurity professional protecting the model from malicious attacks. Using Convolutional Neural Network (CNN) architecture, you'll build a deep learning model that classifies images. You'll generate untargeted and targeted adversarial ML attacks using the highly popular attack libraries CleverHans and Adversarial Robustness Toolbox (ART). Then, you'll implement mitigations based on adversarial training and defensive distillation strategies. Throughout this series, you'll gain firsthand experience of what goes into malicious ML attacks and of building models that defend against them.
Tackle a fundamental step in many AI applications: building a simple image classification model. Using Convolutional Neural Network (CNN) layers, you'll implement this deep learning model in Python, train it on a publicly accessible traffic sign dataset, and use it as the victim model for the adversarial machine learning attacks in the later projects.
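The liveProject walks you through the details; as a rough sketch of the kind of victim model involved, here is a minimal Keras CNN. The input shape, layer sizes, and 43-class output (matching the GTSRB traffic-sign dataset) are illustrative assumptions, not the project's exact specification:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 43  # assumed: the GTSRB traffic-sign dataset has 43 classes

# A small CNN: two conv/pool stages, then a dense classification head.
model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Assumes x_train (float32 images scaled to [0, 1]) and y_train
# (integer class labels) have been loaded from the traffic-sign dataset.
model.fit(x_train, y_train, epochs=10, validation_split=0.1)
```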
Play the villain! Your goal is to mislead an existing DL model into predicting the wrong class. First, you'll load your dataset, learn its structure, and examine a few random samples using OpenCV or Matplotlib. Using NumPy, you'll prepare your dataset for training. Then it's attack time: using the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), you'll generate malicious inputs that push the model toward any class other than the correct one. Finally, you'll enlist NumPy again to evaluate the success ratio of your attacks.
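Both attacks need only the model's gradients, not its training pipeline. As a hedged sketch (reusing the assumed `model`, `x_test`, and `y_test` from the classifier project, with an illustrative perturbation budget), the CleverHans TF2 API makes each attack a single call:

```python
import numpy as np
import tensorflow as tf
from cleverhans.tf2.attacks.fast_gradient_method import fast_gradient_method
from cleverhans.tf2.attacks.projected_gradient_descent import projected_gradient_descent

EPS = 0.05  # assumed L-infinity perturbation budget for [0, 1]-scaled images

x = tf.convert_to_tensor(x_test, dtype=tf.float32)

# FGSM: a single gradient-sign step of size EPS.
x_fgsm = fast_gradient_method(model, x, eps=EPS, norm=np.inf,
                              clip_min=0.0, clip_max=1.0)

# PGD: many small FGSM steps, each projected back into the EPS-ball.
x_pgd = projected_gradient_descent(model, x, eps=EPS, eps_iter=0.01,
                                   nb_iter=40, norm=np.inf,
                                   clip_min=0.0, clip_max=1.0)

# Untargeted success ratio: fraction of samples pushed to any wrong class.
for name, x_adv in [("FGSM", x_fgsm), ("PGD", x_pgd)]:
    preds = np.argmax(model.predict(x_adv.numpy()), axis=1)
    print(f"{name} success ratio: {np.mean(preds != y_test):.2%}")
```

Because PGD iterates where FGSM takes one step, PGD typically achieves a higher success ratio at the same perturbation budget.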
Mount a targeted attack! Your goal is to mislead an existing DL model into predicting a specific incorrect target class. First, you'll load your dataset, learn its structure, and examine a few random samples using OpenCV or Matplotlib. Next, you'll prepare your dataset for training using NumPy. Then you'll generate malicious input using three different attack methods from the highly popular CleverHans attack library. Finally, you'll enlist NumPy again to evaluate the success ratio of your attacks.
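A targeted attack uses the same machinery but maximizes the probability of a chosen class instead of minimizing the probability of the true one. A minimal sketch with CleverHans's FGSM, where the target class, epsilon, and variable names are illustrative assumptions:

```python
import numpy as np
import tensorflow as tf
from cleverhans.tf2.attacks.fast_gradient_method import fast_gradient_method

TARGET = 14   # hypothetical target class (14 is "Stop" in GTSRB)
EPS = 0.05    # assumed perturbation budget

x = tf.convert_to_tensor(x_test, dtype=tf.float32)
y_target = tf.fill([x.shape[0]], TARGET)  # every sample should be misread as TARGET

# targeted=True flips the gradient step: descend toward the target class.
x_adv = fast_gradient_method(model, x, eps=EPS, norm=np.inf,
                             clip_min=0.0, clip_max=1.0,
                             y=y_target, targeted=True)

# Success: the model predicts TARGET for samples that weren't TARGET to begin with.
preds = np.argmax(model.predict(x_adv.numpy()), axis=1)
mask = y_test != TARGET
print(f"targeted success ratio: {np.mean(preds[mask] == TARGET):.2%}")
```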
Protect your model by implementing adversarial training, the most straightforward safeguard against adversarial attacks. You'll load your dataset, learn its structure, and examine a few random samples using OpenCV or Matplotlib. Using NumPy, you'll prepare your dataset for training; then you'll use FGSM to generate malicious input for both untargeted and targeted attacks on a trained DL model. For each type of attack, you'll evaluate your model before and after applying adversarial training-based mitigation, gauging the success of your defense.
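Adversarial training augments the training set with adversarial examples so the model learns to classify them correctly. A minimal sketch under the same assumptions as the earlier snippets (`model`, `x_train`, `y_train`, and the FGSM call); ART also packages this pattern as `art.defences.trainer.AdversarialTrainer`:

```python
import numpy as np
import tensorflow as tf
from cleverhans.tf2.attacks.fast_gradient_method import fast_gradient_method

EPS = 0.05  # assumed budget, matching the attack you defend against

# Craft adversarial twins of the (assumed) training images.
x_adv = fast_gradient_method(model, tf.convert_to_tensor(x_train, tf.float32),
                             eps=EPS, norm=np.inf,
                             clip_min=0.0, clip_max=1.0).numpy()

# Retrain on a 50/50 mix of clean and adversarial samples.
x_mixed = np.concatenate([x_train, x_adv])
y_mixed = np.concatenate([y_train, y_train])
model.fit(x_mixed, y_mixed, epochs=5, batch_size=64, shuffle=True)

# Re-run the untargeted and targeted attacks and compare success ratios
# before and after retraining to gauge the defense.
```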
Make your model less vulnerable to exploitation with defensive distillation, a strategy in which a smaller, more efficient student model is trained to reproduce the softened output probabilities of a larger, more powerful teacher model, smoothing the decision surface that gradient-based attacks exploit. In this liveProject, you'll first train your student model without distillation, generate malicious input using FGSM, and evaluate the undefended model. Then you'll train a teacher model on the same dataset, train your student model on that training set using the teacher's distilled outputs, generate malicious input again, and evaluate the defended student model, comparing the results with and without distillation.
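A minimal sketch of the distillation step, following the defensive distillation recipe of Papernot et al.: both models emit raw logits, train against a temperature-scaled softmax, and the student fits the teacher's soft labels. The temperature, architecture, and one-hot label name are illustrative assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

T = 20.0  # distillation temperature (assumed; higher T gives softer labels)

def make_logits_model():
    """Same CNN shape as the victim model, but the last layer emits raw logits."""
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(43),  # logits: softmax is applied inside the loss
    ])

def distill_loss(y_true, logits):
    # Cross-entropy against the temperature-scaled softmax.
    return tf.keras.losses.categorical_crossentropy(y_true, tf.nn.softmax(logits / T))

# 1. Train the teacher at temperature T on one-hot labels (assumed y_train_onehot).
teacher = make_logits_model()
teacher.compile(optimizer="adam", loss=distill_loss)
teacher.fit(x_train, y_train_onehot, epochs=10, batch_size=64)

# 2. The teacher's softened predictions become the student's training labels.
soft_labels = tf.nn.softmax(teacher.predict(x_train) / T).numpy()

# 3. Train the student on the soft labels at the same temperature.
student = make_logits_model()
student.compile(optimizer="adam", loss=distill_loss)
student.fit(x_train, soft_labels, epochs=10, batch_size=64)

# 4. Deploy the student at temperature 1 (plain softmax over its logits),
#    then rerun FGSM to compare attack success with and without distillation.
```

Training at a high temperature and deploying at temperature 1 flattens the model's gradients around the training points, which is what blunts gradient-based attacks like FGSM.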
This liveProject series is for intermediate Python programmers who know the basics of data science. To begin this series, you’ll need to be familiar with the following:
TOOLS