Make your model less vulnerable to exploitation with defensive distillation, an adversarial training strategy in which a teacher model (larger, more accurate) first learns the critical features of the training data, and its knowledge is then distilled into a student model (smaller, more efficient) to improve the student's robustness. In this liveProject, you'll use a pre-trained model to train your student model without distillation, generate malicious input with the fast gradient sign method (FGSM), and evaluate the undefended model. Then you'll train a teacher model on the training dataset, train your student model on that same training set with the teacher model using distillation, generate malicious input again, and evaluate the defended student model, comparing the results with and without distillation.
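To make the two core techniques concrete, here is a minimal sketch of FGSM attack generation and the distillation loss. It is written in PyTorch as an assumption (the project may use a different framework), and the `epsilon` and `temperature` values are illustrative defaults, not taken from the project.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    # FGSM: nudge each input in the direction (the sign of the loss
    # gradient) that most increases the model's loss, bounded by epsilon.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def distillation_loss(student_logits, teacher_logits, temperature=20.0):
    # Defensive distillation: the student learns to match the teacher's
    # softened (high-temperature) class probabilities instead of hard labels.
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, soft_targets,
                    reduction="batchmean") * temperature ** 2
```

During distillation training, the teacher's logits would typically be computed under `torch.no_grad()` so only the student's parameters are updated; evaluating both the undefended and defended students on the same FGSM inputs keeps the final comparison fair.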
This liveProject is for intermediate Python programmers who know the basics of data science. To begin, you'll need to be familiar with the following:
TOOLS