In this liveProject, you’ll use DistilBERT, a distilled variant of the BERT Transformer, to detect and block spam emails in a dataset. You’ll frame the problem as binary classification: deciding whether each email is spam or legitimate. DistilBERT uses knowledge distillation to substantially reduce the size of the Transformer model, saving both time and computing resources. You’ll learn to use the Hugging Face library to load your dataset and to fine-tune a pre-trained model on your task with PyTorch Lightning, and you’ll also explore alternative training approaches that use newer APIs in the transformers library to fine-tune pre-trained DistilBERT models. Every part of an NLP pipeline is covered, from preprocessing your data to remove symbols and numbers, to model training and validation using the F1 score to assess the robustness of your pipeline.
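To give a flavor of the kind of pipeline described above, here is a minimal sketch in Python using the Hugging Face transformers and datasets libraries. It is illustrative only, not the project’s actual code: the public sms_spam dataset stands in for the project’s email data, and the model name, cleaning rule, and hyperparameters are assumptions.

```python
# A minimal spam-classification sketch: load data, clean text,
# fine-tune DistilBERT, and report an F1 score.
# Assumptions: the "sms_spam" Hub dataset (columns "sms" and "label")
# substitutes for the project's email data; hyperparameters are illustrative.
import re

import numpy as np
from datasets import load_dataset
from sklearn.metrics import f1_score
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Load a small public spam dataset and hold out 20% for validation.
dataset = load_dataset("sms_spam", split="train").train_test_split(test_size=0.2)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def preprocess(batch):
    # Strip symbols and digits, mirroring the preprocessing step in the description.
    cleaned = [re.sub(r"[^a-zA-Z\s]", " ", text).lower() for text in batch["sms"]]
    return tokenizer(cleaned, truncation=True, padding="max_length", max_length=128)

encoded = dataset.map(preprocess, batched=True)

# Binary classification head: spam vs. legitimate.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

def compute_metrics(eval_pred):
    # F1 score as the validation metric mentioned in the description.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"f1": f1_score(labels, preds)}

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="spam-distilbert",
        num_train_epochs=1,
        per_device_train_batch_size=16,
    ),
    train_dataset=encoded["train"],
    eval_dataset=encoded["test"],
    compute_metrics=compute_metrics,
)

trainer.train()
print(trainer.evaluate())  # includes "eval_f1" from compute_metrics
```

The same fine-tuning loop could instead be written by hand as a PyTorch Lightning LightningModule, which is the alternative approach the project compares against the Trainer API.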
This project is designed for learning purposes and is not a complete, production-ready application or solution.