Three-Project Series

BERT-Based Transformer Projects

prerequisites
intermediate Python • intermediate PyTorch • basics of natural language processing • basics of Google Colab
skills learned
loading and preprocessing a text data set • tokenizing data using pretrained tokenizers • loading and configuring pretrained ALBERT, RoBERTa, and DistilBERT models using Hugging Face
Rohan Khilnani
3 weeks · 6-8 hours per week average · INTERMEDIATE



In this series of liveProjects, you’ll use variants of the BERT Transformer to solve real-world natural language processing problems. The Transformer is a pretrained neural network architecture that is rapidly becoming the go-to choice for almost any NLP use case. As you work through the hands-on challenges in this series, you’ll gain real experience implementing state-of-the-art Transformer models with the Hugging Face library for tasks such as detecting hate speech, spotting fake news, and blocking spam. Each project is standalone, so you can pick the application most relevant to your career. For each, you’ll build an entire NLP pipeline, from preprocessing data all the way to validating model performance.
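To give a flavor of that pipeline, here is a minimal sketch (not taken from the projects themselves) of the Hugging Face pattern used throughout the series: load a pretrained checkpoint and its matching tokenizer, then run a batch of text through the model. The checkpoint name and label count are illustrative assumptions.

    # Minimal sketch: load a pretrained BERT-family model and tokenizer with
    # Hugging Face and classify a batch of text. Checkpoint is an assumption.
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    checkpoint = "distilbert-base-uncased"  # any BERT-family checkpoint works
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

    batch = tokenizer(["an example sentence"], padding=True, truncation=True, return_tensors="pt")
    outputs = model(**batch)
    print(outputs.logits.shape)  # torch.Size([1, 2]): one row of class scores per input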

These projects are designed for learning purposes and are not complete, production-ready applications or solutions.


here's what's included

Project 1 Hate Speech Detection
In this liveProject, you’ll use the ALBERT variant of the BERT Transformer to detect occurrences of hate speech in a data set. The ALBERT model uses fewer parameters than BERT, making it better suited to the unstructured, slang-heavy text of social media. You’ll load this powerful pretrained model using the Hugging Face library and fine-tune it for your specific needs with PyTorch Lightning. Because falsely flagging legitimate posts as hate speech is a serious problem, you’ll judge your model’s success by calculating and optimizing its precision score. Your final product will run as a notebook on a GPU in the Google Colab environment.
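As a rough preview of this setup, here is a hedged sketch of wrapping a pretrained ALBERT classifier in a PyTorch Lightning module and tracking precision, the metric this project optimizes. The checkpoint name, learning rate, and batch layout are illustrative assumptions, not the project’s official solution.

    # Sketch: a LightningModule around a pretrained ALBERT classifier that
    # logs validation precision. Assumes batches are dicts holding input_ids,
    # attention_mask, and labels tensors.
    import torch
    import pytorch_lightning as pl
    from torchmetrics.classification import BinaryPrecision
    from transformers import AlbertForSequenceClassification

    class HateSpeechClassifier(pl.LightningModule):
        def __init__(self, lr=2e-5):  # learning rate is an assumed default
            super().__init__()
            self.model = AlbertForSequenceClassification.from_pretrained(
                "albert-base-v2", num_labels=2
            )
            self.val_precision = BinaryPrecision()
            self.lr = lr

        def training_step(self, batch, batch_idx):
            outputs = self.model(**batch)  # loss is computed from batch["labels"]
            self.log("train_loss", outputs.loss)
            return outputs.loss

        def validation_step(self, batch, batch_idx):
            preds = self.model(**batch).logits.argmax(dim=-1)
            self.val_precision.update(preds, batch["labels"])
            self.log("val_precision", self.val_precision, on_epoch=True)

        def configure_optimizers(self):
            return torch.optim.AdamW(self.parameters(), lr=self.lr)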
Project 2 Fake News Detection
In this liveProject, you’ll use the RoBERTa variant of the BERT Transformer to detect occurrences of fake news in a data set. Political news can be tricky to validate for accuracy, as sources report the same events from different biased angles. RoBERTa uses different pretraining methods than the original BERT and has carefully optimized hyperparameters, so it tends to outperform its predecessor. You’ll start by loading the model with the Hugging Face library and fine-tuning it on your data with PyTorch Lightning. You’ll also train a custom tokenizer from scratch and use it to tokenize the data. A successful model will maximize true positives, catching as much fake news as possible, so evaluation will focus on your model achieving a high recall score.
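To give a sense of the custom-tokenizer step, here is a minimal sketch using Hugging Face’s tokenizers library; the corpus file name, vocabulary size, and special tokens are assumptions, not the project’s exact settings.

    # Sketch: train a byte-level BPE tokenizer from scratch on a raw text
    # corpus, then use it to tokenize a sentence. "news_corpus.txt" is a
    # hypothetical file with one training example per line.
    import os
    from tokenizers import ByteLevelBPETokenizer

    tokenizer = ByteLevelBPETokenizer()
    tokenizer.train(
        files=["news_corpus.txt"],
        vocab_size=30_000,
        min_frequency=2,
        special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],  # RoBERTa-style
    )
    os.makedirs("custom_tokenizer", exist_ok=True)
    tokenizer.save_model("custom_tokenizer")  # writes vocab.json and merges.txt

    encoded = tokenizer.encode("Breaking: markets rally on policy news")
    print(encoded.tokens)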
Project 3 Spam SMS Detection
In this liveProject, you’ll use the DistilBERT variant of the BERT Transformer to detect and block occurrences of spam in an SMS data set. You’ll use binary classification to determine whether a message is spam or legitimate. The DistilBERT model uses knowledge distillation to dramatically reduce the size of the Transformer model, saving time and resources. You’ll learn to use the Hugging Face library to load your data set and pretrained model, and to fine-tune the model to your task with PyTorch Lightning. You’ll also explore alternative training approaches that use novel APIs in the transformers library to fine-tune pretrained DistilBERT models. Every part of an NLP pipeline is covered, from preprocessing your data to remove symbols and numbers, to model training and validation using the F1 score to assess the robustness of your pipeline.
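As one plausible example of such an alternative approach (an assumption on our part, not necessarily the project’s exact method), the Trainer API in the transformers library can fine-tune DistilBERT without a hand-written training loop. The two inline messages below stand in for the real SMS data set.

    # Sketch: fine-tune DistilBERT with the transformers Trainer API and
    # report an F1 score. The toy two-message data set is a placeholder.
    import numpy as np
    import torch
    from sklearn.metrics import f1_score
    from transformers import (
        DistilBertForSequenceClassification,
        DistilBertTokenizerFast,
        Trainer,
        TrainingArguments,
    )

    tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
    texts = ["WIN a FREE prize!! Reply now", "Are we still on for lunch tomorrow?"]
    labels = [1, 0]  # 1 = spam, 0 = legitimate
    enc = tokenizer(texts, truncation=True, padding=True)

    class SmsDataset(torch.utils.data.Dataset):
        """Wraps tokenizer output and labels so the Trainer can index batches."""
        def __init__(self, encodings, labels):
            self.encodings, self.labels = encodings, labels
        def __len__(self):
            return len(self.labels)
        def __getitem__(self, idx):
            item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
            item["labels"] = torch.tensor(self.labels[idx])
            return item

    dataset = SmsDataset(enc, labels)
    model = DistilBertForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2
    )

    def compute_metrics(eval_pred):
        logits, gold = eval_pred
        preds = np.argmax(logits, axis=-1)
        return {"f1": f1_score(gold, preds)}

    args = TrainingArguments(output_dir="spam-distilbert", num_train_epochs=1)
    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=dataset,
        eval_dataset=dataset,  # reuse the toy set; use a held-out split in practice
        compute_metrics=compute_metrics,
    )
    trainer.train()
    print(trainer.evaluate())  # includes eval_loss and eval_f1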

book resources

When you start each of the projects in this series, you'll get full access to the following book for 90 days.


project author

Rohan Khilnani
Rohan Khilnani is a data scientist at Optum, United Health Group. He has filed two patents in the field of natural language processing and has also published a research paper on LSTMs with Attention at the COLING conference in 2018.

Prerequisites

This liveProject series is for intermediate Python and NLP practitioners who are interested in implementing pretrained BERT architectures and customizing them to solve real-world NLP problems. To begin these liveProjects, you’ll need to be familiar with the following:


TOOLS
  • Intermediate Python
  • Intermediate PyTorch
  • Basics of Google Colab
TECHNIQUES
  • Basics of machine learning
  • Basics of neural networks
  • Basics of natural language processing

features

Self-paced
You choose the schedule and decide how much time to invest as you build your project.
Project roadmap
Each project is divided into several achievable steps.
Get Help
Within the liveProject platform, you can get help from other participants and our expert mentors.
Compare with others
For each step, compare your deliverable to the solutions by the author and other participants.
book resources
Get full access to select books for 90 days. Permanent access to excerpts from Manning products is also included, as well as references to other resources.