Unexpected bias in machine learning models reduces accuracy, produces negative real-world consequences, and, in the worst cases, entrenches existing inequalities for decades. Audits that can detect and mitigate this unwanted bias are an essential part of building and maintaining reliable machine learning systems.
In this liveProject, you’ll take on the role of a data scientist running a bias audit for the World Health Organization’s healthcare models. You’ll get to grips with state-of-the-art tools and best practices for bias detection and mitigation, including interpretability methods, the AIF360 package, and options for imputing membership of a protected class. You’ll apply these tools and practices in the context of a broader universe of bias, where geopolitical awareness and a deep understanding of the history and use of your particular data are key.
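To give a flavor of the detection step, here is a minimal sketch of measuring group fairness with AIF360’s dataset and metric classes. The DataFrame, column names, and group encodings below are illustrative assumptions, not the project’s actual WHO data.

```python
# A minimal sketch of bias detection with AIF360. The columns "sex"
# (1 = privileged group) and "received_treatment" (1 = favorable
# outcome) are hypothetical placeholders for this illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data standing in for a real healthcare dataset.
df = pd.DataFrame({
    "sex":                [1, 1, 1, 0, 0, 0, 1, 0],
    "received_treatment": [1, 1, 0, 1, 0, 0, 1, 0],
})

# Wrap the DataFrame in AIF360's dataset abstraction.
dataset = BinaryLabelDataset(
    df=df,
    label_names=["received_treatment"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

# Compare favorable-outcome rates across the two groups.
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact: ratio of favorable-outcome rates (1.0 = parity);
# values below ~0.8 are a common red flag (the "four-fifths" rule).
print("Disparate impact:", metric.disparate_impact())
# Statistical parity difference: difference in favorable-outcome rates.
print("Statistical parity difference:", metric.statistical_parity_difference())
```

Metrics like these are only a starting point; interpreting them responsibly requires the contextual and historical understanding of the data described above.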
This project is designed for learning purposes and is not a complete, production-ready application or solution.