We regret that we will not be publishing this title.
Make your AI a trustworthy partner. Build machine learning systems that are explainable, robust, transparent, and optimized for fairness.
In Trust in Machine Learning you will learn:
- Understanding what “trustworthiness” means for machine learning
- Evaluating data for biases, privacy, and consent
- Handling adversarial attacks and machine learning security
- Making the machine learning pipeline interpretable and transparent
- Aligning machine learning with your values
- Tackling the negative uses of artificial intelligence
- Ensuring an inclusive development process
- Building AI that works for the social good
Machine learning that works in the lab can make false, unjust, and even unsafe decisions when it’s deployed in the real world.
Trust in Machine Learning is a practical guide to creating AI that you can rely on to handle high-stakes issues. You’ll learn how to build systems that are optimized for trust by reducing bias, handling distribution shift, and making your whole pipeline transparent and interpretable.
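One of the topics named above, handling distribution shift between training and operating data, can be illustrated with a minimal sketch. This example is not from the book; the metric choice (Population Stability Index) and all names and data below are illustrative assumptions.

```python
# Minimal sketch (not from the book): flagging distribution shift between
# training-time and operating-time data with the Population Stability Index.
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between two 1-D samples.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the bin fractions to avoid division by zero and log(0)
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)  # feature as seen during training
live = rng.normal(0.5, 1.0, 10_000)   # same feature in deployment, mean-shifted

print(psi(train, train[:5000]))  # small value: distributions match
print(psi(train, live))          # larger value: shift detected
```

In practice a check like this would run per feature on a schedule, triggering retraining or review when the index crosses a threshold.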
about the technology
Machine learning is influencing society’s big decisions: which stocks to buy, which employees to hire, and even judgments in criminal justice. Trustworthy machine learning helps ensure these critical decisions are safe and ethical, boosts user adoption, manages legal risk, and has real potential to make the world a better place.
about the book
Trust in Machine Learning reveals the four principles of trustworthiness that humans and machines both share: accuracy, reliability, openness, and selflessness. You’ll master practical techniques for achieving trustworthiness as you explore real-world use cases drawn from author Kush Varshney’s extensive career, including a peer-to-peer lender and an automated résumé screening service.
You’ll learn how to handle the inevitable distribution shift between training and operating data, measure and mitigate bias and unfairness, and defend against deliberate sabotage by adversarial attacks. Interpretability techniques demystify how your ML makes its decisions, and transparent reporting mechanisms ensure your whole pipeline is open and explainable. You’ll discover the dark side of AI, such as filter bubbles and malicious deepfakes, and learn how to prevent unintended consequences. Finally, you’ll see how machine learning can be used with real benevolence to empower nonprofits and do social good.
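The kind of bias measurement described above can be sketched with one widely used group-fairness metric, the disparate impact ratio. This is not code from the book; the toy predictions, group labels, and the 0.8 threshold (the common “80% rule”) are illustrative assumptions, loosely themed on the résumé-screening use case.

```python
# Minimal sketch (not from the book): measuring group unfairness with the
# disparate impact ratio -- the favorable-outcome rate for an unprivileged
# group divided by the rate for a privileged group.
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of favorable-outcome rates (unprivileged / privileged).
    Values near 1.0 are fairer; the common '80% rule' flags values below 0.8."""
    unpriv_rate = y_pred[group == 0].mean()
    priv_rate = y_pred[group == 1].mean()
    return unpriv_rate / priv_rate

# Toy predictions from a hypothetical resume screener
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])  # 1 = advance to interview
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # 0 = unprivileged, 1 = privileged

print(disparate_impact(y_pred, group))  # 0.25 / 0.75 = 1/3, well below 0.8
```

A score this far below the threshold would typically trigger a mitigation step, such as reweighting the training data or adjusting decision thresholds per group.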
about the reader
For experienced data scientists and machine learning engineers.
about the author
Kush R. Varshney works for IBM Research, where he leads the machine learning group in the Foundations of Trustworthy AI Department. He is the founding co-director of the IBM Science for Social Good initiative. He has received the 2013 Gerstner Award for Client Excellence and an Extraordinary IBM Research Technical Accomplishment, and was a runner-up for the Harvard Belfer Center Tech Spotlight. He conducts academic research on the theory and methods of trustworthy machine learning, which has been recognized with numerous best-paper awards.