Simplify, streamline, and scale your data operations with data pipelines built on Apache Airflow.
Apache Airflow provides a batteries-included platform for designing, implementing, and monitoring data pipelines. Building pipelines on Airflow eliminates the need for patchwork stacks and homegrown processes, and adds security and consistency to your workflows. Now in its second edition, Data Pipelines with Apache Airflow teaches you to harness this powerful platform to simplify and automate your data pipelines, reduce operational overhead, and seamlessly integrate all the technologies in your stack.
In Data Pipelines with Apache Airflow, Second Edition, you'll learn how to:
- Master the core concepts of Airflow architecture and workflow design
- Schedule data pipelines using the Dataset API and timetables, including complex irregular schedules (see the sketch after this list)
- Develop custom Airflow components for your specific needs
- Implement comprehensive testing strategies for your pipelines
- Apply industry best practices for building and maintaining Airflow workflows
- Deploy and operate Airflow in production environments
- Orchestrate workflows in container-native environments
- Build and deploy machine learning and generative AI models using Airflow
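To make the scheduling bullet concrete, here is a minimal sketch of dataset-driven scheduling written with the TaskFlow API (Airflow 2.4+). The dataset URI, DAG names, and task bodies are hypothetical placeholders for illustration, not examples from the book:

```python
import pendulum

from airflow.datasets import Dataset
from airflow.decorators import dag, task

# Hypothetical dataset URI, used only for illustration.
sales = Dataset("s3://example-bucket/sales/daily.parquet")


@dag(
    schedule="@daily",
    start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
    catchup=False,
)
def produce_sales():
    @task(outlets=[sales])
    def write_sales():
        # A real pipeline would write the file the Dataset points to.
        print("wrote s3://example-bucket/sales/daily.parquet")

    write_sales()


@dag(
    # Dataset-aware scheduling: run whenever a producer task updates `sales`.
    schedule=[sales],
    start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
    catchup=False,
)
def consume_sales():
    @task
    def load_sales():
        print("loading the freshly updated sales data")

    load_sales()


produce_sales()
consume_sales()
```

Timetables cover the time-based side of the same problem; the book shows how to write custom ones for schedules that a cron expression can't express.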
Data Pipelines with Apache Airflow has empowered thousands of data engineers to build more successful data platforms. This new second edition has been fully revised to cover the latest features of Apache Airflow, including the TaskFlow API, deferrable operators, and large language model integration. Filled with real-world scenarios and examples, it carefully guides you from Airflow novice to expert.
about the book
Data Pipelines with Apache Airflow, Second Edition teaches you how to build and maintain effective data pipelines. You'll master every aspect of directed acyclic graphs (DAGs), the power behind Airflow, and learn to customize them for your pipeline's specific needs. Part reference and part tutorial, the book illustrates each technique with engaging hands-on examples, from training machine learning models for generative AI to optimizing delivery routes. You'll explore common Airflow usage patterns, including aggregating multiple data sources and connecting to data lakes, while discovering exciting new features such as dynamic scheduling, the TaskFlow API, and Kubernetes deployments.
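As a flavor of the container-native workflows mentioned above, the sketch below runs a task in its own Kubernetes pod. The image, namespace, and DAG name are assumptions for illustration, and the operator's import path varies across versions of the apache-airflow-providers-cncf-kubernetes package:

```python
import pendulum

from airflow.decorators import dag
# Import path from recent releases of the cncf.kubernetes provider;
# older versions expose the operator under operators.kubernetes_pod.
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator


@dag(
    schedule=None,
    start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
    catchup=False,
)
def container_native_example():
    # Run an arbitrary container image as an Airflow task; the image and
    # namespace below are placeholders, not values from the book.
    KubernetesPodOperator(
        task_id="train_model",
        name="train-model",
        namespace="airflow",
        image="python:3.11-slim",
        cmds=["python", "-c"],
        arguments=["print('training step runs inside its own pod')"],
        get_logs=True,
    )


container_native_example()
```

Isolating each task in a pod keeps dependencies per-task rather than per-cluster, which is one reason container-native deployments get their own coverage in the book.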
about the reader
For DevOps engineers, data engineers, machine learning engineers, and sysadmins with intermediate Python skills.
about the authors
Julian de Ruiter is a Data + AI engineering lead at Xebia Data, with a background in computer and life sciences and a PhD in computational cancer biology. As a consultant at Xebia Data, he enjoys helping clients design and build AI solutions and platforms, as well as the teams that drive them. Through this work, he has gained extensive experience deploying and applying Apache Airflow in production across diverse environments.
Ismael Cabral is a Machine Learning Engineer and Airflow trainer with experience spanning Europe, the US, Mexico, and South America, where he has worked with market-leading companies. He has vast experience implementing data pipelines and deploying machine learning models in production.
Kris Geusebroek is a data-engineering consultant with extensive hands-on experience with Apache Airflow at several clients. He is the maintainer of Whirl, an open source repository for local testing with Airflow, where he actively adds new examples based on new functionality and technologies that integrate with Airflow.
Daniel van der Ende is a Data Engineer who started using Apache Airflow in 2016. Since then, he has worked in many different Airflow environments, both on-premises and in the cloud. He has actively contributed to the Airflow project itself, as well as to related projects such as Astronomer-Cosmos.
Bas Harenslak is a Staff Architect at Astronomer, where he helps customers develop mission-critical data pipelines at large scale using Apache Airflow and the Astro platform. With a background in software engineering and computer science, he enjoys working on software and data as if they are challenging puzzles. He favours working on open source software, is a committer on the Apache Airflow project, and co-author of the first edition of Data Pipelines with Apache Airflow.