From Data Lake to Lakehouse

prerequisites
beginner Python • basics of Jupyter Notebook • beginner SQL • beginner Apache Spark • basic data processing in Python (with pandas or similar libraries) • basic distributed computing
skills learned
leverage Delta Lake capabilities including schema enforcement and time travel • use Delta Lake commands • interact with multiple layers of the three-layer architecture • use Apache Spark for data processing and interacting with the Lakehouse
Mahdi Karabiben
1 week · 4-6 hours per week · BEGINNER

pro $24.99 per month

  • access to all Manning books, MEAPs, liveVideos, liveProjects, and audiobooks!
  • choose one free eBook per month to keep
  • exclusive 50% discount on all purchases

lite $19.99 per month

  • access to all Manning books, including MEAPs!

team

5, 10, or 20+ seats for your team



Turn an existing data lake into a Lakehouse using Delta Lake, an open table format and the cornerstone of Databricks’ Lakehouse design. For data processing and for interacting with the Lakehouse, you’ll use Apache Spark. As you transform the existing tables into Delta tables, you’ll explore Delta Lake’s rich feature set, see firsthand how it handles potential problems, and appreciate the sophistication of the Lakehouse design.
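The conversion described above centers on two Delta Lake features named in the skills list: schema enforcement (writes that don’t match the table’s schema are rejected) and time travel (reading the table as it existed at an earlier version). As a conceptual sketch only — plain Python standing in for the real Delta Lake API, with a made-up `VersionedTable` class — the behavior looks roughly like this:

```python
from copy import deepcopy

class VersionedTable:
    """Toy model of a Delta-style table: a declared schema that is
    enforced on every write, plus a version history that lets you
    read older snapshots ("time travel")."""

    def __init__(self, schema):
        self.schema = schema      # e.g. {"id": int, "city": str}
        self.versions = [[]]      # version 0 is the empty table

    def append(self, rows):
        # Schema enforcement: reject the whole write if any row has
        # the wrong columns or types, before committing anything.
        for row in rows:
            if set(row) != set(self.schema):
                raise ValueError(f"column mismatch: {sorted(row)}")
            for col, dtype in self.schema.items():
                if not isinstance(row[col], dtype):
                    raise TypeError(f"{col!r} expects {dtype.__name__}")
        # Commit: each successful write produces a new immutable version.
        self.versions.append(deepcopy(self.versions[-1]) + list(rows))

    def read(self, version=None):
        # Time travel: read the latest snapshot, or any earlier one.
        return self.versions[-1 if version is None else version]

table = VersionedTable({"id": int, "city": str})
table.append([{"id": 1, "city": "Paris"}])
table.append([{"id": 2, "city": "Tunis"}])

print(len(table.read()))            # latest version: 2 rows
print(len(table.read(version=1)))   # version 1 had only 1 row
```

In the project itself you’d get this behavior from Delta Lake through Spark (for example, reading an earlier snapshot with `spark.read.format("delta").option("versionAsOf", 1)`); the class above only mirrors the idea, not the API.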

This project is designed for learning purposes and is not a complete, production-ready application or solution.

project author

Mahdi Karabiben

Mahdi is a senior data engineer at Zendesk. With four years of experience in data engineering, he has worked on multiple large-scale projects within the AdTech and financial sectors. He's a Cloudera-certified Apache Spark developer and works with Big Data technologies on a daily basis, designing and building data pipelines, data lakes, and data services that rely on petabytes of data. Thanks to his degree in software engineering (with a minor in big data), he is comfortable with a wide range of technologies and concepts. He additionally writes for major Medium publications (Towards Data Science, The Startup) and technology websites (TheNextWeb, Software Engineering Daily, freeCodeCamp).

prerequisites

This liveProject is for software engineers and data professionals interested in building big data processing skills, including processing large amounts of data and building cloud-based data lakes. To begin this liveProject, you'll need to be familiar with the following:


TOOLS
  • Beginner Python
  • Basics of Jupyter Notebook
  • Beginner SQL
  • Beginner Apache Spark
TECHNIQUES
  • Basic data processing in Python (with pandas or similar libraries)
  • Basic distributed computing
  • Basic SQL
  • Basic understanding of distributed data lakes

features

Self-paced
You choose the schedule and decide how much time to invest as you build your project.
Project roadmap
Each project is divided into several achievable steps.
Get Help
Within the liveProject platform, get help from other participants and our expert mentors.
Compare with others
For each step, compare your deliverable with the author's solution and those of other participants.
book resources
Get full access to select books for 90 days. Permanent access to excerpts from Manning products is also included, along with references to other resources.

choose your plan

team

  • monthly: $49.99 per month
  • annual: $499.99 per year (only $41.67 per month)
  • five seats for your team
  • access to all Manning books, MEAPs, liveVideos, liveProjects, and audiobooks!
  • monthly plan: choose another free product every time you renew
  • annual plan: choose twelve free products per year
  • exclusive 50% discount on all purchases
  • From Data Lake to Lakehouse project for free