
Modern Data Engineering with Apache Spark

A Hands-On Guide for Building Mission-Critical Streaming Applications

Scott Haines (Author)

Book | Softcover
585 pages
2022 | 1st ed.
Apress (Publisher)
978-1-4842-7451-4 (ISBN)
64.19 incl. VAT
Leverage Apache Spark within a modern data engineering ecosystem. This hands-on guide will teach you how to write fully functional applications, follow industry best practices, and learn the rationale behind these decisions. With Apache Spark as the foundation, you will follow a step-by-step journey beginning with the basics of data ingestion, processing, and transformation, and ending up with an entire local data platform running Apache Spark, Apache Zeppelin, Apache Kafka, Redis, MySQL, Minio (S3), and Apache Airflow.

Apache Spark applications solve a wide range of data problems, from traditional data loading and processing to rich SQL-based analysis, complex machine learning workloads, and even near real-time processing of streaming data. Spark fits well as a central foundation for any data engineering workload. This book will teach you to write interactive Spark applications using Apache Zeppelin notebooks, write and compile reusable applications and modules, and fully test both batch and streaming applications. You will also learn to containerize your applications using Docker and to run and deploy your Spark applications using a variety of tools such as Apache Airflow, Docker, and Kubernetes. Reading this book will empower you to take advantage of Apache Spark to optimize your data pipelines and teach you to craft modular and testable Spark applications. You will create and deploy mission-critical streaming Spark applications in a low-stress environment that paves the way for your own path to production.
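To give a flavor of the kind of application described above, here is a minimal sketch of a batch Spark job in Scala that reads a CSV file, aggregates it with the DataFrame API, and writes Parquet. The input path, column names, and output location are illustrative assumptions, not an example taken from the book.

// Minimal batch Spark job sketch (assumed data layout: orders.csv with
// customer_id, order_ts, and amount columns).
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersBatchJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-batch-job")
      .master("local[*]")            // local mode for experimentation
      .getOrCreate()

    // Ingest: read raw CSV with a header row (hypothetical input path)
    val orders = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("data/orders.csv")

    // Transform: aggregate daily revenue per customer with the DataFrame API
    val dailyRevenue = orders
      .groupBy(col("customer_id"), to_date(col("order_ts")).as("order_date"))
      .agg(sum("amount").as("revenue"))

    // Load: persist the result as Parquet, partitioned by date
    dailyRevenue.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("output/daily_revenue")

    spark.stop()
  }
}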



What You Will Learn

Simplify data transformation with Spark Pipelines and Spark SQL
Bridge data engineering with machine learning
Architect modular data pipeline applications
Build reusable application components and libraries
Containerize your Spark applications for consistency and reliability
Use Docker and Kubernetes to deploy your Spark applications
Speed up application experimentation using Apache Zeppelin and Docker
Understand serializable structured data and data contracts
Harness effective strategies for optimizing data in your data lakes
Build end-to-end Spark Structured Streaming applications using Redis and Apache Kafka (a minimal streaming sketch follows this list)
Embrace testing for your batch and streaming applications
Deploy and monitor your Spark applications
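As a taste of the streaming side referenced above, the following is a hedged sketch of a Spark Structured Streaming job that consumes an Apache Kafka topic. The broker address, topic name, checkpoint path, and console sink are illustrative assumptions for local experimentation, not the book's own example; a production job would typically write to Redis, a data lake, or another Kafka topic instead.

// Structured Streaming sketch: Kafka source, console sink.
// Requires the spark-sql-kafka-0-10 connector on the classpath.
import org.apache.spark.sql.SparkSession

object KafkaStreamJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-stream-job")
      .getOrCreate()

    // Source: subscribe to a Kafka topic (assumed broker and topic name)
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "events")
      .load()

    // Kafka delivers key/value as binary; cast the value to a string payload
    val payloads = events.selectExpr(
      "CAST(value AS STRING) AS payload",
      "timestamp")

    // Sink: console output for local testing, with a checkpoint for recovery
    val query = payloads.writeStream
      .format("console")
      .option("checkpointLocation", "checkpoints/kafka-stream-job")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}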


Who This Book Is For
Professional software engineers who want to take their current skills and apply them to new and exciting opportunities within the data ecosystem; practicing data engineers who are looking for a guiding light while traversing the many challenges of moving from batch to streaming modes; data architects who wish to provide clear and concise direction for how best to harness and use Apache Spark within their organization; and anyone interested in the ins and outs of becoming a modern data engineer in today's fast-paced and data-hungry world.

Scott Haines is a full stack engineer with a current focus on real-time, highly available, trustworthy analytics systems. He works at Twilio as a Principal Software Engineer on the Voice Insights team, where he helps drive Spark adoption, creates streaming pipeline architectures, and helps architect and build out a massive stream and batch processing platform. Prior to Twilio, Scott wrote the backend Java APIs for Yahoo Games, as well as the real-time game ranking and ratings engine (built on Storm) that provided personalized recommendations and page views for 10 million customers. He finished his tenure at Yahoo working for Flurry Analytics, where he wrote the alerts and notifications system for mobile devices.

Part I. The Fundamentals of Data Engineering with Spark
1. Introduction to Modern Data Engineering
2. Getting Started with Apache Spark
3. Working with Data
4. Transforming Data with Spark SQL and the DataFrame API
5. Bridging Spark SQL with JDBC
6. Data Discovery and the Spark SQL Catalog
7. Data Pipelines & Structured Spark Applications
Part II. The Streaming Pipeline Ecosystem
8. Workflow Orchestration with Apache Airflow
9. A Gentle Introduction to Stream Processing
10. Patterns for Writing Structured Streaming Applications
11. Apache Kafka & Spark Structured Streaming
12. Analytical Processing & Insights
Part III. Advanced Techniques
13. Advanced Analytics with Spark Stateful Structured Streaming
14. Deploying Mission Critical Spark Applications on Spark Standalone
15. Deploying Mission Critical Spark Applications on Kubernetes

Publication date
Additional info: 59 illustrations, black and white; XXV, 585 p.
Place of publication: Berkeley
Language: English
Dimensions: 178 x 254 mm
Subject areas: Computer Science > Databases > Data Warehouse / Data Mining
Mathematics / Computer Science > Computer Science > Programming Languages / Tools
Computer Science > Theory / Studies > Artificial Intelligence / Robotics
Mathematics / Computer Science > Mathematics
Keywords: Analytics • Apache Spark • Apache Zeppelin • Big Data • data architecture • data engineering • DataFrames • Data-Intensive Applications • Data Lineage • Data Modeling • data pipelines • Design Patterns for Data • ETL • machine learning • Modern Data Engineering • performance tuning • sparksql • Streaming data • Stream Processing with Apache Spark
ISBN-10 1-4842-7451-2 / 1484274512
ISBN-13 978-1-4842-7451-4 / 9781484274514
Condition: New