Data Analytics with Hadoop
O'Reilly Media (publisher)
978-1-4919-1370-3 (ISBN)
This practical guide shows you why the Hadoop ecosystem is perfect for the job. Instead of the deployment, operations, or software development usually associated with distributed computing, you'll focus on the particular analyses you can build, the data warehousing techniques that Hadoop provides, and the higher-order data workflows this framework can produce.
Data scientists and analysts will learn how to apply a wide range of techniques, from writing MapReduce and Spark applications with Python to using advanced modeling and data management with Spark MLlib, Hive, and HBase. You'll also learn about the analytical processes and data systems available to build and empower data products that can handle, and actually require, huge amounts of data.
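To give a flavor of the Python-on-Hadoop approach the book describes, here is a minimal word-count sketch in the Hadoop Streaming style (a standalone illustration, not code from the book): the mapper emits (word, 1) pairs, Hadoop sorts them by key, and the reducer sums each key's group. The shuffle/sort step that the cluster performs between the two stages is simulated here with `sorted()`.

```python
from itertools import groupby
from operator import itemgetter

def mapper(lines):
    """Map step: emit a (word, 1) pair for every token, as a streaming
    mapper would print 'word\t1' lines to stdout."""
    for line in lines:
        for word in line.strip().split():
            yield word.lower(), 1

def reducer(pairs):
    """Reduce step: pairs arrive sorted by key after Hadoop's shuffle,
    so consecutive pairs with the same word form one group to sum."""
    for word, group in groupby(pairs, key=itemgetter(0)):
        yield word, sum(n for _, n in group)

# Simulate the shuffle/sort that Hadoop Streaming performs between stages.
sample = ["Hadoop streaming pipes data", "data flows through Hadoop"]
counts = dict(reducer(sorted(mapper(sample))))
```

On a real cluster the mapper and reducer would be separate scripts reading stdin and writing tab-separated lines, wired together by the `hadoop-streaming` jar rather than called directly.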
- Understand core concepts behind Hadoop and cluster computing
- Use design patterns and parallel analytical algorithms to create distributed data analysis jobs
- Learn about data management, mining, and warehousing in a distributed context using Apache Hive and HBase
- Use Sqoop to ingest relational data and Apache Flume to ingest streaming data
- Program complex Hadoop and Spark applications with Apache Pig and Spark DataFrames
- Perform machine learning techniques such as classification, clustering, and collaborative filtering with Spark’s MLlib
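As a concrete instance of the collaborative filtering mentioned above, the sketch below (plain Python with made-up ratings, not Spark MLlib code) recommends items to a user from the user's nearest neighbor by cosine similarity. MLlib's recommender scales this family of computation across a cluster, using ALS matrix factorization rather than explicit neighborhoods.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts {item: rating}."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = sqrt(sum(r * r for r in u.values()))
    norm_v = sqrt(sum(r * r for r in v.values()))
    return dot / (norm_u * norm_v)

def recommend(user, ratings):
    """Suggest items the most similar other user rated but `user` has not."""
    best = max((u for u in ratings if u != user),
               key=lambda u: cosine(ratings[user], ratings[u]))
    return sorted(i for i in ratings[best] if i not in ratings[user])

# Hypothetical toy ratings, keyed by user and then by item.
ratings = {
    "alice": {"hadoop": 5, "spark": 4},
    "bob":   {"hadoop": 5, "spark": 5, "hive": 4},
    "carol": {"hbase": 2, "pig": 5},
}
suggestions = recommend("alice", ratings)  # bob is alice's nearest neighbor
```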
Benjamin Bengfort is a data scientist with a passion for large-scale machine learning on massive natural language corpora. He has applied that passion to developing a keen understanding of recommendation algorithms at Cobrain in Bethesda, MD, where he serves as Chief Data Scientist. With a professional background in military and intelligence and an academic background in economics and computer science, he brings a unique set of skills and insights to his work. Ben believes that data is a currency that can pave the way to discovering insights and solving complex problems. He is currently pursuing a PhD in computer science at the University of Maryland.
Jenny Kim is an experienced data scientist who works in both commercial software and academia. She has significant experience working with large-scale data, machine learning, and Hadoop implementations in production and research environments. With Benjamin Bengfort, she built a large-scale recommender system that used a web crawler to gather ontological information about apparel products and produce recommendations from transactions. She currently teaches the Introduction to Hadoop and Advanced Hadoop courses on Statistics.com.
Introduction to Distributed Computing
Chapter 1. The Age of the Data Product
What Is a Data Product?
Building Data Products at Scale with Hadoop
The Data Science Pipeline and the Hadoop Ecosystem
Conclusion
Chapter 2. An Operating System for Big Data
Basic Concepts
Hadoop Architecture
Working with a Distributed File System
Working with Distributed Computation
Submitting a MapReduce Job to YARN
Conclusion
Chapter 3. A Framework for Python and Hadoop Streaming
Hadoop Streaming
A Framework for MapReduce with Python
Advanced MapReduce
Conclusion
Chapter 4. In-Memory Computing with Spark
Spark Basics
Interactive Spark Using PySpark
Writing Spark Applications
Conclusion
Chapter 5. Distributed Analysis and Patterns
Computing with Keys
Design Patterns
Toward Last-Mile Analytics
Conclusion
Workflows and Tools for Big Data Science
Chapter 6. Data Mining and Warehousing
Structured Data Queries with Hive
HBase
Conclusion
Chapter 7. Data Ingestion
Importing Relational Data with Sqoop
Ingesting Streaming Data with Flume
Conclusion
Chapter 8. Analytics with Higher-Level APIs
Pig
Spark’s Higher-Level APIs
Conclusion
Chapter 9. Machine Learning
Scalable Machine Learning with Spark
Conclusion
Chapter 10. Summary: Doing Distributed Data Science
Data Product Lifecycle
Machine Learning Lifecycle
Conclusion
Appendix: Creating a Hadoop Pseudo-Distributed Development Environment
Quick Start
Setting Up Linux
Installing Hadoop
Appendix: Installing Hadoop Ecosystem Products
Packaged Hadoop Distributions
Self-Installation of Apache Hadoop Ecosystem Products
Publication date | July 12, 2016
---|---
Place of publication | Sebastopol
Language | English
Dimensions | 179 x 233 mm
Weight | 498 g
Binding | paperback
Subject area | Computer Science ► Databases ► Data Warehouse / Data Mining
Keywords | Apache Hadoop • Big Data • Data Warehouse • Hive • Linux • MapReduce • Python • Spark
ISBN-10 | 1-4919-1370-3 / 1491913703
ISBN-13 | 978-1-4919-1370-3 / 9781491913703
Condition | New