PySpark Recipes (eBook)
XXIII, 265 pages
Apress (publisher)
978-1-4842-3141-8 (ISBN)
- Understand the advanced features of PySpark2 and SparkSQL
- Optimize your code
- Program SparkSQL with Python
- Use Spark Streaming and Spark MLlib with Python
- Perform graph analysis with GraphFrames
Quickly find solutions to common programming problems encountered while processing big data. Content is presented in the popular problem-solution format: look up the programming problem you want to solve, read the solution, and apply the solution directly in your own code. Problem solved!

PySpark Recipes covers Hadoop and its shortcomings. The architecture of Spark, PySpark, and RDDs is presented, and you will learn to apply RDDs to solve day-to-day big data problems. Python and NumPy are included, making it easy for new learners of PySpark to understand and adopt the model. What you will learn is summarized in the bullet list above.

Who This Book Is For
Data analysts, Python programmers, and big data enthusiasts
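The RDD operations at the heart of the book (map, filter, and friends) have direct analogues in plain Python's built-ins, which is one reason PySpark is approachable for Python programmers. As a purely illustrative sketch (no PySpark installation assumed; in PySpark the same pipeline would be an `rdd.map(...).filter(...).collect()` chain), the idea looks like this:

```python
# Illustrative only: a plain-Python analogue of an RDD transformation chain.
# In PySpark this pipeline would read rdd.map(...).filter(...).collect().
data = [1, 2, 3, 4, 5]

squared = map(lambda x: x * x, data)           # analogue of rdd.map()
evens = filter(lambda x: x % 2 == 0, squared)  # analogue of rdd.filter()

result = list(evens)                           # analogue of rdd.collect()
print(result)  # [4, 16]
```

The parallel is not just cosmetic: PySpark's transformations are lazy, like Python's `map` and `filter` iterators here, and only an action (`collect`, `count`, ...) forces evaluation.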
Raju Mishra has strong interests in data science and in systems capable of handling large amounts of data and running complex mathematical models through computational programming. He was inspired to pursue an M.Tech in computational science at the Indian Institute of Science in Bangalore, India. Raju primarily works in the areas of data science and its various applications. Working as a corporate trainer, he has developed unique insights that help him teach and explain complex ideas with ease. Raju is also a data science consultant solving complex industrial problems. He works with programming tools such as R, Python, scikit-learn, Statsmodels, Hadoop, Hive, Pig, Spark, and many others.
Chapter 1: The Era of Big Data and Hadoop
Chapter goal: The reader learns about big data and its usefulness, and how Hadoop and its ecosystem process big data into useful information. Also covered are the shortcomings of Hadoop that call for another big data processing platform.
No. of pages: 15-20
Subtopics:
1. Introduction to big data
2. Big data challenges and processing technology
3. Hadoop, its structure, and its ecosystem
4. Shortcomings of Hadoop

Chapter 2: Python, NumPy, and SciPy
Chapter goal: To get the reader acquainted with Python, NumPy, and SciPy.
No. of pages: 25-30
Subtopics:
1. Introduction to Python
2. Python collections, string functions, and classes
3. NumPy and ndarray
4. SciPy

Chapter 3: Spark: Introduction, Installation, Structure, and PySpark
Chapter goal: This chapter introduces Spark and its installation on a single machine, then continues with the structure of Spark. Finally, PySpark is introduced.
No. of pages: 15-20
Subtopics:
1. Introduction to Spark
2. Spark installation on Ubuntu
3. Spark architecture
4. PySpark and its architecture

Chapter 4: Resilient Distributed Datasets (RDDs)
Chapter goal: This chapter deals with the core of Spark, the RDD, and operations on RDDs.
No. of pages: 25-30
Subtopics:
1. Introduction to RDDs and their characteristics
2. Transformations and actions
3. Operations on RDDs (map, filter, set operations, and many more)

Chapter 5: The Power of Pairs: Paired RDDs
Chapter goal: Paired RDDs make many complex computations easy to program. The reader will learn about paired RDDs and the operations on them.
No. of pages: 15-20
Subtopics:
1. Introduction to paired RDDs
2. Operations on paired RDDs (mapByKey, reduceByKey, ...)

Chapter 6: Advanced PySpark and PySpark Application Optimization
Chapter goal: The reader will learn about the advanced PySpark topics of broadcast variables and accumulators, and about PySpark application optimization.
No. of pages: 30-35
Subtopics:
1. Spark accumulators
2. Spark broadcast variables
3. Spark code optimization

Chapter 7: I/O in PySpark
Chapter goal: We will learn PySpark I/O in this chapter: reading and writing .csv and .json files, and connecting to different databases from PySpark.
No. of pages: 20-30
Subtopics:
1. Reading and writing JSON and .csv files
2. Reading data from HDFS
3. Reading data from and writing data to different databases

Chapter 8: PySpark Streaming
Chapter goal: The reader will understand real-time data analysis with PySpark Streaming. This chapter focuses on the PySpark Streaming architecture, discretized stream operations, and windowing operations.
No. of pages: 30-40
Subtopics:
1. PySpark Streaming architecture
2. Discretized streams and operations
3. The concept of windowing operations

Chapter 9: SparkSQL
Chapter goal: The reader will learn about SparkSQL and its DataFrame, and how to issue SQL commands through SparkSQL.
No. of pages: 40-50
Subtopics:
1. SparkSQL
2. SQL with SparkSQL
3. Hive commands with SparkSQL
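Chapter 5's key operation, reduceByKey, merges all values that share a key with a binary function; word count is the canonical paired-RDD example. The following is a pure-Python sketch of those semantics only, not PySpark itself (in PySpark the equivalent would be `sc.parallelize(pairs).reduceByKey(add).collect()`):

```python
from operator import add

def reduce_by_key(pairs, fn):
    """Pure-Python sketch of the semantics of PySpark's reduceByKey:
    merge all values that share a key with the binary function fn."""
    acc = {}
    for key, value in pairs:
        acc[key] = fn(acc[key], value) if key in acc else value
    return sorted(acc.items())  # sorted for a deterministic result

# Classic word count over (word, 1) pairs.
words = ["spark", "python", "spark", "rdd", "spark"]
pairs = [(w, 1) for w in words]
print(reduce_by_key(pairs, add))  # [('python', 1), ('rdd', 1), ('spark', 3)]
```

On a real cluster, reduceByKey additionally combines values locally on each partition before shuffling, which is why the book's outline pairs it with the optimization material in Chapter 6.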
Publication date (per publisher) | 9.12.2017 |
---|---|
Additional information | XXIII, 265 p. 47 illus., 12 illus. in color. |
Place of publication | Berkeley |
Language | English |
Subject area | Computer Science ► Databases ► Data Warehouse / Data Mining |
Mathematics / Computer Science ► Computer Science ► Networks | |
Mathematics / Computer Science ► Computer Science ► Programming Languages / Tools | |
Keywords | Advanced PySpark • Big Data • MLlib • NumPy • PySpark2 • Python • Resilient Distributed Dataset • SciPy • Spark • Spark SQL |
ISBN-10 | 1-4842-3141-4 / 1484231414 |
ISBN-13 | 978-1-4842-3141-8 / 9781484231418 |
Size: 3.3 MB
DRM: digital watermark
This eBook contains a digital watermark and is therefore personalized to you. If the eBook is improperly passed on to third parties, it can be traced back to its source.
File format: PDF (Portable Document Format)
With its fixed page layout, PDF is particularly well suited to reference books with columns, tables, and figures. A PDF can be displayed on almost all devices, but is only suitable to a limited extent for small displays (smartphone, eReader).
System requirements:
PC/Mac: You can read this eBook on a PC or Mac. You need a PDF viewer, e.g. Adobe Reader or Adobe Digital Editions.
eReader: This eBook can be read on (almost) all eBook readers. However, it is not compatible with the Amazon Kindle.
Smartphone/tablet: Whether Apple or Android, you can read this eBook. You need a PDF viewer, e.g. the free Adobe Digital Editions app.
Buying eBooks from abroad
For tax law reasons, we can sell eBooks only within Germany and Switzerland. Regrettably, we cannot fulfill eBook orders from other countries.