Learning Spark, 2nd Edition
Lightning-Fast Data Analytics
Tathagata Das / Jules Damji …
About
Data is getting bigger, arriving faster, and coming in varied formats—and it all needs to be processed at scale for analytics or machine learning. How can you process such varied data workloads efficiently? Enter Apache Spark.
Updated to cover new features in Spark 3.0, this second edition shows data engineers and data scientists why structure and unification in Spark matters. Specifically, this book explains how to perform simple and complex data analytics and employ machine learning algorithms. Through step-by-step walk-throughs, code snippets, and notebooks, you'll be able to do the following (a brief PySpark sketch follows this list):
Learn Python, SQL, Scala, or Java high-level APIs: DataFrames and Datasets
Peek under the hood of the Spark SQL engine to understand Spark transformations and performance
Inspect, tune, and debug your Spark operations with Spark configurations and Spark UI
Connect to data sources: JSON, Parquet, CSV, Avro, ORC, Hive, S3, or Kafka
Perform analytics on batch and streaming data using Structured Streaming
Build reliable data pipelines with open source Delta Lake and Spark
Develop machine learning pipelines with MLlib and productionize models using MLflow
Use Koalas, the open source pandas API on Spark, for data transformation and feature engineering
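As a taste of the structured APIs covered in the chapters below, here is a minimal PySpark sketch of the DataFrame API and Spark SQL. It assumes a local Spark installation; the input file people.json and its city and age columns are hypothetical stand-ins, not examples from the book.

    # Read a JSON data source with the DataFrame API, then run the same
    # aggregation through Spark SQL against a temporary view.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import avg

    spark = (SparkSession.builder
             .appName("LearningSparkExample")
             .getOrCreate())

    # DataFrame API: group and aggregate with a built-in function
    people_df = spark.read.json("people.json")   # hypothetical input file
    people_df.groupBy("city").agg(avg("age").alias("avg_age")).show()

    # Spark SQL: the equivalent query expressed over a temporary view
    people_df.createOrReplaceTempView("people")
    spark.sql("SELECT city, AVG(age) AS avg_age FROM people GROUP BY city").show()

    spark.stop()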
Contents
1. Introduction to Unified Analytics with Apache Spark
The Genesis of Big Data and Distributed Computing at Google
Hadoop at Yahoo!
Spark’s Early Years at AMPLab
What is Apache Spark?
Speed
Ease of Use
Modularity
Extensibility
Why Unified Analytics?
Apache Spark Components as a Unified Stack
Apache Spark’s Distributed Execution and Concepts
Developer’s Experience
Who Uses Spark, and for What?
Data Science Tasks
Data Engineering Tasks
Machine Learning or Deep Learning Tasks
Community Adoption and Expansion
2. Downloading Apache Spark and Getting Started
Step 1: Download Apache Spark
Spark’s Directories and Files
Step 2: Use the Scala or PySpark Shell
Using the Local Machine
Step 3: Understand Spark Application Concepts
Spark Application and SparkSession
Spark Jobs
Spark Stages
Spark Tasks
Transformations, Actions, and Lazy Evaluation
Spark UI
Databricks Community Edition
First Standalone Application
Counting M&Ms for the Cookie Monster
Building Standalone Applications in Scala
Summary
3. Apache Spark’s Structured APIs
A Bit of History…
Unstructured Spark: What’s Underneath an RDD?
Structuring Spark
Key Merits and Benefits
Structured APIs: DataFrames and Datasets
DataFrames API
Common DataFrame Operations
Datasets API
DataFrames vs Datasets
What about RDDs?
Spark SQL and the Underlying Engine
Catalyst Optimizer
Summary
4. Spark SQL and DataFrames — Introduction to Built-in Data Sources
Using Spark SQL in Spark Applications
Basic Query Example
SQL Tables and Views
Data Sources for DataFrames and SQL Tables
DataFrameReader
DataFrameWriter
Parquet
JSON
CSV
Avro
ORC
Image
Summary
5. Spark SQL and Datasets
Single API for Java and Scala
Scala Case Classes and JavaBeans for Datasets
Working with Datasets
Creating Sample Data
Transforming Sample Data
Memory Management for Datasets and DataFrames
Dataset Encoders
Spark’s Internal Format vs Java Object Format
Serialization and Deserialization (SerDe)
Costs of Using Datasets
Strategies to Mitigate Costs
Summary
6. Loading and Saving Your Data
Motivation for Data Sources
File Formats: Revisited
Text Files
Organizing Data for Efficient I/O
Partitioning
Bucketing
Compression Schemes
Saving as Parquet Files
Delta Lake Storage Format
Delta Lake Table
Summary