
Essential PySpark for Scalable Data Analytics

This is the code repository for Essential PySpark for Scalable Data Analytics, published by Packt.

A beginner's guide to harnessing the power and ease of PySpark 3

What is this book about?

Apache Spark is a unified data analytics engine designed to process huge volumes of data quickly and efficiently. PySpark is Apache Spark's Python API, which offers Python developers an easy-to-use, scalable framework for data analytics.
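
To give a feel for how concise the PySpark API is, here is a minimal, self-contained sketch (not taken from the book's chapters; the sample rows and column names are invented for illustration) that starts a SparkSession and runs a simple aggregation:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# The SparkSession is the entry point to PySpark
spark = SparkSession.builder.appName('quickstart').getOrCreate()

# A small in-memory DataFrame (hypothetical sample data)
df = spark.createDataFrame(
    [('UK', 12.5), ('UK', 7.0), ('DE', 3.2)],
    schema=['Country', 'UnitPrice'],
)

# The same DataFrame API scales from a laptop to a cluster
df.groupBy('Country').agg(F.avg('UnitPrice').alias('AvgPrice')).show()

The same code runs unchanged whether Spark executes locally or on a cluster; only the session configuration differs.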

This book covers the following exciting features:

- Understand the role of distributed computing in the world of big data
- Gain an appreciation for Apache Spark as the de facto go-to for big data processing
- Scale out your data analytics process using Apache Spark
- Build data pipelines using data lakes, and perform data visualization with PySpark and Spark SQL
- Leverage the cloud to build truly scalable and real-time data analytics applications
- Explore the applications of data science and scalable machine learning with PySpark
- Integrate your clean and curated data with BI and SQL analysis tools

If you feel this book is for you, get your copy today!

<img src="https://raw.githubusercontent.com/PacktPublishing/GitHub/master/GitHub.png" alt="https://www.packtpub.com/" border="5" />

Instructions and Navigations

All of the code is organized into folders. For example, Chapter02.

The code will look like the following:

from pyspark.sql.types import StructType, StringType, IntegerType, DoubleType

# Explicit schema for the online retail dataset
retailSchema = (StructType()
  .add('InvoiceNo', StringType())
  .add('StockCode', StringType())
  .add('Description', StringType())
  .add('Quantity', IntegerType())
  .add('InvoiceDate', StringType())
  .add('UnitPrice', DoubleType())
  .add('CustomerID', IntegerType())
  .add('Country', StringType())
)
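
As a usage sketch (the file path below is a placeholder, not a file shipped with this repository), a schema like this is typically passed to the DataFrame reader so that Spark does not have to infer column types:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('retail').getOrCreate()

# Read a CSV file using the explicit schema defined above;
# '/data/online-retail.csv' is a hypothetical path for illustration
retailDF = (spark.read
  .format('csv')
  .option('header', 'true')
  .schema(retailSchema)
  .load('/data/online-retail.csv')
)

retailDF.printSchema()

Supplying the schema up front avoids an extra pass over the data for type inference and guarantees consistent column types across runs.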

Following is what you need for this book: This book is for practicing data engineers, data scientists, data analysts, and data enthusiasts who want to explore distributed and scalable data analytics. Basic to intermediate knowledge of data engineering, data science, and SQL analytics is expected. General proficiency in any programming language, especially Python, and a working knowledge of performing data analytics with frameworks such as pandas and SQL will help you get the most out of this book.

With the following software and hardware list, you can run all of the code files present in the book (Chapters 1-14).

Software and Hardware List

| Chapter | Software required | OS required |
| ------- | ----------------- | ----------- |
| 1-13 | Databricks, Apache Spark 3, Python | Windows, Mac OS X, or Linux (any) |

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. Click here to download it.

Get to Know the Author

Sreeram Nudurupati is a data analytics professional with years of experience in designing and optimizing data analytics pipelines at scale. He has a history of helping enterprises, as well as digital natives, build optimized analytics pipelines by applying his knowledge of the organization, its infrastructure environment, and current technologies.

Download a free PDF

If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost.
Simply click on the link to claim your free PDF.

https://packt.link/free-ebook/9781800568877