
🔥 PySpark Essentials

This project is a hands-on collection of notebooks, code snippets, and exercises focused on learning Apache Spark with Python (PySpark). It includes my notes and experiments while exploring core Spark concepts, transformations, actions, DataFrame API, and more.


🚀 What is PySpark?

PySpark is the Python API for Apache Spark, an open-source distributed computing engine for large-scale data processing and analytics. It lets you write Spark jobs in plain Python while Spark distributes the work across a cluster, or across the cores of a single machine in local mode.


📘 Topics Covered

  • ✅ Introduction to Spark & PySpark
  • ✅ SparkContext & SparkSession
  • ✅ RDDs (Resilient Distributed Datasets)
  • ✅ DataFrames & Datasets
  • ✅ Transformations vs Actions
  • ✅ Reading/Writing: JSON, CSV, Parquet
  • ✅ PySpark SQL & Queries
  • ✅ GroupBy, Aggregations, Joins
  • ✅ Handling Nulls & Missing Data
  • ✅ User-Defined Functions (UDFs)
  • ✅ Window Functions
  • ✅ Data Partitioning & Performance Optimization
  • ✅ Intro to MLlib (Optional)

✍️ How I Learn

I follow a "Learn by Doing" approach. Each notebook contains:

  • ✅ Detailed explanations
  • 🧪 Hands-on code examples
  • 📌 Real-world case studies