Apache Spark and Scala Certification Training
- Overview of Big Data & Hadoop including HDFS (Hadoop Distributed File System), YARN (Yet Another Resource Negotiator)
- Comprehensive knowledge of the various tools that fall within the Spark ecosystem, such as Spark SQL, Spark MLlib, Sqoop, Kafka, Flume and Spark Streaming
- The capability to ingest data into HDFS using Sqoop & Flume, and analyze the large datasets stored in HDFS
- The ability to handle real-time data feeds through a publish-subscribe messaging system like Kafka
- Exposure to many real-life, industry-based projects executed using Edureka’s CloudLab
- Projects that are diverse in nature, covering the banking, telecommunication, social media, and government domains
- Rigorous involvement of an SME throughout the Spark Training to learn industry standards and best practices
Spark is one of the fastest-growing and most widely used tools for Big Data & Analytics. It has been adopted by companies across various domains around the globe and therefore offers promising career opportunities. To take advantage of these opportunities, you need structured training that is aligned with the Cloudera Hadoop and Spark Developer Certification (CCA175) and with current industry requirements and best practices.
Besides a strong theoretical understanding, hands-on experience is essential. Hence, during Edureka’s Spark and Scala course, you will work on various industry-based use cases and projects that incorporate Big Data and Spark tools as part of the solution strategy.
Additionally, all your doubts will be addressed by industry professionals currently working on real-life Big Data and analytics projects.
- Write Scala programs to build Spark applications
- Master the concepts of HDFS
- Understand Hadoop 2.x Architecture
- Understand Spark and its Ecosystem
- Implement Spark operations on Spark Shell
- Implement Spark applications on YARN (Hadoop)
- Write Spark Applications using Spark RDD concepts
- Learn data ingestion using Sqoop
- Perform SQL queries using Spark SQL
- Implement various machine learning algorithms, including clustering, using the Spark MLlib API
- Explain Kafka and its components
- Understand Flume and its components
- Integrate Kafka with real-time streaming systems like Flume
- Use Kafka to produce and consume messages
- Build Spark Streaming applications
- Process Multiple Batches in Spark Streaming
- Implement different streaming data sources
- Developers and Architects
- BI /ETL/DW Professionals
- Senior IT Professionals
- Testing Professionals
- Mainframe Professionals
- Freshers
- Big Data Enthusiasts
- Software Architects, Engineers and Developers
- Data Scientists and Analytics Professionals
- 56% of enterprises will increase their investment in Big Data over the next three years – Forbes
- According to a McKinsey report, the US alone will face a shortage of nearly 190,000 data scientists and 1.5 million data analysts and Big Data managers by 2018
- The average salary of a Spark developer is $113,000
Module 1: Introduction to Big Data, Hadoop and Spark
Learning Objectives: In this module, you will understand Big Data, the limitations of existing solutions to the Big Data problem, how Hadoop solves it, Hadoop ecosystem components, Hadoop architecture, HDFS, rack awareness, and replication. You will learn about the Hadoop cluster architecture and the important configuration files in a Hadoop cluster. You will also get an introduction to Spark, why it is used, and the difference between batch processing and real-time processing.
Topics:
- What is Big Data?
- Big Data Customer Scenarios
- Limitations and Solutions of Existing Data Analytics Architecture with Uber Use Case
- How Hadoop Solves the Big Data Problem
- What is Hadoop?
- Hadoop’s Key Characteristics
- Hadoop Ecosystem and HDFS
- Hadoop Core Components
- Rack Awareness and Block Replication
- YARN and its Advantage
- Hadoop Cluster and its Architecture
- Hadoop: Different Cluster Modes
- Big Data Analytics with Batch & Real-Time Processing
- Why Spark is Needed
- What is Spark?
- How Spark Differs from its Competitors
- Spark at eBay
- Spark’s Place in Hadoop Ecosystem
Module 2: Introduction to Scala for Apache Spark
Learning Objectives: Learn the basics of Scala that are required for programming Spark applications. You will also learn about basic constructs of Scala such as variable types, control structures, and collections such as Array, ArrayBuffer, Map, and Lists. A short illustrative sketch follows the hands-on list below.
Topics:
- What is Scala?
- Why Scala for Spark?
- Scala in other Frameworks
- Introduction to Scala REPL
- Basic Scala Operations
- Variable Types in Scala
- Control Structures in Scala
- Foreach loop, Functions and Procedures
- Collections in Scala: Array, ArrayBuffer, Map, Tuples, Lists, and more
Hands-on:
- Scala REPL Detailed Demo
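Each of these constructs can be tried interactively in the Scala REPL. A minimal sketch, with all names and values invented for illustration:

```scala
// Variable types: val is immutable, var is mutable
val courseName: String = "Apache Spark and Scala"
var enrolled: Int = 120

// Control structures: if/else is an expression that returns a value
val popularity = if (enrolled > 100) "popular" else "growing"

// Collections: Array (fixed size), ArrayBuffer (growable), Map, List
import scala.collection.mutable.ArrayBuffer
val topics  = Array("HDFS", "YARN", "Spark")
val buffer  = ArrayBuffer("Kafka")
buffer += "Flume"                              // grows in place
val modules = Map("Scala" -> 2, "RDDs" -> 5)   // immutable key-value map
val tools: List[String] = topics.toList

// A function and a foreach loop over a collection
def greet(tool: String): Unit = println(s"Learning $tool")
tools.foreach(greet)
```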
Module 3: Functional Programming and OOP Concepts in Scala
Learning Objectives: In this module, you will learn about object-oriented programming and functional programming techniques in Scala (a short sketch follows the hands-on list below).
Topics:
- Functional Programming
- Higher Order Functions
- Anonymous Functions
- Class in Scala
- Getters and Setters
- Custom Getters and Setters
- Properties with only Getters
- Auxiliary Constructor and Primary Constructor
- Singletons
- Extending a Class
- Overriding Methods
- Traits as Interfaces and Layered Traits
Hands-on:
- OOP Concepts
- Functional Programming
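A compact sketch of the constructs above: a class with custom getters and setters, a trait used as an interface, higher-order and anonymous functions, and a singleton object. All names and values are illustrative:

```scala
// A class with a primary constructor and a custom getter/setter pair
class Student(val name: String) {
  private var _score: Int = 0
  def score: Int = _score                 // getter
  def score_=(s: Int): Unit = {           // custom setter with validation
    require(s >= 0, "score must be non-negative")
    _score = s
  }
}

// A trait used as an interface, mixed in while extending a class
trait Printable { def describe(): String }

class GradStudent(name: String) extends Student(name) with Printable {
  override def describe(): String = s"$name scored $score"
}

// A singleton object as the program entry point
object Demo extends App {
  // Higher-order function: takes another function as a parameter
  def applyTwice(f: Int => Int, x: Int): Int = f(f(x))
  val double: Int => Int = x => x * 2     // anonymous function

  val s = new GradStudent("Asha")
  s.score = 42                            // invokes the custom setter
  println(s.describe())                   // "Asha scored 42"
  println(applyTwice(double, 3))          // 12
}
```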
Module 4: Deep Dive into Apache Spark Framework
Learning Objectives: In this module, you will understand Apache Spark in depth, learn about the various Spark components, and create and run Spark applications. At the end, you will learn how to perform data ingestion using Sqoop. A sketch of a minimal Spark application follows the hands-on list below.
Topics:
- Spark Components & its Architecture
- Spark Deployment Modes
- Introduction to the Spark Shell
- Submitting a Spark Job
- Spark Web UI
- Writing Your First Spark Application
- Data Ingestion using Sqoop
Hands-On:
- Building and Running Spark Application
- Spark Application Web UI
- Understanding different Spark Properties
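For orientation, here is a minimal, self-contained sketch of a Spark application in Scala; the object name, data, and submit command below are assumptions for illustration, not course material:

```scala
import org.apache.spark.sql.SparkSession

object FirstSparkApp {
  def main(args: Array[String]): Unit = {
    // In spark-shell a session named `spark` already exists;
    // in a standalone application you create one yourself
    val spark = SparkSession.builder()
      .appName("FirstSparkApp")
      .getOrCreate()                // the master is supplied by spark-submit

    val sc = spark.sparkContext
    val nums = sc.parallelize(1 to 100)
    println(s"Sum = ${nums.sum()}") // watch the job in the Spark Web UI (port 4040)

    spark.stop()
  }
}
```

Packaged into a jar (for example with sbt), such an application could be launched on YARN with something like `spark-submit --class FirstSparkApp --master yarn firstapp.jar` (jar name illustrative).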
Module 5: Playing with Spark RDDs
Learning Objectives: In this module, you will learn about Spark RDDs and the RDD manipulations used to implement business logic (transformations, actions, and functions performed on RDDs). A WordCount sketch follows the hands-on list below.
Topics:
- Challenges in Existing Computing Methods
- Probable Solution & How RDD Solves the Problem
- What is an RDD: Its Operations, Transformations & Actions
- Data Loading and Saving Through RDDs
- Key-Value Pair RDDs
- Other Pair RDDs, Two Pair RDDs
- RDD Lineage
- RDD Persistence
- WordCount Program Using RDD Concepts
- RDD Partitioning & How it Helps Achieve Parallelization
- Passing Functions to Spark
Hands-On:
- Loading data in RDDs
- Saving data through RDDs
- RDD Transformations
- RDD Actions and Functions
- RDD Partitions
- WordCount through RDDs
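Tying these hands-on items together, here is a sketch of the classic WordCount program built from RDD transformations, actions, and persistence; the HDFS paths are placeholders:

```scala
import org.apache.spark.sql.SparkSession

object WordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("WordCount").getOrCreate()
    val sc = spark.sparkContext

    // Load a text file into an RDD
    val lines = sc.textFile("hdfs:///data/input.txt")

    // Transformations only build the lineage; nothing executes yet
    val counts = lines
      .flatMap(_.split("\\s+"))   // split each line into words
      .map(word => (word, 1))     // key-value pair RDD
      .reduceByKey(_ + _)         // aggregate the counts per word

    // Persist, since the two actions below would otherwise recompute the lineage
    counts.cache()
    println(s"Distinct words: ${counts.count()}")    // action 1
    counts.saveAsTextFile("hdfs:///data/wordcounts") // action 2

    spark.stop()
  }
}
```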
Module 6: DataFrames and Spark SQL
Learning Objectives: In this module, you will learn about Spark SQL, which is used to process structured data with SQL queries. You will learn about DataFrames and Datasets in Spark SQL, along with the different kinds of SQL operations performed on them. You will also learn about Spark-Hive integration. A short sketch follows the hands-on list below.
Topics:
- Need for Spark SQL
- What is Spark SQL
- Spark SQL Architecture
- SQL Context in Spark SQL
- Schema RDDs
- User Defined Functions
- Data Frames & Datasets
- Interoperating with RDDs
- JSON and Parquet File Formats
- Loading Data through Different Sources
- Spark-Hive Integration
Hands-On:
- Spark SQL: Creating DataFrames
- Loading and transforming data through different sources
- Stock Market Analysis
- Spark-Hive Integration
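A spark-shell style sketch connecting these topics: a DataFrame from a JSON source, a SQL query over it, RDD interoperability through a case class, and a Parquet write. The Hive-enabled builder, paths, and column names are assumptions:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("SparkSQLDemo")
  .enableHiveSupport()      // Spark-Hive integration (requires a Hive-enabled build)
  .getOrCreate()
import spark.implicits._

// Load a JSON source into a DataFrame; the schema is inferred
val stocks = spark.read.json("hdfs:///data/stocks.json")
stocks.printSchema()

// The DataFrame API and SQL are interchangeable
stocks.createOrReplaceTempView("stocks")
spark.sql(
  """SELECT symbol, MAX(close - open) AS gain
     FROM stocks GROUP BY symbol ORDER BY gain DESC""").show(10)

// Interoperating with RDDs: build a typed Dataset from an RDD of case classes
case class Tick(symbol: String, close: Double)
val ds = spark.sparkContext
  .parallelize(Seq(Tick("AAPL", 189.9), Tick("INFY", 18.2)))
  .toDS()

// Write out in the Parquet columnar format
ds.write.mode("overwrite").parquet("hdfs:///data/ticks.parquet")
```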
Module 7: Machine Learning using Spark MLlib
Learning Objectives: In this module, you will learn why machine learning is needed, different machine learning techniques and algorithms, and their implementation using Spark MLlib.
Topics:
- Why Machine Learning
- What is Machine Learning
- Where Machine Learning is used
- Face Detection: USE CASE
- Different Types of Machine Learning Techniques
- Introduction to MLlib
- Features of MLlib and MLlib Tools
- Various ML algorithms supported by MLlib
Module 8: Deep Dive into Spark MLlib
Learning Objectives: In this module, you will implement various algorithms supported by MLlib, such as Linear Regression, Decision Tree, Random Forest and many more. A clustering sketch follows the hands-on list below.
Topics:
- Supervised Learning: Linear Regression, Logistic Regression, Decision Tree, Random Forest
- Unsupervised Learning: K-Means Clustering & How It Works with MLlib
- Analysis of US Election Data using MLlib (K-Means)
Hands-On:
- K-Means Clustering
- Linear Regression
- Logistic Regression
- Decision Tree
- Random Forest
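As a taste of the MLlib workflow, here is a sketch of K-Means clustering using the DataFrame-based API; the input file and feature columns are invented for the example:

```scala
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("KMeansDemo").getOrCreate()

// Illustrative input with numeric columns to cluster on
val df = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("hdfs:///data/election.csv")

// MLlib estimators expect the features assembled into a single vector column
val assembler = new VectorAssembler()
  .setInputCols(Array("age", "income"))
  .setOutputCol("features")
val features = assembler.transform(df)

// Fit a K-Means model with 3 clusters and inspect the centroids
val model = new KMeans().setK(3).setSeed(1L).fit(features)
model.clusterCenters.foreach(println)

// Assign every row to its nearest cluster
model.transform(features).select("features", "prediction").show(5)
```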
Module 9: Understanding Apache Kafka and Apache Flume
Learning Objectives: In this module, you will understand Kafka and the Kafka architecture. Afterwards, you will go through the details of a Kafka cluster and learn how to configure different types of Kafka clusters. You will then see how messages are produced and consumed using the Kafka APIs in Java. You will also get an introduction to Apache Flume, its basic architecture, and how it integrates with Apache Kafka for event processing. You will learn how to ingest streaming data using Flume. A producer/consumer sketch follows the hands-on list below.
Topics:
- Need for Kafka
- What is Kafka
- Core Concepts of Kafka
- Kafka Architecture
- Where is Kafka Used
- Understanding the Components of Kafka Cluster
- Configuring Kafka Cluster
- Kafka Producer and Consumer Java API
- Need of Apache Flume
- What is Apache Flume
- Basic Flume Architecture
- Flume Sources
- Flume Sinks
- Flume Channels
- Flume Configuration
- Integrating Apache Flume and Apache Kafka
Hands-On:
- Configuring Single Node Single Broker Cluster
- Configuring Single Node Multi-Broker Cluster
- Producing and consuming messages through Kafka Java API
- Flume Commands
- Setting up Flume Agent
- Streaming Twitter Data into HDFS
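For orientation, a sketch of producing and then consuming messages through the Kafka client API, called from Scala; the broker address, topic, and group id are placeholders:

```scala
import java.util.Properties
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import scala.collection.JavaConverters._

// Producer: serialize String keys and values and publish to a topic
val pProps = new Properties()
pProps.put("bootstrap.servers", "localhost:9092")
pProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
pProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

val producer = new KafkaProducer[String, String](pProps)
for (i <- 1 to 5)
  producer.send(new ProducerRecord("demo-topic", s"key-$i", s"message $i"))
producer.close()

// Consumer: subscribe to the same topic and poll for records
val cProps = new Properties()
cProps.put("bootstrap.servers", "localhost:9092")
cProps.put("group.id", "demo-group")
cProps.put("auto.offset.reset", "earliest")
cProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
cProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")

val consumer = new KafkaConsumer[String, String](cProps)
consumer.subscribe(java.util.Arrays.asList("demo-topic"))
val records = consumer.poll(java.time.Duration.ofSeconds(5))
records.asScala.foreach(r => println(s"${r.key} -> ${r.value}"))
consumer.close()
```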
Module 10: Apache Spark Streaming - Processing Multiple Batches
Learning Objectives: Work on Spark Streaming, which is used to build scalable, fault-tolerant streaming applications. Learn about DStreams and the various transformations performed on streaming data, and get to know commonly used streaming operators such as sliding window operators and stateful operators (a sketch follows the topic list below).
Topics:
- Drawbacks in Existing Computing Methods
- Why Streaming is Necessary
- What is Spark Streaming?
- Spark Streaming Features
- Spark Streaming Workflow
- How Uber Uses Streaming Data
- Streaming Context & DStreams
- Transformations on DStreams
- Windowed Operators and Why They are Useful
- Important Windowed Operators
- Slice, Window and ReduceByWindow Operators
- Stateful Operators
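A sketch of a windowed word count over a DStream to make the operators above concrete; the socket source (feed it with `nc -lk 9999`) and the intervals are illustrative:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingWindowDemo {
  def main(args: Array[String]): Unit = {
    // StreamingContext with 5-second micro-batches
    val conf = new SparkConf().setAppName("StreamingWindowDemo")
    val ssc = new StreamingContext(conf, Seconds(5))
    ssc.checkpoint("hdfs:///checkpoints")  // required by stateful operators

    val lines = ssc.socketTextStream("localhost", 9999)
    val pairs = lines.flatMap(_.split("\\s+")).map((_, 1))

    // Windowed operator: counts over the last 30 seconds, recomputed every 10
    val windowedCounts = pairs.reduceByKeyAndWindow(
      (a: Int, b: Int) => a + b,  // reduce function
      Seconds(30),                // window length
      Seconds(10))                // slide interval

    windowedCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```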
Module 11: Apache Spark Streaming - Data Sources
Learning Objectives: In this module, you will learn about the different streaming data sources, such as Kafka and Flume. At the end of the module, you will be able to create a Spark Streaming application (a sketch follows the hands-on list below).
Topics:
- Apache Spark Streaming: Data Sources
- Streaming Data Source Overview
- Apache Flume and Apache Kafka Data Sources
- Example: Using a Kafka Direct Data Source
Hands-On:
- Various Spark Streaming Data Sources
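A minimal sketch of the Kafka direct data source, assuming the spark-streaming-kafka-0-10 integration; the broker, topic, and group id are placeholders:

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010._

object KafkaDirectDemo {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(
      new SparkConf().setAppName("KafkaDirectDemo"), Seconds(5))

    // Consumer settings passed straight to the Kafka client
    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "localhost:9092",
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "spark-streaming-demo",
      "auto.offset.reset"  -> "latest")

    // Direct stream: executors read topic partitions straight from the brokers
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq("demo-topic"), kafkaParams))

    stream.map(_.value).print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```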
Module 12: In-class Project
Goal:
- The aim of this module is to give you hands-on experience applying the Spark and Scala concepts covered in this course to an end-to-end, industry-based project
Module 13: Spark GraphX (Self-Paced)
Learning Objective: In this module, you will learn the key Spark GraphX programming concepts and operations, along with different GraphX algorithms and their implementations (a sketch follows the hands-on list below).
Topics:
- Introduction to Spark GraphX
- Information about a Graph
- GraphX Basic APIs and Operations
- Spark GraphX Algorithm - PageRank, Personalized PageRank, Triangle Count, Shortest Paths, Connected Components, Strongly Connected Components, Label Propagation
Hands-On:
- The Traveling Salesman problem
- Minimum Spanning Trees
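To make the GraphX APIs concrete, a spark-shell style sketch that builds a toy graph, reads basic information from it, and runs PageRank; the vertices and edges are invented:

```scala
import org.apache.spark.graphx.{Edge, Graph}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("GraphXDemo").getOrCreate()
val sc = spark.sparkContext

// A tiny graph: vertices are users, edges are "follows" relationships
val users = sc.parallelize(Seq(
  (1L, "alice"), (2L, "bob"), (3L, "carol"), (4L, "dave")))
val follows = sc.parallelize(Seq(
  Edge(1L, 2L, "follows"), Edge(2L, 3L, "follows"),
  Edge(3L, 1L, "follows"), Edge(4L, 3L, "follows")))
val graph = Graph(users, follows)

// Basic information about the graph
println(s"vertices = ${graph.numVertices}, edges = ${graph.numEdges}")

// PageRank, iterated until the ranks change by less than the tolerance
val ranks = graph.pageRank(0.0001).vertices
ranks.join(users)
  .sortBy({ case (_, (rank, _)) => rank }, ascending = false)
  .collect()
  .foreach { case (_, (rank, name)) => println(f"$name%-6s $rank%.4f") }
```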