Building Big Data Pipelines with PySpark + MongoDB + Bokeh
by Edwin Bomela
What you'll learn
- PySpark Programming
- Data Analysis
- Python and Bokeh
- Data Transformation and Manipulation
- Data Visualization
- Big Data Machine Learning
- Geo Mapping
- Geospatial Machine Learning
- Creating Dashboards
Description
Welcome to the Building Big Data Pipelines with PySpark & MongoDB & Bokeh course. In this course, we will build an intelligent data pipeline using big data technologies such as Apache Spark and MongoDB.
We will build an ETLP pipeline; ETLP stands for Extract, Transform, Load, and Predict. These are the stages our data must pass through before it becomes useful. Once the data has gone through this pipeline, we will be able to use it to build reports and dashboards for data analysis.
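As a rough sketch of those four stages (not the course's actual code), the PySpark snippet below extracts a CSV, transforms it, loads it as Parquet, and fits an MLlib model; the file paths and the amount/target column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("etlp-sketch").getOrCreate()

# Extract: read raw data (path and columns are hypothetical)
raw = spark.read.csv("data/raw_events.csv", header=True, inferSchema=True)

# Transform: drop incomplete rows and derive a feature
clean = (raw.dropna(subset=["amount", "target"])
            .withColumn("log_amount", F.log1p("amount")))

# Load: persist the cleaned data for reporting and analysis
clean.write.mode("overwrite").parquet("data/clean_events.parquet")

# Predict: assemble features and fit a simple MLlib model
assembler = VectorAssembler(inputCols=["log_amount"], outputCol="features")
train = assembler.transform(clean)
model = LinearRegression(featuresCol="features", labelCol="target").fit(train)
predictions = model.transform(train)
```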
The pipeline we build will consist of data processing with PySpark, predictive modelling with Spark's MLlib machine learning library, and data analysis with MongoDB and Bokeh.
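Continuing the sketch above, the analysis side might look like the following: writing the predictions to MongoDB and charting them with Bokeh. The connection URI and the database and collection names are assumptions, as is having the MongoDB Spark connector 10.x on the Spark classpath and a local mongod on the default port:

```python
import pandas as pd
from pymongo import MongoClient
from bokeh.plotting import figure, show

# Write the predictions DataFrame from the previous sketch to MongoDB
# (assumes the mongo-spark-connector 10.x package is on the Spark classpath)
(predictions.select("target", "prediction")
    .write.format("mongodb")
    .option("connection.uri", "mongodb://localhost:27017")
    .option("database", "pipeline_db")
    .option("collection", "predictions")
    .mode("overwrite")
    .save())

# Pull the results back with PyMongo and plot them with Bokeh
client = MongoClient("mongodb://localhost:27017")
docs = list(client["pipeline_db"]["predictions"].find({}, {"_id": 0}))
df = pd.DataFrame(docs)

p = figure(title="Predicted vs. actual", x_axis_label="actual",
           y_axis_label="predicted")
p.scatter(df["target"], df["prediction"])
show(p)
```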