Course Description

Welcome to the Building Big Data Pipelines with SparkR & PowerBI & MongoDB course. In this course we will create a big data analytics solution using big data technologies for R.

In our use case we will work with raw earthquake data, applying big data processing techniques to extract, transform and load it into usable datasets. Once the data has been processed and cleaned, we will use it as the source for building predictive analytics and visualizations.

Power BI Desktop is a powerful data visualization tool that lets you build advanced queries, models and reports. With Power BI Desktop, you can connect to multiple data sources and combine them into a data model. This data model lets you build visuals and dashboards that you can share as reports with other people in your organization.

SparkR is an R package that provides a lightweight frontend for using Apache Spark from R. SparkR provides a distributed data frame implementation that supports operations such as selection, filtering and aggregation (similar to R data frames and dplyr), but on large datasets. SparkR also supports distributed machine learning through MLlib.
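To give a taste of what this looks like, here is a minimal sketch of the SparkR workflow (not part of the course materials; it assumes a local Spark installation with `SPARK_HOME` configured, and uses R's built-in `quakes` dataset in place of the course's earthquake data):

```r
library(SparkR)

# Start a local Spark session
sparkR.session(master = "local[*]", appName = "EarthquakeSketch")

# Distribute R's built-in 'quakes' data frame as a SparkDataFrame
df <- createDataFrame(quakes)

# Selection, filtering and aggregation, much like dplyr but distributed
strong   <- filter(df, df$mag >= 5.0)
by_depth <- agg(groupBy(strong, "stations"), mean_depth = avg(strong$depth))
head(select(by_depth, "stations", "mean_depth"))

sparkR.session.stop()
```

The same verbs you know from local R data frames apply, but Spark executes them across a cluster when the data no longer fits on one machine.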

MongoDB is a document-oriented NoSQL database used for high-volume data storage. It stores data in a JSON-like format called documents rather than in row/column tables. The document model maps directly to the objects in your application code, making the data easy to work with.
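For instance, a single earthquake reading might be stored as a document like the one below (a hypothetical shape for illustration, not the course's actual schema):

```json
{
  "_id": "us7000abcd",
  "time": "2020-01-15T08:32:10Z",
  "magnitude": 5.4,
  "depth_km": 44.2,
  "location": { "type": "Point", "coordinates": [120.97, 14.60] },
  "place": "offshore Luzon, Philippines"
}
```

Nested fields such as `location` map directly onto objects in application code, with no joins or table normalization required.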

Course curriculum

  1. 01
  2. 02
    • R Installation
    • Apache Spark Installation
    • Java Installation
    • Testing Spark
    • MongoDB Installation
    • NoSQL Booster Installation
    • SparkR Installation
    • Configurations on SparkR
  3. 03
    • Dataset Extraction
    • Dataset Transformation and Cleaning 1
    • Dataset Transformation and Cleaning 2
    • Data Loading in MongoDB
  4. 04
    • Data Pre-Processing and Preparation
    • Building the Machine Learning Model
    • Prediction Output
  5. 05
    • PowerBI Desktop Installation
    • Mongo ODBC Drivers Installation
    • System DSN Creation for MongoDB
    • Data Source Loading into PowerBI
    • Creation of Earthquake Prediction Map
    • Doughnut Charts
    • Area Charts
    • Stacked Bar Charts
  6. 06
    • SparkR script

Pricing - Lifetime Access

What will you learn?

  • How to create big data processing pipelines using R and MongoDB.

  • Machine learning with geospatial data using SparkR and the MLlib library.

  • Data analysis using SparkR, R and PowerBI.

  • How to manipulate, clean and transform data using Spark dataframes.

  • How to create Geo Maps in PowerBI Desktop.

  • How to create dashboards in PowerBI Desktop.

GEO Premium

Access our ENTIRE content instantly with a subscription

Student profile?

  • R Developers at any level

  • Undergraduate students

  • Machine Learning engineers at any level

  • GIS Developers at any level

  • Master's students and PhD candidates

  • Researchers and Academics

  • Professionals and Companies

Some more information

  • Certificates of Completion

    After you successfully finish the course, you can claim your Certificate of Completion at NO extra cost! You can add it to your CV, LinkedIn profile, etc.

  • Available at any time! Study whenever suits you best

    We know how hard it is to acquire new skills. All our courses are self-paced.

  • Online and always accessible

    Even after you finish the course and receive your certificate, you will still have access to the course contents! Every time an instructor makes an update, you will be notified and able to watch it for FREE.

About your Instructor

Data Engineer and business intelligence consultant with a BSc in Computer Science and around 5 years of experience in IT. I have been involved in multiple projects ranging from business intelligence and software engineering to IoT and big data analytics. My expertise is in building data processing pipelines in the Hadoop and cloud ecosystems, and in software development. My career started as an embedded software engineer writing firmware for integrated microchips; I then moved on to become an ERDAS APOLLO developer at Geo Data Design, a Hexagon Geospatial partner. I am now a consultant at one of the top business intelligence consultancies, helping clients build data warehouses, data lakes, cloud data processing pipelines and machine learning pipelines. The technologies I use to meet client requirements include Hadoop, Amazon S3, Python, Django, Apache Spark, MSBI, Microsoft Azure, SQL Server Data Tools, Talend and Elastic MapReduce.

Edwin Bomela

Data Engineer and business intelligence consultant