Taming Big Data using Spark & Scala
What you'll learn:
- Big Data and its ecosystem: Hadoop, Sqoop, Hive, Flume, Kafka, Spark with Scala, Spark SQL & Spark Streaming
- Both concepts (theory & architecture) and hands-on practicals
- Assignments & project scenarios drawn from real projects
- Build, deploy, and run Spark scripts on Hadoop clusters
- Transform structured data using SparkSQL and DataFrames
- Process continuous streams of data with Spark Streaming
- Work in IntelliJ and execute the built JAR through scripts
- Practice questions for CCA 175 Certification
Requirements:
- Basic programming skills
- Cloudera QuickStart VM, a Windows Hadoop VM, or your own Hadoop setup. You can use the one provided with the course without any issues
- A laptop with at least 6 GB of RAM to run the VM (if using the VM provided in the course). Alternatively, you can do a local installation by following the course
- Having SQL skills would be advantageous
Description:
The course is for those who do not know even the ABCs of Big Data and its tools, want to learn them, and want to become comfortable implementing them in projects. It is also for those who already have some knowledge of Big Data tools but want to deepen it and become comfortable working on projects. Because of the extensive scenario implementation, the course also suits people preparing for Big Data certifications such as CCA 175, and it contains a practice test for CCA 175.
Because the course covers setting up the entire Hadoop platform on your Windows machine (for those with less than 6 GB of RAM) and also provides fully configured VMs to work on, you do not need to keep paying for a cluster to practice the tools. Hence, the course is a ONE-TIME INVESTMENT in a secure future.
In the course, you will learn how to use Big Data tools like Hadoop, Flume, Kafka, and Spark with Scala (some of the most valuable tech skills on the market today).
In this course I will show you how to:
- Use Scala and Spark to analyze Big Data (a minimal Spark SQL sketch follows below).
- Use Sqoop to import data from traditional relational databases into HDFS & Hive.
- Use Flume and Kafka to process streaming data.
- Use Hive to store and view data and to partition tables.
- Use Spark Streaming to consume streaming data from Kafka & Flume (a Kafka streaming sketch also follows below).
Extensive, real-time project scenarios with solutions, written the way you would write them in REAL PROJECTS, are included, and a practice test for the CCA 175 exam is available at the end of the course.
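To give a flavour of the DataFrame work, here is a minimal Spark SQL sketch in Scala. It is only an illustration under assumed inputs: the HDFS path, the orders dataset, and the columns category and amount are hypothetical, not files shipped with the course.
```scala
import org.apache.spark.sql.SparkSession

object SalesReport {
  def main(args: Array[String]): Unit = {
    // Entry point for Spark SQL in Spark 2.x
    val spark = SparkSession.builder()
      .appName("SalesReport")
      .getOrCreate()

    // Read a CSV file from HDFS into a DataFrame (path and columns are illustrative)
    val orders = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/orders.csv")

    // Register the DataFrame as a temporary view and query it with Spark SQL
    orders.createOrReplaceTempView("orders")
    val revenueByCategory = spark.sql(
      """SELECT category, SUM(amount) AS total_revenue
        |FROM orders
        |GROUP BY category
        |ORDER BY total_revenue DESC""".stripMargin)

    revenueByCategory.show()
    spark.stop()
  }
}
```
Packaged as a JAR (for example from IntelliJ) and launched with spark-submit, this is roughly the shape the DataFrame exercises take.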
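And here is a minimal sketch of reading a Kafka topic with Spark Streaming, assuming a Spark 2.2 setup with the spark-streaming-kafka-0-10 artifact on the classpath, as in the course VMs. The broker address localhost:9092, the topic demo-topic, and the consumer group id are placeholder assumptions.
```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

object KafkaWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("KafkaWordCount")
    // Micro-batches every 10 seconds
    val ssc = new StreamingContext(conf, Seconds(10))

    // Kafka consumer settings (broker, topic and group id are placeholders)
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "demo-consumer-group",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](Array("demo-topic"), kafkaParams)
    )

    // Count words in each micro-batch and print the result
    stream.map(_.value())
      .flatMap(_.split("\\s+"))
      .map((_, 1))
      .reduceByKey(_ + _)
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```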
The VMs in the course are configured to work together and have Spark 2.2.0 installed. (The standard Cloudera VM ships with Spark 1.6 and NO KAFKA and requires a Spark upgrade, while the VMs provided in the course have Spark 2.2 configured and working along with Kafka.)
Big Data skills are among the most in-demand right now, and with this course you can learn them quickly and easily! You will also learn about the components of the basic setup in files like "hdfs-site.xml" and "core-site.xml"; they are good to know when working on a project (a small configuration sketch follows below).
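As an aside, the sketch below shows where values from those files surface in code. It assumes a Hadoop configuration directory is on the classpath; fs.defaultFS is a standard core-site.xml property, but the rest is a hypothetical illustration rather than course material.
```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object InspectHadoopConfig {
  def main(args: Array[String]): Unit = {
    // Picks up core-site.xml (and friends) from the Hadoop config directory on the classpath
    val conf = new Configuration()

    // fs.defaultFS is the NameNode address defined in core-site.xml
    println("fs.defaultFS = " + conf.get("fs.defaultFS"))

    // The same configuration drives the HDFS client; list the root directory as a sanity check
    val fs = FileSystem.get(conf)
    fs.listStatus(new Path("/")).foreach(status => println(status.getPath))
  }
}
```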
The course focuses on upskilling people who do not know Big Data tools, with the goal of bringing them up to the mark so they can work on Big Data projects seamlessly.
This course comes with several project scenarios and multiple datasets to work with.
After completing this course you will feel comfortable putting Big Data, Scala, and Spark on your resume, and you will be able to work with them and implement them in projects!
Thanks and I will see you inside the course!
Who this course is for:
- The course is designed for anyone who wants to learn and move into Big Data technologies.
- Those who want a real feel for project-like scenarios while learning the concepts
- A ONE-stop shop for the required Big Data tools, with theory, concepts, practicals, practice scenarios & project scenarios using the Scala programming language
Course Details:
- 38 hours of on-demand video
- 15 downloadable resources
- 2 practice tests
- Full lifetime access
- Access on mobile and TV
- Certificate of completion