Data Analysis Using Pyspark

4.4
176 ratings
Offered by
Coursera Project Network
6,376 already enrolled
In this Guided Project, you will:

Learn how to set up Google Colab for distributed data processing

Learn how to apply different queries to your dataset to extract useful information

Learn how to visualize this information using matplotlib

1.5 hours
Intermediate
No download needed
Split-screen video
English
Desktop only

One of the important topics every data analyst should be familiar with is distributed data processing. As a data analyst, you should be able to apply different queries to your dataset to extract useful information from it. But what if your data is so big that working with it on your local machine is not feasible? That is when distributed data processing and Spark technology come in handy. In this project, we are going to work with the pyspark module in Python, using the Google Colab environment, to apply queries to a dataset from the lastfm website, an online music service where users can listen to different songs. The dataset consists of two CSV files, listening.csv and genre.csv. We will also learn how to visualize our query results using matplotlib.

Skills you will develop

  • Google colab
  • Data Analysis
  • Python Programming
  • pySpark SQL

Learn step-by-step

In a video that plays in a split-screen with your work area, your instructor will walk you through these steps:

  1. Prepare Google Colab for distributed data processing

  2. Mount our Google Drive into the Google Colab environment

  3. Import the first file of our dataset (1 GB) into a pySpark dataframe

  4. Apply some queries to extract useful information out of our data

  5. Import the second file of our dataset (3 MB) into a pySpark dataframe

  6. Join the two dataframes and prepare them for more advanced queries

  7. Learn to visualize our query results using matplotlib

How Guided Projects work

Your workspace is a cloud desktop right in your browser; no download is required

In a split-screen video, your instructor guides you step-by-step

Instructor

Reviews

Top reviews from DATA ANALYSIS USING PYSPARK


Frequently asked questions

Still have questions? Visit the Learner Help Center