Hello. The subject of this lesson is non-personalized recommender systems. From now on, I will give you only the basic implementations. All the optimization is left for you as an assignment. And I bet you are already comfortable with it, as you have developed all the necessary skills over the courses of this specialization. Even if it sounds silly, there are a number of obstacles on the way to implementing a non-personalized recommender system.

As an example, I took a movie dataset called MovieLens, which is publicly available for educational purposes. Without much hesitation, let me build the top-rated movies recommendation list. We have here ratings.csv, which contains a comma-separated list of userId, movieId, rating, and timestamp. Rating is a value between zero and five; the higher, the better. Movies.csv contains a comma-separated list of movieId, title, and genres. We can easily find the average score by aggregating by movieId. Here is the snippet of the Spark code to do the calculation. You read ratings.csv, skip the header line, parse the lines into rating data structures, extract the movieId and rating value, group by movieId, and finally calculate the average rating. When you run this code several times, you will see different output each time. Moreover, far from all the items you see on the screen are really popular. Any ideas why? The reason is that some movies have only very few ratings. By very few I mean one.

The damped means technique is a solution to this problem. From a mathematical point of view, previously we had the following formula for the movie score: the numerator is the sum of all the ratings for a movie, and the denominator is the number of ratings for the movie. The damped means assumption is that each movie was additionally rated by k hidden users with the value µ. From the mathematical point of view, it looks the way you see on the screen.
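The on-screen Spark snippet and formula are not reproduced in this transcript, so here is a minimal sketch of both the plain average and the damped mean, score(m) = (sum of ratings + k·µ) / (count + k), in plain Python. The handful of ratings and the value k = 5 are made up for illustration, not taken from MovieLens.

```python
from collections import defaultdict

# Toy (userId, movieId, rating) triples -- a hypothetical sample, not real MovieLens data.
ratings = [
    (1, 10, 5.0), (2, 10, 4.0), (3, 10, 4.5), (4, 10, 5.0),
    (1, 20, 5.0),                 # movie 20: a single 5-star rating
    (2, 30, 2.0), (3, 30, 3.0),
]

sums, counts = defaultdict(float), defaultdict(int)
for _, movie, r in ratings:
    sums[movie] += r
    counts[movie] += 1

# Plain average: movie 20 tops the list thanks to its lone 5-star rating.
plain = {m: sums[m] / counts[m] for m in sums}

# Damped mean: pretend k hidden users rated every movie with the global mean mu,
# which pulls rarely rated movies toward the dataset average.
mu = sum(r for _, _, r in ratings) / len(ratings)
k = 5  # damping factor, tunable
damped = {m: (sums[m] + k * mu) / (counts[m] + k) for m in sums}
```

With the damped scores, movie 10 with four strong ratings overtakes movie 20 and its single rating, which is exactly the stable output the lecture describes.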
In simple words, if you have only a few ratings for a movie, then the movie's rating should be close to the dataset's average rating, the value µ. The more ratings per movie you have, the more confident you are about the movie's average rating value provided by users. Let us fix our non-personalized recommender with the help of damped means. You see? Much better output, and it is now stable.

When people see the same popular-items recommendation every day, it becomes boring, and the situation becomes tense. But as a good recommender service provider, you can take temporal effects into consideration to get the most popular items for the last week, the last month, the previous year, and so on. The MovieLens dataset provides a timestamp for each rating event, so you can easily do it with a Spark filter statement. Trending recommendations can be gathered with the help of a piece of mathematical sugar called exponential decay. From the user's point of view, ratings provided in the past become obsolete over time. In other words, old ratings may not correctly reflect the current user preferences. If we go to extremes, try to remember yourself at the age of five or ten. Do you still like all the movies from your childhood? Do you really remember them in all the details? Compare your feelings and memories about movies that you have seen today with those you saw a week ago, a month ago, a year ago, and so on. Compared to the old experience, recent experience has far more influence on you and your memories. You can experiment with a decay factor to see the difference in trending movie recommendations.

As a side note, here we expect the dataset to be static, but in reality users provide feedback every day, every hour, and every minute. You can build your recommender service to consume these data continuously and automatically update recommendations. How to work with streaming data, and what frameworks are available for you, you will learn in the next course.
There will be a lot of practical exercises. Get ready. Moving forward with non-personalized recommendations: I bet you know a lot of examples of websites which provide recommendations in the following way: people who like item A also like item B, or customers who bought item A also bought item B. Let me reveal the secret of how it works. In the literature it is known as basket recommendations, or market basket analysis.

First, we need to create baskets of liked items, for example movies, for each user. In the MovieLens dataset we can binarize ratings into two categories: liked items, having a rating value greater than or equal to four, and all other items. The value of four is chosen empirically, and it is not always the best choice. You can experiment with this value to see how recommendations change. So you have baskets. You choose an arbitrary item A, and your task is to generate a recommendation of the form: people who like item A also like item B. Any ideas? And of course, the naive implementation that naturally comes to mind will not work. Do you remember the story about bananas from the previous video? It is exactly what I am talking about. If you take a look at each item B and count how many times it occurs in the same basket with A, then you will likely get just the most popular items and disregard the information about A.

There are several approaches to overcome the issue. First, you can adjust the value by the overall popularity of B; the metric that you see on the screen is called the lift metric. Second, you can use the following metric, which expresses that having item A in the basket makes item B more likely, compared to the baskets where we do not have item A. This type of recommendation requires you to calculate a co-occurrence matrix. The definition of the co-occurrence matrix is pretty straightforward: it is a square matrix where both rows and columns correspond to items in the same order.
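The on-screen metric is not shown in the transcript; a minimal sketch, assuming the standard definition lift(A, B) = P(A and B) / (P(A)·P(B)), follows. The baskets are made up, with item C playing the role of the bananas: it co-occurs with A just as often as B does, but only because it is popular everywhere.

```python
from collections import defaultdict
from itertools import combinations

# Baskets of liked items per user -- hypothetical sample.
baskets = [
    {"A", "B", "C"},
    {"A", "B"},
    {"A", "C"},
    {"B", "C"},
    {"C"},          # C shows up on its own: popular regardless of A
]

n = len(baskets)
count = defaultdict(int)   # baskets containing each item
pair = defaultdict(int)    # baskets containing each (sorted) item pair
for b in baskets:
    for item in b:
        count[item] += 1
    for x, y in combinations(sorted(b), 2):
        pair[(x, y)] += 1

def lift(a, b):
    """lift > 1: b is more likely in baskets that contain a than overall."""
    a, b = sorted((a, b))
    return (pair[(a, b)] / n) / ((count[a] / n) * (count[b] / n))
```

Raw co-occurrence counts tie B and C as recommendations for A, while lift ranks B above C and pushes the banana-like item below 1, i.e. A carries no extra signal for C.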
The row-column intersection represents the number of times the corresponding pair of items occurs together, for example, the number of baskets in which the pair appears together. If you have thousands of items, then you will be able to compute it within a second. But if you have millions of items, then it becomes a real challenge. Millions of items is not difficult to imagine: I personally worked as a senior software development engineer for the biggest e-commerce website in Russia in 2014, and we had 60 million distinct items. How to make these calculations efficient is a job for a data engineer. IT companies require your knowledge and your skills to complete these kinds of tasks, and I am sure that you are already confident enough to do it by yourself.

Summary of this video. You can build a non-personalized recommender system and adjust it for rarely rated items with the help of damped means. You can build a non-personalized recommender system taking temporal effects into consideration with the help of exponential decay. You can explain the concept of basket recommendations and how to overcome the bananas problem. Do you remember which metrics to use for it? Though a non-personalized recommender system sounds easy, for a big e-commerce website you need to think about efficient calculations. More on this is waiting for you in the assignments section.