
Recommender System Using Collaborative Filtering

A recommender system based on collaborative filtering uses past users' behavior to predict which items the current user would like.

We create a UxM matrix, where U is the number of users and M is the number of distinct items or products. The entry Uij is the rating expressed by user i for product j.

In the real world, not every user expresses an opinion about every product. For example, let us say five users, including Bob, have expressed their opinions about four movies, as shown in Table 1 below:



Table 1:

        movie1  movie2  movie3  movie4
user1   1       3       3       5
user2   2               4       5
user3   3       2               2
user4           1       3       4
Bob     3       2       5       ?

Our goal is to predict which movies to recommend to Bob, or, to put it another way, whether we should recommend movie4 to Bob, given the ratings for the four movies from the other users and from Bob himself.
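Before looking at techniques, it helps to see the rating table as a matrix with missing entries. A minimal sketch in numpy, assuming the reconstruction of Table 1's blank cells shown above (which cells are blank is inferred, not stated explicitly in the original data):

```python
import numpy as np

# Table 1 as a user x movie matrix; np.nan marks a missing rating.
# Rows: user1..user4, Bob; columns: movie1..movie4.
R = np.array([
    [1,      3,      3,      5],
    [2,      np.nan, 4,      5],
    [3,      2,      np.nan, 2],
    [np.nan, 1,      3,      4],
    [3,      2,      5,      np.nan],  # Bob; movie4 is the '?' to predict
])

print(R.shape)            # (5, 4)
print(np.isnan(R).sum())  # 4 missing entries
```

The recommendation problem is then exactly a matrix-completion problem: fill in the `np.nan` entries, in particular Bob's entry for movie4.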

Traditionally, we could do item-to-item comparison: if a user has liked item1 in the past, then that user may like other items similar to item1. Another way to recommend is user-to-user comparison: if two users have similar profiles, then we can recommend items liked by user1 to other users similar to user1.
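User-to-user comparison needs a similarity measure between rating profiles. A minimal sketch using cosine similarity computed over the items both users have rated (the helper name `cosine_sim` and the sample vectors are my own, for illustration):

```python
import numpy as np

def cosine_sim(u, v):
    """Cosine similarity over the items rated by both users (non-NaN in both)."""
    mask = ~np.isnan(u) & ~np.isnan(v)
    if not mask.any():
        return 0.0  # no co-rated items: treat the users as unrelated
    a, b = u[mask], v[mask]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Toy check: two users who agree on every co-rated item are maximally similar.
u = np.array([3.0, 2.0, np.nan, 5.0])
v = np.array([3.0, 2.0, 4.0,    5.0])
print(round(cosine_sim(u, v), 3))  # 1.0
```

Given such a similarity function, a memory-based recommender predicts a missing rating as a similarity-weighted average of the ratings from the most similar users.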

The above are examples of memory-based techniques for recommendation. Model-based techniques, on the other hand, such as SVD, PCA, and probabilistic models, build a model offline. This model is then used to estimate how much a user would like an item, or to recommend the top N items to the user.

Alternating least squares (ALS) is a popular technique for collaborative filtering; matrix-factorization methods of this kind featured prominently in the Netflix Prize winning solutions. We can convert the above table into an NxP matrix, where N is the number of users and P is the number of products for which users have provided implicit or explicit ratings. An implicit rating is one where, instead of asking users how they liked the product, the rating is inferred from other signals, such as how often a user visited the website, how long they stayed on the website, or whether they bought the product. In the real world, this matrix will be very sparse, since not every user will review or rate every product.

Since many of the entries in the table are empty, including the one with '?', the goal is to fill in those entries based on the other users' ratings. ALS does this by factoring the NxP matrix into two smaller matrices: an N x n_factor matrix of user factors and an n_factor x P matrix of product factors, where n_factor is the number of latent factors, a hyperparameter of the model. ALS is an iterative procedure: it alternates between fixing the product factors and solving for the user factors, and vice versa, minimizing a regularized squared-error cost function; the regularization term helps prevent overfitting. The factors are initialized to random values and updated iteratively until the cost converges, and the product of the two factor matrices then fills in the missing entries.
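The alternating updates can be sketched in a few lines of numpy. This is a minimal illustration of the idea, not Spark's implementation; the function name `als` and the values of n_factor, reg, and n_iter are illustrative choices:

```python
import numpy as np

def als(R, n_factor=2, n_iter=50, reg=0.05, seed=0):
    """Minimal ALS: alternately solve regularized least squares for the
    user factors U and the item factors V over the observed entries of R."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    mask = ~np.isnan(R)                  # True where a rating is observed
    U = rng.normal(scale=0.1, size=(n_users, n_factor))
    V = rng.normal(scale=0.1, size=(n_items, n_factor))
    I = reg * np.eye(n_factor)
    for _ in range(n_iter):
        for i in range(n_users):         # fix V, solve for each user's factors
            m = mask[i]
            U[i] = np.linalg.solve(V[m].T @ V[m] + I, V[m].T @ R[i, m])
        for j in range(n_items):         # fix U, solve for each item's factors
            m = mask[:, j]
            V[j] = np.linalg.solve(U[m].T @ U[m] + I, U[m].T @ R[m, j])
    return U, V

# The Table 1 matrix (blank-cell placement is an assumed reconstruction).
R = np.array([
    [1,      3,      3,      5],
    [2,      np.nan, 4,      5],
    [3,      2,      np.nan, 2],
    [np.nan, 1,      3,      4],
    [3,      2,      5,      np.nan],
])
U, V = als(R)
pred = U @ V.T
print(np.nanmax(np.abs(pred - R)))  # worst-case error on the observed entries
print(pred[4, 3])                   # Bob's predicted rating for movie4
```

The `U @ V.T` product is dense, so every missing cell, including Bob's movie4, receives a predicted rating.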

Apache Spark's MLlib has an ALS.trainImplicit() function that takes an RDD of (userId, productId, rating) tuples, a rank value that corresponds to n_factor, and a seed for the random number generator. For example, a call to the function might be:

model = ALS.trainImplicit(trainData, rank=10, seed=100)

To find the right value of rank, we can try different values, or use cross-validation to find the optimum.
To get the top five recommendations for a user, call:

recommended_products = [x.product for x in model.recommendProducts(userId, 5)]
