Overview of Recommender Algorithms – Part 4

This is the fourth in a multi-part post. In the first post, we introduced the main types of recommender algorithms by providing a cheatsheet for them. In the second post, we covered the different types of collaborative filtering algorithms, highlighting some of their nuances and how they differ from one another. In the third post, we described content-based filtering in more detail. In this blog post, we’ll present hybrid recommenders, which build upon the algorithms we’ve discussed so far. We’ll also briefly discuss how the popularity of items can be used to address some of the limitations of collaborative and content-based filtering approaches.

Hybrid approaches combine user and item content features with usage data so that the recommender benefits from both types of data. A hybrid recommender that combines algorithms A and B tries to use the advantages of A to fix the disadvantages of B. For example, collaborative filtering (CF) algorithms suffer from the new-item problem: they cannot recommend items that have no ratings or usage. This does not limit content-based algorithms, since the prediction for a new item is based on its content (features), which is typically available when the item enters the system. By creating a hybrid recommender that combines collaborative filtering and content-based filtering, we can overcome some of the limitations of the individual algorithms, such as the cold-start problem and popularity bias. We outline some of the different ways of combining two (or more) basic recommender techniques to create a new hybrid system in Table 1.

Table 1: Different ways of combining two (or more) basic recommender algorithms to create a new hybrid algorithm.
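To make the new-item example above concrete, here is a minimal Python sketch of a switching-style hybrid: use collaborative filtering when an item has usage data, and fall back to content-based scoring for brand-new items. The function and argument names (`hybrid_score`, `cf_score`, `cb_score`, `item_ratings`) are illustrative assumptions, not code from this series.

```python
def hybrid_score(user, item, item_ratings, cf_score, cb_score):
    """Score one item for one user, switching between two recommenders.

    item_ratings : dict mapping item -> {user: rating}, i.e. the usage data
    cf_score     : function(user, item) -> float from a collaborative filter
    cb_score     : function(user, item) -> float from a content-based model
    """
    # A brand-new item has no ratings yet, so collaborative filtering has
    # nothing to work with; fall back to the content-based score, which
    # only needs the item's features.
    if not item_ratings.get(item):
        return cb_score(user, item)
    return cf_score(user, item)
```

The same shape works for any pair of component recommenders; only the switching condition changes.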

Say that we have some users who have expressed preferences for a range of books. The more they like a book, the higher the rating they give it, on a scale of one to five. We can represent their preferences in a matrix, where the rows contain the users and the columns the books (Figure 1).

Figure 1: User preferences for books. All preferences are on a scale of 1-5, 5 being the most liked. The first user (row 1) has a preference for the first book (column 1) given by a rating of 4. Where a cell is empty, the user has not given a preference for the book.
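If you want to play along at home, a preference matrix like the one in Figure 1 can be held in a small pandas DataFrame, with NaN marking the books a user has not rated. Only the 4 in row 1, column 1 comes from the caption above; the shape and every other value below are made-up placeholders so the example runs.

```python
import numpy as np
import pandas as pd

# Rows are users, columns are books; NaN means "no preference given".
# Only ratings.iloc[0, 0] = 4 comes from Figure 1's caption -- every other
# value (and the 4 x 6 shape) is a placeholder for illustration.
ratings = pd.DataFrame(
    [[4,      3,      5,      np.nan, np.nan, np.nan],
     [np.nan, 5,      4,      3,      np.nan, 4     ],
     [5,      np.nan, np.nan, 4,      3,      np.nan],
     [np.nan, 4,      np.nan, 5,      4,      3     ]],
    index=["user_1", "user_2", "user_3", "user_4"],
    columns=[f"book_{i}" for i in range(1, 7)],
)
print(ratings)
```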

Using this setup, in Part 2 of this series we worked through two examples showing how to compute recommendations using item- and user-based collaborative filtering algorithms, and in Part 3 we showed how to use content-based filtering to generate recommendations. We’ll now combine these three algorithms to create a new hybrid recommender. We’ll use the weighted method (Table 1), which combines the output of several techniques (three in this case) with different degrees of importance (i.e. weights) to offer a new set of recommendations.

Let’s take the first user and generate some recommendations for them. First, we take the recommendations from user- and item-based collaborative filtering (CF) from Part 2 and from content-based filtering (CB) from Part 3 (Figure 2). It is worth noting that even on this small toy example, the three approaches generate slightly different recommendations for the same user, even though they all receive the same input.

Figure 2: Recommendations for a user from user-based CF, item-based CF and content-based filtering.

Next, we generate recommendations for the given user using a weighted hybrid recommender, putting 40% of the weight on user-based CF, 30% on item-based CF and 30% on content-based filtering (Figure 3). In our example, the user would be recommended all three books that they have not yet rated, compared to just two book recommendations from each of the individual algorithms.

Figure 3: Generating recommendations for a user using a weighted hybrid recommender by putting 40% weight on user-based CF, 30% on item-based CF and 30% on content-based filtering.
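The weighted combination in Figure 3 is only a few lines of code. The sketch below assumes each component algorithm returns a score per unrated book on the same scale; the book names and scores are placeholders, since the actual numbers live in the screenshots above.

```python
# Predicted scores for the first user's unrated books from each component
# recommender (placeholder values -- not the actual numbers in Figure 2).
user_cf = {"book_4": 4.2, "book_5": 3.6}
item_cf = {"book_4": 3.9, "book_6": 4.0}
content = {"book_5": 4.4, "book_6": 3.3}

# Weighted hybrid: 40% user-based CF, 30% item-based CF, 30% content-based.
weights = [(user_cf, 0.4), (item_cf, 0.3), (content, 0.3)]

hybrid = {}
for scores, weight in weights:
    for book, score in scores.items():
        hybrid[book] = hybrid.get(book, 0.0) + weight * score

# Every book surfaced by at least one component gets a hybrid score, which is
# why the user now receives all three unrated books instead of just two.
recommendations = sorted(hybrid.items(), key=lambda kv: kv[1], reverse=True)
print(recommendations)  # book_4, book_5, book_6 (scores ~2.85, 2.76, 2.19)
```

One design choice to be aware of: a book that a component does not score implicitly gets a zero from it, which penalises that book. An alternative is to renormalise by the total weight of the components that actually scored the item.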

Although hybrid approaches address some of the big challenges and limitations of CF and CB methods (see Table 3), they also require a lot of work to get the balance between the different algorithms in the system right. Another way of combining individual recommender algorithms is to use ensemble methods, where we learn a function (i.e. train an ensemble) for how to mix the results of the different methods. It is worth noting that ensembles usually combine not only different algorithms, but also different variations/models based on the same algorithm. For example, the winning solution to the Netflix Prize consisted of over 100 different models from more than 10 different algorithms (popularity, neighbourhood methods, matrix factorisation, restricted Boltzmann machines, regression and more), which were combined in an ensemble using gradient boosted decision trees.
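To give a flavour of the ensemble idea, the sketch below learns the blending weights from held-out ratings with a simple linear regression instead of hand-picking 40/30/30. This is a toy stand-in for the gradient boosted decision trees used in the Netflix Prize solution; the matrix of component predictions and the held-out ratings are assumed, illustrative inputs.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row holds the predictions of the three component recommenders for one
# (user, book) pair in a held-out set; y holds the true ratings.
# The numbers are made up purely for illustration.
X = np.array([[4.2, 3.9, 4.4],
              [3.6, 4.0, 3.3],
              [2.8, 3.1, 3.5],
              [4.5, 4.4, 4.1],
              [3.2, 2.9, 3.0]])
y = np.array([4.0, 3.5, 3.0, 5.0, 3.0])

# Fit the blender: the learned coefficients play the role of the weights
# we previously set by hand (0.4 / 0.3 / 0.3).
blender = LinearRegression().fit(X, y)
print(blender.coef_, blender.intercept_)

# Blend new component predictions into a single hybrid score.
new_predictions = np.array([[3.9, 4.1, 3.7]])
print(blender.predict(new_predictions))
```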

It’s also worth adding that popularity-based approaches are a good solution to the new-user cold-start problem. These approaches rank items using some measure of popularity, such as most downloaded or most purchased, and recommend these popular items to new users. It’s a basic but powerful approach when you have a good measure of popularity, and it often provides a good baseline against which to compare other recommender algorithms. Popularity can be used on its own to bootstrap a recommender system, gathering enough activity and usage from a user before switching to approaches that better model user interests, such as collaborative filtering and content-based filtering. Popularity models can also be included in hybrid approaches, allowing them to address the new-user cold-start problem.
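A popularity baseline really is only a few lines: count how often each item has been interacted with (purchases, downloads, ratings) and serve the top-N to users with no history. The interaction log below is an assumed toy input.

```python
from collections import Counter

# Toy interaction log: one entry per (user, item) event such as a purchase
# or download. Purely illustrative data.
interactions = [
    ("user_1", "book_1"), ("user_2", "book_1"), ("user_3", "book_1"),
    ("user_2", "book_3"), ("user_4", "book_3"),
    ("user_1", "book_2"),
]

def most_popular(interactions, n=2):
    """Return the n most interacted-with items -- a simple baseline for
    brand-new users with no usage history of their own."""
    counts = Counter(item for _user, item in interactions)
    return [item for item, _count in counts.most_common(n)]

print(most_popular(interactions))  # ['book_1', 'book_3']
```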

 
