Saturday, May 05, 2007

The LibraryThing Recommender

I knew that LibraryThing.com had plans to work on a recommender for books, and it seems it's out now. It's called LibrarySuggester; you type in the name of any book you own or have read, and the system spits out suggestions in several categories:
  • People with this book also have... (v1)
  • Special sauce recommendations!
  • Books with similar tags
  • Books with similar library subjects and classifications
  • Amazon recommendations
  • People with this book also have... (v2)
LibraryThing Suggester analyses the more than thirteen million books and sixteen million tags LibraryThing members have added, and comes back with reading suggestions. Amazon suggestions come from Amazon.com, not LibraryThing.
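The first and last categories look like straightforward co-occurrence: books that turn up together in many members' libraries. As a back-of-the-envelope illustration (my own sketch, not LibraryThing's actual algorithm; the function and toy data are made up), the simplest version could be counted like this:

```python
from collections import Counter

def also_have(libraries, book, top_n=10):
    """Count how often other books co-occur with `book`
    across members' libraries; most frequent first."""
    counts = Counter()
    for library in libraries:
        if book in library:
            for other in library:
                if other != book:
                    counts[other] += 1
    return counts.most_common(top_n)

# Toy data: each list is one member's library.
libraries = [
    ["Dune", "Foundation", "Hyperion"],
    ["Dune", "Foundation", "Neuromancer"],
    ["Dune", "Hyperion"],
]
print(also_have(libraries, "Dune"))
# -> [('Foundation', 2), ('Hyperion', 2), ('Neuromancer', 1)]
```

A real system would presumably normalize by popularity so that ubiquitous books don't dominate every list, but the counting idea is the same.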

Crowdsourcing

13 million books and 16 million tags, holy cow! That's a serious amount of data that people have entered into the system of their own free will! Just imagine trying to do the same before the Web made crowdsourcing possible: it would have taken an enormous number of man-hours to enter people's likes and dislikes in books into a recommender system as input for computing recommendations, let alone the ratings, evaluations and discussions people have added on top.

This is exactly the path we want to take with learning resources: first create a tool that lets teachers build collections of their favourite learning resources, and then use those collections to serve them better with recommendations.

Transparency

When I look at the recommendations from LibrarySuggester, what I like is that they are clearly grouped into different classes, each labelled with what it is based on. It is nice, as a user, to get the reasoning behind a suggestion: ah, I was recommended this book because other "people with this book also have" it, or I know that it is based on similar tags, and so on.

Being transparent about recommendations has also been argued for in previous research in the field, and it seems to be something people appreciate, as opposed to "black box" recommendations where the user has no idea what the suggestions are based on (Swearingen & Sinha, 2001; Rafaeli et al., 2005).
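In code terms, transparency just means carrying the evidence along with each suggestion instead of throwing it away after scoring. A minimal sketch (the categories and evidence strings below are illustrative, not LibraryThing's internals):

```python
# Each recommendation keeps the reason it was generated, so the UI
# can show "why" next to "what". The data is purely illustrative.
recommendations = [
    ("Foundation", "people with this book also have it",
     "shared by 2,412 members"),
    ("Hyperion", "similar tags",
     "science fiction, space opera"),
    ("The Left Hand of Darkness", "similar library subjects",
     "Science fiction -- Fiction"),
]

for title, source, evidence in recommendations:
    print(f"{title} -- because {source} ({evidence})")
```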

List of recommendations

I also like that LibrarySuggester offers a whole list of recommendations, as opposed to just one or a few to choose from. However, my list contained 74 recommendations altogether, which I find way too many!

There are also some really obvious ones, like books by the same author, which hardly count as salient recommendations. McNee et al. (2006) talk about a "similarity hole" that item-item collaborative filtering algorithms can trap users in by only giving similar recommendations. They argue that the old-school accuracy metrics should be taken with caution, as they are only designed to judge the accuracy of individual items, not the list as a whole. Thus, "the recommendation list should be judged for its usefulness as a complete entity, not just as a collection of individual items."
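One common way to judge a list as a whole rather than item by item is to measure how similar its members are to each other, for example as the average pairwise overlap of their tags; a high score flags exactly the "similarity hole" above. A toy sketch (the metric choice and the tag sets are mine, not McNee et al.'s exact method):

```python
from itertools import combinations

def jaccard(a, b):
    """Tag-set overlap: 0 = disjoint, 1 = identical."""
    return len(a & b) / len(a | b)

def intra_list_similarity(tag_sets):
    """Average pairwise similarity over one recommendation list.
    High values mean the list is a pile of near-clones."""
    pairs = list(combinations(tag_sets, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Toy tag sets for a 3-item recommendation list.
recs = [
    {"sf", "desert", "politics"},
    {"sf", "desert", "empire"},   # near-clone of the first item
    {"fantasy", "humor"},
]
print(round(intra_list_similarity(recs), 2))  # -> 0.17
```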

Moreover, within the same framework, called Human-Recommender Interaction, these folks identify three aspects of recommendations that should be improved: similarity (discussed above), recommendation serendipity, and the importance of user needs and expectations in a recommender.

Serendipity

Take the list of "Special sauce recommendations!" for Dune by F. Herbert. At the top of the 20-book list you find two of his other books and two by B. Herbert, his son. That is rather dull and not at all surprising; you could find those easily in a bookstore too. By serendipity, the authors mean how unexpected and novel a recommendation is for the user. For me personally this is a very important factor, and it is why I like the idea of recommenders as opposed to plain content-based retrieval of resources.
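A crude way to nudge a list towards serendipity would be to demote the suggestions any bookstore clerk would make anyway, such as other books by the seed book's author. A sketch under that assumption (the rule and the candidate format are mine, not anything LibraryThing or McNee et al. prescribe):

```python
def demote_obvious(seed_author, candidates):
    """Push same-author books to the end of the list; what's left
    at the top has at least a chance of being surprising."""
    surprising = [c for c in candidates if c["author"] != seed_author]
    obvious = [c for c in candidates if c["author"] == seed_author]
    return surprising + obvious

candidates = [
    {"title": "Dune Messiah", "author": "Frank Herbert"},
    {"title": "The Stars My Destination", "author": "Alfred Bester"},
    {"title": "Children of Dune", "author": "Frank Herbert"},
]
for c in demote_obvious("Frank Herbert", candidates):
    print(c["title"])
# -> The Stars My Destination, Dune Messiah, Children of Dune
```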

I won't discuss the importance of user needs and expectations in a recommender, as in this case it is pretty clear. In other settings, though, like learning resources, this becomes very important, as teachers have different tasks at hand when they look for learning resources. This is something I've blogged about before and will keep exploring in my own research context.



McNee, S.M., Riedl, J., and Konstan, J.A. (2006). "Being Accurate is Not Enough: How Accuracy Metrics have hurt Recommender Systems". In Extended Abstracts of the 2006 ACM Conference on Human Factors in Computing Systems (CHI 2006), Montreal, Canada, April 2006.

Rafaeli, S., Dan-Gur, Y., and Barak, M. (2005). "Social Recommender Systems: Recommendations in Support of E-Learning". Journal of Distance Education Technologies, 3(2), 29-45, April-June 2005.

Swearingen, K., and Sinha, R. (2001). "Beyond Algorithms: An HCI Perspective on Recommender Systems". ACM SIGIR 2001 Workshop on Recommender Systems.
