The paper starts from a healthy assertion that recommenders do not always generate good recommendations for users. The authors propose Human-Recommender Interaction (HRI) as a framework and methodology for understanding users, their tasks, and how they relate to recommender algorithms. They propose that HRI can be a bridge between user information-seeking tasks and recommender algorithms; when applied in the HRI Analytic Process Model, it becomes a constructive model to support the process of designing a recommender.
As information density grows, users have more specific needs in their information seeking, and HRI can be used to describe these needs. So, thinking about myself and my research area, I would first have to describe user types (probably I could use the LRE logs to deduce these) and typical domain tasks (these would have to be guesstimates that I test with a focus group). The authors suggest Hackos 1998 for this, but it looks pretty dated. A detailed analysis of these tasks would then let us link each task to specific HRI aspects.
HRI Aspects, the three pillars:
- the Recommendation Dialogue: the act of giving information to, and receiving one recommendation list from, a recommender. This covers aspects like Correctness, Transparency, Saliency, Serendipity, Quantity, Usefulness, Spread and Usability. The authors argue that a recommender's purpose is to generate salient recommendations that strike an emotional response (the awe factor!)
- the Recommender Personality (uh, I don't like that term): the user's perception of the recommender over a period of time. Aspects include personalisation, boldness, adaptability, trust/first impression, risk taking/aversion, affirmation, pigeonholing and freshness.
- User Information Seeking Tasks: the reason the user came to the recommender system. Aspects include concreteness of the task, task compromising, recommender appropriateness, expectations of recommender usefulness, and the recommender's importance in meeting the user's needs. Check out Case 2002.
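To make the task-to-aspect linkage concrete for myself, here is a minimal sketch of how the three pillars and a task→aspects mapping could be written down as data. The task names and their assigned aspects are my own guesstimates for illustration, not taken from the paper:

```python
# The three HRI pillars and their aspects, as listed in the paper.
HRI_ASPECTS = {
    "dialogue": ["correctness", "transparency", "saliency", "serendipity",
                 "quantity", "usefulness", "spread", "usability"],
    "personality": ["personalisation", "boldness", "adaptability",
                    "trust", "risk_taking", "affirmation",
                    "pigeonholing", "freshness"],
    "task": ["concreteness", "compromising", "appropriateness",
             "expected_usefulness", "importance"],
}

# Guesstimate mapping from hypothetical domain tasks to the aspects
# they stress (to be validated, e.g. with a focus group).
TASK_ASPECTS = {
    "find_known_resource": ["correctness", "usability", "trust"],
    "explore_new_topic": ["serendipity", "spread", "risk_taking"],
}

def aspects_for(task):
    """Return the HRI aspects a given task emphasises (empty if unknown)."""
    return TASK_ASPECTS.get(task, [])
```

Even this toy version makes the framework feel operational: once tasks are enumerated, each one points at a concrete subset of aspects to measure.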
The HRI Analytic Process Model can help to analyse and redesign recommenders to better meet user information needs. (Can it also help to design them in the first place?) For example, it could help to understand whether a user would be content with risky recommendations or prefers ones that affirm her information-seeking needs.
Moreover, the authors say that by looking at which HRI aspects matter to which task, metrics can be designed (I would be very interested in those metrics!) to characterise the differences between tasks. These metrics could be used to benchmark the known algorithms, and thus help to choose the proper one for each task. As the authors state, a recommender should have a set of algorithms to draw on, instead of being a "one for all users" type of system.
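The benchmarking idea above could look something like this sketch: score each known algorithm on HRI-inspired metrics, weight the metrics per task, and pick the best-scoring algorithm. All algorithm names, metric scores and weights below are invented for illustration; the paper does not give the actual metrics:

```python
# Hypothetical benchmark scores per algorithm on two illustrative
# HRI-inspired metrics, each in [0, 1].
SCORES = {
    "item_knn":    {"correctness": 0.9, "serendipity": 0.2},
    "user_knn":    {"correctness": 0.8, "serendipity": 0.5},
    "random_walk": {"correctness": 0.6, "serendipity": 0.9},
}

# Per-task weights over the metrics: which aspects the task stresses.
TASK_WEIGHTS = {
    "find_known_resource": {"correctness": 1.0, "serendipity": 0.1},
    "explore_new_topic":   {"correctness": 0.3, "serendipity": 1.0},
}

def pick_algorithm(task):
    """Choose the algorithm with the highest weighted metric score for a task."""
    weights = TASK_WEIGHTS[task]
    def score(alg):
        return sum(w * SCORES[alg].get(metric, 0.0)
                   for metric, w in weights.items())
    return max(SCORES, key=score)
```

With these made-up numbers, a concrete known-item task would select the accuracy-heavy algorithm while an exploratory task would select the serendipity-heavy one, which is exactly the "set of algorithms instead of one for all users" point.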
Questions for Mr. Riedl
- What are the metrics, any examples?
- What are the outcomes of the simulations against the well-known algorithms? Did the mapping between tasks and algorithms materialise, and if not, how close did it come? Is more information available? The paper mentions these results are submitted and under review. Whom to contact?
More on HRI, a PhD thesis by McNee: http://www-users.cs.umn.edu/~mcnee/mcnee-thesis-preprint.pdf
Research statement by the above: http://www-users.cs.umn.edu/~mcnee/mcnee-research-statement.pdf