Tuesday, January 13, 2009

Modelling the portal ecology: What goes around comes around

The three main actions on the portal are: discovering resources, playing them and annotating them. There are two main groups of users: those who are logged in and those who are not.

I've divided the resource discovery process into three categories (following Millen et al.):
  1. Explicit search
  2. Community search
  3. Personal search
Play is when the user clicks on the link. We also call this an implicit interest indicator; however, we cannot be sure whether the resource was actually relevant to the user. Worth noting anyway. This is also called "hits" or "click-through" in some lingo.

Annotation is when the user makes an explicit interest marking (indicator) on the resource; currently this can be either a rating (usefulness, on a scale of 1 to 5) or a bookmark with tags. Both of these actions are public.

About users and logs

In general terms, we record all kinds of clicks and actions on the portal (see here). I studied the logs from the last 2.5 months. We have 340 users who have a user name; excluding staff etc., that leaves 168 "real" users. Of those, 82 had clicked on a resource on the portal at least once, so these users appear in the logs. Additionally, there are users who do not log in, but I currently have no idea how many there are (check Analytics). There were 13,604 actions recorded: 40% by users who were logged in and 60% by those who were not.
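The logged-in/anonymous split above is just a tally over the action log. A minimal sketch of that tally, assuming each log record carries a logged-in flag; the five records here are invented for illustration (they happen to give the same 40/60 split), not actual MELT data:

```python
from collections import Counter

# Hypothetical log records; the real logs have more fields (timestamp, action
# type, resource id, etc.). These five entries are purely illustrative.
actions = [
    {"action": "play", "logged_in": True},
    {"action": "search", "logged_in": False},
    {"action": "rate", "logged_in": True},
    {"action": "search", "logged_in": False},
    {"action": "bookmark", "logged_in": False},
]

counts = Counter("logged-in" if a["logged_in"] else "anonymous" for a in actions)
total = sum(counts.values())
for group, n in counts.items():
    # With this toy data: logged-in 2 (40%), anonymous 3 (60%)
    print(f"{group}: {n} actions ({100 * n / total:.0f}% of {total})")
```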

In general, the relationship between these three actions is important on the portal: it can indicate something about how efficiently users get what they want, as well as how the system gets what it needs to keep going. In our case, we are looking at how Social Information (SI) can help discovery. A prerequisite is to have SI available, so the system needs ratings and bookmarks.

For the contributing (i.e. logged-in) users on the MELT portal:
  • 2 searches result in one play;
  • 2.6 searches result in one annotation (either a rating or a bookmark);
  • 1.3 plays result in one annotation.

For comparison, in Calibrate the figures were the following:
  • 0.5 searches result in one play;
  • 5.7 searches result in one annotation (either a rating or a bookmark);
  • 11.3 plays result in one annotation.
I will study this further too. At a quick glance, in a system that emphasises Social Information both for the user's own benefit (Favourites) and for everyone's benefit (Community browsing), as is the case with MELT, the loop for getting annotations is more efficient than in a system that does not make use of such information (e.g. Calibrate). The search-to-annotation ratio is 2.6 to 1 in MELT, whereas in Calibrate it is 5.7 to 1.
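The three ratios are all derived from the same raw totals, and they constrain each other (searches per annotation = searches per play × plays per annotation). A minimal sketch, with illustrative counts chosen to reproduce the MELT figures rather than taken from the actual logs:

```python
# Illustrative raw totals, not the real MELT log counts: 260 searches,
# 130 plays, 100 annotations reproduce the ratios reported above.
searches, plays, annotations = 260, 130, 100

print(f"searches per play:       {searches / plays:.1f}")        # 2.0
print(f"searches per annotation: {searches / annotations:.1f}")  # 2.6
print(f"plays per annotation:    {plays / annotations:.1f}")     # 1.3
```

Note that 2.0 × 1.3 = 2.6, so the three figures are internally consistent.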

However, if we look at the ratio of "hits", the Calibrate system has been four times more efficient: one search led to two plays, whereas in MELT it took two searches for one play. The MELT search function has been under constant development for speed, which has been somewhat problematic due to the huge amount of content. I will report on the same ratio again after our latest optimisation effort.

What goes around comes around

Graph 1 depicts what is going on on the portal. I will explain this in more detail later. For each action I have indicated the percentage of the total, e.g. "Explicit search 78%" means that 78% of all Explicit searches were executed by non-logged-in users.

Graph 1
