The question now is: how and where do users discover cross-border resources? By "cross-border" I mean that the user and the content come from different countries, and/or that the content is in a language other than the user's mother tongue.
One challenge is discovering resources in general (see my previous post); another is discovering resources that are in a different language and come from a different country. This is the case on our portal, so I'm interested in how to facilitate, and hopefully make more efficient, the discovery process that involves crossing those boundaries (language and nationality, which can have implications for the educational content of the resource).
My take is Social Information: making cues left by other users readily available to everyone. I bet that this facilitates the discovery process and thus also makes it more efficient (more resources, found faster).
So I looked at the previous data and the users' cross-boundary bookmarks. Cross-boundary is relative to the user, of course, so for every bookmark I check whether the resource and the user come from the same country and/or whether the user's mother tongue differs from the resource language.
41 out of 48 active users had bookmarked cross-boundary resources. In total they had added:
- 299 distinct resources to their collections, 350 times in all. Of these resources,
- 163 were cross-boundary, and they had been added to collections 198 times.
- This means that 55% of the distinct resources bookmarked during the 1.5-month period were cross-boundary resources.
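The per-bookmark check can be sketched roughly as follows. This is a minimal sketch, not the portal's actual code: the field names (`country`, `mother_tongue`, `language`) and the sample data are my own placeholders.

```python
# Minimal sketch of the cross-boundary check; field names and sample data
# are hypothetical. A bookmark is cross-boundary if the user and the
# resource come from different countries and/or the resource language
# differs from the user's mother tongue.

def is_cross_boundary(user, resource):
    return (user["country"] != resource["country"]
            or user["mother_tongue"] != resource["language"])

bookmarks = [
    ({"country": "DE", "mother_tongue": "de"}, {"country": "DE", "language": "de"}),
    ({"country": "DE", "mother_tongue": "de"}, {"country": "FI", "language": "en"}),
    ({"country": "EE", "mother_tongue": "et"}, {"country": "EE", "language": "en"}),
]

cross = [b for b in bookmarks if is_cross_boundary(*b)]
print(f"{len(cross)} of {len(bookmarks)} bookmarks are cross-boundary")

# The 55% figure above is simply the share of distinct cross-boundary
# resources among all distinct bookmarked resources: 163 / 299.
print(f"share of distinct cross-boundary resources: {163 / 299:.0%}")
```

Note that the "and/or" matters: the third sample bookmark counts as cross-boundary even though user and resource share a country, because the resource language differs from the user's mother tongue.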
Table 1 shows that about a third of the bookmarked cross-boundary resources had Social Information (SI) on them: they had either been bookmarked by previous users or appeared on the "travel well" list. This is cool! Although we cannot claim that users discovered these resources only because of the Social Information, it is important to know that it has had added value for users in discovering them. I also found that about 10% of bookmarks on resources with SI were on resources previously bookmarked by someone from the same country as the resource. Although almost dismissibly small (10%), the fact that they were bookmarked is still good news for SI and for social navigation based on it.
I then looked at where the resources are discovered: 62% of cross-boundary discoveries were made in the Search Result List (SRL), whereas 37% took place in Community searches, most of them in the tag cloud (30%) and 7.5% on the Travel well and Most bookmarked lists (only one case in the latter :().
For comparison, of the resource discoveries that did not cross any borders (e.g. a German teacher finding German resources), 90% took place in the SRL. So it seems that for discovering cross-boundary resources the Social Information is important, as it allows users to do Community searches to find them. We do see, though, that cross-boundary discovery also works among the resources that cannot leverage previous user experiences, as 63% of cross-boundary resources do not have any SI.
Interestingly, when we look at Table 1, we see that for resource discovery that does not involve any cross-border action, users do not seem to care much for SI. Actually, more than 90% of these bookmarked resources had no SI. This is cool, as it seems that we need users to discover and annotate resources within their comfort zone (e.g. national and regional educational material in their own language) in order to make those resources more readily available to others.
Other things I've looked at are measures of:
- usage coverage within the repository,
- how many resources are shared among collections (e.g. Favourites) and
- what I call the pick-up rate, i.e. how many of the distinct used resources are reused. This can happen when someone discovers a resource that has SI related to it (e.g. on the travel well list or in another user's favourites) or just picks it up from the SRL based on someone else's annotations. These were discussed in more detail in the last paper.
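As a rough sketch of how these measures can be computed from a bookmark log (the data shape and names are my own, not the portal's schema; in this simplified log, "shared among collections" and "reused" coincide, since both mean a resource was bookmarked more than once):

```python
# Sketch of the three measures from a list of (user, resource_id) bookmark
# events. All names and figures here are hypothetical.
from collections import Counter

repository_size = 1000  # total resources in the repository (made up)
bookmarks = [("u1", "r1"), ("u2", "r1"), ("u2", "r2"), ("u3", "r3"), ("u1", "r3")]

counts = Counter(r for _, r in bookmarks)          # bookmarks per resource
used = set(counts)                                 # distinct resources used at least once
shared = {r for r, n in counts.items() if n > 1}   # in more than one collection

coverage = len(used) / repository_size   # usage coverage within the repository
sharing = len(shared) / len(used)        # share of used resources that are shared/reused

print(f"coverage: {coverage:.1%}, sharing/pick-up: {sharing:.0%}")
```

With real data one would of course also distinguish who bookmarked a resource (reuse by a different user vs. the same user bookmarking twice), but the shape of the computation stays the same.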
Table 2 shows these measures for LeMill, Calibrate and delicious; additionally, the gray column shows this 1.5-month trial in MELT, and the last column has all the MELT data so far, which includes the pilot teachers, staff, partners, etc.
We can see that, as the initial amount of resources in MELT is so big, we still cover only a minimal share of them, from 1 to 2%. This does not even include the assets, which would more than double the amount. "Used" here means that the resource has been added to Favourites at least once ("reused" means more than once).
We can see, though, that even if the resource coverage is not that high, there is still quite a lot of sharing among the used resources. This figure remains low in the 1.5-month trial (about 15%, the same as in Calibrate, which did not make SI available!), but if we look at all the usage so far, sharing is at 43%. This is somewhat artificial, though, as there is a lot of staff use; still, I hope it indicates that making SI available helps sharing resources in the long run (or else I need to look for better sharing solutions on the portal, which is also planned but super delayed because of the rest of the dev programme).
We can already see that the pick-up rate is higher in the MELT trial than in the 3 other platforms that I've looked at previously. This is an indication that SI works, I hope.
Btw, I could not find any correlation between the act of putting resources in Favourites and whether they have Social Information related to them (I got a lousy 0.18 even after removing 2 outliers who outperformed everyone else). Also, in some previous tests I had a hard time finding significant changes, so I might need to settle for the increased reuse rate, which has previously been shown to be about 20% across collections. I found it to be about half of this (or the same as general reuse) in my previous paper.
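For reference, the correlation check amounts to something like the sketch below: per resource, correlate the number of times it was put in Favourites with a 0/1 indicator for whether it carried SI. The data here is entirely made up for illustration; it is not the trial data that produced the 0.18.

```python
# Pearson correlation between per-resource Favourites counts and a 0/1
# Social Information indicator. All data below is invented.
from math import sqrt

favourites = [1, 3, 2, 1, 5, 1, 2, 4, 1, 2]  # times each resource was favourited
has_si     = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0]  # 1 if the resource had SI

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"r = {pearson(favourites, has_si):.2f}")
```

Correlating a count against a binary indicator like this is a point-biserial correlation, which is a special case of Pearson's r, so the same formula applies.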