Saturday, September 29, 2007

Notes on Smart Indicators on Learning Interactions

Smart Indicators on Learning Interactions by Glahn et al. (2007) discusses how indicators can be used to help learners, or groups of learners, organise, orientate and navigate through learning environments by providing contextual information that is relevant for performing learning tasks. Indicators are part of the interaction between a learner and a system (social or technical).

An indicator system is defined as a system that informs a user about a status, about past activities, or about events that have occurred in a context, and helps the user to orientate, organise or navigate in that context without recommending specific actions.

So, it is not:
  • a feedback system (analyses user interactions to inform learners on their performance on a task and to guide them through it), or
  • a recommender system (analyses interactions in order to recommend suitable follow-up activities),
  • instead it provides information about past actions or the current state of the learning process.
  • Moreover, smart indicator systems adapt their approach to information aggregation and indication according to the learner's situation and context.

The paper draws heavily on the notion of social navigation, interaction history and footprints, and offers a good review of this literature (ToRead).

The paper offers an architecture of smart indicators with four layers: the first two support user modelling, and the last two help the system adapt and make better decisions. The four layers:
  • sensor layer
  • semantic layer
  • control layer, where a strategy defines the conditions according to the learner's context
  • indicator layer, presents aggregated information to the learner.

This approach of smart indicators adapts the strategies on the control layer (as opposed to the semantic layer) to meet the changing needs of a learner.
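The four-layer flow can be sketched in code. This is purely my own interpretation of the architecture described in the paper, not Glahn et al.'s implementation; all class and method names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Event:
    user: str
    action: str          # e.g. "tag", "contribute"
    timestamp: float

class SensorLayer:
    """Records raw interaction footprints."""
    def __init__(self):
        self.events: list[Event] = []
    def record(self, event: Event):
        self.events.append(event)

class SemanticLayer:
    """Transforms raw events into meaningful information."""
    def aggregate(self, events, user):
        # count this user's actions as a crude activity measure
        return sum(1 for e in events if e.user == user)

class ControlLayer:
    """Chooses an indication strategy based on the learner's situation."""
    def choose_strategy(self, activity):
        return "motivate" if activity < 5 else "reflect"

class IndicatorLayer:
    """Presents the aggregated information to the learner."""
    def render(self, strategy, activity):
        return f"[{strategy}] your activity level: {activity}"

def indicator_for(sensors: SensorLayer, user: str) -> str:
    activity = SemanticLayer().aggregate(sensors.events, user)
    strategy = ControlLayer().choose_strategy(activity)
    return IndicatorLayer().render(strategy, activity)
```

The point of the sketch is the separation of concerns: adapting to the learner means swapping the strategy in the control layer, while the sensor and semantic layers stay the same.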

SENSOR AND SEMANTIC LAYER

The paper further presents the information aggregates of the sensor and semantic layers. The idea is to classify and organise the user's engagement with the system (interaction footprints), e.g. contributions and tagging activities. The sensor layer distinguishes between "learner interaction" and "contextual sensors", e.g. a location tracker, tagging activities (in my case considered direct) and the contributions of peer-learners.

I am doing the same with my research data, and I call it "user engagement", following Yahoo!'s idea of STAR-metadata (a kind of attentional and explicit metadata about users' actions).

I tried to apply the classes of Glahn's prototype to my research data (learning repositories) that I collect using our CAM framework. As our focus is somewhat different, it did not really work out that well. Here is the attempt, though:

Direct: accessing resources through browsing, the tag cloud, the search result list, or other users' favourites (implicit interest)
  • user views metadata
  • user views tags
  • user views resource ("entry selection sensor")
  • (timestamp on everything)

Direct: higher level interaction with a resource (explicit interest)
  • user adds a resource to favourites and tags it ("entry contribution sensor", "tag selection sensor", "tagging sensor" or "tag tracing sensor", hard to say in my case)
  • user rates the resource ("entry contribution sensor")
  • user comments on the resource ("entry contribution sensor")
  • shares resource with network ("entry contribution sensor")
  • (timestamp on everything)

Contextual sensors could be (here I am blending them with user information):
  • the context of a project within which the user accesses resources
  • information about the country and school the user is from
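My mapping above could be encoded along these lines. This is a hedged sketch of how I might model the CAM-style engagement events, not code from the paper or from our framework; the `Engagement` class and its field names are my own inventions:

```python
import time
from dataclasses import dataclass

# Action sets mirror the classification above: viewing is implicit
# interest, higher-level interaction with a resource is explicit.
IMPLICIT = {"view_metadata", "view_tags", "view_resource"}
EXPLICIT = {"favourite", "tag", "rate", "comment", "share"}

@dataclass
class Engagement:
    user: str
    action: str
    resource: str
    country: str = ""      # contextual: where the user is from
    project: str = ""      # contextual: project within which resources are accessed
    timestamp: float = 0.0

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = time.time()  # timestamp on everything

    @property
    def interest(self) -> str:
        if self.action in EXPLICIT:
            return "explicit"
        if self.action in IMPLICIT:
            return "implicit"
        return "unknown"
```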

SEMANTIC LAYER

The semantic layer uses the information from the sensor layer and transforms it into meaningful information using an "activity aggregator". This calculates the activity over a given period of time for an individual learner or for the whole community, according to the different ratings each activity has (beginners' activity is counted differently from power-users').
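As I read it, the aggregator amounts to a weighted sum over a time window, with the weights depending on the user class. A minimal sketch; the weight values and the "beginner"/"power-user" split are illustrative assumptions, not taken from the paper:

```python
# Per-action ratings (weights); beginners' activity counts more per action
# than power-users', so the indicator stays meaningful for both groups.
BEGINNER_WEIGHTS = {"view": 1.0, "tag": 2.0, "contribute": 3.0}
POWER_USER_WEIGHTS = {"view": 0.2, "tag": 1.0, "contribute": 2.0}

def activity(events, user_class, start, end):
    """Weighted activity for (action, timestamp) events within [start, end]."""
    weights = BEGINNER_WEIGHTS if user_class == "beginner" else POWER_USER_WEIGHTS
    return sum(weights.get(action, 0.0)
               for action, ts in events
               if start <= ts <= end)
```

The same function works for a whole community by passing in everyone's events instead of one learner's.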

CONTROL LAYER

In this prototype the control layer defines how the indicators adapt to the learner behaviour. There are two elemental strategies:
  • motivate learners to participate in the community activity
  • raise awareness on the personal interest profile and stimulate reflection on the learning process
Moreover, a third-level control strategy uses the activity aggregator as well as the interest aggregator.

INDICATOR LAYER

This layer embeds the indicators into the user interface of the community system. The prototype is currently being tested by a group of PhD students.

Glahn, C., Specht, M., & Koper, R. (2007). Smart Indicators on Learning Interactions. http://hdl.handle.net/1820/941

Thursday, September 27, 2007

Some thoughts after SIRTEL07

Last week the SIRTEL workshop took place. The papers can be found here and the slides, well, most of them, on the EC-TEL07 conference wiki. I have a pretty good feeling about the workshop: it was one day long, and we had about 20 people participating, some of whom chose to stay with us for the whole time, and some who were hopping between workshops. For me that is totally fine, we are all responsible for our own learning! Especially at conferences where many parallel sessions are running, I would encourage people to try to get the best out of them.

For those who could not make it at all, you will soon find the recordings on the SIRTEL site.

The workshop had four main sessions:
  • We started with a keynote address from people who work with music recommenders. The MyStrands people talked about applying social recommender systems to technology enhanced learning. It was an interesting talk that challenged all of us to think about what recommenders for learning purposes are in the first place (the goal) and what kind of data we want to use for that.

    As any good keynote, this one gave more ideas to think about than answers. It nicely set the stage for the further discussions during the workshop, which focused on the need to define the field of Social Information Retrieval for Technology Enhanced Learning and to establish a baseline so that we know what we are really setting out to do.

  • The second session was about Tagging and Visualisation. We had my presentation about user behaviour when tagging in multiple languages; then there was a presentation from COSL about "Activities of Daily Living on the Web", and Brandon also showed a few demos of widgets that can be used to rate or recommend related content. That was followed by a talk on reward structures to encourage teachers to share open educational material. Finally, we heard about visualisation of social bookmarks, work that leads towards visualising bookmarks in an educational repository.

  • The 3rd session was on Recommender Systems. Here we first heard about R&D work that OU NL is carrying out using the idea of learning paths to better support students' learning activities. Then there was a study on using affiliation networks as a mechanism for collaborative filtering (as far as I understood it). This was followed by a study on simulating recommendations based on teachers' multi-attribute ratings of learning resources. Finally, we had a system demo of Daffodil, which supports collaborative information seeking.

  • The final session was what we called "Enablers and Challenges". It was a discussion session, and as we advanced, it became clear that people had a lot to say. It might even have been better to allow more time for this, but hey, you live and you learn.
I will try to sum up; basically this illustrates the main topics we talked about. On the left side, there are the fundamental questions:
  • How to define and chart out the area of Social Information Retrieval (SIR) for learning?
  • Is this application domain different from other SIR, on the micro and macro levels?
  • What do we recommend?
  • In what context?
  • and based on what?
On the right-hand side, there are the issues related to its implementation and evaluation. These are:
  • What are the best SIR methods for TEL?
  • And what data should we use? The "data issue" was heavily emphasised by the MyStrands folks, who obviously speak from experience.
  • Questions also arose: when do we start implementing these for real, or are we just over-engineering and never ready to launch?
  • Evaluation and empirical data for real evidence were also a major focus.

More will follow. This is quick and dirty for now; hopefully I will get more input from the participants to add more depth to our summary.

Sunday, September 16, 2007

SIRTEL'07: la raison d'etre

"We use people to find content. We use content to find people."*

On Sept 18 our SIRTEL workshop takes place. It's gonna be "Serious Fun"! Let me just outline why:

SIRTEL'07: Raison d'etre

Recommender systems, as well as social navigation, have been around since the popularisation of the WWW, well over a decade now. The idea is to help people choose the right stuff from a potentially overwhelming set of choices. To do that, users can be helped with information from other users: the choices made before (by themselves or by similar users), the ratings or reviews other people have given, etc. (Resnick & Varian, 1997)

The field of learning technologies has seen recommenders of some sort discussed and prototyped since the late nineties. In our review of the field in Manouselis et al. (2008) we identified about 10 recommenders, and even more conceptual papers about them, but very little has materialised so far.

In the last few years, recommenders have made a second arrival among the discussion topics of technology, or network, enhanced learning. Undoubtedly, this has been influenced by the arrival of "Web 2.0" with all its ideas:

- Collaborative tagging, for example, has changed many ideas about how metadata should be produced and how static a metadata record should be: it is no longer one metadata record produced by a librarian, but lots of annotational and attentional metadata from many users.

- Other user annotations that express subjective judgements have seen huge growth too: we no longer talk only about ratings and reviews in the traditional sense, but also about thumbs-up or -down, giving pokes to people or objects, etc.

- Social bookmarking, which allows users to create easy references to their own collections of digital resources (photos, books, links, music, ..), has given a new dimension to the concept of social navigation. The link between resource, user (and tag) allows users to navigate other people's collections and thus find novel resources. The same resource-user-tag link also gives researchers an itch to use this information to group similar users for recommendation purposes, as well as to study the emerging networks.

- Expressing social ties between people has also brought new possibilities along. We are not only seeing networks of friends; people can now express different networks, some for professional use, others for personal, recreational, etc. purposes. The portability of these networks has also become an issue discussed for better designs (social-network-portability group, PeopleWeb, ..).

- Something else is also happening behind the scenes. Clickstream and user behaviour on the Web are no longer the property of the commercial portal the users are on; users are starting to take seriously how their "attention" is being used, who owns it, etc. Attentional metadata is a huge source of information that educationalists are also starting to take more seriously, thinking about how it could be used to better serve learners and teachers (Contextual Attention Metadata, Attention Profiling Mark-up Language, Attention Trust, ..). Attentional metadata can also become crucial for better understanding the intentions of a user: why are they, for example, looking for some information, and for what task at hand?

- Finally, content for educational use, or rather its production, is also seeing a change. Users generate more and more of the content on the Web in general, a trend also seen in e-learning. Of course, teachers have traditionally produced lots of their own material, but now its re-use has also been facilitated (e.g. repositories/referatories). The Web also facilitates the collaboration aspect: it has become easier for people to work together on things (e.g. wikis, collaborative platforms, ..). Additionally, learners produce plenty of material which should also be seen and used as educational content.
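The resource-user-tag link mentioned above (under social bookmarking) can be illustrated with a toy sketch: grouping similar users by the overlap of the resources they have bookmarked. The data, the Jaccard measure and the threshold are all invented for the example; real systems would of course use richer signals:

```python
def jaccard(a: set, b: set) -> float:
    """Overlap of two sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Toy bookmark collections: user -> set of bookmarked resources.
bookmarks = {
    "anna":  {"r1", "r2", "r3"},
    "bert":  {"r2", "r3", "r4"},
    "chris": {"r9"},
}

def similar_users(user: str, threshold: float = 0.3) -> list:
    """Users whose collections overlap enough with the given user's."""
    mine = bookmarks[user]
    return [u for u, theirs in bookmarks.items()
            if u != user and jaccard(mine, theirs) >= threshold]
```

The same pairing works over tags instead of resources, which is where the "study the emerging networks" angle comes from.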

To sum up: two main topics revolve around social context and social content. Social context is how we express the who, where and with whom; social content comprises the objects or digital artefacts at the centre of the communication, exchange and networks.

All of the above has hopefully also changed how we see the future of social information retrieval for technology enhanced learning. This workshop will be all about that! Serious Fun!

-------

Manouselis, N., Vuorikari, R., & Van Assche, F. (2008). Collaborative Filtering of Learning Objects for Online Communities: An Experimental Investigation. Accepted for publication in Computers in Human Behavior, Special Issue on 'Advances of Knowledge Management and Semantic Web for Social Networks'.

* Morville, P. (2004).

Resnick, P., & Varian, H. R. (1997). Recommender Systems. Communications of the ACM, 40(3).

Wednesday, September 12, 2007

How do teachers network?

Just wondering...

I had an awesome chance to join this teachers' community.

View my profile on eTwinning Reunion