Thursday, April 19, 2007

D. Watts on social influence and popular songs

This study is pretty interesting: 14,000 participants were asked to listen to and rate songs by bands they had never heard of. The point was to study social influence, i.e. how seeing cues from other people, such as Top 10 lists or download counts, would influence people's choices.

The set-up of this study is pretty neat: the participants were sliced into eight parallel “worlds”, so that participants could see the prior downloads only of people in their own world. Every song started from the same line, zero downloads, but because the “worlds” were kept separate, they subsequently evolved independently of one another.
What we found... In all the social-influence worlds, the most popular songs were much more popular (and the least popular songs were less popular) than in the independent condition. At the same time, however, the particular songs that became hits were different in different worlds, just as cumulative-advantage theory would predict. Introducing social influence into human decision making, in other words, didn’t just make the hits bigger; it also made them more unpredictable.
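
The cumulative-advantage dynamic the quote describes is easy to simulate. Here is a minimal sketch in Python, my own toy model rather than the authors' actual one, in which each of eight isolated worlds weights a song's chance of being downloaded by its current download count; identical songs under identical rules still produce different hits in different worlds:

import random

def run_world(n_songs=48, n_downloads=5000, seed=0):
    # 48 songs, as in the study; the download budget is arbitrary
    rng = random.Random(seed)
    downloads = [0] * n_songs
    for _ in range(n_downloads):
        # weight each song by 1 + its downloads so far (cumulative advantage)
        weights = [1 + d for d in downloads]
        song = rng.choices(range(n_songs), weights=weights)[0]
        downloads[song] += 1
    return downloads

# eight parallel worlds: identical songs, identical rules, independent histories
for world in range(8):
    counts = run_world(seed=world)
    hit = max(range(len(counts)), key=counts.__getitem__)
    print(f"world {world}: hit = song {hit} with {counts[hit]} downloads")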

Our experimental design has three advantages over both theoretical models and observational studies. (i) The popularity of a song in the independent condition (measured by market share or market rank) provides a natural measure of the song's quality, capturing both its innate characteristics and the existing preferences of the participant population. (ii) By comparing outcomes in the independent and social influence conditions, we can directly observe the effects of social influence both at the individual and collective level. (iii) We can explicitly create multiple, parallel histories, each of which can evolve independently. By studying a range of possible outcomes rather than just one, we can measure inherent unpredictability: the extent to which two worlds with identical songs, identical initial conditions, and indistinguishable populations generate different outcomes. In the presence of inherent unpredictability, no measure of quality can precisely predict success in any particular realization of the process.
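
The paper's notion of inherent unpredictability can also be sketched in code. Roughly along the lines the quote describes, the toy function below averages, over songs, the absolute difference in a song's market share between every pair of worlds; the variable names are mine, not the paper's:

from itertools import combinations

def market_shares(downloads):
    total = sum(downloads)
    return [d / total for d in downloads]

def unpredictability(worlds):
    # worlds: one download-count list per world, songs in the same order
    shares = [market_shares(w) for w in worlds]
    pairs = list(combinations(range(len(worlds)), 2))
    n_songs = len(worlds[0])
    per_song = [
        sum(abs(shares[a][i] - shares[b][i]) for a, b in pairs) / len(pairs)
        for i in range(n_songs)
    ]
    return sum(per_song) / n_songs  # higher = outcomes diverge more

print(unpredictability([[10, 30, 60], [55, 25, 20]]))  # two toy worlds, ~0.3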

This makes me want to test and set up experiments, too. In the project that I'm part of, and from which I will get my data, we are planning some experiments on the input side of tagging, to see how social influence, in terms of seeing other users' tags while entering one's own, will affect the nature of the tags, their number, their convergence, etc.
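
Convergence could be quantified in several ways; one simple proxy, sketched here as my own illustration rather than the project's planned metric, is the Shannon entropy of a resource's tag distribution: the lower the entropy, the more the users agree on the tags.

import math
from collections import Counter

def tag_entropy(tags):
    counts = Counter(tags)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# with social influence, tagging might converge and entropy drop
print(tag_entropy(["math", "algebra", "math", "math"]))     # ~0.81 bits
print(tag_entropy(["math", "algebra", "geometry", "fun"]))  # 2.0 bits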

But this would be very interesting on the retrieval side of things, too! We could have two different interfaces for the search result list: one showing all the social cues (number of downloads, number of bookmarks, other users' tags) and one showing none. The experiment would test whether the users, in this case teachers, would view the metadata of similar resources, and what they would actually download, bookmark and rate, if anything.
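
As a rough sketch of how such a two-interface experiment might be wired up (all names here are hypothetical, not the project's actual code), users could be assigned to a condition deterministically from a hash of their ID, so each teacher always sees the same interface, and every action is logged together with the condition:

import hashlib

def assign_condition(user_id: str) -> str:
    # deterministic split: the same user always lands in the same condition
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return "social_cues" if int(digest, 16) % 2 == 0 else "plain"

def log_event(user_id: str, action: str, resource_id: str) -> None:
    # action: "view_metadata", "download", "bookmark" or "rate"
    print(f"{assign_condition(user_id)}\t{user_id}\t{action}\t{resource_id}")

log_event("teacher_42", "download", "resource_17")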

Well, actually the plain interface, without the social cues, is the situation as it is now. So maybe I can just compare the data from this year with the data from the year after, when we actually start implementing the social navigation part.


Link:
In NYTimes

Science, 10 February 2006:
Vol. 311, no. 5762, pp. 854–856
DOI: 10.1126/science.1121066
http://www.sciencemag.org/cgi/content/full/311/5762/854

Supporting material:
http://www.sciencemag.org/cgi/content/full/311/5762/854/DC1