Research:Characterizing Wikipedia Reader Behaviour/S3-English


Survey 3, English Wikipedia

Design

Categories identified via hand-coding in steps 1 and 2 were used to run a larger follow-up survey.

When: The survey ran from 2015-11-24 (8:20 PST) to 2015-11-30 (8:47 PST).
Project: English Wikipedia
Platform: Desktop and Mobile
Sample: Five out of every 100,000 requests were sampled for this survey; 10,503 users participated (see the sampling sketch after this design summary).
Questions: Potential survey participants saw a widget with the message: "Answer three questions and help us improve Wikipedia." Upon accepting the survey invitation, participants saw three questions (the order of the questions was randomized for each participant):
  • Q1. I am reading this article to: get an overview of the topic; get an in-depth understanding of the topic; look up a specific fact or to get a quick answer; other (with a text field to explain what the other reason is).
  • Q2. Prior to visiting this article: I was already familiar with the topic; I was not familiar with the topic and I am learning about it for the first time.
  • Q3. I am reading this article because (please select all answers that apply): I need to make a personal decision based on this topic (e.g., to buy a book or game, to choose a travel destination, etc.); the topic came up in a conversation; I am bored, curious, or randomly exploring Wikipedia for fun; the topic was referenced in a piece of media (e.g., TV, radio, article, film, book); I want to know more about a current event (e.g., Black Friday, a soccer game, a recent earthquake, somebody's death); I have a work or school-related assignment; other (with a text field to explain what the other reason is).
Data collection and privacy policy: Data collection occurred via Google Forms. The survey widget linked to a privacy policy designed for this survey.
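
The sampling and per-participant question randomization described above can be summarized in a short sketch. This is a minimal illustration under the stated sampling rate, not the actual survey instrumentation; the function and question identifiers are hypothetical.

    import random

    SAMPLING_RATE = 5 / 100_000  # five out of every 100,000 requests

    # Hypothetical identifiers for the three survey questions.
    QUESTIONS = ["Q1_motivation", "Q2_familiarity", "Q3_trigger"]

    def maybe_build_survey_invite():
        """Return a randomized question order if this request is sampled, else None."""
        if random.random() >= SAMPLING_RATE:
            return None  # request not sampled; no survey widget is shown
        order = QUESTIONS[:]
        random.shuffle(order)  # question sequence randomized per participant
        return order

    # Simulate one million requests; roughly 50 should be sampled on average.
    invites = [maybe_build_survey_invite() for _ in range(1_000_000)]
    print(sum(inv is not None for inv in invites), "sampled out of 1,000,000")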

Analysis

The percentage of responses to each question, as well as the raw count of responses, is shown in the charts below:

The very small percentage of "other" selections in the above charts indicates that the categories identified by hand-coding in the first two surveys capture the categories users associate with these questions in English. To understand whether the categories are robust across languages, we will run Surveys 1, 2, and 3 in two more languages.
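
For reference, below is a minimal sketch of how per-question percentages and raw counts could be computed from a Google Forms export; the file name and column names are assumptions, and since the real Q3 option text contains commas, an actual analysis would need more careful parsing of the multi-select answers than the clean delimiter assumed here.

    import pandas as pd

    # Hypothetical export of the survey responses (file and column names assumed).
    responses = pd.read_csv("survey3_enwiki_responses.csv")

    # Q1 and Q2 are single-choice: raw counts and percentages per answer option.
    for column in ["Q1_motivation", "Q2_familiarity"]:
        counts = responses[column].value_counts()
        percents = (100 * counts / counts.sum()).round(1)
        print(pd.DataFrame({"count": counts, "percent": percents}))

    # Q3 is "select all that apply"; Google Forms stores the selections as one
    # delimited string per respondent. This sketch assumes a clean "; " delimiter.
    # Percentages are relative to respondents, so they can sum to more than 100%.
    q3 = responses["Q3_trigger"].str.split("; ").explode()
    q3_counts = q3.value_counts()
    q3_percents = (100 * q3_counts / len(responses)).round(1)
    print(pd.DataFrame({"count": q3_counts, "percent": q3_percents}))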