
Research talk:Characterizing Wikipedia Reader Behaviour Old

From Meta, a Wikimedia project coordination wiki

Re: Google Forms


Survey best practices has something to say on the topic. Nemo 18:59, 11 November 2015 (UTC)

Hi Nemo. Thanks for the link. Do you think a specific component of the survey can be improved, or are you sharing the link for us to review it in general? --LZia (WMF) (talk) 20:51, 11 November 2015 (UTC)

Please don't use Google Forms due to privacy concerns (it's still being used at this time). -- 21:58, 28 June 2017 (UTC)

  • @LZia (WMF): Sorry for the late comment, I just found where to write this. I would encourage you not to use Google Docs next time, and preferably to use a more survey-dedicated solution like Qualtrics. The issue I had with Google Forms is that I happened to open a Wikipedia article at work and see this survey, but I was unable to participate because the Google Forms website is blocked on my office network for privacy reasons. I still saw the survey on my home laptop and on my mobile phone, but this made my answers somewhat skewed (I could submit an answer when I used Wikipedia for personal needs, but not when I used it for work). Perhaps this is not a big problem if I am the only one concerned and there are very good reasons to use Google, but please take such issues into account. Thanks — NickK (talk) 14:38, 9 July 2017 (UTC)
@NickK: thanks for writing and sharing your experience. It's certainly helpful for us to understand your context better. A couple of points to share:
  • The issue you faced is real, and it will have an impact on the data. Since a variety of biases may be outside our control, we de-bias the data using the approach described in the appendix of our first paper. Of course, none of us would dispute that it's better to start with clean data.
  • Regarding Google Forms vs. Qualtrics: one thing that put me off about Qualtrics was that we would have to pay for the responses we receive, while Google Forms is free. The Foundation does have a Qualtrics account, and when I checked with the team that owns the account last year, they mentioned that we pay roughly 10 cents per response. Given that we could easily get 100K responses (across the 14 languages), we would be talking about a $10K+ budget for Qualtrics. My thought was that since we are doing the de-biasing anyway, let's not spend the extra money and the time that would have to go into figuring out where the budget for this expense should come from. Do you think this is a valid way of choosing one service over another, in the absence of strong evidence that the data will be affected to the extent that the results may not be accurate? (btw, I'm fully responsible for making these decisions; I'm asking you because you've thought about this problem and may have other perspectives. Please don't feel obligated to respond. :)
  • At the end of the day, the best solution would be for the QuickSurvey widget to accommodate the survey questions completely in the widget and on-wiki. As I mentioned elsewhere, taking users from a wiki page to an external website and asking them why they are reading an article (on the previous website) is not a good user experience, and we will lose potential participants at this transition. I do hope that the Reading team can allocate some time to accommodate this in QuickSurvey itself (the service is already there; it just needs some improvements before it can be used). --LZia (WMF) (talk) 23:41, 9 July 2017 (UTC)

What is the point?


Is this a philosophical question? Like, "I am reading this article because I have a thirst for knowledge"? Or is it more of a how-did-you-find-it question? Like "I am reading this article because I clicked on a link in the search engine"? It's completely unclear, and you only have 100 characters to explain. So I had to go with "I wanted to know about *page title*" - is this not obvious? If I didn't want to know about the content of the article, why was I reading it? But now it's too late, and you have a very obvious and useless response -Iopq (talk) 08:41, 13 November 2015 (UTC)

Hi Iopq. It is not a philosophical question. We'd like participants to reply as specifically as possible, without us steering the response in any particular direction. So a response like "to gain knowledge" is considered somewhat specific, but not very. We debated the wording of this question and considered alternatives such as "Why are you in Wikipedia today?" or "Why do you use Wikipedia now?". We rejected the former because it could get too confusing: people could go down the path of why they value Wikipedia, while we wanted them to focus on their current experience, the article they are on right now. We rejected the latter because we did not want to suggest that the participant should "use" Wikipedia; people who read it just for fun, for example, could be confused by the verb "use". Ideally, we would know the end of a session for a user and, right before the user exits the website, ask why they were here in the session that just ended. This is technically not possible.
Iopq, if you have other suggestions for the question, please let us know. Lessons learned at this stage can help us do better in the future. --LZia (WMF) (talk) 17:28, 13 November 2015 (UTC)
I found it a little tricky to summarise the reason for reading that article too. I managed to get it down to 100 characters or less, but not without making it sound awkward. DaGizza (talk) 12:55, 13 November 2015 (UTC)
Hi DaGizza. Do you have a recommendation for a new character limit? I'm more than happy to change it, even in the middle of the survey and as soon as we reach 1000 responses (we are very close to that), to see whether the response pattern changes. If I don't hear from you in the next couple of hours, I'll bump it up to a higher value to see if we observe any change. --LZia (WMF) (talk) 17:31, 13 November 2015 (UTC)

Character limit confusion


The question says there's a 300-character limit but when I tried to submit my response it told me there was a 100-character limit. I don't care which it is but consistency in advice would be good! --Zeborah (talk) 01:38, 14 November 2015 (UTC)

Hi Zeborah. Thanks for your note. We fixed this problem. --LZia (WMF) (talk) 16:01, 14 November 2015 (UTC)

Great research, I love it


At Wikimedia Nederland we've been wondering how to fill in a Business Model Canvas and do some empathy mapping in a value proposition design exercise. That required identifying customer segments. Customer segments that come to mind are users, editors, developers, donors, and readers. What is the user story of a reader? Are there multiple user stories? What drives a reader in real life to start reading a Wikipedia article? The results in "Why are you reading this article today?" are very clarifying. "Topic came up in a conversation" is one reason, but not the most common. Neither is it the least common. The most common reason mentioned is "I'm bored, curious, or randomly exploring Wikipedia." The question "Why are you reading a Wikipedia article?" has been raised on Quora. In-depth answers can be read there! Ad Huikeshoven (talk) 13:43, 17 February 2016 (UTC)

Thank you for your supportive comments, Ad Huikeshoven, and for starting a conversation about this research question on Quora. We will continue documenting the tasks, upcoming events, and results of this research here, so please let us know what you think as we move forward.
Your comment relates to another line of research, which focuses on developing pragmatic personas as well. We hope that the results of the quantitative research we're doing here can help that research, and vice versa. --LZia (WMF) (talk) 23:42, 17 February 2016 (UTC)

Complete Translation


I just filled out the questionnaire on de.wikipedia and noticed that it is only partly translated. Some directions are still in English, and there is a formatting mistake on the last page... --Gereon K. (talk) 15:28, 22 June 2017 (UTC)

@Gereon K.: thanks for participating and pointing this out. The formatting mistake on the last page is resolved, but I'm not sure which directions you're referring to. Can you be more specific about the first part of your comment? Thanks. --LZia (WMF) (talk) 19:26, 23 June 2017 (UTC)
@LZia (WMF): since I cannot redo the questionnaire I cannot look that up, but there are some directions in brackets in a smaller font that are still in English. --Gereon K. (talk) 21:29, 23 June 2017 (UTC)
@Gereon K.: Of course. :) I just sent you an email with instructions on how you can check the survey. I'd appreciate it if you check it and let us know, in the email or here, if you spot any more issues. Thanks for taking part in the survey and for your feedback. :) --LZia (WMF) (talk) 04:40, 24 June 2017 (UTC)
Solved. Thank you! --Gereon K. (talk) 11:30, 27 June 2017 (UTC)

Statistics and PICTURES


Hey wikipedia folks,

I was redirected to this article, or rather, I clicked a link that brought me here. It was a survey that Wikipedia is running right now about why people read certain articles. So I guess this is quality assurance.

Anyway, here is my plea or suggestion - PLEASE add some graphics, or LINKS to graphics. I like graphics about statistics; they help me get an instant overview. I was never going to read the huge wall of text on this page, so it was sort of useless to me. If you are already running statistical surveys, please also add pictures that are easy for viewers to take in. 2A02:8388:1602:A780:BE5F:F4FF:FECD:7CB2 18:05, 22 June 2017 (UTC)

This is a fair suggestion. Thanks for bringing it up. We will keep this in mind and will try to add more visuals moving forward. Thanks! --LZia (WMF) (talk) 19:24, 23 June 2017 (UTC)

To submit your answer, please click the "[[[NAME_OF_BUTTON]]]" button at the end of this page.


What is this supposed to mean? --Steffen2 (talk) 07:06, 23 June 2017 (UTC)

@Steffen2: sorry, that should have been "Senden" ("Send") and is now fixed. Thanks! --LZia (WMF) (talk) 19:22, 23 June 2017 (UTC)

Feedback from a reader


I just want to let you know that I missed the option "I'm reading this article because I am a Wikipedia editor", or, in long, "I'm reading this article because it is on my watchlist and I'm regularly checking the subsequent edits after I've edited it a couple of times in the past." Wikipedia is not like a printed encyclopedia. Our readers are often also contributing as authors, but more importantly, all of our authors are also readers. If you display this survey to logged-in users, chances are high that they are Wikipedia editors and that a significant portion of all the articles they read are articles they edit. Or that they read an article in order to see if there is something they can improve. --Neitram (talk) 12:21, 27 June 2017 (UTC)

@Neitram: Thanks for writing and participating in the survey. This comment has come up a few times in the past days; for example, check the thread on wiki-research-l. Let me describe the reasoning behind not including the option you recommended, and if it doesn't make sense to you, I want to hear it and improve the survey. Here are a few reasons:
  • We only included options for the motivation question that came up often in our free-form text surveys. Check Section 2 of Why We Read Wikipedia to see how we developed the initial taxonomy. When we did the hand-coding, some responses were along the lines of the motivation you mentioned, but they were not common, so we didn't include them, mostly to make sure the survey doesn't become too long or hard to read and comprehend for participants.
  • While we didn't include the option you suggest, we included an Other field, mainly to capture in future surveys (including the current one) whether there is a need to expand the taxonomy. The idea is that if enough participants choose Other and add a reason that is not captured, we need to reevaluate the options available in the taxonomy. We do text analysis and sampling on the Other field in all languages at the end of this survey to assess this, for example. My question is: did you use Other, and if not, why? We need to understand this, too. :)
The above being said, I'm leaning more and more towards adding a field to the taxonomy in this regard: yes, the fraction of responses choosing this field will be small (because the number of editors is much smaller than the number of pure readers, perhaps), but by design Wikipedia is about editing and reading, and we need to find a way to capture what editors do when they read an article.
@Cervisiarius: feel free to share your thoughts here, or we can discuss it first offline and get back here. :)
--LZia (WMF) (talk) 06:40, 28 June 2017 (UTC)Reply
Do you have any idea of the ratio of editors reading an article to non-editor readers? · · · Peter (Southwood) (talk): 16:47, 8 July 2019 (UTC)

Multiple responses


I noticed the survey notice appearing on several pages I opened, possibly all the pages opened before the first time I filled in the questions. To test what would happen, I filled it in twice, but I don't know if the reference number was the same in both cases. I don't know whether this is intentional or would affect your model, so I'm just saying in case it was not supposed to do this. I am aware that this information might make it possible to identify me in the survey, but I'm really not bothered at all. Cheers, · · · Peter (Southwood) (talk): 08:16, 27 June 2019 (UTC)

@Pbsouthwood: thanks for passing this along. The reference number would have been different, but we also try to filter out duplicate responses (e.g. from people who browse in multiple tabs at once) by retaining only a single response from each unique combination of IP address and user agent (which is based on browser version etc.). I suspect this will catch the scenario you raised. I see you're not concerned, but you likely won't be the only person to submit multiple surveys, and the deduplication process is automatic, so none of us will know which surveys were duplicates and which were singletons. Best --Isaac (WMF) (talk) 14:40, 8 July 2019 (UTC)
I was concerned about whether this would affect the validity of conclusions that could be drawn from the data. · · · Peter (Southwood) (talk): 16:49, 8 July 2019 (UTC)
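For readers curious what the deduplication discussed in this thread amounts to in practice, here is a minimal sketch. The field names (`ip`, `user_agent`, `answer`) and the keep-the-first rule are illustrative assumptions, not the actual survey pipeline; the point is only that responses sharing an (IP, user-agent) pair collapse to one.

```python
from typing import Dict, List, Tuple


def deduplicate(responses: List[Dict[str, str]]) -> List[Dict[str, str]]:
    """Keep one response per unique (IP, user-agent) combination.

    Responses are assumed to be in submission order; the first
    response from each combination is retained.
    """
    seen: set = set()
    kept: List[Dict[str, str]] = []
    for r in responses:
        key: Tuple[str, str] = (r["ip"], r["user_agent"])
        if key not in seen:
            seen.add(key)
            kept.append(r)
    return kept


# Example: two submissions from the same browser collapse to one.
responses = [
    {"ip": "198.51.100.7", "user_agent": "Firefox/60", "answer": "work"},
    {"ip": "198.51.100.7", "user_agent": "Firefox/60", "answer": "bored"},
    {"ip": "203.0.113.9", "user_agent": "Chrome/66", "answer": "media"},
]
print(len(deduplicate(responses)))  # → 2
```

Note that this heuristic also discards legitimate distinct responses from different people sharing one IP and browser version (e.g. an office network), which is part of the bias the de-biasing step mentioned earlier has to absorb.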