General User Survey
After the first attempts at a general survey of Wikipedia users, a scientific survey was conducted in collaboration with UNU-MERIT: it was launched in October 2008 and the final results were published in March 2010 at wikipediastudy.org.
For other surveys, please see Editor Survey.
- Research:Newsletter/2013/July#Survey participation bias analysis: More Wikipedia editors are female, married or parents than previously assumed
- October: the survey is online, and the results are to be shared under a CC license. Sounds good, even if they are not using most of the questions we have designed.
- July: Some data might have been presented at Wikimania: details are sparse. One would hope - since this is a project endorsed by the Foundation - the anonymized data would be made available as open data, but so far there is no indication this is the case...
Thanks to Delphine, this has been awakened again by a proposal for a partnership with a company willing to help us on this. Cross-checking with open-source tools also revealed tools which may be used for that:
http://limesurvey.org/ (formerly PHPSurveyor), which seems to scale quite well. Carsten Schmitz (carsten.schmitz.hh (@t)gmx.de) told me (Rupert) on irc://irc.freenode.net/phpsurveyor (now irc://irc.freenode.net/limesurvey) on 2007-05-13 that the Azureus project used it for a survey, and they had 50,000 responses in 12 hours, with about 20 questions.
Dutch WP has reopened this discussion, the main question being: can the active membership sustain the work that has to be done? The work of Erik Zachte could be a great help in answering this, so it should be continued. Next question: do the other WPs have the manpower to maintain tasks like vandalism control and quality maintenance? Have you done a survey to find out what your regular members are doing in this respect or in the maintenance field? If so, I'd be interested to know; please respond to: my user survey. - Art Unbound 19:51, 22 September 2007 (UTC)
- Hey hey! I have begun to create this; a month or two back I actually ran a survey as a test of the project's viability and of LimeSurvey - the results were outstanding. They can be found at www.wikimediafoundation.spaces.live.com - you might want to see the census. Symode09 08:04, 7 October 2007 (UTC)
The project was proposed by Erik Zachte. It would be interesting to get a better perspective on how Wikimedia software and procedures are valued by the user.
The project was presented and discussed at Wikimania 2005, but after that progress stopped more or less due to lack of manpower. It was discussed again at Wikimania 2006, where programming needs were briefly discussed and resources promised. In order to warm more people to the discussion of what to include in the survey and how to deal with the results, it seems useful now to focus on implementation issues first, collect technical requirements and produce a mockup or preliminary version, which can help to make the discussions more concrete. See General_User_Survey/Implementation_Issues for discussion about the technical design.
Examples of questions to ask:
- Is syntax becoming too technical and unwieldy (e.g. wiki tables) or is functionality and flexibility valued above ease of use?
- Are search facilities satisfactory?
- Is it difficult to keep up with new developments (technically and organisationally)? More precisely: do we have too many forums, like goings-on on meta, many mailing lists, village pumps, etc.? If so, what would help to improve the situation?
- Are there issues that keep you from contributing more? Like response time, edit conflicts, etc?
- Would you be willing to tell more about yourself on your user page (country of residence, age, sex, education, etc.), provided no one else can see this, it is optional, and it is only used for demographic statistics?
- Do you feel comfortable with the way decisions are taken within the MediaWiki community? (cabal vs democracy, anarchy vs official functionaries, consensus seeking vs voting. Note: these examples are not part of the survey question itself, just to explain the question here; of course careful phrasing of the final text is important.)
- On a scale from 1=very bad to 7=very good, how do you rate the atmosphere (think of friendliness and cooperation, openness to newbies, etc.) of the major projects you are involved in (use an average when projects differ in your opinion)?
- On a scale of 1-7, how do you rate the average trustworthiness of the data in the projects that you participate in? Follow-up: do you think improving trustworthiness should be a high-priority project?
(again use an average when projects differ in your opinion)
What would be the best way to organize such a survey? A form-based HTML page? Meta? Would it matter that people can tilt the scales by giving feedback several times? Do we target regular contributors specifically, or all users, casual readers and wikiholics alike?
- From my perspective, getting feedback from casual users or new editors would be extremely valuable, because it gives a clearer picture of what Wikipedia looks like to fresh eyes, and hence real issues of usability (it still somewhat overwhelms me after a year!). The other main question here is data collection - if you use open-ended questions (as I have done), the data generated is huge and can be difficult to bring together, especially if the 'sample' is big. On the other hand, 'tick-a-box' answers can be very prescriptive, and often not very meaningful (though they are easy on the statistics end of things). I think, if you're asking difficult questions (which I think research should do), giving people the option of anonymity is generally helpful, with the option, as you say, of giving personal details. Or was this idea about creating a sort of invisible template on user pages that is visible only to someone doing a user demographic survey? (If so, who has access to this? You'll remember there was a semi-related debate on one of the mailing lists about David Gerard's access to IP addresses.) This is just an idea, but how about doing several surveys, one targeting specific users (maybe newbies) on usability questions, another targeting as many as possible on demographic questions, and another maybe a more open-invitation one on deep issues like content and interpersonal conflicts? My work so far has concentrated on this latter point and I'll happily work on this or any other questionnaire - also, Sj has drafted a broad questionnaire which fits with many of the other issues here. But as for Erik's specific question on the mode of submitting the survey/questionnaire, I'll leave the technical end to someone else, but the main question remains: what do we want to find out and what do we want to do with this data? Cormaggio 08:11, 20 Jun 2005 (UTC)
- Seems to me that several of the questions might be better tackled with a usability study: the questions related to the technological barriers to participation. The behavioral/motivational questions, however, would work well in a survey format.
- How, exactly, do you see the difference between these two studies? Cormaggio 20:06, 21 Jun 2005 (UTC)
- A usability study would be conducted with a group of users (e.g. new users), and you would observe their interaction with the software, in front of a computer, when given a task to complete. We're going to be doing a usability study on a Web site at the first of the year, and we might be able to include some sort of study with MediaWiki at that time as well. The issue again is generalizing the results to a larger population -- what would make the people we're studying "representative" of the larger population of new Wikipedia users? (Sorry to have taken so long in responding to your question.)--K1v1n 19:20, 3 Jul 2005 (UTC)
- I would favor dividing these questions into multiple studies with more targeted data collection strategies. If we want to be able to defend this research (meaning we get answers to questions that people feel comfortable making decisions with), we need to be able to describe the populations and the methodologies used to "discover" those populations; i.e. that those we surveyed are representative samples of the populations we are describing (newbies, seasoned veterans, etc.). The more we stratify the population from the start, the easier it will be to target questions and draw conclusions. Email might work to deliver the survey. Then we would have to do some follow-up to determine that those who responded were not significantly different from those who opted out (ignored us).
- I would suggest before we collect any data we need to be able to answer the, "So once we have this information how will it be used?" question. If we can't answer that question we shouldn't collect the data in the first place. K1v1n 16:09, 20 Jun 2005 (UTC)
- Both of you raise very valid points IMO.
- Actually my idea is to start with a general survey and a very moderate set of questions, say about 20 or 30, so that people can answer them in 5-10 minutes, to get a lot of response. This survey could help to pinpoint problem areas in every aspect of MediaWiki's operation. Then a follow-up survey could deal with a specific area of interest, say usability, in much more detail, once an area is recognized as something that needs improvement.
- Both of you raised the question "So once we have this information how will it be used?". Of course this is a crucial one. I see the general user survey as a new means to define 'the state of the wiki', to borrow a term from Eloquence. It could make the community aware of issues that need more attention, elicit discussions, and lead to follow-up studies in more detail. A wake-up call so to speak, or a reassurance and pat on the back in some areas.
- Of course people who read mailing lists daily (but there are so many nowadays, and huge archives), visit several village pumps, and read the newspaper and meta goings-on have a general feel for what goes well and what doesn't. But it is easy to miss the forest because there are so many trees. (free from a Dutch proverb, not sure if this is an English one as well)
- "You can't see the wood for the trees"
- For example, my own subjective feeling is that the atmosphere on one of the wikipedias that I browse from time to time has changed quite a bit in a year's time, with more bickering and extended disputes than a year ago. I heard similar reports from other wikipedias in the past. I'm not that alarmed, but I would like to ask a question or two about it in the survey. If 30% or more of the people feel that the spirit of openness to outsiders and cooperation is not as good as we think it is (or used to be), that would be a very serious but important conclusion from the survey (the percentage is hypothetical of course, it might be 5%).
- For some questions it might also be interesting to see whether people think the trend is upwards or downwards. A question like "Do you feel it is easy/difficult (scale 1-5) to keep track of all sorts of major MediaWiki developments?" might be accompanied by "If applicable, do you think it was easier or more difficult a year ago?" Or maybe the question about the perceived atmosphere is a better example here as well.
- With about 20 questions we could target 7 areas of interest with 3 questions each, as a rule of thumb. The questionnaire by Sj has a lot of questions (in fact too many IMO) and I would be interested in results on most of them, but some of them are somewhat superficial 'nice to know' or only take on meaning as part of the whole set. Which makes analysis complicated and open to lots of interpretation.
- To use a metaphor: I would like to ask "Do you feel your personal economic situation has improved in the last 5 years?" instead of "How old is your car?", "When do you plan to replace it?", "How long did you drive your previous car?" etc., which may provide the same answer and more, but is more complicated to interpret.
- I would like to process the responses with a script and produce some stats from them. Open-ended questions are indeed difficult to bring together. Yes/no questions are very coarse. Questions like "On a scale from 1-5, how difficult do you find it to master wiki syntax? (1 = very difficult, 5 = very easy)" could be more useful. A follow-up question like "Do you prefer even more editing options at the cost of a more extended or even more complicated syntax?" would bend the outcome of the survey towards a preferred course of action: either a usability study, or restraint on the part of the developers.
- It would be interesting to ask some demographics, and which wikimedia projects people contribute most to, to put their answers in perspective. But here also conciseness should win over completeness, I think.
- I see the survey as a finite project, say a questionnaire that is advertised all over wikimedia for two weeks only, after which processing is done.
- Later on, a more thorough questionnaire about demographics could be added to the MediaWiki code as part of the user page. It should indeed be visible only to the logged-in user, its usage stated clearly (stats only?), and of course be completely optional to fill in. But that would be a different project where MediaWiki hackers would have to be involved. Also, that part would work much better once single login is a fact. Then one could collect all activity from one user across all projects and couple it to demographic data. In fact that could become an extension to my wikistats script.
- As for the usability research: on one hand it makes sense indeed to target newbies for questions about usability. Long-term contributors have obviously cleared most hurdles, probably one by one as the syntax evolved; those who failed miserably are not there to ask :) Then again there must be more people who are still overwhelmed after a year; the question is how many, and that may be even more significant as a measure of how steep the learning curve is.
- I doubt email is the best vehicle for a short survey. We don't have email addresses for many people. People don't like to be spammed. I have no idea how to process the returned info. I would favour a simple, one-time, discardable PHP script for the initial survey. People could choose to see the form in their own language (if we can get translators involved). Erik Zachte
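Erik's plan above, to process responses with a script and produce some stats, could look something like the sketch below (in Python rather than the throwaway PHP he mentions; the function name and the example answers are hypothetical). It tallies a single rating-scale question into a count, a mean, and a per-point distribution, discarding malformed answers:

```python
from collections import Counter

def summarize_scale(responses, scale_min=1, scale_max=7):
    """Tally answers to one rating-scale question: count, mean, distribution.

    Out-of-range values (blank or malformed answers) are discarded.
    """
    valid = [r for r in responses if scale_min <= r <= scale_max]
    if not valid:
        return {"n": 0, "mean": None, "distribution": {}}
    counts = Counter(valid)
    return {
        "n": len(valid),
        "mean": sum(valid) / len(valid),
        # fraction of respondents picking each point on the scale
        "distribution": {v: counts.get(v, 0) / len(valid)
                         for v in range(scale_min, scale_max + 1)},
    }

# Hypothetical answers to a 1-7 question such as
# "I rate the wikimedia editing process" (1=very simple, 7=very difficult)
answers = [2, 3, 3, 4, 5, 3, 2, 6, 7, 1]
print(summarize_scale(answers)["mean"])  # 3.6
```

Because every closed question reduces to numbers like these, the same routine could be reused for each of the 20-odd questions, language-independently.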
- One benefit that email gives is that you can target specific people, e.g. newbies, admins etc., and be clear on your methodology, as K1v1n points out. (Though, if the problem is that this would be messy, then we could still target people by email and direct them to a webpage with a form which is submitted via some sort of script, as you say.) I've only set up forms before that are submitted by email, but if you have access to a server then it can be set up nicely - I don't actually know how to do this, but I'm sure someone will. If you don't target, the results could be biased in that you only get enthusiastic regular contributors, who aren't always the best ones to ask difficult questions. I really like your idea of translating surveys (but will the same be said for the translating team?!) If we do it with multiple-choice boxes, then I'm sure the data could be collected without having to translate the answers, which would be really nice. I also like your idea of this being a 'state of the wiki' survey - which means that it's important to get something that is able to give a sense of the change over time. I think we would also get feedback on the survey itself, to improve it over time.
- The question of the atmosphere on some wikis is a very complex one and at the moment I don't see a way of making it clear from a multilingual multi-wiki tick the box survey which wiki is being talked about. Or do we do a survey on each wiki?
- So, I suppose we just need to draft the thing first, right? As for question areas, what about, roughly: usability (4-5 qs), actual use of and contribution to wikimedia (3-4 qs), atmosphere/civility (2-3 qs), process/procedure (4 qs), personal background (3-4 qs)? I also favour keeping the questionnaire compact; 20 questions is a good number. And just pointing out, for what it's worth: if these are to be multiple-choice (i.e. closed) questions, the real challenge is getting something real and meaningful at the end, without leading the answers in a certain way. Cormaggio 20:06, 21 Jun 2005 (UTC)
- "..if you have access to a server then it can be set up nicely.." I plan to run the script from the Wikipedia server.
- "I'm sure the data could be collected without having to translate the answers." Most questions can be asked in a format that is numerically processable.
- "..we would also get feedback on the survey itself.." Certainly, but it would help to have a more concrete proposal first.
- ".. which wiki is being talked about." We might ask a person: list at most three projects on which you are most active. People can then select the project from a drop-down box which lists 'Wikipedia', 'Wiktionary', 'Wikinews', etc. and add the language code in an edit box. And this three times.
- "The real challenge is getting something real and meaningful at the end, without leading the answers in a certain way." Without any doubt this is very important.
Erik Zachte 20:34, Jun 23, 2005 (UTC)
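Erik's "three projects, each a drop-down plus a language-code box" scheme can be tallied numerically without translating any answers. A minimal sketch (Python; the project list, field layout, and function name are assumptions for illustration only):

```python
from collections import Counter

# Assumed drop-down options; blanks mean the respondent skipped a slot.
VALID_PROJECTS = {"Wikipedia", "Wiktionary", "Wikinews", "Wikibooks", "Wikiquote"}

def tally_projects(rows):
    """Count (project, language) pairs across all respondents.

    Each row is one respondent's up-to-three selections as
    (project, language_code) tuples; blank or invalid entries are skipped.
    """
    counts = Counter()
    for row in rows:
        for project, lang in row:
            if project in VALID_PROJECTS and lang:
                counts[(project, lang.lower())] += 1
    return counts

# Hypothetical responses
rows = [
    [("Wikipedia", "nl"), ("Wiktionary", "nl")],
    [("Wikipedia", "en")],
    [("Wikipedia", "nl"), ("Wikinews", "en"), ("", "")],
]
print(tally_projects(rows).most_common(1))  # the most-selected project/language pair
```

The resulting counts could then be cross-tabulated against the rating-scale answers to put each respondent's opinions in the perspective of the projects they are active on.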
- Erik Zachte 22:03, Jun 5, 2005 (UTC)
- K1v1n 16:13, 20 Jun 2005 (UTC)
- Piotrus 15:55, 27 July 2006 (UTC)
Sounds like a neat idea Erik. Sj might be interested in giving you some input on this. Cheers. Anthere 23:13, 23 Jun 2005 (UTC)
General remarks and questions
Relation to usability?
I admit, I didn't read all the above, but I had a glance at the questionnaire. Having said that, I think it is safe to state that one of the implicit or explicit aims of doing such a questionnaire as part of the Wikimedia Research Network initiative is to improve wikip/media's usability, or, more precisely, its web-usability in this case.
In order to gather data that will allow us to improve usability, there are several methods; surveying is one of them. Alas, the surveying method has proven to be one of the less effective methods for gathering reliable data to enhance usability. The main difficulty is that there appears to be a disconnect between how usability is actually experienced (which appears to happen at a nearly subconscious level) and what people can tell about that experience. But there are other difficulties: e.g. someone having trouble with wiki-type editing methods is harder to entice into participating in a poll about that topic than someone who is enthusiastic about such methods, so there is a difficulty in reaching the real target audience for such polls (still speaking in general terms about usability-improvement polls, not e.g. about popularity polling). There is literature about these topics, from the useit website to very specialised literature. Note that (web) usability is a topic to deal with regardless of commercial exploitation, and as far as I can see the general scope of the Wikimedia Research Network initiative includes the usability topic.
So, the question I have is the following: could it be made explicit whether or not the General User Survey Questionnaire aims at gathering decision-making material for usability-related improvements? If it does, that appears very tricky to me (at least the set-up of the questionnaire needs more thinking); if it doesn't, the questionnaire would be more of the "pop-poll" type (and can be shortened to about a third of its present size I suppose). --Francis Schonken 09:07, 27 July 2005 (UTC)
- In the survey, editors can state their level of satisfaction with certain aspects of wikimedia/mediawiki. It is extremely high level, asking only one or two questions per aspect. So it will probably not lead to immediate proposals for improvement. However, it should give us an idea of which issues need more attention. Further, more detailed research, possibly through a survey, may then be needed. So which questions would you deem redundant in order to shrink it to a 'third of its present size'? There are 4 questions about usability now:
- I rate the wikimedia editing process: Very simple  ..... Very difficult  (RBS7)
- I would favour more editing power and flexibility even at the cost of a more extended syntax (e.g. new tags for extensions, new attributes for layout): Very much not true  .... Very true  (RBS7)
- I rate the wikimedia facilities for navigation and information gathering: Very unsatisfactory  ..... Very satisfactory  (RBS7)
- I rate the wikimedia facilities for monitoring article changes: Very unsatisfactory  ..... Very satisfactory  (RBS7)
- Erik Zachte 10:10, July 27, 2005 (UTC)
- Thanks, I hadn't considered the multiple step option you describe, which is a third way, apart from the two options I mentioned, and indeed viable.
- Note that nearly all the questions following the explicit "usability" questions have at least a usability angle. I'll give only one example; I'm sure you can see what I mean for the other questions too (if not, ask me). The question "Compared to a year ago decision making processes on Wikimedia are:" is usability-linked. To give an example from Dutch wikipedia (which you know): about half a year ago there were efforts to reform the voting procedures on Dutch wikipedia (the "stemhok"). In the end the procedure became less strict, and I think mainly for usability reasons, not because (on an abstract level) it wasn't the better "decision making process". The problem with a questionnaire (in relation to usability) is that one can never be sure whether a participant answers the question on an abstract level (pro or contra democratic voting procedures with a maximum of objectivity) or on a practical level (pro or contra the usability of an elaborate voting procedure). The net result, when looking for an indication of which way wikip/media policy, or alternatively the wikip/media software implementation, has to go, is thus close to negligible. But as you say, no direct conclusions will be drawn from this poll, and further investigation will occur (I would make it: to be decided whether or not by survey) before any decision is made; then I see the aim, and would encourage the proceedings. --Francis Schonken 10:58, 27 July 2005 (UTC)
- Rephrasing: the present version of the questionnaire gives the impression that some usability testing is going on; well, I found this in en:usability testing:
- Caution: simply gathering opinions is not usability testing -- you must arrange an experiment that measures a subject's ability to use your [application]
- So, if neither this survey, nor any other survey of the type "on-line questionnaire", is apt to perform usability testing, while on the other hand the form suggests it is (by asking direct questions about usability), the least that can be done is to put up a disclaimer up front stating that the questionnaire is not part of a usability testing program.
- But then the other problem I still neglected in my contribution above is that a questionnaire from the Wikimedia Research Network initiative should not be about policy-making in the individual wikiprojects, that's simply outside the well-defined scope of the Wikimedia Research Network initiative thus far (and a lot of other problems, coming down to: you're not equipped to do that, even less than for doing some kind of usability testing). The only remaining range of topics a "multiple choice" type of questionnaire could be about (if not usability and not top-down policy-making) is merely technical, e.g. prioritisation of the next features to develop. --Francis Schonken 16:59, 27 July 2005 (UTC)
- "Simply gathering opinions is not usability testing" Who said it would be? It was never my plan to do usability testing in any way through this survey. The subject of usability testing was brought up later, and most agreed it should be dealt with separately. Again, this survey is meant to ask the opinion of as many regular contributors as possible about varied aspects of the wikimedia experience. In my opinion no kind of research is outside the scope of the WRN. Even if that were so, it would not alter the proposal, which could have been brought forward by anyone at any time. The WRN is to me mostly a forum where ideas like this one are discussed and fostered. This 'poll' or 'survey' or whatever name you prefer for it is slightly different from most polls on meta pages or elsewhere in that it aims to produce quantifiable, anonymized results from authenticated users that can be analyzed by anyone. It may serve as a test case for other partially automated surveys in the future. Since you seem to interpret the form slightly differently from others who commented before, this again stresses the importance of carefully phrased questions and introductory/explanatory texts. Although I do have difficulty myself seeing the confusion between the following concepts: on the one hand, asking for a general verdict or level of satisfaction about the usability of the current interface (which is one of the aspects this survey inquires about); on the other hand, a detailed analysis of what might possibly be wrong with the interface and what could be done about that (which you seem to think this survey should not do; well, it doesn't). Erik Zachte 00:13, July 28, 2005 (UTC)
- Disclaimer: I would keep it, most people filling in the form wouldn't have read this talk here, but might start wondering...
- "scope": yes, thinking it over, I might do some suggestions regarding the scope of the Wikimedia Research Network, for instance:
- "formulation of recommendations": I'd cut that out of the scope, because that is something hampering research on wikip/media policies: traditionally, in wikipedia context, policies are developed by the community/communities, NOT by a scheme of research -> recommendations to those in charge -> top down implementation. Since WRN is a research department separated from where things are implemented there is no need to "formulate recommendations", which generally lowers the scientific level of research (while open to threats of bias towards the goals one wants to achieve). Further, there are enough clever wikipedians (in fact, all of them) who can read the findings of the research and decide for themselves what to do with it.
- "present results to the board": I would make no distinction, just "make available to the community", that is, without separate treatment of board members. Further, I would put in the scope to make the findings of the research available on a permanent basis, in a structured way. Meta-wiki allows that, so I think that is better than writing "one-off" reports that are sent to a board. This all seems very self-evident to me.
- "test case for other partially automated surveys": yes, I like that aspect: it had crossed my mind that maybe even that aspect in itself is enough to justify the present poll.
- "quantifiable anonymized results from authenticated users that can be analyzed by anyone": yes, but the "material" that is gathered for further analysis should have some level of quality too, and quality is not achieved by gathering information with flawed methods. Gathering information about usability via a kind of customer-satisfaction survey IS a flawed, non-scientific method. Jakob Nielsen's plain comment: such a method is the wrong method to collect the wrong data; see Misconceptions About Usability, especially the last paragraph about Usability vs. Customer Feedback --Francis Schonken 06:35, 28 July 2005 (UTC)
I do agree with Francis's points about usability. I raised this possibility before (above) and the offer stands that we would be willing to conduct a "real" usability study for Wikimedia as part of some other work we are doing within our organization.
- Which of Francis's points do you agree with, exactly? The only point on which I disagree with Francis (which is why it took up most space in this discussion) is whether a detailed usability survey and this survey have any overlap or interference. This survey merely tries to establish whether some issues require more discussion and investigation, because large numbers of respondents express concerns or dissatisfaction. It merely might signal to us how urgently a usability survey is actually needed. It is very important to ask unbiased questions. Yet at the same time a high-level survey cannot address the finer details of any subject. As with usability, one might question the validity of most questions on the grounds that the implications of the answers will be vague, or that they interfere with much more detailed studies. Of course when a question becomes too broad or vague it needs to be improved, but I don't agree that the word usability is off limits here because there will be a detailed usability survey sometime. Erik Zachte 10:10, August 11, 2005 (UTC)
- I agreed with him that doing a survey to measure usability is not the best way to get at that sort of information. We have, however, already discussed and agreed that doing the general survey as a preliminary step to a more scientific usability study made sense as a strategy. I mostly wanted to reiterate our willingness (at some point in the future) to do some scientifically valid usability testing with live users. K1v1n 02:15, 12 August 2005 (UTC)