
Learning patterns/Who should we survey?

From Meta, a Wikimedia project coordination wiki
Problem: What you learn from your survey, and how you can use that knowledge, depends on who you ask (and how many people respond).
Solution: Identify your target population and choose a representative sample of it to survey, one that makes sense given the size of your population, the number of responses you realistically expect to receive, the kind of analysis you intend to perform, and the kinds of conclusions you want to be able to draw.

What problem does this solve?


Surveys can be used to gather all sorts of useful data that can be analyzed in different ways, but you need to survey enough of the right people.

What is the solution?


To identify your survey participants, you first need to identify a population, and then a sample. The population is the full set of people whose experiences, opinions or activities you are potentially interested in learning about. The sample is the set of people you are actually going to ask to take your survey.

In some cases, the population and the sample will be the same: for example, you may want to get feedback from all 75 people who attended a workshop. In other cases, the population is huge and you want to survey only a sample of it. In these cases, you might want to gather a random sample (e.g. 100 people who have attended at least one Wikimania since its inception), or a more targeted sample (e.g. the 50 most active dispute resolution mediators on fr.wikipedia.org).
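As an illustration, drawing a uniform random sample from a larger population is a one-liner in most scripting languages. This sketch uses Python's standard library; the population list and its size are hypothetical stand-ins for a real list of usernames.

```python
import random

# Hypothetical target population, e.g. usernames of everyone
# who has attended at least one Wikimania since its inception.
population = [f"attendee_{i}" for i in range(1, 2501)]

random.seed(42)  # fix the seed so the draw is reproducible

# 100 distinct people, each equally likely to be chosen.
sample = random.sample(population, 100)

print(len(sample))       # 100 invitations to send
print(len(set(sample)))  # 100 -- random.sample never picks the same person twice
```

Sampling without replacement (as `random.sample` does) matters here: you don't want to invite the same person twice.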

Sometimes you want to give different surveys to different groups: for example, you may wish to give one survey to people who submitted an Individual Engagement Grant, and a slightly different one to IEG reviewers. This approach can be useful if you want different kinds of input from people who participate in the same project or process, but filled different roles.

Sample size


Another important consideration is sample size. You should never expect responses from everyone you ask to take your survey. The response rate can vary a great deal and can be hard to predict, but it's often between 10% and 20% and could even be less! So when considering how many people to ask to take the survey, consider:

  • ... the maximum number of eligible participants. How many people qualify for your survey? If the answer is "everyone who has ever edited Wikipedia", you might not need a 20% response rate. Then again, you might get some pretty skewed data from a target population that diverse, so you might want to narrow it down a bit. If there are only 45 possible respondents, you probably want to hear from more than half of them.
  • ... the minimum number of responses you need to perform your analysis. This varies a lot based on the kind of analysis you want to do. For quantitative analysis such as comparing numerical editor satisfaction ratings between male and female editors, you will need in excess of 100 respondents (mind the gap!). If you are asking mostly open-ended questions that require respondents to write prose sentences, 100 respondents may actually be more data than you can realistically synthesize into "findings" within a reasonable timeframe.
  • ... the number of people you realistically expect will respond. If your target population is small and distinct, you may get a higher response rate than if it is large and poorly-defined. If you are asking people to fill out a survey about something that is very recent (e.g. yesterday's Edit-a-thon) or very relevant (something you know they feel very passionate about), your response rate is also likely to be higher. The personal touch also matters: a survey request posted on someone's User Talk page or made in a face-to-face meeting will usually get a higher percentage of responses than email spam or a banner advertisement. Finally, offering a small honorarium may help in some cases, but usually isn't necessary.

Representative samples


TODO: brief summary of how to make sure your sample is representative



See also