Community Insights/2016-17 Report/How to interpret the data


Error messages

You will see big red error messages in the report. These indicate that we know there is an issue with the question. There are two error types:

Known question error: There is an error in the question itself. There *may* be something we can do about the issue, and we have indicated what the likely cause is.
Few responses: When a question received few responses, there is not much we can do. Sometimes this is due to an error in the survey filters, but often it is not. Because of our sampling process and the high volume of questions, many questions ended up with fewer than 40 respondents; these questions were flagged.


The participants number is very important: it tells you how many people responded to the question. Only about 20 questions were shown to all participants; the remaining 240 questions were divided among the audiences. Some groups, especially editors, required additional randomization within the survey to reduce the survey length. For example, 50 questions were grouped into 5 blocks of 10 questions, and the order of those blocks was randomized for each participant. The purpose of this was to reduce the overall burden on respondents.
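The block-randomization scheme described above can be sketched as follows. This is an illustration only, not the survey's actual implementation; the function name and question IDs are hypothetical.

```python
import random

def randomize_blocks(questions, block_size, rng=random):
    """Split a question pool into fixed-size blocks, then shuffle the
    block order independently for each participant."""
    blocks = [questions[i:i + block_size]
              for i in range(0, len(questions), block_size)]
    rng.shuffle(blocks)  # a fresh block order for this participant
    return [q for block in blocks for q in block]

# Hypothetical pool of 50 question IDs split into 5 blocks of 10:
pool = [f"Q{n}" for n in range(1, 51)]
order = randomize_blocks(pool, 10)
```

Each participant still sees all 50 questions, but the blocks arrive in a different order, which spreads fatigue effects across the question pool.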

Remember that the overall number of participants varied by audience: the editors sample was about 3,500 people, while program leaders, affiliates, and volunteer developers were each about 100 people.


There are two types of graphs. Graphs that say "Participants could select more than one option" show how many people selected each option, so the percentages in the graph will often add up to more than 100%! Graphs for scale questions (e.g. strongly agree/disagree, not at all/completely) should add up to about 100% (small rounding errors are common and totally normal).

Check all that apply

How to identify them

These questions include a statement saying that participants were able to select more than one option.

How to interpret them

For these questions, the percentages in the graph will often add up to more than 100%! Check-all-that-apply questions are easier to interpret: you can simply see which options were selected more than others. It is also important to examine how many people selected "other", because it tells you whether the list of response options was sufficient.
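Why check-all-that-apply percentages exceed 100% can be shown with a quick worked example. The respondent count and option names here are hypothetical.

```python
# Hypothetical check-all-that-apply question with 100 respondents.
# Each bar shows the share of respondents who selected that option,
# and one respondent can appear in several bars.
respondents = 100
selections = {"Reading": 80, "Editing": 55, "Other": 12}

percents = {opt: 100 * n / respondents for opt, n in selections.items()}
total = sum(percents.values())  # 147.0 — well over 100%, as expected
```

Because each percentage is computed against the same denominator (all respondents) rather than against the total number of selections, overlap between options pushes the sum past 100%.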


Scales

How to identify them

Scale questions run from one end of a scale to the other, such as strongly agree to strongly disagree, or not at all to completely. These are sometimes called "Likert" scales or "Likert-type" scales.

How to interpret them

These questions should add up to about 100% (small rounding errors are common and totally normal). For these questions, it works best to group the results. For example, on an agree/disagree scale, grouping the results gives you better information than taking each option by itself: it tells you generally what proportion agree, what proportion disagree, and what proportion neither agree nor disagree.
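Grouping scale results can be illustrated with hypothetical response counts (the numbers below are invented for the example, not taken from the report):

```python
# Hypothetical counts on a five-point agree/disagree scale.
counts = {
    "Strongly agree": 120,
    "Agree": 230,
    "Neither agree nor disagree": 90,
    "Disagree": 40,
    "Strongly disagree": 20,
}
total = sum(counts.values())  # 500 scale respondents

def grouped_percent(options):
    """Percent of scale respondents who chose any of the given options."""
    return round(100 * sum(counts[o] for o in options) / total, 1)

agree = grouped_percent(["Strongly agree", "Agree"])           # 70.0
neutral = grouped_percent(["Neither agree nor disagree"])      # 18.0
disagree = grouped_percent(["Disagree", "Strongly disagree"])  # 12.0
```

Reading "70% agree overall" is usually more useful than comparing the 24% who strongly agree against the 46% who merely agree.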

Please note that some scales also include "no opinion" or "I don't know" in the bottom left of the graph. If a number is present, it means that a group of people selected an option outside of the scale, so that option is displayed outside of the main results. While the graph shows everyone who responded on the scale, the percentage in the bottom left is out of everyone who answered the question. This number can tell you important information, for example, what proportion of people might not care about a certain issue or topic, or whether people are unaware of a certain topic.
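The two different denominators described above can be made concrete with a small worked example (all counts here are hypothetical):

```python
# Hypothetical question: 200 people answered it in total,
# of whom 20 chose "I don't know" (an option outside the scale).
answered = 200
dont_know = 20
scale_respondents = answered - dont_know  # 180 responded on the scale

# Bottom-left number: computed out of everyone who answered the question.
dont_know_pct = round(100 * dont_know / answered)         # 10

# Graph bars: computed out of people who responded on the scale itself.
agree_count = 90
agree_pct = round(100 * agree_count / scale_respondents)  # 50
```

So a bar showing 50% "agree" and a bottom-left "10% don't know" are percentages of two different groups, and they are not meant to sum together.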