User:AlecMeta/Interpreting Results from Referendum on Image Filter


Rationale

Post hoc analysis and predictions are inherently less trustworthy than statistical tests included a priori in a design. After the fact, it may be difficult to determine which results indicate what. Stating a proposed method of interpretation BEFORE seeing the data is a good thing, when possible.

What will be asked?

On a scale of 0 to 10, if 0 is strongly opposed, 5 is neutral and 10 is strongly in favor, you will be asked to give your view on how important it is:

  1. for the Wikimedia projects to offer this feature to readers.
  2. that the feature be usable by both logged-in and logged-out readers.
  3. that hiding be reversible: readers should be supported if they decide to change their minds.
  4. that individuals be able to report or flag images that they see as controversial, that have not yet been categorized as such.
  5. that the feature allow readers to quickly and easily choose which types of images they want to hide (e.g., 5–10 categories), so that people could choose for example to hide sexual imagery but not violent imagery.
  6. that the feature be culturally neutral (as much as possible, it should aim to reflect a global or multi-cultural view of what imagery is potentially controversial).

About the data

Each vote consists of a set of six integers. Additionally, each voter's 'host project' should have been recorded. We could also probably obtain some rough measure of user activity or account age.

Q1: Wikimedia projects to offer this feature to readers

  • Are there any projects whose voters strongly wanted this, judged by the criterion of the percentage of voters who voted higher than 5?
    • If yes, that's a good indication that at least some projects should have this feature.
  • Are there any projects whose voters are strongly opposed to this, judged by the criterion of the percentage of voters who voted lower than 5?
    • If yes, we probably shouldn't 'impose' the feature on those projects.
  • Globally, do we see a bimodal distribution on this question or a normal one?
    • If we're strongly split, we need to proceed with caution: identify the editor populations that are split and see what, if anything, differs between those populations that causes them to split on this issue.
    • In particular, if user activity is strongly correlated with this answer, that might point toward the right answer.
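The per-project criterion above can be sketched as follows; the project names and scores are invented purely for illustration:

```python
from collections import defaultdict

# Hypothetical data: each vote is (project, q1_score); scores run 0-10.
votes = [
    ("enwiki", 8), ("enwiki", 2), ("enwiki", 9), ("enwiki", 10),
    ("dewiki", 1), ("dewiki", 0), ("dewiki", 3), ("dewiki", 6),
]

def support_opposition(votes):
    """Per project: % of voters above 5 (support) and below 5 (opposed)."""
    by_project = defaultdict(list)
    for project, score in votes:
        by_project[project].append(score)
    result = {}
    for project, scores in by_project.items():
        n = len(scores)
        result[project] = {
            "pct_support": 100 * sum(s > 5 for s in scores) / n,
            "pct_opposed": 100 * sum(s < 5 for s in scores) / n,
        }
    return result

print(support_opposition(votes))
```

A histogram of the global scores would then show whether the distribution is bimodal or unimodal; any threshold for "strongly wanted" (e.g. a majority above 5) would have to be chosen up front.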

Side-Issue questions:

  • Do voters on the Arabic-language (AR) projects vote significantly higher than the global mean?
  • Do voters on the Swedish-language projects vote significantly lower than the global mean?
  • (and such questions generally, not just for Arabic and Swedish)
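A significance test of the kind asked for above could be run as a simple permutation test on the difference of means; the scores below are invented, and a real analysis would use the actual ballots:

```python
import random
from statistics import mean

def permutation_pvalue(group, rest, n_iter=10000, seed=0):
    """Two-sided permutation test on the difference of means.

    Shuffles the pooled scores and counts how often a random split
    produces a difference at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(mean(group) - mean(rest))
    pooled = list(group) + list(rest)
    k = len(group)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        if abs(mean(pooled[:k]) - mean(pooled[k:])) >= observed:
            hits += 1
    return hits / n_iter

# Invented example: one language community's Q1 scores vs. everyone else's.
ar_scores = [9, 10, 9, 10, 8]
other_scores = [5, 4, 5, 6, 5, 4, 5]
print(permutation_pvalue(ar_scores, other_scores))
```

A permutation test makes no normality assumption, which matters here since 0-10 ratings are bounded and may well be bimodal.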

Q2: that the feature be usable by both logged-in and logged-out readers.

  • The question text is confusing, even in the original English: it presents a dichotomy between two unclear options.
  • Where will the "Hell no" contingent vote on this? I'm not sure.

Q3: that hiding be reversible: readers should be supported if they decide to change their minds.

  • This is a fascinating question.
  • In my interpretation, it may actually screen for people whose personal beliefs are incompatible with the values of the Wikimedia Movement.
    • Thus, people who very strongly oppose readers having the power of choice are people who will not be happy with any filter we can provide.
  • Thus, a large number of people very strongly opposing this may be a sign that something is "up".
    • Strong negatives could represent their own trend or subpopulation.
    • Strong negatives could be the "Hell No" contingent.
    • Something else?
  • Either way, "Strong No" is something I'll have to see comments to understand, I think.

Q4: that individuals be able to report or flag images that they see as controversial, that have not yet been categorized as such.

  • The real key to this question is to compare Q4 to Q1.
    • Strong negative on both is probably the "Hell no" contingent.
    • Negative on Q1, Positive on Q4: Anti-filter, but customizable helps.
    • For people who wanted a filter in Q1, the rating of Q4 will be interesting. Logically, Q1 and Q4 should positively correlate. If they don't, something is "up".
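One way to sketch the Q1-vs-Q4 comparison: a correlation coefficient for the "logically should positively correlate" check, plus a sign-based classification for spotting the "Hell no" contingent. All the numbers here are invented:

```python
from math import sqrt
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return cov / var

def classify(q1, q4):
    """Label a voter by the sign of each answer relative to neutral (5)."""
    side = lambda v: "neg" if v < 5 else ("pos" if v > 5 else "neutral")
    return (side(q1), side(q4))

# Invented paired Q1/Q4 answers.
q1 = [0, 1, 9, 8, 2, 10, 0]
q4 = [0, 7, 9, 8, 1, 10, 1]

print(round(pearson(q1, q4), 2))
print(classify(0, 0))  # ("neg", "neg"): candidate "Hell no" voter
print(classify(2, 7))  # ("neg", "pos"): anti-filter, but customizable helps
```

If the correlation comes out near zero or negative for the pro-filter subgroup, that would be the "something is up" signal described above.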

Q5: that the feature allow readers to quickly and easily choose which types of images they want to hide (e.g., 5–10 categories), so that people could choose for example to hide sexual imagery but not violent imagery.

  • Tough question-- what's the alternative? Per-user filter categories, or a single category?
    • Per-user filters are more "culturally neutral", and should correlate with Q6.
    • A single category may or may not be 'culturally neutral', depending on who you ask.
  • "Hell No" contingent needs to be accounted for.
  • Look at comments for clues to identify how people are answering this.

Q6: that the feature be culturally neutral (as much as possible, it should aim to reflect a global or multi-cultural view of what imagery is potentially controversial).

  • If globally this is rated high, that's strong support for a culturally neutral approach.
  • If globally this is rated low, it's inconclusive.
    • People who strongly oppose any feature at all might vote highly negative on this.
      • We could partially account for this by 'factoring out' voters who rated Q1 negatively.
    • People who strongly oppose a culturally neutral approach may not all agree on what the alternative is.
      • Voters in other cultures might be saying "No, filter only according to my culture, but not anyone else's".
    • A negative on this question, by itself, is somewhat meaningless. Even if everyone opposes a multicultural approach, they may agree on nothing else.
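The 'factoring out' step mentioned above could be sketched like this, assuming paired (Q1, Q6) answers; the data is invented:

```python
from statistics import mean

# Invented paired (q1, q6) answers, one tuple per voter.
votes = [(0, 0), (1, 1), (8, 9), (9, 7), (2, 0), (10, 8)]

# Q6 ratings from all voters vs. only voters not negative on Q1.
q6_all = [q6 for _, q6 in votes]
q6_filtered = [q6 for q1, q6 in votes if q1 >= 5]  # drop Q1-negative voters

print(mean(q6_all), mean(q6_filtered))
```

A large gap between the two means would suggest the low global Q6 rating is driven by the anti-any-filter voters rather than by genuine opposition to cultural neutrality.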

Broad classes of opinions

Aiding censorship is inherently wrong

Censorship is inherently wrong

Censorship should be the right of people who impose it on themselves

Censorship should be mandatory for some humans

Censorship should be mandatory for all humans