Image filter referendum/Sue's report to the board/en

Prepared by Sue Gardner, with support from Philippe Beaudette and other members of the Referendum Committee
September 18, 2011

I thank the referendum committee for its help in creating this document. Opinions and interpretations expressed here are mine, and may or may not reflect the views of the committee members. -- Sue.

Executive summary

In August 2011, the Wikimedia Foundation held a referendum to gather editor input on the development and usage of a personal image hiding filter, which the Board had requested in June 2011. 24K editors participated in the referendum and, of those who expressed an opinion, 56% were in favour of the filter and 31% were opposed.[1] Write-in comments suggest that commenters in favour of the filter see it as a “nice-to-have” that may bring in readers who had previously avoided Wikipedia. Commenters opposed to it believe it constitutes, and/or will enable, censorship of the projects, and that it either does not support, or directly conflicts with, our core mission. Since the referendum, there has been a great deal of follow-up conversation on wikis and mailing lists, most of it opposed to the filter, and the German Wikipedia community has held a poll in which a strong majority of participants rejected its implementation.

Background and context

Sixteen months ago, the Wikimedia Foundation Board of Trustees began discussing the issue of controversial imagery in the projects. To inform that discussion, the Board commissioned a study that combined a research component with internal Wikimedia consultation. The study, which made a series of recommendations, was delivered to the Board in October 2010. In June 2011, the Board published a unanimous resolution asking its Executive Director to develop a “personal image hiding filter” for the projects, which would allow readers to voluntarily screen particular types of images from their own view.

At the request of the Executive Director, the Wikimedia Foundation, working with a committee of experienced Wikimedia editors, held a referendum from 15 August to 30 August 2011 to gather editor input on the development and usage of the filter. The results were announced on 3 September 2011. The bar to participate was set at a minimum of 10 edits.

Purpose of this document

The purpose of this report is to provide a high-level overview of the results and subsequent discussions, to ensure all Trustees have a good basic understanding of the issue, supplementing whatever reading and talking they’ve been doing individually.

Referendum results: overview

More than 24K people representing at least 65 languages participated in the referendum. (In comparison: 3.3K people voted in the 2011 Board of Trustees election, and 17K people participated in the 2009 license migration vote.)

The referendum did not directly ask whether respondents supported the idea of the filter. It did ask this question:

On a scale of 0 to 10, if 0 is strongly opposed, 5 is neutral and 10 is strongly in favor, please give your view of the following: It is important for the Wikimedia projects to offer this feature to readers.

24,023 people responded to that question, with 23,754 selecting a number on the scale. The result was mildly in favour of the filter, with an average response of 5.7 and a median of 6.

19.4% of respondents selected 10 as their response
24.84% selected 9 or 10
37.15% selected 8, 9 or 10
48.26% selected 7, 8, 9 or 10
55.75% selected 6, 7, 8, 9 or 10

15.66% of respondents selected 0 as their response
18.95% selected 0 or 1
23.79% selected 0, 1 or 2
27.86% selected 0, 1, 2 or 3
30.84% selected 0, 1, 2, 3 or 4

To summarize: of those who expressed an opinion, 56% voted in favour of the feature, 31% voted against it, and 13% selected the neutral option.[2]
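For readers who want to trace the arithmetic behind that summary, here is a minimal sketch. It assumes (this is one reading of the figures, not something stated explicitly on the ballot) that the headline 56% / 31% / 13% are simply the rounded shares of the 23,754 respondents who selected a number on the scale, with 6-10 counted as in favour, 0-4 as opposed, and 5 as neutral:

    # Sketch only: assumes the headline figures are rounded shares of all
    # 23,754 scale responses (6-10 = favour, 0-4 = oppose, 5 = neutral).
    favour = 55.75                    # % selecting 6, 7, 8, 9 or 10
    oppose = 30.84                    # % selecting 0, 1, 2, 3 or 4
    neutral = 100 - favour - oppose   # % selecting 5 -> 13.41

    print(round(favour), round(oppose), round(neutral))   # 56 31 13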

Additionally, 6,956 respondents (29% of the total) added write-in comments to their ballots. A quick-and-dirty analysis of a subset of the comments by the referendum committee suggests that about 41% had a negative tone and 30% had a positive one, with the remainder being neutral. Some were brief, but the overwhelming majority were detailed, thoughtful and constructive. See the sampling at the end of this document.

Community members who support the filter say they believe it will be helpful for readers. A large number said that they won’t use the filter themselves, but they expect it to be useful for others, such as parents of small children who don’t want their kids to see sexually explicit imagery, or Muslims who don’t want to see images of Muhammad. Some said they think the feature might help attract new readers, particularly groups or individuals who are relatively culturally conservative. Some said they would use the feature themselves to screen violent or gory images such as photographs of medical conditions, or when reading the projects in public or semi-public settings, such as at work, at school or on a commuter train, or when reading the projects with a parent or a child.

By far the most common reason given for opposing the filter is that it constitutes censorship or will enable censorship. Indeed, the word “censorship” appeared 1,127 times in the write-in comments.[3] Some say that any screening constitutes censorship, even if it is purely voluntary and individual. Others say that creating the categories supporting the filter will create the conditions that make it possible for other parties (e.g., governments, industry groups, schools) to block images for people who do not want them blocked (e.g., citizens, consumers, students). Some commenters said that creating this filter is not the job of the Wikimedia movement: that our role is to compile the sum of all human knowledge and make it available to the public, and that this filter either doesn’t support, or directly conflicts with, that mission. These commenters believe that creating filters should be done by external groups, if it’s done at all. Others argued that creating this filter may hurt editor retention (i.e., editors will leave because of it), and/or that the filter should be de-prioritized below other more important initiatives such as editor retention activities.

It’s worth noting that the comments in favour of the filter seemed to be fairly mild in tone, while the comments opposed to it were more vehement. This is probably due in part to the fact that the Board had already said it wanted the filter built. So, those in favour were expressing agreement with that decision, whereas those opposed were facing the uphill battle of arguing for it to be overturned.

What has happened since the referendum

The referendum committee has posted its initial analysis of the results.

There has been a great deal of discussion, both on our mailing lists and here on Meta. Much of it has been critical of the referendum’s questionnaire design, primarily arguing that the wording of the question about “importance” is ambiguous and that the referendum results should therefore be discarded as unusable.

A page discussing “next steps” has developed.

From 28 August until 15 September, the German Wikipedia held a poll asking whether the feature should be implemented on the German Wikipedia. The result: 357 German Wikipedians (86%) voted against implementing the feature, while 57 (14%) voted in favour.[4] Those opposed include prominent Wikimedians and chapter leaders such as WMDE Board Chair Sebastian Moleski, WMDE Executive Director Pavel Richter, and former WMDE Board Chair Kurt Jansson. The German community has had serious discussions about whether to fork if the Wikimedia Foundation implements the feature on the German Wikipedia.

Additional analysis

Part of the purpose of conducting the referendum was to provide some insight into the degree to which on-wiki and mailing-list discussions are reflective of general community sentiment.

To that end, it’s important to note that discussion about the filter on wikis and mailing lists, both before and after the referendum, was overwhelmingly opposed to the idea of a filter, while the referendum revealed that general community sentiment was mildly in favour of it. On this particular issue, therefore, we now know that the views expressed on mailing lists and wiki discussions are not reflective of general community sentiment.

Assuming that on-wiki and mailing-list conversations do differ from general community sentiment, there are a number of ways this gap could be interpreted.

It’s possible that the people who express opinions on wikis and mailing lists are just different from people who tend not to express opinions. For example, our conversations take place only among people who have the time and interest to participate, and at a global or meta level they take place solely in English. So, our discussions may tend to reflect the interests and attitudes of English speakers who have the time and interest to participate in them.

It’s also possible that over time, the people who participate in our discussions have developed, through discussion, consensus that doesn’t exist outside the group. Note that this consensus could be good or bad. It would be good if, over time, discussion has strengthened decision quality through collaboration and debate and challenge. However, that isn’t always the case, and there are indicators that our discussions are less-than-perfectly healthy: we have many reports of people finding our discussion spaces argumentative, bullying, and unsatisfying, and choosing therefore to either flee entirely (e.g., unsubscribing from foundation-l) or simply lurk rather than participate.

Next steps

The Referendum Committee will continue vote analysis, and will publish the results. There will also be further analysis of the write-in comments.

A number of community members are considering creating further polls and discussions aimed at re-running the referendum with a revised and refined question set, or at surfacing ideas about how to achieve the filter’s goal in ways that meet with greater community support.

The Referendum Committee will conduct and publish a post-mortem of the referendum process. There are some initial notes for the post-mortem in the appendix of this document.

The Executive Director plans to publish a statement responding to the referendum results, with a target date of 6 October. That statement will be informed by the Board’s discussion of this report.

Appendix: Postmortem on the process

The referendum committee will be publishing a post-mortem report. Here are some initial points that will likely appear in it:

In general, it’s clear that this referendum, however flawed in its specifics, surfaced community sentiment we otherwise wouldn’t have known about. That is an important result, and it suggests that the Wikimedia movement should continue exploring ways to invite a broader range of people into its discussion and decision-making mechanisms. To mitigate the unhealthy effects of groupthink, it probably makes sense to employ mechanisms that make participation both easy and anonymous.

Roughly 750,000 emails were sent inviting registered users who met the qualification criteria to vote. This was an experiment in large-scale outreach of this type, and it clearly drove up voting: there is a demonstrable spike in participation on the day the email was sent and the day after. Publicizing the vote is important to getting a good turnout.

The referendum materials (including the FAQ, background and committee organization pages, and the voting interface) were entirely translated into 32 languages and partially translated into an additional 33 languages; the voting interface itself and the FAQ were each available in 35 languages. Translation is critical, and requires stable and relatively succinct basic materials. We also asked people to add a Google translation for comments they made in languages other than English. Google Translate is far from perfect, but it makes non-English comments feasible where, in its absence, they might not be.

There was confusion about whether the results of the referendum would be binding for the Foundation, in part because some participants believed use of the word referendum implies a binding result. In future, non-binding referendums are probably better labelled as polls.

Initially, the referendum was mistakenly positioned as being requested by the Board of Trustees rather than the Executive Director, and there was some confusion throughout the process about who was in charge. We should aim to accurately determine roles and responsibilities up front, and communicate them clearly.

The design of the referendum ballot itself was confusing. Many participants would have preferred a straight “up or down” question, and also felt that a 1-5 scale would have produced more easily analyzed results than the 0-10 scale that was used.

The write-in comments added valuable flavour and texture to the referendum results. Also, the referendum invited people to comment in the language of their choice, and attach a Google translation for any comments that weren’t in English. Many did this, and we believe it’s a great, easy mechanism to make it possible for non-English speakers to have their voices heard.


Appendix: Image Filter Referendum Comments

About 24,000 people participated in the image filter referendum. Of those, 6,956, writing in 30 languages, took the time to add comments. A quick-and-dirty analysis of a subset of comments suggests that about 40% were negative in tone and 30% were positive, with the remainder being neutral. Below is a sampling of the comments submitted, selected because i) they don't include any information that could personally identify the writer, and ii) they seemed particularly thoughtful. This sample doesn't attempt to accurately reflect the distribution of opinions in the total comments submitted: its purpose is just to give a sampling of the range of opinions expressed and their substance, as well as to show the depth of editors' engagement. The comments have not been edited.

On the whole I'm opposed to this kind filter: the only criterion that counts is the relevance of the image wrt the topic. Implementing a filter only gives the reader the opportunity to refuse to see the world as it is, and alters its perception of a topic. It is harmful to open-mindedness. Seeing a "shocking" image is an opportunity to learn something one would not have imagined, and to overcome one's fears and prejudices, whatever the age of the reader. I also don't think a culturally neutral filter can exist, except a filter that would block most images.
I know a few people who, due to their very conservative, traditional religious views, keep away from Wikipedia because it contains images with sexual or violent content. While I personally disagree with their stance, I understand it and respect it. If they could set up their own account to always hide those types of images, I'm confident they would use Wikipedia frequently.
Not sure this is a healthy trend; the web is already becoming too "customized" (= sanitized) to individual viewers' predetermined viewpoint. Part of enlightenment involves stepping out of your comfort zone, in order to see things from other perspectives. The only real justification I can support is children, though I find it curious that there's such strong prohibition to sexual content, and frequently not equal aversion to violence. It seems to me that is fundamentally backwards, as children are more likely to be harmed from depictions of violence than from depictions of procreation. (But addressing that social oddity is tangential to the issue at hand.)
Sounds like a good idea! Even though I'm not a minor, I sometimes come across articles which have images which I really did not want to see. Perhaps I wanted to read about the topic; but the image is really unnecessary (for me, at least). I think Wikipedia can still be considered an open information source even if users have to do a little bit more (i.e., "click a button") in order to see it.
Users should be able to change their minds; but it would be great if Wikipedia could allow a "children's account" that a parent can create and which forbids the hiding from being reversed. Of course, Wikipedia can't enforce who uses an account; but it would be a helpful tool for parents in case there ever exists a client-side software to enforce it (i.e., child logins to Wikipedia only if they are using a PC from a restricted user account).
I appreciate your desire to act carefully and make sure that this does not turn into censorship, but I think it is very difficult to do this in a culturally neutral way. A beneficial effect that projects like Wikipedia have on the world is that they bring people outside their comfort zone and make them realize that certain topics are actually *not* controversial. Whether you ever reach censorship or not, you risk halting such progress. In the year 2100, will I still see that a photo depicting gay marriage is "controversial" because some people from the southern US thought so in 2011? Letting random people flag images according to random prejudice must be avoided at all costs. Especially if the user has not logged in. This will lead to cookie proliferation and a huge number of images being "semi-protected" so people can't flag and unflag repeatedly in an argument. I could see this happening in the Muhammad article. While I appreciate everything the Wiki community has done and I have the utmost confidence in its abilities, I fear that this feature is very unlikely to be implemented in a way that makes most people happy.
I think categorizing an image as sexual or violent is a process that should not be taken lightly, if the feature is implemented. It is a gateway to segregating content as appropriate/inappropriate, which is a bit of a dangerous precedent. The feature idea certainly has merit, but it should be noted that categorizing material as offensive for certain to users is but a step away from removing potentially objectionable content, or relegating it to invisible by default. Cultural neutrality for categorizing objectionable content is a minefield - setting an image to be hidden under a certain content filter can in itself be interpreted as offensive to cultures where the "objectionable" content of the image is perfect acceptable.
It is also vitally important that users know *exactly* what's being hidden - there should be occasional home-page reminders, a very accessible settings panel, and pages where images are hidden for a given user should be labeled as such. The images themselves also need to be tagged with the filters under which they are hidden. Discussion for categorization should be centralized, not distributed across thousands of disparate talk pages. In short: tread lightly.
People have different views and opinions about everything. If someone thinks that hey should not view something, then the decision lies entirely with them. Sometimes, religion demands that people do not view some content which is considered offensive for them. If they wish to abide by that, then we should respect their decision. However, at the same time, if they want to change those settings, then they should be able to do that. For example, some parents might think that while searching for an article on Wiki, if their child encounters an article which has- I'm using the example stated here- mutilated corpses, they might think it is unsuitable for their child. However, later they can change the settings because by then, the child will be mature enough to...handle that.
This feature should be useful to avoid distressing religious sensitivities as well, e.g. Muslim readers seeing pictures of Mohammed, or Indigenous Australians seeing Uluru. The last question seems to be asking for permission to design the filter so that Wikipedia can carry on offending people with such sensitivities, on the basis that Wikipedia should force people to be more multicultural. This will not encourage the use of Wikipedia in Muslim, or other, cultures which have such sensitivities.
This kind of feature could be co-opted into a censorship tool (e.g. proxying requests through an account that has anything controversial hidden by default). I think the most important aspect is that this feature must be designed in a way that it is trivially easy to turn it off, and in a way that would make it difficult to use as a means of censorship.
Would anyone ever vote for a similar functionality in a conventional lexicon (i. e. book)? Of course not, because in a book this is technically impossible. But the question is not so silly and goes much further: even if it was technically possible (like in web-based medias), would this be an appreciated tool from the point of view of open and free and true and comprehensive information? Where will be the limit? Would something alike be applicable to words and sentences, too? What does this mean in presenting "the truth", when parts of it can be suppressed? Wikipedia is already suffering from too much of reglementation (as you know very well from your analyzes...). Too much of functionality is -- in my opinion -- a similar problem and leads Wikipedia towards a wrong direction, as well as too much of reglementation is dooing.
My vote is against this functionality. If it is rolled out, then omissions should be visible in any case -- on screen and on print (even on screen readers), so that readers always are aware of the fact, that the complete information is partially made invisible.
I strongly disagree that this feature should be implemented at all. Is it simply a new form of censorship. Wikipedia is an encyclopedia. It needs to be open, true (verifiable) and free at the point of contact. Establishing artificial barries to what can and cannot be viewed by a particular user is a recipe for disaster. It will alter the fundamental premise of what Wikipedia is. Wikipedia already suffers from a fundamental bias, in that the majority or articles are westernised. Adding a new viewing filter will add new barriers to use.
I wouldn't plan on using it myself, but I definitely agree that it will be an important feature to some. I imagine it may be difficult to establish a culturally neutral standard. I could see this feature maybe leading to a few more edit wars (topless women -- offensive? culturally neutral?), however (1) it seems very clear to me that this feature is mainly intended for things more graphic than breasts and (2) as an opt-in system, anyone who doesn't like it will simply not use it. Hopefully, most users will understand this and rollout will proceed smoothly. I think the benefits (making the Wikimedia "safer" for those who are hesitant) would be worth it. Sounds good -- I'm in support. Thank you for taking the time to poll everyone. Best wishes!
Similar features have been discussed by the en-wiki community in the past and have been strongly opposed by many editors. While I understand the logic behind the implementation of this feature and feel that it is probably on balance a good thing, I worry that certain communities like en-wiki will see a net loss of editors (rather than readers) over it.
The last question alludes to the problem that there are cultures in which there may be very different views in which is acceptable and which is taboo. I'm thinking about images of Muhammad or of partial nudity as being two examples where there is a vast spectrum that will be difficult to negotiate (but good luck!)
Wikimedia should implement a system so that inappropriate images are automatically blocked for children under a certain age. Users should have to verify their age before these images are displayed by default. It is a shame that Wikimedia currently facilitates the distribution of inappropriate images to children. Though I do not believe YouTube is a good model to follow, they do have a system such as the one I described. If one were to be implemented and were to be operated more effectively than YouTube's, Wikimedia would be well on its way toward promoting a safe environment for people of all ages.
I cannot distinguish this from censorship of the worst sort - by readers who by their nature are at best uninformed and therefore not competent to make any judgement in relation to the merit or appropriateness of content. The inclusion of imagery should be based on the same criteria as textual content - indeed with multi-media content it is the article as a whole that seeks to impart knowledge & understanding and therefore particular portions cannot and should not be discarded without compromising the integrity of the whole enterprise.
I support this feature : I sometimes had to scroll with apprehension, in fear of a potential chocking (sic) picture. In my view, what is the most important is to let the reader choose : an image must be displayed/hidden with a simple click in order, among other things, to allow the younger ones to get rid of a disturbing picture as quickly as possible despite their confusion at this moment.
I endorse the use of this feature to hide sexual and violent images from say, parents moderating for the benefit of their young children, people browsing at work or people who just want information on sexual/violent subjects without the visual. Where it may get murky, is if - as is always the internet example - images of mass graves at Auschwitz were hidden. The importance of remembering such events in all their tragedy should be something I don't need to go into. It's here I am concerned about the how to draw the line between using this feature to protect those who do not want to see such images and willful ignorance and censorship. As such, if this is instated I am very much for this feature being for logged-in users only. My (perhaps optimistic or flawed) rationale is that people who take the time to register on wikis are knowledgeable, tech savvy and smart enough to use this feature only for good. Best of luck and thanks for getting people involved.
The devil, as they say, is in the details. The "culturally neutral" point is problematic. How does one rate a picture of a woman in shorts and a tank top in a culturally neutral fashion. In some of the more fundamental cultures it could be considered pornographic, in the majority of Western cultures, it is non-controversial. So perhaps better than cultural neutrality, it should have cultural context, e.g., "I am a devout Saudi Muslim, I do not wish to see pornographic or blasphemous images" would have quite different results than "I am a devout American Catholic, I do not wish to see pornographic or blasphemous images". I have no idea how one would implement this.
In this editor's opinion, the system should be opt-in. Namely, the system should be all-allow by default and require the user to set up their own filters, otherwise I feel that the system, good intentioned as it may be, might convey the thought that the Wikipedia community feels that these images are inherently troublesome. It's up to the user to make their own judgments about what should and shouldn't be seen by them personally. That being said, obviously an effort should be made for a reasonable period to make every user aware of the system's implementation and how to use it.


Appendix: Referendum Committee membership

The referendum committee was responsible for planning and maintaining virtually every aspect of the image filter referendum. For example, the Committee planned the voting method and the requirements for voters, drafted and organized all of the official referendum pages on Meta, verified that voters met the criteria, and audited votes to ensure there were no duplicate votes or other problems.

The referendum committee was made up of User:Cbrown1023 (meta and enwp - United States), User:Risker (enwp - Canada), User:Mardetanha (fawiki, meta, steward - Iran), User:PeterSymonds (enwp, commons, steward - UK), Robert Harris (Canada), User:Dcrjsr (enwp, commons - United States) and User:Jan eissfeldt (dewiki - Germany). Wikimedia Foundation liaisons were Philippe Beaudette, Maggie Dennis, Andrew Garrett and Phoebe Ayers.

The Wikimedia Foundation is extremely grateful to everyone who served on the referendum committee: they did really good work in difficult circumstances. We are also grateful to everyone who participated (voted) in the referendum itself, and particularly to those who took the extra time to write in comments.

Footnotes

  1. This statement slightly oversimplifies the findings. Please see the section Referendum results: overview for more precision, including the actual text of the question.
  2. This kind of interpretation has been characterized as overreaching on-wiki and on our lists. David Goodman, an experienced editor who opposes the filter, said this: “I think this quibbling is a little ridiculous. The interpretation of the vote, regardless of its defects, is clear enough. Only a minority, albeit it a sizable minority, of Wikipedians who voted totally opposed the filter.”
  3. Around 25 of those included a modifier such as “not” or “isn’t” before the word “censorship.”
  4. To understand these numbers, it might help to know that in August 2011, a total of 1,016 people made 100+ edits to the German Wikipedia. This means the number of people who participated in the poll was nearly half the total number of very active editors of the German Wikipedia.