Desirability of end-user image suppression

From Meta, a Wikimedia project coordination wiki

This page deals specifically with the desirability of the scheme outlined at end-user image suppression. For other discussions on this topic see the February 2005 archive of WikiEN-I mailing list and Image censorship.

There are generally four facets to this discussion:

  1. End-user image suppression: the technical implications of implementation
  2. Potentially offensive images: the identification of categories that would be included as part of implementation
  3. w:Wikipedia:Descriptive image tagging: the useful categorising of images
  4. Desirability of end-user image suppression: discussion about whether it's all desirable (this page)

Pros

  • It would afford end-users the choice to defer potentially offensive images when viewing Wikimedia projects, and allow them to load deferred images on a case-by-case basis (dealing with the large majority of the complaints that have been made about potentially offensive images)
  • In turn, this could reduce the urge of some editors to self-censor (i.e. suppress certain images by default), as has happened with topics such as the clitoris, torture and autofellatio
    • On the other hand, it could also lead to less debate over whether or not images are appropriate or beneficial to an article's content, and so lead to a larger number of poorly-placed images.
  • It could be used as a PR tool to placate those who would seek to discredit Wikimedia projects based on the adult nature of some content (disclaimers aside).

Cons

  • It would add another user-preference option
  • It could either have too little or too much effect. If certain classes of images were linked by default, this would be a certain degree of censorship. If no images were linked by default, the vast majority of users would see no difference.
  • If end-user image suppression is used as a PR tool, Wikipedia might give the impression that it is needed to cover up bad editorial decisions.
    • On the other hand, our Wikipedia content disclaimer already notes, "Wikipedia contains many different images, some of which are considered objectionable or offensive by some readers. For example, some articles contain graphical depictions of violence, or photographs of human anatomy."
  • Some opponents of this scheme have called it censorship, based on the assumption that outside organisations (many of whom could be considered end-users) would use the Wikipedia category system to suppress content. Making it easier for users to filter what they see would also make it easier for third parties to filter Wikipedia for them.
    In contrast, as it is now, a third party that wants to filter Wikipedia, but doesn't have thousands of man-hours to do so selectively, is forced to block the entire site. Such blocking cannot be done surreptitiously; so, for instance, we know instantly when a country has blocked access to Wikipedia.

History of this article

This page was set up to respond to opposition to the proposal now at end-user image suppression, which initially included an option for organisations to use the proposed mechanism to "filter" content.

Dispute over this article

Some contributors feel that this proposal facilitates censorship, and write in response to this page:

Why would we want this form of censorship? Censorship in relation to the Wikimedia projects is what is at stake! Restricting the discussion to "Desirability of end-user image suppression" is not realistic (for more on the subject, see Censorship on wikimedia projects).

A self-reflective portion of this page (like the current section) was moved to the discussion page, and the pro and con sections above were the subject of an edit war.

Relevant links

Tagging and image suppression

Censorship

Other links