Talk:End-user image suppression

What should the default setting be?

As one who has no problem with the most objectionable content imaginable, and as one who has advocated this idea to ensure that we do not have to participate in self-censorship, it appears even to me that potentially offensive content should be absent or inline by default, and that people would need to read the disclaimer and create an account with censorship turned off in order to see, use and edit all content fully.

Another option would be to offer up a warning page when surfing to a page with potentially offensive content; to get rid of it permanently, the user would have to register an account and choose to have the content shown without the warnings. —Christiaan 15:55, 19 Feb 2005 (UTC)

If PICS were used, the default wouldn't be our choice - it'd be the browser default, or set by the people who manage the user's LAN. JimH 01:16, 20 Feb 2005 (UTC)
It would be the choice of the downstream user in either case. That's the point of the proposal. —Christiaan 12:57, 20 Feb 2005 (UTC)
I can safely say that few browsers have built-in PICS support, and few users have any idea how to make use of any support that's there. -- Nknight 13:42, 20 Feb 2005 (UTC)
I've incorporated this point into the article, noting that it might be possible in the future. —Christiaan 14:11, 20 Feb 2005 (UTC)

I really don't care too much what the default is. I'd prefer it be "off", but I'm not going to complain either way. There are a number of reasons even for non-editors to register accounts already (skin choices etc.), so one more won't hurt. Also, keep in mind that it doesn't necessarily have to be restricted to logged-in users. Something I don't see exploited nearly enough on websites (including Wikipedia) is the fact that cookies can be used to store preferences on a per-browser basis, without the user needing an account on the server. -- Nknight 13:42, 20 Feb 2005 (UTC)
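
For what it's worth, here is a minimal client-side sketch of the cookie idea; the cookie name "imgSuppression" and its values are invented for illustration and are not part of any existing MediaWiki code.

  // A sketch only: the cookie name and values are hypothetical.
  function setSuppressionPreference(value) {
    var expires = new Date();
    expires.setFullYear(expires.getFullYear() + 1);   // remember for one year
    document.cookie = "imgSuppression=" + encodeURIComponent(value) +
      "; expires=" + expires.toUTCString() + "; path=/";
  }

  function getSuppressionPreference() {
    var match = document.cookie.match(/(?:^|;\s*)imgSuppression=([^;]*)/);
    return match ? decodeURIComponent(match[1]) : null;  // null = no preference stored yet
  }

  // Example: this browser wants images tagged "nudity" hidden behind a link.
  setSuppressionPreference("hide-nudity");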

Oh yeah, I'd forgotten about cookies. We should incorporate that into the article. —Christiaan 14:09, 20 Feb 2005 (UTC)

The default should be exactly the same as what Google has for its images - an intermediate setting that would ban the likes of autofellatio and goatse, but not less controversial material. And if we're going down this line, then I strongly suggest we do it in MediaWiki. PICS is a disgusting, vile database that I oppose on principle, and I'll leave (and I dare say I won't be the only one) before I'd use it. Ambi 23:56, 20 Feb 2005 (UTC)

Alright, by the sounds of things PICS isn't suitable on technical grounds in any case. In regard to the default setting, we may not need one; it could be dealt with by a cookie on first visit (although I guess we'd still need a default setting). —Christiaan 00:21, 21 Feb 2005 (UTC)
Absolutely agree. If we do not have a default, though, then cookies should be the way to go. Johnleemk 12:19, 23 Feb 2005 (UTC)
I also agree, but I think it should be possible for a user to adjust his or her settings without logging in. Cool Hand Luke 22:41, 26 Feb 2005 (UTC)
Agree, but you should not limit access to the files; just put them behind links. - Omegatron 00:12, 13 Apr 2005 (UTC)

How it might work

Potentially offensive content would be tagged as such so that any downstream user, whether they be a wikipedia.org reader, an organisation such as a school, or a publisher making a book, can filter out content considered by editors to be potentially offensive. —Christiaan 16:10, 19 Feb 2005 (UTC)

Might we be able to use PICS[1]? This has the advantage that browsers and proxies already support it for content filtering. It would involve setting up our own PICS Labeling Service, which could get information from special tags in the articles themselves, or on a special page. It is also a W3C standard, and those are always good to follow where appropriate. —JimH 18:00, 19 Feb 2005 (UTC)
Sounds like an excellent idea. While we'd still need to tag images (which we should really be doing in any case), this would mean we wouldn't need to create a user preference option but could instead have the user's web browser do the donkey work. However, the system appears to be primarily about blocking access to a particular URL. I couldn't find any reference to the idea of blocking, say, an image on a page. —Christiaan 23:34, 19 Feb 2005 (UTC)
Yes, AFAIK it operates on a per-URL basis, but aren't images accessed through their URLs? Images could be tagged on 'their page', i.e. [2] (an only slightly offensive example that I just happened to have uploaded to Wikipedia :)
It wouldn't always require browsers to be set up for filtering. PICS specifies the labels but not the mechanism, so the filter can be built into the browser (which isn't a very good idea, since anyone can just install a different browser), a network HTTP proxy, or anywhere else along the way. It would be very easy for a school or whatever to set up filtering for their entire campus. Since the tags would be GFDL, commercial signing agents could use them to also offer PICS information on our content. JimH 23:57, 19 Feb 2005 (UTC)
The idea is not only to give organisations the choice, but also individual users, so browser support would be needed if we didn't have a site-based preference. —Christiaan 14:16, 20 Feb 2005 (UTC)
There is no way we could choose where the filtering is done; if we support PICS, filtering with the browser will always be supported (although AFAIK only IE has it built into the browser). The filtering in each category would effectively be set at the level of the most restrictive point between Wikipedia and the user. For most people, though, the only filter would be on the local machine and it would be entirely under the user's control. The problem (perhaps) is that PICS allows filters anywhere along the way to be used, and content could be deemed unacceptable site-wide by organisations such as libraries, where encyclopedic content really should be available. 81.5.150.113 18:58, 20 Feb 2005 (UTC)
I didn't suggest we would choose where the filtering is done. The point of the proposal is that this choice is left to the end-user. Christiaan 19:23, 20 Feb 2005 (UTC)
Further to this, to enable PICS filtering in IE wouldn't we have to use ICRA's rating vocabulary? (PICS just defines the rating mechanism.) GeorgeStepanek 03:49, 21 Feb 2005 (UTC)

How the user-preference might work

How might the user-preference work? I think we could have an "Image Suppression" preference section. In this would be a choice to show all images by default. Then there would be the ability to choose from a set of categories of potentially offensive images and to have one of the following happen (a rough sketch follows after this comment):

  1. Hide images (this would create a place-holder that would allow the user to display the image at will)
  2. Remove images (this would remove any link or placeholder, but not the image from the server, of course)

Christiaan 15:37, 23 Feb 2005 (UTC)
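
A rough JavaScript sketch of those two behaviours, purely for illustration; the class name "potentially-offensive" and the "data-categories" attribute are invented here, and real MediaWiki output would of course differ.

  // A sketch only: markup names are hypothetical.
  var preferences = {
    nudity: "hide",      // option 1: hide behind a click-to-show placeholder
    violence: "remove"   // option 2: remove the image and any placeholder entirely
  };

  function applyImagePreferences() {
    var images = document.querySelectorAll("img.potentially-offensive");
    for (var i = 0; i < images.length; i++) {
      var img = images[i];
      var choice = preferences[img.getAttribute("data-categories")];
      if (choice === "remove") {
        img.parentNode.removeChild(img);            // no link or placeholder left behind
      } else if (choice === "hide") {
        var placeholder = document.createElement("a");
        placeholder.href = "#";
        placeholder.textContent = "Show suppressed image";
        placeholder.onclick = (function (hidden) {
          return function () {
            this.parentNode.replaceChild(hidden, this);  // display the image at will
            return false;
          };
        })(img);
        img.parentNode.replaceChild(placeholder, img);
      }
    }
  }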

We might have problems with the "Hide Images" ability: do we implement this in JavaScript (and if so, what about browsers that don't have the ability to script)? Although it shouldn't be too bad since this is an option. Ambush Commander 00:07, 1 Mar 2005 (UTC)

Performance and caching implications

What effect would multiple possible presentations of content have on page caching and general performance, if any? —Christiaan 19:01, 21 Feb 2005 (UTC)

If we have JavaScript do all the heavy lifting, it's conceivable that there would be no difference in any of the pages. However, such a scheme is not feasible. Unless we have only two versions of the page (one censored and one not), it may be necessary to add an extra component that intercepts the pages and rewrites the code for the particular preference, rather than caching six or seven copies of the page (although I doubt that one page would contain images falling into multiple suppression categories).

Page moved

I've moved the page from End-user censorship to End-user content suppression so as to minimise controversy regarding the word "censorship". —Christiaan 16:27, 19 Feb 2005 (UTC)

Here's how I envision this would work... (with a slightly more technical bent)

For anon users:

  1. They load an article with images in a category most people would not want to see immediately (e.g. photographic depictions of nudity, excretion, sexual intercourse, etc.).
  2. By default, the image boxes and captions load, but the image is replaced by a link reading something like "This image has been categorised as one containing a photographic depiction of nudity. If you desire to view this image, please click here." The image itself has been tagged in the document's CSS with "display: none" by the PHP script. The link is a Javascript-based one altering the display value in the CSS, but if the reader's browser does not support Javascript, it links to the article (e.g. en:Autofellatio) with an appropriate GET argument, like this: http://en.wikipedia.org/w/index.php?title=Autofellatio&action=display_img1.
    1. (optional, only if we decide to use cookies) Under it would be a link saying "I want to view all photographic depictions of nudity by default. Please don't ask me this again."
    2. Clicking this link would bring the reader to a page saying something like "A cookie has been stored in your web browser allowing you to view photographic depictions of nudity by default. If you would like to change your other preferences for image filtering, please click here," followed by a link to the article from which the reader came, although by default, the page would redirect in 30 seconds to the article.
  3. The user either moves on or clicks the link to view the image. If Javascript is supported, the image automatically appears while the text disappears (that handy display property coming into play again). Otherwise, the article is reloaded, but the image now appears. The two documents should be the same; they just come about through different processes.

For logged-in users using the default settings, the process would essentially be the same, except step 2.1 would not be optional, and the page in step 2.2 would lead to the user's preference page (or something like that).

The technical processes involved in this are very rudimentary and should be quite simple. Of course, a big problem arises when the user's browser does not support CSS, although this can be partially overcome (perhaps through altering the document based on the user's user-agent string). Still, if you take CSS out of the equation, it's very hard to find a simple solution. Cursed backwards compatibility... Johnleemk 12:37, 23 Feb 2005 (UTC)
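
To make the fallback in step 2 concrete, here is a small JavaScript sketch; the markup and element ids are hypothetical, and only the action parameter is taken from the example URL above.

  // A sketch of step 2 only. The server would emit the image hidden plus a link
  // whose href is the no-JavaScript fallback URL, e.g.:
  //   <img id="img1" src="..." style="display: none">
  //   <a id="img1-link" href="/w/index.php?title=Autofellatio&action=display_img1">
  //     This image has been categorised as ... please click here.</a>
  function makeRevealLink(imgId, linkId) {
    var link = document.getElementById(linkId);
    link.onclick = function () {
      document.getElementById(imgId).style.display = "inline"; // undo display: none
      link.style.display = "none";                             // hide the prompt text
      return false;   // with JavaScript, cancel the href fallback and reveal in place
    };
  }

  makeRevealLink("img1", "img1-link");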

Interesting, thanks for your comments John. I don't think the use of CSS would be a problem. This is primarily targeted at certain users - users who are far more likely to install a compatible web browser than to forgo this feature.
CSS would be a good idea. Alphax 03:31, 25 Feb 2005 (UTC)
No, you see, based on this system, CSS would also be required for anons who don't want filtering. I figure that if we instead don't rely on CSS at all and just use Javascript to insert the image, it would probably work better. Johnleemk 12:04, 26 Feb 2005 (UTC)
Ah I think I see your point. So, if it were implemented purely with CSS, what would be displayed to someone who didn't have a CSS compatible browser? —Christiaan 14:28, 26 Feb 2005 (UTC)
The image would be displayed. Johnleemk 15:50, 27 Feb 2005 (UTC)
And then we may have to have the script manually splice the image out of the page when it is not meant to be seen (which adds a lot of extra code). Ambush Commander 00:10, 1 Mar 2005 (UTC)

Possibility for censorship

As noted on Desirability of end-user image suppression, GerardM sees this scheme as a route toward censorship, based on the assumption that outside organisations (many of whom could be considered end-users) will use the category system we employ to suppress content. As there doesn't appear to be anything stopping organisations from doing this already, can we have some technical opinions on this aspect of the scheme? —Christiaan 11:44, 26 Feb 2005 (UTC)

As far as I know, there is nothing stopping them from doing it now. I've never looked at the Wikimedia code closely, but from what I know through my experience, all they have to do is remove pages/images based on category, which shouldn't be too hard to do with a few SQL commands. I don't see how adding a filtering system to Wikipedia would alter that. Johnleemk 12:04, 26 Feb 2005 (UTC)
That's what I thought; it would be good to get more opinions on this though, particularly from MediaWiki developers or someone who has some experience in this line. Do you know anyone? —Christiaan 14:33, 26 Feb 2005 (UTC)
As someone who's spent quite a bit of time getting around schools' censorship systems, most of the censorship we'll see comes in two forms: systems that censor based on keywords in the text or URL, which means they'll trigger on categories, and systems that censor based on URLs, which we have absolutely no control over. We can let people get around the first by allowing secure HTTP connections to the server, but that will probably cause them to run into the second type. These censorship systems almost always block entire pages or sites, not just images. --Carnildo 08:56, 27 Feb 2005 (UTC)
The worst case scenario, then, is if web censorship applications start blocking Wikipedia. I don't think that's too likely. Ambush Commander 00:11, 1 Mar 2005 (UTC)

I don't think there is any way for this not to make it easier for third parties to filter WP content. The easier we make it for users to filter content, the easier it will be for middlemen to do the same, subtly, without blocking the entire site (and having to deal with the fallout from that). Wikipedia will eventually be an important enough resource that these filtering orgs will consider making WP-specific filters. +sj+ 09:40, 9 Mar 2005 (UTC)

That said, it's worth considering how significant this effect would be, and how significant the benefit would be (both to individual users who want to improve their browsing experience, and to classes of users who otherwise would be blocked from the entire site). +sj+
Yes, except that such discussion would be much better had at Talk:Desirability of end-user image suppression, or else technical discussion about how it might or might not work will get shouted down by those trying to shut the discussion down. —Christiaan 10:28, 9 Mar 2005 (UTC)

Another reason

Don't forget that even people who don't care about seeing "offensive" content (like me) would still love the ability to filter it simply because we view the wikipedia at work, public terminals, etc. I once accidentally visited en:pubic hair at work and thank goodness no one was behind me. Someone added a picture!  :-) I think this is a great idea, however it is implemented... - Omegatron 21:20, 15 Mar 2005 (UTC)

Definition of end-user

What End-user image suppression suggests seems to be server-side image suppression. Why not provide information on browser-specific ways of getting rid of images? IMO, that's true end-user image suppression. —Markaci 2005-04-4 T 09:01 Z

Usability. I am fairly certain that Safari, Netscape, and Konqueror have no way of doing image-by-image blocking. Further, browser-side blocking of an image can only be done after you've seen the image you find objectionable. --Carnildo 17:30, 5 Apr 2005 (UTC)
Plus it's a huge pain. We want a single setting that filters/disables/blocks/hides everything at once. - Omegatron 00:15, 13 Apr 2005 (UTC)

Big Benefit

I see this proposal as anti-censorship, because it enables us to include content that would otherwise get deleted or hidden, even though Wikipedia is not censored. Such a scheme invalidates the argument that an image is offensive and so should be deleted. It might help a lot with the argument that an image that is a "vandal magnet" should be deleted. I see this as enabling us to keep more information.

As for downstream users being able to remove it and republish the information--more power to them. That's exactly what the GFDL and free content is all about. Demi 22:27, 7 Apr 2005 (UTC)

This whole thing of end-user "image" suppression relies on the fact that people agree with it. This will not be the case. It relies on particular pictures being deemed offensive, and that is a concept on which you will not find agreement. Your idea that by allowing for filters we can have everything is flawed because, while it may be right in abstracto, agreeing WHAT to filter will not find consensus. GerardM 08:00, 12 Apr 2005 (UTC)
It's still a step in the right direction, and some images/content will very obviously fit in the category to be hidden. Obviously some borderline stuff will need votes, but so what? - Omegatron 00:16, 13 Apr 2005 (UTC)
Because it is flawed to the core. It is disgusting to suggest that the default should be to censor. It is blatantly POV because it proves you do not trust the editorial process to have proper content. GerardM 06:44, 13 Apr 2005 (UTC)

Too complicated

Let's not get too complicated and subject ourselves to instruction creep. I think we basically need these templates to add to image pages:

  • {{nudity}} - Image contains nudity.
  • {{violence}} - Image contains graphic depictions of violence.
  • {{explicitsex}} - Image depicts sex acts.

For non-logged-in users, these images are shown only as links in article pages (even if the markup inlines them) along with their caption (if no caption, then the description from the image page). Logged-in users get three checkboxes:

  • Show images containing nudity inline
  • Show images depicting sex acts inline
  • Show images with graphic violence inline

Arguments about which image gets which tags take place on the image's talk page; disputes are resolved via the usual dispute resolution mechanism.

A non-technical implementation is possible too: we just make the policy that such images must be linked ([[:Image:...]]) instead of inlined ([[Image:...]]). This is inferior to a technical solution because a) it doesn't address the use of these images for vandalism and b) it doesn't allow people to read inline content in the most natural manner. In my opinion it would, however, be better than what we have now (nothing, and endless arguments on IFD about deleting these things).

I know there are other types of "potentially offensive" images but the above are very common while the others are not so (depictions of Ba'haullah for example). My intent is to keep it simple so that it's something we can actually implement.

Demi 17:47, 8 Apr 2005 (UTC)
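
Purely as illustration, a small JavaScript sketch of the decision this proposal implies; it is not MediaWiki code, the tag names simply mirror the three templates above, and the preference keys stand in for the three checkboxes.

  // A sketch of the decision only.
  var anonDefaults = { nudity: false, violence: false, explicitsex: false }; // links only

  function renderInline(imageTags, userPrefs) {
    var prefs = userPrefs || anonDefaults;
    for (var i = 0; i < imageTags.length; i++) {
      if (!prefs[imageTags[i]]) {
        return false;   // show as a link plus caption (or the image-page description)
      }
    }
    return true;        // untagged, or every tag allowed: inline as usual
  }

  // Example: a user who ticked only "Show images containing nudity inline".
  renderInline(["nudity"], { nudity: true, violence: false, explicitsex: false });             // true
  renderInline(["nudity", "violence"], { nudity: true, violence: false, explicitsex: false }); // false
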
  • Agree -- I think Demi's proposal is great if this can be implemented on the developer end. --Wgfinley 05:24, 12 Apr 2005 (UTC)
On the Dutch Wikipedia there was a picture of an eye that had the category Nude attached to it. It is a stupid idea that will not work. GerardM 08:01, 12 Apr 2005 (UTC)
Fully agree. Such a system of categorisation is (a) massively POV towards what Demi considers filter-worthy categories, and (b) massively POV towards what Demi considers unworthy. In short, it's wholly inappropriate.
James F. (talk) 17:42, 12 Apr 2005 (UTC)
We can't manage every part of Wikipedia by exceptions; there are always going to be people who tag things wrong. I believe once the new upload form is rolled out it will cut down on the rate at which this happens; then there are the janitorial duties of making sure things stay tagged. There's also no POV pushing going on here, just identification of the three most likely areas of objectionable content. You don't think any of those are objectionable? Super, you can have all your filters off then. You think there are some more? Super, point them out. The point is to allow people who don't want to see it to have their wishes respected while allowing those who do the same. --Wgfinley 19:32, 12 Apr 2005 (UTC)
I like this. These are the basic categories that the typical person would have a problem with. There will always be things put in the wrong category. Just fix them. - Omegatron 00:18, 13 Apr 2005 (UTC)

Passwords

If part of the idea of this is to allow schools/libraries/etc. to trust Wikipedia to be school-friendly — that is, trust that their eight-year-old students aren't going to come across shocking images — then we're going to need some mechanism to allow the setting to be fixed, and not changed back by a student.
Currently, if this system were implemented as people are suggesting above, a student would have two methods of getting around the image blocking:

  1. Switching the setting to "Allow all images"
  2. Logging out

The first would be easy to fix by allowing an optional password block on the setting. If a setting is placed and the optional password is filled in, the same password would be required to change the preferences (or, alternatively, a password would be required to change the setting to a lower level of suppression, allowing anyone to temporarily change the setting to block more images).
The second, however, would require a more complex fix. If we could always assume that schools/libraries/etc. used a fixed IP address, the settings and password could be associated with that IP. I don't think we can assume this, however, especially for the parents of a child who want the block but are using Wikipedia through AOL or something. A possibility would be to require a password to log out of Wikipedia. This, of course, would mean that real editors who want to edit from public libraries where a block is set would be required to ask for the password to log out, so it would not be a perfect fix by any means. Without some mechanism that prevents the setting from being changed, however, we'll never be able to assure end-users that they can successfully prevent their eight-year-olds from seeing inappropriate images. Asbestos 10:31, 18 Apr 2005 (UTC)
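
A rough JavaScript sketch of how the optional password lock described above might behave; all names are hypothetical, the password is kept in plain text purely for brevity, and a real implementation would naturally live server-side and store a hash.

  // A sketch only: names and storage are invented.
  var lock = { level: 2, password: null };   // higher level = more images suppressed

  function setSuppressionLevel(newLevel, suppliedPassword) {
    var lowering = newLevel < lock.level;
    if (lowering && lock.password !== null && suppliedPassword !== lock.password) {
      return false;                          // lowering suppression requires the password
    }
    lock.level = newLevel;                   // raising suppression is always allowed
    return true;
  }

  function lockSettings(password) {
    lock.password = password;                // optional: fix the current setting
  }

  // Example: a library locks the setting; a student cannot switch to "allow all".
  lockSettings("librarian-only");
  setSuppressionLevel(0);                    // false - blocked without the password
  setSuppressionLevel(3);                    // true - anyone may block even more
  setSuppressionLevel(0, "librarian-only");  // true - staff can relax it again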

End-user suppression of ALL images

To avoid the inherent POV and inevitable wars of classifying images as offensive or not, why not have an option for the end-user to suppress all images unless s/he wants to see them? (There's more detail near the end of w:Wikipedia:Village pump (proposals).) Nickptar 20:42, 21 Apr 2005 (UTC)

I have to agree with this. Besides the fact that I absolutely hate the use of the word "suppression" here, the use of image labelling is going to be extremely POV, and edit warring will ensue. I know I would be deleting "offensive image" or even "potentially offensive image" labels from any image as POV. RickK 21:08, 23 Apr 2005 (UTC)
Your point of view basically proves mine. End-user image suppression is controversial; it is the first step towards censorship and it will prove divisive. There is no middle ground, as there will always be people who are even more extreme. GerardM 08:14, 24 Apr 2005 (UTC)