Talk:Image filter referendum/en




The talk page has been split into sub-sections and subpages (still being worked out): if possible, add your new subsections at the end of the relevant main section (see the index below); otherwise add them here and they will be moved later. Direct links to subpages:

Process[edit]

Page protection[edit]

Hi. I've just unprotected the subject-space page (Image filter referendum). There seems to be some confusion about when it is and is not appropriate to protect certain pages in Meta-Wiki's mainspace. Absent vandalism or some other problematic editing, it is inappropriate to protect pages, particularly full protection. This is fairly established on nearly every Wikimedia wiki. I've no idea why people are attempting to make exceptions here. --MZMcBride 22:36, 29 June 2011 (UTC)

Thank you... SJ talk | translate  
I don't know if someone logged out to prove a point, but that was a constructive edit by an IP. :) The Helpful One 22:48, 30 June 2011 (UTC)
Protected again now. Protection reason: "This needs to stay stable". Why? --Kim Bruning 10:44, 23 August 2011 (UTC)

Who is Robertmharris, and why should we follow his opinion?[edit]

Who is Robertmharris, and what is his qualification to suggest anything to the WMF? Is Dory Carr-Harris his wife, and what is her qualification to give the Board direction on basic questions? [1] What is their connection to Wikimedia? -- WSC ® 12:32, 21 August 2011 (UTC) Now I read that he is connected with the Canadian Broadcasting Corporation, where en:Sue Gardner was, coincidentally, previously a director. But what makes him an expert in things like encyclopedias? -- WSC ® 15:46, 21 August 2011 (UTC)

What is his qualification to decide whether we have to launch an image filter or not? Aside from the fact that he worked for the Canadian Broadcasting Corporation, whatever that means? -- WSC ® 16:50, 21 August 2011 (UTC)
I mean, maybe he is a philosopher, an ethnologist, or a sociologist? A professor who knows best all the works of the Enlightenment? I mean Diderot, Rousseau, Hume, Kant, and he just forgot to tell us that on his user page? Did he ever read their writings? Did he understand what these fellows wanted to change in the world, and that Wikipedia is in the tradition of the Enlightenment? Hey, it's possible! He knows classical music. -- WSC ® 17:39, 21 August 2011 (UTC)

Robert M. Harris is a broadcaster and journalist who has worked for the Canadian Broadcasting Corporation for the past thirty years. His daughter Dory Carr-Harris is just finishing her MA in Cultural Studies at Goldsmiths, University of London. (2010 Wikimedia Study of Controversial Content/Archive)
And surprise: Sue Gardner began her career at the Canadian Broadcasting Corporation (CBC) EN Sue Gardner. It smells bad. --Bahnmoeller 19:13, 21 August 2011 (UTC)

Hm? Why is a journalist competent to decide what the Board's policy on content should be? I've never heard of any newspaper or magazine website that has a content filter for its own articles. -- WSC ® 19:42, 21 August 2011 (UTC)
You absolutely should not blindly believe the Harris report, nor should you trust in its conclusions because Harris has some magic infallibility credentials. But nor should you fault Sue and the foundation for recruiting him to investigate and report on the issue. Just read the report and agree or disagree with its content. I think I agree with it-- that is, I agree with what I think it means, though of course, others might interpret it differently. Either way-- rest assured, Harris hasn't "decided" anything for us, our board and our global community decide. --AlecMeta 17:53, 22 August 2011 (UTC)
Just read the report and agree or disagree with its content -- The agreement of the user base with the report is apparently not of any interest to the board. The board members read it and apparently agree with it so strongly that they act on it. --195.14.220.250 18:00, 22 August 2011 (UTC)
The only action they've taken is to call for a global discussion and survey. They listened to _thousands_ of voices before the Harris report, Harris listened to lots and lots of voices, and the board is now doing its best to listen to 20,000+ voices. These aren't the actions of a group that has its mind made up in advance. --AlecMeta 17:13, 23 August 2011 (UTC)
Alec! Don't you think a professor of philosophy or a professor of ethnology, or anybody who had ever before in his life thought about the meaning of an encyclopedia, would present better arguments and a more powerful argumentation than a strawperson of the Board, ääh, excuse me, an expert of, of what? Classical music. I don't like to be fooled. -- WSC ® 19:56, 22 August 2011 (UTC)

Alec, you are wrong. The board has made a clear decision to introduce a content filter for images. That was made clear by the wording of the so-called referendum. We could only declare whether we find it important or not. I hope that the board will cancel the whole project and will not stand for re-election once the term is over. --Eingangskontrolle 21:31, 1 September 2011 (UTC)

Link to the discussion[edit]

Could someone PLEASE put a highly visible link on the main page of the referendum and encourage users to read this discussion. There are only links to the board's proposal, a one-sided FAQ section and voter eligibility. This discussion is listed in the box on the right, at the bottom (after results), for some reason. How can a voter make a reasoned decision based solely on what the board proposes and a one-sided FAQ? --Shabidoo 22:11, 21 August 2011 (UTC)

+1 to any method that successfully involves more people in discussion. --AlecMeta 17:48, 22 August 2011 (UTC)
Can you please define "main page of the referendum"? I don't find any link on the SecurePoll page so I guess you mean Image filter referendum, but I still can't see the other links you mention. I couldn't find anything else so I've changed the navigational template a bit. Nemo 23:18, 22 August 2011 (UTC)
I'm talking about Image_filter_referendum/en. There is no link to the discussion on the main part of the page...only in the box on the right side...below the results link (as though this discussion is only an afterthought, after the results come in). It's bad enough that the questions are written in an ambiguous way and that the FAQ is entirely one-sided...but at the very least there should be a very clear link to this page, and it should be advertised so that users can see that other users have many problems with the image filter before they cast their vote. It is the least that could be done. --Shabidoo 01:44, 23 August 2011 (UTC)

Would someone please link this discussion on the main page of the referendum?[edit]

I don't know what I have to do to get any response from anyone. Over the course of several days I have posted this in three different forums/discussions and it is being ignored.

Would someone please link this discussion on the main page of the referendum? Would someone please link this discussion on the main page of the referendum? Would someone please link this discussion on the main page of the referendum?

This discussion is important, and overwhelmingly anti-image filter, and yet it is not properly linked on the main page but an afterthought in a referendum with a biased FAQ and biasedly phrased questions. --94.225.163.214 01:17, 23 August 2011 (UTC)

It is linked on the main PIF referendum page – you can get here either by clicking the Discussion tab at the top of the page, or the Discuss link in the box on the right-hand side. As you can clearly see here, all your posts can be found on this page – just search for your signature (IP). --Mikołka 02:06, 23 August 2011 (UTC)
That is NOT enough. The discussion tab is at the bottom of the box ... coming AFTER the results tab. Why would it be there? There are big links to the voting page and the board's questions but not to the discussion. I didn't even see it myself until after I voted. The whole page and all the links (other than the discussion) are very one-sided...this discussion page should be highlighted.
--Shabidoo 02:11, 23 August 2011 (UTC)
Looks like you had no problem finding this page. ;) Anyway, considering your contributions on enwiki, you know where the Discussion tab is. --Mikołka 02:18, 23 August 2011 (UTC)
I just saw that the discussion tab has moved up. Thanks for doing that. Now the only thing left is a proper link. :) --Shabidoo 02:39, 23 August 2011 (UTC)

Why is it so hard to vote?[edit]

By golly, why is it so hard to vote, with all the instructions to search for something in the search bar (which, by the way, fails for those who use the HTTPSEverywhere addon for Firefox, because it redirects them to secure.wikimedia.org), and figure out on what wiki you're most active, AND with a link to WikiMEDIA? Most people are active on WikiPEDIA.

Please put a link to http://en.wikipedia.org/wiki/Special:SecurePoll/vote/230 and make voting simple. WTF.

(PS: if this has been suggested already, sorry - I searched this discussion page, but no way I'm going to read the enormous amount of debate on it, which after all is on a silly topic - bending Wikipedia to the cries for political correctness of those who, of course, want "the benefit of the children". Oppose.) -- Dandv(talk|contribs) 10:47, 22 August 2011 (UTC)

The problem with putting a link to that page is that it is one of many hundreds of projects affected by this vote, including over 250 languages of Wikipedia alone. Hopefully, recent clarifications of the instructions have made it a bit easier, but it would be so nice if we could somehow provide a specific link for each user to a project on which he or she qualifies. :/ --Mdennis (WMF) 11:44, 25 August 2011 (UTC)

Refactoring and archiving[edit]

I see that several sections have been archived based only on inactivity. This page is too big and has to be split, but there is already quite a lot of repetition; someone has to reorganize sections by topic and move each topic to its own subpage. Any volunteer? :-) --Nemo 21:02, 22 August 2011 (UTC)

Well, I've just been bold and did it the way I could... Headers and subpage titles can always be changed, and sections can always be moved. It would probably be useful to move some things back from the archives as well. --Nemo 22:18, 22 August 2011 (UTC)

Last page protection rationale[edit]

I've probably been able to guess it with some digging in the history of the page despite the very unclear edit summaries, but could someone explain the rationale for the page protection clearly? The last discussion I see about it is #Page protection, and as far as I know "high traffic page" is not a valid reason for full protection of a main namespace page (not even for the main page here on Meta), so a note here would be useful. Nemo 22:35, 22 August 2011 (UTC)

Yes. There were attempts to move and rename pages after the election had launched, which was causing even more confusion. It was done to keep the page in a stable and consistent state during voting. Sub-optimal, I agree, but it was confusing an already hazy situation. Philippe (WMF) 22:48, 22 August 2011 (UTC)
If the problem is the title, can we restrict only moving? Nemo 23:02, 22 August 2011 (UTC)
No one said they were confused by the proposed rename. Rich Farmbrough 11:15 29 August 2011 (GMT).

Not clever to run a survey in the middle of summer[edit]

I don't know whether American people usually work 52 weeks a year (if that is the case, Nicolas Sarkozy, the French president, may appreciate it), but in Europe a lot of people take holidays in July or August (or both, when they can), and holidays are an opportunity to leave the internet.

For people who keep or use an alternative internet access, this access is often less comfortable than the one used for the rest of the year.

This survey ends on 30 August, and the next time I will see my ADSL box, in the opposite part of the country, will be during the night between 30 and 31 August.

Up to last year, at the same period, I would not have been able to answer the survey. In fact, it is a feast linked to my first name that was the occasion for me to see my parents and to download at their house several weeks of emails, including the email speaking of the survey. Presently, I use a 7-inch-screen netbook in a garage, connected by wifi to a neighbour's box, to access this page. It is fortunate that French internet providers permit it.

It would have been simpler for a lot of people to wait until the first two weeks of September to start the survey. An alternative solution would have been to keep the survey open for four weeks, from mid-August to mid-September. I don't think the absence of a decision about filtering pictures on 1 September would have been dangerous for the life of Wikipedia and similar projects. Bech 01:08, 23 August 2011 (UTC)

Never attribute to malice that which stupidity suffices to explain (and I know you aren't saying it was deliberate, but some people might wonder, given the way this 'referendum' is being conducted). It would not have occurred to most Americans that parts of the world slow down during July and August, since they do not take much time off in the USA (that, like having a functioning healthcare system, would be evil socialism). The only time many Americans stop working is Sunday, when they go to church to pray that God will cover up naked bodies on the internet with JavaScript, just as Adam and Eve (a real couple who lived 6000 years ago) covered themselves with fig leaves (an early forerunner to JavaScript). Seriously though, there is not much point in any of us complaining, as the Board has already decided that this filter will be implemented regardless of the wishes of the community, and most of those 'voting' (this exercise stretches the definition of referendum to breaking point) probably have about 100 edits and have only read the heavily biased summary of this proposal offered here, instead of engaging in an actual discussion, which is the way we used to decide policy on Wikimedia projects. So whether you can participate or not is of no importance to the Board. The outcome is already decided. Beorhtwulf 15:14, 23 August 2011 (UTC)

Definitions[edit]

Eligibility and emails[edit]

A page has been set up to gather reports of false positives in the emailing campaign: Image filter referendum/Email/False positives

To be or not to be eligible[edit]

I received the message "Sorry, you are not in the predetermined list of users authorised to vote in this election." when I went to http://meta.wikimedia.org/wiki/Special:SecurePoll/vote/230 , but I received an e-mail saying I was eligible... strange. Analogue Kid 13:31, 19 August 2011 (UTC)

That usually means that you voted from a wiki where you don't primarily operate. Go to your home wiki and follow the same instructions there, and it should work.  :) Philippe (WMF) 13:39, 19 August 2011 (UTC)
I have the same problem and I'm voting from the place I do primarily operate. Wikiant 15:28, 19 August 2011 (UTC)
Ditto Wikiant's complaint. I can't seem to vote from Wikipedia, yet I have made hundreds of 'edits' there. As far as I know, I'm eligible. I don't know why we require the use of more than one Wikimedia Foundation website for eligibility, but I have used WP Commons as well. Jack B108 18:51, 19 August 2011 (UTC)
A follow-up comment: I was just able to vote while logged in to WP. I'm not quite sure why it didn't work the first time, but the 2nd try went fine. Jack B108 18:58, 19 August 2011 (UTC)

Same here. I have tried on three wikis (English, German, Dutch) and give up. The answer is still no! Sintermerte 18:06, 22 August 2011 (UTC)

Same here too (English, German), giving up too. --HeWhoMowedTheLawn 19:18, 25 August 2011 (UTC)
Do you actually qualify? I find exactly two edits under this username on the English Wikipedia. You must have made at least ten edits (while logged in) before 01 August 2011. WhatamIdoing 21:17, 25 August 2011 (UTC)



This is a perfect example of small-minded bureaucratic thinking.
Because my contributions have been on en.Wikipedia and not Wikimedia, I, too, have been excluded from voting on a topic I have strong opinions about, in spite of the fact I was invited to "Learn more and share your view."
Who are the fools who make a distinction between contributions to the one part of Wiki or the other?
I'm sure everyone who bothers to write here at all likely possesses above average intelligence and certainly has an avid interest in furthering global education and each just does so wherever s/he happens to be without regard for the heading they're under, be it "Wikimedia", "Wikipedia" or whatever other "Wiki".
Frankly, I didn't know I had to divide my hours of contribution between different sub-sections to be considered valuable enough to be granted the right to vote. That's the kind of "Nope, can't do" that Career Hall Monitors of the world think up.

I may as well speak my mind here:

Making pictures hide-able is so sickeningly P.C.
When will this crippling P.C. scourge ever end?

Perhaps Wiki should allow readers to make entire paragraphs and sections vanish accordion-style from entries out of fear they might find them offensive?
With books, one had to get a glimpse, then shut one's eyes and quickly turn the page. Sometimes the reader could later find the courage to actually revisit the offending image and thereby possibly learn something new about the world. I guess one can't do that if one never knew the object was already censored, huh?
Sadly, Michelangelo's David was included as an example of a hide-able. Still, today?? So we hide away historic and world-famous art now, too, because in 2011 some people still take offense at human bodies? (And yes, even children).
If this community of people, keenly interested in knowledge and learning, doesn't make a stand to continue pushing humanity out of the Dark Ages, who's left to do it?
Teachers are scared of parents, school boards and churches – and American parents have thinner and thinner educations themselves. Everything slides down a notch to the lowest common denominator. Mediocrity.

Maybe entire entries should be excludable?
Perhaps a German grandma should decide if her grandson could ever find a Wiki article on Nazi death camps? Or, if so, who is the know-it-all arbiter who chooses how blandly written and illustrated that entry should be?

So, maybe we should make all text completely malleable, paragraph by paragraph, to parrot back a reader's own view of reality?

This is a far cry from the purpose of an unbiased international repository of knowledge in the classical sense of an encyclopedic reference source.
The fact that people can censor articles before even visiting them, pull a fig leaf over any facts they might not like to know, just caters to ignorance because of cowardice to stand up to controversy.
Where are we all headed if this hand-wringing continues?

One last question:
Who decided that ten contributions (all within one sub-subsection, no less) is the proper criterion to judge whether an individual is worthy of a vote?
What about the man or woman who devoted months to writing a single technical contribution for an entry, say, on quantum mechanics?
Is that person's opinion less valuable or less informed than that of someone who changed who's to whose in fifteen different entries?
Yup. Some people are definitely smarter than others. No question about that. Mykstor 20:20, 24 August 2011 (UTC)

No, Mykstor, you have not been excluded because your contributions are mostly on en.wiki. Most of mine are on en.wiki, and I could vote. You have been excluded because you do not meet the eligibility requirements, which require ten edits before 01 August 2011 on any or all WMF projects. You had made just four edits before that date. WhatamIdoing 19:07, 25 August 2011 (UTC)
You must read the rules to qualify; if you don't meet the basic requirements you may be disqualified. It is now over anyway. -- Mohamed Aden Ighe 00:19, 1 September 2011 (UTC)
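As an aside, the eligibility rule discussed in this thread (at least ten logged-in edits made before 01 August 2011, counted across WMF projects) can be sketched as a simple check. This is only an illustration of the stated rule; the function name and input format are hypothetical, not part of the actual SecurePoll implementation:

```python
from datetime import datetime

# Cutoff stated in the thread: edits must predate 01 August 2011.
CUTOFF = datetime(2011, 8, 1)
MIN_EDITS = 10

def is_eligible(edit_timestamps):
    """Return True if at least MIN_EDITS edits were made before CUTOFF.

    edit_timestamps: list of datetime objects, one per logged-in edit
    (hypothetical input; the real system counts edits per account).
    """
    early_edits = [t for t in edit_timestamps if t < CUTOFF]
    return len(early_edits) >= MIN_EDITS

# Nine July edits plus one mid-August edit: not eligible,
# because the August edit falls after the cutoff.
edits = [datetime(2011, 7, d) for d in range(1, 10)] + [datetime(2011, 8, 15)]
print(is_eligible(edits))  # False
```

This illustrates why several commenters above were turned away despite receiving the invitation e-mail: the mailing and the eligibility check were applied separately, so receiving the e-mail did not guarantee passing the edit-count test.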

Spam from wikimedia[edit]

I have edited Wikipedia 4-5 times in the past. Today I also received an email from Wikimedia about the "Image filter referendum". In my estimation this email is spam, as I did not ask for it; in addition, I haven't signed up for any email group on Wikipedia/Wikimedia. Note that in the USA, as I remember, spamming is strictly banned and the sender has to pay approx. 500 dollars per spam message. Furthermore, you say in the email "To remove yourself from future notifications"; the problem with that is that I never signed up for any notification. Could you be so kind as to stop spamming my inbox, and where can I turn with my complaints?

Please use Wikimedia nomail list in that case. --WizardOfOz talk 17:10, 19 August 2011 (UTC)
Yeah, opt out. A fine answer :-/. But that doesn't take away that it is unsolicited bulk e-mail, commonly classified as spam. Zanaq 17:51, 19 August 2011 (UTC)
I found the following page: http://meta.wikimedia.org/wiki/Image_filter_referendum/Email/False_positives. There I don't see any permission to send out a massive 700,000 emails. Has Wikimedia perhaps become the biggest spammer in the world?
The only way you would have received this mail is if you checked "Enable e-mail from other users". If you'd still like to receive e-mails from other users, but not ones about votes like this, then you can add yourself to Wikimedia nomail list. Cbrown1023 talk 01:51, 20 August 2011 (UTC)
"E-mail from other users" means that a user can click the link on the user page and send an email to the user. One at a time. Using the stored email address via some other tool to send bulk mail is spam, and not email from other users. Zanaq 07:00, 20 August 2011 (UTC)

Who has sent mail to users? As I understand it, this was done in a way which is not open to normal users. Please explain and give me the opportunity to address all recipients of that mail with my point of view. --Bahnmoeller 10:07, 23 August 2011 (UTC)

The Wikimedia:Privacy policy plainly says, "The email address put into one's user preferences may be used by the Foundation for communication." This means that e-mail messages such as these do not meet any legal definition of spam. You opted in to these messages when you added your e-mail address to your account. You can undo that action by opting out, but you cannot make a credible claim that WMF is spamming you by using your voluntarily supplied e-mail address exactly like they said they would. WhatamIdoing 18:57, 23 August 2011 (UTC)
Like anyone reads that. Anyway, what is legal and what is moral are two different things. It may be legal to send these mails, but mass mailing in such a biased way, while the vote is underway, to users that did not ask for it, doesn't strike me as a very good idea. And it might be that they saw the vote results among the regular editors were not as they wished, so they sent out a mail to all users who are less regularly here, with a text that portrays the proposal in a very positive light. I would be interested to compare the results before the sending of the mail to the figures after the mail was sent. Zanaq 11:20, 26 August 2011 (UTC)
Quote: "Like anyone reads that." Your failure to read the terms of service is not WMF's problem; it is yours. You agreed to the terms when you signed up, and you have been told how to remove yourself from similar e-mails from the foundation. Either do it or quit expecting the world to wipe your backside. People like you who refuse to accept responsibility for your own actions are exactly why proposals like this are being forced onto the foundation. MrZoolook 10:27, 1 September 2011 (UTC)
Nobody knows the figures. They will not be accessible until the voting period is closed. Too, a review of the history of the image filter referendum page will disclose that the email notification was always part of the plan; it has been included in the timeline for as long as the timeline has been published, well before the launch date of the vote. Language for the letters was crafted well prior to the launch of the vote, as it takes quite some time to coordinate translation into multiple languages. --Mdennis (WMF) 14:43, 26 August 2011 (UTC)
Zanaq, the votes are secret ballots being collected by a third-party voting group. The WMF has no idea what the results are. They will not find out what the results are until after the voting has ended.
And Maggie's right: The plan has always included a specific date for sending out e-mail messages. It would have been immoral (called "lying") to say for months that the WMF will notify eligible people by e-mail, and then not to do so. It is not immoral to follow your previously announced notification plan to the letter. WhatamIdoing 17:16, 27 August 2011 (UTC)

But it is still interesting to note that the qualifications to vote are already at the lowest possible level, with a total of 10 edits. Some users need more edits for just 5 lines of text. And all these perhaps one-time users, who might not have cared about Wikipedia for months, now receive an invitation to vote on this matter. How many users received an email for the latest fundraising? How many users received a letter regarding Vector? How many users received a notice to vote for the board? While it might be legally OK, it is not in the general context. Or to put it another way: it was very unexpected. --Eingangskontrolle 21:51, 1 September 2011 (UTC)

Invitation email caught by spam filter[edit]

Hmm, don't seem to be able to log into meta. Anyway my referendum email to *@gmail.com was identified as spam. You may have lost a lot of putative votes that way —The preceding unsigned comment was added by 80.177.155.110 (talk) 10:21, 20 August 2011

From my understanding, most of these mails were completely unsolicited. Therefore they are spam. If enough people report it, Wikipedia could possibly end up blacklisted on a lot of SMTP servers. This was a very foolish endeavor, carried out with all the tact and care of a bull in a china shop. I have lost a massive amount of respect for Wikipedia over this. Pothed 17:26, 20 August 2011 (UTC)
They are not spam; this sort of communication is explained by the Wikimedia:Privacy policy that you agreed to when you created your account: "The email address put into one's user preferences may be used by the Foundation for communication." WhatamIdoing 18:59, 23 August 2011 (UTC)

I got an email about a referendum but I cannot vote![edit]

Hi, I got the email:


"Dear BlitX,

You are eligible to vote in the image filter referendum, a referendum to gather more input into the development and usage of an opt-in personal image hiding feature. This feature will allow readers to voluntarily screen particular types of images strictly for their own accounts.

Its purpose is to enable readers to easily hide images on the Wikimedia projects that they do not wish to view, either when first viewing the image or ahead of time through individual preference settings. The feature is intended to benefit readers by offering them more choice, and to that end it will be made as user-friendly and simple as possible. We will also make it as easy as possible for editors to support. For its development, we have created a number of guiding principles, but trade-offs will need to be made throughout the development process. In order to aid the developers in making those trade-offs, we need you to help us assess the importance of each by taking part in this referendum.

For more information, please see http://meta.wikimedia.org/wiki/Image_filter_referendum/en. To remove yourself from future notifications, please add your user name at http://meta.wikimedia.org/wiki/Wikimedia_nomail_list."

I got the same email, but when I go to the actual vote page, I get the message, "Sorry, you are not in the predetermined list of users authorised to vote in this election." Which is it? If we get an emailed invitation to vote, I imagine we're eligible, no? Denzera 19:43, 19 August 2011 (UTC)

If it says your account doesn't exist, you're probably trying to log on to Wikimedia (with an "m") with an account name that's only known to Wikipedia (with a "p"). Note the instructions on the main page for the referendum: "2.Go to one wiki you qualify to vote from. In the search bar, type in Special:SecurePoll/vote/230." You might also want to look up "Unified login" so you can use the same account login on all the sites. (I did all this just now, with 100% success.) Gregory Bennett 20:39, 19 August 2011 (UTC)

I got the same 'ineligible' message. Enough already, just give us a voting site with a radio dialogue voting doodad...

Radiojonty 21:50, 19 August 2011 (UTC)


What site have you guys edited? Is it Wikipedia, Wiktionary, Wikimedia, etc, etc? You are only eligible to vote on that site. Go to whatever site it is and log in. If you can't log in, make sure you have typed your username in correctly (it is case sensitive). Black.jeff 22:29, 19 August 2011 (UTC)

How am I supposed to know for which site I was classified? Since Wikimedia knows, they should include a direct link in the email. --Ullrich.c 05:58, 20 August 2011 (UTC)

I've edited on Wikipedia and got the email saying I'm eligible. But when I log in to Wikipedia and go to 'Special:SecurePoll/vote/230' there, it still says I'm not eligible. This is silly; give us a proper voting system.

I'll add my voice to the number of people who got an e-mail saying they were eligible, but when I go to Special:SecurePoll/vote/230 (while being logged in) the web site claims I am not eligible. I have almost exclusively edited on Wikipedia and there is where I should vote then. --NiklasBr 12:32, 27 August 2011 (UTC)


When I tried to log in: wrong password; when I attempted to reset my password I received an error stating my username did not exist (SNAFU?).

I would have voted NO. I probably will not vote (I'm lazy; who knows when I'll be able to resolve login issues).

FYI: turns out there is an opt-out option for all http images in most browser settings.

I sometimes find real life visual input offensive. It turns out that not seeing offensive images is as easy as not looking at them. If I don't want to see the leper across from me on the bus all I need to do is look away, same goes for the little girl in the 6" skirt (both images I consider disgusting)

In summary, NO TO CENSORSHIP. The primary purpose of any encyclopedia should be the truth. Let the censors choose not to include Wikipedia before Wikipedia compromises its integrity to include the censors' wishes.

To reiterate: WHOLE TRUTH - CENSORED IMAGES = INCOMPLETE TRUTH

Some people prefer a partial truth, and there is nothing wrong with that as long as they do not impose it on others. The opt-out of images seems intended so institutions can remove partial content (images) from Wikipedia and redistribute that (partial) information (to students etc.).

If it was for bandwidth limitations I would totally be for it. But, let's be honest, this censorship request is not for bandwidth limitations and 64k isn't enough memory for the home computer! ;-)

Why a vote?[edit]

Isn't it possible to create a checkbox in the Wikipedia profile to ask whether I want to see the photos in the clear? Is it not possible for technical reasons? If it's possible, can someone answer in German? My understanding of English isn't so good, thanks! --Stoppoker 20:45, 19 August 2011 (UTC)

Of course it is possible to create the checkbox. The question is, "Shall we create a checkbox to ask you whether you want to see photos clear?" Some people (like Pothed, below) think the checkbox should not be created because you might use it for self-censorship. WhatamIdoing 19:06, 23 August 2011 (UTC)
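To make the exchange above concrete: the opt-in behavior being debated amounts to a per-account preference check before an image is shown. This is only a sketch of that logic; the preference keys, category names, and function are all hypothetical illustrations, not the actual proposed design:

```python
# Minimal sketch of an opt-in, per-account image-hiding preference.
# Everything here (flag names, categories) is an assumption for
# illustration; the referendum did not specify an implementation.

def should_hide(image_categories, user_prefs):
    """Hide an image only if this user opted in AND tagged a matching category."""
    if not user_prefs.get("filter_enabled", False):
        return False  # default behavior: every reader sees every image
    hidden = user_prefs.get("hidden_categories", set())
    return bool(set(image_categories) & hidden)

prefs = {"filter_enabled": True, "hidden_categories": {"medical"}}
print(should_hide(["medical", "anatomy"], prefs))  # True
print(should_hide(["landscape"], prefs))           # False
print(should_hide(["medical"], {}))                # False: no opt-in, nothing hidden
```

The key property, and the crux of the censorship argument in the replies that follow, is the default in the first branch: a reader who never touches the checkbox sees everything, and a reader who opts in can always reveal a hidden image again.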

How is it censorship? This is giving people the option to hide content that they don't want to see. If they want to see the image, they can. Censorship would be removing the images or hiding them without giving people the option to see them. Black.jeff 22:29, 19 August 2011 (UTC)

Self-imposed censorship is still censorship. I think it's atrocious and runs counter to the purpose of the site. I'm surprised this idea has made it this far in development. Pothed 22:58, 19 August 2011 (UTC)

Self-imposed censorship and censorship in the wider sense are very different issues. One is imposed on you by a third party, which infringes several freedoms. Self-imposed censorship is you not wanting to see or read something, so you don't. If anything, opposing that goes against freedoms. If you allow people to censor images for themselves, they might then find out more about a particular issue, and thus it can further the goals of the WMF. Black.jeff 21:24, 24 August 2011 (UTC)

I agree.
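The opt-in model debated in this thread (a per-user checkbox, with every hidden image still viewable on demand) can be sketched in a few lines. This is purely an illustrative sketch, not MediaWiki's actual preference system; the function name, the `hide_images` preference key, and the placeholder markup are all hypothetical:

```python
# Illustrative sketch of the opt-in checkbox model discussed above.
# Hiding is a per-user preference, and a hidden image is replaced by a
# reveal link, so the reader can always choose to see it.

def render_image(image_url, user_prefs):
    """Return placeholder markup if the user opted in to hiding,
    otherwise the image itself."""
    if user_prefs.get("hide_images", False):
        # Opted-in reader: show a click-to-reveal link instead of the image.
        return f'<a class="hidden-image" href="{image_url}">Show image</a>'
    # Default reader: the image is shown as normal.
    return f'<img src="{image_url}">'

# The default (no preference set) shows the image unchanged.
assert render_image("x.jpg", {}) == '<img src="x.jpg">'
# An opted-in reader gets a reveal link rather than nothing at all.
assert "Show image" in render_image("x.jpg", {"hide_images": True})
```

Note that in this sketch nothing is removed from the project itself; the preference only changes what one logged-in reader's page render contains.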


I can't vote[edit]

I keep getting email telling me to vote, but I can't vote even when I log in globally from Wikibooks. b:User:Jsalsman 21:16, 21 August 2011 (UTC)

Hmm. You should qualify. Have you tried voting at [2]? If so, and it's not working, can you tell us what message you're getting? --Mdennis (WMF) 21:46, 21 August 2011 (UTC)
Same problem here.
Sorry, you are not in the predetermined list of users authorised to vote in this election.
Not possible to vote :( --E7 10:12, 22 August 2011 (UTC)

It worked after I logged in to Meta with my real name. James Salsman 16:28, 22 August 2011 (UTC)

It worked for me after trying it a few days later... Silly stuff. --E7 09:36, 25 August 2011 (UTC)

Lopsided and Strange[edit]

Oppose: This proposal is ludicrous and lopsidedly biased.

I've made more than 10 edits and contributions prior to Aug 2011, but I'm not eligible to vote. However, I did receive an e-mail inviting me to participate!

So what gives? How are those who are capable of voting chosen? It looks lopsided to me that it's not open for everybody to vote! Why isn't this open to the general public? It seems to be a limited crowd with a biased point of view trying to make something that everybody else will have to live with. That's unfair, to say the least.

And as for the topic of discussion, the topic to vote on being censoring images: come on, people, why should Wikipedia, an information source, be wrapped up in such a discussion in the first place? The Internet is full of all kinds of information and images. If people don't like to see such images, then they shouldn't go to such sites... or if it's certain images of certain types, then they should use filtering software on their own PC to filter them.

Having such a Wikipedia committee take up such a controversial issue in the first place makes me wonder what the real reason behind all of this is. Wikipedia should spend more time working on how to get more information to more people, and not on censoring certain information for certain minorities of people, as I'm sure the majority would NOT go for such a proposal.

Again, I go back to my lead point: why isn't this voting open to the public? I presume that if it were open to the public, the minority trying to get this pushed through would lose out... and thus the bias introduced on Wikipedia's part by not allowing the majority to vote only reinforces such tactics.

Wikipedia should NOT be delving in censorship! It's not Wikipedia-like!

Wbenton 04:27, 22 August 2011 (UTC)

Did you go to en.wikipedia (which is not here at meta), log in, and type Special:SecurePoll/vote/230 into the search box? WhatamIdoing 19:11, 23 August 2011 (UTC)

Why spamming e-mails that are not allowed to vote?[edit]

My apologies for this discussion topic, but why on earth would I receive an e-mail saying that I can vote and everything, while when I go to Special:SecurePoll/vote/230 it simply says that my account can't vote?

seriously, why?

Hoorayforturtles 14:21, 22 August 2011 (UTC)

Did you visit the page on skwikipedia? that is http://sk.wikipedia.org/wiki/Special:SecurePoll/vote/230 ? --Bencmq 14:26, 22 August 2011 (UTC)
Also, compared with discussion, voting is evil-- but non-participation is even more evil. We'll always take votes as feedback, but discussion is even more valuable and informative as feedback. If you can't vote, it's probably because you're not coming from your main wiki. But even if someone actually isn't eligible to vote, we'd still want your opinion. --AlecMeta 15:56, 22 August 2011 (UTC)
Thanks for the skwikipedia link, it worked
Hoorayforturtles 08:57, 23 August 2011 (UTC)

Proposed change to instructions[edit]

The directions currently say, "Go to one wiki you qualify to vote from. In the search bar, type in Special:SecurePoll/vote/230. For example, if you are most active on the wiki meta.wikimedia.org, go to meta.wikimedia.org/wiki/Special:SecurePoll/vote/230."

I think we want to expand this to say, "Go to one wiki you qualify to vote from. You are currently reading a page at meta.wikimedia.org. For most users, you will have to leave Meta and return to your main wiki project to be able to vote. In the search bar at your regular wiki project, type in Special:SecurePoll/vote/230. For example, if you are most active on the wiki meta.wikimedia.org, go to meta.wikimedia.org/wiki/Special:SecurePoll/vote/230." WhatamIdoing 19:19, 23 August 2011 (UTC)

Hi. Sorry that this went unanswered so long. It's really hard to keep up with what's new on the page with the current format. :/ That sounds like a good idea to me; I'll run it by the committee. --Mdennis (WMF) 15:03, 24 August 2011 (UTC)
I have been told that I may go ahead and implement this. :) About to do so. --Mdennis (WMF) 18:10, 24 August 2011 (UTC)
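For what it's worth, the rule in the instructions above is mechanical: take the domain of the wiki where you qualify to vote and append the Special:SecurePoll path. This is only an illustration of that URL pattern; the helper function name is made up:

```python
# Illustration of the voting-URL pattern from the instructions above:
# "<your home wiki domain>/wiki/Special:SecurePoll/vote/230".

def vote_url(home_wiki_domain):
    """Build the voting URL for the wiki where the user qualifies to vote."""
    return f"https://{home_wiki_domain}/wiki/Special:SecurePoll/vote/230"

# The example given in the instructions, for a user most active on Meta:
assert vote_url("meta.wikimedia.org") == (
    "https://meta.wikimedia.org/wiki/Special:SecurePoll/vote/230"
)
```

This also explains the "I can't vote" reports above: the same path on the wrong wiki (one where the account doesn't meet the eligibility criteria) yields the "not authorised" message.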

What's a referendum and what's this[edit]

A discussion about the title of the page is taking place at Meta:Proposed page moves#Image filter referendum.

Vote is needed to trash the proposal[edit]

Please, people, vote against this insanity. Self-censorship is still censorship. How blissful it must be for people to continue being ignorant of topics they find "offending" or "harmful". Also, using child safety once again as a reason for such a system is just maddening. Once again! And in Wikipedia of all places! Isn't the slowly but steadily tightening noose around the neck of the free Internet enough? All the reasoning for censoring the Internet is always about protecting children or preventing child porn, while the actual results are much wider-ranging in every censorship case ever recorded. Now that self-deluding cancer has reached Wikipedia. This is truly maddening. 0rino 13:17, 20 August 2011 (UTC)

The proposed Image Filter project is a precursor to imposed censorship, as it involves the subjective categorization of images, e.g.: an image of a girl on a beach in a bikini, which would be inoffensive to some, would be categorized as objectionable or even pornographic by religious fundamentalists. The very basis of this project is opposite to Wikipedia's core values, one of which is NO CENSORSHIP. And as noted previously above, if the project is implemented, then '....welcome to image tag wars'.

This proposal is COMPLETELY UNWORTHY OF WIKIPEDIA, which should abide by the firm policy of no censorship. Readers of our projects who view articles on masturbation or the Nanking Massacre should reasonably expect to see images which are sexually explicit or graphically violent; they and their children should not view our works if they can be so offended, since our works and their images are based on codes of verified reliable sources and neutral point of view. Parents are wholly responsible for what materials their children access via the Internet –Wikipedia is not their babysitter.

As to 'surprise' images of nudists riding bicycles (and similar objections): if such images are not in the norm (i.e. most people do not ride bicycles in the nude), then the image is misplaced or irrelevant to the article and should be expunged. If and when an article on 'Nude bicyclism' is created, then the images of nude bicyclists are entirely appropriate to that article. The argument of 'surprise' is a complete red herring submitted largely by those of fundamentalist right stripes. Harryzilber 16:20, 17 August 2011 (UTC)

I absolutely agree. I think that it's despicable that the board would take such a heavy-handed approach to such a controversial topic, possibly against consensus. I marked 0 for the first question on the ballot and I encourage others to do so as well. — Internoob (Wikt. | Talk | Cont.) 17:08, 17 August 2011 (UTC)
That's naive though. Suppose that I need to learn about Naturalism (art) - so I type in "Naturism" - and find (to my horror), not some crappy porcelain figurine of a shepherd, but instead, right there at the very top of the page, to the right of the lede - a picture of half a dozen naked people. A simple typo or misunderstanding of a term (that the reader is looking up in an encyclopedia precisely because they don't yet understand it) can result in such problems. Consider that there are some companies out there who might fire someone for looking at the picture at the top of Naturism on company time when they are supposed to be researching Rococo figurines! So precisely because I'm against censorship of Wikipedia - and because I demand the right to have full-frontal pictures of nudists at the top of the naturist page - I'd very much like a "No Porn Please!" filter. SteveBaker 19:10, 17 August 2011 (UTC)
SteveBaker, I like what you're saying. I'd like to add, in response to the censorship purists, that this isn't, fundamentally, imposing censorship on ANYONE. It sounds like an opt-in system. Making a list of checkboxes (I think that would be the best system) that say "Would you like to block images of: Porn (Scant dress/Soft/Hard)? Mohammad? Gore (Nasty papercut/Wounds/Serious mutilation)?" etc. isn't imposing censorship. It's tabulation of images. Wikipedia? All about organizing ideas, just by defining them. Why shouldn't we allow users to block what they don't like?
In response to the peeps below me, "broden you horizon"? Grammar/Spelling aside, why would I want my horizons forcibly moved out? If I want to try that, I'll do it myself, thanks. And communist China? Not the only people touchy about employee porn. -Reade
You are very easely horrified - but that experiance may broden you horizon. Perhaps you should teach your computer not to accept certain words you type into searchfields. --Eingangskontrolle 20:25, 17 August 2011 (UTC)
If your company fires you for reading Wikipedia then you're likely in Communist China, in which case you'll probably have many more important issues to deal with. Harryzilber 20:33, 17 August 2011 (UTC)

Wikipedia is not intended solely for the twenty-something leftist neckbeard from a first world country. I would hope Wikipedia is used as an educational tool by people who most need it, and if allowing opt-in filtering of pictures of the prophet Mohammed gets a young Muslim child from the Sudan to educate themselves, then that is a far better thing than you getting to feel ideologically pure about "censorship". Which this ISN'T. Unigolyn 08:52, 18 August 2011 (UTC)

Unigolyn: while your concept of education involves the removal of knowledge from our collective works, that is certainly not Wikipedia's, which has had for many years the stated goal of "...a world in which every single human being can freely share in the sum of all knowledge". Not: "a world in which every single human being can freely share in the sum of all knowledge except categories a through m". You're obviously a usurper of that cause.
It's not Wikipedia, or Unigolyn, or you that's forcing information out of the article (if only potentially); it's the culture that says, "No images of Mohammad." What's better: a situation where religious adherents cannot morally (as they see it) view a page, or one where they can? Which is freeing up the flow of information, at least for them?
Young Sudanese students are entitled to view or not view images of Mohammed as they see fit —that decision is definitely not for you to make on their behalf. Hopefully you're not a teacher or in any way involved with an educational system. Harryzilber 21:12, 18 August 2011 (UTC)
The problem is partly that it can't be done in a non-neutral way. No matter what we will end up being simultaneously too conservative for some viewers and too liberal for others. It is not the position of an encyclopedia to say "this photo is inappropriate" or alternatively, "you shouldn't reasonably be offended by this photo". The cultural aspects of it add a whole other dimension of POV. You might decide that women in bikinis are not nudity, but that's your culture talking. Your "naturism" problem can be solved in any of several less drastic, more effective ways (disambiguation page perhaps?) but the POV aspects of this cannot. — Internoob (Wikt. | Talk | Cont.) 22:40, 17 August 2011 (UTC)
Please define porn. Does the foot need to be naked to be porn? Does it need to be in a stocking? Or with a shoe hanging from a toe? Under a stream of water? Only if it's a male foot, and how obvious must it be that it's male? Is a naked neck porn, or only when the veil is shown partly lifted to expose it? The knee, porn or not? The lower thigh? The upper? Only the inside? Does male or female thigh make a difference? A male buttock in soccer shorts during a game? A possible bulge from a penis under trousers? A group of nude people? The group only out of a clear artistic or naturist context? A two day old naked in the bath? A 1 year old? 2 year old? 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 89, 90 year old? How much of the nakedness must be visible and how much hinted at or presumed? Are signs of sexual maturity required? Which could be porn and which innocent? This description of things that might be porn, or parts of it? Each of those would be erotic material for some portion of the population. "Porn" is not a readily definable concept since so much is in the mind of the viewer. While I couldn't define porn reliably in a way that covers the range of human experience, I could describe facts about an image that an individual could choose to use to create their own individual description of what they consider to be possibly pornographic. Jamesday 03:04, 18 August 2011 (UTC)

The irony with this objection is that forcing unwanted imagery on users is worse than censorship. Personal choice is not censorship. Why, if it were legal, I think that those crying out "censorship" at this proposal would argue for including child pornography in the paedophilia article, as not doing so would be "censorship". 2.120.227.113 01:07, 18 August 2011 (UTC)

The above contributor wrote "The irony with this objection is that forcing unwanted imagery on users is worse than censorship." That's highly disingenuous. No one is being forced to view Wikipedia, which amply notes its non-censorship policy along with its disclaimers. If you don't want to see articles that include images which offend you, then you have the right to fork off another encyclopedia and stay there. The wiki software is completely free and no one will mind if you never visit this work again for the remainder of your life. Harryzilber 04:22, 18 August 2011 (UTC)
Cannot unsee. If, through ignorance, on the road I usually follow through the wikiprojects, I run into something I find offensive, I can't unsee it, not readily. It takes time to forget, and it's not a process I have very much control over. I can, accidentally, be 'forced' to view an image. Why not let me block those images categorically before I run into them? It affects no one else, unless someone is borrowing my account, and really? That's not exactly likely.
There are many more topics that don't violate law, but will also fall under the curse of censorship. Whats lost is lost and the WMF is supporting it. --Niabot 01:13, 18 August 2011 (UTC)
This about "paedophilia" is one of the largest straw men I have ever seen. People that are against the proposal don't necessarily want the WMF to host illegal images. You're also missing the point about censorship: if some POV pusher wanted to tag all the women in bikinis as nudity for example, they would effectively be censoring "reasonable" content for those who have this feature turned on. And I put "reasonable" in quotation marks because the POV pusher would actually have a case: for some people, this is nudity, which brings me to my next point that this can't be done culturally neutrally. You might know exactly where to draw the line on nudity, or even if you mostly know where to draw the line, there will be people who are more conservative and others more liberal than you. — Internoob (Wikt. | Talk | Cont.) 20:04, 19 August 2011 (UTC)

I completely agree; this proposal is not in the spirit of Wikipedia, and I don't know how such a major change could have gone so far without broad discussions with the community (not just some small "consultation" which nobody will hear about). Instead they simply spring this change upon us and ask for votes, though we are not told whether this "referendum" is even binding. The entire referendum question is written in a very biased way, which makes it sound like a minor technical change; I'm guessing no one who objected to the proposal was allowed to contribute to writing the question (this type of vote would shame a banana republic). --Hibernian 06:26, 18 August 2011 (UTC)

It's too late. This vote will not decide whether the feature is implemented or not. If you don't like that, you can complain to the board, or walk, but as I understand it, nothing here matters in terms of the question of whether such a feature will be implemented. This discussion is moot, let's move along. --Joe Decker 02:55, 22 August 2011 (UTC)


I am utterly AGAINST the proposal! It reeks of censorship! Unless it's a filter to be built in/activated by adults (on their own computer, and ONLY there) for the protection of minors.

Thus I answered question 6 with "?", for "culturally neutral" is an oxymoron: culture is diverse by nature and by definition. Conjuring oxygen into carbon dioxide while breathing is about all we humans have in common.

Hence my unease at the VERY suggestive example in question 5 (condoning the "ferocity of brute force" while proscribing a "state of undress"), redolent of the nauseating puritanism which abhors anything that could ever so remotely hint at sexuality (or indeed the corporeal). Sintermerte Sintermerte 14:08, 22 August 2011 (UTC)


Survey is ambiguous[edit]

It is important that the feature be culturally neutral: as much as possible, it should aim to reflect a global or multi-cultural view of what imagery is potentially controversial.

It's completely unclear whether this is intended to be an inclusive or exclusive addition, i.e. would a Muhammad filter be offered/implemented because one cultural group prefers this, or would Muhammad cartoons remain completely uncensored because only a minority group opposes this? While the potential contradiction between "global" and "multi-cultural" in one of the possible interpretations hints at the intended meaning, this insight is immediately torpedoed by the inclusion of "potentially". This means whoever will be in charge of implementing the outcome of the survey can interpret this question to mean whatever they want. Therefore many people may be uneasy with answering this question. Samsara 22:02, 19 August 2011 (UTC)

It's worse than that, actually: large groups of Wikipedia contributors find censorship as such offensive. And it has long been recognized that tagging images (or other media) for the explicit purpose of censoring them helps out those who want censorship. Thus you have the situation where some people find certain images offensive, while others find that implementing technical infrastructure to facilitate the censorship of those same images is also offensive. Thus you literally *cannot* avoid offending someone. The only choice is whom to offend. Do we want to offend those who object to the free distribution of information (including topics that are controversial to someone somewhere)? Or do we instead want to protect those people from offense, but trample all over the editors who believed in Wikipedia's stance on NOT censoring? --Eivind 06:49, 22 August 2011 (UTC)

Petition against the development of an opt-in personal image hiding feature[edit]

We, the undersigned, are Opposed to this development.

95.147.60.247 08:02, 20 August 2011 (UTC) (wikipedia:bobcousins)

Please don't do this. Actual discussion is occurring above. Appending one's name to a "petition" without elaboration is far less constructive. —David Levy 08:23, 20 August 2011 (UTC)
Agreed -- although, in fairness, this is really a missing option in the poll. The poll should, if it is to be useful at all, have a question where you're allowed to communicate whether you strongly support, strongly oppose, or fall somewhere in between on this misfeature. The closest thing there now is whether the feature is "important", which isn't the same thing at all. Thus, while I agree with you that adding your name to a list of people who oppose the feature is less constructive, I would argue that the poll itself is thus less constructive. --Eivind 06:59, 22 August 2011 (UTC)

Bias in Referendum Questions - Important/Unimportant vs. Support/Oppose[edit]

I like the idea that this feature would go to referendum, as there are strong arguments on both sides. But it is deeply discouraging to read through these arguments, go to the voting page, and realize that the decision has essentially already been made. We are not being asked whether we support or oppose this feature. We are merely being asked whether it is important or unimportant that it be implemented. Even a vote of '0' does not register active opposition, merely disinterest. Personally, I have grave concerns that this initiative is wholly incompatible with Wikipedia's neutrality policy and will be a great boon to third-party censors. Saying that it is "not important" that the feature be implemented is not an accurate reflection of my views.--Trystan 17:58, 21 August 2011 (UTC)

Third-party censors can already use the existing Commons category system. All or most images showing nudity, for example, are already under a nudity category (see "nude women with shoes": the censors can even choose the most meticulous details of our already-in-place censorship system!). We have a violence category with several sub-categories too, and so on. This is the reason why I think we don't need many new categories for this filter system: they are mostly already there. One could consider abolishing the complete category system to make it harder for censors, though... Adornix 18:09, 21 August 2011 (UTC)
I'm not entirely pleased with image-hiding based only on current cats, but it is survivable. This does still open up new avenues of attack, but the security measures (soft and hard) are already in place for current cats. These will have to be strengthened, but at least we won't be on terra incognita.
But do we have any way to stop people making new cats, specifically for purposes of filtering (which we know they will). That will be killer. :-/ --Kim Bruning 18:29, 21 August 2011 (UTC)
There would need to be new categories (see the example in the previous section). Rich Farmbrough 19:44 21 August 2011 (GMT).
As has been pointed out above, we currently categorize images in an attempt to be neutrally descriptive. This is a world apart from tagging potentially "objectionable" content to empower each user (or a third party) to filter whatever they do not want to see.--Trystan 18:39, 21 August 2011 (UTC)
As far as I understand the foundation's plans, all new filtering categories would have to be as neutrally descriptive as the already existing ones. This is my interpretation of "culturally neutral" as Phobe explains it above. And since the community will have to tag the images, there has to be consensus for every new category and for all categorizations. Adornix 20:27, 21 August 2011 (UTC)
How can the formal determination that x image is "potentially objectionable" and y image isn't possibly be neutral? How can the formal determination that x subject constitutes "sexual" or "violent" content and y subject doesn't possibly be neutral? —David Levy 20:42, 21 August 2011 (UTC)
I don't doubt that we can set standards and use them to label potentially controversial pictures. Immodest dress, religious satire, etc. can be objectively (if rather arbitrarily) set down into working, applicable standards. But the process of applying them - of tagging images to warn about certain types of content - is a very different process from looking at an image and trying to describe what it is substantively about. For example, if a picture has a rather incidental depiction of a same-sex couple that is largely irrelevant to what the picture is about, it would not make sense to categorize it under homosexuality. But if we are in the business of warning people who don't want to look at any depictions of homosexuality, it would need to be labelled as such.--Trystan 20:46, 21 August 2011 (UTC)
"to filter whatever they do not want to see" - How do we know what they don't want to see unless they give us specific parameters? Won't they be upset or at least annoyed when they inevitably find our idea of "nudity", "sex", etc. doesn't match theirs? Evil saltine 21:32, 21 August 2011 (UTC)
If you read the article page describing it with the images, they 1. get to chose if they accept the categories to block or not and 2. can open an image that is blocked or close one that isn't. Thus, they will be able to set their parameters. Ottava Rima (talk) 21:52, 21 August 2011 (UTC)
So, for example, someone would be able to hide all depictions of Mohammad (and nothing else) if they wanted to? Evil saltine 21:55, 21 August 2011 (UTC)
Who determines which categories to use? (No, we can't simply dump in the entire current list of image categories, which don't come close to covering the "potentially objectionable" content.)
Who determines what images are placed in a given category? —David Levy 22:32, 21 August 2011 (UTC)
I propose that the people who are offended tag the images, so the majority of the community won't waste their time. I suggest that no one who opposes this system should participate in the classification process. Zanaq 17:36, 22 August 2011 (UTC)
OK, then I suggest that no one who is in favor of this system should participate in the classification process either. Come to think of it, though, the proponents are the ones far less likely to be neutral in deciding which images are objectionable and which are not, while the opponents of the filter are rather neutral when it comes to the images. You know what, actually, only opponents of the filter should be creating and maintaining the filter categories (i.e. if the board is insane enough to allow such a Wikimedia-hosted filter category). --195.14.220.250 18:21, 22 August 2011 (UTC)
How should I know what other people find objectionable? I find nothing objectionable, so I won't tag anything. If you find something objectionable, you should tag it. The filter itself won't be neutral: one cannot judge objectionability in a neutral way. The mechanism would be neutral, since anyone can declare anything objectionable. Zanaq 17:22, 23 August 2011 (UTC)
So... every single interest group or individual can tag whatever they like, however they like, with no policy or interference, and all images even remotely questionable can be censored from those who use the filter and/or in libraries, schools and other places where the filter would be implemented for all computers. It may be culturally neutral, in that there will be very few photos that will not be tagged.--Shabidoo 22:18, 27 August 2011 (UTC)
That is correct. Zanaq 10:52, 3 September 2011 (UTC)
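Stripped of the policy questions, the category-based mechanism being debated in this section amounts to a set intersection: the reader picks categories to block, and an image is hidden if it carries any of them. The following is a sketch under that assumption only; the function name and category names are hypothetical, and this says nothing about who gets to assign the categories (which is exactly the dispute above):

```python
# Sketch of the category-intersection filter mechanism debated above.
# An image is hidden for a given reader if and only if it is tagged with
# at least one category that the reader has chosen to block.

def should_hide(image_categories, blocked_categories):
    """Return True if the image carries any category the reader blocked."""
    return bool(set(image_categories) & set(blocked_categories))

# A reader who blocked the (hypothetical) "Nudity" category:
blocked = {"Nudity"}
assert should_hide({"Beaches", "Nudity"}, blocked) is True
assert should_hide({"Rococo figurines"}, blocked) is False
```

The sketch makes the tagging objection concrete: the mechanism is neutral, but its effect depends entirely on which categories exist and which images end up in them.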

Just to spell it out[edit]

Oppose, for the reasons I expounded above. I also urge other editors, whether in favour or opposed, to vote so we know where the community stands: whether you agree with the proposal or not, I think nobody likes the fact that it is being crammed down our throats. This will also prevent wrong assumptions about where consensus lies. complainer 08:18, 22 August 2011 (UTC)

I don't think we'd be able to accommodate that here, as we've received over 20,000 votes already. --Mdennis (WMF) 12:50, 22 August 2011 (UTC)
People actively discussing here are far fewer; I think the total number of contributors to this page is less than 1% of that. Hopefully, they are a representative sample. --complainer 12:54, 22 August 2011 (UTC)
I think the important thing in the coming months is to ensure that the results are not misused to try to establish evidence of community support. This is not a referendum presenting arguments for and against and then asking whether people support or oppose, and it should not be misrepresented as such.--Trystan 13:31, 22 August 2011 (UTC)
I don't think there is any risk of misrepresentation: the nature of the referendum has been understood by the community, in spite of the fact that the term "referendum" is, as observed multiple times, wildly misleading. The problem with the referendum is that it can establish evidence of community support, but has no way whatsoever of assessing whether the community does not, in fact, support the proposal. The purpose of this head count is to fix this (not really minor) flaw. Due to the vastly different voting procedures, I don't think anybody could possibly confuse the two. --complainer 13:41, 22 August 2011 (UTC)
+1 to the need to take care with the results, and I agree with complainer that the results aren't actually going to wind up misused, for a lot of reasons. The best thing we can do is publish all the results in aggregate, after removing usernames. To me, if one of our project communities reaches consensus that it wants this, then the feature should, at minimum, be implemented on that project. --AlecMeta 16:43, 22 August 2011 (UTC)

We have to note that, according to the way the question is asked, the result will be: voters feel it is important to varying degrees. Let's hope that 0 and 1 will be interpreted as not important at all, respectively not important enough to spend any more money and manpower on it. This referendum is a farce, and until a real vote with yes or no options is held, it will not be accepted. --Bahnmoeller 07:41, 23 August 2011 (UTC)

So, complainer, you are trying to make people in favour of the filter think "I support the decision, but the process was bad, so I'll vote against it". That's not the way to govern a project, or a country, or the world. It's like voting in favour of the death penalty just to complain about a country's crime policy. --190.135.94.198 00:53, 24 August 2011 (UTC)
What on Earth gives you that impression? I am trying to make most people against the filter say "I don't support the decision, and the process was bad, so it looks like I voted for it". I don't need to deceive anybody: if you read the posts, and I mean all the posts, you'll easily see that people are mostly against the proposal. There is no real way of seeing it from the way the referendum is formulated right now. I am, by the way, also urging people like you to say "Support" instead of speculating on my intentions: why would I be concerned about censorship if I were to act underhandedly myself? It wouldn't make any sense. --complainer 24 August 2011
No, you'll see that many of the people who (1) figured out where to comment, (2) speak English well enough to join this discussion, (3) care enough to comment at all and (4) believe that they're going to lose—a group that is dramatically more limited than "people"—are against the proposal. Or, at least, they're against what they believe is the proposal.
"People" and "people who comment in a complaints forum" are not the same thing. People who are extremely unhappy and feeling powerless are far more likely to comment than people who think it's a great idea and are confident that they are going to get what they want. If you think that this proposal is so obviously a good idea that of course everyone will support it, you cast your ballot and go about your business. If you think this is a truly horrific idea that will be implemented anyway, then you are motivated to complain here.
This is a very typical pattern. You can see the same pattern right now at the discussions about the Article Feedback Tool: a dozen people are screaming bloody murder about it. The fact that 90% of the (literally) thousands of people who have used it said that they support it is incomprehensible to them. The absence of those thousands of people from the bug-reporting and other complaints-oriented pages has led some of them to basically believe that those thousands of people don't exist (or don't count, since many of them are merely readers rather than experienced editors). They have confused the "number of supporters who are willing to waste time arguing with complainants" with the "number of supporters". WhatamIdoing 18:52, 25 August 2011 (UTC)
Can we stop the assumption war for a moment? And maybe try not to turn it into a (spurious, too: this is not a bug, but an intentional proposal) parallels war instead? This is the entire idea of a head count. As for people being happy or unhappy, let's face it, the accepted practice in democracies is that people who choose not to express their opinion have no influence on decisions; you can't invalidate the results of an election just because you think that people who are happy with the government vote less--although this is, technically, true in most countries. And please, stop telling me that the people who support the decision are a silent majority: many, including a user banned for trolling on wikipedia, are extremely active, vocal, and aggressive here. Which reminds me: if everybody signs his posts somehow, it becomes a lot easier to communicate. --complainer
We are getting a head count; so far, more than 20,000 people have participated in the head count. However, the head count is not happening on this page. I don't know (I doubt that anybody knows: that's the point behind having a third-party vendor collect the votes) what the results will be, but I do know that the participation on this page is not a reliable marker for the actual results.
The reason I mentioned the parallel discussion is that there have already been extensive surveys there, and we have results from several thousand people, which are that about 90% say they support the AFT, 5% say they think it useless, and 5% don't care. But on the discussion pages, the supporters and detractors are evenly divided. The discussion is therefore a completely unreliable indicator of the community-wide view.
Back on this discussion—but also closely paralleling the AFT discussion—I also know that the individual supporters who choose to comment on this page are very confident that the tool will be supported, and that the individual detractors who choose to comment on this page are convinced that their very strong opposition will "be ignored" (to use a typical claim) because a majority will support it. However, we might all be wrong. It could be that the supporters will turn out to be the minority. We'll find out next month. In the meantime, I encourage you not to mistake the views expressed on this page for the views of the whole community. WhatamIdoing 16:54, 26 August 2011 (UTC)
The community supports the filters because all the information they had to answer the questions was based on a biased FAQ, strangely phrased questions and the proposal written as though it were a harmless change to Wikipedia and nothing more. If you carefully look at these discussions you will see that there are FAR more people against this "referendum" than for it. --Shabidoo 18:44, 26 August 2011 (UTC)
I know that there are many people here opposed to the filter (not the referendum). Whether those people represent the view of the whole community is unknown.
It appears, however, that you believe the rest of the community will support the filter only because you think they are much stupider than the critics here. I am sad to read that you believe that the supporters' ability to read and understand the referendum is so much less than the critics'. Have you considered the alternative, good-faith interpretation, which is that the support is based on fully understanding and actually liking the idea presented here, rather than them being ignorant, confused and easily swayed by what you call "a biased FAQ" and "strangely phrased questions"? Could it be that the supporters, rather than being stupid, merely have different values than you? WhatamIdoing 17:28, 27 August 2011 (UTC)
@WhatamIdoing: the problem is that the referendum text is misleading; I don't think people reading the referendum:
1 - have thought through what the tagging can mean: I myself originally came to this page with a merely technical concern, namely that the servers would not have been able to bear the added load (the point remains valid, by the way).
2 - if opposed, have a coherent way of expressing it: while some might actually vote zero on importance, I expect most others just to abandon the page. People that like the proposal have no brainwork to do: they just have to praise it.
The problem goes deeper than this referendum and down to the shift of wikipedia away from its consensus-based decision procedure, which began with the restructuring of the WMF Board of Trustees rules into a complicated abomination that, in practice, allows Jimbo to always control its majority (it would be off topic to discuss it here but, if you are interested, try to read the restructure announcement and follow the various branches of the election procedures). Then there is the matter of form: a decision taken by a former pornographer with Lovelace's syndrome and based on a "study" with no hard research and countless simplifications, which amounts to little more than a high school essay, is not something people should accept, regardless of their opinion about pictures. --en:User:complainer
I had the very same experience. I thought the filter seemed harmless and voted for it. I looked at the discussion (as I thought it was only a discussion about how to implement the referendum). I discovered to my horror, that there isn't even anything close to a community consensus on the topic. Very revolting. I've been trying ever since to get a very big and noticeable link to this discussion, and NO ONE listened. I just got them to move the discussion link a little higher. So, to answer your question...it has nothing to do with the cleverness of anyone and everything to do with neutrality in presenting the issue (more than one side) and encouraging and fostering debate (including advertising that there is debate instead of writing a completely one sided FAQ and manipulative questions). I cannot recount how many times I have expressed this. I am positive the results would be different if the discussion page was advertised. --Shabidoo 08:53, 31 August 2011 (UTC)
As with all votes, you are not supposed to vote what you believe some other people want. You are supposed to vote for what you—and only you—personally think is best. This means that even if everyone else in the world thinks this is a great idea, but you hate it, then it is your duty and right to oppose it in the referendum—and even if everyone who bothered to complain on this page hates it (which is far from the case, and I believe we'll find that the people on this page are far from representative), if you personally think it's a good idea, then it is your duty and right to support it in the referendum. It is not your job to vote for what you guess other people want. They've got their own votes; you shouldn't surrender yours to them. WhatamIdoing 16:38, 31 August 2011 (UTC)
It might be his duty, but it certainly isn't an option he has: he cannot oppose anything in the referendum. He can just say it isn't important, which is certainly not the same. But you know this as well as we do. complainer
Once again...you fail to see the point. It's almost as if everyone cannot see the point. If someone asks me "Hey, there is this girl who is a little short on money, and we are all pitching in a dollar...why don't you help out?". I'll say...of course...here is a dollar. Now...if someone takes me to the side and says...did you know she is a bit of a loser and always needs money and is out of control and manipulates people? I'll think...I should look into this more...see what other people have to say...see if there are various views and ask questions if I can.
This is how I see this referendum. Unless you know you have to click on the discussion, unless you have any reason to doubt the way this referendum was presented (which was entirely biased and one sided), or unless you are just lucky enough to have randomly clicked on the "discussion tab"...you would have NO IDEA that the filter isn't simply some innocent feature that the board is simply trying to get a vote on. If you take a moment and look at the entire presentation of the referendum, the wording of the questions...and the FAQ before I changed it...you just might pick up on what I am saying. --Shabidoo 02:34, 2 September 2011 (UTC)

Proposal to change the questions on this "referendum"[edit]

There is no question which asks...should Wikipedia utilize an image filter. There is no way to clearly and definitively say...no. --Shabidoo 21:43, 24 August 2011 (UTC)

I do not believe that any changes will be made to the questions in the referendum at this point in the vote. --Mdennis (WMF) 11:48, 25 August 2011 (UTC)
At least the WMF should take into account that there was no "no" option when it starts interpreting the results. --Niabot 11:53, 25 August 2011 (UTC)
Taken into account, and amplified by some of the combinations of freeform comments and votes. SJ talk | translate   01:53, 6 September 2011 (UTC)
AlecMeta writes above: "if one of our project communities reaches consensus that it wants this, then the feature should, at minimum, be implemented on that project. --AlecMeta 16:43, 22 August 2011 (UTC)" -- I like this formulation of community empowerment, and think one of the reasons to improve the poll data is to find out which communities have consensus there. But please note that a sticking point here is how much time it takes to develop a workable solution. If only one community wants it, the work would likely have lower priority. SJ talk | translate   02:00, 6 September 2011 (UTC)
I don't think that's a reasonable interpretation of the ballot questions. If you think this should not be implemented, then your answer to "How important is it for the Wikimedia projects to offer this feature to readers?" is "Zero: it is not important for WMF to offer this feature".
What isn't offered is the more subjective, "How strong are your emotions on this subject?" A person who votes "ten" but doesn't really care much one way or the other will be counted the same as a person who votes "ten" and thinks the world will end if the feature isn't offered. WhatamIdoing 19:00, 25 August 2011 (UTC)
A botched experiment. Nothing like this can accurately give you the information you need, if the planning of it and the execution of it is organised and run by people who have one point of view and assume a positive response from everyone. --Shabidoo 23:59, 25 August 2011 (UTC)
The point is not so much the interpretation, but the procedure: if somebody stops you on the street and asks you "From 0 to 10, how important do you think it is that your left kidney is surgically removed?", would you think "What a nice man, he is thinking of my possible renal disease, I'll say '0' to let him know I'm fine"? Most people would assume he wants to sell your organs, and will do so no matter what your answer is; this is exactly the feeling one has with this particular referendum. To get back to the metaphor, a bunch of fundamentalists shouting "What about the children who need a transplant?" and "One God, one Government, one Kidney" or accusing you of slandering the medical profession all around wouldn't help, either. --complainer 09:18, 30 August 2011 (UTC)

Alternative ideas[edit]

All images on or off[edit]

Any kind of filtering system seems to me a waste of project resources and a morass. Instead, why not simply give readers the ability to turn all images on or off by default? And perhaps the ability to turn any given image back on, based on the reader's own evaluation of the caption and the article's subject. No tagging needed. No discussions required. Much easier to implement. Culturally neutral. Simple. Barte 11:21, 16 August 2011 (UTC)

That is already included in every browser's settings! In Firefox it is Tools > Preferences > Content > remove the check mark from "Load images automatically". Finished! So an easy solution is already there. --Dia^ 13:02, 16 August 2011 (UTC)
That's not the point. If we're considering a quick "hide or show" system, this has to be done on the Wikimedia projects, not by the browser. As you're probably browsing different websites simultaneously, you may want/need a specific website to hide its image content, while you'll want another site to keep its images visible. Cherry 14:09, 16 August 2011 (UTC)
Please choose another browser. Opera has such functionality built in; Firefox likely has either a plugin or a Greasemonkey script somewhere. For IE and Chrome there will also very likely exist extensions for that purpose (site-specific preferences; plugins, addons and similar features handle these!) Mabdul 20:22, 17 August 2011 (UTC)
"Please choose another browser" ?!! Either this image filter is the reader's issue: he wants it, he gets it by himself, and this vote is pointless, because this is not Wikimedia's business. Or this is a Wikimedia issue, and there is no way you can say something like "choose a decent browser". Accessibility of our content is our problem. Cherry 10:17, 18 August 2011 (UTC)
I could support this. — Internoob (Wikt. | Talk | Cont.) 22:48, 17 August 2011 (UTC)
This is what I thought of while reading here. Not browser-based, but site-based. But how do you support this for readers who are not logged in? Is that possible? (Tech ignorant here.) Or would "IP reader" soft (i.e. can be overcome with a series of clicks) image blocking be good for schools, etc.? (mercurywoodrose) 76.232.10.199 07:36, 19 August 2011 (UTC)
Yikes! Filtering images and any other content is a reader issue, not a Wikimedia issue. Individual readers (or their system/network administrators) can and must determine whether and how to filter images (and other content) they or their user groups access, block, or selectively filter. The associated qualitative issues and practical how-to's are bigger than and not specific to Wikimedia. Accordingly, for Wikimedia to involve itself in this morass (to use Barte's apt term) would be inappropriate, off-mission, controversial, and a waste of project resources. Froid 12:52, 19 August 2011 (UTC)
I agree with the off-mission sentiments partly, but this proposal is still better than the POV garbage of the original proposal, which would certainly be an even bigger waste of project resources. If we have to implement something, implement it this way. — Internoob (Wikt. | Talk | Cont.) 19:47, 19 August 2011 (UTC)
That's my take as well. I think a simple on/off option gets Wikimedia out of the morass of trying to selectively filter content. Of course, that would also be true if the project simply posted some instructions on how to turn images off using different browsers. But the former approach, as I imagine it, would be more convenient for readers, while still keeping the project out of the "morass" of selective filtering. It's the selective part of filtering that I think will cause the project grief. Barte 13:51, 20 August 2011 (UTC)
Support Support regardless of what's going to happen with the 5-10-category filter. Helpful as well in the case of animations, which can't be stopped by users who aren't familiar with changing browser settings. Hæggis 20:57, 23 August 2011 (UTC)
+1 SJ talk | translate   02:12, 6 September 2011 (UTC)
  • Support Support – the only workable option, and the only one which takes care of en:Aniconism. --Florian Blaschke 11:22, 27 August 2011 (UTC)
  • Something like .thumbinner img{display:none}.thumbinner:hover img{display:inline} (or whatever, including some way to easily show an image) would be good to have as an option, I think. --Yair rand 02:32, 6 September 2011 (UTC)
  • Support Support – also addresses children's browsing of Wikipedia. --Sir48 23:23, 7 September 2011 (UTC)
  • Support Support, but not excluding the category-based filter. Both filters can practically work together for the best result. -- Sameboat (talk) 07:13, 8 September 2011 (UTC)
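For what it's worth, the on/off idea in this section is simple enough to sketch as a user script. The following is only an illustration of the mechanism, not an actual MediaWiki feature: the `images-hidden` class name, the `hideAllImages` localStorage key, and the click-to-reveal behaviour are all assumptions of mine.

```javascript
// Sketch of an "all images off, click to reveal" preference.
// Assumed (not from the proposal): a localStorage key "hideAllImages"
// and a CSS class "images-hidden" toggled on <body>.

// Pure helper: stylesheet hiding every image until individually revealed.
function imageHidingCss() {
  return [
    "body.images-hidden img { visibility: hidden; }",
    "body.images-hidden img.reader-revealed { visibility: visible; }",
  ].join("\n");
}

// Pure helper: given the stored preference string, should images be hidden?
function shouldHideImages(storedPref) {
  return storedPref === "1"; // opt-in; default is images shown
}

// Browser-only wiring; skipped where no DOM is available (e.g. under Node).
if (typeof document !== "undefined" && document.body) {
  const style = document.createElement("style");
  style.textContent = imageHidingCss();
  document.head.appendChild(style);

  if (shouldHideImages(localStorage.getItem("hideAllImages"))) {
    document.body.classList.add("images-hidden");
  }

  // Clicking a hidden image reveals that one image only.
  document.addEventListener("click", (e) => {
    if (e.target.tagName === "IMG") e.target.classList.add("reader-revealed");
  });
}
```

Because the decision is purely client-side, no tagging and no server-side category work is needed, which is the point of this proposal.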

Create an API for 3rd-Party Websites[edit]

Wikimedia could uphold its commitment to openness by keeping the main WP sites as they are — free of image-hiding — and creating a software API that would enable independent groups to create their own sites which mirror WP's content while allowing them to implement their own display schemes. For example, a group of volunteer parents concerned about their kids seeing disturbing images could create a site called "ChildSafePedia.org" with tools that enable crowd-sourced tagging, rating, and optional hiding of images, all without affecting the official WP site. The API's Terms of Use could include provisions to prevent commercialization and require display of a disclaimer clarifying that the 3rd-party sites are independent from WP. This would allow access to most of the WP content for groups that might otherwise block the site entirely. --KnowWell 18:10, 20 August 2011 (UTC)

This might be more complex than we need just for this task-- but we do desperately need to open up and start working with 3rd party sites to let them interface with us-- take our content, modify it on the fly, etc. "wiki" is the next "http" . --AlecMeta 23:59, 23 August 2011 (UTC)
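Much of the data a third party would need is already reachable through the existing MediaWiki web API (the API itself is real; the wrapper names below and the filtering policy a reuser might build on top are entirely illustrative):

```javascript
// Sketch: a third-party site looking up an image's categories through the
// public MediaWiki API on Commons. Function names are this sketch's own.

// Pure helper: build the API query URL for one file's categories.
function categoryQueryUrl(fileTitle) {
  const params = new URLSearchParams({
    action: "query",
    prop: "categories",
    titles: fileTitle,
    format: "json",
    cllimit: "max",
  });
  return "https://commons.wikimedia.org/w/api.php?" + params.toString();
}

// Network call, usable where fetch exists (browsers, modern Node).
async function fetchCategories(fileTitle) {
  const res = await fetch(categoryQueryUrl(fileTitle));
  const data = await res.json();
  const pages = Object.values(data.query.pages);
  return (pages[0].categories || []).map((c) => c.title);
}
```

A "ChildSafePedia"-style mirror would layer its own tagging and display rules on top of responses like these; the Terms-of-Use provisions mentioned above would have to be enforced contractually, not technically.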

Greasemonkey Off-The-Shelf[edit]

User:87.79.214.168 points out this NonCommercial Off-The-Shelf (NCOTS ;-) ) greasemonkey script which covers most of the requirements as it stands: [3] . Instead of making a lot of hoopla, could we just point users who wish to hide images to this script? Does this cover all the requirements we would like to meet? If not, which requirements still need to be covered? --Kim Bruning 13:57, 21 August 2011 (UTC) I'm all for *OTS. Time and money savings are a Good Thing

There's no guarantee any 3rd-party script will be safe. Pointing our users to such a script is very irresponsible if the user's computer gets infected with viruses or spyware through these scripts. -- Sameboat (talk) 14:31, 21 August 2011 (UTC)
Welcome to the wonderful world of open source. We could convince the author to release the code under an open source license (currently (s)he has released the source, but not provided a license afaict; it seems likely they wouldn't mind an F/L/OSS license). Worst case, we can then check the code, digitally sign it (to ensure none of your creepy crawlies get in), and release it ourselves.
Alternately, we can just ask the author to do the same. --Kim Bruning 15:14, 21 August 2011 (UTC)
Oh yeah, so we have a proposed one right here. -- Sameboat (talk) 04:24, 22 August 2011 (UTC)
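For readers who haven't seen one: a user script of this general shape is genuinely small, which supports Kim's "off the shelf" point. This is a sketch of the approach only, not the linked script's actual code; all names here are illustrative.

```javascript
// ==UserScript==
// @name        Hide images, click to show (sketch)
// @match       https://*.wikipedia.org/*
// @grant       none
// ==/UserScript==
// Replaces each image with a click-to-show placeholder button.

// Pure helper: label shown instead of the image.
function makePlaceholderText(alt) {
  return alt
    ? "[image hidden: " + alt + " - click to show]"
    : "[image hidden - click to show]";
}

// Browser-only wiring; skipped where no DOM is available.
if (typeof document !== "undefined") {
  for (const img of Array.from(document.images)) {
    const btn = document.createElement("button");
    btn.textContent = makePlaceholderText(img.alt);
    // Restore the original image on demand.
    btn.addEventListener("click", () => btn.replaceWith(img));
    img.replaceWith(btn);
  }
}
```

Auditing something this size for "creepy crawlies" before signing and rehosting it, as suggested below, is a realistic amount of work.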
So, you are forcing people to burden themselves when it could easily be implemented at mass scale? The mere presence of such a script undermines all of the "censorship" claims and not the other way around. The script's existence is proof that it is right for Wikipedia to implement a large-scale option so people can do it more simply. Ottava Rima (talk) 16:09, 21 August 2011 (UTC)
If the image filter is a "click here to see this image" on all images, I am not opposed to it. However, if it involves any categorization of images, it will be used for evil by third parties. The good thing about a "click here to see this image" filter is that it will be so annoying that nobody will want to use it. Kusma 16:21, 21 August 2011 (UTC)
Evil? We already categorize images. We have categories that blatantly label porn and violent images. Claiming that giving people the ability to filter out images so they can edit a page without worrying about them is somehow "evil" is really inappropriate. Ottava Rima (talk) 16:23, 21 August 2011 (UTC)
Right, we categorize all images in a manner intended to simply describe them. We don't single out "objectionable" ones (a non-neutral value judgement) and create special categories just for them. —David Levy 17:05, 21 August 2011 (UTC)
That is a rather silly statement. You claim that we categorize things simply, but then claim we shouldn't create "special categories". The first part would imply the second part was impossible. We already have categories for porn. The filter would block them for users wanting to block them. We aren't creating new categories. It would be impossible for anyone to not know this or realize this, so please stop trying to confuse people by putting up misleading and contradictory claims with a tone of "I'm right and you are wrong no matter what either one of us says". It is really inappropriate, especially when you lack a real argument. Ottava Rima (talk) 18:54, 21 August 2011 (UTC)
Some sweets, yesterday. Currently categorised as "Violent" but not as "Nude".
We may have categories for porn, but not on every one of the 777+ projects; moreover, these categories are not what is required. What was considered pornographic in some places in the seventeenth century might not get a second glance now. We would need a whole new ontology, [category:Nude images class 1] through to [category:Nude images class 5] for example, with some images getting multiple categories (or tags, or whatever mechanism is used).
And certainly some images, such as File:Woundedsoldiervietnamwar1968.jpg, are not categorised as either a "medical image" or a "violent image", whereas the image on the right is categorised as violent. Rich Farmbrough 19:22 21 August 2011 (GMT).
Once again, you've quoted someone out of context. I wrote that we don't create special categories for images deemed objectionable. Regardless of whether pre-existing categories could be used (and Rich has explained why that isn't entirely feasible), the plan requires whatever categories are used (whether old or new) to be formally designated "potentially objectionable" or similar.
If you're under the impression that the idea is simply to enable readers to filter any of our current image categories, you're mistaken. The proposed implementation absolutely involves the creation of new categories (whether brand new or categories of "potentially objectionable" categories). —David Levy 19:48, 21 August 2011 (UTC)
It would also affect how existing categories are used. If we are tagging an image that happens to feature, in the background, a woman in a bikini, a same-sex couple, or a person wearing a sacrilegious T-shirt, we wouldn't likely take that into account. But under this initiative, we need to make sure that any of those depictions are properly tagged so as to not offend those who wish to avoid them. I can picture the warnings on the upload screen: "Please make sure all pictures of women are properly categorized according to immodesty of dress."--Trystan 20:12, 21 August 2011 (UTC)
Your comment assumes that we don't already require images to be properly categorized or that it is somehow difficult to add images to a category based on another category they are in. I don't know how you came to either conclusion, especially when we have done such things quite easily in the past. Ottava Rima (talk) 20:20, 21 August 2011 (UTC)
I don't believe I make either assumption. We do properly categorize pictures, but this does not include categories like "Any depiction of homosexuality," "Female midriffs", or "Incidental blasphemy." Yet these are all things which people are likely to be offended by. Neutral description of an image's subject is very different from content warnings for filtering.--Trystan 20:30, 21 August 2011 (UTC)
You have no proof of that. And even if they -were- offended by such things, we do have related categories. Bare midriffs, as one example. Ottava Rima (talk) 20:41, 21 August 2011 (UTC)
This would be censored if you didn't like midriffs
We may have a category for everything (sorta kinda) but these are for pictures of bare midriffs. If we pollute them with pictures that merely contain bare midriffs (actually it's called [Category:Crop tops]; see the example to the right, with no bare midriff) we wreck the category system: [Category:Sky] would have to include any picture containing sky, and so on. Rich Farmbrough 22:42 21 August 2011 (GMT).
Exactly. I'm trying to get at the fundamental difference between a functional category, like Crop Tops, and a warning label, like Bare Midriffs. As for not having evidence of what people are offended by, Ottava, I couldn't agree more. We don't have any good evidence about what people are offended by, how many are offended, or how offended they are. So how could we possibly implement a system to identify it for them?--Trystan 22:49, 21 August 2011 (UTC)

┌─────────────────────────────────┘
Reusing the current category system wouldn't work. Just take a look at them. For example the category violence. At first it has many subcategories (with many subcategories of their own) which are only indirectly related to violence. That means that including all subcategories as filter criteria would give wrong results. If you take a look at the pictures then you will soon notice that many of them don't show actual violence. They are depicting monuments, protests against violence, portraits of people involved, and so on. In fact this means that we will need to create separate categories, look at every individual file and conclude/discuss which category it belongs in. This will be an enormous effort with tens of thousands of hours spent that could have been used to increase the quality and quantity of the current content. --Niabot 01:29, 22 August 2011 (UTC)

3rd parties can trivially expand the scope[edit]

The software side of this is fairly simple to implement, with a prototype working in a couple of days at worst. The issue is then "merely" (because it's a lot of work for the community) to assign categories for the software to work with. Done, perfect.

Here comes a massive application of en:WP:BEANS, or it would be WP:BEANS once our own innocent filter is completed:

Now, as it turns out, our database is downloadable, and our category system is readable by third parties. A third party can write software that either looks at our database or parses our pages, and can figure out what categories the images are in. They can then add those categories directly to an actual censoring proxy or filter. This will only take a couple of days too. *Really* only a couple of days, since our fine volunteers have already generated all the data this third party needs. No sweat.

You know, while we're at it, images are usually linked to from pages. It's fairly easy to figure out what pages use a certain image. (Commons actually lists those pages!) To be on the safe side, said third party can probably block all the Wikipedia pages that link to a Bad Image. In their mind, very little will be lost, and a lot of Bad Content can be blocked in that way.

Now, we can take that a step further. Since we now know Bad Pages, we can derive Bad Links, and Bad Topics.

Bad Links are links leaving from a Bad Page. A third party can probably figure to censor anything on the far end of a Bad Link, possibly out to a link-depth of about 3 or 4, without anything that they consider "of value" to be lost.

Bad Topics can be derived from page titles. If a Bad Topic shows up in the body-text of a search result, our hypothetical 3rd party filter-writer can probably "safely" block that too.

And hey presto, a perfectly censored internet. Due to the crowd-sourced nature of wikis, this method of censorship would be more reactive and more current than any other. I think the idea might be worth quite a bit of money. And the really great thing? The actual code for this can be written in a week or two, since all the actual hard work has already been done by wikimedia volunteers!

--Kim Bruning 13:57, 23 August 2011 (UTC)
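The derivation described above really is mechanical. A hypothetical third-party filter-writer's core logic might look like this; the data shapes (`usage`, `links`) are invented for illustration, but both kinds of data are publicly queryable:

```javascript
// Sketch: deriving Bad Pages from Bad Images, then expanding via Bad Links.

// Given a list of "bad" images and image→pages usage data,
// return the set of pages that use any of those images.
function derivedBlockedPages(badImages, usage) {
  // usage: { "File:X.jpg": ["Page A", "Page B"], ... }
  const blocked = new Set();
  for (const image of badImages) {
    for (const page of usage[image] || []) blocked.add(page);
  }
  return blocked;
}

// Breadth-first expansion of a page blocklist along outgoing links,
// out to the given link depth (the "3 or 4" mentioned above).
function expandByLinks(blockedPages, links, depth) {
  // links: { "Page": ["linked page", ...], ... }
  let frontier = new Set(blockedPages);
  const all = new Set(blockedPages);
  for (let d = 0; d < depth; d++) {
    const next = new Set();
    for (const page of frontier) {
      for (const target of links[page] || []) {
        if (!all.has(target)) {
          all.add(target);
          next.add(target);
        }
      }
    }
    frontier = next;
  }
  return all;
}
```

Nothing here requires cooperation from Wikimedia; it only requires the category and usage data the volunteers would have produced.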

That's about the size of it... If you don't have access to a free internet, you're not a free citizen anymore, you're something else. Wikimedia projects, all information systems, are _incredibly_ useful for big brother, or a network administrator who plays the role well. The only consolation is that current Wikimedia volunteers won't waste their time on filters-- the time will be spent by a very different group of people with a very different mindset than our current Wikimedia volunteers. And they'll all hate each other's guts, and they won't agree on anything. And THAT is educational. :) --AlecMeta 00:09, 24 August 2011 (UTC)
I think that this is why the ALA classifies such lists and categories as "Censorship tools" --80.101.191.11 19:24, 24 August 2011 (UTC)

A new filter function based on existing categories, but no new categories[edit]

Most of the objections are related to the definition of what is or isn't objectionable, and about the process of arriving at that definition in the case of individual images.

If the WM community spends discussion time and effort to classify images, then the result of their effort will be used by censors to prevent certain groups of users from accessing those images. That constitutes a worse deal, in terms of the principles underlying WM, than if the censors themselves build and establish a censorship database around and on top of WM without the involvement of most WM editors. We would be making their jobs easier than before.

However, if the category system is left as is, including the requirement that the definition of all categories needs to be neutral (implied: objective), then the would-be censors do not gain any new advantage over the current situation.

The authors of the proposal intend for the individual user to be able to filter their own experience. It would be acceptable for the WM projects to offer such a possibility, not based on 5-10 specially-designed categories, but based on any existing categories that the user selects. All we would be offering is an add-on to the user interface, not a change to our data structures.

Arachnophobics need not refer to a special category of objectionable images, they can simply exclude spiders. Prudes can exclude any number of existing categories depending on their personal sensitivities. I may have a personal objection against the existence of the category Pornography, because its definition involves second-guessing the intention of the author of the image, but my objection is not made any worse by the possibility of filtering away all the images in that category.--Lieven Smits 09:21, 25 August 2011 (UTC)
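The decision logic for the proposal above is genuinely tiny, since no new data is involved: hide an image exactly when one of its existing, neutrally-defined categories is on the reader's personal list. A sketch, with category names chosen only as examples:

```javascript
// Sketch of per-reader filtering over existing categories.
// No "objectionable" taxonomy exists; the blocklist is the reader's own.

function hiddenForReader(imageCategories, readerBlocklist) {
  const blocked = new Set(readerBlocklist);
  return imageCategories.some((c) => blocked.has(c));
}
```

An arachnophobic reader would put, say, "Category:Spiders" on their list; a reader with no list set sees everything, unchanged.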

You raise good points. This won't please everyone who wants a "Safe For Work" toggle, but it would add a simple, useful feature without any of the issues you mention. SJ talk | translate   02:12, 6 September 2011 (UTC)
I think that the use of the "category" language is misleading people. Nobody's talking about setting up a new category, like "Category:Images that are offensive because they show naked humans" and adding that new category to hundreds of thousands of images. They're talking about using the existing category tree, and compiling 5–10 lists of pre-existing, related categories for simple, user-friendly, optional image filtration.
A one-by-one, 100% customizable approach is definitely more flexible (and I can think of good reasons to move in that direction eventually), but there are three major problems with that approach:
  • It's far more complicated for the user. I believe that Commons alone has about half a million categories. I suspect that most users would want to filter less than one percent of those categories. Can you imagine reading half a million category names? Can you imagine individually ticking one to five thousand of them? Even a narrow area, such as the one the arachnophobic person in your example might want, is likely to require ticking about 100 categories.
  • It requires extensive knowledge of the material you want to avoid. For example, the people you label as "prudes" might well want to filter images from Commons:Category:Goatse, but they might not know what that even is. The person with a spider phobia might not realize that images of the en:Amblypygi could be triggers, and he is not likely to think that looking at all the pictures to find out if they trigger a panic attack is an appropriate method for figuring out what should be listed.
  • Now remember that Commons's cat tree is in English, and imagine trying to figure out which categories you are likely to want to exclude if you don't speak any English at all. An English-speaker could make a quick decision about Category:Photographs of sexual intercourse, but a non-English speaker might have no idea what that says. WhatamIdoing 18:33, 25 August 2011 (UTC)
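The scale problem described in the list above is essentially a graph-traversal problem: a usable one-click filter would have to expand a single chosen category into all of its descendants, so the reader never has to know that Amblypygi exist. A hypothetical sketch, assuming a parent-to-children mapping of the category tree:

```python
def descendants(root, children):
    """Return root plus every category reachable below it in a
    (hypothetical) parent -> subcategories mapping."""
    seen, stack = set(), [root]
    while stack:
        cat = stack.pop()
        if cat not in seen:      # guard against revisits
            seen.add(cat)
            stack.extend(children.get(cat, []))
    return seen

# Toy fragment of a tree: blocking "Arachnids" silently covers
# Amblypygi too, which a reader with a spider phobia may never
# have heard of.
tree = {"Arachnids": ["Spiders", "Amblypygi"], "Spiders": ["Tarantulas"]}
print(sorted(descendants("Arachnids", tree)))
```

Note that the real Commons category structure is a graph rather than a strict tree and can contain cycles, which is why the `seen` set is needed rather than naive recursion.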
I admit that I feared that new categorisation information was going to be added to individual images, whether in the form of new categories or in the form of a new entity type with an m:n relationship to individual images. After your post I re-read the proposal and I do not think that it is stated very clearly. The Harris report talks about the existing category system, which is not the same as sticking to existing categories; and in any case the referendum is not about the wording of the report.
If the proposal is interpreted in the way that you describe, then most of my worries are alleviated, because editors will no longer be asked to judge directly on the 'appropriateness' of individual images. The remaining concerns are (1) that the definition of the 5-10 groupings is unlikely to be guided by objective criteria, and (2) that the definition of the underlying, existing categories will be de facto modified as a consequence.
(1) Even the Harris report is ambiguous in the examples it gives of 'sexual' images: in one place it seems to relate to images of genitals and masturbation, in another place it suggests including bare breasts -- although the latter example could be meant to belong to a different grouping. Sexual arousal is both a personal and a cultural matter that is impossible to predict in a neutral (i.e., independent from personal and cultural preferences) way. Note that not all groupings need to suffer from this. It is objectively verifiable whether an image depicts the prophet Muhammad, or a spider, or a corpse.
(2) If a grouping is created without the necessary objective and neutral criteria for inclusion, then the mere existence of the grouping will influence the definition of the existing categories from which it is composed. Thus, for example, the current category Pornography may start including pictures that have the effect of arousing a number of users sexually, rather than its current definition which is limited to pictures that have the intention of sexual arousal. In the nightmare scenario of broad interpretation, this could even include the likes of the Ecstasy of Saint Teresa.
But both these concerns are far smaller than my original worry (not entirely gone) that we would be doing the tagging ourselves.--Lieven Smits 08:53, 26 August 2011 (UTC)
Directly tagging ten million images according to "offensiveness" is my idea of a nightmare, too, and that's why I'm glad that it's not planned.
I think one advantage of using the regular category system is that it's so much more detailed. Someone might put something in a "Porn" category, but—well, go look at what's actually in Commons:Category:Pornography. Would you put that in the list of things to suppress if someone decides to "Tick here to suppress sex and porn images"? I wouldn't. The categories that we'll actually want to put on that list are things like Commons:Category:Photographs of sexual intercourse, and I figure that pretty much any adult should be able to make a good guess about whether a given image actually fits in that category.
Borderline images are difficult to get "perfectly" categorized now, but there's surprisingly little edit warring over it, and people clean stuff up when they encounter problems. I don't think that this situation will change much. We may have a few more people appoint themselves guardians of certain categories, but that happens now (and is generally a good thing), and I expect it to work in both directions, with the 'guardians' both adding more images and removing inappropriate images. WhatamIdoing 17:08, 26 August 2011 (UTC)
Our community is surprisingly good at descriptive classification because they have a category system that serves no purpose other than descriptive classification. So they can tackle the inherently difficult task and just keep improving the system.
If our classification system also becomes a filtration system, I think it would be reasonable to expect that we would have at least as many dedicated and efficient filterers as we now have categorizers. And from an information science perspective, the processes are simply fundamentally different. A filterer doesn't have any concern as to what an image is about, while that concept is at the very heart of descriptive categorization. Once arbitrary standards have been set down, filtering is actually much easier, because it's simply a "does contain"/"does not contain" decision.--Trystan 18:44, 26 August 2011 (UTC)
Even worse, we don't have enough categorizers now. Rich Farmbrough 11:22 29 August 2011 (GMT).

Pre-project enabling?[edit]

How controversial would filtering be if it was initially adopted only on Commons, which doesn't have to be NPOV? That would let us work out the bugs and let the projects decide when and whether to enable the filter on their project.

Would this help relieve concerns? --AlecMeta 02:37, 26 August 2011 (UTC)

Probably not, at least not my main concern: that WM editors will be spending time to make the work of third-party censors easier. It's the tag data itself that worries me, not the functionality that interprets it (on WM or elsewhere). Certain a priori restrictions on the way in which filters are defined could do the trick and put me at ease. Such as: filters are composed of relatively broad pre-existing categories, not of individual images or specialised categories that pertain by definition to very similar images (e.g., categories dedicated to a single work of art). And filters cannot be created unless they have an easily verifiable, objective definition.--Lieven Smits 09:15, 26 August 2011 (UTC)

Design ideas and issues[edit]

Questions about the design, existing alternatives[edit]

Hello there, I will just add a few comments regarding the upcoming referendum. As to my background: I am a Computer Scientist by profession, so I can technically judge what is involved. As a quick outline:

  • A user (either anonymous or named) is able to exclude certain images from search results. In the case of anonymous users, the preferences are stored inside the user's session; closing and re-opening the browser will reset the settings. Users who log in to edit can store their settings in their profile; their settings will not be reset when they close/open their browser.
  • Architecture-wise: blacklists and whitelists can be used. To be determined: do these act on groups of images, or on single images? Is it possible to whitelist single images in a group that is blacklisted? To blacklist single images of a whitelisted group?
  • How does the software identify the images to "filter"? There are two options. At load time, the software analyses the image, and the result of this analysis is used for filtering; this incurs extra costs in computing time and memory, and there are different algorithms which yield different results. The other option is static tagging. This option has the drawback that some people need to decide which tags to use ("tag wars" have been cited above). Also, the behaviour needs to be specified if an image does not have any tags; the blacklist/whitelist approach can be used.
  • There are programs on the market that implement a client-side proxy, and that probably cover 80-85% of what this development will achieve. I currently see no benefit in implementing this solution on the server. The solution where the filtering is done dynamically (i.e. no static tags), and on a per-image basis would probably be superior to the client-side filtering. This however comes at the cost of additional cpu and memory usage, as well as false positives/false negatives.
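The blacklist/whitelist questions in the list above (can a single image be whitelisted inside a blacklisted group, and vice versa?) are really a question of precedence. One possible resolution, sketched here with hypothetical names, is to let per-image entries always override group entries, with unlisted images defaulting to visible:

```python
def visible(image, groups, blacklist, whitelist):
    """Most specific rule wins: per-image entries override group
    entries; images matching no rule at all default to visible."""
    if image in whitelist:
        return True
    if image in blacklist:
        return False
    if any(g in whitelist for g in groups):
        return True
    if any(g in blacklist for g in groups):
        return False
    return True  # untagged images shown: a policy choice, not a given

# One spider image whitelisted inside an otherwise blacklisted group:
print(visible("Spider_closeup.jpg", ["Spiders"], {"Spiders"}, set()))   # False
print(visible("Spider_diagram.svg", ["Spiders"], {"Spiders"},
              {"Spider_diagram.svg"}))                                  # True
```

The default-visible fallback in the last line is exactly the "behaviour if an image has no tags" question the summary below raises; a default-hidden policy would only require flipping that return value.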

To summarize:

  • If the solution of static tagging is chosen, we have the problem that images need to be tagged, and "agreement" over the tags to use needs to be reached in some way. Also, the behaviour in the case of an untagged image needs to be defined. We also need to define the granularity: is it possible to "whitelist" individual images of a group that is "blacklisted" (or to "blacklist" individual images of a whitelisted group)? Finally: how do we determine the "tags" (or groups of tags) to use?
  • If we tag dynamically, we incur extra costs in cpu and memory use of the system. We need to reach agreement over the algorithms to propose for identifying images; we need to implement those algorithms, which may be technically difficult; we may need to think about caching results of calculations, to reduce cpu load. Also note that the algorithms use stochastic information. There will be false positives, and false negatives.

Both approaches have their benefits and drawbacks. Neither is "quick to implement". So given that client proxies ("filters") out there probably cover 80-85% of the requirement the usual client needs ("don't show images of nude people of the opposite sex"), where is the use case that would justify 3-5 people working 3-6 months to get the extra 15-20%? --Eptalon 09:12, 4 July 2011 (UTC)

Please provide more data to explain what "80-85%" means to you here. (list some of the clients you have in mind, and the use cases you feel constitute the 100%). If there are client-side tools that are aware of Commons image categories [or can be customized to be] that would be a useful data point. (And pointing to an open-source client-side option for readers who want one, that is known to work smoothly with WM projects, would be in line with the goal here). Two use cases, for discussion purposes:
  1. You're browsing wikipedia, possibly on someone else's machine, and want to toggle off a class of images. [for instance: giving a WP demo at work, or in Saudi Arabia, &c.] It may not be possible to install your own client software, and you'd like to be able to set this in under a minute.
  2. You come across a specific image you don't want to see again (and checking, find it is part of a category of similar images), and want to hide it/them in the future.
SJ talk | translate   15:42, 4 July 2011 (UTC)
Client-side proxy filters are aimed at parents worried that their children might see the wrong type of image; AFAIK most of them work with whitelists/blacklists of sites; they do not do an on-access scan of the image. In addition, they might do "keyword scanning" in the text (to filter hate sites, and similar). The customisation of these products lies in being able to select "categories" of sites to block/allow, perhaps on a per-user basis. Our "static category blacklist/whitelist" approach would in essence do the same thing, except that to achieve it we need to do development work, and at best we match the functionality of a USD 50 product. In addition, load is placed on our servers to do the filtering work (plus possible problems with the categorisation). Using the dynamic approach will mean even more load on our servers, the possibility of "false positives"/"false negatives", the difficulties in finding training data (note: that data can not be used later on), etc. In short: a lot more (difficult) work. We may exceed the USD 50 product as to functionality, but we have 3-5 people developing for 4-6 months. I really don't know if I want to spend up to USD 250,000 (24 man-months) to "not see an image again" - it seems out of proportion.--Eptalon 19:29, 4 July 2011 (UTC)
Point of clarification... $250,000? NonvocalScream 19:36, 4 July 2011 (UTC)
Let's be very careful about throwing around dollar figures. That hasn't been scoped, so I think it's dangerous to introduce false numbers to the equation at this point. Philippe (WMF) 19:39, 4 July 2011 (UTC)
Duration of project: several people, 4-6 months (for the dynamic approach, not using static tags). --Eptalon 20:15, 4 July 2011 (UTC)
According to whom, by what metrics and judging by the speed of what resources? How about we let the folks who are designing the thing scope it out, once it's, you know, designed? Philippe (WMF) 23:59, 4 July 2011 (UTC)
Eptalon, you seem to assume that everybody's got a web proxy. I don't. If I really, really don't want to see the infamous picture of the guy jumping to his death from the Golden Gate bridge, my options at the moment are:
  1. Don't read Wikipedia (because any vandal could add it to any page at any time),
  2. Especially don't read pages where that image might logically be present (bummer if you need information about suicide), or
  3. Figure out how to manually block all versions of that image in every single account (five) and every single browser (two) on every single computer (four) I use—which will effectively keep that image off my computer screen, but not any others like it.
This proposal would let me control my computer by clicking a "don't really feel like seeing images of dead bodies today, thanks anyway" button. The images would appear "hidden", and I could override the setting any time I felt like it by simply clicking on the "This image hidden at your request because you said you didn't feel like seeing any images of dead bodies today" button. There is nothing here that would let some institution control my computer. WhatamIdoing 19:23, 5 July 2011 (UTC)
I don't have one (or use any filtering software); your proposal shifts the problem, though. You need to agree with other people about the categories. In the 16th century, a painter called Lucas Cranach the Elder painted a woman, before a tree, wearing a necklace (called 'Venus'). In the same century Michelangelo made his statue David. In the 19th century, Jules Joseph Lefebvre painted a woman with a mirror ('Truth'). To me, all these works are works of art, and as such, limiting their audience does not make sense. In the 1990s, a museum in London used Cranach's painting as an ad for an exhibition; they showed it on posters in the London Underground - and there was an outcry.--Eptalon 14:27, 6 July 2011 (UTC)
@Eptalon: I think there is a very real advantage to doing a system specific to Wikipedia that takes advantage of our category structure: filtering can be made more precise, which means not just missing fewer things that people want to block, but more importantly, avoiding blocking educational materials that young readers need access to in order to learn. This is our chance to give people a solution that isn't as conservative and overzealous as every other generic solution on the market. Dcoetzee 22:31, 16 July 2011 (UTC)
I didn't sign up to provide people with such a 'solution', Dcoetzee. But thanks for pointing out the extreme slipperiness of this slope.
Now, if someone wants to hide a certain picture for themselves by clicking on a little X or whatever, *fine*, I'm all for user control.
If you pop open a menu that also says "hide all other images in <one of the categories the image is a member of>" ... well, that's fairly sane, as long as no new categories are generated.
But let's not jump on the actual censorship bandwagon. Fair enough? --Kim Bruning 10:35, 21 August 2011 (UTC)
Incidentally, if we're adding menus to images anyway, could we also have other options that make things more explicit, like "view image info" and "view full size"; or editing assistance like "Alter size", "Alter location", etc... --Kim Bruning 10:45, 21 August 2011 (UTC)

The problem is not with clicking things. That's fine. People keep concentrating on the clicky bits while there's nothing wrong with clicky bits. ;-)

The problem is with the category scheme. When you categorize images as "problematic" or "potentially clickyable", you're in trouble, because that classification system *in itself* is defined as a censorship tool by the ALA.

So if you have a clicky tool that can click images on or off, but without using cats, you're fine. However, if you have cats, you're toast. --Kim Bruning 19:33, 24 August 2011 (UTC) 19:28, 24 August 2011 (UTC) meeouw!

Logging in to permanently save[edit]

Just to point out: the concept presented in File:CC-Proposal-Workflow-Anon-FromNav-Step3.png (logging in to permanently save options) wouldn't necessarily work in the classic scenario of parents applying filters to their children's computers, as:

  1. Their children would then edit using the accounts that have been logged in (pro: would work well in terms of accountability, con: what happens if the account is subsequently blocked for vandalism? Note that the account would most likely pass semi-protection due to the length of time since creation - although it might not pass in terms of number of edits, at least to start with.)
  2. Their children could simply log out, and/or log in with their own accounts, thereby bypassing the filter.

Mike Peel 21:48, 23 July 2011 (UTC)

I do not see this as something parents should use to prevent their child seeing images. They still have Google Images. If a parent does not watch what their child is doing on the internet, or does not obtain a filter program or service that is controllable only by them, then they have no one to blame but themselves if their child watches porn or the like. This is so people who don't want to see nudity can block it for themselves, not for their children. The only problem people might have is those with dissociative identity disorder, where one personality wants the nudity but another doesn't. Black.jeff 09:05, 20 August 2011 (UTC)

You're right: this is completely useless for parents (or schools, or employers, or whatever) censoring someone else's reading. This is only useful for you controlling what you personally and voluntarily choose to display on your computer screen at any given moment in time. It is not designed to permit you to control what anyone else sees. If parents want to control their children's access to Wikipedia, they will have to find another method for doing that.
The point of this is to let a person with PTSD (for example) voluntarily stop the display of mutilated bodies on his own computer screen. Currently, we are forcing people to see types of images that they do not want to see. There is no possibility in this system for you to stop someone else from seeing images that they want to see; even if you could "lock" the preference (which you can't), overriding the setting only requires that the reader click the hidden image. WhatamIdoing 19:29, 23 August 2011 (UTC)

Protection levels[edit]

Just like many security software applications have multiple security levels, I propose the same for this image filter. There should be a few levels per tag, 3 or 4, so decisions are easy. For example, 0 for no sexual content, 1 for light clothes like here, 2 for underwear and 3 for visible genitals. --NaBUru38 18:39, 15 August 2011 (UTC)
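The graded levels proposed above could be implemented as a per-tag threshold comparison: each image carries a level per tag, each user sets a maximum per tag, and the image is shown only if it stays at or below every limit. A hypothetical sketch (no such tagging scheme exists; names are invented for illustration):

```python
def passes(image_levels, user_limits):
    """Show an image only if, for every tag the user limits, the
    image's level (default 0 = innocuous) is within the limit."""
    return all(image_levels.get(tag, 0) <= limit
               for tag, limit in user_limits.items())

# User accepts up to level 2 ("underwear") on the sexual-content tag:
print(passes({"sexual": 1}, {"sexual": 2}))  # True  -> shown
print(passes({"sexual": 3}, {"sexual": 2}))  # False -> hidden
```

Keeping only 3-4 levels per tag, as suggested, limits the number of borderline judgments taggers would have to make, though it does not remove the underlying dispute over who assigns the levels.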

And of course, it's always going to be women who will be tagged with "light clothes" because we all know they should wear potato bags to please certain Middle Eastern imaginary beings. Andkon 19:44, 22 August 2011 (UTC)

Dont use a collapsed frame![edit]

I really think that the system should NOT use AJAX or collapsed frames, for several reasons.

The first, and most important, reason is that a filtered image should not be sent over the wire at all. It might get stuck in a company/school proxy server with logging turned on, and that can make bad things happen for the employee who thought the filter was a "safe" way of browsing Wikipedia (like being fired from work, having a computer account revoked/locked for a period of time, or being expelled from school).

With the filter ON, you should be safe in the knowledge that nothing you have elected to filter out is sent over the wire at all.

Another reason for not using AJAX or collapsible frames in the filter system is that older browsers might not support them, causing the images to be shown even with the filter turned on. You can never know what happens with older browsers, so it is safest to control image hiding on the server side.

Also, the filter settings dialog contains category words that might get stuck in a filter. So instead of showing the filter settings dialog with AJAX, show it in a simple target="_blank" popup. A _blank popup in a link will almost never be blocked by popup blockers (unless you have turned the security settings way too high). That will make the filter safe for everyone to use and rely on.

Also, provide an option to block ALL images, so that if you have a slow link you can opt not to view heavy images over a 200kbps GPRS connection. Sebastiannielsen 02:45, 21 August 2011 (UTC)
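The server-side behaviour argued for above, where a filtered image's bytes never cross the wire, would mean the server substitutes a plain link for the image tag before the page is sent. A minimal sketch with invented markup (this is not MediaWiki's actual renderer):

```python
def render_image(title, url, filtered):
    """Server-side rendering decision: emit either the real <img>
    tag or a text-only placeholder link, so a filtered image is
    never transmitted to the client at all."""
    if filtered:
        return ('<a href="/wiki/File:%s">'
                '[image hidden at your request - click to view]</a>' % title)
    return '<img src="%s" alt="%s">' % (url, title)

# The filtered variant contains no <img> tag, hence no image request:
print(render_image("Spider.jpg", "//upload.example.org/Spider.jpg", True))
```

Clicking the placeholder link is itself a logged request, of course, so this only keeps the image off the wire until the reader explicitly asks for it.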

Comment: I like the idea of a "No images" option, independent of anything else. In which case you certainly would not want to load the images behind a collapsed frame, you wouldn't want them to be present at all. SJ talk | translate   02:37, 6 September 2011 (UTC)
If a workplace is going to be that assholic, it's best that they do without Wikipedia. That will give a little edge for a better company to outcompete them. Wnt 17:10, 21 August 2011 (UTC)
To block all images follow the instructions at en:Wikipedia:Options to not see an image. Rich Farmbrough 18:55 21 August 2011 (GMT).
That does not block the images from being sent over the wire, which is the purpose of disabling all images if you disable them because you are sitting on 200kbps GPRS. About workplaces: the sysadmin can in many cases not see whether the client is showing a "forbidden" image or not, if they do not have a client surveillance solution. They can only see whether a client has downloaded an image or not; that's why the system should not send the images over the wire at all.
Remember that many pupils use Wikipedia as a resource for their school work, and they might be searching for something related to that and come to an article with forbidden images (according to the computer usage policy at that school). Sebastiannielsen 23:40, 21 August 2011 (UTC)
I tend to think of children and workers-with-harsh-workplace-rules as if they are subjects of a totalitarian society-- I don't exactly know whether we help or hurt by working to meet their needs. If we even consider letting sysadmins use a tool to determine who has the filter turned on, we legitimize their using it in that way. These populations are essentially "lost" to us, and it may be better to focus exclusively on the needs of people who have the right to view whatever they wish.
Then it's a much simpler problem. 1-click-to-view-a-shock-image is something lots of readers want, and it's really not about censorship as much as it is about turning the screen away from the room before you click to see what's underneath. That's a simple need, with a simple fix.
It's not really about politics or decency or anything as complex as that. It's just a momentary courtesy some readers would like. --AlecMeta 18:13, 22 August 2011 (UTC)
A simple thing to do is to force images that are filtered according to the user's filter settings to be linked. E.g., instead of the image, the user is shown a link that he can click to land on the image's wiki page. If we are going to use collapsed frames or AJAX, I really think that we should put a HUUUGE warning on the filter page that says: "WARNING: The filter only prevents you from inadvertently seeing images you don't want to see. The filtering is done on your local computer, and it does NOT prevent any of the filtered images from being sent over your network wire; if you use Wikipedia from a school or workplace, you can face consequences for images transmitted over the wire, according to your local IT usage policy, even if the images were never shown on your computer screen." Sebastiannielsen 06:38, 27 August 2011 (UTC)
It would be reasonable to include that information on whatever image-filter help pages we might write (assuming the filter is ever implemented). I think that making it "a HUUUGE warning" is probably unnecessary, though. WhatamIdoing 17:33, 27 August 2011 (UTC)
Collapse boxes also take time to render; I see the content of collapse boxes quite often. Not a good technical solution. Rich Farmbrough 22:38 30 August 2011 (GMT).

Eingangskontrolle's example[edit]

Kruzifix
Image hidden by Wikipedia opt-in personal image filter. This image is categorized as: religious POV (christian) ; health advice (dangerous for vampires) ; contravention to "Thou shalt not make thee any graven image" (Exodus 20:4) ; torture ; corpse ; barely clothed male person -- the categories you blocked are marked yellow -- click link to show image

The preceding unsigned comment was added by Eingangskontrolle (talk • contribs) 11:54, 30 August 2011.

This page isn't complete: Archives have more alternative ideas[edit]

I should note that I proposed an alternative ([4]) which was oublietted to the Archives before these subtopic pages were set up here. There are five pages of such archives. Should they be divided up and transcribed to these topic-specific talk pages? In any case bear in mind that not all the ideas are listed here. Wnt 18:36, 8 September 2011 (UTC)

Categories[edit]

5-10 categories? A few dozen?[edit]

Who decides what the 5-10 objectionable categories are?[edit]

From the penultimate question on the survey, it looks like 5-10 filtering categories are being considered. I was thinking that as a practical matter, it might be difficult for many thousands of editors across different cultures to decide what those categories were, so I assumed someone else might be deciding, i.e. the WMF. Phoebe's statement implies my assumption might be wrong. If 5-10 categories are to be used, who will (and how will they) decide what those categories are?--Wikimedes 07:04, 21 August 2011 (UTC)

Actually, the research shows that providing three broad groups would make nearly everyone around the world happy (except, naturally, the small but vocal "anti-censorship" people who think it the inalienable right of every vandal to display any image he wants on my computer screen):
  • Certain religious images (e.g., en:Mormon temple garments and paintings of Mohammed)
  • Sexual images (definitely including porn, but possibly going as far as general nudity)
  • Violent images (e.g., mutilated bodies and abused animals)
I don't know, but I'd guess that five to ten were proposed because it would be useful to introduce some granularity. For example, I could imagine that a person might want to avoid stills from pornographic movies, but wouldn't mind the naked woman in en:Pregnancy. (It may amuse you to know that the "anti-censorship" crowd has successfully prevented the inclusion of even one image of a fully dressed pregnant woman in that article: all images show light-skinned women either completely naked or in their underwear.) WhatamIdoing 20:09, 23 August 2011 (UTC)
I am arachnophobic, and I get panic attacks from looking at spider images. A friend of mine suffers from BPD and he gets anxiety attacks from looking at images of tall structures. Et cetera. "Argument": destroyed. Contrary to the "censorship" strawman people like you keep raising (you guys are almost like an anti-anti-censorship false flag operation), there are many different overwhelmingly powerful arguments against a project-wide filter category system, not a single one of which has got anything whatsoever to do with censorship. At all.
Also, you are destroying your own "argument" without even realizing it: If there are actual, encyclopedic reasons that speak against the set of images included e.g. in en:Pregnancy, then we should solve that problem at its root (ie. go to the article and improve it), not throw a completely superficial "solution" like an image filter at it. That said, an image filter has my tolerance, if not enthusiastic support -- but only if it is implemented as a general image filter, without any project-wide filter categories. --213.196.212.168 20:54, 23 August 2011 (UTC)

As I noted at some point on the mediawiki discussion page: I'm also in favor of any implementation being a general image filter. I haven't seen any other option proposed that sounded like it would work out socially or philosophically. SJ talk | translate   01:23, 6 September 2011 (UTC)

We need this but it should be made sure to be flexible.[edit]

I agree with one of the above users, but only partially, that this could become imposed censorship; if implemented correctly it won't be. In fact the point of user controls is to AVOID global censoring. Just as one must make sure a governing body doesn't impose censorship, it should also be made sure that they don't deny the individual the possibility of censoring the content on their own computer. One can always avoid certain pages even now, but if one wants to read a topic that may contain offensive images, they should not be forced to view those images if they don't want to, any more than they should be forced to not see them if they do want to see them.

For more flexibility the 2 proposed categories ("nudity/sex" and "violence") should become 3 categories ("nudity", "sex", and "violence"). Otherwise an image of a XXX sex act might be censored just as much or as little as an image of the nude painting Birth of Venus.

A user's personal control panel should contain not only an "on/off" switch for each category, but also a 0-10 slider under each category for showing or not showing an images of a certain "strength" of that category. It should also allow the image to be delayed showing or blocked entirely based on a check box (not forced to be delay method only as suggested in the official statement). Specific images that you believe should or should not be shown based on your image category sliders (but had been shown or not shown opposite what you wanted) should be allowed to be added to a user's blacklist or whitelist manually by the user on an image by image basis. Of course all the above settings should be reversible by the user so that no change to the settings made by a user is permanent for that user.
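The control panel described above - an on/off switch and a 0-10 slider per category, plus manual per-image blacklists and whitelists - combines into one small decision function. A sketch under those assumptions, collapsed to a single category for brevity (all names hypothetical):

```python
def show_image(image, strength, enabled, threshold, whitelist, blacklist):
    """Per-user decision: manual per-image overrides always win;
    otherwise, if the category's filter is enabled, compare the
    image's 0-10 'strength' rating to the user's slider setting."""
    if image in whitelist:
        return True
    if image in blacklist:
        return False
    if not enabled:
        return True
    return strength <= threshold

# Slider at 4: a strength-7 image is hidden unless whitelisted by hand.
print(show_image("Venus.jpg", 7, True, 4, set(), set()))          # False
print(show_image("Venus.jpg", 7, True, 4, {"Venus.jpg"}, set()))  # True
```

Because every input here is a user-side setting, all of it is trivially reversible by the user, which is the reversibility property the paragraph above insists on.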

This is very important to have, though, as NO web-client software package (e.g. Firefox add-ons) is capable of blocking some images but not others, or blocking images but not text. Instead, such software will just block WHOLE SITES that have "bad content" in general above a certain user-set level (no distinction of pictures versus text, or other such surgical precision); pages with such offending material will simply be completely blocked from being browsed to by the user. This browser-based "complete censorship" is a FAR STRONGER method than that being proposed on Wikimedia, and causes "collateral damage" by completely blocking lots of web pages even with only a small amount of offensive content. Until browser censors allow finer control of the censoring, Wikimedia is just doing what it has to. And keep in mind this is USER CONTROLLED! It is NOT "we wiki get to censor what you see cause we're evil big brother ha ha ha ha ha".

To the above user who is worried about institutional policy, my only reply is this: follow the rules of an institution when at that institution. If you want to browse freely with no censoring, do it on your own computer at home, or at an institution or other place with internet access whose rules don't restrict the content accessed on site. I'm guessing those institutions already have a policy in place preventing offending pages from even being accessed; at least with this in place they could block the images but not the whole page. There are MANY places with their own rules. If you go to a fast-food joint with a sign that says "no pets", are you going to bring your pet with you just because you don't like the rule? No, because then you won't get served food. You are correct that the US government is bound by the constitution. However, schools and other privately run companies and institutions have the right to impose their own restrictions and rules, and you will need to follow them if you don't want to get in trouble. This applies to a private school/college, restaurant, grocery store, etc.

Animedude5555 07:14, 20 August 2011 (UTC)

About the institutions: overall I agree that public computer labs have a right to control what is seen. But my point above still stands: this greatly hurts a massive demographic. Most Wikipedia editors have personal computers, are well-educated, etc., and so have this false view that everybody owns a computer and lives in some sort of metropolis with a thousand internet cafes. But we can't ignore the demographic with access to only one public computer, especially in conservative or religious areas where administrators are likely to enforce filters. Many of the pictures we're thinking of allowing to be filtered have crucial educational content. And I know so many people (some street kids, some homeless, some deadheads, some just minimum-wage families without a computer) who, if the local school or library enforced such a policy, would have no alternative Wikipedia access. I can only imagine how many people are in this situation in extremely poor, rural areas.
To protect this demographic at large, I don't think we should even consider giving institutions this tool in the first place. Rodgerrodger415 16:36, 20 August 2011 (UTC)
You do not seem to understand the proposal. This system lets any reader, with a single click, see any image the reader wants to see. This system will not work for schools, libraries, or parents. Think about it: the school says "Hide these images from the children". The student says "Ha ha, I click here to see the image anyway!". There's nothing the school's computer systems can do to prevent the individual user from overriding the settings in this system. If you really want to censor your users' images, you need an en:web proxy (or a similar paid system, like "CyberPatrol") to do this. This system simply will not work for that purpose. WhatamIdoing 20:21, 23 August 2011 (UTC)
1. As discussed above, "nudity," "sex" and "violence" are merely three of countless types of image content regarded as offensive by some individuals in some cultures.
2. Also discussed above is the inevitable difficulty of determining whether to include a given image in a particular filter category (and the likelihood of controversy, argumentation and edit wars). If I understand correctly, you propose that this be further complicated by requiring that each image's "strength" (whatever that means) in a given category be ranked on a numerical scale. How is that remotely feasible? Via what process do you envision this occurring? —David Levy 07:32, 20 August 2011 (UTC)
You missed the part where I said the individual could force an image blocked under a certain user's settings to be shown (via a user-account-specific or cookie-specific whitelist) if that user wanted a specific image shown that they believe should not have been blocked. Likewise, if I find an image that I believe should be blocked under my current settings but isn't, then I could "blacklist" it.
Keep in mind this policy gives the USER (not Wiki) control of the censoring. It is NOT a violation of free speech, and given certain images on certain wikis I may well choose to NOT BROWSE TO A CERTAIN WIKI ENTIRELY if this new feature is not put in place. By forcing me to NOT go to a certain wiki (based on my values) they would be in effect censoring THAT ENTIRE WIKI for me. Now THAT would be the unfair thing to do. What they are proposing gives the user MORE POWER, rather than taking away power as you claim. It does NOT infringe on their right to free speech in ANY WAY!
As for the implementation, it would be simple. It should use the world's best shape/image-recognition software implementation ever invented. Not sure what the name of the software is, but it would be expensive, VERY expensive. In the end, though, it would be worth it to allow for a very good implementation of my suggestion. It would (for nudity) auto-detect the presence of certain body parts, how much of the different parts is shown, the pose used (does it look like a "slutty" pose as in a Playboy magazine, or just a "plain nude" pose like nude art in an art museum), the overall realism of the image (from "flat" images like a cartoon, to realistic 3D computer-graphic renderings, to actual photos), and also in general what fraction of the skin is exposed (bikini coverage, versus fully clothed, etc.). All these factors would contribute to the final computer-determined "strength" rating in the nudity category.
Of course violent images and images with sexual content would also have similar things looked at, with ratings calculated automatically by computer-based shape/image-recognition technology.
And don't think my suggestion is science fiction. There already exists forensic software that police can use to detect child porn on a computer, which indeed relies on image/shape-recognition algorithms to determine the age of the person depicted in an image and the amount of nudity and/or sexuality present. This narrows down the images the officer is searching for, so he/she can then manually inspect them to make sure that no "false alarms" are marked as illegal images.

Animedude5555 07:57, 20 August 2011 (UTC)

You missed the part where I said the individual could force an image blocked under a certain user's settings to be shown (via a user-account-specific or cookie-specific whitelist) if that user wanted a specific image shown that they believe should not have been blocked. Likewise, if I find an image that I believe should be blocked under my current settings but isn't, then I could "blacklist" it.
I "missed" nothing. I didn't comment on that portion of your proposal because it's irrelevant to my main concerns. I don't know why you've reiterated it, as it doesn't address anything that I wrote.
Keep in mind this policy gives the USER (not Wiki) control of the censoring.
...within a predetermined set of filters.
It is NOT a violation of free speech, and given certain images on certain wikis I may well choose to NOT BROWSE TO A CERTAIN WIKI ENTIRELY if this new feature is not put in place.
That's your prerogative. Under the above proposal, persons objecting to image content other than "nudity," "sex" and "violence" face the same dilemma.
By forcing me to NOT go to a certain wiki (based on my values) they would be in effect censoring THAT ENTIRE WIKI for me. Now THAT would be the unfair thing to do.
Note the phrase "my values." Why do you believe that your values take precedence over those of other individuals/cultures?
What they are proposing gives the user MORE POWER, rather than taking away power as you claim. It does NOT infringe on their right to free speech in ANY WAY!
I don't recall claiming any such thing.
As for the implementation, it would be simple. It should use the world's best shape/image-recognition software implementation ever invented. Not sure what the name of the software is, but it would be expensive, VERY expensive. In the end, though, it would be worth it to allow for a very good implementation of my suggestion. It would (for nudity) auto-detect the presence of certain body parts, how much of the different parts is shown, the pose used (does it look like a "slutty" pose as in a Playboy magazine, or just a "plain nude" pose like nude art in an art museum), the overall realism of the image (from "flat" images like a cartoon, to realistic 3D computer-graphic renderings, to actual photos), and also in general what fraction of the skin is exposed (bikini coverage, versus fully clothed, etc.). All these factors would contribute to the final computer-determined "strength" rating in the nudity category.
Of course violent images and images with sexual content would also have similar things looked at, with ratings calculated automatically by computer-based shape/image-recognition technology.
Okay then. Thanks for sharing your thoughts on the matter. —David Levy 08:19, 20 August 2011 (UTC)

Objective/neutral filtering is easy: just pick a few dozen keywords

First off, I'm rather appalled at the "Wikipedia (and I) know(s) better than you what images you should see" attitude. Also, "the government will start censoring Wikipedia if we add filters" is an absurd slippery-slope argument.

More importantly, it's extremely easy to have an objective filtering system: you just need an objective keyword system. For example, you tag all images that have nipples in them. The reader can decide whether they will accept images with nipples in them. The presence of an exposed nipple is not subjective. No one has to determine how sexually explicit the image is on some sort of scale. There is a nipple in it, end of story.

This would easily cover the vast majority of content that could be considered offensive in any culture. Think of the keywords you can easily and objectively add to an image:

  • Nipples, genitals, penis, vagina, anus
  • Intercourse/penetration, sexual stimulation (or masturbation and variations thereof), oral sex/stimulation
  • Blood, exposed organs, disease, surgery
  • Suicide, deceased person (or corpse, cadaver)
  • Carnivorism, skinning
  • Religious imagery (or, Catholic imagery, Hebrew imagery, etc, when the particular religion is not questionable)
  • Muhammad, Jesus, God, etc
  • Weapon, firearm
  • Hijab

Keywords that could potentially be ambiguous could be defined precisely or not used. There could also potentially be combination filters; for example, do not display an image tagged "female" unless it is also tagged "hijab".

I think this is a reasonable approach to the idea. 68.126.60.76 13:35, 21 August 2011 (UTC)
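The keyword scheme proposed in the post above could be prototyped in a few lines. This is a hypothetical sketch: the tag names and the (hide, unless) rule format are assumptions for illustration, not an existing Wikimedia feature.

```python
# Hypothetical sketch of the objective-keyword scheme above; the tag
# names and rule format are assumptions, not an existing Wikimedia feature.

def is_hidden(image_tags, blocked_tags, combination_rules):
    """combination_rules: (hide_tag, unless_tag) pairs, e.g.
    ('female', 'hijab') hides images tagged 'female' unless they are
    also tagged 'hijab'."""
    tags = set(image_tags)
    if tags & set(blocked_tags):           # any plainly blocked tag present
        return True
    return any(hide in tags and unless not in tags
               for hide, unless in combination_rules)

blocked = {"nipples", "blood"}
rules = [("female", "hijab")]
print(is_hidden({"painting", "nipples"}, blocked, rules))  # True
print(is_hidden({"female", "hijab"}, blocked, rules))      # False
print(is_hidden({"female", "portrait"}, blocked, rules))   # True
```

Note that the mechanism itself is trivial; as the discussion below points out, the hard part is deciding which tags exist and who applies them.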

Why do we need an "objective" filtering system? People get to choose to use it or not. If they disagree with it, then they don't need to use it. Ottava Rima (talk) 13:39, 21 August 2011 (UTC)
To satisfy concerns that a filtering system will be non-neutral in any way, including cultural bias, and to avoid the need to come up with subjective image categories. 68.126.60.76 13:44, 21 August 2011 (UTC)
A filter isn't supposed to be neutral. It is stupid for a filter to be neutral. A filter is supposed to filter what the -user- wants, which makes it 100% subjective and non-neutral. Otherwise, it isn't a filter. Ottava Rima (talk) 16:11, 21 August 2011 (UTC)
No matter how many times people explain the distinction between "the reader deciding which filter(s) to use" and "the project deciding what filter categories are available and what images are placed in said categories," you continue to conflate the two. It's rather frustrating.
The concern isn't that a reader's personal beliefs are non-neutral. That's a given. The concern is that the filter options presented to the reader will be non-neutral (i.e. only some "potentially objectionable" images will be assigned categories, with subjective determinations of the files contained therein).
For example, if we have a filter category for sexual content, there will be significant disagreement regarding what images are "sexual" in nature. Meanwhile, if we don't have filter categories for unveiled women or homosexuals, this will constitute a formal declaration that persons/cultures deeming those subjects "objectionable" are wrong to do so. (And if we do have filter categories for those subjects, the opposite value judgement will be inferred.)
The only neutral approach, short of providing filter categories for literally everything (which obviously isn't feasible), is to provide filter categories for nothing. (This, of course, doesn't bar the introduction of a system providing the native capability to block all images and manually display them on a case-by-case basis.) —David Levy 17:05, 21 August 2011 (UTC)
After several pages of well-argued debate, where an overwhelming majority of veteran wikipedia editors vent their concerns about the matter, dismissing the argument as "absurd slippery-slope" seems rather asinine to me. As for your proposal, although slightly less dangerous, it shares with the original one a mix of good intentions, leniency towards moralism and practical dangers. To begin with, governments could filter based on legitimate tags for oppressive reasons: governments in North Africa, for example, could censor all non-muslim religious images just for the sake of bothering religious minorities, which they already systematically do. On the other hand, images having nothing to do with, say, genitals, could be tagged as such if they are found to be inconvenient. And yes, wikipedia is an encyclopedia, so it does know a lot of things better than you (as in the general you), probably including, but not limited to, which images you should see. Complainer
Actually, the majority of users are for the filter. The "majority" you are speaking of on this page (which is barely over 50%) is mostly IPs from users logging out and using multiple IPs to make it seem like far more people. If we implemented a bot that CU-checked everyone and blocked those who edit logged out, a lot of users would be blocked for their actions on this page alone. Ottava Rima (talk) 16:11, 21 August 2011 (UTC)
Actually, most users are opposed to this filter, especially outside the US. (If you don't need proof for your claims, neither do I. Helps the quality of the debate, doesn't it?) Kusma 16:23, 21 August 2011 (UTC)
Most users? No. Not even close. Most users in the world are actually against porn. Stop trying to project a tiny, vocal minority that lives in the Netherlands or Germany upon the rest of the world. China and India are 1/3rd of the world and their culture is very anti-pornography as a whole. As is the Muslim world, which is another 1 billion. That is half the world that is completely opposite of a tiny few, and we haven't even discussed the Americas in that number. Ottava Rima (talk) 16:30, 21 August 2011 (UTC)
How do you know that the majority of users is for the filter? Adornix 16:25, 21 August 2011 (UTC)
Because you can look at the talk page: pornography-related talk pages are always heavily libertine, even though the raw votes never suggest anything close to that. Then you can take out the duplicate IPs and the logged-out editing and realize that those in favor of this still represent a vast majority. With the bias acknowledged, it is probably about 80% for the filter. After all, this is about giving people choice, not tyranny of a few people in the Netherlands or Germany who wish to force their views on sexuality upon the rest of the world that wants moderation. Ottava Rima (talk) 16:30, 21 August 2011 (UTC)
Even if all IPs are the same person, your count seems completely off. Kusma 16:45, 21 August 2011 (UTC)
Maybe you need to do a recount. At least two people admitted to "accidentally" editing logged off. IPs have no way of really knowing about this except by having a user account. It is a majority for the proposal, and most polls that aren't driven by canvassing on talk pages have always shown a vast majority against the porn. Only about 2% of the world's population lives in such a libertine culture. Ottava Rima (talk) 16:59, 21 August 2011 (UTC)
The Wikimedia Foundation's fundamental principles cannot be overridden by a majority vote.
It's reasonable, of course, to opine that this idea complies with said principles. But if it doesn't, no amount of support can change that. —David Levy 17:05, 21 August 2011 (UTC)
The primary principle of Wikimedia is accessibility to an encyclopedia. Encyclopedias do not require images. Denying a large portion of the world access to an encyclopedia that those like -myself- wrote, because -you- demand that they have no ability to keep pornography or other problematic images from appearing for them, is one of the most egotistical things ever, and it has no place on this site. The WMF is acting completely based on what ethics and its principles demand. You are being uncivil by attacking it for doing so. Ottava Rima (talk) 18:58, 21 August 2011 (UTC)
I tell you for the third time, OR: STOP calling people names. Like it or not, this is NOT how you conduct a discussion on wikipedia. Anyhow, I think there is some confusion here. Nobody here is arguing that people should not have access to wikipedia. We are arguing that not looking at pictures they consider indecent is their own responsibility, not wikipedia's, their teachers', or that of whoever happens to be in control of their firewalls. You provide the perfect example of the concept, being obviously opposed to much of our image bank and still an active contributor. If it didn't bother you, why should it bother a billion Chinese? Complainer
The Wikimedia Foundation hosts several projects other than Wikipedia, so let's extend "an encyclopedia" to include all of them. You're correct that our written content can be useful (and preferable in some cases) without the display of images. (We haven't even touched on bandwidth issues or visual impairments.) That's why I support the introduction of an optional image filter, provided that it covers all images (and enables their manual display on a case-by-case basis). Problem solved. No more "pornography or other problematic images" appearing against the reader's wishes.
I merely object to the establishment of a system in which images are singled out and formally deemed "potentially objectionable," which cannot be accomplished in a neutral or non-contentious manner, let alone an efficient or reliable one.
You accuse me of incivility for expressing disagreement with the Board of Trustees (on a page set up to solicit feedback regarding the idea) while simultaneously describing my position as "egotistical" (one of many such insults that you've hurled). There's no need for that. —David Levy 19:29, 21 August 2011 (UTC)
"STOP calling people names" - I haven't called anyone one name. However, you are making up false statements about what others say, which is a major breach of civility. You really need to stop. You have already made it clear that you don't have respect for cultures different from you and that you want to ensure that hundreds of millions of people can never have access to Wikipedia because you don't want to give them a choice. It is highly incivil, selfish, and not appropriate. It is also racist because you are intolerant of other cultures that are not like you. You wish to deny them a simple choice because they are different. That is not what Wikipedia is for. Ottava Rima (talk) 20:11, 21 August 2011 (UTC)
1. You appear to have mistaken Complainer's message for part of mine.
2. As noted elsewhere on this page, you've somehow misinterpreted my messages to mean exactly the opposite of my actual sentiment. Among other reasons, I oppose the idea because I believe that it would discriminate against members of various cultures by affirming some beliefs regarding what's "potentially objectionable" and denying others. You needn't agree, but attributing my stance to intolerance and racism is uncalled-for. —David Levy 20:42, 21 August 2011 (UTC)
1. I didn't mistake it. I assumed your message had one more colon of indentation than it did, so it formatted as following yours. 2. And don't dissemble about your claims. It is impossible to argue that a few people will be discriminated against when doing nothing continues the discrimination against hundreds of millions of people. You wish to impose your fringe view on the world to deny people the choice not to look at porn while using the encyclopedia. In essence, you deny children, those at work, and the billions of people in places like the Middle East or China who are not legally allowed to look at porn access to the encyclopedia. That isn't appropriate no matter how much you claim to be protecting people. Ottava Rima (talk) 21:25, 21 August 2011 (UTC)
1. You keep referring to "Wikipedia" and "the encyclopedia." Again, this pertains to every Wikimedia Foundation project.
2. As I've noted repeatedly, I support the introduction of an optional image filter, provided that it covers all images (and enables their manual display on a case-by-case basis). Problem solved. No display of "porn" against readers' wishes.
I merely object to the establishment of a system in which cultural beliefs (whether yours, mine or someone else's) of what constitutes "objectionable images" are formally codified, thereby deeming them correct and everyone else's incorrect. In addition to opposing this on philosophical grounds, I believe that such a method would be highly contentious, inefficient, resource-draining and unreliable.
But again, I'm not asking you to agree with me. If you feel that I'm wrong, that's fine. But please drop the allegations of intolerance and racism. There's no need for that. —David Levy 22:32, 21 August 2011 (UTC)

┌───────────────────────────────────────┘
porn -- An encyclopedically valuable depiction of a nude body or bodypart is "porn" in your worldview, Ottava Rima? "Porn" as in "portrayal of explicit sexual subject matter for the purposes of sexual arousal and erotic satisfaction"? Seriously: Is e.g. commons:file:Human vulva with visible vaginal opening.jpg "porn" to you? This is important because if it is, then you should understand that the problem is all yours. --87.79.214.168 21:40, 21 August 2011 (UTC)

You completely missed the entire point of this section. It is unnecessary to define porn if you use appropriate objective keywords. If he doesn't want to see a vagina, he can block images tagged vagina, and nobody has to care what his definition of pornography is. 68.126.60.76 04:41, 22 August 2011 (UTC)
I was replying to Ottava Rima. --78.35.237.218 09:07, 22 August 2011 (UTC)



"so it does know a lot of things better than you (as in the general you), probaly including, but not limited to, which images you should see" - this is arrogant idiocy. Anyway, arguing about how many people are on each side is incredibly childish and pointless. The final vote will be the final vote. You shouldn't try to sway people's opinions by telling them they are in the minority. Each individual should vote for him or herself - to try to tell them otherwise is authoritarian, elitist behavior which is absolutely incompatible with the views you are supposedly attempting to uphold.
"Lots of people are concerned about a slippery slope" doesn't necessarily make it a less fallacious argument. People are herd animals and tend to panic in large numbers. Why would governments censor images based on keywords for images but not based on keywords in articles? It would be easy to block every page with the words "Jew", "Hebrew", etc in them for example. On the other hand, they might uncensor entire articles when an "offensive" image is removed. Why is this any less likely than "Africa will censor all non-Muslim images to religiously oppress the people"? Why wouldn't Africa block Wikipedia entirely? 68.126.60.76 04:39, 22 August 2011 (UTC)
I support 68.126.60.76's idea. The categories should have strict definitions that are easy to measure. So instead of a vague "partial nude", we could say "lower-body clothing (shorts, skirt, etc.) shorter than X cm/in below the crotch; cleavage lower than X cm/in from a certain point; visible belly". We could discuss how many cm, of course, but this way each picture would be easy to judge, and discussion would be restricted to the definitions themselves. --190.135.94.198 01:06, 24 August 2011 (UTC)
I assume this is sarcasm, but it's a long leap of logic to assume objective filtering means guessing measurements. 68.126.63.156 02:43, 30 August 2011 (UTC)

Working definition of "controversial" + categorization scheme

I've been looking more closely at the 2010 Wikimedia Study of Controversial Content. The Study notes, among other things, that Wikimedia projects already have working definitions for "controversial" text or topics. As an example, the Study cites the English Wikipedia's list of controversial issues. The Study suggests that a similar, though of course different, working definition of "controversial" could be achieved for images.

I have two questions:

1. Does this mean that the Foundation is planning to have one single catch-all category of "controversial images" which users could choose to filter out? I'm confused on this point. From what has been said on the main page for this referendum, I was under the impression that the plan was to categorize images as belonging to various categories (e.g. nudity, violence, drug use) and allow users to filter out specific categories. Is the plan to have both a single catch-all category of "controversial images" and various sub-categories (e.g. nudity, violence, drug use), with the option of choosing which sub-categories to filter out? Please clarify.

2. The English Wikipedia defines a controversial issue as "one where its related articles are constantly being re-edited in a circular manner, or is otherwise the focus of edit warring". The Study of Controversial Content cited this in a footnote as an example of an objective, verifiable criterion for controversial content. In that case, is the standard for controversial images envisioned as being based on the number of complaints or edit wars associated with an image? Or is there no definite idea yet as to how images will be categorized as "controversial"?

Note: I'm looking for some clarification here, and ideally some discussion on how best to establish a categorization system for images, should the image-filtering plan proceed. Please don't comment here if you're just going to argue for or against the filtering plan. --Phatius McBluff 16:24, 18 August 2011 (UTC)

I would be also interested to see this questions answered. But I have to add that I'm strongly against the introduction of such a categorization, regardless in which way it will be done. --Niabot 16:34, 18 August 2011 (UTC)
Hello Phatius McBluff, and thank you for reading through all of the information in such detail! It's good to know that people are actually reading the stuff. :-) The controversial content study is just the background for the referendum and part of the inspiration. Robert Harris was contracted by the Foundation to look into the issue of controversial content on Wikimedia projects, and prepared that report. After that, the Board discussed it and felt it agreed with some and disagreed with other points raised, etc. (Phoebe's comment above in #discussion could help more, since she's on the Board and I'm not.) Then the image filter referendum proposal came up, which is what we're discussing now.
The current proposal is just what you see on the Image filter referendum page and its subpages; the controversial content study is just background information. The current proposal does not use a single catch-all "controversial" category, but instead has various subcategories that people can choose from. (Who are we to say what is "controversial"? Things like "nudity", "violence", etc. are a lot more straight-forward and neutral. Granular settings are good. :-)) The idea is also that it would fit in with our existing category system on Commons and the various projects, rather than by creating an entire new set of categories. Cbrown1023 talk 16:39, 18 August 2011 (UTC)
The current proposal is just what you see on the Image filter referendum page and its subpages; the controversial content study is just background information. The current proposal does not use a single catch-all "controversial" category, but instead has various subcategories that people can choose from. (Who are we to say what is "controversial"? Things like "nudity", "violence", etc. are a lot more straightforward and neutral. Granular settings are good. :-)) The idea is also that it would fit in with our existing category system on Commons and the various projects, rather than creating an entirely new set of categories. Cbrown1023 talk 16:39, 18 August 2011 (UTC)
I'm glad to hear this, if only because it confirms that I was correct in my initial impression of what the plan was. However, just to clarify, by "our existing category system on Commons and the various projects", do you mean the system of (say) putting this image into Commons:Category:Shamanism in Russia? In other words, is the plan to allow users to block images from whatever image category they please? I'm puzzled on this point, because the main page for this referendum mentions a possibility of only "5–10 [filterable] categories". --Phatius McBluff 17:04, 18 August 2011 (UTC)
Phatius, it's not just you being confused; there's been a lot of discussion around this, and nothing is etched in concrete. I think in a perfect world being able to filter any image category you chose would be nice; but there are pretty serious usability and technical concerns around doing that. So the proposal is to filter 5-10 categories. -- phoebe | talk 18:16, 18 August 2011 (UTC)
Okay, I think I now understand where things currently stand. Cbrown and phoebe, thanks for your prompt responses! I do hope that we get to have more input from ordinary users on exactly how the category/tagging system will work. Most of the feasibility (and at least some of the philosophical) concerns surrounding this proposal will probably hinge on that. --Phatius McBluff 19:49, 18 August 2011 (UTC)
I too took a closer look at the report. One salient point was that the recommendations were addressed to the community, and explicitly not to the board. Rich Farmbrough 17:09 18 August 2011 (GMT).
A companion report was delivered to the Board; the body was unchanged but it included an introduction directed explicitly to the Board. SJ talk | translate   01:23, 6 September 2011 (UTC)
"Things like "nudity", "violence", etc." If the purposes of this project are to allow people to filter out what they do not want to see, that's a pretty big etc. How do we select the 5-10 top objectionable categories, as measured by audience aversion? I'm very leery about a filter that could potentially invite users to filter out:
  • Nudity
  • Sexuality
  • Violence
  • Gore & Dead Bodies
  • Blasphemous Images (Christianity)
  • Blasphemous Images (Islam)
  • Blasphemous Images (Hinduism)
  • Sexist images
  • Racist Images
  • Depictions of Homosexuality
--Trystan 19:35, 21 August 2011 (UTC)
I'm also very worried about whether we can objectively assign images to categories. I can think of one objective measure: "Has been objected to by at least one known human". That category, over time, would approach filtering all images, but it would be completely fair. Additionally, we can fairly use mathematical criteria like "frequency of objection" to classify images.
5-10 categories would be very hard to sustain. People who object to classes of content will legitimately want equal footing for their personal objections, and we'll be hard pressed to find any fair way to assign categories or justify any particular assignment. Most of all, when someone is offended by an image not already categorized, what can we tell them? "Your offense isn't as valuable as other types of offense"? 5-10 categories will tend to slide towards unlimited categories, and I think that will work. --AlecMeta 00:20, 23 August 2011 (UTC)
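The "frequency of objection" criterion mentioned above could look something like the following. This is a hedged sketch only: the rate threshold, the minimum-count safeguard, and the sample counts are invented for illustration, and (as the reply below notes) raw complaint counts are gameable by sockpuppet accounts.

```python
# Sketch of the "frequency of objection" criterion suggested above;
# thresholds and counts are invented for illustration.

def objection_rate(objections, views):
    return objections / views if views else 0.0

def auto_filtered(images, rate_threshold=0.001, min_objections=5):
    """Flag images whose objection rate crosses a threshold, requiring a
    minimum absolute count so a single complaint cannot flag an image."""
    flagged = []
    for name, (objections, views) in images.items():
        if (objections >= min_objections
                and objection_rate(objections, views) >= rate_threshold):
            flagged.append(name)
    return flagged

images = {
    "a.jpg": (12, 4000),    # rate 0.003  -> flagged
    "b.jpg": (2, 100),      # below minimum objection count
    "c.jpg": (8, 100000),   # rate 0.00008 -> too low
}
print(auto_filtered(images))  # ['a.jpg']
```

The mechanism is simple to compute; the open problem is that both inputs (who objects, and how often) are subjective and manipulable.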
It will not necessarily extend very much, as there are usability limits. But we will honour only some of the requests. We cannot rely on the number of complaints, as anybody can create enough accounts to object to a single image many more times than the realistic number of real complaints. That means it is up to subjective decisions by editors who have an interest in editing the filters or related categories (depending on how it is set up).
For the encyclopaedic articles we can rely on editors wanting to improve the quality of articles on noteworthy subjects (and POV editors being in minority). Editors will get their reward by seeing those quality articles (and the quality Wikipedia). I am not convinced the same will work with the filter categories. The work may indeed be dominated by POV pushers - or take energy from the poor souls already fighting with backlogs and uncategorised media on Commons. I have seen no suggestion on how this will be solved or any reasoning about how this will not be so.
--LPfi 14:02, 23 August 2011 (UTC)

What will happen? All users who are offended by the filter will switch it off. And over time the censors will put thousands of images into one or the other offending category. They will visit controversial articles in search of images to ban. And finally they will put classical works of art into such categories. And then the editors of valuable content are gone. Hopefully somebody will have made a fork under a liberal jurisdiction by then. --Bahnmoeller 18:56, 23 August 2011 (UTC)

What categories are admissible?[edit]

The elephant in the room here is the incredible amount of handwaving that goes on when it comes to discussion of the categories that will be available to filter. We're to understand that they will include graphic sexual images, possibly nauseating medical images, ETC. Well the ETC is a big deal. If John Q. Antisemite and friends want the facility to hide images of Jews on the grounds that Jews have demon eyes that corrupt and defile all who meet their gaze, are we going to tell them that's unacceptable and we won't make such a category available? If so, what are the criteria for determining the admissibility of a filter category? This is crucial information that needs to be spelled out in advance rather than left until after this is a fait accompli. It's no good saying we will leave it to readers to determine which categories they want. I strongly suspect some will be more welcome than others, and this will have a lot to do with the particular cultural biases of the people overseeing this new tool. It will not be long at all before someone proposes a category that is judged unacceptable. What do we do then? How do we decide whose aversions to cater to? Beorhtwulf 15:53, 23 August 2011 (UTC)

You're making an excellent point! Huge and inevitable drawbacks like that are the reason why I believe only a general image filter (possibly with a per-user blacklist/whitelist system) could ever work. Other examples: A friend of mine has BPD and cannot look at images of tall structures without getting an anxiety attack. Do we create a filter category for people like him? A female friend of mine was raped by a black guy and she suffers from PTSD and cannot look at images of black guys without panicking. Do we create a filter category for people like her? Myself, I'm arachnophobic, but I have no problem with images of insects and do not want them thrown into the same category with spider images. Et cetera, et cetera, et cetera. --213.196.212.168 20:02, 23 August 2011 (UTC)
The reason that "etc" gets thrown about is that the system is not fully designed. The WMF has a reasonable point in making sure that this is generally supported before they invest more staff time in this.
I'm reminded of an old Doonesbury comic strip. A political candidate proposes an unpopular (but possibly necessary) economic policy and says that it is likely to result in some people losing their jobs. The media say, "But can't you be more specific?" He responds, "Yes, in this town, this particular factory will fire exactly this many workers". And the media say, "But can't you be more specific?" He responds, "Yes, the following people will lose their jobs: John Smith of 123 Maple Street, Alice Jones of..."
Back here in the real world, we can't foresee every single consequence. We can't predict exactly which categories people will choose to include. We can't even predict exactly which categories will exist: they change all the time. In just the last hour, about 75 new categories were created on Commons and about 20 were deleted.
So you're basically saying "Please show me the fixed, never-changing, guaranteed, final version of the product, so I can see exactly which images in which articles are going to be affected forever", and the fact is that what you want not only does not exist now, but it will never exist. This is a wiki: it will always change. We can give you general descriptions—there's pretty strong support for giving readers the option of not seeing certain general types of images—but the exact specifics, and how any given image fits into that, will definitely change over time. WhatamIdoing 21:43, 23 August 2011 (UTC)
the fact is that what you want not only does not exist now, but it will never exist -- Exactly. Which is why we are opposed to any project-wide filter category system. Beorhtwulf's argument illustrates that not only is the WMF proposal not perfect as can be expected from a work in progress, it is flawed beyond potential practicability as long as it includes special filter categories. There are most definitely far, far more than "5-10" types of images people will want to have filtered. On top of that, as Beorhtwulf correctly argues, there are types of images for which a category will be rejected out of reasons of political correctness (should e.g. my friend get the filter category for images of black men? are you saying we should have such a category? or are you saying that my friend's very real PTSD-based problem is not grave or widespread enough to warrant an image filter? how would you or anyone go about defining objection to a certain type of image as sufficiently grave and widespread for a filter category? does that mean we only pay respect to widespread and politically correct objections to images?). All of that is conveniently hidden under the ETC rug. --213.196.212.168 22:06, 23 August 2011 (UTC)
Even if the system were fully implemented today, it would still be possible to change it in the future. We should not pretend that it won't. It would be silly to make promises today about exactly which types of images might or might not be included in the future.
In general, though, by limiting it to 5 to 10 groups of images, the WMF is signalling that the intention is not to provide thousands of options to cover every possible individual's phobia. If only 5 to 10 groups of images are available, then only the 5 to 10 groups deemed to be the highest priority will be implemented. If "images of men who look like the one who raped me" is in the top 5 or 10, then it might well be implemented; if it's not, it won't. People who need something else will have to do what they are doing now, which is either avoiding WMF projects altogether, or buying a third-party filtering system. WhatamIdoing 23:44, 23 August 2011 (UTC)
If only 5 to 10 groups of images are available, then only the 5 to 10 groups deemed to be the highest priority will be implemented. -- Who defines "highest priority"? According to what parameters?
People who need something else will have to do what they are doing now, which is either avoiding WMF projects altogether, or buying a third-party filtering system. -- Ah, so you're saying that the needs of the few should be ignored but the needs of the many shouldn't. That probably makes sense in your own worldview, but not in mine and hopefully not in the Wikimedia Foundation's. Also, there are alternatives (especially the adjustable general image filter) that take care of everyone, not just the many.
Why are you so hellbent on the image filter being based on categories, especially when there are alternatives and when people are pointing out numerous fatal flaws with any project-wide filter category system?
Two blunt but honest questions at this point: (1) Are you being intellectually dishonest here (i.e. secretly recognizing mine and other people's valid points)? (2) Do you personally have a problem with any particular type of images that is likely to be deemed "highest priority" (e.g. nudity)? --213.196.212.168 00:10, 24 August 2011 (UTC)
"Highest priority" is determined by the community, which, in the end, is likely to mean what the most people (around the world, not just the young, single, childless, white males who dominate this page) demand.
The point behind using categories is that they already exist, and therefore impose far less burden than alternatives, like tagging specific images. If (to use your example of nudity) we wanted a category for nudity, it is the work of mere seconds to say, "Put Commons:Category:Human penis (or selected subcategories) on the list". Doing the same thing with a separate tagging system would require editing about a thousand image description pages. Doing the same thing with special, new categories would similarly require adding the new category to every one of those pages. Furthermore, categories are a well-understood technology, and we have a long history that helps us figure out whether any given file should be placed in any given category. Whether File:____ belongs in Cat:Human penis is an easy question; whether File:____ should be tagged as "Images that are offensive on sexual grounds" is not.
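To make the mechanism described above concrete, here is a minimal sketch in Python of what category-based filtering amounts to: an image is hidden when any of its existing Commons categories appears on one of a handful of curated filter lists. All names are invented for illustration; only "Category:Human penis" is taken from the discussion, and the placeholder entry is hypothetical.

```python
# Hypothetical sketch of category-based filtering: an image is hidden
# when any of its existing Commons categories appears on one of a small
# number of curated filter lists. Names are invented for illustration.

FILTER_LISTS = {
    # Only "Category:Human penis" is named in the discussion above;
    # the second entry is an invented placeholder.
    "nudity": {"Category:Human penis", "Category:Example placeholder"},
}

def is_filtered(image_categories, enabled_filters):
    """True if the image falls under any filter the reader has enabled."""
    for name in enabled_filters:
        # Set intersection: does the image share a category with the list?
        if FILTER_LISTS.get(name, set()) & set(image_categories):
            return True
    return False

print(is_filtered({"Category:Human penis"}, {"nudity"}))  # True
print(is_filtered({"Category:Bikinis"}, {"nudity"}))      # False
```

The appeal claimed for this approach is visible in the sketch: the curated lists are tiny (a few category names per tickbox), while the per-image categorization work is the routine maintenance Commons already does.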
I have no expectation of enabling any of the filters (unless, I suppose, it becomes so widely used that it is useful for checking what other readers will see). I do not think that I need to protect myself from any images. WhatamIdoing 17:17, 24 August 2011 (UTC)
"Highest priority" is determined by the community, which, in the end, is likely to mean what the most people (around the world, not just the young, single, childless, white males who dominate this page) demand.
In other words, we'll vote on what is and isn't reasonably "objectionable," with the majority beliefs formally deemed valid and the minority beliefs dismissed.
If (to use your example of nudity) we wanted a category for nudity, it is the work of mere seconds to say, "Put Commons:Category:Human penis (or selected subcategories) on the list".
Does this count as "nudity"? Who decides? —David Levy 21:05, 24 August 2011 (UTC)
Who will decide how to categorize that image? The same people that decide how to categorize it now. There is no proposal to change the method by which images are categorized on Commons.
Who will decide whether the subcats under Commons:Category:Bikinis will be included under a "sex and porn" tickbox? The same community that does not connect those categories to any of the sex, porn, or nudity categories now. WhatamIdoing 17:58, 25 August 2011 (UTC)
1. The image filter system won't be limited to Commons; many Wikimedia Foundation wikis host images separately.
2. You're mistaken in your belief (also stated elsewhere) that no new categorization would be necessary. As has been explained in great detail, our current category system isn't remotely suited to the task planned. There's a world of difference between subject-based image organization and sorting intended to shield readers from "objectionable" material. —David Levy 19:02, 25 August 2011 (UTC)

┌─────────────────────────────────┘
You believe our current cat tree won't work. What does that mean? Most probably, you mean that you looked at a category like Commons:Category:Violence and decided that in your personal opinion, many of the images were not objectionable (however you are defining that), like a scan of a 17th century publication and a logo for television ratings. Then you assumed (wrongly, I believe) that this category would definitely be included in any list of "Tick here to suppress violent images".

The cat includes a few images that people who didn't want to see violent images probably wouldn't want to see. The first line, for example, shows a very bloody wrestling match. But that image would be suppressed by including either Commons:Category:Hardcore wrestling or Commons:Category:Professional wrestling attacks, basically all of which are images of violence or its aftermath. It is not necessary to include Cat:Violence (with its wide variety of image types) to filter out this image.

In other cases, the images are currently not categorized correctly. The images of the victims of violence belong in "Victims of violence", not in "Violence". The old paintings belong in "Violence in art". The presence of so many images from movies suggests that Commons needs a "Category:Movie violence"—regardless of what happens with any filter. This is routine maintenance; people do this kind of work at Commons all day long.

There will be false positives on occasion. This is not a bad thing. It should be possible for people to identify most false positives by reading the image captions, and they will click through in those instances. If the contents of a category change so that there are a lot of false positives, we can—and will—remove those categories, or substitute specific subcats. If the filter is restricting far more than people want restricted, they will turn off the filter.

I'm just not seeing the problem. We don't need to categorize specific images according to our subjective view of whether the image is "objectionable". We're not promising perfection; perfection is not feasible. A reasonable list taken from the existing cat tree—based less on what should be considered "objectionable" and more on what actually produces objections—should be sufficient. WhatamIdoing 22:04, 25 August 2011 (UTC)

Most probably, you mean that you looked at a category like Commons:Category:Violence and decided that in your personal opinion, many of the images were not objectionable (however you are defining that)
I've been exceedingly clear in conveying that I strongly oppose reliance on my (or anyone else's) personal opinion of what's "objectionable."
Then you assumed (wrongly, I believe) that this category would definitely be included in any list of "Tick here to suppress violent images".
No, I've made no such assumption.
There will be false positives on occasion.
The greater problem, in my view, would be false negatives. As others have pointed out, our current categories are based upon what the images are about, not what they contain. For example, a photograph with a bikini-clad woman in the background (who isn't the main subject) currently is not categorized in any way enabling persons who object to the sight of her to filter it. Using the current categories for this purpose obviously would dilute them beyond recognition, so for the system to work, we would need to create a separate set of categories. (Otherwise, there would be never-ending battles between those seeking to maintain the categories' integrity and those attempting to include off-topic images on the basis that incidental elements require filtering.)
In some cases, even an image's main subject isn't categorized in a manner indicating the "potentially objectionable" context. For example, many people strongly object to miscegenation. These individuals would have no means of filtering images such as this one.
And what about readers whose religious beliefs dictate that photographs of unveiled women (or any women) are "objectionable"? What current categories can be used to filter such images with any degree of reliability? —David Levy 23:18, 25 August 2011 (UTC)
I see no significant harm in false negatives. Sure: someone's going to upload another goatse image, and it's not going to get categorized instantly, and some innocent reader will want to bleach his brain. But—so what? How is that different from what happens now, except that we might reduce the odds of it happening by an order of magnitude? We're not promising 100% success.
We're actually promising that every single preference by tiny minorities won't be accommodated. Limiting the tickboxes to 5–10 options means that we won't be able to provide a filter that accommodates every single request. We'll have to focus on the 5–10 filters that are wanted by the largest numbers of users. This, by the way, means making those filters both as broad as our readers want, and as narrow as they want. For example, a filter that hides everything even remotely connected to nudity, from images widely considered benign (e.g., anatomical line drawings and marble statues) to images normally considered non-benign (e.g., photographs of sexual intercourse) is not likely to be as popular as a more narrowly tailored list. We might offer options that indicate degrees, like "serious sex and porn" and "most photographs showing genitals", but we are unlikely to waste one of our limited slots with "nudes in art" or "images of women" or "images showing people of more than one race". WhatamIdoing 17:28, 26 August 2011 (UTC)
"...images widely considered benign... to images normally considered non-benign" Perhaps my standards are not "normal", but if I was using a filter (for example, to make WP 'safe' for a child), I would want any and all nudity filtered, without pausing to consider how benign it is.--Trystan 18:30, 26 August 2011 (UTC)
Sure: someone's going to upload another goatse image, and it's not going to get categorized instantly, and some innocent reader will want to bleach his brain. But—so what? How is that different from what happens now, except that we might reduce the odds of it happening by an order of magnitude?
Under the current setup, readers familiar with our content have no expectation of protection from images to which they object. Those for whom this is a concern might disable the display of images via their browsers or add-on scripts.
As soon as a native filter system is implemented, readers will place their trust in it (and regard false negatives as unacceptable).
We're not promising 100% success.
That won't prevent readers from expecting it.
We're actually promising that every single preference by tiny minorities won't be accommodated. Limiting the tickboxes to 5–10 options means that we won't be able to provide a filter that accommodates every single request.
Indeed. That's a major problem, and one that won't be limited to "tiny minorities."
The belief that a photograph of an unveiled woman is objectionable is not rare. Will this be one of the five to ten filter options? Probably not. Why? Because we're allowing cultural bias to shape determinations of which objections are and aren't reasonable.
We'll have to focus on the 5–10 filters that are wanted by the largest numbers of users.
In other words, "if you aren't part of the majority, we don't care what you think."
We might offer options that indicate degrees, like "serious sex and porn" and "most photographs showing genitals", but we are unlikely to waste one of our limited slots with "nudes in art" or "images of women" or "images showing people of more than one race".
Right, we mustn't "waste" resources on those silly, non-majority-approved cultural beliefs.
To be clear, I agree that it would be impractical to dedicate slots to such subjects. We could have 100 slots and come nowhere near covering every "objection." That's one of the reasons why I oppose this implementation (and support the one discussed here). —David Levy 05:38, 27 August 2011 (UTC)
The literature on the history of warning labels in libraries does not support the suggestion that the community will happily settle on a few objectively identifiable, broadly agreeable categories. If we say that people have the right to filter images of nudity, and use Commons:Category:Human penis as one indicator of nudity, I think you will find that a very determined group of editors will be using that category to tag any image containing any depiction of a human penis, from Michelangelo's David on down. The WMF-endorsed user right of filtration will override good classification principles; it's not a very good reply to "I have the WMF-endorsed right to filter human penises," to say, "Well, yes, but this image isn't really about human penises, it just shows one." So any categories used as part of the system would cease to be organizational and descriptive categories, and become instead broad warning labels. You could certainly automatically populate the new warning label using existing categories, but they serve very different purposes and it would be vital to implement them separately.--Trystan 00:02, 25 August 2011 (UTC)
Trystan, I'm not sure I follow your reasoning. If you believe in the ALA's view of the world, there is no way to implement separate labels that would be appropriate and non-prejudicial. Allowing people to choose existing descriptive categories and say "I'd rather not see these images" is the only variant that might fit their criteria - with a style guideline that categories should be [remain] purely organizational and descriptive. That might not perfectly match the expectations of some readers, but then no automated system would. [By the way, if you think that there aren't already edit wars about whether or not to include certain images in controversial categories, you should spend more time on Commons :)] SJ talk | translate   01:23, 6 September 2011 (UTC)

"5-10" had better include the seven categories shown for this to be seen as legitimate[edit]

I would suggest that the "5-10" categories from question five should at least include the seven categories shown in this image which was linked from the voting instructions at http://meta.wikimedia.org/wiki/Image_filter_referendum or this vote will probably not be seen as legitimate. Those categories are: children's games; gladiatorial combat; graphic violence; medical; monsters and horror; other controversial content, and sexually explicit. 76.254.20.205 16:31, 24 August 2011 (UTC)

I doubt that anyone will mind if "children's games" disappears entirely from the list, or if "gladiatorial combat" and "graphic violence" are merged. WhatamIdoing 17:19, 24 August 2011 (UTC)
I think there's a good reason to exclude games: they can distract students from assigned work. I'd also like to see gladiatorial combat separate from graphic violence because a lot of very aggressive forms of combat don't result in particularly violent images. If I had my way I would add religion and fiction to the categories too. 76.254.20.205 20:35, 25 August 2011 (UTC)
The games themselves might distract students from school work, but the pictures of the games will not do that any more than the text of the article will. This is only about hiding (temporarily) the images. It will not keep the students from reading the articles. WhatamIdoing 22:06, 25 August 2011 (UTC)



general image filter vs. category system[edit]

the only way forward is with a general image filter[edit]

I've noted this at other places, but would like to open a discussion specifically about this issue. Some proponents of the image filter appear to be hellbent on an implementation that relies on categories.

Many others and I argue that any possible category system would bring a lot of problems and is ultimately not a workable approach at all. The many varied problems, including neutrality, and catching all possibly objectionable images, have been discussed at length elsewhere on this talk page.

Personally, I believe that if we go forward with the image filter, it can only be as a general image filter:

  1. It could be instantly activated and deactivated by the user (or temporarily disabled on a per-image, per-page, or per-session basis).
  2. It is the only way to ensure that nobody will accidentally see images to which they object. At the same time, it is also the only filter variant that is actually "tailored" to each and every single last individual user.
  3. It would avoid the whole (imho completely inevitable) infighting and disagreement and drama over which images to include into which categories and so on. It would also eliminate the risk of images being missed through simple oversight.
  4. The general image filter could of course be made customizable by the user, who would sort individual images, articles, or entire categories into a user-specific blacklist and whitelist. Blacklisted images (or images included in blacklisted articles/categories) wouldn't show up for the user even when they disable the filter, while whitelisted images (or images included in whitelisted articles/categories) would be displayed even when the user browses with enabled filter.

In all, I can think of no argument that speaks for a category system rather than the general image filter. Imo, this is the single most important referendum-within-the-referendum. --78.35.232.131 01:07, 23 August 2011 (UTC)
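The four points above can be sketched in a few lines of Python. This is a hypothetical illustration only (all names such as `UserPrefs` and `should_display` are invented, not part of any MediaWiki implementation), and for brevity it handles individual images rather than the articles and categories the proposal also mentions.

```python
# Hypothetical sketch of the proposed general image filter logic:
# a per-user blacklist always hides, a per-user whitelist always shows,
# and otherwise the user's global on/off switch decides (points 1-4).

class UserPrefs:
    def __init__(self, filter_enabled=False, blacklist=None, whitelist=None):
        self.filter_enabled = filter_enabled   # hide all images by default?
        self.blacklist = set(blacklist or [])  # hidden even with filter off
        self.whitelist = set(whitelist or [])  # shown even with filter on

def should_display(image, prefs):
    """Return True if this image should be rendered for this user."""
    if image in prefs.blacklist:
        return False
    if image in prefs.whitelist:
        return True
    return not prefs.filter_enabled

# Example: filter switched on, with one image whitelisted.
prefs = UserPrefs(filter_enabled=True, whitelist={"File:Diagram.svg"})
print(should_display("File:Diagram.svg", prefs))  # True
print(should_display("File:Other.jpg", prefs))    # False
```

Note that nothing in this logic requires any shared, project-wide labeling: every decision depends only on the individual user's own settings.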

I could support that, with one caveat. If it allows blacklisting of entire categories, there should be a policy in place stating that the purpose of categories is organizational, and not censorial (essentially the distinction drawn by the ALA above.) That way if people argue that an image should be placed in a category just so that it can be filtered (as opposed to it being a good fit for that category based on its subject matter), the policy rests on the side of maintaining the integrity of the category system.--Trystan 02:08, 23 August 2011 (UTC)
Yes, that sounds reasonable. --78.35.232.131 04:28, 23 August 2011 (UTC)
Both sound reasonable to me, if one intends to provide an image hiding feature. Both points are well defended by claims made elsewhere above. I agree that this is the essential implementation detail if we do indeed make it possible to set a personal preference for hiding images. There is only one argument I have heard for a category system: that it might be technically easier to implement. I don't think this is a good argument - a bad solution here could be worse than none. SJ talk | translate   01:23, 6 September 2011 (UTC)
What's the difference between your suggestion and the currently proposed one? It's almost the same. -- Sameboat (talk) 06:15, 23 August 2011 (UTC)
The difference is: No labeling.
  • If you're looking at all this as "how can we make a simple filter system", this is merely a subtle point.
  • If you're coming at this from "how do we prevent people from exploiting the situation and doing us permanent harm", this is actually a really really important point.
Labeling appears to be exploitable.
--Kim Bruning 11:16, 23 August 2011 (UTC)
The 4th point is lame, as I have previously explained. Without labeling by an organized group, it will be practically unmanageable for the user alone. Don't give me that "if you want self-censorship, do the dull work all by yourself" nonsense. Our labeling does no more damage than the existing censorship options. It will be too forgiving to be exploited by conservatives, authoritarians, and communists. -- Sameboat (talk) 13:29, 23 August 2011 (UTC)
Mate, trying to explain things to you is like trying to play chess with a pigeon. But here goes anyway: I added point #4 to further expand my proposal with an option for the users to adjust the image filter according to their liking. They do not have to do the dull work at all. They can decide to simply switch the filter on and off and either filter all or none images. So, now you can knock over the pieces and so on. --213.196.209.190 13:36, 23 August 2011 (UTC)
Are you telling me the category you stated in #4 is the general category rather than filter category? That's pointless, category may contain images both worth filtering or not. -- Sameboat (talk) 13:46, 23 August 2011 (UTC)
the category you stated in #4 -- What "category" did I "state" in #4? I was talking about a user-specific and user-defined blacklist and whitelist. The categories I mentioned are the existing file categories. But that's just a tiny aspect of my proposal. If it turns out that blacklisting/whitelisting entire categories is not a good idea for some reason, I imagine it will be more because of what Trystan addressed above.
category may contain images both worth filtering or not -- In that case, it's for the user to decide the trade-off. Consider that a filter category would be largely out of control for the individual user and will in all likelihood also contain images that the individual user doesn't deem problematic.
Let me ask you something in return now: what exactly are the arguments that speak for a category system rather than the general image filter? --213.196.209.190 13:57, 23 August 2011 (UTC)
This IS the trade-off I can't accept. I need a separate category system for filtering. That's the whole point of this proposal. I need specific images to be filtered and the others unfiltered, and the WM proposal will save me tons of time for that purpose. Do you know how many files have been uploaded to Commons for an individual to customize their black/whitelist against? Commons:Special:Statistics gives you the answer. -- Sameboat (talk) 14:06, 23 August 2011 (UTC)
This IS the trade-off I can't accept. -- You're making progress! At least you're now talking about your own preferences. The question with regard to the image filter though is what is best for most people.
the WM proposal will save me tons of time for that purpose -- It may save you time, but it will cost the community a lot of time and stress. Time better spent writing articles.
That's the whole point of this proposal. -- Wrong. The whole point of the image filter is to give users the option to filter images. How exactly this should be implemented is completely up in the air. The possibility of a Wikimedia-hosted and -maintained filter category system is just one of the things determined via the referendum. It is not in any way, shape or form a necessary part of the final image filter. Fait accompli much?
I need a separate category system for filtering. -- That is exactly what the whitelist/blacklist system is: A separate, per-user filter category system, just better in that it allows each individual user to adjust the image filter exactly to their liking -- something which no project- or Wikipedia-wide category system could ever achieve. Alternatively, if they don't have the time to define their own blacklist/whitelist, users can simply enable/disable the filter at any time.
You still haven't explained the drawbacks you see with a general image filter, as they apply to most or all people, not just yourself. Enable the general image filter, and you won't see any images. Want to see an image or all images? Disable it with a click, or whitelist it. What is the problem with that? The huge problem with any project-wide category system in that regard remains that different people all have a different set of images they deem objectionable. This is why the referendum asks users about 5-10 categories. But that arbitrary range of 5-10 categories is highly questionable, because it presupposes that there are only 5-10 different groups of images which people will want to have filtered. It also relies on the likewise doubtful presupposition that people can agree on which images to include into even one of those categories. So which direction is all this going in? Exactly right: individual per-user filter categories. Ergo I argue that the category system with all its inevitable drawbacks (neutrality, catching all objectionable images, agreeability etc) should be dropped in favor of a general image filter, with the proposed additional option of a per-user blacklist/whitelist system -- which, again, is essentially a per-user filter category. --213.196.212.168 20:00, 23 August 2011 (UTC)
So this is an ALL-images on/off function that is already provided natively by browsers, case closed! OK, not yet. The WM proposal is exactly about allowing every user to customize their own categories. And this can be changed if 5-10 categories is deemed to be insufficient; that's why we're discussing it right here, in search of the right balance for the number of categories and how categorization should be conducted objectively based on visible characteristics. -- Sameboat (talk) 22:15, 23 August 2011 (UTC)
See the section below this one for a deconstruction of that "argument". Project-wide filter categories are an utterly unworkable approach. That is just the fact of the matter, whether you are capable of and willing to wrap your mind around it or not. --213.196.212.168 22:22, 23 August 2011 (UTC)
I regard this approach as the best by far, for the reasons eloquently stated by 78.35.232.131. I've previously expressed support for such a setup, and the implementation described above seems ideal.
Sameboat: You need to understand that a user could simply enable the all-images filter and whitelist individual images upon encountering them (e.g. when viewing an article and seeing a blocked image whose caption describes something that he/she wishes to view). Likewise, he/she could blacklist image categories and whitelist individual images contained therein. That's substantially more advanced than the functionality included in browsers.
This method is vastly simpler and easier to implement, requires no special community efforts, and eliminates numerous major flaws. —David Levy 22:29, 23 August 2011 (UTC)
The all-images on/off switch does not conflict with the categorized image filter. Some users want to examine an image to decide whether to add it to their whitelist or blacklist. But some users definitely don't want to come into contact with images they don't want to see at all, while still seeing the images that are acceptable to them. This suggestion is not a trade-off; it strips end users of the ability to have SOME images hidden from the start, beyond a simple category selection. -- Sameboat (talk) 00:00, 24 August 2011 (UTC)
There are no "rights" in a legal sense on Wikipedia (except the right to leave), only privileges. And no, you are of course once again completely and utterly wrong. The adjustable general image filter is the only filter variant that does give each individual user complete control over what they want to see and what they don't. Now, since you seem unwilling and/or unable to understand anything explained to you, would you mind telling me what you are doing on an encyclopedic project? Just honestly curious. --213.196.212.168 00:16, 24 August 2011 (UTC)
Check my contributions on zh & en WP if you like, but that is irrelevant: the image filter is designed for readers, not editors. What the WM proposal gives is the ability to filter images according to the user's preferences without requiring them to examine beforehand every image they come into contact with. Any inferior suggestion (your suggestion) which does not provide an equal substitute is not going to be helpful. Many users simply do not need complete control over their filtering preferences. If some volunteers of the WM community are willing to do all the dull work of labeling most images for them, why reject it? You said users can use a 3rd-party script for the same purpose. I would prefer the filter being exploited to the user's computer being infected through a 3rd-party script. -- Sameboat (talk) 01:07, 24 August 2011 (UTC)
The above proposal does not require anyone to "examine every image they come in contact beforehand." It enables readers to easily view a desired image with one click (without ever needing to review anything beforehand or display a single image of which they don't approve).
You're asking why we would reject volunteers' offer to label certain images "objectionable," thereby formally establishing that various cultures' and individuals' moral beliefs are valid (if reflected in the categorization) and invalid (if not reflected). You don't understand why anyone would oppose that? —David Levy 02:55, 24 August 2011 (UTC)
You're missing the point I care about the most: how is the reader supposed to know whether the image in question should be filtered or not when the hide-all-images option is enabled? "If you're not confident, just don't open any image at all" is no more than a game of minesweeper to me. -- Sameboat (talk) 03:17, 24 August 2011 (UTC)
Actually censors care the most, that others don't see the images or read the book. To cite Joseph Henry Jackson: Did you ever hear anyone say, "That work had better be banned because I might read it and it might be very damaging to me"? --Niabot 03:21, 24 August 2011 (UTC)
WP is not just "a" book, it is an internet encyclopedia covering almost every topic possible. Citing that quote merely tells people to avoid obtaining any knowledge if they don't want to be offended by some images. Or, if I interpret that political quote correctly, it tells people who are too easily offended that they have no right to obtain knowledge. -- Sameboat (talk) 03:53, 24 August 2011 (UTC)
You must be silly to think that this only refers to a book. It aims at all knowledge, all books, all articles. It says that you can't judge something you have never experienced. It also says that only censors will read (too late for them) and that they make the decision for others. Making decisions for others is censorship. --Niabot 11:34, 24 August 2011 (UTC)
This is another misconception about proponents of the image filter. We want some images filtered because we have already experienced many similar images previously. In order to avoid experiencing that unpleasantness once more, we need the filter. And this is not censorship when it is the end users asking for some images to be filtered for themselves ONLY, not for others -- make no mistake. -- Sameboat (talk) 13:06, 24 August 2011 (UTC)
It is already censorship to label content as objectionable:
"Labels on library materials may be viewpoint-neutral directional aids designed to save the time of users, or they may be attempts to prejudice or discourage users or restrict their access to materials. When labeling is an attempt to prejudice attitudes, it is a censor’s tool. The American Library Association opposes labeling as a means of predisposing people’s attitudes toward library materials." [5] --Niabot 13:14, 24 August 2011 (UTC)
Applying the ALA's opinion on labeling here is seriously wrong to begin with. Library labels are applied to each book, while image filter categories are applied to images -- self-explanatory. An image may be included in a book or article. Do you see the difference now? A labeled book in the library might turn off some potential readers from picking it up, while the image filter does the exact opposite: because readers know that some of the unpleasantness caused by sensitive images can be alleviated, they can be more confident in accessing the article after adjusting their image filter preferences properly. And I want to add one thing about the general image filter: turning off all images is problematic, because there are many images that should never be hidden, such as diagrams and statistical graphs, which are effectively text in the form of an image. -- Sameboat (talk) 13:45, 24 August 2011 (UTC)
It isn't wrong. You might label books, articles, images, comments, ... It's the same question and problem every time. If you label images, you exclude images. If you label articles, you exclude articles. This has nothing to do with the fact that you can still access the article while the image is hidden.
What you propose is to rip the images out of a book and hand them out only if the user requests them! Absolutely unacceptable. --Niabot 14:12, 24 August 2011 (UTC)
Whether an image will be hidden or not will be decided by the user's filter preferences, not by the WMF. Censorship is about thought/information control, while the image filter gives users the freedom to choose. What the WMF endeavors to do now is clearly the latter. -- Sameboat (talk) 15:09, 24 August 2011 (UTC)
Whether the image will be hidden or not will be decided by the censors that categorize the images for the filter. The WMF provides the tools for the censors. --Niabot 15:13, 24 August 2011 (UTC)
Disable the filter if you condemn it; then you're living in your colorful world without censorship, and your life will be totally untouched by this feature. -- Sameboat (talk) 15:22, 24 August 2011 (UTC)

┌─────────────────────────────────────────────────┘
That isn't the problem we are talking about. We are talking about the prejudicial labeling of content, optional or not. --Niabot 15:30, 24 August 2011 (UTC)

Indeed. The issue, Sameboat, isn't that anyone will be forced to filter images. It's that the Wikimedia Foundation and/or its volunteers will formally judge what images reasonably should/shouldn't offend people and offer inherently non-neutral filter options based upon said judgements. —David Levy 15:41, 24 August 2011 (UTC)
The filter categories are not set in stone. This is a wiki, meaning things can be changed and refined to avoid prejudiced categorization and to come up with more neutral and objective selections. And the ALA's statement is ultimately made out of fear of "restricted access" by the public, which may be a concern of information access policy in different states/countries/nations. But the thing is, whether accessing particular information violates the law is judged by the local authorities/judiciary themselves, rather than depending on our labeling system. Which means that if accessing particular media (for example, pedophilic images) is a crime in a region, labeling it or not does not change the fact that it is against the law. -- Sameboat (talk) 15:58, 24 August 2011 (UTC)
You can't change this by redefining categories. Any answer to the question "What is objectionable content?" is non-neutral. It will always be a non-neutral judgement. This starts with the names of the filter categories themselves. There are only two categories that are neutral: a category that contains not a single image, and a category that contains all images.
You don't need to put the terms "pedophilia" and "law" on the table, since none of the images we use is pedophilia or in violation of the law. We don't need a filter for non-existent content. --Niabot 16:07, 24 August 2011 (UTC)
I never define whether the content is "objectionable" to me. It is the visible characteristics of the image which should be used to shape the filter categorization system, so there is no concern about prejudiced labeling. Pedophilia was just an example. In reality there are many kinds of images on the WM servers which are legally acceptable in your hometown but illegal in many other countries, particularly Muslim ones. But I almost forgot that I brought this up simply to refute the ALA's statement: law is a reasonable means to restrict information access. If law is not the concern, might the labeling hurt people's self-esteem for accessing the labeled media? -- Sameboat (talk) 16:34, 24 August 2011 (UTC)
What visible characteristics define objectionable content? A question related to the previous one. But there are many more questions that come closer to the point: Why does the filter not allow the exclusion of flowers? What are the default categories? Which categories will be listed? Why are these the categories that are listed?
If you follow these questions back to the core, you will end up giving an answer that provokes the question: "What is objectionable content?" There is no way you can avoid this non-neutral question and give a neutral answer.
Again: law isn't a reason. We are only bound to US law, with or without the filter. The people in foreign countries are bound to their law, with or without the filter. The filter doesn't have any impact on the law; it's not a necessary tool to comply with the law. As such, law is not an argument in this debate. That's exactly what the ALA states. If we introduce this censor's tool, then we do so without any need. --Niabot 16:49, 24 August 2011 (UTC)
Simple enough: common sense and an understanding of different cultures/religions. Note that I'm not trying to say this is an easy task, but if a categorization slightly more detailed than the currently proposed 5-10 categories can cover the preferences of more than half of the users on WM, it is already a good start. -- Sameboat (talk) 17:01, 24 August 2011 (UTC)
At some point I will start repeating myself like a record player... It should be obvious that there is no common sense that is acceptable to both liberals and conservatives. There is also no room for compromise. An image is either inside a category or it is not. That is a straight line between yes and no. You will never have the case that all opinions are under or over the line. That makes a decision non-neutral. A filter that allows all images is guaranteed to fulfill neutrality: it is a line at the top, and no opinion is above it. A filter that excludes all images is a line at the bottom; again, all opinions are on one side, and the filter is neutral. Anything else depends on the pattern of opinions. You can lower or raise the line as long as you don't make an opinion change from yes to no or the other way around. That is the maximum tolerance for neutral judgment. For example: all would agree that this is a flower. That someone disagrees is very unlikely, but it already limits the maximum tolerance. What you try to find with "common sense" (I doubt that it exists in the first place) is the median line through all opinions, described as a rule. Suppose you found this median and you make a decision following this rule. The problem with yes/no categorization is that 50% will agree with the decision and 50% won't. A non-neutral judgment is preprogrammed in this case. You can't avoid it. If you like, you could prove it with mathematics. --Niabot 17:33, 24 August 2011 (UTC)
Suppose that most users of Wikimedia Foundation websites are Christian. (I'm not claiming this to be true, but let's imagine that it is.) Would you support a filter system based strictly upon Christian beliefs? In such a scenario, this would "cover the preferences of more than half of the users," so would that be "a good start"?
Any system favoring particular sets of value judgements (whether yours, mine or someone else's) is unacceptable. Even if we could reliably include a vast majority (which I don't regard as feasible), we mustn't discriminate against minorities by formally establishing that their definitions of "objectionable" aren't worth accommodating. Neutrality is a core principle of the Wikimedia Foundation — one that can't be voted away by a majority. —David Levy 21:05, 24 August 2011 (UTC)
I think someone with better knowledge/reasoning than me should take my place in this discussion; otherwise just let this section fade over time. Maintaining neutrality does not mean denying or condemning the existence of non-neutrality. True neutrality cannot be achieved without accepting the biased opinions. That's why we never have a neutral POV in our WP articles but instead state every POV possible with citations. So it is with the filter system: it is meant to accept different preferences without imposing your so-called neutrality on other users. I'm out of it, but that does not mean this suggestion gained my acceptance. Making it a choice between black and white just disgusts me. -- Sameboat (talk) 01:32, 25 August 2011 (UTC)
Indeed, maintaining neutrality does not mean denying or condemning the existence of non-neutral views and cannot be achieved without accepting biased opinions. That's why I oppose implementations that would ignore/exclude certain cultures' beliefs and support one that would accommodate "every POV possible" to the best of our ability. —David Levy 02:22, 25 August 2011 (UTC)
Your arguments appear to be three:
  • that making a maintenance-oriented list of pages that concern people is prohibited by NPOV (which it isn't, or the anti-spam folks would be in trouble),
  • that NPOV is a mandatory policy on all projects, including Commons (which it isn't), and
  • that since we can't please 100% of users with a "perfect" solution, we should refuse to take even a small step towards an improvement that would please most of them (a fallacy known as letting the perfect become the enemy of the good). WhatamIdoing 17:48, 25 August 2011 (UTC)
  • Filter categories like "violence" are not maintenance-oriented lists. Prejudicial labels are designed to restrict access, based on a value judgment that the content, language or themes of the material, or the background or views of the creator(s) of the material, render it inappropriate or offensive for all or certain groups of users.
  • The images are filtered and excluded/hidden from articles inside other projects. That the technical implementation will be hosted on Commons has nothing to do with this.
  • Labels on images or articles may be viewpoint-neutral directional aids that save the time of users, or they may be attempts to prejudice or discourage users or restrict their access to materials. When labeling is an attempt to prejudice attitudes, it is a censor's tool. [6]
--Niabot 18:01, 25 August 2011 (UTC)
1. I reject your description of this system as a "maintenance" task. It will be a reader-facing site feature.
2. The image filter system is to be implemented across all Wikimedia Foundation projects, including those for which NPOV is a nonnegotiable principle.
3. Setting aside the neutrality issue, I don't believe that the proposed setup is even feasible, let alone a step toward an improvement. The amount of resource consumption and extent to which perpetual controversy would be provoked are staggering.
Returning to the neutrality issue (and setting aside the issue of feasibility), I don't regard "pleasing" majorities (at the expense of the minorities whose value judgements are deemed invalid) as a laudable goal. We could "please" most people by customizing each language's projects to declare that whatever religion predominant among its speakers is correct, but that doesn't make it a sensible idea. —David Levy 19:02, 25 August 2011 (UTC)
An image, whether accessed directly or via one of the Wikimedia Foundation's various projects, typically is accompanied by descriptive text, which would inform the reader of the blocked file's subject and enable him/her to decide whether to view it. —David Levy 04:01, 24 August 2011 (UTC)
In order to inform the reader what characteristics the image contains, the description must be more detailed than it normally needs to be. In the end, it is almost like labeling the image with a filter category (I mean labels of objective characteristics, not subjective interpretation). Not to mention the inconvenience of requiring the user to access the file description page before filtering, and the possible lack of description text, or of a language the user understands. Even though alt text can more or less fulfill that purpose, it is not an integral part of the file description page. The image alt text needs to be written per article, which is the source of the impracticability. A filter category eliminates the trouble of internationalisation, while your suggestion relies heavily on the file uploader's self-discipline and on more translation work for worldwide users. -- Sameboat (talk) 04:27, 24 August 2011 (UTC)
Actually, I'm thinking primarily of the text accompanying images in articles (or equivalent pages), which should clearly explain their basic nature. —David Levy 05:23, 24 August 2011 (UTC)
This is even more inefficient than labeling by filter category, because more than one article/page can embed the same image. -- Sameboat (talk) 13:06, 24 August 2011 (UTC)
How does an image's use on more than one page render the setup inefficient? If a reader were to whitelist an image via any page, it would be unblocked for him/her on every page containing it. —David Levy 13:26, 24 August 2011 (UTC)
I mean the description text the reader uses to judge whether the image is worth filtering or not. If more than one article includes the same image, you need to copy and paste that description text into all the including pages, which is needlessly repetitive. -- Sameboat (talk) 13:51, 24 August 2011 (UTC)
Copy and paste what description text? The pages in question already contain (or should contain) such text, which is essential for context.
For example, someone viewing the English Wikipedia's "Penis" article would see the top image described as "a selection of penises from different species at the Icelandic Phallological Museum." A reader with the general image filter enabled would choose to whitelist that image only if he/she wished to view "a selection of penises." —David Levy 14:11, 24 August 2011 (UTC)
The caption text is not always that obvious when the images are hidden indiscriminately. It may be sufficient in well-written articles, but generally not in most stub to decently short articles. Also considering the "random page" option and articles of unknown/utterly professional/ambiguous title (medical/surgery terms for example), the general image filter cannot prevent such surprising moment because the general image filter requires the user with some degree of vigilance of being offended before turning on/off the filter, while WM proposal does not. -- Sameboat (talk) 14:38, 24 August 2011 (UTC)
The caption text is not always that obvious when the images are hidden indiscriminately. It may be sufficient in well-written articles, but generally not in most stub to decently short articles.
If an article is missing reasonably descriptive captions, its images lack the context needed to be useful. This problem can be addressed in a straightforward, neutral and uncontroversial manner, thereby providing a valuable service to all readers. This is a far better use of volunteers' time than tagging "objectionable" images (an inherently complicated, non-neutral and contentious task) would be.
Also considering the "random page" option and articles of unknown/utterly professional/ambiguous title (medical/surgery terms for example),
If a user is unfamiliar with a subject, he/she can simply read the page (or relevant portions thereof) before deciding whether to unblock an image.
the general image filter cannot prevent such surprising moment because the general image filter requires the user with some degree of vigilance of being offended before turning on/off the filter, while WM proposal does not.
Sorry, I don't understand what you mean.
But if you want to discuss "vigilance," I can think of nothing in the Wikimedia Foundation ecosystem requiring more vigilance to implement and maintain than a category-based image filter system would. Millions of images would need to be manually checked (with countless tagging disagreements and edit wars inevitably arising), with thousands more uploaded every day. —David Levy 15:41, 24 August 2011 (UTC)
One of many practical reasons not to use custom categories. But note that this section was about not using a small # of categories at all, whether or not they were already in place and maintained by the community. SJ talk | translate   01:23, 6 September 2011 (UTC)
Indeed. I wrote the above in support of the "general image filter" implementation proposed in this section. —David Levy 12:16, 7 September 2011 (UTC)
I'm not sure this is the only way we could implement an image filter, but I'd be happy if we introduced it this way, and when I use a slow connection I'd definitely use it with "hide all images" switched on. I'm assuming that blocked images would have their caption shown as well as the "click here to see the image" box. We need to decide whether it would also display alt text, or give people the choice of whether alt text is displayed. I think my preference would be for the former. WereSpielChequers 10:20, 18 September 2011 (UTC)
If it is technically feasible, then I support the inclusion of alt text. WhatamIdoing 16:23, 18 September 2011 (UTC)

Give choice - not one central category list, but several user defined[edit]

I am strictly against a centralized categorization list. This is not self-censorship, this is censorship! The only option for me is - selecting each image by yourself OR -

having the choice to select from user- or user-group-defined censorship lists. So every user can select a pre-defined censorship list from users or groups he trusts. This is the only way to go for me; a centralized Wikipedia censoring committee is unacceptable. 94.217.111.189 07:09, 21 August 2011 (UTC)

Fully agreed. It's also amusing that some proponents of such a central, Wikimedia-hosted "objectionable content" list seem to believe that the same community that in their view is unable to make sound editorial decisions regarding images in articles should somehow be perfectly able to produce a workable filter category. Hilarity will ensue. --87.78.45.196 07:19, 21 August 2011 (UTC)
Please, the WMF will not force users to use the filter. If you're worried about the filter being abused by 3rd-party organizations, that's not our problem at all. The major problem with a user-defined filter category is that you actually have to see the images and then add them to the filter categories all by yourself. It is equally as bad as adding the original categories into your filter category. Considering the absurd number of images and categories on the server, it is too inefficient and impractical to be done by a single user for himself. The filter itself is meant to hide images from users who don't want to see them in the first place. What's the point if the user must see each unwanted image first before filtering it? -- Sameboat (talk) 07:56, 21 August 2011 (UTC)
How is a category of "objectionable content" compatible with the core idea of an encyclopedic project, with core tenets like neutrality? A centralized blacklist would essentially label parts of Wikipedia's content as objectionable (by whose standards?) and non-essential (according to whose encyclopedic judgment?).
Even more importantly, how much sense does it make to label some content as objectionable and non-essential, yet at the same time to consider the same content as important enough for full coverage of the subject matter to include it in the article? --87.78.45.196 08:13, 21 August 2011 (UTC)
Nothing worth discussing with you if you think the filter category is identical to a blacklist. We're not talking about the morality of the labeling, but about people who need this feature to avoid things they don't want to see. Disable the filter if you don't want to use it. Objectionable or not doesn't concern you. -- Sameboat (talk) 08:35, 21 August 2011 (UTC)
Nothing worth discussing with you -- Oh wow. Insulting others because you're out of arguments. Well done, considering.
if you think the filter category is identical to a blacklist -- Nothing worth discussing with you if you intentionally seek any opportunity to evade the arguments. So you don't think that a centralized filter category that labels content as objectionable and encyclopedically non-essential is anything like a blacklist. Then ignore the word "blacklist" and focus on the reasoning I presented. Or do you need a filter button for the word "blacklist" in order to enter meaningful discussion?
Objectionable or not doesn't concern you. -- At least change your tone if you don't have anything of value to contribute to this discussion. And yes, it does concern any editor. Read my arguments, respond to them, or be quiet. Thank you. --87.78.45.196 08:44, 21 August 2011 (UTC)
You're the one who is evading. I've already explained that the user-defined filter is practically inconvenient, unusable. All you do is complain about the labeling being objectionable. You, the one who will definitely not be using this feature, are asking to shut down a project that a considerable number of other users have asked for. We believe the WM user group can do a better job than 3rd-party censoring options, which may censor everything indiscriminately, because our filter is tailor-made for WM projects, which creates fewer controversies. -- Sameboat (talk) 09:15, 21 August 2011 (UTC)

┌────────────────┘
complaining on the labeling to be objectionable -- I am saying that an image filter category hosted and maintained within the Wikimedia project and specifically created for the purpose of filtering a community-decided set of images would, by sheer virtue of being offered to the user as a part of Wikipedia, become a part of the encyclopedic project itself. We'd be offering (i.e. suggesting!) images sorted into one or more filter categories to the users for blocking, thereby agreeing as a community that these images are objectionable according to some general/common/basic/universal? sense/feeling/opinion/consensus? regarding the objectionability of images. Like I also pointed out above, by including an image in a filter category (and offering=suggesting it for blocking), we'd also be labeling it as non-essential to the article. Which in turn would then bring up the question of why those images are there in the first place. An image either contributes to the encyclopedic coverage in an irreplaceable way, or it doesn't. If the image is an integral part of and essential to the article, then how could we be suggesting it for blocking? Conversely, if the image is not essential, then why is it in the article?

We believe WM user group can do a better job than the 3rd party censoring option -- I don't think they can. You can install Firefox, install the Greasemonkey add-on, install the userscript and all images on Wikipedia will be hidden by default, unhide-able with a click and with additional unobtrusive, Wikipedia-style links at the top of each page, giving the user the options to show all hidden images on that page and to disable image hiding for the rest of the session. It hardly gets any more convenient than that. When I know that I'm looking at articles that are unproblematic for me, I can e.g. disable image hiding, otherwise it's activated by default and thus failsafe and foolproof.

Again: This is all technology that already exists and which is freely available and which I am already using. And which, by not being part of Wikimedia, poses no problems with regard to the project's basic tenets. --87.78.45.196 11:15, 21 August 2011 (UTC)
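(For the curious, a Greasemonkey-style userscript of the kind described above might look roughly like the following. This is a sketch under assumptions, not the actual script the poster uses; the metadata block and the placeholder wording are illustrative only:)

```javascript
// ==UserScript==
// @name     Hide images by default (sketch)
// @include  https://*.wikipedia.org/*
// ==/UserScript==
// Minimal illustration: every image is replaced by a placeholder link,
// and a single click restores the original image in place.

function placeholderText(altText) {
  // Pure helper: the label shown on the placeholder link.
  return altText ? "Show image: " + altText : "Show image";
}

function hideImages(doc) {
  for (const img of doc.querySelectorAll("img")) {
    const link = doc.createElement("a");
    link.textContent = placeholderText(img.getAttribute("alt"));
    link.href = "#";
    link.addEventListener("click", function (e) {
      e.preventDefault();
      link.replaceWith(img); // one click unhides this image
    });
    img.replaceWith(link);
  }
}

// Only run in a browser context; the sketch loads harmlessly elsewhere.
if (typeof document !== "undefined") {
  hideImages(document);
}
```

(A real script would also add the per-page "show all" / "disable for this session" links the poster mentions; those are omitted here for brevity.)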

Just because the filter categories are stored on the WM servers doesn't mean the WMF acknowledges/declares those images to be objectionable or non-essential to the article. You're committing a fallacy here. I like to read some medical articles, but the real photos may be too gory for me. I understand the image is essential to the article, but I just want these kinds of images hidden from my sight until I want a peep at them. A truly workable filter for WM must be hosted by WM itself and managed by its users; otherwise the whole project is too difficult to keep effective and up-to-date. -- Sameboat (talk) 11:29, 21 August 2011 (UTC)
You keep ignoring the crucial points. I was talking about an image filter category hosted and maintained within the Wikimedia project. A filter category built e.g. by the English Wikipedia community means it's a part of the project itself. You cannot argue your way around that.
I just want these kind of images hidden from my sight until I want a peep at them -- 1, 2, 3, done. You're welcome. --87.79.214.168 11:59, 21 August 2011 (UTC)
I never ever trust 3rd-party scripts. It is the best news that the filter WILL be part of the WM project. It does not discriminate against the labeled images as objectionable or unessential to the project. That's all. -- Sameboat (talk) 12:13, 21 August 2011 (UTC)
I am tired of throwing reason at you, but I hope that someone else may profit more from my postings in this section than you managed to do. --87.79.214.168 12:21, 21 August 2011 (UTC)
You're making a huge fallacy to support shutting down the project; that's why no one else is willing to reason with you except your fellow who echoes your vague point. -- Sameboat (talk) 12:25, 21 August 2011 (UTC)
Ahm, yeah, right. The original poster of this thread, me, and a couple more people on this page, as you will see if you at least skim over it. Farewell. --87.79.214.168 12:29, 21 August 2011 (UTC)
Ooh, perfect. That greasemonkey script might cover most issues nicely. Next problem! --Kim Bruning 13:53, 21 August 2011 (UTC)
It's only "perfect" if you think "see zero images no matter what" is a reasonable response to "I don't want to see images of mutilated bodies by default, but I'd sometimes like to be able to click through to them anyway". If a reader finds 2% of images distressing, do you want the reader's choices to be limited to (1) giving up all of the other 98% or (2) being distressed by 2%? WhatamIdoing 20:27, 23 August 2011 (UTC)
Greasemonkey can be enabled and disabled with a single mouseclick. Nobody's choices are affected in any way, shape or form with a general image filter, that's exactly the beauty of it. --213.196.212.168 23:15, 23 August 2011 (UTC)
I don't think that "nobody's choices are affected in any way, shape, or form" is an accurate description of "unable to see the 98% of images that I want to see without also seeing the 2% that I don't want to see", unless by that you mean "the only choice anybody should ever be given is 'all or nothing'". WhatamIdoing 23:46, 23 August 2011 (UTC)
This should never operate invisibly. It's important to make it clear to the user in every instance that some content has been concealed, why it was concealed, and what they need to do to view it. - Elmarco 21:10, 27 August 2011 (UTC)
Yes, that would be an absolute requirement. To the original idea of having a set of user-defined categories: you don't even need to have a visible list of what other users use. You could make it easy for readers to identify the categories that a given image is in, and choose a set of them to shutter. (One of the long-term technical improvements that this discussion highlights a need for is access to metadata about media embedded in a page. It currently takes a number of API calls to get author, upload date, and category information about a file, and there is no quick way to see all of its parent categories. So, for instance, there is no easy way to identify all pictures of books, since most will be in a sub-sub-category.) SJ talk | translate   01:23, 6 September 2011 (UTC)
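To make the technical point above concrete: finding every ancestor category of a file really does require one request per category node, since the MediaWiki API's category lookup (action=query with prop=categories) only returns an item's direct parent categories. The sketch below is hypothetical: the file name and category graph are invented, and the API call is stood in for by a plain lookup function, but the walk itself is the part that cannot currently be done in a single call.

```python
from collections import deque

def all_parent_categories(title, get_parents):
    """Collect every ancestor category of `title` by walking the category
    graph breadth-first. `get_parents` stands in for a MediaWiki API call
    (action=query&prop=categories), which returns only DIRECT parents --
    hence one call per node, and no quick way to see all ancestors."""
    seen = set()
    queue = deque([title])
    while queue:
        node = queue.popleft()
        for parent in get_parents(node):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

# A toy category graph standing in for Commons data (entirely hypothetical):
graph = {
    "File:Old_book.jpg": ["Category:Illuminated manuscripts"],
    "Category:Illuminated manuscripts": ["Category:Manuscripts", "Category:Books"],
    "Category:Manuscripts": ["Category:Documents"],
    "Category:Books": ["Category:Documents"],
    "Category:Documents": [],
}

ancestors = all_parent_categories("File:Old_book.jpg", lambda t: graph.get(t, []))
print(sorted(ancestors))
# The file ends up under Category:Books only via a sub-category,
# which is exactly why "all pictures of books" is hard to identify.
```

Note that on the real category graph the same walk would need rate limiting and cycle handling at scale; the `seen` set already guards against revisiting nodes.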


Categories are not filters or warning labels[edit]

Categories are inclusive, filters are exclusive[edit]

Oppose for reasons given by many other posters: basically, the fact that offering filter tags makes government censorship a lot easier.

Remark on implementation: if this proposal does make it through the democratic scrutiny, then I remain opposed to the idea of it being implemented through categories. Categories were designed to be able to find pages, not to exclude them. Their purpose is inclusive, i.e., when you are searching for something then it is better to have a few results too many than to miss out on the exact page (image, resource...) you were looking for.

Censorship (aka "filtering") tags, if implemented at all, should have the opposite philosophy: it is better to accidentally come across an offensive image every now and then than to miss out on important information that may not be so offensive, after all. Resources should only be labeled 'offensive' after they have been established to be so, not 'potentially offensive' because someone thinks it's a good idea to do so. In a debate on whether a certain resource 'potentially' has a certain property, the 'yes' side stands a better chance of winning by definition (in case of uncertainty, the claim remains 'potentially' true).--Lieven Smits 08:05, 22 August 2011 (UTC)

This is why images should be given objective tags such as "nipples, genitalia, blood, intercourse" rather than an "offensiveness" rating. 68.126.60.76 10:54, 22 August 2011 (UTC)
Well, keeping in mind that I am still very much opposed to the whole idea of tagging objectionable images: if we remain within the limited discussion of whether or not to use categories, I think that you mostly confirm my point: the 'normal' category Blood [7] would have a rather different set of pictures from the 'objectionable because too bloody' tag. Hence: two different concepts, two different implementations.--Lieven Smits 12:02, 22 August 2011 (UTC)
Yes. The filtering categories are going to have to be totally, totally different from categories like 'Shuttering requested due to _____ '. People will want to add images to the filter categories that don't objectively meet the stated criteria, and over the long haul we won't have a way to stop them from doing it. Admitting up front that the 'filter categories' are different is a good insight that will save us a lot of trouble later on. --AlecMeta 16:48, 22 August 2011 (UTC)
You're trying to invent a problem where there doesn't need to be one. We shouldn't be trying to define what is objectionable. You tag an image with blood, and someone determines whether or not they will allow images with blood to display on their computer. It doesn't matter how much blood or how objectionable anyone else might think the blood is. The presence of blood in an image is at least 99% of the time going to be an objective fact which is indisputable. You may get people who want to add improper categories, but this can be undone, just as people add bad content to articles and it is reverted. It will still be an encyclopedia anyone can edit. If such an objective system is used your concerns become moot, and you let the end user worry about which tags they want to block, if any. 68.126.60.76 19:46, 22 August 2011 (UTC)
In my theory, the desire of people to censor will be inarguable-- people will always find an image they dislike and then use any excuse they can to get it in a filter. And since bigotry is multiplicative, black gay men will get filtered all the time if they stand anywhere near ANYTHING potentially objectively objectionable. And that won't do.
We can't "filter neutrally"-- instead let's admit up front that all filtration is, by its nature, non-neutral. Stop and think about it-- we can live with this. Individual filters don't have to be neutral as long as there is no "one" filter or one set of filters or a pretense that filtration is objective, authoritative, or morality-based. We say upfront it's all prejudice, but if you REALLY need a very-slightly-prejudiced content filter, we can give you the tools to do that on your own time. --AlecMeta 21:39, 22 August 2011 (UTC)
It's still an encyclopedia anyone can edit. Why are black gay men more likely to be filtered than deleted entirely in the absence of filters? If you properly use objective filters, the problem you propose can't happen. Say a black gay man is standing nude, and you filter "penis". The presence of the penis is not debatable, and he will be just as filtered as an image containing a straight white man's penis. Now suppose a multiplicative bigot sees a picture of a clothed gay black man, standing in a room in his house, everything normal and everyday about the scene. What is the bigot going to tag? Wall? Chair? How many other users will filter photos by the tag "chair"? If the black gay man is at a pride parade, maybe the image will be tagged with "homosexuality" or "gay pride" or something of that nature, which some could consider objectionable, but only those who choose to filter those contents will have the image blocked. If you filter purely by keywords, and don't assign an offensiveness rating, it will be nigh-impossible to game the filtering system to prevent others from seeing the image you don't want to see, since everyone will filter different keywords rather than choosing the "moderate" setting. You could vandalistically add a false tag, but someone will notice and remove it, just as articles are vandalized and reverted. 68.126.60.76 04:58, 23 August 2011 (UTC)
I may not have been clear on the example of 'Blood' categories/filters/tags. It may be objectively verifiable whether an image contains blood, but that is beside the point. The category 'Blood' should not contain all images with blood on them, but only images whose content is relevant to someone who intends to learn about blood: blood groups, microscopic images of blood cells, chemical formulae, test tube images, blood drives etcetera. The other 'thing', the category that you intend to use as a filter against objectionably bloody images, would presumably not even include any of those; it would contain depictions of humans and possibly animals who are bleeding. The two concepts of categorisation, while both deserving the name 'blood', are very different both practically and theoretically.
My principal opposition to creating the other 'thing' remains. It can and will be used to limit other people's access to information, apart from being almost the exact opposite of NPOV.--Lieven Smits 07:15, 25 August 2011 (UTC)
You are confusing Wikipedia's existing category system and a content filter keyword system. They would be separate systems. 68.126.63.156 02:47, 30 August 2011 (UTC)
I agree that Wikipedia categorisation serves a very different purpose from content filtering. I do not think that I have mixed the two concepts up.
The purpose of the proposal is essentially content filtering. But several contributors on this page have argued that the proposal includes using the existing category system for that purpose. Thus, confusion appears to be in the air. I have tried to argue that the category 'Blood' should be distinct from the content filter 'Blood' if there is ever a need for the latter. The category 'Blood' is unfit for filtering out shocking images of bleeding people and animals.--Lieven Smits 09:42, 30 August 2011 (UTC)

Your point about categories having an inclusive, not exclusive purpose is a good one. Nevertheless, for sufficiently specific categories, the two become the same. SJ talk | translate   01:23, 6 September 2011 (UTC)
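The objective-tag approach discussed above (descriptive tags like "blood" or "nudity" on images, with each reader choosing which tags to block) has a very simple core, which is worth seeing to understand why it contains no global "offensiveness" judgment. The sketch below is hypothetical: the file name, tags, and preference sets are invented for illustration, not drawn from any actual proposal.

```python
def should_hide(image_tags, user_blocked_tags):
    """An image is hidden only when it carries at least one descriptive
    tag that this particular reader has chosen to block. There is no
    shared rating of how objectionable anything is -- only factual tags
    plus an individual preference list, so one reader's choices never
    affect what another reader sees."""
    return bool(set(image_tags) & set(user_blocked_tags))

# Hypothetical example data:
image = {"title": "File:Surgery.jpg", "filter_tags": ["blood", "surgery"]}

reader_prefs = {"blood"}   # this reader has opted to block images tagged "blood"
default_prefs = set()      # the default: nothing blocked, filter effectively off

print(should_hide(image["filter_tags"], reader_prefs))   # True: hidden for this reader
print(should_hide(image["filter_tags"], default_prefs))  # False: shown to everyone else
```

The design choice the thread keeps circling: the tag ("contains blood") is meant to be an objective fact, while the value judgment ("I don't want to see blood") lives entirely in the per-reader preference set, so disputes are confined to whether a tag is factually accurate.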

Libraries have fought warning labels for years[edit]

When it comes to passionate defenders of intellectual freedom and access to information, one need look no further than the library profession. They have opposed warning label systems, like the one being proposed, for years. From the American Library Association:

Labels on library materials may be viewpoint-neutral directional aids that save the time of users, or they may be attempts to prejudice or discourage users or restrict their access to materials. When labeling is an attempt to prejudice attitudes, it is a censor's tool. The American Library Association opposes labeling as a means of predisposing people's attitudes toward library materials.
Prejudicial labels are designed to restrict access, based on a value judgment that the content, language or themes of the material, or the background or views of the creator(s) of the material, render it inappropriate or offensive for all or certain groups of users. The prejudicial label is used to warn, discourage or prohibit users or certain groups of users from accessing the material...

I would encourage anyone interested to read the full statement and seek out some of the substantial body of library literature on why warning labels are incompatible with intellectual freedom.--Trystan 13:59, 22 August 2011 (UTC)

Trystan, while I know some proposed implementations have involved a custom set of 'controversial' categories (which would qualify as prejudicial labels), I believe the more general quote from that document which is relevant here is: "Directional aids can have the effect of prejudicial labels when their implementation becomes proscriptive rather than descriptive." We would need to be careful in any use of existing categories as well not to present them in a proscriptive way. SJ talk | translate   01:23, 6 September 2011 (UTC)
Read w:Wikipedia:Free speech please. Your right of speech here is a privilege granted by WM, not the US government. -- Sameboat (talk) 14:12, 22 August 2011 (UTC)
You get to choose what 'warning labels' you want. Nobody is going to paste a 'label' on your 'book' for you. --Bencmq 14:18, 22 August 2011 (UTC)
@Sameboat: I don't get your point at all: Trystan is not even American, as far as I know. @Bencmq: you are not getting Trystan's point: the issue is that, once the labels are there, you can be denied access to the books by third parties. Besides, the labelling would be done by the wikipedia community, including, but not limited to, shills of the Chinese government, not by the single user. --complainer 14:24, 22 August 2011 (UTC)
On that point I agree. There is inevitably going to be ridiculous edit warring over whether images should go into a certain category that is used for the filter - but I think the availability of the tool itself is important as well. I hope, before we see anything, the likelihood of such inevitable disputes can be minimised through technical means.--Bencmq 14:30, 22 August 2011 (UTC)
Sameboat, I am not American, and am aware that free speech guarantees don't apply to private organizations regardless. The issue is not free expression but the related concept of intellectual freedom. I chose the ALA wording because it is well-expressed, but there are librarians world-wide that support intellectual freedom.
Bencmq, the warning labels show up in at least two places that have the potential to prejudice users in the sense that the ALA is talking about. First, in the filter itself ("Do you want to avoid looking at controversial images like sex, nudity, violence, immodestly dressed women, blasphemy, or gay people?") Second, on the image pages themselves, which would need to be appropriately tagged with those labels for the filter to work.--Trystan 14:32, 22 August 2011 (UTC)

Yet every library I've been to is divided into a Children's section, Young Adults' section, etc. And there are librarians and parents around to ensure that children don't wander off. There's a difference between a government telling organizations what to do and organizations using their own judgement. --Michaeldsuarez 16:08, 22 August 2011 (UTC)

That's an excellent point, thanks for bringing it up. Material is organized to help the reader find it, not to warn about its content. It is a frequently attempted form of censorship to try to get material aimed at children or youth filed in the adult section based on ideological objection (e.g. children's books like Heather Has Two Mommies or Judy Blume's young adult books dealing with sexuality.)
As for librarians monitoring children, absolutely not. Libraries very explicitly do not stand in the place of parents. We do not scrutinize or evaluate what patrons are doing, and are most certainly not censors for other people's children.--Trystan 16:26, 22 August 2011 (UTC)
(ec) You raise some great points, Trystan. w:Wikipedia:Free speech notwithstanding, I actually do think labeling offensive content raises very real 'free speech' issues, because if we did filtering "wrong", we might empower or legitimize real-world censors taking away our readers' real-world rights.
But labeling done "very right" can be way less bad once you take away any sense of "authority" behind the labelling. If the foundation personally decided on "the best" labelling scheme, it would come with authority. But filtering will be massively parallel, "buggy", controversial, and it won't work the way users might expect it to. The kind of "prejudgment" our labels will contain is far less reliable and monolithic-- nobody will agree on where the lines are or if there should even be lines. It will become apparent to anyone using the feature that its logic is irrational, idiosyncratic, and non-ideal-- in short, our labels won't be monolithic and authoritative.
This isn't exactly what our 'heart' tells us to do, but let's be realistic. A library is usually located in a specific location-- we are everywhere-- buses, subways, airplanes, offices, mosques, and everywhere else. We also are a 'face of the internet' to whole cultures that are getting online for the very first time-- imagine our ancestors at the dawn of film, when the silent black and white image of a train heading towards the viewer could actually elicit enough fear to be called a "scary movie"-- and now realize that whole cultures are going to be meeting us at the same time they meet any internet technology. I don't know who these cultures are, I don't know where they are precisely, and I have no idea what they need in a "library".
If they really deeply want a very simple 'privacy shutter', should we really let our original vision of the ideal global library stand in the way of them having it? I ask this non-rhetorically, but my emotions are saying that this is 'close enough' that we should give it to the people who want it. --AlecMeta 16:17, 22 August 2011 (UTC)
Hm, but. For that to make sense, we would have to place a permanent and quite prominent note on every single content page (not just the main page, where first-time users may or may not arrive!) drawing attention to the filter option. Otherwise, new and technically inept users in particular may keep running headlong into objectionable material (which probably has done untold damage over the past years, including work stoppages, people poking out their own eyes, and spontaneous combustion). --195.14.220.250 16:30, 22 August 2011 (UTC)
I agree that we'll have to advertise the filter on either all content or content that has objectively been shown to upset people to a certain _objective_ level. It will still involve the reader having to do some effort on their own, so it won't be perfect from their point of view, but it will be as close to perfect as we can get. --AlecMeta 18:54, 22 August 2011 (UTC)
Nope. The closest we can get to a perfect filter is a general image filter. That is the only way to (i) guarantee that --once they activate the filter-- people won't see any objectionable images and (ii) to prevent any completely unnecessary snafu that would inevitably accompany any filter category system.
Bottom line: IF the image filter is implemented, it should most definitely be done as a general image filter, which users can enable and disable at will. What exactly, in your opinion, actually speaks for a category system rather than the general image filter? --78.35.232.131 00:44, 23 August 2011 (UTC)
I would certainly like to see what sort of interest there is in a general image filter. That is the only version of this feature that I could imagine myself using. (for a small and annoying phobia.) SJ talk | translate   01:23, 6 September 2011 (UTC)
«labeling done "very right"» How can we expect to do it right? It might be right for yourself, but not for others. You will run into the trouble that the average American, the most liberal German, and the most conservative Arab all have to agree that the labeling is done right. Otherwise you simply don't do it in a "very right" way and all your argumentation is for nothing. I hope that this is clear. --Niabot 18:25, 22 August 2011 (UTC)
Either that, or we create several different filter categories, one "Typical Arab Sensibilities Filter", one "Typical Conservative Christian Sensibilities Filter", one for mom, one for dad, one for grandma, one for grandpa, one for average children, one for smart children who are ahead of their age. I can see it now, it suddenly all makes perfect sense. Seriously, to anyone easily offended by sarcasm: I feel sorry. For you. --195.14.220.250 18:38, 22 August 2011 (UTC)
Yes! Precisely. I'm not sure if all the participants who want a filter yet realize this basic insight that you capture so well-- there is NO SUCH THING as a filter everyone can agree on. The most liberal American and the most conservative American will NEVER agree on a filter, and that's just Americans. There is absolutely no consensus to find-- it's JUST opinion.
So "Very Right Labelling" is infinite labelling-- it's a wiki with infinite forks-- as many categories as people want, right down to per user. I could make a whole category with no better label than "images that offend AlecMeta" and that would be fine. No one would care what offends me, no one would expect ANY labelling be perfect any more than we expect any article to be perfect.
Doing this "very right" would actually be something we could actually be proud of, at the end of the day. We could show Youtube and GoogleSafeSearch that there's a better way than a "one-size-fits-all" labeling. We could, if we chose to, make filters in infinite diversity in infinite combinations. NO two people would EVER agree on a filter, and we could prove it in front of the whole world.
I still don't see it as a priority. But if it has a low price tag, is it an exciting and educational project? Absolutely! --AlecMeta 19:02, 22 August 2011 (UTC)
Alec, the point is that no filter category or set of filter categories will ever be sufficiently evolved, nor will they evolve towards anything useful in the first place. You're trying to play the eventualist / incremental improvement card: "it doesn't need to be perfect right now, but it's approaching perfection". No, it's not. Not only will it never reach perfection (as in: working really well as a filter category/categories), it will actually never even be approaching anything like that. --195.14.220.250 19:13, 22 August 2011 (UTC)
There is no perfection to find for this kind of project. It will never make anyone completely happy. But if we want to actually understand what images upset people, Mediawiki is a powerful technology, and we can use it to let people make filters that are "as perfect for them" as time and effort allows. Most of them will always be upset that their filter isn't imposed on everyone, but that's okay. --AlecMeta 19:20, 22 August 2011 (UTC)
But that's all under the assumption that you could create personal filters, or at least a bunch of them. I doubt that it would be practical to have a list of 100 different kinds of filters, of which some might match your expectations. At least for the end users this would be a pain. "Oh, cool. I can choose from 100 filters. I will spend an hour reading and choosing. But, damn, I just wanted to read something about XYZ." --Niabot 19:55, 22 August 2011 (UTC)
Indeed, it is all under that assumption that we "do it right". If we do this very wrong, it would blow up in our faces very badly. If we do it only semi-wrong, I predict the Image Filtration project will quickly evolve unlimited categories as we learn, experimentally, that there are no consensuses and no hard lines. What's on the table is close enough to perfect that either it will be "good enough" (and I'm wrong) or it will develop infinite categories with time. Even better, of course, would be to get it right the first time for a low price, if that's an option.
The first screen would probably only show the n most popular filters, and trying to make sense of infinite filters would be a big challenge. It's just about offering people who are displeased with the status quo a limited SOFIXIT power. People are coming to us upset with our content, the solution is to give them a 'project' and let them try their best to solve it for themselves. If they fail, that's okay-- the experiment doesn't have to succeed for it to be a success. Offering people the chance to fix it is enough. --AlecMeta 20:02, 22 August 2011 (UTC)
Now you made a whole lot of good-faith assumptions. At first it is a bit doubtful that people who don't want to see the images (the offended) will actively search for such images and categorize them in a neutral way. The second point is the default filtering. It's still on the table. Which will be the default filter? (More objections to follow, but two for the start) --Niabot 20:35, 22 August 2011 (UTC)
People won't categorize in a neutral way-- there will be no neutral way, there will be no neutral categories, just like real life. There is no default-- default is off. Even when turned on there is no default, only 'most popular'. (and I admit a LOT of assumptions, so by all means pick my arguments apart, that's what they're there for.) --AlecMeta 20:46, 22 August 2011 (UTC)
Right, right. It's about the children, all about the children, always and only about the children. Won't somebody please think of the children?! Great argument! So new and fresh and powerful, kinda. --195.14.220.250 16:19, 22 August 2011 (UTC)
I just can't express deeply enough how much this is NOT about the children. The children don't want this, the children don't need this, and the children can get around this without blinking. Won't somebody please think of the grandparents who can never unsee Goatse! --AlecMeta 19:28, 22 August 2011 (UTC)
  • 50 years ago, children's libraries were indeed restricted to material thought suitable for them. Movies were censored also, on the same traditional grounds as that proposed here: sex, and violence. For that matter, so were encyclopedias, which carefully avoided anything that might disturb the readers. It was a survival of what once was an important cultural value, paternalism in the quite literal sense, where (usually upper-class) adult males prevented other people from reading or seeing what these supposedly more responsible people thought them unfit to see. We did not make an encyclopedia on that basis. We were most of us raised in a world that was free from this, or at least becoming free from this, and we did not make an encyclopedia to suit the views of our great-grandparents. We're a contemporary encyclopedia, and contemporary means uncensored. It was one of our first principles. We also believe in the freedom of our readers. If they choose of their own will to censor it with their own devices, they are free to do so, regardless of what we may personally think of that decision. But we cannot, in conformance with our principles, encourage them to do so, or adopt a system aimed at facilitating it. The board seems not to have been as much committed to free expression as the community, and the members of it need to rethink whether they are in actual agreement with the goals of the organization they are serving. DGG 17:31, 22 August 2011 (UTC)

Any idea why the American Library Association has made such a proclamation? Because they were and are under pressure from local politicians - and we are just the next one on their target list. --Bahnmoeller 18:32, 22 August 2011 (UTC)

It is worth noting that we are not, in fact, under pressure from local or global politicians on this score. A few countries block us, but not primarily for our use of images. And some filters may block commons, but we do not receive significant political pressure to change as a result. SJ talk | translate   01:23, 6 September 2011 (UTC)
[Warning: the following post contains feminist, gay, and blasphemous content, which is offensive to many users. If you are offended by this content, please make sure your filters are set to hide it.]
...but in all seriousness, you raise some excellent points, Alec.
I don't think taking the labeling authority away from WMF and giving to users makes much difference. If someone tags a picture of a woman wearing shorts and a T-shirt as "Image filtering: Immodestly dressed women", there really is no response to that. I can't sensibly argue that the person who tagged it doesn't find it offensive, as it is completely subjective. And WMF will have validated their right not to be offended by such images, so I guess it should be tagged that way. But labeling an image with such a pejorative, moralistic label is antithetical to intellectual freedom principles. Whether it's done by an authority or a collective, I simply don't see a right way to indicate that wanton women with bare arms and legs/sinful gays/blasphemous Pastafarians are offensive and that the user might want to think about blocking them.
"I have no idea what they need in a "library". We don't really need to know in that sense. Whatever the culture, we provide free access to information (free from warning labels prejudicing users against that information) and let individuals access what they want to. In every culture there are those who want to freely access information and those who want to stop them. A commitment to intellectual freedom sides with the former group and actively opposes the latter.
There are also, in every culture, disenfranchised and unpopular minorities. Time and time again, calls for warning labels disproportionately target these groups. Empowered female sexuality, cultural minority traditions, gay and lesbian issues, and religious minorities are frequently held to stricter standards than what the majority deems proper. Adopting a warning label system just helps perpetuate this.--Trystan 00:12, 23 August 2011 (UTC)
Agreed. If we make a category "Immodestly Dressed Women", we should naturally expect it to grow to include every possible image that could be added to the category in good faith-- and on earth, that's going to wind up meaning "Immodestly Dressed Women" ultimately filters all images of women that show more than eyes-- there would be no way to stop that expansion. The whole system will be prejudice embodied-- but prejudices are just another form of information, and if people want to tell us about their prejudices, we can treat those prejudices as information to be shared, categorized, forked, remixed, etc.
We have no clue how to reach consensus across languages, across cultures, across projects, but we are going to learn those things. We also have no clue how to reach consensus on 'offensiveness'-- and I think consensus has to fail there because no consensus exists. So we use forks instead of consensus-- that's okay. The image filtration project, by its nature, is nothing but POV-forks already, because POV is all it is. --AlecMeta 00:38, 23 August 2011 (UTC)
I have great interest in participating in a project committed to neutrality and intellectual freedom. I have absolutely zero interest in participating in a project that rejects those principles and sets out to embody the prejudices of its users.--Trystan 02:17, 23 August 2011 (UTC)
Mmm-hmm... but you seem to have forgotten the important distinction between "censorship" and "selection". Have you ever worked for a library that has a collection of Playboy magazines? No? Why not? They're popular, they're informative, they're in demand by readers, they have cultural importance—but you haven't ever recommended that any library acquire them, have you? There are just ten libraries in all of Canada that include even a single copy of this magazine. Nine of them are university libraries. The other nearly 2,000 don't list it in their catalogs at all.
So if not acquiring Playboy doesn't violate your commitment to intellectual freedom, then how could letting me decide whether to display video stills from hardcore porn movies on my computer screen possibly violate intellectual freedom? Is skipping porn a "legitimate acquisition decision" only when it's done by a paid professional and affects every single user in the system, but merely base "prejudice" when the individual reader is making the decision to choose something else? WhatamIdoing 21:00, 23 August 2011 (UTC)
Selecting and proposing images for filtering equals labeling them as (i) non-essential to the article (in which case they shouldn't be included in the article in the first place) and (ii) as somehow generally potentially objectionable (according to whose standards?). The problem is not the "censorship" strawman you keep raising (although it is detrimental to the level of discourse that you keep raising it), but the fact that proposing (i.e. suggesting!) images for filtering equals labeling them as somehow negative, and as non-essential to the articles -- to which all images are supposed to make an irreplaceable contribution. --213.196.212.168 21:15, 23 August 2011 (UTC)
Every image does not make an irreplaceable contribution to an article, just like every book does not make an irreplaceable contribution to a library, and every artwork does not make an irreplaceable contribution to an art museum. A brief look at the battles that the English Wikipedia has had with keeping vanity shots and extensive galleries out of some articles is sufficient to disprove this idea. To give one example, en:Bride has been reduced to "only" nine images, half of which are still unimportant.
You are proposing that a person going to a museum not be allowed to pick and choose what he (or she) looks at, but must look at every single image in every single room merely because the curator thought it worth displaying to the public. Why should I have to look at every single image? Why should I not look at the images I want to see, rather than the images that someone else wanted to display? Does your notion of free speech include an inalienable right for speakers to have an audience, but no freedom for you to use earplugs if you don't want to listen to the man yelling on the street corner? WhatamIdoing 23:28, 23 August 2011 (UTC)
Playboy is an interesting example. You make a very good case for libraries carrying it, and some choose to,[8] with somewhat predictable results.[9] Whether each individual library's decision with respect to that title is censorship or selection is a very complex and interesting and debatable question.
I wouldn't say that skipping "porn" per se is based on prejudice, depending on how pornography is defined. The difficulty is that defining pornography for the people who wish to avoid it is very hard to do. The prejudice we are likely to introduce into our system is that people tend to be much more offended by (and tempted to label as "pornographic") certain types of images. For example, women are often held to different standards than men in terms of acceptable dress. Images of same-sex couples attract disproportionate criticism compared to similar images of opposite-sex couples.--Trystan 23:39, 23 August 2011 (UTC)
I don't think that anybody believes that the filter could be 100% effective, if for no other reason than the undeniable fact that commons:Category:Media needing categories has thousands of uncatted images, and new images turn up every second of the day. We already have problems with correctly categorizing borderline images, and nothing here will change that.
It's true that the borders of categories are sometimes hard to define. We already see people getting blocked for edit warring over the cats, and that is unlikely to change. It's likely to be a 98% solution rather than a 100% guarantee. But none of those limitations are good reasons for saying that it's unethical to give readers the ability to use neutral information to make their own choices about which parts of our "library" they personally choose to see.
In fact, I believe that the greater ethical transgression is forcing them to see what they do not want to see. If your library included Playboy, you would never shove it in front of an uninterested reader, and you would stop any patron who did that to other patrons. You'd be calling security, not telling the other patrons that the library had a legitimate reason to include porn magazines and if they didn't like having them shoved in their faces unexpectedly, then they should stop using the library. Similarly, I think we should support allowing a Wikipedia reader to say, "I don't want to see that, so please stop shoving it in my face". WhatamIdoing 00:03, 24 August 2011 (UTC)


Negative effects of using categories as warning labels on the usefulness of categories[edit]

This topic has been mentioned in a few places above, but I thought I would add it under a separate heading here for a fuller discussion.

Categories, effectively used, attempt to capture that elusive quality of aboutness; the main subject of an item, in this case an image. They let us organize and find items based on content. Warning labels are very different; they attempt to flag any image which contains the offending content, regardless of what it is primarily about.

For example, we have a category about Nudes in Art. The implied scope of this category, based on how it has been used, is something like "artistic works primarily featuring the nude form." The scope is not "artistic works which contain any nudity whatsoever, however trivial." That would not be a useful scope for it to adopt. But if Nudes in Art becomes the warning label for people who want to filter out nudity, it will become diluted in that way. A category being asked to serve double duty as a warning label will invariably become diluted and much less useful.

There is also a detrimental effect on the images being categorized. A good practice is to categorize something based on a limited number of categories that best describe it. For example, this image of the Barberini Faun has a very well-chosen set of descriptive categories, but (goodness!) they forgot to include the all-important fact that he is naked. Let's add that, and any other potential objections people might have about it, and drop off some of those categories which merely tell us about its significance as a work of art. If we make warning people a primary function of our categorization system, it is to the detriment of its descriptive and organizational power.--Trystan 22:44, 21 August 2011 (UTC)

This is no different from 87.79.214.168 stating that the filter will make the labeled image "unessential to the article". I have had enough of such fallacies here already. He may not think so himself, or may just be worried about what others think. But WM can and should establish a statement that filter categories do NOT reduce the priority and usefulness of labeled images for inclusion in articles compared to unlabeled images. -- Sameboat (talk) 23:28, 21 August 2011 (UTC)
the filter will make the labeled image "unessential to article" -- Funny that you would mention it, seeing as you didn't have a good response to that powerful argument above, either. Again, not for you, Sameboat, but for others who may inadvertently think that you have any sort of legitimate point (which you do not): Sorting an image into a category of objectionable content and offering it to users for blocking equals (among other things) labeling it as non-essential to the article. That's not a fallacy, nor a statement of opinion. It's a fact, and as such non-negotiable reality.
An image either contributes to the encyclopedic coverage in an irreplaceable way, or it doesn't. If the image is an integral part of and essential to the article, then how could we be suggesting it for blocking? Conversely, if the image is not essential, then why is it in the article in the first place?
WM can and should establish a statement that filter categories do NOT reduce the priority and usefulness of labeled images to be included in the article than unlabeled image -- Unfortunately for your line of reasoning, that statement is blatantly untrue, whoever says it. --213.196.218.6 01:32, 22 August 2011 (UTC)
Of course, if editors would refuse to include labeled images in articles, WM would not have proposed the image filter in the first place. Your statement may be true for some editors, but it definitely does not represent the whole community. -- Sameboat (talk) 04:22, 22 August 2011 (UTC)
Your statement may be true for some editors, but definitely does not represent the whole community. -- What on god's green earth are you blathering about? --78.35.237.218 09:03, 22 August 2011 (UTC)

When labeling is an attempt to prejudice attitudes, it is a censor's tool. The American Library Association opposes labeling as a means of predisposing people's attitudes toward library materials. http://www.ala.org/Template.cfm?Section=interpretations&Template=/ContentManagement/ContentDisplay.cfm&ContentID=8657 --Bahnmoeller 18:20, 22 August 2011 (UTC)

True, but irrelevant.
Labeling an image of the Prophet Mohammed as being an image of the Prophet Mohammed is informative and appropriate. It is not a means of predisposing people's attitudes towards the image: it is a method of telling them what the image contains. The ALA would object to labeling such an image as "artwork created by evil people" or "artwork whose existence is an offense against all morally sound people". Those labels would prejudice the viewers' attitudes, but nobody is proposing that we do that.
Similarly, the ALA does not object to labeling pornography as pornography. This is informative, appropriate, and non-prejudicing. It would object to labeling it as "things decent people would never look at" or "materials only of interest to sexual perverts", but nobody is proposing that we do that.
The ALA does not oppose labels; it opposes labels whose purpose is to impose the librarians' prejudices on the users. The ALA does not object to readers using informative labels to help them choose which materials they want to see. WhatamIdoing 21:21, 23 August 2011 (UTC)
Labeling an image of the Prophet Mohammed as being an image of the Prophet Mohammed -- Ah, but that is not all of what we'd be doing if we implement a variant of the image filter that relies on special filter categories. We'd be categorizing that image and suggesting it for filtering! --213.196.212.168 21:39, 23 August 2011 (UTC)
Only in the sense that my local library indicates to me that an entire section of the library is mostly garbage by labeling it en:Young-adult fiction. I am independently prejudiced against such (commonly depressing and frequently idiotic) books. Are they wrong to label the books in a way that allows me to exercise my prejudice against them, by choosing books from other parts of the library? WhatamIdoing 23:18, 23 August 2011 (UTC)
Do you see how you need to go back to the library analogy in order to make a "point"? If libraries had a user interface and were suggesting certain books or types of books for "filtering" of some sort, yes, they would be wrong to do so and it would be very different from simply labeling the books according to their content. --213.196.212.168 23:22, 23 August 2011 (UTC)
My library provides exactly that filter in their online user interface. They also provide filters according to format, subject, language, publication date, and more.
You are claiming that libraries do not provide labels that readers can use to filter the contents of the library because you think the ALA prohibits it. I am proving that you are wrong: they do this routinely. Their goal is to have "a book for every reader, and a reader for every book", not "read this because the librarian knows better than you what you should read". That means letting the readers ignore and reject the books that they do not choose to read—just like Wikipedia should allow readers to ignore and reject images they do not wish to look at. WhatamIdoing 23:35, 23 August 2011 (UTC)
You are claiming that libraries do not provide labels that readers can use to filter the contents of the library because you think the ALA prohibits it. -- I don't think I said anything like that. In particular, I have not expressed any opinion (nor do I have one) with regard to the ALA, not here, nor in any other section. So you are, in fact, not proving anything I said wrong. More importantly however, you are not even proving wrong what you think I said.
"read this because the librarian knows better than you what you should read" -- You accuse the people opposed to an image filter that relies on project-wide filter categories of an overly authoritative and normative approach? Also, not having an image filter (you may now realize how I keep going back to the Wikimedia situation at hand) does not equal advice, let alone a suggestion or order ("read this"). By contrast, suggesting images for filtering by presenting them to the user in his user interface does exactly that. So you are guilty of what you are wrongly accusing me of. Coincidence?
just like Wikipedia should allow readers to ignore and reject images they do not wish to look at. -- You are, consciously or not, presupposing that there could be no image filter variant that does not rely on special filter categories. Also, the "should" is your personal opinion. --213.196.212.168 23:50, 23 August 2011 (UTC)
A library could well adopt a "pornography" label as a finding aid, helping locate sexually explicit material designed for sexual gratification for those users with that need. But when people say to public libraries that they want to avoid pornography, their objections are almost never limited to that class of works.--Trystan 23:45, 23 August 2011 (UTC)
Labels always work both ways. One patron could use a "pornography" label to find a work, and another could use the same label to avoid the work. I am certain that my local library adopted the "Young adult" label to help people find works of interest to them; I am equally certain that I use that label to avoid works not of interest to me. You should label books anyway.
It is true that most people object to many kinds of works. A person who does not want to see pornographic magazines might not want to see sexually explicit romance novels, either, or possibly any romance novels at all. I do not want to read depressing YA fiction; I also have no interest in similarly depressing works of fiction that happen to be outside of the YA section. It happens that I also have no interest in horror, vampires, romance, sports, or dinosaurs. The fact that I don't want more than one type of book does not mean that it is inappropriate for you to provide neutral, factual labels for all the books. We can provide factual information, like "pornography" or "photographs showing human genitals", without using subjective qualities like "depressing" or "sexually immoral". We have been doing this for years, after all, in categories like Commons:Category:Human penis. WhatamIdoing 17:06, 24 August 2011 (UTC)
The key distinction is that categories like Commons:Category:Human penis haven't been singled out as "potentially objectionable." No matter how factual the image filter categories are, the fact that they've been compiled on the basis of what should (and shouldn't) be deemed "objectionable" renders them inherently non-neutral. —David Levy 21:05, 24 August 2011 (UTC)
I disagree that making a maintenance-oriented list of categories that people might want to filter under a tickbox labeled something like "sex and porn" is inherently a non-neutral activity—or that NPOV applies to maintenance activities rather than to the main namespace—but let me try to focus your attention on this fact:
Commons does not have a neutrality policy. Commons rejected NPOV. NPOV is not required for WMF projects. So your argument sounds an awful lot like "But it might violate a non-existent policy", which is not a convincing argument. WhatamIdoing 17:40, 25 August 2011 (UTC)
1. Please explain how a formal determination of what content is and isn't reasonably regarded as "objectionable" is neutral.
2. Are you suggesting that the feature won't directly affect the main namespace?
3. The image filter system is to be implemented across all Wikimedia Foundation projects, including those for which NPOV is a nonnegotiable principle. 19:02, 25 August 2011 (UTC)
When categorizing images, we don't and won't decide what is "reasonably regarded as objectionable". We will instead decide (for example) if an image is reasonably regarded as a photograph of sexual intercourse, and place it (or not) in Commons:Category:Photographs of sexual intercourse based on that determination. That determination has everything to do with what our typical reader expects to find in a category with that name, and nothing to do with anyone's views on the morality of the images.
When assembling a list of categories for "Images to suppress if the person ticked the 'no sex or porn pictures, please' box", we will not decide whether the contents of a category are "reasonably regarded as objectionable". We will instead decide whether the category is reasonably regarded as containing images of "sex and porn". That determination will have everything to do with what our typical reader expects to have hidden (and expects not to have hidden) if he ticks that box, and nothing to do with anyone's views on the morality of what is or isn't hidden.
This process of implementing the principle of least astonishment is every bit as neutral and factual a process as the one that ALA members use when they list Hustler magazine under "Pornography -- periodicals" (which is exactly what they do with that magazine). You would expect to find Hustler in such a category; you would not expect to find The Last Temptation of Christ in that category. The fact that some people believe one (or the other, or both, or neither) is "objectionable" in some sense is completely unimportant. WhatamIdoing 21:27, 25 August 2011 (UTC)
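For readers trying to follow the mechanics being debated in this thread, the category-to-tickbox mapping described above can be sketched in a few lines of Python. This is purely illustrative: the filter list, category names, and function names are all hypothetical and are not part of any actual MediaWiki implementation.

```python
# Hypothetical sketch: a reader's tick-box (e.g. "no sex or porn pictures,
# please") maps to a list of Commons-style category names, and an image is
# collapsed when any of its categories appears on that list.

FILTER_LISTS = {
    "sex_and_porn": {
        "Photographs of sexual intercourse",
        "Pornographic magazines",
    },
}

def is_collapsed(image_categories, enabled_filters):
    """Return True if the image should be hidden for this reader."""
    cats = set(image_categories)
    return any(FILTER_LISTS.get(f, set()) & cats for f in enabled_filters)

# A reader who ticked the hypothetical "sex and porn" box:
print(is_collapsed(["Photographs of sexual intercourse"], ["sex_and_porn"]))  # True
print(is_collapsed(["Ancient Greek sculptures"], ["sex_and_porn"]))           # False
```

Note that in this model the contested editorial decision is which categories go into `FILTER_LISTS`, not how any individual image is categorized, which is exactly the point of disagreement in the discussion above.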
You're missing the point. Even if the "sex and porn" filter's coverage is 100% accurate and undisputed, its creation — combined with the non-creation of a [something objectionable to someone] filter — will constitute a formal declaration that "sex and porn" is reasonably objectionable and [something objectionable to someone] isn't. —David Levy 23:18, 25 August 2011 (UTC)
"...what our typical reader expects to have hidden..." I don't know that such a creature as a typical reader exists. Personally, if I activated a nudity filter, I would expect all nudity to be filtered, including artistic, medical, educational, and pornographic. If I went to nudity-related categories, I would expect a very different set of images (those primarily or significantly about nudity.) Similarly, we could certainly create a filter for works about pornography, but I think that would be a rather surprising scope for most people who use it. In the library setting, complaints about pornography are almost never about anything I would classify as pornography.--Trystan 02:31, 26 August 2011 (UTC)

Can filtering be done in a way that minimizes the prejudicial effects of warning labels?[edit]

I've been re-reading the Harris Report to highlight areas that I agree with and pin down the basis of disagreements. The report talks about balancing intellectual openness with other goals, which I think is a fair point, and I could agree with it if the language were strengthened. I also retain my inherent dubiousness towards labeling based on its prejudicial power. However, I think I could support a filtering system that identifies 5-10 controversial areas for filtration if it were founded on the following principles:

  1. We acknowledge that warning labels prejudice users against certain classes of content, and are therefore an infringement on intellectual freedom.[10]
  2. In a few extreme cases, this infringement on intellectual freedom is justified, in order to give users control over what images they see, where an objectively definable set of images can be shown to cause significant distress for many users.
  3. The scope of a warning label must be clearly and explicitly defined based on image content.
  4. In order to be a reasonable infringement of intellectual freedom, warning labels must be minimally discriminatory.
    1. They may not have the express purpose of filtering any group of people identifiable on the basis of race, national or ethnic origin, colour, religion, sex, age, mental or physical disability, sexual orientation, or other personal characteristic.
    2. Where disproportionate filtering of an identifiable group is likely to result from the implementation of a label, the scope of the label must be crafted to minimize this.
  5. We acknowledge that any system of warning labels will be inherently non-neutral and arbitrary, reflecting majority values while being over-inclusive and under-inclusive for others, as individuals have widely different expectations as to which, if any, groups of images should be filtered, and what images would fall within each group.
  6. We acknowledge that introducing warning labels, despite being for the express purpose of allowing personal choice, empowers third-party censors to make use of them.
  7. Categories are not warning labels. Because the task of labeling images that contain any controversial content is fundamentally different from the classification process of describing what an image is about, the warning label scheme used for filtration will be kept separate from the category scheme used to organize and describe images for retrieval.

I think the above principles could lead to the establishment of workable warning labels. It acknowledges this as an intellectual freedom limitation, which places the discourse within the right area of caution, and acknowledges the cost of every label we add. It also seeks to minimize the worst effects of warning labels in terms of prejudicing the user, namely, targeting or implicitly disadvantaging certain classes of people.

So what labels could potentially meet these criteria? Well, I think the following might:

  1. Nudity, including any depictions of buttocks and genitalia. If we include nipples, we include both male and female (i.e. providing a filter that applies to Topfreedom and not to Barechested would not be minimally discriminatory.) We also would not distinguish between artistic, educational, or pornographic depictions of nudity, as such distinctions are not objectively definable.
  2. Wounds, medical procedures, and dead bodies.
  3. Sacred religious depictions, such as depictions of Mohammed or temple garments. But not including practices of one group of people which another group feels to be blasphemous, sacrilegious or otherwise offensive.

The major con of the above principles is that they will lead to categories which are perhaps not the best match we could develop to meet user expectations (e.g. a lot of people would probably prefer to filter female nipples but not male nipples, or entire minority topics like homosexuality.) This is by design, as it flows inherently from valuing the principles of objectivity and minimal discrimination above user expectation. It's also likely to be not all that culturally neutral (Though I confess that I don't understand how any set of warning labels could be neutral at all.)-Trystan 23:31, 26 August 2011 (UTC)

How do you reconcile "never on the basis of physical disability" with "it's okay to suppress images of wounds"? Do the victims of acid-throwing attacks or landmines not count as having a physical disability? WhatamIdoing 17:40, 27 August 2011 (UTC)
That's exactly the sort of discussion that I'd like to see the community address. My thinking of "wounds" was fresh wounds, like battlefield images, as opposed to a filter that hides images of amputees or burn victims. I have real difficulty when we start identifying classes of people as controversial.--Trystan 18:14, 27 August 2011 (UTC)
The unpleasant truth is that some diseases and disabilities really are incredibly disfiguring. There are regular complaints about the images at en:Smallpox. It was a horrifying, disfiguring, disgusting, deadly disease, and the pictures of victims are seriously disturbing to some people, just like the real thing was often seriously disturbing to both its victims and their caregivers. We want to educate, but we have people who are so upset that they close the page to get away from those images, and that means we aren't educating those people. (The images don't happen to bother me.)
I've read that the same was true in real life. Badly marked survivors were discriminated against. They had trouble making friends, forming relationships, and getting jobs. People who caught sight of them frequently reacted with disgust. (Human reactions to skin diseases may have evolutionary roots: our disgust might protect us from contagious diseases.[11])
So, yes, they're real people, and they deserve respect. But, yes, other people, who are equally real but unfortunately squeamish, may be unable to tolerate some of these disgusting images long enough to read the article. I don't believe that we will find an easy solution to this problem. Since we have announced that we don't want special categories solely for filtration (e.g., "Category:Images that are so disgusting they should be filtered"), this may be one of those subjects in which the proposed filter does far less than what some users want. WhatamIdoing 21:24, 29 August 2011 (UTC)
Please consider how well the alternative implementation discussed here would work in such a context. A reader with the filter enabled would visit the article and not encounter any disturbing images without specifically opting to load them. As in the category-based setup, he/she would have the ability to view only some of the images (depending on their descriptions). —David Levy 04:12, 30 August 2011 (UTC)
Especially in this case it would hide the knowledge from the user. Yes, it doesn't look nice, but we want to educate. These images are a big part of that. They allow the reader to get a good understanding of the crucial effects this disease had/has. I would find it unacceptable to hide the images in this context, out of respect for the people who fought against it. --Niabot 05:55, 30 August 2011 (UTC)
WhatamIdoing makes a good point about readers avoiding these articles entirely. Surely, we'd prefer that they avail themselves of the prose.
I oppose a setup in which we define such images (or any others) as "objectionable," but one in which all images are treated identically (and individual users are empowered to decide for themselves) seems appropriate. —David Levy 06:51, 30 August 2011 (UTC)
How would they know what to expect? They read the description, read the text, and have no clue what they will be facing. Out of curiosity they will open such an image, "be shocked", and learn that they shouldn't open images, even though most of them (the other hidden images) wouldn't disturb them. Regarding this article I have to say: anyone who truly wants to understand the article has to look at the images. Otherwise I doubt that he had an interest in this topic to begin with. --Niabot 06:57, 30 August 2011 (UTC)
I agree that sighted persons can best gain an understanding of the subject by availing themselves of both the text and the images (however uncomfortable this might make them), but I disagree with the idea that we should seek to impose this as a mandatory condition (already a technical impossibility). I also disagree with your "doubt that [someone blocking the images] had an interest for this topic to begin with," which projects your thought process onto others. —David Levy 07:44, 30 August 2011 (UTC)
Would you really study medicine if you can't stand the sight of blood? You would either understand only half of it, or you should not study it. Learning only half the truth is mostly worse than never hearing about it. --Niabot 08:46, 30 August 2011 (UTC)
Obviously, a person unable to stand the sight of blood cannot realistically enter a profession requiring him/her to routinely encounter blood. The same isn't true of someone who merely wishes to read about such a subject. Relevant imagery can aid in one's understanding, but your assertion that its omission results in "learning only half the truth" simply doesn't make sense. I wonder whether you'd apply this bleak assessment to a blind reader's experience. —David Levy 16:56, 30 August 2011 (UTC)
A blind reader will always have this problem: he can't actually see it and has to rely on sources that go into great detail. He needs a description by someone who expresses his feelings about such a topic. That is something we can't do, due to the principles of an encyclopedia: short and to the point, neutral judgement, no emotions, ...
That we lack such detailed descriptions is true for many articles, and it can only be improved with better, more detailed articles. A new task like sorting images into new categories will tie up manpower and would have the opposite effect. It doesn't help a blind man or woman. Instead, the new buttons/links could confuse screen readers. Another issue.
You should simply believe me that someone who has a detailed interest in such an article would also be able to accept the images. It is as it is; it's part of the knowledge. --Niabot 17:14, 30 August 2011 (UTC)
1. I strongly oppose the introduction of a filter system based upon "sorting images after new categories." I support the alternative implementation discussed here, which would enable a reader to block all images and unblock whichever ones he/she wished to view.
2. I agree that it's very important that the interface not interfere with screen readers (or otherwise reduce accessibility). Presumably, the additional buttons/links won't appear unless the user enables them (which a blind person obviously wouldn't do).
3. Again, you're applying your thought process to others. It isn't our place to pass judgement on readers. I personally believe that it's best to view the images, but people have the right to decide not to. In such a circumstance, reading the available prose is far better than avoiding the article completely. —David Levy 18:00, 30 August 2011 (UTC)
I agree with the first two points and I partially agree with point three. That someone is able to hide images manually isn't a problem. That he can hide all images by default is also no problem. The problem arises if we start to select, for others, the pictures that are hidden by default. This results in non-neutral judgment (everyone has at least somewhat different opinions), discrimination against content (majority wins), manipulation (the majority is widely spread, and some minority groups can push content in/out [local majority]), endless debates (wasted time), and many more problems (accessibility could be one of them).
You said that we should give people the freedom to hide what they don't want to see. I say that we should not decide for people what they shouldn't see. That is not their freedom; that is our personal judgment about what is acceptable for the public and what is not. --Niabot 18:47, 30 August 2011 (UTC)
I agree 100% with all of the above. I strongly oppose any implementation requiring the community to take an active role in determining what content is "potentially objectionable." —David Levy 19:07, 30 August 2011 (UTC)
I guess we came to the agreement that
  • a feature to personally hide/show images is acceptable,
  • a feature to hide all images as the default is acceptable,
  • a feature with predefined categories is unacceptable,
if we want to keep our goals intact. We came to this conclusion after a relatively short discussion. The ALA did so sixty years ago. They still think that this is the foundation of free knowledge, one that should be kept under any circumstances, if possible. Wikipedia was founded under the same premise 10 years ago.
Now we may ask:
  • Why wasn't the Foundation able to figure this out for itself?
  • Why is a report written by a non-expert and his family the valid and only source for such a critical project?
  • Why was it decided to implement a filter even before asking publicly for other opinions?
  • Why must such a critical decision be the first test case for a "global" poll?
What makes me most curious at the moment is how they want to come to a conclusion after the poll. Will the results be split by language or does the English speaking majority count? It's important, since one of the goals is: "cultural neutrality". --Niabot 19:46, 30 August 2011 (UTC)
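The category-free alternative summarized in the bullet points above (per-image hiding plus an optional hide-all default, with no predefined categories) can be modelled without any shared "objectionable" list at all. The following Python sketch is an illustration only; the class and method names are hypothetical, not part of MediaWiki.

```python
# Illustrative model: every visibility decision is the reader's own. The
# reader may hide all images by default and then reveal (or re-hide) each
# image individually; no central filter category is ever consulted.

class ImagePreferences:
    def __init__(self, hide_all_by_default=False):
        self.hide_all_by_default = hide_all_by_default
        self._overrides = {}  # image name -> True (show) / False (hide)

    def reveal(self, image):
        self._overrides[image] = True

    def hide(self, image):
        self._overrides[image] = False

    def is_visible(self, image):
        # A per-image choice always wins; otherwise fall back to the default.
        return self._overrides.get(image, not self.hide_all_by_default)

prefs = ImagePreferences(hide_all_by_default=True)
print(prefs.is_visible("Smallpox_victim.jpg"))  # False: hidden by default
prefs.reveal("Smallpox_victim.jpg")
print(prefs.is_visible("Smallpox_victim.jpg"))  # True: the reader opted in
```

Because the state lives entirely with the individual reader, this design avoids the project-wide labeling decisions that the thread identifies as non-neutral, at the cost of not being able to pre-hide any particular class of images.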
Niabot, you say that people with "detailed interest" in smallpox won't be put off by the images. Shall we educate only those people with a "detailed interest"? Should only medical professionals learn about it? Personally, I would rather educate every single reader, including those squeamish people who currently close the page out of sincere disgust. WhatamIdoing 19:54, 30 August 2011 (UTC)
If they close the page out of sincere disgust, then they have already seen enough. Reading a well-written article should cause the same effect; otherwise it isn't a good article. Blind man example: you can't change the attitude by hiding facts without making it a lie.
PS: I did not say that hiding images is out of the question. But we shouldn't be the chosen ones who decide what to hide and what not. --Niabot 20:54, 30 August 2011 (UTC)
No, that's not enough information. Closing the page because you are sincerely disgusted by the photo at the top of the article, which is a close-up, full-color head shot of the last-ever person to catch smallpox, does not teach you, for example, that the disease was completely eradicated through vaccination. It does not teach you that people died from the disease. It does not teach you that the scars are permanent. That disgust only teaches you that at some point in the course of the disease, it looks disgusting.
Different people react differently to words and to images. Some people can see images without any effect, but are disgusted by the description. More people accept the description without a murmur and are disgusted by the pictures. Most people don't mind either very much. We need to educate all of these people, not just some of them. WhatamIdoing 16:30, 31 August 2011 (UTC)
Then give them the option to hide any image or no image, but don't play the judge for what might disturb them. It's a very simple solution for the problem. We ensure that the feature is neutral in any way, that it can't be exploited and would have the effect you desire. Any problem with that? --Niabot 16:37, 31 August 2011 (UTC)
That feature already exists.
Users have repeatedly said that the existing image-blocking feature is not good enough.
Users say specifically that it is not good enough precisely because they want someone else to make a guess at what is likely to disturb them, and to filter out such images—and only such images, not 100% of images—before they have the opportunity to be disgusted by them. WhatamIdoing 16:43, 31 August 2011 (UTC)
"It doesn't exist", because most users don't know about this feature, hidden deep inside the configuration of their browsers.
That is your personal opinion on this matter. There is no source given that proves your claims at this point. But it can be proven that letting others judge for you will harm you in the end. The history books are full of this insight, and this story repeats itself again and again. Today it is our turn to make, or not to make, the same error again. Turning on a filter is not the same as deciding which content will be filtered. Filtering for others and not for yourself is not an option.
Additionally, you have so far ignored all the problems that will arise with category-based filtering. They are already mentioned some paragraphs above, so I won't repeat myself at this point. --Niabot 17:04, 31 August 2011 (UTC)
No, the existing image-blocking options do not require the user to do anything "deep inside the configuration of their browsers". There are several purely on-wiki options. (Please go read en:Wikipedia:Options to not see an image instead of guessing what the existing options are.) WhatamIdoing 19:30, 1 September 2011 (UTC)
  • What percentage of readers actually find this help page or search for it: < 0.0001%?
  • How many people find the option to hide images inside their browser: 5%?
  • How many of the people who found the function actually use it to block images: 0.1%?
  • How many browsers allow blocking images from only a single domain without plugins: 0?
  • How many people who found a way to block all images from Wikipedia are complaining? 0.00000001?
I know this page, and I know that it is visited by 12 people a day on average [12]. A big number, right? How many of the 2,200 daily visitors [13] of Futanari complain about the image? < 1 a month?
The numbers look a bit funny, right? --Niabot 22:01, 1 September 2011 (UTC)
The setup discussed here (which would be far easier to implement than one based on categories/community judgement) would provide a solution significantly more robust than simply disabling/enabling images via one's browser. I don't know how advanced the various add-on scripts are, but very few users even know that such a thing exists (let alone how to use one). So it's unreasonable to equate this with anything available now.
I don't doubt that some readers would prefer to automatically filter only the images that they deem "objectionable" (and many probably have similar feelings regarding text). For the reasons discussed, this is neither feasible nor compatible with most Wikimedia Foundation projects' core principles. —David Levy 19:48, 31 August 2011 (UTC)

Categories for muslims[edit]

my proposition[edit]

  • photos and images (and thumbnails of videos and of gif/flash/silverlight/html5 animations, and ascii art) of women, for men
  1. do not hide any women
  2. hide all women except women covered at least from knee to shoulder and neck
  3. hide all women except women with only the face, hands, or feet uncovered
  4. hide all women, including images where any part of their body is visible

explanation of the first 2 degrees: somebody may consider that a first glance is allowed and that images should then be hidden manually. And perhaps general naked-men filters can be used instead of them, and these degrees may be removed from the islamic filter levels.

explanation of children's images: women and men mean all women and men, including children and babies.

explanation of images where it is not clear whether it is a woman or a man: they should be rounded toward being hidden.

some people may think it is allowed for men to look at little girls who are not in "hijab" ("hijab" means only face, hands, and feet visible) but are "moderately" clothed, I mean, covered from knees to neck and shoulders, including knees and shoulders, or some people may even think looking at more naked girl children is ok. I have found that that idea is based on a weak hadith, and I do not want to make the categorisation more complex. And that hadith does not even say anything about little girls. Probably, if it is a true hadith, it just means something like that children should not be scolded much...

explanation of being covered: clothing should cover the form of the body part as cloth can do; clothing should not sit too close to the body. If it does, let it be considered uncovered by these degrees of mine; for example, an arm with a skinny sleeve should be counted as an uncovered arm. And clothing should not be transparent; for example, the arm should not be visible through the clothing.

  • photos and images of men for women
  1. do not hide any men
  2. hide all men except men covered at least from knee to navel
  3. hide all men except men covered at least from knee to elbows and neck
  4. hide all men, including images where any part of their body is visible

(see explanations in the section of images of women for men) explanation of levels with covering more than from knee to elbow: it is not said clearly in the quran at what women should not look, so these levels are also possible.

  • photos and images of men for men
  1. do not hide any men
  2. hide all men except men covered at least from knee to navel

(see explanations in the section of images of women for men)

  • photos and images of women for women
  1. do not hide any women
  2. hide all women except women covered at least from knee to navel
  3. hide all women except women covered at least from knee to shoulder and neck

(see explanations in the section of images of women for men)

  • photos and images of the thighs of any men/women or animals
  1. do not hide any thighs (except those already hidden by other filters)
  2. hide the thighs of any animals, including people, but not the thighs of insects
  3. hide the thighs of any animals, including people, also the thighs of insects, also pistils and stamens, i.e. the reproductive organs of plants, also the reproductive organs of other living creatures

explanation of the level of hiding the reproductive organs of all living creatures: I think the hadith about not looking at the thigh also shyly refers to reproductive organs. And the hadith allowing one to look at the thigh of a locust is a separate hadith, so, if it is not true, then such a policy is also possible. I deleted it, but now I bring it back and strike it out. A strong argument not to use that level: if it were so, there would be a hadith about it. Though I am not sure; maybe in the time and place of Muhammad there was simply no culture of giving flowers or looking at them, nor artificial growing of them, nor were there photos, and so there was not enough need to say anything about that.

  • photos and images of any animals (and people)
  1. do not hide any animal (nor people)
  2. hide animals that are big and clearly visible; some criterion should be selected, for example, a size in any direction of more than 10 pixels
  3. hide all images where any animal or person, or any part of them, is visible

--Qdinar (talk) 11:15, 10 January 2015 (UTC)

Usage of the filter[edit]

Proposal: Filter is enabled by default[edit]

I am proposing that, by default, the image filter is enabled (all potentially offensive images are blocked). There are several reasons why this is preferable:

  • It is in keeping with the "principle of least surprise/least astonishment". Users should not have to take action to avoid seeing images which may offend them.
  • Some of Wikipedia's critics have long complained about offensive images. Unless those images are blocked by default, the criticism will continue. Having potentially offensive images blocked by default will silence that criticism.
  • Schools and some libraries are likely to want potentially offensive images blocked by default.
  • Many institutions with shared computers do not save cookies for security reasons, so any setting would be gone the next time the browser is started. This poses a problem for institutions which require potentially offensive images to be blocked, unless the filter is on by default.
  • Wikipedia vandals often place offensive images in templates or articles. While users viewing en:Fisting may not be surprised to see a photograph of a man being anally fisted, they would not expect to see it on an unrelated article, which is what happens with this type of vandalism.
  • The misuse of images (as opposed to the deliberate abuse of images by vandals), coupled with the potential for users, especially younger users, to be unaware of alternate meanings of terms, means that users may be surprised by images that they did not expect to see. Blocking offensive images by default mitigates this possibility.

Therefore, I propose that implementation of image filters be done in such a way so as to enable image filters by default. Unregistered users could turn it off with the dialog shown in the mock-ups. Registered users would be able to change it via a preference that they set once. Note that I am not proposing any change to which images are filtered or how they are classified - those remain issues for the WMF to work out. When I refer to "potentially offensive images" I simply mean images which would be filtered if a user had chosen to use the filter (there is no difference in what is filtered between the opt-in or opt-out implementation). Delicious carbuncle 20:52, 16 August 2011 (UTC)

  • Support - as proposer. Delicious carbuncle 20:52, 16 August 2011 (UTC)
  • Support - flickr is probably the largest source of free porn that is hoovered up and stored here, and by using simple opt-in filters they still manage to keep the place relatively porn-free for those that don't want to encounter it in specific circumstances. John lilburne 21:20, 16 August 2011 (UTC)
  • Oppose - My understanding of the original "opt-in" filtering proposal was that, under that proposal, we would tag images as belonging to various (relatively value-neutral) categories (e.g. nudity, spiders, firearms, explosives, trees), provide each viewer with a filter that doesn't block anything by default, and allow the viewer to choose which categories to block. I'm not entirely happy with that suggestion, but not strongly opposed as of yet. In contrast, Delicious carbuncle's proposal is to have certain "potentially offensive images" blocked by default, while giving viewers the option of un-blocking those images. Unlike the opt-in proposal, this one would require us to make a definite value judgment (i.e. regarding which kinds of images are "potentially offensive" to a reasonable person); there seems to be no way of drawing the line here, unless we're going to have the filter block by default all images that any identifiable demographic finds offensive. Sorry, but I don't think it's feasible to expect people from all over the world to come to an agreement on that issue. --Phatius McBluff 21:33, 16 August 2011 (UTC)
    • Comment - However, if we do end up having some categories filtered by default, then we had better allow people the option of adding additional categories to their filters. That's only fair, given the extreme subjectiveness of what counts as "potentially offensive". --Phatius McBluff 21:33, 16 August 2011 (UTC)
      • To be clear, what I am proposing is that any image which would simply be filtered under the opt-out scheme be filtered by default under the opt-in. Whatever system and schema used to classify images for the original proposal would be used in my variation. By "potentially offensive" I am referring to any image tagged as belonging to any of the categories shown in the mock-ups. Nothing is changed except the filter is on by default. Delicious carbuncle 22:32, 16 August 2011 (UTC)
  • Strong Oppose for all reasons given by me in the sections above this poll. --Niabot 21:48, 16 August 2011 (UTC)
  • I can't think of a worse way of using Wikimedia resources than to help libraries keep information from their users. Also, it is impossible to implement in a culturally neutral way... Kusma 22:31, 16 August 2011 (UTC)
    • Again, there is no difference in implementing classification under my proposed variation, simply that filtering is on by default. If your issue is with the idea of a filter or with the difficulties of classifying images, please do not comment here. Delicious carbuncle 22:35, 16 August 2011 (UTC)
"All potentially offensive images" would include all images of people for a start, and I suspect all mechanically reproduced images (photographs etc.), and quite possibly all images. Rich Farmbrough 22:47 16 August 2011 (GMT).
OK, I stand corrected. There are two culturally neutral ways to filter images: all or none. Kusma 22:48, 16 August 2011 (UTC)
Once again, I am not proposing any changes to which images are filtered or how they are classified, only that the filters are on by default. If you have concerns about how image will be classified or who will do it, take them up with the WMF. If you have an opinion about whether the filters (which will be implemented) are on or off by default, feel free to comment here. Otherwise, your participation in this discussion is entirely unhelpful. Delicious carbuncle 23:03, 16 August 2011 (UTC)
I strongly suggest that they are disabled by default or they will never be implemented. There is one big, open and time-consuming question: Which image is offensive to whom? "Personally I hate flowers and numbers, please block them all in the default view, because I often change my workplace..." --Niabot 23:36, 16 August 2011 (UTC)
Let me then rephrase. Oppose because in the event that filters are implemented they should, and undoubtedly would, include a filter for the human form "it is feared by many Muslims that the depiction of the human form is idolatry", all human and animal forms - certain hadiths ban all pictures of people or animals - and for all representations of real things "You shall not make for yourself a graven image, or any likeness of anything that is in heaven above, or that is in the earth beneath, or that is in the water under the earth". Rich Farmbrough 23:31 16 August 2011 (GMT).
What a load of crap historically, and currently. Damn I've not seen such ignorance outside of BNP and EDL lies. John lilburne 06:55, 17 August 2011 (UTC)
I guess WP:CIVIL doesn't apply on Meta? You may wish to go and purge this "crap and ignorance" from en:Wikipedia - and possibly stop off for a chat with the Taliban and advise them of the errors of their ways in banning photography. Or perhaps you could credit people with the knowledge that Persian illustrations such as "Why is that Sufi in the Haman" contain figures which were permissible where and when they were created, but would not be by other groups of Muslims at other times in other places. Since the prohibition is derived from hadiths rather than the Qur'an, it is certain Sunni sects that have observed it, while other groups have not. To generalize from a couple of specific localised examples to a faith of a billion people over 14 centuries is an absurd leap. Rich Farmbrough 23:54 17 August 2011 (GMT).
You wouldn't be trying to censor words here now would you? Back on point every Muslim country has TV with images of people and animals, all have newspapers containing images of people and animals, all have billboards with local politicians, street demonstrations have people holding aloft images of religious leaders, or dead martyrs. Walk around in a Muslim tourist spot and the women in niqab and burka are taking photographs of their family. ban all pictures of people or animals crap. John lilburne 00:12, 18 August 2011 (UTC)
Just because every country (currently) does or has something, it does not follow that there are no communities that dislike, condemn or prohibit it. Rich Farmbrough 12:06 18 August 2011 (GMT).
There is no significant number of people who are offended by general images of people and animals. If such people exist then they must have a very unhappy time wherever they live, as there is hardly anywhere with no images of people or animals, and if they are truly offended by such images they won't be browsing the internet. So excuse me if I don't dance off into la-la land with you. John lilburne 14:06, 18 August 2011 (UTC)
Interesting that you feel the need to be rude in every comment you make. However it might still be worth your while to read, for example, en:Islamic aniconism, to consider that there are other ways of delivering the projects than Internet (some of which WMF representatives have specifically discussed in relation to similar minority religious groups), to further consider that it is possible to use the Internet with all images suppressed, and one positive feature is that this might allow access to a reduced subset of WP where nothing was available before. Alternatively you can continue to deny the existence of such groups, or claim that since they are small they don't matter, and that I am a loony to consider that "all potentially offensive images are blocked" is a very broad, indeed overly broad proposal - but you'd be wrong. Rich Farmbrough 00:13 19 August 2011 (GMT).
  • Oppose: If you implement the filter, make it off by default, otherwise it would further marginalize images seen as controversial by those in power, something I'm opposed to on anarchist grounds. Cogiati 01:37, 17 August 2011 (UTC)
How is that an Anarchist perspective! As an Anarchist myself, I don't think ANYONE should be dictating what I MUST see any more than someone should dictate what I MUST NOT see. The solution that allows the most freedom of choice is to allow me to decide whether to view the image or not. John lilburne 11:46, 17 August 2011 (UTC)
It is your choice to filter some categories or not. But it is not your choice which image belongs to which category. Some might still offend you and some might stay hidden even if you hadn't any objections to view it. --Niabot 18:35, 18 August 2011 (UTC)
  • Oppose – Forcing users to opt out is far more intrusive than granting users the ability to opt-in. --Michaeldsuarez 02:56, 17 August 2011 (UTC)
  • Oppose. Delicious carbuncle: You seem to be under the impression that the idea is to compile a finite list of "potentially offensive" subject areas and offer the option of filtering them. This is incorrect. An image "offensive" or "controversial" to one culture/individual is not necessarily viewed as such by another. "All potentially offensive images" = "all images". The "categories shown in the mock-ups" are merely examples.
    For the record, I strongly oppose the plan on philosophical grounds. Your suggested variant, in addition to being even more philosophically objectionable, isn't even viable from a technical standpoint. —David Levy 03:45, 17 August 2011 (UTC)
    You appear not to have read what I wrote. A filtering system is being implemented. I am simply proposing that the filters are on by default. Any issues with deciding what to filter exist in the current implementation. Delicious carbuncle 10:59, 17 August 2011 (UTC)
    You appear not to have read what I wrote (or what Phatius McBluff wrote). The proposed system would not include a single set of filters applicable to all cultures (an impossibility) or to one particular culture (a non-neutral, discriminatory practice). It would, at a bare minimum, encompass numerous categories widely considered controversial/offensive in some cultures and perfectly innocuous in others.
    "The filters are on by default" would mean that everything categorized within the system would be filtered for everyone by default. Some religious adherents object to images of women without veils or women in general, so all images of women would be blocked by default. That's merely a single example.
    For the plan to even conceivably succeed, "deciding what to filter" must be left to the end-user. (Even then, there are significant issues regarding how to determine the options provided.) Your suggestion relies upon the existence of a universal filter set, which is neither planned nor remotely feasible. —David Levy 18:54, 17 August 2011 (UTC)
  • Oppose. Just because you're playing on a slippery slope over shark-infested waters doesn't mean you have to dive off it the first minute. Look, I'm not completely ignorant of the religion of the surveillance state; the concept is that the human soul is a plastic card issued by a state office, and all of the rights of man emanate from it. For the unidentified reader, who doesn't run Javascript or cookies, to be able to access specialized information (or eventually, any information), let alone post, is a Wikipedia blasphemy against this faith no less extreme than the Muhammad pictures are against Islam. And so first you feel the need to make the image hiding opt out by default; then limit it to the registered user; and eventually only such registered users as are certified by the state, because your god commands it. After that it would be those specialized articles about chemistry, and so on. But Wikipedia has been defying this god since its inception, so I have high hopes it will continue to do so. Wnt 05:45, 17 August 2011 (UTC)
  • Support, as a parent in particular. Currently I have all of the WMF's websites blocked on my daughter's browser, which is a shame because she loves learning and wikipedia would be a good resource for her if it weren't for problematic images that are generally only a few clicks away. --SB_Johnny talk 12:30, 17 August 2011 (UTC)
I'm disturbed to read your vote, because if you can block WMF websites on your child's computer you must be running some sort of "nannyware", but I would expect any remotely professional censorship company to be able to block out the sort of images being discussed here while allowing through the bulk of Wikipedia material. What do they do in schools which are required to run such software? Wnt 14:41, 17 August 2011 (UTC)
Why would you be "disturbed" by a parent using nanny software? Or (say) a preK-8 school, for that matter? --SB_Johnny talk 16:39, 17 August 2011 (UTC)
There is no software that will reliably detect inappropriate images (whatever the criteria for inappropriate are). Thus it is simpler and less error-prone to block the entire site. John lilburne 15:03, 17 August 2011 (UTC)
Yup. I seriously doubt the sites will come off the red flag lists until at least some effort is made to change the defaults. OTOH, from other comments on this page about "cultural neutrality" and "radical anti-censorship", the categorization and tagging efforts might fail anyway. --SB_Johnny talk 16:39, 17 August 2011 (UTC)
  • Strong oppose Oops, I think we slipped. Our boots are losing traction. Internoob (Wikt. | Talk | Cont.) 17:33, 17 August 2011 (UTC)
  • Oppose, for reasons outlined in previous comments on this talk page. If I oppose opt-in filtering, it's likely of little surprise that I oppose opt-out filtering. If we must have image filtering, it should never be opt-out.
  • Support. No point to doing it any other way. Why would a reader who is offended by an image create an account in order to hide it in the future? They'll simply leave. So much for disseminating all of human knowledge (like the WMF actually does that anyway). – Adrignola talk 00:06, 18 August 2011 (UTC)
    Do you advocate that all images (or at least all images known to offend large numbers of people) be suppressed by default? If not, where should we draw the line? At images that seem "offensive" to you? To me? To whom? —David Levy 01:16, 18 August 2011 (UTC)
    Flickr, picassa, facebook, ipernity, youtube, have vast amounts of content that is considered as acceptable for their international audience, flickr has a huge amount of sexually explicit content that is filtered too. Some of the most prolific uploaders of sexually explicit content to Commons are sourcing the content from flickr. Flickr's rules of acceptability are crowd sourced. Why do you think that WMF sites would be unable to distinguish acceptable from unacceptable in relation to the same audience? John lilburne 06:43, 18 August 2011 (UTC)
    Flickr is a commercial endeavor whose decisions are based on profitability, not an obligation to maintain neutrality (a core element of the nonprofit Wikimedia Foundation's mission).
    Flickr (or any similar service) can simply cater to the revenue-driving majorities (with geographic segregation, if need be) and ignore minorities whose beliefs fall outside the "mainstream" for a given country. The Wikimedia Foundation mustn't do that. —David Levy 07:33, 18 August 2011 (UTC)
    And yet flickr has far more controversial images than Commons, is the place where Commons sources much of its material, and is where the minority ero community and the majority non-ero communities coexist side-by-side, and where ero-content producers often express support for a filtering system with which they can browse the site with their children or work colleagues and not encounter age-inappropriate or NSFW images. John lilburne 17:29, 18 August 2011 (UTC)
    You're defining "controversial images" based upon the standards prevalent in your culture. In other cultures, for example, images of women without veils or women in general are considered highly objectionable. A website like Flickr can make a business decision to ignore minority cultures (on a country-by-country basis, if need be), thereby maximizing revenues. ("Only x% of people in that country object to this type of image, so it needn't be filtered there.") Such an approach is incompatible with the Wikimedia Foundation's fundamental principles.
    Incidentally, Flickr hosts hundreds of times as many media files as Commons, primarily because most fall outside the latter's scope. —David Levy 18:19, 18 August 2011 (UTC)
    Your position is inconsistent. If it is true that unfiltered images of women without veils make WMF sites objectionable, then WMF sites are no less objectionable with unfiltered images of a woman's vagina. Flickr, google, and others are in the business of getting the maximum number of viewers on their sites. Both do so by having a click-through before revealing images that may be objectionable. Logged-in users who have expressed a preference to see certain types of image are not bothered once their preference has been set. John lilburne 19:16, 18 August 2011 (UTC)
    No, my position is not inconsistent. My position is that the Wikimedia Foundation should filter no images by default, irrespective of who they do or don't offend. (I also oppose the introduction of opt-in filtering, but that's a somewhat separate issue.)
    It's true that someone offended by an image of an unveiled woman also would object to an image of a woman's vulva, and your argument seems to be that there therefore is no harm in filtering the latter. You're missing my point, which is that it isn't the Wikimedia Foundation's place to decide where to draw the line.
    In some cultures, a photograph of a fully nude woman is regarded as perfectly acceptable. In other cultures, any photograph of a woman (even if clothed in accordance with religious custom) is considered problematic. You want the Wikimedia Foundation to filter its content in a manner precisely conforming with your culture's position in the spectrum, thereby abandoning its neutrality by deeming the former cultures excessively lax and the latter cultures excessively strict. (And as noted above, this is merely a single example.)
    Flickr, Google and others are in the business of making money. For them, value judgements catering to majorities at the expense of minorities improve the bottom line. For the Wikimedia Foundation, they would violate a core principle. —David Levy 23:28, 18 August 2011 (UTC)
  • Oppose. This would block what search engines see and that would block the ability of those who do not want filtering to use search engines that analyse image content to look for images containing certain items. It's key to the general proposal that the filtering should not affect those who do not want filtering, and on by default would do that. Jamesday 03:25, 18 August 2011 (UTC)
  • I find it kinda difficult to take this proposal at face value. If someone wants to pre-emptively block all images, they can e.g. use AdBlock plus to filter all common image file formats, or they can use this userscript (works like a charm for me). For general image blocking, no separate filter system is required, and therefore implementing a universal default image block provides no benefits but a lot of drawbacks for the majority of people who want to see all or most images by default. --213.168.119.238 17:43, 18 August 2011 (UTC)
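The blanket client-side blocking described in the comment above can be sketched as a short userscript. This is a hypothetical illustration, not the script the IP linked: the `shouldHide` helper, the `upload.wikimedia.org` host check, and the click-to-reveal behaviour are all assumptions made here to show that hiding every image needs no category data at all.

```javascript
// Hypothetical userscript sketch: blank every Wikimedia-hosted image
// in the reader's own browser, letting them reveal each one by clicking.

// Pure helper: should this image URL be hidden? Hiding everything served
// from Wikimedia's media host is an illustrative choice, not policy.
function shouldHide(src) {
  return typeof src === 'string' && src.includes('upload.wikimedia.org');
}

// DOM wiring runs only in a browser/userscript context.
if (typeof document !== 'undefined') {
  for (const img of Array.from(document.images)) {
    if (!shouldHide(img.src)) continue;
    // opacity keeps the element clickable and the layout stable, unlike
    // visibility: hidden, which stops the element receiving click events.
    img.style.opacity = '0';
    img.title = 'Image hidden by userscript - click to reveal';
    img.addEventListener('click', () => {
      img.style.opacity = '1';
    }, { once: true });
  }
}
```

The point of the sketch is the one the IP makes: blocking all images is already trivial on the client side, so a server-side filter only adds value (and controversy) through selective, per-category blocking.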
  • Mmmmm, oppose. People ought to be able to look at pictures first to see what they don't want to look at, and then be able to block them. Rickyrab 18:34, 19 August 2011 (UTC)
  • Oppose – No thanks, most people would like to customize the image filter for themselves. We shouldn't be forcing users to have to opt-out of the image filter; they should make the final choice to "opt-in" to such filtering. Parents can "opt-in" their children, librarians can "opt-in" library computers, etc. Give the readers (and/or the editors) the final decision whether or not to use the image filter; don't enable it by default. mc10 (t/c) 22:53, 20 August 2011 (UTC)
  • Oppose It is bad enough that the "optional" feature, as described, violates the NPOV pillar. More than bad enough. But making *all* so-called offensive categories, as this proposal suggests, from nudity to any image of the human body to any particular "sacred" element of any religion to images of violence to images of shrimp, making all those images hidden by default? I find this suggestion enormously troubling. --Joe Decker 02:38, 22 August 2011 (UTC)
  • Strong Oppose for reasons repeatedly expounded in other, scattered parts of the discussion --complainer 22:21, 22 August 2011 (UTC)
  • Strong Oppose This proposal is just simply Insane! Toglenn 06:25, 23 August 2011 (UTC)
  • Strong oppose, per reasons stated above, this is just a Really Bad Idea. This is allowing the original idea to go down a very slippery slope. --79.156.242.51 13:04, 28 August 2011 (UTC)

Proposal to at least put this in as one of the voting options[edit]

Currently the poll/referendum/whatev doesn't even ask the question (thus "censoring" the poll) ;-). Since it's already mentioned that people can change their votes anyway, is there any reason it can't be added as an option on the poll/referendum/whatev? --SB_Johnny talk 21:04, 17 August 2011 (UTC)

Considering the rather overwhelming opposition to the opt-out proposal (above), I wouldn't think it necessary to put the question in the poll. 92.18.208.159 16:46, 21 August 2011 (UTC)

Circumvention of opt-out by institutional administrators is possible, locking students and employees into a censored world[edit]

Or maybe it should be open-ended, so that whatever has a category can be arbitrarily black-listed and blocked - purple people can block images of striped people, Jennifer Aniston can block images of Brad, and I don't ever have to face the horror of gherkins again. Sure, people will want blinkers, but Wikipedia has traditionally had a policy not to provide them - from day 1, we've been all about enlightenment without political barriers. And Christian school admins can set a permanent non-removable cookie on all kiosk machines to block sex-ed and evolution in their classrooms (and also block the image filter config pages, so the opt-out option can definitely be circumvented! And don't say JavaScript, because they can disable that, too). I don't think people should have a "sort-of-Wikipedia". It dilutes our mission. And just because Google caved in to China's demands at one point (and was rightly castigated for it), doesn't mean we have to. Samsara 22:02, 19 August 2011 (UTC)

It seems like allowing people to hide things from themselves vs. allowing people to hide things from others are two different issues, yet this proposal doesn't make a clear distinction between them. The list of voting questions includes "that hiding be reversible: readers should be supported if they decide to change their minds." Well of course, why on earth would you have a preference that once you activate it, can't be shut off? What it's really trying to say is "should a parent/school administrator be able to set it up so that a child can't unhide images?" or "should one person be able to hide images from someone else, out of that person's control?" Evil saltine 23:03, 19 August 2011 (UTC)
I would be really concerned if I thought what we were building would seriously allow one human being to stop another human being from accessing information as they choose to. But nothing we build is going to be useful to that effort. Repressive regimes and repressive authorities already have plenty of software of their own. This is really just about the basic need for readers to, e.g. 'not see a gross autopsy picture if they really don't want to'. --AlecMeta 00:10, 20 August 2011 (UTC)
To follow up-- if somebody can control my browser's behavior, can't they already prevent me from seeing any or all WM content anyway? Would our building this actually help China censor its citizens in a tangible way, or can they do that already without our help? --AlecMeta 02:46, 20 August 2011 (UTC)
Yes it would help because it does the categorization, and promulgates it for them. And it also, apart from the technical side of things, creates a chilling effect, because someone seen to have the "Chinese Orthodox" filter turned off might be subject to various forms of retribution. Moreover it might be possible to log filter settings. Rich Farmbrough 03:16 20 August 2011 (GMT).
Every organisation and educational institution uses a content-filtering proxy server anyway, and nearly all organisations just block sites they don't support the views of or that are off topic. This feature won't make that situation any worse and might make it somewhat better. Keeping in mind we are just talking about images here, and I couldn't give a rat's arse if I can't see images of dicks or Mohammad; like, honestly, keep it in perspective. If you think you aren't already censored WAY MORE than what this is doing, you're deluded. Promethean 07:11, 20 August 2011 (UTC)
None of what you brought forward makes any sense, Promethean. It's been pointed out that if we build categories designed for filtering, this will help third parties in the restrictions they may wish to build for users whose machines they control. Secondly, just because I'm exposed to the potential of traffic accidents on my way to work doesn't mean I have to go swimming with saltwater crocodiles. Any level of censorship is bad, and just because we already have some, doesn't mean we should introduce even more. Apply that argument to nuclear weapons, and I think you'll see how massively wrong you've been in your arguments. Samsara 03:38, 23 August 2011 (UTC)
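The "helps third parties" argument above can be made concrete with a minimal sketch. All names and data here are hypothetical, invented for illustration; this is not any real MediaWiki API or data format. The point is simply that the same machine-readable category data that powers an opt-in reader preference can be reused unchanged by an intermediary proxy, where the reader has no say:

```python
# Hypothetical sketch only: filter-category data as a feature like this
# might publish it. The filenames and category labels are invented.
FILTER_CATEGORIES = {
    "File:Autopsy.jpg": {"medical", "graphic-violence"},
    "File:Beach.jpg": {"nudity"},
    "File:Kitten.jpg": set(),
}

def reader_hides(image, user_prefs):
    """Opt-in, reader-side filter: hide only what this reader chose to hide."""
    return bool(FILTER_CATEGORIES.get(image, set()) & user_prefs)

def proxy_blocks(image, mandated):
    """The identical data reused by an intermediary: the reader has no say."""
    return bool(FILTER_CATEGORIES.get(image, set()) & mandated)

# A reader who opted in to nothing sees the image...
print(reader_hides("File:Beach.jpg", set()))       # prints False
# ...but an administrator applying the same categories at a proxy wins.
print(proxy_blocks("File:Beach.jpg", {"nudity"}))  # prints True
```

The filtering logic is identical in both functions; only who controls the second argument differs, which is exactly the concern raised in this thread.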

Legal issues and WMF responsibility[edit]

Liability[edit]

File:Babar.svg
Elephant in the throne room, yesterday

The risk here is that we create liability. Currently you get whatever images are on the page; you take a risk that "any reasonable person" would apprehend - that on a page you might see either A. an apposite but offensive picture, B. an erroneous and offensive picture, or C. a vandalistic and offensive picture. Once we have a filter in place, the user with religious views, a sensitive stomach, PTSD, or an expensive lawyer on retainer may come across images that they believe (rightly or wrongly) the filter should hide. We have in effect given an undertaking to filter, which we are not able to deliver on, for a number of reasons possibly including:

Is this a horror image?
  1. Interpretations of the filter
  2. Error
  3. Backlog
  4. Pure vandalism
  5. Anti-censorship tag-warriors
  6. Perception of the image

And that doesn't touch on the consequences if we hide something we should not.

Rich Farmbrough 23:04 16 August 2011 (GMT).

And yet flickr have filters and aren't sued by expensive lawyers on retainer, when porn images leak past the filters. Why is that? John lilburne 11:56, 17 August 2011 (UTC)
My understanding is that flickr terminate user accounts when users fail to apply the filters to their own images correctly, which also removes all their images. How would you implement that here? Samsara 22:13, 19 August 2011 (UTC)
Have you heard of the concept disclaimer? 68.126.60.76 13:48, 21 August 2011 (UTC)
Indeed, and the projects have always resisted adding more disclaimers. Firstly, issuing disclaimers does not necessarily absolve from responsibility; secondly, it can increase responsibility in other areas; thirdly, it establishes a position we may not wish to be identified with. But certainly, if the filter(s) were implemented, disclaimers galore would be required. Rich Farmbrough 18:37 21 August 2011 (GMT).
Yes, a disclaimer would absolve from responsibility. There is no responsibility in this situation. If there was, Wikipedia would already censor all images with nudity or violence. Do you think people regularly sue Google over every instance where SafeSearch fails to block a pornographic image? You can't search anything without accidentally finding porn. I can't even find a disclaimer from Google, other than a note in their help page that "No filter is perfect, and sometimes adult content shows up even with SafeSearch on".
There is no need for "disclaimers galore"; this is utter nonsense. We probably wouldn't need more than one sentence. "Wikipedia is not censored" is hardly less of a disclaimer than "Wikipedia may be censored at your discretion, but the censorship might not always be perfect". 68.126.60.76 11:09, 22 August 2011 (UTC)
Rich is right: merely posting a disclaimer does not absolve you from existing responsibilities. If there is no responsibility in this situation, then a disclaimer is completely unnecessary. On the other hand, if there is a responsibility, then a disclaimer is not a guaranteed get-out-of-jail-free card. To give an extreme example, posting a disclaimer that says "BTW, we hereby disclaim any responsibility for the spambot malware that we're deliberately infecting your computer with" will not prevent you from going to jail for breaking the laws about malware and spamming. WhatamIdoing 17:28, 25 August 2011 (UTC)
Good point. Regardless, I stand by my assertion that if Google can offer an imperfect image content filter without being sued, so can Wikipedia. Notwithstanding that Google clearly has more money than Wikipedia and makes a better target for a frivolous case. 68.126.63.156 02:36, 30 August 2011 (UTC)

If the filter is off by default, the Foundation would probably not lose the "safe harbor" legal protections afforded to ISPs that do not censor their content. It might be possible to have new users and IPs get asked about it if a controversial image is about to be shown without risking those protections. In Islamic countries where we've been blocked, we might even have the defaults set first, although since I'm not an ISP lawyer I have no idea what that would do to the safe harbor provisions. 76.254.20.205 16:38, 24 August 2011 (UTC)

Matters of principles[edit]

Quit trying to fix something that isn't broke[edit]

Wikipedia is one of the most successful sites on the Internet despite not having content restriction features. Let's leave well enough alone and stop tinkering with the open principles that made Wikipedia successful in the first place. Inappropriate images aren't a major issue, but the slippery-slope solution presented has the potential to ruin Wikipedia itself. Jason Quinn 15:24, 16 August 2011 (UTC)

Approaching Wikipedia with a "don't fix what ain't broke" attitude will stifle its ability to progress. Yes, censorship is a delicate issue, but as long as it is opt-in, we are relatively far from the dangerous slippery slope, imho. I can't imagine any scenario where people would be driven away from a website that allows them to filter things they might not want to see. Nobody complains about Google's safe search feature; if you don't want a "safe" search then you keep it turned off. B Fizz 18:08, 16 August 2011 (UTC)
The existence of the categorization system used for censorship would -however- be neither opt-in, nor opt-out. It either exists or it does not.
The existence of this system allows for multiple avenues of attack against neutrality on any particular object in our custodianship. To wit, some attacks mentioned on this page: 3rd party usage of the categorization system in ways in which it was not intended. Legal attacks on categorization of certain items (both pro and contra). External entities able to mobilize large groups of people conspiring to place certain items under censorship (or remove them from censorship), for reasons of profit, trolling, or political gain.
We already deal with some of these problems on a regular basis, of course. I'm not entirely sure why we would want to make ourselves more vulnerable, however.
The software side is fairly easy, and is -in itself- indeed quite innocent. The problem is the categorization scheme that is going to be biting us in the rear for years to come.
--Kim Bruning 15:48, 19 August 2011 (UTC)
But Kim, aren't all of these problems absolute non-issues as long as the filter is strictly opt-in? People can use whatever categories there are to filter images as closely along their own preferences as the system allows at any given moment. --87.78.46.49 15:58, 19 August 2011 (UTC)
Not only are "all of these problems absolute non-issues as long as the filter is strictly opt-in"; the existence of the filter itself is not much of an issue by itself, even if it were opt-out. People are looking at the wrong part of the problem.
I think the root problem lies with the underlying data used to feed the filter. Now *that* is where the can of worms is. Start thinking about ways in which such structured data can be attacked (vector 1) or abused (vector 2). Assume an adverse environment (that is to say: attacks and abuse are givens). Now come up with a viable data design. That's actually very hard!
Note that if we use the existing category system (which is not what is currently proposed AFAIK) that system would come under new pressure, due to the stakes involved.
I think censorship is a high-stakes game, and I'm not sure it is possible to merely skirt the edges. --Kim Bruning 13:05, 20 August 2011 (UTC)
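Kim's point that "the software side is fairly easy" while the category data is where the stakes live can be sketched in a few lines. This is a hypothetical illustration, not the proposed implementation; the image name and category label are invented. The filter itself is a trivial set intersection; everything contentious is in the editable data it reads from:

```python
# Hypothetical sketch: the "easy" software side is a one-line set check.
# The contested part is this dict, which on a wiki anyone could edit.
image_categories = {"File:Example.jpg": {"nudity"}}

def visible(image, hidden_categories):
    # Opt-in filter: show the image unless it carries a category
    # this particular reader has chosen to hide.
    return not (image_categories.get(image, set()) & hidden_categories)

# A reader who opted in to hiding "nudity" does not see the image.
print(visible("File:Example.jpg", {"nudity"}))  # prints False

# An "anti-censorship tag-warrior" (or plain vandal) edits the data,
# and the filter silently changes behaviour for every reader using it.
image_categories["File:Example.jpg"].clear()
print(visible("File:Example.jpg", {"nudity"}))  # prints True
```

Nothing in the filter code changed between the two calls; only the category data did, which is why the data design, not the software, is where the adversarial pressure lands.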
@B Fizz. Wikipedia has progressed just fine with a community-driven approach. In fact, it's hard to imagine it having progressed better. When the community identifies a problem, it fixes it. These "top-down" solutions go against the spirit of the community. As other people have written here, it's only a matter of time before government gets its filthy little hands on guiding the content of Wikipedia once these filters exist. In fact, schools and libraries will instantly apply these filters if given the choice, which will automatically trump the supposed opt-in nature of the filter. This whole issue is a non-issue anyway. I view hundreds of articles a week via the random page feature, and I never find inappropriate material.... EVER. The only place where images exist that some might view as controversial is on medical articles. Those images belong there if they add to the educational nature of the article. You don't not give important information to a patient just because the patient doesn't want to hear it. If there's other content here that's inappropriate, the only way to find it is to go out of your way and look for it. Jason Quinn 20:24, 19 August 2011 (UTC)

24.148.240.176 05:24, 24 August 2011 (UTC) Busy, busy, busy. This is homologous to the four robocalls per weekend one receives from school principals who natter scripts that are available on their schools' web pages and provide bits of information that are old news to all who might care. Technology applied by the dull because it's there. Avert your eyes, readers, in the old-fashioned way you did back in '09!

The Wikipedia I want to contribute to does not have this feature[edit]

Since I can't vote against this... ...I will add my voice here.

Very simply, the Wikipedia I want to contribute to does not have this feature.

The Wikipedia I want to contribute to does not accept that any image is inherently offensive. It does not get into arguments about how much buttock is required to categorize a picture as sexual nudity. It does not implement religious superstitions in software. It does not accommodate readers who wish to experience a sanitized version of reality.

The Wikipedia I want to contribute to is unashamed to stand defiantly in opposition to any culture or system of belief that teaches that certain things may not be seen, or certain facts not known. It has a culture dedicated to telling the truth as wholly and neutrally as possible, without any bias of commission or of omission. It abhors any suggestion of censorship, or the creation of any mechanism that might someday support censorship.

I know I am not alone in this. I believe that the majority of people who are inspired and challenged to help build a free encyclopedia - on its way to becoming one of the great cultural landmarks of human history - are generally not to be found in favor of building any mechanism for restricting free access to ideas. This simply should not be built because it is a fundamental betrayal of the core values of the project.

Thparkth 15:07, 19 August 2011 (UTC)

+1, it can't be said much better than that, now where is the "No thanks, and I object to this feature being implemented" button in the vote? Thomas Horsten 15:34, 19 August 2011 (UTC)
By my reading, voting 0 on the first question will do the trick, although whether they will listen to any of the answers that they don't like is doubtful. — Internoob (Wikt. | Talk | Cont.) 20:51, 19 August 2011 (UTC)