
Talk:Image filter referendum/en

From Meta, a Wikimedia project coordination wiki


The talk page has been split into sub-sections and subpages (still being worked out): if possible, add your new subsection at the end of the relevant main section (see the index below); otherwise add it here and it will be moved later. Direct links to subpages:

Process


Page protection


Hi. I've just unprotected the subject-space page (Image filter referendum). There seems to be some confusion about when it is and is not appropriate to protect certain pages in Meta-Wiki's mainspace. Absent vandalism or some other problematic editing, it is inappropriate to protect pages, particularly with full protection. This is fairly established on nearly every Wikimedia wiki. I've no idea why people are attempting to make exceptions here. --MZMcBride 22:36, 29 June 2011 (UTC)

Thank you... SJ talk | translate
I don't know if someone logged out to prove a point, but that was a constructive edit by an IP. :) The Helpful One 22:48, 30 June 2011 (UTC)
Protected again now. Protection reason: "This needs to stay stable". Why? --Kim Bruning 10:44, 23 August 2011 (UTC)

Who is Robertmharris, and why should we follow his opinion?


Who is Robertmharris, and what is his qualification to suggest anything to the WMF? Is Dory Carr-Harris his wife, and what is her qualification to give the Board direction on fundamental questions? [1] What is their connection to Wikimedia? -- WSC ® 12:32, 21 August 2011 (UTC) Now I read that he is affiliated with the Canadian Broadcasting Corporation, where en:Sue Gardner was coincidentally the previous director. But what makes him an expert in things like encyclopedias? -- WSC ® 15:46, 21 August 2011 (UTC)

What is his qualification to decide whether we have to launch an image filter or not? Aside from the fact that he worked for the Canadian Broadcasting Corporation, whatever that means? -- WSC ® 16:50, 21 August 2011 (UTC)
I mean, maybe he is a philosopher, an ethnologist, or a sociologist? A professor who knows all the works of the Enlightenment best? I mean Diderot, Rousseau, Hume, Kant, and he just forgot to tell us that on his user page? Did he ever read their writings? Did he understand what these fellows wanted to change in the world, and that Wikipedia stands in the tradition of the Enlightenment? Hey, it's possible! He knows classical music. -- WSC ® 17:39, 21 August 2011 (UTC)

Robert M. Harris is a broadcaster and journalist who has worked for the Canadian Broadcasting Corporation for the past thirty years. His daughter Dory Carr-Harris is just finishing her MA in Cultural Studies at Goldsmiths, University of London. (2010 Wikimedia Study of Controversial Content/Archive)
And surprise: Sue Gardner began her career at the Canadian Broadcasting Corporation (CBC) (en:Sue Gardner). It smells bad. --Bahnmoeller 19:13, 21 August 2011 (UTC)

Hm? Why is a journalist competent to decide what the Board's policy on content should be? I've never heard of any newspaper or magazine website having a content filter for its own articles. -- WSC ® 19:42, 21 August 2011 (UTC)
You absolutely should not blindly believe the Harris report, nor should you trust in its conclusions because Harris has some magic infallibility credentials. But nor should you fault Sue and the foundation for recruiting him to investigate and report on the issue. Just read the report and agree or disagree with its content. I think I agree with it-- that is, I agree with what I think it means, though of course, others might interpret it differently. Either way-- rest assured, Harris hasn't "decided" anything for us; our board and our global community decide. --AlecMeta 17:53, 22 August 2011 (UTC)
Just read the report and agree or disagree with its content -- The agreement of the user base with the report is apparently not of any interest to the board. The board members read it and apparently agree with it so strongly that they act on it. --195.14.220.250 18:00, 22 August 2011 (UTC)
The only action they've taken is to call for a global discussion and survey. They listened to _thousands_ of voices before the Harris report, Harris listened to lots and lots of voices, and the board is now doing its best to listen to 20,000+ voices. These aren't the actions of a group that has its mind made up in advance. --AlecMeta 17:13, 23 August 2011 (UTC)
Alec! Don't you think a professor of philosophy or a professor of ethnology, or anybody who had ever before in his life thought about the meaning of an encyclopedia, would present better arguments and a more powerful argumentation than a strawperson of the Board... er, excuse me, an expert in, in what? Classical music. I don't like to be fooled. -- WSC ® 19:56, 22 August 2011 (UTC)

Alec, you are wrong. The board has made a clear decision to introduce a content filter for images. That was made clear by the wording of the so-called referendum. We could only declare whether we find it important or not. I hope that the board will cancel the whole project and will not stand for reelection once the term is over. --Eingangskontrolle 21:31, 1 September 2011 (UTC)


Could someone PLEASE put a highly visible link on the main page of the referendum and encourage users to read this discussion. There are only links to the board's proposal, a one-sided FAQ section, and voter eligibility. This discussion is listed in the box on the right at the bottom (after results) for some reason. How can a voter make a reasoned decision based solely on what the board proposes and a one-sided FAQ? --Shabidoo 22:11, 21 August 2011 (UTC)

+1 to any method that successfully involves more people in discussion. --AlecMeta 17:48, 22 August 2011 (UTC)
Can you please define "main page of the referendum"? I don't find any link on the SecurePoll page, so I guess you mean Image filter referendum, but I still can't see the other links you mention. I couldn't find anything else, so I've changed the navigational template a bit. Nemo 23:18, 22 August 2011 (UTC)
I'm talking about Image_filter_referendum/en. There is no link to the discussion in the main part of the page... only in the box on the right side... below the results link (as though this discussion is only an afterthought, after the results come in). It's bad enough that the questions are written in an ambiguous way and that the FAQ is entirely one-sided... but at the very least there should be a very clear link to this page, and it should be advertised so that users can see that other users have many problems with the image filter before they cast their vote. It is the least that could be done. --Shabidoo 01:44, 23 August 2011 (UTC)

I don't know what I have to do to get any response from anyone. Over the course of several days I have posted this in three different forums/discussions and it is being ignored.

Would someone please link this discussion on the main page of the referendum? Would someone please link this discussion on the main page of the referendum? Would someone please link this discussion on the main page of the referendum?

This discussion is important, and overwhelmingly anti-image-filter, and yet it is not properly linked on the main page but is an afterthought in a referendum with a biased FAQ and biasedly phrased questions. --94.225.163.214 01:17, 23 August 2011 (UTC)

It is linked on the main PIF referendum page – you can get here either by clicking the Discussion tab at the top of the page, or the Discuss link in the box on the right-hand side. As you can clearly see here, all your posts can be found on this page – just search for your signature (IP). --Mikołka 02:06, 23 August 2011 (UTC)
That is NOT enough. The discussion tab is at the bottom of the box ... coming AFTER the results tab. Why would it be there? There are big links to the voting page and the board's questions but not to the discussion. I didn't even see it myself until after I voted. The whole page and all the links (other than the discussion) are very one-sided... this discussion page should be highlighted.
--Shabidoo 02:11, 23 August 2011 (UTC)
Looks like you had no problem finding this page. ;) Anyway, considering your contributions on enwiki, you know where the Discussion tab is. --Mikołka 02:18, 23 August 2011 (UTC)
I just saw that the discussion tab has moved up. Thanks for doing that. Now the only thing left is a proper link. :) --Shabidoo 02:39, 23 August 2011 (UTC)

Why is it so hard to vote?


By golly, why is it so hard to vote, with all the instructions to search for something in the search bar (which, by the way, fails for those who use the HTTPSEverywhere addon for Firefox, because it redirects them to secure.wikimedia.org), and figure out on what wiki you're most active, AND with a link to WikiMEDIA? Most people are active on WikiPEDIA.

Please put a link to http://en.wikipedia.org/wiki/Special:SecurePoll/vote/230 and make voting simple. WTF.

(PS: if this has been suggested already, sorry - I searched this discussion page, but there's no way I'm going to read the enormous amount of debate on it, which after all is on a silly topic - bending Wikipedia to the cries for political correctness of those who, of course, want "the benefit of the children".) Oppose. -- Dandv(talk|contribs) 10:47, 22 August 2011 (UTC)

The problem with putting a link to that page is that it is only one of many hundreds of projects affected by this vote, including over 250 languages of Wikipedia alone. Hopefully, recent clarifications of the instructions have made it a bit easier, but it would be so nice if we could somehow provide a specific link for each user to a project on which he or she qualifies. :/ --Mdennis (WMF) 11:44, 25 August 2011 (UTC)

Refactoring and archiving


I see that several sections have been archived based only on inactivity. This page is too big and has to be split, but there are already quite a lot of repetitions; someone has to reorganize sections by topic and move each topic to its own subpage. Any volunteer? :-) --Nemo 21:02, 22 August 2011 (UTC)

Well, I've just been bold and did it the way I could... Headers and subpage titles can always be changed, and sections can always be moved. It would probably be useful to move some things back from the archives as well. --Nemo 22:18, 22 August 2011 (UTC)

Last page protection rationale


I've probably been able to guess it with some digging in the history of the page despite the very unclear edit summaries, but could someone explain the rationale of the page protection clearly? The last discussion I see about it is #Page protection and as far as I know "high traffic page" is not a valid reason for full protection of a main namespace page (not even for the main page here on Meta), so a note here would be useful. Nemo 22:35, 22 August 20

Yes. There were attempts to move and rename pages after the election had launched, which was causing even more confusion. It was done to keep the page in a stable and consistent state during voting. Sub-optimal, I agree, but it was confusing an already hazy situation. Philippe (WMF) 22:48, 22 August 2011 (UTC)
If the problem is the title, can we restrict only moving? Nemo 23:02, 22 August 2011 (UTC)
No one said they were confused by the proposed rename. Rich Farmbrough 11:15 29 August 2011 (GMT).

Not clever to run a survey in the middle of summer


I don't know if American people usually work 52 weeks a year (if that is the case, Nicolas Sarkozy, the French president, may appreciate it), but in Europe a lot of people take holidays in July or August (or both when they can), and holidays are an opportunity to leave the internet.

For people who keep or use an alternative internet access, this access is often less comfortable than the one used for the rest of the year.

This survey ends on the 30th of August, and the next time I will see my ADSL box, in the opposite part of the country, will be during the night between the 30th and the 31st of August.

Up to last year, at this same period, I would not have been able to answer the survey. In fact, it is a feast linked to my first name that was the occasion for me to see my parents and to download at their house several weeks of emails, including the email mentioning the survey. At present, I am using a 7-inch-screen netbook in a garage, connected by wifi to a neighbour's box, to access this page. It is lucky that French internet providers permit it.

It would have been simpler for a lot of people to wait for the first two weeks of September to start the survey. An alternative solution would have been to keep the survey open for four weeks, from the middle of August to the middle of September. I don't think the absence of a decision about filtering pictures on the 1st of September would have been dangerous for the life of Wikipedia and similar projects. Bech 01:08, 23 August 2011 (UTC)

Ne jamais attribuer à la malignité ce que la stupidité suffit à expliquer ("never attribute to malice what stupidity suffices to explain" - and I know you aren't saying it was deliberate, but some people might wonder given the way this 'referendum' is being conducted). It would not have occurred to most Americans that parts of the world slow down during July and August, since they do not take much time off in the USA (that, like having a functioning healthcare system, would be evil socialism). The only time many Americans stop working is Sunday, when they go to church to pray that God will cover up naked bodies on the internet with JavaScript, just as Adam and Eve (a real couple who lived 6000 years ago) covered themselves with fig leaves (an early forerunner to JavaScript). Seriously though, there is not much point in any of us complaining, as the Board has already decided that this filter will be implemented regardless of the wishes of the community, and most of those 'voting' (this exercise stretches the definition of referendum to breaking point) probably have about 100 edits and have only read the heavily biased summary of this proposal offered here, instead of engaging in an actual discussion, which is the way we used to decide policy on Wikimedia projects. So whether you can participate or not is of no importance to the Board. The outcome is already decided. Beorhtwulf 15:14, 23 August 2011 (UTC)

Definitions


Eligibility and emails

A page has been set up to gather reports of false positives in the emailing campaign: Image filter referendum/Email/False positives

To be or not to be eligible


I received the message "Sorry, you are not in the predetermined list of users authorised to vote in this election." when I went to http://meta.wikimedia.org/wiki/Special:SecurePoll/vote/230 , but I received an e-mail saying I was eligible... strange. Analogue Kid 13:31, 19 August 2011 (UTC)

That usually means that you voted from a wiki where you don't primarily operate. Go to your home wiki and follow the same instructions there, and it should work. :) Philippe (WMF) 13:39, 19 August 2011 (UTC)
I have the same problem, and I'm voting from the place I do primarily operate. Wikiant 15:28, 19 August 2011 (UTC)
Ditto Wikiant's complaint. I can't seem to vote from Wikipedia, yet have made hundreds of 'edits' there. As far as I know, I'm eligible. I don't know why we require the use of more than one Wikimedia Foundation website for eligibility, but I have used the WP Commons as well. Jack B108 18:51, 19 August 2011 (UTC)
A follow-up comment: I was just able to vote while logged in to WP. I'm not quite sure why it didn't work the first time, but the 2nd try went fine. Jack B108 18:58, 19 August 2011 (UTC)

Same here. I have tried on three wikis (English, German, Dutch) and give up. The answer is still no! Sintermerte 18:06, 22 August 2011 (UTC)

Same here too (English, German), giving up too. --HeWhoMowedTheLawn 19:18, 25 August 2011 (UTC)
Do you actually qualify? I find exactly two edits under this username on the English Wikipedia. You must have made at least ten edits (while logged in) before 01 August 2011. WhatamIdoing 21:17, 25 August 2011 (UTC)



This is a perfect example of small-minded bureaucratic thinking!
Because my contributions have been on en.Wikipedia and not Wikimedia, I, too, have been excluded from voting on a topic I have strong opinions about, in spite of the fact I was invited to "Learn more and share your view."
Who are the fools who make a distinction between contributions to the one part of Wiki or the other?
I'm sure everyone who bothers to write here at all likely possesses above average intelligence and certainly has an avid interest in furthering global education and each just does so wherever s/he happens to be without regard for the heading they're under, be it "Wikimedia", "Wikipedia" or whatever other "Wiki".
Frankly, I didn't know I had to divide my hours of contribution between different sub-sections to be considered valuable enough to be granted the right to vote. That's the kind of "Nope, can't do" that the Career Hall Monitors of the world think up.

I may as well speak my mind here:

Making pictures hide-able is so sickeningly P.C.
When will this crippling P.C. scourge ever end?

Perhaps Wiki should allow readers to make entire paragraphs and sections vanish accordion-style from entries out of fear they might find them offensive?
With books, one had to get a glimpse, then shut one's eyes and quickly turn the page. Sometimes the reader could later find the courage to actually revisit the offending image and thereby possibly learn something new about the world. I guess one can't do that if one never knew the object was already censored, huh?
Sadly, Michelangelo's David was included as an example of a hide-able. Still, today?? So we hide away historic and world-famous art now, too, because in 2011 some people still take offense at human bodies? (And yes, even children).
If this community of people, keenly interested in knowledge and learning, doesn't make a stand to continue pushing humanity out of the Dark Ages, who's left to do it?
Teachers are scared of parents, school boards and churches –and American parents have thinner and thinner educations themselves. Everything slides down a notch to the lowest common denominator. Mediocrity.

Maybe entire entries should be excludable?
Perhaps a German grandma should decide if her grandson could ever find a Wiki article on Nazi death camps? Or, if so, who is the know-it-all arbiter who chooses how blandly written and illustrated that entry should be?

So, maybe we should make all text completely malleable, paragraph by paragraph, to parrot back a reader's own view of reality?

This is a far cry from the purpose of an unbiased international repository of knowledge in the classical sense of an encyclopedic reference source.
The fact that people can censor articles before even visiting them, pull a fig leaf over any facts they might not like to know, just caters to ignorance because of cowardice to stand up to controversy.
Where are we all headed if this hand-wringing continues?

One last question:
Who decided that ten contributions (all within one sub-subsection, no less) is the proper criterion to judge whether an individual is worthy of a vote?
What about the man or woman who devoted months to writing a single technical contribution for an entry, say, on quantum mechanics?
Is that person's opinion less valuable or less informed than that of someone who changed who's to whose in fifteen different entries?
Yup. Some people are definitely smarter than others. No question about that. Mykstor 20:20, 24 August 2011 (UTC)

No, Mykstor, you have not been excluded because your contributions are mostly on en.wiki. Most of mine are on en.wiki, and I could vote. You have been excluded because you do not meet the eligibility requirements, which require ten edits before 01 August 2011 on any or all WMF projects. You had made just four edits before that date. WhatamIdoing 19:07, 25 August 2011 (UTC)
You must read the rules to qualify; if you don't meet the basics, you may be disqualified. It is now over. -- Mohamed Aden Ighe 00:19, 1 September 2011 (UTC)

Spam from wikimedia


I have edited Wikipedia 4-5 times in the past. Today I received an email from Wikimedia about the "Image filter referendum". In my estimation this email is spam, as I did not ask for it; in addition, I haven't signed up for any email group on Wikipedia/Wikimedia. Note that in the USA, as I recall, spamming is strictly banned, and the sender has to pay approx. 500 dollars for each spam message. Furthermore, you say in the email "To remove yourself from future notifications"; the problem with that is that I have not signed up for any notification. Could you be so kind as to stop spamming my folder, and where can I turn with my complaints?

Please use the Wikimedia nomail list in that case. --WizardOfOz talk 17:10, 19 August 2011 (UTC)
Yeah, opt out. A fine answer :-/. But that doesn't take away that it is unsolicited bulk e-mail, commonly classified as spam. Zanaq 17:51, 19 August 2011 (UTC)
I found the following page: http://meta.wikimedia.org/wiki/Image_filter_referendum/Email/False_positives and there I don't see any permission to send out a massive 700,000 emails. Has Wikimedia perhaps become the biggest spammer in the world?
The only way you would have received this mail is if you checked "Enable e-mail from other users". If you'd still like to receive e-mails from other users, but not ones about votes like this, then you can add yourself to the Wikimedia nomail list. Cbrown1023 talk 01:51, 20 August 2011 (UTC)
"E-mail from other users" means that a user can click the link on the user page and send an email to that user, one at a time. Using the stored email address via some other tool to send bulk mail is spam, and not email from other users. Zanaq 07:00, 20 August 2011 (UTC)

Who has sent mail to users? As I understand it, this was done in a way which is not open to normal users. Please explain, and give me the opportunity to address all recipients of that mail with my point of view. --Bahnmoeller 10:07, 23 August 2011 (UTC)

The Wikimedia:Privacy policy plainly says, "The email address put into one's user preferences may be used by the Foundation for communication." This means that e-mail messages such as these do not meet any legal definition of spam. You opted in to these messages when you added your e-mail address to your account. You can undo that action by opting out, but you cannot make a credible claim that WMF is spamming you by using your voluntarily supplied e-mail address exactly like they said they would. WhatamIdoing 18:57, 23 August 2011 (UTC)
Like anyone reads that. Anyway, what is legal and what is moral are two different things. It may be legal to send these mails, but mass mailing in such a biased way, while the vote is underway, to users that did not ask for it, doesn't strike me as a very good idea. And it might be that they saw the vote results among the regular editors were not as they wished, so they sent out a mail to all users who are less regularly here, with a text that portrays the proposal in a very positive light. I would be interested to compare the results from before the sending of the mail to the figures after the mail was sent. Zanaq 11:20, 26 August 2011 (UTC)
Quote: "Like anyone reads that." Your failure to read the terms of service is not WMF's problem, it is yours. You agreed to the terms when you signed up, and you have been told how to remove yourself from similar e-mails from the foundation. Either do it or quit expecting the world to wipe your backside. People like you who refuse to accept responsibility for your own actions are exactly why proposals like this are being forced onto the foundation. MrZoolook 10:27, 1 September 2011 (UTC)
Nobody knows the figures. They will not be accessible until the voting period is closed. Also, a review of the history of the image filter referendum page will disclose that the email notification was always part of the plan; it has been included in the timeline for as long as the timeline has been published, well before the launch date of the vote. Language for the letters was crafted well prior to the launch of the vote, as it takes quite some time to coordinate translation into multiple languages. --Mdennis (WMF) 14:43, 26 August 2011 (UTC)
Zanaq, the votes are secret ballots being collected by a third-party voting group. The WMF has no idea what the results are. They will not find out what the results are until after the voting has ended.
And Maggie's right: The plan has always included a specific date for sending out e-mail messages. It would have been immoral (called "lying") to say for months that the WMF will notify eligible people by e-mail, and then not to do so. It is not immoral to follow your previously announced notification plan to the letter. WhatamIdoing 17:16, 27 August 2011 (UTC)

But it is still interesting to note that the qualification to vote is already at the lowest possible level, with a total of 10 edits. Some users need more edits for just 5 lines of text. And all these perhaps one-time users, who might not have cared about Wikipedia for months, now receive an invitation to vote on this matter. How many users received an email for the latest fundraising? How many users received a letter regarding Vector? How many users received a notice to vote for the board? While it might be legally OK, it is not in the general context. Or to put it another way: it was very unexpected. --Eingangskontrolle 21:51, 1 September 2011 (UTC)

Invitation email caught by spam filter


Hmm, I don't seem to be able to log into meta. Anyway, my referendum email to *@gmail.com was identified as spam. You may have lost a lot of putative votes that way. — The preceding unsigned comment was added by 80.177.155.110 (talk) 10:21, 20 August 2011 (UTC)

From my understanding most of these mails were completely unsolicited. Therefore they are spam. If enough people report it, Wikipedia could possibly end up blacklisted on a lot of SMTP servers. This was a very foolish endeavor that was carried out with all the tact and care of a bull in a china shop. I have lost a massive amount of respect for Wikipedia over this. Pothed 17:26, 20 August 2011 (UTC)
They are not spam; this sort of communication is explained by the Wikimedia:Privacy policy that you agreed to when you created your account: "The email address put into one's user preferences may be used by the Foundation for communication." WhatamIdoing 18:59, 23 August 2011 (UTC)

I got an email about a referendum but I cannot vote!


Hi, I got the email:


"Dear BlitX,

You are eligible to vote in the image filter referendum, a referendum to gather more input into the development and usage of an opt-in personal image hiding feature. This feature will allow readers to voluntarily screen particular types of images strictly for their own accounts.

Its purpose is to enable readers to easily hide images on the Wikimedia projects that they do not wish to view, either when first viewing the image or ahead of time through individual preference settings. The feature is intended to benefit readers by offering them more choice, and to that end it will be made as user-friendly and simple as possible. We will also make it as easy as possible for editors to support. For its development, we have created a number of guiding principles, but trade-offs will need to be made throughout the development process. In order to aid the developers in making those trade-offs, we need your help to assess the importance of each by taking part in this referendum.

For more information, please see http://meta.wikimedia.org/wiki/Image_filter_referendum/en. To remove yourself from future notifications, please add your user name at http://meta.wikimedia.org/wiki/Wikimedia_nomail_list."

I got the same email, but when I go to the actual vote page, I get the message, "Sorry, you are not in the predetermined list of users authorised to vote in this election." Which is it? If we get an emailed invitation to vote, I imagine we're eligible, no? Denzera 19:43, 19 August 2011 (UTC)

If it says your account doesn't exist, you're probably trying to log on to Wikimedia (with an "m") with an account name that's only known to Wikipedia (with a "p"). Note the instructions on the main page for the referendum: "2.Go to one wiki you qualify to vote from. In the search bar, type in Special:SecurePoll/vote/230." You might also want to look up "Unified login" so you can use the same account login on all the sites. (I did all this just now, with 100% success.) Gregory Bennett 20:39, 19 August 2011 (UTC)

I got the same 'ineligible' message. Enough already, just give us a voting site with a radio dialogue voting doodad...

Radiojonty 21:50, 19 August 2011 (UTC)


What site have you guys edited? Is it Wikipedia, Wiktionary, Wikimedia, etc.? You are only eligible to vote on that site. Go to whatever site it is and log in. If you can't log in, make sure you have typed your username correctly (it is case-sensitive). Black.jeff 22:29, 19 August 2011 (UTC)

How am I supposed to know for which site I was classified? Since Wikimedia knows, they should include a direct link in the email. --Ullrich.c 05:58, 20 August 2011 (UTC)

I've edited Wikipedia and got the email saying I'm eligible. But when I log in to Wikipedia and go to 'Special:SecurePoll/vote/230' there, it still says I'm not eligible. This is silly; give us a proper voting system.

I'll add my voice to the number of people who got an e-mail saying they were eligible, but when I go to Special:SecurePoll/vote/230 (while being logged in) the web site claims I am not eligible. I have almost exclusively edited on Wikipedia, and that is where I should vote, then. --NiklasBr 12:32, 27 August 2011 (UTC)


When I tried to log in: wrong password; when I attempted to reset the password, I received an error stating my username did not exist (SNAFU?).

I would have voted NO. I probably will not vote (I'm lazy; who knows when I'll be able to resolve login issues).

FYI: turns out there is an opt-out option for all http images in most browser settings.

I sometimes find real-life visual input offensive. It turns out that not seeing offensive images is as easy as not looking at them. If I don't want to see the leper across from me on the bus, all I need to do is look away; the same goes for the little girl in the 6" skirt (both images I consider disgusting).

In summary, NO TO CENSORSHIP. The primary purpose of any encyclopedia should be the truth. Let the censors choose not to include Wikipedia before Wikipedia compromises its integrity to include the censors' wishes.

To reiterate: WHOLE TRUTH - CENSORED IMAGES = INCOMPLETE TRUTH

Some people prefer a partial truth, and there is nothing wrong with that as long as they do not impose it on others. The opt-out of images seems intended so institutions can remove partial content (images) from Wikipedia and redistribute that (partial) information (to students etc.).

If it was for bandwidth limitations I would totally be for it. But, let's be honest, this censorship request is not for bandwidth limitations and 64k isn't enough memory for the home computer! ;-)

Why a vote?

Isn't it possible to add a checkbox in the Wikipedia profile to ask whether I want to see the photos unfiltered? Or is it not possible for technical reasons? If it is possible, could someone answer in German? My understanding of English isn't so good, thanks! --Stoppoker 20:45, 19 August 2011 (UTC)

Of course it is possible to create the checkbox. The question is, "Shall we create a checkbox to ask you whether you want to see photos clear?" Some people (like Pothed, below) think the checkbox should not be created because you might use it for self-censorship. WhatamIdoing 19:06, 23 August 2011 (UTC)

How is it censorship? This is giving people the option to hide content that they don't want to see. If they want to see the image, they can. Censorship would be removing the images or hiding them without giving people the option to see them. Black.jeff 22:29, 19 August 2011 (UTC)

Self-imposed censorship is still censorship. I think it's atrocious and runs counter to the purpose of the site. I'm surprised this idea has made it this far in development. Pothed 22:58, 19 August 2011 (UTC)

Self-imposed censorship and censorship in the wider sense are very different issues. One is imposed on you by a third party, which goes against several freedoms. Self-imposed censorship is you not wanting to see or read something, so you don't. If anything, opposing that goes against freedoms. If you allow people to censor images for themselves, they might then find out more about a particular issue, and thus it can further the goals of WMF. Black.jeff 21:24, 24 August 2011 (UTC)

I agree.


I can't vote


I keep getting email telling me to vote, but I can't vote even when I log in globally from Wikibooks. b:User:Jsalsman 21:16, 21 August 2011 (UTC)

Hmm. You should qualify. Have you tried voting at [2]? If so, and it's not working, can you tell us what message you're getting? --Mdennis (WMF) 21:46, 21 August 2011 (UTC)
Same problem here.
Sorry, you are not in the predetermined list of users authorised to vote in this election.
Not possible to vote :( --E7 10:12, 22 August 2011 (UTC)

It worked after I logged in to Meta with my real name. James Salsman 16:28, 22 August 2011 (UTC)

It worked for me after trying it a few days later... Silly stuff. --E7 09:36, 25 August 2011 (UTC)

Lopsided and Strange


Oppose: This proposal is ludicrous and lopsidedly biased.

I've made more than 10 edits and contributions prior to Aug 2011, but I'm not eligible to vote. However, I did receive an e-mail to participate!

So what gives? How are those who are capable of voting chosen? It looks lopsided to me that it's not open for everybody to vote! Why isn't this open to the general public? It seems to be a limited crowd with a biased point of view trying to make something that everybody else will have to live with. That's unfair, to say the least.

And as for the topic being voted on, censoring images: come on, people, why should Wikipedia, an information source, be wrapped up in such a discussion in the first place? The internet is full of all kinds of information and images. If one doesn't like seeing such images, then one shouldn't go to such sites; or, if it's certain images of certain types, one should use filtering software on one's own PC to filter them.

Having such a Wikipedia committee take up such a controversial issue in the first place makes me wonder what the real reason behind all of this is. Wikipedia should spend more time working on how to get more information to more people and not on censoring certain information from certain minorities of people as I'm sure the majority would NOT go for such a proposal.

Again, I go back to my lead point... why isn't this voting open to the public? I presume that if it were open to the public, the minority trying to get this pushed through would lose out... and thus the bias introduced on Wikipedia's part by not allowing the majority to vote only reinforces such tactics.

Wikipedia should NOT be delving in censorship! It's not Wikipedia-like!

Wbenton 04:27, 22 August 2011 (UTC)

Did you go to en.wikipedia (which is not here at meta), log in, and type Special:SecurePoll/vote/230 into the search box? WhatamIdoing 19:11, 23 August 2011 (UTC)

Why spam e-mails to users who are not allowed to vote?


My apologies for this discussion topic, but why on earth would I receive an e-mail saying that I can vote and everything, while when I go to Special:SecurePoll/vote/230 it simply says that my account can't vote?

seriously, why?

Hoorayforturtles 14:21, 22 August 2011 (UTC)

Did you visit the page on skwikipedia? That is, http://sk.wikipedia.org/wiki/Special:SecurePoll/vote/230 ? --Bencmq 14:26, 22 August 2011 (UTC)
Also, compared with discussion, voting is evil, but non-participation is even more evil. We'll always take votes as feedback, but discussion is even more valuable and informative as feedback. If you can't vote, it's probably because you're not coming from your main wiki. But even if someone actually isn't eligible to vote, we'd still want your opinion. --AlecMeta 15:56, 22 August 2011 (UTC)
Thanks for the skwikipedia link, it worked.
Hoorayforturtles 08:57, 23 August 2011 (UTC)

Proposed change to instructions


The directions currently say, "Go to one wiki you qualify to vote from. In the search bar, type in Special:SecurePoll/vote/230. For example, if you are most active on the wiki meta.wikimedia.org, go to meta.wikimedia.org/wiki/Special:SecurePoll/vote/230."

I think we want to expand this to say, "Go to one wiki you qualify to vote from. You are currently reading a page at meta.wikimedia.org. For most users, you will have to leave Meta and return to your main wiki project to be able to vote. In the search bar at your regular wiki project, type in Special:SecurePoll/vote/230. For example, if you are most active on the wiki meta.wikimedia.org, go to meta.wikimedia.org/wiki/Special:SecurePoll/vote/230." WhatamIdoing 19:19, 23 August 2011 (UTC)

Hi. Sorry that this went unanswered so long. It's really hard to keep up with what's new on the page in the current format. :/ That sounds like a good idea to me; I'll run it by the committee. --Mdennis (WMF) 15:03, 24 August 2011 (UTC)
I have been told that I may go ahead and implement this. :) About to do so. --Mdennis (WMF) 18:10, 24 August 2011 (UTC)

What's a referendum and what's this

A discussion about the title of the page is taking place at Meta:Proposed page moves#Image filter referendum.

Vote is needed to trash the proposal


Please, people, vote against this insanity. Self-censorship is still censorship. How blissful it must be for people to continue being ignorant on topics they find "offending" or "harmful". Also, using child safety once again as a reason for such a system is just maddening. Once again! And in Wikipedia of all places! Isn't the slowly but steadily tightening noose around the neck of the free Internet enough? All reasoning for censoring the internet is always about protecting children or preventing child porn, while the actual results are much wider-ranging in every censorship case ever recorded. Now that self-deluding cancer has reached Wikipedia. This is truly maddening. 0rino 13:17, 20 August 2011 (UTC)

The proposed Image Filter project is a precursor to imposed censorship, as it involves the subjective categorization of images, e.g.: an image of a girl on a beach in a bikini, which would be inoffensive to some, would be categorized as objectionable or even pornographic by religious fundamentalists. The very basis of this project is opposite to Wikipedia's core values, one of which is NO CENSORSHIP. And as noted previously above, if the project is implemented, then '....welcome to image tag wars'.

This proposal is COMPLETELY UNWORTHY OF WIKIPEDIA, which should abide by the firm policy of no censorship. Readers of our projects who view articles on masturbation or the Nanking Massacre should reasonably expect to see images which are sexually explicit or graphically violent; they and their children should not view our works if they can be so offended, since our works and their images are based on codes of verified reliable sources and neutral point of view. Parents are wholly responsible for what materials their children access via the Internet –Wikipedia is not their babysitter.

As to 'surprise' images of nudists riding bicycles (and similar objections): if such images are not in the norm (i.e. most people do not ride bicycles in the nude), then the image is misplaced or irrelevant to the article and should be expunged. If and when an article on 'Nude bicyclism' is created, then images of nude bicyclists are entirely appropriate to that article. The argument of 'surprise' is a complete red herring submitted largely by those of fundamentalist right stripes. Harryzilber 16:20, 17 August 2011 (UTC)

I absolutely agree. I think that it's despicable that the board would take such a heavy-handed approach to such a controversial topic, possibly against consensus. I marked 0 for the first question on the ballot and I encourage others to do so as well. — Internoob (Wikt. | Talk | Cont.) 17:08, 17 August 2011 (UTC)
That's naive though. Suppose that I need to learn about Naturalism (art) - so I type in "Naturism" - and find (to my horror), not some crappy porcelain figurine of a shepherd, but instead, right there at the very top of the page, to the right of the lede - a picture of half a dozen naked people. A simple typo or misunderstanding of a term (that the reader is looking up in an encyclopedia precisely because they don't yet understand it) can result in such problems. Consider that there are some companies out there who might fire someone for looking at the picture at the top of Naturism on company time when they are supposed to be researching Rococo figurines! So precisely because I'm against censorship of Wikipedia - and because I demand the right to have full-frontal pictures of nudists at the top of the naturist page - I'd very much like a "No Porn Please!" filter. SteveBaker 19:10, 17 August 2011 (UTC)
SteveBaker, I like what you're saying. I'd like to add the point, in response to the censorship purists, that this isn't, fundamentally, imposing censorship on ANYONE. It sounds like an opt-in system. Making a list of check boxes (I think that would be the best system) that say "Would you like to block images of: Porn (Scant dress/Soft/Hard)? Mohammad? Gore (Nasty papercut/Wounds/Serious mutilation)?" etc. isn't imposing censorship. It's tabulation of images. Wikipedia? All about organizing ideas, just by defining them. Why shouldn't we allow users to block what they don't like?
In response to the peeps below me, "broden you horizon"? Grammar/Spelling aside, why would I want my horizons forcibly moved out? If I want to try that, I'll do it myself, thanks. And communist China? Not the only people touchy about employee porn. -Reade
You are very easely horrified - but that experiance may broden you horizon. Perhaps you should teach your computer not to accept certain words you type into searchfields. --Eingangskontrolle 20:25, 17 August 2011 (UTC)
If your company fires you for reading Wikipedia then you're likely in Communist China, in which case you'll probably have many more important issues to deal with. Harryzilber 20:33, 17 August 2011 (UTC)

Wikipedia is not intended solely for the twenty-something leftist neckbeard from a first world country. I would hope Wikipedia is used as an educational tool by people who most need it, and if allowing opt-in filtering of pictures of the prophet Mohammed gets a young Muslim child from the Sudan to educate themselves, then that is a far better thing than you getting to feel ideologically pure about "censorship". Which this ISN'T. Unigolyn 08:52, 18 August 2011 (UTC)

Unigolyn: while your concept of education involves the removal of knowledge from our collective works, that is certainly not Wikipedia's, which has had for many years the stated goal of "...a world in which every single human being can freely share in the sum of all knowledge". Not: "a world in which every single human being can freely share in the sum of all knowledge except categories a through m". You're obviously a usurper to that cause.
It's not Wikipedia, or Unigolyn, or you that is forcing information out of the article (if only potentially); it's the culture that says, "No images of Mohammad." What's better, a situation where religious adherents cannot morally (as they see it) view a page, or one where they can? Which is freeing up the flow of info, at least for them?
Young Sudanese students are entitled to view or not view images of Mohammed as they see fit —that decision is definitely not for you to make on their behalf. Hopefully you're not a teacher or in any way involved with an educational system. Harryzilber 21:12, 18 August 2011 (UTC)
The problem is partly that it can't be done in a neutral way. No matter what, we will end up being simultaneously too conservative for some viewers and too liberal for others. It is not the position of an encyclopedia to say "this photo is inappropriate" or alternatively, "you shouldn't reasonably be offended by this photo". The cultural aspects of it add a whole other dimension of POV. You might decide that women in bikinis are not nudity, but that's your culture talking. Your "naturism" problem can be solved in any of several less drastic, more effective ways (disambiguation page perhaps?) but the POV aspects of this cannot. — Internoob (Wikt. | Talk | Cont.) 22:40, 17 August 2011 (UTC)
Please define porn. Does the foot need to be naked to be porn? Does it need to be in a stocking? Or with a shoe hanging from a toe? Under a stream of water? Only if it's a male foot, and how obvious must it be that it's male? Is a naked neck porn, or only when the veil is shown partly lifted to expose it? The knee, porn or not? The lower thigh? The upper? Only the inside? Does male or female thigh make a difference? A male buttock in soccer shorts during a game? A possible bulge from a penis under trousers? A group of nude people? The group only out of a clear artistic or naturist context? A two day old naked in the bath? A 1 year old? 2 year old? 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 89, 90 year old? How much of the nakedness must be visible and how much hinted at or presumed? Are signs of sexual maturity required? Which could be porn and which innocent? This description of things that might be porn, or parts of it? Each of those would be erotic material for some portion of the population. "Porn" is not a readily definable concept since so much is in the mind of the viewer. While I couldn't define porn reliably in a way that covers the range of human experience, I could describe facts about an image that an individual could choose to use to create their own individual description of what they consider to be possibly pornographic. Jamesday 03:04, 18 August 2011 (UTC)

The irony with this objection is that forcing unwanted imagery on users is worse than censorship. Personal choice is not censorship. Why, if it were legal, I think that those crying out "censorship" at this proposal would argue for including child pornography in the paedophilia article, as not doing so would be "censorship". 2.120.227.113 01:07, 18 August 2011 (UTC)

The above contributor wrote "The irony with this objection is that forcing unwanted imagery on users is worse than censorship." That's highly disingenuous. No one is being forced to view Wikipedia, which amply notes its non-censorship policy along with its disclaimers. If you don't want to see articles that include images which offend you, then you have the right to fork off another encyclopedia and stay there. The wiki software is completely free and no one will mind if you never visit this work again for the remainder of your life. Harryzilber 04:22, 18 August 2011 (UTC)
Cannot unsee. If, through ignorance, on the road I usually follow through wikiprojects, I run into something I find offensive, I can't unsee it, not readily. It takes time to forget, and it's not a process I have very much control over. I can, accidentally, be 'forced' to view an image. Why not let me block those images categorically before I run into them? It affects no one else, unless someone is borrowing my account, and really? That's not exactly likely.
There are many more topics that don't violate law but will also fall under the curse of censorship. What's lost is lost, and the WMF is supporting it. --Niabot 01:13, 18 August 2011 (UTC)
This "paedophilia" argument is one of the largest straw men I have ever seen. People who are against the proposal don't necessarily want the WMF to host illegal images. You're also missing the point about censorship: if some POV pusher wanted to tag all the women in bikinis as nudity, for example, they would effectively be censoring "reasonable" content for those who have this feature turned on. And I put "reasonable" in quotation marks because the POV pusher would actually have a case: for some people, this is nudity, which brings me to my next point, that this can't be done culturally neutrally. You might think you know exactly where to draw the line on nudity, but even if you mostly know where to draw the line, there will be people who are more conservative and others more liberal than you. — Internoob (Wikt. | Talk | Cont.) 20:04, 19 August 2011 (UTC)

I completely agree; this proposal is not in the spirit of Wikipedia, and I don't know how such a major change could have gone so far without broad discussions with the community (not just some small "consultation" which nobody will hear about). Instead they simply spring this change upon us and ask for votes, though we are not told whether this "referendum" is even binding. The entire referendum question is written in a very biased way, which makes it sound like a minor technical change; I'm guessing no one who objected to the proposal was allowed to contribute to writing the question (this type of vote would shame a Banana Republic). --Hibernian 06:26, 18 August 2011 (UTC)

It's too late. This vote will not decide whether the feature is implemented or not. If you don't like that, you can complain to the board, or walk, but as I understand it, nothing here matters in terms of the question of whether such a feature will be implemented. This discussion is moot, let's move along. --Joe Decker 02:55, 22 August 2011 (UTC)

I am utterly AGAINST the proposal! - It reeks of censorship! Unless it's a filter, to be built in/activated by adults (on their own computer - and ONLY there) for the protection of minors.

Thus I answer question # 6 with "?"; for "culturally neutral" is an oxymoron. Culture is diverse by nature & definition. To conjure oxygen into carbon dioxide (by magic) while breathing is about all we humans have in common.

Hence my unease at the VERY suggestive example in question # 5 (condone the "ferocity of brute force" versus proscribe a "state of undress"), redolent of the nauseating puritanism which abhors all that could ever so remotely hint at sexuality (or indeed the corporeal). Sintermerte Sintermerte 14:08, 22 August 2011 (UTC)

Survey is ambiguous

It is important that the feature be culturally neutral: as much as possible, it should aim to reflect a global or multi-cultural view of what imagery is potentially controversial.

It's completely unclear whether this is intended to be an inclusive or exclusive addition, i.e. would a Muhammad filter be offered/implemented because one cultural group prefers this, or would Muhammad cartoons remain completely uncensored because only a minority group opposes this? While the potential contradiction between "global" and "multi-cultural" in one of the possible interpretations hints at the intended meaning, this insight is immediately torpedoed by the inclusion of "potentially". This means whoever will be in charge of implementing the outcome of the survey can interpret this question to mean whatever they want. Therefore many people may be uneasy with answering this question. Samsara 22:02, 19 August 2011 (UTC)

It's worse than that, actually - large groups of Wikipedia contributors find censorship as such offensive. And it has long been recognized that tagging images (or other media) for the explicit purpose of censoring them helps out those who want censorship. Thus you have the situation where some people find certain images offensive, while others find that implementing technical infrastructure to facilitate the censorship of those same images is also offensive. Thus you literally *cannot* avoid offending someone. The only choice is whom to offend. Do we want to offend those who object to free distribution of information (including topics that are controversial to someone somewhere)? Or do we instead want to protect those people from offense, but trample all over those editors who believed in Wikipedia's stance on NOT censoring? --Eivind 06:49, 22 August 2011 (UTC)

Petition against the development of an opt-in personal image hiding feature


We, the undersigned, are Opposed to this development.

95.147.60.247 08:02, 20 August 2011 (UTC) (wikipedia:bobcousins)

Please don't do this. Actual discussion is occurring above. Appending one's name to a "petition" without elaboration is far less constructive. —David Levy 08:23, 20 August 2011 (UTC)
Agreed -- although, in fairness, this is really a missing option in the poll. The poll should, if it is to be useful at all, have a question where you're allowed to communicate whether you strongly support, strongly oppose, or something in between this misfeature. The closest thing there now is whether the feature is "important", which isn't the same thing at all. Thus, while I agree with you that adding your name to a list of people who oppose the feature is less constructive, I would argue that the poll itself is thus less constructive. --Eivind 06:59, 22 August 2011 (UTC)

Bias in Referendum Questions - Important/Unimportant vs. Support/Oppose


I like the idea that this feature would go to referendum, as there are strong arguments on both sides. But it is deeply discouraging to read through these arguments, go to the voting page, and realize that the decision has essentially already been made. We are not being asked whether we support or oppose this feature. We are merely being asked whether it is important or unimportant that it be implemented. Even a vote of '0' does not register active opposition, merely disinterest. Personally, I have grave concerns that this initiative is wholly incompatible with Wikipedia's neutrality policy and will be a great boon to third-party censors. Saying that it is "not important" that the feature be implemented is not an accurate reflection of my views.--Trystan 17:58, 21 August 2011 (UTC)

Third-party censors can already use the existing Commons category system. All or most images showing nudity, for example, are already under a nudity category (see "nude women with shoes": the censors can even choose the most meticulous details of our already-in-place censorship system!). We have a violence category with several sub-categories too, and so on. This is the reason why I think we don't need many new categories for this filter system. They are mostly already there. One could think of abolishing the complete category system to make it harder for censors, though... Adornix 18:09, 21 August 2011 (UTC)
I'm not entirely pleased with image-hiding based only on current cats, but it is survivable. This does still open up new avenues of attack, but the security measures (soft and hard) are already in place for current cats. These will have to be strengthened, but at least we won't be on terra incognita.
But do we have any way to stop people making new cats specifically for purposes of filtering (which we know they will)? That will be the killer. :-/ --Kim Bruning 18:29, 21 August 2011 (UTC)
There would need to be new categories (see the example in the previous section). Rich Farmbrough 19:44 21 August 2011 (GMT).
As has been pointed out above, we currently categorize images in an attempt to be neutrally descriptive. This is a world apart from tagging potentially "objectionable" content to empower each user (or a third party) to filter whatever they do not want to see.--Trystan 18:39, 21 August 2011 (UTC)
As far as I understand the foundation's plans, all new filtering categories would have to be as neutrally descriptive as the already existing ones. This is my interpretation of "culturally neutral" as Phobe explains it above. And since the community will have to tag the images, there has to be consensus for every new category and for all categorizations. Adornix 20:27, 21 August 2011 (UTC)
How can the formal determination that x image is "potentially objectionable" and y image isn't possibly be neutral? How can the formal determination that x subject constitutes "sexual" or "violent" content and y subject doesn't possibly be neutral? —David Levy 20:42, 21 August 2011 (UTC)
I don't doubt that we can set standards and use them to label potentially controversial pictures. Immodest dress, religious satire, etc. can be objectively (if rather arbitrarily) set down into working, applicable standards. But the process of applying them - of tagging images to warn about certain types of content - is a very different process from looking at an image and trying to describe what it is substantively about. For example, if a picture has a rather incidental depiction of a same-sex couple that is largely irrelevant to what the picture is about, it would not make sense to categorize it under homosexuality. But if we are in the business of warning people who don't want to look at any depictions of homosexuality, it would need to be labelled as such.--Trystan 20:46, 21 August 2011 (UTC)
"to filter whatever they do not want to see" - How do we know what they don't want to see unless they give us specific parameters? Won't they be upset or at least annoyed when they inevitably find our idea of "nudity", "sex", etc. doesn't match theirs? Evil saltine 21:32, 21 August 2011 (UTC)
If you read the article page describing it with the images, they 1. get to choose whether they accept the categories to block or not and 2. can open an image that is blocked or close one that isn't. Thus, they will be able to set their parameters. Ottava Rima (talk) 21:52, 21 August 2011 (UTC)
So, for example, someone would be able to hide all depictions of Mohammad (and nothing else) if they wanted to? Evil saltine 21:55, 21 August 2011 (UTC)
Who determines which categories to use? (No, we can't simply dump in the entire current list of image categories, which don't come close to covering the "potentially objectionable" content.)
Who determines what images are placed in a given category? —David Levy 22:32, 21 August 2011 (UTC)
I propose that the people who are offended tag the images so the majority of the community won't waste their time. I suggest that no-one who opposes this system should participate in the classification process. Zanaq 17:36, 22 August 2011 (UTC)
Ok, then I suggest that no-one who is in favor of this system should participate in the classification process either. Come to think of it, though, the proponents are the ones who are far less likely to be neutral at deciding between images as objectionable and non-objectionable, while the opponents of the filter are rather neutral when it comes to the images. You know what, actually, only opponents of the filter should be creating and maintaining the filter category (i.e. if the board is insane enough to allow such a Wikimedia-hosted filter category). --195.14.220.250 18:21, 22 August 2011 (UTC)
How should I know what other people find objectionable? I find nothing objectionable, so I won't tag anything. If you find something objectionable, you should tag it. The filter itself won't be neutral: one cannot judge objectionability in a neutral way. The mechanism would be neutral, since anyone can declare anything objectionable. Zanaq 17:22, 23 August 2011 (UTC)
So... every single interest group or individual can tag whatever they like, however they like, with no policy or interference, and all images even remotely questionable can be censored from those who use the filter and/or in libraries, schools and other places where the filter would be implemented for all computers. It may be culturally neutral, in that there will be very few photos that will not be tagged.--Shabidoo 22:18, 27 August 2011 (UTC)
That is correct. Zanaq 10:52, 3 September 2011 (UTC)

Just to spell it out


Oppose, for reasons I expounded above. I also urge other editors, whether in favour or opposed, to vote so we know where the community stands: whether you agree with the proposal or not, I think nobody likes the fact that it is being crammed down our throats. This will also prevent wrong assumptions about where consensus lies. complainer 08:18, 22 August 2011 (UTC)

I don't think we'd be able to accommodate that here, as we've received over 20,000 votes already. --Mdennis (WMF) 12:50, 22 August 2011 (UTC)
People actively discussing here are far fewer; I think the total number of contributors to this page is less than 1% of that. Hopefully, they are a representative sample. --complainer 12:54, 22 August 2011 (UTC)
I think the important thing in the coming months is to ensure that the results are not misused to try to establish evidence of community support. This is not a referendum presenting arguments for and against and then asking whether people support or oppose, and it should not be misrepresented as such.--Trystan 13:31, 22 August 2011 (UTC)
I don't think there is any risk of misrepresentation: the nature of the referendum has been understood by the community, in spite of the fact that the term "referendum" is, as observed multiple times, wildly misleading. The problem with the referendum is that it can establish evidence of community support, but has no way whatsoever of assessing whether the community does not, in fact, support the proposal. The purpose of this head count is to fix this (not really minor) flaw. Due to the vastly different voting procedures, I don't think anybody could possibly confuse the two. --complainer 13:41, 22 August 2011 (UTC)
+1 to the need to take care with the results, and I agree with complainer that the results aren't actually going to wind up misused, for a lot of reasons. The best thing we can do is publish all the results in aggregate, after removing usernames. To me, if one of our project communities reaches consensus that it wants this, then the feature should, at minimum, be implemented on that project. --AlecMeta 16:43, 22 August 2011 (UTC)

We have to note that, given the way the question is asked, the result will be: voters feel it is important to different degrees. Let's hope that 0 and 1 will be interpreted as "not important at all" and "not important enough to spend any more money and manpower on it", respectively. This referendum is a farce, and until a real vote with yes or no options is held, it will not be accepted. --Bahnmoeller 07:41, 23 August 2011 (UTC)

So, complainer, you are trying to make people in favour of the filter think "I support the decision, but the process was bad, so I'll vote against it". That's not the way to govern a project, or a country, or the world. It's like voting in favour of the death penalty just to complain about a country's crime policy. --190.135.94.198 00:53, 24 August 2011 (UTC)
What on Earth gives you that impression? I am trying to make most people against the filter say "I don't support the decision, and the process was bad, so it looks like I voted for it". I don't need to deceive anybody: if you read the posts, and I mean all the posts, you'll easily see that people are mostly against the proposal. There is no real way of seeing that from the way the referendum is formulated right now. I am, by the way, also urging people like you to say "Support" instead of speculating on my intentions: why would I be concerned about censorship if I were to act underhandedly myself? It wouldn't make any sense. --complainer 24 August 2011
No, you'll see that many of the people who (1) figured out where to comment, (2) speak English well enough to join this discussion, (3) care enough to comment at all and (4) believe that they're going to lose—a group that is dramatically more limited than "people"—are against the proposal. Or, at least, they're against what they believe is the proposal.
"People" and "people who comment in a complaints forum" are not the same thing. People who are extremely unhappy and feeling powerless are far more likely to comment than people who think it's a great idea and are confident that they are going to get what they want. If you think that this proposal is so obviously a good idea that of course everyone will support it, you cast your ballot and go about your business. If you think this is a truly horrific idea that will be implemented anyway, then you are motivated to complain here.
This is a very typical pattern. You can see the same pattern right now at the discussions about the Article Feedback Tool: a dozen people are screaming bloody murder about it. The fact that 90% of the (literally) thousands of people who have used it said that they support it is incomprehensible to them. The absence of those thousands of people from the bug-reporting and other complaints-oriented pages has led some of them to basically believe that those thousands of people don't exist (or don't count, since many of them are merely readers rather than experienced editors). They have confused the "number of supporters who are willing to waste time arguing with complainants" with the "number of supporters". WhatamIdoing 18:52, 25 August 2011 (UTC)
Can we stop the assumption war for a moment? And maybe try not to turn it into a (spurious, too: this is not a bug, but an intentional proposal) parallels war instead? This is the entire idea of a head count. As for people being happy or unhappy, let's face it, the accepted practice in democracies is that people who choose not to express their opinion have no influence on decisions; you can't invalidate the results of an election just because you think that people who are happy with the government vote less--although this is, technically, true in most countries. And please, stop telling me that the people who support the decision are a silent majority: many, including a user banned for trolling on wikipedia, are extremely active, vocal, and aggressive here. Which reminds me: if everybody signs his posts somehow, it becomes a lot easier to communicate. --complainer
We are getting a head count; so far, more than 20,000 people have participated in the head count. However, the head count is not happening on this page. I don't know (I doubt that anybody knows: that's the point behind having a third-party vendor collect the votes) what the results will be, but I do know that the participation on this page is not a reliable marker for the actual results.
The reason I mentioned the parallel discussion is because there's already been extensive surveys there, and we have results from several thousand people, which are that about 90% say they support the AFT, 5% say they think it useless, and 5% don't care. But on the discussion pages, the supporters and detractors are evenly divided. The discussion is therefore a completely unreliable indicator of the community-wide view.
Back on this discussion—but also closely paralleling the AFT discussion—I also know that the individual supporters who choose to comment on this page are very confident that the tool will be supported, and that the individual detractors who choose to comment on this page are convinced that their very strong opposition will "be ignored" (to use a typical claim) because a majority will support it. However, we might all be wrong. It could be that the supporters will turn out to be the minority. We'll find out next month. In the meantime, I encourage you not to mistake the views expressed on this page for the views of the whole community. WhatamIdoing 16:54, 26 August 2011 (UTC)Reply
The community supports the filters because all the information they had to answer the questions was a biased FAQ, strangely phrased questions, and a proposal written as though it were a harmless change to Wikipedia and nothing more. If you carefully look at these discussions you will see that there are FAR more people against this "referendum" than for it. --Shabidoo 18:44, 26 August 2011 (UTC)
I know that there are many people here opposed to the filter (not the referendum). Whether those people represent the view of the whole community is unknown.
It appears, however, that you believe the rest of the community will support the filter only because you think they are much stupider than the critics here. I am sad to read that you believe that the supporters' ability to read and understand the referendum is so much less than the critics'. Have you considered the alternative, good-faith interpretation, which is that the support is based on fully understanding and actually liking the idea presented here, rather than them being ignorant, confused and easily swayed by what you call "a biased FAQ" and "strangely phrased questions"? Could it be that the supporters, rather than being stupid, merely have different values than you? WhatamIdoing 17:28, 27 August 2011 (UTC)Reply
@WhatamIdoing: the problem is that the referendum text is misleading; I don't think people reading the referendum:
1 - have thought through what the tagging can mean: I myself originally came to this page with a merely technical concern, namely that the servers would not have been able to bear the added load (a point which remains valid, by the way).
2 - if opposed, have a coherent way of expressing it: while some might actually vote zero on the importance, I expect most others just to abandon the page. People who like the proposal have no brainwork to do: they just have to praise it.
The problem goes deeper than this referendum, down to the shift of Wikipedia away from its consensus-based decision procedure, which began with the restructuring of the WMF Board of Trustees rules into a complicated abomination that, in practice, allows Jimbo to always control its majority (it would be off topic to discuss it here but, if you are interested, read the restructure announcement and follow the various branches of the election procedures). Then there is the matter of form: a decision taken by a former pornographer with Lovelace's syndrome, based on a "study" with no hard research and countless simplifications that amounts to little more than a high school essay, is not something people should accept, regardless of their opinion about pictures. --en:User:complainer
I had the very same experience. I thought the filter seemed harmless and voted for it. I looked at the discussion (as I thought it was only a discussion about how to implement the referendum). I discovered to my horror, that there isn't even anything close to a community consensus on the topic. Very revolting. I've been trying ever since to get a very big and noticeable link to this discussion, and NO ONE listened. I just got them to move the discussion link a little higher. So, to answer your question...it has nothing to do with the cleverness of anyone and everything to do with neutrality in presenting the issue (more than one side) and encouraging and fostering debate (including advertising that there is debate instead of writing a completely one sided FAQ and manipulative questions). I cannot recount how many times I have expressed this. I am positive the results would be different if the discussion page was advertised. --Shabidoo 08:53, 31 August 2011 (UTC)Reply
As with all votes, you are not supposed to vote what you believe some other people want. You are supposed to vote for what you—and only you—personally think is best. This means that even if everyone else in the world thinks this is a great idea, but you hate it, then it is your duty and right to oppose it in the referendum—and even if everyone who bothered to complain on this page hates it (which is far from the case, and I believe we'll find that the people on this page are far from representative), if you personally think it's a good idea, then it is your duty and right to support it in the referendum. It is not your job to vote for what you guess other people want. They've got their own votes; you shouldn't surrender yours to them. WhatamIdoing 16:38, 31 August 2011 (UTC)Reply
It might be his duty, but it certainly isn't his option: he cannot oppose anything in the referendum. He can just say it isn't important, which is certainly not the same. But you know this as well as we do. complainer
Once again...you fail to see the point. It's almost as if everyone cannot see the point. If someone asks me "Hey, there is this girl who is a little short on money, and we are all pitching in a dollar...why don't you help out?". I'll say...of course...here is a dollar. Now...if someone takes me to the side and says...did you know she is a bit of a loser and always needs money and is out of control and manipulates people? I'll think...I should look into this more...see what other people have to say...see if there are various views and ask questions if I can.
This is how I see this referendum. Unless you know you have to click on the discussion, unless you have any reason to doubt the way this referendum was presented (which was entirely biased and one sided), or unless you are just lucky enough to have randomly clicked on the "discussion tab"...you would have NO IDEA that the filter isn't simply some innocent feature that the board is simply trying to get a vote on. If you take a moment and look at the entire presentation of the referendum, the wording of the questions...and the FAQ before I changed it...you just might pick up on what I am saying. --Shabidoo 02:34, 2 September 2011 (UTC)Reply

Proposal to change the questions on this "referendum"


There is no question which asks: should Wikipedia utilize an image filter? There is no way to clearly and definitively say no. --Shabidoo 21:43, 24 August 2011 (UTC)

I do not believe that any changes will be made to the questions in the referendum at this point in the vote. --Mdennis (WMF) 11:48, 25 August 2011 (UTC)Reply
At least the WMF should take into account that there was no "no" option when it starts interpreting the results. --Niabot 11:53, 25 August 2011 (UTC)
Taken into account, and amplified by some of the combinations of freeform comments and votes. SJ talk | translate   01:53, 6 September 2011 (UTC)Reply
AlecMeta writes above: "if one of our project communities reaches consensus that it wants this, then the feature should, at minimum, be implemented on that project. --AlecMeta 16:43, 22 August 2011 (UTC)" -- I like this formulation of community empowerment, and think one of the reasons to improve the poll data is to find out which communities have consensus there. But please note that a sticking point here is how much time it takes to develop a workable solution. If only one community wants it, the work would likely have lower priority. SJ talk | translate   02:00, 6 September 2011 (UTC)
I don't think that's a reasonable interpretation of the ballot questions. If you think this should not be implemented, then your answer to "How important is it for the Wikimedia projects to offer this feature to readers?" is "Zero: it is not important for WMF to offer this feature".
What isn't offered is the more subjective, "How strong are your emotions on this subject?" A person who votes "ten" but doesn't really care much one way or the other will be counted the same as a person who votes "ten" and thinks the world will end if the feature isn't offered. WhatamIdoing 19:00, 25 August 2011 (UTC)Reply
A botched experiment. Nothing like this can accurately give you the information you need, if the planning of it and the execution of it is organised and run by people who have one point of view and assume a positive response from everyone. --Shabidoo 23:59, 25 August 2011 (UTC)Reply
The point is not so much the interpretation, but the procedure: if somebody stops you on the street and asks you "From 0 to 10, how important do you think it is that your left kidney is surgically removed?", would you think "What a nice man, he is thinking of my possible renal disease, I'll say '0' to let him know I'm fine"? Most people would assume he wants to sell your organs, and will do so no matter what your answer is; this is exactly the feeling one has with this particular referendum. To get back to the metaphor, a bunch of fundamentalists shouting "What about the children who need a transplant?" and "One God, one Government, one Kidney", or accusing you of slandering the medical profession all around, wouldn't help either. --complainer 09:18, 30 August 2011 (UTC)

Alternative ideas


All images on or off


Any kind of filtering system seems to me a waste of project resources and a morass. Instead, why not simply give readers the ability to turn all images on or off by default? And perhaps the ability to turn any given image back on, based on the reader's own evaluation of the caption and the article's subject. No tagging needed. No discussions required. Much easier to implement. Culturally neutral. Simple. Barte 11:21, 16 August 2011 (UTC)Reply

That is already included in every browser's settings! In Firefox it is Tools > Preferences > Content > remove the check mark from "Load images automatically". Finished! So an easy solution already exists. --Dia^ 13:02, 16 August 2011 (UTC)
That's not the point. If we're considering a quick "hide or show" system, this has to be done on the Wikimedia projects, not by the browser. As you're probably browsing different websites simultaneously, you may want/need a specific website to hide its image content while you want another site to keep its images visible. Cherry 14:09, 16 August 2011 (UTC)
Please choose another browser. Opera has such functionality built in; Firefox likely has either a plugin or a Greasemonkey script somewhere. For IE and Chrome there will also very likely exist some extensions for that purpose (site-specific preferences, plugins, addons and similar features handle these!) Mabdul 20:22, 17 August 2011 (UTC)
"Please choose another browser" ?!! Either this image filter is the reader's issue : he wants it, he gets it by himself, and this vote is pointless, because this is not's Wikimedia's business. Or, this is a wikimedia issue, and there is no way you say something like "choose a decent browser". Accessibility of our content is our problem. Cherry 10:17, 18 August 2011 (UTC)Reply
I could support this. — Internoob (Wikt. | Talk | Cont.) 22:48, 17 August 2011 (UTC)Reply
This is what I thought of while reading here. Not browser-based, but site-based. But how do you support this for readers not logged in? Is that possible (tech ignorant here)? Or would "IP reader" soft image blocking (i.e. it can be overcome with a series of clicks) be good for schools, etc.? (mercurywoodrose) 76.232.10.199 07:36, 19 August 2011 (UTC)
Yikes! Filtering images and any other content is a reader issue, not a Wikimedia issue. Individual readers (or their system/network administrators) can and must determine whether and how to filter images (and other content) they or their user groups access, block, or selectively filter. The associated qualitative issues and practical how-to's are bigger than and not specific to Wikimedia. Accordingly, for Wikimedia to involve itself in this morass (to use Barte's apt term) would be inappropriate, off-mission, controversial, and a waste of project resources. Froid 12:52, 19 August 2011 (UTC)Reply
I agree with the off-mission sentiments partly, but this proposal is still better than the POV garbage of the original proposal, which would certainly be an even bigger waste of project resources. If we have to implement something, implement it this way. — Internoob (Wikt. | Talk | Cont.) 19:47, 19 August 2011 (UTC)Reply
That's my take as well. I think a simple on/off option gets Wikimedia out of the morass of trying to selectively filter content. Of course, that would also be true if the project simply posted some instructions on how to turn images off using different browsers. But the former approach, as I imagine it, would be more convenient for readers, while still keeping the project out of the "morass" of selective filtering. It's the selective part of filtering that I think will cause the project grief. Barte 13:51, 20 August 2011 (UTC)Reply
Support Support without regard to what's going to happen with the 5-10-category filter. Helpful as well in the case of animations, which can't be stopped by users who aren't familiar with changing browser settings. Hæggis 20:57, 23 August 2011 (UTC)
+1 SJ talk | translate   02:12, 6 September 2011 (UTC)Reply

Create an API for 3rd-Party Websites


Wikimedia could uphold its commitment to openness by keeping the main WP sites as they are — free of image-hiding — and creating a software API that would enable independent groups to create their own sites which mirror WP's content while allowing them to implement their own display schemes. For example, a group of volunteer parents concerned about their kids seeing disturbing images could create a site called "ChildSafePedia.org" with tools that enable crowd-sourced tagging, rating, and optional hiding of images, all without affecting the official WP site. The API's Terms of Use could include provisions to prevent commercialization and require display of a disclaimer clarifying that the 3rd-party sites are independent from WP. This would allow access to most of the WP content for groups that might otherwise block the site entirely. --KnowWell 18:10, 20 August 2011 (UTC)Reply

This might be more complex than we need just for this task-- but we do desperately need to open up and start working with 3rd party sites to let them interface with us-- take our content, modify it on the fly, etc. "wiki" is the next "http" . --AlecMeta 23:59, 23 August 2011 (UTC)Reply
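The kind of third-party reuse proposed here is already largely possible through MediaWiki's existing Action API. A minimal sketch of how a hypothetical mirror site could fetch rendered content and apply its own display policy (the helper names and the image-hiding policy are illustrative assumptions, not an existing interface):

```python
# Sketch: a hypothetical third-party mirror (e.g. the "ChildSafePedia"
# proposed above) fetching article HTML via MediaWiki's Action API and
# applying its own display scheme client-side. The policy shown is a toy.
import re
from urllib.parse import urlencode

API_ENDPOINT = "https://en.wikipedia.org/w/api.php"

def build_parse_request(page_title: str) -> str:
    """Build an Action API URL returning the rendered HTML of a page."""
    params = {
        "action": "parse",   # real Action API module for rendered output
        "page": page_title,
        "prop": "text",
        "format": "json",
    }
    return API_ENDPOINT + "?" + urlencode(params)

def apply_display_policy(html: str, hide_images: bool) -> str:
    """Toy policy: replace <img> tags with a placeholder, per-site choice."""
    if not hide_images:
        return html
    return re.sub(r"<img\b[^>]*>", "<span>[image hidden]</span>", html)

url = build_parse_request("Spider")
```

The point of the sketch is that none of this requires changes on Wikimedia's side: the mirror makes its own editorial choices after fetching the content.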

Greasemonkey Off-The-Shelf


User:87.79.214.168 points out this NonCommercial Off-The-Shelf (NCOTS ;-) ) greasemonkey script which covers most of the requirements as it stands: [3] . Instead of making a lot of hoopla, could we just point users who wish to hide images to this script? Does this cover all the requirements we would like to meet? If not, which requirements still need to be covered? --Kim Bruning 13:57, 21 August 2011 (UTC) I'm all for *OTS. Time and money savings are a Good ThingReply

There's no guarantee any 3rd-party script is safe. Pointing our users to such a script is very irresponsible if the user's computer gets infected with viruses or spyware through these scripts. -- Sameboat (talk) 14:31, 21 August 2011 (UTC)
Welcome to the wonderful world of open source. We could convince the author to release the code under an open source license (currently (s)he has released the source, but not provided a license afaict; it seems likely they wouldn't mind an F/L/OSS license). Worst case, we can then check the code, digitally sign it (to ensure none of your creepy crawlies get in), and release it ourselves.
Alternately, we can just ask the author to do the same. --Kim Bruning 15:14, 21 August 2011 (UTC)Reply
Oh yeah, so we have a proposed one right here. -- Sameboat (talk) 04:24, 22 August 2011 (UTC)Reply
So, you are forcing people to burden themselves when it could easily be implemented at mass scale? The mere presence of such a script undermines all of the "censorship" claims, not the other way around. The script's existence is proof that it is right for Wikipedia to implement a large-scale option so people can do it more simply. Ottava Rima (talk) 16:09, 21 August 2011 (UTC)
If the image filter is a "click here to see this image" on all images, I am not opposed to it. However, if it involves any categorization of images, it will be used for evil by third parties. The good thing about a "click here to see this image" filter is that it will be so annoying that nobody will want to use it. Kusma 16:21, 21 August 2011 (UTC)Reply
Evil? We already categorize images. We have categories that blatantly label porn and violent images. Claiming that giving people the ability to filter out images so they can edit a page without worrying about them is somehow "evil" is really inappropriate. Ottava Rima (talk) 16:23, 21 August 2011 (UTC)Reply
Right, we categorize all images in a manner intended to simply describe them. We don't single out "objectionable" ones (a non-neutral value judgement) and create special categories just for them. —David Levy 17:05, 21 August 2011 (UTC)Reply
That is a rather silly statement. You claim that we categorize things simply, but then claim we shouldn't create "special categories". The first part would imply the second part was impossible. We already have categories for porn. The filter would block them for users wanting to block them. We aren't creating new categories. It would be impossible for anyone to not know this or realize this, so please stop trying to confuse people by putting up misleading and contradictory claims with a tone of "I'm right and you are wrong no matter what either one of us says". It is really inappropriate, especially when you lack a real argument. Ottava Rima (talk) 18:54, 21 August 2011 (UTC)Reply
Some sweets, yesterday. Currently categorised as "Violent" but not as "Nude".
We may have categories for porn, but not on every one of the 777+ projects; moreover, these categories are not what is required. What was considered pornographic in some places in the seventeenth century might not get a second glance now. We would need a whole new ontology, [category:Nude images class 1] through to [category:Nude images class 5] for example, with some images getting multiple categories (or tags, or whatever mechanism is used).
And certainly some images, such as File:Woundedsoldiervietnamwar1968.jpg, are not categorised as either a "medical image" or a "violent image", whereas the image on the right is categorised as violent. Rich Farmbrough 19:22 21 August 2011 (GMT).
Once again, you've quoted someone out of context. I wrote that we don't create special categories for images deemed objectionable. Regardless of whether pre-existing categories could be used (and Rich has explained why that isn't entirely feasible), the plan requires whatever categories are used (whether old or new) to be formally designated "potentially objectionable" or similar.
If you're under the impression that the idea is simply to enable readers to filter any of our current image categories, you're mistaken. The proposed implementation absolutely involves the creation of new categories (whether brand new or categories of "potentially objectionable" categories). —David Levy 19:48, 21 August 2011 (UTC)Reply
It would also affect how existing categories are used. If we are tagging an image that happens to feature, in the background, a woman in a bikini, a same-sex couple, or a person wearing a sacrilegious T-shirt, we wouldn't likely take that into account. But under this initiative, we need to make sure that any of those depictions are properly tagged so as to not offend those who wish to avoid them. I can picture the warnings on the upload screen: "Please make sure all pictures of women are properly categorized according to immodesty of dress."--Trystan 20:12, 21 August 2011 (UTC)Reply
Your comment assumes that we don't already require images to be properly categorized, or that it is somehow difficult to add images to a category based on another category they are in. I don't know how you came to either conclusion, especially when we have done such things quite easily in the past. Ottava Rima (talk) 20:20, 21 August 2011 (UTC)
I don't believe I make either assumption. We do properly categorize pictures, but this does not include categories like "Any depiction of homosexuality," "Female midriffs", or "Incidental blasphemy." Yet these are all things which people are likely to be offended by. Neutral description of an image's subject is very different from content warnings for filtering.--Trystan 20:30, 21 August 2011 (UTC)Reply
You have no proof of that. And even if they -were- offended by such things, we do have related categories. Bare midriffs, as one example. Ottava Rima (talk) 20:41, 21 August 2011 (UTC)Reply
This would be censored if you didn't like midriffs
We may have a category for everything (sorta kinda), but these are for pictures of bare midriffs. If we pollute them with pictures that merely contain bare midriffs (actually it's called [Category:Crop tops]; see the example to the right, with no bare midriff), we wreck the category system: [Category:Sky] would have to include any picture containing sky, and so on. Rich Farmbrough 22:42 21 August 2011 (GMT).
Exactly. I'm trying to get at the fundamental difference between a functional category, like Crop Tops, and a warning label, like Bare Midriffs. As for not having evidence of what people are offended by, Ottava, I couldn't agree more. We don't have any good evidence about what people are offended by, how many are offended, or how offended they are. So how could we possibly implement a system to identify it for them?--Trystan 22:49, 21 August 2011 (UTC)Reply

┌─────────────────────────────────┘
Reusing the current category system wouldn't work. Just take a look at it. For example, the category "violence": it has many subcategories (with many subcategories of their own) which are only indirectly related to violence. That means that including all subcategories as filter criteria would give wrong results. If you look at the pictures, you will soon notice that many of them don't show actual violence: they depict monuments, protests against violence, portraits of people involved, and so on. In fact, this means we would need to create separate categories, look at every individual file, and conclude/discuss which category it belongs in. This would be an enormous effort, with tens of thousands of hours spent that could have been used to increase the quality and quantity of the current content. --Niabot 01:29, 22 August 2011 (UTC)

3rd parties can trivially expand the scope


The software side of this is fairly simple to implement, with a prototype working in a couple of days at worst. The issue is then "merely" (because it's a lot of work for the community) to assign categories for the software to work with. Done, perfect.

Here comes a massive application of en:WP:BEANS (or what will be WP:BEANS once our own innocent filter is completed):

Now, as it turns out, our database is downloadable, and our category system is readable by third parties. A third party can write software that either looks at our database or parses our pages, and can figure out which categories the images are in. They can then add those categories directly to an actual censoring proxy or filter. This will only take a couple of days too; *really* only a couple of days, since our fine volunteers have already generated all the data this third party needs. No sweat.

You know, while we're at it, images are usually linked to from pages. It's fairly easy to figure out what pages use a certain image (Commons actually lists those pages!). To be on the safe side, said third party can probably block all the Wikipedia pages that link to a Bad Image. In their mind, very little will be lost, and a lot of Bad Content can be blocked that way.

Now, we can take that a step further. Since we now know Bad Pages, we can derive Bad Links, and Bad Topics.

Bad Links are links leaving from a Bad Page. A third party can probably censor anything on the far end of a Bad Link, possibly out to a link-depth of about 3 or 4, without losing anything that they consider "of value".

Bad Topics can be derived from page titles. If a Bad Topic shows up in the body-text of a search result, our hypothetical 3rd party filter-writer can probably "safely" block that too.

And hey presto, a perfectly censored internet. Due to the crowd-sourced nature of wikis, this method of censorship would be more reactive and more current than any other. I think the idea might be worth quite a bit of money. And the really great thing? The actual code for this can be written in a week or two, since all the actual hard work has already been done by wikimedia volunteers!

--Kim Bruning 13:57, 23 August 2011 (UTC)Reply

That's about the size of it... If you don't have access to a free internet, you're not a free citizen anymore; you're something else. Wikimedia projects, and all information systems, are _incredibly_ useful for Big Brother, or for a network administrator who plays the role well. The only consolation is that current Wikimedia volunteers won't waste their time on filters: the time will be spent by a very different group of people with a very different mindset than our current Wikimedia volunteers. And they'll all hate each other's guts, and they won't agree on anything. And THAT is educational. :) --AlecMeta 00:09, 24 August 2011 (UTC)
I think this is why the ALA classifies such lists and categories as "censorship tools". --80.101.191.11 19:24, 24 August 2011 (UTC)
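The expansion described above can be made concrete with a small, offline sketch. All of the data and names here are made up for illustration; a real censoring proxy would read the public database dumps or the category and image-usage APIs instead:

```python
# Toy illustration of the "Bad Image -> Bad Page -> Bad Link" expansion:
# starting from images in filter-tagged categories, derive the pages that
# use them, then follow outgoing links a few hops. Hypothetical data only.

def derive_blocklist(bad_images, pages_using_image, links_from_page, depth=2):
    """Return the set of pages a hypothetical censoring proxy would block."""
    # Step 1: any page that uses a Bad Image becomes a Bad Page.
    bad_pages = {page
                 for image in bad_images
                 for page in pages_using_image.get(image, ())}
    # Step 2: follow outgoing links from Bad Pages, up to `depth` hops.
    frontier = set(bad_pages)
    for _ in range(depth):
        frontier = {target
                    for page in frontier
                    for target in links_from_page.get(page, ())}
        bad_pages |= frontier
    return bad_pages

# Hypothetical usage data:
pages_using_image = {"File:Example.jpg": ["Article A"]}
links_from_page = {"Article A": ["Article B"], "Article B": ["Article C"]}
blocked = derive_blocklist({"File:Example.jpg"}, pages_using_image,
                           links_from_page, depth=2)
# blocked == {"Article A", "Article B", "Article C"}
```

The sketch shows why the concern is not theoretical: once the filter categories exist, the whole pipeline is a few dozen lines of code on top of data Wikimedia already publishes.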

A new filter function based on existing categories, but no new categories


Most of the objections are related to the definition of what is or isn't objectionable, and about the process of arriving at that definition in the case of individual images.

If the WM community spends discussion time and effort to classify images, then the result of their effort will be used by censors to prevent certain groups of users from accessing those images. That constitutes a worse deal, in terms of the principles underlying WM, than if the censors themselves build and establish a censorship database around and on top of WM without the involvement of most WM editors. We would be making their jobs easier than before.

However, if the category system is left as is, including the requirement that the definition of all categories needs to be neutral (implied: objective), then the would-be censors do not gain any new advantage over the current situation.

The authors of the proposal intend for the individual user to be able to filter their own experience. It would be acceptable for the WM projects to offer such a possibility, not based on 5-10 specially-designed categories, but based on any existing categories that the user selects. All we would be offering is an add-on to the user interface, not a change to our data structures.

Arachnophobics need not refer to a special category of objectionable images; they can simply exclude spiders. Prudes can exclude any number of existing categories depending on their personal sensitivities. I may have a personal objection against the existence of the category Pornography, because its definition involves second-guessing the intention of the author of the image, but my objection is not made any worse by the possibility of filtering away all the images in that category.--Lieven Smits 09:21, 25 August 2011 (UTC)Reply
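The per-user, existing-categories approach described above is simple enough to sketch in a few lines. This is only an illustration of the idea, not actual MediaWiki code; the category names and data structures are invented for the example.

```python
# Minimal sketch of per-user filtering on existing categories.
# An image is collapsed iff it belongs to any category the user
# has personally chosen to filter. No new "objectionable" categories
# are created anywhere; only the user's own preference list exists.

def should_hide(image_categories, user_filtered_categories):
    """Hide an image iff it shares a category with the user's filter list."""
    return bool(set(image_categories) & set(user_filtered_categories))

# An arachnophobic user simply selects an existing category:
prefs = {"Spiders"}
print(should_hide({"Spiders", "Macro photography"}, prefs))  # True -> collapsed
print(should_hide({"Beetles"}, prefs))                       # False -> shown
```

The key design point is that the server stores no judgment about any image; the only new data is each user's private preference set.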

You raise good points. This won't please everyone who wants a "Safe For Work" toggle, but add a simple useful feature without any of the issues you mention. SJ talk | translate   02:12, 6 September 2011 (UTC)Reply
I think that the use of the "category" language is misleading people. Nobody's talking about setting up a new category, like "Category:Images that are offensive because they show naked humans" and adding that new category to hundreds of thousands of images. They're talking about using the existing category tree, and compiling 5–10 lists of pre-existing, related categories for simple, user-friendly, optional image filtration.
A one-by-one, 100% customizable approach is definitely more flexible (and I can think of good reasons to move in that direction eventually), but there are three major problems with that approach:
  • It's far more complicated for the user. I believe that Commons alone has about half a million categories. I suspect that most users would want to filter fewer than one percent of those categories. Can you imagine reading half a million category names? Can you imagine individually ticking one to five thousand of them? Even a narrow area, such as the one the arachnophobic person in your example might want to filter, is likely to require ticking about 100 categories.
  • It requires extensive knowledge of the material you want to avoid. For example, the people you label as "prudes" might well want to filter images from Commons:Category:Goatse, but they might not know what that even is. The person with a spider phobia might not realize that images of the en:Amblypygi could be triggers, and he is not likely to think that looking at all the pictures to find out if they trigger a panic attack is an appropriate method for figuring out what should be listed.
  • Now remember that Commons's cat tree is in English, and imagine trying to figure out which categories you are likely to want to exclude if you don't speak any English at all. An English-speaker could make a quick decision about Category:Photographs of sexual intercourse, but a non-English speaker might have no idea what that says. WhatamIdoing 18:33, 25 August 2011 (UTC)Reply
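The fan-out problem in the first bullet above can be made concrete: filtering a topic means expanding its whole subcategory tree, not ticking one box. The tiny category graph below is invented for illustration; the real Commons graph is vastly larger, and since it can contain cycles, a visited set is needed.

```python
# Hypothetical fragment of a category graph (not real Commons data).
SUBCATS = {
    "Arachnids": ["Spiders", "Amblypygi"],
    "Spiders": ["Tarantulas", "Jumping spiders"],
}

def expand(root):
    """Collect a category and all its descendants, tolerating cycles."""
    seen, stack = set(), [root]
    while stack:
        cat = stack.pop()
        if cat not in seen:
            seen.add(cat)
            stack.extend(SUBCATS.get(cat, []))
    return seen

print(sorted(expand("Arachnids")))
# A user ticking "Arachnids" actually needs every category in this set --
# and on Commons that set can easily run to a hundred entries.
```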
I admit that I feared that new categorisation information was going to be added to individual images, whether in the form of new categories or in the form of a new entity type with an m:n relationship to individual images. After your post I re-read the proposal and I do not think that it is stated very clearly. The Harris report talks about the existing category system, which is not the same as sticking to existing categories; and in any case the referendum is not about the wording of the report.
If the proposal is interpreted in the way that you describe, then most of my worries are alleviated, because editors will no longer be asked to judge directly on the 'appropriateness' of individual images. The remaining concerns are (1) that the definition of the 5-10 groupings is unlikely to be guided by objective criteria, and (2) that the definition of the underlying, existing categories will be de facto modified as a consequence.
(1) Even the Harris report is ambiguous in the examples it gives of 'sexual' images: in one place it seems to relate to images of genitals and masturbation, in another place it suggests including bare breasts -- although the latter example could be meant to belong to a different grouping. Sexual arousal is both a personal and a cultural matter that is impossible to predict in a neutral (i.e., independent from personal and cultural preferences) way. Note that not all groupings need to suffer from this. It is objectively verifiable whether an image depicts the prophet Muhammad, or a spider, or a corpse.
(2) If a grouping is created without the necessary objective and neutral criteria for inclusion, then the mere existence of the grouping will influence the definition of the existing categories from which it is composed. Thus, for example, the current category Pornography may start including pictures that have the effect of arousing a number of users sexually, rather than its current definition which is limited to pictures that have the intention of sexual arousal. In the nightmare scenario of broad interpretation, this could even include the likes of the Ecstasy of Saint Teresa.
But both these concerns are far smaller than my original worry (not entirely gone) that we would be doing the tagging ourselves.--Lieven Smits 08:53, 26 August 2011 (UTC)Reply
Directly tagging ten million images according to "offensiveness" is my idea of a nightmare, too, and that's why I'm glad that it's not planned.
I think one advantage of using the regular category system is that it's so much more detailed. Someone might put something in a "Porn" category, but—well, go look at what's actually in Commons:Category:Pornography. Would you put that in the list of things to suppress if someone decides to "Tick here to suppress sex and porn images"? I wouldn't. The categories that we'll actually want to put on that list are things like Commons:Category:Photographs of sexual intercourse, and I figure that pretty much any adult should be able to make a good guess about whether a given image actually fits in that category.
Borderline images are difficult to get "perfectly" categorized now, but there's surprisingly little edit warring over it, and people clean stuff up when they encounter problems. I don't think that this situation will change much. We may have a few more people appoint themselves guardians of certain categories, but that happens now (and is generally a good thing), and I expect it to work in both directions, with the 'guardians' both adding more images and removing inappropriate images. WhatamIdoing 17:08, 26 August 2011 (UTC)Reply
Our community is surprisingly good at descriptive classification because they have a category system that serves no purpose other than descriptive classification. So they can tackle the inherently difficult task and just keep improving the system.
If our classification system also becomes a filtration system, I think it would be reasonable to expect that we would have at least as many dedicated and efficient filterers as we now have categorizers. And from an information science perspective, the processes are simply fundamentally different. A filterer doesn't have any concern as to what an image is about, while that concept is at the very heart of descriptive categorization. Once arbitrary standards have been set down, filtering is actually much easier, because it's simply a "does contain"/"does not contain" decision.--Trystan 18:44, 26 August 2011 (UTC)Reply
Even worse, we don't have enough categorizers now. Rich Farmbrough 11:22 29 August 2011 (GMT).

Pre-project enabling?

[edit]

How controversial would filtering be if it was initially adopted only on Commons, which doesn't have to be NPOV. That would let us work out the bugs and let the projects decide when and whether to enable the filter on their project.

Would this help relieve concerns? --AlecMeta 02:37, 26 August 2011 (UTC)Reply

Probably not, at least not my main concern: that WM editors will be spending time to make the work of third-party censors easier. It's the tag data itself that worries me, not the functionality that interprets it (on WM or elsewhere). Certain a priori restrictions on the way in which filters are defined could do the trick and put me at ease. Such as: filters are composed of relatively broad pre-existing categories, not of individual images or specialised categories that pertain by definition to very similar images (e.g., categories dedicated to a single work of art). And filters cannot be created unless they have an easily verifiable, objective definition.--Lieven Smits 09:15, 26 August 2011 (UTC)Reply

Design ideas and issues

[edit]

Questions about the design, existing alternatives

[edit]

Hello there, I will just add a few comments regarding the upcoming referendum. As to my background: I am a Computer Scientist by profession, so I can technically judge what is involved. As a quick outline:

  • A user (either anonymous or named) is able to exclude certain images from search results. In the case of anonymous users, the preferences are stored inside the user's session. Closing and re-opening the browser will reset the settings. Users who log in to edit can store their settings in their profile; their settings will not be reset when they close/open their browser.
  • Architecture-wise: blacklists and whitelists can be used. To be determined: do these act on groups of images, or on single images? - Is it possible to whitelist single images in a group that is blacklisted? - To blacklist single images of a whitelisted group?
  • How does the software identify the images to "filter"? - There are two options: At load time, the software analyses the image; the result of this analysis is used for filtering. This will incur extra costs in computing time, and memory; there are different algorithms, which yield different results. The other option is static tagging. This option has the drawback that some people need to decide the tags to use ("tag wars" have been cited above). Also the behaviour needs to be specified if an image does not have any tags; the blacklist/whitelist approach can be used.
  • There are programs on the market that implement a client-side proxy, and that probably cover 80-85% of what this development will achieve. I currently see no benefit in implementing this solution on the server. The solution where the filtering is done dynamically (i.e. no static tags), and on a per-image basis would probably be superior to the client-side filtering. This however comes at the cost of additional cpu and memory usage, as well as false positives/false negatives.
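The open blacklist/whitelist questions in the second bullet above (groups vs. single images, and which overrides which) can be sketched as one possible precedence order. This is an assumption for discussion, not a settled design: explicit per-image rules beat group rules, whitelist beats blacklist at the same level, and unmatched images default to "show".

```python
# Sketch of one possible blacklist/whitelist precedence resolution.
# Assumed order (not a settled design): per-image rules beat group rules;
# whitelist beats blacklist; untagged/unmatched images default to visible.

def is_visible(image, image_whitelist, image_blacklist,
               group_whitelist, group_blacklist):
    if image["name"] in image_whitelist:   # strongest: explicit allow
        return True
    if image["name"] in image_blacklist:   # explicit deny
        return False
    groups = set(image["groups"])
    if groups & set(group_whitelist):      # group-level allow
        return True
    if groups & set(group_blacklist):      # group-level deny
        return False
    return True                            # default for untagged images

img = {"name": "Spider.jpg", "groups": ["Spiders"]}
# Per-image whitelist overrides the group blacklist:
print(is_visible(img, {"Spider.jpg"}, set(), set(), {"Spiders"}))  # True
```

Under this ordering, both of Eptalon's questions get a "yes": a single image can be whitelisted out of a blacklisted group, and vice versa.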

To summarize:

  • If the solution of static tagging is chosen, we have the problem that images need to be tagged, and "agreement" over the tags to use needs to be reached in some way. Also, the behaviour in the case of an untagged image needs to be defined. We also need to define the granularity: is it possible to "whitelist" individual images of a group that is "blacklisted" (or to "blacklist" individual images of a whitelisted group)? Finally: how do we determine the "tags" (or groups of tags) to use?
  • If we tag dynamically, we incur extra costs in cpu and memory use of the system. We need to reach agreement over the algorithms to propose for identifying images; we need to implement those algorithms, which may be technically difficult; we may need to think about caching results of calculations, to reduce cpu load. Also note that the algorithms use stochastic information. There will be false positives, and false negatives.

Both approaches have their benefits and drawbacks. Neither is "quick to implement". So given that client proxies ("filters") out there probably cover 80-85% of the requirement the usual client needs ("don't show images of nude people of the opposite sex"), where is the use case that would justify 3-5 people working 3-6 months to get the extra 15-20%? --Eptalon 09:12, 4 July 2011 (UTC)Reply

Please provide more data to explain what "80-85%" means to you here. (list some of the clients you have in mind, and the use cases you feel constitute the 100%). If there are client-side tools that are aware of Commons image categories [or can be customized to be] that would be a useful data point. (And pointing to an open-source client-side option for readers who want one, that is known to work smoothly with WM projects, would be in line with the goal here). Two use cases, for discussion purposes:
  1. You're browsing wikipedia, possibly on someone else's machine, and want to toggle off a class of images. [for instance: giving a WP demo at work, or in Saudi Arabia, &c.] It may not be possible to install your own client software, and you'd like to be able to set this in under a minute.
  2. You come across a specific image you don't want to see again (and checking, find it is part of a category of similar images), and want to hide it/them in the future.
SJ talk | translate   15:42, 4 July 2011 (UTC)Reply
Client-side proxy filters are aimed at parents worried that their children might see the wrong type of image; AFAIK most of them work with whitelists/blacklists of sites; they do not do an on-access scan of the image. In addition, they might do "keyword scanning" in the text (to filter hate sites, and similar). The customisation of these products lies in being able to select "categories" of sites to block/allow, perhaps on a per-user basis. Our "static category blacklist/whitelist" approach would in essence do the same thing, except that to achieve it, we need to do development work, and at best we match the functionality of a USD 50 product. In addition, load is placed on our servers to do the filtering work (+possible problems with the categorisation). Using the dynamic approach will mean even more load on our servers, the possibilities of "false positives"/"false negatives"; the difficulties in finding training data (note: that data can not be used later on), etc. In short: a lot more (difficult) work. We may exceed the USD 50 product as to functionality, but we have 3-5 people developing 4-6 months. I really don't know if I want to spend up to 250.000 usd (24 man-months) to "not see an image again" - it seems out of proportion.--Eptalon 19:29, 4 July 2011 (UTC)Reply
Point of clarification... $250,000? NonvocalScream 19:36, 4 July 2011 (UTC)Reply
Let's be very careful about throwing around dollar figures. That hasn't been scoped, so I think it's dangerous to introduce false numbers to the equation at this point. Philippe (WMF) 19:39, 4 July 2011 (UTC)Reply
Duration of project: several people, 4-6 months (for the dynamic approach, not using static tags). --Eptalon 20:15, 4 July 2011 (UTC)Reply
According to whom, by what metrics and judging by the speed of what resources? How about we let the folks who are designing the thing scope it out, once it's, you know, designed? Philippe (WMF) 23:59, 4 July 2011 (UTC)Reply
Eptalon, you seem to assume that everybody's got a web proxy. I don't. If I really, really don't want to see the infamous picture of the guy jumping to his death from the Golden Gate bridge, my options at the moment are:
  1. Don't read Wikipedia (because any vandal could add it to any page at any time),
  2. Especially don't read pages where that image might logically be present (bummer if you need information about suicide), or
  3. Figure out how to manually block all versions of that image in every single account (five) and every single browser (two) on every single computer (four) I use—which will effectively keep that image off my computer screen, but not any others like it.
This proposal would let me control my computer by clicking a "don't really feel like seeing images of dead bodies today, thanks anyway" button. The images would appear "hidden", and I could override the setting any time I felt like it by simply clicking on the "This image hidden at your request because you said you didn't feel like seeing any images of dead bodies today" button. There is nothing here that would let some institution control my computer. WhatamIdoing 19:23, 5 July 2011 (UTC)Reply
I don't have one (or use any filtering software). Your proposal shifts the problem, though: you need to agree with other people about the categories. In the 16th century, a painter called Lucas Cranach the Elder painted a woman before a tree, wearing a necklace (called 'Venus'). In the same century Michelangelo made his statue David. In the 19th century, Jules Joseph Lefebvre painted a woman with a mirror ('Truth'). To me, all these works are works of art, and as such, limiting their audience does not make sense. In the 1990s, a museum in London used Cranach's painting as an ad for an exhibition; they showed it on posters in the London Underground - and there was an outcry.--Eptalon 14:27, 6 July 2011 (UTC)Reply
@Eptalon: I think there is a very real advantage to doing a system specific to Wikipedia that takes advantage of our category structure: filtering can be made more precise, which means not just missing fewer things that people want to block, but more importantly, avoiding blocking educational materials that young readers need access to in order to learn. This is our chance to give people a solution that isn't as conservative and overzealous as every other generic solution on the market. Dcoetzee 22:31, 16 July 2011 (UTC)Reply
I didn't sign up to provide people with such a 'solution', Dcoetzee. But thanks for pointing out the extreme slipperiness of this slope.
Now, if someone wants to hide a certain picture for themselves by clicking on a little X or whatever, *fine*, I'm all for user control.
If you pop open a menu that also says "hide all other images in <one of the categories the image is a member of>" ... well, that's fairly sane, as long as no new categories are generated.
But let's not jump on the actual censorship bandwagon. Fair enough? --Kim Bruning 10:35, 21 August 2011 (UTC)Reply
Incidentally, if we're adding menus to images anyway, could we also have other options that make things more explicit, like "view image info" and "view full size"; or editing assistance like "Alter size", "Alter location", etc... --Kim Bruning 10:45, 21 August 2011 (UTC)Reply

The problem is not with clicking things. That's fine. People keep concentrating on the clicky bits while there's nothing wrong with clicky bits. ;-)

The problem is with the category scheme. When you categorize images as "problematic" or "potentially clickyable", you're in trouble, because that classification system *in itself* is defined as a censorship tool by the ALA.

So if you have a clicky tool that can click images on or off, but without using cats, you're fine. However, if you have cats, you're toast. --Kim Bruning 19:33, 24 August 2011 (UTC) 19:28, 24 August 2011 (UTC) meeouw!Reply

Logging in to permanently save

[edit]

Just to point out: the concept presented in File:CC-Proposal-Workflow-Anon-FromNav-Step3.png (logging in to permanently save options) wouldn't necessarily work in the classic scenario of parents applying filters to their children's computers, as:

  1. Their children would then edit using the accounts that have been logged in (pro: would work well in terms of accountability, con: what happens if the account is subsequently blocked for vandalism? Note that the account would most likely pass semi-protection due to the length of time since creation - although it might not pass in terms of number of edits, at least to start with.)
  2. Their children could simply log out, and/or log in with their own accounts, thereby bypassing the filter.

Mike Peel 21:48, 23 July 2011 (UTC)Reply

I do not see this as something parents should use to prevent their child seeing images. They still have Google Images. If a parent does not watch their child while they are on the internet (i.e. watches what they are doing on the internet) or does not obtain a filter program or service that is controllable only by them, then they have no one to blame but themselves if their child watches porn or the like. This is so people who don't want to see nudity can block it for themselves, not for their children. The only problem people might have is those with dissociative identity disorder where one personality wants the nudity but another doesn't. Black.jeff 09:05, 20 August 2011 (UTC)Reply

You're right: this is completely useless for parents (or schools, or employers, or whatever) censoring someone else's reading. This is only useful for you controlling what you personally and voluntarily choose to display on your computer screen at any given moment in time. It is not designed to permit you to control what anyone else sees. If parents want to control their children's access to Wikipedia, they will have to find another method for doing that.
The point of this is to let a person with PTSD (for example) voluntarily stop the display of mutilated bodies on his own computer screen. Currently, we are forcing people to see types of images that they do not want to see. There is no possibility in this system for you to stop someone else from seeing images that they want to see; even if you could "lock" the preference (which you can't), overriding the setting only requires that the reader click the hidden image. WhatamIdoing 19:29, 23 August 2011 (UTC)Reply

Protection levels

[edit]

Just like many security software applications have many security levels, I propose the same for this image filter. There should be a few levels per tag, 3 or 4, so decisions are easy. For example, 0 for no sexual content, 1 for light clothes like here, 2 for underwear and 3 for visible genitals. --NaBUru38 18:39, 15 August 2011 (UTC)Reply
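The levels idea above amounts to a simple threshold comparison per tag. The sketch below is only an illustration of that logic; the tag names and level assignments are hypothetical, and a tag the user has not configured is assumed to be unfiltered (otherwise an opt-in filter would block by default).

```python
# Sketch of per-tag "levels": an image passes only if its level on each
# tag does not exceed the user's chosen threshold for that tag.
MAX_LEVEL = 3  # assumed scale 0-3, as in the proposal above

def passes(image_levels, user_thresholds):
    # Tags the user has not configured default to MAX_LEVEL (no filtering),
    # so the filter stays strictly opt-in.
    return all(level <= user_thresholds.get(tag, MAX_LEVEL)
               for tag, level in image_levels.items())

user = {"sexual": 1}                 # accepts up to "light clothes"
print(passes({"sexual": 1}, user))   # True: at the threshold
print(passes({"sexual": 3}, user))   # False: exceeds the threshold
print(passes({"violence": 3}, user)) # True: tag not configured, not filtered
```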

And of course, it's always going to be women who will be tagged with "light clothes" because we all know they should wear potato bags to please certain Middle Eastern imaginary beings. Andkon 19:44, 22 August 2011 (UTC)Reply

Dont use a collapsed frame!

[edit]

I think really that the system should NOT use AJAX or collapsed frames, for several reasons.

The first, and most important, reason is that a filtered image should not be sent over the wire at all. It might get stuck in a company/school proxy server with logging turned on, and that can make bad things happen for the employed person who thought the filter was a "safe" way of browsing Wikipedia (like being fired from work, having a computer account revoked/locked for a period of time, or being expelled from school for a time).

With the filter ON, you should be safe, knowing that nothing you have elected to filter out will be sent over the wire at all.

Another reason not to use AJAX or collapsible frames in the filter system is that older browsers might not support them, causing the images to be shown even with the filter turned on. You can never know what happens with older browsers, so the safest approach is to control image hiding on the server side.

Also, the filter settings dialog contains category words that might get stuck in a filter. So instead of showing the filter settings dialog with AJAX, show it in a simple target="_blank" popup. A _blank popup in a link will almost never be blocked by popup blockers (unless you have turned the security settings wayyy too high). That will make the filter safe for everyone to use and rely on.

Also, provide an option to block ALL images, so that if you have a slow link you can opt not to view heavy images over a 200 kbps GPRS connection. Sebastiannielsen 02:45, 21 August 2011 (UTC)Reply
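The server-side requirement argued for above can be illustrated with a toy render function: if the filter decision is made at page-render time, the suppressed image is replaced by a plain link and its bytes never cross the wire. This is hypothetical templating for illustration only; real MediaWiki rendering works quite differently.

```python
# Sketch of server-side suppression: the filtered image is replaced by a
# text link at render time, so no image data is transmitted at all.
# (Hypothetical markup and paths; not actual MediaWiki output.)

def render_image(filename, categories, filtered_categories):
    if set(categories) & set(filtered_categories):
        # No <img> tag is emitted: nothing image-like crosses the wire,
        # and a logging proxy sees only the page text.
        return (f'<a href="/wiki/File:{filename}">'
                f'[image hidden by your filter]</a>')
    return f'<img src="/images/{filename}">'

html = render_image("Spider.jpg", ["Spiders"], ["Spiders"])
print("<img" not in html)  # True: the image itself was never sent
```

A collapsed-frame (client-side) design, by contrast, still transmits the image and merely hides it, which is exactly the proxy-logging concern raised above.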

Comment: I like the idea of a "No images" option, independent of anything else. In which case you certainly would not want to load the images behind a collapsed frame, you wouldn't want them to be present at all. SJ talk | translate   02:37, 6 September 2011 (UTC)Reply
If a workplace is going to be that assholic, it's best that they do without Wikipedia. That will give a little edge for a better company to outcompete them. Wnt 17:10, 21 August 2011 (UTC)Reply
To block all images follow the instructions at en:Wikipedia:Options to not see an image. Rich Farmbrough 18:55 21 August 2011 (GMT).
That does not block the images from being sent over the wire, which is the purpose of disabling all images if you disable them because you are sitting on 200 kbps GPRS. About workplaces: the sysadmin can in many cases not see whether the client is showing a "forbidden" image or not, if they do not have a client surveillance solution. They can only see whether a client has downloaded an image or not; that's why the system should not send the images over the wire at all.
Remember that many pupils use Wikipedia as a resource for their school work, and they might be searching for something related to that and come to an article with forbidden images (according to the computer usage policy at that school). Sebastiannielsen 23:40, 21 August 2011 (UTC)Reply
I tend to think of children and workers-with-harsh-workplace-rules as if they are subjects of a totalitarian society-- I don't exactly know whether we help or hurt by working to meet their needs. If we even consider letting sysadmins use a tool to determine who has the filter turned on, we legitimize their using it in that way. These populations are essentially "lost" to us, and it may be better to focus exclusively on the needs of people who have the right to view whatever they wish.
Then it's a much simpler problem. 1-click-to-view-a-shock-image is something lots of readers want, and it's really not about censorship as much as it is about turning the screen away from the room before you click to see what's underneath. That's a simple need, with a simple fix.
It's not really about politics or decency or anything as complex as that. It's just a momentary courtesy some readers would like. --AlecMeta 18:13, 22 August 2011 (UTC)Reply
A simple thing to do is to replace images that are filtered, according to the user's filter settings, with links. E.g., instead of the image, the user is shown a link that they can click to land on the image's wiki page. If we are going to use collapsed frames or AJAX, I really think we should put a HUUUGE warning on the filter page that says "WARNING: The filter only prevents you from inadvertently seeing images you don't want to see. The filtering will be done on your local computer, and it does NOT prevent any of the filtered images from being sent over your network wire, and if you use Wikipedia from a school or workplace, you can face consequences for images transmitted over the wire, according to your local IT usage policy, even if the images were never shown on your computer screen." Sebastiannielsen 06:38, 27 August 2011 (UTC)Reply
It would be reasonable to include that information on whatever image-filter help pages we might write (assuming the filter is ever implemented). I think that making it "a HUUUGE warning" is probably unnecessary, though. WhatamIdoing 17:33, 27 August 2011 (UTC)Reply
Collapse boxes also take time to render, I see the content of collapse boxes quite often. Not a good technical solution. Rich Farmbrough 22:38 30 August 2011 (GMT).

Eingangskontrolle's example

[edit]
Kruzifix
Image hidden by Wikipedia opt-in personal image filter. This image is categorized as: religious POV (christian) ; health advice (dangerous for vampires) ; contravention to "Thou shalt not make thee any graven image" (Exodus 20:4) ; torture ; corpse ; barely clothed male person -- the categories you blocked are marked yellow -- click link to show image

The preceding unsigned comment was added by Eingangskontrolle (talk • contribs) 11:54, 30 August 2011.

This page isn't complete: Archives have more alternative ideas

[edit]

I should note that I proposed an alternative ([4]) which was oublietted to the Archives before these subtopic pages were set up here. There are five pages of such archives. Should they be divided up and transcribed to these topic-specific talk pages? In any case bear in mind that not all the ideas are listed here. Wnt 18:36, 8 September 2011 (UTC)Reply

Categories

[edit]

5-10 categories? A few dozen?

[edit]

Who decides what the 5-10 objectionable categories are?

[edit]

From the penultimate question on the survey, it looks like 5-10 filtering categories are being considered. I was thinking that as a practical matter, it might be difficult for many thousands of editors across different cultures to decide what those categories were, so I assumed someone else might be deciding, i.e. the WMF. Phoebe's statement implies my assumption might be wrong. If 5-10 categories are to be used, who will (and how will they) decide what those categories are?--Wikimedes 07:04, 21 August 2011 (UTC)Reply

Actually, the research shows that providing three broad groups would make nearly everyone around the world happy (except, naturally, the small but vocal "anti-censorship" people who think it the inalienable right of every vandal to display any image he wants on my computer screen):
  • Certain religious images (e.g., en:Mormon temple garments and paintings of Mohammed)
  • Sexual images (definitely including porn, but possibly going as far as general nudity)
  • Violent images (e.g., mutilated bodies and abused animals)
I don't know, but I'd guess that five to ten were proposed because it would be useful to introduce some granularity. For example, I could imagine that a person might want to avoid stills from pornographic movies, but wouldn't mind the naked woman in en:Pregnancy. (It may amuse you to know that the "anti-censorship" crowd has successfully prevented the inclusion of even one image of a fully dressed pregnant woman in that article: all images show light-skinned women either completely naked or in their underwear.) WhatamIdoing 20:09, 23 August 2011 (UTC)Reply
I am arachnophobic, and I get panic attacks from looking at spider images. A friend of mine suffers from BPD and he gets anxiety attacks from looking at images of tall structures. Et cetera. "Argument": destroyed. Contrary to the "censorship" strawman people like you keep raising (you guys are almost like an anti-anti-censorship false flag operation), there are many different overwhelmingly powerful arguments against a project-wide filter category system, not a single one of which has got anything whatsoever to do with censorship. At all.
Also, you are destroying your own "argument" without even realizing it: If there are actual, encyclopedic reasons that speak against the set of images included e.g. in en:Pregnancy, then we should solve that problem at its root (ie. go to the article and improve it), not throw a completely superficial "solution" like an image filter at it. That said, an image filter has my tolerance, if not enthusiastic support -- but only if it is implemented as a general image filter, without any project-wide filter categories. --213.196.212.168 20:54, 23 August 2011 (UTC)Reply

As I noted at some point on the mediawiki discussion page: I'm also in favor of any implementation being a general image filter. I haven't seen any other option proposed that sounded like it would work out socially or philosophically. SJ talk | translate   01:23, 6 September 2011 (UTC)Reply

We need this but it should be made sure to be flexible.

[edit]

I agree with one of the above users, but only partially, that this could become imposed censorship, but if implemented correctly it won't be. In fact the point of user controls is to AVOID global censoring. Just as one must make sure a governing body doesn't impose censors, it should also be made sure that they don't foreclose the possibility for the individual to censor the content on their own computer. One can always avoid certain pages even now, but if one wants to read a topic that may contain offensive images, they should not be forced to view those images if they don't want to, any more than they should be forced to not see them if they do want to see them.

For more flexibility the 2 proposed categories ("nudity/sex" and "violence") should become 3 categories ("nudity", "sex", and "violence"). Otherwise an image of a XXX sex act might be censored just as much or as little as an image of the nude painting Birth of Venus.

A user's personal control panel should contain not only an "on/off" switch for each category, but also a 0-10 slider under each category for showing or not showing images of a certain "strength" in that category. It should also allow the image to be delayed or blocked entirely based on a check box (not forced to be the delay method only, as suggested in the official statement). Specific images that you believe should or should not be shown based on your image category sliders (but had been shown or not shown opposite to what you wanted) should be allowed to be added to a user's blacklist or whitelist manually, on an image-by-image basis. Of course all the above settings should be reversible by the user, so that no change to the settings made by a user is permanent for that user.
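The control panel described above combines three mechanisms: per-category switches, per-category 0-10 sliders, and manual per-image overrides. A minimal sketch of how those settings could combine, with all names and the sample data invented for illustration (nothing here reflects an actual MediaWiki implementation):

```python
# Sketch of the per-user settings described above. Per-image overrides
# (whitelist/blacklist) take precedence; otherwise each enabled category's
# 0-10 slider is compared against the image's "strength" in that category.

def show_image(name, strengths, settings):
    if name in settings["whitelist"]:
        return True                       # user explicitly allowed this image
    if name in settings["blacklist"]:
        return False                      # user explicitly blocked this image
    for cat, strength in strengths.items():
        conf = settings["categories"].get(cat)
        if conf and conf["enabled"] and strength > conf["slider"]:
            return False                  # over the user's slider setting
    return True

settings = {
    "whitelist": {"Birth_of_Venus.jpg"},  # user chose to always show this one
    "blacklist": set(),
    "categories": {"nudity": {"enabled": True, "slider": 2}},
}
print(show_image("Birth_of_Venus.jpg", {"nudity": 4}, settings))  # True
print(show_image("Other.jpg", {"nudity": 4}, settings))           # False
```

The whitelist override is what lets the Birth of Venus case from the previous paragraph through even with a strict nudity slider.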

It is very important that this be done, though, as NO web-client software package (e.g. Firefox) is capable of blocking some images but not others, or of blocking images but not text. Instead, such software will just block WHOLE SITES whose "bad content" in general exceeds a certain user-set level (no distinction between pictures and text, or other such surgical precision): pages with such offending material will simply be completely blocked from being browsed to by the user. This browser-based "complete censorship" is a FAR STRONGER method than what is being proposed on Wikimedia, and causes "collateral damage" by completely blocking many web pages that contain only a small amount of offensive content. Until browser censors allow finer control of the censoring, Wikimedia is just doing what it has to. And keep in mind this is USER CONTROLLED! It is NOT "we wiki get to censor what you see cause we're evil big brother ha ha ha ha ha".

To the above user who is worried about institutional policy, my only reply is this: follow the rules of an institution when at that institution. If you want to be able to browse freely with no censoring, then do it on your own computer at home, or at an institution or other place with internet access whose rules don't restrict content accessed on campus. I'm guessing those institutions already have a policy in place preventing offending pages from even being accessed; at least with this in place they could block the images but not the whole page. There are MANY places with their own rules. If you go to a fast-food joint with a sign that says "no pets", are you going to bring your pet with you just because you don't like the rule? No, because then you won't get served food. You are correct that the US government is bound by the constitution. However, schools and other privately run companies and institutions have the right to impose their own restrictions and rules, and you will need to follow them if you don't want to get in trouble. This applies to a private school/college, restaurant, grocery store, etc.

Animedude5555 07:14, 20 August 2011 (UTC)Reply
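For concreteness, the control-panel scheme described above (per-category on/off switches, 0-10 "strength" sliders, a delay-or-block checkbox, and manual per-image black/whitelists) could be sketched roughly as follows. This is only an illustration of the proposal as written; all names and structures here are hypothetical, and nothing like this exists in MediaWiki.

```python
# Hypothetical sketch of the per-user filter settings described above:
# per-category "strength" sliders (0-10) and manual per-image
# blacklist/whitelist overrides. Names are illustrative only.

from dataclasses import dataclass, field

@dataclass
class FilterSettings:
    # slider per category: hide images whose rated strength exceeds it
    thresholds: dict = field(
        default_factory=lambda: {"nudity": 10, "sex": 10, "violence": 10})
    hide_entirely: bool = False                  # False = click-to-reveal
    blacklist: set = field(default_factory=set)  # always hide these images
    whitelist: set = field(default_factory=set)  # always show these images

def should_hide(image_name: str, ratings: dict, s: FilterSettings) -> bool:
    """ratings maps category -> 0-10 strength for this image."""
    if image_name in s.whitelist:       # per-image override wins
        return False
    if image_name in s.blacklist:
        return True
    return any(ratings.get(cat, 0) > s.thresholds.get(cat, 10)
               for cat in ratings)

# Example: a user who tolerates mild nudity (<= 3) but blocks all "sex"
s = FilterSettings(thresholds={"nudity": 3, "sex": 0, "violence": 10})
print(should_hide("venus.jpg", {"nudity": 2}, s))   # False
print(should_hide("explicit.jpg", {"sex": 8}, s))   # True
```

Checking the whitelist before the blacklist is an arbitrary choice for this sketch; the proposal only says that explicit per-image choices should override the sliders.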

About the institutions, overall I agree that public computer labs have a right to control what is seen. As my point above still stands, though, this greatly hurts a massive demographic. Most Wikipedia editors have personal computers, are well-educated, etc., and so have this false view that everybody owns a computer and lives in some sort of metropolis with a thousand internet cafes. But we can't ignore the demographic with only access to one public computer, especially in conservative or religious areas where administrators are likely to enforce filters. Many of the pictures we're thinking of allowing to be filtered have crucial educational content. And I know so many people, some street kids, some homeless, some deadheads, some just minimum-wage families without a computer, that if the local school or library enforced a policy would have no alternative Wikipedia access. I can only imagine how many people are in this situation in extremely poor, rural areas.
To protect this demographic at large, I don't think we should even consider giving institutions this tool in the first place. Rodgerrodger415 16:36, 20 August 2011 (UTC)Reply
You do not seem to understand the proposal. This system lets any reader, with a single click, see any image the reader wants to see. This system will not work for schools, libraries, or parents. Think about it: the school says "Hide these images from the children". The student says "Ha ha, I click here to see the image anyway!". There's nothing the school's computer systems can do to prevent the individual user from overriding the settings on this system. If you really want to censor your users' images, you need to get a en:web proxy (or similar, paid systems, like "CyberPatrol") to do this. This system simply will not work for that purpose. WhatamIdoing 20:21, 23 August 2011 (UTC)Reply
1. As discussed above, "nudity," "sex" and "violence" are merely three of countless types of image content regarded as offensive by some individuals in some cultures.
2. Also discussed above is the inevitable difficulty of determining whether to include a given image in a particular filter category (and the likelihood of controversy, argumentation and edit wars). If I understand correctly, you propose that this be further complicated by requiring that each image's "strength" (whatever that means) in a given category be ranked on a numerical scale. How is that remotely feasible? Via what process do you envision this occurring? —David Levy 07:32, 20 August 2011 (UTC)Reply
You missed the part where I said the individual could force an image blocked under a certain user's settings to be shown (via a user-account-specific or cookie-specific whitelist) if that user wanted a specific image shown that they believe should not have been blocked. Likewise, if I find an image that I believe should be blocked under my current settings but isn't, then I could "blacklist" it.
Keep in mind this policy gives the USER (not Wiki) control of the censoring. It is NOT a violation of speech, and given certain images on certain wikis I may well choose to NOT BROWSE TO A CERTAIN WIKI ENTIRELY if this new feature is not put in place. By forcing me to NOT go to a certain wiki (based on my values) they would be in effect censoring THAT ENTIRE WIKI for me. Now THAT would be the unfair thing to do. What they are proposing gives the user MORE POWER, rather than taking away power as you claim. It does NOT infringe on their right to free speech in ANY WAY!
As for the implementation, it would be simple. It should use the world's best shape/image recognition software implementation ever invented. Not sure what the name of the software is, but it would be expensive, VERY expensive. But in the end it would be worth it to allow for a very good implementation of my suggestion. It would (for nudity) auto detect the presence of certain body parts, how much of the different parts is shown, the pose used (does it look like a "slutty" pose like in a playboy magazine, or just a "plain nude" pose like nude art in an art museum), the overall realism of the image (from "flat" images like a cartoon, to realistic 3D computer graphic renderings, to actual photos), and also in general what fraction of the skin is exposed (bikini coverage, versus fully clothed, etc). All these factors would contribute to the final computer determined "strength" rating in nudity category.
Of course violent images and images with sexual content would also have similar things looked at, and ratings calculated automatically by computer based shape/image recognition technology.
And don't think my suggestion is science fiction. There already exists forensic software that the police can use to detect child porn on a computer that indeed depends on implementation of image/shape recognition algorithms to determine in an image the age of the person depicted and also what amount of nudity and/or sexuality is present in the image. This allows the images the cop is searching for to be narrowed down, and he/she can then manually inspect these images to make sure that no "false alarms" are marked as illegal images.

Animedude5555 07:57, 20 August 2011 (UTC)Reply

You missed the part where I said the individual could force an image blocked under a certain user's settings to be shown (via a user-account-specific or cookie-specific whitelist) if that user wanted a specific image shown that they believe should not have been blocked. Likewise, if I find an image that I believe should be blocked under my current settings but isn't, then I could "blacklist" it.
I "missed" nothing. I didn't comment on that portion of your proposal because it's irrelevant to my main concerns. I don't know why you've reiterated it, as it doesn't address anything that I wrote.
Keep in mind this policy gives the USER (not Wiki) control of the censoring.
...within a predetermined set of filters.
It is NOT a violation of speech, and given certain images on certain wikis I may well choose to NOT BROWSE TO A CERTAIN WIKI ENTIRELY if this new feature is not put in place.
That's your prerogative. Under the above proposal, persons objecting to image content other than "nudity," "sex" and "violence" face the same dilemma.
By forcing me to NOT go to a certain wiki (based on my values) they would be in effect censoring THAT ENTIRE WIKI for me. Now THAT would be the unfair thing to do.
Note the phrase "my values." Why do you believe that your values take precedence over those of other individuals/cultures?
What they are proposing gives the user MORE POWER, rather than taking away power as you claim. It does NOT infringe on their right to free speech in ANY WAY!
I don't recall claiming any such thing.
As for the implementation, it would be simple. It should use the world's best shape/image recognition software implementation ever invented. Not sure what the name of the software is, but it would be expensive, VERY expensive. But in the end it would be worth it to allow for a very good implementation of my suggestion. It would (for nudity) auto detect the presence of certain body parts, how much of the different parts is shown, the pose used (does it look like a "slutty" pose like in a playboy magazine, or just a "plain nude" pose like nude art in an art museum), the overall realism of the image (from "flat" images like a cartoon, to realistic 3D computer graphic renderings, to actual photos), and also in general what fraction of the skin is exposed (bikini coverage, versus fully clothed, etc). All these factors would contribute to the final computer determined "strength" rating in nudity category.
Of course violent images and images with sexual content would also have similar things looked at, and ratings calculated automatically by computer based shape/image recognition technology.
Okay then. Thanks for sharing your thoughts on the matter. —David Levy 08:19, 20 August 2011 (UTC)Reply

Objective/neutral filtering is easy : just pick a few dozen keywords


First off, I'm rather appalled at the "Wikipedia (and I) know(s) better than you what images you should see" attitude. Also, "the government will start censoring Wikipedia if we add filters" is an absurd slippery-slope argument.

More importantly, it's extremely easy to have an objective filtering system. You just need to use objective keyword system. For example, you tag all images that have nipples in them. The reader can decide whether they will accept images with nipples in them. The presence of an exposed nipple is not subjective. No one has to determine how sexually explicit the image is on some sort of scale. There is a nipple in it, end of story.

This would easily cover the vast majority of content that could be considered offensive in any culture. Think of the keywords you can easily and objectively add to an image:

  • Nipples, genitals, penis, vagina, anus
  • Intercourse/penetration, sexual stimulation (or masturbation and variations thereof), oral sex/stimulation
  • Blood, exposed organs, disease, surgery
  • Suicide, deceased person (or corpse, cadaver)
  • Carnivorism, skinning
  • Religious imagery (or, Catholic imagery, Hebrew imagery, etc, when the particular religion is not questionable)
  • Muhammad, Jesus, God, etc
  • Weapon, firearm
  • Hijab

Keywords that could potentially be ambiguous could be defined or not used. There could also be potentially combination filters, for example, do not display an image tagged "female" unless it is also tagged "hijab".

I think this is a reasonable approach to the idea. 68.126.60.76 13:35, 21 August 2011 (UTC)Reply
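The keyword scheme proposed above, including the combination-filter example ("do not display an image tagged "female" unless it is also tagged "hijab""), reduces to simple set logic over objective tags. A minimal sketch under that assumption, with all tag names and function names purely illustrative:

```python
# Hypothetical sketch of objective keyword filtering: each image carries
# a set of factual tags; a user blocks plain tags plus "combination"
# rules of the form (tag, unless_tag). Illustrative only.

def hidden(tags: set, blocked: set, combos: list) -> bool:
    if tags & blocked:                       # any plainly blocked tag present
        return True
    return any(t in tags and u not in tags   # e.g. "female" without "hijab"
               for t, u in combos)

blocked = {"nipples", "exposed organs"}
combos = [("female", "hijab")]

print(hidden({"painting", "nipples"}, blocked, combos))          # True
print(hidden({"female", "hijab", "portrait"}, blocked, combos))  # False
print(hidden({"female", "portrait"}, blocked, combos))           # True
```

The point of the sketch is that no "strength" judgment is ever made: each decision is a membership test on tags that are either factually present or not.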

Why do we need an "objective" filtering system? People get to chose to use it or not. If they disagree with it then they don't need to use it. Ottava Rima (talk) 13:39, 21 August 2011 (UTC)Reply
To satisfy concerns that a filtering system will be non-neutral in some way, including cultural bias, and to avoid the need to come up with subjective image categories. 68.126.60.76 13:44, 21 August 2011 (UTC)Reply
A filter isn't supposed to be neutral. It is stupid for a filter to be neutral. A filter is supposed to filter what the -user- wants, which makes it 100% subjective and non-neutral. Otherwise, it isn't a filter. Ottava Rima (talk) 16:11, 21 August 2011 (UTC)Reply
No matter how many times people explain the distinction between "the reader deciding which filter(s) to use" and "the project deciding what filter categories are available and what images are placed in said categories," you continue to conflate the two. It's rather frustrating.
The concern isn't that a reader's personal beliefs are non-neutral. That's a given. The concern is that the filter options presented to the reader will be non-neutral (i.e. only some "potentially objectionable" images will be assigned categories, with subjective determinations of the files contained therein).
For example, if we have a filter category for sexual content, there will be significant disagreement regarding what images are "sexual" in nature. Meanwhile, if we don't have filter categories for unveiled women or homosexuals, this will constitute a formal declaration that persons/cultures deeming those subjects "objectionable" are wrong to do so. (And if we do have filter categories for those subjects, the opposite value judgement will be inferred.)
The only neutral approach, short of providing filter categories for literally everything (which obviously isn't feasible), is to provide filter categories for nothing. (This, of course, doesn't bar the introduction of a system providing the native capability to block all images and manually display them on a case-by-case basis.) —David Levy 17:05, 21 August 2011 (UTC)Reply
After several pages of well-argued debate, where an overwhelming majority of veteran Wikipedia editors vent their concerns about the matter, dismissing the argument as an "absurd slippery slope" seems rather asinine to me. As for your proposal, although slightly less dangerous, it shares with the original one a mix of good intentions, leniency towards moralism, and practical dangers. To begin with, governments could filter based on legitimate tags for oppressive reasons: governments in North Africa, for example, could censor all non-Muslim religious images just for the sake of bothering religious minorities, which they already systematically do. On the other hand, images having nothing to do with, say, genitals could be tagged as such if they are found to be inconvenient. And yes, Wikipedia is an encyclopedia, so it does know a lot of things better than you (as in the general you), probably including, but not limited to, which images you should see. Complainer
Actually, the majority of users are for the filter. The "majority" you are speaking of on this page (which is barely over 50%) are mostly IPs from those logging out and using multiple IPs to make it seem like far more people. If we implemented a bot that CU checked everyone and blocked those who log out edit, a lot of users would be blocked from their actions on this page alone. Ottava Rima (talk) 16:11, 21 August 2011 (UTC)Reply
Actually, most users are opposed to this filter, especially outside the US. (If you don't need proof for your claims, neither do I. Helps the quality of the debate, doesn't it?) Kusma 16:23, 21 August 2011 (UTC)Reply
Most users? No. Not even close. Most users in the world are actually against porn. Stop trying to project a tiny, vocal minority that lives in the Netherlands or Germany upon the rest of the world. China and India are 1/3rd of the world and their culture is very anti-pornography as a whole. As is the Muslim world, which is another 1 billion. That is half the world that is completely opposite of a tiny few, and we haven't even discussed the Americas in that number. Ottava Rima (talk) 16:30, 21 August 2011 (UTC)Reply
How do you know that the majority of users is for the filter? Adornix 16:25, 21 August 2011 (UTC)Reply
Because you can look at the talk page: pornography-related talk pages are always heavily libertine, even though the raw votes never come close to suggesting that. Then you can take out the duplicate IPs and the logged-out editing and realize that those in favor of this still represent a vast majority. With the bias acknowledged, it is probably about 80% for the filter. After all, this is about giving people choice, not tyranny by a few people in the Netherlands or Germany who wish to force their views on sexuality upon the rest of the world, which wants moderation. Ottava Rima (talk) 16:30, 21 August 2011 (UTC)Reply
Even if all IPs are the same person, your count seems completely off. Kusma 16:45, 21 August 2011 (UTC)Reply
Maybe you need to do a recount. At least two people admitted to "accidentally" editing logged off. IPs have no way of really knowing about this except to have a user account. It is a majority for the proposal, and most polls that aren't done by having canvass shoutings on talk pages have always shown a vast majority against the porn. Only about 2% of the world's population lives in such a libertine culture. Ottava Rima (talk) 16:59, 21 August 2011 (UTC)Reply
The Wikimedia Foundation's fundamental principles cannot be overridden by a majority vote.
It's reasonable, of course, to opine that this idea complies with said principles. But if it doesn't, no amount of support can change that. —David Levy 17:05, 21 August 2011 (UTC)Reply
The primary principle of Wikimedia is accessibility to an encyclopedia. Encyclopedias do not require images. Denying a large portion of the world access to an encyclopedia that those like -myself- wrote, because -you- demand that they have no ability to keep pornography or other problematic images from appearing for them, is one of the most egotistical things ever, and it has no place on this site. The WMF is acting entirely in line with what ethics and its principles demand. You are being uncivil by attacking it for doing so. Ottava Rima (talk) 18:58, 21 August 2011 (UTC)Reply
I tell you for the third time, OR: STOP calling people names. Like it or not, this is NOT how you conduct a discussion on Wikipedia. Anyhow, I think there is some confusion here. Nobody here is arguing that people should not have access to Wikipedia. We are arguing that not looking at pictures they consider indecent is their own responsibility, and neither Wikipedia's nor their teachers' nor that of whoever happens to be in control of their firewalls. You provide the perfect example of the concept, being obviously opposed to much of our image bank and still an active contributor. If it didn't bother you, why should it bother a billion Chinese? Complainer
The Wikimedia Foundation hosts several projects other than Wikipedia, so let's extend "an encyclopedia" to include all of them. You're correct that our written content can be useful (and preferable in some cases) without the display of images. (We haven't even touched on bandwidth issues or visual impairments.) That's why I support the introduction of an optional image filter, provided that it covers all images (and enables their manual display on a case-by-case basis). Problem solved. No more "pornography or other problematic images" appearing against the reader's wishes.
I merely object to the establishment of a system in which images are singled out and formally deemed "potentially objectionable," which cannot be accomplished in a neutral or non-contentious manner, let alone an efficient or reliable one.
You accuse me of incivility for expressing disagreement with the Board of Trustees (on a page set up to solicit feedback regarding the idea) while simultaneously describing my position as "egotistical" (one of many such insults that you've hurled). There's no need for that. —David Levy 19:29, 21 August 2011 (UTC)Reply
"STOP calling people names" - I haven't called anyone one name. However, you are making up false statements about what others say, which is a major breach of civility. You really need to stop. You have already made it clear that you don't have respect for cultures different from you and that you want to ensure that hundreds of millions of people can never have access to Wikipedia because you don't want to give them a choice. It is highly incivil, selfish, and not appropriate. It is also racist because you are intolerant of other cultures that are not like you. You wish to deny them a simple choice because they are different. That is not what Wikipedia is for. Ottava Rima (talk) 20:11, 21 August 2011 (UTC)Reply
1. You appear to have mistaken Complainer's message for part of mine.
2. As noted elsewhere on this page, you've somehow misinterpreted my messages to mean exactly the opposite of my actual sentiment. Among other reasons, I oppose the idea because I believe that it would discriminate against members of various cultures by affirming some beliefs regarding what's "potentially objectionable" and denying others. You needn't agree, but attributing my stance to intolerance and racism is uncalled-for. —David Levy 20:42, 21 August 2011 (UTC)Reply
1. I didn't mistake it. I assumed that you had used one more colon than you did, and it formatted behind yours. 2. And don't dissemble about your claims. It is impossible for anyone to argue that a few people will be discriminated against when doing nothing continues the discrimination against hundreds of millions of people. You wish to impose your fringe view on the world to deny people a choice not to look at porn while using the encyclopedia. In essence, you deny children, those at work, and the billions of people in places like the Middle East or China who are not legally allowed to look at porn from having access to the encyclopedia. That isn't appropriate, no matter how much you try to claim you are protecting people. Ottava Rima (talk) 21:25, 21 August 2011 (UTC)Reply
1. You keep referring to "Wikipedia" and "the encyclopedia." Again, this pertains to every Wikimedia Foundation project.
2. As I've noted repeatedly, I support the introduction of an optional image filter, provided that it covers all images (and enables their manual display on a case-by-case basis). Problem solved. No display of "porn" against readers' wishes.
I merely object to the establishment of a system in which cultural beliefs (whether yours, mine or someone else's) of what constitutes "objectionable images" are formally codified, thereby deeming them correct and everyone else's incorrect. In addition to opposing this on philosophical grounds, I believe that such a method would be highly contentious, inefficient, resource-draining and unreliable.
But again, I'm not asking you to agree with me. If you feel that I'm wrong, that's fine. But please drop the allegations of intolerance and racism. There's no need for that. —David Levy 22:32, 21 August 2011 (UTC)Reply

┌───────────────────────────────────────┘
porn -- An encyclopedically valuable depiction of a nude body or bodypart is "porn" in your worldview, Ottava Rima? "Porn" as in "portrayal of explicit sexual subject matter for the purposes of sexual arousal and erotic satisfaction"? Seriously: Is e.g. commons:file:Human vulva with visible vaginal opening.jpg "porn" to you? This is important because if it is, then you should understand that the problem is all yours. --87.79.214.168 21:40, 21 August 2011 (UTC)Reply

You completely missed the entire point of this section. It is unnecessary to define porn if you use appropriate objective keywords. If he doesn't want to see a vagina, he can block images tagged vagina, and nobody has to care what his definition of pornography is. 68.126.60.76 04:41, 22 August 2011 (UTC)Reply
I was replying to Ottava Rima. --78.35.237.218 09:07, 22 August 2011 (UTC)Reply



"so it does know a lot of things better than you (as in the general you), probaly including, but not limited to, which images you should see" - this is arrogant idiocy. Anyway, arguing about how many people are on each side is incredibly childish and pointless. The final vote will be the final vote. You shouldn't try to sway people's opinions by telling them they are in the minority. Each individual should vote for him or herself - to try to tell them otherwise is authoritarian, elitist behavior which is absolutely incompatible with the views you are supposedly attempting to uphold.
"Lots of people are concerned about a slippery slope" doesn't necessarily make it a less fallacious argument. People are herd animals and tend to panic in large numbers. Why would governments censor images based on keywords for images but not based on keywords in articles? It would be easy to block every page with the words "Jew", "Hebrew", etc in them for example. On the other hand, they might uncensor entire articles when an "offensive" image is removed. Why is this any less likely than "Africa will censor all non-Muslim images to religiously oppress the people"? Why wouldn't Africa block Wikipedia entirely? 68.126.60.76 04:39, 22 August 2011 (UTC)Reply
I support 68.126.60.76's idea. The categories should have strict definitions easy to measure. So instead of a vague "partial nude", we could say "underbody clothes (shorts, skirt, etc) shorter than X cm/in below crotch, cleavage below X cm/in from a certain point, visible belly". We could discuss about how many cm, of course, but this way each picture will be easy to judge and discussion would restrict to the definitions themselves. --190.135.94.198 01:06, 24 August 2011 (UTC)Reply
I assume this is sarcasm, but it's a long leap of logic to assume objective filtering means guessing measurements. 68.126.63.156 02:43, 30 August 2011 (UTC)Reply

Working definition of "controversial" + categorization scheme


I've been looking more closely at the 2010 Wikimedia Study of Controversial Content. The Study notes, among other things, that Wikimedia projects already have working definitions for "controversial" text or topics. As an example, the Study cites the English Wikipedia's list of controversial issues. The Study suggests that a similar, though of course different, working definition of "controversial" could be achieved for images.

I have two questions:

1. Does this mean that the Foundation is planning to have one single catch-all category of "controversial images" which users could choose to filter out? I'm confused on this point. From what has been said on the main page for this referendum, I was under the impression that the plan was to categorize images as belonging to various categories (e.g. nudity, violence, drug use) and allow users to filter out specific categories. Is the plan to have both a single catch-all category of "controversial images" and various sub-categories (e.g. nudity, violence, drug use), with the option of choosing which sub-categories to filter out? Please clarify.

2. The English Wikipedia defines a controversial issue as "one where its related articles are constantly being re-edited in a circular manner, or is otherwise the focus of edit warring". The Study of Controversial Content cited this in a footnote as an example of an objective, verifiable criterion for controversial content. In that case, is the standard for controversial images envisioned as being based on the number of complaints or edit wars associated with an image? Or is there no definite idea yet as to how images will be categorized as "controversial"?

Note: I'm looking for some clarification here, and ideally some discussion on how best to establish a categorization system for images, should the image-filtering plan proceed. Please don't comment here if you're just going to argue for or against the filtering plan. --Phatius McBluff 16:24, 18 August 2011 (UTC)Reply

I would also be interested to see these questions answered. But I have to add that I'm strongly against the introduction of such a categorization, regardless of the way in which it is done. --Niabot 16:34, 18 August 2011 (UTC)Reply
Hello Phatius McBluff, and thank you for reading through all of the information in such detail! It's good to know that people are actually reading the stuff. :-) The controversial content study is just the background for the referendum and part of the inspiration. Robert Harris was contracted by the Foundation to look into the issue of controversial content on Wikimedia projects, and prepared that report. After that, the Board discussed it and felt it agreed with some and disagreed with other points raised, etc. (Phoebe's comment above in #discussion could help more, since she's on the Board and I'm not.) Then the image filter referendum proposal came up, which is what we're discussing now.
The current proposal is just what you see on the Image filter referendum page and its subpages; the controversial content study is just background information. The current proposal does not use a single catch-all "controversial" category, but instead has various subcategories that people can choose from. (Who are we to say what is "controversial"? Things like "nudity", "violence", etc. are a lot more straight-forward and neutral. Granular settings are good. :-)) The idea is also that it would fit in with our existing category system on Commons and the various projects, rather than by creating an entire new set of categories. Cbrown1023 talk 16:39, 18 August 2011 (UTC)Reply
The current proposal does not use a single catch-all "controversial" category, but instead has various subcategories that people can choose from. (Who are we to say what is "controversial"? Things like "nudity", "violence", etc. are a lot more straight-forward and neutral. Granular settings are good. :-)) The idea is also that it would fit in with our existing category system on Commons and the various projects, rather than by creating an entire new set of categories.
I'm glad to hear this, if only because it confirms that I was correct in my initial impression of what the plan was. However, just to clarify, by "our existing category system on Commons and the various projects", do you mean the system of (say) putting this image into Commons:Category:Shamanism in Russia? In other words, is the plan to allow users to block images from whatever image category they please? I'm puzzled on this point, because the main page for this referendum mentions a possibility of only "5–10 [filterable] categories". --Phatius McBluff 17:04, 18 August 2011 (UTC)Reply
Phatius, it's not just you being confused; there's been a lot of discussion around this, and nothing is etched in concrete. I think in a perfect world being able to filter any image category you chose would be nice; but there are pretty serious usability and technical concerns around doing that. So the proposal is to filter 5-10 categories. -- phoebe | talk 18:16, 18 August 2011 (UTC)Reply
Okay, I think I now understand where things currently stand. Cbrown and phoebe, thanks for your prompt responses! I do hope that we get to have more input from ordinary users on exactly how the category/tagging system will work. Most of the feasibility (and at least some of the philosophical) concerns surrounding this proposal will probably hinge on that. --Phatius McBluff 19:49, 18 August 2011 (UTC)Reply
I too took a closer look at the report. One salient point was that the recommendations were addressed to the community, and explicitly not to the board. Rich Farmbrough 17:09 18 August 2011 (GMT).
A companion report was delivered to the Board; the body was unchanged but it included an introduction directed explicitly to the Board. SJ talk | translate   01:23, 6 September 2011 (UTC)Reply
"Things like "nudity", "violence", etc." If the purposes of this project are to allow people to filter out what they do not want to see, that's a pretty big etc. How do we select the 5-10 top objectionable categories, as measured by audience aversion? I'm very leery about a filter that could potentially invite users to filter out:
  • Nudity
  • Sexuality
  • Violence
  • Gore & Dead Bodies
  • Blasphemous Images (Christianity)
  • Blasphemous Images (Islam)
  • Blasphemous Images (Hinduism)
  • Sexist images
  • Racist Images
  • Depictions of Homosexuality
--Trystan 19:35, 21 August 2011 (UTC)Reply
I'm also very worried about whether we can objectively assign images to categories. I can think of one objective measure: "Has been objected to by at least one known human". That category, over time, would approach filtering all images, but it would be completely fair. Additionally, we can fairly use mathematical criteria like "frequency of objection" to classify images.
5-10 categories would be very hard to sustain. People who object to classes of content will legitimately want equal footing for their personal objections, and we'll be hard pressed to find any fair way to assign categories or justify any particular assignment. Most of all, when someone is offended by an image not already categorized, what can we tell them? "Your offense isn't as valuable as other types of offense"? 5-10 categories will tend to slide towards unlimited categories, and I think that will work. --AlecMeta 00:20, 23 August 2011 (UTC)Reply
It will not necessarily extend very much, as there are usability limits. But we will honour only some of the requests. We cannot rely on the number of complaints, as anybody can create enough accounts to object to a single image many more times than the realistic number of genuine complaints. That means it is up to subjective decisions by editors who have an interest in editing the filters or related categories (depending on how it is set up).
For the encyclopaedic articles we can rely on editors wanting to improve the quality of articles on noteworthy subjects (and POV editors being in the minority). Editors will get their reward by seeing those quality articles (and the quality Wikipedia). I am not convinced the same will work with the filter categories. The work may indeed be dominated by POV pushers - or take energy from the poor souls already fighting with backlogs and uncategorised media on Commons. I have seen no suggestion on how this will be solved, or any reasoning about why this will not be so.
--LPfi 14:02, 23 August 2011 (UTC)Reply

What will happen? All users who are offended by the filter will switch it off. And over time, all the censors will put thousands of images into one or another offending category. They will visit controversial articles in search of images to ban. And finally they will put classical works of art into such categories. And then the editors of valuable content are gone. Hopefully somebody has made a fork under a liberal jurisdiction by then. --Bahnmoeller 18:56, 23 August 2011 (UTC)Reply

What categories are admissible?

The elephant in the room here is the incredible amount of handwaving that goes on when it comes to discussion of the categories that will be available to filter. We're to understand that they will include graphic sexual images, possibly nauseating medical images, ETC. Well the ETC is a big deal. If John Q. Antisemite and friends want the facility to hide images of Jews on the grounds that Jews have demon eyes that corrupt and defile all who meet their gaze, are we going to tell them that's unacceptable and we won't make such a category available? If so, what are the criteria for determining the admissibility of a filter category? This is crucial information that needs to be spelled out in advance rather than left until after this is a fait accompli. It's no good saying we will leave it to readers to determine which categories they want. I strongly suspect some will be more welcome than others, and this will have a lot to do with the particular cultural biases of the people overseeing this new tool. It will not be long at all before someone proposes a category that is judged unacceptable. What do we do then? How do we decide whose aversions to cater to? Beorhtwulf 15:53, 23 August 2011 (UTC)Reply

You're making an excellent point! Huge and inevitable drawbacks like that are the reason why I believe only a general image filter (possibly with a per-user blacklist/whitelist system) could ever work. Other examples: A friend of mine has BPD and cannot look at images of tall structures without getting an anxiety attack. Do we create a filter category for people like him? A female friend of mine was raped by a black guy and she suffers from PTSD and cannot look at images of black guys without panicking. Do we create a filter category for people like her? Myself, I'm arachnophobic, but I have no problem with images of insects and do not want them thrown into the same category with spider images. Et cetera, et cetera, et cetera. --213.196.212.168 20:02, 23 August 2011 (UTC)Reply
The reason that "etc" gets thrown about is that the system is not fully designed. The WMF has a reasonable point in making sure that this is generally supported before they invest more staff time in this.
I'm reminded of an old Doonesbury comic strip. A political candidate proposes an unpopular (but possibly necessary) economic policy and says that it is likely to result in some people losing their jobs. The media say, "But can't you be more specific?" He responds, "Yes, in this town, this particular factory will fire exactly this many workers". And the media say, "But can't you be more specific?" He responds, "Yes, the following people will lose their jobs: John Smith of 123 Maple Street, Alice Jones of..."
Back here in the real world, we can't foresee every single consequence. We can't predict exactly which categories people will choose to include. We can't even predict exactly which categories will exist: they change all the time. In just the last hour, about 75 new categories were created on Commons and about 20 were deleted.
So you're basically saying "Please show me the fixed, never-changing, guaranteed, final version of the product, so I can see exactly which images in which articles are going to be affected forever", and the fact is that what you want not only does not exist now, but it will never exist. This is a wiki: it will always change. We can give you general descriptions—there's pretty strong support for giving readers the option of not seeing certain general types of images—but the exact specifics, and how any given image fits into that, will definitely change over time. WhatamIdoing 21:43, 23 August 2011 (UTC)Reply
the fact is that what you want not only does not exist now, but it will never exist -- Exactly. Which is why we are opposed to any project-wide filter category system. Beorhtwulf's argument illustrates that not only is the WMF proposal not perfect, as can be expected from a work in progress, it is flawed beyond potential practicability as long as it includes special filter categories. There are most definitely far, far more than "5-10" types of images people will want to have filtered. On top of that, as Beorhtwulf correctly argues, there are types of images for which a category will be rejected out of reasons of political correctness (should e.g. my friend get the filter category for images of black men? are you saying we should have such a category? or are you saying that my friend's very real PTSD-based problem is not grave or widespread enough to warrant an image filter? how would you or anyone go about defining objection to a certain type of image as sufficiently grave and widespread for a filter category? does that mean we only pay respect to widespread and politically correct objections to images?). All of that is conveniently hidden under the ETC rug. --213.196.212.168 22:06, 23 August 2011 (UTC)Reply
Even if the system were fully implemented today, it would still be possible to change it in the future. We should not pretend that it won't. It would be silly to make promises today about exactly which types of images might or might not be included in the future.
In general, though, by limiting it to 5 to 10 groups of images, the WMF is signalling that the intention is not to provide thousands of options to cover every possible individual's phobia. If only 5 to 10 groups of images are available, then only the 5 to 10 groups deemed to be the highest priority will be implemented. If "images of men who look like the one who raped me" is in the top 5 or 10, then it might well be implemented; if it's not, it won't. People who need something else will have to do what they are doing now, which is either avoiding WMF projects altogether, or buying a third-party filtering system. WhatamIdoing 23:44, 23 August 2011 (UTC)Reply
If only 5 to 10 groups of images are available, then only the 5 to 10 groups deemed to be the highest priority will be implemented. -- Who defines "highest priority"? According to what parameters?
People who need something else will have to do what they are doing now, which is either avoiding WMF projects altogether, or buying a third-party filtering system. -- Ah, so you're saying that the needs of the few should be ignored but the needs of the many shouldn't. That probably makes sense in your own worldview, but not in mine and hopefully not in the Wikimedia Foundation's. Also, there are alternatives (especially the adjustable general image filter) that take care of everyone, not just the many.
Why are you so hellbent on the image filter being based on categories, especially when there are alternatives and when people are pointing out numerous fatal flaws with any project-wide filter category system?
Two blunt but honest questions at this point: (1) Are you being intellectually dishonest here (i.e. secretly recognizing my and other people's valid points)? (2) Do you personally have a problem with any particular type of images that is likely to be deemed "highest priority" (e.g. nudity)? --213.196.212.168 00:10, 24 August 2011 (UTC)Reply
"Highest priority" is determined by the community, which, in the end, is likely to mean what the most people (around the world, not just the young, single, childless, white males who dominate this page) demand.
The point behind using categories is that they already exist, and therefore impose far less burden than alternatives, like tagging specific images. If (to use your example of nudity) we wanted a category for nudity, it is the work of mere seconds to say, "Put Commons:Category:Human penis (or selected subcategories) on the list". Doing the same thing with a separate tagging system would require editing about a thousand image description pages. Doing the same thing with special, new categories would similarly require adding the new category to every one of those pages. Furthermore, categories are a well-understood technology, and we have a long history that helps us figure out whether any given file should be placed in any given category. Whether File:____ belongs in Cat:Human penis is an easy question; whether File:____ should be tagged as "Images that are offensive on sexual grounds" is not.
I have no expectation of enabling any of the filters (unless, I suppose, it becomes so widely used that it is useful for checking what other readers will see). I do not think that I need to protect myself from any images. WhatamIdoing 17:17, 24 August 2011 (UTC)Reply
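Mechanically, the "use existing categories" idea described above amounts to checking whether any of an image's categories, or their ancestors in the category tree, appear on a chosen filter list. Here is a minimal sketch under stated assumptions: the category graph, the function names, and the traversal semantics are all invented for illustration and are not part of any actual MediaWiki implementation.

```python
# Toy child -> parents map standing in for the Commons category tree.
# The entries are illustrative, not real Commons data.
PARENTS = {
    "Human penis": {"Male human anatomy"},
    "Male human anatomy": {"Human anatomy"},
}

def ancestors(cat, parents=PARENTS):
    """Return the category plus every ancestor, walking child -> parent links."""
    seen, stack = set(), [cat]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(parents.get(c, ()))
    return seen

def hidden_by_filter(image_categories, filter_list):
    """True if any of the image's categories (or their ancestors) is on the filter list."""
    filter_set = set(filter_list)
    return any(ancestors(cat) & filter_set for cat in image_categories)
```

Note how this sketch also illustrates the breadth problem raised in the thread: putting a broad category like "Human anatomy" on the filter list would hide everything filed anywhere beneath it, which is exactly why the choice of which categories go on the list matters so much.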
"Highest priority" is determined by the community, which, in the end, is likely to mean what the most people (around the world, not just the young, single, childless, white males who dominate this page) demand.
In other words, we'll vote on what is and isn't reasonably "objectionable," with the majority beliefs formally deemed valid and the minority beliefs dismissed.
If (to use your example of nudity) we wanted a category for nudity, it is the work of mere seconds to say, "Put Commons:Category:Human penis (or selected subcategories) on the list".
Does this count as "nudity"? Who decides? —David Levy 21:05, 24 August 2011 (UTC)Reply
Who will decide how to categorize that image? The same people that decide how to categorize it now. There is no proposal to change the method by which images are categorized on Commons.
Who will decide whether the subcats under Commons:Category:Bikinis will be included under a "sex and porn" tickbox? The same community that does not connect those categories to any of the sex, porn, or nudity categories now. WhatamIdoing 17:58, 25 August 2011 (UTC)Reply
1. The image filter system won't be limited to Commons; many Wikimedia Foundation wikis host images separately.
2. You're mistaken in your belief (also stated elsewhere) that no new categorization would be necessary. As has been explained in great detail, our current category system isn't remotely suited to the task planned. There's a world of difference between subject-based image organization and sorting intended to shield readers from "objectionable" material. —David Levy 19:02, 25 August 2011 (UTC)Reply

┌─────────────────────────────────┘
You believe our current cat tree won't work. What does that mean? Most probably, you mean that you looked at a category like Commons:Category:Violence and decided that in your personal opinion, many of the images were not objectionable (however you are defining that), like a scan of a 17th century publication and a logo for television ratings. Then you assumed (wrongly, I believe) that this category would definitely be included in any list of "Tick here to suppress violent images".

The cat includes a few images that people who didn't want to see violent images probably wouldn't want to see. The first line, for example, shows a very bloody wrestling match. But that image would be suppressed by including either Commons:Category:Hardcore wrestling or Commons:Category:Professional wrestling attacks, basically all of which are images of violence or its aftermath. It is not necessary to include Cat:Violence (with its wide variety of image types) to filter out this image.

In other cases, the images are currently not categorized correctly. The images of the victims of violence belong in "Victims of violence", not in "Violence". The old paintings belong in "Violence in art". The presence of so many images from movies suggests that Commons needs a "Category:Movie violence"—regardless of what happens with any filter. This is routine maintenance; people do this kind of work at Commons all day long.

There will be false positives on occasion. This is not a bad thing. It should be possible for people to identify most false positives by reading the image captions, and they will click through in those instances. If the contents of a category change so that there are a lot of false positives, we can—and will—remove those categories, or substitute specific subcats. If the filter is restricting far more than people want restricted, they will turn off the filter.

I'm just not seeing the problem. We don't need to categorize specific images according to our subjective view of whether the image is "objectionable". We're not promising perfection; perfection is not feasible. A reasonable list taken from the existing cat tree—based less on what should be considered "objectionable" and more on what actually produces objections—should be sufficient. WhatamIdoing 22:04, 25 August 2011 (UTC)Reply

Most probably, you mean that you looked at a category like Commons:Category:Violence and decided that in your personal opinion, many of the images were not objectionable (however you are defining that)
I've been exceedingly clear in conveying that I strongly oppose reliance on my (or anyone else's) personal opinion of what's "objectionable."
Then you assumed (wrongly, I believe) that this category would definitely be included in any list of "Tick here to suppress violent images".
No, I've made no such assumption.
There will be false positives on occasion.
The greater problem, in my view, would be false negatives. As others have pointed out, our current categories are based upon what the images are about, not what they contain. For example, a photograph with a bikini-clad woman in the background (who isn't the main subject) currently is not categorized in any way that would enable persons who object to the sight of her to filter it. Using the current categories for this purpose obviously would dilute them beyond recognition, so for the system to work, we would need to create a separate set of categories. (Otherwise, there would be never-ending battles between those seeking to maintain the categories' integrity and those attempting to include off-topic images on the basis that incidental elements require filtering.)
In some cases, even an image's main subject isn't categorized in a manner indicating the "potentially objectionable" context. For example, many people strongly object to miscegenation. These individuals would have no means of filtering images such as this one.
And what about readers whose religious beliefs dictate that photographs of unveiled women (or any women) are "objectionable"? What current categories can be used to filter such images with any degree of reliability? —David Levy 23:18, 25 August 2011 (UTC)Reply
I see no significant harm in false negatives. Sure: someone's going to upload another goatse image, and it's not going to get categorized instantly, and some innocent reader will want to bleach his brain. But—so what? How is that different from what happens now, except that we might reduce the odds of it happening by an order of magnitude? We're not promising 100% success.
We're actually promising that every single preference by tiny minorities won't be accommodated. Limiting the tickboxes to 5–10 options means that we won't be able to provide a filter that accommodates every single request. We'll have to focus on the 5–10 filters that are wanted by the largest numbers of users. This, by the way, means making those filters both as broad as our readers want, and as narrow as they want. For example, a filter that hides everything even remotely connected to nudity, from images widely considered benign (e.g., anatomical line drawings and marble statues) to images normally considered non-benign (e.g., photographs of sexual intercourse) is not likely to be as popular as a more narrowly tailored list. We might offer options that indicate degrees, like "serious sex and porn" and "most photographs showing genitals", but we are unlikely to waste one of our limited slots with "nudes in art" or "images of women" or "images showing people of more than one race". WhatamIdoing 17:28, 26 August 2011 (UTC)Reply
"...images widely considered benign... to images normally considered non-benign" Perhaps my standards are not "normal", but if I was using a filter (for example, to make WP 'safe' for a child), I would want any and all nudity filtered, without pausing to consider how benign it is.--Trystan 18:30, 26 August 2011 (UTC)Reply
Sure: someone's going to upload another goatse image, and it's not going to get categorized instantly, and some innocent reader will want to bleach his brain. But—so what? How is that different from what happens now, except that we might reduce the odds of it happening by an order of magnitude?
Under the current setup, readers familiar with our content have no expectation of protection from images to which they object. Those for whom this is a concern might disable the display of images via their browsers or add-on scripts.
As soon as a native filter system is implemented, readers will place their trust in it (and regard false negatives as unacceptable).
We're not promising 100% success.
That won't prevent readers from expecting it.
We're actually promising that every single preference by tiny minorities won't be accommodated. Limiting the tickboxes to 5–10 options means that we won't be able to provide a filter that accommodates every single request.
Indeed. That's a major problem, and one that won't be limited to "tiny minorities."
The belief that a photograph of an unveiled woman is objectionable is not rare. Will this be one of the five to ten filter options? Probably not. Why? Because we're allowing cultural bias to shape determinations of which objections are and aren't reasonable.
We'll have to focus on the 5–10 filters that are wanted by the largest numbers of users.
In other words, "if you aren't part of the majority, we don't care what you think."
We might offer options that indicate degrees, like "serious sex and porn" and "most photographs showing genitals", but we are unlikely to waste one of our limited slots with "nudes in art" or "images of women" or "images showing people of more than one race".
Right, we mustn't "waste" resources on those silly, non-majority-approved cultural beliefs.
To be clear, I agree that it would be impractical to dedicate slots to such subjects. We could have 100 slots and come nowhere near covering every "objection." That's one of the reasons why I oppose this implementation (and support the one discussed here). —David Levy 05:38, 27 August 2011 (UTC)Reply
The literature on the history of warning labels in libraries does not support the suggestion that the community will happily settle on a few objectively identifiable, broadly agreeable categories. If we say that people have the right to filter images of nudity, and use Commons:Category:Human penis as one indicator of nudity, I think you will find that a very determined group of editors will be using that category to tag any image containing any depiction of a human penis, from Michelangelo's David on down. The WMF-endorsed user right of filtration will override good classification principles; it's not a very good reply to "I have the WMF-endorsed right to filter human penises," to say, "Well, yes, but this image isn't really about human penises, it just shows one." So any categories used as part of the system would cease to be organizational and descriptive categories, and become instead broad warning labels. You could certainly automatically populate the new warning label using existing categories, but they serve very different purposes and it would be vital to implement them separately.--Trystan 00:02, 25 August 2011 (UTC)Reply
Trystan, I'm not sure I follow your reasoning. If you believe in the ALA's view of the world, there is no way to implement separate labels that would be appropriate and non-prejudicial. Allowing people to choose existing descriptive categories and say "I'd rather not see these images" is the only variant that might fit their criteria - with a style guideline that categories should be [remain] purely organizational and descriptive. That might not perfectly match the expectations of some readers, but then no automated system would. [By the way, if you think that there aren't already edit wars about whether or not to include certain images in controversial categories, you should spend more time on Commons :)] SJ talk | translate   01:23, 6 September 2011 (UTC)Reply

"5-10" had better include the seven categories shown for this to be seen as legitimate

I would suggest that the "5-10" categories from question five should at least include the seven categories shown in this image which was linked from the voting instructions at http://meta.wikimedia.org/wiki/Image_filter_referendum or this vote will probably not be seen as legitimate. Those categories are: children's games; gladiatorial combat; graphic violence; medical; monsters and horror; other controversial content, and sexually explicit. 76.254.20.205 16:31, 24 August 2011 (UTC)Reply

I doubt that anyone will mind if "children's games" disappears entirely from the list, or if "gladiatorial combat" and "graphic violence" are merged. WhatamIdoing 17:19, 24 August 2011 (UTC)Reply
I think there's a good reason to exclude games: they can distract students from assigned work. I'd also like to see gladiatorial combat separate from graphic violence because a lot of very aggressive forms of combat don't result in particularly violent images. If I had my way, I would add religion and fiction to the categories too. 76.254.20.205 20:35, 25 August 2011 (UTC)Reply
The games themselves might distract students from school work, but the pictures of the games will not do that any more than the text of the article will. This is only about hiding (temporarily) the images. It will not keep the students from reading the articles. WhatamIdoing 22:06, 25 August 2011 (UTC)Reply



general image filter vs. category system

the only way forward is with a general image filter

I've noted this at other places, but would like to open a discussion specifically about this issue. Some proponents of the image filter appear to be hellbent on an implementation that relies on categories.

I and many others argue that any possible category system would bring a lot of problems and is ultimately not a workable approach at all. The many varied problems, including neutrality and catching all possibly objectionable images, have been discussed at length elsewhere on this talk page.

Personally, I believe that if we go forward with the image filter, it can only be as a general image filter:

  1. It could be instantly activated and deactivated by the user (or temporarily disabled on a per-image, per-page, or per-session basis).
  2. It is the only way to ensure that nobody will accidentally see images to which they object. At the same time, it is also the only filter variant that is actually "tailored" to each and every single last individual user.
  3. It would avoid the whole (imho completely inevitable) infighting and disagreement and drama over which images to include into which categories and so on. It would also exclude simple oversight.
  4. The general image filter could of course be made customizable by the user, who would sort individual images, articles, or entire categories into a user-specific blacklist and whitelist. Blacklisted images (or images included in blacklisted articles/categories) wouldn't show up for the user even when they disable the filter, while whitelisted images (or images included in whitelisted articles/categories) would be displayed even when the user browses with enabled filter.

In all, I can think of no argument that speaks for a category system rather than the general image filter. Imo, this is the single most important referendum-within-the-referendum. --78.35.232.131 01:07, 23 August 2011 (UTC)Reply
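Points 1-4 above could be sketched as a small per-user data structure. Everything here is a hypothetical illustration of the proposal, not an existing MediaWiki feature; in particular, the class and method names are invented, and the rule that the blacklist takes precedence over the whitelist is my assumption, since the proposal doesn't say which list wins on a conflict.

```python
class UserFilter:
    """Per-user general image filter with a personal blacklist/whitelist (points 1 and 4)."""

    def __init__(self):
        self.enabled = False     # global on/off toggle, instantly switchable (point 1)
        self.blacklist = set()   # images/pages/categories hidden even when the filter is off
        self.whitelist = set()   # images/pages/categories shown even when the filter is on

    def should_hide(self, image, page=None, categories=()):
        keys = {image, page, *categories} - {None}
        if keys & self.blacklist:   # assumption: blacklist wins over whitelist
            return True
        if keys & self.whitelist:
            return False
        return self.enabled         # default: hide everything iff the filter is on
```

With the filter enabled, every image is hidden until the user clicks through or whitelists it; with it disabled, only blacklisted items stay hidden. No project-wide labelling is consulted at any point, which is the crux of the proposal.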

I could support that, with one caveat. If it allows blacklisting of entire categories, there should be a policy in place stating that the purpose of categories is organizational, and not censorial (essentially the distinction drawn by the ALA above.) That way if people argue that an image should be placed in a category just so that it can be filtered (as opposed to it being a good fit for that category based on its subject matter), the policy rests on the side of maintaining the integrity of the category system.--Trystan 02:08, 23 August 2011 (UTC)Reply
Yes, that sounds reasonable. --78.35.232.131 04:28, 23 August 2011 (UTC)Reply
Both sound reasonable to me, if one intends to provide an image hiding feature. Both points are well defended by claims made elsewhere above. I agree that this is the essential implementation detail if we do indeed make it possible to set a personal preference for hiding images. There is only one argument I have heard for a category system: that it might be technically easier to implement. I don't think this is a good argument - a bad solution here could be worse than none. SJ talk | translate   01:23, 6 September 2011 (UTC)Reply
What's the difference between your suggestion and the currently proposed one? It's almost the same. -- Sameboat (talk) 06:15, 23 August 2011 (UTC)Reply
The difference is: No labeling.
  • If you're looking at all this as "how can we make a simple filter system", this is merely a subtle point.
  • If you're coming at this from "how do we prevent people from exploiting the situation and doing us permanent harm", this is actually a really really important point.
Labeling appears to be exploitable.
--Kim Bruning 11:16, 23 August 2011 (UTC)Reply
The 4th point is lame, as I have previously explained. Without labeling by an organized group, it will be practically unmanageable for a user alone. Don't give me that "if you want self-censorship, do the dull work all by yourself" nonsense. Our labeling does no more damage than the existing censorship options. It will be too forgiving to be exploited by conservatives, authoritarians, and communists. -- Sameboat (talk) 13:29, 23 August 2011 (UTC)Reply
Mate, trying to explain things to you is like trying to play chess with a pigeon. But here goes anyway: I added point #4 to further expand my proposal with an option for the users to adjust the image filter according to their liking. They do not have to do the dull work at all. They can decide to simply switch the filter on and off and either filter all images or none. So, now you can knock over the pieces and so on. --213.196.209.190 13:36, 23 August 2011 (UTC)Reply
Are you telling me the category you stated in #4 is the general category rather than filter category? That's pointless, category may contain images both worth filtering or not. -- Sameboat (talk) 13:46, 23 August 2011 (UTC)Reply
the category you stated in #4 -- What "category" did I "state" in #4? I was talking about a user-specific and user-defined blacklist and whitelist. The categories I mentioned are the existing file categories. But that's just a tiny aspect of my proposal. If it turns out that blacklisting/whitelisting entire categories is not a good idea for some reason, I imagine it will be more because of what Trystan addressed above.
category may contain images both worth filtering or not -- In that case, it's for the user to decide the trade-off. Consider that a filter category would be largely out of control for the individual user and will in all likelihood also contain images that the individual user doesn't deem problematic.
Let me ask you something in return now: what exactly are the arguments that speak for a category system rather than the general image filter? --213.196.209.190 13:57, 23 August 2011 (UTC)Reply
This IS the trade-off I can't accept. I need a separate category system for filtering. That's the whole point of this proposal. I need specific images to be filtered and the others unfiltered; the WM proposal will save me tons of time for that purpose. Do you know how many files have been uploaded to Commons for an individual to customize their black/whitelist against? Commons:Special:Statistics gives you the answer. -- Sameboat (talk) 14:06, 23 August 2011 (UTC)Reply
This IS the trade-off I can't accept. -- You're making progress! At least you're now talking about your own preferences. The question with regard to the image filter though is what is best for most people.
the WM proposal will save me tons of time for that purpose -- It may save you time, but it will cost the community a lot of time and stress. Time better spent writing articles.
That's the whole point of this proposal. -- Wrong. The whole point of the image filter is to give users the option to filter images. How exactly this should be implemented is completely up in the air. The possibility of a Wikimedia-hosted and -maintained filter category system is just one of the things determined via the referendum. It is not in any way, shape or form a necessary part of the final image filter. Fait accompli much?
I need a separate category system for filtering. -- That is exactly what the whitelist/blacklist system is: A separate, per-user filter category system, just better in that it allows each individual user to adjust the image filter exactly to their liking -- something which no project- or Wikipedia-wide category system could ever achieve. Alternatively, if they don't have the time to define their own blacklist/whitelist, users can simply enable/disable the filter at any time.
You still haven't explained the drawbacks you see with a general image filter, as they apply to most or all people, not just yourself. Enable the general image filter, and you won't see any images. Want to see an image or all images? Disable it with a click, or whitelist it. What is the problem with that? The huge problem with any project-wide category system in that regard remains that different people all have a different set of images they deem objectionable. This is why the referendum asks users about 5-10 categories. But that arbitrary range of 5-10 categories is highly questionable, because it presupposes that there are only 5-10 different groups of images which people will want to have filtered. It also relies on the likewise doubtful presupposition that people can agree on which images to include into even one of those categories. So which direction is all this going in? Exactly right: individual per-user filter categories. Ergo I argue that the category system with all its inevitable drawbacks (neutrality, catching all objectionable images, agreeability etc) should be dropped in favor of a general image filter, with the proposed additional option of a per-user blacklist/whitelist system -- which, again, is essentially a per-user filter category. --213.196.212.168 20:00, 23 August 2011 (UTC)Reply
So this is an ALL-images on/off function that is already provided natively by browsers, case closed! OK, not yet. The WM proposal is exactly about allowing every user to customize their own categories. And this can be changed if 5-10 categories are deemed insufficient; that's why we're discussing it right here, in search of the right number of categories and of how categorization should be conducted objectively, based on visible characteristics. -- Sameboat (talk) 22:15, 23 August 2011 (UTC)Reply
See the section below this one for a deconstruction of that "argument". Project-wide filter categories are an utterly unworkable approach. That is just the fact of the matter, whether you are capable of and willing to wrap your mind around it or not. --213.196.212.168 22:22, 23 August 2011 (UTC)Reply
I regard this approach as the best by far, for the reasons eloquently stated by 78.35.232.131. I've previously expressed support for such a setup, and the implementation described above seems ideal.
Sameboat: You need to understand that a user could simply enable the all-images filter and whitelist individual images upon encountering them (e.g. when viewing an article and seeing a blocked image whose caption describes something that he/she wishes to view). Likewise, he/she could blacklist image categories and whitelist individual images contained therein. That's substantially more advanced than the functionality included in browsers.
This method is vastly simpler and easier to implement, requires no special community efforts, and eliminates numerous major flaws. —David Levy 22:29, 23 August 2011 (UTC)Reply
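For what it's worth, the blacklist/whitelist resolution order described in this thread (per-image whitelist beats per-image blacklist beats user-chosen category blacklist beats the global hide-all switch) can be sketched in a few lines. This is purely illustrative: the function and settings-field names below are invented for this example and are not part of any actual MediaWiki feature.

```javascript
// Decide whether to hide an image under a hypothetical per-user filter.
// `settings` is an invented structure for illustration only.
function shouldHideImage(title, categories, settings) {
  // A per-image whitelist entry always wins: the user explicitly asked to see it.
  if (settings.whitelist.includes(title)) return false;
  // A per-image blacklist entry is the next strongest signal.
  if (settings.blacklist.includes(title)) return true;
  // Then the user's own set of blacklisted categories (not a site-wide list).
  if (categories.some(c => settings.blacklistedCategories.includes(c))) return true;
  // Otherwise fall back to the global "hide all images" switch.
  return settings.hideAll;
}

// Example: hide everything by default, but whitelist one image.
const settings = {
  hideAll: true,
  whitelist: ['Phallological_museum.jpg'],
  blacklist: [],
  blacklistedCategories: [],
};
```

The point of the ordering is that a single click-to-whitelist overrides every other rule, which is what makes the hide-all default usable in practice.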
The all-images on/off switch does not conflict with the categorized image filter. Some people want to examine an image to decide whether to add it to their whitelist or blacklist. But some users definitely don't want to come into contact at all with images they don't want to see, while in the meantime still being shown the images that are acceptable to them. This suggestion is not a trade-off, but strips the end user of the right to hide SOME images initially, apart from some simple category selection. -- Sameboat (talk) 00:00, 24 August 2011 (UTC)Reply
There are no "rights" in a legal sense on Wikipedia (except the right to leave), only privileges. And no, you are of course once again completely and utterly wrong. The adjustable general image filter is the only filter variant that does give each individual user complete control over what they want to see and what they don't. Now, since you seem unwilling and/or unable to understand anything explained to you, would you mind telling me what you are doing on an encyclopedic project? Just honestly curious. --213.196.212.168 00:16, 24 August 2011 (UTC)Reply
Check my contributions on zh & en WP if you like, but this is irrelevant, the image filter is designed for readers, not editors. What the WM proposal gives is the ability to filter image according to user's preference without requiring them to examine every image they come in contact beforehand. Any inferior suggestion (your suggestion) which does not provide an equal substitution is not gonna be helpful. Many users just do not need complete control over the filtering preference. If some volunteers of the WM community are willing to do all the dull work to label most images for them, why reject it? You said user can use 3rd party script for the same purpose. I would prefer the filter being exploited than the user's computer being infected thru 3rd party script. -- Sameboat (talk) 01:07, 24 August 2011 (UTC)Reply
The above proposal does not require anyone to "examine every image they come in contact beforehand." It enables readers to easily view a desired image with one click (without ever needing to review anything beforehand or display a single image of which they don't approve).
You're asking why we would reject volunteers' offer to label certain images "objectionable," thereby formally establishing that various cultures' and individuals' moral beliefs are valid (if reflected in the categorization) and invalid (if not reflected). You don't understand why anyone would oppose that? —David Levy 02:55, 24 August 2011 (UTC)Reply
You're missing the point I care about the most: how is the reader supposed to know whether the image in question should be filtered or not when the hide-all-images option is enabled? It cannot be "if you're not confident, just don't open any image at all", because that suggestion is no more than a minesweeper game to me. -- Sameboat (talk) 03:17, 24 August 2011 (UTC)Reply
Actually it is censors who care the most that others don't see the images or read the book. To cite Joseph Henry Jackson: Did you ever hear anyone say, "That work had better be banned because I might read it and it might be very damaging to me"? --Niabot 03:21, 24 August 2011 (UTC)Reply
WP is not just "a" book, it is an internet encyclopedia covering almost every topic possible. Citing that quote is merely telling people to avoid obtaining any knowledge if they don't want to be offended by some images. Or, if I interpret that political quote correctly, it tells people who are too easily offended that they have no right to obtain knowledge. -- Sameboat (talk) 03:53, 24 August 2011 (UTC)Reply
You must be silly to think that this only refers to a book. It aims at all knowledge, all books, all articles. It says that you can't judge something you have never experienced. It also says that only the censors will read (too late for them) and that they make the decision for others. Making decisions for others is censorship. --Niabot 11:34, 24 August 2011 (UTC)Reply
This is another misconception about proponents of the image filter. We want some of the images filtered because we have already experienced many similar images previously. In order to avoid experiencing that unpleasantness once more, we need the filter. And it is not censorship when it is the end users asking for some of the images to be filtered for themselves ONLY, not for others, make no mistake. -- Sameboat (talk) 13:06, 24 August 2011 (UTC)Reply
It is already censorship to label content as objectionable:
"Labels on library materials may be viewpoint-neutral directional aids designed to save the time of users, or they may be attempts to prejudice or discourage users or restrict their access to materials. When labeling is an attempt to prejudice attitudes, it is a censor’s tool. The American Library Association opposes labeling as a means of predisposing people’s attitudes toward library materials." [5] --Niabot 13:14, 24 August 2011 (UTC)Reply
Applying the ALA's opinion on labeling here is seriously wrong to begin with. Library labels are applied to each book, while image filter categories are applied to images; images may be included in a book or article. Do you see the difference now? A labeled book in the library might deter some potential readers from picking it up, while the image filter does the exact opposite: because readers know that some of the unpleasantness caused by sensitive images can be alleviated, they can be more confident in accessing the article after adjusting their image filter preferences properly. And I want to add one thing about the general image filter: turning off all images is problematic because there are many images that should never be hidden, such as diagrams and statistical graphs, which are effectively text in the form of an image. -- Sameboat (talk) 13:45, 24 August 2011 (UTC)Reply
It isn't wrong. You might label books, articles, images, comments, ... It's the same question and problem every time. If you label images, you exclude images. If you label articles, you exclude articles. This has nothing to do with the fact that you can still access the article while the image is hidden.
What you propose is to rip the images out of a book and hand them out if the user requests them! Absolutely unacceptable. --Niabot 14:12, 24 August 2011 (UTC)Reply
Whether the image will be hidden or not will be decided by user's filter preference, not WMF. Censorship is about thought/information control, while image filter is giving users freedom to choose. What WMF endeavors now is clearly the latter. -- Sameboat (talk) 15:09, 24 August 2011 (UTC)Reply
Whether the image will be hidden or not will be decided by the censors that categorize the images for the filter. The WMF provides the tools for the censors. --Niabot 15:13, 24 August 2011 (UTC)Reply
Disable the filter if you condemn it, then you're living in your colorful world without censorship, your life will be totally untouched by this feature. -- Sameboat (talk) 15:22, 24 August 2011 (UTC)Reply

┌─────────────────────────────────────────────────┘
That isn't the problem we are talking about. We are speaking about the prejudging of content, optional or not. --Niabot 15:30, 24 August 2011 (UTC)Reply

Indeed. The issue, Sameboat, isn't that anyone will be forced to filter images. It's that the Wikimedia Foundation and/or its volunteers will formally judge what images reasonably should/shouldn't offend people and offer inherently non-neutral filter options based upon said judgements. —David Levy 15:41, 24 August 2011 (UTC)Reply
The filter categories are not set in stone. This is a wiki, meaning things can be changed and refined to avoid prejudiced categorization and come up with more neutral and objective selections. And the ALA's statement is ultimately made in fear of "restricted access" by the public, which may be a concern of information access policy in different states/countries/nations. But the thing is, whether accessing particular information violates the law is judged by the local authorities/judiciary departments themselves, rather than depending on our labeling system. Which means that if accessing particular media (for example, an image of pedophilia) is a crime in that region, labeling it or not does not change the fact that it is against the law. -- Sameboat (talk) 15:58, 24 August 2011 (UTC)Reply
You can't change this by redefining categories. Any answer to the question "What is objectionable content?" is non-neutral. It will always be a non-neutral judgement. This starts with the names of the filter categories themselves. There are only two categories that are neutral: a category that contains not a single image, and a category that contains all images.
You don't need to put the terms "pedophilia" and "law" on the table, since none of the images we use is pedophilia or in violation of the law. We don't need a filter for non-existent content. --Niabot 16:07, 24 August 2011 (UTC)Reply
I never define whether the content is "objectionable" to me. It is the visible characteristics of the image which should be used to shape the filter categorization system, so there's no concern about prejudiced labeling. Pedophilia is just an example. In reality there are many kinds of images on the WM servers which are legally acceptable in your hometown but illegal in many other countries, particularly Muslim ones. But I almost forgot that I brought this up simply to refute the ALA's statement: law is a reasonable means to restrict information access. If law is not the concern, might the labeling hurt people's self-esteem for accessing the labeled media? -- Sameboat (talk) 16:34, 24 August 2011 (UTC)Reply
What visual characteristics define objectionable content? A question related to the previous one. But there are many more questions that come closer to the point: Why does the filter not allow the exclusion of flowers? What are the default categories? Which categories will be listed? Why are these the categories that are listed?
If you follow these questions back to the core, you will end up giving an answer that provokes the question: "What is objectionable content?" There is no way to avoid this non-neutral question or to give a neutral answer.
Again: law isn't a reason. We are only bound by US law, with or without a filter. The people in foreign countries are bound by their law, with or without a filter. The filter doesn't have any impact on the law; it's not a tool necessary to comply with the law. As such, law is not an argument in this debate. That's exactly what the ALA states. If we introduce this censor's tool, we do so without any need. --Niabot 16:49, 24 August 2011 (UTC)Reply
Simple enough: common sense and an understanding of different cultures/religions. Note that I'm not trying to say this is an easy task, but if a slightly more detailed categorization than the currently proposed 5-10 categories can cover the preferences of more than half of the users on WM, it is already a good start. -- Sameboat (talk) 17:01, 24 August 2011 (UTC)Reply
At some point I will repeat myself like a record player... It should be obvious that there is no common sense that is acceptable to both liberals and conservatives. There is also no room for compromise: an image is either inside a category or it is not. That is a straight line between yes and no. You will never have the case that all opinions fall under or over the line, and that makes a decision non-neutral. A filter that allows all images is guaranteed to fulfill neutrality: it is a line at the top, and no opinion is above it. A filter that excludes all images is a line at the bottom; again, all opinions are on one side, and the filter is neutral. Anything else depends on the pattern of opinions. You can lower or raise the line as long as you don't make any opinion change from yes to no or the other way around; that is the maximum tolerance for neutral judgment. For example: all would agree that this is a flower. That someone disagrees is very unlikely, but it already limits the maximum tolerance. What you try to find with "common sense" (I doubt that it exists in the first place) is the median line through all opinions, described as a rule, and then you make decisions after this rule. The problem with yes/no categorization is that 50% will agree with the decision and 50% won't. A non-neutral judgment is guaranteed in this case; you can't avoid it. If you would like, you could prove it with mathematics. --Niabot 17:33, 24 August 2011 (UTC)Reply
Suppose that most users of Wikimedia Foundation websites are Christian. (I'm not claiming this to be true, but let's imagine that it is.) Would you support a filter system based strictly upon Christian beliefs? In such a scenario, this would "cover the preferences of more than half of the users," so would that be "a good start"?
Any system favoring particular sets of value judgements (whether yours, mine or someone else's) is unacceptable. Even if we could reliably include a vast majority (which I don't regard as feasible), we mustn't discriminate against minorities by formally establishing that their definitions of "objectionable" aren't worth accommodating. Neutrality is a core principle of the Wikimedia Foundation — one that can't be voted away by a majority. —David Levy 21:05, 24 August 2011 (UTC)Reply
I think someone with better knowledge/reasoning than me should take my place in this discussion; otherwise just let this section fade over time. Maintaining neutrality does not mean denying or condemning the existence of non-neutrality. True neutrality cannot be achieved without accepting biased opinions. That's why we never have a single neutral POV in our WP articles, but instead state every POV possible with citations. So it is with the filter system: it is meant to accept different preferences without imposing your so-called neutrality on other users. I'm out of it, but that does not mean this suggestion has gained my acceptance. Making it a choice between black and white just disgusts me. -- Sameboat (talk) 01:32, 25 August 2011 (UTC)Reply
Indeed, maintaining neutrality does not mean denying or condemning the existence of non-neutral views and cannot be achieved without accepting biased opinions. That's why I oppose implementations that would ignore/exclude certain cultures' beliefs and support one that would accommodate "every POV possible" to the best of our ability. —David Levy 02:22, 25 August 2011 (UTC)Reply
Your arguments appear to be three:
  • that making a maintenance-oriented list of pages that concern people is prohibited by NPOV (which it isn't, or the anti-spam folks would be in trouble),
  • that NPOV is a mandatory policy on all projects, including Commons (which it isn't), and
  • that since we can't please 100% of users with a "perfect" solution, we should refuse to take even a small step towards an improvement that would please most of them (a fallacy known as making the best become the enemy of the good). WhatamIdoing 17:48, 25 August 2011 (UTC)Reply
  • Filter categories like "violence" are not maintenance-oriented lists. Prejudicial labels are designed to restrict access, based on a value judgment that the content, language or themes of the material, or the background or views of the creator(s) of the material, render it inappropriate or offensive for all or certain groups of users.
  • The images are filtered and excluded/hidden from articles inside other projects. That the technical implementation will be hosted on Commons has nothing to do with this.
  • Labels on images or articles may be viewpoint-neutral directional aids that save the time of users, or they may be attempts to prejudice or discourage users or restrict their access to materials. When labeling is an attempt to prejudice attitudes, it is a censor's tool. [6]
--Niabot 18:01, 25 August 2011 (UTC)Reply
1. I reject your description of this system as a "maintenance" task. It will be a reader-facing site feature.
2. The image filter system is to be implemented across all Wikimedia Foundation projects, including those for which NPOV is a nonnegotiable principle.
3. Setting aside the neutrality issue, I don't believe that the proposed setup is even feasible, let alone a step toward an improvement. The amount of resource consumption and extent to which perpetual controversy would be provoked are staggering.
Returning to the neutrality issue (and setting aside the issue of feasibility), I don't regard "pleasing" majorities (at the expense of the minorities whose value judgements are deemed invalid) as a laudable goal. We could "please" most people by customizing each language's projects to declare that whatever religion predominant among its speakers is correct, but that doesn't make it a sensible idea. —David Levy 19:02, 25 August 2011 (UTC)Reply
An image, whether accessed directly or via one of the Wikimedia Foundation's various projects, typically is accompanied by descriptive text, which would inform the reader of the blocked file's subject and enable him/her to decide whether to view it. —David Levy 04:01, 24 August 2011 (UTC)Reply
In order to inform the reader what characteristics an image contains, the description must be more detailed than it normally needs to be. In the end, it is almost like labeling the image with a filter category (I mean labels of objective characteristics, not subjective interpretation). Not to mention the inconvenience of requiring the user to access the file description page before filtering, and the possible lack of description text, or of text in a language the user understands. Even though alt text can more or less fulfill that purpose, it is not an integral part of the file description page; the image alt text needs to be written per article, which is the source of the impracticability. A filter category eliminates the trouble of internationalisation, while your suggestion relies heavily on the file uploader's self-discipline and on more translation work for worldwide users. -- Sameboat (talk) 04:27, 24 August 2011 (UTC)Reply
Actually, I'm thinking primarily of the text accompanying images in articles (or equivalent pages), which should clearly explain their basic nature. —David Levy 05:23, 24 August 2011 (UTC)Reply
This is even more inefficient than labeling with a filter category, because more than one article/page can embed the same image. -- Sameboat (talk) 13:06, 24 August 2011 (UTC)Reply
How does an image's use on more than one page render the setup inefficient? If a reader were to whitelist an image via any page, it would be unblocked for him/her on every page containing it. —David Levy 13:26, 24 August 2011 (UTC)Reply
I mean the description text of the image, which the reader uses to judge whether that image is worth filtering or not. If more than one article includes the same image, you need to copy and paste that description text to every including page, which is needlessly repetitive. -- Sameboat (talk) 13:51, 24 August 2011 (UTC)Reply
Copy and paste what description text? The pages in question already contain (or should contain) such text, which is essential for context.
For example, someone viewing the English Wikipedia's "Penis" article would see the top image described as "a selection of penises from different species at the Icelandic Phallological Museum." A reader with the general image filter enabled would choose to whitelist that image only if he/she wished to view "a selection of penises." —David Levy 14:11, 24 August 2011 (UTC)Reply
The caption text is not always that obvious when the images are hidden indiscriminately. It may be sufficient in well-written articles, but generally not in most stub to decently short articles. Also considering the "random page" option and articles of unknown/utterly professional/ambiguous title (medical/surgery terms for example), the general image filter cannot prevent such surprising moment because the general image filter requires the user with some degree of vigilance of being offended before turning on/off the filter, while WM proposal does not. -- Sameboat (talk) 14:38, 24 August 2011 (UTC)Reply
The caption text is not always that obvious when the images are hidden indiscriminately. It may be sufficient in well-written articles, but generally not in most stub to decently short articles.
If an article is missing reasonably descriptive captions, its images lack the context needed to be useful. This problem can be addressed in a straightforward, neutral and uncontroversial manner, thereby providing a valuable service to all readers. This is a far better use of volunteers' time than tagging "objectionable" images (an inherently complicated, non-neutral and contentious task) would be.
Also considering the "random page" option and articles of unknown/utterly professional/ambiguous title (medical/surgery terms for example),
If a user is unfamiliar with a subject, he/she can simply read the page (or relevant portions thereof) before deciding whether to unblock an image.
the general image filter cannot prevent such surprising moment because the general image filter requires the user with some degree of vigilance of being offended before turning on/off the filter, while WM proposal does not.
Sorry, I don't understand what you mean.
But if you want to discuss "vigilance," I can think of nothing in the Wikimedia Foundation ecosystem requiring more vigilance to implement and maintain than a category-based image filter system would. Millions of images would need to be manually checked (with countless tagging disagreements and edit wars inevitably arising), with thousands more uploaded every day. —David Levy 15:41, 24 August 2011 (UTC)Reply
One of many practical reasons not to use custom categories. But note that this section was about not using a small # of categories at all, whether or not they were already in place and maintained by the community. SJ talk | translate   01:23, 6 September 2011 (UTC)Reply
Indeed. I wrote the above in support of the "general image filter" implementation proposed in this section. —David Levy 12:16, 7 September 2011 (UTC)Reply
I'm not sure this is the only way we could implement an image filter, but I'd be happy if we introduced it this way, and when I use a slow connection I'd definitely use it with "hide all images" set on. I'm assuming that blocked images would have their caption shown as well as the "click here to see the image" box. We need to decide whether it would also display alt text or would give people the choice of whether alt text was displayed. I think my preference would be for the former. WereSpielChequers 10:20, 18 September 2011 (UTC)Reply
If it is technically feasible, then I support the inclusion of alt text. WhatamIdoing 16:23, 18 September 2011 (UTC)Reply

Give choice - not one central category list, but several user defined

[edit]

I am strictly against a centralized categorization list. This is not self-censorship, this is censorship! The only option for me is - selecting each image by yourself OR -

having the choice to select from censorship lists defined by users or user groups. So every user can select pre-defined censorship lists from users or groups he trusts. This is the only way to go for me; a centralized Wikipedia censoring committee is unacceptable. 94.217.111.189 07:09, 21 August 2011 (UTC)Reply

Fully agreed. It's also amusing that some proponents of such a central, Wikimedia-hosted "objectionable content" list seem to believe that the same community that in their view is unable to make sound editorial decisions regarding images in articles should somehow be perfectly able to produce a workable filter category. Hilarity will ensue. --87.78.45.196 07:19, 21 August 2011 (UTC)Reply
Please, the WMF will not force users to use the filter. If you're worried about the filter being abused by a 3rd-party organization, that's not our problem at all. The major problem with a user-defined filter category is that you actually have to see the images and then add them to the filter categories all by yourself. It is equally as bad as adding the original categories into your filter category. Considering the absurd number of images and categories on the server, it is too inefficient and impractical for a single user to do for himself. The filter itself is meant to hide images from users who don't want to see them in the first place. What's the point if a user must see each unwanted image before filtering it? -- Sameboat (talk) 07:56, 21 August 2011 (UTC)Reply
How is a category of "objectionable content" compatible with the core idea of an encyclopedic project, with core tenets like neutrality? A centralized blacklist would essentially label parts of Wikipedia's content as objectionable (by whose standards?) and non-essential (according to whose encyclopedic judgment?).
Even more importantly, how much sense does it make to label some content as objectionable and non-essential, yet at the same time to consider the same content as important enough for full coverage of the subject matter to include it in the article? --87.78.45.196 08:13, 21 August 2011 (UTC)Reply
Nothing worth discussing with you if you think the filter category is identical to a blacklist. We're not talking of morality of the labeling but people who need this feature to avoid things they don't wanna see. Disable the filter if you don't want to use it. Objectionable or not doesn't concern you. -- Sameboat (talk) 08:35, 21 August 2011 (UTC)Reply
Nothing worth discussing with you -- Oh wow. Insulting others because you're out of arguments. Well done, considering.
if you think the filter category is identical to a blacklist -- Nothing worth discussing with you if you intentionally seek any opportunity to evade the arguments. So you don't think that a centralized filter category that labels content as objectionable and enyclopedically non-essential is anything like a blacklist. Then ignore the word "blacklist" and focus on the reasoning I presented. Or do you need a filter button for the word "blacklist" in order to enter meaningful discussion?
Objectionable or not doesn't concern you. -- At least change your tone if you don't have anything of value to contribute to this discussion. And yes, it does concern any editor. Read my arguments, respond to them, or be quiet. Thank you. --87.78.45.196 08:44, 21 August 2011 (UTC)Reply
You're the one who is evading. I've already explained that the user-defined filter is practically inconvenient, unusable. All you do is complaining on the labeling to be objectionable. You, the one who will definitely not be using this feature, are asking to shut down a project which is asked for by a considerable number of other users. We believe WM user group can do a better job than the 3rd party censoring option, which may censor everything indiscriminately, because our filter is tailor-made for WM projects, which can create fewer controversies. -- Sameboat (talk) 09:15, 21 August 2011 (UTC)Reply

┌────────────────┘
complaining on the labeling to be objectionable -- I am saying that an image filter category hosted and maintained within the Wikimedia project and specifically created for the purpose of filtering a community-decided set of images would, by sheer virtue of being offered to the user as a part of Wikipedia, become a part of the encyclopedic project itself. We'd be offering (i.e. suggesting!) images sorted into one or more filter categories to the users for blocking, thereby agreeing as a community that these images are objectionable according to some general/common/basic/universal? sense/feeling/opinion/consensus? regarding the objectionability of images. Like I also pointed out above, by including an image in a filter category (and offering=suggesting it for blocking), we'd also be labeling it as non-essential to the article. Which in turn would then bring up the question of why those images are there in the first place. An image either contributes to the encyclopedic coverage in an irreplaceable way, or it doesn't. If the image is an integral part of and essential to the article, then how could we be suggesting it for blocking? Conversely, if the image is not essential, then why is it in the article?

We believe WM user group can do a better job than the 3rd party censoring option -- I don't think they can. You can install Firefox, install the Greasemonkey add-on, install the userscript and all images on Wikipedia will be hidden by default, unhide-able with a click and with additional unobtrusive, Wikipedia-style links at the top of each page, giving the user the options to show all hidden images on that page and to disable image hiding for the rest of the session. It hardly gets any more convenient than that. When I know that I'm looking at articles that are unproblematic for me, I can e.g. disable image hiding, otherwise it's activated by default and thus failsafe and foolproof.

Again: This is all technology that already exists and which is freely available and which I am already using. And which, by not being part of Wikimedia, poses no problems with regard to the project's basic tenets. --87.78.45.196 11:15, 21 August 2011 (UTC)Reply
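The kind of Greasemonkey userscript described above could be sketched roughly as follows. To be clear, this is an illustrative skeleton, not the actual script the poster uses; the metadata block, the `#content` selector, and the function names are assumptions for this example.

```javascript
// ==UserScript==
// @name        Hide images by default (illustrative sketch)
// @match       https://*.wikipedia.org/*
// ==/UserScript==
// Rough sketch of a hide-by-default, click-to-reveal image filter.

function hideImage(img) {
  // Remember how the image was displayed so it can be restored later.
  img.dataset.origDisplay = img.style.display;
  img.style.display = 'none';
  // Insert a placeholder the reader can click to reveal this one image.
  const btn = document.createElement('button');
  btn.textContent = '[show image]';
  btn.addEventListener('click', function () {
    img.style.display = img.dataset.origDisplay || '';
    btn.remove();
  });
  img.insertAdjacentElement('beforebegin', btn);
}

// On page load, hide every image in the article body (browser only).
if (typeof document !== 'undefined') {
  document.querySelectorAll('#content img').forEach(hideImage);
}
```

A real script would also add the page-level "show all images" and "disable for this session" links mentioned above, but the core mechanism is just this: hide on load, reveal per image on click.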

Just because the filter categories are stored on the WM servers doesn't mean WM acknowledges/declares those images to be objectionable or non-essential to the article. You're making a fallacy here. I like to read medical articles, but a real photo may be too gory for me. I understand the image is essential to the article, but I just want these kind of images hidden from my sight until I want a peep at them. A truly workable filter for WM must be hosted by WM itself and managed by its users; otherwise the whole project is too difficult to keep effective and up-to-date. -- Sameboat (talk) 11:29, 21 August 2011 (UTC)Reply
You keep ignoring the crucial points. I was talking about an image filter category hosted and maintained within the Wikimedia project. A filter category built e.g. by the English Wikipedia community means it's a part of the project itself. You cannot argue your way around that.
I just want these kind of images hidden from my sight until I want a peep at them -- 1, 2, 3, done. You're welcome. --87.79.214.168 11:59, 21 August 2011 (UTC)Reply
I never ever trust 3rd-party scripts. It is the best news that the filter WILL be part of the WM project. It does not mark the labeled images as objectionable or unessential to the project. That's all. -- Sameboat (talk) 12:13, 21 August 2011 (UTC)Reply
I am tired of throwing reason at you, but I hope that someone else may profit more from my postings in this section than you managed to do. --87.79.214.168 12:21, 21 August 2011 (UTC)Reply
You're making a huge fallacy to support the shutdown of the project; that's why no one else is willing to reason with you except your fellow who echoes your vague point. -- Sameboat (talk) 12:25, 21 August 2011 (UTC)Reply
Ahm, yeah, right. The original poster of this thread, me, and a couple more people on this page, as you will see if you at least skim over it. Farewell. --87.79.214.168 12:29, 21 August 2011 (UTC)Reply
Ooh, perfect. That greasemonkey script might cover most issues nicely. Next problem! --Kim Bruning 13:53, 21 August 2011 (UTC)Reply
It's only "perfect" if you think "see zero images no matter what" is a reasonable response to "I don't want to see images of mutilated bodies by default, but I'd sometimes like to be able to click through to them anyway". If a reader finds 2% of images distressing, do you want the reader's choices to be limited to (1) giving up all of the other 98% or (2) being distressed by 2%? WhatamIdoing 20:27, 23 August 2011 (UTC)Reply
Greasemonkey can be enabled and disabled with a single mouseclick. Nobody's choices are affected in any way, shape or form with a general image filter, that's exactly the beauty of it. --213.196.212.168 23:15, 23 August 2011 (UTC)Reply
I don't think that "nobody's choices are affected in any way, shape, or form" is an accurate description of "unable to see the 98% of images that I want to see without also seeing the 2% that I don't want to see", unless by that you mean "the only choice anybody should ever be given is 'all or nothing'". WhatamIdoing 23:46, 23 August 2011 (UTC)Reply
This should never operate invisibly. It's important to make it clear to the user in every instance that some content has been concealed, why it was concealed, and what they need to do to view it. - Elmarco 21:10, 27 August 2011 (UTC)Reply
Yes, that would be an absolute requirement. To the original idea of having a set of user-defined categories: you don't even need to have a visible list of what other users use. You could make it easy for readers to identify the categories that a given image is in, and choose a set of them to shutter. (One of the long-term technical improvements that this discussion highlights a need for is access to metadata about media embedded in a page. It currently takes a number of API calls to get author, upload date, and category information about a file; and there is no quick way to see all of its parent categories. So for instance there is no easy way to identify all pictures of Books, since most will be in a sub-sub-category.) SJ talk | translate   01:23, 6 September 2011 (UTC)Reply
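(Editorial illustration of the API point above: the `imageinfo` and `categories` property modules of the MediaWiki API are real and can be combined in a single request, but the result still lists only a file's direct parent categories, which is the limitation SJ describes. The function name and the choice of target wiki are assumptions for the example.)

```javascript
// Sketch: building a single MediaWiki API request that fetches a file's
// uploader, upload date, and direct categories in one call. `imageinfo`,
// `categories`, and the parameters below are real API modules/params;
// the function name and target wiki are illustrative.
function fileMetadataUrl(fileTitle) {
  const params = new URLSearchParams({
    action: 'query',
    titles: 'File:' + fileTitle,
    prop: 'imageinfo|categories',  // combine both property modules
    iiprop: 'user|timestamp',      // uploader and upload date
    cllimit: 'max',                // all *direct* parent categories
    format: 'json',
  });
  return 'https://commons.wikimedia.org/w/api.php?' + params.toString();
}
```

Even combined like this, the response contains only direct parents: walking up to grandparent categories (e.g. to find everything under Books) needs further `categories` queries per parent, which is exactly the missing-metadata problem noted above.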


Categories are not filters or warning labels


Categories are inclusive, filters are exclusive


Oppose for reasons cited by many other posters: basically, the fact that offering filter tags makes government censorship a lot easier.

Remark on implementation: if this proposal does make it through the democratic scrutiny, then I remain opposed to the idea of it being implemented through categories. Categories were designed to be able to find pages, not to exclude them. Their purpose is inclusive, i.e., when you are searching for something then it is better to have a few results too many than to miss out on the exact page (image, resource...) you were looking for.

Censorship (aka "filtering") tags, if implemented at all, should have the opposite philosophy: it is better to accidentally come across an offensive image every now and then than to miss out on important information that may not be so offensive, after all. Resources should only be labeled 'offensive' after they have been established to be so, not 'potentially offensive' because someone thinks it's a good idea to do so. In a debate on whether a certain resource 'potentially' has a certain property, the 'yes' side stands a better chance of winning by definition (in case of uncertainty, the claim remains 'potentially' true).--Lieven Smits 08:05, 22 August 2011 (UTC)Reply

This is why images should be given objective tags such as "nipples, genitalia, blood, intercourse" rather than an "offensiveness" rating. 68.126.60.76 10:54, 22 August 2011 (UTC)Reply
Well, keeping in mind that I am still very much opposed to the whole idea of tagging objectionable images, if we remain within the limited discussion of whether or not to use categories, I think that you mostly confirm my point: the 'normal' category Blood [7] would have a rather different set of pictures from the 'objectionable because too bloody bloody' tag. Hence: two different concepts, two different implementations.--Lieven Smits 12:02, 22 August 2011 (UTC)Reply
Yes. The filtering categories, something like 'Shuttering requested due to _____', are going to have to be totally different from ordinary topical categories. People will want to add images to the filter categories that don't objectively meet the stated criteria, and over the long haul we won't have a way to stop them from doing it. Admitting up front that the 'filter categories' are different is a good insight that will save us a lot of trouble later on. --AlecMeta 16:48, 22 August 2011 (UTC)Reply
You're trying to invent a problem where there doesn't need to be one. We shouldn't be trying to define what is objectionable. You tag an image with blood, and someone determines whether or not they will allow images with blood to display on their computer. It doesn't matter how much blood or how objectionable anyone else might think the blood is. The presence of blood in an image is at least 99% of the time going to be an objective fact which is indisputable. You may get people who want to add improper categories, but this can be undone, just as people add bad content to articles and it is reverted. It will still be an encyclopedia anyone can edit. If such an objective system is used your concerns become moot, and you let the end user worry about which tags they want to block, if any. 68.126.60.76 19:46, 22 August 2011 (UTC)Reply
In my theory, the desire of people to censor will be inarguable-- people will always find an image they dislike and then use any excuse they can to get it in a filter. And since bigotry is multiplicative, black gay men will get filtered all the time if they stand anywhere near ANYTHING potentially objectively objectionable. And that won't do.
We can't "filter neutrally"-- instead let's admit upfront that all filtration is, by its nature, non-neutral. Stop and think about it-- we can live with this. Individual filters don't have to be neutral as long as there is no "one" filter or one set of filters or a pretense that filtration is objective, authoritative, or morality-based. We say upfront it's all prejudice, but if you REALLY need very-slightly-prejudiced content, we can give you the tools to do that on your own time. --AlecMeta 21:39, 22 August 2011 (UTC)Reply
It's still an encyclopedia anyone can edit. Why are black gay men more likely to be filtered than deleted entirely in the absence of filters? If you properly use objective filters, the problem you propose can't happen. Say a black gay man is standing nude, and you filter "penis". The presence of the penis is not debatable, and he will be just as filtered as an image containing a straight white man's penis. Now suppose a multiplicative bigot sees a picture of a clothed gay black man, standing in a room in his house, everything normal and everyday about the scene. What is the bigot going to tag? Wall? Chair? How many other users will filter photos by the tag "chair"? If the black gay man is at a pride parade, maybe the image will be tagged with "homosexuality" or "gay pride" or something of that nature, which some could consider objectionable, but only those who choose to filter those contents will have the image blocked. If you filter purely by keywords, and don't assign an offensiveness rating, it will be nigh-impossible to game the filtering system to prevent others from seeing the image you don't want to see, since everyone will filter different keywords rather than choosing the "moderate" setting. You could vandalistically add a false tag, but someone will notice and remove it, just as articles are vandalized and reverted. 68.126.60.76 04:58, 23 August 2011 (UTC)Reply
I may not have been clear on the example of 'Blood' categories/filters/tags. It may be objectively verifiable whether an image contains blood, but that is beside the point. The category 'Blood' should not contain all images with blood on them, but only images whose content is relevant to someone who intends to learn about blood: blood groups, microscopic images of blood cells, chemical formulae, test tube images, blood drives etcetera. The other 'thing', the category that you intend to use as a filter against objectionably bloody images, would presumably not even include any of those; it would contain depictions of humans and possibly animals who are bleeding. The two concepts of categorisation, while both deserving the name 'blood', are very different both practically and theoretically.
My principal opposition against creating the other 'thing' remains. It can and will be used to limit other people's access to information. Apart from being almost the exact opposite of NPOV.--Lieven Smits 07:15, 25 August 2011 (UTC)Reply
You are confusing Wikipedia's existing category system and a content filter keyword system. They would be separate systems. 68.126.63.156 02:47, 30 August 2011 (UTC)Reply
I agree that Wikipedia categorisation serves a very different purpose from content filtering. I do not think that I have mixed the two concepts up.
The purpose of the proposal is essentially content filtering. But several contributors on this page have argued that the proposal includes using the existing category system for that purpose. Thus, confusion appears to be in the air. I have tried to argue that the category 'Blood' should be distinct from the content filter 'Blood' if there is ever a need for the latter. The category 'Blood' is unfit for filtering out shocking images of bleeding people and animals.--Lieven Smits 09:42, 30 August 2011 (UTC)Reply

Your point about categories having an inclusive, not exclusive purpose is a good one. Nevertheless, for sufficiently specific categories, the two become the same. SJ talk | translate   01:23, 6 September 2011 (UTC)Reply

Libraries have fought warning labels for years


When it comes to passionate defenders of intellectual freedom and access to information, one need look no further than the library profession. They have opposed warning label systems, like the one being proposed, for years. From the American Library Association:

Labels on library materials may be viewpoint-neutral directional aids that save the time of users, or they may be attempts to prejudice or discourage users or restrict their access to materials. When labeling is an attempt to prejudice attitudes, it is a censor's tool. The American Library Association opposes labeling as a means of predisposing people's attitudes toward library materials.
Prejudicial labels are designed to restrict access, based on a value judgment that the content, language or themes of the material, or the background or views of the creator(s) of the material, render it inappropriate or offensive for all or certain groups of users. The prejudicial label is used to warn, discourage or prohibit users or certain groups of users from accessing the material...

I would encourage anyone interested to read the full statement and seek out some of the substantial body of library literature on why warning labels are incompatible with intellectual freedom.--Trystan 13:59, 22 August 2011 (UTC)Reply

Trystan, while I know some proposed implementations have involved a custom set of 'controversial' categories (which would qualify as prejudicial labels), I believe the more general quote from that document which is relevant here is: "Directional aids can have the effect of prejudicial labels when their implementation becomes proscriptive rather than descriptive." We would need to be careful in any use of existing categories as well not to present them in a proscriptive way. SJ talk | translate   01:23, 6 September 2011 (UTC)Reply
Read w:Wikipedia:Free speech please. Your right of speech here is a privilege granted by WM, not the US government. -- Sameboat (talk) 14:12, 22 August 2011 (UTC)Reply
You get to choose what 'warning labels' you want. Nobody is going to paste a 'label' on your 'book' for you. --Bencmq 14:18, 22 August 2011 (UTC)Reply
@Sameboat: I don't get your point at all: Trystan is not even American, as far as I know. @Bencmq: you are not getting Trystan's point: the issue is that, once the labels are there, you can be denied access to the books by third parties. Besides, the labelling would be done by the wikipedia community, including, but not limited to, shills of the Chinese government, not by the single user. --complainer 14:24, 22 August 2011 (UTC)Reply
On that point I agree. There is inevitably going to be ridiculous edit warring over whether images should go into a certain category that is used for the filter - but I think the availability of the tool itself is just as important. I hope, before we see anything, the likelihood of such inevitable disputes can be minimised through technical means.--Bencmq 14:30, 22 August 2011 (UTC)Reply
Sameboat, I am not American, and am aware that free speech guarantees don't apply to private organizations regardless. The issue is not free expression but the related concept of intellectual freedom. I chose the ALA wording because it is well-expressed, but there are librarians world-wide that support intellectual freedom.
Bencmq, the warning labels show up in at least two places that have the potential to prejudice users in the sense that the ALA is talking about. First, in the filter itself ("Do you want to avoid looking at controversial images like sex, nudity, violence, immodestly dressed women, blasphemy, or gay people?") Second, on the image pages themselves, which would need to be appropriately tagged with those labels for the filter to work.--Trystan 14:32, 22 August 2011 (UTC)Reply

Yet every library I've been to is divided into a Children's section, Young Adults' section, etc. And there are librarians and parents around to ensure that children don't wander off. There's a difference between a government telling organizations what to do and organizations using their own judgement. --Michaeldsuarez 16:08, 22 August 2011 (UTC)Reply

That's an excellent point, thanks for bringing it up. Material is organized to help the reader find it, not to warn about its content. It is a frequently attempted form of censorship to try to get material aimed at children or youth filed in the adult section based on ideological objection (e.g. children's books like Heather Has Two Mommies or Judy Blume's Young Adult books dealing with sexuality.)
As for librarians monitoring children, absolutely not. Libraries very explicitly do not stand in the place of parents. We do not scrutinize or evaluate what patrons are doing, and are most certainly not censors for other people's children.--Trystan 16:26, 22 August 2011 (UTC)Reply
(ec) You raise some great points, Trystan. w:Wikipedia:Free speech notwithstanding, I actually do think labeling offensive content raises very real 'free speech' issues, because if we did filtering "wrong", we might empower or legitimize real-world censors taking away our readers' real-world rights.
But labeling done "very right" can be way less bad once you take away any sense of "authority" behind the labelling. If the foundation personally decided on "the best" labelling scheme, it would come with authority. But filtering will be massively parallel, "buggy", controversial, and it won't work the way users might expect it to. The kind of "prejudgment" our labels will contain is far less reliable and monolithic-- nobody will agree on where the lines are or if there should even be lines. It will become apparent to anyone using the feature that its logic is irrational, idiosyncratic, and non-ideal-- in short, our labels won't be monolithic and authoritative.
This isn't exactly what our 'heart' tells us to do, but let's be realistic. A library is usually located in a specific location-- we are everywhere-- buses, subways, airplanes, offices, mosques, and everywhere else. We also are a 'face of the internet' to whole cultures that are getting online for the very first time-- imagine our ancestors at the dawn of film, when the silent black and white image of a train heading towards the viewer could actually elicit enough fear to be called a "scary movie"-- and now realize that whole cultures are going to be meeting us at the same time they meet any internet technology. I don't know who these cultures are, I don't know where they are precisely, and I have no idea what they need in a "library".
If they really deeply want a very simple 'privacy shutter', should we really let our original vision of the ideal global library stand in the way of them having it? I ask this non-rhetorically, but my emotions are saying that this is 'close enough' that we should give it to the people who want it. --AlecMeta 16:17, 22 August 2011 (UTC)Reply
Hm, but. For that to make sense, we would have to place a permanent and quite prominent note on every single content page (not just the main page, where first-time users may or may not arrive!) drawing attention to the filter option. Otherwise, new and technically inept users in particular may keep running headlong into objectionable material (which probably has done untold damage over the past years, including work stoppages, people poking out their own eyes, and spontaneous combustion). --195.14.220.250 16:30, 22 August 2011 (UTC)Reply
I agree that we'll have to advertise the filter on either all content or content that has objectively been shown to upset people to a certain _objective_ level. It will still involve the reader having to do some effort on their own, so it won't be perfect from their point of view, but it will be as close to perfect as we can get. --AlecMeta 18:54, 22 August 2011 (UTC)Reply
Nope. The closest we can get to a perfect filter is a general image filter. That is the only way to (i) guarantee that --once they activate the filter-- people won't see any objectionable images and (ii) to prevent any completely unnecessary snafu that would inevitably accompany any filter category system.
Bottomline: IF the image filter is implemented, it should most definitely be done as a general image filter, which users can enable and disable at will. What exactly, in your opinion, actually speaks for a category system rather than the general image filter? --78.35.232.131 00:44, 23 August 2011 (UTC)Reply
I would certainly like to see what sort of interest there is in a general image filter. That is the only version of this feature that I could imagine myself using. (for a small and annoying phobia.) SJ talk | translate   01:23, 6 September 2011 (UTC)Reply
«labeling done "very right"» How can we expect to do it right? It might be right for yourself, but not for others. You will be in the position that the average American, the most liberal German and the most conservative Arab all have to agree that the labeling is done right. Otherwise you simply don't do it in a "very right" way and all your argumentation is for nothing. I hope that this is clear. --Niabot 18:25, 22 August 2011 (UTC)Reply
Either that, or we create several different filter categories, one "Typical Arab Sensibilities Filter", one "Typical Conservative Christian Sensibilities Filter", one for mom, one for dad, one for grandma, one for granpa, one for average children, one for smart children who are ahead of their age. I can see it now, it suddenly all makes perfect sense. Seriously, to anyone easily offended by sarcasm: I feel sorry. For you. --195.14.220.250 18:38, 22 August 2011 (UTC)Reply
Yes! Precisely. I'm not sure if all the participants who want a filter yet realize this basic insight that you capture so well-- there is NO SUCH THING as a filter everyone can agree on. The most liberal American and the most conservative American will NEVER agree on a filter, and that's just Americans. There is absolutely no consensus to find-- it's JUST opinion.
So "Very Right Labelling" is infinite labelling-- it's a wiki with infinite forks-- as many categories as people want, right down to per user. I could make a whole category with no better label than "images that offend AlecMeta" and that would be fine. No one would care what offends me, no one would expect ANY labelling be perfect any more than we expect any article to be perfect.
Doing this "very right" would actually be something we could actually be proud of, at the end of the day. We could show Youtube and GoogleSafeSearch that there's a better way than a "one-size-fits-all" labeling. We could, if we chose to, make filters in infinite diversity in infinite combinations. NO two people would EVER agree on a filter, and we could prove it in front of the whole world.
I still don't see it as a priority. But if it has a low price tag, is it an exciting and educational project? absolutely! --AlecMeta 19:02, 22 August 2011 (UTC)Reply
Alec, the point is that no filter category or set of filter categories will ever be sufficiently evolved, nor will they evolve towards anything useful in the first place. You're trying to play the eventualist / incremental improvement card: "it doesn't need to be perfect right now, but it's approaching perfection". No, it's not. Not only will it never reach perfection (as in: working really well as a filter category/categories), it will actually never even be approaching anything like that. --195.14.220.250 19:13, 22 August 2011 (UTC)Reply
There is no perfection to find for this kind of project. It will never make anyone completely happy. But if we want to actually understand what images upset people, Mediawiki is a powerful technology, and we can use it to let people make filters that are "as perfect for them" as time and effort allows. Most of them will always be upset that their filter isn't imposed on everyone, but that's okay. --AlecMeta 19:20, 22 August 2011 (UTC)Reply
But that's all under the assumption that you could create personal filters. Or at least a bunch of them. I doubt that it would be practical to have a list of 100 different filters, of which some might match your expectations. At least for the end users this would be a pain. "Oh, cool. I can choose from 100 filters. I will spend an hour reading and choosing. But, damn, I just wanted to read something about XYZ". --Niabot 19:55, 22 August 2011 (UTC)Reply
Indeed, it is all under that assumption that we "do it right". If we do this very wrong, it would blow up in our faces very badly. If we do it only semi-wrong, I predict the Image Filtration project will quickly evolve unlimited categories as we learn, experimentally, that there are no consensuses and no hard lines. What's on the table is close enough to perfect that either it will be "good enough" (and I'm wrong) or it will develop infinite categories with time. Even better, of course, would be to get it right the first time for a low price, if that's an option.
The first screen would probably only show the n most popular filters, and trying to make sense of infinite filters would be a big challenge. It's just about offering people who are displeased with the status quo a limited SOFIXIT power. People are coming to us upset with our content, the solution is to give them a 'project' and let them try their best to solve it for themselves. If they fail, that's okay-- the experiment doesn't have to succeed for it to be a success. Offering people the chance to fix it is enough. --AlecMeta 20:02, 22 August 2011 (UTC)Reply
Now you made a whole lot of good-faith assumptions. First, it is a bit doubtful that people who don't want to see the images (the offended) will actively search for such images and categorize them in a neutral way. The second point is the default filtering. It's still on the table. Which will be the default filter? (More objections to follow, but two for the start) --Niabot 20:35, 22 August 2011 (UTC)Reply
People won't categorize in a neutral way-- there will be no neutral way, there will be no neutral categories, just like real life. There is no default-- default is off. Even when turned on there is no default, only 'most popular'. (and I admit a LOT of assumptions, so by all means pick my arguments apart, that's what they're there for.) --AlecMeta 20:46, 22 August 2011 (UTC)Reply
Right, right. It's about the children, all about the children, always and only about the children. Won't somebody please think of the children?! Great argument! So new and fresh and powerful, kinda. --195.14.220.250 16:19, 22 August 2011 (UTC)Reply
I just can't express deeply enough how much this is NOT about the children. The children don't want this, the children don't need this, and the children can get around this without blinking. Won't somebody please think of the grandparents who can never unsee Goatse! --AlecMeta 19:28, 22 August 2011 (UTC)Reply
  • 50 years ago, children's libraries were indeed restricted to material thought suitable for them. Movies were censored also, on the same traditional grounds as those proposed here: sex and violence. For that matter, so were encyclopedias, which carefully avoided anything that might disturb the readers. It was a survival of what once was an important cultural value, paternalism in the quite literal sense, where (usually upper-class) adult males prevented other people from reading or seeing what these supposedly more responsible people thought them unfit to see. We did not make an encyclopedia on that basis. Most of us were raised in a world that was free from this, or at least becoming free from this, and we did not make an encyclopedia to suit the views of our great-grandparents. We're a contemporary encyclopedia, and contemporary means uncensored. It was one of our first principles. We also believe in the freedom of our readers. If they choose of their own will to censor it with their own devices they are free to do so, regardless of what we may personally think of that decision. But we cannot, in conformance with our principles, encourage them to do so, or adopt a system aimed at facilitating it. The board seems not to have been as committed to free expression as the community, and its members need to rethink whether they are in actual agreement with the goals of the organization they are serving. DGG 17:31, 22 August 2011 (UTC)Reply

Any idea why the American Library Association has made such a proclamation? Because they were and are under pressure from local politicians - and we are just the next one on their target list. --Bahnmoeller 18:32, 22 August 2011 (UTC)Reply

It is worth noting that we are not, in fact, under pressure from local or global politicians on this score. A few countries block us, but not primarily for our use of images. And some filters may block commons, but we do not receive significant political pressure to change as a result. SJ talk | translate   01:23, 6 September 2011 (UTC)Reply
[Warning: the following post contains feminist, gay, and blasphemous content, which is offensive to many users. If you are offended by this content, please make sure your filters are set to hide it.]
...but in all seriousness, you raise some excellent points, Alec.
I don't think taking the labeling authority away from WMF and giving to users makes much difference. If someone tags a picture of a woman wearing shorts and a T-shirt as "Image filtering: Immodestly dressed women", there really is no response to that. I can't sensibly argue that the person who tagged it doesn't find it offensive, as it is completely subjective. And WMF will have validated their right not to be offended by such images, so I guess it should be tagged that way. But labeling an image with such a pejorative, moralistic label is antithetical to intellectual freedom principles. Whether it's done by an authority or a collective, I simply don't see a right way to indicate that wanton women with bare arms and legs/sinful gays/blasphemous Pastafarians are offensive and that the user might want to think about blocking them.
"I have no idea what they need in a "library". We don't really need to know in that sense. Whatever the culture, we provide free access to information (free from warning labels prejudicing users against that information) and let individuals access what they want to. In every culture there are those who want to freely access information and those who want to stop them. A commitment to intellectual freedom sides with the former group and actively opposes the latter.
There are also, in every culture, disenfranchised and unpopular minorities. Time and time again, calls for warning labels disproportionately target these groups. Empowered female sexuality, cultural minority traditions, gay and lesbian issues, and religious minorities are frequently held to stricter standards than what the majority deems proper. Adopting a warning label system just helps perpetuate this.--Trystan 00:12, 23 August 2011 (UTC)Reply
Agreed. If we make a category "Immodestly Dressed Women", we should naturally expect it to grow to include every possible image that could be added to the category in good faith-- and on earth, that's going to wind up meaning "Immodestly Dressed Women" ultimately filters all images of women that show more than eyes-- there would be no way to stop that expansion. The whole system will be prejudice embodied-- but prejudices are just another form of information, and if people want to tell us about their prejudices, we can treat those prejudices as information to be shared, categorized, forked, remixed, etc.
We have no clue how to reach consensus across languages, across cultures, across projects, but we are going to learn those things. We also have no clue how to reach consensus on 'offensiveness'-- and I think consensus has to fail there because no consensus exists. So we use forks instead of consensus-- that's okay. The image filtration project, by its nature, is nothing but POV-forks already, because POV is all it is. --AlecMeta 00:38, 23 August 2011 (UTC)Reply
I have great interest in participating in a project committed to neutrality and intellectual freedom. I have absolutely zero interest in participating in a project that rejects those principles and sets out to embody the prejudices of its users.--Trystan 02:17, 23 August 2011 (UTC)Reply
Mmm-hmm... but you seem to have forgotten the important distinction between "censorship" and "selection". Have you ever worked for a library that has a collection of Playboy magazines? No? Why not? They're popular, they're informative, they're in demand by readers, they have cultural importance—but you haven't ever recommended that any library acquire them, have you? There are just ten libraries in all of Canada that include even a single copy of this magazine. Nine of them are university libraries. The other nearly 2,000 don't list it in their catalogs at all.
So if not acquiring Playboy doesn't violate your commitment to intellectual freedom, then how could letting me decide whether to display video stills from hardcore porn movies on my computer screen possibly violate intellectual freedom? Is skipping porn a "legitimate acquisition decision" only when it's done by a paid professional and affects every single user in the system, but merely base "prejudice" when the individual reader is making the decision to choose something else? WhatamIdoing 21:00, 23 August 2011 (UTC)Reply
Selecting and proposing images for filtering equals labeling them as (i) non-essential to the article (in which case they shouldn't be included in the article in the first place) and (ii) as somehow generally potentially objectionable (according to whose standards?). The problem is not the "censorship" strawman you keep raising (although it is detrimental to the level of discourse that you keep raising it), but the fact that proposing (ie. suggesting!) images for filtering equals labeling them as somehow negative, and as non-essential to the articles -- to which all images are supposed to make an irreplaceable contribution. --213.196.212.168 21:15, 23 August 2011 (UTC)Reply
Every image does not make an irreplaceable contribution to an article, just like every book does not make an irreplaceable contribution to a library, and every artwork does not make an irreplaceable contribution to an art museum. A brief look at the battles that the English Wikipedia has had with keeping vanity shots and extensive galleries out of some articles is sufficient to disprove this idea. To give one example, en:Bride has been reduced to "only" nine images, half of which are still unimportant.
You are proposing that a person going to a museum not be allowed to pick and choose what he (or she) looks at, but must look at every single image in every single room merely because the curator thought it worth displaying to the public. Why should I have to look at every single image? Why should I not look at the images I want to see, rather than the images that someone else wanted to display? Does your notion of free speech include an inalienable right for speakers to have an audience, but no freedom for you to use earplugs if you don't want to listen to the man yelling on the street corner? WhatamIdoing 23:28, 23 August 2011 (UTC)Reply
Playboy is an interesting example. You make a very good case for libraries carrying it, and some choose to,[8] with somewhat predictable results.[9] Whether each individual library's decision with respect to that title is censorship or selection is a very complex and interesting and debatable question.
I wouldn't say that skipping "porn" per se is based on prejudice, depending on how pornography is defined. The difficulty lies in defining pornography for the people who wish to avoid it. The prejudice we are likely to introduce into our system is that people tend to be much more offended by (and tempted to label as "pornographic") certain types of images. For example, women are often held to different standards than men in terms of acceptable dress. Images of same-sex couples attract disproportionate criticism compared to similar images of opposite-sex couples.--Trystan 23:39, 23 August 2011 (UTC)Reply
I don't think that anybody believes that the filter could be 100% effective, if for no other reason than the undeniable fact that commons:Category:Media needing categories has thousands of uncatted images, and new images turn up every second of the day. We already have problems with correctly categorizing borderline images, and nothing here will change that.
It's true that the borders of categories are sometimes hard to define. We already see people getting blocked for edit warring over the cats, and that is unlikely to change. It's likely to be a 98% solution rather than a 100% guarantee. But none of those limitations are good reasons for saying that it's unethical to give readers the ability to use neutral information to make their own choices about which parts of our "library" they personally choose to see.
In fact, I believe that the greater ethical transgression is forcing them to see what they do not want to see. If your library included Playboy, you would never shove it in front of an uninterested reader, and you would stop any patron who did that to other patrons. You'd be calling security, not telling the other patrons that the library had a legitimate reason to include porn magazines and if they didn't like having them shoved in their faces unexpectedly, then they should stop using the library. Similarly, I think we should support allowing a Wikipedia reader to say, "I don't want to see that, so please stop shoving it in my face". WhatamIdoing 00:03, 24 August 2011 (UTC)Reply


Negative effects of using categories as warning labels on the usefulness of categories


This topic has been mentioned in a few places above, but I thought I would add it under a separate heading here for a fuller discussion.

Categories, effectively used, attempt to capture that elusive quality of aboutness; the main subject of an item, in this case an image. They let us organize and find items based on content. Warning labels are very different; they attempt to flag any image which contains the offending content, regardless of what it is primarily about.

For example, we have a category about Nudes in Art. The implied scope of this category, based on how it has been used, is something like "artistic works primarily featuring the nude form." The scope is not "artistic works which contain any nudity whatsoever, however trivial." That would not be a useful scope for it to adopt. But if Nudes in Art becomes the warning label for people who want to filter out nudity, it will become diluted in that way. A category being asked to serve double duty as a warning label will invariably become diluted and much less useful.

There is also a detrimental effect on the images being categorized. A good practice is to categorize something based on a limited number of categories that best describe it. For example, this image of the Barberini Faun has a very well-chosen set of descriptive categories, but (goodness!) they forgot to include the all-important fact that he is naked. Let's add that, and any other potential objections people might have about it, and drop off some of those categories which merely tell us about its significance as a work of art. If we make warning people a primary function of our categorization system, it is to the detriment of its descriptive and organizational power.--Trystan 22:44, 21 August 2011 (UTC)Reply

This is no different from 87.79.214.168 stating the filter will make the labeled image "unessential to article". I have had enough of such fallacies here already. He may not think so himself, or may just be worried about others' thoughts. But WM can and should establish a statement that filter categories do NOT reduce the priority and usefulness of labeled images to be included in the article than unlabeled images. -- Sameboat (talk) 23:28, 21 August 2011 (UTC)Reply
the filter will make the labeled image "unessential to article" -- Funny that you would mention it, seeing as you didn't have a good response to that powerful argument above, either. Again, not for you, Sameboat, but for others who may inadvertently think that you have any sort of legitimate point (which you do not): Sorting an image into a category of objectionable content and offering it to users for blocking equals (among other things) labeling it as non-essential to the article. That's not a fallacy, nor a statement of opinion. It's a fact, and as such non-negotiable reality.
An image either contributes to the encyclopedic coverage in an irreplaceable way, or it doesn't. If the image is an integral part of and essential to the article, then how could we be suggesting it for blocking? Conversely, if the image is not essential, then why is it in the article in the first place?
WM can and should establish a statement that filter categories do NOT reduce the priority and usefulness of labeled images to be included in the article than unlabeled image -- Unfortunately for your line of reasoning, that statement is blatantly untrue, whoever says it. --213.196.218.6 01:32, 22 August 2011 (UTC)Reply
Of course, if editors refused to include labeled images in articles, WM would not have proposed the image filter in the first place. Your statement may be true for some editors, but definitely does not represent the whole community. -- Sameboat (talk) 04:22, 22 August 2011 (UTC)Reply
Your statement may be true for some editors, but definitely does not represent the whole community. -- What on god's green earth are you blathering about? --78.35.237.218 09:03, 22 August 2011 (UTC)Reply

When labeling is an attempt to prejudice attitudes, it is a censor's tool. The American Library Association opposes labeling as a means of predisposing people's attitudes toward library materials. http://www.ala.org/Template.cfm?Section=interpretations&Template=/ContentManagement/ContentDisplay.cfm&ContentID=8657 --Bahnmoeller 18:20, 22 August 2011 (UTC)Reply

True, but irrelevant.
Labeling an image of the Prophet Mohammed as being an image of the Prophet Mohammed is informative and appropriate. It is not a means of predisposing people's attitudes towards the image: it is a method of telling them what the image contains. The ALA would object to labeling such an image as "artwork created by evil people" or "artwork whose existence is an offense against all morally sound people". Those labels would prejudice the viewers' attitudes, but nobody is proposing that we do that.
Similarly, the ALA does not object to labeling pornography as pornography. This is informative, appropriate, and non-prejudicing. It would object to labeling it as "things decent people would never look at" or "materials only of interest to sexual perverts", but nobody is proposing that we do that.
The ALA does not oppose labels; it opposes labels whose purpose is to impose the librarians' prejudices on the users. The ALA does not object to readers using informative labels to help them choose which materials they want to see. WhatamIdoing 21:21, 23 August 2011 (UTC)Reply
Labeling an image of the Prophet Mohammed as being an image of the Prophet Mohammed -- Ah, but that is not all of what we'd be doing if we implement a variant of the image filter that relies on special filter categories. We'd be categorizing that image and suggesting it for filtering! --213.196.212.168 21:39, 23 August 2011 (UTC)Reply
Only in the sense that my local library indicates to me that an entire section of the library is mostly garbage by labeling it en:Young-adult fiction. I am independently prejudiced against such (commonly depressing and frequently idiotic) books. Are they wrong to label the books in a way that allows me to exercise my prejudice against them, by choosing books from other parts of the library? WhatamIdoing 23:18, 23 August 2011 (UTC)Reply
Do you see how you need to go back to the library analogy in order to make a "point"? If libraries had a user interface and were suggesting certain books or types of books for "filtering" of some sort, yes, they would be wrong to do so and it would be very different from simply labeling the books according to their content. --213.196.212.168 23:22, 23 August 2011 (UTC)Reply
My library provides exactly that filter in their online user interface. They also provide filters according to format, subject, language, publication date, and more.
You are claiming that libraries do not provide labels that readers can use to filter the contents of the library because you think the ALA prohibits it. I am proving that you are wrong: they do this routinely. Their goal is to have "a book for every reader, and a reader for every book", not "read this because the librarian know better than you what you should read". That means letting the readers ignore and reject the books that they do not choose to read—just like Wikipedia should allow readers to ignore and reject images they do not wish to look at. WhatamIdoing 23:35, 23 August 2011 (UTC)Reply
You are claiming that libraries do not provide labels that readers can use to filter the contents of the library because you think the ALA prohibits it. -- I don't think I said anything like that. In particular, I have not expressed any opinion (nor do I have one) with regard to the ALA, not here, nor in any other section. So you are, in fact, not proving anything I said wrong. More importantly however, you are not even proving wrong what you think I said.
"read this because the librarian know better than you what you should read" -- You accuse the people opposed to an image filter that relies on project-wide filter categories of an overly authoritative and normative approach? Also, not having an image filter (you may now realize how I keep going back to the Wikimedia situation at hand) does not equal an advice, let alone a suggestion or order ("read this"). By contrast, suggesting images for filtering by presenting them to the user in his user interface does exactly that. So you are guilty of what you are wrongly accusing me of. Coincidence?
just like Wikipedia should allow readers to ignore and reject images they do not wish to look at. -- You are, consciously or not, presupposing that there could be no image filter variant that does not rely on special filter categories. Also, the "should" is your personal opinion. --213.196.212.168 23:50, 23 August 2011 (UTC)Reply
A library could well adopt a "pornography" label as a finding aid, helping locate sexually explicit material designed for sexual gratification for those users with that need. But when people say to public libraries that they want to avoid pornography, their objections are almost never limited to that class of works.--Trystan 23:45, 23 August 2011 (UTC)Reply
Labels always work both ways. One patron could use a "pornography" label to find a work, and another could use the same label to avoid the work. I am certain that my local library adopted the "Young adult" label to help people find works of interest to them; I am equally certain that I use that label to avoid works not of interest to me. You should label books anyway.
It is true that most people object to many kinds of works. A person who does not want to see pornographic magazines might not want to see sexually explicit romance novels, either, or possibly any romance novels at all. I do not want to read depressing YA fiction; I also have no interest in similarly depressing works of fiction that happen to be outside of the YA section. It happens that I also have no interest in horror, vampires, romance, sports, or dinosaurs. The fact that I want to avoid more than one type of book does not mean that it is inappropriate for you to provide neutral, factual labels for all the books. We can provide factual information, like "pornography" or "photographs showing human genitals", without using subjective qualities like "depressing" or "sexually immoral". We have been doing this for years, after all, in categories like Commons:Category:Human penis. WhatamIdoing 17:06, 24 August 2011 (UTC)Reply
The key distinction is that categories like Commons:Category:Human penis haven't been singled out as "potentially objectionable." No matter how factual the image filter categories are, the fact that they've been compiled on the basis of what should (and shouldn't) be deemed "objectionable" renders them inherently non-neutral. —David Levy 21:05, 24 August 2011 (UTC)Reply
I disagree that making a maintenance-oriented list of categories that people might want to filter under a tickbox labeled something like "sex and porn" is inherently a non-neutral activity—or that NPOV applies to maintenance activities rather than to the main namespace—but let me try to focus your attention on this fact:
Commons does not have a neutrality policy. Commons rejected NPOV. NPOV is not required for WMF projects. So your argument sounds an awful lot like "But it might violate a non-existent policy", which is not a convincing argument. WhatamIdoing 17:40, 25 August 2011 (UTC)Reply
1. Please explain how a formal determination of what content is and isn't reasonably regarded as "objectionable" is neutral.
2. Are you suggesting that the feature won't directly affect the main namespace?
3. The image filter system is to be implemented across all Wikimedia Foundation projects, including those for which NPOV is a nonnegotiable principle. 19:02, 25 August 2011 (UTC)
When categorizing images, we don't and won't decide what is "reasonably regarded as objectionable". We will instead decide (for example) if an image is reasonably regarded as a photograph of sexual intercourse, and place it (or not) in Commons:Category:Photographs of sexual intercourse based on that determination. That determination has everything to do with what our typical reader expects to find in a category with that name, and nothing to do with anyone's views on the morality of the images.
When assembling a list of categories for "Images to suppress if the person ticked the 'no sex or porn pictures, please' box", we will not decide whether the contents of a category are "reasonably regarded as objectionable". We will instead decide whether the category is reasonably regarded as containing images of "sex and porn". That determination will have everything to do with what our typical reader expects to have hidden (and expects not to have hidden) if he ticks that box, and nothing to do with anyone's views on the morality of what is or isn't hidden.
This process of implementing the principle of least astonishment is every bit as neutral and factual a process as the one that ALA members use when they list Hustler magazine under "Pornography -- periodicals" (which is exactly what they do with that magazine). You would expect to find Hustler in such a category; you would not expect to find The Last Temptation of Christ in that category. The fact that some people believe one (or the other, or both, or neither) is "objectionable" in some sense is completely unimportant. WhatamIdoing 21:27, 25 August 2011 (UTC)Reply
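The category-bundle mechanism described in this exchange can be sketched in a few lines. Everything here is hypothetical: the bucket name, the category list, and the function are illustrations of the proposal under discussion, not an actual MediaWiki interface.

```typescript
// Hypothetical sketch of the proposed category-bundle filter.
// Bucket contents are assumptions for illustration only.
const filterBuckets: Record<string, string[]> = {
  "sex and porn": [
    "Photographs of sexual intercourse", // an existing descriptive category
    "Pornographic films",                // assumed bucket member
  ],
};

// An image is hidden for a reader if and only if one of its descriptive
// categories falls inside a bucket the reader has ticked. The categories
// themselves stay purely descriptive; only the bucket list encodes the
// "least astonishment" judgment debated above.
function shouldHide(imageCategories: string[], tickedBuckets: string[]): boolean {
  return tickedBuckets.some((bucket) =>
    (filterBuckets[bucket] ?? []).some((cat) => imageCategories.includes(cat))
  );
}
```

Note that in this design the per-image judgment ("is this a photograph of sexual intercourse?") and the per-bucket judgment ("does a reader ticking 'sex and porn' expect this category hidden?") are two separate decisions, which is precisely the point of contention in this thread.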
You're missing the point. Even if the "sex and porn" filter's coverage is 100% accurate and undisputed, its creation — combined with the non-creation of a [something objectionable to someone] filter — will constitute a formal declaration that "sex and porn" is reasonably objectionable and [something objectionable to someone] isn't. —David Levy 23:18, 25 August 2011 (UTC)Reply
"...what our typical reader expects to have hidden..." I don't know that such a creature as a typical reader exists. Personally, if I activated a nudity filter, I would expect all nudity to be filtered, including artistic, medical, educational, and pornographic. If I went to nudity-related categories, I would expect a very different set of images (those primarily or significantly about nudity). Similarly, we could certainly create a filter for works about pornography, but I think that would be a rather surprising scope for most people who use it. In the library setting, complaints about pornography are almost never about anything I would classify as pornography.--Trystan 02:31, 26 August 2011 (UTC)Reply

Can filtering be done in a way that minimizes the prejudicial effects of warning labels?


I've been re-reading the Harris Report to highlight areas that I agree with and pin down the basis of disagreements. The report talks about balancing intellectual openness with other goals, which I think is a fair point, and I could agree with it if the language were strengthened. I also retain my inherent wariness towards labeling, based on its prejudicial power. However, I think I could support a filtering system that identifies 5-10 controversial areas for filtration if it were founded on the following principles:

  1. We acknowledge that warning labels prejudice users against certain classes of content, and are therefore an infringement on intellectual freedom.[10]
  2. In a few extreme cases, this infringement on intellectual freedom is justified, in order to give users control over what images they see, where an objectively definable set of images can be shown to cause significant distress for many users.
  3. The scope of a warning label must be clearly and explicitly defined based on image content.
  4. In order to be a reasonable infringement of intellectual freedom, warning labels must be minimally discriminatory.
    1. They may not have the express purpose of filtering images of any group of people identifiable on the basis of race, national or ethnic origin, colour, religion, sex, age, mental or physical disability, sexual orientation, or other personal characteristic.
    2. Where disproportionate filtering of an identifiable group is likely to result from the implementation of a label, the scope of the label must be crafted to minimize this.
  5. We acknowledge that any system of warning labels will be inherently non-neutral and arbitrary, reflecting majority values while being over-inclusive and under-inclusive for others, as individuals have widely different expectations as to which, if any, groups of images should be filtered, and what images would fall within each group.
  6. We acknowledge that introducing warning labels, despite being for the express purpose of allowing personal choice, empowers third-party censors to make use of them.
  7. Categories are not warning labels. Because the task of labeling images that contain any controversial content is fundamentally different from the classification process of describing what an image is about, the warning label scheme used for filtration will be kept separate from the category scheme used to organize and describe images for retrieval.

I think the above principles could lead to the establishment of workable warning labels. They acknowledge this as an intellectual freedom limitation, which places the discourse within the right area of caution, and they acknowledge the cost of every label we add. They also seek to minimize the worst effects of warning labels in terms of prejudicing the user, namely targeting or implicitly disadvantaging certain classes of people.

So what labels could potentially meet these criteria? Well, I think the following might:

  1. Nudity, including any depictions of buttocks and genitalia. If we include nipples, we include both male and female (i.e., providing a filter that applies to Topfreedom but not to Barechested would not be minimally discriminatory). We also would not distinguish between artistic, educational, or pornographic depictions of nudity, as such distinctions are not objectively definable.
  2. Wounds, medical procedures, and dead bodies.
  3. Sacred religious depictions, such as depictions of Mohammed or temple garments. But not including practices of one group of people which another group feels to be blasphemous, sacrilegious or otherwise offensive.

The major con of the above principles is that they will lead to categories which are perhaps not the best match we could develop to meet user expectations (e.g., a lot of people would probably prefer to filter female nipples but not male nipples, or entire minority topics like homosexuality). This is by design, as it flows inherently from valuing the principles of objectivity and minimal discrimination above user expectation. It's also likely to be not all that culturally neutral (though I confess that I don't understand how any set of warning labels could be neutral at all).-Trystan 23:31, 26 August 2011 (UTC)Reply

How do you reconcile "never on the basis of physical disability" with "it's okay to suppress images of wounds"? Do the victims of acid-throwing attacks or landmines not count as having a physical disability? WhatamIdoing 17:40, 27 August 2011 (UTC)Reply
That's exactly the sort of discussion that I'd like to see the community address. My thinking of "wounds" was fresh wounds, like battlefield images, as opposed to a filter that hides images of amputees or burn victims. I have real difficulty when we start identifying classes of people as controversial.--Trystan 18:14, 27 August 2011 (UTC)Reply
The unpleasant truth is that some diseases and disabilities really are incredibly disfiguring. There are regular complaints about the images at en:Smallpox. It was a horrifying, disfiguring, disgusting, deadly disease, and the pictures of victims are seriously disturbing to some people, just like the real thing was often seriously disturbing to both its victims and their caregivers. We want to educate, but we have people who are so upset that they close the page to get away from those images, and that means we aren't educating those people. (The images don't happen to bother me.)
I've read that the same was true in real life. Badly marked survivors were discriminated against. They had trouble making friends, forming relationships, and getting jobs. People who caught sight of them frequently reacted with disgust. (Human reactions to skin diseases may have evolutionary roots: our disgust might protect us from contagious diseases.[11])
So, yes, they're real people, and they deserve respect. But, yes, other people, who are equally real but unfortunately squeamish, may be unable to tolerate some of these disgusting images long enough to read the article. I don't believe that we will find an easy solution to this problem. Since we have announced that we don't want special categories solely for filtration (e.g., "Category:Images that are so disgusting they should be filtered"), this may be one of those subjects in which the proposed filter does far less than what some users want. WhatamIdoing 21:24, 29 August 2011 (UTC)Reply
Please consider how well the alternative implementation discussed here would work in such a context. A reader with the filter enabled would visit the article and not encounter any disturbing images without specifically opting to load them. As in the category-based setup, he/she would have the ability to view only some of the images (depending on their descriptions). —David Levy 04:12, 30 August 2011 (UTC)Reply
Especially in this case it would hide the knowledge from the user. Yes, it doesn't look nice, but we want to educate. These images are a big part of that. They allow the reader to get a good understanding of the crucial effects this disease had/has. I would find it unacceptable to hide the images in this context, out of respect for the people that fought against it. --Niabot 05:55, 30 August 2011 (UTC)Reply
WhatamIdoing makes a good point about readers avoiding these articles entirely. Surely, we'd prefer that they avail themselves of the prose.
I oppose a setup in which we define such images (or any others) as "objectionable," but one in which all images are treated identically (and individual users are empowered to decide for themselves) seems appropriate. —David Levy 06:51, 30 August 2011 (UTC)Reply
How would they know what to expect? They read the description, read the text, and have no clue what they will be facing. Out of curiosity they will open such an image, "be shocked", and learn that they shouldn't open images, even though most of the others (the other hidden images) wouldn't disturb them. Regarding this article I have to say: anyone who truly wants to understand the article has to look at the images. Otherwise I doubt that he had an interest for this topic to begin with. --Niabot 06:57, 30 August 2011 (UTC)Reply
I agree that sighted persons can best gain an understanding of the subject by availing themselves of both the text and the images (however uncomfortable this might make them), but I disagree with the idea that we should seek to impose this as a mandatory condition (already a technical impossibility). I also disagree with your "doubt that [someone blocking the images] had an interest for this topic to begin with," which projects your thought process onto others. —David Levy 07:44, 30 August 2011 (UTC)Reply
Would you really study medicine if you can't see blood? You would either understand only half of it, or you should not study it. Learning only half the truth is often worse than never hearing about it. --Niabot 08:46, 30 August 2011 (UTC)Reply
Obviously, a person unable to stand the sight of blood cannot realistically enter a profession requiring him/her to routinely encounter blood. The same isn't true of someone who merely wishes to read about such a subject. Relevant imagery can aid in one's understanding, but your assertion that its omission results in "learning only half the truth" simply doesn't make sense. I wonder whether you'd apply this bleak assessment to a blind reader's experience. —David Levy 16:56, 30 August 2011 (UTC)Reply
A blind reader will always have this problem: he can't actually see it and has to rely on sources that go into great detail. He needs a description by someone who expresses his feelings about such a topic. Something we can't do, due to the principles of an encyclopedia: short and to the point, neutral judgment, no emotions, ...
That we lack such detailed descriptions is true for many articles, and it can only be improved with better, more detailed articles. A new task like sorting images after new categories will bind manpower and would have the opposite effect. It doesn't help a blind man or woman. Instead, the new buttons/links could confuse screen readers. Another issue.
You should simply believe me that someone who has a detailed interest in such an article would also be able to accept the images. It is as it is; it's part of the knowledge. --Niabot 17:14, 30 August 2011 (UTC)Reply
1. I strongly oppose the introduction of a filter system based upon "sorting images after new categories." I support the alternative implementation discussed here, which would enable a reader to block all images and unblock whichever ones he/she wished to view.
2. I agree that it's very important that the interface not interfere with screen readers (or otherwise reduce accessibility). Presumably, the additional buttons/links won't appear unless the user enables them (which a blind person obviously wouldn't do).
3. Again, you're applying your thought process to others. It isn't our place to pass judgement on readers. I personally believe that it's best to view the images, but people have the right to decide not to. In such a circumstance, reading the available prose is far better than avoiding the article completely. —David Levy 18:00, 30 August 2011 (UTC)Reply
I agree with the first two points, and I partially agree with point three. That someone is able to hide images manually isn't a problem. That he can hide all images by default is also no problem. The problem arises if we start to select for others which pictures are hidden by default. This results in non-neutral judgment (everyone has at least somewhat different opinions), discrimination against content (the majority wins), manipulation (the majority is widely spread; some minority groups can push content in/out [local majority]), endless debates (wasted time) and many more problems (accessibility could be one of them).
You said that we should give people the freedom to hide what they don't want to see. I say that we should not decide for people what they shouldn't see. That is not their freedom; that is our personal judgment about what is acceptable for the public and what is not. --Niabot 18:47, 30 August 2011 (UTC)Reply
I agree 100% with all of the above. I strongly oppose any implementation requiring the community to take an active role in determining what content is "potentially objectionable." —David Levy 19:07, 30 August 2011 (UTC)Reply
I guess we came to the agreement that
  • a feature to personally hide/show images is acceptable,
  • a feature to hide all images as the default is acceptable,
  • a feature with predefined categories is unacceptable,
if we want to keep our goals intact. We came to this conclusion after a relatively short discussion. The ALA did so sixty years ago. They still think that this is a foundation of free knowledge that should be kept under any circumstances, if possible. Wikipedia was founded under the same premise 10 years ago.
Now we may ask:
  • Why wasn't the Foundation able to figure this out for itself?
  • Why is a report written by a non-expert and his family the valid and only source for such a critical project?
  • Why was it decided to implement a filter even before asking publicly for other opinions?
  • Why must such a critical decision be the first test case for a "global" poll?
What makes me most curious at the moment is how they want to come to a conclusion after the poll. Will the results be split by language or does the English speaking majority count? It's important, since one of the goals is: "cultural neutrality". --Niabot 19:46, 30 August 2011 (UTC)Reply
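The category-free design that Niabot and David Levy converge on above (a personal hide/show toggle per image, plus an optional hide-all-by-default setting, and no community-maintained filter categories) can be sketched as a small preference model. All names here are hypothetical illustrations, not an existing MediaWiki feature.

```typescript
// Hypothetical sketch: reader-side image preferences with no predefined
// "objectionable" categories. The site never decides what to hide; it only
// remembers the reader's own choices.
type ImagePrefs = {
  hideAllByDefault: boolean; // global opt-in: collapse every image at first
  revealed: Set<string>;     // images this reader chose to show
  hidden: Set<string>;       // images this reader chose to hide
};

function isVisible(prefs: ImagePrefs, imageName: string): boolean {
  if (prefs.hidden.has(imageName)) return false;  // a personal hide wins
  if (prefs.revealed.has(imageName)) return true; // a personal reveal wins
  return !prefs.hideAllByDefault;                 // otherwise the global default
}
```

Because the only inputs are the reader's own choices, nothing in this model requires editors to label any image as potentially objectionable.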
Niabot, you say that people with "detailed interest" in smallpox won't be put off by the images. Shall we educate only those people with a "detailed interest"? Should only medical professionals learn about it? Personally, I would rather educate every single reader, including those squeamish people who currently close the page out of sincere disgust. WhatamIdoing 19:54, 30 August 2011 (UTC)Reply
If they close the page out of sincere disgust, then they have already seen enough. Reading a well-written article should have the same effect; otherwise it isn't a good article. Blind man example: You can't change the attitude by hiding facts without making it a lie.
PS: I did not say that hiding images is out of the question. But we shouldn't be the chosen ones who decide what to hide and what not. --Niabot 20:54, 30 August 2011 (UTC)Reply
No, that's not enough information. Closing the page because you are sincerely disgusted by the photo at the top of the article, which is a close-up, full-color head shot of the last-ever person to catch smallpox, does not teach you, for example, that the disease was completely eradicated through vaccination. It does not teach you that people died from the disease. It does not teach you that the scars are permanent. That disgust only teaches you that at some point in the course of the disease, it looks disgusting.
Different people react differently to words and to images. Some people can see images without any effect, but are disgusted by the description. More people accept the description without a murmur and are disgusted by the pictures. Most people don't mind either very much. We need to educate all of these people, not just some of them. WhatamIdoing 16:30, 31 August 2011 (UTC)Reply
Then give them the option to hide any image or no image, but don't play the judge for what might disturb them. It's a very simple solution for the problem. We ensure that the feature is neutral in any way, that it can't be exploited and would have the effect you desire. Any problem with that? --Niabot 16:37, 31 August 2011 (UTC)Reply
That feature already exists.
Users have repeatedly said that the existing image-blocking feature is not good enough.
Users say specifically that it is not good enough precisely because they want someone else to make a guess at what is likely to disturb them, and to filter out such images—and only such images, not 100% of images—before they have the opportunity to be disgusted by them. WhatamIdoing 16:43, 31 August 2011 (UTC)Reply
"It doesn't exist", because most users don't know this feature hidden deep inside the configuration of their browsers.
That is your personal opinion on this matter. There is no given source that proves your claims at this point. But it can be proven that letting others judge for yourself will harm you at the end. The history books are full with this insight and this story repeats itself again and again. Today it is our turn to make or to make not the same error again. Turning on a filter is not the same as to decide which content will be filtered. Filtering for others and not for yourself is not an option.
Additionally you ignored so far any problems that will arise with category based filtering. It's already mentioned some paragraphs above. So i won't repeat myself at this point. --Niabot 17:04, 31 August 2011 (UTC)Reply
No, the existing image-blocking options do not require the user to do anything "deep inside the configuration of their browsers". There are several purely on-wiki options. (Please go read en:Wikipedia:Options to not see an image instead of guessing what the existing options are.) WhatamIdoing 19:30, 1 September 2011 (UTC)Reply
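For context, one of the purely on-wiki options described on that help page amounts to a single rule in the user's personal stylesheet. A minimal hypothetical sketch (the `#bodyContent` selector is an assumption about the skin's markup, not a quote from that page):

```css
/* Hypothetical user-CSS sketch, e.g. placed in Special:MyPage/common.css:
   hide every image inside the article body. #bodyContent is the content
   container in the default skins of that era; adjust if the skin differs. */
#bodyContent img { display: none; }
```

This hides all images for a logged-in user, which illustrates the limitation discussed below: it is all-or-nothing, not selective.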
  • What percentage of readers actually find this help page or search for it: < 0.0001%?
  • How many people find the option to hide images inside their browser: 5%?
  • How many of the people who found the function actually use it to block images: 0.1%?
  • How many browsers allow blocking images from only a single domain without plugins: 0?
  • How many people who found a way to block all images from Wikipedia are complaining: 0.00000001?
I know this page, and I know that it is visited by 12 people a day on average [12]. A big number, right? How many of the 2,200 daily visitors [13] of Futanari complain about the image? < 1 in a month?
The numbers look a bit funny, right? --Niabot 22:01, 1 September 2011 (UTC)Reply
The setup discussed here (which would be far easier to implement than one based on categories/community judgement) would provide a solution significantly more robust than simply disabling/enabling images via one's browser. I don't know how advanced the various add-on scripts are, but very few users even know that such a thing exists (let alone how to use one). So it's unreasonable to equate this with anything available now.
I don't doubt that some readers would prefer to automatically filter only the images that they deem "objectionable" (and many probably have similar feelings regarding text). For the reasons discussed, this is neither feasible nor compatible with most Wikimedia Foundation projects' core principles. —David Levy 19:48, 31 August 2011 (UTC)Reply

Categories for Muslims


my proposition

  • photos and images (and thumbnails of videos and of GIF/Flash/Silverlight/HTML5 animations, and ASCII art) of women, for men
  1. do not hide any women
  2. hide all women except women covered at least from knee to shoulder and neck
  3. hide all women except women whose only visible parts are the face, hands, or feet
  4. hide all women, including images where any part of their body is visible

Explanation of the first two degrees: it may be considered by somebody that a first glance is allowed and that images should then be hidden manually. Also, maybe general naked-men filters can be used instead of them, and these degrees may be removed from the Islamic filter levels.

Explanation concerning images of children: "women" and "men" mean all women and men, including children and babies.

Explanation for images where it is not clear whether the subject is a woman or a man: they should be rounded in the direction of being hidden.

Some people may think it is allowed for men to look at little girls who are not in "hijab" ("hijab" meaning that only the face, hands, and feet are visible) but are "moderately" clothed, i.e. covered from the knees to the neck and shoulders, knees and shoulders included; some people may even think that looking at more naked girl children is OK. I have found that this idea is based on a weak hadith, and I do not want to make the categorisation more complex. That hadith does not even say anything about little girls. Probably, if it is a true hadith, it just means something like that children should not be scolded much...

Explanation of being covered: clothing should conceal the form of the body part as much as cloth can. Clothing should not sit too close to the body; if it does, let it be considered uncovered under these degrees (for example, an arm in a skinny sleeve should be counted as an uncovered arm). Clothing should also not be transparent; for example, the arm should not be visible through the clothing.

  • photos and images of men, for women
  1. do not hide any men
  2. hide all men except men covered at least from knee to navel
  3. hide all men except men covered at least from knee to elbows and neck
  4. hide all men, including images where any part of their body is visible

(see the explanations in the section on images of women for men) Explanation of the levels with covering more than from knee to elbow: the Quran does not say clearly what women should not look at, so these levels are also possible.

  • photos and images of men for men
  1. do not hide any men
  2. hide all men except men covered at least from knee to navel

(see the explanations in the section on images of women for men)

  • photos and images of women for women
  1. do not hide any women
  2. hide all women except women covered at least from knee to navel
  3. hide all women except women covered at least from knee to shoulder and neck

(see the explanations in the section on images of women for men)

  • photos and images of the thighs of any men/women or animals
  1. do not hide any thighs (except where they are already hidden by other filters)
  2. hide the thighs of any animals, including people, but not the thighs of insects
  3. hide the thighs of any animals, including people, and also the thighs of insects, as well as pistils and stamens (i.e. the reproductive organs of plants) and the reproductive organs of other living creatures

Explanation of the level that hides the reproductive organs of all living creatures: I think the hadith about not looking at the thigh also shyly refers to the reproductive organs. The hadith allowing one to look at the thigh of a locust is a separate hadith, so if it is not true, then such a policy is also possible. I deleted this level, but now I bring it back and strike it out. A strong argument for not using that level: if it were so, there would be a hadith about it. Though I am not sure; maybe in the time and place of Muhammad there was simply no culture of giving flowers or looking at them, nor of growing them artificially, nor were there photos, and so there was not enough need to say anything about this.

  • photos and images of any animals (and people)
  1. do not hide any animals (or people)
  2. hide animals that are big and clearly visible; some criterion should be selected, for example a size of more than 10 pixels in any direction
  3. hide all images where any animal or person, or any part of them, is visible

--Qdinar (talk) 11:15, 10 January 2015 (UTC)Reply

Usage of the filter


Proposal: Filter is enabled by default


I am proposing that, by default, the image filter be enabled (all potentially offensive images blocked). There are several reasons why this is preferable:

  • It is in keeping with the "principle of least surprise/least astonishment". Users should not have to take action to avoid seeing images which may offend them.
  • Some of Wikipedia's critics have long complained about offensive images. Unless those images are blocked by default, the criticism will continue. Having potentially offensive images blocked by default will silence that criticism.
  • Schools and some libraries are likely to want potentially offensive images blocked by default.
  • Many institutions with shared computers do not save cookies for security reasons, so any setting would be gone the next time the browser is started. This poses a problem for institutions which require potentially offensive images to be blocked, unless the filter is on by default.
  • Wikipedia vandals often place offensive images in templates or articles. While users viewing en:Fisting may not be surprised to see a photograph of a man being anally fisted, they would not expect to see it on an unrelated article, which is what happens with this type of vandalism.
  • The misuse of images (as opposed to the deliberate abuse of images by vandals), coupled with the potential for users, especially younger users, to be unaware of alternate meanings of terms, means that users may be surprised by images that they did not expect to see. Blocking offensive images by default mitigates this possibility.

Therefore, I propose that the implementation of image filters be done in such a way as to enable them by default. Unregistered users could turn the filter off with the dialog shown in the mock-ups. Registered users would be able to change it via a preference that they set once. Note that I am not proposing any change to which images are filtered or how they are classified; those remain issues for the WMF to work out. When I refer to "potentially offensive images" I simply mean images which would be filtered if a user had chosen to use the filter (there is no difference in what is filtered between the opt-in and opt-out implementations). Delicious carbuncle 20:52, 16 August 2011 (UTC)Reply

  • Support - as proposer. Delicious carbuncle 20:52, 16 August 2011 (UTC)Reply
  • Support - flickr is probably the largest source of free porn that is hoovered up and stored here, and by using simple opt-in filters they still manage to keep the place relatively porn-free for those who don't want to encounter it in specific circumstances. John lilburne 21:20, 16 August 2011 (UTC)Reply
  • Oppose - My understanding of the original "opt-in" filtering proposal was that, under that proposal, we would tag images as belonging to various (relatively value-neutral) categories (e.g. nudity, spiders, firearms, explosives, trees), provide each viewer with a filter that doesn't block anything by default, and allow the viewer to choose which categories to block. I'm not entirely happy with that suggestion, but not strongly opposed as of yet. In contrast, Delicious carbuncle's proposal is to have certain "potentially offensive images" blocked by default, while giving viewers the option of un-blocking those images. Unlike the opt-in proposal, this one would require us to make a definite value judgment (i.e. regarding which kinds of images are "potentially offensive" to a reasonable person); there seems to be no way of drawing the line here, unless we're going to have the filter block by default all images that any identifiable demographic finds offensive. Sorry, but I don't think it's feasible to expect people from all over the world to come to an agreement on that issue. --Phatius McBluff 21:33, 16 August 2011 (UTC)Reply
    • Comment - However, if we do end up having some categories filtered by default, then we had better allow people the option of adding additional categories to their filters. That's only fair, given the extreme subjectiveness of what counts as "potentially offensive". --Phatius McBluff 21:33, 16 August 2011 (UTC)Reply
      • To be clear, what I am proposing is that any image which would simply be filtered under the opt-out scheme be filtered by default under the opt-in. Whatever system and schema used to classify images for the original proposal would be used in my variation. By "potentially offensive" I am referring to any image tagged as belonging to any of the categories shown in the mock-ups. Nothing is changed except the filter is on by default. Delicious carbuncle 22:32, 16 August 2011 (UTC)Reply
  • Strong Oppose for all reasons given by me in the sections above this poll. --Niabot 21:48, 16 August 2011 (UTC)Reply
  • I can't think of a worse way of using Wikimedia resources than to help libraries keep information from their users. Also, it is impossible to implement in a culturally neutral way... Kusma 22:31, 16 August 2011 (UTC)Reply
    • Again, there is no difference in implementing classification under my proposed variation, simply that filtering is on by default. If your issue is with the idea of a filter or with the difficulties of classifying images, please do not comment here. Delicious carbuncle 22:35, 16 August 2011 (UTC)Reply
"All potentially offensive images" would include all images of people for a start, and I suspect all mechanically reproduced images (photographs etc.), and quite possibly all images. Rich Farmbrough 22:47 16 August 2011 (GMT).
OK, I stand corrected. There are two culturally neutral ways to filter images: all or none. Kusma 22:48, 16 August 2011 (UTC)Reply
Once again, I am not proposing any changes to which images are filtered or how they are classified, only that the filters are on by default. If you have concerns about how image will be classified or who will do it, take them up with the WMF. If you have an opinion about whether the filters (which will be implemented) are on or off by default, feel free to comment here. Otherwise, your participation in this discussion is entirely unhelpful. Delicious carbuncle 23:03, 16 August 2011 (UTC)Reply
I strongly suggest that they are disabled by default or will never be implemented. There is one big, open and time consuming question: Which image is offensive to who? "Personally i hate flowers and numbers, please block them all in the default view, because i often change my workplace..." --Niabot 23:36, 16 August 2011 (UTC)Reply
Let me then rephrase. Oppose because in the event that filters are implemented they should, and undoubtedly would, include a filter for the human form "it is feared by many Muslims that the depiction of the human form is idolatry", all human and animal forms - certain hadiths ban all pictures of people or animals - and for all representations of real things "You shall not make for yourself a graven image, or any likeness of anything that is in heaven above, or that is in the earth beneath, or that is in the water under the earth". Rich Farmbrough 23:31 16 August 2011 (GMT).
What a load of crap historically, and currently. Damn I've not seen such ignorance outside of BNP and EDL lies. John lilburne 06:55, 17 August 2011 (UTC)Reply
I guess WP:CIVIL doesn't apply on Meta? You may wish to go and purge this "crap and ignorance" from en:Wikipedia - and possibly stop off for a chat with the Taliban and advise them of the errors of their ways in banning photography. Or perhaps you could credit people with the knowledge that Persian illustrations such as "Why is that Sufi in the Haman" contain figures which were permissible where and when they were created, but would not be by other groups of Muslims at other times in other places. Since the prohibition is derived from hadiths rather than the Qur'an, it is certain Sunni sects that have observed it, while other groups have not. To generalize from a couple of specific localised examples to a faith of a billion people over 14 centuries is an absurd leap. Rich Farmbrough 23:54 17 August 2011 (GMT).
You wouldn't be trying to censor words here now, would you? Back on point: every Muslim country has TV with images of people and animals, all have newspapers containing images of people and animals, all have billboards with local politicians, and street demonstrations have people holding aloft images of religious leaders or dead martyrs. Walk around in a Muslim tourist spot and the women in niqab and burka are taking photographs of their family. "Ban all pictures of people or animals"? Crap. John lilburne 00:12, 18 August 2011 (UTC)Reply
Just because every country (currently) does or has something, it does not follow that there are no communities that dislike, condemn or prohibit it. Rich Farmbrough 12:06 18 August 2011 (GMT).
There is no significant number of people who are offended by general images of people and animals. If such people exist then they must have a very unhappy time wherever they live, as there is hardly anywhere with no images of people or animals, and if they are truly offended by such images they won't be browsing the internet. So excuse me if I don't dance off into la-la land with you. John lilburne 14:06, 18 August 2011 (UTC)Reply
Interesting that you feel the need to be rude in every comment you make. However it might still be worth your while to read, for example, en:Islamic aniconism, to consider that there are other ways of delivering the projects than Internet (some of which WMF representatives have specifically discussed in relation to similar minority religious groups), to further consider that it is possible to use the Internet with all images suppressed, and one positive feature is that this might allow access to a reduced subset of WP where nothing was available before. Alternatively you can continue to deny the existence of such groups, or claim that since they are small they don't matter, and that I am a loony to consider that "all potentially offensive images are blocked" is a very broad, indeed overly broad proposal - but you'd be wrong. Rich Farmbrough 00:13 19 August 2011 (GMT).
  • Oppose: If you implement the filter, make it off by default; otherwise it would further marginalize images seen as controversial by those in power, something to which I'm opposed on anarchist grounds. Cogiati 01:37, 17 August 2011 (UTC)Reply
How is that an Anarchist perspective! As an Anarchist myself, I don't think ANYONE should be dictating what I MUST see any more than someone should dictate what I MUST NOT see. The solution that allows the most freedom of choice is to allow me to decide whether to view the image or not. John lilburne 11:46, 17 August 2011 (UTC)Reply
It is your choice to filter some categories or not. But it is not your choice which image belongs to which category. Some might still offend you, and some might stay hidden even if you had no objections to viewing them. --Niabot 18:35, 18 August 2011 (UTC)Reply
  • Oppose – Forcing users to opt out is far more intrusive than granting users the ability to opt-in. --Michaeldsuarez 02:56, 17 August 2011 (UTC)Reply
  • Oppose. Delicious carbuncle: You seem to be under the impression that the idea is to compile a finite list of "potentially offensive" subject areas and offer the option of filtering them. This is incorrect. An image "offensive" or "controversial" to one culture/individual is not necessarily viewed as such by another. "All potentially offensive images" = "all images". The "categories shown in the mock-ups" are merely examples.
    For the record, I strongly oppose the plan on philosophical grounds. Your suggested variant, in addition to being even more philosophically objectionable, isn't even viable from a technical standpoint. —David Levy 03:45, 17 August 2011 (UTC)Reply
    You appear not to have read what I wrote. A filtering system is being implemented. I am simply proposing that the filters are on by default. Any issues with deciding what to filter exist in the current implementation. Delicious carbuncle 10:59, 17 August 2011 (UTC)Reply
    You appear not to have read what I wrote (or what Phatius McBluff wrote). The proposed system would not include a single set of filters applicable to all cultures (an impossibility) or to one particular culture (a non-neutral, discriminatory practice). It would, at a bare minimum, encompass numerous categories widely considered controversial/offensive in some cultures and perfectly innocuous in others.
    "The filters are on by default" would mean that everything categorized within the system would be filtered for everyone by default. Some religious adherents object to images of women without veils or women in general, so all images of women would be blocked by default. That's merely a single example.
    For the plan to even conceivably succeed, "deciding what to filter" must be left to the end-user. (Even then, there are significant issues regarding how to determine the options provided.) Your suggestion relies upon the existence of a universal filter set, which is neither planned nor remotely feasible. —David Levy 18:54, 17 August 2011 (UTC)Reply
  • Oppose. Just because you're playing on a slippery slope over shark-infested waters doesn't mean you have to dive off it the first minute. Look, I'm not completely ignorant of the religion of the surveillance state; the concept is that the human soul is a plastic card issued by a state office, and all of the rights of man emanate from it. For the unidentified reader, who doesn't run Javascript or cookies, to be able to access specialized information (or eventually, any information), let alone post, is a Wikipedia blasphemy against this faith no less extreme than the Muhammad pictures are against Islam. And so first you feel the need to make the image hiding opt out by default; then limit it to the registered user; and eventually only such registered users as are certified by the state, because your god commands it. After that it would be those specialized articles about chemistry, and so on. But Wikipedia has been defying this god since its inception, so I have high hopes it will continue to do so. Wnt 05:45, 17 August 2011 (UTC)Reply
  • Support, as a parent in particular. Currently I have all of the WMF's websites blocked on my daughter's browser, which is a shame because she loves learning and wikipedia would be a good resource for her if it weren't for problematic images that are generally only a few clicks away. --SB_Johnny talk 12:30, 17 August 2011 (UTC)Reply
I'm disturbed to read your vote, because if you can block WMF websites on your child's computer you must be running some sort of "nannyware", but I would expect any remotely professional censorship company to be able to block out the sort of images being discussed here while allowing through the bulk of Wikipedia material. What do they do in schools which are required to run such software? Wnt 14:41, 17 August 2011 (UTC)Reply
Why would you be "disturbed" by a parent using nanny software? Or (say) a preK-8 school, for that matter? --SB_Johnny talk 16:39, 17 August 2011 (UTC)Reply
There is no software that will reliably detect inappropriate images (whatever the criteria for inappropriate is). Thus it is simpler and less error prone to block the entire site. John lilburne 15:03, 17 August 2011 (UTC)Reply
Yup. I seriously doubt the sites will come off the red flag lists until at least some effort is made to change the defaults. OTOH, from other comments on this page about "cultural neutrality" and "radical anti-censorship", the categorization and tagging efforts might fail anyway. --SB_Johnny talk 16:39, 17 August 2011 (UTC)Reply
  • Strong oppose Oops, I think we slipped. Our boots are losing traction. Internoob (Wikt. | Talk | Cont.) 17:33, 17 August 2011 (UTC)Reply
  • Oppose, for reasons outlined in previous comments on this talk page. If I oppose opt-in filtering, it's likely of little surprise that I oppose opt-out filtering. If we must have image filtering, it should never be opt-out.
  • Support. No point to doing it any other way. Why would a reader create an account if they're offended by an image in order to hide it in the future? They'll simply leave. So much for disseminating all of human knowledge (like the WMF actually does that anyway). – Adrignola talk 00:06, 18 August 2011 (UTC)Reply
    Do you advocate that all images (or at least all images known to offend large numbers of people) be suppressed by default? If not, where should we draw the line? At images that seem "offensive" to you? To me? To whom? —David Levy 01:16, 18 August 2011 (UTC)Reply
    Flickr, Picasa, Facebook, Ipernity, and YouTube have vast amounts of content that is considered acceptable for their international audiences; Flickr has a huge amount of sexually explicit content that is filtered too. Some of the most prolific uploaders of sexually explicit content to Commons source that content from Flickr. Flickr's rules of acceptability are crowd-sourced. Why do you think that WMF sites would be unable to distinguish acceptable from unacceptable in relation to the same audience? John lilburne 06:43, 18 August 2011 (UTC)Reply
    Flickr is a commercial endeavor whose decisions are based on profitability, not an obligation to maintain neutrality (a core element of the nonprofit Wikimedia Foundation's mission).
    Flickr (or any similar service) can simply cater to the revenue-driving majorities (with geographic segregation, if need be) and ignore minorities whose beliefs fall outside the "mainstream" for a given country. The Wikimedia Foundation mustn't do that. —David Levy 07:33, 18 August 2011 (UTC)Reply
    And yet Flickr has far more controversial images than Commons, is the place from which Commons sources much of its material, and is where the minority ero community and majority non-ero communities coexist side by side. Ero-content producers often express support for a filtering system under which they can browse the site with their children or work colleagues and not encounter age-inappropriate or NSFW images. John lilburne 17:29, 18 August 2011 (UTC)Reply
    You're defining "controversial images" based upon the standards prevalent in your culture. In other cultures, for example, images of women without veils or women in general are considered highly objectionable. A website like Flickr can make a business decision to ignore minority cultures (on a country-by-country basis, if need be), thereby maximizing revenues. ("Only x% of people in that country object to this type of image, so it needn't be filtered there.") Such an approach is incompatible with the Wikimedia Foundation's fundamental principles.
    Incidentally, Flickr hosts hundreds of times as many media files as Commons, primarily because most fall outside the latter's scope. —David Levy 18:19, 18 August 2011 (UTC)Reply
    Your position is inconsistent. If true that unfiltered images of women without veils on WMF sites are objectionable, WMF sites are not less objectionable with unfiltered images of a woman's vagina. Flickr, google, and others are in the business of getting the maximum number of viewers on their sites. Both do so by having a click through before revealing images that may be objectionable. Logged in users that have expressed a preference to see certain types of image are not bothered once their preference has been set. John lilburne 19:16, 18 August 2011 (UTC)Reply
    No, my position is not inconsistent. My position is that the Wikimedia Foundation should filter no images by default, irrespective of who they do or don't offend. (I also oppose the introduction of opt-in filtering, but that's a somewhat separate issue.)
    It's true that someone offended by an image of an unveiled woman also would object to an image of a woman's vulva, and your argument seems to be that there therefore is no harm in filtering the latter. You're missing my point, which is that it isn't the Wikimedia Foundation's place to decide where to draw the line.
    In some cultures, a photograph of a fully nude woman is regarded as perfectly acceptable. In other cultures, any photograph of a woman (even if clothed in accordance with religious custom) is considered problematic. You want the Wikimedia Foundation to filter its content in a manner precisely conforming with your culture's position in the spectrum, thereby abandoning its neutrality by deeming the former cultures excessively lax and the latter cultures excessively strict. (And as noted above, this is merely a single example.)
    Flickr, Google and others are in the business of making money. For them, value judgements catering to majorities at the expense of minorities improve the bottom line. For the Wikimedia Foundation, they would violate a core principle. —David Levy 23:28, 18 August 2011 (UTC)Reply
  • Oppose. This would block what search engines see and that would block the ability of those who do not want filtering to use search engines that analyse image content to look for images containing certain items. It's key to the general proposal that the filtering should not affect those who do not want filtering, and on by default would do that. Jamesday 03:25, 18 August 2011 (UTC)Reply
  • I find it kinda difficult to take this proposal at face value. If someone wants to pre-emptively block all images, they can e.g. use Adblock Plus to filter all common image file formats, or they can use this userscript (works like a charm for me). For general image blocking, no separate filter system is required, and therefore implementing a universal default image block provides no benefits but a lot of drawbacks for the majority of people who want to see all or most images by default. --213.168.119.238 17:43, 18 August 2011 (UTC)Reply
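To illustrate the point about ad blockers, a filter list that blocks common image formats from Wikimedia's media server might look roughly like the following. This is a hypothetical sketch in Adblock Plus filter syntax, not a tested configuration; it assumes images are served from upload.wikimedia.org:

```
! Hypothetical Adblock Plus filter list: block common image
! formats served from Wikimedia's media host.
||upload.wikimedia.org^*.jpg
||upload.wikimedia.org^*.jpeg
||upload.wikimedia.org^*.png
||upload.wikimedia.org^*.gif
||upload.wikimedia.org^*.svg
```

As with the user-CSS approach, this is all-or-nothing: it cannot distinguish a "controversial" image from any other, which is exactly the gap the proposed filter is meant to fill.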
  • Mmmmm, oppose. People ought to be able to look at pictures first to see what they don't want to look at, and then be able to block them. Rickyrab 18:34, 19 August 2011 (UTC)Reply
  • Oppose – No thanks, most people would like to customize the image filter for themselves. We shouldn't be forcing users to have to opt-out of the image filter; they should make the final choice to "opt-in" to such filtering. Parents can "opt-in" their children, librarians can "opt-in" library computers, etc. Give the readers (and/or the editors) the final decision whether or not to use the image filter; don't enable it by default. mc10 (t/c) 22:53, 20 August 2011 (UTC)Reply
  • Oppose It is bad enough that the "optional" feature, as described, violates the NPOV pillar. More than bad enough. But making *all* so-called offensive categories, as this proposal suggests, from nudity to any image of the human body to any particular "sacred" element of any religion to images of violence to images of shrimp, making all those images hidden by default? I find this suggestion enormously troubling. --Joe Decker 02:38, 22 August 2011 (UTC)Reply

Proposal to at least put this in as one of the voting options


Currently the poll/referendum/whatev doesn't even ask the question (thus "censoring" the poll) ;-). Since it's already mentioned that people can change their votes anyway, is there any reason it can't be added as an option on the poll/referendum/whatev? --SB_Johnny talk 21:04, 17 August 2011 (UTC)Reply

Considering the rather overwhelming opposition to the opt-out proposal (above), I wouldn't think it necessary to put the question in the poll. 92.18.208.159 16:46, 21 August 2011 (UTC)Reply

Circumvention of opt-out by institutional administrators is possible, locking students and employees into a censored world


Or maybe it should be open-ended, so that whatever has a category can be arbitrarily black-listed and blocked - purple people can block images of striped people, Jennifer Aniston can block images of Brad, and I don't ever have to face the horror of gherkins again. Sure, people will want blinkers, but Wikipedia has traditionally had a policy not to provide them - from day 1, we've been all about enlightenment without political barriers. And Christian school admins can set a permanent non-removable cookie on all kiosk machines to block sex-ed and evolution in their classrooms (and also block image filter config pages, so the "hiding be reversible: readers should be supported if they decide to change their minds" option can definitely be circumvented! And don't say JavaScript, because they can disable that, too). I don't think people should have a "sort-of-Wikipedia". It dilutes our mission. And just because Google caved in to China's demands at one point (and was rightly castigated for it), doesn't mean we have to. Samsara 22:02, 19 August 2011 (UTC)Reply

It seems like allowing people to hide things from themselves vs. allowing people to hide things from others are two different issues, yet this proposal doesn't make a clear distinction between them. The list of voting questions includes "that hiding be reversible: readers should be supported if they decide to change their minds." Well of course, why on earth would you have a preference that once you activate it, can't be shut off? What it's really trying to say is "should a parent/school administrator be able to set it up so that a child can't unhide images?" or "should one person be able to hide images from someone else, out of that person's control?" Evil saltine 23:03, 19 August 2011 (UTC)Reply
I would be really concerned if I thought what we were building would seriously allow one human being to stop another human being from accessing information as they choose to. But nothing we build is going to be useful to that effort. Repressive regimes and repressive authorities already have plenty of software of their own. This is really just about the basic need for readers to, e.g., 'not see a gross autopsy picture if they really don't want to'. --AlecMeta 00:10, 20 August 2011 (UTC)Reply
To follow up-- if somebody can control my browser's behavior, can't they already prevent me from seeing any or all WM content anyway? Would our building this actually help China censor its citizens in a tangible way, or can they do that already without our help? --AlecMeta 02:46, 20 August 2011 (UTC)Reply
Yes, it would help, because it does the categorization and promulgates it for them. And it also, apart from the technical side of things, creates a chilling effect, because someone seen to have the "Chinese Orthodox" filter turned off might be subject to various forms of retribution. Moreover, it might be possible to log filter settings. Rich Farmbrough 03:16 20 August 2011 (GMT).
Every organisation and education institution uses a content-filtering proxy server anyway, and nearly all organisations just block sites whose views they don't support or that are off topic. This feature won't make that situation any worse and might make it somewhat better. Keeping in mind we are just talking about images here, and I couldn't give a rat's arse if I can't see images of dicks or Mohammad; like, honestly, keep it in perspective. If you think you aren't already censored WAY MORE than what this is doing, you're deluded. Promethean 07:11, 20 August 2011 (UTC)Reply
None of what you brought forward makes any sense, Promethean. It's been pointed out that if we build categories designed for filtering, this will help third parties in the restrictions they may wish to build for users whose machines they control. Secondly, just because I'm exposed to the potential of traffic accidents on my way to work doesn't mean I have to go swimming with saltwater crocodiles. Any level of censorship is bad, and just because we already have some, doesn't mean we should introduce even more. Apply that argument to nuclear weapons, and I think you'll see how massively wrong you've been in your arguments. Samsara 03:38, 23 August 2011 (UTC)Reply

Liability

File:Babar.svg
Elephant in the throne room, yesterday

The risk here is that we create liability. Currently you get whatever images are on the page; you take a risk that "any reasonable person" would apprehend - that on a page you might see either A. an apposite but offensive picture, B. an erroneous and offensive picture, or C. a vandalistic and offensive picture. Once we have a filter in place, the user with religious views, a sensitive stomach, PTSD, or an expensive lawyer on retainer may come across images that they believe (rightly or wrongly) the filter should hide. We have in effect given an undertaking to filter, which we are not able to deliver on, for a number of reasons possibly including:

Is this a horror image?
  1. Interpretations of the filter
  2. Error
  3. Backlog
  4. Pure vandalism
  5. Anti-censorship tag-warriors
  6. Perception of the image

And that doesn't touch on the consequences if we hide something we should not.

Rich Farmbrough 23:04 16 August 2011 (GMT).

And yet flickr have filters and aren't sued by expensive lawyers on retainer, when porn images leak past the filters. Why is that? John lilburne 11:56, 17 August 2011 (UTC)Reply
My understanding is that flickr terminate user accounts when users fail to apply the filters to their own images correctly, which also removes all their images. How would you implement that here? Samsara 22:13, 19 August 2011 (UTC)Reply
Have you heard of the concept disclaimer? 68.126.60.76 13:48, 21 August 2011 (UTC)Reply
Indeed, and the projects have always resisted adding more disclaimers. Firstly, issuing disclaimers does not necessarily absolve from responsibility; secondly, it can increase responsibility in other areas; thirdly, it establishes a position we may not wish to be identified with. But certainly if the filter(s) were implemented, disclaimers galore would be required. Rich Farmbrough 18:37 21 August 2011 (GMT).
Yes, a disclaimer would absolve from responsibility. There is no responsibility in this situation. If there was, Wikipedia would already censor all images with nudity or violence. Do you think people regularly sue Google over every instance where SafeSearch fails to block a pornographic image? You can't search anything without accidentally finding porn. I can't even find a disclaimer from Google, other than a note in their help page that "No filter is perfect, and sometimes adult content shows up even with SafeSearch on".
There is no need for "disclaimers galore"; this is utter nonsense. We probably wouldn't need more than one sentence. "Wikipedia is not censored" is hardly less of a disclaimer than "Wikipedia may be censored at your discretion, but the censorship might not always be perfect". 68.126.60.76 11:09, 22 August 2011 (UTC)Reply
Rich is right: merely posting a disclaimer does not absolve you from existing responsibilities. If there is no responsibility in this situation, then a disclaimer is completely unnecessary. On the other hand, if there is a responsibility, then a disclaimer is not a guaranteed get-out-of-jail-free card. To give an extreme example, posting a disclaimer that says "BTW, we hereby disclaim any responsibility for the spambot malware that we're deliberately infecting your computer with" will not prevent you from going to jail for breaking the laws about malware and spamming. WhatamIdoing 17:28, 25 August 2011 (UTC)Reply
Good point. Regardless, I stand by my assertion that if Google can offer an imperfect image content filter without being sued, so can Wikipedia. Notwithstanding Google clearly has more money than Wikipedia and makes a better target for a frivolous case. 68.126.63.156 02:36, 30 August 2011 (UTC)Reply

If the filter is off by default, the Foundation would probably not lose the "safe harbor" legal protections afforded to ISPs that do not censor their content. It might be possible to have new users and IPs get asked about it if a controversial image is about to be shown without risking those protections. In Islamic countries where we've been blocked, we might even have the defaults set first, although since I'm not an ISP lawyer I have no idea what that would do to the safe harbor provisions. 76.254.20.205 16:38, 24 August 2011 (UTC)Reply

Matters of principles


Quit trying to fix something that isn't broke


Wikipedia is one of the most successful sites on the Internet despite not having content restriction features. Let's leave well enough alone and stop tinkering with the open principles that made Wikipedia successful in the first place. Inappropriate images aren't a major issue, but the slippery-slope solution presented has the potential to ruin Wikipedia itself. Jason Quinn 15:24, 16 August 2011 (UTC)Reply

Approaching Wikipedia with a "don't fix what ain't broke" attitude will stifle its ability to progress. Yes, censorship is a delicate issue, but as long as it is opt-in, we are relatively far from the dangerous slippery slope, imho. I can't imagine any scenario where people would be driven away from a website that allows them to filter things they might not want to see. Nobody complains about Google's safe search feature; if you don't want a "safe" search then you keep it turned off. B Fizz 18:08, 16 August 2011 (UTC)Reply
The existence of the categorization system used for censorship would -however- be neither opt-in nor opt-out. It either exists or it does not.
The existence of this system allows for multiple avenues of attack against neutrality on any particular object in our custodianship. To wit, some attacks mentioned on this page: 3rd party usage of the categorization system in ways in which it was not intended. Legal attacks on categorization of certain items (both pro and contra). External entities able to mobilize large groups of people conspiring to place certain items under censorship (or remove them from censorship), for reasons of profit, trolling, or political gain.
We already deal with some of these problems on a regular basis, of course. I'm not entirely sure why we would want to make ourselves more vulnerable, however.
The software side is fairly easy, and is -in itself- indeed quite innocent. The problem is the categorization scheme that is going to be biting us in the rear for years to come.
--Kim Bruning 15:48, 19 August 2011 (UTC)Reply
But Kim, aren't all of these problems absolute non-issues as long as the filter is strictly opt-in? People can use whatever categories there are to filter images as closely along their own preferences as the system allows at any given moment. --87.78.46.49 15:58, 19 August 2011 (UTC)Reply
Not only are "all of these problems absolute non-issues as long as the filter is strictly opt-in"; the existence of the filter itself is not much of an issue by itself, even if it were opt-out. People are looking at the wrong part of the problem.
I think the root problem lies with the underlying data used to feed the filter. Now *that* is where the can of worms is. Start thinking about ways in which such structured data can be attacked (vector 1) or abused (vector 2). Assume an adverse environment (that is to say: attacks and abuse are givens). Now come up with a viable data design. That's actually very hard!
Note that if we use the existing category system (which is not what is currently proposed AFAIK) that system would come under new pressure, due to the stakes involved.
I think censorship is a high-stakes game, and I'm not sure it is possible to merely skirt the edges. --Kim Bruning 13:05, 20 August 2011 (UTC)Reply
@B Fizz. Wikipedia has progressed just fine with a community-driven approach. In fact, it's hard to imagine it having progressed better. When the community identifies a problem, fix it. These "top-down" solutions go against the spirit of the community. As other people have written here, it's only a matter of time before government gets its filthy little hands on guiding the content of Wikipedia once these filters exist. In fact, schools and libraries will instantly apply these filters if given the choice, which will automatically trump the supposed opt-in nature of the filter. This whole issue is a non-issue anyway. I view hundreds of articles a week via the random page feature, and I never find inappropriate material.... EVER. The only place where images exist that some might view as controversial is on medical articles. Those images belong there if they add to the educational nature of the article. You don't withhold important information from a patient just because the patient doesn't want to hear it. If there's other content here that's inappropriate, the only way to find it is to go out of your way and look for it. Jason Quinn 20:24, 19 August 2011 (UTC)Reply

24.148.240.176 05:24, 24 August 2011 (UTC) Busy, busy, busy. This is homologous to the four robocalls per weekend one receives from school principals who natter scripts that are available on their schools' web pages and provide bits of information that are old news to all who might care. Technology applied by the dull because it's there. Avert your eyes, readers, in the old-fashioned way you did back in '09!Reply

The Wikipedia I want to contribute to does not have this feature


Since I can't vote against this... ...I will add my voice here.

Very simply, the Wikipedia I want to contribute to does not have this feature.

The Wikipedia I want to contribute to does not accept that any image is inherently offensive. It does not get into arguments about how much buttock is required to categorize a picture as sexual nudity. It does not implement religious superstitions in software. It does not accommodate readers who wish to experience a sanitized version of reality.

The Wikipedia I want to contribute to is unashamed to stand defiantly in opposition to any culture or system of belief that teaches that certain things may not be seen, or certain facts not known. It has a culture dedicated to telling the truth as wholly and neutrally as possible, without any bias of commission or of omission. It abhors any suggestion of censorship, or the creation of any mechanism that might someday support censorship.

I know I am not alone in this. I believe that the majority of people who are inspired and challenged to help build a free encyclopedia - on its way to becoming one of the great cultural landmarks of human history - are generally not to be found in favor of building any mechanism for restricting free access to ideas. This simply should not be built because it is a fundamental betrayal of the core values of the project.

Thparkth 15:07, 19 August 2011 (UTC)Reply

+1, it can't be said much better than that, now where is the "No thanks, and I object to this feature being implemented" button in the vote? Thomas Horsten 15:34, 19 August 2011 (UTC)Reply
By my reading, voting 0 on the first question will do the trick, although whether they will listen to any of the answers that they don't like is doubtful. — Internoob (Wikt. | Talk | Cont.) 20:51, 19 August 2011 (UTC)Reply
I agree up until "[Wikipedia] does not accommodate readers who wish to experience a sanitized version of reality." Sanitized reality should not be forced on anyone, but I do feel that if a reader wishes to experience a sanitized reality, they should be able to. Suppose I want to read the wiki entry for "fuck", but I don't want to see images depicting that word. I don't know whether the article has explicit images or not, but I could just turn off sexual images and read the article. I like being in control over what I see or don't see, and I think it fair to give this ability to everyone, as long as no one but me controls what I see. So someone wants to read about Muhammad but feels that depictions of him are blasphemous? He can turn off Muhammad pictures for himself, and leave the rest of us unaffected. Everyone who opposes this feature acts like someone else's preferences will affect my own ability to experience the raw 100% exposed Wikipedia content. It won't. Anyone who wants the entire uncensored content can still get it, and suggesting that this feature will snowball into a forcibly censored Wikipedia is pure paranoia. B Fizz 16:28, 19 August 2011 (UTC)Reply
Readers are not in control of what they see or don't see, on any web site. The content on the page is the product of editorial decisions. Take it or leave it (or hit the "edit" button and change it), but don't assume you have the right to experience it in a different way than the content creators intended. The bowdlerized article you want to see may be missing critical information, and it may be made non-neutral by your self-imposed censorship. Thparkth 16:43, 19 August 2011 (UTC)Reply
So are you saying that Google Safe Search doesn't exist, or that when the user clicks the button to choose a setting, that the setting is controlled by someone other than the user?
Or do you mean that if you don't control 100% of everything on the website, that you aren't in control of anything at all? WhatamIdoing 00:12, 24 August 2011 (UTC)Reply
+1, and how can I make my voice heard to the deaf who don't want to hear the No they deserve?
Better yet, where is the vote!?! I click the link to SPI from here: http://en.wikipedia.org/wiki/Special:SecurePoll/vote/230 and I get, "Welcome to SPI!
SPI is a non-profit organization which was founded to help organizations develop and distribute open hardware and software. We encourage programmers to use any license that allows for the free modification, redistribution and use of software, and hardware developers to distribute documentation that will allow device drivers to be written for their product.
Read more
This site is powered by ikiwiki, using Debian GNU/Linux and Apache.
This new SPI site is still under construction, please refer to the "legacy" SPI web site for any content not yet present here.
Copyright © 2010 Software in the Public Interest, Inc.
License: Creative Commons Attribution-ShareAlike 3.0 Unported
Last edited 2011-07-14"
No actual way to vote. WTF is actually going on here? A proposal to vote on censorship that contravenes an existing board ruling, with no way to vote and bad questions/options (apparently) when you get there... something really weird is going on here. JMJimmy 15:50, 19 August 2011 (UTC)Reply
+1 At first I was rather in favor of allowing user-requested filtering, even if I don't see the need myself.
But you explain clearly how it's against the core project's values. If a user, group, or government wants this feature, they can implement it themselves as a browser add-on or a country-wide subversion-wall.
--Guillaume42 00:28, 20 August 2011 (UTC)Reply
+1 This "vote" is embarrassing and disgraceful. There are approximately half a dozen questions, but the most essential question of all, namely whether we want this misfeature at all, is not even asked. The closest they get is how important the feature is, which is not the same thing at all. (I consider it important - in fact, I consider it very important that we NOT implement this.) --Eivind 06:38, 22 August 2011 (UTC)Reply
You are right that you are not alone in this. It is certainly one of the reasons I dove into Wikipedia head-first. SJ talk | translate   02:47, 6 September 2011 (UTC)Reply

This is not censorship


Reality Check


To those of you who are convinced that this proposal is a form of censorship, and more to the point, to those of you who may have been convinced by all the screaming that this proposal is a form of censorship, I propose the following test. I will agree with you if you can satisfy one of the following conditions. If you can…

1. Name me one image that is removed from Wikimedia’s image banks under this proposal.

2. Name one image of those ten million that I won’t be able to see exactly when I want to under this proposal.

3. Name one image of the ten million that will be hidden from my view unless I personally choose to hide it.

You can’t. You can’t because as much as this may feel like censorship to you, as much as you may want it to be censorship, as much as you want to convince us that, despite what the proposal clearly says, it doesn’t mean what it says, but means the opposite – it isn’t censorship. If it were – you’d be able to answer the questions above.

Wikimedia is a very open society – we all get to say what we want. That is good. But, at some point, reason has to at least be invited to the table, even if it’s given nothing to eat. Vote how you will on this proposal. But to be opposed to it because you feel it’s a form of censorship is a sadly wasted vote. 70.49.184.46

This might come to you as a surprise, but the roughly two hundred posts calling this censorship do not generally assume that you are the victim of it, which is largely why nobody is able to answer your questions. Images will not be removed from the image banks; they will just be tagged so that third-party filtering systems can make people (possibly not you) unable to access them. The system itself might make people (possibly not you) unable to see some images under the assumption that they are children or students of somebody who just doesn't think they should be seen. Like, say, the students of a Turkish teacher who doesn't like the idea of the Armenian genocide. Or a Chinese teacher who doesn't like Tiananmen Square. Or a Texan parent who doesn't like Charles Darwin. complainer
Your questions do not make sense. You might as well ask "name one fluffy kitten that will die if this proposal passes".
However, there is a risk to Commons, albeit not immediate. And that is that if this proposal passes, the opt-out version will be right on its tail. Following that, the scenario is that once the projects have "admitted" that certain images are "unsuitable", or at least "not needed", it makes little sense to keep them, since they are A. controversial and B. without educational value (and hence out of scope). Rich Farmbrough 03:09 20 August 2011 (GMT).
The claims of third-party filtering systems abusing this are silly. The "tags" this system will use already exist; they're called "categories", and you can read about them at Help:Categories. If a third-party system wanted to use our long-existing category system to filter images, then they could have done that many years ago, and they will be able to do that for many years to come, no matter what WMF decides about this tool. WhatamIdoing 00:17, 24 August 2011 (UTC)Reply
They could filter commons:Category:Nudes in art, but the scope of that is not "all artworks containing any nudity whatsoever", nor should it be. I don't think we have a category with that scope, because it isn't very useful as a finding aid, compared to a selection of works where the nudity is actually a significant aspect. If we implement a nudity filter, we would need to create that new grouping.--Trystan 00:45, 24 August 2011 (UTC)Reply
"If we implement a nudity filter, we would need to create that new grouping." Says who?
If we decide to implement a nudity filter, we might well decide that Cat:Nudes in art should be excluded entirely from it. Nobody is guaranteeing that a filter primarily intended to hide, e.g., Commons:Category:Full nudity or Commons:Category:Sexual acts, will necessarily hide marble statues or drawings. WhatamIdoing 17:27, 24 August 2011 (UTC)Reply
Dear 70.49.184.46, you are technically correct (the best kind of correct!). Fortunately, the ALA comes to our rescue by calling it a "censorship tool" instead. Not the clicky buttons, btw; this is about the categories you would need to create or curate. The clicky buttons were only ever icing on the cake anyway.
Once the tool exists, any (third) party can come along, write some of their own (more evil) clicky buttons, and use them to censor Wikipedia.
Of course, to many: "censorship", "censorship tool" is like "tomahto", "tomayto". But still; thanks for pointing out the difference! :-)
--Kim Bruning 19:41, 24 August 2011 (UTC)Reply

This is proving an excellent intelligence test, if nothing else


Hint: Watch everyone shrieking about censorship. Those are the people that have failed the test. Opt-in thing =/= censorship. 75.72.194.152 02:16, 21 August 2011 (UTC)Reply

Hint: Read all the arguments against the image filter that are not related to claims of censorship. censorship =/= one of the many different very real and valid reasons that speak against implementing an image filter. --87.78.45.196 02:18, 21 August 2011 (UTC)Reply
Images are an integral part of the articles and the Wikipedia project. They can be as important as the text. If someone proposed a text filter, they would be laughed out of cyberspace, and yet with images, the board can pull off this farce of a referendum (which it is not, but merely an affirmation of policy) and say it's not censorship at all, let alone a dangerous beginning. Try to at least listen to others and read their arguments. These arguments should be CLEARLY visible to voters before the vote (which they are not). --94.225.163.214 02:52, 21 August 2011 (UTC)Reply
Images are an integral part of the articles and the wikipedia project -- Exactly right, and we as a community should trust our own judgment in determining what needs to be included in articles in order to provide the best possible coverage of the subject matter. If an image is deemed to possess explanatory value that goes above and beyond the devices of prose, then that image should be included and it becomes an integral part of our coverage of the subject matter. If there are valid reasons that speak for an exclusion (such as in the case of spider photos in arachnophobia articles), then that image should not be included. Simple as that. If you don't like water, don't go swimming. If you don't actually seek any level of holistic education, don't use an encyclopedia. --87.78.45.196 04:14, 21 August 2011 (UTC)Reply

You are technically correct. Opt-in thing != censorship. However, the proposed categorization scheme used to make it work is *in itself* something that the ALA would classify as a "censorship tool". --Kim Bruning 19:43, 24 August 2011 (UTC) Hope you don't mind me using the programmers "not equals"! ;-)Reply

About time


It was about time that this proposal was put forward, and I thank those involved in making sure that it has progressed as far as it has. This is the very minimum in putting forth protections that are required both legally and ethically, and it is rather silly that people would oppose it. Ottava Rima (talk) 23:01, 19 August 2011 (UTC)Reply

I respect your opinion; please respect others' opinions and don't dismiss them as silly.
Concerning the legal aspect, some legal and perfectly acceptable images can be illegal in other countries where censorship is the law. I hope this function will only respond to the user's choice, not to the wish to please some state that wants to censor Wikipedia.
--Guillaume42 23:47, 19 August 2011 (UTC)Reply
To deny someone an option to opt out can be nothing but silly. You claim censorship because people want the ability to ignore you? That is ridiculous. You do not have the right to force other people to view material that is inappropriate for them. To claim that this is "censorship" is really inappropriate and incivil. Censorship would be blocking people who put the material there. Ottava Rima (talk) 00:57, 20 August 2011 (UTC)Reply
Not implementing this filter does not amount to 'denying' anyone the ability to simply look the other way (as you seem to insinuate). Are you claiming that as Wikipedia stands right now, it's denying you the ability to ignore something it contains because it doesn't offer a filter? We all already have that feature right now. Simply navigate to another page, or turn off your computer. Filtering and tagging content in this way is completely outside the scope of Wikipedia's concern. There are already countless 3rd-party options for denying oneself the privilege of seeing the world as it really is, if said person wants to limit themselves in that way. Pothed 03:15, 20 August 2011 (UTC)Reply
I wasn't claiming censorship on you or supporters of this referendum. I just wish for this to be put in place so that it respects every user's choice, without easing, even involuntarily, state censorship that would impose its views on its citizens. My last message may have been ambiguous, sorry.
Ignoring can be done even if something is visible. But the question is the ability to hide. It can be done by third parties. For instance, parental control in the browser can block keywords or URLs (images have their URLs) to protect from violent or sexually explicit content. It could also be done for everyone wanting to hide any category of image. Not implementing it on Wikipedia doesn't mean it'll be impossible.
--Guillaume42 10:22, 20 August 2011 (UTC)Reply
"anyone the ability to simply look the other way" - That could only be true if there already was a filter. There isn't. Ottava Rima (talk) 02:47, 21 August 2011 (UTC)Reply
Could you possibly stop calling people who oppose your point of view (which, incidentally, seem to be a vast majority of wikipedia editors) names, as well as, possibly, read their arguments about how this proposal dramatically eases third-party censorship? You being perhaps the most fervent defender of this "proposal", it would really help rationalize and civilize the debate. Thank you in advance, complainer
Actually, the majority of Wikipedians are neither for the pornography nor against a filter of it. Also, Wikipedia is not a right and it is a private entity, so there is no such thing as "censorship" here. Tossing the word around is inflammatory and incivil. Ottava Rima (talk) 02:47, 21 August 2011 (UTC)Reply
This is a non sequitur: censorship has nothing to do with anything being either a right, or a public entity. As for being inflammatory, there are, regretfully, no synonyms to be used, and not using the word because it harms your point wouldn't be fair; it would, in fact, amount to (self) censorship. Complainer
Not even close. Censorship, by definition, implies that you had a right to see something that was removed. You cannot be accused of "censorship" by removing inappropriate things. Otherwise, removing vandalism is censorship. Your arguments are absurd and it is sad that you do not realize it. Ottava Rima (talk) 18:59, 21 August 2011 (UTC)Reply
You know, you'd come out a lot more convincing if you just argued your point without vaguely insulting the other's; if I were to follow in the incivility, I could point at your history on the English Wikipedia, which I am not going to do. As for the definition, you might want to consult http://www.merriam-webster.com/dictionary/censorship to see that the term "right" or any of its synonyms is not there. You are also straying, really, off the point, by mentioning "inappropriate things". This discussion is about the implementation of a technical solution to give people the opportunity of not seeing images. By arguing the right of removing inappropriate content, you are actually saying the same thing I, and most (yes, most) other editors are, that is that the solution opens the floodgate of removing content for third parties. The only difference is that, for some reason of moralistic nature, you see it as a good thing. By the way, removing vandalism is not censorship for the plain and simple reason that everybody is free to see it by looking at previous revisions. Complainer
Convince? You and others came out with guns blazing, attacking people, throwing around insults, and making some of the most problematic comments. You made it clear you aren't here to be "convinced". Furthermore, your claims are really off above. The definition is clear: to claim censorship means that it has to be legitimate and necessary. It is not censorship to not look at something, otherwise the Earth is censoring the moon half the day! If you stop using terms in absurd ways then you would have nothing to say, which is the point. Ottava Rima (talk) 20:06, 21 August 2011 (UTC)Reply
Convince? You and others came out with guns blazing, attacking people, throwing around insults, and making some of the most problematic comments. You made it clear you aren't here to be "convinced".
You're the one who initiated a thread by declaring that any opposition to the idea was "rather silly." Since then, your replies have grown increasingly uncivil and confrontational. —David Levy 20:42, 21 August 2011 (UTC)Reply
Um, the only way you could have a point is if my post was the first post on this talk page. Everyone can see that there were hundreds of comments here before I posted. And yes, trying to say people don't deserve to have the ability to turn off images is silly at its best. It is also xenophobic, racist, or really inappropriate at its worst. And don't toss around "incivil" to me, when you only do so because I pointed out the incivility of your post. You wish to deny people a simple choice because they are not from the same culture or mindset of you, which isn't acceptable behavior. You have to learn to get along with people from other nations if you want to continue editing here. Ottava Rima (talk) 21:16, 21 August 2011 (UTC)Reply
You say that others "aren't here to be convinced," but you initiated this thread by declaring that you had your mind made up and anyone who disagreed with you was "silly."
That's among your milder insults, which also include the terms "bully," "tyranny," "selfish," "fringe," "xenophobic" and "racist" (among others).
Again, I wish to prevent a scenario in which readers are discriminated against because of cultural/personal differences in their beliefs. I don't know what gives you the idea that I don't "get along with people from other nations."
And are you under the impression that my country (the United States) isn't among those in which many people are offended by the content that you've continually cited? —David Levy 00:11, 22 August 2011 (UTC)Reply
You label things as insults that aren't. You made a racist comment. It is that simple. You tried to claim that other cultures were inferior to your special knowledge and that you know better than them. You tried to claim that the minority that agrees with you is superior to the Chinese, Indian, Muslims, etc, all cultures that do not believe that porn is appropriate. If you don't like this being pointed out, then don't say such things. And in the US, most people are offended. We have laws preventing those under 18 from seeing porn for a reason, and you wish to say that we shouldn't allow these children to obey the law, which is irresponsible. Ottava Rima (talk) 00:33, 22 August 2011 (UTC)Reply
Please directly quote the "racist comment" in which I stated the above.
Does it surprise you to learn that I'm morally opposed to pornography? I am. It's because I don't wish to impose my beliefs on others that I take issue with the idea of formally labeling content "objectionable" in accordance with my (or anyone else's) personal opinion. —David Levy 01:00, 22 August 2011 (UTC)Reply
Here is just one of many racist comments from you: "I believe that it would discriminate against members of various cultures by affirming some beliefs regarding what's "potentially objectionable" and denying others." Basically, what you are saying is that because they are from another culture than yours, you get to decide that they have no right to find anything objectionable that you don't. That any of their "objections" must be false, wrong, or undeserved. Your own statement right there is racist. You try to push some sort of ethnic superiority over a large portion of the world. Ottava Rima (talk) 01:23, 22 August 2011 (UTC)Reply
As I've noted repeatedly, you're interpreting those comments to mean the exact opposite of what's intended.
I include some of my beliefs among those that would be affirmed at the expense of others'. I'm saying that I don't want us to deem my (or anyone else's) value judgements correct, thereby discriminating against other cultures by conveying that their contrary objections (or acceptances) are "false, wrong, or undeserved." —David Levy 02:06, 22 August 2011 (UTC)Reply
Sorry, but pretending that I interpret them to be opposite of what they would say would mean that you are arguing for the filter. It is just that simple. Making a statement and pretending that valid responses to that are wrong because "I say so" isn't an appropriate argument. If you think you are being misunderstood, then you should fix what you say. If you are truly claiming that we aren't to value anyone's judgments as "correct", then you would think that the filter must be put in place. Otherwise, you are saying that anyone but you has a bad value judgment. It is that simple. You are discriminating against everyone who isn't you unless you give them the right to filter out what they do not want to see. Ottava Rima (talk) 16:36, 22 August 2011 (UTC)Reply
Again, I support the introduction of a system providing the option to block all images en masse and select individual images for display on a case-by-case basis. This would accommodate all readers' beliefs regarding what is and isn't "objectionable" (instead of attempting to define this ourselves, thereby discriminating against any culture or individual whose standards fall outside our determinations). —David Levy 18:37, 22 August 2011 (UTC)Reply
So, let me understand this, and please tell me a clear "yes" or "no" so I can stop wasting my time with you: you are not trying to convince anybody (I've got to recognize you are not doing so, but I assumed you were at least trying)? Because, if not, there is very little else you would be doing, except for offending. Incidentally, you really, really should follow my dictionary link and read what en:censorship is: the word does not belong to you, and you have no title to attach to it implications that are simply not in its definition. Just to take up one absurd example for the last time, regardless of what the Earth is doing, the moon is a poor subject of censorship since, when you are not seeing it directly, you can still read about it, and see pictures of it, for example, on wikipedia. You can, as long, of course, as somebody doesn't assume that crescents offend Islamic sensitivities. I'm sort of tickled to know which insults I have thrown around, but not quite enough to keep reading yours: I think being a sorry racist egotist who makes no sense will do for now. Complainer
"the moon is a poor subject of censorhip since, when you are not seeing it directly, you can still read about it" Without the pornographic images you would still be able to read about it. You defeated your own arguments. Ottava Rima (talk) 23:41, 21 August 2011 (UTC)Reply
The following words, conveniently omitted from the above quotation, are "and see pictures of it." —David Levy 00:11, 22 August 2011 (UTC)Reply
Ever wonder why Google or Yahoo have safe search filters? Ottava Rima (talk) 00:57, 20 August 2011 (UTC)Reply
Because people try to game the search engines to get their pr0n visible to as many people (customers) as possible. Nobody is gaming [Special:Search] to promote porn (I hope!) - and it does not return images anyway so the parallel fails. Rich Farmbrough 03:29 20 August 2011 (GMT).
Actually, wasn't it not too long ago that Wikipedia was in the news for having excessive amounts of porn (including CP in the form of art)? The safe search filter isn't designed for the purpose described so please go bat your shit elsewhere. Promethean 06:57, 20 August 2011 (UTC)Reply
Larry Sanger, co-founder of Wikipedia who left Wikipedia after numerous conflicts with other volunteers, & who has been critical of Wikipedia, made that allegation. Fox News picked it up & ran with it -- as if it were President Obama's fault. However, no one has made a study to determine whether he was correct -- or just making unfounded accusations based on spite. (Jimmy Wales did make a unilateral & ill-informed decision to delete some images, but Wales has admitted that he believes Fox News broadcasts the truth -- sad but true.) -- Llywrch 16:37, 20 August 2011 (UTC)Reply
Llywrch, be fair. Agree or disagree with what Larry Sanger charged (and again, I'm not supporting or endorsing anything here), it's clear if one looks at the origins of his action and his attitudes, that he's completely sincere. Also, it's not a case of "make a study" - it's a matter of legal interpretation of obscenity law for one charge, and views on sexual material for another charge. Ironically, neither point, by the way, is addressed by the proposal here. -- Seth Finkelstein 19:07, 20 August 2011 (UTC)Reply
I am being entirely fair. Sanger was the first major example of WikiBurnout, & like many once important contributors has soured on the project & like anyone talking about a former lover with whom one has had a bitter break-up he habitually views it in a negative light. As for "make a study", I should have written something along the lines of "investigated"; but that was the best phrase I could think of at the moment. AFAIK, no one has objectively investigated his claims of "Commons is full of porn". It's an accusation along the lines of the SCO Group's claim that the Linux kernel was full of plagiarized code: no one honestly knew for sure, & due to lax policies it -- both plagiarized code in the Linux kernel & porn in Commons -- is entirely possible. But a careful audit showed this was not the case for the Linux kernel code base; no one has yet audited Commons. (BTW, I assumed by "be fair", you referred to my comments about Sanger; re-reading my post above, I see I was far more harsh on Wales. Or is any criticism of the "God-King" a different matter in your eyes?) -- Llywrch 22:15, 20 August 2011 (UTC)Reply
And inversely, many wiki-editors act like besotted lovers, blind to the faults of the paramour and hostile to those who would dare point out flaws. That being said, it's clear from the immediate pre-FBI-report messages Sanger wrote that he believes what he's saying about the various material (again, this is just a statement of his beliefs, not agreeing or disagreeing with him). Regarding his charge about violating US law, that's a complicated matter which requires legal expertise to analyze. Regarding "full of porn", that comes down to porn/erotica/educational-value debates, where I wish you good luck to "objectively invest[igate]" (i.e. not the same thing as where you personally would draw the lines). By the way, I've defended the barbarian-king on some occasions where I felt criticism of him was unjustified, though sadly I never seem to get credit for doing that. -- Seth Finkelstein 23:45, 20 August 2011 (UTC)Reply
Enforcing? There is no forced opt-in. Even if there was, you can always opt-out. You wish to deny people a choice, which in itself is the only true censorship. The mere fact that you have people putting up so many comments here while logged out shows that the arguments against the opt-in are completely illegitimate. Otherwise, they wouldn't need to hide their identity or use multiple IPs to make it seem like they have more support than actually exists. Ottava Rima (talk) 02:47, 21 August 2011 (UTC)Reply
Enforcing? There is no forced opt-in.
Straw man. That isn't the context in which the word "enforcing" was used.
The question pertained to the enforcement of a cultural viewpoint when determining which images to deem "graphic," not a claim that anyone will be forced to enable the filters.
Additionally, while no one will be required to block any images from view, their options of what type(s) of images to filter will be determined based on non-neutral, culturally biased standards of what constitutes "objectionable" material (unless literally everything is assigned a filter category, which is unrealistic). Each and every belief that a particular type of content is "objectionable" effectively will be deemed valid (if included) or invalid (if omitted). That's non-neutral and discriminatory. —David Levy 04:05, 21 August 2011 (UTC)Reply
Labeling my post as a straw man when it clearly was not is incivil. Your statement is really inappropriate, especially when the claims you said don't exist are directly right there for everyone to see. Please do not make up things like that. "determined based on non-neutral, culturally biased standards of what constitutes "objectionable" material " This is really offensive. Are you saying that people don't have a right to block things they don't want to see because the reasons why they want to block things is a cultural difference? Basically, you are saying "they have a different culture from me, therefore their opinion doesn't matter". That is the very definition of racism. Ottava Rima (talk) 13:37, 21 August 2011 (UTC)Reply
The categories shown are so vague as to be culturally dependent. There is no way to get a neutral standard for the category "Sexually Explicit". Evil saltine 14:08, 21 August 2011 (UTC)Reply
Why would we want a "neutral standard"? Everyone gets to see what is filtered and if they find something they want to see they don't have to filter it. It is that simple. Your argument makes no sense. All filtering is subjective and it isn't forced. Ottava Rima (talk) 16:13, 21 August 2011 (UTC)Reply
It goes against the idea that Wikipedia has a neutral viewpoint when we start making subjective judgments about what is or is not offensive. Whether people have the choice to view or not view images we deem "controversial" is irrelevant to that. Evil saltine 21:16, 21 August 2011 (UTC)Reply
You might be interested in knowing that WMF is far more than the English Wikipedia, and not all projects subscribe to the "neutral point of view" that en.wiki does. For examples, Commons has an official policy disclaiming it: Commons is not Wikipedia, and files uploaded here do not necessarily need to comply with the Neutral point of view and No original research requirements imposed by many of the Wikipedia sites. You will find the list of which projects choose to subscribe to that policy here on Meta at Neutral point of view. WhatamIdoing 00:40, 24 August 2011 (UTC)Reply
1. It was a straw man, as it countered an argument that the person to whom you replied didn't make (by attacking a position superficially similar to his/her real one). You applied the "enforcing" quotation to a context materially different from the one in which it was written.
Again, the claim was that the determination of which images to categorize as "graphic" would enforce a cultural viewpoint, not that anyone would be forced to use the resultant filter.
2. Remarkably, you've managed to misinterpret my message to mean exactly the opposite of my actual sentiment.
One of my main concerns is that the categorization system will include images widely regarded as "objectionable" only in certain cultures, effectively conveying an attitude that "they have a different culture from me, and therefore their opinion doesn't matter."
I'm not faulting anyone for his/her beliefs. I'm saying that this feature will have that effect (by accommodating some and not others). —David Levy 17:25, 21 August 2011 (UTC)Reply
If you want to claim it is a straw man then you do not know what the definition of a straw man is. I responded to 100% of what he said and then you started rambling about whatever. Bullying people with incivility isn't appropriate and forcing people to look at things against their will is tyranny. You do not own Wikipedia so stop acting like you do. Ottava Rima (talk) 18:46, 21 August 2011 (UTC)Reply
That IP was me, and I was saying what David Levy is saying. Sorry for the misunderstanding.
No one is "forcing" anyone to read Wikipedia, and some of the text is rather disturbing as well. Are we "tyrants" for not dumbing down the reality for users? If someone comes across something inadvertently that disturbs them, just like with other media, the attitude that should be applied is "that's tough, and what were you doing reading about ten-year-old torturer-murderers if you didn't want to know that?" No one is forced to know reality in all of its colours, and when they do, it should be seen as perhaps an unfortunate, but enlightening experience.
Having said that, I am not opposed to this filter on principle, except that it is not Wikimedia's place to make decisions of "decency". The filter will inevitably be too conservative for some cultures, too liberal for other cultures and just plain off-the-mark for others still. Too conservative means that the filter approaches being censorship, and offends people by telling them that bikinis are indecent, for example (and telling them literally: there will be discussions on the topic and people will say this); too liberal means that the filter is offensive in that it tells people that they shouldn't reasonably be offended by a picture, or is just plain ineffective; and off-the-mark, the most probable outcome, is a combination of the two. Do you see how this is a serious encroachment on NPOV? — Internoob (Wikt. | Talk | Cont.) 19:58, 21 August 2011 (UTC)Reply
1. No, you quoted him/her out of context and countered a nonexistent argument. He/she used the word "enforcing," but not in reference to readers being required to filter images.
2. As noted elsewhere on this page, I have no desire to force anyone to view any image, and I support the introduction of an optional image filter, provided that it covers all images (and enables their manual display on a case-by-case basis).
3. I find your allegations of bullying and incivility mildly amusing, given your propensity to assign insulting adjectives to anyone with whom you disagree.
4. You do realize that this plan applies to many projects other than Wikipedia, yes? —David Levy 20:42, 21 August 2011 (UTC)Reply
1. Nope. No matter how you try to twist it, it won't become true. He said "enforcing a cultural POV". There is no such thing if there is no forcing people to adopt the feature. The only way you can "enforce" something is to -force- it upon people. That means no choice. www.dictionary.com 2. I don't believe that is true or you wouldn't bad-mouth the majority of people in the world who do not wish to be exposed to such images or are legally not allowed to be. 3. You can claim I assign "insulting" adjectives all you want, but it doesn't make it true. I describe actions. You describe individuals. Then you try to deny a vast majority of the world a right to a choice. 4. Why would that even matter? People who can't legally view porn or would get in trouble for doing so don't need to be only on Wikipedia for that to be a problem. Ottava Rima (talk) 21:21, 21 August 2011 (UTC)Reply
Let's break this down.
  1. If we have a small number of cats (as the proposal suggests) we need to include some potentially offensive categories and exclude others
  2. This choice of what categories to include is not culturally neutral.
That's why this proposal cannot be culturally neutral as it stands.
Further problems exist, if we extend the system to the large number of categories required to be culturally neutral.
  1. We enable oppressive regimes
  2. We create a massive categorisation task
  3. We are giving undertakings that will fail
  4. We are creating a PR nightmare
  5. We open the door to proposals to "opt-out" and to filter text - both have already been made while the survey on opt-out for images is running
And that's just for starters. Rich Farmbrough 23:06 21 August 2011 (GMT).
By being available to children while having pornography perfectly visible, we created a PR nightmare. Putting up a filter has never created any problems. The WMF seems quite okay with everything and thinks your potential warnings aren't even close to reality. Ottava Rima (talk) 23:42, 21 August 2011 (UTC)Reply
1. Yet again, you've quoted him/her out of context. He/she referred to "enforcing a cultural POV in that someone will have to make a call as to whether something is graphic or not" (emphasis mine), not in that readers will be forced to filter images deemed "graphic." The enforced cultural POV is the formal declaration that category x is "potentially objectionable" and should contain images y and z. You keep noting that readers will choose what categories to filter or not filter, but they won't choose what options are presented to them.
2. I'm bad-mouthing no one. My personal opinions of what types of content people should/shouldn't deem "objectionable" don't even enter into the equation. My argument is based upon a desire to discriminate against no one, which is why I support a blanket image filter option (automatically covering literally every image subject to which anyone objects).
3. I've listed some of your insults above. What insulting terms have I used?
4. You repeatedly referred strictly to "Wikipedia" and "the encyclopedia," so I wanted to make sure that you were fully aware of the plan's scope. That's all. —David Levy 00:11, 22 August 2011 (UTC)Reply
Your attempt at an argument in number 1, while failing to realize that in the English language your critique doesn't make sense, is a problem. You don't suggest any ability to understand that. I pity you. It is also silly to think that the users won't have any choice in the options, when it is clear that there can be additional categories at any time. It was pointed out many times that the images show that the Pokemon picture can be blocked. Why you ignored that is beyond any explanation. You ignore a lot of things, make claims that are patently false, and try to justify the unjustifiable. Ottava Rima (talk) 00:36, 22 August 2011 (UTC)Reply
1. Thanks for pitying me. That's one of the more polite things that you've written.
2. Indeed, categories can be added at any time (and undoubtedly will be with some degree of regularity). But are you suggesting that every "potentially objectionable" type of image will be included (thereby favoring no culture over another)? How do you address the inherent subjectivity in determining what qualifies (and doesn't qualify) as "potentially objectionable"? —David Levy 01:00, 22 August 2011 (UTC)Reply
Filters are supposed to be subjective because the objective standard of allowing all porn no matter what to fill the pages isn't working, and people need the right to chose what best fits them. Anything is better than nothing. Ottava Rima (talk) 01:24, 22 August 2011 (UTC)Reply
Do you oppose the idea of providing an optional filter for all image files (thereby enabling readers to display their desired images on a per-request basis)? —David Levy 02:06, 22 August 2011 (UTC)Reply
Your question makes it obvious that you never bothered to look at the other page, which removes any legitimate reason for you to be posting. It is clear, in the example of the Pokemon image, that a filter would allow you to minimize any image. Ottava Rima (talk) 16:37, 22 August 2011 (UTC)Reply
That isn't what I'm describing. I'm referring to a system providing the option to automatically block all images en masse, enabling the reader to make case-by-case determinations of which individual images to view. —David Levy 18:37, 22 August 2011 (UTC)Reply
And that IP was me. I genuinely forgot to log in. — Internoob (Wikt. | Talk | Cont.) 04:18, 21 August 2011 (UTC)Reply


This is a partial solution


Unlike most other Wikipedians, I am actually in favor of self-censorship. It is a fact that some people are bothered by certain content and it is a good feature to allow them to filter WP content.

However, this proposal is for image filtering only, and I do not see that there is any intent to extend voluntary self-censorship to text filtering. Personally, I would be happy to be able to convert "F**K" words to this nicer form with the asterisks, or even a more fully-asterisked version "***" (a user-chosen string).

Why is one form of filtering considered worthy but not the other?

If there are sufficient arguments in favor of image filtering, I think most should also apply to text filtering. I would like to see both implemented, for consistency and (as the mathematicians call it) elegance. David spector 12:12, 21 August 2011 (UTC)Reply

Couldn't you try to unlearn your learned aversion to the word instead? It's simple, quick and doesn't open up a can of worms. --87.79.214.168 12:27, 21 August 2011 (UTC)Reply
So he must change because you demand that everyone be as libertine in word choice as you? That is really unfair and inappropriate. Ottava Rima (talk) 13:40, 21 August 2011 (UTC)Reply
That's a good point. Why don't we have a filter, click a button and all "Obscene language" is filtered out? I'm sure we can all come to an agreement about which words are too naughty. Evil saltine 21:28, 21 August 2011 (UTC)Reply
Most chat room systems allow you to filter out cuss words. I would be 100% for that. Ottava Rima (talk) 21:51, 21 August 2011 (UTC)Reply
Well no surprise there. Presumably we would also allow filters for blasphemous text that supports the so-called theory of evolution? Or would we remain strictly on words? I can see the configuration screen now:
Please select which of the following words you find offensive
  1. *********
  2. *********
  3. *********
  4. *********
  5. *********
  6. *********
Please note, all words have been starred out to avoid causing offence.
Rich Farmbrough 22:55 21 August 2011 (GMT).
Are you honestly trying to say someone shouldn't be allowed to not see hard core pornography because someone might be against evolution? That is really weird. Ottava Rima (talk) 23:45, 21 August 2011 (UTC)Reply
Ottava Rima, unless you define "pornography", your use of the term is arbitrary and meaningless. Does e.g. file:Human vulva with visible vaginal opening.jpg qualify as "pornography"?
Also, someone shouldn't be allowed to not see -- Who is forcing anybody to use Wikipedia? --213.196.218.6 01:20, 22 August 2011 (UTC)Reply
Why don't we have a filter, click a button and all "Obscene language" is filtered out? Honestly, it's because there are definitely no English words that are anywhere near as upsetting as the most upsetting images. Just, empirically, people don't get as upset by seeing a dirty word. There are "shock images", but no "shock words" in English anymore, at least none as shocking as shock images. If there were words that could spontaneously trigger instant nausea in large groups, we probably should have a 'click to view' filter around those words too-- in my experience, no such words exist, but I only speak English and I've only been me. -AlecMeta 18:02, 22 August 2011 (UTC)Reply
It's not so simple: [14] --195.14.220.250 18:09, 22 August 2011 (UTC)Reply
There's one that is extremely offensive to most African-Americans in the U.S. Ileanadu 09:15, 30 August 2011 (UTC)Reply
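As an aside for readers unfamiliar with how the chat-room word filters mentioned above actually work, here is a minimal, hypothetical sketch (the function names and word list are illustrative assumptions, not any real system's API). The second function shows why naive filtering is harder than it sounds: matching raw substrings masks innocent words that merely contain a blocked word.

```javascript
// Hypothetical word-filter sketch. Masks each blocked word with
// same-length asterisks, matching whole words case-insensitively.
function maskWords(text, blocklist) {
  let result = text;
  for (const word of blocklist) {
    const re = new RegExp(`\\b${word}\\b`, "gi"); // whole-word match only
    result = result.replace(re, (m) => "*".repeat(m.length));
  }
  return result;
}

// Naive variant without word boundaries, shown to illustrate the
// classic false-positive problem: clean words containing a blocked
// word get masked too.
function maskSubstrings(text, blocklist) {
  let result = text;
  for (const word of blocklist) {
    const re = new RegExp(word, "gi");
    result = result.replace(re, (m) => "*".repeat(m.length));
  }
  return result;
}

console.log(maskWords("What the hell", ["hell"]));      // "What the ****"
console.log(maskWords("Say hello", ["hell"]));          // "Say hello"
console.log(maskSubstrings("Say hello", ["hell"]));     // "Say ****o"
```

Even the whole-word version still requires someone to maintain the word list, which is exactly the culturally non-neutral editorial decision being debated in this thread.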


Fundamentally disagree with the report's findings


I do not support the idea that users need to be given tools to protect themselves from certain content. Exposure to information that is shocking, disturbing, and offensive is healthy. It prevents people from living in a bubble where certain parts of the world they don't like don't exist. It does not necessarily persuade people that the content depicted is or should be normative; in fact, it may do the opposite. But pretending that the world is not occasionally violent, sexual, or full of a diversity of religious beliefs is certainly not healthy. I disagree that "respecting the audience" requires the Foundation to facilitate such self-delusion, any more than it requires the hiding or deletion of facts which contradict a given set of mistaken beliefs about the world.

A "delay viewing" filter is not effectual to prevent children from seeing content some people deem harmful to them, and I do not think it should be attempted. Personally, I do not support the idea that content that is appropriate and potentially educational for adults is harmful to children. Shielding children from certain truths about the world only stunts their intellectual and emotional growth, and creates distrust that adults are sharing the whole truth about any given subject.

I do not support limiting the scope of Commons to media which are deemed "educational". Any distinct, legal, freely-licensed media supplied should be collected there. -- Beland 17:21, 20 August 2011 (UTC)Reply

+1 to "Any distinct, legal, freely-licensed media supplied should be collected there."
One of the reasons I'd accept a filter is that I think it will help us move towards a culture of 'wider scope', where all information is totally welcome. There is no such thing as 'non-educational information'.
This filter won't "protect children" and it's clearly not meant to. This is exclusively meant to help non-tech-saavy adults make their own decisions about what they want to see. It's really only good for things like "I'm reading about porn in public so I don't want to show any on my screen" or "I have a horrible fear of spiders and get really upset having to look at them when I didn't expect to". --AlecMeta 18:21, 20 August 2011 (UTC)Reply
Surely pictures of spiders won't be added to the "restricted pictures" category, will they? This is where the whole thing gets kind of ridiculous, when one group of people have to decide what others may or may not find objectionable. 86.179.1.163 20:22, 20 August 2011 (UTC)Reply
Sorry, no, it's probably not literally true. I used the "images of spiders" as an example since it's upsetting but totally apolitical and culturally neutral. I didn't want to single out any single category of images, because we are too culturally diverse to ever rationally reach agreement on what "should" be upsetting to our readers. So if we have only a few filter categories, we'll have to use some 'objectively fair' method to pick the categories, and the presence of spiders won't make the list, I assure you. There are things that stress people out far more than spiders, and I shouldn't let that example trivialize the issue.
Now, if we made the 'perfect' shutterer, then yes, literally, people with serious debilitating arachnophobia could come here to collaborate, on-wiki, to build a spider-image shutterer for themselves, if they really really wanted to. But developing that level of customization may be prohibitively expensive. --AlecMeta 21:17, 20 August 2011 (UTC)Reply
If those are the odd sorts of instances that this filter is intending to address, then I completely agree that it's a total waste of resources. Those issues should simply not be the content provider's problem; the end users should take steps through 3rd-party applications to achieve their goals. Or hey... you could just turn off images in your browser settings (this is even easier to figure out for someone who isn't tech-savvy than this proposed filter would be). Both of the situations you mentioned are already solved. There is simply NO NEED for the content provider to concern themselves with this task. It's dangerous territory for a site like this to get into, especially when you consider the problem is already solved on the web-browser side of things. More sophisticated filters can be developed at the end-user level. They have no place on the content provider (server) level. Pothed 20:28, 20 August 2011 (UTC)Reply
I agree so deeply that this is the way the web should have worked. Everyone using Firefox, every browser its own platform, every user their own sysadmin. But we never imagined being THIS successful. We're very pervasive-- we're more like TV, now, than a 2000s era website. We're getting readers who are totally not up to doing anything outside of the browser frame, and we're getting LOTS of them. And they just want to know how to stop accidentally flashing inappropriate images at times and places they didn't intend. It's a very small request from an ever-growing number of people, and we can accommodate the request pretty easily and cheaply, so long as we're careful about it. --76.20.61.69 21:54, 20 August 2011 (UTC)Reply
So basically, since we're like TV, it's ok that we introduce bleeping? I like the analogy though, because it powerfully illustrates the folly of the image filter. Consider that television, like much of popular culture in general, suffers greatly because people who don't like certain types of content have so much traction that most parts of the end product are an utter mess, dumbed down to the least common denominator. An encyclopedic project is not compatible with this kind of catering to the masses. People consult Wikipedia to learn, and learn they shall. Either they'll have to learn to tolerate content they don't like for whatever reason, or they'll have to learn how to block those images on their client end themselves (e.g. using the userscript I mentioned and linked to further above). At any rate, narrow-mindedness, laziness and ineptitude shouldn't be rewarded.
As to the spider example. I've brought that up myself, but I found that Greasemonkey userscript and it blocks all images on all Wikimedia projects as long as I keep Greasemonkey activated in Firefox (the script replaces each image with a clickable link to reveal the image with a single click if the user so desires). Since activating/disabling Greasemonkey also takes just one single click, I can "safely" browse Wikipedia without any need for a MediaWiki-side image filter. It is that simple. People who don't like something on TV should mute the set or look away or change the channel. People who don't like certain images on Wikipedia have an easy-to-use and failsafe method readymade and freely available to them via third-party software. I'm emphasizing "failsafe" since the efficiency of the existing categories or any future system of tags or categories for purposes of batch-filtering certain types of images is highly questionable. --87.78.45.196 00:11, 21 August 2011 (UTC)Reply
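For readers curious what such a client-side solution looks like, here is a minimal sketch of a Greasemonkey-style "click to reveal" image hider. This is NOT the actual userscript linked above (which is not reproduced here); the `@match` pattern, the label format, and the `revealLabel` helper are all illustrative assumptions.

```javascript
// ==UserScript==
// @name   Hide images behind click-to-reveal links (illustrative sketch)
// @match  https://*.wikipedia.org/*
// ==/UserScript==

// Build a placeholder label from an image URL,
// e.g. ".../commons/4/4a/Spider.jpg" -> "[show Spider.jpg]".
function revealLabel(src) {
  var name = decodeURIComponent((src.split("/").pop() || "image").split("?")[0]);
  return "[show " + name + "]";
}

// In a browser/userscript context, swap every image for a link that
// restores the original image on a single click.
if (typeof document !== "undefined") {
  document.querySelectorAll("img").forEach(function (img) {
    var link = document.createElement("a");
    link.href = "#";
    link.textContent = revealLabel(img.src);
    link.addEventListener("click", function (e) {
      e.preventDefault();
      link.replaceWith(img); // put the original image back
    });
    img.replaceWith(link);
  });
}
```

Because everything runs in the reader's own browser, no server-side tagging or categorization is needed; the trade-off is that the reader must install a userscript manager (such as Greasemonkey) themselves.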
"I do not support the idea that users need to be given tools to protect themselves from certain content." That is a really, really scary statement and I am baffled that you are posting such. Ottava Rima (talk) 02:48, 21 August 2011 (UTC)Reply
"scary"? No offense, but you're not making a strong case by appealing to emotions instead of explaining in plain terms what exactly you find so problematic with that statement which I for one happen to agree with. --87.78.45.196 03:20, 21 August 2011 (UTC)Reply
"you're not making a strong case" Hiding under an IP and using multiple IPs to try and make it seem like more people are opposed to this than actually are is what isn't making a strong case. Quite the opposite. Ottava Rima (talk) 13:34, 21 August 2011 (UTC)Reply
Appeal to emotions, ad hominem, assumption of bad faith; what's next up your sleeve, you master of logic and reasoning? --87.79.214.168 20:43, 21 August 2011 (UTC)Reply
My dear, the only people the description above applies to are the "it's censorship!" group that resorts to using IPs to make it seem like they have more people, while being completely nasty when referring to the WMF and to the majority of people in the world who don't share their view. Ottava Rima (talk) 21:22, 21 August 2011 (UTC)Reply
My dear banned Wikipedian, if you actually skimmed over this talk page, you would inevitably see that there are numerous valid objections to implementing an image filter which have nothing to do with claims of "censorship". So the answer to my question is: Strawman. That was the next fallacy up your sleeve. --87.79.214.168 21:26, 21 August 2011 (UTC)Reply
Just because you claim there are valid suggestions does not make it so. Log in if you truly believe in what you are saying. If you can look up to see if I am blocked, then you are an experienced user and have an account. Your post only verifies that the opposition to this is acting inappropriately. Ottava Rima (talk) 21:45, 21 August 2011 (UTC)Reply
Mate, like the Wikipedia community, I'm done talking to you. --87.79.214.168 21:52, 21 August 2011 (UTC)Reply
The WMF is really going downhill. Recently this really stupid poll, and now this. And the worst thing is that people get paid to deliver this kind of crap. Money that was donated to spread free knowledge!! The foundation should put the infrastructure in place, monitor the legality of things, and let the communities do their thing. Btw: spiders should be tagged as offensive, because some people think they are. One should not make these a-priori decisions, that is not neutral. The only neutral way is letting those who are offended tag the material that they feel is offensive. People who are not offended by that same material should not make those decisions for the offendable people and try to guess what they might find objectionable. Zanaq 17:57, 22 August 2011 (UTC)Reply

Commons:Project scope: "Wikimedia Commons is a media file repository making available public domain and freely-licensed educational media content (images, sound and video clips) to all." Emphasis mine and this is policy. – Adrignola talk 03:30, 21 August 2011 (UTC)Reply

All images that contribute to the understanding of a subject are educational. Educational does not mean suitable for textbooks for young schoolchildren. Wikipedia makes the assumption that people can learn to use it carefully. Anyone who does not initially realize that articles on subjects that they find disturbing will have content that they find disturbing will very soon learn. And is anyone really unaware of that to start with? DGG 03:43, 21 August 2011 (UTC)Reply
You guys are agreeing that the filter is a bad idea. Just stating since it wasn't obvious to me at first glance. To me, the filter is basically Wikimedia giving up on the idea that the community is able to (or even supposed to) determine what needs to be included in articles in order to provide the best possible coverage of the subject matter. It's the project giving up on the idea of community consensus. It's the project leadership telling us that we're too dumb to write an encyclopedia, to make a product by which we as a community stand and which we deliver as is, everything in its right place to the best of our judgment. --87.78.45.196 04:00, 21 August 2011 (UTC)Reply
"all images that contribute to the understanding of a subject are educational" That is like saying all laws are the basis of sound governments. If Hitler was able to use schools to help justify the killing of Jews then not everything can be deemed "educational" simply because you teach another. We don't tolerate propaganda or hate, so why would you put forth a statement that you know is utterly too broad and inappropriate? A mere existence of something does not make it appropriate or educational. Ottava Rima (talk) 13:34, 21 August 2011 (UTC)Reply
"Exposure to information that is shocking, disturbing, and offensive is healthy." - Trying to force your subjective view upon others. 68.126.60.76 13:16, 21 August 2011 (UTC)Reply
Being able to use Wikipedia at work without getting fired is healthy. Being forced to be shocked, disturbed, etc, is not. That is called tyranny. Ottava Rima (talk) 13:35, 21 August 2011 (UTC)Reply
If a person has a reason to use Wikipedia at work, why should he be fired because of what it returns? If we have an image hiding feature, maybe he becomes "culpable" for not using it - and who will actually be thinking about that if they have a work related reason to access the site? You might get more people being fired, not less. Better just to stick with our own uncensored perspective. Wnt 17:08, 21 August 2011 (UTC)Reply
Seriously? Are you joking? You are effectively saying that people shouldn't have an ability to use 99% of Wikipedia if they are not supposed to see 1% of it. That is one of the most insane things that someone can promote and is exactly opposite of the mission of Wikipedia to be accessible to everyone. Ottava Rima (talk) 18:56, 21 August 2011 (UTC)Reply
Wikipedia should be accessible to everyone, of course, and we should lobby for that. However, what you seem to prefer is to change Wikipedia in order to satisfy demands of people who would otherwise block it. It is better to be blocked in China than to accept the tiniest bit of control by the Chinese government over Wikipedia's content. If we are uncensored and people block us, it is their loss. If we are censored, everybody loses. Kusma 19:09, 21 August 2011 (UTC)Reply
If you want Wikipedia to be accessible to everyone, the only response is to support this. Anything short of that is directly taking away people's ability to read this encyclopedia. It is NOT better to be blocked because YOU feel the need to force a few graphic pictures on others. That is absolutely inappropriate. Ottava Rima (talk) 20:08, 21 August 2011 (UTC)Reply
I want a complete Wikipedia to be accessible to as many people as possible. And nothing the WMF does gives or takes away people's ability to read this encyclopedia: that is done by the censors that sit in schools, libraries, or governments. Why do you want to collaborate with them and make their work easier? Kusma 20:39, 21 August 2011 (UTC)Reply
Our goal is to make free knowledge available. If some third party tries to block the whole Wikipedia, that's their problem: we've made it available, so our job is well done. Nobody is forced to look at the article about the penis, but those that do look should not be surprised that there is a picture of a penis. Muslims are not forced to look at the article about Muhammad, but they should not be surprised that there are historically relevant pictures (probably created by Muslims!) there. We should not waste our time with people's sensibilities, there are way too many of those, and focus on the gathering (so not hiding!) of knowledge. Zanaq 17:46, 22 August 2011 (UTC)Reply
Yes, yes, yes, and no. This shouldn't be our responsibility, this isn't exactly our job, this isn't what we are here to do. Muslims are NOT forced to look at the article, and no, they should NOT be surprised to find images of him there, and when they do, they shouldn't be upset by it because we're not trying to offend them. (and American Muslims, for example, aren't really the issue-- they understand and if not they'll learn about multiculturalism within a generation or less.)
BUT somehow, this responsibility HAS fallen to us. Muslims _do_ look at the article on Muhammad, often one of the first they visit. They are surprised-- literally shocked. They do get really upset-- some sad, some angry, some confused. They _do_ suffer over it-- those emotions are real.
All this should not be, but it is. Some of our readers are suffering unnecessarily, and we can stop that suffering rather effortlessly. We shouldn't have to be the ones to stop it-- their ISP, their society, or somebody else should have stopped it for them. But the world is real, those readers are real, and as crazy as it is, we're the only ones who actually have the power to fix it for them-- they don't know enough about computers to do client-side filtering and they don't yet know enough about multiculturalism to not have negative emotions over the images.
It is INSANE that we here at Wikimedia are the ones deciding issues that could affect world peace. But here we are! This is what happens when you succeed beyond your wildest expectations-- you have to temper your newfound power with responsibility, you have to adjust ever so slightly, so you don't accidentally stomp on whole societies that you never even imagined you'd get to talk to, and here they are reading our articles. --AlecMeta 18:29, 22 August 2011 (UTC)Reply
Wow, I have to give it to you, you actually beat the Maude Flanders argument: it isn't even about the children, it's about world peace! Now, that is both amusing and impressive, BUT: there are people out there who want all Muslims dead. There are Muslims who want everybody who is not a Muslim dead. All these lovely people generally don't have access to computers and, if they do, they hardly have access to Wikipedia. Yes, Wikipedia is an important project for disseminating knowledge, but on the global political scale, its influence is a plain zero. And don't tell me I'm stomping: I spent a LONG time in a place where westerners would be killed if the wrong people saw them, and I tell you, there is no internet in those parts. --complainer
I do not accept that responsibility. I am not here to protect people against themselves. Our goal is to build an encyclopedia/dictionary/university/image repository, not bring world peace. We should look at our internal values, not the external world: this is cyberspace. Zanaq 18:38, 22 August 2011 (UTC)Reply

Thank you for putting in two short paragraphs what I needed four for in my vote comments. The proposal is a crutch for the naïve, the deluded and the incapable, and I certainly don't think we should encourage these states by equipping them with tools to continue. And especially not in the name of the children, which seems to be the universal replacement for a coherent argument these days. Mathrick 08:11, 28 August 2011 (UTC)Reply

This is not Wikipedia's problem


Oppose: This proposal should NOT be implemented.

When I read it I was reminded of the children's rhyme:

Sticks and stones may break my bones,
but words will never hurt me.

We in the Wikipedia community are being asked to solve a set of problems that should be solved by each individual who has created their own problem.

If a person finds an image offensive, it is their subjective evaluation, that makes it so. Since they are making the judgement, they should solve their problem. It is wrong to try and force anyone else to solve their problem, including this set of reference works under wikipedia.org. (IMHO, the best solution is to stop taking offense.)

The criteria Wikipedia has in place for content are sufficient without this proposal.

Please VOTE against this proposal. Lentower 00:30, 22 August 2011 (UTC)Reply

FYI There is no option to vote against this proposal, at this point in time. Actually.. no wait... --Kim Bruning 01:05, 22 August 2011 (UTC)Reply

Hundreds of different cultures, but only one image filter. Is this the vision of the board of ONE World?


See, every culture has its own conception of what you can call obnoxious. For example, there seems to be a big difference between moral standards in the USA and Europe. The difference between Europe and some countries in Arabia seems much bigger still. But we would always have one category for all these different countries. Now my point is: how will you set up a category that covers the conceptions of all these different cultures? Or will the board take the easy way and promote the American style for all? Or will they orient themselves, for example, to the conventions of Saudi Arabia, or Sweden? Sorry, but this content filter is dumb, and everybody should know that. -- WSC ® 08:24, 22 August 2011 (UTC)Reply

I don't mean to be pushy but, if you oppose (or support, for that matter) the proposal, I'd urge you, and successive editors, to explicitly and boldly (in the typographic sense) state it. en:User:complainer —The preceding unsigned comment was added by 77.233.239.172 (talk) 08:29, 22 August 2011
Excuse me! My English is lousy. Can you please restate? I don't get the point. -- WSC ® 08:39, 22 August 2011 (UTC)Reply
I'd like for you to explicitly write Support or Oppose (or Abstain, if you have no definite opinion) to the proposal. Thanks, en:User:complainer —The preceding unsigned comment was added by 77.233.239.172 (talk) 08:43, 22 August 2011
Well, I don't abstain, but I oppose all kinds of acting without due consideration, and this image filter seems really ill-considered. If you want a determination in that manner, I'm Oppose. -- WSC ® 08:52, 22 August 2011 (UTC)Reply
Thank you: I hope you don't mind me editing the typeface to bold: it will help with the headcount. complainer —The preceding unsigned comment was added by 77.233.239.172 (talk) 09:08, 22 August 2011
Never mind! -- WSC ® 09:12, 22 August 2011 (UTC)Reply
There will most certainly not be a "headcount". We run things on a basis of building a consensus by exchanging arguments. If the majority just piles on votes, these will be rightly discarded in the final evaluation. The better arguments must prevail, not the majority. (Also, please sign your comments with four tildes ~~~~. And if you're w:User:Complainer, why not simply sign in?) --78.35.237.218 09:17, 22 August 2011 (UTC)Reply
I think that's not true. The board seems determined to launch that image filter. If I disagree, it doesn't matter whether I have plausible arguments; they don't care. But Robertmharris, whoever that is, has much more reputation than me. Maybe he is much smarter than me? I don't know. But he has no answer for my culture argument. -- WSC ® 09:32, 22 August 2011 (UTC)Reply
I would love for you to be right, although I doubt it; however, I would point out that:
  • The fact that an argument has prevailed is traditionally (and procedurally) expressed on wikipedia by headcounting. People are free to change their vote until the closing time, in case a better argument pops up.
  • There have already been arguments about where current consensus lies as far as this page is concerned. Counting through all the edits, some of which come from IPs that could point to the same user, is laborious and uncertain. This process should fix that.
  • If there is any merit in the Harris Essay, more than 80% of the world population should support the proposal, and this majority should mirror itself on Wikipedia. This headcount is a good litmus test for the veracity of the essay.
  • I think you should elaborate on the majority votes being discarded, as it sounds dangerously undemocratic; anyhow, the decision on this issue was already taken before it was even announced, and it has been clearly stated that the referendum was just a way of yanking our chain. This headcount is also a statement against this.
Finally, I am en:User:complainer, not User:complainer: somebody is already using this nick on wikimedia. I have explained this above: please don't start a conspiracy theory. If you want me to prove who I am, I can put something on my English user page, but I really think it would be exaggerating.
-- complainer —The preceding unsigned comment was added by 77.233.239.172 (talk) 09:38, 22 August 2011
@77.233.239.172: Relax, I didn't accuse you of anything. Just please sign your comments with four tildes, you know the drill. Your assertions about headcounting are all wrong btw, so wrong in fact that I'll go into detail only if you ask me to. See also w:Wikipedia:Polling is not a substitute for discussion.
@Widescreen: A crowd of people senselessly shouting "oppose" without presenting valid rationales for their opposition will not sway the board. Good arguments hopefully will. Also, this is not a binary decision. Many support one form of an image filter, but are at the same time very strongly opposed to other variants. If (when) the board proceeds and implements an image filter, the exact details of its implementation are very important. --78.35.237.218 09:51, 22 August 2011 (UTC)Reply
Now, without irony: the "Harris Report" (sounds like the epochal en:Kinsey Reports) is nonsense. A journalist claims the global moral high ground and doesn't even think about other influences. This guy was never part of the community, by his own admission. What makes him an authority on how an encyclopedia should spread knowledge? With this report the Board tries to fool the community, outsourcing important decisions so as not to be responsible for them. Or take the press, for example: show me one newspaper that uses an image filter on its own homepage. And remember, we are no magazine. We are an encyclopedia. We want to spread knowledge, not hide it. -- WSC ® 09:57, 22 August 2011 (UTC)Reply
You're preaching to the choir, mate. I posted in this section only to set the record straight on the idea of a majority vote. --78.35.237.218 10:05, 22 August 2011 (UTC)Reply
In the German WP a poll is in preparation: de:Wikipedia:Meinungsbilder/Einführung persönlicher Bildfilter. The aim is to find out whether the German community agrees with this nonsense. I think this is a better choice than spreading arguments, because the board gives this fella, Robertmharris, more authority than any member of the Wikimedia projects, including themselves. So you can argue till Armageddon. They don't care. -- WSC ® 10:18, 22 August 2011 (UTC)Reply
Uh, yes, I'm going to accept that my arguments are "all wrong" without an explanation. Right. Care to at least explain why you assume only people who Oppose are going to shout? Is this another Silent Majority argument? Anyhow, if I sign with the four tildes, you'll only get my IP address; since this varies every time, it would border on sock-puppeting, something the more argumentative member(s) of the opposite field would be more than happy to accuse me of. I'll keep on signing with my English username and, for the sake of clarity, urge people not to force-stamp my edits (I'll timestamp, though, if you think it helps). --complainer 10:28, 22 August 2011 (UTC)Reply
Hm? What's the point you're trying to make? -- WSC ® 10:45, 22 August 2011 (UTC)Reply
I was talking to 78.35.237.218; the indenting makes it hard to see. --complainer 10:55, 22 August 2011 (UTC)Reply

I would rather talk about the cultural contractedness. -- WSC ® 11:23, 22 August 2011 (UTC)Reply

Yes, about that: I think holding a poll on the German Wikipedia could play into the hands of those screaming for censorship: it has been asserted above that German and Dutch culture are "libertine" (which, for people born after circa year 1700, seems to mean "inordinately fond of porn") and (allegedly) represent a minority view on the issue. I'd rather like Indian editors, of whom there are quite a number, to come forth and express their opinion, which, according to the Harris Essay, would be overwhelmingly pro-filtering. --complainer 11:43, 22 August 2011 (UTC)Reply
The question is: does the board have the right to overwhelm any community on the basis of explicitly American reflections? Precisely because the German and Dutch people are more liberal than others. But I doubt that Germany is more liberal than other countries. In Germany there are a lot of people you could call conservative. These people have completely different perceptions than liberal Germans have. Maybe the difference is that most conservative Germans would be called liberal in Tennessee (or Toronto?). And here we have another problem with this nonsense: what someone wants to see, and what not, is a highly individual question. -- WSC ® 12:12, 22 August 2011 (UTC)Reply
Conservative Germans are called socialists by some Americans. --Bahnmoeller 12:19, 22 August 2011 (UTC)Reply
We are straying very far from the point here but, as a rule of thumb, modern Northern and Eastern European right wings are liberal, which is the same term, and basically the same political philosophy, as the US "left" wing (i.e., basically, the Democrats). The thing is oddly reflected in Danish politics, where the dominant right-wing party is called Venstre (which translates to "left") as a remnant of a political system reminiscent of the current American one. These liberal parties generally support at least some measure of social darwinism, but are not usually lenient towards either censorship or support for specific religions (which two things usually come together, anyway) --complainer 12:34, 22 August 2011 (UTC)Reply
You're right. That's another question. But I can tell you the political ideology of the board: simple-mindedness. -- WSC ® 12:47, 22 August 2011 (UTC)Reply
WSC, thanks for commenting. I agree 100% that we have to either create a system of 'culturally neutral' tagging or we will evolve one over time. Every culture has its own version of things it can call offensive. Every individual has their own version. Consensus can't decide offensiveness. If we don't already know that now, it will become obvious with time. We won't reach consensus on "offensive" because there is no consensus to be reached, only opinions.
The filter will have to do its best to be multicultural and multilingual, and we need to be able to tell people that we'll build them a fully culturally-neutral filter if they give us x dollars to fund development.
The whole point, for me, of doing this is to be culturally neutral and open to non-US cultures. Americans can handle getting their own filter on their own without our help-- it's the people in the global south who most need our help to meet their reading needs. To build a filter that appeals only to western notions of offense would be far worse than not building one at all.
I think we know that, and if we don't, I have confidence it will become very apparent with time. --AlecMeta 16:33, 22 August 2011 (UTC)Reply
The only workable option for a filter "category" is to filter all images. Consider the problem of labeling images as objectionable or potentially objectionable from the other side: Who is to say that a given image will most definitely not be problematic to anyone, ever? Therefore, we simply cannot ever produce a workable filter which filters only objectionable images and all objectionable images. Not possible.
I would welcome a simple, general image filter, which could be simply and quickly enabled and disabled. Only with such a filter system, people could actually decide for themselves what they want to look at and what they don't. --195.14.220.250 16:43, 22 August 2011 (UTC)Reply
ACK with the IP 195.14.220.250. I think AlecMeta doesn't really understand my argument. See, for example, in Saudi Arabia it's offensive to show a woman's knee, or even the face of a woman. That's never compatible with the beliefs of European people. They want a picture of the face of Sue Gardner, to have an idea of who she is. And these differences are too big to be covered by one Commons category. -- WSC ® 17:44, 22 August 2011 (UTC)Reply
I think I understand and agree with your argument. I don't believe a few commons categories can do this job either. I think it will take unlimited categories to do this right. I'm just open to be proven wrong by experiment if the board wants to try. --AlecMeta 21:12, 22 August 2011 (UTC)Reply
@Alec: this is starting to freak me out. I thought the proposal was, in principle, a way of offering people the option of not seeing images, and, according to me and a number of other pessimists, in practice a means of facilitating third-party censorship. Your comments about a tool for the "Global South" (a bizarre notion, by the way, since there is a lot more moralism in, say, Syria than in, say, Uruguay) seem to indicate that censorship was the intended purpose of the tool right from the start. I also completely disagree with your time argument: time, imho, will just show that something much like the current Wikipedia will rise and take the place of the mangled version we (well, you, actually: I'll be off as soon as I see an alternative) will be offering. --complainer
You are right, complainer (@en). I think Alec doesn't understand that free science is not a pawn of any political purposes of the WMF, because it's not the aim of an encyclopedia to be twisted by moral reservations. I think that was Jimbo's big aim: to start an encyclopedia that would change the world. That's not possible when the encyclopedia isn't one. Jimbo is a salesman, not an en:Encyclopédiste. He had an idea of an encyclopedia but no notion of what that means. Sue is a journalist, and so is Robertmharris. But an encyclopedia is scientific, not moralistic. Think about what the great en:Encyclopédie would have changed if its writers had considered the feelings of monarchs. The encyclopedia shall change the world by distributing knowledge; the world shall not change the encyclopedia by distributing moral reservations. -- WSC ® 18:30, 22 August 2011 (UTC)Reply
I think it's more about respecting the feelings of the people and not the kings. You have the top-down perspective here, Widescreen. Please don't try to disguise your contempt for individual sensitivities as antiauthoritarian. Adornix 19:10, 22 August 2011 (UTC)Reply
Mind if I do it, then? I am in the best position to, being personally full of contempt for "individual sensitivities", but WSC's argument is sound and raises a legitimate concern, namely: is this a first step towards some kind of commercialization of wikipedia? Jimbo has been panhandling at increasing frequencies for a long time, and this software upgrade is going to cost a lot in servers; since it is fundamentally geared towards currying the favour of the poorer parts of the World, it can't bring that much in donations. And while we are at it, a couple more, namely:
Was this proposal even voted on by the WMF, or was it imposed on them too? After all, the WMF is elected by the users, and the principle of representative democracy is that one gets screwed by one's elected representatives as a punishment for having voted for them; it is part of the game, and everybody in the West is used to it (I am not being sarcastic, by the way).
Was the Harris Essay an actual guideline, or an excuse for a decision taken in advance?
--complainer
"contempt for individual sensitivities" -- Oh gawd, here comes the "all liberals are baby-eaters" line of "reasoning". It was merely a question of time, I guess. It's always either that, or the "bleeding-heart liberals" epithet, depending on current mood and water temperature. The only good thing about rightwingers is that they are reliable. --195.14.220.250 19:21, 22 August 2011 (UTC)Reply
Sorry, that's no step forward into commercialization, that's a step forward into stupidity. I meant that Jimbo is a businessman, not a philosopher (maybe a NET-philosopher). Jimbo has other aims, and these aims are not only commercial but idealistic. But these aims are not necessarily compatible with the aims of an encyclopedia. But you are right, complainer, when you ask the board whose idea it was to start this filter, and also ask about the influence of Jimbo. Jimbo can't take criticism. Last year he tried to delete content with the authority of his founder flag. Now, believe it or not, we have a content filter here, proposed by Robertmharris and his daughter, a guy who worked with Sue. You can fool some people sometimes, but you can't fool all the people all the time. -- WSC ® 20:48, 22 August 2011 (UTC)Reply
The way we got here is that when Jimbo did his deletions, everybody screamed "No!". So then we spent a year and a half thinking about how to do it AND keep our values. I think they've basically succeeded. As for motivations, Jimbo, Sue and RobertMHarris are all very very different people, with different agendas and different methodologies and different skills. Imagine them as a conspiracy if you want, they all believe they're problem-solvers working in public, not conspirators working in secret. --AlecMeta 21:17, 22 August 2011 (UTC)Reply
This is fascinating, but I really don't see anything even close to an answer to any of my questions, although we have at least two WMF members regularly contributing. Am I missing something? Or, worse, am I hitting something? --complainer 21:39, 22 August 2011 (UTC)Reply
I'm not sure I have any good answers for you myself. I don't think there's any bad motives here. I think the Harris report is just a report-- it has no "authority" beyond its ability to inform, it wasn't meant to. The Harris report sought, in good faith, to find a way for us to "have our cake and eat it too"-- to construct a win-win that would meet the needs of both anti-censorship editors AND anti-shock-image readers. I think it succeeds, maybe it fails, but I'm very confident it was written in good faith, trying to find the middle ground win-win. --AlecMeta 23:19, 22 August 2011 (UTC)Reply
VALUES? What kind of values? Maybe these values are fantasies of not being criticised by administrative bodies or conservative powers? And what about my values? Why don't you ever ask me what I think about this idea? What are your values? Moral reservations and knowledge were never compatible. Do you think all the people here are writing a moral treatise? No, we are trying to write an encyclopedia, believe it or not. What would you think if you were a physician writing an article about the human body, knowing that a reader who is not logged in might not see any of the pictures you collected, or maybe shot yourself? What would you think if you wrote an article about political riots, certain that your tables and pictures are invisible to anyone who doesn't want to be stressed by these issues? Writing an encyclopedia is not for fun; you can't bend all the contents as you will.
And did you never think it could be wrong for Jimbo to delete the pictures? Why do you think he was right about that? Don't you think there could be reasons for "everybody to scream NO!"? I can tell you why they screamed NO! I have to tell you, because it seems you have never thought about that. They screamed no because they don't want to be censored. Amazing, don't you think? What makes you believe Jimbo was right with the deletion, so that you are now a proponent of this stupid filter? Are you so much smarter than everybody who screamed NO!? I don't think so. -- WSC ® 21:48, 22 August 2011 (UTC)Reply
I care about your values, I care about your opinion, and I believe the board does too-- they asked for opinions in a giant way.
And did you never think it could be wrong Jimbo deletes the pictures? ... Don't you think it could have reasons from "everybody to scream NO!" I do think it was wrong, I did then, and I screamed as loud as I could for him to stop. I understand exactly why people screamed "NO!"-- I screamed "No!" too. That's why I'm here trying to tell people that this is a new and different approach-- I was so loudly "No" then, I have a certain duty to say "Yes" now that they've met my understanding of 'Wikimedia Values'. --AlecMeta 23:12, 22 August 2011 (UTC)Reply
Oh Alec, now you are trying to fool me twice. You said: "they asked for opinions in a giant way." But that isn't true and you know it. They only asked Robertmharris and his daughter, and they don't care; they have no clue about encyclopedias. And you screamed "No" too, while Jimbo deleted the pictures? If you are User:Alec on Commons, you have said nothing since May 2008. Tell me, where did you scream? At home while watching TV? And yes, you are right, this is a new and different approach toward censorship. And congratulations, Wikimedia met your values. But it didn't meet mine. But let me think about it! Maybe your arguments (whatever they are) are as important and smart as Robertmharris's? Because Robertmharris is such a smart fellow. -- WSC ® 08:38, 23 August 2011 (UTC)Reply
Harris wasn't looking for what 'he' wanted, he was looking for something everyone could like. He's a mediator, not an 'expert'. He's proposed the framework for a treaty he thought we'd like, but it's up to us whether to adopt it. No one can force this on us against our consensus-- we proved that last year. What you're seeing is the completely OPPOSITE approach from last year-- massive discussions after discussions after discussions, LOTS of community involvement, and taking things slowly.
And congratiolations, the wikimedia met your values. But they don't met mine. and that's why we needed to call a referendum on this. My values are no better than yours, I have no reason to believe I'm smarter than you. Nobody even cares whether Harris is smart or not, we just care whether he got it right or not. And to find out how everybody feels, you need a global referendum and discussion! There is no other way for us to have even found out your values.
I agree the referendum questions have some issues, but this is our first time doing it, nobody had any idea how wildly successful a survey like this could be. Nor do we have any agreed method for interpreting the results. It's not a conspiracy-- I'm convinced. It's good people trying to find out what their readers & editors want of them, and trying to do their best to balance it all. They make mistakes all the time, and they learn from their mistakes all the time too. --AlecMeta 17:42, 23 August 2011 (UTC)Reply
First of all: Harris wasn't looking for something everybody could like. He was looking for a way to justify the filter at all costs. And his "report" is used by the Board to legitimize an instrument of censorship. If he is a mediator, as you say, he would have to be absolutely impartial. But Sue was his boss for years at the Canadian Broadcasting Corporation. Curious, isn't it? You are fooling the communities! And besides, a mediator needs both parties in order to mediate. But he never asked the Germans. And I'm sure he didn't ask anybody but the Board. F o o l i n g. Do you think all the authors of scientific articles are stupid jackasses?
And you are right, the Board took the easy way. And you are also right when you say you are taking things slowly, because the more offensive approach failed. No one from the German community asked for this filter, and I'm sure most German authors don't want it now. But that doesn't matter, right? Jimbo and the Board want the filter, and we will get it, whether we agree or not.
And my value is: I don't want any filter or any other censorship. And the difference between your values and the Board's and mine is that I can give you reasons why this filter is nonsense and why we don't need it. The German newspapers say the Board wants to expand into the "south": South America, Asia. But what will you export? Censorship? I think those goods are not needed there. Will you teach these people that they don't have to face the truth if they don't want to?
And what you call a referendum is a joke! Ask us whether we want this filter! No, because you know the answer, and you don't care. Because we are all dumb, right? Only Robertmharris, Jimbo and Sue are smart. You try to fool us as if we were dirt. And you can't balance anything, except by dropping your plans and accepting that you don't have to equalize or censor anything. Just keep your hands off Wikipedia and buy some servers. -- WSC ® 19:33, 23 August 2011 (UTC)Reply
This keeps coming up, so I suppose that the explanation bears repeating: The existence or non-existence of this tool will have no effect on third-party censors (except, I suppose, to possibly, but only very slightly, reduce sales of the lowest-end products).
The means by which this tool would operate already exist and have been usable by third-party censors for years and years. The means by which this tool would operate are called "categories"—the current, existing, long-standing categories, like Commons:Category:Abused animals by type, not some made-up, brand-new, filter-specific categories. If a third-party censor wanted to use categories to filter images, they could have done that years ago, they could be doing that now, and they could do it tomorrow, even if the WMF decided to dump the tool.
Enabling this tool on (some or all) WMF sites will not enable third-party censorship any more than said censorship is already enabled by the mere existence of some attempt at organization at Commons. WhatamIdoing 00:55, 24 August 2011 (UTC)Reply
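The point that category data is already public can be illustrated with a short, hypothetical sketch: any third party can enumerate the files in an existing Commons category through the standard MediaWiki API (`action=query` with `list=categorymembers`), with or without the proposed filter. The function names below are illustrative, not part of any actual proposal.

```python
# Hypothetical sketch: enumerating files in an existing Commons category
# via the public MediaWiki API. A third-party censor could do this today,
# whether or not the personal image filter is ever deployed.
import json
import urllib.parse
import urllib.request

API = "https://commons.wikimedia.org/w/api.php"

def build_query(category, limit=10):
    """Build a `list=categorymembers` query URL for the given category."""
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": f"Category:{category}",
        "cmtype": "file",
        "cmlimit": str(limit),
        "format": "json",
    }
    return API + "?" + urllib.parse.urlencode(params)

def category_files(category, limit=10):
    """Fetch up to `limit` file titles in the category (needs network access)."""
    with urllib.request.urlopen(build_query(category, limit)) as resp:
        data = json.load(resp)
    return [m["title"] for m in data["query"]["categorymembers"]]

# A blocking proxy would only need the returned titles, e.g.:
# category_files("Photographs of sexual intercourse")
```

Nothing here depends on the filter existing; the query works against the long-standing category system alone.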
You ignore the fact that most categories (such as violence) don't contain only images that show actual violence. There are images of protests against violence, caricatures against violence, monuments dedicated against violence and so on. All relate to violence, but not all show violence. That is correct, since none of these categories was created or meant for content filtering; they are a guide for finding content. This is a huge difference. If we really want filtering based on categories, we will need to introduce new categories outside the current categorization tree. This will in fact enable censors to exclude "objectionable content". This is not a pro argument; it is a contra argument. (Read the Library Bill of Rights and its subsections, especially Labeling and Rating Systems) --Niabot 01:40, 24 August 2011 (UTC)Reply
@WhatamIdoing: I think they aren't even fooling the communities; they are fooling themselves. The filter could be used by organisations, churches and other institutions to start their own censorship. You can't prevent that with technical gimmicks. It's not like in the "north", where everybody has their own internet account. And in any case, the only "personal censor" should be the super-ego, and least of all Wikipedia. -- WSC ® 05:55, 24 August 2011 (UTC)Reply

arbitrary break

[edit]
Niabot, if you have guessed that I would not include the main Cat:Violence, you're right. Most of the images in it, although related to violence, are not images showing either violent actions or the outcome of violence. I'd be far more likely to recommend that an anti-violence filter include Commons:Category:Wounded people or Commons:Category:Abused animals by type—specific categories that already exist and are primarily the sort of thing that squeamish people do not want to see. WhatamIdoing 17:42, 24 August 2011 (UTC)Reply
WSC, since there's nothing new being created here, there's nothing new for organizations, churches or other institutions to start new censorship programs with. We can't prevent that with technical gimmicks because we can't prevent that now. WhatamIdoing 17:42, 24 August 2011 (UTC)Reply
WhatamIdoing: You are right, we can't prevent that now. But we should never promote it, and the software could be abused. Don't you think it's a paradox that an encyclopedia founded to spread knowledge designs a filter that can be abused for censorship? But this is not the only argument against this filter. -- WSC ® 18:44, 24 August 2011 (UTC)Reply
I do not understand how you think this filter could be used for censorship. Unlike the various proposals made by opponents, this filter adds zero information to the images. There are no tags or blacklists in this filter. How can "tick here if you personally want to hide the images that have been listed in Commons:Category:Photographs of sexual intercourse for three years now" result in a system that can be abused for censorship, but "there's nothing to tick here, even though the images are still located in the same category" not result in a system that can be abused for censorship?
What's the specific mechanism? Do you think that if we don't provide the filter, then the would-be censor will somehow not be able to discover that this category exists? Do you think that someone capable of writing software to restrict access to this category the day after the filter is deployed (assuming it ever is) would somehow (magically?) be incapable of writing software that restricted access to the same category the day before? WhatamIdoing 19:21, 24 August 2011 (UTC)Reply
Excuse me, I don't understand your point because of my bad English. Could you please restate it? -- WSC ® 19:36, 24 August 2011 (UTC)Reply
That works fine WhatamIdoing, just beware of people hijacking that category to classify "objectionable content" instead! --Kim Bruning 19:46, 24 August 2011 (UTC)Reply
Seems like WhatamIdoing is not interested in reading my answer? -- WSC ® 18:08, 25 August 2011 (UTC)Reply
I read your answer. I have also determined that your native language is German. I am attempting to obtain a good translation for you. WhatamIdoing 20:57, 25 August 2011 (UTC)Reply
Maybe you don't have to translate it, just rephrase it. -- WSC ® 22:06, 25 August 2011 (UTC)Reply
This is a translation of my above comment from 19:21, 24 August 2011:
I do not understand how this filter helps censorship. The filter adds exactly zero new information to the images (unlike the many counter-proposals). There are no tags or blacklists here, only a preference. How is it possible that "click here so that images which have been in Commons:Category:Photographs of sexual intercourse for three years are not immediately visible" can somehow be abused for censorship, while "the images of sexual intercourse are still labelled as such, but you have no individual choice whether to see them or not" cannot be abused just as well?
How is that supposed to work? Will the censor fail to discover that these categories exist just because there is no filter option? If someone is able to write software restricting access to an image category after the introduction of the filter (should that ever happen), why should that not also be possible without the filter, given that the crucial information (the categories) already exists? WhatamIdoing 18:52, 27 August 2011 (UTC)Reply
Hi, WhatamIdoing: Is this a Babelfish translation? You'd better rephrase it. But thanks for the effort; I think I can imagine what you are trying to say. First of all, this filter is mainly a tool of personal censorship: if you don't want the whole truth, you don't have to face it. For example: isn't it educational to show the whole horror of the Holocaust? Do you really think it's enough to describe how many people were murdered? Don't you think it could be important to let people know how brutal and inhuman the Holocaust was? If we, the authors of this encyclopedia, think it's necessary to show these pictures, it's absurd to let everybody decide whether he wants to see them. Especially because my old schoolbook showed some pictures you would certainly find "brutal", since a lot of pedagogues found it didactically helpful, or unavoidable, to show them. So why does the Board think it has to suppress pictures like that? And this is just one example. We are the authors, and we decide in a collective process whether a picture is necessary or not! Whoever disagrees shouldn't read an encyclopedia, but rather the Bible, the Koran or "Mein Kampf". We also convey a sense of what is necessary to understand a topic. I don't want to spread the idea that a female breast, or an example of man's brutality, is not worth seeing.
But you can also use this filter for public or other censorship. This is decisive where the internet itself is a scarce commodity; I am thinking of the third world, where not everybody has an internet connection. Parties, churches, schools and others could use this filter for their own censorship simply by prohibiting the opt-out. People would see only the Wikipedia that some moralizer wants them to see. That has nothing to do with the aims of the authors. It's bad enough that they can restrict us with external tools; we cannot support that for even one second.
And of course you add new information to the pictures! You add a category like "explicit pictures", or something similar. You tell people what is "explicit", "offensive", "brutal" and, above all, what is not necessary and not worth seeing. This is something the authors of the article should decide.
Finally, I want to address your example, Commons:Category:Photographs of sexual intercourse. Most of those pictures aren't used in articles at all. In fact, I found only two pictures in use in Wikipedia: one of them apparently in the Arabic WP, another in some European and Japanese WPs. It seems nobody has a problem with that, except for some moralizers like the Board. So why do you have a problem with it? It seems to me the Board is striving to create a problem nobody else can see. -- WSC ® 10:16, 28 August 2011 (UTC)Reply
No, that is not a machine translation. It was made by a person who speaks both German and English fluently; he was born, raised, and educated in Germany. He has an Abitur: do you? Perhaps your lack of understanding says something about your education.
I am sorry to see that you are very misinformed about the proposal. (Perhaps someone used babelfish to translate the German pages.) Here is a list of some of your errors:
  • Pictures are not added to Wikipedia articles solely because someone believes the picture to be necessary to understand the subject. Perhaps that should be true, but it is false. Many pictures are added solely because the person thinks that pictures are pretty or because they want their own photograph in the article.
  • Wikipedia is not the only WMF project. The filter will be available on Commons itself, where 100% of the images in Commons:Category:Photographs of sexual intercourse are used. "Used in Wikipedia, which is an encyclopedia" is not the same as "used in any WMF project".
  • There will be no new category like "explicit pictures". There will be only a list of existing categories.
  • Because the filter does not create new categories, it cannot be used for public or other censorship. Nothing changes about public censorship:
    1. Public censorship can be done today, with the existing categories.
    2. The same categories will still exist if the filter is created.
    3. If the filter is turned on, no new method of public censorship will be created.
    4. If the filter is not turned on, then the current method of public censorship continues to exist.
I ask you again: If the censor does not use the existing categories for censorship today, then why would the censor suddenly begin to use the existing categories for censorship tomorrow? Is the censor too stupid to use the existing categories today? Will turning on the filter magically make the censor smart about the existing categories? WhatamIdoing 23:11, 28 August 2011 (UTC)Reply
There will be no new category like "explicit pictures". There will be only a list of existing categories. This, exactly this, would never work. In the case of sexual content it works a little better than elsewhere. But in many other cases you would block dozens of harmless images (not that I think any of our pictures are harmful). Consider something like violence or war: there aren't only images that depict brutal scenes or crimes. You will find memorials, peaceful demonstrations, portraits of innocent persons, etc. Blocking them all would cause needless collateral damage.
Public censorship can be done today, with the existing categories. You can't do that, as I already explained.
The same categories will still exist if the filter is created. I doubt that, because you will need new categories to separate "harmless" from "harmful" images. The categorization system was and is meant to sort our content for faster navigation and to gather images (galleries, if you will) for topics. There isn't, or shouldn't be, any distinction based on controversy. If there is, we are already doing it (partially) wrong.
If the filter is turned on, no new method of public censorship will be created. Since the filter can't be built on top of the current categories, you would need to create one.
If the filter is not turned on, then the current method of public censorship continues to exist. Hopefully not, since Wikipedia and its sister projects aren't censored. OK, I have to admit that EN is censored, since a lot of hot-headed censors are doing their job: thinking about what others must think. --Niabot 01:02, 29 August 2011 (UTC)Reply
@WhatamIdoing: If this is a good translation, it must be an awful and naive argumentation. ;o) But seriously: it's a matter of who is censoring. If anyone tries to censor Wikipedia, we should fight it, not provide our own censorship, even if it's not as bad as the others'. The five pillars of WP don't say we have to be conformist or twist knowledge to make it comfortable for anyone. Really, I think it's a betrayal of the principles of Wikipedia. If anyone uses our categories to censor Wikipedia, we shouldn't do the same.
Next: you are right that not every picture is added to explain a point. But the authors of the article found it helpful or pretty to add it. The filter, however, doesn't hide only the pretty pictures but all pictures listed in certain categories that have never been clearly defined. This is a point you should think about, instead of going on about my education or my failure to understand the intent of this filter. The authors write Wikipedia, and they shouldn't be outvoted by a machine and some moralizers putting pictures into forbidden categories.
"If the filter is turned on, no new method of public censorship will be created." So you say! It would be helpful if you responded to my arguments. If you want to spread dogmas, join the Catholics; they have 2000 years more experience with that than the Board. But I reckon that if you really answered me, you would have to realize the true meaning of this filter.
"If the filter is not turned on, then the current method of public censorship continues to exist." Yes, that's also true! And that's the reason why we need yet another kind of censorship, created by the Board. Because there isn't enough censorship on this planet.
What you say about there being no need for new categories is not true; Niabot has replied to that. Maybe you take him seriously, maybe not. But this point shows how little you have thought about this filter.
Yes, on Commons you can view all these sinful pictures, and all these violent pictures showing the truth about human beings. So you can start your own world: small cats will join little children playing in the sun. People live together (without copulating with each other) and read Wikipedia with filters, so they don't have to remember that we want to distribute knowledge, not evangelical moralization. Go ahead and do that, but don't bother us while we try to write an encyclopedia. If someone wants to see a couple having sex, they don't have to watch that on Commons; there are millions of sites like that, with videos, close-ups and really strange things. And some harmless pictures are in use on Wikipedia and Commons. -- WSC ® 07:18, 29 August 2011 (UTC)Reply

┌─────────────────────────────────┘
Again: no tags, and no new categories. It uses the existing categories, full stop. The report specifically says not to use tags or other special, filter-only labels, because doing that would enable involuntary censorship. (It's in Recommendation 10.) So we're not going to do this: it is not in the proposal.

Niabot, I'm sorry that you, with your limited knowledge of writing code for Mediawiki, have decided that the devs can't possibly do what they have said that they most certainly can do. When it comes to deciding what's technically feasible, I prefer to believe the WMF's highly experienced developers instead of random editors.

However, you are right about one thing: The filter will not be 100% accurate. This is a direct and known consequence of the choice not to use special categories like "Category:Images that are offensive because they show naked humans". According to this proposal, some innocent images will be filtered, and some unwanted images will not be filtered. These errors are expected. The risk of these errors is not going to stop them from using this unfriendly-to-censorship model. The risk of these errors is not going to result in us creating new categories or tags that are ideally suited for involuntary censorship.

How many errors we will see depends entirely on how stupid we are about the categories we list. Listing Commons:Category:Violence will suppress many innocent images—so it should not be listed for filtering. Listing Commons:Category:Abused animals by type will not suppress any innocent images—so it is a reasonable candidate for a violence-oriented filter. WhatamIdoing 21:05, 29 August 2011 (UTC)Reply
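The tradeoff described here (broad categories produce false positives, narrow ones fewer) can be sketched with a toy example. The rule is a plain overlap check between an image's existing categories and a reader's personal filter list; the file and category names below are merely illustrative, not taken from any actual implementation.

```python
# Toy sketch of the category-overlap rule discussed above: an image is
# hidden when any of its existing categories appears on the reader's
# personal filter list. No new tags or categories are involved.

def hidden(image_categories, filter_list):
    """True if the image shares at least one category with the filter list."""
    return bool(set(image_categories) & set(filter_list))

# An anti-violence memorial sits in a broad category plus a harmless one:
memorial = {"Violence", "Monuments and memorials"}
wounded = {"Wounded people"}

# Listing the broad category hides the innocent memorial (a false positive)...
assert hidden(memorial, {"Violence"})
# ...while listing only a narrow category avoids that error:
assert not hidden(memorial, {"Wounded people"})
assert hidden(wounded, {"Wounded people"})
```

The error rate is entirely a function of which categories are listed, which is the point made above.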

WhatamIdoing! See, even if there are no new categories, you have to designate some as "evil", or as containing content that need not be seen. Let us call these the "sinful categories". And you speak as if you could divine the future: this filter isn't running at all yet, so don't tell me things you can't know. Some categories are unusable, that's for sure; they contain pictures not even the Board wants to censor. There will be errors, you say. The filter will not run 100% accurately, you say, whatever that means. But that's irrelevant? How many errors are tolerable in order to allow people to censor others, or to censor themselves? You think it's acceptable, and that's fine with me.
But did you notice that you don't answer the fundamental questions about this filter? Whether it is necessary, or legitimate, or useful, or compatible with our aims? Spreading knowledge is our aim, I think, but that no longer seems important. Whether the categorization will work, or whether there will be errors, are just secondary questions, which show how thoughtless this filter is. That the filter won't work for every culture I mentioned at the top, but that doesn't matter to you; it seems to be irrelevant. We will launch this filter, that's for sure, no matter what it costs or whether it's useful. Whether it's useful to hide some pictures, or any picture, doesn't matter. Whether this is censorship? It doesn't matter! Nothing seems to matter but the moods of the Board and their silly attempts at explanation. And they seem to be in the mood for censorship. -- WSC ® 22:43, 29 August 2011 (UTC)Reply
Who cares if there are errors? If there are too many errors, people will stop using the filters. You don't want them to use the filter, so why do you care if they stop because they are frustrated with it? WhatamIdoing 17:37, 30 August 2011 (UTC)Reply
That's not the question! The question is: is this filter compatible with our aim of distributing knowledge? I have given you a lot of reasons why this filter shouldn't be used; you can look them up, they are all above. But now it seems to me you won't look at them, and you won't answer them. You only answer some en:Straw man arguments. (Like: I should be delighted if this filter doesn't work without errors, because people will stop using it out of frustration. But they won't notice any malfunction. They will just see that a picture is missing. What picture that is, they will never find out unless they take a look. But they don't want to take a look, because they turned on this stupidity filter. Wikipedia will help you remain stupid. Wikipedia will help you stay close-minded. Wikipedia will help you if you don't want to face the truth. Forget this knowledge everybody blathers about; the main thing is that your moral reservations are never put to the test.) -- WSC ® 18:49, 30 August 2011 (UTC)Reply
It seems WhatamIdoing is unable or unwilling to remove my doubts about the fundamentals of this filter. He doesn't answer the main objections; his answers don't even touch the basic inconsistencies of the filter. -- WSC ® 10:08, 2 September 2011 (UTC)Reply

Facilitating Censorship by Governments or Other Groups

[edit]

I am somewhat opposed to filtering, but I don't see it as a complete censorship problem as people are deciding for themselves whether to see certain images. I think ignorance, even if self imposed, should not be encouraged, but this value is in conflict with the value of individual choice - the right to decide for oneself. My censorship concerns are also attenuated by the fact that the image is not removed and the users can always reverse their decisions and choose to view the images.

However, I have grave concerns about the ability of groups to use our filtering for their own purposes. We know that there are countries actively censoring internet content for their citizens. I do not want to make it easier for them to do so. Whether and how our self-selected image filtering can be used by others to impose censorship does not seem to have been addressed. I am not a web programmer so I don't know for a fact that a government can piggyback onto an existing filter and turn on the image censoring function for people using servers in their country; it seems logical that it would make it easier for them to censor. If there is a way to do it, I would bet good money that China will find it.

Thus, the main issue that needs to be addressed is whether implementing an image filter will make this kind of censorship, which is not based on individual choice, easier to accomplish. Ileanadu 09:41, 30 August 2011 (UTC)Reply

I think the answer is yes, we will be enabling third-party censorship, and we need to keep that in mind as one of the significant costs of creating the filter. There are two reasons why it is unavoidable.
First, technically, it is very easy to take warning labels we create for a filter and use them in another program. Our openness works against us here.
The second piece of the puzzle is the creation of the labels themselves. The Harris Report and the referendum itself rather uncritically suggest using the existing category system. However, as discussed on the Categories subpage of this discussion, categories for finding and describing something are fundamentally different in scope and application from labels used for filtering. It would be unreasonable to give people a filtering system and then demand and expect that they will not use it to create effective filters. Once created, there is nothing we can do to stop other parties from using them for censorship.--Trystan 13:43, 30 August 2011 (UTC)Reply
Again: we are not creating any new labels. The only labels we'll be using already exist. Recommendation 10 specifically says that new tags are evil because they would enable third-party censorship. If someone can use Commons:Category:Photographs of sexual intercourse (=what we'll actually be using) to censor WMF, then they could do that last year, and they will be able to do that next year, even if we scrap the filter.
Yes, you're right: the existing categories are not well suited for this. But the proposal is to use them anyway, specifically because we deliberately chose an imperfect filter over perfectly designed new tags that would be suited for involuntary censorship.
I don't know why you're having such trouble grasping this: The proposal is to have an imperfect filter with no special handles for censors, rather than a well-designed, censorship-friendly system. The only people seriously considering a censorship-friendly special labeling system are the filter's opponents. The WMF has already said that they're not doing that, even though they know that is the only way to get "perfect" filtering. WhatamIdoing 17:35, 30 August 2011 (UTC)Reply
They will not create new categories (WhatamIdoing says); they will just turn existing ones into prohibited ones. Great argument, really. -- WSC ® 18:57, 30 August 2011 (UTC)Reply
I'm not having trouble grasping anything; you and I have just come to different conclusions about whether it is possible to implement a filter using the existing category system without fundamentally altering how categories are applied to images. Giving the community an imperfect filtering system and asking that they do not improve upon it does not strike me as realistic. If a category serves both as a retrieval aid and a filter, it is not possible to require individual users to only use it with the former function in mind, ignoring the latter. Particularly if the filter interface has a built-in function for reporting false negatives, as the referendum proposes.--Trystan 19:03, 30 August 2011 (UTC)Reply

Sex, Violence, and/or Religion question

[edit]

I have no personal issues with viewing "sacred" images, but I can understand how someone else might. At one point, the essay/study on image filtering seems to treat sexual, violent, and religious images as equal categories. When it comes down to the recommendations, however, sacred images seems to have been blithely dismissed because they present different issues. They do not.

If the issue is whether to allow individuals to opt into a system that temporarily hides images that are deeply & personally offensive, one individual can be just as offended by a sacred image as another is by explicit sexual imagery. Once upon a time, a woman's bare ankle was considered erotic. The reason why just seeing an ankle could arouse some people - and shock others - was the rarity of such an image. To a person who is unaccustomed -- and perhaps forbidden by religion -- from seeing sacred images, the sudden appearance of a sacred image can be as shocking as that Viet Nam picture of the kneeling man who is being executed. In some cultures, the sight of a woman's bare breasts would not be taboo, while in other cultures it would be extremely disconcerting to open up an encyclopedia article and see a bare-chested woman. In one case the sight is shocking because of religious principles, while in the other case, it is social mores that tend to make the sight rare & shocking.

Part 3 proposes different treatment for religious images because “a different intellectual process is involved.” It is the same intellectual process, just shaped by different social factors. No image is inherently offensive; it is offensive because the viewer's culture has inculcated the individual with the idea that it is offensive.

One of the questions on the referendum was whether an image filter “should aim to reflect a global or multi-cultural view of what imagery is potentially controversial.” It is possible that an affirmative answer to the question could be perceived as a reason to not censor religious imagery. Religion and culture are often tied together; in fact, religious views are part of what make up a particular culture. Since today there aren't many "cultures" that treat sacred images as a taboo subject, the argument goes, there is no need to censor sacred images. This is not just an issue of Muslims barring any depiction of their prophet. Many Judeo-Christian religions follow a concept of barring "graven images" according to their interpretations of the Ten Commandments, which are found in the Old Testament/Torah. Different religions have different criteria for what constitutes a graven image or "idol" (see [[15]]) and this presents a challenge to the image censor. The same can be said for pornography and violence. Cartoons from Japan can show a woman's breasts or even her naked body, so long as the pubic region is censored. In the West both regions are covered in public depictions; yet still, we have Michelangelo's David. There are levels of pornography, as the epithet "hard core" demonstrates.

Let's note again that this is self-censorship, which is temporary & reversible. Censoring sacred images is NOT “about the struggle between the rights of some individuals to define the limits of appropriateness of some images for themselves and others, versus the rights of others to know.” In all cases -- religion, violence and sex -- a group has defined the limits of appropriateness and seeks to impose that limit on its members. In all three cases, the proposed image filter would give the individual the choice to accept or reject that definition.

So, IF you are going to have an image filter, then under the principle of Least Astonishment, there is no reason to treat religious image taboos different than sexual image taboos.

Unless, that filter can be co-opted by a government or other group for its own purposes. [see my prior comments]

Censoring images in all of these categories will require a lot of difficult decisions and, I imagine, a lot of work. Is it worth it? How many people saw naked breasts for the first time in National Geographic? How many people first learned the facts about reproduction from reading their encyclopedias? In 7th grade we were all familiar with certain taboo words, but I didn't know exactly what they meant until I saw them defined in the American Heritage dictionary.

There was no cover that we had to move to see these images or read their text because each publication was fulfilling its mandate. I still think the purpose of any encyclopedia, whether in hard copy or internet, is to provide information to users who are looking for it. I realize that with the proposed image filtering the images/information is still there & merely requires an extra step on the part of some users (who have chosen to be put in this situation by opting in), but I am not convinced that the work that will be required to create, administer and follow such a system is something we want to undertake.

Does the status quo drive people away from Wikipedia/media or just create discomfort? Ileanadu 11:35, 30 August 2011 (UTC)Reply

I think that it is a different thought process, in the sense that it is far less subjective. The rules are formal and verifiable, and the subjects are cleanly identified. The LDS church declares that their temple garments are too sacred to be displayed in public. An object is either a temple garment or it isn't; either the image shows it or it doesn't. With the other types of images that draw complaints, there is always a question about whether a given image is violent enough or sexual enough to really count. There really are no borderline cases in this area.
As for your last question: The WMF has been told by some users that they will not use the projects because of religion-related images. Whether this means that they will not use any of it, or only that they will not read religion-related articles, is more than I know. Perhaps different users take different approaches. I believe that the Arabic Wikipedia has solved the problem of offending their readers by refusing to display any images of the Prophet Muhammad anywhere in ar.wiki, exactly like the German Wikipedia has solved the problem of offending their German readers by refusing to include the names of convicted German murderers after they complete their prison terms anywhere on de.wiki.
On the point of sexualized images, many, many schools block all of Wikipedia over the sex-and-porn images. However, I strongly doubt that this filter will change this, since any child can bypass the filter on any image with a single click. A school presumably would not consider that to be an adequate level of control for young children. WhatamIdoing 20:09, 30 August 2011 (UTC)Reply

Concerns

[edit]

Hyphenization

[edit]
...opt-in personal image hiding feature..

Let me suggest that as soon as possible, we hyphenate image-hiding, so as to clarify meaning, especially in the Wikipedia banner announcement. When I first saw this a few minutes ago, I thought it had to do with a feature to hide your personal images. I think that ...opt-in personal image-hiding feature... does the trick. KConWiki 21:11, 27 August 2011 (UTC)Reply

Slippery slope danger

[edit]

This is how all censorship starts... slowly. First it's opt-in, then opt-out, and then it will be mandatory on request of foreign governments / organizations, since the mechanism is already in place. Very dangerous. If people want to filter, fine, do it on your own PC. Don't give organizations / governments the opportunity to easily implement a blacklist for their people. Koektrommel 07:58, 3 July 2011 (UTC)Reply

That's a huge extrapolation based on a very preliminary step - and not one that I think is justified. Given that it would already be technically feasible (and easy) for foreign governments to apply such censorship if they wanted (e.g. see the Great Firewall of China or the Internet Watch Foundation), as well as the finances for technical development available to governments / organisations in favour of censorship (which are several orders of magnitude larger than those available to Wikimedia), I seriously doubt that this would make it significantly easier for them to implement... Mike Peel 20:22, 4 July 2011 (UTC)Reply
I don't think that it is within Wikipedia goals to provide a tool to make job of censors easier. Ipsign 06:57, 16 August 2011 (UTC)Reply

Esto no es una herramienta que aumente la libertad, sino que la recorta. De aprobarse será la propia wikipedia, que no censura contenidos ni temáticas, la que tristemente proporcione el instrumento de censura a los censuradores. Lamentable. Wikisilki 19:01, 5 July 2011 (UTC)Reply

Translation: "This is not a tool to increase freedom, but to limit it. If approved, it will be Wikipedia, which does not censor content or themes, which sadly provides a tool for censoring to the censors. Lamentable." —translated by Sj.
Again, I must very forcefully say that it should not be the goal of Wikipedia to champion "libertad", but to educate. To deny education to the people who need it the most (i.e. children in developing nations that often have highly moralizing, oppressive cultures) because you find it distasteful to implement a voluntary, opt-in method for hiding images that might preclude large swaths of people from using Wikipedia because their cultures don't allow it - this is absolutely vile and idiotic. Unigolyn 08:59, 18 August 2011 (UTC)Reply

I concur. Implementing this feature would be clearly the first step towards censorship on Wikipedia. The next step on this way will be to block whole pages (for example, by adding category 'sexuality') which will allow schools to inject cookie with 'Display-Settings: sexuality=denied' and block such content for all the students. While it might be the intention, it won't stop there: the next step will be to create category 'Display-Settings: opposed-by-chinese-government=denied' and mark Tiananmen Square protests of 1989 with it, which will effectively allow Chinese government to simply inject appropriate cookie at firewall and block content which they don't want Chinese people to see. It is also *very important* to understand that limiting the feature only to 'logged-in' users will *not* fix this problem (regardless of implementation, it will be *very easy* to enforce at firewall level that all users from China are always logged in as the very same user). Ipsign 06:57, 16 August 2011 (UTC)Reply
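The firewall-level cookie injection described in the comment above can be sketched in a few lines. This is a hypothetical illustration of the attack only: the `imageFilter` cookie name is invented for the example and is not an actual MediaWiki cookie.

```python
# Hypothetical sketch of the attack described above: a transparent
# proxy at a national firewall rewrites every outbound request to
# Wikipedia, appending a cookie that switches the server-side image
# filter on for all users behind it. Cookie name is invented.

def inject_filter_cookie(request_headers: dict) -> dict:
    """Return a copy of the headers with a forced filter preference appended."""
    headers = dict(request_headers)  # do not mutate the caller's dict
    existing = headers.get("Cookie", "")
    forced = "imageFilter=sexuality-denied"  # invented cookie name
    headers["Cookie"] = f"{existing}; {forced}" if existing else forced
    return headers
```

The point the commenter makes is that the origin server cannot distinguish this injected preference from one the user chose, which is why limiting the feature to logged-in users would not help.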

It sets a precedent that I really don't want to see on Wikipedia. --Dia^ 07:54, 16 August 2011 (UTC)Reply

I don't want that tool, it's the key to introducing censorship. If a certain image cannot be deleted, it will be added to all possible categories to prevent presentation as far as possible. If someone has problems with uncensored images, he/she should read only their own publications.— The preceding unsigned comment was added by an unspecified user

Thanks, Ipsign and others, for illustrating the slippery slope fallacy. There is no logical reason why this would be extrapolated to tagging subjects blocked by China. In addition, China has already blocked that article and others in the past, so the tagging would be years too late to serve any purpose whatsoever. There are specific reasons given by many people why we should allow users to opt-out of seeing specific types of images. There is no reason why we would then apply this setting to aid censorship by governments. It's illogical FUD, not rational opposition. Sxeptomaniac 22:30, 17 August 2011 (UTC)Reply
Just because one uses the phrase "slippery slope" doesn't automatically make it a fallacy. Like in all political issues, setting a precedent - that Wikimedia is willing to accommodate offensivists - does make subsequent steps toward the bottom of the slope easier. Why allow personal sensitivities to influence what pics get displayed in Naturism, but not allow "national sensitivities" to do the same? 41.185.167.199 12:44, 20 August 2011 (UTC) (a.k.a Bernd Jendrissek)Reply
It's a slippery slope fallacy if you can't give good reasons why the supposed chain of events will happen. There are some gray areas, but the very fact that you can use "personal sensitivities" vs. "national sensitivities" works against the fearmongering. Sxeptomaniac 22:42, 22 August 2011 (UTC)Reply

Can't complete form: "not eligible", and I've tried en, de, nl and give up.

I am utterly AGAINST! the proposal. - It reeks of censorship! Unless it's a filter, to be built in/activated by adults (on their own computer - and ONLY there) for the protection of minors.

Lots of zeros and one question mark as my reply.

Thus "?" I answer question # 6; for "culturally neutral" is an oxymoron. Culture is diverse by nature & definition. To conjure oxygen into carbon dioxide (by magic) while breathing is about all we humans have in common.

Hence my unease at the VERY suggestive example in question # 5 (condone the "ferocity of brute force" versus proscribe a "state of undress") redolent of the nauseating puritanism which abhors all that could ever so remotely hint at sexuality (or indeed the corporeal).

Apart from moral-mongering religious fanatics (atheists included) there is the problem of definition.

"Moral" (from: Latin mores=customs): where and when? Same for "Nude": is that my freshly shaven face (half an hour ago I rid myself of a three days' beard) or only when I was actually in the shower, unclad, naturally (naturely). "Obscene" is another one. Can I come to the conclusion that I find the description of π "Offensive"? I've already mentioned "Culture" and could prattle likewise for days on end.

In my not so humble opinion there is no slippery slope: there is a precipice and, although it might take a while, we, along with wikipedia, shall splatter. Sintermerte Sintermerte 18:24, 22 August 2011 (UTC)Reply

Once you're done hyperventilating, look up "censorship" in the dictionary you used to look up "moral" and get back to us. As far as "religious fanaticism", the panicky opposition to this proposal fits the definition far better. Sxeptomaniac 22:42, 22 August 2011 (UTC)Reply

Abuses by censors: self-censorship seems to open the door to 3rd-party censorship

[edit]

From other comments on this page, there seems to be severe confusion about the intention of the feature and of its potential (and IMHO very likely) abuses. One common argument for this feature is that it is self-censorship, which (unlike 3rd-party censorship) looks like a good thing. Unfortunately, there are severe implications of the available technical solutions. It seems that this feature will almost inevitably be based on so-called cookies. It will essentially allow any man-in-the-middle who sits between the end-user and the Wikipedia server (usually an ISP, which can be controlled by a government/private company with a certain agenda/...) to pretend that it was the end-user who decided not to show controversial images, and Wikipedia servers won't be able to detect the difference (the only way I know to prevent such an attack is SSL, and even it can be abused in this particular case - a 'man in the middle' can intercept SSL too, and while the user will know that the server certificate is wrong, he won't have any other option, so he'll need to live with it). It means that by enabling users to filter themselves out, we will also be enabling 'in the middle' censors to make filtering much easier than they're doing it now (it will also probably mean less legal protection against censorship: for example, if somebody filters out Wikipedia images in the US right now, they are likely committing copyright infringement - with a 'fair use' defense being quite uncertain - but if it is Wikipedia servers doing the filtering, the copyright argument evaporates, making censorship much more legal). This is certainly not a good idea IMHO. Ipsign 07:59, 16 August 2011 (UTC)Reply

In order to distinguish between images, they have to be categorized. And there you will find the censors. --Eingangskontrolle 08:47, 16 August 2011 (UTC)Reply

Yes, even if this is intended for personal use, it will in fact be used by network managers and ISPs to censor Wikipedia for their users. MakeBelieveMonster 11:50, 16 August 2011 (UTC)Reply
Okay, so your scenario here is that a malevolent entity, with government-like powers, does broad-based man-in-the-middle attacks in order to... turn on a filtering system, where the user can get around the filter just with one click? There's nothing we can do about MITM attacks (other than recommending SSL, so at least you know you've been compromised) and if there is someone capable of an MITM attack then they are not going to use our click-through-to-see filtering system, they will just block content, period. NeilK 17:52, 16 August 2011 (UTC)Reply
FYI, the world is full of malevolent entities with government-like powers. They're called governments. (Brrrrummmm-Chhhh) ;)
The fundamental problem isn't the optional filtering switch, it's the background work on establishing a censorship-friendly categorisation system that can be read and reused by outside software and used to impose third-party filtering, the willingness to include the switch, and the difficulty that Wikipedia would then have in resisting legal demands to extend the censorship system once all the infrastructure was in place, and the crippling legal liabilities they might have if they'd already broken their own principles on non-censorship, and had the technical ability to impose broader censorship, and didn't. If WP implements a purely optional censorship system, then the next time some kid blows up their high-school, the lawyers will be poring over the kid's internet records to see if the kid ever accessed anything at all on WP about explosives, and if so, they'll be launching a lawsuit on the grounds that WP had had all the software in place that would have let them block the content, and had chosen not to use it.
If you show that you have the machinery and willingness to edit, and you choose not to edit, then you tend to lose the "simple carrier"-style assumed absence of liability for the material that you carry. That's why it's critical to stick by the principle that your charter forbids censorship, and why it's so important not to develop these systems. It gives you the defence that you are forbidden to censor by your charter, and couldn't technically do it even if you wanted to. Introducing this system would remove both those defences and introduce a whole new set of potential vulnerabilities and liabilities for WP that I don't think the committee have really thought through. ErkDemon 00:01, 18 August 2011 (UTC)Reply
There are no plans for any "background work on establishing a censorship-friendly categorisation system". The plan is to use the existing categorisation system, and say (for example) that this user does not choose to see images in this list of categories, including Commons:Category:Abused animals by type (and whatever else seems relevant). If a third party wanted to use these categories to censor images, they could have been doing that years ago. They do not need any changes at all to enable them to do this. WhatamIdoing 00:59, 24 August 2011 (UTC)Reply
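The model WhatamIdoing describes above (a per-user list of existing category names, no new labelling) reduces to a simple set-membership test. A minimal sketch, with category names taken as examples from this discussion rather than any actual MediaWiki API:

```python
# Sketch of the filtering model described above: an image is collapsed
# iff it belongs to at least one category the user has opted out of.
# Only the existing category tree is consulted; nothing is re-tagged.
# Illustration only, not MediaWiki code.

def is_hidden(image_categories: set, user_blocked: set) -> bool:
    """Hide the image iff any of its categories is on the user's opt-out list."""
    return not user_blocked.isdisjoint(image_categories)
```

Under this sketch, an image in "Nudes in art" stays visible for a user who opted out only of "Photographs of sexual intercourse", which is the distinction drawn in the comment above.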
No. What they are saying is that all it takes is one malevolent user to inappropriately tag harmless content with labels that will trigger the censorship system. Honestly, if you are liable to be offended by images on a given subject then you wouldn't really be viewing such an article in the first place. If you do encounter an 'offensive' image it is probably because a troll has uploaded something inappropriate to an article whose subject suits your sensibilities, and I don't think they would be helpful enough to censor-tag their work for you. --2.26.246.148 20:52, 17 August 2011 (UTC)Reply
"That's naive though. Suppose that I need to learn about Naturalism (art) - so I type in "Naturism" - and find (to my horror), not some crappy porcelain figurine of a shepherd, but instead, right there at the very top of the page, to the right of the lede - a picture of half a dozen naked people. A simple typo or misunderstanding of a term (that the reader is looking up in an encyclopedia precisely because they don't yet understand it) can result in such problems. Consider that there are some companies out there who might fire someone for looking at the picture at the top of Naturism on company time when they are supposed to be researching Rococo figurines! So precisely because I'm against censorship of Wikipedia - and because I demand the right to have full-frontal pictures of nudists at the top of the naturist page - I'd very much like a "No Porn Please! filter." SteveBaker 19:10, 17 August 2011 (UTC)" Not to mention ErkDemon's point just above yours. Sdmitch16 02:50, 19 August 2011 (UTC)Reply

So we start with censorship against nudity and cruel pictures and end where ? Political and religious censorship will follow soon. This is not really self-censorship, because someone will have to add a tag to each picture to categorize it. There the censorship already starts. 94.217.111.189 06:30, 21 August 2011 (UTC)Reply


[edit]

Forgive me if I have got the concepts wrong, but the way I see the licensing is that an ISP may not alter the text without providing the original source (or a link to it). If we detect a middleman tampering with cookies to deny users the ability to see these images on a permanent basis, should we pursue those people for violation of the copyright? And if not, does that mean the WMF CC license is unenforceable, and pointless? Obviously data manipulation could be happening already, but a cookie-based solution is an open invitation for low-level tampering. Inductiveload 17:44, 16 August 2011 (UTC)Reply

Technically, such an ISP would not alter what is sent by the WP servers, but would alter what the user requests. Not different from secretly altering (http or even DNS) requests for wikipedia.org with censopedia.example, say (which is of course also not a nice thing to do). On the other hand, already now I can imagine that the content of w:EICAR test file might not arrive in its original unfiltered form at every user's PC, depending on anti-virus policies along the line, a thing I am reluctant to call a copyright infringement.--Hagman 13:56, 21 August 2011 (UTC)Reply
No, hiding an image is not a copyright infringement. If you do not show material you do not need the copyrights to that material. What would be infringement is showing an image, but making the description page inaccessible, or showing articles but hiding the license and blocking the history. Zanaq 17:26, 23 August 2011 (UTC)Reply
Thank you for the clarification, that does make sense now. Inductiveload 01:06, 26 August 2011 (UTC)Reply
Copy+rights do not exist in the United States. Copyrites regulate the copying ritual; -NOT a RIGHT-.

Visual artists should be free to determine what is prejudicial to their honor in their belief according to the artist's culture. I support not allowing third-party sites to redisplay ANY visual art and the Eighth Circuit will examine this starting next month in the United States. Self-censorship is a fundamental human right in my belief stated in my Appellant Brief. CurtisNeeley 21:24, 27 August 2011 (UTC)Reply

conservative POV bias

[edit]
Kruzifix
Image hidden by Wikipedia opt-in personal image filter. This image is categorized as: religious POV (christian) ; health advice (dangerous for vampires) ; contravention to "Thou shalt not make thee any graven image" (Exodus 20:4) ; torture ; corpse ; barely clothed male person -- the categories you blocked are marked yellow -- click link to show image

Any system that attempts to distinguish one image from the next will require individual labelling. In turn, any such effort will, I argue, become dominated by a conservative cultural and religious POV.

I would posit, for example, that the vast majority of images labelled as 'objectionable' will be ones relating to human sexuality and will focus in particular on what social conservatives consider 'aberrant'. It sickens me to think that every depiction of homosexuality, the naked human form or gender non-conformity, whether sexually explicit or not, may have to bear a stigmatising label reflecting a particular prejudice. Such labelling may in turn help trigger external filtering software that already reflects a conservative bias, thus further harming access to Wikipedia.

While I fully admit the global political left has far from consensus views on what is acceptable free expression, the number of images that might fall afoul of "political correctness" or "hate speech" concepts will scarcely compare to the scope and willingness of social conservatives to push their POV. I offer Conservapedia as evidence of how far they will take their agenda and this current proposal is simply opening the door and inviting them in. Added to this situation is the irony that many forms of left censorship are on behalf of cultural and religious conservatism stemming from the developing world, the most prominent example being the self-censoring response to 'offensive' images of Mohammed.

Both coming and going, an image tagging effort will be a boon for conservative views and a blow against the disinterested presentation of knowledge. --Bodhislutva 04:09, 21 August 2011 (UTC)Reply

I agree with your assessment and I'd like to add that the idea of a catch-all category for "NSFW"/"objectionable content" is thoroughly incompatible with the notion of neutrality (which entails cultural neutrality). The idea, implicitly and explicitly professed by several people on this talk page, to actually host and maintain such a category or categories within Wikimedia is bordering on the obscene. --87.78.45.196 04:44, 21 August 2011 (UTC)Reply

We could introduce a Dutch POV bias instead? Images relating to Firearms and (most) fireworks blocked, some nudity permitted, sex blocked. (Is that roughly correct?). We could differentiate between different political leanings if you'd like. It'd be interesting to compare with what's objectionable or not in other countries. O:-) --Kim Bruning 14:08, 23 August 2011 (UTC)Reply

It'd be interesting to compare with what's objectionable or not in other countries.. It wouldn't just be interesting, it would be _fascinating_. If we waited long enough, we could probably find scientists willing to fund us just to collect that data. It's so fascinating, it might just be worth doing. :) --AlecMeta 23:36, 23 August 2011 (UTC)Reply

Which images are likely to be subject to filtering?

[edit]

Is [image] included in 'nudity', given that it's a block of stone and not a live human, and a classic work of art at that? Is [image] included in 'sex', given that they're not having sex, but they are kissing while naked? Is [image] included in 'nudity' given that everything is visible except a very small area of gluteal cleft? Is [image] included, given that a small area of gluteal cleft is visible? Are [image] and [image] included in 'gruesome medical images'? Some people are pretty squeamish you know? [Image] makes me wince because I can tell it must have hurt, so will I be permitted to hide it? Does [image] count as nudity? What about [image]? What does an image of semen come (no pun intended) under, if it is not a sexual image, but a simple depiction of white fluid in a puddle? If it counts as sexual, do microscopic images of sperm? If not, at what level of zoom are the harmful sex rays attenuated? Beorhtwulf 15:53, 23 August 2011 (UTC)Reply

I don't see how we could keep such images out of such filter categories. With just a few filter categories, they'll inevitably be interpreted as widely as possible. If someone says "This offends me and should be filtered", we have no fair or objective way to say "no, your offense doesn't count".
If you dislike filters, there is an upside-- almost no one would use these kinds of filters because of how wide they would become. --AlecMeta 23:31, 23 August 2011 (UTC)Reply
In terms of a "sex and nudity" filter, I seriously doubt that anything in Commons:Category:Nudes in art would be on the list. Photographs of humans with exposed genitals probably would; it's very likely that the hundreds of young-white-male vanity photos at Commons:Category:Human penis would be. We can take the simple and very likely welcome step of filtering Commons:Category:Photographs of sexual intercourse without blocking every image that shows a little bit of skin or has some remote connection to reproduction.
In short, there's no need to assume that the very people who have permitted and even encouraged sexually explicit photographs for years are somehow going to turn into repressive, censorious idiots now. WhatamIdoing 17:57, 24 August 2011 (UTC)Reply
But what would happen when, inevitably, people do want to add something from Commons:Category:Nudes in art to a filter? Both "Nude" and "Art" are very fuzzy concepts that defy definition, but "This Offends Me" is a simple concept.
I agree with you, our existing community wouldn't morph into censorious crusaders-- our existing community would probably just ignore the filters. But allowing filters might bring whole new groups of editors here-- the kinds of editors who do like censorship and do want to spend all day classifying images into appropriate categories. Tomorrow's editors won't necessarily share the values of yesterday's community, and we have to anticipate that in our thinking.
Image categories haven't been controversial because they haven't been attached to filters. If we created a limited number of filter categories, it seems to my mind they would be eternally controversial with a trend toward over-filtration. --AlecMeta 04:17, 25 August 2011 (UTC)Reply
Fundamentally, I trust that the community is, and will always be, able to handle this. We've already figured out what belongs in that category. I don't think that we're going to tend towards over-filtration, and even if we did, the problem is self-correcting: the more false positives that get picked up, the more likely people will be to turn the tool off altogether.
The part that you don't seem to quite grasp is that nobody's proposing "creating a limited number of filter categories". The proposal is to use the existing category tree, not to create new categories. The questions look like, "Shall we put the five-year-old Category:Nudes in art on the list of things that are filtered under a tick box labeled 'sex and porn'?" (Very likely answer: no.) The questions that the community will face do not look like, "Shall we make a brand-new category of 'Things that offend me'?" WhatamIdoing 17:18, 25 August 2011 (UTC)Reply

Other

[edit]

Is the 'beyond-Perfect' proposal controversial?

[edit]

I support the proposed filter as 'good enough' for complex reasons. But I'm also very interested in understanding where others stand on the issues.

What I call the 'perfect' system would be:

  • Infinitely Customizable: each user can specify any list of images or categories.
  • 1-Click-To-View-at-all-times: images are never denied, just temporarily hidden.
  • Only upon user request: only the user can turn on the image-hiding feature.

My 'Beyond-Perfect' system would also have two additional features:

  • Not developed or promoted by WMF: no WMF resources go to support it.
  • Active only on non-WMF servers / clients

Is there anyone who finds filtering so controversial that they'd 'Vote No' on what I've called the beyond-perfect proposal? --AlecMeta 22:11, 19 August 2011 (UTC)Reply

The problem is that it's circumventable. You can lock down what a user is logged in as, you can prevent them from changing that cookie, you can deprive them of JavaScript, and you can stop them from accessing specific parts of a website that allow undoing the filter, thus preventing your "1-click" from working. Samsara 22:52, 19 August 2011 (UTC)Reply
Making it easier for person A to censor person B's reading habits, that's definitely controversial.
I'm curious how controversial this is in the ideal, setting aside slippery slopes. Do we agree a reader 'has the right' to 'censor' their own reading, or is that itself a source of controversy? --AlecMeta 02:40, 20 August 2011 (UTC)Reply
This is an interesting area. It's probably in the class of things where it is a good thing for the individual but a bad thing for society as a whole. The reason it is bad is that it avoids challenging preconceptions with fact. This is part of the concern I think, of the "filterbubble" meme (I haven't read the book). So does the reader "have the right" yes, it's in the nature of the license that they have the right to create derivative (censored) works. But nonetheless we individually and as a community have views about those derivative works. We are generally pretty relaxed about "Wikipedia for Schools" and Wikibooks for example. We are not so happy when people "print on demand" an almost random and un-quality controlled set of articles as a "book", for a ridiculously high price - using our work to take money for an inferior product. We would probably be deeply disturbed if, for example, Holocaust revisionists created a version of Wikipedia that excluded or distorted all mentions of concentration camps. They would of course have the right to do it (in most countries), but we certainly would not wish to be a purposeful enabler of it. Rich Farmbrough 03:01 20 August 2011 (GMT).
I honestly don't see anything relevant to the filter. Nazis can still create their own encyclopedia using Wikipedia materials selectively without the filter. -- Sameboat (talk) 03:22, 20 August 2011 (UTC)Reply
"We certainly would not wish to be a purposeful enabler of it." And in fact the report recognises this risk and warns against it in another recommendation in the same section: Recommendation 10, which I encourage all to read. Rich Farmbrough 15:17 20 August 2011 (GMT).
And censors can create their own copy on their own servers, paid for with their own money. But money donated for free content should not be used. --Bahnmoeller 16:09, 20 August 2011 (UTC)
Much wisdom in the concerns of resource use. I donated knowing this was still in the works. Most of our donors probably had no idea it was coming. In May 2010, when I had donated to 'protect wikipedia' and it was briefly announced we would start deleting offensive images-- in that case, I absolutely felt a sense of 'betrayal'-- that's not what I had donated to, indeed it was the exact opposite of what I had donated to.
I think the price tag will be very small, but we have to be very careful about resource usage. The language we use to solicit donations is 100% antithetical to the language we used in the image deletions. We need a 'sense' of how "average donors" feel. If someone gave us the system for free, I'm comfy with us using it. But using large chunks of our own funds on this particular project may not sit well. It's okay by me, but I can't speak for all the donors, and most importantly, I have no clue at all how this policy will affect future donations-- positively or negatively.
Donors don't 'own' us, they're not 'entitled' to dictate our future, but realistically, if they wanted to build a filter, they probably would have given their time and money to someone else, and that's very much something to consider. --AlecMeta 14:47, 23 August 2011 (UTC)Reply

Why did it take you so long


to add a feature that will allow me to hide the pornographic images on Wikipedia? Somehow there is no Firefox add-on to do this yet (at least not effectively). Las tortugas confusas 13:55, 20 August 2011 (UTC)

Because one person's pornography is often another's art -- and vice versa; there is no commonly agreed-upon definition of "pornography". One example is that Oregon, which is part of the United States, has no legal definition for the word "obscenity" due to the free speech clause in its state constitution. And since there is no simple solution for this issue, it hasn't been tackled -- & many of us are now convinced that any solution would be worse than doing nothing. -- Llywrch 16:22, 20 August 2011 (UTC)
"Because one person's pornography is often another's art " Which is why people can turn it on or off. Ottava Rima (talk) 02:49, 21 August 2011 (UTC)Reply
So you support my right to filter out paintings of dead babies representing the Christ child, or of the Crucifixion which are horrifyingly graphic? And are you willing to defend my right against those who don't understand my dislike for these images? -- Llywrch 03:22, 22 August 2011 (UTC)Reply
I'd just like to know where on wikipedia people are finding porn. I've been actively looking, and can't seem to come across any, so it's hard for me to imagine that someone would just stumble upon some without intention. I would argue the 'problem' this filter is aiming to 'fix' isn't really a 'problem' at all. It is something that this site (and every other content provider on the internet) simply should not be concerning themselves with. Self censorship is up to the individual and it's up to that individual to figure out how to make it work... or simply don't use the content provider. That's your choice. Pothed 17:13, 20 August 2011 (UTC)Reply
There is stuff that some people might classify as porn in Commons (e.g., pictures of genitalia, images of people from porn websites, probably even images of people engaging in some form of sexual intercourse); it's one of those problems which you only solve if you know the answer first. And let's be frank: if you're serious about finding porn, searching on tumblr is a better bet. (Hell, even a Google search is a better bet than sifting thru all of the possible categories in Commons.) -- Llywrch 22:28, 20 August 2011 (UTC)Reply
And someone who doesn't want to have any pictures of any nudity on their computer? Too bad for them? If their country bans it or they are at work, then too bad? We need to make sure that as few users as possible are able to access Wikipedia because you want to make a political statement against those you feel aren't as "liberated" as you? That is really selfish and uncivil. Ottava Rima (talk) 02:50, 21 August 2011 (UTC)
And if someone thinks wearing a skimpy bikini constitutes nudity (showing too much skin), won't they have the right to be upset if they find those images aren't filtered out? "Why are you using their standard of nudity and not mine?" Evil saltine 21:25, 21 August 2011 (UTC)Reply
Why do you want to force someone to look at something they don't want to look at? Ottava Rima (talk) 21:44, 21 August 2011 (UTC)Reply
Who is "forcing" anybody to use Wikipedia? --87.79.214.168 21:47, 21 August 2011 (UTC)Reply
I don't. I support optional filtering based on the image categories we already have, not creating a whole new set. Evil saltine 21:52, 21 August 2011 (UTC)Reply
It has been pointed out that the current set of categories will be used for the new set of categories. We already have the images put into the specific areas. Ottava Rima (talk) 21:54, 21 August 2011 (UTC)Reply
OK. So activating the "Nudity" filter will hide all images under Category:Nudity (and its subcategories)? That seems to make sense. Evil saltine 22:06, 21 August 2011 (UTC)Reply
What filter will hide all images of homosexuals, all images of interracial couples or all images containing background persons wearing skimpy swimsuits or "blasphemous" T-shirts (a point raised by Trystan)? Which current categories cover those? —David Levy 22:32, 21 August 2011 (UTC)Reply
There are already categories for homosexuality, swimsuit, etc. The article page shows that even a Pokemon image could be hidden. Ottava Rima (talk) 22:35, 21 August 2011 (UTC)Reply
Firstly, are you under the impression that every image of a homosexual person is categorized as such? That isn't close to true. Images directly pertaining to homosexuality (e.g. a photograph of a gay pride parade) fall under such categories, but we don't place "homosexual" tags on images simply because their subjects are gay. The planned system, however, will require us to (unless we reject the widespread belief that such images are objectionable).
Secondly, you appear to have overlooked the word "background." A photograph with an incidental person in the background isn't categorized accordingly. The planned system, however, will require us to do that. (See #Negative effects of using categories as warning labels on the usefulness of categories for a more detailed description.) —David Levy 23:09, 21 August 2011 (UTC)Reply
Are you honestly trying to say that a picture about "homosexuality" is the same as a guy in a picture who may or may not be gay? Wow, that is really weird. Ottava Rima (talk) 23:44, 21 August 2011 (UTC)Reply
No. That the two differ (and the latter currently isn't categorized) is my point. Many people object to the existence of homosexual people (such as Elton John, who's openly gay) and don't wish to be exposed (or have their children exposed) to the sight of one. This is one of countless "potentially objectionable" image subjects not covered by any category currently in use. Other examples have been cited. —David Levy 00:19, 22 August 2011 (UTC)Reply
Who, other than you, has "pointed this out"? It simply isn't true (or even feasible). The current image categories are neither designed to label objectionable content (much of which is incidental, and therefore not categorized) nor presentable in such a context (due to their sheer quantity, among other issues). —David Levy 22:32, 21 August 2011 (UTC)
After the porn dispute on Commons, the categories were reworked specifically to put the various types of pornography and other content together so that excess images would be removed. That has been a major issue over there for the past year or so. Where have you been? Ottava Rima (talk) 22:35, 21 August 2011 (UTC)Reply
You're missing the point. No one is suggesting that such categories won't play a role in the planned filter system. —David Levy 23:09, 21 August 2011 (UTC)Reply
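The mechanism Evil saltine describes above (activating a "Nudity" filter hides every image under Category:Nudity and its subcategories) amounts to a walk over the category graph. The sketch below illustrates only that idea; the category names, image titles, and dict-based "category graph" are invented for this example, and this is not how MediaWiki actually implements anything.

```python
# Illustrative sketch only: hiding images whose categories fall inside a
# filtered category subtree. All data here is hypothetical.

def collect_subtree(root, subcategories):
    """Return the root category plus every subcategory reachable below it."""
    seen = set()
    stack = [root]
    while stack:
        cat = stack.pop()
        if cat not in seen:
            seen.add(cat)
            stack.extend(subcategories.get(cat, []))
    return seen

def hidden_images(images, filtered_roots, subcategories):
    """List the images carrying any category inside a filtered subtree."""
    blocked = set()
    for root in filtered_roots:
        blocked |= collect_subtree(root, subcategories)
    return [name for name, cats in images.items() if blocked & set(cats)]

# Hypothetical data:
subcategories = {"Nudity": ["Nude art"], "Nude art": []}
images = {"Venus.jpg": ["Nude art"], "Spider.jpg": ["Arachnids"]}
print(hidden_images(images, {"Nudity"}, subcategories))  # prints ['Venus.jpg']
```

Note that such a walk only catches images that actually carry the category tags; as David Levy points out above, incidental content that was never categorized would slip through it.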
(Responding to Ottava Rima @ 02:50, 21 August 2011) If someone doesn't want any pictures of nudity on her/his computer after reading Wikipedia, then that person should be careful about which articles to read in the first place. For example, I've worked on a number of articles about 5th, 6th & 7th century European history & didn't encounter one picture of a nude person. Hell, I've read or worked on a couple thousand articles relating to Ethiopia & not once did I encounter a picture of a nude person. Not even the pictures of topless native women which made National Geographic so popular with grade school boys back in my youth -- before the Internet brought pictures of boobies a few mouse-clicks away. -- Llywrch 04:45, 22 August 2011 (UTC)Reply
What? How does that make sense? So people aren't able to get pictures about, say, the herpes virus without being subjected to genitalia covered with it? People aren't able to look up some Greek mythological figure without seeing some representation of the individual having graphic sex? There are plenty of historical or "cultural" articles that have pictures containing nudity that are just not safe for work or school. Ottava Rima (talk) 16:39, 22 August 2011 (UTC)Reply
This is the butcher shop, the factory floor, where things are made. Deciding what you want to cut out is up to you - and when I say "up to you", I don't mean that you should expect us to perfectly categorize every image that might offend you in some impartial way, because that's just not doable. Wnt 19:05, 23 August 2011 (UTC)Reply

What we have here


is a failure to communicate and yet another complete failure of Wikimedia/Wikipedia governance.

And I'm not even talking about the "referendum" itself, but about a failure in a lot of everyday communication on Wikipedia. Instead of throwing a massively intrusive measure like an image filter at the supposed problem, we should at least try to look at the real problem, which plays out on Wikipedia talk pages all the time. Sometimes it works out, sometimes it doesn't.

Consensus is consensus. If you don't like an image in an article, take your concern to the talk page. If others provide overriding reasonings for why a particular image holds explanatory value for the article, that's that. Get over it. But deal with it locally, and deal with it yourself.

Take the example of arachnophobia. On the English Wikipedia, editors at one point agreed to remove actual spider images from the article, not because "people are offended/irritated" but because including actual spider images defies the purpose of the article, rendering it useless to arachnophobics who needless to say are the most likely to look this up on Wikipedia in the first place. On the English Wikipedia, this overriding rationale to replace the photos with non-phobia-triggering drawings prevailed in large part because enough people showed good judgment and didn't keep senselessly pointing to "WP:NOTCENSORED".

On the German Wikipedia by contrast, the situation is different. On the German arachnophobia article, those images are still there, because self-appointed anti-censorship watchdogs keep restoring those images and ignore or shout down anyone raising the point at the talk page. The same reasoning (that those photos defy the purpose of the article) does apply just as much on the German article, but it keeps getting ignored and shouted down. And that is the real problem. When true, discussion-based consensus doesn't prevail against a group of "established" "editors".

The answer to that real problem is of course not an image filter. The answer is better governance. Editors, and admins in particular should look at article histories, go to the talk pages and read the presented reasonings from all sides. Most of the time, a clear judgment is possible as to whether the inclusion of a particular image or type of image does or does not ultimately make sense in the respective article. That then is the consensus, whether or not it takes an RfC or RfAr to get people to accept it.

If we only had strong enough governance so that Wikipedia could be at least somewhat more discussion- and consensus-based and not the pure shoutocracy it has turned into, there would be no need for silly things like an image filter.

Some might point out that many people who want images filtered are only readers. But the answer we give to those people cannot seriously be to essentially say, as a project and community, "yeah, we are also not sure about those images, some want them and some don't, and we as an encyclopedic project are not able to make up our collective mind". What a declaration of bankruptcy! It's basically unconditional surrender to the exact kind of problem we implicitly claim we are here to solve when we call ourselves an encyclopedic project. --87.78.45.196 00:55, 21 August 2011 (UTC)Reply

Your example is stupid - in both Wikipedias the rules are the same. In the EN Wikipedia a certain group of editors has got their way; in the DE Wikipedia the majority was different. But this new "feature" would ignore the discussion about single articles and the appropriate image to illustrate that article, and will allow certain users in the back of Commons to destroy hundreds of articles by censoring images. --Bahnmoeller 15:33, 21 August 2011 (UTC)

"stupid". Oh well, I'm no stranger to putting a conversation ender near the beginning of my own posts, so I guess I can't complain. Anyway, it may not be the most suitable example, but it's the one I could immediately think of where a well-defined group of people (arachnophobics) strongly favor the omission of a certain type of images (photos of spiders) -- and where it makes sense to omit those images based on some people's objection.
In the EN wikipedia a certain group of editors has got their way -- No, in the English Arachnophobia article, reason prevailed. In the German article, reason was single-mindedly ignored by a bunch of editors who are a tad paranoid about "censorship".
But this new "feature" would ignore the discussion about single articles -- Yes, that is exactly my point, and one of the reasons I think the image filter is a uniquely bad idea. --87.79.214.168 20:40, 21 August 2011 (UTC)Reply
It's funny, I take a different approach. I think the spider images shouldn't have been taken out of EN, and one of the benefits of this filter will be that we can make purely editorial decisions without having to stress over the emotions of our readers. Put "the best" spider image at the top, but let readers who are upset by it choose to hide it, so that they instead see a shuttered image. This will remind them that they are missing out on information because of their preference, but it won't be so brutal as to shove a spider in their face against their will either. --AlecMeta 18:20, 22 August 2011 (UTC)Reply
No, you are wrong. Spider images serve no encyclopedic purpose on that particular article. The en.wp editors were correct in removing the image. The German Wikipedia is generally more authoritative and normative, and therefore the image was kept there much more to prove a point regarding "censorship" than because of any actual editorial considerations. --195.14.220.250 18:28, 22 August 2011 (UTC)Reply
Well, I don't mean to single out and judge that specific case, I wasn't part of the discussion, I don't mean to say it was 'wrong'. And we shouldn't be troubled that two different groups of people reached different conclusions about a content decision-- that's the way it always works. I accepted the premise that we removed encyclopedic pictures because they were causing negative emotions-- if I didn't accept that premise, it wouldn't be a good example/metaphor. If it was just boring run-of-the-mill decision, then just consider a different example. :) --AlecMeta 14:27, 23 August 2011 (UTC)Reply
Looking at the two, I think that the German version of the article is much more informative in general. The photo removed from the English version is not the same as the photo used in the German version - the English version just had a random spider, which might or might not be particularly disturbing; the German version shows an arachnophobic person directly confronting a massive spider by holding it in his hand as therapy. Now if confrontation of fear of a spider is good therapy, perhaps confrontation of a fear of an image of a spider is also good therapy.
But what's most important to remember is that Wikipedia articles are not for consumers only. I see the wrong attitude all the time - people think coverage of a company should only include things of use to people who want to buy their products, or coverage of medical issues should only include the perspective of the patient. Not at all. We should be providing useful information for the investor in the company, the competitor to the company, and all those who lazily daydream about whether in a year or five they might become so. And our articles about medical conditions should give information that is informative to scientists, doctors, med students, pre-meds, and kids taking biology class and thinking about their major. That means yes, we show a picture demonstrating the reality that with confrontation therapy, you can somehow talk an arachnophobe into handling a giant spider I wouldn't touch because I'd be worried it could somehow get its little hairs embedded in the corneas of my eyes or just plain bite me. We cover the data, all the data, for all the people. We are not training consumers dependent on an expert elite, we are training the people of the world to be the expert elite. Wnt 18:45, 23 August 2011 (UTC)Reply

Are we addressing the right problem?


Much of this debate appears to be emotional, or about values that clearly vary from person to person. Could someone at the Wikimedia Foundation (or in some independent and trustworthy organization) provide basic, 'objective', neutral information about the following simple questions?

- Who is actually asking or pushing for this 'Personal Image Filter' in the first place, and why is this issue coming up now, since Wikipedia and sister projects have been working well for years without it?

- As of this writing, there are 3,715,536 articles in the English version of Wikipedia. What fraction of these pages (or of other Wikimedia projects) could possibly be considered controversial (for obvious categories, such as explicit sexual references, extreme violence, etc.), even roughly speaking?

- What clear, quantitative, incontrovertible evidence is there that governments (or 'authorities') are actually controlling or restricting their populations' access to Wikipedia and associated projects?

- Why are the current practices of 'Neutral Point of View' and 'Least Astonishment' not sufficient for Contributors and Editors to include only materials that are appropriate to the context of their article in an encyclopedia, book, definition, etc? In other words, why is this a 'Viewer' issue and not simply a 'Contributor' issue, or possibly a quality control issue?

Since there are millions of public sites and pages dedicated to potentially offensive subjects, I would not expect curious people (or children, or random visitors) to search Wikipedia for these materials, nor for them to find much offensive content (compared to the bulk of the web outside Wikimedia). To the extent our goal is to generate open access reference and educational materials, are we not losing a lot of time and energy in trying to address a side issue?

Michel M Verstraete 22:35, 21 August 2011 (UTC).Reply

Great questions, thanks Michel. And thanks to AlecMeta for giving very appropriate answers for them. SJ talk | translate   02:51, 6 September 2011 (UTC)Reply
Michel M Verstraete raises lots of great questions. I'm not with the foundation in any way, but let me try to answer those I can:
"Who is actually asking or pushing for this 'Personal Image Filter' in the first place" -- The only people I know who really really want it are Muslim editors of the English-language article on Muhammad. We can't be NPOV without images of him, but there's something very sick and wrong about making old people who barely speak English learn about computers or multiculturalism before they can even happily read the article on their favorite subject. I wish they already had a browser that they could happily use, I wish they had learned multiculturalism before they came here. But the image issue takes what should be a positive experience and turns it into a negative one. People on talk can sometimes be very nice to the people who are upset, but occasionally the very worst of our community replies, creating what could be an "intensely memorable, intensely negative experience". I've talked a lot with that group, and it's important to me that we find a way to 'have our cake and eat it too' -- we know it upsets them, we know how to fix it, and we know how to do it while still "modeling our values" too. So, even though I was one of the biggest opponents of the last bout of censorship, I think at this point you could reasonably say I'm one of the people 'pushing for this'. Although I myself would never use it, I do want our low-English-literacy Muslim readers to be able to have access to it. And of course, I want it for people in their same situation whom I just haven't personally interacted with, even though it was my experience with that particular group of readers that convinced me we should do something minimal to help them avoid offense.
why is this issue coming up now, since Wikipedia and sister projects have been working well for years without it? -- I have a GREAT answer to this. The only problem is that it's an answer I made up that doesn't reflect the board's point of view. But my answer is: We need it now because we're moving beyond Wikipedia now. This filter wasn't essential to Wikipedias because Wikipedias operate exclusively on NPOV. But the encyclopedia is only the very beginning of a library, and usually it's one of the least-controversial. Our future volumes may require us to be far more offensive and controversial than we had to be just to beat Britannica. In the future, we're not just trying to beat Britannica, we're trying to beat all books. --AlecMeta 17:15, 22 August 2011 (UTC)Reply
The only people I know who really really want it are Muslim editors of the English-language article on Muhammad -- Ah? I was wondering about that. I thought the main problem for Muslims was not primarily looking at depictions of Muhammad, but much more fundamentally the fact that there are depictions of Muhammad. --195.14.220.250 17:19, 22 August 2011 (UTC)Reply
Well, there's plenty of anger about the mere existence of the images, but our hands are totally tied there. Some people want deletion, but we can't give those people what they want, and we can't even seriously consider it, in my estimation. If someone's only problem is that we're giving people content that they want, then they, in fact, have no legitimate problem with us-- sharing information is what we do and what we are.
But a much larger group, and a lot of the raw emotions, come not from abstractly knowing about the images, but from personally encountering them unexpectedly.
When people come and say "this image upsets virtually all of the people around me, so I don't want to show it on my computer screen unless I actually click on it, so I won't upset the people around me"-- we can, in fact, help our readers with that much smaller problem. And we can do it while staying 100% true to ourselves.
I was raised by tv-- I have a very difficult time imagining what it's like that total strangers could have so much power over your emotions just by showing you a single image that wasn't intended to offend. But the fact of the matter is we're writing for Earth-- all of Earth, and believe it or not, we do, in fact, have an unbelievable power over some of our global readers. We can upset or enrage them just by writing a good article. Or we can not upset them, simply by using this feature. And we don't actually want to intentionally upset people over and over and over if something this simple would really solve their problem.
There will be lots of people who aren't happy with this system because they'll know the images 'exist'. But some people will be satisfied. Even if they're just a tiny minority, that's okay- it's kind of a tiny feature. --AlecMeta 17:32, 22 August 2011 (UTC)Reply
Maybe I don't quite understand this type of self-censorship. To me, these people are rather like little children who "hide" by closing their eyes. And more importantly, it's weird to help people do that to themselves. It implies that sticking your head in the sand is healthy and good. Also reminds me a bit of the Formerly Obese Man from the Onion Movie.
I for one believe that maybe the core idea of encyclopedias is that, while everyone is entitled to their own opinion, nobody is entitled to their own reality. When will we introduce an option to view the Conservapedia version of an article instead of the Wikipedia version? --195.14.220.250 17:47, 22 August 2011 (UTC)Reply
Maybe I don't quite understand this type of self-censorship. To me, these people are rather like little children who "hide" by closing their eyes. And more importantly, it's weird to help people do that to themselves. It implies that sticking your head in the sand is healthy and good.
I hear you. I do not understand it either. To complicate matters, I don't know that young wikimedians from Muslim nations fully understand it either. We have the same thing in all cultures-- I'm American, and my grandfather used to get quietly furious if anyone dared to enter his home wearing a hat-- I have no earthly clue how he got the idea that this was offensive, and I have no idea why he could never understand that young people weren't meaning offense. But ya know-- he got upset every time it happened, and eventually, even though I didn't understand, I started asking my cousins to remove their hats before visiting him, just to not upset him. If someone from another culture asked me to try to 'explain' it, I would be completely at a loss-- I don't understand it in the slightest, but if you convince me something is upsetting you and I can easily fix it-- shouldn't I?
While everyone is entitled to their own opinion, nobody is entitled to their own reality
You say in a phrase what took me a paragraph to ask above. Is nobody entitled to their own reality, or is everybody entitled to their own reality? To put it another way, are we, as a movement, ready to expand our scope beyond "reality-based information" to "opinion-based information"? The "Information Filter Project" is the last place I'd have chosen to start experimenting with "opinion-based information"-- but it's still educational, it's still "in scope", people do want it, and we are the only ones in a position to give it to them. --AlecMeta 15:37, 23 August 2011 (UTC)
Why are the current practices of 'Neutral Point of View' and 'Least Astonishment' not sufficient - again, this is just my answer. Where we're going, there won't always be a single NPOV. Most books in a library aren't written neutrally. Sometimes art is intentionally astonishing. "A Clockwork Orange" is a hard-to-watch film because it is so upsetting, and yet it educates about the horror of violence in a way that few other films do. Tomorrow's "A Clockwork Orange" may be released under Creative Commons and it may be hosted on Commons-- we want to buy more intellectual freedom for our contributors, and a filter will help buy it. --AlecMeta 17:15, 22 August 2011 (UTC)
It won't. That's a fact that the American Library Association accepted a long time ago. [16] --Niabot 17:51, 22 August 2011 (UTC)
Well, see, let's think about that for a second. The ALA dramatically opposes a library endorsing a rating system-- the library tries to be neutral. So consider w:Index Librorum Prohibitorum, a list of books that were prohibited by the Catholic Church, the last edition of which appeared in 1948. FORCING library patrons to use it would be horrendous. But it would also be bad for a university library to refuse to let its patrons consult the w:Index Librorum Prohibitorum or access it.
We can have a whole section of our library that is "Lists of prohibited books", so long as we never have one special default list. Indeed, knowing what's objectionable is just another kind of knowledge, as long as it's not imposed-- with that knowledge, we could even have our own "banned images week" to promote high-quality images that have been objected to. --AlecMeta 14:38, 23 August 2011 (UTC)Reply
Once it exists, we shouldn't censor it; not even the Index Librorum Prohibitorum. At the same time, I do not think we should generate a new and updated Index Librorum Prohibitorum, as it is not in line with our own sense of morality to do so. --Kim Bruning 20:18, 23 August 2011 (UTC)Reply
But that information set already does exist-- in the collective minds of all our readers. They want to 'upload' that data to us. It's the data of who hates who and who hates what-- it's ugly data, data that is very, VERY different from Wikipedia data. It's the data of primates and how images affect their emotions. The data could empower evil third parties, but it could just as easily empower benevolent third parties. It's hard to get people to admit prejudices-- this dataset will show the entire planet's biases and prejudices, live and in living color for all to see. If we let people collect that data here, the world will learn something very valuable from it-- IF we do it right. --AlecMeta 20:42, 23 August 2011 (UTC)Reply
I think a library would happily collect the Index Librorum Prohibitorum, but I doubt they would require the books it names to carry labels so identifying them, or integrate the Index as a filter into their search catalogue.--Trystan 23:54, 23 August 2011 (UTC)Reply
Trystan is right. Additionally, a public library will not list either the MPAA's film ratings or the Entertainment Software Rating Board's ratings for games in their catalog system, and they will not stop children from checking out "inappropriate" materials, whether those be books like Joy of Sex, R-rated movies, or M-rated video games. That would be imposing (or endorsing or enforcing) someone's idea of what's acceptable for a particular type of reader on all of their readers.
But they will cheerfully list Hustler magazine under "Pornography -- Periodicals" and Stephen King's writing under "Horror fiction", and they will not expurgate the film and game ratings from the materials themselves, even though individuals choose to use those labels and ratings to avoid material that they do not want to see. WhatamIdoing 18:58, 24 August 2011 (UTC)Reply
This is an excellent point, Trystan et al. The only thing I can say is that a library is a physical place, usually with shelved content. Our content can be viewed everywhere, often in very public places, and maybe that creates a special need for our patrons that traditional libraries don't have. But there's no escaping the conclusion you all reach-- this is a step away from our 'purest ideology' of Wikimedia's purpose. Sometimes ideology must give way to flexibility and strategy, and this may be one of those times. --AlecMeta 21:26, 24 August 2011 (UTC)
The problem developed because Wikipedia has an absurd amount of pornography, and that drew a lot of complaints, especially with our high number of child users (and it being illegal to allow them to see such images). The surveys showed that there were over 1000 white penises but only 1 black penis, negating any evidence that there was truly an "encyclopedic" reason for them. However, a vocal minority of users caused an uproar when the porn started to be trimmed down. It was suggested that we could create a filter system so that people could have the porn and others could ignore it. It was a compromise. Ottava Rima (talk) 23:48, 21 August 2011 (UTC)
Can you give any sources for your claims that:
  • "... Wikipedia has an absurd amount of pornography and that drew a lot of complaints, ..."
  • "... there were over 1000 white penises but only 1 black penis, ..."
  • "... a handful of vocal but a minority of users caused an uproar when the porn started to be trimmed down."
  • "It was a compromise."
Otherwise I will have to ignore your opinion entirely, since it seems way overextended and one-sided. --Niabot 00:49, 22 August 2011 (UTC)Reply
The statistics were all in 3 reports. I assumed that you bothered to read them before making such responses, being incivil, and vandalising here. I guess I assumed too much. Ottava Rima (talk) 01:25, 22 August 2011 (UTC)Reply
Give me a link to those reports and cite the related parts that weren't written by a good friend of Sue or his daughter. PS: I'm curious why you got banned indefinitely from EN and now proudly claim that others are uncivil or vandals. --Niabot 01:35, 22 August 2011 (UTC)Reply
"Give me a link to that reports and cite the related part of it" No. It is on the article portion of the page. You have proved that you are here inappropriately because you failed to actually look and see the basis of the filter and what the filter is supposed to do. Merely jumping in here and making all sorts of ignorant comments is not appropriate behavior. And Niabot, there are multiple websites right now watching this discussion and making it very clear that you have a major track record of inappropriate behavior. Unlike you, I wrote articles that were so respected that a Foundation Board member and an Arbitrator proxied them onto Wikipedia during my ban, and the articles were on some of the most important topics [17], [18], and [19]. You've merely put together a handful of stub like articles on obscure anime or uploaded lots of hentai that is dubiously within CC-BY standards. Your block log [20] is quite impressive. It seems rather clear that you want to force as many people to look at as much porn as possible for whatever reason. Ottava Rima (talk) 16:46, 22 August 2011 (UTC)Reply
(1) Does file:Human vulva with visible vaginal opening.jpg qualify as "porn"?
(2) Who is "forcing" anybody to use Wikipedia? --195.14.220.250 16:53, 22 August 2011 (UTC)Reply
Nice insults. I didn't expect anything better from you. A "handful" is now comparable to dozens. Hentai is dubiously within CC-BY standards? I laughed again, a little. I was effectively blocked two times on the German Wikipedia; all other blocks were removed soon afterwards because they were false. Not a single one was related to a sexual topic. To me it seems very clear that you blatantly insult others with false claims and that you would do anything to harm freedom or the project. That's my opinion and last word. I don't see the need for a further discussion based upon lies. --Niabot 17:46, 22 August 2011 (UTC)Reply
1. There were no insults, so don't pretend otherwise. 2. The hentai is from a dubious source and randomly pulled from the internet. 3. And not being blocked for a sexual topic doesn't mean you don't deserve to be. We are a project with children, and your merely interacting with them while having explicit images on your user page at Commons is a serious behavioral problem. Ottava Rima (talk) 02:44, 23 August 2011 (UTC)Reply
What about some research? Look here: Category:Intact human penis. How many are black?
Whether the amount of porn on Commons is "absurd" I can't decide. But the number of white men who like to show off their dicks clearly is :-) Adornix 15:29, 22 August 2011 (UTC)Reply
I can recall deletion requests against mostly black penises. But for 1000 to 1 we should have at least 1000 pictures inside this category. ;-) --Niabot 16:29, 22 August 2011 (UTC)Reply
the number of white men who like to show off their dicks clearly is [absurd] -- That is correct. But why are we trying to superficially address this problem with an image filter rather than showing these people the door and deleting the educationally meritless images? --195.14.220.250 17:05, 22 August 2011 (UTC)Reply
    • About some so-called facts:
  • "... Wikipedia has an absurd amount of pornography and that drew a lot of complaints, ..."
"absurd" is a matter of one's own standard. For that matter, so is "pornography". All in all, as compared to what's in the online world in general, I've seen very little here--but, then, I don't go looking for it. And the complaints represent what proportion of the readership? I suppose between one in ten thousand and one in a hundred thousand, unless, like the group preparing the report, you go out and try to evoke them.
  • "... there were over 1000 white penises but only 1 black penis, ..."
If so, it is a problem. The solution is to add more racial diversity--just as usual with cultural bias.
  • "... a handful of vocal but a minority of users caused an uproar when the porn started to be trimmed down."
"handful" is also a matter of one's own standard. To me, they include a very high proportion of the people here whose work I respect. Considering that the original trimming was the heavy-handed work of a single editor, I'd expect no less than an uproar. As I recall, most of the images deleted by that editor at that time were restored.
  • "It was a compromise."
That at least is apparently true. But there are some things we do not compromise about. The most important is NPOV, and that the encyclopedia is free of the selection & classification of any band of expert or presumed-representative editors is the basis of our NPOV. The WMF does not own the content. It doesn't even own the software. All it owns is a small amount of hardware and an extremely valuable trademark. Of all sorts of intellectual property, trademarks are the sort of thing most vulnerable to compromise and dilution. Compromise NPOV, and you lower the value of the trademark, because that's the principal thing it stands for. DGG 04:42, 23 August 2011 (UTC)Reply
Neutral point of view does not require violating the law by providing those under 18 in the United States with pornographic imagery. The WMF is legally responsible for the content regardless of "ownership" because it would qualify as a distributor. Pornography is also legally defined, and the standard is upheld by the Supreme Court. You know all of this, so I am surprised that you would say the things you said above. Ottava Rima (talk) 05:39, 23 August 2011 (UTC)Reply
As a side question, whose work are you claiming that you respect? I have spent time looking into who posted, and the ones who oppose the referendum either don't edit en.wikipedia or haven't produced much content. I find it odd how you can respect them, when many of them just upload graphic sexual images and do little beyond that. It is also odd that you are saying that the problem of "1000 white penises" means you should have more non-white penises when it was obvious that the problem actually was that it had nothing to do with being encyclopedic but a small, undiverse group uploading images simply to put them on their user pages in an exhibitionist manner. The worst part is that we allow these people to interact with those under 18. If the WMF was being responsible, every single one of them would be indeffed under a child protection standard. Ottava Rima (talk) 05:42, 23 August 2011 (UTC)Reply
This is apparently not well known, but: NPOV is a policy voluntarily adopted by some, but not all, of the WMF projects. It is not a mandatory policy imposed from on high. In particular, Commons does not have an NPOV policy. Instead, they have a page explicitly saying that the policies of the English Wikipedia do not apply to the whole world, and that they don't follow the NPOV approach. WhatamIdoing 18:11, 24 August 2011 (UTC)Reply

Interim vote totals


Just for the sake of information, there have now been over 20,000 votes registered in this plebiscite, which makes it the highest degree of participation in any Wikimedia poll/vote/referendum ever. There is still a week to go. Risker 03:10, 23 August 2011 (UTC)Reply

But the question is, is the software sophisticated enough to determine how many of those are done from the same IP? And with the qualification only being "10 edits", how many of those are from accounts with only 10 edits or under, say, 100? Ottava Rima (talk) 03:37, 23 August 2011 (UTC)Reply
In answer to your first question, yes, vote administrators have access to IP data. As to the second point, that is not a key criterion that will be reported by the vote administration process, although I am sure someone with API access can do the calculations if they want to run through all 20K+ votes, if they are so motivated to do. I'd suggest they wait until the polls close, though; we have no idea how many more votes are yet to come. Risker 03:47, 23 August 2011 (UTC)Reply
Well, I have a suspicion that the loose restrictions are a reason why there have been over 20,000 votes so far. I would be willing to bet that at least 50% of the votes are done by those with less than 100 edits. I hope someone who knows scripts would be able to parse through the info, especially if the WMF is going to be relying on it later. Ottava Rima (talk) 04:13, 23 August 2011 (UTC)Reply
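The kind of tally being asked for here is straightforward once someone has exported the voter data. A minimal sketch, assuming a hypothetical list of (voter, edit count) pairs has already been pulled from the vote records -- the function name and the sample data are illustrative, not part of any real tool:

```python
# Illustrative only: compute what share of voters fall under an
# edit-count threshold, given an already-exported list of
# (voter_name, edit_count) pairs. No real API calls are shown.

def share_under_threshold(voters, threshold=100):
    """Return the fraction of voters with fewer than `threshold` edits."""
    if not voters:
        return 0.0
    low = sum(1 for _name, edits in voters if edits < threshold)
    return low / len(voters)

# Hypothetical sample data
sample = [("A", 12), ("B", 2500), ("C", 47), ("D", 310)]
print(share_under_threshold(sample))  # 0.5
```

Running this over all 20,000+ votes would answer Ottava Rima's question directly, once the raw edit counts are available.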
Actually, the huge leap in the number of votes is most likely related to editors with email enabled actually reading up on the issue and expressing their opinion. The objective here is to encourage many opinions, hence the low threshold for eligibility. This tool is designed with *readers* in mind, so the edit count is less relevant for this plebiscite than would be the case for election of Trustees, for example; many people read one of the Wikimedia projects regularly without making many contributions, for a multitude of reasons. Risker 04:22, 23 August 2011 (UTC)Reply
Polls only work when they attain a level of objectivity and negate the ability to game them. Giving those like sock masters the power to decide this matter isn't what I would see as a good thing. By raising a standard, we negate the inappropriate influence of such individuals by forcing them to perform far more work at pretending to be legitimate than they most likely would be interested in performing. I'd rather not see those like Grawp be the primary decider in what the editors of Wikipedia feel. Ottava Rima (talk) 04:31, 23 August 2011 (UTC)Reply
Yeah, whatever the outcome, you'll claim victory. We get it. Also, what the editors of Wikipedia feel is that you should be banned there, which you are. Just saying. --78.35.232.131 04:34, 23 August 2011 (UTC)Reply
I find it interesting that you assume that the sock masters would all be voting against the proposal. That is quite telling, especially with your logged out editing above. Ottava Rima (talk) 05:43, 23 August 2011 (UTC)Reply
LOL, that is what you are clearly assuming, not me or anyone else. Your projection is quite telling. Unfortunately for you, the one person it tells nothing is you. --213.196.209.190 13:08, 23 August 2011 (UTC)Reply
You just switched to yet another IP, which verifies my statement that a small handful of people are using sock puppets and changing IPs to make it seem as if there are far more of them than actually exist. And I find it interesting how you say "projection" as if I would be editing logged out or socking, which is laughable. Ottava Rima (talk) 14:07, 23 August 2011 (UTC)Reply
Not happening, and not that important if it does. IP users are first class citizens here, let them be or try to change their minds-- don't overly fixate on the signatures. Each comment comes from a human mind-- discuss the issues raised or discuss the concerns in the abstract, but singling out specific comments and using their login status to dismiss their arguments is... counterproductive. --AlecMeta 15:26, 23 August 2011 (UTC)Reply
Generally the sock puppet cases I've heard of are about a few accounts, or at most a couple of dozen. If "sock masters" can sway a poll with 20,000 votes and specific eligibility rules, we have a pretty big problem! Of course any such allegation should be investigated - if there's any evidence for it. Wnt 18:24, 23 August 2011 (UTC)Reply
There are actually tens of thousands of known sock puppets. There are programs that people have used to make such things. Go look at the global lock logs and see how many names are oversighted to appreciate the magnitude. Ottava Rima (talk) 19:42, 23 August 2011 (UTC)Reply
Actually, IP users don't matter, especially here. They shouldn't even have access to this talk page. And saying that a person is valid because they are human is ridiculous and doesn't serve any purpose. Ottava Rima (talk) 19:42, 23 August 2011 (UTC)Reply
saying that a person is valid because they are human is ridiculous and doesn't serve any purpose. -- Seek professional help. --213.196.212.168 20:44, 23 August 2011 (UTC)Reply
saying that a person is valid because they are human is ridiculous and doesn't serve any purpose. If you say my remarks served no purpose, your very words prove your statement correct. For my part, I want to hear as many different sincere opinions as I can get, regardless of author. --AlecMeta 21:35, 23 August 2011 (UTC)Reply
there have now been over 20,000 votes registered in this plebiscite, which makes it the highest degree of participation in any Wikimedia poll/vote/referendum ever.
This is very, very exciting and it's going to make it a lot, lot easier to understand this issue. THIS is why we had a referendum-- this is the largest feedback we've ever had about any issue ever. The questions are confusing, but that's okay. Referendums work at getting massive feedback from readers and editors across cultures. They are a new technology the board has just started using, and despite all the handwringing that the numbers will be misused, I think it's a GREAT step in the right direction! Regardless of what the outcome is, just getting this many people to take part in a governance decision is great. --AlecMeta 15:15, 23 August 2011 (UTC)Reply
I could gladly agree if i wouldn't have some bad feelings about this first example.
  1. No one asked for a "no" or "yes". Instead the questioning is in the form: "Is it important for you to get punished today? Ok, we have to admit, it is not important for you. But it is important for us. Please be prepared to receive your punishment."
  2. Because of the first fact, the name for the voting was completely wrong. Just a little mistake? At least the media calls it "referendum", without a second thought, as usual.
  3. All the "fair" argumentation at the voting page is based on the "Harris Report". Did no one ask the question: what qualifies him and his family (daughter) to make such a decision? He doesn't seem to have any expertise regarding the questions of the report. The only qualification I could find is the following: Sue and Harris worked for the same company. That's it.
  4. Why did the WMF choose some "random" man and his family to write this report? Why didn't they ask the experts from the American Library Association? We already know that both would have very contradictory opinions regarding this topic.
What's left at the end makes me sick. Either the WMF is really that stupid, or the WMF is willingly working against the initial goals of the project. I would be lucky if they are just that stupid. --Niabot 19:39, 23 August 2011 (UTC)Reply
For the record, Niabot, I'm really getting a lot out of this dialogue. A month ago I thought I was THE most anti-censorship person around, and it's very reassuring to find that my basic values are more mainstream than I realized.
The lack of a clearer "Hell No" option in the survey was a mistake. I think if the board had realized how successful this survey-discussion process would be, they probably would have done things differently-- the lack of a "hell no" option makes the survey less powerful than it could have been. But, they can't tinker with the wording once things are underway-- too many languages.
I agree that the general thinking is that we need "something", and that thinking colored the whole referendum process. But that's okay-- 20,000 people now know about this issue and they have expressed an opinion on it. Regardless of numbers, if they feel "Hell No" about it, they'll get that message to us.
During the last board elections, it seemed like nobody was even interested in the global movement. This process is MUCH more successful, albeit imperfect.
Lastly, remembering again that I am absolutely nobody in authority, I think we are indeed 'expanding beyond' our initial goals and our initial vision. We won. We're the most useful reference source on the planet. We won beyond our wildest imagination. And going forward, we have to move beyond our humble initial vision, while retaining the core values that make us work. (And I absolutely believe that intellectual freedom is one of those values, so this discussion is good to see. This SHOULD be a hard thing to do right, this SHOULD be controversial, everyone SHOULD have lots of questions and skepticism.)
"Why didn't they ask the ALA experts?" What makes you think they didn't? Off the top of my head, two of our board members spend their days working with/for libraries. I know our board's advisers include some very good ALA experts. I'm not an ALA "expert", but my heart is, if that makes sense. All our opinions helped shape the Harris report. We want all the ALA experts we can get, and I believe we're actively recruiting such individuals whenever we get the opportunity. --AlecMeta 20:03, 23 August 2011 (UTC)Reply
I didn't read anything about the ALA actually having been involved (which would be quite an argument). In fact it would run straight against their basic principles, the Library Bill of Rights, to include a non-neutral declaration system.[21] So I'm sure they would never have given their support to creating such a filter, one that labels content in arbitrary, non-neutral categories.
I can only see a report written by some "non-expert" (regarding this topic) and his family. Was that truly the only valid source, the only report? If so, then it's a real joke. Idiocy, if you will. --Niabot 01:07, 24 August 2011 (UTC)Reply
Confirming Alec's guess above: the ALA was consulted, as were other Library Associations. The notion of intellectual freedom was referred to often during the development of the report. See for instance Talk:2010 Wikimedia Study of Controversial Content/Archive 2#The Librarians's Perspective. None of the people consulted during the report were asked to 'give support' to the result. I agree with the sentiment that grouping knowledge into arbitrary non-neutral categories is not compatible with general Library practice. Recommendation 7 was the most controversial of the study's results for this reason. Arguments were made that since the categories were descriptive and already existed, and would not be used to restrict anyone's access (only to make it easy for users to choose to delay their own access to information), this is rather different from the specific concerns raised in the Library Bill of Rights, the interview above, and similar documents. But the problem remains that choosing to highlight a set of categories as worthy of special concern is in itself proscriptive - which is why the idea of arbitrary categories didn't make it into the Board resolution. SJ talk | translate   03:34, 6 September 2011 (UTC)Reply
I read the answers to the questions which are closely related to the current issue. None of the questions actually hits the nail on the head, but they give a good indication. For example:
4. Are there various levels of display and accessibility of materials in most libraries?
I suppose in the sense that in public libraries and school libraries materials are arranged according to age groups: hence the picture books in the children's areas, the teenage books (Young Adult – YA) in the teen (or YA) areas, and so on. But this is (or should be) more about reading level rather than sensitivities. (Underlined by myself)
5. And are these decisions about accessibility made on the basis of potential cultural sensitivities or sensitivities around sexuality (I'm talking in general here, not a policy about kids). In other words, are there certain books that are restricted in some way because of their subject matter?
Ideally, no – this should not be happening; ...
6. Do libraries have a policy about demarcating volumes within the library itself -- in other words, some sort of system that identifies certain volumes as potentially objectionable, or safe?
No – this practice is definitely frowned on and should not be happening. This is counter to our principles, philosophies and practices.
Since some news articles already covered the filter under the name of youth protection (in German, "Jugendschutz"), we quickly see a huge gap between "sorting material for the reader" for better accessibility and "labeling content by sensitivities", even though the proposed system would not be able "to protect the youth".
This continues throughout all the following questions and comes down to a very simple point: the rules for labeling content. In other words, the ALA did not agree with the filter at all. So I may ask: how did it come about that these statements were ignored entirely throughout the further process?
A very basic question is: is the categorizing of images/content into categories meant to appease sensitivities in any way compatible with NPOV? -- Niabot 22:17, 6 September 2011 (UTC)Reply

What info are we missing? Potential ways to move discussion forward


The voting part is a huge success. I hear a lot of skeptical voices here on talk, though, and while I can try to assuage most of their concerns, I'm not knowledgeable or authoritative enough to address all the points. To brainstorm:

  • What are our best minds picking between in their current thinking? (understanding of course they're always open-minded to new info and new consensus)
  • As realistically as possible, can we 'label' the options under serious consideration and provide as much info as possible about them?
    • In particular, provide some very vague characterization of the expected resource expenditures in each option. This doesn't have to be a dollar or time figure, but just some sense of scale. For example, I think a bare-minimum filter's resources would likely be "extremely minimal" when considered relative to our budget, and just giving a sense of scale in some semi-official capacity might help.
  • People like knowing who is where and why. It humanizes the board and breaks up the idea of a mythical monolithic "they" who have nefariously planned out the future in advance. When people don't see the disagreement, they're scared that there is no disagreement and the democracy is a sham. Let everyone, board members or advisers, anyone even semi-'official' who wants to wants to, just write a summary of their thinking on the subject, without necessarily committing to that thinking or promoting it, just sorta 'show' the thinking. Even 'undecided' or 'in the middle' or 'not focused on this particular sub-issue' would help people learn they're interacting with good-faith humans, not puppets or cult members. :)
    • As always, some people have come to do this on talk, and they do it well. I just want to encourage others to do it more, especially others who can represent other viewpoints or different thinking on the subject. If we ever evolved to the point that every single board member released a statement, I think that'd be great for banishing the 'evil conspiracy board' myth-- we just don't want anyone to feel 'pressured' into sharing their thinking. Private thoughts are okay too. :)
    • As silly as it is, just seeing that the thinking doesn't involve anyone "out to rid the world of evil images" might be helpful. I know that's comically obvious to longtime meta people, but project people don't know this, and they still worry that religious or cultural conservative americans are co-opting leadership.
  • Consider designating some 'advocates' who will explicitly advocate for a given option. Board members probably wouldn't be good choices for the role of advocates, but former board members or advisers or just very active innerish-circle Wikimedians would all do fine. This would be another very experimental thing, so think it through-- but again, highlighting a very specific public debate, personified by a team of advocates, a debate where the public can be spectator or become a debater-- that might help to focus discussion. (or it might make things worse.. think it over).

I'm agnostic on whether these are needed or appropriate for this particular issue and this particular discussion-- but if not this issue, there will be more in the future-- labeling and explaining prototype options while encouraging alternatives, writing simple honest statements reflecting the current thinking of open minds, and also showing diversity of thinking in leadership-- those should all help to focus future discussions on the feedback we most need. --AlecMeta 16:39, 23 August 2011 (UTC)Reply

I think it would be a very good idea to create some alternative implementation proposals (perhaps on a new subpage?) With this much discussion, it can be hard to mentally capture and evaluate the issues being thrown around. If we start a new subpage with various options under each heading, it could help. For example, we could have headings for:
  1. Repeal the board resolution. Could list arguments for doing so and reasons why it is unlikely to happen.
  2. Implement a personal image filter that does not use warning labels, but instead blacklist/whitelist functionality.
  3. Acknowledging that many view warning labels as an infringement on intellectual freedom because they prejudice the user against certain classes of images, develop a minimal, objectively-defined list for the most extreme cases where that infringement is justified, and where it does not apply unequally, in theory or practice, to limit depictions of any identifiable group of people.
  4. Conduct a global survey and identify the top 5-10 classes of objectionable images, attempt to codify what is objectionable about them, and create warning labels based on those criteria.
  5. Develop an interface that allows users to filter an unrestricted number of community-managed warning labels.
  6. Allow users to filter using a selection of 5-10 predefined existing categories.
And so on. The idea would be to coalesce some of the concepts into more concrete "What will this look like?" options. The alternatives would have some utility for moving forward (in combination with the survey results), but would primarily be to give users a sense of what is being proposed.--Trystan 14:00, 24 August 2011 (UTC)Reply
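To make option 2 above more concrete: the core decision logic of a label-free blacklist/whitelist filter could be as small as the sketch below. This is purely illustrative (the function name and the category names are assumptions, not existing MediaWiki code); the point is that the filter operates on the user's own lists over existing categories, with no warning labels involved.

```python
# Hypothetical sketch of option 2: hide an image if any of its
# categories is on the user's personal blacklist, unless the user
# has explicitly whitelisted one of its categories.

def should_hide(image_categories, blacklist, whitelist):
    """Return True if the image should be initially collapsed for this user."""
    cats = set(image_categories)
    if cats & set(whitelist):   # an explicit opt-in overrides the blacklist
        return False
    return bool(cats & set(blacklist))

# Example: a user who hides medical imagery but allows anatomy diagrams.
# The category names here are made up for illustration.
prefs_black = {"Medical images"}
prefs_white = {"Anatomy diagrams"}

print(should_hide({"Medical images"}, prefs_black, prefs_white))                         # True
print(should_hide({"Medical images", "Anatomy diagrams"}, prefs_black, prefs_white))     # False
```

Since the lists are chosen and maintained by the reader, no community-wide judgment about what is "objectionable" is encoded anywhere, which is the property that distinguishes option 2 from the warning-label options.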
I think you need to go read Warning 10 in the report. Using "community-managed warning labels" (like RFD tags) is not only far more work, but it would enable third-party censorship in ways that saying "Sexually explicit stuff: Click here to initially hide images in Commons:Category:Photographs of sexual intercourse, the hundreds of vanity shots in Commons:Category:Human penis, all the images Commons:Category:Sexual penetrative use of dildos, and so forth" would not.
Also, per-user blacklist functionality already exists—but only after the user has been exposed to the image, and only for logged-in users, which is not what our readers want. WhatamIdoing 19:07, 24 August 2011 (UTC)Reply
Well, um, I'm a wikipedia reader, and that's actually what I want (The "eek! click it away!" option). When did we start differentiating between readers and editors anyway? Isn't that going to be one of the root causes why less people actually edit? ;-) --Kim Bruning 19:51, 24 August 2011 (UTC)Reply
Then you've already got what you want (so long as you're logged in). What you want is not what a significant proportion of our readers want. WhatamIdoing 17:20, 25 August 2011 (UTC)Reply
I too am worried about putting the interests of the "reader" over those of the "editor". That'd be bad. We don't have to please non-donor/non-editor populations. That said-- between global users and mobile users, there probably is a very genuine and legitimate need for rudimentary "shockingly upsetting" image shuttering. People sitting in a private office on a computer with a full keyboard is going to become the exception, not the rule. --AlecMeta 21:37, 24 August 2011 (UTC)Reply
Today's readers are tomorrow's editors and tomorrow's donors. We do have to please these people, if we want Wikipedia to have a future. WhatamIdoing 17:20, 25 August 2011 (UTC)Reply

Dubious archiving process


Some of the content originally posted on this page has been archived, some not. As necessary as it is to make this page readable, it should be explained why some older content has been kept here and some newer content has been archived. Choosing what to archive isn't that neutral. Some posts were really useless and archiving them was a good idea, but still, archiving a current discussion *might* be seen as a way to hide/censor some suggestions. Cherry 11:47, 24 August 2011 (UTC)Reply

Hi. The content is being archived strictly on date of last edit (which should be up to but not beyond the 20th at this point). While somebody seems to have duplicated some of the content from the archives for some reason (while still leaving it there, very puzzling :/), threads that have not been edited since that date have not intentionally been left behind, and no edits past that date have been intentionally archived. (I discovered the restoration of some older content when archiving today; I believe it's all been rearchived...some redundantly. :/) --Mdennis (WMF) 15:51, 24 August 2011 (UTC)Reply

I feel that splitting and archiving the page is part of the masterplan. I miss a section about the public. --Bahnmoeller 13:46, 24 August 2011 (UTC)Reply

{{sofixit}} :-) ? --Kim Bruning 19:48, 24 August 2011 (UTC)Reply

Arguments from the German poll


Here are the arguments taken from the German poll: de:Wikipedia:Meinungsbilder/Einführung persönlicher Bildfilter. If anyone wants to translate it, go on... -- WSC ® 18:40, 25 August 2011 (UTC)Reply

Arguments against introducing the filters

  • Wikipedia was not founded to hide information, but to make it accessible. Hiding files may remove important information presented in a Wikipedia article. This could restrict any kind of education and any recognition of connections. Examples: articles about artists, works of art and medical topics could lose essential parts of their information, with or without the reader intending it. The goal of presenting a topic neutrally and in its entirety would thereby be endangered.
  • Categorizing content by acceptability contradicts the principle of de:Wikipedia:Neutraler Standpunkt (neutral point of view). The de:American Library Association (ALA) argues along similar lines: it strictly rejects labeling content in libraries by non-neutral criteria and even regards it as a "tool of censorship" when used to give readers particular recommendations or to warn them about content.[1] The ALA has set down corresponding guidelines in its Library Bill of Rights.[2]
  • Catering to the interests or preferences of individual readers or groups is not the task of an encyclopedia (de:Wikipedia:Grundprinzipien). Readers are themselves responsible for their wishes regarding image selection (e.g. by configuring the software of their own device accordingly or by using their own filter software).
  • Opponents of the intended filter categories and file filters see their use as censorship of content, which runs counter to the aspirations of an encyclopedia.[22] In particular, they stress the danger of an expansion of filtering, or of mandatory filtering (a registration requirement to deactivate the filter), once the corresponding infrastructure has been created.
  • School and other textbooks also contain depictions of violence or explicit sexuality. Apparently this is not called into question didactically.
  • Schools could instruct their pupils to activate the filters when using the German-language Wikipedia, among others, and to make no use of the option to unhide images. For pupils who follow this instruction and have no other internet access, this would amount to censorship. The same is conceivable for other organizations that provide internet access, such as libraries, political parties and religious communities.
  • The filter is not a content block or youth-protection software. The filter is only active on Wikimedia projects and is supposed to be easy for any reader to switch off. Since the filter does not block access to content, this is unlikely to be a reason for content-filter operators (de:Contentfilter) to scale back their measures (domain blocks, blocking of articles) or to take Wikipedia off their index. Effectively, content would be filtered twice.
  • Introducing image filters does not end the discussion about which images a particular reader can reasonably be expected to see. The assignment of content to particular exclusion categories can likewise run into differing interests and ideas. It makes no sense to sort content into filter categories over and over again, since it is not foreseeable that clear guidelines can be developed about which files belong in the various exclusion categories.
  • Ideas about what is considered offensive or undesirable can differ depending on the user, their cultural background and the language version. Using globally valid filter categories is therefore not sensible, since the goal of creating filters that do justice to all readers and all cultures is technically not achievable. A reader who wants to decide for themselves what they get to see and what not would have to look at the images first, which ends in a contradiction.
  • Die Aufgabe, Filterkategorien einzurichten und aktuell zu halten, wird den Benutzern übertragen, die dafür Zeit und Mühe aufwenden müssen. Diese Ressourcen könnten stattdessen für anderweitige technische und inhaltliche Verbesserungen eingesetzt werden (Eine Übernahme bestehender Kategorien von Commons als Filterkategorien ist allenfalls eingeschränkt möglich. So finden sich beispielsweise in der Kategorie Violence (Gewalt) und deren Unterkategorien nicht nur Gewaltdarstellungen, sondern auch Bilder von Mahnmalen, Demonstrationen, Portraits, kritischen Karikaturen usw.).
  • Auch ist fraglich, in wie weit die Wikipedia die Filter überhaupt betreiben sollte, schließlich wurde der Inhalt von den eigenen freiwilligen Mitarbeitern nach den de:Wikipedia:Grundprinzipien erstellt und in einem kollektiven Prozess als behaltenswert anerkannt. Eine Ausblendungsmöglichkeit eigener Inhalte zu ermöglichen erscheint daher paradox.
  • Es könnte ein Bumerang-Effekt eintreten: Bearbeiter würden dann nach dem Motto „Dafür gibt es Filter“ ungehemmter potentiell Anstößiges in Artikeln unterbringen.
  • Die fachliche Qualifikation und methodische Vorgehensweise von Robert Harris und dessen Tochter Dory Carr-Harris, den beiden Autoren des Harris-Reports, auf dem der Beschluss des WMF-Kuratoriums maßgeblich fußt, sind fragwürdig. Robert Harris ist ein Rundfunkmoderator und -journalist, der 30 Jahre bei der de:Canadian Broadcasting Corporation (CBC) gearbeitet und dort u. a. mehrere Sendereihen über klassische Musik produziert hat. Zudem hat er einige Einführungsbücher zu klassischer Musik verfasst. Bei der CBC hat Harris 17 Jahre mit der heutigen Wikimedia-Geschäftsführerin Sue Gardner zusammengearbeitet und wurde 2010 von der Wikimedia Foundation als Berater angeworben. Es ist unklar, wodurch sich Harris über seine journalistische Erfahrung hinaus als Gutachter für umstrittene Inhalte der Wikimedia qualifiziert. Harris vergleicht Wikipedia mit CBC,[3] aber eine wissenschaftliche und allgemeinbildende Enzyklopädie hat andere Ziele und Methoden als eine journalistische Institution. Der Bericht ignoriert den kritischen Diskurs zur bewertenden Kennzeichnung von Medien, wie ihn z. B. die American Library Association führt.
  • Das Argument der Foundation (de:Principle of Least Surprise), welches voraussetzt, dass Menschen lieber wenige Überraschungen erleben, ist aus den Computerwissenschaften (Ergonomie von Computerprogrammen) übernommen. Sowohl psychologisch betrachtet, als auch in den Kommunikationswissenschaften, wird überwiegend eine gegenteilige Meinung vertreten. In der Presse beispielsweise werden Fotos zur Verdeutlichung als auch zur Weckung des Interesses verwendet (vgl. bspw. [23]).
  • Die Filter verstoßen gegen die enzyklopädische Säkularität. Der Harris-Report empfiehlt einen besonderen Umgang mit sogenannten „Bildern des Heiligen“,[4] aber schlägt Filter nur ausdrücklich für sexuelle und gewalttätige Bilder vor. Die Resolution des Kuratoriums geht davon aus, dass Benutzer nicht nur sexuelle und gewalttätige, sondern auch religiöse Inhalte als anstößig empfinden können und verweist zudem auf die Verschiedenartigkeit nach Alter, Herkunft und Wertorientierung von Benutzergruppen.[5] Im Rohentwurf der Foundation wird nun ausdrücklich auch ein Filter nach der Kategorie „Bilder des Propheten Mohammed“ geplant.[6] Eine Filterung nach religiösen Vorlieben widerspricht aber der Neutralität und dem universellen Bildungsanspruch einer Enzyklopädie.

Arguments for introducing the filters

  • Readers who find, for example, depictions of violence or sexuality offensive, feel hurt by them, or do not want to be surprised by them can hide files categorized accordingly.
  • At the workplace or in a public library it can be a disadvantage for a Wikipedia user to have images perceived as inappropriate or offensive on the screen. The filters would be a tool for avoiding such situations.
  • A larger readership could be reached and additional authors won, because some readers and potential contributors would no longer avoid Wikipedia or particular articles on account of depictions they find offensive.
  • Efforts to remove potentially offensive content entirely (for example by deleting image files) would have the wind taken out of their sails.
  • Introducing the filters could reduce publicly voiced reservations about Wikipedia that rest on its depiction of allegedly questionable content (→ discussion).
  • This is not censorship, since the proposal explicitly concerns personal filters (see the problem description) that are activated only at the user's request. Each user's free choice is to be ensured by several features:
    • The user is informed about the option to filter content he finds unpleasant.
    • The user decides himself whether to activate filtering (de:Opt-in).
    • The user can deactivate filtering again at any time, or unhide the hidden images individually.
    • According to current planning this also applies to logged-out users [24]. Alternatively, depending on the outcome of the referendum ("It is important that the feature be available to both logged-in and logged-out users"), it is also possible that there will be no filters at all for logged-out users.
  • It is unclear whether forgoing the filter feature will even be technically possible. If it is built in permanently, a filter ban would cut the German-language Wikipedia off from further development of the de:MediaWiki software, or force it to maintain a parallel version of the software itself.
  • The effectiveness of the proposed "ban" on the filters is doubtful, since it is technically easy to circumvent: the main filter categories with the most images would reside on Wikimedia Commons. Both external "censors" and add-on software, for example a de:Browser plugin, could access them and reproduce the filters exactly. A ban on filter categories in the German-language Wikipedia could, if desired, be circumvented by a filter-category system operated by third parties. A "filter ban" could provoke such third-party solutions, which would then no longer be controllable and could include censorship mechanisms.
  • Logged-in users can already hide individual content via their CSS settings, so not all users see the same thing anyway.
  • The Wikimedia Foundation also justifies introducing the filters with the "principle of least astonishment", which applies for example in the English-language Wikipedia.[7] It means that the content of a page is presented to readers in a way that respects their expectations.[8]
  • The Harris report recommends enabling users to place images (depicting sexuality and violence) "in collapsible galleries so that, for example, children do not come across these images unintentionally or unexpectedly".[9]
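Several of the bullets above (the CSS point and the browser-plugin point) come down to the same client-side mechanic: compare an image's Commons categories against a list the reader has opted into, and hide the image on any match. A purely illustrative sketch in Python follows; the function and the sample category names are hypothetical and not part of MediaWiki or the Foundation's proposal:

```python
def should_hide(image_categories, user_filter_categories):
    """Return True if the image falls into any category the user has
    opted to filter. Both arguments are iterables of category names
    as they appear on Wikimedia Commons."""
    return bool(set(image_categories) & set(user_filter_categories))

# A reader who opted in to filtering only "Category:Violence":
filters = {"Category:Violence"}
print(should_hide({"Category:Violence", "Category:History"}, filters))  # True
print(should_hide({"Category:Flowers"}, filters))                       # False
```

Note that everything such a filter needs is already public category data, which is why the list argues that a local "filter ban" would be easy for third-party software to circumvent.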

Refs

  1. Template:Internetquelle
  2. Template:Internetquelle
  3. „The CBC is an interesting place. Like Wikipedia, it is a powerful (in its world) and respected information-providing institution, dedicated to public service and the provision of unbiased (the analog equivalent of NPOV) news and information to the Canadian public. However, like your projects, the power of the institution, and its public-service character, make it the focus of intense and perfectly legitimate discussions over content, balance, mandate, and the need to serve different publics simultaneously.“, Robert Harris, meta:2010_Wikimedia_Study_of_Controversial_Content/Archive
  4. Images of the "sacred", Harris report, 2010
  5. „Some kinds of content, particularly that of a sexual, violent or religious nature, may be offensive to some viewers; […] We recognize that we serve a global and diverse (in age, background and values) audience, and we support access to information for all“, Resolution, 29. Mai 2011
  6. „pictures of the prophet (sic) Mohammed“, Personal image filter, Overview of this system, Mediawiki
  7. Wikipedia:Writing better articles - Principle of least astonishment in the English-language Wikipedia
  8. „We support the principle of least astonishment: content on Wikimedia projects should be presented to readers in such a way as to respect their expectations of what any page or feature might contain“, Resolution, 29. Mai 2011
  9. „The major recommendation we have made to deal with children and their parents is our recommendation to allow users (at their discretion, and only for their personal use) to place some images (of sexuality and violence) in collapsible galleries so that children (for example) might not come across these images unintentionally or unexpectedly. As we noted in our section on basic principles, we did so because we believed it would show some basic respect and service to one group of our users (those worried about exposure to these images) without compromising the different needs and desires of another (those desiring, even insisting, the projects stay open).“ Children, Harris-Report, 2010

Translation and Discussion

Google Translation of Poll and its Talk
Never trust Babelfish! Even if it's made by Google. -- WSC ® 08:54, 26 August 2011 (UTC)Reply
Note that the real discussion takes place on the talk page of the German poll, and reflects, largely (as far as I read) the tenor and conclusions of the discussion here. Rich Farmbrough 23:30 29 August 2011 (GMT).

Discussion page unusable


This arbitrary change to the discussion page makes it unusable. Instead of seeing new things pop up at the bottom, where I knew how far I had read, they are now all over the page. I also can't remember any discussion that the talk page should be changed to this format, nor any discussion of how the topics would be divided among the arbitrarily chosen headers. This makes the discussion more difficult to follow. --94.134.219.11 17:08, 23 August 2011 (UTC)Reply

I can't find which topics I had already read and which I hadn't read. Change it back.

Please put the talk page back to the way it was. --Shabidoo 19:04, 23 August 2011 (UTC)Reply

Censorship is for other people!

Did you ever hear anyone say, "That work had better be banned because I might read it and it might be very damaging to me"?
Joseph Henry Jackson

The complaint of most people who don't like it that Wikipedia has pictures of vulvas, or Muhammad, or whatever, is not that they don't want to see them, but that they don't want other people to see them. As a consequence, this measure will satisfy few people. It will make little difference to them if they can hide the horrible, offensive sight of the human body from their own eyes, but poor vulnerable children will still be exposed to corrupting genital mind-rays. It will not satisfy the religious if their particular object of veneration, forbidden from sacrilegious depiction, is invisible on their own computer, but readily visible on a million computer screens around the world. What this will do, however, is provide a facility that can very easily be changed from opt-in to opt-out at some future time. And given that the "think of the children" brigade will never be happy until other people are unable to view certain images, we will have made it much easier for them to lobby for the imposition of their prudish or superstitious ways on the rest of us, who want Wikipedia to present the world as it is, unfiltered by the content police. Beorhtwulf 21:03, 23 August 2011 (UTC)Reply

Couldn't be said better. --Shabidoo 23:05, 23 August 2011 (UTC)Reply
This is the strongest argument against investing time in making any sort of one-click solution easy: that few people will be directly satisfied by having this feature (satisfaction is hard to measure...), that it will not support the desires of parents or teachers or community leaders who wish to impose actual filters on their families/schools/communities - and yet that it will make it easier to switch on "opt-out" filtering in the future ('easier' is hard to measure, but this is surely true to some degree.) However even if one accepts this argument, there are still some features (such as the ability to hide all images, or to shutter a single image that offends) that would not run this risk and could quickly be realized. SJ talk | translate   03:12, 6 September 2011 (UTC)Reply
Much truth. We can try to help our readers themselves, but we must never help the true censors. The true pro-censorship crowd gets nothing out of this filter, absolutely nothing. --AlecMeta 23:51, 23 August 2011 (UTC)Reply
Absolutely nothing except a comprehensive and semi-reliable database of tags maintained by a party other than themselves, that they can use to implement an opt-out or even no-opt content filter. In other words, the wiki* community will be doing the true censors' work for them. 41.185.167.199 03:40, 24 August 2011 (UTC) a.k.a. Bernd JendrissekReply
You don't seem to have understood the proposal. There are no tags being used in this proposal. The report specifically warns against the use of tags. The proposed filter is based on categories, which already exist and which therefore could already be used by any would-be censors. If a censor wanted to restrict access to the images in Commons:Category:Penis, they could have started doing that in 2008, when the category was created. They could do that today. They could do that tomorrow. They could do that no matter what the WMF does or does not do with this filter. WhatamIdoing 19:27, 24 August 2011 (UTC)Reply
Category, tag: the same thing. What's being proposed is some way to flag content. The precise technical implementation of said flags, cats, tags, and how we call it, is irrelevant. Zanaq 18:41, 25 August 2011 (UTC)Reply
No, there really is an important technical difference here. Tags make third-party censorship very easy. Saying "Tick here if you don't want to look at images in Commons:Category:Photographs of sexual intercourse" does not have any effect at all on third-party censorship. The images are already in that category; would-be censors can already use that information to suppress images. Using tags (or specially created categories, like "Category:Images that offend people because they show naked bodies") would give censors new and specifically designed methods of identifying images for censorship. This proposal gives censors nothing that we didn't give them years ago, when we decided to organize images on Commons. WhatamIdoing 21:04, 25 August 2011 (UTC)Reply
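The point that Commons category membership is already public information can be checked against the standard MediaWiki web API, which any third party (including a would-be censor) can query today. The sketch below only constructs the query URL for the real `list=categorymembers` API module and does not perform any request; the helper function name is ours, not part of any Wikimedia tool:

```python
from urllib.parse import urlencode

def commons_category_query(category, limit=50):
    """Build a MediaWiki API URL that lists files in a Commons
    category, using the standard categorymembers list module."""
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": category,
        "cmtype": "file",
        "cmlimit": limit,
        "format": "json",
    }
    return "https://commons.wikimedia.org/w/api.php?" + urlencode(params)

url = commons_category_query("Category:Penis")
print(url)
```

Fetching that URL returns the same membership data any reader can see on the category page itself, which is the substance of the argument: the filter proposal adds no information that is not already exposed.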
I completely agree. The true censors will need to look at the images and decide what to censor for others. The WMF does not protect them, the WMF proudly invests in their new censorship tools. Overall a disgusting idea. --Niabot 02:21, 24 August 2011 (UTC)Reply
Current cats are fine, if that's what we're actually going with; as long as the *users* determine which cats they do or do not want to see. There is still a danger here though: if certain categories are popularly selected by users, they can de-facto become tagging-categories, even if that was never the intent. --Kim Bruning 19:55, 24 August 2011 (UTC)Reply
Current image categories could 'seed' initial filter categories, but on-going, the filter categories would be fundamentally different than image categories as we now know them. There will be different kinds of arguments, different emotions, different standards of inclusion, and probably whole different populations of editors. --AlecMeta 21:18, 24 August 2011 (UTC)Reply
Would you PLEASE stop saying "will". This only confirms that the decision has already been made and that this is not a referendum but some formal process to affirm what you already plan to implement. --Shabidoo 21:45, 24 August 2011 (UTC)Reply
That is excellent feedback. My usage of "will" is entirely hypothetical and I'm absolutely nobody important, speaking with absolutely zero authority or direct involvement with governance-- but yeah it does send the wrong impression. I'll switch to "would" or "might" or something. :) --AlecMeta 04:08, 25 August 2011 (UTC)Reply
not just a whole different group of editors would be appropriate, but an entirely separate organization from the WMF, one frankly and openly devoted to censoring information, rather than disseminating it. People could use their filters in various ways, if they chose--this is free software, and anyone can modify it. All the arguments supporting this proposal are about why censorship is needed in particular situations. Let's assume this is true. Those who advocate censorship and think they can do it and get it right and universally appropriate and non-obtrusive should go right ahead and do it somewhere. Somewhere else. It's perfectly all right to work on other places as well as WMF projects--I've worked on two other wikis myself, & one is a wiki with closed editing. Right for its purpose, which is advocacy. Not right for WMF, any more than this proposal is. DGG 19:05, 25 August 2011 (UTC)Reply
That has to be one of the most misleading statements given here. It is a simple fact that millions of people can't read the site because there is no protection, and you are saying that people will somehow be "censored" because they have to make one extra click to see an image? Wow. That is just disturbing and I lost all respect for you. That is beyond logical and is one of the most slanted POVs that could possibly exist. As a librarian, do you give children porn or do you abide by those "censors" and not? Ottava Rima (talk) 21:05, 25 August 2011 (UTC)Reply
As I understand it, the largest group of people who cannot read Wikipedia are the ones in the PRC and countries with a similar approach to freedom of expression, and I hope you're not suggesting we try to meet their standards. The main other group will be schools that do not permit it, and I don't think most of those would permit it no matter what we did with illustrations. After all, we talk about things they may not want their students to know about. DGG 04:50, 26 August 2011 (UTC)Reply
Outdent - "approach to freedom of expression" So basically if they don't have your view, they don't have "freedom of expression"? Odd, because countries like Germany and the Netherlands restrict various things like certain kinds of political cartoons as "hate speech" yet have lesser restrictions on pornography. Regardless of your false claims, would you give a child under 18 pornography as a librarian? Because that is what this boils down to - protecting children. Do you think children have the "right" to access pornography or not? Ottava Rima (talk) 14:13, 26 August 2011 (UTC)Reply
Libraries very explicitly disclaim any responsibility for what patrons take out. Some, but not all, libraries restrict children's cards to the children's collection, but I am aware of no public library that monitors and filters what is being checked out. Just the opposite, in fact.[25] It is up to parents to determine what is appropriate for their children, not librarians.--Trystan 15:07, 26 August 2011 (UTC)Reply
Disclaiming responsibility and knowingly providing access are two very different things. It is not up to the parents to determine what is appropriate for children under US law - many parents have been arrested and jailed for knowingly providing access to pornographic material to those under 18. Librarians know that they legally cannot provide access to pornographic materials to those under 18, and require proof of identification even for computer use. DGG knows this. Ottava Rima (talk) 17:27, 26 August 2011 (UTC)Reply
Wikipedia is one of the most incredible experiments ever achieved. It is the result of the combined writing, edits, and compromise of thousands of contributors and a completely democratic framework which expects nothing more than transparency and openness, and now there is a "referendum" which has been organised from the beginning to get the answer the organizers expect... which will give people the tools to have a dumbed-down, tempered if not censored version of this incredible experiment. No... it's not a misleading statement at all, Ottava Rima.--Shabidoo 00:04, 26 August 2011 (UTC)Reply
Wikipedia is not a Democracy. You do not have a right to edit it. Before you preach about what we are, please get the basics right. Furthermore, you are not DGG so don't attempt to speak for him. Ottava Rima (talk) 02:30, 26 August 2011 (UTC)Reply
Then why is there a referendum if there is no democracy? Why do we vote for the board if there is no democracy? And even if there is no democracy, then there is still transparency and openness... which you forgot to mention in your response. Why did you leave that out? Because there is no transparency in this referendum and you know it. I do not have to speak for DGG; I am speaking for myself. And I probably reflect the dozens and dozens of users who are exasperated by the management of this farce of a "referendum". --Shabidoo 02:47, 26 August 2011 (UTC)Reply
You just said that this referendum proved that the WMF was violating the standards of Democracy, now you are trying to say it is proof of Democracy? Wow. I really think you need to sort out your own confused ideas before trying to get others to agree with you. And you aren't speaking for yourself because I wasn't talking to you before. P.S. asking for opinions isn't a Democracy. Ottava Rima (talk) 02:49, 26 August 2011 (UTC)Reply
I didn't say any of the things you claim I said. This referendum is a farce. It is not even close to open and transparent... just looking at the FAQ of this referendum is evidence enough. I did not link this referendum to democracy once in anything I said. I said that the Wikipedia experiment is. Nor do I need to be lectured by you, someone who has had their account on Wikipedia blocked dozens of times, on what is or is not democratic. What a farce. --Shabidoo 02:55, 26 August 2011 (UTC)Reply
Outdent - 1. "I didn't say any of the things you claim I said." 2. "a completely democratic framework which expects nothing more than transparency and openness and now there is a "referendum" which has been organised from the beginning to get the answer the organizers expect" 3. "Then why is there a referendum if there is no democracy?" You implied that a referendum must be Democratic. Wikipedia is not Democratic and has never been a Democracy. Under w:WP:DEMOCRACY: "Wikipedia is not an experiment in democracy or any other political system." Your complaints are meritless and show a disrespect for the system as a whole in addition to contradicting themselves. Ottava Rima (talk) 03:13, 26 August 2011 (UTC)Reply
Dear Ottava Rima, where exactly on Wikipedia have you seen pornography? Pictures of vaginas or penises aren't pornographic. Just look at the article on [26], please.
Pornography is already banned on Wikipedia, and for good reason. Pictures of naked humans aren't automatically pornographic, and none of the pictures on WP can be pornographic since they are all here for educational purposes, clearly not "for the purposes of sexual arousal and erotic satisfaction". Hence none of the nude pictures on Wikipedia can harm any child in the world. Parents who don't let their children explore human sexuality in a safe way - like here on WP - are the ones that harm their children. This should be obvious. --178.201.100.215 20:11, 26 August 2011 (UTC)Reply
There are pictures of people having sex. If you want to say that is not pornography, then you are using a very different definition of the term than what is within the law. And merely claiming something is educational does not make it so and the law doesn't really care if it is "educational" pornography or not. If you want to say that exploring sexuality is supposed to be a purpose of Wikipedia, then I hope you are banned for encouraging the predation of minors. Having pornographic images in an environment with children is the one of the primary ways pedophiles groom children and the WMF has taken a stance against allowing pedophiles and grooming to take place here. Ottava Rima (talk) 20:29, 26 August 2011 (UTC)Reply
"Within the law"? Maybe American law, but not UK law, for instance. And also not in many other countries that have fluent English speakers. So please elaborate on that and link to some law texts; otherwise your argument is completely void to start with.
All I am saying, by the way, is that it would be really weird for us Wikipedians not to trust our own articles here. So if the definition on the en Wikipedia doesn't capture your definition of pornography, then please work on the article together with the other authors and find a better definition. Then come back and go on discussing. Until then, let's use the official WP definition, shall we?
And with all due respect, for the vast majority of human history, children have naturally seen adults having sex. There was no other way, out in nature, without houses and beds. To say that merely seeing "people having sex" can be harmful to children is therefore a pretty big claim on your side, so please go ahead and provide evidence.
Last thing: I hope you take the whole accusation of "encouraging the predation of minors" back, otherwise I will report you for it. Yes, I think Wikipedia can be educational for children. And yes, I think Wikipedia should be educational on all topics. So it naturally follows that Wikipedia may provide sexual education for children, for instance by showing them non-pornographic images of people having sex. Just look at the German Wikipedia's "sexual intercourse" article and tell me which of these (drawn) images exactly could harm a child: [27].
Grooming can't happen to a properly sexually educated child. I'm talking about sexual education at age 6 here, as is common in countries such as the Netherlands or Germany. Anything else will set children up for something unknown, and they will follow it to their doom. If a child knows what it wants and what it DOESN'T want, if it has the possibility to explore sexuality in a safe environment, then a child predator will not seem interesting to it. --178.201.100.215 21:20, 26 August 2011 (UTC)Reply
We follow US law. Furthermore, UK law does not allow the giving of pornography to children or allowing them to access that, so your claims are highly inappropriate. And if you think they can't harm a child, then you disagree with a lot of psychiatrists and are arguing a stance that those like NAMBLA argue, which sickens me. An encyclopedia is not a place for children to "explore sexuality" and it is really disturbing that you would try to make it such. Ottava Rima (talk) 21:25, 26 August 2011 (UTC)Reply
Not a single citation of one of these psychiatrists, not a single word of evidence, not a single word about my argument - instead more accusations, putting me next to those NAMBLA sickos. You clearly know how to disqualify yourself from a discussion. Would you be so kind as to at least tell me if you REALLY think that children in the stone age were harmed by their parents having sex in their vicinity? Don't forget that we humans are just animals, and I hope you don't think that puppies are psychologically harmed by seeing their mother getting mated? Just, why would we humans be different? In what way would it have been an evolutionary benefit for humans to develop psychological problems when something very, very important to reproduction (hint: sex!) happens in their vicinity? This obviously doesn't make *any* kind of sense (Or are you one of those creationist nutjobs?). The only reason that something like this may harm children's psychology is if the society they are raised in makes a big deal out of it. --178.201.100.215 22:05, 26 August 2011 (UTC)Reply
Quite strange, you claim I need to prove that it causes harm. Odd, because the laws forbidding it already exist in Germany and every other nation. It would be up to you to prove why we should break the laws, which is impossible. But your argument boils down to something that is rather inappropriate - you are trying to say that we should show children pornography. Ottava Rima (talk) 01:34, 27 August 2011 (UTC)Reply
Either you're too ignorant or too dumb to understand that this is precisely *not* what I am saying. Pornography provides a deranged image of human sexuality, e.g. a very patriarchal image in which women are just objects to please male sexual needs. THAT is pornography to me, and THAT is what I also believe can be harmful to children. My argument, on the other hand, is whether seeing people have sex is harmful to children. I don't know of any laws that forbid having sex in the vicinity of a minor, e.g. in your marital bed while your three-month-old child is sleeping next to it. The reason porn is forbidden is because pornography doesn't show people having sex as they would in real life *at all*. Do you finally get it, or is your mind still numbed by the sheer prudishness of your personality? --178.201.100.215 09:36, 27 August 2011 (UTC)Reply
The law disagrees with your POV, and it is odd how you think that women would not look at pornography. Ottava Rima (talk) 12:52, 27 August 2011 (UTC)Reply
I would dispute that a man looking at gay porn is objectifying women. Though I do agree that there is a lot of unappealing, purely anatomical porn out there (vast archives of "cumshots", for example) I don't believe in general that one sex is objectified more than the other - actors of both sexes are hired the same way, after all. Though one would like to see a better level of quality, bad art is not a crime. Wnt 16:05, 27 August 2011 (UTC)Reply
I neither said that women don't look at pornography, nor that women are the only sex that is objectified (that's why I wrote "e.g."). But apparently you are too stupid or ignorant to understand this. Since you simply ignored the real content of my comment because you don't have ANYTHING to say against my arguments, I conclude that you, in fact, don't have any kind of intelligent response. Well done! Frankly, I believe you are the exact reason why children get abused in the world: ignorant idiotic puritans that never, ever would open their minds for anything that doesn't fit their own view of the world. --178.201.100.215 12:13, 28 August 2011 (UTC)
You are one of the most illogical people ever. You claim that children are abused because of "puritans", but it is those who make claims like yourself that are the ones who have been revealed as pedophiles on Wikipedia. Such people distract from their real problems and wish to have an environment that has children able to access pornography to help groom them. That is why every country bans children from having access to porn, even Germany. Your incivility is a hallmark of all such people with your view - without any logic or reason behind you, you badmouth and attack in hopes of bullying people into letting their guard down so you have access to children. That sickens me. Ottava Rima (talk) 15:19, 28 August 2011 (UTC)
German law makes it clear that those under 18 aren't allowed to see pornography, whether it be "educational" or not. That is -your- country. Also, "Germany has a very broad definition of child pornography, which includes images of all real or fictional people who either are under the age of 18 or who appear to be below 18 to an "average viewer"." That would mean that most of the lolicon images at Commons would have to be deleted per German law. It isn't just a "United States" thing as many German users are trying to claim. Ottava Rima (talk) 21:30, 26 August 2011 (UTC)
What you don't take into account is the fact that "pornography" is rarely defined in any law book. So what you think is pornography doesn't apply here. Please understand that I'm not voting for having pornography on WP, but e.g. drawn pictures such as on the German WP of people having sex. These are *not* pornography by German law and you will not find any judgement by a German court that says otherwise. On the contrary: in a well-known judgement by the OLG Düsseldorf, pornography is defined as "grobe Darstellungen des Sexuellen, die in einer den Sexualtrieb aufstachelnden Weise den Menschen zum bloßen, auswechselbaren Objekt geschlechtlicher Begierde degradieren. Diese Darstellungen bleiben ohne Sinnzusammenhang mit anderen Lebensäußerungen und nehmen spurenhafte gedankliche Inhalte lediglich zum Vorwand für provozierende Sexualität." -> translation: "rough depictions of sexuality, which degrade humans to an exchangeable object of sexual desires in a way that spurs sex drive. These depictions stay without a connection to other expressions of life; other content is just an excuse for provocative sexuality."
So all you have shown is that you have no clue about German law. Good job! :) --178.201.100.215 22:05, 26 August 2011 (UTC)
Pornography is actually very well defined in case law. Depictions of the sexual act have always fallen under pornography. The focus on the genitalia has also. Germany has well defined case law on the matter and is really strict. Furthermore, German law says "or fictional". That includes cartoons. Ottava Rima (talk) 01:34, 27 August 2011 (UTC)
Actually, I think you will find it quite different in different jurisdictions at different periods. Would you care to give an exact universal definition? Or even an exact one for Germany? Exactly how much has to be shown of the sexual act to be pornographic? Does it apply to all sexual acts, or only heterosexual copulation? Exactly how detailed must the focus on the genitalia be? Is it the same in both sexes? Is it the same for all media? And if you give the German legal text, are you quite sure it's the same anywhere else in the world?
At least in the US in the early 20th century, pornographic material was defined as being sexually exciting, or as material whose only purpose was to be sexually exciting. This was apparently assumed to be a bad thing. Myself, I regard arousing sexual excitement as a fundamentally good thing (with the usual exceptions that apply to anything, however good in itself), and I think so does much of the world. But it's also a basically individual matter just what is sexually exciting. I for example do not find most of what is commercial pornography in the US to be the least exciting. You may be different, and you have a perfect right to be, but on which of us should it be based? DGG 04:40, 27 August 2011 (UTC)
Pornography is defined by law now. It doesn't matter how you define it, it is quite clear what is pornography and what isn't. So answer the question: As a librarian, would you give those under 18 pornographic material or not? Ottava Rima (talk) 12:52, 27 August 2011 (UTC)
Any sexual depictions are in fact legal in Germany when they are for scientific/educational purposes. I would argue that WP qualifies for that. It matters a whole lot in German jurisprudence whether you show an image depicting people having sex in order to illustrate an educational article, or whether you show it to someone - possibly a minor! - with the intent of increasing their sex drive. You just go on saying that Germany bans this kind of stuff, yet these are all just random claims that you don't back up with anything but your ignorance of actual German jurisprudence. Find me a case that supports your thesis; I already showed and translated a part of German jurisprudence that backs up *my* claim. --178.201.100.215 09:49, 27 August 2011 (UTC)
I personally would like to have some warning before I find myself downloading an image that is illegal in the jurisdiction that I live in (in my case Britain). I have read both sides of the argument on whether pornography causes harm, and that has left me mainly as uncertain as before. But the arena to settle this is through elected parliaments. Given that Wikimedia sites are accessible from many jurisdictions, there should be the option for users to avoid breaking the laws of where they live when they don't intend to. Dejvid 11:35, 27 August 2011 (UTC)
To my knowledge, merely accessing these images is not illegal in most jurisdictions. The image itself also isn't illegal; only sending it to other people is. Therefore, you don't have to be afraid of anything since there is no law preventing you from downloading those images to your computer. The fact that jurisdictions haven't taken action against WP so far is a good indicator that what we are doing is completely fine. The FBI knows about the claims that WP hosts child porn after Larry Sanger's smear campaign, and still nothing has happened to WP. If we really are to let the jurisdictions decide, then it's best for us to wait until someone from law enforcement actually complains.
Last but not least, I fail to see in which article there are suddenly pornographic images jumping out at you although the title of the article is not sexual to start with. Care to give me some insight? "Virgin Killer" apparently doesn't count; that case was discussed more than enough. --178.201.100.215 11:51, 27 August 2011 (UTC)
"To my knowledge, merely accessing these images is not illegal in most jurisdictions" Quite untrue. Even German law says that accessing is illegal. Dejvid is 100% correct and your use of an IP here shows that you are pushing something that you know is quite wrong and refuse to connect it to your account in doing so. Ottava Rima (talk) 12:52, 27 August 2011 (UTC)
Accidental accessing of child pornography is not illegal by German law. It's only forbidden to intentionally acquire such images (Beschaffung), and only if the material isn't of artistic nature. Any graphical depiction of child pornography that doesn't focus on realism (based upon real child pornography) is also not forbidden. See: Mutzenbacher-Entscheidung. Ottava Rima! I must mention it again: you use one wrong argument after the other. Hopefully you don't do it intentionally, lying to us all the time. --Niabot 15:39, 27 August 2011 (UTC)
"Accidental accessing of child pornography is not illegal by German law. " There is no proof of that. There is no way to say "I accidentally saw it". "Accident" is not a defense in Germany. They go off your IP and the material. It is that simple. Ottava Rima (talk) 21:13, 27 August 2011 (UTC)
That's a huge difference to the US system! In Germany the prosecutor has to prove that you did it intentionally to acquire this kind of content. If it really is an accident, then he has no way to prove it. "Benefit of the doubt" always applies in such cases. You should really start getting some basic knowledge about the world outside of the US. --Niabot 22:39, 27 August 2011 (UTC)
"the prosecutor has to prove that you did it intentionally to acquire this kind of content" Nope. You can't just make outrageous claims like this. Ottava Rima (talk) 23:41, 27 August 2011 (UTC)
That's simply how a Rechtsstaat works. No one is guilty as long as it isn't proven that he is guilty. --Niabot 07:01, 28 August 2011 (UTC)
Sorry, but merely having child porn on one's computer is proof enough of guilt. That has always been true regardless of what you say. Otherwise, everyone could claim anything was "accidental" or that they didn't actually do it, which is laughable and preposterous. Germany has had a lot of major busts of child pornography rings lately, which shows that they are one of the least tolerant nations regarding child pornography in the world. Ottava Rima (talk) 15:19, 28 August 2011 (UTC)
Even accidentally accessing child porn in the US can result in prosecution. There are zero exceptions to that law, not even for academic research or software companies creating anti-porn filters. As written, US admins at Commons who receive a complaint about child porn and access the page for the purpose of verifying that the file contains what the complaint alleges it does and deleting it could be prosecuted. It's stupid, and the DA who filed the charges would probably be crucified politically, but that's the law.
"It was an accident" is a valid defense, but you have to convince a jury to believe you. An inexperienced US school teacher, for example, was convicted when her classroom computer started showing (adult) porn pictures. She couldn't figure out how to make it stop (porn malware's hard that way), had been told not to turn off the computer under any circumstances, and didn't have enough wits to cover up the screen. It really was an accident—nobody triggers malware on purpose—but the jury didn't believe her. (After several years of legal wrangling, the case eventually resolved with the teacher pleading guilty to a much lesser charge and surrendering her teaching license. So now everyone agrees that it really was an accident—but the victim of this accident has been punished with a federal criminal record, a fine, and the loss of her career.) WhatamIdoing 17:10, 27 August 2011 (UTC)
A sad and upsetting story. But how is this related to the filters anymore? --Niabot 17:20, 27 August 2011 (UTC)
There is no way to prove that you "accidentally" looked at child porn. It is a bad defense and unbelievable. If someone wants to make up a story that they didn't even know how to turn off a monitor or that they accidentally showed porn to kids, then they should go to jail for stupidity in thinking people would buy that. But if they did "accidentally" show it, then your argument is actually in favor of the filter which would help prevent such a thing when accessing Wikipedia for her class. Ottava Rima (talk) 21:13, 27 August 2011 (UTC)
Luckily we don't have juries inside German courts, and it must be proven that you did not accidentally come across such material. There must be at least enough evidence to prove that you did it intentionally. There is no jury to bribe. --Niabot 22:44, 27 August 2011 (UTC)
Why the hell are you talking about child porn now? I surely wasn't. And there is no child porn on Commons, not even close. I was talking about pornographic images (even by my more narrow definition) of adults. No child porn, no bestiality, nothing fetish-related, just normal hardcore porn of people fucking each other. Please show me the point in the US, German and UK lawbooks where it says that accessing adult porn is illegal. For Germany I can guarantee you that accessing adult porn is not illegal for any person. If you can't provide conclusive evidence like this, every reader of this discussion will know that I am right indeed. Seriously, I bet you don't even KNOW your own pornography law... --178.201.100.215 12:13, 28 August 2011 (UTC)
Under German law, "lolicon", the stuff that Niabot has uploaded, is illegal for merely having the "semblance" of "youthful" features. That shows that his arguments against the US as puritan are completely without merit. And what are you on about "accessing adult porn"? It was pointed out that it was illegal for those under 18 to access porn in Germany and just about everywhere else. Please log into your actual account instead of hiding behind your IP. Ottava Rima (talk) 15:19, 28 August 2011 (UTC)
"It was pointed out that it was illegal for those under 18 to access porn in Germany and just about everywhere else". Wrong. This was claimed by you and you only, and you provided no evidence to hold up your claim. Even on Wikipedia, it is clearly stated that *providing access* of porn to children is illegal. No word about accessing porn being illegal, neither for children nor for adults. Have you ever heard of a child being arrested because the child accessed porn? Maybe in Iran. Not in any western state. Because there is obviously no sense to penalize a child after it was allegedly harmed by porn already. Even you should understand that anything else would be quite absurd. If you don't... well again, provide me with a law text that says *accessing* porn is forbidden anywhere. You know what? You won't be able to. Because such laws exist only inside your twisted imagination of how you would like the world to be. --93.129.35.41 18:53, 28 August 2011 (UTC)

Most readers of this conversation should already know it. But for those who did not read a single line, I can only suggest ignoring the comments by Ottava Rima. He has shown (in this thread alone) at least a dozen times that he has no clue about non-US law. He also showed a very blinkered, narrow interpretation of the definition of pornography. Even under US law, sentences like "Depictions of the sexual act have always fallen under pornography." aren't true at all. Maybe he never looked at US art movements from the last 50-60 years. ;-) --Niabot 22:58, 27 August 2011 (UTC)

All of that unpleasant flame throwing aside (myself included): while German porn law may be interesting, remember that the whole point of almost all of the postings here is that this referendum was organised in a very unfair way, with awkward questions and not enough discussion before and during the initiation of the referendum. Most voters only read the material written by the organisers, who are very sympathetic towards the image filter, and thus we end up with a lot of information about users that is skewed relative to how people would actually respond if there were a more balanced presentation of the pros and cons of the filter and clearer questions. --Shabidoo 22:14, 27 August 2011 (UTC)

Merely claiming it was unfair doesn't make it so. You have a fringe POV, and thinking that you would have more support if you were dominating the presentation is fantasy. Ottava Rima (talk) 23:41, 27 August 2011 (UTC)
No one should dominate anything here. That's the very reason why this "referendum" is unfair. --Shabidoo 02:30, 28 August 2011 (UTC)
The referendum was to get people's level of response. It was not for you to treat it as a bully pulpit to demand that the WMF violate US law, German law, UK law, and just about the laws of most of the world because you want children to view pornography. Ottava Rima (talk) 15:19, 28 August 2011 (UTC)

Just make a few copies of Wikipedia with different levels of censorship. You can have a completely uncensored Wikipedia, a Wikipedia for Christian conservatives, one for conservative Muslims (similar to the Christian one, except that pictures of women are only allowed if they wear a headscarf), and an ultraconservative Muslim one where women may only appear in a burqa. Count Iblis 18:07, 28 August 2011 (UTC)

The German Wikipedia is not going to have image filters, as it seems. The majority is against it. So yes, this might indeed mean a fork of Wikipedia if the WMF tries to tell us Germans what to do. People on the German WP are discussing this possibility. I hope the WMF sees reason and stops this madness before it's too late. --93.129.35.41 18:53, 28 August 2011 (UTC)

Just wanted to let everyone know that Ottava Rima just got blocked indefinitely for his bad behaviour in this discussion, calling other people pedophiles, etc. Just in case you wonder why there is no further response - this is why. --93.129.35.41 20:46, 28 August 2011 (UTC)

It's all about how prudish and puritanical many people in the U.S. are, and it is because of that. Television commercials like this one for Fa won't be broadcast in the U.S. under any circumstances. But they are broadcast in many European countries, even during the daytime. Contrary to U.S. politicians and religious fundamentalists, ordinary people in Europe are not convinced that people go blind if they see one nipple. Yeah, indeed, I think seeing nipples is quite a normal thing – most babies see and suck such body parts very early in their lives, and it doesn't seem to have any bad influence on their further development. It is politicians and priests who make human sexuality a bad thing. And it's Wikimedia aiding them. Sorry, that's reminiscent of the Taliban, but in reality it's worse. Worse because the people on the board of trustees of the Foundation know better. Or at least they should. I have growing doubts whether the people on the WMF board of trustees are the right people to promote the free encyclopaedia and the other free sister projects. I doubt they ever took the slogan "Wikipedia. The Free Encyclopedia." seriously. It should be free. With a big F. --Matthiasb 18:54, 29 August 2011 (UTC)

Matthiasb, it is clear that you have not read all of the materials. The WMF receives frequent complaints from readers in Asia, Africa, South America, and the Middle East about the en:cultural imperialism of the USA, and these complaints are often specifically about the inclusion of what they believe are inappropriate sexualized images. American culture might seem prudish to someone from your corner of Europe, but it seems positively libertine to the majority of Internet users in the world.
Our users in the Netherlands and Germany are going to have to face facts: what you think is a normal, acceptable amount of nudity might occasionally be considered a bit much by the average American or Brit, but it is considered very, very inappropriate by nearly every other culture in the world.
Your "enemy" here is not the "prudish" or "puritanical" American. The people who are truly unhappy about your cultural standards are Asians, Africans, Latinos, Arabs, and Persians. They make up more than 80% of the world's population, and they have been vocal and persistent in their requests for the ability to stop what they see as Americans spamming sexualized images everywhere. WhatamIdoing 20:45, 29 August 2011 (UTC)
Please don't try to speak for British users, thanks. I'd rather not be lumped in with the puritan brigade, and am very much of the same opinion as my European neighbours Matthiasb and Niabot, with their healthy and rational attitudes towards sexuality and nudity (two categories which do not always overlap, contrary to some people's beliefs). Wikipedia is supposed to be here to spread knowledge and enlighten people, not cater to their medieval religious prejudices and ill-informed ideas about pictures of nipples making their children's eyes bleed. Unfortunately, reality is frequently offensive to the wilfully ignorant. Creationists would prefer that Wikipedia didn't present the facts of evolution, and similarly, prudish ignoramuses would rather no one was able to see a picture of a penis. We should not concede an inch to either party. We should be NPOV in all cases except that we stand against censorship and for free access to knowledge by everyone, whether or not their parents, governments or religious leaders are happy with the idea. This filter is a serious backward step for Wikipedia, and the 'referendum' on it is a farce. Trilobite 05:10, 1 September 2011 (UTC)
Agreed, the US people are not the main people who want this censorship. And indeed, not censoring Wikipedia could be called cultural imperialism. On the other hand, it is equally cultural imperialism if some people who have a patriarchal, sex-negative attitude force us to tag our content, at the risk of enabling a lot of censorship possibilities. Hiding the truth about the human body is fundamentally wrong for an encyclopaedia. We should show the truth about everything. You could use the same argument of cultural sensitivity for non-image content. So we shouldn't even get started down this road. --178.201.100.215 11:14, 30 August 2011 (UTC)
  1. Censoring Wikipedia on behalf of any group, whether US or not, is cultural imperialism.
  2. My enemies are censors and censorship. Since the "prudish" and "puritanical" Americans – heck, almost the whole world laughed about Mr Gonzales' censoring of Justitia – are helping censorship, yes, they are the spear-tip in advocating censorship, they are my enemy. If America wants to go back to the dark Middle Ages, please proceed as you wish, the next elections are next year, but please let the rest of the world stay in the 21st century. --Matthiasb 21:35, 1 September 2011 (UTC)
"If America wants" this: You're missing the point. This filter is not being created because Americans want it. It is being created because Asians, Africans, Latinos, and Middle Eastern people want this. If you would please check a map, you will discover that Asia, Africa, Latin America, and the Middle East are not part of the USA. WhatamIdoing 23:49, 1 September 2011 (UTC)
Did you ever see any raw data? The only thing I saw was a report by Mr. and Miss Harris, apparently not from the Middle East. I don't believe the claim that this filter is not being created because Americans want it. I don't believe it is being created because Asians, Africans, Latinos, and Middle Eastern people want this. The w:Arab spring shows that people in the Arab world are fed up with censorship. I am sure that this filter is going to be created because neoconservative circles in the US want this. And it's up to the WMF and the board to prove that this is not the case. --Matthiasb 13:35, 3 September 2011 (UTC)
To set some things straight, while researching the proposed (and rejected) policy for sexual content on Commons, some very surprising things became clear.
  • Yes, it is possible for people to inadvertently view child pornography - in fact, there was a huge flap where w:Traci Lords made some sexual videotapes that were widely distributed in the U.S. and elsewhere as mainstream porn, after which people eventually found out she was 16 at the time. Oops. We can't actually rule out that Wikipedia might likewise pick up an image inadvertently, especially since we have no real option but to trust amateur photographers making educational photographs. In addition there are arguments over definition - there was in fact some argument over w:Virgin Killer where people are still debating whether the album cover is child pornography or not.
  • Yes, Wikipedia does legally provide some things which a person applying the w:Dost test to nowadays would probably call child pornography, namely the "Korpes des Kindes" (Bodies of Children) photographs on Commons. Though not even the ACLU will argue that child pornography is protected by the w:Miller test, it should be apparent that in fact this is the case here, because the photographer, convicted as a pedophile and spending a few months in jail under the lax sexual mores of the Victorian Age, is apparently well regarded historically as a pioneer of the art, and so the photos have historic and artistic significance. While that may sound controversial, the fact is, w:Larry Sanger called the FBI, which was widely reported in the press as investigating Wikipedia at his instigation. And Wikipedia certainly earned no special treatment e.g. by publicly pooh-poohing the FBI's call for restrictions on using the FBI logo in articles at the time. Nonetheless, no prosecution, not so much as a warning ever came out of it, and the photos remain on Commons to this day.
  • I don't know if that means that the FBI believes those photos are protected by the Miller decision or if they're just afraid the courts will finally draw the line against censorship altogether, but they're doing the right thing. The fact is, all that is accomplished by the frantic efforts to ban such images, including expansive bans on computer simulated and artistic images, is the state subsidy of a multibillion dollar market in the kidnapping and videotaping of real children. And the public's ability to remain blissfully ignorant of as many as hundreds of thousands of children held in slave conditions as prostitutes in the U.S. alone. While enforcement agents with weird Orwellian agendas prosecute people for "child porn" for participating in file-sharing networks without knowledge of the files traded, and throw people down the stairs for offering open access to Wi-Fi hot spots, and do next to nothing for kids even when they are arrested as prostitutes!
  • Wikipedia is not about people hiding their heads in the sand like the legendary ostrich, but opening their minds to the facts - including facing brutal realities head-on and finding ways to fix them. Wnt 00:30, 2 September 2011 (UTC)

Should we have discussions in languages other than en?


Right now, it looks like all the language discussion pages redirect to en. Being hit with a screenful of English, many editors may simply not even try to discuss.

Could we open up discussion pages in other languages and integrate them into the per-language "discuss" advertisement, along with a reminder for en speakers to participate here as well? --AlecMeta 02:15, 24 August 2011 (UTC)

The discussion pages can be accessed by first going to the pages of the referendum in those languages, and then going to the talk pages. It's very interesting that most of these pages are empty. This includes the ones for some of the languages whose cultural areas have been presented here as among the primary beneficiaries - there seems, for example, to be no discussion in Arabic. My tentative conclusion is that the board has been exercising what can only be called cultural paternalism (along with the more literal paternalism of not wanting young people to see sexual images). Perhaps that accounts for their remarkable assertion that there is an intercultural objective international standard for questionable images. DGG 04:05, 24 August 2011 (UTC)
If anybody is automatically redirected to the English discussion page, with content already present, then nobody would assume that discussions in one's own language were intended. Since this seems not to be the right place for many users, they decide to discuss inside their own language wikis. A huge problem, since nobody can follow these discussions and feed them back to Meta. I could have written on the German discussion page on Meta. But it seemed to me that only the English discussion page was meant to be the place for discussions. --Niabot 11:42, 24 August 2011 (UTC)
All the sheep herd to the same place? ;-) --Kim Bruning 20:02, 24 August 2011 (UTC)
I suspect we had no idea that we would get this much discussion. Ar, for example, seems to just dump people here if they click on the Arabic word for talk: [28]. How do we say "If you don't speak English, you should talk [[here|Talk:Image filter referendum/$langcode]]" in our most popular languages? --AlecMeta 20:20, 24 August 2011 (UTC)

Non-English Discussions

This section requires expansion.

Lag


The last posts to this page are shown to be from the 21st of August; the history shows that there's more, but it's not shown on the discussion page itself. --94.134.218.22 10:59, 24 August 2011 (UTC)

Check the subpages, or force the server to refresh its cache on the central page. (transclusion is strange like that). Lemme fix... --Kim Bruning 19:56, 24 August 2011 (UTC)
Done!--Kim Bruning 20:01, 24 August 2011 (UTC)
Not done! It's now only up to August 23rd. --94.134.192.160 06:55, 25 August 2011 (UTC)
And the History page isn't working right either. One can't follow the discussion on the page, because it lags and one can't follow it via the History, because that doesn't record everything. Looks like someone is trying very hard to make it impossible to follow the discussion at all. --94.134.215.77 08:32, 26 August 2011 (UTC)
Works for me - I see your comment of 26 August just fine. For history: use the history of the subpages. This is all pretty straightforward transclusion; I wouldn't call it a conspiracy on those grounds! ;-)
Also: see the button I put at the top of Talk:Image filter referendum. Hit that button once in a while to ensure everything is in sync. --Kim Bruning 11:57, 26 August 2011 (UTC)
That button doesn't work at all and, yes, making it difficult to easily participate in the discussion, including following the history of umpteen subpages linked somewhere buried on the page, looks like an attempt to obfuscate participation, instead of having one page where all new discussion is in the same history and new things are added at the bottom, not sorted into several random arbitrary categories. --94.134.213.137 09:18, 29 August 2011 (UTC)
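For reference, the caching behaviour Kim describes above is standard MediaWiki transclusion: the parent page's rendered HTML is cached, so edits to transcluded subpages only appear after the parent's cache is purged, which any reader can trigger by appending action=purge to the page's index.php URL (this is what the "Purge" button does). A minimal sketch of building such a purge link; the purge_url helper name is illustrative, not an existing tool:

```python
from urllib.parse import urlencode

def purge_url(base_index_php, title):
    """Build a MediaWiki purge link: loading this URL asks the server
    to discard the page's cached HTML and re-render it, picking up
    any changes in transcluded subpages."""
    return base_index_php + "?" + urlencode({"title": title, "action": "purge"})

url = purge_url("https://meta.wikimedia.org/w/index.php",
                "Talk:Image_filter_referendum")
print(url)
# https://meta.wikimedia.org/w/index.php?title=Talk%3AImage_filter_referendum&action=purge
```

Note that logged-out visitors are typically shown a confirmation page before the purge actually runs, so the on-page button is essentially a convenience wrapper around this same URL.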

HELP: Translation sv/Swedish ready BUT CANNOT BE USED - URL needed for Swedish questions


I am going to set the last Swedish page to "done" now, and then the Swedish voting would be ready, but I do not know the URL for the Swedish voting server. It should be something like the German http://de.wikipedia.org/wiki/Spezial:Sichere_Abstimmung [29] and the French http://fr.wikipedia.org/wiki/Spécial:Vote_sécurisé [30]

--Ronja 12:19, 24 August 2011 (UTC)

I'll make sure that Andrew sees this. Philippe (WMF) 19:58, 24 August 2011 (UTC)
Thanks! --Ronja 22:14, 24 August 2011 (UTC)
Presumably the appropriate redirect from "Special:SecurePoll/vote/230" will be in place? (Not that there's much time left...) Rich Farmbrough 23:34 29 August 2011 (GMT).

Something to watch


Philip Pullman: No one has the right to not be shocked... What the filter will do better --Eingangskontrolle 19:24, 14 September 2011 (UTC)

Don't correlate two unrelated issues to forge a false theory. The filter is proposed because the Foundation respects some users' desire to show sensitive pictures in the name of encyclopedic value. When the Foundation also tries to respect other users who are offended by the sensitive pictures, the one who is enjoying the freedom to show the picture has no right to condemn the countermeasure. We're not even asking for a text filter. But would you not be shocked if I glorified Hitler and the Holocaust? -- Sameboat (talk) 16:46, 15 September 2011 (UTC)
Seriously! I wouldn't be shocked at all. You have the right to think what you want. If it's not against the law, you have the right to publish it. Your argumentation would imply not surfing the WWW at all, because I might stumble upon a page with exactly this content, which occasionally happens. --Niabot 17:21, 15 September 2011 (UTC)Reply
In order to protect yourself against unexpected images, everyone can switch off all images. We don't need to respect users who do not want information; they have chosen the wrong URL when typing wikipedia.org, as this site is about knowledge and education. And I understand your statement about Hitler: the logical next step has to be a text filter, in order to avoid written statements which might disturb some users. --Eingangskontrolle 08:14, 16 September 2011 (UTC)Reply
So it is their own fault for opening the de.wp main page when the consequence is far beyond their personal tolerance or acceptance? Wikimedia is not for your self-righteous values. -- Sameboat (talk) 09:08, 16 September 2011 (UTC)Reply
It interests me that these people live in a country where cities carefully enforce, as a civil right, strict limitations on when the neighbors mow their lawns, because it's very important that no one be subjected to the dull roar of a lawn mower during the little children's naptime, or (last I checked, which was a few years ago) at any time at all on Sunday.
However, the equivalent of allowing lawn mowing at any time and offering earplugs to the neighbors horrifies them, because while it's very important that the neighbors don't disturb them, it's also very important that they can shock these same neighbors with naked pictures at any time, even if the neighbors don't want to see the pictures. WhatamIdoing 17:46, 16 September 2011 (UTC)Reply
It's amazing how the very idea of this filter has created a giant polemic and brought out extreme reactions and generalisations. On one hand, a user is screaming "DRACONIAN CENSORSHIP" and on the other hand another side is screaming "PROTECT OUR CHILDREN FROM PEDOPHILES WHO SHOCK US WITH CHILD PORN IMAGES", while a small group of people recognise that the filter would be a feature that challenges the current philosophy of "consensus" and "openness" behind the Wikipedia project. It will be a long and complicated debate. --Shabidoo 13:06, 19 September 2011 (UTC)Reply
I have hardly seen anyone actually scream to protect the children with the image filter (which would otherwise be a censorship tool enforced by parents). We don't use children as our meatshield when defending our standpoint. Neither does the WMF. -- Sameboat (talk) 09:08, 20 September 2011 (UTC)Reply


General comments

[edit]

Seldom have I seen such a blatant abuse of the term "FAQ"...

[edit]

... to label a set of personal and disputed opinions. That so-called "FAQ" is perhaps the most manipulative meta page I have seen on a Wikimedia project in the last half decade. -- Seelefant 19:00, 25 August 2011 (UTC)Reply

It is nauseating, to say the least. I added the second paragraph in the "censorship" section... but it makes little difference. The FAQ page is a revolting one-sided farce. --Shabidoo 21:56, 25 August 2011 (UTC)Reply
It's a page on meta, subject to the same rules as any other meta page. Can you fix it? --Kim Bruning 11:54, 26 August 2011 (UTC)Reply

Claim: some images will be deleted from commons according to the Study of Controversial Content

[edit]

it must be noted that they are offensive to many people, men and women alike, and represent with their inclusion a very clear bias, and point of view – that of woman as sexual object. They are far from neutral. We would never allow this point of view untrammeled and unreflexive presence on any Wikipedia site as a clear violation of NPOV – we should not allow it on Commons either.

— 2010 Wikimedia Study of Controversial Content: Part Two

This means that images like File:Bruna Ferraz 2.jpg and commons:File:Bruna_Ferraz_with_Photographer.jpg will be deleted, but apparently most people here think that no image will actually be deleted from Commons. Hipólito Restó & Arte carlitox 12:08, 26 August 2011 (UTC)Reply

Maybe this will be deleted too File:Sunny LeoneDSC 1298.JPG, after all it's clearly biased and violates the NPOV? (according to the study) Hipólito Restó & Arte carlitox 12:13, 26 August 2011 (UTC)Reply
Two Words: "Prudent Idiots!"
Add a photograph of a male model and there isn't any unbalanced POV anymore. But wait: is it POV that most models are female? Don't we have hundreds or thousands of sources confirming that female models are far more common than male models? Where is the POV they are talking about? The study itself is a POV, the POV of some incompetent fags, called the WMF --Niabot 12:20, 26 August 2011 (UTC)Reply
Please strike your rude language. That kind of insult is uncalled for. WhatamIdoing 17:33, 26 August 2011 (UTC)Reply
Why don't you wait for a text filter? Then you won't have to bear the thoughts and disgust of others that the Board causes. -- WSC ® 17:43, 26 August 2011 (UTC)Reply
Can't imagine why. You should read Civil disobedience and try to understand its background, if not blocked already. --Niabot 21:27, 26 August 2011 (UTC)Reply

Hipólito Restó & Arte carlitox, you do not need to worry about this. Commons does not follow the NPOV policy. That policy is not required by the WMF. It has been voluntarily adopted by most of the individual projects, but Commons has directly rejected it as being irrelevant and inappropriate to Commons' purpose.

However, Commons does have a policy that it retains only educational content. It is not merely a free webhost for any type of picture. Non-educational images are, and should be, deleted routinely. This is solely because they are non-educational, not because they are biased or objectionable. WhatamIdoing 17:39, 26 August 2011 (UTC)Reply

I've been playing Censor's Advocate for a while, but even I have to admit that Recommendation 4, taken literally, isn't a good one, and the justification for it is even worse. We do have a duty to be culturally neutral, while Commons does not have a duty to be NPOV at all. We agree that information needs to be "educational", but there is no consensus at all on what "uneducational information" looks like.
The word "uneducational" is a nice word, but it doesn't exist-- our board could go on a field trip to the New York City public library and spend their entire lives looking through its 53 million items without ever once finding a single item they could all agree was "uneducational". In particular, porn IS highly educational, whether we like it or not.
Recommendation 4 suggests we just take "intended to arouse" as a 'good enough' definition of "uneducational". It is not a "good enough" definition. All art is intended to arouse the mind of the viewer, in one form or another-- it educates through arousal. Also, intended by whom? The original image producers might have intended it exclusively to arouse, but it may have been uploaded for use on Wikiversity with the intention to educate.
But, I don't think Recommendation 4 is intended as any change in status quo-- I think it's merely an inelegant attempt to justify status quo. --AlecMeta 20:27, 26 August 2011 (UTC)Reply
Recommendation 4, and the quote above that started this section, were made to the community and the Board, but the Board at least did not accept that recommendation. Its focus on sexual content rather than controversial content generally, and its definition of scope, were, as you correctly point out, not good enough. SJ talk | translate   03:06, 6 September 2011 (UTC)Reply

Insight -- tags of Controversial Content _IS_ controversial content

[edit]

Got a really interesting response to my question above[31]. It made me realize something rather amazing--

Just the data about what content is controversial is itself controversial content-- that is, some people object to the WMF even hosting that data. We worry that porn is merely uneducational, but a lot of people are very worried that the tagging data will actually be anti-educational: that it will be used as a tool of censorship and it is 'out of scope' for us to even host that data, much less actively solicit its collection.

That's not my personal point of view, but I think it's a very interesting one that I find much harder to dispute. --AlecMeta 20:52, 26 August 2011 (UTC)Reply

not tagging intrinsically, but tagging for POV; assigning objective subjects to images and articles is encyclopedic--an encyclopedia without categories would be hard to navigate, but an image repository without categories would be unusable. The current proposal is POV tagging, done explicitly because proper objective tagging wouldn't meet the needs of censorship. DGG 05:41, 28 August 2011 (UTC)Reply
Prejudicial labeling is harmful. Informational categorization is helpful. The key phrase here is "Directional aids can have the effect of prejudicial labels when their implementation becomes proscriptive rather than descriptive." Which suggests that a completely context-neutral image tool that lets anyone pick any category and shutter it, without making suggestions to them, would be fine by ALA standards. But any fixed set of categories, chosen for the sake of 'usability', would be proscriptive and would not. SJ talk | translate   00:35, 6 September 2011 (UTC)Reply
It took a long time to figure it out. But the American Library Association (ALA) has known this since 1951 (revised for new media) and introduced basic rules for content labeling and rating systems. It matches your conclusion exactly.
Since the rules are so old, revised by librarians around the world, and still standing, I'm very confused why the WMF paid money for the Harris Report, acting as if it didn't know about the ALA. Either they never gave it much thought, or they did it intentionally as hidden propaganda. Both are bad signs. --Niabot 08:20, 28 August 2011 (UTC)Reply
I believe the Harrises consulted a number of Library Associations, including the ALA and the CLA. The librarian concept of 'intellectual freedom' shows up many times in the context of the report. See this interview.


Defective banner

[edit]

Several times a day, I end up on this page when I don't want to be here.
Why? Because when I am searching for something in enwiki, I get a pulldown menu that overlaps the Wikimedia banner.
I click on something in the pulldown.
But my click "passes through" the pulldown and is detected by the banner and I end up here.
This happens with every Wikipedia banner.
Now, I am using Internet Explorer; possibly that in itself is a factor.
Regardless, due to this defect, I have ended up on the wrong page 100s of times. Hundreds.
Speaking as a professional programmer, my solution to a persistent problem like this, which has been apparent for years, would be to restrict the active area of the banner, excluding the area that gets traversed by the pulldown. And not even the entire pulldown, since the pulldown options are left-justified. So excluding the left half of the pulldown region would be good enough.
And I might restrict this modified behaviour to Internet Explorer only, unless this is a problem which applies to all browsers generally.
This is a better bug report than I get from clients. Varlaam 20:00, 28 August 2011 (UTC)Reply

No chance for this bug report - Wikipedia programmers are used for something completely different. --Eingangskontrolle 19:11, 2 September 2011 (UTC)Reply
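For illustration only, the hit-testing restriction proposed above can be sketched as pure geometry. This is not the actual banner code; the function name and all pixel coordinates are hypothetical, and a real fix would apply the same check (or an equivalent CSS/DOM change) inside the banner's click handler:

```python
def banner_accepts_click(click_x: int, banner_left: int, banner_right: int,
                         pulldown_right: int) -> bool:
    """Decide whether a click at pixel column click_x should activate the banner.

    pulldown_right is the right edge of the left-justified pulldown text;
    everything to its left is excluded from the banner's active area,
    as suggested in the bug report above.
    """
    inside_banner = banner_left <= click_x <= banner_right
    inside_excluded_zone = click_x <= pulldown_right
    return inside_banner and not inside_excluded_zone

# Illustrative numbers: banner spans columns 0..1000, pulldown text
# reaches column 300.
print(banner_accepts_click(150, 0, 1000, 300))  # False: click lands on the pulldown region
print(banner_accepts_click(600, 0, 1000, 300))  # True: click in the clear area
```

The same idea could be restricted to Internet Explorer only, as the report suggests, by guarding the check with browser detection.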

Image Filter Referendum - edits vs undesired information

[edit]

Hello,

Thank you to the creators, participants, constituents, non-commercial customers, and readers of Wikipedia. I love this search engine. I'm happy it exists. The concept needed to see existence.

Kindly place all suggestions respectfully above the post, or indicate them with text in another color, as this is my first post and I'm offering an opinion as much as I am understanding, for the first time here, how this process works.

I happened to notice, while crossing the image filter referendum page the other day, that a portion of the writing stated the importance of the images remaining something to the effect of non-biased perspectives. I cannot remember the exact wording, but it sounded like some sort of neutral or politically correct stance. In all fairness I probably ought to re-read to be certain, but it spawned an idea, and I am getting it out instead of researching for now. Someone may have something to detail or contribute that would clarify. Correct me if I'm mistaken, please, with commentary above this entry, and I'll check back in tomorrow.

Why not allow the content to have whatever political position or leaning it has in reality and, if there are the resources to research or collect, allow the blocks to happen just as naturally as the content is posted, collect that information, and donate it to organizations already committed to neutrality and global acceptance, or even give the information to an artist who is ready to do something interesting and potentially helpful with it, immediately or later?

With the fear and sometimes radical thought/talk of monitoring, the paranoia about online participation, the concerns of stakeholders and customers of various online entities about where their information is going, and the general willingness of some opinionated people, some must be ready to voluntarily donate general information: location, religious views, political stance, age, race, sex (ALL possibilities), place of origin, "handicaps", level of education, level of intelligence, stressful aspects of life, level of life experience, professions, careers, passions, former professions, former careers, life title, hobbies, financial status, frequency of travel, places traveled to, time spent reading, time spent on culture, time spent on sport, time spent on team sports, time spent computing, time spent on family, time spent on love, time spent having sex, time spent on self-education, born class, current class, etc. (the list could grow or be edited), always with the option to choose "other" (described or undefined) and the option to mark a question as un/important. That information could be digitally collected and converted into a statistical one-sheet, or into data distributed not to an overseer but to an entity or group charged with problem solving, such as the United Nations, or to more political organizations such as Amnesty International, or to other non-profits large or small that could use these statistics, along with or without experts, empaths, artists, or specialists, to help the organization understand what is bothering individuals in different areas of life and why. It could even be given to artists only.

Why do we collect this information only for market purposes, and is it not often for the purpose of sales? I'm not proposing that the imagery should, as a result, be pushed or dramatized, or that kinds of images be specifically chosen to offend. Absolute political correctness, at a time when things are not always correct, is potentially as stagnant and sterile to some motivating problems as political incorrectness. Why not use that which is seemingly negative, in our/your/my point of view, to provide not judgement but information, art, or counter-imagery from neutral, caring, humorous, and/or wise entities, without names or specifics tied to the data, so as not to endanger anyone's right to safety, solitude, and privacy? Is it not fair to give it to creative characters that can again be edited if one chooses? It is possible to understand the prejudices, issues, and difficulties of certain generalized locales or areas (geographical or otherwise) without targeting individuals or geographic areas, and surely organizations held mostly in global respect, or regarded artists who cater to a cultured world or deliver with enough cleverness to the mass public, can be trusted with enough responsibility to understand issues without causing pain or public shame to others with opposing mindsets.

Is it not helpful to know how prejudices are concentrated? Is it not possible that potentially negative or borderline types of imagery and information may also be useful in some way, if even for awareness alone? And does it not seem dangerous, generalizing, or numbing to homogenize our surroundings so completely that everything is neutral, without dialogue amongst caring circles and mature adults? Isn't there a certain truth in such things that also has value? If nothing is experimental, specific, in danger of being offensive, or strictly for intelligent and/or sensitive adults with potentially diametric views, then isn't our outlook actually potentially altogether unreal?

If I am stating the obvious and taking the writing too extremely, then I am admittedly guilty of microscopic criticism. If, in fact, contributors to Wikipedia are already too busy as it is, and this is a relevant way of saving time and energy and decreasing workload, then I can relate and see how this could be an intelligent trial at efficiency, but maybe we can get more from our situation. If the position is literally imposed by a state or by politics onto some governed group of people you want to reach, then I also understand. If Wikipedia is that governed entity, I think I'm a bit confused. I wonder whether there ought not to be an adult log-in for a sexual, hyper-violent, or legal-but-clearly-not-for-everyone's-children area that adults can log into for even more information and imagery. Perhaps this is not the arrangement Wikipedia has with a host or government. Why is it that the proposal reads to me as if it affirms the acceptability of violent material over sexual material? This is ultimately very unfortunate and worth considering as a strange truth for some now.

I appreciate the existing ideas behind Wikipedia, WikiSpecies, WikiHood, etcetera, that I've noticed popping up online. I enjoy using Wikipedia along with the other dictionaries and search possibilities, and I am, overall, grateful for the internet and specifically for projects like this, and I look forward to their development and long-term evolution and existence. As an individual who is very fond of imagery and its power to move us, I would trust in us to be able to edit what we love, lust, want, like, or may like, individually, as possibilities to grow. I also welcome what I never knew I wanted to see, formerly did not want, and even sometimes accept that which I still don't care to see as a reality of the world, and I embrace the proposal to self-edit this. What to do with the waste is what we face, but wanting it gone altogether, in a possibly fantastical way, may be as obtuse as thinking our material garbage disappears after it is tossed. It may negate the possible benefit of editing what one chooses to accept from one's reality, and the power of this and its transformation, as well as possibly occasionally (at the risk of sounding cliche) endangering the strength and joy of what we want to see and understand.

Generally, for me, there has been a lot missing from encyclopedias of the past. Points of view, especially the occasionally extremely juxtaposed, multi-sided points of view, especially in the form of real images, that are not immediately or radically dangerous to others, with the human power and ability to process them considered, may be ways to understand the world more precisely.

If someone has a more positive or productive way of living I welcome understanding it. I am interested in reading any suggestions above this post tomorrow and in the future.

Thank you for sharing your thoughts. We made a decision early on to strive for a single useful article on each topic. We have not achieved that, thanks to language barriers, but within each language we try to simplify things down to a single highly edited article rather than a diversity of unfinished ones. Other possibilities exist, and may one day happen building on top of Wikipedia or in parallel to it. "multi-sided" points of view appear in Wikipedia, where they do, in the form of balanced coverage of what different groups think on a topic... a perspective I have often found quite useful. SJ talk | translate   03:06, 6 September 2011 (UTC)Reply

censorship

[edit]

wins

190.51.161.77 11:07, 1 September 2011 (UTC)Reply

is completely against the open character of all Wikimedia projects. If one can't see some contributions others made, what use do those pictures have? If you have something to hide, join the secret service of your country and don't make contributions to Commons, Wikipedia, and other projects of the Wikimedia Foundation. Cheers, S.I. 'Patio' Oliantigna 12:21, 1 September 2011 (UTC)Reply


Will the proposed filter protect first-time users?

[edit]

Given that the purpose of the proposed filter is largely to protect unsuspecting people from coming across graphic images which cause "astonishment", "dismay", etc., will the filter protect first-time users of Wikipedia? Will those coming in from search engines, for example, receive a notice or will the filter only protect those who have already been disturbed by graphic images? Regards, Ben Dawid 08:15, 27 August 2011 (UTC)Reply

A quite significant question. As the "FAQ" states that the filter would "not be a form of censorship", I would deduce that it would not "protect" users from "astonishment" who have not opted in for such "protection". But I could be wrong here. -- Seelefant 08:45, 27 August 2011 (UTC)Reply
I've often experienced astonishment on Wikipedia, and I've never considered this a bad thing, quite the opposite. See en:Aha-Erlebnis.
Of course, I realise that not everyone wants to learn new things or have their preconceived ideas challenged. But those people don't need Wikipedia; they can always go to Conservapedia, Metapedia and the like. --Florian Blaschke 10:08, 27 August 2011 (UTC)Reply
That would be fine, if Wikipedia publicly warned the unsuspecting to stay away and use other sites. But Wikipedia does not claim to cater only to some minority group in favour of images that most people find objectionable. The problem is for those people who do not realise that Wikipedia, unlike mainstream encyclopaedias, allows such "disturbing" and "shocking" images without any form of warning or censorship whatsoever. The only reliable way to prevent the unsuspecting from viewing objectionable imagery - rather than "closing the stable door after the horse has bolted" - is for Wikipedians to have the proposed filter automatically set to "on" for newcomers. This way it will serve its purpose. Otherwise it would be merely useless tokenism. Regards, Ben Dawid 11:40, 27 August 2011 (UTC)Reply
The unsuspecting have long been warned to stay at home, lock the doors, and for God's sake not connect to the internet, which has all kinds of haughty and naughty things on it. This simple approach has the additional benefit that they have no means of trying to censor, disrupt, and sabotage a collaborative internet project that has done more for the spread of knowledge and enlightenment than whole generations of bigoted puritanic scholars. -- 13:10, 27 August 2011 (UTC)
And mainstream sources these days show disturbing images, even on the front page of newspapers like the NY Times. (In their case disturbing = violence/disaster, not disturbing = sex, but still not the sort of thing the old EB used to have.) Films aren't like they were 50 years ago, either. And need I mention television - and all this in conventional media, without even going on the internet. There is no way of having an information source that protects children and satisfies adults. There is nothing intrinsically wrong with a children's Wikipedia with somewhat different standards and policies, but there is something wrong about making it by destroying the principles of the present Wikipedia. DGG 05:38, 28 August 2011 (UTC)Reply
Even Classical scholars are showing less restraint. If the Loeb Classical Library is willing to publish unexpurgated translations of Aristophanes & Plato, then some Wikimedians must find this confirms the exact opposite approach is the best solution. -- Llywrch 00:24, 29 August 2011 (UTC)Reply

I believe the answer is "No." The current proposal would give repeat users the ability to set a preference that would hide certain images by default in the future. They would have to first visit the site and discover that option. SJ talk | translate   02:35, 6 September 2011 (UTC)Reply

Can we have a poll like on the German Wikipedia?

[edit]

Can we have a real poll, at least an internal one, just like the one that is currently running on the German Wikipedia? Would anyone be ready to start one?

I realise that we cannot prevent the filter from being implemented anymore, but at least we'll get a better idea of the true majorities and if it turns out there is a large opposition against the filter, at the very least, we can document this and send a powerful message to the WMF, or perhaps even stop the filter from actually being used on the English Wikipedia, as it seems likely to turn out like on the German Wikipedia. Even if the opposition is not large enough to achieve this, we will have a real poll with real arguments at least, not this farce. --Florian Blaschke 09:33, 27 August 2011 (UTC)Reply

I wouldn't say it can't be prevented. Judging by this discussion, I think a lot of people in this poll are giving the image hiding a very low priority indeed. If the result is dramatic enough, I'm hoping that will be the end of it. Wnt 15:49, 27 August 2011 (UTC)Reply
Friends, I like and have liked to contribute here and there in the past. So I wanted to vote in this case – a case that sounded "sound" to me. But I found just an English insert that "The Wikimedia Foundation is holding a referendum …" in my otherwise German pages. I didn't find the German translation of the referendum, although it must exist somewhere. A translation request I could see, but then only the translation status, not the translation itself. When I wanted to vote I got lost altogether. Entering "Special:SecurePoll/vote/230" into the German Wikipedia search field gave no result. Please make it simple for us to vote. Fritz@Joern.De or 109.91.251.3 16:45, 27 August 2011 (UTC)Reply
Fritz - sorry to hear this. That's something to improve in the future. (I think you needed to go to de:Special:SecurePoll/vote/230, it won't show up in a search)
Seems like the Board is not interested in whether the authors of their projects agree? Please start your own poll! Find out whether your community agrees with the Board! Don't let them fool you and treat you like a person who is not able to decide for yourself what is reasonable in an article you wrote and what is not! -- WSC ® 11:37, 28 August 2011 (UTC)Reply
Image filter referendum/de .. it's not hard. :) Rich Farmbrough 11:12 29 August 2011 (GMT).
Seems like en users love to be patronized. Maybe that is why they still have a queen and advance knowledge of the needs of any community? -- WSC ® 13:02, 29 August 2011 (UTC)Reply
Certainly we can prevent the filter from being implemented if we make it clear enough that the board misjudged the community. I do not think they are so pig-headed that they will simply ignore opposition if the opposition is strong enough. What we have to do now is see what the results are--with the emphasis on the results of question 1, which is the nearest to the general issue. If the majority of people voting from the en WP think it highly important to have such a filter, then we will have one, and I will be forced to admit the majority of people who voted here do not see the encyclopedia in the same way I do. If the majority clearly don't think it important to have one, and the board goes ahead anyway, then it will be time to consider further advocacy and further action. There's no point laying it out now. But I point out that even the board members who at the last 2 elections would have been expected to oppose this resolution thought it necessary to join the majority--presumably to avoid something worse. If the feeling is strongly enough against the resolution, they might well change their mind. DGG 19:40, 30 August 2011 (UTC)Reply
The current intermediate result after five days is 158:27. That means that approximately 85% of the users are against the introduction of the filter inside the German speaking Wikipedia. --Niabot 19:55, 30 August 2011 (UTC)Reply
No, that shows that ~85% of editors who cared enough to comment at de.wiki are opposed to this. Participants in a discussion ≠ All users. WhatamIdoing 20:18, 30 August 2011 (UTC)Reply
That also includes all editors who cared enough to comment at de.wiki that they support it. For large numbers, the average of all participants is most likely the average for all users. --Niabot 20:45, 30 August 2011 (UTC)Reply
Widewie, ich mach die Welt, wie sie mir gefällt... ("I make the world the way I like it"; WhatamIdoing can read that.) That's always the problem with elections, and we didn't have the chance to send an email to alert all users. And the referendum will show no support at all, as this was not included in the question. --Eingangskontrolle 19:09, 2 September 2011 (UTC)Reply
But what makes WhatamIdoing believe that if more users cast their votes the trend would change? This poll is valid on the German Wikipedia. You can call this vote representative if you want to. You can make a statement about the trend in the German community. Or you have to prove the poll wrong. -- WSC ® 14:56, 5 September 2011 (UTC)Reply
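For what it's worth, the "approximately 85%" figure debated in this thread follows directly from the 158:27 tally reported above. A minimal sketch of the arithmetic (the vote counts are taken from the thread; nothing else is assumed):

```python
# Intermediate tally reported above for the German Wikipedia poll.
oppose, support = 158, 27

total = oppose + support          # 185 participants so far
share_opposed = oppose / total    # fraction of participants opposed

# Format as a percentage with one decimal place.
print(f"{share_opposed:.1%}")     # prints 85.4%
```

Whether that share of participants generalizes to all users is exactly the question WhatamIdoing and Niabot dispute; the code only checks the quoted percentage.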

Nothing wrong with a wiki poll here -- as DGG says, anyone is free to start one. I do have the sense that a reprise of the vote with a proper set of questions might be worthwhile. In the future I would love for the poll tool to let people choose whether they want their vote published on the wiki, which would provide at least a partial list of voters in a more traditional RFC format so everyone could see the poll-mediated-comments as they develop. SJ talk | translate   00:38, 6 September 2011 (UTC)Reply

Poll should have ended

[edit]

Per the main information page, the poll should have ended ~4 hours ago. Can someone update this? Thanks. mc10 (t/c) 04:39, 30 August 2011 (UTC)Reply

No, you are a little early. :) Rich Farmbrough 14:22 30 August 2011 (GMT).

{{Editprotected}}

Now it has ended... Rich Farmbrough 00:09 31 August 2011 (GMT).
But still nothing changed on the page. --Túrelio 12:29, 31 August 2011 (UTC)Reply
Done? - I see The referendum ended 30 August 2011. No more votes will be accepted. Best, -- Marco Aurelio 10:37, 2 September 2011 (UTC)Reply

When are results out?

[edit]

I don't mean to be demanding or anything... but when should we know the results of the poll? Thanks, 173.202.74.217 02:10, 31 August 2011 (UTC)Reply

The results should be released on September 1, 2011.  Hazard-SJ  ±  03:09, 31 August 2011 (UTC)Reply
The complete timeline, including the release of results, has been posted at Image_filter_referendum/en#Timeline for months. WhatamIdoing 16:39, 31 August 2011 (UTC)Reply
And now it was changed. Perhaps the board needs more time to write their letters of resignation? --Eingangskontrolle 19:14, 2 September 2011 (UTC)Reply

Waiting

[edit]

We are waiting for the results. This poll was made with a computer, and a computer can give us the figures in a matter of seconds. We want to see the raw figures, not the interpretation, and we want them now. Who is using all this time, and for what purpose? Perhaps to filter some of the votes? --Eingangskontrolle 21:15, 1 September 2011 (UTC)Reply

It takes some time to check that the additionally added random votes (5 average) don't look too random. ;-) --Niabot 21:24, 1 September 2011 (UTC)Reply
Relax, guys. Assuming that the commitee has retreated to the Baker and Howland Islands, they still have more than 11 hours to keep their schedule. -- Seelefant 00:13, 2 September 2011 (UTC)Reply
I considered it as "prime vacation" area, but the internet access was... suboptimal. Philippe (WMF) 06:48, 2 September 2011 (UTC)Reply
  • As noted at Image filter referendum/Results/en, the larger issue is ensuring that we review and provide a genuinely representative sample of the comments received. When the timeline was developed, nobody anticipated that there would be over 24,000 votes or almost 7,000 comments, the latter of which must be manually reviewed, and many of them translated. We're working as quickly as possible. Risker 04:20, 2 September 2011 (UTC)Reply
I hope that you keep in mind that we are interested in the raw data, listed overall and per language/wiki, and not in only the interpretation of the results. --Niabot 06:13, 2 September 2011 (UTC)Reply
Full ACK to Niabot... the longer I watch the cascading effects of this non-disclosure policy, the more I dislike it. -- Seelefant 06:22, 2 September 2011 (UTC)Reply
We are not able to provide the raw data on the comments, as many of them contain personally identifiable information. You'll get the straight information on the numbers, but we are releasing both sets of information together. There's little point to providing per-project results, as just about everyone was able to vote from whichever project they wished. Risker 06:30, 2 September 2011 (UTC)Reply
Most users will have voted from their main account. It might not be 100% accurate, but it would nevertheless be very interesting. If you want to be culturally neutral, at least the number of votes for each language should be shown. It represents the weight of cultures/languages inside the poll, which needs to be considered.
PS: An improvement for the next poll would be to add the information that no personal information should be posted, with a warning that all the information will be released, for reasons of transparency.
PPS: If you send mails, don't filter them by the "bot" extension. No user with "bot" inside or at the end of his name got a mail. Same problem as for the PDF/Book-Tool, which never mentions me as the author. Bug was reported a year ago, until today nothing changed. --Niabot 06:52, 2 September 2011 (UTC)Reply
Per-project results could be very interesting, and are likely accurate: I expect most people to have one main language that they use. Also, people were specifically directed to vote from their main wiki. If the data is available, please release it. If it turns out to be boring, fine, but if it is interesting, why not use it? Kusma 08:22, 2 September 2011 (UTC)Reply
There's little point to providing per-project results -- Please show me the data and let me decide for myself on that. --213.196.208.163 09:08, 2 September 2011 (UTC)Reply
It would be interesting to at least know the numerical results, before the comments are collated. The impression this extra waiting time creates on a community struck by such a controversial proposal is that a lot of spin-doctoring is being performed in the meanwhile. complainer
Per-language (or per-project) would be VERY interesting data to get on ourselves. Since a filter, if created, would cater to the needs of the people who want it most, all we can learn about that group would be good. Language or project may be one factor that pops out as being important. --AlecMeta 21:21, 2 September 2011 (UTC)Reply


We want the numerical results now - you can read the 20000 comments later. I remember that in the US they sometimes count until the result is acceptable in the candidate's brother's state. Is the legal residence of Wikipedia still in that state? --Eingangskontrolle 18:47, 2 September 2011 (UTC)Reply

We are still waiting for the numerical results - no more tricks please.
It was promised for yesterday.
You can analyze the comments later, perhaps you hire an expert. --Eingangskontrolle 18:51, 2 September 2011 (UTC)Reply

Jeez-- go easy, all. The delays aren't anything bad, they're just people doing their work. Most of all it's NOT vote tampering-- it's good people trying to oversee a neutral election. --AlecMeta 21:23, 2 September 2011 (UTC)Reply
It's also important to remember that the results are audited by an independent third party - there can't be any tampering. Software in the Public Interest wouldn't have it, and I would resign if I thought it were happening. Philippe (WMF) 23:00, 2 September 2011 (UTC)Reply
What you see play out now with people being suspicious of the board, is the board's own fault in how they decided to present this "referendum" after the fact (i.e. after having already decided to go ahead with an image filter and merely ask the community for "input", mostly just to "keep appearances"). Well, here they are, your appearances: People think the board may be lying and (further) manipulating this whole thing. A very plausible thought, all things considered. --84.44.230.38 23:12, 2 September 2011 (UTC)Reply
Philippe, we all know that presenting the figures of x questions by y options is one of the easiest things a computer can do. They were designed for this. There is no reason not to publish these figures immediately after the poll was closed. The pure figures must be a disaster for the board, and now they are searching for positive comments and are having problems finding some. --Eingangskontrolle 09:22, 3 September 2011 (UTC)Reply
As a matter of fact, no. The figures when they are generated by the computer include, for instance, the "I don't have enough information" as a "-1" for the purpose of calculating averages, so all the averages are wrong and have to be recalculated. That is but one of many tasks involved in getting this information out, but it's the most direct example to speak to your argument. Philippe (WMF) 06:12, 4 September 2011 (UTC)Reply
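The recalculation Philippe describes can be illustrated with a short sketch. The vote values below are invented for illustration; the actual tabulation software and ballot data are not public. The point is only that a sentinel value of -1 for "I don't have enough information" skews a naive average downward and must be filtered out before the figures are released:

```python
# Hypothetical illustration: "I don't have enough information" responses
# are stored as -1 among the 0-10 scores, so a naive average over the raw
# column is wrong and the averages must be recomputed without sentinels.
# All vote values here are invented.
raw_votes = [10, 7, -1, 5, 0, -1, 8]  # -1 = "I don't have enough information"

naive_average = sum(raw_votes) / len(raw_votes)  # wrong: counts -1 as a score
valid_votes = [v for v in raw_votes if v != -1]
corrected_average = sum(valid_votes) / len(valid_votes)  # sentinels excluded

print(naive_average)      # 4.0
print(corrected_average)  # 6.0
```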
@Risker: When the timeline was developed, nobody anticipated that there would be over 24,000 votes or almost 7,000 comments – when the timeline was developed, it was apparently scheduled for mid-summer so that the turnout would not be too big. If it's now surprising that the turnout was much bigger than expected, it's obvious that this tactic didn't work and that the community wasn't caught off guard. It is therefore quite likely that if this referendum had been held during university sessions, many more people would have voted, and the board's proposal would probably have been slammed much worse. I think holding the referendum at this time of year was highly manipulative. And I am sure big parts of the community lost all their trust in the board. Indeed the board members should resign as soon as possible, all of them. --Matthiasb 11:03, 3 September 2011 (UTC)Reply
Actually, Matthiasb, this is the largest vote turnout ever in Wikimedia history, with 40% more votes in two weeks than the single-question license-change vote that lasted a month. It is the first time that Wikimedia has held a vote like this, and the closest parallel in complexity has been the election of members of the Board of Trustees, which usually garners around 5000 votes. While it meant quite a bit more work for committee members, I think I can safely say that we are all extremely proud of the Wikimedia community for this highly impressive degree of participation; having this large number of votes is a very good thing just about every way one looks at it. No matter when a vote is held, it will be a "bad time" for some groups and a "good time" for other groups. Risker 06:43, 4 September 2011 (UTC)Reply
I admit it looks eerily like the board is in damage control mode now. Why else wouldn't they instantly release the raw numbers for everyone to evaluate and judge for themselves? Wonder how they're gonna backpedal their way out of this one. You're probably right that it's going to be via emphasizing the comments (which they can select, and still present as "representative") over the raw numbers (which are most likely a bit too actually representative for the board's taste). --87.79.231.29 12:24, 3 September 2011 (UTC)Reply
Actually, the board hasn't seen the results yet either. Seriously, the committee is building graphs as we speak, analyzing a random sample of the comments, and trying to get a giant dataset formatted for the wiki. That is all. -- phoebe | talk 01:41, 4 September 2011 (UTC)Reply
Back-pedaling is easy: it needs nothing more than to say 'sorry, we were wrong' and to accept the vote of the community. --88.102.101.245 13:19, 3 September 2011 (UTC)Reply
It is easy, once they get off their high horse. Consider that we're expecting this from a group of people who obviously think they know better than the community what's good for the project. --213.196.219.153 13:30, 3 September 2011 (UTC)Reply
I think perhaps we are letting our imaginations run amok here. While I have been and remain critical over some aspects of how the referendum questions were presented, there is no reason to suspect anything is amiss in the preparing of the results. It makes much more sense to wait and include the comments with the numerical results, rather than tack them on as an afterthought days later.--Trystan 13:48, 3 September 2011 (UTC)Reply
Does it? The board didn't care much for any of the discussion on this talk page - but they're interested in isolated comments from the community? That doesn't add up. --213.196.219.153 14:39, 3 September 2011 (UTC)Reply
Maybe they're filtering out the socks? — Internoob (Wikt. | Talk | Cont.) 17:11, 3 September 2011 (UTC)Reply
Why, no, they are filtering the comments: after three (soon to be increased) days of frantic barrel-bottom-scraping, they'll come out with something like "100% of female editors from Uzbekistan above the age of 69 (yes, both of them) are for the proposal. Hail filtering, hail king Jimbo!" complainer

estimated schedule?

[edit]

September 1 has come and gone. Could a board member or someone in the know please provide an updated estimated schedule as to when the results will finally be made available to the rest of the community? --195.14.199.31 04:11, 2 September 2011 (UTC)Reply

Updated, release will be on September 3. Please see Image filter referendum/Results/en for further details. (Incidentally, it is still September 1 for a good slice of North America.) Risker 04:17, 2 September 2011 (UTC)Reply
Ah, yes, thanks. Just discovered Image_filter_referendum/en#Timeline. --195.14.199.31 04:19, 2 September 2011 (UTC)Reply
Thanks for the link to the header, I was just going to hunt that down; you didn't have to edit yourself. :-) Risker 04:27, 2 September 2011 (UTC)Reply

Oh, look!

[edit]

"The results are expected some time on 3 September 2011." Bring it already. :/ -- 12.34.666.00 00:15, 4 September 2011 (UTC)


Analysis and next steps

[edit]

What happens if it gets the most votes by today? Would that feature be enabled next week, or is it still in development? -- Mohamed Aden Ighe 14:16, 1 September 2011 (UTC)Reply

It has already been determined that the filter will be enabled. The decision was made by a small group before the community was asked for input. The referendum is not a referendum, and the information provided to voters is hopelessly biased in favour of the proposal's adoption. The board are not interested in the considered opinions of informed and thoughtful members of the community. No one official is even responding to the very sensible concerns and questions raised on this talk page. So I'm afraid your question has no relevance to anything. They'll bring it in whenever they feel like it. Trilobite 05:20, 1 September 2011 (UTC)Reply
Now I get it --Mohamed Aden Ighe 14:21, 1 September 2011 (UTC)Reply
It will not be enabled next week. The tool does not exist. That is the meaning of the sentence "As development of the image hider has not begun yet..." in the information about the referendum here. It will likely be many months before the image filter is enabled. WhatamIdoing 19:28, 1 September 2011 (UTC)Reply
Are you sure? It will not be ready next month, but I am sure the developers were already appointed and are waiting for the official "go". --Eingangskontrolle 21:22, 1 September 2011 (UTC)Reply
Any developer fulfilling such a demand for Wikipedia should feel ashamed. Ze should feel ashamed because of treating the idea of free information and of a free encyclopedia for all like shit. Any developer doing this should be accused of promoting censorship in the world on any available messageboard and forum. --Matthiasb 21:43, 1 September 2011 (UTC)Reply
Various computer-savvy users have suggested that about three months would be a reasonable minimum timeframe. The devs themselves have refused to commit to anything, because apparently not even the design work has been done. It's impossible to produce an accurate timeline until you know what the design is. WhatamIdoing 23:46, 1 September 2011 (UTC)Reply

Avoid post-hoc errors-- how will you analyze and interpret the data?

[edit]

Stating ahead of time how you will interpret the results will lend credibility to your words after the fact, when you stick to your interpretation. E.g. here's one such example. Use it, argue with it, ignore it, make your own. --AlecMeta 21:59, 2 September 2011 (UTC)Reply

The poll does not ask a strict yes-or-no question, and the problem we're going to have is that if there is some minority who think it's important, those favoring the proposal will say that we need to "outreach" to them. So if 15% vote 10 for the first question, they'll say "we can't afford to lose 15% of the readership". Of course, that ignores that the 15% were already on Wikipedia to read the vote in the first place. And that there are lots of things the developers could work on that would get 90% or better approval.
I would say that this vote is a vote on priorities, and priorities mean that we are comparing it to other things. The problem is, those other things haven't been voted on - we can just assume that bugs and features with a "consensus" behind them are somewhere >50% approval. So I would say that if the median vote for the first question, the 50th percentile, is 5 or below, then we have a clear indication that Wikipedians want all consensus-supported developer priorities to come before this, which effectively kills it. I leave room for fudging if the 5's turned out to be only the 40th or 45th percentile, based on what "consensus" really means in the other cases, but I doubt we'll have to look into that possibility. Wnt 17:14, 3 September 2011 (UTC)Reply
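Wnt's proposed decision rule above can be sketched concretely. The vote distribution below is invented for illustration; the real ballot data is not public. The rule: if the median (50th percentile) of the 0-10 first-question votes is 5 or below, the community is saying consensus-supported priorities come first:

```python
# Sketch of the decision rule proposed above, on invented sample data.
def median(votes):
    s = sorted(votes)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def percentile_rank(votes, value):
    """Fraction of votes at or below `value`."""
    return sum(1 for v in votes if v <= value) / len(votes)

sample = [0, 0, 2, 4, 5, 5, 6, 9, 10, 10, 10]  # hypothetical 0-10 scores
print(median(sample))                         # 5 -> rule says: kill the proposal
print(round(percentile_rank(sample, 5), 3))   # 0.545 -> 5s sit past the 50th percentile
```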
The poll does not ask a strict yes-or-no question -- True, the poll intentionally does not offer the community a straightforward way to express their opinions/moods/wishes on the idea of creating an image filter in the first place, but a neutral observer should nevertheless be able to roughly gauge the community's will from the poll and from the comments. --213.196.219.153 18:25, 3 September 2011 (UTC)Reply
There is substantial risk that the poll's interpretation will be contested after the fact. After all it's a data point, not the Jade Emperor. As I said above, I anticipate that supporters of the image hiding will try to use even a bad result for them to advance their position. And to be fair, I wouldn't see a bad result for my side as a reason to give up - such a result would only indicate support for such a feature, but not necessarily for the system where all Wikipedia editors are somehow supposed to agree on categorization; I'd still suggest my alternative ideas. Mere support for offering the image hiding wouldn't necessarily mean support for a specific version as developed (especially if it strayed from the original opt-in principle), nor its implementation in every language of Wikipedia against local consensus, nor editors being forced to rate all the images. I would hope the other side, likewise, would take a negative result as a chance to plan a different image hiding scheme for their people which actually works, whether it be by personal javascripts and lists, mirror sites, or some kind of independent open source cybersitter project. However the poll ends, this will be a telling moment when the opposing forces get a glimpse of one another's strengths, but not really the end of the battle. Wnt 21:11, 3 September 2011 (UTC)Reply
Incidentally, opposition against a category-supported image filter is a battle against the battles which are bound to ensue the very instant such categories are created. A preprogrammed disaster, and an easily avoidable one. It's sad that the board and committee didn't do their homework, and won't let us help them with it either.
Supporters of a category-based filter setup have already begun showing their true colors though. When confronted with the simple fact that "10-15" categories couldn't possibly be enough to categorize the myriad distinct types of objectionable images, they are now consistently replying that "minorities are out of luck". In other words, they admit that the image filter is preplanned as simply a "decency" filter, with violence added as a filter category for greater mass appeal. It was never about giving all people the option to filter exactly the images they want filtered.
The saddest bit about this though is not just all the infighting that will happen with any filter category system, it's a wasted opportunity as well. A truly neutral, user-specific image filter is possible, and it's very revealing to see how the most aggressive supporters of the image filter are actually all supporters of an image filter based on specific filter categories. And the board appear to be made up entirely of people like that, otherwise the referendum questions might have been actually neutral, not mere attempts at presenting a fait accompli. --213.196.219.153 22:19, 3 September 2011 (UTC)Reply
Well, there are people saying things like this rather bitterly all over the talk pages, but we should remember one key detail: the filter as proposed with the 10-15 categories will not actually work. While I still would hope to avoid it, the debacle of trying to implement it should provide useful instruction to those who need it. Wnt 08:14, 5 September 2011 (UTC)Reply
Nothing is perfect; nothing could please the whole world. Your general image filter is not perfect either (even if you insist, it's not gonna please most people who demand the image filter, or you merely created that idea to please the opposition to the WMF proposal). The pre-categorized filter and user-defined filter can technically coexist without excluding each other. More options is always better. Arguing the consequence is just a waste of time; you cannot suppress discrimination by merely denouncing the filter proposal here. It's the nature of every being to reject/discriminate against things they dislike, just as you're rejecting the proposal right here. But denying this fact does not lead you anywhere. More importantly, Wikimedia is not the ground to advocate anti-discrimination; if you cannot change that in the real world, it's not gonna work on the internet either. -- Sameboat (talk) 09:02, 5 September 2011 (UTC)Reply
Oh sod off. You haven't understood a thing, nor displayed any willingness to do so. I'm done trying to educate you. --213.168.110.95 00:00, 6 September 2011 (UTC)Reply

Results now released

[edit]

The preliminary report is now available at Image filter referendum/Results/en. Risker 02:03, 4 September 2011 (UTC)Reply

"Personal" Image Filter Referendum Committee? The image filter version offered by the board is the least "personal" of all proposals I've seen. It's basically a decency filter, so why not accurately call your fine selves the Decency Image Filter Push-Through Committee? At least that would be intellectually honest. --84.44.230.101 02:22, 4 September 2011 (UTC)Reply
The proposal is for a filter that operates only based on the personal choice of the individual reader, hence the use of that term. And the committee did nothing to "push-through" anything; the results are the results, based on the aggregate perspective of 24,000 wikimedians. Risker 02:28, 4 September 2011 (UTC)Reply
The proposal was presented as a filter that operates only based on the personal choice of the individual reader, and as such was accepted by the WMF Board of Trustees. It then weaseled its way into something "parents and teachers" could use to choose instead of the individual reader. Allow us to doubt that most voters have understood, or even smelled the distinction. As one of the most ferocious opposers of this proposal, I have no problem admitting that, were the implementation actually made so that the choice were personal, I would have very few, and rather minor, objections. complainer
Can you describe what you see as the shift to something "parents and teachers" can use to choose? Or changes you would make to ensure the choice was personal? SJ talk | translate   02:20, 6 September 2011 (UTC)Reply
Well, to be strictly technical, the only way I can see this being personal would be on an https connection: if the choice of tags to be excluded is sent to wikipedia encrypted, and the content is, likewise, sent back encrypted, there is no way for a man in the middle, i.e. for somebody controlling the proxy or firewall to interfere with one's choice. This would be personal. But the moment people started screaming about the children, it became rather clear that the purpose of the filter is far from personal choice. What the heck, I'm tired of this. The fundamentalists, the wishful thinkers who think they can help the third world by perpetuating its problems, the politically correct and the devotees of Jimbo have won (three millions, to be precise). Congratulations to them, until google chews them up. complainer
Good job with the spin. You almost manage to make it look as though this was an uncontroversial "feature" that need vaguely-defined tweaks rather than the imposition of an infrastructure towards the destruction of our projects. Particular note must be taken of the skill with which the debate was stifled by begging the question of whether the idea was beneficial at all, of making certain that people who object to the feature at all had no meaningful way of expressing it, and of how the entire faux-debate was framed by focusing only on how it can be of limited use to a few people while skipping over entirely how that "feature" will be misused. — Coren (talk) / (en-wiki) 18:52, 4 September 2011 (UTC)Reply
Oh, and let's not forget to mention the millions of readers who were disenfranchised entirely. Remember them? The victims of that "feature"? — Coren (talk) / (en-wiki) 18:54, 4 September 2011 (UTC)Reply
I think that, before the survey actually started, the proposal was seen as "an uncontroversial 'feature' that need vaguely-defined tweaks". It has, I hope, become clear during the process that this is not the case (and once again I draw everyone's attention to the fact that the report's recommendation was carefully targeted at the community, rather than the board in this respect). I see no reason to attack the board, much less the committee or other functionaries, that distracts from the actual message that the whole issue needs a careful re-think. Rich Farmbrough 19:54 4 September 2011 (GMT).
I agree, but I don't share your optimism that the careful re-think is likely to occur; this is why the tone of the result page worries me. I read it twice and the feeling I get isn't "Oh, this is nowhere near as simple as we imagined because the issue is considerably more divisive than we thought it might be", it's "Look how we can make our poorly thought out survey appear supportive if you look at it from the right angles." — Coren (talk) / (en-wiki) 20:57, 4 September 2011 (UTC)Reply
Thanks for noting that Rich, and for your commentary - which I generally agree with, here and on the other discussion pages. One reason for the suddenly-polarized discussion now is that we may have reached 2,000 community members in the discussions last year, but reached 100x that this year with the referendum outreach, many of whom had never heard of the process, history, or discussions leading up to it. Coren, your suggestions are more helpful than your cynicism, but I'm glad to hear both. There have been discussions elsewhere about ways to enfranchise readers. I wouldn't mind seeing a similar set of data from readers who have never edited (a cross section of a few hundred or a thousand would suffice!) SJ talk | translate   02:20, 6 September 2011 (UTC)Reply
  • So, basically, that's it? The filter variant put forward in the poll will be instated without the board ever even acknowledging the many different concerns and possible alternatives pointed out on this page? --213.196.213.66 03:32, 5 September 2011 (UTC)Reply
    No conclusions have been drawn yet from the poll. It is unlikely that anything will happen without acknowledgment and further discussion of the concerns and alternatives discussed here. SJ talk | translate   02:20, 6 September 2011 (UTC)Reply

The technical aspect is just the first 1%. The next step will be the creation of a classification system with hundreds of different parameters for each image (e.g. women without head veils, women being allowed to drive cars, anything homosexuality-related, etc.) to classify it per the hundreds of different definitions of decency. Then comes getting somebody to classify each image according to each of those hundreds of parameters. 21:16, 4 November 2011 (UTC)

Please fix write protect on "Ask someone to do it for you" page

[edit]

Hello there,

Having translated Image filter referendum/Results/en to de, I complied with the instruction reading "ask someone to do it for you" here, as I had no time to dig into the electric workings behind the scenes (cite instruction:)

3. When you're completely done translating a page, add "done" after the '=' for that page (or ask someone to do it for you)

but got

Permission error
You do not have permission to edit this page, for the following reason:
This page has been protected to prevent editing.
Return to Talk:Image filter referendum/Translation.

I would like suggest making it possible (again) to comply with above instruction unimpeded.

Could somebody please fix this?

P.S.: I am aware that help is needed to formally finalize the translation as I had no time to dig into feeding the macros right (I noticed that the table cell indicating color did not switch to green, possibly as a consequence). I might have added that on above page if only I could.

Thanks, --217.81.168.69 14:24, 10 September 2011 (UTC)Reply

Data security of the votes

[edit]

The content pages tells:

>>The referendum began on 15 August 2011 and ended on 30 August at 23:59 (UTC), and is conducted on servers hosted by Software in the Public Interest.<<

I read now that on 23 August 2011, Risker said:

>>vote administrators have access to IP data<<

I don't recall whether during voting I read any disclaimer that the IP of my vote would be visible to anyone.

  • Who are the "vote administrator"?
  • Can I see a list of all vote administrators, or of all who have access to IP data or raw data of referendum votes?
  • Are the vote administrators among the administrators on the meta.wikimedia.org-wiki or other Wikimedia projects, or do the vote administrators belong to the staff at Software in the Public Interest or to the staff of the Wikimedia Foundation?
  • Within the raw data of the vote, is the IP of votes connected with the username and with the individual submitted content, that is, with the scale figures and the text comment of the submitted vote?
  • Who has access to the raw data at Software in the Public Interest or at Wikimedia?
  • Will the raw data of the votes be physically deleted?
  • When will the raw data of the votes be physically deleted?

I demand herewith that the IP and user name of my vote shall be physically deleted immediately, --Rosenkohl 12:19, 12 September 2011 (UTC)Reply

Rosenkohl, have you read the privacy policy? It says that everyone's IP addresses are tracked and that they store cookies. Both of these statements are equally true for the poll. WhatamIdoing 23:14, 12 September 2011 (UTC)Reply
Hello, yes I have read the Wikimedia privacy policy. I don't see that the Wikimedia privacy policy is relevant for voting, since the vote is not run by Wikimedia or inside Wikimedia, but on the external servers of Software in the Public Interest. Also, I don't remember that the Wikimedia privacy policy had been mentioned or linked on the voting webpage.
Even Meta:Privacy_policy#Purpose_of_the_collection_of_private_information does not allow someone to examine submitted private raw vote data. The privacy policy reads:
>>The Foundation limits the collection of personally identifiable user data to purposes which serve the well-being of its projects<<
Without a concrete suspicion of abuse against a particular user, the privacy policy does not allow access to connection data. So the privacy policy does not allow "vote administrators" or anyone else to look regularly, without suspicion, into each submitted vote including connection data and username. Even submitting multiple votes under different usernames but with similar connection data does not necessarily constitute abuse, since Image_filter_referendum#Eligibility does not explicitly say that this would be abuse. Besides, users with multiple eligible accounts have received multiple e-mail invitations to vote, and could easily have been misled into voting multiple times. Even if votes are checked by a vote administrator for having the same connection data, then the usernames and voting content should not be visible to the vote administrator.
So I don't see that any of the questions from 12:19, 12 September 2011 (UTC) is answered by referring to the Wikimedia privacy policy, greeting --Rosenkohl 08:34, 13 September 2011 (UTC)Reply
  • There was a link labeled "Privacy policy" at the very bottom of the first poll page. I'm sorry that it didn't occur to you to look for it.
  • The WMF privacy policy does not require "a concrete suspicion of abuse against a particular user". It names three reasons for using your IP address. For example, "the well-being of its projects" requires routine network administration, which sometimes means that a human looks through logs containing that information, or uses a section of the logs to figure out the details of a problem. This is a routine, normal part of website administration. The odds that such behavior would happen to show your particular information are less than one in a million, but if the idea that a human might look at your activity really worries you, then you will have to stop using the Internet altogether. Every well-managed website does this.
  • That someone might have been misled into voting multiple times does not mean that the committee is obligated to count their multiple votes. It is reasonable for them to identify and remove overvotes, even if the overvotes were made in good faith or by accident. The privacy policy specifically names identifying "duplicate accounts" as a legitimate use of personally identifying information like your IP address.
  • Finally, if nobody in the entire world is permitted to look at raw voting data, then how do you expect them to ensure the integrity of the election? Do you expect your local government to close their eyes when they're reading ballots, so that none of the election officials see the "raw voting data" that is on the ballots? Someone needs to have access to raw voting data. In this instance, the WMF decided that the "someone" would not be a WMF employee. WhatamIdoing 17:37, 13 September 2011 (UTC)Reply
Answers

You asked the following:

  • Who are the "vote administrator"?
The vote administrators are the election committee, as listed at Image_filter_referendum/Committee/en. In addition, I was a vote administrator.
  • Can I see a list of all vote administrators, or of all who have access to IP data or raw data of referendum votes?
See above. There are three additional people who had access: two are on staff at the Wikimedia Foundation (Peter Gehres and James Alexander) who helped with technical aspects of vote checking and the third is the election auditor, Michael from Software in the Public Interest. All of these people had access to IP data, but none had access to data connecting IP to votes.
  • Are the vote administrators among the administrators on the meta.wikimedia.org wiki or other Wikimedia projects, or do the vote administrators belong to the staff of Software in the Public Interest or to the staff of the Wikimedia Foundation?
They are a mix of stewards, staff, and volunteer editors. All are extremely trusted community members, and are identified to the Wikimedia Foundation in keeping with the policy on access to non-public data.
  • Within the raw data of the vote, is the IP of a vote connected with the username and with the individually submitted content, that is, with the scale figures and the text comment of the submitted vote?
The only place the IP and username and votes are connected is within the underlying data tables, which (to the best of my knowledge) can only be accessed by Michael from SPI.
  • Who has access to the raw data at Software in the Public Interest or at Wikimedia?
If by "raw data" you mean access to the tables that connect the votes to the IPs or usernames, only the election's auditor, Michael from Software in the Public Interest.
  • Will the raw data of the votes be physically deleted?
Yes.
  • When will the raw data of the votes be physically deleted?
Once the requested data extracts (which are fully anonymized) have been aggregated and released.

I hope that answers your questions. If not, you may contact me directly by email (philippe(_AT_)wikimedia.org) or ask here. Philippe (WMF) 08:49, 13 September 2011 (UTC)Reply

Thanks for your answers, Philippe (WMF) and WhatamIdoing. Your answers to the questions from 23:14, 12 September 2011 (UTC) are in fact very helpful for me to understand what is happening with the voting data. I also think it would have contributed to the transparency of this referendum if the content page had contained this information beforehand (who the vote administrators are, who has access to the raw data, who can see the complete raw data table, when the raw data will be deleted).

WhatamIdoing, I'm not sure which page the "first poll page" was. I seem to remember that I submitted my vote on an external website, but I have no screenshot of this.

I don't object to routine network administration, or to someone having to look at logs to fix a technical problem, but I do object to routine inspections of private connection data.

I'm afraid you quote the privacy policy out of context there, which reads:

>>For example, when investigating abuse on a project, including the suspected use of malicious “sockpuppets” (duplicate accounts), vandalism, harassment of other users, or disruptive behavior, the IP addresses of users (derived either from those logs or from records in the database) may be used to identify the source(s) of the abusive behavior.<<

So "duplicate accounts" here means the "use of malicious sockpuppets", but not voting multiple times in good faith or by accident.

I don't want the committee to count multiple votes, but I also don't want them to look at complete data tables combining connection data, username, and vote content.

General political elections and votes are traditionally secret, with anonymous paper ballots, so it is impossible to connect a vote with a voter afterwards. Please note that I did not demand that "nobody in the entire world is permitted to look at raw voting data". But to identify multiple votes it would be sufficient to:

  • let a computer check the votes for identical IPs and let the computer remove these votes, keeping only, say, the first one
  • or check for user accounts which identify themselves as the same person on Wikimedia projects, and remove these votes except the first one; but then do so without looking at the IP.
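The first of these approaches is mechanical enough to sketch. The following is only an illustration of the idea, not how the election was actually run; the record fields (`ip`, `timestamp`, `ballot`) are hypothetical names chosen for the example:

```python
from datetime import datetime

def deduplicate_by_ip(votes):
    """Keep only the earliest vote for each IP, as described above.

    No human needs to read the IPs: the comparison is done entirely
    by the program, and only the surviving ballots are returned.
    """
    seen_ips = set()
    kept = []
    # Process votes in chronological order so the first vote wins.
    for vote in sorted(votes, key=lambda v: v["timestamp"]):
        if vote["ip"] not in seen_ips:
            seen_ips.add(vote["ip"])
            kept.append(vote)
    return kept

votes = [
    {"ip": "198.51.100.7", "timestamp": datetime(2011, 8, 16, 9, 0), "ballot": "A"},
    {"ip": "198.51.100.7", "timestamp": datetime(2011, 8, 17, 9, 0), "ballot": "B"},
    {"ip": "203.0.113.4",  "timestamp": datetime(2011, 8, 16, 12, 0), "ballot": "C"},
]
print([v["ballot"] for v in deduplicate_by_ip(votes)])  # ['A', 'C']
```

Note that this policy silently discards later votes from shared IPs (schools, NAT gateways), which is one reason real elections combine several signals rather than relying on IP alone.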

Greetings --Rosenkohl 22:29, 13 September 2011 (UTC)Reply

The election administrators did not have access to the votes in connection with the IPs or Usernames. So when looking for multiple votes, they could not consider that data point. They looked at IP, useragent, and behavioral patterns. Philippe (WMF) 05:58, 14 September 2011 (UTC)Reply

Hello Philippe (WMF), I understood from your answer of 08:49, 13 September 2011 (UTC) that Michael from Software in the Public Interest is the only person who has access to the complete raw vote data table, and that he is not among the vote administrators. Although I'm not entirely sure, I assume that the "election administrators" are the same persons as the "vote administrators". Above I wrote that I "don't want them to look at complete data tables with connection data plus username plus vote content" in response to WhatamIdoing, who seemed to insinuate that I wanted the committee to count multiple votes.

I now understand that the election (or "vote") administrators have access to the IP, useragent, and behavioral patterns to detect possible multiple voting, but without having access to usernames and vote content.

What I don't understand is what is meant by "behavioral patterns". The only "behavioral pattern" of a vote I can think of is the timestamp at which the vote was submitted, but I don't see how the timestamps of the votes could help the election administrators detect multiple votes.

Greetings, --Rosenkohl 10:11, 15 September 2011 (UTC)Reply

New image filter considered by WMF

[edit]

mw:Core Platform Team/Initiatives/Hash Checking. Nemo 18:27, 3 March 2020 (UTC)Reply

@Nemo bis: That document is written very vaguely, but I don't think it's referring to an image filter. Rather, it appears to be about helping patrol image uploads for certain categories of illegal content. --Yair rand (talk) 19:14, 3 March 2020 (UTC)Reply
It's a different kind of filter, sure; what we nowadays call an upload filter. There are infinite shades of filtering, and apparently WMF is now trying to pick one: phabricator:T245595. Every single case of people trying to use this stuff has turned out to be a disaster for human rights, but sure, why not. Nemo 22:59, 3 March 2020 (UTC)Reply