Talk:Image filter referendum/en/Alternatives and design


Alternative ideas

All images on or off

Any kind of filtering system seems to me a waste of project resources and a morass. Instead, why not simply give readers the ability to turn all images on or off by default? And perhaps the ability to turn any given image back on, based on the reader's own evaluation of the caption and the article's subject. No tagging needed. No discussions required. Much easier to implement. Culturally neutral. Simple. Barte 11:21, 16 August 2011 (UTC)[reply]

That is already included in every browser's settings! In Firefox it is Tools > Preferences > Content > remove the check mark from "Load images automatically". Done! So an easy solution is already there. --Dia^ 13:02, 16 August 2011 (UTC)[reply]
That's not the point. If we're considering a quick "hide or show" system, this has to be done on the Wikimedia projects, not by the browser. Since you're probably browsing different websites simultaneously, you may want or need one specific website to hide its image content while you want another site to keep its images visible. Cherry 14:09, 16 August 2011 (UTC)[reply]
Please choose another browser. Opera has such functionality built in, and Firefox likely has either a plugin or a Greasemonkey script somewhere. For IE and Chrome there will also very likely be extensions for this purpose (site-specific preferences, plugins, addons and similar features handle these!) Mabdul 20:22, 17 August 2011 (UTC)[reply]
"Please choose another browser"?!! Either this image filter is the reader's issue: he wants it, he gets it by himself, and this vote is pointless, because this is not Wikimedia's business. Or this is a Wikimedia issue, and then there is no way you can say something like "choose a decent browser". Accessibility of our content is our problem. Cherry 10:17, 18 August 2011 (UTC)[reply]
I could support this. — Internoob (Wikt. | Talk | Cont.) 22:48, 17 August 2011 (UTC)[reply]
This is what I thought of while reading here. Not browser-based, but site-based. But how do you support this for readers who are not logged in? Is that possible (tech ignorant here)? Or would "IP reader" soft image blocking (i.e. it can be overcome with a series of clicks) be good for schools, etc.? (mercurywoodrose) 76.232.10.199 07:36, 19 August 2011 (UTC)[reply]
Yikes! Filtering images and any other content is a reader issue, not a Wikimedia issue. Individual readers (or their system/network administrators) can and must determine whether and how to filter images (and other content) they or their user groups access, block, or selectively filter. The associated qualitative issues and practical how-to's are bigger than and not specific to Wikimedia. Accordingly, for Wikimedia to involve itself in this morass (to use Barte's apt term) would be inappropriate, off-mission, controversial, and a waste of project resources. Froid 12:52, 19 August 2011 (UTC)[reply]
I agree with the off-mission sentiments partly, but this proposal is still better than the POV garbage of the original proposal, which would certainly be an even bigger waste of project resources. If we have to implement something, implement it this way. — Internoob (Wikt. | Talk | Cont.) 19:47, 19 August 2011 (UTC)[reply]
That's my take as well. I think a simple on/off option gets Wikimedia out of the morass of trying to selectively filter content. Of course, that would also be true if the project simply posted some instructions on how to turn images off using different browsers. But the former approach, as I imagine it, would be more convenient for readers, while still keeping the project out of the "morass" of selective filtering. It's the selective part of filtering that I think will cause the project grief. Barte 13:51, 20 August 2011 (UTC)[reply]
Support, regardless of what's going to happen with the 5-10-category filter. Helpful as well in the case of animations, which can't be stopped by users who aren't familiar with changing browser settings. Hæggis 20:57, 23 August 2011 (UTC)[reply]
+1 SJ talk | translate   02:12, 6 September 2011 (UTC)[reply]
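
For concreteness, a site-level "all images off" toggle of the kind discussed in this thread could live entirely in user-side JavaScript (a gadget or user script). The sketch below is only an illustration under that assumption, not an existing gadget; the placeholder image, the selector and the click-to-reveal wiring are all made up for the example:

```typescript
// Hypothetical sketch of an "all images off" toggle as a user script or gadget.
// It swaps every content image for a blank placeholder and lets the reader turn
// any single image back on by clicking it, based on caption and context.

const PLACEHOLDER =
  'data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7';

function hideAllImages(): void {
  document.querySelectorAll<HTMLImageElement>('#content img').forEach(img => {
    if (img.dataset.originalSrc) return;   // already hidden
    img.dataset.originalSrc = img.src;     // remember the real image
    img.removeAttribute('srcset');
    img.src = PLACEHOLDER;                 // show a blank placeholder instead
    img.title = 'Image hidden by your preference (click to show)';
    img.addEventListener('click', () => {
      img.src = img.dataset.originalSrc as string;  // per-image override
    }, { once: true });
  });
}

hideAllImages();
```

How the preference would persist for readers who are not logged in (localStorage, a cookie, or not at all) is exactly the open question raised above and is deliberately left out of the sketch.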

Create an API for 3rd-Party Websites

Wikimedia could uphold its commitment to openness by keeping the main WP sites as they are — free of image-hiding — and creating a software API that would enable independent groups to create their own sites which mirror WP's content while allowing them to implement their own display schemes. For example, a group of volunteer parents concerned about their kids seeing disturbing images could create a site called "ChildSafePedia.org" with tools that enable crowd-sourced tagging, rating, and optional hiding of images, all without affecting the official WP site. The API's Terms of Use could include provisions to prevent commercialization and require display of a disclaimer clarifying that the 3rd-party sites are independent from WP. This would allow access to most of the WP content for groups that might otherwise block the site entirely. --KnowWell 18:10, 20 August 2011 (UTC)[reply]
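
For what it's worth, much of the plumbing for such mirrors already exists: the standard MediaWiki action API exposes page content and the list of files a page uses. The sketch below only illustrates that starting point; the page title and the display-policy function are hypothetical stand-ins for whatever a third-party site would build on top:

```typescript
// Sketch: a third-party mirror could pull the list of files used on a page via the
// existing MediaWiki action API and then apply its own display policy to them.

const API = 'https://en.wikipedia.org/w/api.php';

async function filesUsedOnPage(title: string): Promise<string[]> {
  const url = `${API}?action=query&prop=images&imlimit=max&format=json&origin=*` +
              `&titles=${encodeURIComponent(title)}`;
  const data = await (await fetch(url)).json();
  const pages = data.query.pages;
  const page = pages[Object.keys(pages)[0]];
  return (page.images ?? []).map((f: { title: string }) => f.title);
}

// Hypothetical third-party policy, e.g. "ChildSafePedia" plugging in its own
// crowd-sourced ratings here. The official site is untouched by any of this.
function allowFile(fileTitle: string): boolean {
  return true;
}

filesUsedOnPage('Spider').then(files =>
  files.filter(allowFile).forEach(f => console.log('would display', f)));
```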

This might be more complex than we need just for this task-- but we do desperately need to open up and start working with 3rd party sites to let them interface with us-- take our content, modify it on the fly, etc. "wiki" is the next "http". --AlecMeta 23:59, 23 August 2011 (UTC)[reply]

Greasemonkey Off-The-Shelf

User:87.79.214.168 points out this NonCommercial Off-The-Shelf (NCOTS ;-) ) Greasemonkey script which covers most of the requirements as it stands: [1]. Instead of making a lot of hoopla, could we just point users who wish to hide images to this script? Does this cover all the requirements we would like to meet? If not, which requirements still need to be covered? --Kim Bruning 13:57, 21 August 2011 (UTC) I'm all for *OTS. Time and money savings are a Good Thing[reply]

There's no guarantee any 3rd-party script will be safe. Pointing our users to such a script is very irresponsible if the user's computer gets infected with viruses or spyware through these scripts. -- Sameboat (talk) 14:31, 21 August 2011 (UTC)[reply]
Welcome to the wonderful world of open source. We could convince the author to release the code under an open source license (currently (s)he has released the source, but not provided a license afaict; it seems likely they wouldn't mind an F/L/OSS license). Worst case, we can then check the code, digitally sign it (to ensure none of your creepy crawlies get in), and release it ourselves.
Alternately, we can just ask the author to do the same. --Kim Bruning 15:14, 21 August 2011 (UTC)[reply]
Oh yeah, so we have a proposed one right here. -- Sameboat (talk) 04:24, 22 August 2011 (UTC)[reply]
So you are forcing people to burden themselves when it could easily be implemented at mass scale? The mere presence of such a script undermines all of the "censorship" claims and not the other way around. The script's existence is proof that it is right for Wikipedia to implement a large-scale option so people can do it more simply. Ottava Rima (talk) 16:09, 21 August 2011 (UTC)[reply]
If the image filter is a "click here to see this image" on all images, I am not opposed to it. However, if it involves any categorization of images, it will be used for evil by third parties. The good thing about a "click here to see this image" filter is that it will be so annoying that nobody will want to use it. Kusma 16:21, 21 August 2011 (UTC)[reply]
Evil? We already categorize images. We have categories that blatantly label porn and violent images. Claiming that giving people the ability to filter out images so they can edit a page without worrying about them is somehow "evil" is really inappropriate. Ottava Rima (talk) 16:23, 21 August 2011 (UTC)[reply]
Right, we categorize all images in a manner intended to simply describe them. We don't single out "objectionable" ones (a non-neutral value judgement) and create special categories just for them. —David Levy 17:05, 21 August 2011 (UTC)[reply]
That is a rather silly statement. You claim that we categorize things simply, but then claim we shouldn't create "special categories". The first part would imply the second part was impossible. We already have categories for porn. The filter would block them for users wanting to block them. We aren't creating new categories. It would be impossible for anyone to not know this or realize this, so please stop trying to confuse people by putting up misleading and contradictory claims with a tone of "I'm right and you are wrong no matter what either one of us says". It is really inappropriate, especially when you lack a real argument. Ottava Rima (talk) 18:54, 21 August 2011 (UTC)[reply]
[Image caption: Some sweets, yesterday. Currently categorised as "Violent" but not as "Nude".]
We may have categories for porn, but not on every one of the 777+ projects; moreover, these categories are not what is required. What was considered pornographic in some places in the seventeenth century might not get a second glance now. We would need a whole new ontology, [category:Nude images class 1] .. through to [category:Nude images class 5] for example, with some images getting multiple categories (or tags, or whatever mechanism is used).
And certainly some images such as File:Woundedsoldiervietnamwar1968.jpg are not categorised as either a "medical image" or a "violent image", whereas the image on the right is categorised as violent. Rich Farmbrough 19:22 21 August 2011 (GMT).
Once again, you've quoted someone out of context. I wrote that we don't create special categories for images deemed objectionable. Regardless of whether pre-existing categories could be used (and Rich has explained why that isn't entirely feasible), the plan requires whatever categories are used (whether old or new) to be formally designated "potentially objectionable" or similar.
If you're under the impression that the idea is simply to enable readers to filter any of our current image categories, you're mistaken. The proposed implementation absolutely involves the creation of new categories (whether brand new or categories of "potentially objectionable" categories). —David Levy 19:48, 21 August 2011 (UTC)[reply]
It would also affect how existing categories are used. If we are tagging an image that happens to feature, in the background, a woman in a bikini, a same-sex couple, or a person wearing a sacrilegious T-shirt, we wouldn't likely take that into account. But under this initiative, we need to make sure that any of those depictions are properly tagged so as to not offend those who wish to avoid them. I can picture the warnings on the upload screen: "Please make sure all pictures of women are properly categorized according to immodesty of dress."--Trystan 20:12, 21 August 2011 (UTC)[reply]
Your comment assumes that we don't already require images to be properly categorized, or that it is somehow difficult to add images to a category based on another category they are in. I don't know how you came to either conclusion, especially when we have done so quite easily in the past. Ottava Rima (talk) 20:20, 21 August 2011 (UTC)[reply]
I don't believe I make either assumption. We do properly categorize pictures, but this does not include categories like "Any depiction of homosexuality," "Female midriffs", or "Incidental blasphemy." Yet these are all things which people are likely to be offended by. Neutral description of an image's subject is very different from content warnings for filtering.--Trystan 20:30, 21 August 2011 (UTC)[reply]
You have no proof of that. And even if they -were- offended by such things, we do have related categories. Bare midriffs, as one example. Ottava Rima (talk) 20:41, 21 August 2011 (UTC)[reply]
[Image caption: This would be censored if you didn't like midriffs.]
We may have a category for everything (sorta kinda), but these are for pictures of bare midriffs. If we pollute them with pictures that merely contain bare midriffs (actually it's called [Category:Crop tops]; see the example to the right, with no bare midriff) we wreck the category system: [Category:Sky] would have to include any picture containing sky, and so on. Rich Farmbrough 22:42 21 August 2011 (GMT).
Exactly. I'm trying to get at the fundamental difference between a functional category, like Crop Tops, and a warning label, like Bare Midriffs. As for not having evidence of what people are offended by, Ottava, I couldn't agree more. We don't have any good evidence about what people are offended by, how many are offended, or how offended they are. So how could we possibly implement a system to identify it for them?--Trystan 22:49, 21 August 2011 (UTC)[reply]

Reusing the current category system wouldn't work. Just take a look at the categories, for example the category Violence. First, it has many subcategories (with many subcategories of their own) which are only indirectly related to violence; that means that including all subcategories as a filter criterion would give wrong results. If you take a look at the pictures, you will soon notice that many of them don't show actual violence: they depict monuments, protests against violence, portraits of people involved, and so on. In fact this means that we would need to create separate categories, look at every individual file and conclude/discuss which category it belongs in. This would be an enormous effort, with tens of thousands of hours spent that could have been used to increase the quality and quantity of the current content. --Niabot 01:29, 22 August 2011 (UTC)[reply]
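
To make the subcategory problem concrete: "a category plus everything below it" can be collected with the existing API in a few lines, and the resulting set quickly sweeps in topics that are only indirectly related. The category name and depth limit below are chosen purely for illustration, and continuation handling is omitted:

```typescript
// Sketch: recursively collecting the subcategories of Commons' "Category:Violence".
// A filter defined as "this category and all of its subcategories" would cover all
// of these, including subcategories about memorials, protests and portraits.

const COMMONS_API = 'https://commons.wikimedia.org/w/api.php';

async function subcategories(cat: string): Promise<string[]> {
  const url = `${COMMONS_API}?action=query&list=categorymembers&cmtype=subcat` +
              `&cmlimit=max&format=json&origin=*&cmtitle=${encodeURIComponent(cat)}`;
  const data = await (await fetch(url)).json();
  return data.query.categorymembers.map((m: { title: string }) => m.title);
}

async function collect(cat: string, depth: number,
                       seen: Set<string> = new Set()): Promise<Set<string>> {
  if (depth < 0 || seen.has(cat)) return seen;
  seen.add(cat);
  for (const sub of await subcategories(cat)) {
    await collect(sub, depth - 1, seen);
  }
  return seen;
}

collect('Category:Violence', 2).then(cats =>
  console.log(cats.size, 'categories would be swept into the filter'));
```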

3rd parties can trivially expand the scope

The software side of this is fairly simple to implement, with a prototype working in a couple of days at worst. The issue is then "merely" (because it's a lot of work for the community) to assign categories for the software to work with. Done, perfect.

Here comes a massive application of en:WP:BEANS, or rather it will be WP:BEANS once our own innocent filter is completed:

Now, as it turns out, our database is downloadable, and our category system is readable by third parties. A third party can write software that either looks at our database or parses our pages, and can figure out what categories the images are in. They can then add those categories directly to an actual censoring proxy or filter. This will only take a couple of days too. *Really* only a couple of days, since our fine volunteers have already generated all the data this third party needs. No sweat.

You know, while we're at it, images are usually linked to from pages. It's fairly easy to figure out what pages use a certain image (Commons actually lists those pages!). To be on the safe side, said third party can probably block all the Wikipedia pages that link to a Bad Image. In their mind, very little will be lost, and a lot of Bad Content can be blocked in that way.

Now, we can take that a step further. Since we now know Bad Pages, we can derive Bad Links, and Bad Topics.

Bad Links are links leaving from a Bad Page. A third party can probably decide to censor anything on the far end of a Bad Link, possibly out to a link depth of about 3 or 4, without losing anything that they consider "of value".

Bad Topics can be derived from page titles. If a Bad Topic shows up in the body-text of a search result, our hypothetical 3rd party filter-writer can probably "safely" block that too.

And hey presto, a perfectly censored internet. Due to the crowd-sourced nature of wikis, this method of censorship would be more reactive and more current than any other. I think the idea might be worth quite a bit of money. And the really great thing? The actual code for this can be written in a week or two, since all the actual hard work has already been done by Wikimedia volunteers!

--Kim Bruning 13:57, 23 August 2011 (UTC)[reply]
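
As an illustration of how little work the "which pages use a Bad Image" step would take a third party, that data is already one API call away (continuation handling omitted; the file name is just the example mentioned earlier on this page):

```typescript
// Sketch: listing the pages that embed a given file via the existing API. This is
// exactly the data a hypothetical censoring proxy would need in order to turn an
// image blacklist into a page blacklist.

const WP_API = 'https://en.wikipedia.org/w/api.php';

async function pagesUsingFile(fileTitle: string): Promise<string[]> {
  const url = `${WP_API}?action=query&list=imageusage&iulimit=max&format=json&origin=*` +
              `&iutitle=${encodeURIComponent(fileTitle)}`;
  const data = await (await fetch(url)).json();
  return data.query.imageusage.map((p: { title: string }) => p.title);
}

pagesUsingFile('File:Woundedsoldiervietnamwar1968.jpg')
  .then(pages => console.log('pages a blocker could add to its list:', pages));
```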

That's about the size of it... If you don't have access to a free internet, you're not a free citizen anymore, you're something else. Wikimedia projects, all information systems, are _incredibly_ useful for big brother, or a network administrator who plays the role well. The only consolation is that current Wikimedia volunteers won't waste their time on filters-- the time spent will be spent by a very different group of people with a very different mindset than our current Wikimedia volunteers. And they'll all hate each other's guts, and they won't agree on anything. And THAT is educational. :) --AlecMeta 00:09, 24 August 2011 (UTC)[reply]
I think that this is why the ALA classifies such lists and categories as "Censorship tools" --80.101.191.11 19:24, 24 August 2011 (UTC)[reply]

A new filter function based on existing categories, but no new categories

Most of the objections are related to the definition of what is or isn't objectionable, and to the process of arriving at that definition in the case of individual images.

If the WM community spends discussion time and effort to classify images, then the result of their effort will be used by censors to prevent certain groups of users from accessing those images. That constitutes a worse deal, in terms of the principles underlying WM, than if the censors themselves build and establish a censorship database around and on top of WM without the involvement of most WM editors. We would be making their jobs easier than before.

However, if the category system is left as is, including the requirement that the definition of all categories needs to be neutral (implied: objective), then the would-be censors do not gain any new advantage over the current situation.

The authors of the proposal intend for the individual user to be able to filter their own experience. It would be acceptable for the WM projects to offer such a possibility, not based on 5-10 specially designed categories, but based on any existing categories that the user selects. All we are offering is an add-on to the user interface, not a change to our data structures.

Arachnophobics need not refer to a special category of objectionable images; they can simply exclude spiders. Prudes can exclude any number of existing categories depending on their personal sensitivities. I may have a personal objection against the existence of the category Pornography, because its definition involves second-guessing the intention of the author of the image, but my objection is not made any worse by the possibility of filtering away all the images in that category.--Lieven Smits 09:21, 25 August 2011 (UTC)[reply]
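
Read this way, the filter is nothing more than a personal set of ordinary category names checked against an image's existing categories. A minimal sketch of that reading (the category names are illustrative only):

```typescript
// Sketch of the "add-on to the user interface" reading: the user's filter is a
// personal list of pre-existing categories, and nothing new is stored with the images.

type UserFilter = Set<string>;

function shouldHide(imageCategories: string[], filter: UserFilter): boolean {
  return imageCategories.some(cat => filter.has(cat));
}

// An arachnophobic reader excludes spiders without any special "objectionable"
// category having to exist anywhere in the projects.
const myFilter: UserFilter = new Set(['Category:Spiders']);
console.log(shouldHide(['Category:Spiders', 'Category:Macro photographs'], myFilter)); // true
console.log(shouldHide(['Category:Sunsets'], myFilter));                               // false
```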

You raise good points. This won't please everyone who wants a "Safe For Work" toggle, but it would add a simple, useful feature without any of the issues you mention. SJ talk | translate   02:12, 6 September 2011 (UTC)[reply]
I think that the use of the "category" language is misleading people. Nobody's talking about setting up a new category, like "Category:Images that are offensive because they show naked humans" and adding that new category to hundreds of thousands of images. They're talking about using the existing category tree, and compiling 5–10 lists of pre-existing, related categories for simple, user-friendly, optional image filtration.
A one-by-one, 100% customizable approach is definitely more flexible (and I can think of good reasons to move in that direction eventually), but there are three major problems with that approach:
  • It's far more complicated for the user. I believe that Commons alone has about half a million categories. I suspect that most users would want to filter less than one percent of those categories. Can you imagine reading half a million category names? Can you imagine individually ticking one to five thousand of them? Even a narrow area, such as the one the arachnophobic person in your example might want, is likely to require ticking about 100 categories.
  • It requires extensive knowledge of the material you want to avoid. For example, the people you label as "prudes" might well want to filter images from Commons:Category:Goatse, but they might not know what that even is. The person with a spider phobia might not realize that images of the en:Amblypygi could be triggers, and he is not likely to think that looking at all the pictures to find out if they trigger a panic attack is an appropriate method for figuring out what should be listed.
  • Now remember that Commons's cat tree is in English, and imagine trying to figure out which categories you are likely to want to exclude if you don't speak any English at all. An English-speaker could make a quick decision about Category:Photographs of sexual intercourse, but a non-English speaker might have no idea what that says. WhatamIdoing 18:33, 25 August 2011 (UTC)[reply]
I admit that I feared that new categorisation information was going to be added to individual images, whether in the form of new categories or in the form of a new entity type with an m:n relationship to individual images. After your post I re-read the proposal and I do not think that it is stated very clearly. The Harris report talks about the existing category system, which is not the same as sticking to existing categories; and in any case the referendum is not about the wording of the report.
If the proposal is interpreted in the way that you describe, then most of my worries are alleviated, because editors will no longer be asked to judge directly on the 'appropriateness' of individual images. The remaining concerns are (1) that the definition of the 5-10 groupings is unlikely to be guided by objective criteria, and (2) that the definition of the underlying, existing categories will be de facto modified as a consequence.
(1) Even the Harris report is ambiguous in the examples it gives of 'sexual' images: in one place it seems to relate to images of genitals and masturbation, in another place it suggests including bare breasts -- although the latter example could be meant to belong to a different grouping. Sexual arousal is both a personal and a cultural matter that is impossible to predict in a neutral (i.e., independent from personal and cultural preferences) way. Note that not all groupings need to suffer from this. It is objectively verifiable whether an image depicts the prophet Muhammad, or a spider, or a corpse.
(2) If a grouping is created without the necessary objective and neutral criteria for inclusion, then the mere existence of the grouping will influence the definition of the existing categories from which it is composed. Thus, for example, the current category Pornography may start including pictures that have the effect of arousing a number of users sexually, rather than its current definition which is limited to pictures that have the intention of sexual arousal. In the nightmare scenario of broad interpretation, this could even include the likes of the Ecstasy of Saint Teresa.
But both these concerns are far smaller than my original worry (not entirely gone) that we would be doing the tagging ourselves.--Lieven Smits 08:53, 26 August 2011 (UTC)[reply]
Directly tagging ten million images according to "offensiveness" is my idea of a nightmare, too, and that's why I'm glad that it's not planned.
I think one advantage of using the regular category system is that it's so much more detailed. Someone might put something in a "Porn" category, but—well, go look at what's actually in Commons:Category:Pornography. Would you put that in the list of things to suppress if someone decides to "Tick here to suppress sex and porn images"? I wouldn't. The categories that we'll actually want to put on that list are things like Commons:Category:Photographs of sexual intercourse, and I figure that pretty much any adult should be able to make a good guess about whether a given image actually fits in that category.
Borderline images are difficult to get "perfectly" categorized now, but there's surprisingly little edit warring over it, and people clean stuff up when they encounter problems. I don't think that this situation will change much. We may have a few more people appoint themselves guardians of certain categories, but that happens now (and is generally a good thing), and I expect it to work in both directions, with the 'guardians' both adding more images and removing inappropriate images. WhatamIdoing 17:08, 26 August 2011 (UTC)[reply]
Our community is surprisingly good at descriptive classification because they have a category system that serves no purpose other than descriptive classification. So they can tackle the inherently difficult task and just keep improving the system.
If our classification system also becomes a filtration system, I think it would be reasonable to expect that we would have at least as many dedicated and efficient filterers as we now have categorizers. And from an information science perspective, the processes are simply fundamentally different. A filterer doesn't have any concern as to what an image is about, while that concept is at the very heart of descriptive categorization. Once arbitrary standards have been set down, filtering is actually much easier, because it's simply a "does contain"/"does not contain" decision.--Trystan 18:44, 26 August 2011 (UTC)[reply]
Even worse, we don't have enough categorizers now. Rich Farmbrough 11:22 29 August 2011 (GMT).

Pre-project enabling?

How controversial would filtering be if it were initially adopted only on Commons, which doesn't have to be NPOV? That would let us work out the bugs and let the projects decide when and whether to enable the filter on their project.

Would this help relieve concerns? --AlecMeta 02:37, 26 August 2011 (UTC)[reply]

Probably not, at least not my main concern: that WM editors will be spending time to make the work of third-party censors easier. It's the tag data itself that worries me, not the functionality that interprets it (on WM or elsewhere). Certain a priori restrictions on the way in which filters are defined could do the trick and put me at ease. Such as: filters are composed of relatively broad pre-existing categories, not of individual images or specialised categories that pertain by definition to very similar images (e.g., categories dedicated to a single work of art). And filters cannot be created unless they have an easily verifiable, objective definition.--Lieven Smits 09:15, 26 August 2011 (UTC)[reply]

Design ideas and issues

Questions about the design, existing alternatives

Hello there, I will just add a few comments regarding the upcoming referendum. As to my background: I am a Computer Scientist by profession, so I can technically judge what is involved. As a quick outline:

  • A user (either anonymous or named) is able to exclude certain images from search results. In the case of anonymous users, the preferences are stored inside the user's session; closing and re-opening the browser will reset the settings. Users who log in to edit can store their settings in their profile; their settings will not be reset when they close/open their browser.
  • Architecture-wise, blacklists and whitelists can be used. To be determined: do these act on groups of images, or on single images? Is it possible to whitelist single images in a group that is blacklisted, or to blacklist single images of a whitelisted group?
  • How does the software identify the images to "filter"? There are two options. At load time, the software analyses the image, and the result of this analysis is used for filtering; this incurs extra costs in computing time and memory, and there are different algorithms which yield different results. The other option is static tagging. This option has the drawback that some people need to decide the tags to use ("tag wars" have been cited above). Also, the behaviour needs to be specified if an image does not have any tags; the blacklist/whitelist approach can be used.
  • There are programs on the market that implement a client-side proxy and that probably cover 80-85% of what this development will achieve. I currently see no benefit in implementing this solution on the server. The solution where the filtering is done dynamically (i.e. no static tags) and on a per-image basis would probably be superior to client-side filtering. This, however, comes at the cost of additional CPU and memory usage, as well as false positives/false negatives.

To summarize:

  • If the solution of static tagging is chosen, we have the problem that images need to be tagged, and "agreement" over the tags to use needs to be reached in some way. Also, the behaviour in the case of an untagged image needs to be defined. We need to define the granularity: is it possible to "whitelist" individual images of a group that is "blacklisted" (or to "blacklist" individual images of a whitelisted group)? (One possible precedence rule is sketched after this comment.) Finally, how do we determine the "tags" (or groups of tags) to use?
  • If we tag dynamically, we incur extra costs in CPU and memory use of the system. We need to reach agreement over the algorithms to propose for identifying images; we need to implement those algorithms, which may be technically difficult; and we may need to think about caching the results of calculations, to reduce CPU load. Also note that the algorithms use stochastic information: there will be false positives and false negatives.

Both approaches have their benefits and drawbacks. Neither is "quick to implement". So given that the client proxies ("filters") out there probably cover 80-85% of the requirements the usual client has ("don't show images of nude people of the opposite sex"), where is the use case that would justify 3-5 people working 3-6 months to get the extra 15-20%? --Eptalon 09:12, 4 July 2011 (UTC)[reply]
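
To pin down the granularity question from the summary above, here is one possible precedence rule for the static-tagging variant: per-image entries override per-group entries, a whitelist wins over a blacklist at equal specificity, and untagged images are shown. This is only one design choice sketched for discussion, not the proposal itself:

```typescript
// Sketch of one possible blacklist/whitelist precedence for static tagging.
// Decisions about single images override decisions about groups, and at equal
// specificity a whitelist entry wins. Untagged or unlisted images default to "show".

interface FilterLists {
  blacklistGroups: Set<string>;
  whitelistGroups: Set<string>;
  blacklistImages: Set<string>;
  whitelistImages: Set<string>;
}

function isHidden(image: string, groups: string[], lists: FilterLists): boolean {
  if (lists.whitelistImages.has(image)) return false;  // per-image override wins
  if (lists.blacklistImages.has(image)) return true;
  const inBlacklistedGroup = groups.some(g => lists.blacklistGroups.has(g));
  const inWhitelistedGroup = groups.some(g => lists.whitelistGroups.has(g));
  if (inWhitelistedGroup) return false;                // whitelist beats blacklist
  if (inBlacklistedGroup) return true;
  return false;                                        // default: show
}
```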

Please provide more data to explain what "80-85%" means to you here. (list some of the clients you have in mind, and the use cases you feel constitute the 100%). If there are client-side tools that are aware of Commons image categories [or can be customized to be] that would be a useful data point. (And pointing to an open-source client-side option for readers who want one, that is known to work smoothly with WM projects, would be in line with the goal here). Two use cases, for discussion purposes:
  1. You're browsing wikipedia, possibly on someone else's machine, and want to toggle off a class of images. [for instance: giving a WP demo at work, or in Saudi Arabia, &c.] It may not be possible to install your own client software, and you'd like to be able to set this in under a minute.
  2. You come across a specific image you don't want to see again (and checking, find it is part of a category of similar images), and want to hide it/them in the future.
SJ talk | translate   15:42, 4 July 2011 (UTC)[reply]
Client-side proxy filters are aimed at parents worried that their children might see the wrong type of image; AFAIK most of them work with whitelists/blacklists of sites; they do not do an on-access scan of the image. In addition, they might do "keyword scanning" in the text (to filter hate sites, and similar). The customisation of these products lies in being able to select "categories" of sites to block/allow, perhaps on a per-user basis. Our "static category blacklist/whitelist" approach would in essence do the same thing, except that to achieve it we need to do development work, and at best we match the functionality of a USD 50 product. In addition, load is placed on our servers to do the filtering work (plus possible problems with the categorisation). Using the dynamic approach will mean even more load on our servers, the possibility of "false positives"/"false negatives", the difficulty of finding training data (note: that data cannot be used later on), etc. In short: a lot more (difficult) work. We may exceed the USD 50 product in functionality, but we have 3-5 people developing for 4-6 months. I really don't know if I want to spend up to USD 250,000 (24 man-months) to "not see an image again" - it seems out of proportion.--Eptalon 19:29, 4 July 2011 (UTC)[reply]
Point of clarification... $250,000? NonvocalScream 19:36, 4 July 2011 (UTC)[reply]
Let's be very careful about throwing around dollar figures. That hasn't been scoped, so I think it's dangerous to introduce false numbers to the equation at this point. Philippe (WMF) 19:39, 4 July 2011 (UTC)[reply]
Duration of project: several people, 4-6 months (for the dynamic approach, not using static tags). --Eptalon 20:15, 4 July 2011 (UTC)[reply]
According to whom, by what metrics and judging by the speed of what resources? How about we let the folks who are designing the thing scope it out, once it's, you know, designed? Philippe (WMF) 23:59, 4 July 2011 (UTC)[reply]
Eptalon, you seem to assume that everybody's got a web proxy. I don't. If I really, really don't want to see the infamous picture of the guy jumping to his death from the Golden Gate bridge, my options at the moment are:
  1. Don't read Wikipedia (because any vandal could add it to any page at any time),
  2. Especially don't read pages where that image might logically be present (bummer if you need information about suicide), or
  3. Figure out how to manually block all versions of that image in every single account (five) and every single browser (two) on every single computer (four) I use—which will effectively keep that image off my computer screen, but not any others like it.
This proposal would let me control my computer by clicking a "don't really feel like seeing images of dead bodies today, thanks anyway" button. The images would appear "hidden", and I could override the setting any time I felt like it by simply clicking on the "This image hidden at your request because you said you didn't feel like seeing any images of dead bodies today" button. There is nothing here that would let some institution control my computer. WhatamIdoing 19:23, 5 July 2011 (UTC)[reply]
I don't have one (or use any filtering software); your proposal shifts the problem, though. You need to agree with other people about the categories. In the 16th century, a painter called Lucas Cranach the Elder painted a woman, before a tree, wearing a necklace (called 'Venus'). In the same century Michelangelo made his statue David. In the 19th century, Jules Joseph Lefebvre painted a woman with a mirror ('Truth'). To me, all these works are works of art, and as such, limiting their audience does not make sense. In the 1990s, a museum in London used Cranach's painting as an ad for an exhibition; they showed it on posters in the London Underground - and there was an outcry.--Eptalon 14:27, 6 July 2011 (UTC)[reply]
@Eptalon: I think there is a very real advantage to doing a system specific to Wikipedia that takes advantage of our category structure: filtering can be made more precise, which means not just missing fewer of the things that people want to block, but more importantly, avoiding blocking educational materials that young readers need access to in order to learn. This is our chance to give people a solution that isn't as conservative and overzealous as every other generic solution on the market. Dcoetzee 22:31, 16 July 2011 (UTC)[reply]
I didn't sign up to provide people with such a 'solution', Dcoetzee. But thanks for pointing out the extreme slipperiness of this slope.
Now, if someone wants to hide a certain picture for themselves by clicking on a little X or whatever, *fine*, I'm all for user control.
If you pop open a menu that also says "hide all other images in <one of the categories the image is a member of>" ... well, that's fairly sane, as long as no new categories are generated.
But let's not jump on the actual censorship bandwagon. Fair enough? --Kim Bruning 10:35, 21 August 2011 (UTC)[reply]
Incidentally, if we're adding menus to images anyway, could we also have other options that make things more explicit, like "view image info" and "view full size"; or editing assistance like "Alter size", "Alter location", etc... --Kim Bruning 10:45, 21 August 2011 (UTC)[reply]

The problem is not with clicking things. That's fine. People keep concentrating on the clicky bits while there's nothing wrong with clicky bits. ;-)

The problem is with the category scheme. When you categorize images as "problematic" or "potentially clickyable", you're in trouble, because that classification system *in itself* is defined as a censorship tool by the ALA.

So if you have a clicky tool that can click images on or off, but without using cats, you're fine. However, if you have cats, you're toast. --Kim Bruning 19:33, 24 August 2011 (UTC) 19:28, 24 August 2011 (UTC) meeouw![reply]

Logging in to permanently save

Just to point out: the concept presented in File:CC-Proposal-Workflow-Anon-FromNav-Step3.png (logging in to permanently save options) wouldn't necessarily work in the classic scenario of parents applying filters to their children's computers, as:

  1. Their children would then edit using the accounts that have been logged in (pro: would work well in terms of accountability, con: what happens if the account is subsequently blocked for vandalism? Note that the account would most likely pass semi-protection due to the length of time since creation - although it might not pass in terms of number of edits, at least to start with.)
  2. Their children could simply log out, and/or log in with their own accounts, thereby bypassing the filter.

Mike Peel 21:48, 23 July 2011 (UTC)[reply]

I do not see this as something parents should use to prevent their child from seeing images. They still have Google Images. If a parent does not watch their child while they are on the internet (i.e. watches what they are doing on the internet) or does not obtain a filter program or service that is controllable only by them, then they have no one to blame but themselves if their child watches porn or the like. This is so people who don't want to see nudity can block it for themselves, not for their children. The only problem people might have is those with dissociative identity disorder, where one personality wants the nudity but another doesn't. Black.jeff 09:05, 20 August 2011 (UTC)[reply]

You're right: this is completely useless for parents (or schools, or employers, or whatever) censoring someone else's reading. This is only useful for you controlling what you personally and voluntarily choose to display on your computer screen at any given moment in time. It is not designed to permit you to control what anyone else sees. If parents want to control their children's access to Wikipedia, they will have to find another method for doing that.
The point of this is to let a person with PTSD (for example) voluntarily stop the display of mutilated bodies on his own computer screen. Currently, we are forcing people to see types of images that they do not want to see. There is no possibility in this system for you to stop someone else from seeing images that they want to see; even if you could "lock" the preference (which you can't), overriding the setting only requires that the reader click the hidden image. WhatamIdoing 19:29, 23 August 2011 (UTC)[reply]

Protection levels

Just like many security software applications have multiple security levels, I propose the same for this image filter. There should be a few levels per tag, 3 or 4, so decisions are easy. For example, 0 for no sexual content, 1 for light clothes like here, 2 for underwear and 3 for visible genitals. --NaBUru38 18:39, 15 August 2011 (UTC)[reply]
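
A sketch of how such per-tag levels might look in code; the level names and the "hide anything above the user's chosen maximum" semantics are only illustrative:

```typescript
// Sketch of per-tag filter levels: each image carries a level for a tag, each user
// sets the maximum level they are willing to see, and anything above it is hidden.

enum SexualContentLevel { None = 0, LightClothing = 1, Underwear = 2, Explicit = 3 }

function hideForUser(imageLevel: SexualContentLevel, userMax: SexualContentLevel): boolean {
  return imageLevel > userMax;
}

console.log(hideForUser(SexualContentLevel.Underwear, SexualContentLevel.LightClothing)); // true: hidden
console.log(hideForUser(SexualContentLevel.LightClothing, SexualContentLevel.Explicit));  // false: shown
```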

And of course, it's always going to be women who will be tagged with "light clothes" because we all know they should wear potato bags to please certain Middle Eastern imaginary beings. Andkon 19:44, 22 August 2011 (UTC)[reply]

Don't use a collapsed frame!

I think really that the system should NOT use AJAX or collapsed frames, for several reasons.

The first, and most important, reason is that a filtered image should not be sent over the wire at all. It might get stuck in a company/school proxy server with logging turned on, and that can make bad things happen for an employee who thought the filter was a "safe" way of browsing Wikipedia (like being fired from work, having a computer account revoked/locked for a period of time, or being expelled from school for a time).

With the filter ON, you should be safe in the knowledge that nothing you have elected to filter out is sent over the wire at all.

Another reason for not using AJAX or collapsible frames in the filter system is that older browsers might not support them, causing the images to be shown even with the filter turned on. You can never know what happens with older browsers, so the safest approach is to control image hiding on the server side.

Also, the filter settings dialog contains category words that might get caught in a filter. So instead of showing the filter settings dialog with AJAX, show it in a simple target="_blank" popup. A _blank popup opened from a link will almost never be blocked by popup blockers (unless you have turned the security settings way too high). That will make the filter safe for everyone to use and rely on.

Also, provide an option to block ALL images, so that if you have a slow link you can opt not to load heavy images over a 200 kbps GPRS connection. Sebastiannielsen 02:45, 21 August 2011 (UTC)[reply]
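
The underlying distinction here is between collapsing an image that has already been sent and never emitting the image markup at all. A hedged sketch of both variants (the function names and markup are hypothetical, not MediaWiki code):

```typescript
// Sketch of the difference described above. A collapsed frame still emits the <img>
// tag, so the browser fetches the file and it shows up in any proxy log. Server-side
// filtering replaces the markup before anything goes over the wire.

interface RenderOptions { hideImage: boolean }

// Client-side collapse: the file is still requested by the browser.
function collapsedFrameHtml(src: string): string {
  return `<div class="collapsed"><img src="${src}" style="display:none"></div>`;
}

// Server-side filtering: only a link to the file's description page is sent.
function serverFilteredHtml(src: string, descriptionUrl: string, opts: RenderOptions): string {
  return opts.hideImage
    ? `<a href="${descriptionUrl}">Image hidden by your filter (follow link to view)</a>`
    : `<img src="${src}">`;
}
```

Only the second variant keeps the filtered file out of proxy logs, at the cost of doing the filtering on the server.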

Comment: I like the idea of a "No images" option, independent of anything else. In which case you certainly would not want to load the images behind a collapsed frame, you wouldn't want them to be present at all. SJ talk | translate   02:37, 6 September 2011 (UTC)[reply]
If a workplace is going to be that assholic, it's best that they do without Wikipedia. That will give a little edge for a better company to outcompete them. Wnt 17:10, 21 August 2011 (UTC)[reply]
To block all images follow the instructions at en:Wikipedia:Options to not see an image. Rich Farmbrough 18:55 21 August 2011 (GMT).
That does not block the images from being sent over the wire, which is the point of disabling all images if you disable them because you are sitting on 200 kbps GPRS. As for workplaces, the sysadmin can in many cases not see whether the client is showing a "forbidden" image or not if they don't have a client surveillance solution; they can only see whether a client has downloaded an image or not. That's why the system should not send the images over the wire at all.
Remember that many pupils use Wikipedia as a resource for their school work, and they might be searching for something related to that and come to an article with forbidden images (according to the computer usage policy at that school). Sebastiannielsen 23:40, 21 August 2011 (UTC)[reply]
I tend to think of children and workers-with-harsh-workplace-rules as if they are subjects of a totalitarian society-- I don't exactly know whether we help or hurt by working to meet their needs. If we even consider that sysadmins might use a tool to determine who has the filter turned on, we legitimize their using it in that way. These populations are essentially "lost" to us, and it may be better to focus exclusively on the needs of people who have the right to view whatever they wish.
Then it's a much simpler problem. 1-click-to-view-a-shock-image is something lots of readers want, and it's really not about censorship as much as it is about turning the screen away from the room before you click to see what's underneath. That's a simple need, with a simple fix.
It's not really about politics or decency or anything as complex as that. It's just a momentary courtesy some readers would like. --AlecMeta 18:13, 22 August 2011 (UTC)[reply]
A simple thing to do is to make images that are filtered, according to the user's filter settings, into links. E.g., instead of the image, the user is shown a link that he can click to land on the image's wiki page. If we are going to use collapsed frames or AJAX, I really think we should put a HUUUGE warning on the filter page that says "WARNING: The filter only prevents you from inadvertently seeing images you don't want to see. The filtering is done on your local computer, and it does NOT prevent any of the filtered images from being sent over your network wire, and if you use Wikipedia from a school or workplace you can face consequences for images transmitted over the wire, according to your local IT usage policy, even if the images were never shown on your computer screen." Sebastiannielsen 06:38, 27 August 2011 (UTC)[reply]
It would be reasonable to include that information on whatever image-filter help pages we might write (assuming the filter is ever implemented). I think that making it "a HUUUGE warning" is probably unnecessary, though. WhatamIdoing 17:33, 27 August 2011 (UTC)[reply]
Collapse boxes also take time to render; I see the content of collapse boxes quite often. Not a good technical solution. Rich Farmbrough 22:38 30 August 2011 (GMT).

Eingangskontrolle's example

[Mock-up of a filtered image, titled "Kruzifix":]
Image hidden by the Wikipedia opt-in personal image filter. This image is categorized as: religious POV (Christian); health advice (dangerous for vampires); contravention of "Thou shalt not make thee any graven image" (Exodus 20:4); torture; corpse; barely clothed male person. The categories you blocked are marked in yellow; click the link to show the image.

The preceding unsigned comment was added by Eingangskontrolle (talk • contribs) 11:54, 30 August 2011.

This page isn't complete: Archives have more alternative ideas

I should note that I proposed an alternative ([2]) which was oublietted to the Archives before these subtopic pages were set up here. There are five pages of such archives. Should they be divided up and transcribed to these topic-specific talk pages? In any case bear in mind that not all the ideas are listed here. Wnt 18:36, 8 September 2011 (UTC)[reply]