Anti-vandalism ideas

Currently there are three possible technical states for a page: unprotected, semi-protected, and protected. To deal with vandalism, additional, finer-grained policies for individual pages would be beneficial.

Delay newbie edits for a few hours unless approved by a reviewer

en:Wikipedia:Delayed flagged revisions: A delay on newbie edits would help vandalism patrollers and watchlisters review those edits and accept or revert them. Unreviewed edits would go live after the delay time, so there would be no backlog and no conflict with the "editable by anyone" policy. The proposal is therefore to add a time-based auto-review feature to FlaggedRevs, so that all unregistered/newbie edits are delayed for a few hours before going live, while still allowing vandalism patrollers to approve or reject those edits immediately. See en:WP:Delayed flagged revisions. Vis M (talk) 23:20, 15 February 2021 (UTC)
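
A minimal sketch of the timing rule, in PHP (the language MediaWiki is written in). The field names, the six-hour delay, and the helper itself are illustrative assumptions, not actual FlaggedRevs internals:

  // Hypothetical sketch: decide whether a pending newbie edit is live.
  // $edit is an assumed associative array; DELAY_HOURS is a placeholder.
  const DELAY_HOURS = 6;

  function isEditLive( array $edit, int $now ): bool {
      if ( $edit['reviewStatus'] === 'approved' ) {
          return true;                     // a patroller accepted it early
      }
      if ( $edit['reviewStatus'] === 'rejected' ) {
          return false;                    // a patroller reverted it
      }
      // Unreviewed edits go live once the delay has elapsed, so no backlog builds up.
      return $now >= $edit['savedAt'] + DELAY_HOURS * 3600;
  }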

AOL suggestion

On the chance that this hasn't been suggested before, I'll mention an idea to reduce AOL abuse. AOL's IP ranges are known. Suppose Wikipedia implemented compulsory registration for AOL editors: users who come from shared AOL IP addresses would have to log in with a username before editing. Blocking, when necessary, would then be applied to the user account rather than the IP.

I realize this would take a bit of coding, but it would probably be worth it. It would make things harder for dedicated trolls who hide behind AOL's anonymity, and it would end the current side effect of blocking legitimate AOL users' access to Wikipedia. (Durova) 208.54.14.25 07:19, 26 November 2005 (UTC)
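
A rough sketch of the range check in PHP; the CIDR values are examples only, not a verified list of AOL ranges:

  // Force login for anonymous edits from known shared IP ranges.
  $sharedRanges = [ '64.12.96.0/19', '205.188.192.0/18' ];   // example values only

  function ipInCidr( string $ip, string $cidr ): bool {
      list( $subnet, $bits ) = explode( '/', $cidr );
      $mask = -1 << ( 32 - (int)$bits );
      return ( ip2long( $ip ) & $mask ) === ( ip2long( $subnet ) & $mask );
  }

  function mustLogIn( string $ip, bool $isLoggedIn, array $ranges ): bool {
      if ( $isLoggedIn ) {
          return false;                    // registered users edit as usual
      }
      foreach ( $ranges as $cidr ) {
          if ( ipInCidr( $ip, $cidr ) ) {
              return true;                 // shared range: require a username
          }
      }
      return false;
  }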

I've thought about this too. But it might just pollute Wikipedia's database with an uncountable number of sock puppets. How would one work around this? -- John.constantine 15:45, 5 December 2005 (UTC)
Why not insist on registration before granting editing rights? You could also create two ranks of users: unverified and verified. Verified status could be implemented the way Gmail invitations were: start with the existing editors as verified users, and let them invite new verified users. That way, each verified user has a chain of people who know people leading back to the editors. -- Svein Olav Nyberg 10:40, 19 December 2005 (GMT)
What edit ability would an "unverified" user have? Having to know someone in the Wikipedia editor community as a prerequisite would exclude many legitimate editors. 70.122.87.59 01:50, 23 December 2005 (UTC)
About verified and unverified users: I have a problem with this method. I fear it would turn Wikipedia into nothing more than a glorified oligarchy of editors and increase content bias. Both are accusations that Wikipedia has faced before and addressed only because anyone can edit it.
AOL and anonymous users: this is really not a problem, as long as Wikipedia makes it difficult for users to create too many accounts in quick succession (say, a one-day waiting period? scripted confirmation links and other security features?)
--hydkat (talk) 06:57, 27 February 2006 (UTC)
For AOL socks and such, perhaps we could implement a cookie-based ban. Sure, it's not the best solution, but it's better than the current method of doing nothing. Besides, I bet at least 95% of AOL users would never figure out how to erase cookies, or know that they're behind a proxy that defeats IP bans. --71.225.64.232 22:24, 3 April 2006 (UTC)
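
A tiny sketch of the cookie idea in PHP; the cookie name and 30-day lifetime are assumptions:

  // Tag a blocked visitor's browser so the block survives an IP change.
  function applyBlockCookie(): void {
      setcookie( 'wpBlocked', '1', time() + 30 * 24 * 3600, '/' );   // 30 days
  }

  function isCookieBlocked(): bool {
      return isset( $_COOKIE['wpBlocked'] );
  }
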
Wiktionary method: Require use of secure.wikimedia.org. 67.142.130.26 02:21, 8 August 2006 (UTC)
I'm a regular user of secure.wikimedia.org and I have to say, it really isn't up to snuff. Requests sometimes randomly return various 5xx proxy errors; article names with URL-encoded forward slashes (%2F) in them always result in 404 errors; cross-wiki (cross-Wikimedia) links still direct one to insecure http; etc. And as far as I can tell, secure.wikimedia.org is not officially supported or endorsed by Wikimedia. -- Intgr 14:48, 29 January 2007 (UTC)

A smart policy in the edit process?

Since I have started to grow weary of vandalism (haven't we all?), I have come up with an idea. I am a programmer myself, and it is possible that this idea has been discussed before and/or has serious disadvantages, but I thought I might as well share it with you.

Much of the vandalism I have encountered involves blanking an entire page and replacing it with obscene and/or "funny" words.

My idea is to re-program the edit process (i.e. what happens when the user hits the "Save page" button) so that (with current page = presumed unvandalized page, new page = presumed vandalized page):

  1. if (size of current page in bytes > a fixed value)
  2. and (size of new page in bytes < another fixed value)
  3. then the new page is deemed vandalism and the save is refused

I know there is at least one problem with the idea: pages sometimes legitimately need to be deleted, blanked, redirected, etc. One way to deal with the redirection case is to add the following exception to the pseudocode above:

  1. if (the new page contains only a redirect link) then the save is allowed
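
A concrete sketch of the check in PHP; the two thresholds are placeholders (tuning them is the hard part), and the redirect test uses MediaWiki's #REDIRECT syntax:

  // Refuse saves that shrink a large page down to almost nothing,
  // unless the new text is just a redirect.
  function looksLikeBlanking( string $current, string $new ): bool {
      $bigEnough  = strlen( $current ) > 2000;   // placeholder threshold
      $tooSmall   = strlen( $new ) < 200;        // placeholder threshold
      $isRedirect = (bool)preg_match( '/^#REDIRECT\s*\[\[/i', trim( $new ) );
      return $bigEnough && $tooSmall && !$isRedirect;
  }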

To implement this, the idea must be put forward to the developers. As I said, I don't know if the idea is good or bad, but I thought it would be of interest anyway. If it is bad, it might nevertheless give rise to better ideas. I'd be thankful for feedback and all kinds of opinions. If you wish to contact me, please use my talk page on Wikipedia: my talk page

Regards, Dennis Nilsson. Dna-webmaster 18:01, 16 November 2005 (UTC)

Bayesian spam filtering, such as is employed by Thunderbird, is very good at distinguishing spam from legitimate email. The algorithm can filter aggressively or conservatively depending on how costly a false positive is. Wiki vandalism, like spam, has certain common, readily detectable features. I suspect that applying a Bayesian classification scheme to diffs could, with high accuracy, distinguish blatant vandalism from other edits.
The only thing that gives me pause with such a scheme is knowing what the correct action should be when the algorithm catches likely vandalism. Disallow the edit? Allow the edit, but if a single user or IP gets flagged too often in a given time window, block the user or IP for some time? In particular, the one thing we wouldn't want is for an automated means of detecting vandalism to encourage vandals to become less blatant in order to "beat the machine." 130.126.101.243 22:03, 1 December 2005 (UTC) w:User:Shimmin
I would love such a Bayesian filter to be implemented, and I have an idea what it should do: besides the "recent changes" page, there would be a "recent changes that might be vandalism" page for all edits that score over the filter's threshold. Currently a user (not an administrator) needs several clicks and page loads to revert a vandalized page; here, a one-button, one-click revert would be available to any user. I think this might speed up the handling of vandalism a lot. -- John.constantine 15:45, 5 December 2005 (UTC)
Yeah, a method of auto-flagging edits and making them stick out like that in Recent changes is a brilliant idea. We could also have it pick up common markers used by vandals in the edit diff, such as obscenities, sentences in all caps, signing an article in the mainspace, etc. Of course, some articles may include these on purpose for whatever reason, so it would only pick up what wasn't in the previous version. Essentially, we could have the opposite of a minor edit: an automatic flag for major edits. --71.225.64.232 22:28, 3 April 2006 (UTC)
The point about a Bayesian filter (of the sort that email programs use to find spam) is that it can automatically learn what is vandalism and what is not by noting the statistical word usage and word ordering of things that people revert versus the statistics of things that they accept without reversion. This would result in a system that could predict the likelihood that you personally will want to revert some new change, based on the statistical similarity of that change to things you choose not to revert. What it will actually pick up on is hard to guess: it might be particular obscene words, but it might be lack of punctuation or particular catch-phrases that vandals use ("XXX on wheels", for example). At the very least, it forces vandals to be creative in the way they screw things up, and creative people are not usually vandals. The learning aspect could be spread across all of Wikipedia (so recommended reverts would reflect the probability that any Wikipedian would revert this text) or personalized (the probability that YOU would revert it). Both would be useful. A new kind of blocking becomes possible for persistent vandals: automatically revert any change they make that the filter regards as likely vandalism. This kind of partial block would force these people to mend their ways or be effectively prevented from editing, and it could be applied to a shared IP address without doing so much harm to legitimate editors who share that address. SteveBaker 02:25, 21 August 2006 (UTC)
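
To make the statistical idea concrete, here is a toy naive-Bayes scorer in PHP over the words added by a diff. The word counts would be learned from logged revert/keep decisions; everything here is an illustrative skeleton, not a production classifier:

  // Log-odds that an edit is vandalism, given the words it adds.
  function vandalismScore( array $addedWords, array $vandalCounts, array $cleanCounts,
                           int $vandalTotal, int $cleanTotal ): float {
      $logOdds = log( $vandalTotal / max( 1, $cleanTotal ) );   // prior odds
      foreach ( $addedWords as $w ) {
          $pVandal = ( ( $vandalCounts[$w] ?? 0 ) + 1 ) / ( $vandalTotal + 2 );  // Laplace smoothing
          $pClean  = ( ( $cleanCounts[$w] ?? 0 ) + 1 ) / ( $cleanTotal + 2 );
          $logOdds += log( $pVandal / $pClean );
      }
      return $logOdds;   // above some threshold: flag the edit in Recent changes
  }
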
Regarding the concern that automatically deleting identified vandalism would just train vandals to be trickier: an option could be to automatically divert the vandal's work to a temporary sandbox that only the vandal sees. The vandal, and only that vandal, sees their damage and thinks they got away with it. After 15-30 minutes, once the vandal has become bored and wandered away, all traces of their edit vanish. User:Ld100 08:12, 27 May 2008 (UTC)
I think the Bayesian filter, while tried and true, would not work too well on Wikipedia. I do LOVE the idea of it auto-flagging various edits, though. This could REALLY streamline the process, especially in Huggle. I think that if a user's edits pop up flagged too many times in a given period, they should be quickly flagged to be banned (sort of like a level 4 warning). Allmightyduck 01:55, 31 May 2010 (UTC)

Spam button on history list

There should be a "This is spam" button next to each edit in the history list, visible only to registered users. Clicking it would have the following effects (a sketch of the mass-rollback step follows the list):

  • The IP/user is not blocked, but is asked to enter a code to stop bots.
  • A sysop can easily roll back every edit of this IP from the contributions page with a single click, via a button such as "rollback all edits after spam alert".
  • The spam-marked contribution is analysed for external links, and all edits with links to the same domain also trigger the bot-stop code.
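
As promised above, a sketch of the mass-rollback step in PHP; getContributions() and revertEdit() are hypothetical helpers standing in for whatever the real backend provides:

  // Roll back every edit this IP made after the spam alert was raised.
  function rollbackAllSince( string $ip, int $spamAlertTime ): void {
      foreach ( getContributions( $ip ) as $edit ) {
          if ( $edit['timestamp'] >= $spamAlertTime ) {
              revertEdit( $edit['id'] );   // restore the previous revision
          }
      }
  }
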
Sounds good to have a one-click way to revert lots of stuff, unless that feature is itself abused! Imagine a button next to each user's name on the history page that allowed people to "report this user as a spammer" or something: you'd have masses of good people being reported as spammers. I kinda like the Gmail-invite approach, except spammers would still find their way in. Software which detected likely spam and flagged user accounts with a "% likely to be a spammer" value might be good. 203.192.130.18 12:15, 9 February 2007 (UTC)
The idea itself is very good, and if it is abused, the sysops will sort out the problem, provided the user is not and has not been a vandal. A mandatory explanation should be required when flagging spam edits, to make sure the spam flag is not abused. If there is no good explanation, or no explanation at all, the flag should be reverted by sysops. --Rion 08:41, 23 May 2011 (UTC)

Temporary automatic blocks

For instance, if a user adds the word "penis" or something similar in many small edits across several articles, have a temporary block kick in to slow them down. Make it a template or category so we can track it as well. 65.2.188.218 04:54, 3 October 2005 (UTC)

That may seem like a good idea, and it probably is, but it's risky to have an auto-ban running, as it may go haywire and ban a bunch of innocent people. --Andrew Hampe 23:10, 30 March 2007 (UTC)

Expose IP addresses of autoblocked users

Currently, on en.wiktionary.org, when the daily vandal strikes and is blocked, a second block is rapidly added (e.g. "User #220 autoblocked for sharing an IP address with blocked user so-and-so"). The autoblock lasts only 24 hours. Since such a block is indicative of a compromised address (be it a public library or a compromised cable-modem user), it makes sense to make the autoblock permanent. When the system has been secured, the block could then be lifted on request. 256^4 is a big number; if a thousand addresses are blocked, who cares?! Why a vandal's (temporary/compromised) address is hidden from public view is a mystery.

For those balking at permanent blocks: certainly a one-year block for a compromised address seems much more reasonable than a mere 24 hours. --Connel MacKenzie 00:06, 17 Apr 2005 (UTC)

The "compromised" IP is normally either a dynamic IP (i.e. it could be anyone from a whole ISP, e.g. AOL) or a proxy (i.e. a whole university or workplace). However much I like the idea of blocking AOL, I think this would deter a lot of moderate and light users (people who might fix small problems or errors but can't be bothered to get themselves unblocked) who are targeted by mistake.
That said, something should be done about autoblocks. It's quite annoying to be on the receiving end of blocks (even for 24 hours) when I've only made constructive edits and my account has been around for ages. Couldn't the autoblock software check that I've made hundreds of edits over months or years, and not block me!? How hard could it be!? --H2g2bob 01:29, 5 June 2006 (UTC)

An autoblocked IP should imho automatically be unblocked when a new user logs in from it, or comes online with a permanent cookie, and has no (recent) record of being blocked. I think that should be easily achieved: keeping a "last block date" field in the user record should suffice to make such a decision. The rationale should be obvious: when the user behind an IP address changes, either a dynamic IP address has been reassigned or a computer is now being used by someone else. --Purodha Blissenbach 15:27, 22 February 2007 (UTC)
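
A sketch of the proposed rule in PHP; the 90-day definition of "recent" is an assumption, and "last block date" is the single extra field the comment suggests:

  // Lift an IP autoblock when a user with no recent block record logs in from it.
  function shouldLiftAutoblock( ?int $lastBlockDate, int $now ): bool {
      return $lastBlockDate === null || ( $now - $lastBlockDate ) > 90 * 24 * 3600;
  }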

I believe a cooling period of one month should be applied to a dynamic IP when there are three blocks within seven days. This would delay vandalism by preventing new account creation: existing users could still use their accounts, but the IP would be blocked from creating new ones. Yosri (talk) 17:36, 11 January 2015 (UTC)

Patrolling new images

New images submitted to Wiktionary should be marked as patrolled by an admin/sysop before they can be displayed on an article page. (Such a policy probably wouldn't work as well on Wikipedia.) --Connel MacKenzie 00:06, 17 Apr 2005 (UTC)

Sharing blocks from Wikipedia to Wiktionary

Wiktionary's daily vandal apparently hits Wiktionary only a few minutes after being blocked at Wikipedia. --Connel MacKenzie 00:06, 17 Apr 2005 (UTC)

I liked that! Osias 21:39, 25 May 2007 (UTC)

Leveraging versions

The ability to display any past version of a page is very valuable. If we could also designate an arbitrary version as the 'stable' version while still working on a newer one, this would be even more useful.

(Re the comment later on that this could be used for producing a coherent published version of WP: there could additionally be a 'publishable' version for those articles that have enough detail and have been formatted cleanly/consistently enough to be considered complete.) Sj 06:25, 1 Mar 2004 (UTC)

Mild policy: additional link to 'stable version'

After registration, a user is a 'new user' until one of his edits survives unreverted and makes it into the stable version; he then becomes a full user.

  • Display the latest version and a link to the last edit by a logged-in user (the 'stable version')
  • For logged-in users, display a 'make this stable' button, which also approves the new users who made the last edits

Advantages

  • provides a 'stable version' permalink
  • little impact otherwise: transparent to the user; only the 'make this stable' button won't show up on the first edit (not noticeable for new users)
  • stable version suitable for printed versions
  • ?

Disadvantages

Tough policy: show last stable version and link to 'HEAD' version

If deemed necessary (a lot of vandalism, or 'a critical mass reached'), the mild version could be spiced up:

As long as 'stable' just means 'last edit by a verified user', and as long as Recent changes is modified to indicate when there is an anonymous change that needs 'verification' in this sense, this doesn't seem like such a tough policy. Also, I don't see why these things require a separate "make stable" button; any saved edit by a logged-in user could work. I think this is a fine proposal. Sj 06:33, 1 Mar 2004 (UTC)
  • for new visitors, show the stable version plus a prominent link to the latest version (could be toggled via cookie to always show the 'HEAD' version instead, if preferred)
  • after the first edit by an anon (session started), and for new users: show the latest version to give instant gratification, with a prominent link to the live/stable version
  • for logged-in users, display the 'HEAD' version with a link to the stable version and a 'make this stable' button, which also approves the new users who made the last edits (a sketch of the serving rule follows this list)
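
A sketch of that serving rule in PHP; the inputs are assumptions about what the software would track:

  // Decide which revision of a page to show to the current visitor.
  function revisionToShow( bool $isLoggedIn, bool $hasEditedThisSession,
                           int $stableRev, int $headRev ): int {
      if ( $isLoggedIn || $hasEditedThisSession ) {
          return $headRev;     // editors and active anons see the latest version
      }
      return $stableRev;       // casual readers get the last verified version
  }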

Advantages

  • Maintains instant gratification for anons
  • Highly robust articles
  • Higher credibility for Wikipedia
  • ?

Disadvantages

  • More restrictive for anon editors
  • Might be overkill
  • Might interfere with the wiki spirit
  • Bad edits could still make it through.


Commenting on the first bullet point: reverse this. Have a standard template box-header linking to the stable version (making it explicitly clear). Newbies who wish to edit generally wish to edit what they've just read, not something that may have been modified 50+ times (and that they are unaware of until they hit the edit button). If you are going to implement it the way stated, it'd be preferable to remove the edit button from the stable page. This might be preferable to my first two sentences. 24.22.227.53 16:55, 20 September 2005 (UTC)

Timed release of edits

Main article: Timed article change stabilisation mechanism

Basically, the idea is that anyone can change anything, but for new and anonymous editors the article doesn't go live immediately; a previous version continues to be shown. After, say, 24 hours, the edit goes live only if it has not been corrected. Well-established editors' changes appear immediately.
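
A sketch of the rule in PHP; the field names are assumptions, and 'established' status would presumably come from account age or edit count:

  // Which version should readers see for an article with a pending edit?
  function visibleRevision( array $pendingEdit, int $now ): string {
      if ( $pendingEdit['byEstablishedEditor'] ) {
          return 'new';                    // goes live immediately
      }
      if ( $pendingEdit['reverted'] ) {
          return 'previous';               // corrected within the window
      }
      // Otherwise the edit goes live once 24 hours have passed uncorrected.
      return ( $now - $pendingEdit['savedAt'] >= 24 * 3600 ) ? 'new' : 'previous';
  }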

What happens when another user edits the article within those 24 hours? 72.192.55.156 23:25, 25 March 2008 (UTC)

Background color of previously blocked IPs?

Could you possibly reverse the background color for entries made from an IP (or a user less than 24 hours old) that has been blocked by any Wikimedia site within the past year? Or make the background red or something? --Connel MacKenzie 21:05, 21 Jun 2005 (UTC)

I just found http://en.wikipedia.org/wiki/User:CryptoDerk/CDVF and am amazed I haven't come across it before! What a powerful tool. But with blocked IP addresses still obscured (e.g. "User #269 blocked for sharing IP..."), I think I'd still like my above request to stand. --Connel MacKenzie 21:25, 21 Jun 2005 (UTC)
The bot I coded watches over all blacklisted IPs; IPs are blacklisted if they are blocked for vandalism or via manual user input. The bot posts alerts on IRC (server irc.freenode.net) in the channels #wikipedia-en-vandalism for en and #wikimedia-meta-vandalism for meta, as well as many other languages :) --Cool Cat (Talk) 03:27, 18 December 2005 (UTC)

Consensus Networks

Rather than trying to identify all possible users of the system and decide whether something is vandalism or not (I consider any reference to Intelligent Design to be vandalism; proponents of ID tend to think that any reference to evolution is vandalism), allow vandalism. Instead, build a consensus network that lets each user filter out what they would consider vandalism. Basically, I am asking for this: when I visit a page that has multiple versions, I can select the version I like best. This counts as a vote for my selected version and against the other versions. For pages where I haven't expressed an explicit preference, my preference is computed by finding other users who have voted on that page and comparing the votes they've made on pages we have both voted on.

For pages on which I have expressed a preference, it should be possible to detect when my consensus network suggests that a version written after I made my choice is likely to be one I would want to see.

It should also be possible to identify two competing ideologies and automatically provide a link from my preferred viewpoint to the competing one. (Neutral points of view are silly. Well-written, well-referenced, strongly slanted points of view are interesting and, when they conflict with strongly held personal beliefs, intriguing.)

Google and Amazon provide examples of consensus networks.
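
A toy sketch of the vote-similarity computation in PHP. The data layout ($votes[$user][$page] = version id the user voted for) and the overlap-count similarity are illustrative; real recommender systems use fancier measures:

  // Predict which version of $page the user $me would prefer,
  // weighting other users' votes by how often they agreed with mine.
  function predictVersion( string $me, string $page, array $votes ): ?int {
      $scores = [];
      foreach ( $votes as $user => $theirVotes ) {
          if ( $user === $me || !isset( $theirVotes[$page] ) ) {
              continue;
          }
          $agree = 0;        // similarity = count of matching past votes
          foreach ( $votes[$me] as $p => $v ) {
              if ( isset( $theirVotes[$p] ) && $theirVotes[$p] === $v ) {
                  $agree++;
              }
          }
          $candidate = $theirVotes[$page];
          $scores[$candidate] = ( $scores[$candidate] ?? 0 ) + $agree;
      }
      if ( !$scores ) {
          return null;       // no overlapping voters: fall back to the default version
      }
      arsort( $scores );
      return array_key_first( $scores );   // version backed by the most similar voters
  }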

Hate vandalism

This is not anti-vandalism but censorship. 217.15.96.19 16:19, 28 February 2006 (UTC)

Unique Word Count as a filter

Hey. Recently on the English Wikipedia there has been some vandalism where a user blanks a page and then adds random swear words, both as the text and as the edit summary.

Also, more and more articles without content are being found and deleted.

Would it be possible to run a check on all newly created pages for "at least 12 unique (i.e. not repeated) words of one or more actual letters (i.e. no $, %, or *s)"? Pages failing the check could not be created at all.

This would prevent repetition vandalism (for example, "Penis" repeated 253 million times) and would also keep really small articles (say, a "Royal Reserve" page containing only the words "Royal Reserve") from being around too long. This would also remove strain from both anti-vandal bots and new-page watchers.

However, a major downside would be the sort of residue left by ex-Google bombings, i.e. a mass of random words in an article.

Sincerely, User:ArdoMelnikov from the English Wikipedia, without an account. 24.89.195.139 19:43, 17 June 2006 (UTC)
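
A sketch of the proposed filter in PHP; the 12-word threshold follows the comment above, and the regular expression is just one way to define an "actual" word:

  // Reject new pages with fewer than 12 unique alphabetic words.
  function hasEnoughUniqueWords( string $text, int $minUnique = 12 ): bool {
      preg_match_all( '/[[:alpha:]]+/u', $text, $matches );   // letters only: no $, %, *
      $unique = array_unique( array_map( 'mb_strtolower', $matches[0] ) );
      return count( $unique ) >= $minUnique;
  }

This catches both the repetition case (only a handful of unique words, however long the page) and one-word stubs, though, as noted above, a mass of random words would still slip through.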

I totally agree with this, and it adds to a similar idea I already had: only saving pages that contain at least X characters. This would guard against page blanking and against vandals adding just a few random characters to bypass the check; a minimum character limit wastes more of the vandal's time and may put them off. As a programmer, I could see it working something like this (in PHP, which MediaWiki is written in), where $_POST['body'] contains the edited wikitext and strlen is a core function returning the length of a string:

if ( strlen( $_POST['body'] ) > 120 ) {
    // code to save the page
} else {
    // warn the user; after repeated attempts, block
}

Your idea sounds even more substantial, and I'm all for it.

-- User:Jatkins, 22 February 2007 @ 12:06 GMT, on the English Wikipedia (I'll try and come back here, but it might be an idea to post on my user talk page as well. Cheers.)

Semi-protect the complete database

Vandalism wastes countless hours of countless editors who could otherwise be doing constructive work: increasing the size of Wikipedia and improving the quality of its articles. The article history log also becomes cluttered with vandalism reverts, and checking the history by comparing edits, to catch the more insidious vandalism or genuine errors, becomes rather time consuming, since literally hundreds of edits must be waded through. Vandalism statistics show that as Wikipedia grows, the percentage of vandalism increases (see en:Wikipedia:WikiProject Vandalism studies/Study1).

The majority of vandalism comes from anon edits or new users. New users who vandalise typically set up an account, vandalise a handful of pages over an hour or so, and are never heard from again. These vandals are likely after instant gratification, which appears to account for a large percentage of the vandalism.

A solution is to semi-protect the complete article space (and templates) to deprive vandals of the instant gratification of seeing their vandalism appear. Genuine anon or new-user edits could be requested via the talk pages. Vandals will then move to vandalising the talk pages, as this [1] shows; even so, this makes for less work for the vandalism patrol, and heavily vandalised talk pages could be given semi-protected status by admins. New users would be able to edit directly after the four-day cool-down period.

I feel that this policy is a good compromise between the idealism of Wikipedia's policies and the freedom of being able to edit. It must be realised that idealism comes at a cost: in this case, the time editors waste cleaning up after the vandals. Alan Liefting 07:31, 12 May 2007 (UTC)

New users can edit only 3 or 4 times

My idea is that new users (IPs or registered ones) would be able to make only 3 or 4 edits before being reviewed by an admin and promoted to "regular" user, or blocked if a spammer.

Osias 21:47, 25 May 2007 (UTC)

PS: this is not the same as Edit throttling, but the two ideas can be combined. Osias 21:50, 25 May 2007 (UTC)
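
A sketch of the gate in PHP; the limit of 4 follows the comment above, and 'reviewed' would be set by the reviewing admin:

  // May this user edit without admin review yet?
  function mayEdit( int $editCount, bool $reviewed, bool $blocked ): bool {
      if ( $blocked ) {
          return false;                    // judged a spammer
      }
      return $reviewed || $editCount < 4;  // first few edits, or already promoted
  }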

Vandal Wall

Why don't we create a page where anyone can do whatever they want? We could point vandals to it on their talk pages and hope they choose to vent their negative energies there. --64.230.2.204 20:41, 10 November 2008 (UTC)

I find that a defensive move like that would just make high levels of vandalism worse. Giving vandals an area of their own to vandalize would just encourage them to keep vandalizing the other articles they already target. Plus, if they are vandals, they are not going to follow the rules; why would they care? They came to Wikipedia for the distinct purpose of creating unrest and changing information to false information. Renaissancee 04:32, 18 January 2009 (UTC)

I disagree. I've vandalized articles on a few occasions, always as part of a joke, and I generally revert after the page has been viewed (within 5 minutes). I've also seen a lot of other vandalism that follows this pattern (look at the entry for "Fat" and you'll see edit after edit of "Josh is fat", "This word refers to Kyle, who is also ugly", etc.). If there were an alternative URL such as seriouslytrue.wikipedia.org, which automatically reverted changes after 24 hours, I can speak for myself in saying that all of my vandalism would go there instead. --209.91.142.29 19:04, 30 April 2010 (UTC)

A vandalizing wiki, in other words. Perhaps we should import lots of articles from Wikipedia to that wiki, and then vandalise them?

Take an idea from the Linux community

When a new Linux project is started up it usually uses code from an existing project.

A break point is set up and the code is separately and distinctly maintained.

One stream is called the 'stable' branch, the other the 'unstable'

I suggest that Wikipedia does this.

Only the unstable branch would be available for editing.

The stable Wikipedia would not be open to ordinary editing.

Designated editors would trawl the unstable branch for edited material and revert anything vandalised, presumably with content from the stable branch.

Genuine edits would be carried over into the stable branch.

People who want to browse the Wikipedia would turn first to the stable.

People who want to edit the Wikipedia would use the unstable.

People using Wikipedia in deadly earnest would gain confidence that what they were using had not been vandalised.

A tool would be available telling anyone when the Unstable was not the same as the stable.

That would deal with the vandalism problem very simply, at the cost of doubling the size of the encyclopedia.

My Linux analogy is not quite exact, but sufficient for the illustration.

This is Flagged Revisions. PiRSquared17 (talk) 18:44, 13 February 2014 (UTC)

Vandal IP block

When editing an article, you could have a "Vandalism" check-box. For checked edits, a thumbs-up/down button would appear next to the anti-vandalism edit. If the edit is confirmed as having reverted vandalism (thumbs-up), the IP or username behind the previous edit could be blocked from Wikipedia. If it is thumbed down, the anti-vandalism edit could be reversed. -72.173.47.109 20:22, 13 February 2009 (UTC)

I support this with one additional proposal: if one sysop votes to block, the block lasts at most one week; if two or more sysops vote, the block lasts one year. Yosri (talk) 19:15, 25 May 2013 (UTC)
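
A sketch of that escalation rule in PHP; the durations come straight from the comment, everything else is assumed:

  // Block length as a function of how many sysops endorsed the block.
  function blockSeconds( int $sysopVotes ): int {
      if ( $sysopVotes >= 2 ) {
          return 365 * 24 * 3600;          // one year
      }
      return $sysopVotes === 1 ? 7 * 24 * 3600 : 0;   // one week, or no block
  }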

Discussion

See Talk:Anti-vandalism ideas