
Grants:IdeaLab/Cumulate Likes and Unlikes to automatise harassment limitations

The rate of "Like" and "Unlike" votes makes it possible to adjust and automate the delays between edits from a harasser.
idea creator: Rical
this project needs: volunteer, developer, community organizer
created on: 21:30, 31 May 2016 (UTC)

Project idea


What is the problem you're trying to solve?


Repetitive harassment by abusive users on selected articles and pages.

What is your solution?


There is already a suspension and banishment system in place for article publishing. It is currently initiated by a person and decided manually through dedicated discussions. A better system would let any user earn "Like" and "Unlike" votes, with short comments, reported by any reader or editor. Update on June 8, 2016: of course, "Like" and "Unlike" votes are only a starting point:

  • Votes are already used in two ways: relative to each user who harasses, and relative to the harassed article.
  • The reasons to limit a user are very complex, so humans continue to decide whether to limit a user, and how.
  • This proposal adds a new way to collect and structure opinions about harassment.
  • This proposal adds new types of limitations, and new ways to apply them.
  • Each vote is completed by a comment: a choice from a short list of classic harassment types, plus a free-form comment.
  • Each vote records the user who votes, the user who perhaps harasses, and the harassed article/section/time/edit number.
  • Two harassment histories are available: one per user and one per article. Any user can read these histories.
  • If humans have chosen to limit a user this way, a new way to limit them is to increase the delay between their edits when their rate of "Unlike" increases or exceeds a chosen level.
  • To optimize how limitations are computed from these opinions over time, the computation can be adjusted according to our use of it and our growing know-how.
  • In the computation, the weight of "Like" and "Unlike" votes can decrease over time for some kinds of harassment (a sketch of one possible computation follows this list).
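
A minimal sketch in Python of how such a vote record and decayed-"Unlike" delay computation could look. All field names, constants, and the decay formula below are illustrative assumptions, not part of the proposal; humans would still choose the thresholds and whether to apply any limitation at all.

from dataclasses import dataclass
from time import time

@dataclass
class Vote:
    voter: str         # the user who votes
    target: str        # the user who perhaps harasses
    page: str          # the harassed article
    section: str       # the harassed section
    edit_id: int       # the edit number being rated
    timestamp: float   # when the vote was cast (Unix seconds)
    is_like: bool      # True for "Like", False for "Unlike"
    harass_type: str   # one entry from a short list of classic harassment types
    comment: str = ""  # free comment

HALF_LIFE_DAYS = 30.0      # hypothetical half-life for the decaying vote weight
UNLIKE_THRESHOLD = 5.0     # hypothetical decayed-"Unlike" level chosen by humans
BASE_DELAY_SECONDS = 60.0  # hypothetical base delay once the threshold is crossed

def decayed_weight(vote: Vote, now: float) -> float:
    """Weight of a vote decreases over time (exponential decay)."""
    age_days = (now - vote.timestamp) / 86400.0
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def edit_delay(votes: list[Vote], target: str, now: float | None = None) -> float:
    """Extra delay (in seconds) between the target user's edits, applied only
    if humans have chosen this kind of limitation for that user. The delay
    grows as the decayed "Unlike" score rises above the chosen threshold."""
    now = time() if now is None else now
    unlike_score = sum(
        decayed_weight(v, now)
        for v in votes
        if v.target == target and not v.is_like
    )
    if unlike_score <= UNLIKE_THRESHOLD:
        return 0.0
    return BASE_DELAY_SECONDS * (unlike_score - UNLIKE_THRESHOLD)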

If the pace of "Unlike" votes exceeds a certain threshold for a single user, that user may be limited by restricting their editing rights for a decreed time period, and potentially by increasing that period if the "Unlike" rate keeps rising on some or all of the pages they edit. The score of a user should also be visible to all, so that people can judge whether the user is trustworthy. Of course, there will always be discussions to choose the type of limitation, but this method of detection and automation could graduate the limitations, relieve patrollers, and encourage self-limitation of disruptive behavior. Such a punishment framework would deter abusive users from acting impulsively, because they would fear the consequences of a bad reputation. For the user rating system to function correctly, there would need to be two types of rating scores:

  1. An overall user rating score showing the sum of Likes and Unlikes the user has accumulated throughout the user's lifetime.
  2. A per-article, single-page "Like" and "Unlike" grading system based on each edit the user commits to that page's history.

Making decisions based only on a user's overall score is not an accurate way of judging a user: a user can have many Likes from outstanding writing and editing elsewhere, yet still be biased about a single article and manipulate that page to their liking. That is why the second, more page-specific rating system is needed, so that others can restrict the targeted abusive behavior; a sketch of both score types follows.
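
A minimal sketch of the two score types, assuming each vote is a simple record with hypothetical "target", "page", and "is_like" fields (matching the vote record sketched earlier):

from collections import defaultdict
from typing import Iterable, Mapping

def overall_score(votes: Iterable[Mapping], target: str) -> tuple[int, int]:
    """Lifetime (likes, unlikes) a user has accumulated across all pages."""
    likes = unlikes = 0
    for v in votes:
        if v["target"] != target:
            continue
        if v["is_like"]:
            likes += 1
        else:
            unlikes += 1
    return likes, unlikes

def per_page_scores(votes: Iterable[Mapping], target: str) -> dict[str, tuple[int, int]]:
    """(likes, unlikes) per page for one user, so that targeted abuse on a
    single article stands out even when the overall score is good."""
    counts: dict[str, list[int]] = defaultdict(lambda: [0, 0])
    for v in votes:
        if v["target"] != target:
            continue
        counts[v["page"]][0 if v["is_like"] else 1] += 1
    return {page: (likes, unlikes) for page, (likes, unlikes) in counts.items()}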

One problem this "Like" and "Unlike" functionality can create is that the tool itself can become a method of harassing other users. To address this, a user could request a review of their rating score: if enough people find that the Unlikes the user received were assigned incorrectly, as an act of abuse, they can vote to remove them. In general, though, the rating system should be accurate enough, unless Wikipedia ends up with more abusive people than honest ones.

Also, to avoid a voting war in which a user harasses the users who voted negatively against them, users should not be able to see who downvoted (or upvoted) them.

Goals

  • To limit and minimize abusive users who harass others and make edits unrelated to the subject of the current article.
  • Viewers and editors should have a way of assessing how trustworthy a user is.

Get Involved


About the idea creator


Participants


Endorsements

  • Excellent idea which works well on many websites.
  • This is a great idea. It is superior to the system we have now, where a person can continually wipe clean their talk page, often under the guise of "archiving", to hide their complaints, conflicts and misdeeds. Anticla rutila (talk) 06:54, 3 June 2016 (UTC)
    • Interesting, but not quite clear to me how this would work in practice: would the like/dislike option be available for every single edit of every user? Camster (talk) 07:33, 3 June 2016 (UTC)
  • There's a lot of merit in this idea. I think that the up and down votes should be weighted on factors like how long the voter has been registered on the Wikimedia project, how much positive and negative feedback they have been given, how many of their edits have been reverted, and if they have been the subject of any sanctions. This will go some way to mitigate gaming of the system. Ljhenshall (talk) 21:17, 3 June 2016 (UTC)
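
A rough sketch of how the vote weighting suggested in the last endorsement could look; every factor name and coefficient here is a hypothetical illustration, not an agreed design.

from dataclasses import dataclass

@dataclass
class VoterProfile:
    days_registered: int    # how long the voter has been registered on the project
    positive_feedback: int  # "Like" votes the voter has received
    negative_feedback: int  # "Unlike" votes the voter has received
    reverted_edits: int     # how many of the voter's edits have been reverted
    under_sanction: bool    # whether the voter is currently subject to a sanction

def vote_weight(p: VoterProfile) -> float:
    """Weight of one vote, roughly in [0, 1.5]: long-standing, well-regarded
    accounts count more; reverted or sanctioned accounts count less."""
    weight = min(p.days_registered / 365.0, 1.0)            # up to 1.0 after a year
    weight += 0.5 * min(p.positive_feedback / 100.0, 1.0)   # bonus for positive feedback
    weight -= 0.5 * min(p.negative_feedback / 100.0, 1.0)   # penalty for negative feedback
    weight -= 0.5 * min(p.reverted_edits / 50.0, 1.0)       # penalty for reverted edits
    if p.under_sanction:
        weight *= 0.25                                       # sanctions sharply reduce weight
    return max(weight, 0.0)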

Expand your idea


Would a grant from the Wikimedia Foundation help make your idea happen? You can expand this idea into a Rapid Grant, or into a Project Grant (launching July 1st).