
Grants:IdeaLab/Making it an option to 'flag' an editor, after which a moderator would come and see whether the user should be blocked

If a user feels another user should be blocked, then instead of going through the long process of filing reports, which often fail to give the full picture and frequently come from both sides of a dispute, they could call a moderator to resolve the problem. To ensure such 'flaggings' are made only when necessary, the flagging user would have to give a reason for calling a moderator. If the reason has no chance of being valid, the moderator would ignore the flagging. If the moderator finds that the description does not match what actually happened, the flagging user would be banned for a fixed amount of time. Only autoconfirmed users would be allowed this option, to prevent flaggings from fake accounts. (A sketch of this decision order appears below.)
Idea creator: Oldstone James
Created on: 14:03, Friday, June 3, 2016 (UTC)
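
The rules above amount to a short decision order. Here is a minimal sketch in Python, assuming nothing about MediaWiki internals; every name is illustrative, and the two moderator judgments are passed in as plain booleans because they are human calls:

from dataclasses import dataclass
from enum import Enum, auto

class Outcome(Enum):
    REJECTED_NOT_AUTOCONFIRMED = auto()  # fake-account filter
    IGNORED_INVALID_REASON = auto()      # reason has no chance of being valid
    FLAGGER_TEMP_BANNED = auto()         # description misrepresents events
    ESCALATED_TO_MODERATOR = auto()      # moderator decides whether to block

@dataclass
class Flag:
    flagger_is_autoconfirmed: bool
    reason: str

def triage(flag: Flag, reason_plausible: bool, reason_matches_events: bool) -> Outcome:
    # Order follows the proposal: eligibility, then plausibility, then honesty.
    if not flag.flagger_is_autoconfirmed:
        return Outcome.REJECTED_NOT_AUTOCONFIRMED
    if not reason_plausible:
        return Outcome.IGNORED_INVALID_REASON
    if not reason_matches_events:
        return Outcome.FLAGGER_TEMP_BANNED
    return Outcome.ESCALATED_TO_MODERATOR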


Project idea


What is the problem you're trying to solve?


What is your solution?


Project goals


Get involved


Participants


Endorsements

  • This is a good idea because an editor gets the choice to tell admins that something is wrong and the admins would have the final say on what to do. MrLinkinPark333 (talk) 23:44, 3 June 2016 (UTC)
  • This is not an ultimate fix for the whole problem, but it could definitely help, especially considering that a full-scale war against trolls would do much more harm than the trolls themselves ever could. But that only applies to articles and other public messages, of course! Personal messages should never be moderated, or even read, by others! MrKirushko (talk) 00:43, 4 June 2016 (UTC)
  • It's not that hard to code, easy to use and intuitive, and transparent. Eloy (talk) 02:03, 4 June 2016 (UTC)
  • This kind of thing is the tried and trusted way that online forums have been policed for about as long as they have been around. I have admined/moderated a number of them, and this is the way to hammer down on abuse.

Making it work on Wikipedia will be the tricky thing - just having a button on a user page won't help highlight where the problem is. What you really need is a way to report a problematic message someone has posted. The simplest way is to tack a "report" link onto the end of the four-tilde signature code and then get the autosigning bot to do the same (a rough sketch of this appears after the list below). The "report" link would then generate a posting to some kind of abuse board where people can add other evidence, then discuss the case and any action required. I have modded forums which pretty much do this, although the abuse board is kept private from public eyes so people can report such things in private, but I don't think that is possible on Wikipedia (or in keeping with the rest of the dispute resolution procedures).

You'll also need a process for people abusing the button and for guiding people who have pressed the button in error (possibly hoping it'd get them a quick resolution to an open and healthy discussion).

The bulk of this should be easy to implement; you'd just need a critical mass of trusted users watching the "abuse board". The tricky bit seems to be identifying the individual message, but I'd have thought regular expressions should be able to do it, as long as we can ensure the report link is always generated on a post. Emperor (talk) 03:32, 4 June 2016 (UTC)

  • I agree with the others - this would need a lot of work before it could become viable, but the idea has a lot of merit. To expand on Emperor's idea, maybe add a "report" link on the history page, akin to how "thank" was added to show gratitude for a specific edit? It'd appear somewhat like this, if implemented:
05:32, 7 June 2016‎ Tokyogirl79 (talk | contribs)‎ . . (1,330 bytes) (+1,053)‎ . . (undo | thank | report)
This would likely be easier than adding it to the end of a basic message, and it would help us admins know exactly which message is the problem. Putting it on the history page would also make it less likely that someone would hit it by accident, meaning that it'd be potentially more deliberate. It'd also help keep talk pages from getting cluttered, since they can get pretty complicated as it is. (A sketch of what such a link might send to the API appears after this list.) Tokyogirl79 (talk) 05:39, 7 June 2016 (UTC)
  • Support. Reporting needs to be anonymous, and with specific criteria. —Neotarf (talk) 23:07, 9 June 2016 (UTC)
    • Just to clarify, the goal of flagging offensive material should be to get the offensive material removed. Removing the editor should not be the goal, the goal is to build the community culture by signaling the types of edits that are acceptable. —Neotarf (talk) 01:04, 11 June 2016 (UTC)
  • This is a nice way to allow for a next step, where we could experiment with AI that quickly suggests potential harassment for users to look at more closely and put social pressure on by flagging. Mattiasostmar (talk) 19:40, 24 June 2016 (UTC)
  • Accountability. Arianit (talk) 13:12, 29 June 2016 (UTC)
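
As a rough illustration of Emperor's signature idea above, here is a minimal Python sketch of what the autosigning bot's pass might look like. The regex, the Special:ReportPost link target, and the assumption that every signature ends in a standard UTC timestamp are all illustrative; no such special page exists in MediaWiki today.

import re

# The four-tilde signature expands to "Name (talk) HH:MM, D Month YYYY (UTC)";
# this matches only the trailing timestamp (assumed en.wiki date format).
SIG_TIMESTAMP = re.compile(r"(\d{2}:\d{2}, \d{1,2} [A-Z][a-z]+ \d{4} \(UTC\))")

def append_report_links(wikitext):
    # Append a hypothetical [report] link after each signature timestamp.
    # A real bot would also need to skip posts already carrying the link.
    return SIG_TIMESTAMP.sub(r"\1 [[Special:ReportPost|report]]", wikitext)

print(append_report_links("I disagree. Example (talk) 03:32, 4 June 2016 (UTC)"))
# I disagree. Example (talk) 03:32, 4 June 2016 (UTC) [[Special:ReportPost|report]]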
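Tokyogirl79's history-row link could reuse the machinery behind the existing "thank" feature, which already identifies an exact revision. The sketch below is modeled on MediaWiki's real action=thank API module, but action=report itself and its reason parameter are hypothetical and would need a new extension on the server side:

import requests

API = "https://en.wikipedia.org/w/api.php"

def report_revision(session, rev_id, reason, csrf_token):
    # Hypothetical module modeled on action=thank (which also takes a
    # revision ID and a CSRF token); "action=report" does not exist today.
    response = session.post(API, data={
        "action": "report",   # hypothetical
        "rev": rev_id,        # the exact edit being flagged
        "reason": reason,     # mandatory justification, per the proposal
        "token": csrf_token,  # CSRF token, as action=thank requires
        "format": "json",
    })
    return response.json()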

Expand your idea


Would a grant from the Wikimedia Foundation help make your idea happen? You can expand this idea into a grant proposal.

  • Expand into a Rapid Grant
  • Expand into a Project Grant (launching July 1st)