Talk:Community health initiative/Blocking tools and improvements

From Meta, a Wikimedia project coordination wiki

Archives: 1

What this discussion is about

The Wikimedia Foundation's Anti-Harassment Tools team is identifying shortcomings in MediaWiki’s current blocking functionality in order to determine which blocking tools we can build for wiki communities to minimize disruption, keep bad actors off their wikis, and mediate situations where entire site blocks are not appropriate.

This discussion will help us prioritize which new blocking tools or improvements to existing tools our software developers will build in early 2018.

In early February we will want to narrow the list of suggestions to a few of the most promising ideas to pursue. We will ask for more feedback on a short list of ideas our software development team and Legal department think are the most plausible.

Thank you! — Trevor Bolliger, WMF Product Manager 🗨 00:45, 20 January 2018 (UTC)Reply

Problem 1. Username or IP address blocks are easy to evade by sophisticated users

Previous discussions
  • The EFF's Panopticlick has shown that it is possible to identify a single user in many cases through some surveillance techniques. Maybe it is something worth investigating? Comte0 (talk) 21:16, 8 December 2017 (UTC)Reply
  • This makes sense, so I'd suggest that administrators could have check-user permissions to quickly check the user's data and see whether a blocked user is trying to evade the block. --Goo3 (talk) 08:15, 14 December 2017 (UTC)Reply
  • Panopticlick was my suggestion too. I wonder if providing checkusers with some kind of a score comparing 2 browsers instead of actually showing them the underlying information - IPs, fonts, headers etc.- would be acceptable under the privacy policy (something like "User A connecting with UA X has a probability of 90.7% of being the same as User B connecting with UA Y"). This would allow even less-technical people to use the tools.--Strainu (talk) 00:29, 16 December 2017 (UTC)Reply
  • I have always longed for an AI tool that can compare editing patterns and language to detect possible matches between accounts, especially between blocked vandals and new accounts/IPs that pop up. Yger (talk) 08:59, 18 December 2017 (UTC)Reply
  • My two key priorities are:
    • Enabling us to track a user when they change an IP address. I am not sure we should really block by user agent (say, blocking the latest version of Chrome on Android in a popular mobile range would probably not be a good idea) but blocking by device IDs (great if possible) or blocking by cookie look like good ideas.
    • Proactive block of open proxies. We have no established cross-wiki list of open proxies. Each wiki has its own framework, either with administrators blocking manually or a bot checking some list. It would be very useful to have a global setup for this — NickK (talk) 16:35, 18 December 2017 (UTC)Reply
  • Blocking by user agent within some small range would be good, but blocking by user agent alone across all IPs could cause too much collateral damage. Blocking by device ID sounds like a good idea. Cookie blocking for anons would stop many vandals, so I support it. Also a big support for proactively globally blocking open proxies. Stryn (talk) 17:25, 18 December 2017 (UTC)Reply
  • Regarding the second point: If it can be done in a way that does not violate our privacy policy, the proposal to Block by device ID (including CheckUser search) sounds like a great solution. We do waste a lot of time blocking obvious puppets of vandals. Is it technically possible to get a unique ID from each machine? If so, after a preset number of blocks from a single device, the system could automatically prevent IP edits as a first step, and after another preset level require email confirmation for new username registrations from the device. It would serve as a deterrent for vandals but still allow users with constructive intentions at shared machines (like public schools, etc.) to register and contribute using their accounts. --Crystallizedcarbon (talk) 18:55, 19 December 2017 (UTC)Reply
    • @NickK: @Stryn: @Crystallizedcarbon: Thank you all for sharing your thoughts! I've looked into ProcseeBot, PxyBot, and ProxyBot (verbal conversations about these accounts are maddening 😆) and documented what I've found here and on phab:T166817. I think proactively blocking open proxies is a smart move, and it just requires some agreement from stewards and others who manage abuse across wikis. As for blocking by UserAgent and Device ID — in the new year I'll be meeting with the WMF's legal department to check if we'll be able to capture this data in accordance with our privacy policy. Cookie blocking anons has already passed legal; it's just a matter of writing the software. I have some notes on requiring email addresses, which I'll post in the section below. — Trevor Bolliger, WMF Product Manager 🗨 23:29, 19 December 2017 (UTC)Reply
      @TBolliger (WMF): And obviously you only looked at the English Wikipedia. You can also look at MalarzBOT.admin on the Polish Wikipedia or OLMBot and QBA-bot on the Russian Wikipedia, which are also quite good (although with less cool names) — NickK (talk) 00:49, 20 December 2017 (UTC)Reply
  • The measures used by Panopticlick more or less break down if the user invokes private browsing, especially in browsers that have some kind of stealth mode where they avoid fingerprinting. In general this has no good solution; the only solution that really works is to be able to set a protection level for an article or group of articles to «identified users». Another solution that partially works is to fingerprint the user instead of the browser. The problem is that we need a history for the user to be able to detect abuse, which more or less breaks down. One solution that does work, but not at a primary level, is to increase the cost of creating new accounts. There are several methods to do that; one is simply to let logged-in users gain more creds over time so that they, for example, might post on user pages or create pages in the main space. That means a new account simply might not solve the problem when someone wants to avoid a block.
A typical user-fingerprinting technique is to do a timing analysis of typing patterns. This can be done in a secure way by adding some known noise to the patterns, and it can even be obfuscated and/or encrypted in the database. One way to use it could be to check whether an unknown contributor on a page is in fact a previous contributor. The answer would only be a probability, but together with sentiment analysis it could give a very clear indication that something weird is going on. Another interesting implementation is to use this for a rolling autoblock: if someone is blocked for a short time, then that user's fingerprint is added to a rolling accumulated pattern. It is possible to check all active users against such a pattern in real time. This is similar to what is done in a spread-spectrum radio. — Jeblad 00:08, 27 December 2017 (UTC)Reply
@Jeblad: Thank you for your comments. By 'typing patterns' do you mean how quickly (or slowly) they type on their keyboard? Or something else? — Trevor Bolliger, WMF Product Manager 🗨 01:07, 3 January 2018 (UTC)Reply
@TBolliger (WMF): There are several such systems, but the most common one uses typing delays between individual pairs of characters. Because it takes time to build such typing patterns it only works for accounts with a history, so the "cost" of creating a new account should be sufficiently high to prevent people from creating throw-away accounts. — Jeblad 11:31, 3 January 2018 (UTC)Reply
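As a rough illustration of the typing-pattern idea discussed in this thread (this is a hypothetical sketch, not an existing MediaWiki feature; the function names, the 50 ms scale factor, and the keystroke data format are all assumptions), a comparison could reduce an account's history to mean inter-key delays per character pair and score two profiles for similarity:

```python
from collections import defaultdict
from math import exp

def build_profile(keystrokes):
    """keystrokes: list of (char, timestamp_ms) tuples in typing order.
    Returns the mean delay (ms) per consecutive character pair."""
    delays = defaultdict(list)
    for (a, t1), (b, t2) in zip(keystrokes, keystrokes[1:]):
        delays[(a, b)].append(t2 - t1)
    return {pair: sum(v) / len(v) for pair, v in delays.items()}

def similarity(profile_a, profile_b):
    """Crude similarity score in [0, 1], comparing mean delays on
    character pairs that both profiles have observed."""
    shared = set(profile_a) & set(profile_b)
    if not shared:
        return 0.0
    mean_diff = sum(abs(profile_a[p] - profile_b[p]) for p in shared) / len(shared)
    return exp(-mean_diff / 50.0)  # 50 ms decay scale, arbitrary choice
```

As Jeblad notes, the output is only a probability-like score; a production system would also need the noise injection and encryption he describes, which this sketch omits.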

Problem 2. Aggressive blocks can accidentally prevent innocent good-faith bystanders from editing

Previous discussions
  • For the first point, I think we should require all accounts to have their email address attached (and verified/confirmed) in their settings at all times before any other action is taken, regardless of how they registered. A specific email address should only be on one account and not on another, unless they remove it from their original account. Tropicalkitty (talk) 00:02, 14 December 2017 (UTC)Reply
I see now that this was mentioned by another individual earlier... Tropicalkitty (talk) 00:53, 14 December 2017 (UTC)Reply
Agree this has a lot of potential. We want to stop the 300-account sockfarms. Creating lots of email accounts would be a significantly increased burden for those who create one account per job.
Doc James (talk · contribs · email) 06:16, 14 December 2017 (UTC)Reply
  • Verifying accounts is a good idea. However, we should also ignore dots (.) within Gmail addresses, because evabraun@ and eva.braun@ are technically different addresses, yet both go to the same mailbox at Google. --Goo3 (talk) 08:18, 14 December 2017 (UTC)Reply
  • It's really not hard to create lots of email accounts, actually. You only need a "catch-all address" to get an unlimited number of addresses on a single domain. Google Apps offers that by default. And obviously you cannot limit the number of accounts on a domain without hurting legitimate email providers. Email in itself is just as unreliable as IPs for identifying sockpuppets.
The solution I see here is to automatically block the "browser" (see Problem 1 above for identification methods) along with the user, including autoblocking new browsers from where the user might try to connect. The better we can identify the computer, the less collateral damage.--Strainu (talk) 00:41, 16 December 2017 (UTC)Reply
  • The stated issue (in the headline) is not a problem for us at svwp. Yger (talk) 09:01, 18 December 2017 (UTC)Reply
  • There is no perfect solution here. Requiring to confirm email may be a barrier for some people (I know Wikimedians with thousands of edits who do not have any email, i.e. they never used email at all). In addition, in some cases creation of multiple accounts from the same IP (I know a university network where everybody on the campus has the same IP) or even same browser (a public computer in a library) is legitimate — NickK (talk) 16:45, 18 December 2017 (UTC)Reply
    • Our Anti-Harassment Tools team has no preconceived notion of what we’re going to build — we legitimately want to see where the most energy and confidence exists for solving the problems with the existing blocking tools. I just want to chime in with my concerns about using email as a unique identifier.
Requiring email confirmation for all users is a much larger question (and fundamentally goes against our privacy policy and mission.) This talk page isn’t the right space to debate that decision.
However, it has been proposed that we build functionality so that accounts can be created and used within an IP range block if (and only if) they have a unique confirmed email address linked to their account. This presents its own set of problems, none of which are insurmountable (but they do add up). Because creating a throwaway email account is dead simple, we'd likely need to build a whitelist/blacklist for supported email domains. We'd need to build a system that strips periods and plus-sign-appended strings. We'd also need to build a way to check if the email address is unique or already in use by one of the existing millions of accounts.
This seems like a lot of effort to limited effect. Gmail accounts can be created in under 30 seconds, bypassing all these checks. But as some have said, if creating a sock takes 30 seconds longer it might be just enough of a time deterrent. — Trevor Bolliger, WMF Product Manager 🗨 23:40, 19 December 2017 (UTC)Reply
This will also probably mean introducing rules per mail server: Gmail might have one set of rules, Yahoo might have another one, and my own server will redirect any unique email to the server admin, i.e. me.
I do agree the problem exists, but the main reason for this problem is that we have to implement aggressive blocks because we do not have any better solution. Having better solutions would reduce the number of blocks that are too aggressive — NickK (talk) 00:41, 20 December 2017 (UTC)Reply
TBolliger (WMF), I apologise for the input if it would break those policies. I don't know what else to say about this specific problem. Tropicalkitty (talk) 18:49, 20 December 2017 (UTC)Reply
There's definitely no need to apologize. These are difficult problems to solve and all ideas deserve a fair opportunity for consideration. — Trevor Bolliger, WMF Product Manager 🗨 22:44, 20 December 2017 (UTC)Reply
  • Twinkle is mentioned, but in some projects, like eswiki, it is not functional. Huggle is a very popular and effective tool against vandalism; most of the time it keeps track of the number of warnings issued and increases the level of the templates until the maximum is reached, and then it automatically allows posting to WP:AIV on enwiki and to similar noticeboards on other projects. It is a powerful tool, but it does not always keep the right track of the warning level. In the Spanish project it often keeps posting the level 1 warning over and over, which forces patrollers to stop and file a manual report. --Crystallizedcarbon (talk) 18:56, 19 December 2017 (UTC)Reply
    • We'd be interested in making improvements to Twinkle or Huggle, if there is support from the communities that use or want them most! Personally, I think that a tool that automates warnings and blocks would help both with admin productivity and with setting consistent, fair blocks, when blocks are appropriate. — Trevor Bolliger, WMF Product Manager 🗨 23:40, 19 December 2017 (UTC)Reply
      • I would be glad if someone could make Twinkle easy to configure, or even better, make it an opt-in tool for Wikimedia wikis. The new wikitext editor (2017) can't support fi:Järjestelmäviesti:Edittools.js (it includes all the warning templates, block messages and important messages used on articles), which is shown below the edit window in the old editor. Twinkle would help. Stryn (talk) 16:03, 20 December 2017 (UTC)Reply
  • Disable range blocks; they are completely defunct as they are now. Use closeness in IP-address space to other trolls (i.e. build an IP range) to identify which requests to inspect, and use timing analysis to locate those that should be given a temporary ban. An even better solution would be to do a co-occurrence analysis on troublesome IP addresses, to identify how the IP addresses change when an address is blocked. That would even identify how addresses are reassigned inside an ISP. [An IP range together with detection of physical location could work. Location can be found by timing analysis on requests to servers at different physical locations.] — Jeblad 00:24, 27 December 2017 (UTC)Reply
  • Wide-range IP-based blocks are a huge problem. I have met several people who wanted to contribute but couldn't because their IP lies close to the IP range of some school, etc. I'm happy that finally someone is looking into improving this! But please be careful not to shut out other people! Not everyone has an email address! (Yes, really, and the number is growing again because kids nowadays don't use mail but WhatsApp, etc.) We also already have the problem that when doing a big Wikipedia workshop we run into the account creation throttle! (So make ways to go around it in legitimate cases!) What I would propose is to allow setting an IP range to semi-blocked, meaning you have to create an account and verify your email address to edit, but to keep adding an email optional for normal IP ranges. For the future we would need even more flexible and intelligent systems. -- MichaelSchoenitzer (talk) 15:44, 31 December 2017 (UTC)Reply
    • @MichaelSchoenitzer: Good point about not everyone having an email address — the way people communicate and exist online is changing. What do you mean by more flexible and intelligent systems, and why wouldn't we build them now if we have the opportunity? — Trevor Bolliger, WMF Product Manager 🗨 01:14, 3 January 2018 (UTC)Reply
      • By more flexible I mean that we should not only have blocked and not-blocked IP addresses, where users may do nothing or anything respectively, but more fine-grained options. From some IP ranges one may not edit as an IP but may create an account; for another you might need to fill out a captcha; for others you might even need to add a verified email; and the maximum number of accounts created per day from one IP address might be 1 for some ranges and 100 for others. Right now some Wikipedias use machine learning to prevent spam & vandalism, others flagged revisions, others only the IP-blocking system. In the future we could have a combination of all three; they should not work side by side but together as one system. By more intelligent I mean that admins should not have to worry about setting all these options; the system could intelligently pick them itself, blocking IPs automatically after too many reverts, etc., and the human would be "only" the control instance, overruling decisions made by the software. That's my vision. But to get there you'd need to do a lot of software work, a lot of research, and a lot of (not always easy) discussion with the community. -- MichaelSchoenitzer (talk) 13:14, 6 January 2018 (UTC)Reply
        • @MichaelSchoenitzer: Thank you for explaining. I agree that in the long-long-term it would be ideal if the system could evaluate the optimal length and technical tactics with which to prevent disruptive users from returning to the wiki, while limiting potential collateral damage. And I agree that all these tools should work as one system — if this talk page discussion leads to a new type of block, we will want to build it into existing tools and workflows as seamlessly as possible, without getting in the way. — Trevor Bolliger, WMF Product Manager 🗨 17:29, 8 January 2018 (UTC)Reply
  • I've brought it up before, though hell if I remember where the Phab ticket is at, but allowing rangeblocks to only affect certain useragents can really reduce some of the harm of our blocks on the English Wikipedia. There are some cases where it wouldn't work, but quite a few where it would. -- Amanda (aka DQ) 21:18, 9 January 2018 (UTC)Reply
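Several comments above turn on using a verified email address as a (weak) unique identifier, and Trevor's reply mentions stripping periods and plus-sign-appended strings before checking uniqueness. A minimal sketch of that normalization step might look as follows (the function name is hypothetical, and the Gmail-only dot rule is an assumption based on Goo3's comment, since most providers treat dots as significant):

```python
def normalize_email(address):
    """Normalize an email address for duplicate detection.

    Strips a '+suffix' from the local part (widely supported sub-addressing),
    lowercases both parts, and removes dots from the local part for Gmail
    domains, where dots are ignored for delivery."""
    local, _, domain = address.partition('@')
    domain = domain.lower()
    local = local.lower().split('+', 1)[0]
    if domain in ('gmail.com', 'googlemail.com'):
        local = local.replace('.', '')
    return local + '@' + domain
```

As Strainu and NickK point out above, this only raises the cost of socking slightly: catch-all domains and throwaway providers defeat any per-address check, so a whitelist/blacklist of domains would still be needed.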

Problem 3. Full-site blocks are not always the appropriate response to some situations

Previous discussions
  • One of the ways that the Dutch WP-AC has effectively minimized problems with certain users is to set a maximum number of contributions per day in a namespace for a user. It would be great if this could be captured in a block type. Whaledad (talk) 22:10, 8 December 2017 (UTC)Reply
    • The way the Dutch-language Arbitration Commission handles those enforcements is wholly unproductive, as was the case with w:nl: Wikipedia:Arbitragecommissie/Zaken/Blokkade JP001, where a user was allowed only a maximum of 10 (ten) edits a day but was given a life-long block that could only be appealed after at least 6 (six) months. Cases like this aren't described anywhere in any rule or guideline on Dutch Wikipedia but are still enforced. The Arbitration Commission doesn't uphold the preventive nature of blocks and basically cannot really be appealed to for unblocking/unbanning (when was the last time they actually unblocked anyone?); it only exists to sanction editors. Though I agree that specialised blocking would help enforce these, why are users being permabanned for making 11 (eleven) non-disruptive edits? The ban-happy culture should be addressed as well, and having specific block settings may be a first step in this direction. The current focus of the administrative elite and the Wikimedia Foundation staff is not on editor retention or even content improvement but on making sure that people can't edit. No actual troll has ever had any issue with evading their blocks, but good-faith editors like JP001 are the victims of a culture that justifies permanently getting rid of good-faith editors with the excuse that their measures exist only to stop abusers they know they can't ever get rid of. What baffles me is how little the moderators (or admins) of Dutch Wikipedia actually have to abide by policy. Non-vandals are protected by the rules and guidelines from indefinite blocks for such small misdemeanors by the "verhogingsdrempel", but the moderators seem to believe that for certain editors the rules shouldn't apply and a permaban is in place. JP001 did not harass anyone or vandalise anything, but is being punished more harshly than those that actually disrupt the project.
--Donald Trung (Talk 🤳🏻) (My global lock 🔒) (My global unlock 🔓) 12:16, 11 December 2017 (UTC)Reply
  • Site bans should be a last resort, not a first, and good-faith users who made some mistakes or got angry one day shouldn't be given life-long bans over it. Blocking users from accessing others' talk pages and/or creating edit summaries would help more if those users have never made a disruptive mainspace edit in their lives. --Donald Trung (Talk 🤳🏻) (My global lock 🔒) (My global unlock 🔓) 12:34, 11 December 2017 (UTC)Reply
  • Strong support Blocks are used as prevention rather than punishment. If it were possible to block a user from editing in a specific namespace, many users who contribute a lot to articles while persistently making personal attacks on talk pages would not be forced to leave our site. --Antigng (talk) 17:05, 13 December 2017 (UTC)Reply
  • Thanks for kicking this discussion off! As an admin on English Wikipedia, and having observed issues in some other, smaller communities, I think the crux of the issue is that the current ban hammer is "all or nothing". Topic bans (applying to a page or category) are already a policy on many large Wikipedias like English, and being able to enact these in code would be a great way to pave the cowpaths and allow for more flexible blocking while retaining more contributors. Being able to apply more specific types of bans might also help discourage use of sock puppets to evade bans. Steven Walling • talk 23:07, 13 December 2017 (UTC)Reply
    • Thank you, @Steven Walling: for jumping in! Great analogy. On English Wikipedia we've found that topic bans are nebulous by design ("broadly construed") so applying a simple page/category/namespace block wouldn't be the 1:1 same. Once we winnow this laundry list of possible block improvements to a manageable few, we will explore how they would work with/against existing policies on the largest wikis. But in general I agree — the blockhammer is too strong for all situations. — Trevor Bolliger, WMF Product Manager 🗨 22:42, 15 December 2017 (UTC)Reply

In terms of solutions to this problem, I agree with all the points (at the time leading up to this post) except for blocking the contributions (i.e., preventing one user from seeing contributions from all users on a project). Tropicalkitty (talk) 00:13, 14 December 2017 (UTC)Reply

@Tropicalkitty: By 'blocking the contributions' do you mean blocking a user's ability to view Special:Contributions (like this example)? Or something larger? — Trevor Bolliger, WMF Product Manager 🗨 00:24, 14 December 2017 (UTC)Reply
Yeah, that's what I meant by it. Tropicalkitty (talk) 00:27, 14 December 2017 (UTC)Reply
Thanks for clarifying! (I agree with you, it seems like a clumsy solution and against the ethos of a collaborative system.) — Trevor Bolliger, WMF Product Manager 🗨 00:41, 14 December 2017 (UTC)Reply
  • I do not recognize this as a big issue for us at svwp. An unserious user usually also discusses in ways that conflict with our etiquette, and so will be blocked for that reason as well. Yger (talk) 09:05, 18 December 2017 (UTC)Reply
  • Strong support. In Ukrainian Wikipedia we heavily rely on this option, but we implement it using AbuseFilter, which is a rather suboptimal solution. There are many possible options, including a block from editing a namespace (e.g. user cannot edit templates), from editing a group of pages (e.g. user cannot edit articles about living politicians), from editing a specific page (e.g. user cannot edit a page where they were engaged in an edit war), or vice versa (e.g. user can edit only a specific page: they violated rules but had to prepare a page of an offline event at the same time). All the examples above are based on real-life cases. We really need a good tool for these cases — NickK (talk) 16:51, 18 December 2017 (UTC)Reply
  • Support. Estonian Wikipedia has a low threshold of notability, and some users write articles about themselves / their companies. Estonian Wikipedia needs a blocking option which does not allow writing about yourself. Taivo (talk) 18:56, 18 December 2017 (UTC)Reply
  • Comment Topic ban, namespace ban... all that goes in the good direction IMO. I would like to share the ideas on possible options that cross my mind:
    • Blocking range:
      • Area range
      • Full site (by default)
      • Topic (categories? block the user from editing everything inside a specific category?? why not?)
      • Single page ban
        • possibility to add several pages, with an option for each page to include its sub-pages or not
    • Edit range (prevent the user from doing some actions):
      • Use one or several specific tools of the project
But there will always be the same problem of potential block evasion attempts, so this problem is closely linked to problem 1 above, and limited blocks, whatever they are, must by default be accompanied by the usual precautions (prevent account creation, apply the same limited block to the last IP address used by this user and to any subsequent IP addresses they try to edit from). --Christian Ferrer (talk) 20:13, 18 December 2017 (UTC)Reply
@Taivo: @NickK: @Christian Ferrer: Thanks for your thoughts, support, and context. I think providing a more granular blocking tool would offer a wide range of benefits for helping nip edit wars, harassment, POV conflicts, and other forms of user misbehavior in the bud. My biggest concern, though, is over-engineering a system. In my mind, just implementing per-page blocking (assuming one user can be blocked from multiple pages with different expirations) will deliver most of the benefit. If you had to prioritize just one or two to build, what would they be? — Trevor Bolliger, WMF Product Manager 🗨 00:09, 20 December 2017 (UTC)Reply
@TBolliger (WMF): If I had to choose one thing it would be a per-user AbuseFilter. AbuseFilter can implement most of these (block from editing a page, a namespace, all pages but a given one, etc.), but using an AbuseFilter that applies to all users with per-user conditions is too costly — NickK (talk) 00:35, 20 December 2017 (UTC)Reply
@NickK: Duplicating or extending the AbuseFilter would be a major undertaking, so it's unfortunately just outside the scope of what our team can build in our available time. — Trevor Bolliger, WMF Product Manager 🗨 01:15, 20 December 2017 (UTC)Reply
@TBolliger (WMF): Well, I thought it could have been easier, as this would rely on the existing basis, i.e. using the AbuseFilter core, instead of developing a completely new solution. If this is not possible, banning a user from editing an individual page and/or a namespace would be my two priorities — NickK (talk) 12:46, 20 December 2017 (UTC)Reply
My preference would go to individual-page blocking, with the possibility of cascading to subpages. --Christian Ferrer (talk) 05:44, 20 December 2017 (UTC)Reply
I really like this idea. I think it would be useful if it had some of the options suggested by Christian Ferrer. I also like the idea of having the option for cascading by subpage and/or category. ···日本穣? · 投稿 · Talk to Nihonjoe 00:01, 17 January 2018 (UTC)Reply
  • I believe the only way this could work is as a topic ban on a page, and if that page is a category then it should apply to every member of that category. — Jeblad 00:29, 27 December 2017 (UTC)Reply
  • I think the two priorities are the two opposites: individual page, and namespace. Topic bans at the enWP tend to inevitably lead to disputes about just where the boundaries are, and Wikiproject bans would have the problem that anyone can add a Wikiproject banner to a page. The other intermediate or specialized options are less critical. Being able to enforce bans programmatically is much better than doing it by Arbitration enforcement, which leads to continual complaints of unfairness. Edit-filter-based schemes have the problem that using too many of them causes performance degradation; enWP already has most of the existing ones inactivated. I recognize that other wikis will of course have different needs. DGG (talk) 20:49, 27 February 2018 (UTC)Reply

Problem 4. The tools to set, monitor, and manage blocks have opportunities for productivity improvement

Previous discussions
  • I love and hate the warning count. I love it because some users tend to delete or archive warnings so that their talk page looks clean. I hate it because I saw people send warnings just to harass others. No strong opinion on other points — NickK (talk) 16:59, 18 December 2017 (UTC)Reply
  • I wonder if the option of indefinite bans should be removed, or moved to a higher level of admins. When a ban times out, it could be set so that a lower-level type of admin, or even an ordinary user, could reset it. One alternative could be that some bad-faith activity could automatically trigger a resetting of the ban, with increasing restraint over time. This could even be activated by a warning to the user, thereby giving the user better reason to resolve the issue peacefully. This is also dangerous, as it would give real trolls a tool to silence opposition. — Jeblad 00:36, 27 December 2017 (UTC)Reply
    • I am curious if others will share their thoughts on these suggestions, but they seem to be a tough sell (eliminating indefinite blocks and requiring bureaucrats to set blocks). I can see a world where a new usergroup is added with training specifically for blocking, and I also like the concept of 'reset block' as a separate permission. I can also see the software asking "are you sure?" for indefinite blocks. But these changes will require a lot of per-wiki community agreement. — Trevor Bolliger, WMF Product Manager 🗨 01:20, 3 January 2018 (UTC)Reply
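Jeblad's idea of replacing indefinite blocks with bans that reset "with increasing restraint over time" could be sketched as an escalating, capped block duration. This is only an illustration of the scheme, not a proposal from the team; the function name, doubling rule, base of 24 hours, and one-year cap are all arbitrary assumptions.

```python
def next_block_duration(prior_blocks, base_hours=24, cap_hours=24 * 365):
    """Escalating (never indefinite) block length in hours: the duration
    doubles with each prior block, up to a fixed cap."""
    return min(base_hours * 2 ** prior_blocks, cap_hours)
```

The cap is what distinguishes this from an indefinite block: even a repeatedly re-blocked user eventually gets another chance, while repeat offenses still become steadily more costly.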

Ottawahitech's perspective on this initiative

One cannot come up with solutions without first identifying the problem they are trying to fix. Introducing more complexity into an area that is already falling apart because of mass confusion and a lack of consistency will only make matters worse.

The English Wikipedia now has tens of thousands of indefinitely blocked users. Blocks are becoming more and more common, and more and more productive good-faith editors are labeled bad actors. This is happening with alarming regularity, with little discussion and sometimes without any evidence. More and more lynch mobs sitting at ANI are "prosecuting" other editors whom they simply do not like. It is not unheard of for long-term, prolific editors to be banned or blocked after a short discussion initiated by an editor no one has heard of before. Sometimes the editor does not even get notified of the discussion in which they are blocked.

This whole area on the en-wiki is full of inconsistently applied rules and technical complexity. Not even long-term/clueful users know who is banned and who is blocked, what the difference is between the two, and how a "community sanction" fits into this. There are no standards for how to label blocked users. Some indef-blocked users are not added to the category I mentioned above. Some users are blocked/banned "in secret", and there is no mention of their block on their user pages (example), sometimes for a long time. Some indef-blocked users cannot easily appeal their block when their talk page access is blocked by an admin without discussion. Other editors are not permitted to appeal on behalf of so-called bad actors.

No one knows how many have been blocked for the following reasons:

  • Disruption (of what?)
  • long term abuse
  • Continuing to participate by proxy
  • long term failure to abide by basic content policies
  • ???

From previous wiki-experience I already know that my posting here will only get me into further trouble from shoot-the-messenger participants. I also know I will be told this is not the correct location for posting this - and my posting (which took a fair bit of effort) will be removed, sigh… Ottawahitech (talk) 17:32, 15 December 2017 (UTC) Please ping me Reply

@Ottawahitech: If you (or anybody) are concerned about retribution for your participation on this discussion we welcome you to email us your thoughts directly, in which case we'll include your input in our bi-weekly summaries but not attribute your username.
We are looking at improvements to ANI and other Harassment reporting workflows. Our first step is to analyze the results of a recently-run survey about ANI. The WMF's Anti-Harassment Tools team is predominantly focusing on building blocking tools for the first half of 2018 but the second half is reserved for building an improved reporting system. Our preliminary research is already underway.

I agree that blocks are serious and should be more traceable to an actual reason. This is why we included Problem 4 on this discussion. Do you have any thoughts on how Special:Block could be changed to ensure that blocks are more transparent and consistent? — Trevor Bolliger, WMF Product Manager 🗨 00:37, 20 December 2017 (UTC)Reply
@TBolliger (WMF): thank you for pinging me, however your message above mystifies me:
  • results of a recently-run survey about ANI: I followed your link and the only survey I found mentioned on that page is a survey of Administrators, not a survey of the general population of the en-wikipedia.
  • If you (or anybody) are concerned about retribution: My concern was that my input would be removed from this page. My comments have been removed from talk-pages over the years when I tried to express a dissenting view (en-wiki being silenced edit summaries).
  • Do you have any thoughts on how Special:Block: I have no idea what Special:Block does. When I click on it, it informs me that I have committed a permission error (and is probably causing me to be logged somewhere and added to the so-called badactors list?)
I also note that the WMF has apparently not detected the groundswell of distrust of admins by non-admins. Why are you building tools that further empower wiki-admins in subduing non-admins? (Example:Features that surface content vandalism, edit warring, stalking, and harassing language to wiki administrators and staff). Why not build tools that help both harassed non-admins and harassed admins defend themselves? Ottawahitech (talk) 10:24, 27 December 2017 (UTC) Please ping meReply
@Ottawahitech: Special:Block is the blocking tool; clicks on it are not logged except maybe in the servers, not accessible to anyone but sysadmins. Jo-Jo Eumerus (talk, contributions) 10:46, 30 December 2017 (UTC)Reply
@Ottawahitech: Sorry for the wrong link, I updated the page I linked to so it also points here. As Jo-Jo mentioned, Special:Block is the page used to set blocks. If you are not an admin then you will see a 'permission denied' error page. Attempting to view this page does not log your username for any future purposes, it's just like viewing any other nonexistent page. — Trevor Bolliger, WMF Product Manager 🗨 01:41, 3 January 2018 (UTC)Reply
I don't think admins should be able to indefinitely block users, and they should not be able to block users from appealing stupid bans. There are a lot of admins that have no clue and use their ability to block other users as a d**k-extension. The system should be made in such a way that blocking good-faith editors comes with a cost. If the cost is acceptable for the admin, then they should block. If the cost is not acceptable, then they don't. Right now the cost is non-existent, and that makes it too tempting to block other users. This is really about what users gain by good behaviour, and what bad behaviour costs. Right now the system is without any type of cost at all. — Jeblad 00:45, 27 December 2017 (UTC)Reply
@Jeblad: I don't think admins should be able to indefinitely block users: If you mean without due process I would certainly support that. I would also add: stop admins from deleting/blanking userpages of blocked users without due process. Ottawahitech (talk) 10:44, 27 December 2017 (UTC) Please ping meReply
@Ottawahitech: I agree. Easiest solution would be to make this a bureaucrat-only right. — Jeblad 19:49, 27 December 2017 (UTC)Reply
I replied to your proposals about eliminating indefinite blocks and requiring bureaucrats to set blocks in the "Problem 4" section above. There is already a cost associated with blocking, as with any other administrative action: all actions are publicly logged and can be reviewed by other users. Their reputation is the cost. There are many admins across all wikis and some certainly wield their status and abilities with more responsibility than others, but I do not see this as "empower[ing] admins in subduing non-admins." There are certainly some malicious users who should be entirely prevented from participating at any point in the future, and there are certainly some disruptive users whose behavior should be addressed but who should be retained for their future contributions.
I think the Block tool could potentially add a check for unnecessarily long blocks, such as by requiring the block reason first and for admins to explicitly state why they are setting a longer block than the system default. We'll need to make sure this doesn't cause too much of a burden on existing workflows, as these tools should serve all users in the end. — Trevor Bolliger, WMF Product Manager 🗨 01:41, 3 January 2018 (UTC)Reply

Blocked user continues harassment on other versions[edit]

At svwp we have had problems with users explicitly harassing other colleagues, where the affected users had also indicated on their talk pages that they are fragile. Of course we blocked the harassing users. But they then went on to Meta and, on its Requests for comment, continued their harassment, specifically targeting the colleagues' fragile condition. And we were very unhappy that we could not stop the case on Meta. It did eventually die down there, but at moments we were worried not only that the harassed users would quit but also that it could affect their physical wellbeing. Could we please find a procedure to stop harassment from other versions? Yger (talk) 09:20, 18 December 2017 (UTC)Reply

I have seen somewhat similar cases on Ukrainian Wikipedia: a user having an interaction ban circumvented it by harassing these users on other wikis (e.g. a user banned on Ukrainian Wikipedia uses Russian Wikipedia to contact a ukwiki user or vice versa) — NickK (talk) 16:55, 18 December 2017 (UTC)Reply
@Yger: @NickK: Cross-wiki harassment is indeed a problem. The Anti-Harassment Tools team has made commitments to build tools specifically to address this problem in 2018 or 2019, both by empowering Stewards with better tools and also by implementing better safeguards for individual users on wikis where they don't frequently edit.
Users can be locked from their account if they commit serious cross-wiki trouble. We could build global blocks if there is strong support, and I believe it might be helpful if block information from one wiki is displayed on other wikis if the user is currently blocked. What do you think would be the most effect counter-measures that we could build? — Trevor Bolliger, WMF Product Manager 🗨 00:56, 20 December 2017 (UTC)Reply
We have a set of users blocked on svwp for POV and etiquette issues who continue on enwp. This is an irritation, as they enter biased info on enwp which requires detailed knowledge to detect. But still, in those cases I think we must accept that all wikis are autonomous, so I do not see broad blocking as something wanted. I am only talking about serious harassment that could cause harm to the harassed party IRL. I believe what is needed is a procedure for global blocks in cases of serious harassment. Such a thing should be channelled through Meta (and the global stewards). But I actually think these cases need to be handled off-wiki; people's wellbeing is at risk, and prolonged discussion and details of the IRL frailty would be even more harmful for the attacked person. Yger (talk) 09:31, 20 December 2017 (UTC)Reply
I also want to highlight that svwp is a small community. That makes it possible to treat contributors on an individual basis. If someone states they are on sick leave because of burnout or a specific psychiatric ailment, we can make sure we allow for deviations in behaviour that can occur, and also be extra sensitive to bad behaviour towards them. So what we tolerate in harsh behaviour (and harassment) can be less than on the bigger sites, including Meta. Yger (talk) 11:27, 20 December 2017 (UTC)Reply

(Out-dent) @Yger: @NickK: Unfortunately it seems that this happens more often with users from smaller wikis. The global bans process was created to address some aspects of this issue but has never been widely used. There is potential for the Wikimedia community to consider more improvements. Right now there is a community request for comment happening about some possible improvements.

While this current discussion is more related to software development for tools, part of the Wikimedia Foundation's Community health initiative's work is to support the community as it considers policy changes that might make the wikis more welcoming. We can capture these ideas for future discussions in 2018 when the topic will come up again. SPoore (WMF) (talk) , Community Advocate, Community health initiative (talk) 04:15, 23 December 2017 (UTC)Reply

@SPoore (WMF): Just to clarify, I do not mean that global bans are a good solution. The cases I mentioned concern users who overall have rather good contributions and are not blocked in their home wikis, but often have bans. For example, user B in Ukrainian Wikipedia has a ban on interactions with user A, so they use A's talk page in Russian Wikipedia to send them not-quite-friendly messages. I don't think it is bad enough to deserve a global ban, but probably bad enough to need an action — NickK (talk) 04:23, 23 December 2017 (UTC)Reply
I believe this can be handled by a global credit system. You do GoodThings™, you get credits. You do BadThings™, you lose credits. And make the credits visible to other users. Then what someone does on a small project won't be invisible. The problem is that some of the bad things are really processes on the wikis, like deletion requests. Users are actually harassing each other by systematically going after articles and deleting them. Should you then give the user credits for doing some work, like deleting the article, or take credits from them because they are harassing someone by deleting their article? — Jeblad 00:56, 27 December 2017 (UTC)Reply
How do we prevent factionalism, people giving upcredits to friends and removing them from enemies? And we delete lots of articles because they are spam, vandalism etc.; should deleting such stuff result in downcredits? Jo-Jo Eumerus (talk, contributions) 18:44, 4 January 2018 (UTC)Reply
If you only have a limited amount of credits, there will be a cost however you use them. It is also possible to give fewer credits to those that routinely give credits against the mean vote. You want to give them some credits, but not too much, or you can weight the credits. Several alternatives are described in various papers, but the general idea is that you need some cost function. — Jeblad 17:04, 20 January 2018 (UTC)Reply
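The "limited credits with a cost function" idea above can be sketched in a few lines. This is a hypothetical illustration only, assuming invented numbers and rules (starting balance, reputation weighting), not an existing MediaWiki feature or an agreed design.

```python
# Hypothetical sketch of a credit system where endorsing someone costs the
# giver, so credits cannot be handed out without limit. All names and
# constants here are illustrative assumptions.

class CreditLedger:
    def __init__(self, starting_balance=10):
        self.balances = {}      # user -> spendable credits
        self.reputation = {}    # user -> received (weighted) credits
        self.starting_balance = starting_balance

    def _ensure(self, user):
        self.balances.setdefault(user, self.starting_balance)
        self.reputation.setdefault(user, 0.0)

    def give(self, giver, receiver, amount=1):
        """Spend the giver's own balance to endorse the receiver."""
        self._ensure(giver)
        self._ensure(receiver)
        if self.balances[giver] < amount:
            raise ValueError("not enough credits to give")
        self.balances[giver] -= amount
        # Weight the endorsement by the giver's own reputation, so mutual
        # back-scratching between low-reputation accounts counts for less.
        weight = 1 + self.reputation[giver] / 10
        self.reputation[receiver] += amount * weight


ledger = CreditLedger()
ledger.give("alice", "bob")   # costs alice 1 credit, raises bob's reputation
```

The cost function is the key design choice Jeblad mentions: because `give` debits the giver, factionalism (handing credits to friends) at least depletes the faction's own spending power.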

Summary of feedback received to date, December 22, 2017[edit]

Hello and happy holidays!

I’ve read over all the feedback and comments we’ve received to date on Meta Wiki and English Wikipedia, as well as comments privately emailed, and summarized it in-depth on the talk archive page (along with archiving some sections.) We’re looking for non-English discussions and users willing to help translate, and will provide a summary of those discussions in January.

Here is an abridged summary of common themes and requests:

  • Anything our team (the Wikimedia Foundation’s Anti-Harassment Tools team) will build will be reviewed by the WMF’s Legal department to ensure that anything we build is in compliance with our privacy policy. We will also use their guidance to decide if certain tools should be privileged only to CheckUsers or made available to all admins.
  • UserAgent and Device blocking, if OK’d by Legal, would deter some block evasion but won’t be perfect.
  • There is a lot of energy around using email addresses as a unique identifiable piece of information to either allow good-faith contributors to register and edit inside an IP range, or to cause further hurdles for sockpuppets. Again, it wouldn’t be perfect but could be a minor deterrent.
  • There was support for proactively globally blocking open proxies.
  • Some users expressed interest in improvements to Twinkle or Huggle.
  • There is a lot of support for building per-page blocks and per-category blocks. Many wikis attempt to enforce this socially but the software could do the heavy lifting.
  • There has been lengthy discussion and concern that blocks are often made inconsistently for identical policy infractions. The Special:Block interface could suggest block length for common policy infractions (either based on community-decided policy or on machine-learning recommendations about which block lengths are effective for a combination of the users’ edits and the policy they’ve violated.) This would reduce the workload on admins and standardize block lengths.
  • Any blocking tools we build will only be effective if wiki communities have fair, understandable, enforceable policies to use them. Likewise, what works for one wiki might not work for all wikis. As such, our team will attempt to build any new features as opt-in for different wikis, depending on what is prioritized and how it is built.
  • We will aim to keep our solutions simple and to avoid over-complicating these problems.
  • Full summary can be found here.
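The block-length suggestion mentioned in the summary above could, in its simplest community-configured form, look something like the sketch below. The policy names and durations are invented placeholders, not any wiki's actual policy; a real version would be community-decided or driven by machine-learning recommendations.

```python
# Hedged sketch of suggesting a block length from a policy infraction and
# prior offence count. All values here are illustrative assumptions.

SUGGESTED_BLOCKS = {
    # infraction: durations in days for 1st, 2nd, 3rd+ offence
    "edit_warring":    [1, 3, 7],
    "personal_attack": [3, 14, 30],
    "spam":            [7, 30, 90],
}

def suggest_block_days(infraction, prior_offences=0):
    """Return a suggested block length in days, or None if the
    infraction has no configured default."""
    durations = SUGGESTED_BLOCKS.get(infraction)
    if durations is None:
        return None
    index = min(prior_offences, len(durations) - 1)
    return durations[index]

suggest_block_days("edit_warring")        # first offence -> shortest default
suggest_block_days("personal_attack", 5)  # repeat offender -> longest default
```

Such a table would standardize block lengths for identical infractions while still letting the admin override the suggestion.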

The Wikimedia Foundation is on holiday leave from end-of-day today until January 2 so we will not be able to respond immediately but we encourage continual discussion!

Thank you everyone who’s participated so far! — Trevor Bolliger, WMF Product Manager 🗨 20:19, 22 December 2017 (UTC)Reply

Gave some comments on the enwiki page on this (mainly, because I saw your post there first). Jo-Jo Eumerus (talk, contributions) 10:44, 23 December 2017 (UTC)Reply

Limit cookie blocks to 1 year only[edit]

Limit cookie blocks to a year maximum. I know that the Wikimedia Foundation won’t take anything any blocked user says seriously and that this will land on deaf ears, but this culture of keeping editors who genuinely want to improve content away needs to end. Cookie blocks are by far the worst offender here: they don't stop disruptive behaviour or malicious sockpuppetry, they only stop people with no intent to disrupt from ever editing. Let's take a scenario where a user is new to Wikipedia but sees an error on a medical page. Believing that this error could have negative real-world consequences, they remove the false information. Another editor sees this as vandalism and reverts without giving a reason. The user then removes the misinformation again and writes down why it is wrong; the other editor (usually a rollbacker or admin) reverts again and doesn't give a reason. Because this is seen as “vandalism”, the “established editor” (a title given based on a user’s edit count, not on any actual measurement of meaningful contributions) can revert as much as they want. As this new editor doesn't have many other edits to their name, they will probably immediately face an indefinite block with no talk page access or way to appeal; either they don't know that UTRS exists, or they contribute to a project without a UTRS, and they’re essentially banned for life. Maybe in time they learn more about how Wikipedia works. They finally figure out what a talk page is, and a year later they still see that medical misinformation standing there. They try to go to the talk page and explain, with a reliable scientific source, why it's dangerous misinformation, and then they hit their indefinite cookie block. Why?
Because they’re not allowed to edit for the rest of their life. And while real, actual vandals will just do whatever they want, users like this end up getting the heaviest punishments (because blocks are ALWAYS punitive; if they truly were preventive, good content wouldn't get deleted solely based on evasion, nor pre-block content deleted solely based on a later block). This is why editor retention is shrinking and why women are reluctant to join: a hostile culture that exists because of hostile tools. Instead of talking about how to expand the blocking tools, why not limit them so users aren’t site-banned? The aforementioned hypothetical user could have simply been blocked from the “Undo” button or from editing that specific page, but they are banned from the entire site. Who does this help? Only the admin wanting to brag about their “500,000 log actions” or whatever, not the project and certainly not the readers. --Donald Trung (Talk 🤳🏻) (My global lock 🔒) (My global unlock 🔓) 11:07, 28 December 2017 (UTC)Reply

Cookie blocks are plain stupid. They are so easy to circumvent that it isn't even fun to point out how to do it. Remove it, it is simply "blocking by obscurity". — Jeblad 13:44, 28 December 2017 (UTC)Reply
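Jeblad's "blocking by obscurity" point can be made concrete: a cookie block lives in client-side state that the client fully controls. The sketch below is an illustration of the principle only, not MediaWiki's actual implementation; the cookie name and functions are invented for the example.

```python
# Minimal sketch of the cookie-block principle. A dict stands in for a
# browser's cookie jar; the names here are illustrative assumptions.

def apply_block(cookies, block_id):
    # When a blocked user is served a page, the server sets a marker cookie.
    cookies["BlockID"] = block_id

def is_blocked(cookies):
    # Later requests are refused if the marker cookie is present, even from
    # a brand-new account or a different IP address.
    return "BlockID" in cookies

browser = {}
apply_block(browser, "12345")
assert is_blocked(browser)

browser.clear()   # the user clears cookies (or uses a private window)
assert not is_blocked(browser)
```

Because the final `clear()` defeats the check entirely, a cookie block only deters users who do not know (or do not bother) to clear their browser state.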

Harassment need a user masking system, not a blocking one[edit]

Considering that:

  • w:Harassment concerns one or a few persons behaving wrongly toward another person, considered the victim.
  • Persons guilty of harassment may at the same time be useful and effective contributors in other areas of Wikimedia projects.
  • Blocking Wikimedia access, as digital ostracism, is a very strong punishment, creating potential moral frustration and technical problems for other project users (disagreement, or shared IP access blocked).
  • Blocking decisions are a source of community conflicts.

I think that allowing any user to hide the actions of any other user or IP account (except administrators') seems much more appropriate than blocking Wikimedia access.

The advantages of a hiding system are:

  • Victims can deal with the situation themselves, avoiding the risk of community involvement.
  • Persons guilty of harassment will lose their harassing power without losing their ability to contribute to the project.
  • The risk of blocking the wrong IP by mistake disappears.
  • Not responding is part of good practice for combating harassment.

It seems to me this is the concrete anti-harassment solution in use on plenty of web platforms such as Facebook, RBNB, etc. Why not on the Wikimedia platform?

Lionel Scheepmans Contact French native speaker, sorry for my dysorthography 15:08, 29 December 2017 (UTC)Reply

"Not technically or socially feasible" would be the problem I think. Page histories and logs would become a total mess if one could hide a particular person's actions in them. Also, a number of people mistake disagreement or critique of bad editing practices for harassment. This idea works for a social media platform, not for projects like encyclopedias that have general obligations for readers and whose participants thus need to be held responsible for their contributions to a degree rather than being allowed to hide other people's comments on them away. Jo-Jo Eumerus (talk, contributions) 10:02, 30 December 2017 (UTC)Reply
@Lionel Scheepmans: Thank you for your comments! I agree with Jo-Jo that some of this will be incredibly complex to execute on article pages and histories, but I can see room for this on talk pages, userpages, notifications, and other forms of communicating on wiki. Our team has built two Mute features, and it has been proposed we extend this to more aspects of the wiki. (I'm particularly curious about building a system that collapses all talk page comments made by Muted users — some users on a popular Wikipedia have already built a system in their personal JavaScript...) If you have more comments on this, I would encourage you to collect and share them on Talk:Community health initiative/User Mute features. Thank you! — Trevor Bolliger, WMF Product Manager 🗨 02:04, 3 January 2018 (UTC)Reply
Thanks for the replies. I understand. I will take a look later.
Masking rolled-back contributions would increase the readability of the history a lot, without being very hard to implement. — Jeblad 23:41, 17 January 2018 (UTC)Reply
I don't think hiding histories is very useful, as most people read the page content directly. And what about the other problem I mentioned? Jo-Jo Eumerus (talk, contributions) 09:56, 18 January 2018 (UTC)Reply
I only commented on rolled-back contributions. It is pretty straightforward to do. The rest of your post is about interpretation of critique, but intermingled with style of writing, which is not actionable, thus I do not want to comment on that. — Jeblad 16:48, 20 January 2018 (UTC)Reply

Lionel Scheepmans, why don't you change the title to "Embrace Harassment"? I totally disagree with your approach. In el.wikipedia the harassment comes directly from the admins; their actions have indeed been hidden on-wiki, but they are still admins, they have never said they are sorry, and we have a ruined community. Is this your vision?   ManosHacker talk 16:29, 23 February 2018 (UTC)Reply

ManosHacker, I understand your point of view, and I have had some problems with admins on fr.wikipedia too. But admins have the right to block people and to unblock themselves, so the first step against admin harassment is to withdraw administrative rights. But anyway, a hiding system doesn't work with page histories and so on, so it's not a good option. Finally, the best response to harassing behavior is to deny it: never answer, and suppress messages on your user page or talk page. Good luck in el.wikipedia! Lionel Scheepmans Contact French native speaker, sorry for my dysorthography 10:41, 24 February 2018 (UTC)Reply
For the time being we have a continuous leakage of productive users and admins, who are now missing from editing. I am not after the removal of rights; people make mistakes. If the harasser says he/she is sorry, in public, the offensive material is removed, and the harasser states in public that he/she will not repeat it again, then the basis is set for community health and there is no need for extreme punishment. There is no other way.   ManosHacker talk 20:01, 24 February 2018 (UTC)Reply
It's a pity for an experienced user to tell lies that reverse reality. Please, ManosHacker, stick to the substance of your argument and do not wander around stating inaccuracies that other users are not able to check.--Diu (talk) 05:48, 27 February 2018 (UTC)Reply

Perfection not required[edit]

A lot of the suggestions seem to get shot down using the argument that a bad user could get around them by doing SuchAndSuch. I agree that a very tech-savvy bad user can evade a lot of proposed solutions, but most people are not that tech-savvy. Many suggestions would be a barrier to some users, not all, but some. I don't think we should be looking for perfect solutions (because they probably don't exist) but for ones that would be effective against a lot of bad users. For some, we will still have to rely on human intuition as we currently do when we "smell a rat". Kerry Raymond (talk) 08:56, 3 January 2018 (UTC)Reply

@Kerry Raymond: I 100% agree. The old proverb "en:Perfect is the enemy of good" rings true here. — Trevor Bolliger, WMF Product Manager 🗨 22:28, 3 January 2018 (UTC)Reply

Not all solutions need to be 100% technical, let's use people too[edit]

A lot of blocked users have particular topic interests (articles, categories) or particular dislikes (certain other users). Is it possible to do some automated analysis of a blocked user's "fingerprints", proactively scan new account/IP behaviour against the fingerprints of recently blocked users, and red-flag those users who seem to match for closer human scrutiny of their edits? These red flags could then be drawn to the attention of the users most likely to be willing and able to monitor the new user's activities (e.g. those who have previously drawn attention to the earlier bad behaviour, or perhaps the WikiProject in which the blocked user was active). People are still better than machines at detecting patterns; I can spot a couple of regular socks within a couple of edits (the pattern is just that obvious), but I can't do that in general. Could we have a tool so that someone can have their watchlist extended with all the articles the blocked user has previously edited (perhaps with some time limit, or on the basis of "fingerprinting", to keep it manageable), so that user can easily spot the sock's return through their normal watchlist? Kerry Raymond (talk) 08:56, 3 January 2018 (UTC)Reply

This proposal falls a little outside the scope of this talk page's discussion. Building tools for users to identify sockpuppets does fall under the purview of the Anti-Harassment Tools team, so in the future we may explore such an idea. In general, our team wants to build tools that empower humans to make better decisions. — Trevor Bolliger, WMF Product Manager 🗨 22:34, 3 January 2018 (UTC)Reply
But this is exactly what I am proposing?! Tools to make it easier for people to detect returning blocked users. For example, I had to add several dozen articles manually to my watchlist because of a recently blocked user, who I suspect will return as a sockpuppet/meatpuppet (a blocked paid user whose stated mission was to update a group of articles, so I am guessing that they will be back). Doing it manually was a big waste of my time. If we don't make it easier for good-faith users to detect, monitor and report problem users, more people will (as many already do) just move on, hoping someone else will deal with it. Kerry Raymond (talk) 00:43, 4 January 2018 (UTC)Reply
I have been operating that this discussion is about the blocking tools themselves, for use when the community/admin/steward/etc has decided "this user needs to be blocked" or in the case of Problem #3 "a full block is not appropriate but this situation needs to be addressed." But you are correct — your suggestion as an evaluative tool is another way to address Problem #1. Most of this work is done manually by interested users or CheckUsers, and the software could certainly help perform a lot of the repetitive evaluative work. — Trevor Bolliger, WMF Product Manager 🗨 18:31, 4 January 2018 (UTC)Reply
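One simple way to score the "fingerprint" overlap Kerry describes is set similarity between the pages two accounts have edited. The sketch below is a hedged illustration under stated assumptions: the threshold, function names, and data source (lists of page titles) are invented for the example, and a real evaluative tool would combine many more signals.

```python
# Hypothetical sketch: flag a new account for human review when its edited
# pages overlap strongly with those of a recently blocked user. The 0.5
# threshold and all names are illustrative assumptions.

def jaccard(a, b):
    """Overlap between two collections of page titles, from 0.0 to 1.0."""
    a, b = set(a), set(b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_possible_sock(new_edits, blocked_edits, threshold=0.5):
    """True if the overlap is high enough to merit human scrutiny."""
    return jaccard(new_edits, blocked_edits) >= threshold

blocked = ["Topic A", "Topic B", "Topic C", "Topic D"]
newcomer = ["Topic A", "Topic B", "Topic C", "Unrelated"]
flag_possible_sock(newcomer, blocked)   # 3 shared pages out of 5 -> flagged
```

Crucially, such a score would only surface candidates for the human pattern-matching Kerry mentions; it would never block anyone by itself.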

Hiding usernames of infinite-blocked vandal accounts[edit]

There is a problem [1] reported on pl.wiki right now about many usernames created by wikinger (I think that wikinger may have created hundreds (thousands?) of accounts by now, but not all of his accounts are problematic in this case). Many of these accounts are very similar (sometimes with irreverent or abusive insertions) to the accounts of other Wikipedians. These usernames are visible to anyone, and even after they are blocked, they remain an indirect means of harassing users by wikinger. Hiding these usernames would be one solution; the other proposed in the pl.wiki discussion is to rename the problematic accounts.

By the way, I can also mention here the previous problem with this vandal described in phab:T169268. Fortunately, this has been resolved by a permanent change to the maximum number of thanks for non-autoconfirmed users. But I think it should be kept in mind that similar situations may happen in the future on other wikis, and maybe allowing communities to change such a limit locally would be worth considering. Wostr (talk) 12:28, 4 January 2018 (UTC)Reply

The solution for Thanks spam can be applied to other wikis (or globally) if the problem spreads. And yes, when blocking an obvious attack username the software can make it easier to hide/rename the username to avoid future exposure in logs/histories. Good suggestion. — Trevor Bolliger, WMF Product Manager 🗨 18:34, 4 January 2018 (UTC)Reply

Range contributions[edit]

While we are talking about improving block solutions, I would also like to draw attention to range contributions. We already have basic IP range support in Special:Contributions. That's all fine and dandy, but what's kind of missing here is a dedicated page for ranges only. What I mean by this is something like the mock-ups in this ticket on Phabricator. This ticket and its sub-tickets have been quiet now for half a year or more at this point. It would be nice if stuff like this were also considered in the initiative, especially now that IPv6 is getting common, making blocking harder for non-technical administrators. --Wiki13 talk 01:16, 7 January 2018 (UTC)Reply

One could perhaps put such range support in other features such as logs or Special:Nuke. Jo-Jo Eumerus (talk, contributions) 10:15, 7 January 2018 (UTC)Reply
@Jo-Jo Eumerus: I'm not 100% familiar with every aspect of Nuke (does it roll back edits from accounts associated with IPs or just the IP edits? Who is permitted to use it? Does it warn about or prevent Nukes for a large quantity of edits or over a certain period of time?) It's a powerful tool and if it had IP range support, one misplaced '0' could be gnarly. — Trevor Bolliger, WMF Product Manager 🗨 17:36, 8 January 2018 (UTC)Reply
@Wiki13: Range Contributions turned out to be a much bigger strain on databases than we originally planned, but it is certainly something we can look into for our 2018 blocking work. It certainly would be a way of addressing "Problem 1. Username or IP address blocks are easy to evade by sophisticated users". — Trevor Bolliger, WMF Product Manager 🗨 17:36, 8 January 2018 (UTC)Reply
@TBolliger (WMF): thanks for the answer and clarification. I'll be looking forward to the solutions you guys will be putting out in the coming few months, whatever those might be. --Wiki13 talk 17:45, 8 January 2018 (UTC)Reply
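The range lookups discussed in this section rest on CIDR arithmetic that is easy to get wrong by hand, especially for IPv6. As an illustration only (MediaWiki's actual range code differs), Python's standard `ipaddress` module shows the underlying membership check, including how Trevor's "one misplaced '0'" changes the result:

```python
# Sketch of CIDR range membership using only the Python standard library.
# The addresses are documentation ranges (RFC 3849 / RFC 5737).

import ipaddress

def in_range(ip, cidr):
    """True if the given IP address falls inside the CIDR range."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr, strict=False)

in_range("2001:db8::1", "2001:db8::/32")    # IPv6 range -> True
in_range("203.0.113.9", "203.0.113.0/24")   # IPv4 range -> True
in_range("203.0.113.9", "203.0.112.0/24")   # one digit off -> False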

Suggestion for an improvement of the way the length of a block is determined - from a discussion on de.WP[edit]

In the German-language Wikipedia, a user was blocked, unblocked and blocked again several times in a very complicated case. In this process mistakes happened because of problems with the way the length of a block is determined. This led to users feeling this person was treated unfairly. Several admins chimed in, stating that this kind of mistake happened because the process is confusing, especially when one wants the block to end at a certain time and day rather than last for a length indicated in the drop-down menu. Additionally confusing is dealing with different time zones, especially with daylight saving time in summer, they stated. They would very much like the blocking tools to be improved, so that it is easier to determine the endpoint of a block without having to calculate time zones. One helpful option might be to add a feature that allows a block to be reinstated for its original length (after a user has been unblocked under restrictions, e.g.). It looks like several admins of the German community would appreciate it if this could be looked into. They asked me to bring the issue to the attention of the AHT team. For more background, see the discussion on de.WP here. --CSteigenberger (WMF) (talk) 12:38, 10 January 2018 (UTC)Reply

Hi Christel, thank you for bringing this example and suggestions to our discussion! Adding a datetime picker to the block interface has been suggested before, but we've never thought of how timezones can affect this. The software can definitely help with this so mental math isn't required.
By "allow a block to be reinstated" do you suggest adding an ability to 'pause' the block for a brief period of time? If this is to allow the user to participate in on-wiki discussions, some users have suggested an alternative strategy: add an option to allow the user to also participate on a small list of pages, such as ArbCom or Noticeboards. (This could be an expansion of the "Prevent this user from editing their own talk page while blocked" option, or a new option.) This may prevent the user from feeling jerked around and treated unfairly. Do you or the de.WP community have thoughts on this? — Trevor Bolliger, WMF Product Manager 🗨 19:40, 10 January 2018 (UTC)Reply
My thoughts without checking back with the community: this would work very well for most cases on de.WP, as we regularly unblock users to give them the chance to take part in a discussion about a request to unblock them. --CSteigenberger (WMF) (talk) 08:49, 11 January 2018 (UTC)Reply

@CSteigenberger (WMF): I have created some potential mockups for this. They are on phab:T132220 and below.

We want to start building this next week. Thoughts? — Trevor Bolliger, WMF Product Manager 🗨 22:24, 22 February 2018 (UTC)Reply

Sending the question on to the German community now! --CSteigenberger (WMF) (talk) 08:48, 23 February 2018 (UTC)Reply
I like the idea and I think this will be helpful, however, making a block end on a specific day has been the easier part of the calculation. The difficult part has always been calculating the right hour. Could you include a functionality to also pick a time of day and time zone (with the wiki's and specific day's standard time zone being suggested automatically)? Thanks a lot, → «« Man77 »» [de] 09:40, 23 February 2018 (UTC)Reply
I'd like to second [de]. The day never caused any problems, but the hour did, especially with winter time. --Sargoth (talk) 11:36, 23 February 2018 (UTC)Reply
There is a DateTimeInputWidget that we could use that would allow you to select/type a calendar date and type in a time. I've added a screenshot to the right. If we use this, should the timezone calculation be based on the blocking user's timezone preference, or the blocked user's? — Trevor Bolliger, WMF Product Manager 🗨 17:11, 23 February 2018 (UTC)Reply
Good question. Maybe admins of wikis where different time zones are much more of an issue can say more about that; in German WP we practically only have to deal with daylight saving time. I suppose it would already be helpful if the times shown in Special:log/block as start and end date, and also the time entered in Special:block, all referred to the same time zone. So I guess it's the blocking user's preference that matters more.
Partial lifts of blocks are probably the even better way to go, but this proposal here still looks like a nice improvement to me. → «« Man77 »» [de] 13:42, 26 February 2018 (UTC)Reply
While I think a day/hour picker would be an improvement, the idea that you could suspend a block is more appealing. --DaB. (talk) 14:20, 23 February 2018 (UTC)Reply
I agree. Partially lifting a block is more useful and closer to the sense of blocking than reinstating a previous block. --Holmium (talk) 16:41, 23 February 2018 (UTC)Reply
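The timezone pitfall discussed in this thread is exactly what standard datetime libraries handle; as a minimal Python sketch (illustrative only, not MediaWiki code), converting an admin-entered local expiry to UTC lets the library absorb the daylight-saving arithmetic:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def block_expiry_utc(local_end: str, admin_tz: str) -> datetime:
    """Convert an admin-entered local expiry ("YYYY-MM-DD HH:MM") to UTC.

    zoneinfo applies the correct standard/daylight offset for the date,
    so no mental timezone math is needed.
    """
    naive = datetime.strptime(local_end, "%Y-%m-%d %H:%M")
    return naive.replace(tzinfo=ZoneInfo(admin_tz)).astimezone(ZoneInfo("UTC"))

# Winter (CET, UTC+1) and summer (CEST, UTC+2) give different UTC results
# for the same wall-clock time:
print(block_expiry_utc("2018-01-15 12:00", "Europe/Berlin"))  # 11:00 UTC
print(block_expiry_utc("2018-07-15 12:00", "Europe/Berlin"))  # 10:00 UTC
```

The function and its name are assumptions for illustration; the actual Special:Block work is tracked in phab:T132220.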

Block notices on mobile provide insufficient information about the block[edit]

On mobile, the links and other formatted text in the block notice that editors receive when trying to edit render as raw HTML rather than as intended. There is also no explanation of how to appeal the block.

I've already posted this to phabricator – see phab:T165535 – but I was hoping to get this on the radar as well. On the English Wikipedia, it is policy for administrators, upon blocking a user, to provide them certain information about the block – see w:en:WP:EXPLAINBLOCK. For this reason, virtually all block reasons include a wikilink to relevant policies that explain to the blocked user the issues that were identified in their contributions. For example, if a user is blocked for edit warring, a link is provided in the block log to w:en:Wikipedia:Edit warring. Sometimes, we even use templates in the block reason. A few common ones are w:en:Template:CheckUser block and w:en:Template:School block.

When editing from the mobile platforms, these links and templates don't render how they're supposed to. Instead, on the mobile web, a 3-second "toast" pops up that displays the raw HTML of the block reason. Additionally, no information is provided on how to appeal the block if it is erroneous, e.g. if it is collateral damage on a shared IP address/range. This is concerning, since as I understand it, more and more people are making their first edits via the mobile interface, and it's not very welcoming to give them a wall of code if, say, they're editing from school and we're trying to tell them they need to log in first. Mz7 (talk) 03:12, 12 January 2018 (UTC), revised 22:04, 12 January 2018 (UTC)Reply

Oh my, that's pretty bad. And seeing as more and more users are mobile-only, we'll want to provide them with a functional experience, regardless of whether they're blocked deliberately or accidentally. I'll check with the mobile web team on the status of this. Hopefully it's a quick fix. — Trevor Bolliger, WMF Product Manager 🗨 18:04, 12 January 2018 (UTC)Reply
Thanks! Mz7 (talk) 22:04, 12 January 2018 (UTC)Reply

Geolocation by timing[edit]

An attempt at a better description of geolocation by timing, per the comment in #Addendum to summary of feedback received to date, January 18, 2018 [2].

Assume a user tries to access a page, and the user has an IP address that can be reached through several alternate routes. Each of those routes will delay the connection slightly; more precisely, each node on the route delays the packets, leading to an overall delay. This delay is not a linear function of the distance, but it can be learned.

Now assume an ISP has a few gateways to the rest of the world. That means a few well-placed external sites can measure the timing through those gateways, and the distance to those gateways can also be solved for (or rather the constant delay, within some limits).

Inside the ISP's network there are additional delays as the packets are routed to the gateways. Those delays describe the route the packets are sent along. When observed through specific gateways, that is, from specific external sites, they will have distinctly skewed delays that describe their locations inside the network.

If the page a client opens includes parts requested from specific external servers, then the overall delay for each part can be measured, and with a sufficient number of requests more or less the complete internal network of the ISP can be probed.

This is the rough idea, but the details are a bit more involved. See for example Statistical Geolocation of Internet Hosts. Whether hosts or clients, it would be the same. The article uses "landmarks", but those can be replaced by observable gateways on the border of a subnetwork.

The problem is that we do not really have a complete picture of how the internet is connected. So instead of trying to solve for every variable in the network, we can try to learn the non-linear function mapping the delays to geolocations for a specific ISP. This function will be different for each ISP.

If a vandal is found to have a specific location and then gets blocked, and another user emerges at the same location through another ISP, then it is highly likely that it is the same user. Likely, but that does not imply they are the same. This must still be taken as just an indication of whether an observed IP address is the same as some other previously observed IP address. — Jeblad 18:23, 20 January 2018 (UTC)Reply
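As a toy illustration of the learning step described above (all delay values and locations are invented; a real system would fit a proper statistical model, as in the cited paper), the learned delay-to-location function can be stood in for by a nearest-neighbour lookup over delay vectors:

```python
import math

# Toy training data: each sample pairs a delay vector -- round-trip times
# (ms) measured from a few external "landmark" sites -- with a (lat, lon).
TRAINING = [
    ((12.0, 40.0, 31.0), (59.91, 10.75)),  # Oslo
    ((25.0, 18.0, 44.0), (52.52, 13.40)),  # Berlin
    ((40.0, 33.0, 10.0), (41.01, 28.98)),  # Istanbul
]

def estimate_location(delays):
    """Nearest-neighbour stand-in for the learned delay->location function.

    Returns the location whose training delay vector is closest (Euclidean)
    to the observed delays.
    """
    return min(TRAINING, key=lambda s: math.dist(s[0], delays))[1]
```

As the comment thread stresses, even a good model yields only an indication of a match, never proof of identity.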

An additional difficulty is that some countries route all traffic through one or a few gateways, making it very hard to do geolocation with timing information. Usually you need three independent routes to the target client. One way around this could be to have one or more nodes inside those countries.
In Norway there was a single common gateway "NIX" at the University of Oslo, but that has later been extended to six such gateways.[3]
Also note that the existence of code that accesses several nodes for timing purposes can be abused by agencies, by snooping on the network traffic and repurposing the geolocation for their own ends. Still, note that this can easily be done by other means (i.e. packet injection with a spoofed sender). — Jeblad 21:38, 22 January 2018 (UTC)Reply

Addendum to summary of feedback received to date, January 18, 2018[edit]

Hello everyone! Here is an addendum to feedback received on the Meta Wiki talk page from December 22 to today, January 18. The original summary of feedback can be found here.

Problem 1. Username or IP address blocks are easy to evade by sophisticated users
  • More people added support for blocking by email address, page or category blocking (including cascading sub-pages), and User Agent blocks.
  • Use typing patterns (e.g. the rhythm, speed, and other characteristics of how a user inputs text in the editing surface) to identify sock-puppeteers.
  • Use network speed as an identifiable piece of information.
  • Use editing patterns (time of day, edit session length, categories of pages edited) to build improved CheckUser tools.
  • Finish building a tool that allows viewing all the contributions within an IP range. (phab:T145912)
  • Extend Nuke to allow one-click reverting of all edits made within an IP range
Problem 2. Aggressive blocks can accidentally prevent innocent good-faith bystanders from editing
  • Perform analysis on IP-hoppers to build a model of how users (intentionally or unintentionally) change their IP address to build more tactical blocking tools.
  • We should limit cookie blocks to 1 year, instead of indefinitely.
Problem 3. Full-site blocks are not always the appropriate response to some situations
  • Build a user masking system to obfuscate or ‘hide’ users from each other on wiki (e.g. User:Apples should not be able to see actions by User:Bananas (edits, talk page messages, etc.) or interact with them on any article or talk page).
  • Create a credit-based system instead of a blocking system.
Problem 4. The tools to set, monitor, and manage blocks have opportunities for productivity improvement
  • Mobile block notices are abysmal (phab:T165535)
  • Allow admins to ‘pause’ a block so the user can participate in on-wiki dispute resolution discussions without being jerked-around and blocked several times. (This could also be addressed by creating a ‘pages allowed to edit while blocked’ whitelist.)
  • Add a date-picker to Special:Block to make it easier to set an end date and time for a block, taking into account time zones.
  • Make it easier for admins to hide harassing usernames during the blocking process
  • In the long run, we should devise a system that automatically sets (or strongly suggests to a human administrator) an appropriate block length and block method (e.g. IP, email, User Agent, etc.)
  • Some users raised concerns about giving admins more tools, which could affect the balance of power between non-admins and admins. The discussion also proposed limiting the abilities of administrators, allowing only bureaucrats or a new group of trained users to set blocks, and removing indefinite blocks, range blocks, and cookie blocks from the software.
  • We should keep in mind that perfection is not required. Our new tools don’t need to be watertight, just better than they are today.
Next steps

Some people are still discovering this discussion — please continue to share new ideas or comment on existing topics!

In early February we will want to narrow the list of suggestions to a few of the most promising ideas to pursue. Sydney and I will ask for more feedback on a few ideas our software development team and Legal department think are the most plausible. We’ll be sure to keep everything documented, both on Meta Wiki and on Phabricator, for this initiative and future reference.

Thanks! — Trevor Bolliger, WMF Product Manager 🗨 20:10, 18 January 2018 (UTC)Reply

Just a comment on «Use network speed as an identifiable piece of information.» I wonder if this comes from something I wrote but has been misunderstood. I wrote about how timing could be used for geolocation, not to detect network speed. Geolocation from timing is a non-trivial problem, but several papers exist on how to do it. (I should be able to give you some pointers if you want to pursue this idea.) It is known to work for several types of VPNs, but Tor in particular has countermeasures. Using network speed by itself as a single feature is pretty unreliable and should be avoided for high-value classification, a.k.a. blocking someone. It will most likely give a high rate of false positives, be misunderstood, and then lead to invalid blocks. — Jeblad 17:36, 20 January 2018 (UTC)Reply
@Jeblad: That makes more sense, thank you for correcting me! — Trevor Bolliger, WMF Product Manager 🗨 21:04, 22 January 2018 (UTC)Reply

Of this shortlist of 6 features, help us pick 2 to build[edit]

Hello everybody! Over the past weeks our team took a look at all 58 suggestions that came out of this discussion. We discussed each one and ranked them on four criteria — how much demand is behind this idea? what is the potential impact? how technically complex is this to build? is this in line with Wikimedia’s mission? (We're also going to have the Wikimedia Foundation Legal department weigh in with their thoughts, I'll share more when that's ready.)

You can see our ranking and notes at this table, but here are the top ideas that we think will best address the four problems we want to solve:

Problem: Username or IP address blocks are easy to evade by sophisticated users

  • Project 1 - Block by combination of hashed identifiable information (e.g. user agent, screen resolution, etc.) in addition to IP. With this project, we would create a browser fingerprint with some specific identifiable pieces of data about the user's computer and store it as a hash. Admins could then set an IP range block that also includes a match for this fingerprint, but would not be able to see the hashed information.
  • Project 2 - Block by user agent in addition to IP. This project is similar to the first, but would store the user's user agent which would be visible to CheckUsers.
  • Project 3 - Surface hashed identifiable data as a percentage match to CheckUsers. With this project, we would create the same browser fingerprint as outlined in the first bullet item, but would not set blocks by the hash; instead it would be displayed to CheckUsers as a percentage match in a tool to compare 2+ usernames (e.g. "User:A and User:B have a 93% match for using the same computer.")
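To make Projects 1 and 3 concrete, here is a minimal Python sketch of the idea; the attribute set, salt, and scoring are assumptions for illustration, not the planned implementation:

```python
import hashlib

# Hypothetical attribute set; exactly which values may be collected would
# need legal and privacy review.
ATTRS = ("user_agent", "screen_resolution", "timezone_offset", "language")

def fingerprint(info: dict, salt: bytes = b"per-wiki-secret") -> str:
    """Project 1 sketch: one-way hash of identifiable data.

    A block could match on the digest without admins ever seeing the
    underlying values.
    """
    raw = "|".join(str(info.get(k, "")) for k in ATTRS)
    return hashlib.sha256(salt + raw.encode()).hexdigest()

def attribute_match(a: dict, b: dict) -> float:
    """Project 3 sketch: per-attribute match score shown to CheckUsers,
    e.g. "User:A and User:B have a 75% match"."""
    same = sum(a.get(k) == b.get(k) for k in ATTRS)
    return 100.0 * same / len(ATTRS)
```

Note that an exact-hash match (Project 1) breaks entirely if any one attribute changes, whereas a per-attribute score (Project 3) degrades gracefully — one reason the two projects are listed separately.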

Problem: Aggressive blocks can accidentally prevent innocent good-faith bystanders from editing

  • Project 4 - Drop a 'blocked' cookie on anonymous blocks. This already exists for logged-in users, but we will need to look into the European Union's laws regarding using cookies on logged-out visitors.

Problem: Full-site blocks are not always the appropriate response to some situations

  • Project 5 - Block a user from uploading files and/or creating new pages and/or editing all pages in a namespace and/or editing all pages within a category. With this project, we will provide a few specific ways to block users from specific pages/areas within a wiki, controlled via Special:Block and logged similarly to Special:Log/block.

Problem: The tools to set, monitor, and manage blocks have opportunities for productivity improvement

  • Project 6 - Special:Block could suggest block length for common policy infractions. With this project, admins would have to first select a block reason on Special:Block which would then automatically select the block length.

Our team only has time to build two of these features, so we need your help in determining which of these we should proceed to investigate. Of these six ideas, which holds the most promise and why?

Thank you! — Trevor Bolliger, WMF Product Manager 🗨 23:59, 15 February 2018 (UTC)Reply

Purely a personal opinion as an en-Wikipedian: Project 5. We have histories of users who are productive in part A of the project being useful editors and utterly disruptive in part B. All other projects have somewhat unclear benefits, probably as I don't know of any example situation where they would help. Jo-Jo Eumerus (talk, contributions) 10:51, 16 February 2018 (UTC)Reply
#2 is creepy, #5 only conforms to habits of few wikis, #6 is similarly unlikely to be useful on a variety of wikis, #4 would mostly target clueless users. #1 or #3 are useful and should be fine, as long as the hashing follows some obvious principles (cf. how Internet Archive handles IP addresses). --Nemo 18:54, 16 February 2018 (UTC)Reply
From the near-top of the table, I see there are several proposals of proven usefulness, often building on existing tools: Special:RangeContributions; T117801; T17294, T17273 and T27305. I suggest to focus on similar technical improvements before inventing new things. --Nemo 19:02, 16 February 2018 (UTC)Reply
I agree with Nemo that we should first look into making better use of existing tools before making new ones. That said, I would really want us to look into the idea of "hashed identifiable information" (no. 1) and learn how useful this can become. So 1, then 3. Yger (talk) 19:58, 16 February 2018 (UTC)Reply
  • #4 and #5 are good for edits. (btw, #2 is moot since a lot of CheckUsers keep their information recorded off wiki or on the CheckUser wiki).
  • Support Project 1. Currently I’m dealing with an IP user who has been disrupting en.wiki since November. The usual measure - blocking individual IPs - was tried to no avail as the user was able to switch IPs between several ranges almost at will. Eventually, we had data to block /24 ranges as they were used and then to block four /22 ranges, which seem to be holding (unfortunately, the user has extended their activity to other wikis, this is being handled at meta.wiki). A more targeted blocking function would, I think, enable this kind of disruption to be dealt with more quickly and reduce the potential for collateral damage when using rangeblocks. --Malcolmxl5 (talk) 23:55, 16 February 2018 (UTC)Reply
  • Project 1. If I understand it correctly, this would give us the ability to block IP addresses and even ranges in a way that reduces overkill by only blocking where a user of that IP or range had the same IT setup as the perpetrator of a particular vandalism or harassment edit. If so, this would mean that we could both reduce the times where we block the wrong person and also be a bit more ruthless at rangeblocking trolls. WereSpielChequers (talk) 17:38, 20 February 2018 (UTC)Reply
  • #1 and #5 seem to be useful. --Holmium (talk) 18:10, 20 February 2018 (UTC)Reply
  • I want 5 please. Bring it! Ability to block from a category would make discretionary sanction enforcement so much easier. JzG (talk) 22:24, 20 February 2018 (UTC)Reply
  • I would like to see projects 1 and 5 implemented. In my opinion, project 1 could help in case of range blocks with much less risk that others are affected. And project 5 could help to implement some such restrictions more easily. In one case we did this before using filter rules. --AFBorchert (talk) 09:59, 22 February 2018 (UTC)Reply
  • 1 and 5 please. Thanks. —viciarg414 10:49, 22 February 2018 (UTC)Reply
  • in my opinion project 5 followed by project 1 would be useful.--Wdwd (talk) 14:39, 22 February 2018 (UTC)Reply
  • 1 is probably the most generally valuable, if a good hashing algorithm can be found (or we just internally store everything). Malicious users should not be able to evade the block by changing just one parameter (security through obscurity does not work, especially with open-source software such as mediawiki). --PaterMcFly (talk) 15:09, 22 February 2018 (UTC)Reply
1 to 3 are also the most problematic in terms of information privacy. These proposals call for detailed profiles of communication metadata stored on the servers of the WMF. For the scheme to work, these profiles have to be aggregated preemptively. Since there is no way to know which user is going to offend in the future, the policy would affect each and every editor. I'd rather not have Wikipedia track and store my communication details. Preemptive data retention is frowned upon for a reason.---<(kmk)>- (talk) 11:36, 23 February 2018 (UTC)Reply
  • I support 1 and 5 to deal better with vandalism on the one hand and conflicts on the other. However, I think 1 has to be implemented in a more intelligent way, as PaterMcFly stated. --MGChecker (talk) 20:29, 23 February 2018 (UTC)Reply
  • All handling of IPs is considered handling of personal data under the GDPR; its use should be limited as far as possible. 3 doesn't show this information directly (a plus), but the data still has to be available, and thus stored. 5 could be a winner if we let users earn rights, comparable with Flagged Revisions. Both an IP (which we still store for all edits) and a username could earn rights by doing good edits and lose them by bad edits. A user who switches to another IP will start over if not using a username. RonnieV (talk) 00:02, 24 February 2018 (UTC)Reply
  • I don't know if I'm doing this right. Feel free to move my comment wherever it is supposed to go. But I enthusiastically support the technical ability to block from namespaces and from file uploads without totally blocking the account altogether. I can imagine this would offer a less restrictive alternative in a variety of situations, and make us less likely to lose good faith contributors in the long run over specific but often difficult to explain problems. The edges of COPYVIO come to mind as an area that has come up quite a few times. GMGtalk 13:48, 27 February 2018 (UTC)Reply
  • 5, most certainly. I think 3 is safer than the alternatives of 1 or 2. DGG (talk) 20:55, 27 February 2018 (UTC)Reply

Update after meeting with WMF Legal[edit]

This morning a few folks from WMF’s Legal department gave some feedback and guidance on these six projects. In short, there are no insurmountable blockers to accomplishing any of these projects. All six have their own nuanced concerns and will require deeper discussion, but each can be built.

The first three projects will require further conversations about exactly what data is captured, how it is stored, how long it is stored, and the format in which it is displayed to users. We will also want to consider how to provide training and other resources so that users understand how these blocks can affect them. Dropping cookies on anonymous users is possible, but will require careful thought on how to update the cookie policy while avoiding BEANS. There were no privacy concerns for the fifth and sixth projects, so long as each community sets them in accordance with its own policies.

Feedback so far seems to favor a combination of Projects 1/2/3 and 5, with brewing support for 4. Project 6 seems to be DOA. Please keep the comments coming! In the coming weeks our team’s developers will begin technical investigations into the projects that have the most support. — Trevor Bolliger, WMF Product Manager 🗨 19:14, 22 February 2018 (UTC)Reply

Idea 1 will not work. Mozilla is already stripping the user agent down to very basic data, and other browsers will follow soon. The general problem is that we are not the only project that would like to re-recognize users: the advertising companies want that too, and because no one really likes advertising, the browser manufacturers (or add-on programmers) are not afraid to make recognition harder. --DaB. (talk) 14:26, 23 February 2018 (UTC)Reply

IIRC Safari also does a similar thing. bugs.webkit.org — regards, Revi 11:07, 24 February 2018 (UTC)Reply

Technically, I agree with DaB. However, I wouldn't describe the tendency of modern browsers to minimize the amount of meta data presented to the server as a problem. Much the opposite: The ability to identify and effectively trace users just from their meta data opens many disturbing possibilities which can be summarized as 'Big Brother'. Any additional obstacle on this slippery road should be applauded as a small contribution toward a citizen friendly internet.---<(kmk)>- (talk) 23:01, 25 February 2018 (UTC)Reply

Indeed, if browsers start sending us less useless details in the User-Agent data that will make it easier for us at Wikimedia to handle. --Nemo 17:29, 27 February 2018 (UTC)Reply
Just as blocking by IP address/range has become outdated, I assume whatever technical method we add in 2018 will be outdated by 2023. However, I strongly believe this does not mean we shouldn't try to keep up. If we build solution #1, it could evolve as different types of technologies become available and identifiable. — Trevor Bolliger, WMF Product Manager 🗨 18:46, 27 February 2018 (UTC)Reply
I doubt that #1 will hold until 2023 – more likely this year and maybe the next. The big browsers (Firefox, Chrome, IE, Safari) are updating fast these days – at least under Windows (auto-updates). The problem is that the user agent is the only thing you can depend on apart from JavaScript (which would give you things like the screen resolution, installed fonts, dirty tricks like rendered pictures or music). But because Wikipedia can be used without JavaScript (thank God!), you cannot rely on it – the vandals will learn very fast to disable it (like they learned to reset their IP address). So in my eyes it would just be wasted money to develop #1. --DaB. (talk) 14:30, 28 February 2018 (UTC)Reply

Status update: Keep adding comments, AHT team is reading posts[edit]

Hello everyone, Keep on adding your thoughts about the shortlist. Trevor Bolliger and I are monitoring the discussion. The Anti-Harassment Tools team developers will begin investigating the top projects to estimate the time involved to complete them and other technical aspects of them. We'll give an update about this when we have more information. SPoore (WMF) (talk) , Community Advocate, Community health initiative (talk) 00:22, 3 March 2018 (UTC)Reply

What the WMF’s Anti-Harassment Tools team will build in 2018[edit]

Hi everybody, thank you for the input over the past months, we think this has been an incredibly worthwhile process. Given all the participation on this talk page, our discussion with our legal department, and our preliminary technical analysis we have decided to investigate two projects now, build two small changes next, and later will follow with a third project.

We will investigate and decide between:

  • Project 1 - Block by combination of hashed identifiable information (e.g. user agent, screen resolution, etc.) in addition to IP range. We are still defining what “hashed identifiable information” means in our technical investigation, which can be tracked at phab:T188160. We will also need to decide how this type of block is set on Special:Block (likely an optional checkbox) and how this type of block is reflected in block logs.
  • Project 4 - Drop a 'blocked' cookie on anonymous blocks. The investigation can be tracked at phab:T188161.
  • If these projects are deemed technically too risky, we will pursue Project 2 - Block by user agent in addition to IP. User agent data is already available to CheckUsers.

We will also be adding an optional datetime selector to Special:Block (phab:T132220) and will be improving the display of block notices on mobile devices (phab:T165535). In a few months (likely around May 2018) we will pursue some form of Project 5 - Block a user from uploading files and/or creating new pages and/or editing all pages in a namespace and/or editing all pages within a category.

Because we value access to Wikimedia (in high risk regions, for users not comfortable with email, etc.) and because of evolving technical infrastructure of the internet (e.g. IPs, browsers, devices) we will need to continually evolve our blocking tools. This article page and the organized user blocking column on Phabricator will be useful in future discussions and decisions. Please continue to add to them or discuss on this talk page as new ideas or problems emerge.

Again, thank you. We’ll post updates here as our work progresses.

– The WMF’s Anti-Harassment Team (posted by Trevor Bolliger, WMF Product Manager 🗨 23:37, 8 March 2018 (UTC))Reply

Update, March 26: We are nearly code complete with adding a DateTime selector to Special:Block (phab:T132220) and we hope to release this to all wikis by mid-April. We are also making good progress on IP cookie blocking (phab:T188161) which I've submitted to WMF's Legal department to review any privacy concerns and to update our cookie policy.
We've decided to not proceed with building Project 1 — hashed fingerprint blocking — given it would be too error prone with the data we currently gather and any additional data would likely be too unreliable to justify updating our Privacy Policy. We are now proceeding with giving CheckUsers the ability to block by user agent (phab:T100070.) We have a new project page for this and encourage your input!
Thank you. — Trevor Bolliger, WMF Product Manager 🗨 23:36, 26 March 2018 (UTC)Reply

UI adjustments to Special:Block[edit]

Hello everyone still watchlisting this page! In the coming weeks we will release phab:T132220 "Add datetime selector to Special:Block to select expiration." Here's how it will look:

As we're working through the details of phab:T100070 "Allow CheckUsers to set User agent (UA)-based IP Blocks" we are considering changing how the checkboxes on Special:Block behave. Right now, some of them hide/show depending on whether a username or IP address is provided (e.g. "Automatically block the last IP address used by this user, [...]"). We would like to change this behavior to enable/disable, meaning these options will always be visible but not always interactive, like this top-anchored demonstration. We are tracking this idea in phab:T191421. As we add more options to Special:Block, we believe this behavior will allow administrators and CheckUsers to better understand how these options interrelate and set more specific, effective blocks.

Any thoughts?

Thank you! — Trevor Bolliger, WMF Product Manager 🗨 15:13, 4 April 2018 (UTC)Reply

Software development update, May 3[edit]

Hello everyone! I have another status update:

  • We’ve completed work on the datetime selector for Special:Block. It is live on a handful of wikis and we’ll be releasing it sitewide in the coming weeks.
  • We’re nearly done with improving the display of block warnings on mobile. It’s our team’s first mobile project so we’re getting used to tiny code on the tiny screens.
  • We’re in the final stage of anon cookie blocking. It’s tricky to QA so we’re taking our time and putting in our due diligence. Should be out before June.
  • We’re generating some statistics about blocking usage across Wikimedia. (phab:T190328) I’ll post a summary here when the data comes back, it should be interesting!
  • User Agent + IP range blocking for CheckUsers is next on the queue.
  • We’re working on designs for granular blocks (per page, category, namespace, and/or uploading files.) We need your help to design this feature! 🎨 See more details at Talk:Community health initiative/Per user page, namespace, category, and upload blocking#Help us design this tool! and join the discussion!

Thanks, and see you on the talk page. — Trevor Bolliger, WMF Product Manager 🗨 21:42, 3 May 2018 (UTC)Reply


I wrote up en:User:Andrevan/Alternative to checkuser and then someone pointed out that this is on the list here. I think this is a great idea and probably not as technically difficult as it seems. It involves some clever javascript and hooking up the UI. The algorithm training part sounds complicated but probably isn't for someone with a bit of experience doing that sort of thing (which I'm not). Andre (talk) 19:26, 30 May 2018 (UTC)Reply

@Andrevan: Hi Andre, thank you for bringing your proposal to our attention. We considered building a fingerprint system as a way to block users/devices but abandoned the idea due to its likely weak ability to reliably identify someone vs. the complications of capturing more personally identifiable information (which will require changes to the Wikimedia Foundation's privacy policy). A lot of this personally identifiable info (screen resolution, user agent, etc.) is easy to change or spoof, and many parts will change without the users' awareness at all. Your proposal might bring some benefits (and would likely help prevent/catch some additional socks) but our team can't prioritize it at the moment. — Trevor Bolliger, WMF Product Manager 🗨 21:58, 30 May 2018 (UTC)Reply
If done properly, by 1-way hashing the PII on the client side, it would never be transmitted or stored on WMF servers, so I'm not sure if the privacy policy would need to be changed. Andre (talk) 22:18, 30 May 2018 (UTC)Reply
There seems to be much confusion about fingerprinting. A number of quite easy and rather good methods exist for fingerprinting the browser, or "It involves some clever javascript and hooking up the UI" as Andre says. Fingerprinting the browser does not have any real privacy issues; it is not a person per se. The CU tool does fingerprinting already, but it is simply called something else. Fingerprinting the user is more difficult, especially if the user shall not know what's going on, but it is still doable, even if the training part is difficult. It usually only works if the session is quite long, and will work best to identify shared accounts or successful identity theft. The CU tool does not do it as it is configured now. What TBolliger describes is browser fingerprinting. It will not change during a session, but it can be manipulated.
I believe fingerprinting the user should be considered as an alternative to cookies. — Jeblad 20:56, 9 November 2018 (UTC)Reply
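Andre's one-way-hashing idea above can be illustrated with a toy sketch. This is a hypothetical illustration in Python (in a real deployment it would be JavaScript running in the browser, as his proposal describes), and the attribute names are made up for the example, not the actual CheckUser fields:

```python
import hashlib

def fingerprint_digest(attributes):
    """One-way digest of browser attributes: only the hash would leave
    the client, never the raw values themselves."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# The same attributes always yield the same digest...
a = fingerprint_digest({"userAgent": "UA-X", "screen": "1920x1080", "fonts": "Arial,Times"})
# ...while changing any single attribute yields a completely different one,
# which is the weakness Trevor points out: spoofing one value defeats the match.
b = fingerprint_digest({"userAgent": "UA-Y", "screen": "1920x1080", "fonts": "Arial,Times"})
print(a == b)  # False
```

This also shows why a plain hash cannot support Strainu's probabilistic "90.7% likely the same" comparison: a cryptographic digest matches exactly or not at all.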

Blocking by cookie for anons[edit]

The idea to block anons by using a cookie is … weird. Nearly all browsers have features to evade tracking, or a private window mode, or extensions for deleting cookies, and it is even possible to automate requests for new IP addresses from some ISPs.

Fingerprinting of the browser works in some cases, but be aware that it is possible to automate the generation of new browser environments to obfuscate the generated digests.

The only somewhat sure method I know of to block anons is to identify the route and block the address of the closest node. That is, do a traceroute after an edit, await a block, and then keep the address for later reuse if a block emerges. But even that won't work for some of the more fanciful methods, and also note that some ISPs obfuscate the routes.

You can't use anything the user can change as a measure for identification; it must be some kind of asset or information that is outside user control. The router addresses for equipment outside the user's control can be utilized, even if the user in some cases can change networks. Latency can also be utilized, and even be recalculated into a geographical mapping. If measuring the latency depends on putting some specific code on the perpetrator's machine, then it will be under user control and thus the measure could be wrong. — Jeblad 21:22, 9 November 2018 (UTC)Reply
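The closest-node idea above can be sketched in a few lines, assuming the hop list has already been collected (e.g. from a traceroute run after the edit); the addresses are documentation-range examples, and a real implementation would need to handle the obfuscated routes Jeblad mentions:

```python
def node_to_block(hops):
    """Given a list of traceroute hop addresses (None for hops that did not
    respond), return the last responding router before the end host -- the
    address kept for later reuse if a block becomes necessary."""
    responding = [h for h in hops[:-1] if h is not None]  # exclude the end host
    return responding[-1] if responding else None

# Hypothetical route: three routers (one silent), then the editor's host.
route = ["192.0.2.1", "198.51.100.7", None, "203.0.113.9", "203.0.113.250"]
print(node_to_block(route))  # 203.0.113.9
```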

@Jeblad: Thank you for your comments here and in the section above. Dropping a "this user is blocked" cookie for blocked anons was a low-cost, low-effort tactic that we knew would not be a cure-all. It's one small step on a long journey.
We're considering adding 'device blocking' as a project for 2019 in which case we will investigate your suggestions of route/node blocking. The biggest challenge for us is finding a balance between user privacy and site security. — Trevor Bolliger, WMF Product Manager 🗨 22:15, 9 November 2018 (UTC)Reply
@TBolliger (WMF): Just me, old habit. When you fight against an opponent, you don't want to fight on the opponent's terms. You want to fight on your own terms. The opponent has something he can change to avoid identification. You don't want to use that; you want to use what he can't change. You are right that a cookie for blocked anons was a low-cost, low-effort tactic, and as such a completely valid thing to do. The problem as I see it is that it does not work because your measure against the vandal is too easy to avoid with a countermeasure. When the countermeasure is too cheap, either in work, cost, or risk, then the measure will not solve the problem. When the opponent is in a better position than you, reconsider the physics; perhaps there is some measure against which he (your opponent) can't create a countermeasure. — Jeblad 00:08, 10 November 2018 (UTC)Reply

Stealth blocking?[edit]

Tracked in Phabricator:
Task T219697

Would stealth blocking have any effect on edits? (I.e. make it seem like the troll's edits are being saved to the wiki but in reality the edits are not being saved) Awesome Aasim 23:12, 4 March 2019 (UTC)Reply

@Awesome Aasim: I think that shadow bans would be a very helpful tactic to prevent a decent portion of the automated or low-sophistication vandals and spammers. My team is focused predominantly on anti-harassment efforts so we will not take on this project, but I would encourage whichever team takes this on in the future to look into stealth blocking! — Trevor Bolliger, WMF Product Manager 🗨 23:56, 4 March 2019 (UTC)Reply
To me that phrasing implies that some "team" (no individuals allowed?) might work on this in the future. Which might not be the case. --AKlapper (WMF) (talk) 13:18, 30 March 2019 (UTC)Reply
One thing that would be really helpful, not just here but with all similar discussions, would be an indication from the developers as to how much work is involved. Sometimes what seems to be a simple change turns out to be a boatload of work, and sometimes what seems like a major change turns out to be editing one line.
As for shadow banning / stealth blocking, please put me on a list of users who would like to be notified if somebody starts working on this. I have a lot of experience in this area, and from past experience working on another site I know that vandals and spammers change their behavior when you change your countermeasures. For example, sites which use shadowbanning typically see vandals and spammers create and use more identities even if they haven't been shadowbanned yet. Some honest users leave because they feel the site is being sneaky and that they no longer know where they stand. There's a lot to think about before doing something like this. I would suggest getting input on the English Wikipedia Village Pump (proposals) rather than creating something and throwing it over the wall to see who complains. --Guy Macon (talk) 12:31, 26 May 2021 (UTC)Reply
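For readers unfamiliar with the mechanism being discussed, the behavior Awesome Aasim asks about can be sketched roughly as follows; this is a hypothetical illustration of the general shadow-ban technique, not how MediaWiki actually stores edits:

```python
def save_edit(user, text, store, shadow_banned):
    """Sketch of stealth blocking: a shadow-banned user receives the same
    'saved' response as everyone else, but the edit is silently discarded."""
    if user in shadow_banned:
        return {"result": "success"}          # fake acknowledgement, no save
    store.setdefault(user, []).append(text)   # real save
    return {"result": "success"}

store, banned = {}, {"troll"}
save_edit("troll", "vandalism", store, banned)  # appears to succeed
print(store.get("troll"))                       # None: nothing was stored
save_edit("alice", "good edit", store, banned)
print(store["alice"])                           # ['good edit']
```

The identical response in both branches is the whole point, and also the source of the trust concern Guy Macon raises below the fold: honest users cannot tell which branch they are on.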

Adding specific block actions[edit]

Hello all. The mw:Anti-Harassment tools team is considering making some upgrades to Blocks. Over the years there have been several requests to allow for more specific user actions to be blocked individually in Special:Block (tracked in task T242541). These include:

  • Block user from uploading files
  • Block user from creating pages
  • Block user from moving pages
  • Block user from sending thanks
  • Block user from marking edits as minor

These are going to be added to the existing individual blockable actions -

  • Block user from sending email
  • Block account creation
  • Block from editing own talk page.

As we begin to investigate the work involved in implementing these feature requests, we are also considering making some updates to the blocking interface to accommodate the expanded list of actions to block. The current request adds about five actions that can be blocked, bringing the total to eight. This number is already large and has the potential to grow further. So, while we can continue to add more checkboxes to the Block page, we could also explore some alternatives.

Considerations for going with the MenuTagMultiSelect widget:

  • Shows all possible actions after clicking into the widget
  • Will require a change in workflow (will require two clicks instead of one)
  • Takes up less space (task T217363)
  • Can scale up for many actions

Considerations for going with the checkboxes:

  • Shows all possible actions in one go
  • No change in workflow
  • Takes up a lot of space
  • Won't scale as the number of actions increases

In light of these mocks and considerations, we are seeking your input on the following to make a decision.

  • What actions do you use most often?
  • How many more actions do you think we’d need to be blockable in the future?
  • Between the two design choices, which one would you prefer and why?
@Awesome Aasim, Jeblad, Jo-Jo Eumerus, DaB., Ottawahitech, NickK, Yger, Holmium, Sargoth, and CSteigenberger (WMF): Your input above would be very helpful. Pinging you because you have engaged on this page in the past. Thank you so much. -- NKohli (WMF) (talk) 12:20, 7 April 2021 (UTC)Reply
(invited from Special:Diff/21309412) Thank you very much for the detailed and illustrated feedback request.
  • When partially blocking an editor from a specific article or the Article namespace for edit warring, I usually enable "Block account creation" and "Autoblock any IP addresses used" to prevent most cases of accidental or intentional circumvention. This is, by far, the main use case for partial blocks from my point of view.

    "Block user from marking edits as minor" may have been an idea in response to mobile app communication issues (example), but if a user completely refuses to communicate, their behavior would probably result in a sitewide block sooner or later anyway (example). "Thanks" misuse is generally considered harassment (example); I think if someone clearly misuses that function and doesn't stop when asked to, a sitewide block would be implemented even if the new option exists. Such options are rarely used but good to have in the toolkit.

  • With "sending thanks" and "marking edits as minor", I guess we have already reached a point where further options have little more than theoretical value.
  • I assume that "account creation" will remain auto-checked for sitewide blocks, and "Automatically block" will remain auto-checked for all types of blocks. The options "Creating pages", "Moving pages" and "Uploading files" need to be hidden for sitewide blocks. The classical checkboxes for blocking account creation, sending email and talk page access should not be replaced by a MenuTagMultiSelect for sitewide blocks. That said, the MenuTagMultiSelect looks fine for partial blocks, especially if "Marking edits as minor" and "Sending thanks" are added to the list. If the number of blockable actions is a prime number, good luck displaying them as checkboxes in any visually appealing form.
ToBeFree (talk) 13:48, 7 April 2021 (UTC)Reply
Thanks @ToBeFree. The example links and feedback are very helpful. I have a question relating to the last point you made: how do you define what constitutes a "partial" block and what constitutes a "sitewide" block?
Our current definition in the system is that anything except a complete sitewide editing block is considered a partial block. So if someone is, say, only blocked from "Sending email", then that user is considered to be partially blocked in the system. However, since these options have existed since before we introduced partial blocks as a feature, there is scope for confusion around what is considered "partial" and what is not. I'd appreciate your thoughts on the matter. Thanks. -- NKohli (WMF) (talk) 13:12, 13 April 2021 (UTC)Reply
@NKohli (WMF): Regarding the definition question, I was only referring to the radio buttons "Sitewide" and "Partial" in the screenshots. Selecting "Sitewide" should automatically check "block account creation". The box is unchecked in the images, but I assume that it will be checked by default. ToBeFree (talk) 20:13, 13 April 2021 (UTC)Reply
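The system definition NKohli describes above boils down to a one-line rule. The sketch below is illustrative only; the field names are assumptions, not the actual MediaWiki block schema:

```python
def block_kind(block):
    """Per the current definition: anything short of a complete sitewide
    editing block counts as 'partial'."""
    return "sitewide" if block.get("blocks_all_editing") else "partial"

print(block_kind({"blocks_all_editing": True}))                    # sitewide
print(block_kind({"blocks_all_editing": False,
                  "restricted_actions": ["sendemail"]}))           # partial
print(block_kind({"blocks_all_editing": False,
                  "restricted_pages": ["Example article"]}))       # partial
```

This is why an email-only block is "partial" even though it predates the partial-blocks feature, which is the source of the confusion being discussed.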
  • I'm kind of not a fan of the minor edits, page move, and thanks options. If someone is doing any of these things in a problematic way, the usual approach (at least at en.wp) is to open a discussion, which hopefully ends with them realizing why their editing was problematic, the other likely outcome being a topic ban from taking such actions. We can only know if they are respecting such behavioral restrictions if they are still able to do the thing they are being asked not to do. Also, (again, on en.wp) many of these actions (creating or moving pages, and uploading files) are already restricted to autoconfirmed users, so "throwaway" harassment accounts can't do them anyway. Honestly, to my own surprise, I am not a fan of partial blocks of any kind. Either you are willing and able to behave like an adult human or you are not. Forcing appropriate behavior through technically suppressing specific abilities is not a long-term solution to curb disruptive editing or harassment. If you're a jerk intent on harassment, that will come through one way or the other regardless of technical hurdles like these placed in the way. A full block tells the user that they are acting in a way incompatible with a collaborative environment, and they have to learn to work within the expected standards of behavior if they wish to continue to be welcome. A partial block from page moves doesn't do that. Beeblebrox (talk) 17:07, 7 April 2021 (UTC)Reply
    Thanks for your input, @Beeblebrox. As you probably know the idea behind partial blocks is to be able to retain an editor and have scope for teaching them to make constructive edits without a complete ban altogether. It's probably most useful for really new editors who don't understand what "behave like an adult human" means on wiki. It's a very different kind of place on the internet and a lot of behaviors that are acceptable or even encouraged on other internet sites might not really apply here. I 100% agree with you that it is not a long term solution to curb disruptive editing or harassment. If a user cannot change their ways despite a partial block, they should still be blocked sitewide.-- NKohli (WMF) (talk) 13:26, 13 April 2021 (UTC)Reply
I think option 2 looks more flexible. Then if a user has "rollback" or "patrol" rights, those can be partially blocked as well. Aasim 18:37, 7 April 2021 (UTC)Reply
Thanks @Awesome Aasim. -- NKohli (WMF) (talk) 14:03, 13 April 2021 (UTC)Reply
Yes, that sounds fine to me. Thanks Sargoth. -- NKohli (WMF) (talk) 18:35, 8 April 2021 (UTC)Reply
  • With the disclaimer that I'm not an admin on any WMF sites: sending thanks should be blocked as part of a regular full site block. I (strongly) dislike marking-as-minor blocks; this is going to pollute block logs and make for more frivolous, unfortunate ANI discussions on enwiki. Being sent to ANI, especially for minor infractions, is not a positive thing, especially in a volunteer environment, and is more likely to turn editors away. People bring premature complaints there all the time, and they often get actioned, though. The technical ability to make granular blocks will only encourage more reports IMO. The other 3 are good ideas. ProcSock (talk) 14:54, 8 April 2021 (UTC)Reply
    @ProcSock Thanks for raising the concerns. I am guessing that sending thanks would already be blocked as part of sitewide block. I will double check this.
    I was imagining the option for "block from marking edits as minor" to be used only in extreme circumstances where it is a repeated behavior, like the examples @ToBeFree mentioned above - when the user is unreachable (like if they are a mobile or IP editor). Including this option was proposed by @Thryduulf based on this conversation on enwiki.
    In light of this, do you still think including this option is a bad idea? -- NKohli (WMF) (talk) 13:50, 13 April 2021 (UTC)Reply
    @NKohli (WMF): The driving idea behind that RfC and the original RfC which I proposed was to disable minor edits entirely / disable them for demographics which tend to misuse the tool. But the issue backdropping both these RfCs is the communication bugs with the mobile apps. The demographics misusing minor edits are broadly either vandals or those acting in good faith but misunderstanding what the community tends to consider "minor". In both cases this doesn't really help. Vandals are blocked from editing entirely, and the other editors will learn eventually (and it's not good to be blocking people for it, esp as some consider block logs as badges of shame). Minor edits are just an indicator anyway, and since the indicator is subjective, their meaning is really just a communication of intent rather than something one can 'misuse' per se. I just personally don't think blocking single editors from minor edits solves any of the issues with that functionality, but rather causes more issues, but others may disagree. ProcSock (talk) 01:04, 15 April 2021 (UTC)Reply
    Thanks for the additional context, @ProcSock. I understand the issue better now. We are going to hold off on adding this feature (marking edits as minor) based on this input. -- NKohli (WMF) (talk) 12:48, 15 April 2021 (UTC)Reply
    @NKohli (WMF): Thanks! Whilst I'm here, two other items: If I remember correctly there was a discussion on enwiki that blocked editors can still send thanks. But I may be misremembering a caveat/detail about this (eg perhaps this is only an issue w/ partial blocks), or perhaps my memory is failing me and I've imagined the issue entirely (as I can't seem to find the discussion atm).
    Second, and totally unrelated to this, I think there's some interesting ideas for anti-harassment tools at en:Wikipedia:Universal Code of Conduct/2021 consultation. Not sure how many will make it to your team, but this idea on a simple/convenient, private way to report harassment was particularly interesting IMO (and there's discussion on how current reporting mechanisms aren't ideal, some prominent ones in this section). Possibly some of this might be of interest to your team. ProcrastinatingReader (talk) 01:14, 16 April 2021 (UTC)Reply
    @ProcrastinatingReader So I double checked with my team's technical lead and according to them if a user is sitewide blocked, they should be blocked from all editing and actions (except if the blocker has chosen to allow them to edit own talk page). This should include the thanks action. If you have a link to the conversation about blocked editors being able to send thanks, that would be massively helpful in helping debug the problem. I tried to find it but couldn't.
    Thanks for pointing me to the page for the UCoC consultation. Internally we have been talking about building a tool to allow users to report harassment. There isn't any work on it planned yet but the consultation responses were going to help shape the product. We were thinking of it being singular tool that will allow users to report different types of incidents via the same workflow. Incident reports for different types of incidents (stalking, swearing, edit warring etc.) could be routed to the right avenue as identified by the community and WMF.
    I'll monitor the page and synthesize the ideas in preparation for work on that tool. Thanks a bunch! :) NKohli (WMF) (talk) 12:33, 16 April 2021 (UTC)Reply
  • I think that "Block user from uploading files" and "Block user from creating pages" may be useful, but the other kinds seem of little use to me. Jo-Jo Eumerus (talk, contributions) 10:14, 9 April 2021 (UTC)Reply
    • I disagree. I agree with you if we are talking about registered users, but we often apply short blocks to IP or IP ranges because a longer block would have the side effect of interfering with other people editing from that IP address. In situations where an admin would choose a short block or no block at all they might be willing to apply a much longer block from performing one of the listed actions. NKohli (WMF), could you please add a list of which if any of the actions we are talking about already require being registered? Which require being confirmed? Thanks! --Guy Macon (talk) 04:34, 10 April 2021 (UTC)Reply
      @Guy Macon These options are community configurable so it is up to the wiki to decide which action is open to which group of users. The configuration is in the codebase in case you want to see it. @Beeblebrox mentions above that creating and moving pages and uploading files are restricted to autoconfirmed users on enwiki.
      I agree with the point you are making about reducing collateral damage for IPs/IP ranges with these more specific block options. That was one of the biggest considerations for us in working on partial blocks. -- NKohli (WMF) (talk) 14:18, 13 April 2021 (UTC)Reply
  • I do think that these additions are good to have, but I'd expect more benefit if finally we got the option to partially block users by defining a whitelist of pages rather than only by defining a blacklist of pages as is the case right now. → «« Man77 »» [de] 08:23, 11 April 2021 (UTC)Reply
    Thanks for the input @Man77. I created a task for this. Would you mind elaborating in the task about why this request would be helpful? That will be great. Thank you! NKohli (WMF) (talk) 14:27, 13 April 2021 (UTC)Reply
    One obvious place that this would be helpful is to allow a blocked user who is already able to edit their own talk page to edit at ANI, arbcom, SPI, or the blocking admins's talk page. Right now we either unblock them and make them promise to only edit at arbcom, or we make them post their comments on their own talk page and somebody else copies the comment over. This is one of those things where I would ask the developers how hard it would be to do, and tell them to do it only if it is easy. --Guy Macon (talk) 15:05, 13 April 2021 (UTC)Reply
    I obviously could have mentioned this when giving the input, but well, there have already been Phabricator tasks about this request for several years. Compare for phab:T27400, phab:T119795, phab:T240311. Regarding the question about why this would be helpful I second what Guy Macon mentioned. → «« Man77 »» [de] 15:43, 13 April 2021 (UTC)Reply
    Thanks for the additional information and for these links. The tasks seem closely related to each other. I've added them to our team Backlog board so we can prioritize it in a future round of development on partial blocks. NKohli (WMF) (talk) 13:19, 15 April 2021 (UTC)Reply

Update: 26 May 2021[edit]

Hi everyone, I am reviving this discussion to share some more mockups for your input. We have been trying to add these new partial block options in a way that makes the most intuitive sense to new and experienced admins alike. There are some options in the block form that are only applicable in the case of partial blocks, while some other options apply to both partial and sitewide blocks. Adding new block options to the block form increases the risk of making the form longer and rendering it less usable to some editors -- but there is also value in keeping more options visible to the editors. As you can see, this is not an easy decision to make. We are thinking about a few different options and we would like to hear your input on them:

Current hierarchical layout[edit]

Keep the hierarchical form with relevant fields coming under the radio option. The common options will be shown outside. Partial block options will not be shown unless it is selected.

Sitewide option selected
Sitewide option selected
Partial option selected
Partial option selected

Flat form with Radio button[edit]

Show the radio options on top and all the relevant options in a separate section. The relevant fields will be enabled and disabled based on the radio option that has been selected.

Sitewide option selected
Sitewide option selected
Partial option selected
Partial option selected

Flat form with Toggle button[edit]

Use a toggle button (ButtonSelectWidget) to select the block type. The relevant fields will be enabled and disabled based on the toggle option that has been selected.

Sitewide option selected
Sitewide option selected
Partial option selected
Partial option selected

@Awesome Aasim, Jeblad, Jo-Jo Eumerus, DaB., Ottawahitech, NickK, Yger, Holmium, Sargoth, CSteigenberger (WMF), Beeblebrox, ProcrastinatingReader, ToBeFree, and Guy Macon: Asking all you wonderful people for your feedback on this once more. Special:Block is a heavily used page and we want to make sure any changes we do here don't cause unintentional harm in any way. Specific questions to help structure your feedback:

  • Have you made partial blocks in the past?
  • What block settings do you use most often?
  • Once the new partial block options are enabled, which ones are you most likely to use?
  • Which format among the above do you think will best expose all the features of Special:Blocks to new administrators?

Thank you so much for your time. Really appreciate it. -- NKohli (WMF) (talk) 10:07, 26 May 2021 (UTC)Reply

I take it you are asking for a point-by-point reply? Well, here's mine:
  • I have never used partial blocks.
  • Mostly indef blocks as I usually only work on vandalblocks.
  • I dunno, honestly.
  • Um, to me these look like two screenshots of the same button?
Jo-Jo Eumerus (talk, contributions) 10:41, 26 May 2021 (UTC)Reply
I guess that the current or radio button layout is to be preferred, assuming toggling is an additional step which may be experienced as adding unnecessary complexity from the point of view of 'new administrators'. I myself would handle all of them, feeling only slight differences.
Blocking from moving pages is a feature I have missed many times in the past and would much appreciate! It's as important as blocking from creating pages. --Holmium (talk) 10:47, 26 May 2021 (UTC)Reply
Thanks @Holmium. That is helpful to know. -- NKohli (WMF) (talk) 12:50, 26 May 2021 (UTC)Reply
Thanks for the feedback. @Jo-Jo Eumerus. Is there a specific reason you don't use partial blocks? --NKohli (WMF) (talk) 12:49, 26 May 2021 (UTC)Reply
Two reasons, mainly:
  • I am not deeply involved in the kind of admin work where partial blocks are used.
  • I have been busy the last months due to university work.
Jo-Jo Eumerus (talk, contributions) 13:21, 26 May 2021 (UTC)Reply
  • In the time since my previous remarks, I have made a few partial blocks. One was a single-purpose account that only edited one page, and was doing so in a problematic way and not responding to anyone, so I blocked them from that specific page while discussion was ongoing. The other few times have been users who had adopted a "flying under the radar" strategy, a fairly common thing at en.wp, where a user simply goes inactive when their article edits are criticized, never engaging with other users. I used article-space blocks to try and get them to engage. Results were mixed. I'm not sure I'd use any of the new options as I still feel that if a user is abusing those abilities a full block is probably the most effective response, but I suppose I could maybe see blocking page moves if that was the only problem with the account's edits. All of the form mock-ups seem fairly simple to use, but probably the safe bet is to just add them to the current form. Beeblebrox (talk) 17:27, 26 May 2021 (UTC)Reply
  • @NKohli (WMF): On my side:
    • I have made multiple partial blocks, but it was often a disappointing experience. The tool cannot handle multiple blocks: if a user is indefinitely banned from specific articles and then temporarily fully blocked (for a more serious violation), the indefinite partial block is also gone after the end of the full block. Thus in practice I am not using partial blocks for experienced users anymore (I use AbuseFilter instead), but still use them for IPs. Handling a partial and a permanent block at the same time is very important to make partial blocks usable for me.
    • On full blocks, it is essential to have a pair (full block + block account creation) quickly accessible to block IP vandals. On partial blocks, I mostly use namespace block or ban from specific articles.
    • I might want to use bans from uploading files and creating pages (currently doing this with AbuseFilter). Again, it will be useless if a subsequent full block removes the partial block.
    • I prefer the current hierarchical layout, as it makes the pair (full block + block account creation) quickly accessible for blocking IP vandals. Note: please also show where the block duration will be located: I don't see it on the page, but it is very important to know how it will work.
    NickK (talk) 19:55, 26 May 2021 (UTC)Reply
Hi NKohli (WMF), thank you very much!
  • Yes, mostly to prevent further edit warring by users who also make (or can be expected to make) constructive edits outside the area of conflict, or on the article's talk page. A common reaction to sitewide edit-warring blocks is an immediate unblock request with a hardly credible promise not to resume edit warring. Sometimes, this is followed by an unblock (en:WP:ROPE), resumption of edit warring, another sitewide block (this time indefinitely) and endless maintenance and wikilawyering involved in the whole process. We've finally got a solution for that.
  • For partial edit warring blocks: "Account creation disabled" (for every partial block, as account creation and subsequent block evasion are not something that should happen during these blocks), "Autoblock any IP addresses used" (prevent the most simple form of logged-out block evasion). Further disruption on other pages or using other means can be addressed with a sitewide block.
  • Looking at the new options only: "Creating pages", by far. To enforce binding unblock conditions for editors with a previously undisclosed financial conflict of interest. Especially for indefinite-duration conditions, there are always cases of users forgetting, or "forgetting", about their conditions after a year or two. Forgetting about a condition is much more convenient than appealing the condition, after all. Especially if those who have been involved in the restriction discussion have become inactive in the meantime.
  • The flat form layout looks most intuitive to me personally, as it doesn't make new options appear or disappear as a surprise; instead, it shows the entire list of options all the time. This probably makes it simpler to switch between the two settings (Sitewide/Partial) in the block form while making a decision. The toggle buttons look more visually appealing than the radio buttons to me, partially because of the strong blue color, partially because the two options are prominently displayed in the same line, and partially because there is just one concise explanation text for both options in one location, reducing unnecessary eye movement on the page. But all proposed examples look fine to me.
Best regards,
ToBeFree (talk) 21:07, 26 May 2021 (UTC)Reply
I use partial blocks very much, mostly to stop edit wars (or to get the user's attention), and the blocks are for a short time, 30 minutes up to 24 hours.
I also use the standard block, typically 30 minutes for vandals.
I am quite happy with the layout in place today, and being able to stop the creation of new pages could be useful. Yger (talk) 05:59, 27 May 2021 (UTC)Reply