
Community Wishlist Survey 2022/Anti-harassment

From Meta, a Wikimedia project coordination wiki
Anti-harassment
11 proposals, 291 contributors, 476 support votes
The survey has closed. Thanks for your participation :)



UserBlind mode

  • Problem:
    • Many users have a hard time commenting on the content and not the contributor in certain situations, and often find themselves biased in certain ways towards or against certain users.
    • The occasional need for contributors to pass judgement on the conduct and actions of their peers can lead to interpersonal conflict. The possibility of such consequences often leads to difficulty in fairly judging others' actions, when associated with particular reputations.
  • Proposed solution: An opt-in mode to allow users to navigate and participate without seeing others' usernames. When the mode is enabled, the usernames of other editors are hidden from oneself behind an anonymizing token (e.g., every comment from the first editor named on that page is from "User 1", every comment from the second editor named on that page is from "User 2", etc.) for as long as the mode is on. (The underlying page content is not affected, nor is the appearance altered for anyone other than the person viewing through the mode.)
    • Example:
      • What everyone sees:
        BobTheEditor's actions are in violation of policy. --FredTheEditor (talk) 07:59, 23 January 2022 (UTC)
        No, they are not. --BobTheEditor 07:59, 23 January 2022 (UTC)
      • What you see while you have UserBlind mode enabled:
        [User #1]'s actions are in violation of policy. --[User #2] 07:59, 23 January 2022 (UTC)
        No, they are not. --[User #1] 07:59, 23 January 2022 (UTC)
  • Who would benefit:
  • More comments: Proof of concept, Village Pump post, 2021 wishlist proposal
  • Phabricator tickets:
  • Proposer: Yair rand (talk) 07:36, 23 January 2022 (UTC)[reply]

Discussion

This sounds good, but there are some practical problems: 1) When one user mentions and misspells (even deliberately) the name of another user, the script will not be able to recognize it (FredtheEditor, Fredetc., BoobTheEditor). 2) In languages with declension, usernames appear in different forms ((Czech) Viděl jsem BobaTheEditora, (English) I've seen BobTheEditor...). So the practical impact will be limited. JAn Dudík (talk) 08:06, 24 January 2022 (UTC)[reply]

It doesn't have to be perfect to be useful. In fact, merely replacing names in the signatures/links could be useful.
In the case of an RFC, there are comparatively few mentions of others' usernames, and editors in most communities put a high importance on evaluating the arguments impartially, without regard to the person's reputation. (In other instances, you definitely want to know who's posting, in which case you turn it off.) WhatamIdoing (talk) 16:43, 24 January 2022 (UTC)[reply]
  • My reading of the examples is that this is something you'd want to function on free-form text everywhere (hopefully only in non-content namespaces). But usernames can be anything, for example look at how this statement would be modified if working on free text:
    ORIGINAL: Many users have a hard time commenting on the content and not the contributor in certain situations, and often find themselves biased in certain ways towards or against certain users.
    MODIFIED: Many [TOKEN1] have [TOKEN2] [TOKEN3] time commenting on the [TOKEN4] and [TOKEN5] the contributor in [TOKEN6] situations, and often find themselves biased in [TOKEN6] [TOKEN7] towards or against [TOKEN6] [TOKEN1].
  • xaosflux Talk 19:43, 24 January 2022 (UTC)[reply]
    • @Xaosflux: The idea is to try to hide only things that are actual references to users, not accidental name overlaps. For implementation: The proof-of-concept I threw together works by recording the names of all users for whom the page has any links to their userpages, talk pages, contribs, logs, etc., and also remembering which strings refer to users for the rest of the session. In practice, that method seems pretty reliable, though it could probably be improved upon. Things could become difficult if users with usernames identical to very common words were to comment on the discussion page being read, but that kind of thing really doesn't happen often. --Yair rand (talk) 21:06, 24 January 2022 (UTC)[reply]
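The proof-of-concept approach described above could be sketched roughly as follows — a minimal Python illustration of the token-replacement idea, not the actual gadget code (the function name and the way the username list is supplied are invented for this sketch; the real gadget gathers names from user-related links on the page):

```python
import re

def anonymize(text, usernames):
    """Replace known usernames in discussion text with stable tokens.

    In the proof of concept, the username list is gathered by scanning
    the page for links to user pages, talk pages, contribs, logs, etc.;
    here it is simply passed in, in order of first appearance.
    """
    token_for = {}
    for name in usernames:
        token_for.setdefault(name, f"[User #{len(token_for) + 1}]")
    # Replace longer names first so that a name which is a prefix of
    # another (e.g. "Bob" vs "BobTheEditor") is not matched partially.
    for name in sorted(token_for, key=len, reverse=True):
        text = re.sub(re.escape(name), token_for[name], text)
    return text

masked = anonymize(
    "BobTheEditor's actions are in violation of policy. --FredTheEditor",
    ["BobTheEditor", "FredTheEditor"],
)
# masked == "[User #1]'s actions are in violation of policy. --[User #2]"
```

As the discussion notes, this only catches exact-string references; misspelled or inflected usernames would slip through.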

It seems that some people (Роман Рябенко, Phyrexian and Mannivu) have misunderstood this proposal. It's not about "anonymous editing"; it's about an optional mode that lets people who have enabled it hide others' usernames so that they can focus on those people's arguments rather than their identities (en:Wikipedia:No personal attacks, meatball:ForgiveAndForget). Kleinpecan (talk) 15:23, 3 February 2022 (UTC)[reply]

@Kleinpecan Ok, so I understood this the other way around. However, I think this isn't really useful. I mean: if users disable the gadget, they will still have some bias towards the users they're talking to; so this doesn't resolve the problem (having biases and not being able to deal with them), and I don't think it's useful. --Mannivu · 17:04, 3 February 2022 (UTC)[reply]
@Kleinpecan I understood it just as you explained. I still believe that it is "anonymous editing". People edit less responsibly if they evaluate only the arguments and do not take into account who the author is. There are even user pages which declare what some people dislike or how they should be respectfully addressed. The user of the gadget may even attack an argument more harshly than if they knew who the author was. There is also the risk of losing context in a discussion which may span multiple talk pages. Moreover, the users of such a gadget could post something harsh intentionally and, when confronted with it, just say that they had the gadget enabled and it wasn't a personal attack. When people put their signature next to a post, they already want to take credit and responsibility for the statements they make. If they do not want that, they can resort to anonymous editing, which is already available. Even more, when nicknames are used, they already provide anonymity which, without sock-puppeting, allows consistency of discussions across talk pages. So, I see more harm than value in the proposed self-blinding gadget, because it breaks the natural flow of discussion and has the potential to reduce accountability.--Роман Рябенко (talk) 08:10, 4 February 2022 (UTC)[reply]

L736E: Because the proposed user-blind mode is optional, there would be no need to engage in stylometry if you want to find out who wrote what. And the proposal is not about "bias in contents"—see my comment above. Kleinpecan (talk) 15:30, 3 February 2022 (UTC)[reply]

@Kleinpecan: I understood the proposal, but I think it's not a good idea. We should not spend energy because some users cannot control their own biases, based on what they think they know about other users. It is their own responsibility to deal with that, not ours. --Phyrexian ɸ 15:46, 3 February 2022 (UTC)[reply]

Voting

Deal with Google Chrome User-Agent deprecation

  • Problem: Google Chrome, the most widely used browser on the internet, will soon start limiting the information it includes on HTTP requests about the client (also known as the user-agent string). User-agent, along with IP address, is an important piece of data used by CheckUsers to fight sockpuppetry.
  • Proposed solution: See phab:T242825
  • Who would benefit: All projects in which CUs are run and a substantial group of users edit from internet providers who own wide IP ranges and frequently hop the user's IP within this broad range (which I understand is more common in developing countries than in some developed countries like the US in which static IPs are more common).
  • More comments: This is already on people's radar, but submitting it here, on behalf of the CU community, is an attempt to bring it up the list of priorities.
  • Phabricator tickets: phab:T242825
  • Proposer: Huji (talk) 21:34, 17 January 2022 (UTC)[reply]

Discussion

  • It is very useful for CUs to have user agents to rule out or further connect two users who share the same IP / similar ranges. Especially on busy ranges or for proxies, two accounts being on the same or similar IPs doesn't necessarily technically link two users to each other. Having alternatives such as UA Hints provided in the tool would be better than having non-descriptive UAs (which is what Google Chrome plans to have). Dreamy Jazz talk to me | enwiki 22:07, 17 January 2022 (UTC)[reply]
  • Could you explain, for someone with no experience with CheckUser, to what extent this could be dealt with and what could be done about it?
    User:1234qwer1234qwer4 (talk)
    19:25, 7 February 2022 (UTC)[reply]
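To illustrate the concern, here is a rough Python sketch of the effect of Chrome's announced UA reduction: the Windows version is frozen and the browser version keeps only its major component, so clients that were previously distinguishable collide. The `reduce_ua` transformation is a simplified approximation of the announced plan, not Chrome's exact algorithm, and the UA strings are invented examples, not real CheckUser data:

```python
import re

# Two hypothetical pre-reduction Chrome UA strings from different
# Windows builds (invented values):
ua_a = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36")
ua_b = ("Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36")

def reduce_ua(ua):
    """Approximate the announced reduction: freeze the Windows
    version and keep only the major browser version."""
    ua = re.sub(r"Windows NT [\d.]+", "Windows NT 10.0", ua)
    ua = re.sub(r"Chrome/(\d+)[\d.]*", r"Chrome/\1.0.0.0", ua)
    return ua

# After reduction, the two formerly distinguishable clients collide:
assert reduce_ua(ua_a) == reduce_ua(ua_b)
```

This is why CheckUsers would need an alternative signal (such as UA Client Hints) to keep distinguishing accounts that share busy IP ranges.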

Voting

Access log of oversighted contents

  • Problem: Oversighted revisions often contain non-public personal information, which can be accessed arbitrarily by oversighters. There is a risk of oversighters being bribed to search for oversighted information in order to dox someone.
  • Proposed solution: Each access to oversighted content should generate a private log entry, so that abnormal information collection can be detected. Recently oversighted content would be exempt, for review convenience.
  • Who would benefit: People who have personal information oversighted.
  • More comments:
  • Phabricator tickets:
  • Proposer: Lt2818 (talk) 15:23, 14 January 2022 (UTC)[reply]
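A minimal sketch of the proposed logging behaviour, assuming a hypothetical grace period for recently suppressed revisions (all names, the window length, and the data layout are invented for illustration):

```python
from datetime import datetime, timedelta

REVIEW_WINDOW = timedelta(days=30)  # hypothetical grace period

access_log = []  # stands in for the proposed private log

def view_suppressed(viewer, revision, now):
    """Return suppressed content; log the access unless the revision
    was suppressed recently (review convenience exemption)."""
    if now - revision["suppressed_at"] > REVIEW_WINDOW:
        access_log.append({"viewer": viewer,
                           "rev_id": revision["id"],
                           "time": now})
    return revision["content"]

now = datetime(2022, 2, 1)
old_rev = {"id": 1, "content": "...", "suppressed_at": datetime(2021, 6, 1)}
new_rev = {"id": 2, "content": "...", "suppressed_at": datetime(2022, 1, 25)}
view_suppressed("SomeOversighter", old_rev, now)  # logged
view_suppressed("SomeOversighter", new_rev, now)  # within window, not logged
```

Abnormal patterns (one account paging through many old suppressed revisions) would then show up in the log for stewards to review.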

Discussion

The CUs on enwiki have explained quite well why the CU log is private: they check some accounts and decide that there was a violation. They then check all the accounts' IP addresses to look for additional socks, and any account they find on the said IPs for confirmation that it is in fact a sock (and frequently decide some aren't). If the check log were public, that would be a huge amount of private data revealed to the public. 2.55.185.246 18:52, 22 January 2022 (UTC)[reply]
Could you give an example of such private data? ··gracefool 22:13, 4 February 2022 (UTC)[reply]
yeah, sure... /s
User:1234qwer1234qwer4 (talk)
20:11, 7 February 2022 (UTC)[reply]
  • In case of legal need, I am pretty sure HTTP access logs already allow WMF or legal authority to check all log accesses. -- Pols12 (talk) 13:44, 29 January 2022 (UTC)[reply]
    I'm not sure how long the HTTP access logs are kept. This proposal would let other volunteers (oversighters & stewards) become aware of permission abuse before a WMF investigation. Lt2818 (talk) 16:05, 29 January 2022 (UTC)[reply]
  • To respond to those saying we should "trust functionaries" - that attitude is simply naive. "Who watches the watchmen?" It's a basic principle of human nature that oversight needs oversight. Everyone needs accountability, no-one is perfectly trustworthy, and even if they were, it doesn't hurt to prove it. ··gracefool 22:13, 4 February 2022 (UTC)[reply]
    We should trust them, because they have got to where they are by showing over many years of work that they are suitable for the role, and have earned the trust granted to them, often over more than a decade of service. They will have been through community approval (such as Requests for Adminship), likely several times over. Their real identities are all known by the WMF as well. Mako001 (talk) 03:16, 5 February 2022 (UTC)[reply]
    I don't think access logs imply distrust of functionaries, but just in case. Do you think CU logs are unnecessary too? Lt2818 (talk) 05:10, 5 February 2022 (UTC)[reply]
    This, essentially. It's not like there haven't been cases of CU abuse/misuse either, and viewing oversighted material certainly has potential for abuse.
    User:1234qwer1234qwer4 (talk)
    20:22, 7 February 2022 (UTC)[reply]

Voting

Expose more detailed diff information to the AbuseFilter

  • Problem: The level of "diff" information accessible in AbuseFilter is too crude. Therefore, many forms of vandalism cannot be correctly captured. An example is word-swapping vandalism where the same word may exist elsewhere in the same line or paragraph.
  • Proposed solution: A solution has already been proposed in phab:T220764
  • Who would benefit: Wikis using AbuseFilter to fight vandalism
  • More comments: This is a rather specific proposal, so those who are unfamiliar with AbuseFilter and its limitations may not fully appreciate it. Fewer supporters may exist for this compared to some more generically defined proposals. I hope that is considered when comparing proposals.
  • Phabricator tickets: phab:T220764
  • Proposer: Huji (talk) 00:58, 11 January 2022 (UTC)[reply]

Discussion

  • I recently stumbled on an unusual interaction in the AbuseFilter extension which resulted in an accidental block. When moving a link around from one paragraph to another, the link is added to added_lines, but is not added to added_links. Improvements to AbuseFilter are most welcome! —Ivi104 02:21, 11 January 2022 (UTC)[reply]
    AbuseFilter on enwiki and some other wikis has the block option disabled, as community consensus holds that filter hits should be reviewed by a human for vandalism. However, this feature is sometimes too crude and can't be watched closely. I also think AbuseFilter should have more types of conditions. Thingofme (talk) 02:51, 11 January 2022 (UTC)[reply]
  • As an admin who reviews filter reports a lot at AIV, I would say that the best opportunity for improvement lies with those who write the individual filters, not the extension. Daniel Case (talk) 05:28, 11 January 2022 (UTC)[reply]
  • I just wrote Community_Wishlist_Survey_2022/Admins_and_patrollers/Expose_ORES_scores_in_AbuseFilter. While there is no overlap, I believe the two proposals are closely related.--Strainu (talk) 16:21, 11 January 2022 (UTC)[reply]
    Agreed; they are closely related, and I find that to be a good idea too. The distinction is, this is something for which we have a clear path forward, but the ORES proposal has some timeliness issues for which we don't have a good answer yet. Huji (talk) 01:40, 12 January 2022 (UTC)[reply]
  • @Huji: Thank you for proposing this. In order to better understand what you are saying I have a few questions I would like to ask you. Please keep in mind that I don't write abuse filters myself:
  1. Wouldn't word swapping be easily identifiable by looking at the diff size?
  2. If the problem you are trying to solve is to better identify what was changed that triggered the filter, wouldn't looking at the diff itself be more helpful instead of sifting through the variables table?
  3. The phab task proposes a solution by adding new variables (words added/removed). How will this be useful in edits that span multiple lines when trying to identify word swapping?
  4. Is there any other use case that is not covered by the current variables/functions that will benefit from these variables? If you can provide examples that would be great
Also, could you please update the phab task with a working link? The current one does not work anymore. DMaza (WMF) (talk) 16:55, 14 January 2022 (UTC)[reply]
@DMaza (WMF): great questions!
  1. Not necessarily. An example is the swapping of the words "Kurd" and "Turk" (well, actually, the Persian words کرد and ترک respectively), which happens a lot on fawiki. They are words of the same length, so the diff size is 0. Using logic like added_lines rlike 'Kurd' & removed_lines rlike 'Turk' won't cut it either, because the rest of the paragraph could (and often does) include the words Kurd and Turk as well. We have vandals who specifically do word swaps. In a diff, we can see exactly which word was swapped (this diff shows that "a" was replaced with "some"), but in AbuseFilter we don't have a corresponding variable. What I am thinking of is something like added_words or added_characters, in addition to the line-level added_lines which we already have.
  2. Yes, except the diff-related variables in AbuseFilter are also at the line level, not the phrase level. Further below, I have pasted what edit_diff's value would look like for the diff I linked above; you will note that even in these variables, you still don't see the words "a" and "some" distinguished in any way that can be used programmatically in an AbuseFilter. Essentially, we have capabilities in MW diff which we don't expose in AbuseFilter at all.
  3. In the diff example above (or the Kurd/Turk actual example) you could look for added_words contains 'Kurd' & removed_words contains 'Turk' (here I assume added_words and removed_words are arrays).
  4. Many of the use cases that are currently using added_lines or removed_lines may benefit from using these new proposed variables in addition to or instead of the existing variables. Huji (talk) 19:17, 14 January 2022 (UTC)[reply]
; edit_diff
'@@ -1,1 +1,1 @@
-Here is a word in a sentence.
+Here is some word in a sentence.
'
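The word-level diff that the proposed added_words/removed_words variables would expose can be sketched with the Python standard library's difflib (a rough illustration; the variable names come from the discussion above and are not an existing AbuseFilter feature):

```python
import difflib

def word_diff(old_line, new_line):
    """Compute word-level added/removed words, in the spirit of the
    proposed added_words / removed_words variables."""
    old_words = old_line.split()
    new_words = new_line.split()
    added, removed = [], []
    matcher = difflib.SequenceMatcher(a=old_words, b=new_words)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op in ("replace", "delete"):
            removed.extend(old_words[i1:i2])
        if op in ("replace", "insert"):
            added.extend(new_words[j1:j2])
    return added, removed

added, removed = word_diff("Here is a word in a sentence.",
                           "Here is some word in a sentence.")
# added == ["some"], removed == ["a"]
```

Note how the unchanged second "a" in the line does not pollute the result, which is exactly what line-level variables cannot express today.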
  • As I wrote on Community Wishlist Survey 2022/Archive/Purely adding keywards on Abusefilter, diffs in detail (e.g. words) would help us. However, splitting a sentence into words could be difficult in some languages, especially those which don't use spaces to separate words. So I worry about the quality of the algorithm. We wouldn't use the new variables for word diffs if they give us many false positives/negatives.
    Here, I strongly recommend implementing an alternative method for accurate detection at the same time you implement the algorithm to extract word diffs. One idea is just making contains_any() support an array of keywords as its argument (i.e. keywords := ["keywordA", "keywordB", "keywordC"]; contains_any(added_lines, keywords)), where it currently supports only variadic arguments (contains_any(added_lines, "keywordA", "keywordB", "keywordC")). This simple method is enough to detect added words by checking the diff between added_lines and removed_lines (i.e. contains_any(added_lines, "A") & !contains_any(removed_lines, "A") can detect adding the word "A"). --aokomoriuta (talk) 03:59, 29 January 2022 (UTC)[reply]
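The array-argument detection pattern suggested above can be illustrated in Python (a sketch of the semantics only; note that plain contains checks match substrings, not whole words):

```python
def contains_any(haystack, keywords):
    """Emulates the suggested contains_any(added_lines, keywords) form
    that accepts an array of keywords (the AbuseFilter function
    currently takes only variadic string arguments)."""
    return any(k in haystack for k in keywords)

added_lines = "Here is some word in a sentence."
removed_lines = "Here is a word in a sentence."

# The pattern from the comment above: a keyword counts as "added"
# when it occurs in added_lines but not in removed_lines.
def keyword_added(word):
    return (contains_any(added_lines, [word])
            and not contains_any(removed_lines, [word]))

assert keyword_added("some")
assert not keyword_added("word")
```

This sidesteps word segmentation entirely, which is why it is attractive for languages without space-separated words, at the cost of substring-level false matches.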

Voting

Allowing users to change the content model of their own userpage or user subpages

  • Problem: Currently, no one other than administrators can change the content model of a page; if users want to change the content model of their own user pages or subpages, they need to ask an administrator to do it. If this were possible, various tasks, including template development, could be done through user namespace pages. This feature could be granted only to autoconfirmed users to avoid abuse.
  • Proposed solution: Allow users to change the content model of their own user pages and subpages.
  • Who would benefit: All users
  • More comments:
  • Phabricator tickets:
  • Proposer: MdsShakil (talk) 05:01, 14 January 2022 (UTC)[reply]

Discussion

  • @MdsShakil: Could you explain a bit more about what the problem is? You mention being able to develop templates in the user namespace, but this should already be possible via transcluding an absolute page name (e.g. {{:User:MdsShakil/TestTemplate}}). Also, content models of certain pages are already set correctly: if you try to create a .css or .json subpage for instance, it'll be of the correct content type. As for changing the content model of a user page itself, I'm not sure I see the point of that; could you explain more? Thanks! — SWilson (WMF) (talk) 07:37, 14 January 2022 (UTC)[reply]
    @SWilson (WMF) Yes, using .css creates CSS pages, but sometimes they need to be changed to Sanitized CSS. Also, this would help with creating MassMessageListContent pages in the user namespace and JSON pages without the .json suffix. -- MdsShakil (talk) 08:35, 14 January 2022 (UTC)[reply]
    Regarding the massmessage content model, phab:T92795 is looking to solve that part. — xaosflux Talk 00:22, 15 January 2022 (UTC)[reply]
    Agreed: more detail, please (and example/s?) --Aboudaqn (talk) 20:22, 28 January 2022 (UTC)[reply]
  • phab:T85847 seems relevant and maybe should be done instead (for autoconfirmed users, maybe). I know that some wikis like the English Wikipedia have gotten consensus to add the change-content-model right to non-admin user groups (like template editor). But both this proposal and the task may just need consensus... not sure if this one just needs a push or not. --Izno (talk) 22:50, 22 January 2022 (UTC)[reply]
    Agreed, but I think restricting it to users with autopatrol-level access would be better, to avoid misuse/abuse, because getting the autoconfirmed user right is easy and doesn't take much effort. 🌸 Sakura emad 💖 (talk) 17:52, 29 January 2022 (UTC)[reply]
    @Sakura emad: Lots of wikis have no user group such as autopatrol, and this right would only allow changing the content model of one's own userpage or user subpages, so I think that is not necessary. --MdsShakil (talk) 04:01, 5 February 2022 (UTC)[reply]
  • Potential for abuse should be considered, since autoconfirmed is easy to get and personal JS/CSS can only be edited by interface administrators. Current AbuseFilters might apply stricter conditions to pages based on their titles (such as matching \.(js|css)), which would not work if arbitrary userspace pages' content models could be changed by their owners.
    User:1234qwer1234qwer4 (talk)
    19:58, 7 February 2022 (UTC)[reply]
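For reference, a self-service change could plausibly go through the existing action=changecontentmodel API module (MediaWiki 1.35+). The sketch below only builds the query string and does not send a request; the title, model, and summary values are illustrative:

```python
from urllib.parse import urlencode

# Parameters for the existing action=changecontentmodel API module.
params = {
    "action": "changecontentmodel",
    "title": "User:Example/styles.css",
    "model": "sanitized-css",  # content model used by TemplateStyles
    "summary": "Switch subpage to sanitized CSS",
    "format": "json",
    # A real request would also need a CSRF "token" parameter.
}
query = urlencode(params)
```

The proposal then amounts to granting the right behind this module (scoped to one's own userspace) rather than building new machinery.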

Voting

Styling for globally locked users

  • Problem: (1) Given a category of user or user talk pages, determine which have or have not been globally locked. (2) Given a page (e.g. a sockpuppet investigation page), somehow highlight accounts that have been globally locked.
  • Proposed solution: Currently it is not possible to efficiently determine whether a list of accounts has been locked: it takes one API request per user, instead of one request for 50 or even 500 users. Solve the API problem and gadget writers can do the rest.
  • Who would benefit: Stewards, those dealing with cross-wiki sockpuppets and spam, admins who deal with block appeals, etc. Also a small reduction in the amount of cross-wiki abuse.
  • More comments:
  • Phabricator tickets: phab:T261752 phab:T237505
  • Proposer: MER-C 19:56, 13 January 2022 (UTC)[reply]
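To illustrate the API cost described in the proposal: checking lock status today means one meta=globaluserinfo request per account. The sketch below just constructs the per-user request URLs without sending them (the usernames are invented):

```python
from urllib.parse import urlencode

API = "https://meta.wikimedia.org/w/api.php"

def lock_check_urls(usernames):
    """Build one meta=globaluserinfo request URL per account; the
    module accepts only a single guiuser, hence one round trip each."""
    return [
        API + "?" + urlencode({
            "action": "query",
            "meta": "globaluserinfo",
            "guiuser": name,
            "format": "json",
        })
        for name in usernames
    ]

urls = lock_check_urls(["ExampleSock1", "ExampleSock2", "ExampleSock3"])
# Three accounts, three round trips; the wish is one batched query.
```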

Discussion

Voting

Add variables in AbuseFilter to detect/block thanks

  • Problem: There is no way in AbuseFilter to detect uses of the "thanks" feature. This feature can be abused in order to harass editors. It is possible to mute a specific account, but when the harasser uses a lot of accounts, the editor targeted by the harasser has no other choice than disabling the whole thanks feature.
  • Proposed solution: Create thanks feature related variables, which could be "thanks sender username" and "thanks recipient username" for example.
  • Who would benefit: AbuseFilter editors, harassed editors
  • More comments:
  • Phabricator tickets: phab:T235873
  • Proposer: — Jules* Talk 10:14, 23 January 2022 (UTC)[reply]

Discussion

  • Erratum: I wrote in the proposal that variables "could be thanks sender username and thanks recipient username". We already have a user_name variable in AbuseFilter (for the user who performs the action), so we only need a thanked-user variable. This would make it possible to prevent harassment using the thanks feature. — Jules* Talk 09:34, 30 January 2022 (UTC)[reply]
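If the proposed variable existed, a filter's logic might look like the following Python sketch (the variable name thanked_user and all account names are hypothetical; real filters are written in the AbuseFilter rule language, not Python):

```python
# Hypothetical filter data: sender/recipient pairs a community has
# identified as harassment (invented names).
harassment_pairs = {("SomeHarasser", "TargetedEditor")}

def filter_matches(variables):
    """Evaluate the sketched rule: the action is a thanks and the
    (user_name, thanked_user) pair is on the list. thanked_user is the
    proposed new variable; user_name already exists."""
    return (variables["action"] == "thanks"
            and (variables["user_name"],
                 variables["thanked_user"]) in harassment_pairs)

assert filter_matches({"action": "thanks",
                       "user_name": "SomeHarasser",
                       "thanked_user": "TargetedEditor"})
assert not filter_matches({"action": "thanks",
                           "user_name": "FriendlyEditor",
                           "thanked_user": "TargetedEditor"})
```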

Voting

Split up blockedtext into different messages for a username block, an IP block, and an IP range block

  • Problem: All blocked users are shown the same block message (MediaWiki:Blockedtext), regardless of whether they are registered or unregistered contributors. This is problematic and does not allow giving blocked users specific instructions based on the block they're facing.
  • Proposed solution: Split up MediaWiki:Blockedtext into different messages for unregistered contributors and registered contributors.
  • Who would benefit: Blocked users, giving them the right directions based on the block type (e.g., it makes no sense to advertise for unregistered contributors that they can "email $1" when they can't technically do so).
  • More comments: Partly done for composite blocks (multiple blocks affecting an IP/IP range) and for partial blocks. See system messages. Done already for global blocks. For Wikimedia wikis, if needed, they could probably be further customised via WikimediaMessages overrides (if there's a need to do so).
  • Phabricator tickets: task T60858
  • Proposer: —MarcoAurelio (talk) 10:41, 20 January 2022 (UTC)[reply]
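The split could work roughly like the following sketch, which picks a message key by block target type (the keys other than blockedtext are hypothetical here; creating such distinct messages is exactly what the proposal asks for):

```python
import ipaddress

def blocked_message_key(target):
    """Pick a system-message key for the block notice shown to the
    blocked target: account, single IP, or IP range."""
    try:
        ipaddress.ip_network(target, strict=False)
    except ValueError:
        return "blockedtext"        # registered account
    if "/" in target:
        return "blockedtext-range"  # IP range block (hypothetical key)
    return "blockedtext-anon"       # single IP block (hypothetical key)

assert blocked_message_key("ExampleUser") == "blockedtext"
assert blocked_message_key("192.0.2.0/24") == "blockedtext-range"
assert blocked_message_key("192.0.2.5") == "blockedtext-anon"
```

Each key could then carry instructions that make sense for that audience (e.g. only the registered-account message mentions "email $1").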

Discussion

Voting

Notifications for user page edits

  • Problem: If your user page is modified by a malicious user, you may not notice if your watchlist is overflowing. Unlike with a normal article, it is very unlikely that another user will revert the malicious changes or warn you about them. Unlike article vandalism, user page vandalism can affect the unwitting user's standing in the community.
  • Proposed solution: Generate notifications for user page modifications by other users, just like user talk notifications work now. Another solution would be to protect user pages from modification, but some edits may be friendly and even useful.
  • Who would benefit: Users whose user page has been vandalized.
  • More comments: It could be made configurable per user.
  • Phabricator tickets: phab:T3876
  • Proposer: Error (talk) 10:49, 13 January 2022 (UTC)[reply]
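The proposed notification could be a simple hook along these lines (a Python sketch with invented names, not MediaWiki's actual hook API):

```python
notifications = []  # stands in for the Echo notification queue

def on_page_edited(title, editor):
    """Notify the owner when someone else edits a page in their user
    space, analogous to existing user-talk notifications."""
    if not title.startswith("User:"):
        return
    owner = title[len("User:"):].split("/")[0]
    if editor != owner:
        notifications.append({"to": owner, "page": title, "by": editor})

on_page_edited("User:Error", "SomeVandal")     # notifies the owner
on_page_edited("User:Error/sandbox", "Error")  # own edit, no notification
```

The per-user opt-out mentioned in "More comments" would just be a preference check before appending.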

Discussion

  • Related to phab:T176351 (for the protection part). Note, some projects use abusefilter to add some protections to base userpages already as well. — xaosflux Talk 11:37, 13 January 2022 (UTC)[reply]
    Yes; however, the user pages of blocked users and new users should remain editable, I think, in order to mark sockpuppets and banned users. Meta already has this, but I have problems when marking problematic userpages for deletion. Thingofme (talk) 10:48, 15 January 2022 (UTC)[reply]
  • This is a nice wish which should be very easy to deploy and should ease our fight against harassment. A single notification when someone, whoever it is, modifies your user page seems fairly reasonable. Xavier Dengra (MESSAGES) 00:01, 22 January 2022 (UTC)[reply]
  • I'm not a fan of blocking such edits entirely (many may be in good-faith, even if by IPs, e.g. in user space TODO or cleanup lists) but getting a notification should definitely be the default. I would add that page creations in another user's user space should also generate a notification for that user. Fytcha (talk) 18:48, 25 January 2022 (UTC)[reply]
  • A notification would be nice, certainly. I'm not sure I'd want it to be the orange bar of doom, though. A few times in the past, I've done something like fixing a typo on someone else's userpage, and I might be less likely to do that if I knew it'd give them a big dramatic alert. {{u|Sdkb}}talk 18:50, 28 January 2022 (UTC)[reply]
  • Or: redo the watchlist so that pages can be categorized, sorted, or assigned levels of importance. This is an important tool for editors - why should so many find it unusable? François Robere (talk) 12:01, 30 January 2022 (UTC)[reply]
  • This proposal inspires in me an idea for the watchlist, as per the idea above: a kind of levelled watchlist, with different levels of notifications. E.g. it would be useful, in the case of overflowing watchlists, to choose some pages for which we want a more visible notification within the watchlist when they have been modified, by showing the corresponding notifications in an additional section of the current watchlist such as "pages to monitor first", "important pages", "priority pages", etc. Christian Ferrer (talk) 12:03, 30 January 2022 (UTC)[reply]

Voting

Allow all registered users the right to semi-protect their own user and talk pages

  • Problem: Vandals, and other disruptive users, sometimes maliciously edit the user and talk pages of individual users who have reported them or otherwise dealt with them. Those who are administrators have the ability (and some of us have used this) to semi-protect their personal pages. But users without this right must ask for it to be done, which can lead some of them (I think) not to do it at all because they may not want to wait, and may be afraid their request would be denied.
  • Proposed solution: Allow registered users past a certain number of edits the right to semi-protect their user and talk pages as they see fit.
  • Who would benefit: All users, as it would really make for safer personal space for everyone.
  • More comments:
  • Phabricator tickets:
  • Proposer: Daniel Case (talk) 03:21, 23 January 2022 (UTC)[reply]
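The permission check implied by the proposal can be sketched as follows (the edit-count threshold and function name are invented for illustration):

```python
EDIT_THRESHOLD = 500  # hypothetical "certain number of edits"

def may_self_protect(user, edit_count, title):
    """Allow semi-protection only of the user's own user and user
    talk pages, once they pass the edit-count threshold."""
    own_pages = {f"User:{user}", f"User talk:{user}"}
    return title in own_pages and edit_count >= EDIT_THRESHOLD

assert may_self_protect("ExampleUser", 1000, "User talk:ExampleUser")
assert not may_self_protect("ExampleUser", 1000, "User talk:SomeoneElse")
assert not may_self_protect("ExampleUser", 10, "User:ExampleUser")
```

Scoping the right to the user's own pages is what keeps this narrower than the general protect right administrators hold.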

Discussion

Voting

Further interaction blockers

  • Problem: Currently, in MediaWiki, it is possible to mute someone's mentions and to block someone from sending email to your personal email account; however, there is no way to prevent additional user interactions, for example writing on your user talk page, editing your userspace drafts, or pinging or quoting you on Phabricator.
  • Proposed solution: It is desirable to 1) enhance muting in these respects, and 2) integrate the options for the different forms of interaction blocking into a single place for easier control.
  • Who would benefit: All registered users who participate in editing.
  • More comments:
  • Phabricator tickets:
  • Proposer: C933103 (talk) 03:54, 17 January 2022 (UTC)[reply]
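The per-user interaction block could be sketched as a mute-list check before posting in someone's user space (a minimal illustration; the data structure and all account names are invented):

```python
# Each user's personal mute list, as the proposal envisions it being
# managed from a single place in the preferences.
mutes = {"TargetUser": {"PersistentHarasser"}}

def may_post(actor, userspace_owner):
    """Allow the edit unless the owner of the user space has muted
    the acting user."""
    return actor not in mutes.get(userspace_owner, set())

assert not may_post("PersistentHarasser", "TargetUser")
assert may_post("FriendlyEditor", "TargetUser")
```

The exemption for administrators and bureaucrats discussed below would be one extra condition in this check.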

Discussion

  • @C933103: It is currently possible for an administrator to block a user from editing a wiki (either specific namespaces or all pages), see mw:Help:Blocking_users. It is also possible to modify your Phabricator email preferences to decide when you get emails (see https://phabricator.wikimedia.org/settings/user/<username>/page/emailpreferences/). Are these satisfactory? Is there anything more you wanted? DWalden (WMF) (talk) 11:24, 17 January 2022 (UTC)[reply]
    Not really; the proposal asks for additional abilities allowing users to control who they do not want to interact with, as well as collecting these controls together in some accessible place like the user preferences. Neither of the two options can achieve this (they are either tools for administrators only, or tools that do not focus only on some specific users). C933103 (talk) 21:57, 17 January 2022 (UTC)[reply]
    I would say this isn't really useful as it violates our Vision. Liuxinyu970226 (talk) 03:40, 20 January 2022 (UTC)[reply]
    How so? Regards, HaeB (talk) 14:45, 23 January 2022 (UTC)[reply]
    For context, see the list of interaction types that the English Wikipedia currently considers as covered by interaction bans (which are aimed at reducing conflict between two particular users). Regards, HaeB (talk) 14:45, 23 January 2022 (UTC)[reply]
    Such suggestions are much like the German Wikipedia's suggestion some years ago to automatically remove the "Thanks" log; not only was that suggestion rejected, it also resulted in the first entry of Limits_to_configuration_changes#Prohibited_changes. Liuxinyu970226 (talk) 12:26, 28 January 2022 (UTC)[reply]
  • Is it actually desirable to enable muting? It seems to me that pervasive blocking features make social media a more antisocial place, because so many people block others for mere disagreement, or for calling them out for breaking rules or actual trolling etc. This will absolutely be abused. ··gracefool 22:01, 4 February 2022 (UTC)[reply]
    Most social media sites allow users to block spam or harassing messages at the same time as the behavior is reported to platform administrators. I think it makes sense that, on non-content and non-public-discussion pages, users can block parties whose behavior they find undesirable. I am not suggesting, and would not suggest, this as a replacement for the full interaction ban available as an on-wiki sanction, as that would be impossible to enforce this way and would hurt cooperation; however, 1-to-1 interactions should be optional.
    If there are concerns about abuse of the system by spammers who block people from warning them about bad behavior, then perhaps an exemption can be created for administrators and bureaucrats, making them unblockable in such a system. C933103 (talk) 14:59, 5 February 2022 (UTC)[reply]
    @C933103 That is, however, why we shouldn't do so in MediaWiki, per w:WP:SNS. Liuxinyu970226 (talk) 05:11, 7 February 2022 (UTC)[reply]
    I would like to add that my comment was about the model by which such interactions can be dealt with, not that Wikimedia projects should function like social media sites; I have specifically suggested limiting the scope of the wish. C933103 (talk) 13:01, 7 February 2022 (UTC)[reply]
  • Question: why do you want to reduce interaction in this form? There is a frustration behind this request that should be understood. What kinds of content do you not want to see? --Valerio Bozzolan (talk) 14:29, 11 February 2022 (UTC)[reply]
    It takes time for abusive behavior from one user against another on a wiki to be dealt with by administrators, and such a tool would be needed by the targeted user to stop that sort of abusive behavior before administrators can finish processing the relevant cases of abuse. Otherwise such abuse could continue without end. C933103 (talk) 00:04, 13 February 2022 (UTC)[reply]

Voting