Semi-protected edit request on 17 May 2020
This edit request has been answered. Set the |answered= parameter to no to reactivate your request.
184.108.40.206 19:41, 17 May 2020 (UTC)
- Not an edit request. —Sgd. Hasley 22:17, 17 May 2020 (UTC)
- To clarify, a global ban is a community action while a WMF ban is a Foundation action. The second question should be answered by Global bans#Overturning a global ban decision. User:1234qwer1234qwer4 (talk) 21:15, 18 July 2021 (UTC)
A thought from the Foundation's CR&S teams
Hello all. My name is Maggie Dennis, and I am the Vice President of the Community Resilience and Sustainability group at the Wikimedia Foundation. Among the teams I oversee is the Trust & Safety unit. This team ensures that our projects comply with applicable law, explores ways of keeping the Wikimedia community safe, and works to minimize exposure to harm for volunteer and reader communities. I’m reaching out today to discuss a potential gap in this volunteer community policy that my teams observed while evaluating and acting on a Trust & Safety investigation. We wanted to bring this up in case volunteer community members would like to consider whether this is a concern they wish to address. Before getting to that, let me give you a little context on the case.
Many of our projects have excellent policies and systems in place to handle such situations. Certainly French Wikipedia was on top of this. We greatly admire and appreciate the leadership of community members in identifying and confronting this situation locally. Wikimedians who work directly with content are often the first to see evidence of such campaigns, and there are many volunteers with much experience in identifying problem behaviours and stopping them. By the time Trust & Safety was asked to investigate by some of those volunteers, much of the work on the local level had already been done.
However, one of the questions Trust & Safety asks itself in any case investigation (disinformation or behavioral) is whether appropriate community options exist that meet the needs of the movement and community members across it. In this case, we wondered if the current community processes support cases where individuals are behaving in ways that suggest they will never be good faith contributors on any project.
To go more into depth on what I mean: It is not uncommon for users who create problems on one project to move to another, and some communities even regard this as a potential path to rehabilitation. Community-applied global bans are, under the existing policy, “exclusively applied where multiple independent communities have previously elected to ban a user for a pattern of abuse.” (emphasis in original) If an individual is here as part of a concerted group effort to undermine our very mission, should it be easier for community members to consider a global ban before that individual carries such behavior from one project to another?
In short, we wanted to call out the question of whether community global bans should be allowed in cases where the behavior is severe but limited to one project, in case volunteer community members thought it worth discussing the existing community ban policy. Especially in cases of disinformation, these are not always the kinds of situations governed by our Universal Code of Conduct (UCoC), which speaks to the way users treat each other but not to the content itself.
If there is a desire for the Foundation to support a conversation about making such a change to community global ban policy, I hope we would be able to do so in the near future, as our Trust & Safety Policy team is dedicated to supporting the evolution of community policy as well as Foundation policy. However, I’m not suggesting that the Foundation needs to be involved at all. Trust & Safety Policy is a small team, currently very busy with the UCoC, and if they are not needed, there is no reason this conversation can’t happen spontaneously. We will provide support if needed, but we really just wanted to bring this question up for your consideration.
In this case, again, we do want to thank the French Wikipedia contributors who protected their communities and our collective readers by identifying and addressing the issue first as well as bringing the matter to us.
We encourage those who feel unsafe on Wikimedia projects to use local community processes or, absent such, to contact the Wikimedia Foundation for assistance. The Foundation and the community will work, together or in parallel, to enhance the safety of all users whenever necessary with whatever means we can. To contact the Trust & Safety team about a safety issue, you can write to email@example.com. To contact the Trust & Safety Disinformation team about a specific disinformation issue, you can write to firstname.lastname@example.org.