Talk:Community health initiative/Archives/2017


Undue emphasis on blocking

Your introductory "background" section deals almost exclusively with blocking and implies a consensus of volunteers to "improve" blocking tools. But looking at the links gives a different picture of the community discussion.

  • The 2015 Community Wishlist survey is two years old and has only 35 votes, not all of them supports. Little reason is given for the 'support' votes, but one reason 'against' stands out:
"Blocking the worst problem users is a pipedream. Helping out the NSA and its ilk by doing mass surveillance of regular users and occasional visitors by storing troves of user-agent data or sending people out on the web bleeding fancy hacked Evercookie type data polluting their browser ... that's the reality. Just don't."
What effect would this have on the privacy and safety of users who edit from inside the borders of repressive regimes? Or on admins/checkusers who have political positions within repressive regimes?
"Setzen eines „Vandalismus-Cookies“ bei Benutzersperren. Die Cookie-Verfallszeit entspricht dabei der Sperrdauer, maximal aber 1 Tag. Beim Aufruf des Wikieditors Überprüfung, ob ein entsprechendes Cookie vorliegt. Wenn ja, dann keine Bearbeitung zulassen und entsprechende Meldung ausgeben. (/als Zombie-Cookie) (Bug 3233)"
which translates roughly as
"Set a "vandalism-cookie" for user locks. The cookie expiry time corresponds to the lock period, but a maximum of 1 day. When retrieving the Wikieditor check whether a corresponding cookie is present. If yes, then do not allow editing and output the corresponding message. (/ As a zombie cookie ) ( Bug 3233 )"
Like the other "community discussion", it seems to support implementing vandalism cookies rather than user-agent tracking.
  • The Phabricator requests: see this ticket. If I am reading it correctly, the cookie-based ones are already implemented, but not the more controversial user-agent ones.

Ironically, the 2014 Inspire Campaign referenced in that section has two mentions of blocking:

  • "There is a double standard between admins and editors where admins are allowed to get away with conduct that would cause an editor to be blocked"
  • "There is a double standard between female and male editors where men are allowed to get away with conduct that would cause a woman to be blocked or banned"

While everyone can be sympathetic to the idea of not having "disruptive" users, the sad fact is that there have been quite a number of bad blocks, and that blocks have become more and more political. Any admin can indefinitely block any good-faith contributor, with no discussion and no one there to see it, unless someone has their talk page watchlisted. There is no mechanism for reviewing blocks, and there is virtually no mechanism for reviewing these "admins for life". —Neotarf (talk) 23:44, 26 January 2017 (UTC)

Neotarf: You're right, there is too much emphasis on blocking in that section. I'll have to correct that. The "What this initiative is about" section is a better overview of what we'll be working on -- improving blocking is part of it, but by no means everything.
I think that the reporting and evaluation tools are going to be key to this project, helping administrators who care about harassment to get good information about what's happening, so they can make good decisions. Right now, the main tools for evaluating harassment cases are diffs pulled out of user contributions, and it's difficult to see the whole picture. We want to work with admins and others to figure out what kinds of tools they need. Do you want to talk some more about the problem with reviewing blocks? I know that people ask to be unblocked, but there's a lot I don't know yet. -- DannyH (WMF) (talk) 00:14, 27 January 2017 (UTC)
So "what this initiative is about" is about finding solutions to bullying and harassment. At this point there seems to be quite a bit of agreement that it will take both social and technical tools, and that the social part will be both the most difficult and the most important. For the moment, I am willing to go along with that, until better information comes along. For some context about social fixes, see "Advice for the Accidental Community Manager" by Jessamyn West and "If your website’s full of assholes, it’s your fault", by Anil Dash, both well known in the social media world.
If your approach is going to be "find out what admins want and give them everything they ask for", you are going to be coming in on one side of a culture war, and you will become part of the problem.
So, what culture war? When I first started on enwiki, there was a divide between admins and non-admins. There was a perception that the bullies had taken over the playground, and the "abusive admins" were harassing the "content creators", who did the proofreading and the actual work of writing articles. In all fairness, there were some real problems at the time, and although the situation did improve, the meme was slow to die.
Then about 2 or 3 years ago, the conflict shifted to one between professional and non-professional editors. The professionals were researchers, educators, program leaders, and software developers, many of whom came to the project through GLAM activities. They found the editing environment hostile, and described it as a "buzzsaw culture". While previous conflicts often saw the WMF in conflict with the community, these editors viewed the WMF as allies. They saw the admins as being very young--some are as young as twelve years old--and mainly interested in Pokémon. There is overlap, of course: some very savvy admins started at a very young age, and there are professionals with a keen Pokémon knowledge base.
Others may have a different view--I do still consider myself a newbie, in terms of experience and edit count. I'm not sure what you mean about "the problem with reviewing blocks", when there isn't such a mechanism, but I have a few thoughts about technical solutions I will try to put together later. —Neotarf (talk) 03:03, 28 January 2017 (UTC)
Okay, trying to answer your question about blocking...there have been any number of snarky comments about blocking and banning on the Wikipedia criticism sites, if I had time to look for them, but maybe the easiest thing is to link to Sue Gardner's 2011 editor retention talk to Wikimedia UK where she talks about Wikipedia as a video game [Link -- excerpt is some time after 23:30]: "Folks are like, playing Wikipedia like it’s a video game, and their job is to kill vandals, right, and then we talk about how every now and then a nun, or a tourist, wanders in front of the AK47 and they just get murdered, but in actual fact, what we think now is that it’s all nuns and tourists, right, and it’s a big massacre, right, and there’s one vandal running away in the background, you know, and meanwhile everybody else is dead." [Audience:“Yes”]
So what you can get is situations where, for instance, the staff is trying to provide a creative brainstorming type of situation and the admins and stewards step in and try to make it so the newbies who come to Wikipedia for the first time and make their very first edit to the grant proposal page are told how crappy their proposal is. Does anyone really think the grant team does not recognize a viable proposal? Or that some people are just using the grant process invitation to make some comment to the Foundation where they don't have another venue to do so? You can also get admins demanding that newcomers have conversations about reproductive organs with them, in defiance of community consensus, or arguing with the staff against their "safe space" guidelines, actually disrupting the grant process, or putting into place alternative "free speech" policies that have not been voted by the community, arguing that policy is whatever the admins decide to enforce. This latter statement is probably not meant as hubris, but as a true statement of how things work. So you can see how the staff, and really the whole project, is being held hostage by the admins, or rather a structure that does not allow admins to be held accountable, except in very egregious circumstances, or more and more, for political reasons. Of course they need someone to keep the lights turned on, so when you have the same admins respond to something like the recent situation of compromised Wikimedia accounts, you have comments like this, disparaging "civility" and recommending "Better administrative tools, to help keep out the people that administrators and other people with enforcement authority have already decided should be excluded from Wikimedia sites." An interesting perspective on that here, and I've wasted a bit of time trying to figure this one out.
When I first started editing, there were a lot of proposals floating around to make admins more accountable, but none has gained any traction. At one time I envisioned something like a congressional scorecard that would set specific criteria for admins' actions, but such a thing seems unlikely on WP projects, since few people are willing to risk getting on the bad side of an admin. It could probably be done on one of the criticism websites, but those have only shown interest in criticizing the worst cases, even as they criticize WP for doing the same thing. There have also been periodic proposals for unpacking the admin toolkit, which seems more promising to me, so why hasn't it been done?
I have also heard that admins are not all that necessary anymore, as most vandalism is now reverted by bots.
A longish answer, and one that perhaps only raises more questions than it answers. Regards, —Neotarf (talk) 22:47, 28 January 2017 (UTC)
I have no idea of the percentages of vandalism reversal by bots vs reversal by ordinary editors (obviously you don't need Admins to revert), but I revert a lot of vandalism on a daily basis that hasn't been noticed by bots and probably never would be. Some is a bit subtle, some is glaringly in your face. And I haven't noticed much change in the need to block over the last few years. If anything it's getting worse with the current political climate, and I don't just mean the US situation. Doug Weller (talk) 18:53, 6 February 2017 (UTC)
If you wanted numbers you could probably look at something like this, but vandalism is not the same as harassment, which is more like this (if you haven't seen it already). My point was not about harassment but about the WMF being held hostage by the necessity for admins, for instance this, which I think is an unfortunate exchange, and how many admins are actually needed, and how many of their tasks can be automated. —Neotarf (talk) 02:18, 10 February 2017 (UTC)

Need some English Wikipedia forums moderated only by women

Community health initiative is a good idea. Too bad it will die due to being organized on the Meta wiki, the place where ideas go to die. Few people go to the Meta wiki. If you want more participation put the Community health initiative pages on English Wikipedia.

I initially found out about this on this rarely visited blog:

It does not allow comments via Wikimedia login though. Why even have a Wikimedia blog that only allows easy login via Facebook, Twitter, and Wordpress? When I started to log in there via Facebook it immediately asked to datamine me. I cancelled out.

I came there via a Google search looking for forums addressing the problems of women editing Wikipedia. I see that the problem runs from top to bottom, from the WMF to the editors. There are no forums moderated only by women dedicated to these problems on the most-watchlisted Wikipedia, the English Wikipedia. The Teahouse has a majority of male hosts.

Foundations have been throwing money at related problems for years, but the money is wasted due to lack of participation because these efforts usually go through the Meta wiki. I have seen so much money wasted on projects organized through the Meta wiki. --Timeshifter (talk) 03:14, 28 January 2017 (UTC)

Hi Timeshifter, we will be doing a lot of work and discussion on English Wikipedia, once we really get started. Meta is the home site for the Community Tech team, because our team works across all projects -- check out the 2016 Community Wishlist Survey for an example of a successful project that's organized on Meta. :) But the community health initiative is primarily focused on English WP, so we'll be making some new pages there, once we've got the new team together.
Oh, and those are good points about the Wikimedia blog, and having forums moderated by women. I'll pass on your concern about the blog to the folks who work on that. We'll have to talk and work more on the female-moderated forums, to figure out how we can help. Thanks for your thoughts. -- DannyH (WMF) (talk) 19:08, 30 January 2017 (UTC)
Thanks for replying and for passing on info and ideas. Note that I haven't replied until now because unless something is on my English Wikipedia watchlist I tend to forget about it. Please see related discussion here:
Grants talk:IdeaLab/Inspire/Meta - see section I started.
Community Wishlist Surveys tend to be ignored in my experience. A cross-wiki watchlist was in the top ten on one of those lists from a previous year. Still no popular cross-wiki watchlist. There was one that was close to becoming useful, but it was abandoned right when it was getting interesting. I hear there is another one in the works. But like I said, if it is being developed on Meta, I just will not know of it. --Timeshifter (talk) 20:59, 9 February 2017 (UTC)
Timeshifter, we shipped five features from last year's Community Wishlist Survey -- a bot to identify dead external links so they can be replaced, a change to diffs that made changes in long paragraphs appear more consistently, a tool to help users identify and fix copyright violations, a change to category sorting that makes numbers work properly, and a Pageviews Stats tool. The cross-wiki watchlist was on last year's survey, and we're still working on it. It requires some database changes that make it a longer process than we'd hoped, but we're making progress and I expect it'll be finished within 2017. I hope you come back sometime and see this reply, but I guess it's up to you whether you're interested in the answer or not. :) -- DannyH (WMF) (talk) 23:23, 9 February 2017 (UTC)
I know there is great work being done, but it just isn't what I want. ;) Everyone probably says that concerning something. When I said that Community Wishlist Surveys were ignored, I meant relative to the far greater participation it would get if it were posted on English Wikipedia. My 2 points are interrelated. People buy into what they participate in. --Timeshifter (talk) 03:05, 10 February 2017 (UTC)
There is already a gender forum at https://en.wikipedia.org/wiki/Wikipedia_talk:WikiProject_Countering_systemic_bias/Gender_gap_task_force ; before you start another one you might think about why no one posts there. The six arbitration cases listed in the side bar might be a clue, as might the comments here from the late Kevin Gorman. —Neotarf (talk) 02:30, 10 February 2017 (UTC)
In my opinion there needs to also be a female-only version of the en:Wikipedia:Teahouse. But I am speaking as a guy, so what do I know. --Timeshifter (talk) 03:05, 10 February 2017 (UTC)
Late to conversation. Typing from phone... I had a similar opinion and proposed this - https://meta.wikimedia.org/wiki/Grants:IdeaLab/WikiProject_Women - and created this - https://en.wikipedia.org/wiki/User:Lightbreather/Kaffeeklatsch The proposal topped the leader board for that campaign, but also had a lot of opposition. Both efforts died, mostly, I believe, because of the efforts of two Wikipedia camps: pro-gun editors and their wiki friends, and foul-mouthed editors - including those deemed "valued content contributors" - and their friends. They ultimately got me site banned from Wikipedia. Lightbreather (talk) 17:24, 19 February 2017 (UTC)
I only have so much time and energy. So I haven't read up on all the particulars with your situation. But I think it is insane that only userspace can be used by women to talk amongst themselves about Wikipedia issues without interference from men. I did not realize until just now how backward Wikipedia and the Wikimedia Foundation are. This is enlightening:
Without weighing in on the larger question about how to provide safe spaces so that all users are comfortable participating in Wikimedia projects, I wanted to clear up the misunderstanding related to the WMF non-discrimination policy. In WMF Legal's opinion, the non-discrimination policy does not prohibit users from setting up a women-only discussion in their user space, because the policy was passed by the Foundation board to apply to acts taken by the Foundation and Foundation employees, not individual users. Other policies may, of course, apply. [1]
-- Luis Villa, Deputy General Counsel, Wikimedia Foundation, 7 February 2015
See: Non discrimination policy: "The Wikimedia Foundation prohibits discrimination against current or prospective users and employees on the basis of race, color, gender, religion, national origin, age, disability, sexual orientation, or any other legally protected characteristics. The Wikimedia Foundation commits to the principle of equal opportunity, especially in all aspects of employee relations, including employment, salary administration, employee development, promotion, and transfer."
How twisted that a policy to prevent discrimination against women and others is used by some to stop forums for women.
For example: Wikipedia:Miscellany for deletion/User:Lightbreather/Kaffeeklatsch.
--Timeshifter (talk) 03:25, 21 February 2017 (UTC)
@Timeshifter: Just to note that Wikipedia:User:Lightbreather/Kaffeeklatsch exists although it's defunct. Doug Weller (talk) 11:56, 21 February 2017 (UTC)
I was referring to the deletion discussion about it. Where some people were trying to use the WMF non-discrimination policy in order to discriminate against the women trying to set up a forum for women to discuss Wikipedia-related things amongst themselves. --Timeshifter (talk) 01:26, 22 February 2017 (UTC)
@Timeshifter, I'm not sure why you would want such a group, but you may be interested in the discussion about creating a sub-forum for women on MetaFilter some time ago. There was also some discussion of "castle projects" (German: "Stammtisch") on Lightbreather's proposal. A Stammtisch could be "inclusive to anyone supportive of its goals, but could quickly remove anyone disrupting it." As a result, I started User:Neotarf/Stammtisch, also defunct. —Neotarf (talk) 20:42, 2 March 2017 (UTC)

Comments from BethNaught

This is an epic project and I hope that it can be well executed and have a significant impact on harassment. As an English Wikipedia administrator who spends a large proportion of their on-wiki time dealing with a single troll via edit filters and rangeblocks, I am particularly interested in the "Blocking" and "Detection" parts of the initiative. I would like to keep up to date with the project: will there be a newsletter to sign up to? Also, where can I find more details on the planned changes to the AbuseFilter extension? I hope that the scoping exercises will cover these very technical issues as well as general community policies. Thanks, BethNaught (talk) 19:09, 30 January 2017 (UTC)

Hi BethNaught, I'm really glad you're interested; we'll need to work with a lot of admins on this project. We're currently hiring people to work on the project -- the official start is March 1st, but as people join the team, we'll be creating more documentation and starting to make plans.
Community Tech has been doing some work on blocking tools over the past few months, based on a proposal on the 2015 Wishlist Survey and interest from the WMF Support and Safety team. The two features we've worked on are setting a cookie with each block (phab:T5233), and creating a Special:RangeContributions tool (phab:T145912). We're going to finish both of those pretty soon, and then that work will be continued as part of the new community health project.
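For readers who want a picture of what the cookie-block mechanism does, here is a minimal conceptual sketch in Python. It only illustrates the behaviour described in phab:T5233; the real implementation lives in MediaWiki's PHP code, and the cookie name, cap, and helper functions below are hypothetical:

    import time

    BLOCK_COOKIE = "blockID"       # hypothetical cookie name
    MAX_COOKIE_AGE = 24 * 3600     # hypothetical cap on cookie lifetime (one day)

    def set_block_cookie(cookies: dict, block_id: int, block_expiry: float) -> None:
        """On block, store a cookie whose lifetime mirrors the block length (capped)."""
        lifetime = min(block_expiry - time.time(), MAX_COOKIE_AGE)
        cookies[BLOCK_COOKIE] = {"value": block_id, "expires": time.time() + lifetime}

    def may_edit(cookies: dict) -> bool:
        """On an edit attempt, refuse the edit while an unexpired block cookie is present."""
        cookie = cookies.get(BLOCK_COOKIE)
        return cookie is None or cookie["expires"] <= time.time()

    # Example: the same browser session is refused even if the account or IP changes.
    session: dict = {}
    set_block_cookie(session, block_id=12345, block_expiry=time.time() + 3600)
    assert may_edit(session) is False

The point of the mechanism is that the block follows the browser rather than only the account or IP, which is what the wishlist proposal was asking for.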
Right now, the plans for feature development -- like changes to AbuseFilter -- are intentionally vague, because we want to partner with admins and others on English WP to figure out what needs to be improved. For now, bookmark this page -- there'll be a lot more updates coming soon. Let me know if you have more questions... -- DannyH (WMF) (talk) 19:55, 30 January 2017 (UTC)
@BethNaught: I can speak for enwiki that the administrators' newsletter will keep you informed :) Any major technical changes will also be announced in the Tech News. MusikAnimal (WMF) (talk) 23:38, 7 February 2017 (UTC)

Harassment is intrinsic to the WP system

Harassment, and more globally taking care of "community health", is a very important topic to cover and address. It is really time.

But from my point of view harassment is not due to the behaviour of some bad contributors, uneducated people, aggressive personalities, trolls, ... who should be blocked as soon as they are identified and then efficiently banned.

Harassment (which should globally be seen as any serious behaviour that contradicts the 4th pillar) is an unavoidable consequence of the working principles of the WP project, in which numerous people have to interact in *writing* when human communication is based on *visual* interaction. This is accentuated by the fact that they have to achieve a *common result* (the content of the article on which they discuss) and that there are no precise and defined rules of *decision* (the rule is that there should be a consensus, which means nothing).

These conflictual situations generate hot discussions that can only degenerate into conflicts (by the principle of the spiral of violence). In these circumstances, the solutions set up to restrain conflicts are:

  • discussions with a third party, which is good but not easy to manage and hardly effective, again due to the nature of the communication (*writing*)
  • coercion, which means failure but unfortunately is the only way of solving issues that works today.

In other words: Wikipedia has offered people a place to develop articles and discuss their content, but has not set up efficient mechanisms to govern and manage these discussions, stating that it is the community's business to manage them. Expecting that it would work that way was utopian (due to human nature). When people are put in a situation of pure chaos, it degenerates into conflict (a sort of struggle for life to get one's point across). And this no matter how much goodwill or good faith they have, or how smart they may be... And humans are just normal...

It should not be forgotten either that harassment also comes from "good" and well-established contributors (and sysops and institutions...), who, to "survive" in this system, band together to protect themselves and just become stronger. (And this is not a conspiracy theory -> it is an obvious sociological behaviour)...

Have a look at en:Stanford prison experiment and related experiments. That's what we face but in another context. (And here is an example illustrating this)

Not new: by the way, this is not new. Citizendium was launched based on the same assessment, and its solution was to force contributors to interact under their real identity, to give editorial decision power to some contributors, and to focus on and privilege expertise... It would be interesting to understand whether Citizendium failed because contributors don't feel motivated by a "project" when they can't fight anonymously and when there are experts to take the final decisions... That's not directly linked, but it's linked anyway.

Proposal: Experts in en:systems psychology should be involved to study the Wikipedia system, understand its mechanisms, identify its weaknesses and the origin of aggressiveness and harassment inside it, and suggest solutions.

Short-term remedies: in the short term, and before a deeper diagnosis can be made, I think the actions to take should be:

  • consciousness: all contributors should, one way or another, be informed or invited to become aware of the system in which they are "playing", the rules of such systems, the difficulty of making it work, and the uselessness and counterproductivity of "fighting/stress/aggressiveness" in such a system, ...
  • the highest standards should be expected (and, if needed, unfortunately enforced) in terms of respect in contributor interactions (both other people and other ideas should be respected), and therefore any kind of aggressiveness, from the smallest level, should be prevented (corrected, prevented & sanctioned).
  • practical proposals: 0RR - formalisation and standardisation of communication protocols - establishment of content reviewer committees, ...

Pluto2012 (talk) 10:20, 5 February 2017 (UTC)

NB: I fear I am too late. This is not a problem for engineers. It's a problem for sociologists. Good luck.

Smart comments above. Engineers and computer scientists cannot solve this problem alone. Psychologists and those in the social interaction academic fields should be at the forefront. 104.163.152.194 07:04, 28 February 2017 (UTC)
And the solution that is foreseen, using 'bots' and other 'automatic filters', is even worse. The real issue is simply denied. Instead of trying to educate people, thinking about the system and making it evolve, we are just going to "shoot" at and silence everybody who will not "play the game"... I don't feel at ease with this.
People just need to be listened to, understood and then informed. Here, they will be rejected. We are going to generate cyberterrorists... :-( Pluto2012 (talk) 20:37, 1 March 2017 (UTC)

LTA Knowledgebase and abuse documentation

Over at the English Wikipedia, there are talks (permalink) to implement an off-wiki tracking system for long-term abuse. The idea is that by hiding this perceivably sensitive information from the general public, we avoid BEANS issues where the abusers could use the info to adapt their M.O. and avoid detection. This seems similar to what WMF had in mind for the health initiative, so I thought I'd start a discussion here so we can discuss plans and see if it makes sense to team up.

As I understand it, the project for the English Wikipedia tool is being led by Samtar and Samwalton9. A demo for the tool can be seen here. I am personally quite impressed with the idea and the interface, but I imagine if we were to adapt it for the anti-harassment project we may need to broaden the scope (so it's not just LTA cases), add multi-project support and internationalization. That is something I'm sure Community Tech would be willing to help with, if we do decide we should join forces with the Sams. TBolliger (WMF) (Trevor) is the product manager for the health initiative project, and can offer some insight into the WMF plans.

So what does everyone think? The RfC on enwiki seems to be going well, so there at least is support for the idea. I know the health initiative is still in its early stages, but do we have an idea of what we would want in an off-wiki private tool? Does it align with what enwiki is working on? I don't think we have to decide right now, but I also don't want to hold back the enwiki project. MusikAnimal (WMF) (talk) 18:10, 8 February 2017 (UTC)

Hello! And thank you, Leon, for looping everyone in. I'm still getting my bearings here at the WMF and we're still assembling the rest of the team for the CHI, so I don't have anything more concrete than what's written about on the Community health initiative page and the embedded grant document. One large question we (WMF + wiki project communities and users) will need to work through will be finding an appropriate balance between transparency and privacy for potentially sensitive content. (For example, is a public talk page the most appropriate place to report and discuss incidents of harassment?) Your work on this LTA list is a great pilot program for the concept of hosting community related content off-wiki, and could potentially be expanded/connected to our efforts in the years to come.
I admire the initiative you two have shown and I will be watching your work. Please contact me if I can be helpful in any way. --Trevor Bolliger, WMF Product Manager 🗨 20:09, 8 February 2017 (UTC)
Jumping in here at a later date, it appears consensus is forming for some sort of improvement to the current LTA page setup, but that an off-wiki tool is not looking favourable. A private wiki (like the English Wikipedia Arbcom wiki) has been suggested a couple of times -- samtar talk or stalk 11:27, 20 February 2017 (UTC)
Thanks, Samtar. If visibility is really the only problem to solve (as opposed to searching, sorting, or filtering, or efficacy in data entry) then simply moving the existing templates and content to a private wiki seems to be an appropriate decision. Riffing off this decision — and this idea would need plenty of research and consultation — what if we built a mechanism to make a single wiki page private on an otherwise public wiki? This functionality already exists for some admin or checkuser features. — Trevor Bolliger, WMF Product Manager 🗨 01:32, 28 February 2017 (UTC)

Better mechanisms for normal editors too

I am not an administrator but have been an active editor on the English Wikipedia for over ten years. In recent months, my involvement on WikiProject Women in Red has led a number of editors to report serious cases of unjustified aggression and stalking to me. One of the main problems cited has been continued attacks by an administrator which has resulted in several of our most productive editors either leaving the EN Wiki completely or cutting back considerably on creative editing. From what I have seen, the editors concerned have been unable to take any effective action in their defence and other administrators have been reluctant to do much about it. Furthermore, while several editors have been following these developments carefully, with one exception, the administrators responsible do not appear until now to have been aware of the extent of the problem. I think it is therefore essential for editors to have reliable reporting mechanisms too so that they can communicate their problems without risking punitive consequences themselves (as has often been the case). In the Women in Red project, we have tried to attract and encourage women editors. It is heartbreaking to see that some have been constantly insulted or completely discouraged for minor editing errors. Further background can be seen under this item on the Women in Red talk page.--Ipigott (talk) 16:39, 26 February 2017 (UTC)

@Ipigott: As an English Wikipedia administrator, I'm obviously rather interested in hearing more - could you email me some more information and I'll see what sort of action could be taken -- samtar talk or stalk 19:19, 27 February 2017 (UTC)
Maybe it's time for another reminder from the late Kevin Gorman, who used to moderate the gender gap mailing list:

I get about twenty emails a week from women Wikipedians who don’t want to deal with any of the process on Wiki, because every arbitration committee case that has involved women in the last two years, has involved all of them being banned.—Kevin Gorman

That goes for Meta too. —Neotarf (talk) 00:36, 28 February 2017 (UTC)

Hi Ipigott. Thank you for your questions and for sharing your project's story. It really is heartbreaking. We hope the tools we build and decisions we make will help avoid the problems your project recently went through. No person should be harassed or bullied into reducing their enjoyment, safety, satisfaction, or pride of contributing to Wikipedia. All contributors (admins and non-admins) can be the victim of harassment and all victims deserve the same resources and respect to report their incident for investigation. And I wholeheartedly agree that the current reporting processes open victims to additional harassment. This is unacceptable.

As for admin unresponsiveness — one of our theories is that many admins do not participate in investigating conduct disputes because it is time consuming and difficult to track incidents across multiple pages. We hope to counteract this by building an interaction history feature so admins don't have to dig through diffs, histories, and talk pages. Likewise, we want to make some improvements to the dashboards that admins use to monitor incoming reports of harassment. But these are just tools — we also want to help equip our admins with the skills to better identify harassment and make more informed decisions in how to respond to both the victims and the harassers.
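As a rough illustration of the cross-referencing such an interaction-history feature would automate, the overlap between two users' recent edits can already be approximated against the public MediaWiki API. This is only a sketch, not the planned tool; the user names are placeholders, and real use would need paging and a descriptive User-Agent header:

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def recent_titles(user: str, limit: int = 500) -> set:
        """Fetch the titles of a user's most recent contributions via list=usercontribs."""
        params = {
            "action": "query",
            "list": "usercontribs",
            "ucuser": user,
            "uclimit": limit,
            "ucprop": "title|timestamp",
            "format": "json",
        }
        data = requests.get(API, params=params).json()
        return {c["title"] for c in data["query"]["usercontribs"]}

    # Pages both (placeholder) users edited recently -- the raw material an admin
    # currently has to assemble by hand from contribution lists and diffs.
    overlap = recent_titles("ExampleUserA") & recent_titles("ExampleUserB")
    print(sorted(overlap))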

Most importantly — all our work will be built on top of in-depth research into the current tools, workflows, and policies on English Wikipedia (at a minimum) as well as consultations with both affected projects and users like WikiProject Women in Red and administrators. As we learn more, our plans will change but the guiding principles and focus areas will remain the same. — Trevor Bolliger, WMF Product Manager 🗨 01:32, 28 February 2017 (UTC)

TBolliger (WMF): Thanks for these reassurances. I'm glad you agree with the need for improvements in the reporting process. You are also quite right in concluding that administrators seldom have the time or patience to go through long lists of past edits but at the moment without doing so it is impossible to draw up a meaningful history of abuse. I look forward to seeing whether your new tools will make it easier to overcome some of these constraints. I'm afraid we are faced with the choice of submitting complaints to ArbCom on the EN Wiki (with the risk of little or no action) or waiting to see if the health initiative can provide a more positive environment for overcoming the problems we have been facing. Perhaps you can provide some estimate of how long we will need to wait for the proposed tools to become accessible. Or whether there is any effective forum ready to examine our findings in the interim.--Ipigott (talk) 16:54, 28 February 2017 (UTC)

The WMF is working on solidifying our FY 2017-2018 plan so nothing is set in stone, but our current draft timeline can be read at Community health initiative#Schedule of work. The reporting system is currently scheduled for 2018 (which is certainly and unfortunately too far away to help your current problems) but if we hear during our consultations and conversations that the reporting system should be fast-tracked, we will rearrange our plans to better serve our communities.

As for your current problems, I do know the Trust and Safety team does review specific cases of harassment, and that they can be contacted through ca@wikimedia.org. — Trevor Bolliger, WMF Product Manager 🗨 18:09, 28 February 2017 (UTC)

Google's Jigsaw/Perspective project for monitoring harassment

In connection with the above, I happened to come across news of Google's recently launched Perspective technology as part of its Jigsaw safety project. In particular, I was interested to see that Google had analyzed a million annotations from the Wikipedia EN talk pages in order to study how harassment had been followed up. Apparently "only 18 percent of attackers received a warning or a block from moderators, so most harassment on the platform went unchecked." Has there been any collaboration with Google on this? It seems to me that it would be useful to draw on their analysis, possibly adapting the approach to identifying aggressors and preventing unwarranted abuse.--Ipigott (talk) 14:22, 3 March 2017 (UTC)

1,000,000 comments were rated by 10 judges...
That makes roughly one full year of work for each of them, assuming they did nothing but rate these. Poor guys...
This link is easy to rate as 'fake' ;) Pluto2012 (talk) 16:03, 3 March 2017 (UTC)
This is not fake. It has been mentioned on the WMF blog and widely reported in the press.--Ipigott (talk) 17:14, 3 March 2017 (UTC)
Indeed, "ten judges rated each edit" is not the same as "ten judges rated a million edits", although it may sound similar in English. For details of the methodology, the text of the study is here: [1]. There is also a more comprehensive writeup at the google blog: [2]Neotarf (talk) 20:49, 3 March 2017 (UTC)
I understood, but it is fake news or a lie...
1,000,000 edits / 10 judges = 100,000 edits each
To rate one edit, let's assume it takes 1 minute (you have to read and understand it)
100,000 minutes / 60 / 8 ≈ 208 working days, i.e. about a year of work.
Pluto2012 (talk) 06:11, 4 March 2017 (UTC)
Pluto2012: You're assuming that there were only 10 people working on the panel, reviewing every edit. According to the journal article, they had a panel of 4,053 people, which reduces the workload considerably. :) -- DannyH (WMF) (talk) 19:50, 7 March 2017 (UTC)
DannyH (WMF): I just read what is in the article: "Ten judges rated each edit to determine whether it contained a personal attack and to whom the attack was directed."
Neotarf first said that the sentence was unclear and that I was wrong to assume each edit was analysed by ten judges, when it should be read as ten judges doing the whole job. Now you tell me there were 4,000+. As stated, I just read what is written, and that is clearly impossible... Pluto2012 (talk) 04:54, 9 March 2017 (UTC)
Pluto2012: Yes, the sentence in the TechCrunch article was unclear. It should have said, "Each edit was rated by ten judges" [out of 4,000+] instead of "Ten judges rated each edit". There were 4000+ people on the panel.[3] -- DannyH (WMF) (talk) 19:24, 9 March 2017 (UTC)
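(For scale, taking the study's published figures at face value: about 4,000 raters, with each edit rated by roughly ten of them at perhaps one minute per rating. The total is 1,000,000 × 10 = 10,000,000 ratings, or about 2,500 ratings per rater, which works out to roughly 40 hours of rating each rather than a year.)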

Yes, a researcher here at the WMF was working with Jigsaw on this project. His findings can be found on Research:Detox. This machine-learning based API can only detect blatant harassment at the moment (e.g. "you are stupid" scores as 97% toxic whereas "you know nothing" scores 26%) so claiming that only 18% of harassment is mediated is not entirely accurate — it may be an even lower percentage. At the moment I don't think this exact measurement is worth monitoring — some situations can be self-resolved without the help of an admin, sometimes an aggressive comment is part of a healthy debate (e.g. "we're all being stupid" scores 92%) and there is the opportunity for false positives (e.g. "this constant vandalism is fucking annoying" scores a 97%.)
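For anyone who wants to reproduce spot checks like the ones above, the same score can be queried from the public Perspective API. A minimal sketch, assuming you have requested an API key; the endpoint and request shape follow Jigsaw's published v1alpha1 examples and may have changed since:

    import requests

    PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

    def toxicity_score(text: str, api_key: str) -> float:
        """Return the TOXICITY summary score (0.0 to 1.0) for a comment."""
        payload = {
            "comment": {"text": text},
            "languages": ["en"],
            "requestedAttributes": {"TOXICITY": {}},
        }
        resp = requests.post(PERSPECTIVE_URL, params={"key": api_key}, json=payload)
        resp.raise_for_status()
        return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

    # Example spot check (replace MY_KEY with a real key):
    # print(toxicity_score("you are stupid", MY_KEY))   # expected to score very high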

At the end of the day, we want to build tools that empower admins to make the correct decisions in conduct disputes. We are certainly going to explore whether using an AI like this makes these tools even more powerful. — Trevor Bolliger, WMF Product Manager 🗨 18:30, 3 March 2017 (UTC)

Apparently "only 18 percent of attackers received a warning or a block from moderators, so most harassment on the platform went unchecked." Yes, good catch, the assumption behind this project has been that admins are unable to detect blatant harassment on their own. What if there is something else going on?
Also, I notice that after I mentioned "undue emphasis on blocking", the comments on blocking were merely moved to a different section. No, I meant "blocking" as being the only tool in the playbook. I know it is the Wiki Way to find the biggest stick possible, and then find someone to hit with it, but what about *talking*? I have always found it ironic that there are five different levels of informational templates for vandals who can barely communicate above the level of chimpanzees typing randomly on a keyboard, but one admin acting alone has the authority to indef long-term good-faith contributors without any discussion, for any reason or for no reason. Why not simply remove the harassing comment? Why give it a platform at all? —Neotarf (talk) 21:30, 3 March 2017 (UTC)
"there is the opportunity for false positives (e.g. "this constant vandalism is f*...." A big "citation needed" there. If dropping the f-bomb is viewed by the WMF as a desirable manner for admins and employees to conduct themselves, and to defuse potentially inflammatory situations, maybe the next question should be whether this project team intends to be responsive to the needs and values of the editing community as a whole, or just expect to impose its own interpretations and explanations on marginalized groups, as it does not seem to be a very demographicly diverse group. —Neotarf (talk) 22:09, 3 March 2017 (UTC)
Would you agree with the idea that, in the end, having an AI automatically sanctioning harassment would be disastrous, or do you think this has to be nuanced? Pluto2012 (talk) 06:20, 4 March 2017 (UTC)
I would agree that any system that purely judges how aggressive a comment is and auto-blocks wouldn't work on any Wikipedia or any other wiki project. It's too open to both false positives and false negatives. But I wouldn't say 'disastrous' — there's room for exploring some preventative solutions. (EN Wikipedia and others already use some preventative anti-abuse features such as AbuseFilter, auto-proxy blocking, etc.) — Trevor Bolliger, WMF Product Manager 🗨 19:15, 7 March 2017 (UTC)
Allow me to insist that you try to get some support from experts in the social sciences.
You don't seem to understand what I mean, or the consequences of what you are building... Pluto2012 (talk) 04:56, 9 March 2017 (UTC)
As a social science researcher, I understand the social sciences are messy. Nothing is ever black and white and there is a whole lot of grey. Communities are always different and definitions and values are generally unique. Harassment, however, is universally toxic to communities. I personally was excited when I first read about this study. It is the intersection of topics very important to me: community, harassment, education equality, and I've always had a love for computer learning. I honestly spent a few hours reading about this tool and playing with it trying to sort out what might be false positives, or false negatives. I did find some potential holes (with terms generally directed toward women), but that does not mean it is all a loss. In social science research, sometimes you don't know what you'll find, and sometimes those findings surprise you. Even cooler is when some of those findings actually help other people. I'm hopeful this will help a whole lot of people. For now, how about we assume good faith and see how this impacts the community health for current and future contributors. Jackiekoerner (talk) 03:00, 10 March 2017 (UTC)
Of course: harassment is universally toxic. That's not the point. The point is that harassment is not spontaneous or coming from bad people joining the project; it is generated by the Wikipedia system acting on normal people.
The problem is therefore not how to fight it but how to prevent it from arising!
See e.g.: [2]
-> For each tool developed, the targeted people will find alternative ways to express their 'feelings'
-> Other forms of harassment (and feelings of harassment) will appear if you counter these
Pluto2012 (talk) 05:39, 5 October 2017 (UTC)

Some technical queries

Since this group seems only inclined to answer technical queries without regard to ethical and organizational implications, there are a few technical issues I have not been able to find an answer to; perhaps someone here has time to give some feedback on some of these. 1) I see auto-blocks mentioned above and just had a conversation about it a few days ago with a user who is blocked on enwiki: "... whenever I copy the contents from EnWP it triggers the autoblock if I am logged in. Which means I have to log out, cut the content to something like notepad and then log back in. It's too much effort..." The problem with this is that when the autoblock triggers, it also makes the name of the blocked user visible to anyone else at that IP, thus linking their real-life identity with their user name, which, by my reading, is a violation of the privacy policy. There was a fairly high-profile situation a few years ago where someone triggered an auto-block at a conference. One of the Wikipedia criticism sites is reporting that admins do not think the privacy policy applies to blocked users, but it seems like at the very least such users should be informed in advance that their IP will be linked to their user name. 2) Is it possible to turn off the extended edit information on the edit count feature on enwiki? Mine seems to be locked in. 3) The official website syriadirect.org/ for the Syria Direct news service is globally blocked, due to an edit filter for the nonexistent site adirect.org —Neotarf (talk) 00:19, 10 March 2017 (UTC)

English Wikipedia discussion regarding harassed administrators

I thought it might be interesting to those working on and following this initiative to follow a discussion I opened on the English Wikipedia regarding returning administrator rights after a user is harassed to the degree that they undergo a clean start on a fresh account. See here. Samwalton9 (talk) 08:51, 19 March 2017 (UTC)

Hi Sam, thanks for sharing this. It's certainly an interesting topic with a lot of insightful side conversations occurring. One of the goals of the Community Health Initiative is to empower more admins to be confident in their dispute resolution decisions. I've been thinking of this in two dimensions: motivation & ability. 'Ability' will manifest as training and tools so admins can make accurate and fair decisions. 'Motivation' is a little more difficult — it is learning why some admins never participate in dispute resolution and why some cases on ANI are entirely ignored, and providing resources to combat these reasons. Fear of harassment as retribution is definitely a hurdle to involvement, and worse, it is a disgraceful outcome for hard-working admins who are legitimately trying to make Wikipedia a healthier environment for collaboration. — Trevor Bolliger, WMF Product Manager 🗨 17:59, 20 March 2017 (UTC)
Hm. Some admins only care about content and not "community management", that's entirely normal. We have a problem if a user cares about handling hairy stuff such as w:en:WP:ANI but doesn't "manage" to; not if they don't care at all. Nemo 19:45, 21 March 2017 (UTC)
I believe you're absolutely correct. I suspect most (if not all) admins became admins because of their involvement in the content building/management. (Browsing a dozen or so RfAs reinforces this suspicion.) I don't expect all admins to participate in community management/moderation, but with 100,000+ active monthly contributors there is certainly the need for some people to perform this work. And those people should be equipped and prepared to be successful. — Trevor Bolliger, WMF Product Manager 🗨 21:11, 21 March 2017 (UTC)

Too technical maybe?

I want first to congratulate the authors of this proposal and I hope that it goes through.

My small contribution to its improvement:

I suspect that this is a computer expert driven initiative, hoping that some tools will help Wikimedia users to better detect harassment. These tools will definitely work... at first. So I support this proposal.

However, I believe (and I have felt this badly myself) that harassment can take place in many more covert ways. These cannot be dealt with by tools and software. I would really put much more effort into

  • making a diagnosis on each and every wiki
  • training administrators (yes, I saw that you propose this as part of the job of one person)
  • training for all users to better handle harassment.

--FocalPoint (talk) 12:32, 26 March 2017 (UTC)

Hello, FocalPoint, and thanks for sharing your perspective. I agree — software will only go so far. We view half of the initiative as being software development (thusly named Anti-Harassment Tools) and the equally important second half will be resources (named Policy Growth and Enforcement). This current wiki page is heavy on the software plans only because I've been working on it since January while SPoore_(WMF) (talk · contribs) just started two weeks ago. That content will grow as we develop our strategy. You may see some of the next steps of this on Wikipedia(s) first as we begin more proactive communication methods.
Could you please share a little more of your thoughts around what you mean by "making a diagnosis"? It's too vague for me to completely understand what you mean. Thank you! — Trevor Bolliger, WMF Product Manager 🗨 21:27, 27 March 2017 (UTC)

Hi Trevor,

making a diagnosis on each and every wiki

means:

  • Stage 1: a simple general diagnostic questionnaire - impersonal and with easy questions, example:
    • what do you like best when contributing?
    • do you participate in discussions?
    • have you ever felt uneasy when contributing to articles?
    • have you ever felt uneasy during discussions?
    • a bit more intrusive but still easy questions

It will probably be easier with multiple choice answers.

Look Trevor, I am no expert, but I know that when you want to diagnose whether kids live in a healthy family, psychologists ask them to make a drawing of the family; they do not ask whether their father or mother is violent.

  • Stage 2: A few weeks, or even months, after the first, issue a targeted questionnaire with harder questions:
    • have you ever seen any discussion that you believe is harassment?
    • Did anyone do anything about it?
    • have you ever been accused of harassment during the previous xx months?
    • have you felt harassed during the previous xx months?
  • What did you do about it?

etc.

Stage 3: Initiate on-wiki discussion

Stage 4: Create focus groups and study in person their reactions and interactions when discussing life on wiki.

I hope I gave you a rough idea of a process which will not only provide valuable information, but which will prepare people, making them more sensitive to community health issues (even before the "real thing" starts).

As far as el-wiki, my home wiki, is concerned, please see the discussion on the future of Wikipedia (use machine translation, it will be enough): el:Συζήτηση Βικιπαίδεια:Διαβούλευση στρατηγικής Wikimedia 2017. Already 6 users have supported the text entitled "A Wikipedia which is fun to contribute to", where we ask for an inclusive, open and healthy environment without harassment and without biting newcomers. Out of the total of 8 participants on the page, six support these thoughts.

With about 49-50 active editors, 6 is already 12% of contributors. A loud message.

The project proposed here is really important. --FocalPoint (talk) 21:06, 29 March 2017 (UTC)

Yes, I'll definitely have a read, thank you for sharing those links! And thank you for expanding on your suggestion. We already have a lot of background information from Research:Harassment_survey_2015 but that was primarily about understanding the landscape — our first steps will be to test the waters on how we can successfully effect change to build a healthier environment for all constructive participants. — Trevor Bolliger, WMF Product Manager 🗨 22:52, 29 March 2017 (UTC)

Psychologists

Have you considered hiring psychologists in order to determine what the root causes are, culturally, of the common harassments? Do you have a thorough understanding of what the common harassments consist of, and the contexts in which they arise? Have you considered offering counselling services to users who probably need it?

While some of the tools would definitely benefit from more love, many of the problems we have with harassment, especially on the larger projects, are cultural in nature, and thus should likely be addressed as such, or at the very least understood as such, first. How this varies across projects is also something you should be looking into, as that is going to be relevant to any new tools you do produce. -— Isarra 01:49, 28 April 2017 (UTC)

+1. Pluto2012 (talk) 04:33, 7 May 2017 (UTC)

"Civil rights" vs "social justice power play"

Re: "The project will not succeed if it’s seen as only a “social justice” power play." at Community input. Is there some reason this alt-right anti-Semitic dogwhistle was used in a grant application? Is there some reason it isn't just referred to as "civil rights"? This really jumped out at me when I read the grant application, that it shows such a contempt for justice and such a zero-sum-game approach to - of all things - writing an encyclopedia. Someone else questioned this on the Gender Gap mailing list as well, but received no response.—Neotarf (talk) 22:06, 28 April 2017 (UTC)

This is a direct effect of that word being thrown into the faces of those who engage in these kinds of topics with some regularity, I think. If I had written this, I would have been inclined to formulate it like that as well. However, you might be right that it is better to reword this fragment. —TheDJ (talkcontribs) 08:32, 4 May 2017 (UTC)

Getting rid of harassment reports by getting rid of the harassed

About the section on "potential measures of success": "Decrease the percentage of non-administrator users who report seeing harassment on Wikipedia, measured in a follow-up to the 2015 Harassment Survey." This kind of loophole may create more problems than it solves.

If your goal is just to get rid of harassment reports, the fastest way is to just get rid of the people who might be inclined to report harassment, that is, anyone who is not white, male, and heterosexual. Since many of the people who reported harassment in the survey said that they decreased their activity or left the project as a result of the harassment, they may already be gone. More left after they faced retaliation for speaking out, or witnessed retaliation against others. There are a few at the highest levels of the movement who have enough money to be unaffected by policies or who have the money to influence policies, but many professionals will no longer engage in these consultations unless it is face to face and they can see who they are talking to. More and more, such consultations occur in venues that are open only to those who are wealthy enough to be in a donor relationship with the foundation and so receive scholarships to attend. This avenue has been further narrowed since members of websites that are known to dox and publish hit pieces about Wikimedians have started showing up at such consultations. —Neotarf (talk) 23:22, 28 April 2017 (UTC)

I agree that changes in such a metric (relative number of self-reported "victims" of harassment) are extremely unlikely to provide any meaningful information about changes in the real world, because they can be caused by so many things, like: different ways to explain/define the ultra-generic term "harassment", different perceptions of harassment even if nothing has changed, the participant invitation method, willingness of users to alter the results because they like/dislike this initiative, retaliation from users who got blocked or have some other dissatisfaction, other selection biases.
I don't know how you're so sure that some of the groups you mention (like non-heterosexuals) are more harassed than others, but I imagine that, at least on some wikis, rather than being eliminated they're more likely to be discouraged by some entrenchment of non-NPOV views on some content which will disgust large sectors of the population, and this could alter the statistics.
In general, as several people noted on Research talk:Harassment survey 2015, the survey was especially susceptible to biases of all kinds. I'm not sure it's easy to manipulate the survey with actions, because it's impossible to predict how any change in reality would be reflected in the survey: an increase in harassment may discourage people from engaging in a survey; an elimination of harassers may make them flock to the next survey; blocked users may organise in some way to skew the results if they feel they have leverage or vice versa that the situation is getting worse for them. I think the results will just continue being meaningless.
There is certainly a risk that certain specific people or groups will be targeted even more strongly than they currently are, by people feeling empowered (or threatened) by this initiative, but I'd expect this to happen mostly for old grudges. Almost certainly someone will try to get rid of some old enemy in the name of the greater good, but this will happen under the radar. --Nemo 13:21, 2 June 2017 (UTC)

English Wikipedia

Thanks for specifying what is specific to the English Wikipedia. I hope the needs of (some parts of) the English Wikipedia will not dominate the project. --Nemo 12:55, 14 June 2017 (UTC)

@Nemo bis: You're welcome, and thank you for participating. English Wikipedia presents unique complexities due to its size and history, but at the end of the day we want all the tools we build to work effectively for all wikis who desire to use them. — Trevor Bolliger, WMF Product Manager 🗨 17:26, 14 June 2017 (UTC)

Exploring how the AbuseFilter can be used to combat harassment

The AbuseFilter is a feature that evaluates every submitted edit, along with other logged actions, and checks them against community-defined rules. If a filter is triggered, the edit may be rejected, tagged, or logged; the filter can also show a warning message and/or revoke the user’s autoconfirmed status.

Currently there are 166 active filters on English Wikipedia, 152 active filters on German Wikipedia, and 73 active filters here on Meta. One example from English Wikipedia is filter #80, “Link spamming”, which identifies non-autoconfirmed users who have added external links to three or more mainspace pages within a 20-minute period. When triggered, it displays this warning to the user but allows them to save their changes. It also tags the edit with ‘possible link spam’ for future review. It is triggered a dozen times every day, and it appears that most offending users are ultimately blocked for spam.
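
To make the mechanics above concrete, here is a minimal sketch of the condition described for filter #80. It is written in Python rather than in the AbuseFilter rule language the real filter uses, and every field name (user_groups, added_external_links, page_title, and so on) as well as the bookkeeping of a user's recent edits are assumptions introduced purely for illustration, not actual AbuseFilter variables or WMF code.

    # Illustrative approximation only: the real filter runs inside MediaWiki's
    # AbuseFilter extension and is written in its own rule language.
    from datetime import timedelta

    THROTTLE_WINDOW = timedelta(minutes=20)   # "within a 20 minute period"
    THROTTLE_PAGES = 3                        # "three or more mainspace pages"

    def link_spam_filter(edit, recent_link_edits):
        """Return True if this edit should trip the filter (warn and tag,
        but still allow the save). `edit` and `recent_link_edits` are
        hypothetical dicts/lists kept by the caller, not real AbuseFilter
        data structures."""
        if "autoconfirmed" in edit["user_groups"]:
            return False                      # only non-autoconfirmed users
        if edit["namespace"] != 0:
            return False                      # mainspace edits only
        if not edit["added_external_links"]:
            return False                      # must actually add external links

        # Count distinct mainspace pages the user added links to in the window.
        cutoff = edit["timestamp"] - THROTTLE_WINDOW
        pages = {e["page_title"] for e in recent_link_edits
                 if e["timestamp"] >= cutoff}
        pages.add(edit["page_title"])
        return len(pages) >= THROTTLE_PAGES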

AbuseFilter is a powerful tool and we believe it can be extended to handle more user conduct issues. The Anti-Harassment Tools software development team is looking into three major areas:


1. Improving its performance so more filters can run per edit

We want to make the AbuseFilter extension faster so more filters can be enabled without having to disable other useful filters. We’re investigating its current performance in task T161059. Once we better understand how it performs today, we’ll create a plan to make it faster.


2. Evaluating the design and effectiveness of the warning messages

There is a filter on English Wikipedia, #50 (“Shouting”), which warns when an unconfirmed user makes an edit to a mainspace article consisting solely of capital letters. When the filter is tripped, it displays a warning message to the user above the edit window:

(Screenshot of the warning message, from en:MediaWiki:Abusefilter-warning-shouting. Each filter can specify a custom message to display.)


These messages help dissuade users from making harmful edits. Sometimes requiring a user to take a brief pause is all it takes to avoid an uncivil incident.
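
As a rough illustration of the trigger condition (not the actual filter #50, which is written in the AbuseFilter rule language), a check like the one below captures the idea; the minimum-length threshold is an assumption added here so that short acronyms would not trip it.

    # Illustrative sketch only; filter #50 itself lives on English Wikipedia
    # and is written in the AbuseFilter rule language.
    def is_shouting(added_text, user_groups, namespace):
        """Hypothetical check: an unconfirmed user's mainspace edit whose
        added text consists solely of capital letters."""
        if "autoconfirmed" in user_groups or namespace != 0:
            return False
        letters = [c for c in added_text if c.isalpha()]
        # The length threshold is an assumption, so short acronyms don't count.
        return len(letters) >= 10 and all(c.isupper() for c in letters)

    # Example: is_shouting("THIS IS AN OUTRAGE!!!", [], 0) returns True.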

We think the warning function is incredibly important, but we are curious whether the presentation could be more effective. We’d like to work with any interested users to design a few variations so we can determine which placement (above the edit area, below it, as a pop-up, etc.), visuals (icons, colors, font weights, etc.), and text most effectively convey the intended message for each warning. Let us know if you have any ideas or if you’d like to participate!


3. Adding new functionality so more intricate filters can be crafted.

We’ve already received dozens of suggestions for functionality to add to AbuseFilter, but we need your help to winnow this list so we can effectively build filters that help combat harassment.

In order to do this, we need to know what types of filters are already successful at logging, warning, and preventing harassment. Which filters do you think are already effective? If you wanted to create a filter that logged, warned, or prevented harassing comments, what would it be? And what functionality would you add to AbuseFilter? Join our discussion at Talk:Community health initiative/AbuseFilter.

Thank you, and see you at the discussion!

— The Anti-Harassment Tools team (posted by Trevor Bolliger, WMF Product Manager 🗨 23:07, 21 June 2017 (UTC))

Changes we are making to the Echo notifications blacklist before release & Release strategy and post-release analysis

Hello;

I've posted #Changes we are making to the blacklist before release and #Release strategy and post-release analysis for those interested in our Echo notifications blacklist feature. Feedback appreciated! — Trevor Bolliger, WMF Product Manager 🗨 18:34, 23 June 2017 (UTC)

Combat harassment?

For the record:

it is really a pity that the people in charge of this project, nicely named the "community health initiative", will use aggression to prevent aggression.

There is no harassment on Wikipedia. There are frustrations and misunderstandings due to poor communication (no body language) and the poor definition of behavioural rules, which generate aggression, one form of which is harassment.

As I explained a few months ago, I think this project lacks psychologists or specialists in the social sciences. I think the leader(s) do(es)n't have the human competences to manage and lead this project, and they should report this concern to their own management.

With your bots and filters you will create more damage than "community health" or sanity.

Pluto2012 (talk) 10:36, 12 July 2017 (UTC)

@Pluto2012: You are arguing that there is harassment on Wikipedia in the form of aggressiveness (which is supported by evidence), so this general argument is not persuasive. I also don't think there's a lot of evidence to suggest that this aggressiveness stems from legitimate misunderstandings. In terms of whether the right kind of expertise is present on this initiative, I believe that folks with the right combination of on-wiki experience, tool-building skills, awareness of research in this area, and the ability to communicate with contributors are well suited to it. Expertise in psychology could be useful, but probably isn't strictly necessary to develop responsible and community-supported approaches to addressing the problems reported by Wikimedia communities. I JethroBT (talk) 21:55, 15 September 2017 (UTC)

Our goals through September 2017

I have two updates to share about the WMF’s Anti-Harassment Tools team. The first (and certainly the most exciting!) is that our team is fully staffed to five people. Our developers, David and Dayllan, joined over the past month. You can read about our backgrounds here.

We’re all excited to start building some software to help you better facilitate dispute resolution. Our second update is that we have set our quarterly goals for the months of July-September 2017 at mw:Wikimedia Audiences/2017-18 Q1 Goals#Community Tech. Highlights include:

I invite you to read our goals and participate in the discussions occurring here, or on the relevant talk pages.

Best,

Trevor Bolliger, WMF Product Manager 🗨 20:31, 24 July 2017 (UTC)

Self-defense

Even before "policing", I like projects which help self-defense. http://femtechnet.org/csov/do-better/ seems ok, focusing on the most common mistakes people make which put them at real risk (though among the resources they link I only know https://ssd.eff.org/ ). --Nemo 06:31, 11 August 2017 (UTC)

I like these projects too. I think there's room for a "security/privacy/harassment check-up" feature which walks users through their preferences and more clearly explains the trade-offs of different settings. Additionally, I think our Mute and user page protection features would help in this regard. — Trevor Bolliger, WMF Product Manager 🗨 20:04, 12 August 2017 (UTC)

Update and request for feedback about User Mute features

Hello Wikimedians,

The Anti-harassment Tool team invites you to check out the new User Mute features under development and to give us feedback.

The team is building software that empowers contributors and administrators to make timely, informed decisions when harassment occurs.

With community input, the team will be introducing several User Mute features to allow one user to prohibit another specific user from interacting with them. These features equip individual users with tools to curb harassment that they may be experiencing.

The current notification and email preferences are all-or-nothing. These mute features will allow users to receive purposeful communication while ignoring non-constructive or harassing communication.

Notifications mute

Tracked in Phabricator: task T164542

The notifications mute feature lets an individual user stop unwelcome on-wiki Echo notifications from another user. At the bottom of the "Notifications" tab of user preferences, a user can mute on-site Echo notifications from individual users by typing their usernames into the box.

The Echo notifications mute feature is currently live on Meta Wiki and will be released on all Echo-enabled wikis on August 28, 2017.

Try out the feature and tell us how well it is working for you and your wiki community, suggest improvements to the feature or documentation, and let us know if you have questions about how to use it, at Talk:Community health initiative/User Mute features.
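
For readers curious how the concept fits together, here is a minimal sketch, assuming hypothetical notification records with a "sender" field. It is not the Echo extension's actual code, only an illustration of the idea that delivery is skipped when the sender appears on the recipient's mute list.

    # Illustrative sketch only, not the Echo extension's implementation.
    def filter_notifications(pending, mute_list):
        """Drop notifications whose sender is on the recipient's mute list.
        `pending` is a list of hypothetical dicts with a "sender" key."""
        muted = set(mute_list)
        return [n for n in pending if n["sender"] not in muted]

    # Example: a user who muted "ExampleVandal" still receives other notices.
    # filter_notifications([{"sender": "ExampleVandal"}, {"sender": "Friend"}],
    #                      ["ExampleVandal"])  ->  [{"sender": "Friend"}]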

Email Mute list

Soon the Anti-harassment tool team will begin working on a feature that allows one user to stop a specific user from sending them email through Wikimedia's Special:EmailUser. The Email Mute list will be placed in the 'Email options' section of the 'User profile' tab of user preferences. It will not be connected to the Notifications Mute list; it will be an entirely independent list.

This feature is planned to be released to all Wikimedia wikis by the end of September 2017.

For more information, see Community health initiative/Special:EmailUser Mute.

Let us know your ideas about this feature.

Open questions about user mute features

See Community health initiative/User Mute features for more details about the user mute tools.

Community input is needed in order to make these user mute features useful for individuals and their wiki communities.

Join the discussion at Talk:Community health initiative/User Mute features, or, if you want to share your thoughts privately, contact the Anti-harassment tool team by email.

For the Anti-harassment tool team, SPoore (WMF) (talk) , Community Advocate, Community health initiative (talk) 20:25, 28 August 2017 (UTC)

Community health: Definition needed

Please define the term "community health". It's not at all clear what this is. If you could cite relevant science, that would be helpful. Thank you. --MatthiasGutfeldt (talk) 13:59, 11 September 2017 (UTC)

Hello MatthiasGutfeldt, the way that Wikimedia uses the phrase “community health” is explained on this page. It mentions the first times the phrase was used, around 2009, including the name of a task force for the Wikimedia Strategy 2010, the Community health task force. Since then the term has been used when studying whether there is a good working environment in the Wikimedia community. See the Community health workshop presentation slides for another explanation of the term.
So you can see that the term was adopted for this initiative based on prior use. But as far as I’m concerned, if the word is too confusing or does not translate well from English into another language then another term can be substituted that conveys a similar meaning. Currently, there is a discussion on dewiki about the use of the phrase.
I’m interested in knowing your thoughts about the use of the term. You can respond here on meta or the dewiki. SPoore (WMF) (talk) , Community Advocate, Community health initiative (talk) 23:47, 11 September 2017 (UTC)

Focus on deviance

This project has a serious problem: it seems to focus primarily on deviant behavior. My experience is, however, that the gravest incidents of harassment come from respected and experienced members of the communities, not seldom admins. Tools for blocking etc. will be of little use in these cases, as they tend to be applied against people at the margins of the communities. A "community health project" should take a critical stance toward the norms and mores that are dominant in the cores of the communities, not just try to make the exclusion of problematic outsiders more effective. This means it should also deal with harassment coming from admins and regular users (which can hardly be addressed by exclusion). If the project neglects the core users' part in harassment, it will either fail or have negative outcomes.

In a German Wikipedia discussion, it became clear that the concept of health is very problematic, for similar reasons. There is already a tendency to label "disturbers" as pathological, which usually will not be accepted if the targets of such attacks are regular and experienced users. But if the targets are users with little or bad reputation, such attacks are more frequent and often hardly criticized. We should not equate good behavior with "healthy" behavior and bad or unwanted behavior with illness.--Mautpreller (talk) 14:48, 13 September 2017 (UTC)

Honestly, no matter what wording you pick, you will probably not cover everything appropriately, and definitely not across multiple languages. I note that health has a very broad scope: fitness is health too, and food can be about health. If the German wiki calls some people's behavior pathological, then that seems like a problem, but it's not really connected to this choice of wording, in my personal opinion. —TheDJ (talkcontribs) 20:12, 13 September 2017 (UTC)
@Mautpreller: You're right, we need to address all forms of harassment that are occurring and be able to respond to new forms that may arise in the future. This includes blatant harassment (e.g. personal insults and universally unacceptable insults), harassment from newcomers who are acclimating to the encyclopedia's content standards, and harassment from seasoned editors who've grown tired of low-quality edits and vandalism. (Yes, this is a reductive list, intentionally so for brevity.)
Personally, I think the biggest problem we need to solve is how to properly sanction highly productive editors while retaining their productivity. Full site blocks are extreme; what other low- or mid-level punitive responses can we build? Our team tries not to villainize users who need to be sanctioned: collaboratively building an encyclopedia is hard work, and emotions can get the best of any of us. How can we create an environment where incidents of incivility are opportunities for learning and self-improvement? — Trevor Bolliger, WMF Product Manager 🗨 22:26, 13 September 2017 (UTC)
H'm. "Our team tries not to villainize users who need to be sanctioned", that is good. However, I definitely see a problem that this might occur even against your will, simply because of the issues of power asymmetry and social dynamics. Moreover, I am not so sure that I like the idea that "an environment where incidents of incivility are opportunities for learning and self-improvement" should be created. Think of Jimmy Wales. He often uses incivility as a provocation. I can't imagine that he will use it as an "opportunity for learning and self-improvement". --Mautpreller (talk) 09:45, 14 September 2017 (UTC)
Hello Mautpreller, thank you for raising the issue of power asymmetry. As the Community health initiative designs solutions to address harassment and conflict resolution, it is important to consider the social dynamics of the community. Going back to your first statement, it is true that to be successful this Community Health Initiative needs to look for ways to support people at the margins of the community. As a Community Advocate, my job is to make sure all stakeholders are considered, including the marginalized people who are not well represented on Wikimedia Foundation wikis today. To do this, I'm arranging for the Anti-harassment tools team members to speak to active and less active contributors, long-term contributors and newer ones, and also community organizers who are attempting to find new contributors from less well represented groups. And, as far as is feasible, the Anti-harassment tools team is speaking to people at the margins of the community, too. We are holding private one-on-one conversations, group interviews, surveys, and formal and informal community consultations (both on and off wiki) in order to learn from many different types of stakeholders. We know that there are many individuals and groups in many different language wikis that need to be considered. The team is committed to expanding our reach as much as we practically can. It is challenging work because there is no feasible way to repeatedly hold large-scale, meaningful, multilingual conversations on hundreds of wikis. So, we greatly appreciate you finding us on meta and sharing your thoughts with the international Wikimedia movement.
I'm following the German Wikipedia discussion about the Community health initiative with Christel Steigenberger's assistance. She will update me next week about the discussions happening there. The phrase "community health" might not work well in some wiki communities because of preexisting cultural interpretations of the words. In those communities, an alternative name for the initiative can be discussed and agreed on.
Mautpreller, your concerns are reasonable and I'm glad you are sharing them now. Our team doesn't want to build tools that will be used to make the marginalization of some groups of new users worse than it is now. With good communication with all stakeholders, our team aims to foster a more welcoming editing environment for a more diverse community. Please continue to join our discussions (on meta and your home wiki) and invite others to participate, too. SPoore (WMF) (talk) , Community Advocate, Community health initiative (talk) 14:20, 19 September 2017 (UTC)
SPoore (WMF) just a small note concerning your last paragraph: marginalization isn't necessarily restricted to new users. --HHill (talk) 12:39, 20 September 2017 (UTC)

Translation tags

Hey folks, I just made a few changes to the translation markup that were causing some of the links to other pages to not work properly. Feel free to adjust the names for the <tvar|> tags if needed, as I just set some names that I thought made sense for the links. I JethroBT (talk) 21:39, 15 September 2017 (UTC)

Help us decide the best designs for the Interaction Timeline feature

Hello all! In the coming months the Anti-Harassment Tools team plans to build a feature called the Interaction Timeline, which we hope will allow users to better investigate user conduct disputes. In short, the feature will display, in a chronological timeline, all edits by two users on the pages where both have contributed. We think the Timeline will help you evaluate conduct disputes more efficiently, resulting in more informed, confident decisions on how to respond.
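
As a sketch of the concept only (not the eventual implementation, which the team has yet to build), and assuming hypothetical edit records with "page", "timestamp", and "user" fields, the Timeline boils down to keeping the edits on pages both users touched and ordering them chronologically:

    # Illustrative sketch only; the real feature may work quite differently.
    def interaction_timeline(edits_a, edits_b):
        """Merge two users' edits on their shared pages into one
        chronological list. Each edit is a hypothetical dict with
        "page", "timestamp", and "user" keys."""
        shared = {e["page"] for e in edits_a} & {e["page"] for e in edits_b}
        merged = [e for e in edits_a + edits_b if e["page"] in shared]
        return sorted(merged, key=lambda e: e["timestamp"])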

But — we need your help! I’ve created two designs to illustrate our concept and we have quite a few open questions which we need your input to answer. Please read about the feature and see the wireframes at Community health initiative/Interaction Timeline and join us at the talk page!

Thank you, — CSinders (WMF) (talk) 19:48, 3 October 2017 (UTC)

Anti-Harassment Tools quarterly update

Happy October, everyone! I'd like to share a quick summary of what the Anti-Harassment Tools team accomplished over the past quarter (and our first full quarter as a team!) as well as what's currently on the docket through December. Our Q1 goals and Q2 goals are on wiki, for those who don't want emoji and/or commentary.

Q1 summary

📊 Our primary metric for measuring our impact this year is "admin confidence in resolving disputes." This quarter we defined it, measured it, and are discussing it on wiki. 69.2% of English Wikipedia admins report that they can recognize harassment, while only 39.3% believe they have the skills and tools to intervene or stop harassment, and only 35.9% agree that Wikipedia has provided them with enough resources. There's definitely room for improvement!

🗣 We helped SuSa prepare a qualitative research methodology for evaluating Administrator Noticeboards on Wikipedia.

⏱ We added performance measurements for AbuseFilter and fixed several bugs. This work is continuing into Q2.

⚖️ We've begun on-wiki discussions about Interaction Timeline wireframes. This tool should make user conduct investigations faster and more accurate.

🤚 We've begun an on-wiki discussion about productizing per-page blocks and other ways to enforce editing restrictions. We're looking to build appropriate tools that keep rude yet productive users productive (but no longer rude).

🤐 For Muting features, we've finished & released Notifications Mute to all wikis and Direct Email Mute to Meta Wiki, with plans to release to all wikis by the end of October.

Q2 goals

⚖️ Our primary project for the rest of the calendar year will be the Interaction Timeline feature. We plan to have a first version released before January.

🤚 Let's give them something to talk about: blocking! We are going to consult with Wikimedians about the shortcomings in MediaWiki’s current blocking functionality in order to determine which blocking tools (including sockpuppet, per-page, and edit throttling) our team should build in the coming quarters.

🤐 We'll decide, build, and release the ability for users to restrict which user groups can send them direct emails.

📊 Now that we know the actual performance impact of AbuseFilter, we are going to discuss raising the filter ceiling.

🤖 We're going to evaluate ProcseeBot, the cleverly named tool that blocks open proxies.

💬 Led by our Community Advocate Sydney Poore, we want to establish communication guidelines and a cadence which encourage active, constructive participation between Wikimedians and the Anti-Harassment Tools team through the entire product development cycle (pre- and post-release).

Feedback, please!

To make sure our goals and priorities are on track, we'd love to hear if there are any concerns, questions, or opportunities we may have missed. Shoot us an email directly if you'd like to chat privately. Otherwise, we look forward to seeing you participate in our many on-wiki discussions over the coming months. Thank you!

— The Anti-Harassment Tools team (Caroline, David, Dayllan, Sydney, & Trevor) (posted by Trevor Bolliger, WMF Product Manager 🗨 20:53, 4 October 2017 (UTC))

Submit your ideas for Anti-Harassment Tools in the 2017 Wishlist Survey

The WMF's Anti-Harassment Tools team is hard at work on building the Interaction Timeline and researching improvements to Blocking tools. We'll have more to share about both of these in the coming weeks, but for now we'd like to invite you to submit requests to the 2017 Community Wishlist in the Anti-harassment category: 2017 Community Wishlist Survey/Anti-harassment. Your proposals, comments, and votes will help us prioritize our work and identify new solutions!

Thank you!

Trevor Bolliger, WMF Product Manager 🗨 23:57, 6 November 2017 (UTC)

Implicit bias study grant proposal

FYI Grants:Project/JackieKoerner/Investigating the Impact of Implicit Bias on Wikipedia, a proposed qualitative study of implicit bias on Wikipedia. Very close to final decision, with committee comments noted on talk page. I imagine it would be helpful to have some feedback on how such a project would fit into this initiative. (not watching, please {{ping}}) czar 20:10, 24 November 2017 (UTC)
