What Wikimedia project(s) and specific areas will you be evaluating?
Is this project measuring a specific space on a project (e.g. deletion discussions), or the project as a whole?
All English Wikipedia talk pages (to start with).
Describe your idea. How might it be implemented?
The system will be based on the work “Building a rich conversation corpus from Wikipedia Talk pages,” presented at the Wikimedia Research Showcase in June 2018. That project builds a pipeline that records the user actions of adding, deleting, modifying, and restoring content on any talk page, derived from the Wikipedia history data dump.
Building upon this, we process every new revision and score the likelihood that the utterance is toxic using the Perspective API, a machine-learning model that identifies toxicity in text. This tool has provided significant help to the New York Times' comment-moderation process, and we believe it can benefit the Wikipedia community as well. Instead of using the API, we could also consider building a model on Jigsaw's public Kaggle toxicity dataset, but the model quality would likely be worse.
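To make the scoring step concrete, here is a minimal sketch of how a revision's text could be scored. The endpoint URL and request/response shapes follow the public Perspective `commentanalyzer` v1alpha1 API; the helper names, the 0.8 threshold, and the surrounding pipeline code are illustrative assumptions, not part of the proposal itself.

```python
# Sketch: build a Perspective API TOXICITY request for one talk-page utterance
# and pull the score out of the response. The HTTP call itself (which needs an
# API key and network access) is left to the surrounding pipeline code.
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_analyze_request(utterance: str) -> dict:
    """JSON body for a Perspective API request scoring TOXICITY only."""
    return {
        "comment": {"text": utterance},
        "requestedAttributes": {"TOXICITY": {}},
        "doNotStore": True,  # ask the API not to retain talk-page text
    }

def extract_toxicity(response: dict) -> float:
    """Summary TOXICITY probability (0..1) from an API response dict."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def is_likely_toxic(response: dict, threshold: float = 0.8) -> bool:
    """Flag a revision for moderator review above an (assumed) threshold."""
    return extract_toxicity(response) >= threshold
```

The pipeline would POST `build_analyze_request(...)` to `ANALYZE_URL` (with an API key) for each new revision, then store `extract_toxicity(...)` alongside the revision record so moderators can sort or filter by score.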
Are there experienced Wikimedians who can help implement this project?
If applicable, please list groups or usernames of individuals who you can work with on this project, and what kind of work they will do.
The outcome of the project will be a moderation assistant for the community. Its success could be measured in three ways:
- The fraction of personal attacks that are identified, and the reduction in how long they remain publicly visible (e.g. following the methodology of https://arxiv.org/abs/1610.08914).
- Interviews with community members, compared against previous data, to assess whether harassment is moderated more effectively with this system.
- A/B testing of the system, to see whether machine-provided toxicity scores help with moderation.
How would your measurement idea help your community make better decisions?
After you are finished measuring or evaluating your Wikimedia project, how do you expect that information to be used to benefit the project?
The system can provide evidence for administrators when they make decisions about reported cases of online harassment. It can also help with wiki-hounding, since users reporting such cases can easily search for evidence to help the community reach a judgement.
Do you think you can implement this idea? What support do you need?
Do you need people with specific skills to complete this idea? Are there any financial needs for this project? If you can’t implement this project, can you scale down your project so it is doable?
Yes. We have already made a lot of progress in an open-source project hosted on GitHub. To implement this proposal, we would need support from Wikimedia for access to the database's streaming API, as well as a pub-sub feed for suppressed revisions, so that removals of personally identifiable information (PII) can be propagated to any additional databases we use.
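As a sketch of the streaming ingestion, the public Wikimedia EventStreams feed (`https://stream.wikimedia.org/v2/stream/recentchange`) already exposes recent changes as server-sent events; the filter below shows how talk-page edits on English Wikipedia could be selected from it. The field names (`wiki`, `type`, `namespace`) match recentchange events, but the function name and the downstream scoring hook are illustrative assumptions.

```python
# Sketch: select English Wikipedia talk-page edits from recentchange events.
# MediaWiki talk namespaces are the positive odd-numbered ones (1 = Talk,
# 3 = User talk, 5 = Wikipedia talk, ...).
def is_talk_page_edit(event: dict, wiki: str = "enwiki") -> bool:
    """True for edit events in a talk namespace of the given wiki."""
    ns = event.get("namespace", 0)
    return (
        event.get("wiki") == wiki
        and event.get("type") == "edit"
        and ns > 0          # exclude special/virtual namespaces
        and ns % 2 == 1     # talk namespaces are odd
    )

# Consuming the stream itself (requires the `sseclient-py` package and network
# access) might look like:
#
#   import json, requests, sseclient
#   resp = requests.get("https://stream.wikimedia.org/v2/stream/recentchange",
#                       stream=True)
#   for msg in sseclient.SSEClient(resp).events():
#       event = json.loads(msg.data)
#       if is_talk_page_edit(event):
#           ...  # fetch the revision diff and score it for toxicity
```

Note this public feed does not carry suppression events, which is why the pub-sub feed for suppressed revisions requested above is still needed.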
About the idea creator
A researcher and engineer interested in online-harassment issues, who has been working with Wikipedia data for a year.
- Volunteer: Provide support, occasional coding, product clarification, and whitelisting for Perspective API access. Iislucas (talk) 15:58, 9 August 2018 (UTC)
- Developer: I would like to help implement the system. Vegetable68 (talk) 16:07, 9 August 2018 (UTC)
- Based on a lot of existing tools, this is very doable, and I'd be happy to help too. Iislucas (talk) 15:57, 9 August 2018 (UTC)
- I think this is very important! Tabarak yta (talk) 22:45, 9 August 2018 (UTC)
Expand your idea
Would a grant from the Wikimedia Foundation help make your idea happen? You can expand this idea into a grant proposal.
No funding needed?
Does your idea not require funding, but you're not sure what to do next? Not sure how to start a proposal on your local project that needs consensus? Contact Chris Schilling on-wiki at I JethroBT (WMF) (talk · contribs) or via e-mail at cschilling@wikimedia.org for help!