Research:Wikihounding and Machine Learning Analysis
This page in a nutshell: In our analysis of the AN/I noticeboard, we found several hundred threads in the AN/I archive that made reference to WikiHounding. However, an analysis of these threads, focusing on a subset where some resolution was apparently achieved, determined that allegations of WikiHounding reported to AN/I are rarely clear-cut or straightforward. As a result, this dataset is not a good source of labelled training data for machine learning analysis, or for collecting representative descriptive statistics on the prevalence or nature of WikiHounding on English Wikipedia.
Participants from the Anti-Harassment Tools team and the Research Team at the Wikimedia Foundation are exploring the creation of a machine learning model of the harassment phenomenon known as wikihounding.
We are focusing on English Wikipedia AN/I cases that refer to wikihounding and/or wikistalking in order to create a labeled training dataset for this model, based on community-determined instances of wikihounding. The AN/I archive is not a well-structured dataset, but it is open and accessible for qualitative analysis.
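A first pass over the AN/I archives could identify candidate threads with a simple text scan. The sketch below is illustrative only: it assumes the archive page wikitext has already been fetched (the `sample` string is a hypothetical fragment), splits it into threads on level-2 headings, and returns the headings of threads that mention hounding or stalking.

```python
import re

def find_hounding_threads(archive_wikitext):
    """Split an AN/I archive page into threads (== Heading == sections)
    and return the headings of threads mentioning hounding/stalking."""
    # Split on level-2 headings, keeping the heading text via the capture group.
    parts = re.split(r"^==\s*(.*?)\s*==\s*$", archive_wikitext, flags=re.M)
    # parts = [preamble, heading1, body1, heading2, body2, ...]
    hits = []
    for heading, body in zip(parts[1::2], parts[2::2]):
        if re.search(r"wiki\s*-?\s*(hound|stalk)", body, flags=re.I):
            hits.append(heading)
    return hits

# Hypothetical archive fragment for illustration.
sample = """
== Disruption by ExampleUser ==
ExampleUser has been wikihounding me across several articles. ~~~~
== Unrelated report ==
Nothing to see here.
"""
print(find_hounding_threads(sample))  # ['Disruption by ExampleUser']
```

A keyword scan like this over-matches (threads that merely quote the policy, or use "stalking" loosely), which is why the threads still need manual reading before labeling.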
Background
Wikihounding as a phenomenon is both quantitative and qualitative. On one hand, every interaction inside a digital space has a quantitative aspect to it: every comment, revert, etc. is a data point. On the other, determining whether a particular interaction between two or more editors constitutes wikihounding is an inherently qualitative, case-by-case evaluation, requiring investigative research, argumentation, and policy analysis by the editors (often, but not always, administrators) who deliver the final 'verdict' and decide the outcome of the incident report. By comparing data points across wikihounding cases and reading a sample of the cases, we intend to establish a baseline for the features that wikihounding cases actually share.
Wikihounding, as defined by the Harassment policy on en:wp, currently has a fairly loose definition:
“the singling out of one or more editors, joining discussions on multiple pages or topics they may edit or multiple debates where they contribute, to repeatedly confront or inhibit their work. This is with an apparent aim of creating irritation, annoyance or distress to the other editor. Wikihounding usually involves following the target from place to place on Wikipedia.”
This definition doesn't outline parameters such as frequency of interaction, duration, or a minimum number of reverts. Unlike some kinds of editor disputes, such as edit wars covered by the three-revert rule, there are currently no set baselines or parameters for wikihounding. One contribution we intend to make with this research project is to establish those parameters, including descriptive statistics related to wikihounding, such as what is considered a 'normal' number of reverts in a wikihounding case, the average duration of interaction before wikihounding is reported, etc.
If we are able to label a sufficient number of candidate wikihounding cases (e.g. accuser=editor1, accused=editor2, reporting_time=timestamp-of-report, is_wikihounding=True), we intend to use this data to train a machine learning model to detect potential cases of wikihounding even if they are not reported to AN/I.
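The labeling scheme above could be represented along these lines. This is a minimal sketch, not the project's actual data model: the class name `HoundingCase` and the example usernames and timestamps are hypothetical, while the field names follow the scheme just described.

```python
from dataclasses import dataclass

@dataclass
class HoundingCase:
    accuser: str           # username of the editor reporting the hounding
    accused: str           # username of the accused editor
    reporting_time: str    # ISO 8601 timestamp of the AN/I report
    is_wikihounding: bool  # community-determined label from the AN/I outcome

# Hypothetical labeled cases for illustration.
cases = [
    HoundingCase("Editor1", "Editor2", "2017-06-01T12:00:00Z", True),
    HoundingCase("Editor3", "Editor4", "2017-07-15T09:30:00Z", False),
]

# Only the positively labeled cases would serve as positive training
# examples; the negatives are equally useful as contrast cases.
positives = [c for c in cases if c.is_wikihounding]
print(len(positives))  # 1
```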
Findings from this research will also inform the development of improved tools for analyzing and visualizing editor interactions, to make it easier for administrators to evaluate the evidence of wikihounding allegations.
Methods
Wikihounding can take many forms, such as various kinds of edit warring. There are many cases that would technically be called hounding but occurred within the larger context of another case (e.g. a user being a sock puppet or engaging in more widespread harassment or bad behavior). Likewise, some non-disruptive behaviors may resemble wikihounding even though they are consensual and productive (e.g. a mentor reverting a mentee's edits across several articles and posting frequently on their talkpage). Ultimately, those cases need to be identified as something else, not hounding.
To differentiate between what is hounding and what is not, we plan to start by looking at archived AN/I cases labeled as wikihounding.
- Examine a small number of reported instances of Wikihounding to identify similarities and patterns, and validate our existing assumptions about the nature of Wikihounding.
- Label a larger number of AN/I cases for accuser, accused, date of report, and outcome of AN/I case.
- Generate descriptive statistics about the identified instances of hounding (e.g. average duration of interaction and number of reverts)
- Train a learning model on the interactions (co-located edits, reverts, and talkpage posts) between the accusing and accused editors in the weeks prior to the date that hounding was reported.
- Run the model over other sets of editor interactions that look superficially similar to the training set, and manually audit the resulting cases that the model determines are likely to be instances of wikihounding. Are there patterns? If so, what are they? Are there false positives?
- Examine the timing, frequency, location, and contextual aspects of hounding, such as incivility, antagonizing content, and toxicity, and develop labels for these aspects.
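The descriptive-statistics step above could start with something as simple as the following. The per-case measurements here are hypothetical placeholder values; in practice they would be computed from the interaction timelines of confirmed cases.

```python
from statistics import mean, median

# Hypothetical per-case measurements for confirmed hounding cases:
# days of interaction before the AN/I report, and reverts between the pair.
cases = [
    {"duration_days": 14, "reverts": 6},
    {"duration_days": 30, "reverts": 11},
    {"duration_days": 7,  "reverts": 4},
]

durations = [c["duration_days"] for c in cases]
reverts = [c["reverts"] for c in cases]

print(f"mean duration: {mean(durations):.1f} days")  # mean duration: 17.0 days
print(f"median reverts: {median(reverts)}")          # median reverts: 6
```

Statistics like these would give the 'baseline' parameters (typical duration, typical revert counts) that the current policy definition lacks.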
Open questions and considerations
- What about recall? Analyzing AN/I cases may yield high precision, but not represent all cases of wikihounding (e.g., a relatively new and inexperienced editor may be the target of hounding, but not know enough about Wikipedia governance to report the case at AN/I).
- What is adjacent to wikihounding? Are there behaviors that are similar to wikihounding but not quite hounding? What are the false positives and mistaken reports (e.g. reverse wikihounding, where the accuser is the one doing the hounding)?
Timeline
- January 2018: extract and examine AN/I cases related to WikiHounding
Results
The analysis we were hoping to do was based on some assumptions about WikiHounding as a phenomenon, and AN/I as a forum, that were not borne out once we started digging into the data. Some of what we found is described in the January 22 work log entry.
In brief
- Most accusations of wikihounding brought before AN/I are not determined to be wikihounding at all.
- In most accusations of wikihounding brought before AN/I, there is no final determination or verdict about whether hounding occurred.
- If there is a determination that hounding occurred, it's very possible that the person bringing the accusation will eventually be accused of hounding themselves.
- Wikihounding is closely associated with other types of bad behavior: sock-puppeting, IP-hopping, tag-teaming, edit warring (esp. 3RR violation), offline stalking and threats, and personal attacks.
Conclusion
There are not enough canonical (clear-cut, resolved) cases of hounding available in the AN/I dataset to train a machine learning model.
In theory, Wikihounding is a particular kind of harassing behavior that can be identified reliably. But in practice, wikihounding is not a sufficiently concrete or discrete set of behaviors in itself to be a good focus for modeling harassment.
AN/I is not a place where people generally get resolution to complex issues. At least, not generally the resolution they are seeking. Many threads don’t have a clear conclusion. While some incident reports seem to be dealt with promptly and conclusively (for example, sockpuppet investigations), this is not usually the case with incidents that require lots of interpretation and argumentation and/or otherwise have a heavy 'subjective' component. When there is a conclusion to threads around wikihounding, it’s often a blanket admonishment to all the editors involved to “stop it”, and that’s about all.
Next steps
It is not yet clear what approach will work best for identifying and characterizing patterns of harassing behavior (including wikihounding) at scale. We are discussing options and will start a new project page when our next steps are clearer. We'll link back here when we do.
See also
- Research:Topical coverage of Edit Wars
- Editor interaction analyzer: a ToolForge tool that is sometimes used to examine the history of interactions between editors in hounding cases