Machine learning models/Proposed/Language-agnostic revert risk

This page is an on-wiki machine learning model card. A model card is a document about a machine learning model that seeks to answer basic questions about the model.
This model uses revision content and metadata to predict the risk that a revision will be reverted.


How can we help editors identify revisions that need to be “patrolled”? The goal of this model is to detect revisions that might be reverted, regardless of whether they were made in good faith or with the intention of causing damage.

Patrolling content across more than 250 Wikipedia language editions is a difficult task. The volume of revisions, combined with the number of languages involved, requires a substantial human effort. The aim of this model is to help patrollers quickly identify potential problems and revert damaging edits when needed.

Previous models tried to solve this by creating language-specific solutions; however, that approach is difficult to scale and maintain, because it requires as many models as there are languages used on the Wikimedia projects. Moreover, complex language models are only available for certain languages, leaving out smaller Wikipedia editions. Therefore, this model is based on language-agnostic features, making it possible to use it for any existing Wikipedia, as well as for new language projects that may appear in the future.

This model was trained using two tables from the Wikimedia Data Lake: the MediaWiki History table and the Wikitext History table. Metadata features were extracted from the former, and content features such as the number of references, images, and wikilinks were extracted from the latter.


This model is deployed on LiftWing. It is currently available for internal usage; technical details on how to use it can be found here. The model can be used to detect revisions that might need to be reverted. A high “revert probability” threshold (above 0.9) gives good precision, while a lower threshold (0.5) gives better recall. This model should be used only for Wikipedia articles (namespace 0); its features won't work outside Wikipedia.
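The precision/recall trade-off described above can be sketched as follows. This is an illustrative helper, not part of the deployed service; the function name and `mode` parameter are ours, while the 0.9 and 0.5 cut-offs come from this section.

```python
def flag_revision(revert_probability: float, mode: str = "precision") -> bool:
    """Flag a revision for patrolling based on the model's revert probability.

    mode="precision" uses the high-precision threshold (0.9);
    mode="recall" uses the lower, high-recall threshold (0.5).
    """
    threshold = 0.9 if mode == "precision" else 0.5
    return revert_probability > threshold

# A revision scored 0.69 is flagged under the recall-oriented threshold,
# but not under the precision-oriented one.
print(flag_revision(0.69, mode="recall"))     # True
print(flag_revision(0.69, mode="precision"))  # False
```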


Motivation[edit]

Knowledge Integrity is one of the strategic programs of Wikimedia Research with the goal of identifying and addressing threats to content on Wikipedia, increasing the capabilities of patrollers, and providing mechanisms for assessing the reliability of sources[1]. The main goal of the project is to create a new generation of patrolling models, improving accuracy, fairness, and maintainability compared to the previous state of the art, ORES[2].

The current model is completely language agnostic and can run in any Wikipedia language edition.

Supported languages

['aa', 'ab', 'ace', 'ady', 'af', 'ak', 'als', 'alt', 'am', 'ami', 'an', 'ang', 'anp', 'ar', 'arc', 'ary', 'arz', 'as', 'ast', 'atj', 'av', 'avk', 'awa', 'ay', 'az', 'azb', 'ba', 'ban', 'bar', 'bat-smg', 'bcl', 'be', 'be-tarask', 'be-x-old', 'bg', 'bh', 'bi', 'bjn', 'blk', 'bm', 'bn', 'bo', 'bpy', 'br', 'bs', 'bug', 'bxr', 'ca', 'cbk-zam', 'cdo', 'ce', 'ceb', 'ch', 'cho', 'chr', 'chy', 'ckb', 'co', 'cr', 'crh', 'cs', 'csb', 'cu', 'cv', 'cy', 'da', 'dag', 'de', 'din', 'diq', 'dsb', 'dty', 'dv', 'dz', 'ee', 'el', 'eml', 'en', 'eo', 'es', 'et', 'eu', 'ext', 'fa', 'fat', 'ff', 'fi', 'fiu-vro', 'fj', 'fo', 'fr', 'frp', 'frr', 'fur', 'fy', 'ga', 'gag', 'gan', 'gcr', 'gd', 'gl', 'glk', 'gn', 'gom', 'gor', 'got', 'gpe', 'gsw', 'gu', 'guc', 'gur', 'guw', 'gv', 'ha', 'hak', 'haw', 'he', 'hi', 'hif', 'ho', 'hr', 'hsb', 'ht', 'hu', 'hy', 'hyw', 'hz', 'ia', 'id', 'ie', 'ig', 'ii', 'ik', 'ilo', 'inh', 'io', 'is', 'it', 'iu', 'ja', 'jam', 'jbo', 'jv', 'ka', 'kaa', 'kab', 'kbd', 'kbp', 'kcg', 'kg', 'ki', 'kj', 'kk', 'kl', 'km', 'kn', 'ko', 'koi', 'kr', 'krc', 'ks', 'ksh', 'ku', 'kv', 'kw', 'ky', 'la', 'lad', 'lb', 'lbe', 'lez', 'lfn', 'lg', 'li', 'lij', 'lld', 'lmo', 'ln', 'lo', 'lrc', 'lt', 'ltg', 'lv', 'lzh', 'mad', 'mai', 'map-bms', 'mdf', 'mg', 'mh', 'mhr', 'mi', 'min', 'mk', 'ml', 'mn', 'mni', 'mnw', 'mr', 'mrj', 'ms', 'mt', 'mus', 'mwl', 'my', 'myv', 'mzn', 'na', 'nah', 'nan', 'nap', 'nds', 'nds-nl', 'ne', 'new', 'ng', 'nia', 'nl', 'nn', 'no', 'nostalgia', 'nov', 'nqo', 'nrm', 'nso', 'nv', 'ny', 'oc', 'olo', 'om', 'or', 'os', 'pa', 'pag', 'pam', 'pap', 'pcd', 'pcm', 'pdc', 'pfl', 'pi', 'pih', 'pl', 'pms', 'pnb', 'pnt', 'ps', 'pt', 'pwn', 'qu', 'rm', 'rmy', 'rn', 'ro', 'roa-rup', 'roa-tara', 'ru', 'rue', 'rup', 'rw', 'sa', 'sah', 'sat', 'sc', 'scn', 'sco', 'sd', 'se', 'sg', 'sgs', 'sh', 'shi', 'shn', 'si', 'simple', 'sk', 'skr', 'sl', 'sm', 'smn', 'sn', 'so', 'sq', 'sr', 'srn', 'ss', 'st', 'stq', 'su', 'sv', 'sw', 'szl', 'szy', 'ta', 'tay', 'tcy', 'te', 'test', 'test2', 'tet', 
'tg', 'th', 'ti', 'tk', 'tl', 'tly', 'tn', 'to', 'tpi', 'tr', 'trv', 'ts', 'tt', 'tum', 'tw', 'ty', 'tyv', 'udm', 'ug', 'uk', 'ur', 'uz', 've', 'vec', 'vep', 'vi', 'vls', 'vo', 'vro', 'wa', 'war', 'wo', 'wuu', 'xal', 'xh', 'xmf', 'yi', 'yo', 'yue', 'za', 'zea', 'zh', 'zh-classical', 'zh-min-nan', 'zh-yue', 'zu']

Users and uses[edit]

Use this model for
  • Automatically finding revisions that require patrolling.
  • Vandalism detection.
  • Creating bots that assist admins and patrollers in removing vandalism or bad-faith edits.
Don't use this model for
  • Ground truth for training other models.
  • Making predictions on projects other than Wikipedia language editions.
  • Making predictions on the first revision of a page, or on a revision that is the only one for a page.
Current uses
  • Research.
  • To be integrated into products soon.

Ethical considerations, caveats, and recommendations[edit]

The model is built using metadata features that take user characteristics into account. As a result, it may exhibit bias against edits from new users or anonymous (IP) editors, because such edits were reverted more often in the past. To address this issue, we have developed an alternative Revert Risk Multilingual model (RRML) that specifically mitigates such biases; however, that model requires more processing power and can be slower. Therefore, for anonymous edits in the 47 languages covered by RRML, we recommend using RRML. For the remaining edits (non-anonymous, or in languages not covered by RRML), we recommend using this model.

Model[edit]

This model uses the following features:

  • Article features:
    • We used the features developed for the Language Agnostic Article Quality Model.
    • We computed the article quality features for the current and parent revision.
    • We measured the quality differences between these revisions.
  • User features:
    • Account "age" (difference between revision timestamp and the user creation date)
    • Number of previous revisions made.
    • Number of user groups the user belongs to.
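The user features above can be sketched as follows. The function name, input shape, and timestamp format are assumptions for illustration, not the pipeline's actual schema:

```python
from datetime import datetime

def user_features(revision_timestamp: str, user_registration: str,
                  previous_revisions: int, user_groups: list) -> dict:
    """Compute the three user features: account "age" (revision timestamp
    minus account creation date), edit count, and number of user groups."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"  # assumed ISO-8601 "Z" timestamps
    age = (datetime.strptime(revision_timestamp, fmt)
           - datetime.strptime(user_registration, fmt))
    return {
        "account_age_seconds": age.total_seconds(),
        "previous_revisions": previous_revisions,
        "num_user_groups": len(user_groups),
    }

feats = user_features("2022-06-01T12:00:00Z", "2022-05-01T12:00:00Z",
                      previous_revisions=15, user_groups=["autoconfirmed"])
```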


Performance[edit]

Implementation[edit]

Model architecture

The model is built using the XGBoost library.

The detailed model training procedure and configuration can be found in this repository.
Output schema
{
  "model_name": <model name string>,
  "model_version": <model version string>,
  "wiki_db": <wiki code string>,
  "revision_id": <revision_id string>,
  "output": {
    "prediction": <boolean decision result>,
    "probabilities": {
      "true": <probability of being reverted>,
      "false": <probability of being NOT reverted>
    }
  }
}
Example input and output

Example input:

curl "https://<endpoint>/v1/models/revertrisk-language-agnostic:predict" -d @input.json -H "Host: revertrisk-language-agnostic.revertrisk.wikimedia.org" --http1.1 -k

An example input.json:

{ "lang": "en", "rev_id": 123855516 }

Example output:

{
   "model_name":"revertrisk-language-agnostic",
   "model_version":"2",
   "wiki_db":"enwiki",
   "revision_id":123855516,
   "output":{
      "prediction":true,
      "probabilities":{
         "true":0.6868777275085449,
         "false":0.3131222724914551
      }
   }
}
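The request above can also be built from Python. Since the endpoint is internal and its host is a placeholder here, this sketch only constructs the URL, headers, and JSON body; the actual POST (commented out) would require access to the service.

```python
import json

def build_request(lang: str, rev_id: int):
    """Build URL, headers, and JSON body for a revert-risk prediction request,
    mirroring the curl example (endpoint host is a placeholder)."""
    url = "https://<endpoint>/v1/models/revertrisk-language-agnostic:predict"
    headers = {"Host": "revertrisk-language-agnostic.revertrisk.wikimedia.org"}
    body = json.dumps({"lang": lang, "rev_id": rev_id})
    return url, headers, body

url, headers, body = build_request("en", 123855516)
# e.g. requests.post(url, headers=headers, data=body, verify=False)
```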

Data[edit]

Data pipeline

The model was trained on a dataset collected using two tables from the Wikimedia Data Lake: the MediaWiki History table and the Wikitext History table. The 2023-05 snapshot was used, with a 12-month observation period starting 2022-01-01. We filtered out revisions created by bots. We used 70% of the data for training and 30% for testing, using a random split.

The data collection process can be found in this repository.
Training data
We randomly selected 70% of the data mentioned above.
Test data
We used the remaining 30%.
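The 70/30 random split described above can be sketched as follows; the seed and the toy data are illustrative, not the values used in the actual pipeline.

```python
import random

def random_split(rows, train_frac=0.7, seed=42):
    """Shuffle rows deterministically and split into train/test partitions."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * train_frac)
    return rows[:cut], rows[cut:]

# Toy example: 1000 placeholder revision ids split 70/30.
train, test = random_split(range(1000))
```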

Licenses[edit]


Citation[edit]

Cite this model as: ... to be added soon.

References[edit]

  1. Zia, Leila and Johnson, Isaac and Mansurov, Bahodir and Morgan, Jonathan and Redi, Miriam and Saez-Trumper, Diego and Taraborelli, Dario. 2019. Knowledge Integrity. https://doi.org/10.6084/m9.figshare.7704626
  2. https://www.mediawiki.org/wiki/ORES