Objective Revision Evaluation Service/Get support
ORES supports a limited set of wikis, but support is growing all of the time. This page describes how to get ORES support for your favorite wiki. Make sure to double-check the support table, as we may already support your wiki. There are several types of support we can provide; which applies depends on the class of model and the tier of support we are aiming for.
There are two major classes of models that ORES supports: edit quality and article quality.
Edit quality & curation
The edit quality models make predictions about the characteristics of an edit's quality. These models are useful for detecting vandalism and identifying goodfaith contributors. Many Special:Recentchanges patrolling tools rely on these quality models to highlight bad edits. en:WP:Snuggle and en:User:HostBot use the goodfaith model to highlight goodfaith newcomers who need support. There are two levels of support that we offer for a wiki around edit quality: basic and advanced. We recommend starting work on both levels at the same time.
At the very basic level, we provide a reverted model that attempts to predict whether or not an edit will need to be reverted. This model is "trained" using a sample of past reverted edits that happened in a particular wiki. In order to train a basic reverted model, we'll need to have a basic set of language assets in ORES for processing the content on the wiki. We have basic language assets available for many languages. Review Research:Revision scoring as a service/Word lists to see if we've started work on your wiki/language. If the primary language of your wiki is not in this list, you can request that we start working on it by filing a task in phabricator.
While the reverted model is useful and can be trained using nothing but a wiki's edit history, it is slightly problematic. For example, some reverted edits are not actually damaging; they were reverted for other reasons. Sometimes, a change will look like a revert, but it is actually some other operation—for example, archiving a talk page. It's much better if we can train our prediction models on more nuanced judgements of the quality of an edit. damaging predicts whether an edit causes damage and goodfaith predicts whether an edit was saved with good intentions. To gather the data needed to train these models, we can set up a Wiki labels campaign with a random sample of edits for evaluation. You can request that we set up such a campaign for your wiki by filing a task in phabricator.
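Once the damaging and goodfaith models are deployed for a wiki, tools can query them through ORES's public scoring API. The sketch below builds a v3 scores request URL and reads a response of the general shape the API returns; the wiki name, revision ID, and all score values here are illustrative assumptions, not real output.

```python
import json
from urllib.parse import urlencode

ORES_HOST = "https://ores.wikimedia.org"

def build_scores_url(context, revids, models):
    """Build an ORES v3 scores URL for one or more revisions and models."""
    query = urlencode({
        "models": "|".join(models),
        "revids": "|".join(str(r) for r in revids),
    })
    return f"{ORES_HOST}/v3/scores/{context}?{query}"

# Hypothetical wiki and revision ID, for illustration only.
url = build_scores_url("enwiki", [123456], ["damaging", "goodfaith"])

# A response of the shape ORES returns (these values are made up):
sample_response = json.loads("""
{
  "enwiki": {
    "scores": {
      "123456": {
        "damaging": {"score": {"prediction": false,
                               "probability": {"false": 0.92, "true": 0.08}}},
        "goodfaith": {"score": {"prediction": true,
                                "probability": {"false": 0.05, "true": 0.95}}}
      }
    }
  }
}
""")

scores = sample_response["enwiki"]["scores"]["123456"]
damaging_p = scores["damaging"]["score"]["probability"]["true"]
goodfaith_p = scores["goodfaith"]["score"]["probability"]["true"]
```

A patrolling tool would typically flag an edit for review when `damaging_p` is high, while using a high `goodfaith_p` to route the editor toward mentoring rather than warnings.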
Article quality models
Article quality models make predictions about the quality of an article as of a particular revision. These models are useful for evaluating the progress of article development and knowing which drafts are ready to be published. Currently, we train these models using the assessment classes that Wikipedians use to rate articles. If your wiki already has a process by which articles are rated for quality, you can file a phabricator task that answers a few questions to help get us started.
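An article quality model returns a probability for each assessment class rather than a single label. Consumers typically take the highest-probability class as the prediction, or compute a weighted average across classes for finer-grained progress tracking. This sketch assumes the English Wikipedia assessment scale (Stub through FA) — other wikis use their own scales — and the probabilities are made up for illustration.

```python
# Assessment classes on English Wikipedia, ordered worst to best.
# Other wikis define their own scales; these names are an assumption.
CLASSES = ["Stub", "Start", "C", "B", "GA", "FA"]

# Illustrative per-class probabilities of the kind an article
# quality model emits for one revision.
probability = {"Stub": 0.02, "Start": 0.10, "C": 0.48,
               "B": 0.30, "GA": 0.08, "FA": 0.02}

# Predicted class: the single highest-probability assessment.
prediction = max(probability, key=probability.get)

# Weighted "expected class" score (0 = Stub, 5 = FA), useful for
# tracking gradual article development between assessment steps.
expected = sum(i * probability[c] for i, c in enumerate(CLASSES))
```

The weighted score is handy for dashboards: an article can move from 2.4 to 2.7 as it improves, even while its predicted class stays "C".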