Research:Revision scoring as a service/Sandbox

From Meta, a Wikimedia project coordination wiki
Created: 21:23, 23 August 2014 (UTC)
Duration: 2014 – ??

This page documents a research project in progress.
Information may be incomplete and change as the project progresses.
Please contact the project lead before formally citing or reusing results from this page.


In this project, we are building machine scoring models that rank, sort, categorize, or otherwise evaluate revisions of MediaWiki pages. These scoring models are made available through a web service, the Objective Revision Evaluation Service (ORES). To train the models, we gather examples of human judgement with Wiki labels, a human-computation system that integrates with MediaWiki.

Software components

revscoring

revscoring is a Python library that we use to define and extract features for building and training machine prediction models.

code · docs
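The feature-extraction idea behind revscoring can be illustrated with a toy sketch. This is not the revscoring API: the feature names, the badword list, and the hand-written rule standing in for a trained classifier are all invented for the example.

```python
# Illustrative only: a toy version of the feature-extraction idea behind
# revscoring.  The real library defines features declaratively and feeds them
# to machine-learned models; everything below is invented for the sketch.

BADWORDS = {"stupid", "dumb"}  # hypothetical badword list

def extract_features(parent_text, current_text):
    """Turn a (parent, current) revision text pair into a feature vector."""
    chars_added = max(len(current_text) - len(parent_text), 0)
    words = current_text.lower().split()
    badword_count = sum(1 for w in words if w.strip(".,!?") in BADWORDS)
    upper_ratio = (sum(1 for c in current_text if c.isupper())
                   / max(len(current_text), 1))
    return [chars_added, badword_count, upper_ratio]

def score_damaging(features):
    """A hand-written rule standing in for a trained classifier."""
    chars_added, badword_count, upper_ratio = features
    return badword_count > 0 or upper_ratio > 0.5

features = extract_features("Hello world.", "Hello world. You are stupid.")
print(features, score_damaging(features))
```

In a real deployment, such vectors, paired with human labels, would train a statistical classifier rather than a hand-written rule.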

ORES

The Objective Revision Evaluation Service (ORES) is a web service that hosts scoring models built with revscoring.

code · docs
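A request/response cycle against ORES might look like the sketch below. The URL pattern follows the public service at ores.wikimedia.org, but treat it as an assumption; the response shown is a trimmed, hand-written example in the general shape ORES returns, and no network request is made.

```python
import json
from urllib.parse import urlencode

def ores_url(wiki, rev_ids, models, base="https://ores.wikimedia.org/v3/scores"):
    """Build a scores URL for the given wiki, revision IDs, and models."""
    query = urlencode({"models": "|".join(models),
                       "revids": "|".join(str(r) for r in rev_ids)})
    return f"{base}/{wiki}/?{query}"

url = ores_url("enwiki", [123456], ["damaging"])

# Hand-written example response, not live service output.
response_text = """
{"enwiki": {"scores": {"123456": {"damaging": {"score":
  {"prediction": false,
   "probability": {"false": 0.92, "true": 0.08}}}}}}}
"""

scores = json.loads(response_text)
score = scores["enwiki"]["scores"]["123456"]["damaging"]["score"]
print(url)
print("damaging prediction:", score["prediction"])
```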

Wiki labels

Wiki labels is a human-computation system that runs on top of MediaWiki and OAuth to deliver a flexible hand-coding interface for gathering the human judgements used to train and evaluate scorer models.

code · docs
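A labelling campaign pairs revision IDs with human judgements, which are then split into inputs and targets for model training. The record shape below is invented for the sketch, not Wiki labels' actual storage format.

```python
# Illustrative only: hypothetical records from a labelling campaign.
labels = [
    {"rev_id": 101, "damaging": True},
    {"rev_id": 102, "damaging": False},
    {"rev_id": 103, "damaging": False},
]

def to_training_set(labels):
    """Split labelled revisions into (rev_ids, targets) for model training."""
    rev_ids = [record["rev_id"] for record in labels]
    targets = [record["damaging"] for record in labels]
    return rev_ids, targets

rev_ids, targets = to_training_set(labels)
print(rev_ids, targets)
```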


Sub-projects


Contact us

We're an open project team, and there are many ways to get in touch with us.

Team

Tools that use ORES

Other possible uses

See also