Research:Metrics/qualitative assessment

From Meta, a Wikimedia project coordination wiki

Evaluating the quality of an editor's work through qualitative assessment takes advantage of human judgement, which is commonly considered the "gold standard" of quality assessment. While this method of determining quality is more robust and more accurate than purely quantitative metrics, it is also much more labor intensive.

Construction

Raters

  • Researchers
  • Wikipedians
  • Crowdworkers (e.g. Amazon Mechanical Turk)

Validation

See Inter-rater reliability.
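A common way to validate a qualitative assessment is to measure agreement between raters with a statistic such as Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch, with hypothetical article-quality labels from two raters (the rating data and labels below are illustrative, not from the assessments cited in this page):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: from each rater's marginal label frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical quality judgements from two independent raters.
rater_1 = ["good", "good", "bad", "good", "bad", "bad"]
rater_2 = ["good", "bad", "bad", "good", "bad", "good"]
print(round(cohens_kappa(rater_1, rater_2), 3))  # → 0.333
```

A kappa near 1 indicates strong agreement; values near 0 indicate agreement no better than chance, suggesting the rating task or instructions need revision before the labels are trusted.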

Limitations

Usage

References

  1. Aaron Halfaker, Oliver Keyes and Dario Taraborelli. Article Feedback v5, Stage 2 Quality Assessment. WMF (2012).
  2. Aaron Halfaker, Oliver Keyes and Dario Taraborelli. Article Feedback v5, Stage 1 Quality Assessment. WMF (2012).
  3. Aaron Halfaker, R. Stuart Geiger, Jonathan Morgan and John Riedl. (in press). The Rise and Decline of an Open Collaboration System: How Wikipedia's reaction to sudden popularity is causing its decline. American Behavioral Scientist.