Research:Quality of PPI editor work

From Meta, a Wikimedia project coordination wiki
This page in a nutshell: This research project compared editors "recruited" via the Public Policy Initiative to other editors with similar edit counts. It found that editors recruited this way are reverted no more often than editors who join in other ways.


Background[edit]

I'll be examining whether editors who edit Wikipedia as part of the Public Policy Initiative (PPI) make higher-quality edits than regular editors do. To assess quality, I will compare revert rates for PPI editors with those for regular editors.

Research Question[edit]

Are edits from PPI editors reverted less often than edits from other editors?

Methodology[edit]

To answer the research question, I will match each PPI editor to the regular editor whose editing behavior is most similar. Similarity is determined using the following variables:

  • number of characters added
  • number of unique articles added
  • cumulative number of edits in main namespace
  • cumulative number of edits in non-main namespace

I calculate the distance between a pair of editors using Euclidean distance and match each PPI editor with the most similar regular editor. I then run a paired t-test to see whether PPI editors are reverted significantly less often than their matched regular editors. Matching is an econometric technique that serves as a substitute for randomized experiments: it pairs respondents who are similar on observable characteristics.
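The nearest-neighbor matching step can be sketched in Python as follows. This is an illustrative implementation, not the script used in the study; in particular, it assumes the four variables enter the distance unscaled, since the write-up does not say whether they were standardized first.

```python
import numpy as np

def match_editors(ppi, regular):
    """Match each PPI editor to the nearest regular editor by
    Euclidean distance over the four behavioral variables.

    ppi, regular: arrays of shape (n, 4) with columns
    [chars_added, unique_articles, main_ns_edits, other_ns_edits]
    (column names are illustrative).
    Returns, for each PPI editor, the index of the closest regular editor.
    """
    ppi = np.asarray(ppi, dtype=float)
    regular = np.asarray(regular, dtype=float)
    # Pairwise differences via broadcasting: shape (n_ppi, n_regular, 4)
    diffs = ppi[:, None, :] - regular[None, :, :]
    # Euclidean distance matrix: shape (n_ppi, n_regular)
    dists = np.sqrt((diffs ** 2).sum(axis=2))
    # Index of the most similar regular editor for each PPI editor
    return dists.argmin(axis=1)
```

Note that `argmin` allows the same regular editor to be matched to several PPI editors (matching with replacement); matching without replacement would require removing each matched editor from the pool.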

Data Description[edit]

Below are four charts that show, for each of the above-mentioned variables, how similar matched pairs of editors are along that dimension:

  • Summer of Research - Comparing PPI editors & regular editors by article count
  • Summer of Research - Comparing PPI editors & regular editors by character count
  • Summer of Research - Comparing PPI editors & regular editors by cumulative edit count in the main namespace
  • Summer of Research - Comparing PPI editors & regular editors by cumulative edit count in other namespaces

Results[edit]

Paired t test
------------------------------------------------------------------------------
Variable |     Obs        Mean    Std. Err.   Std. Dev.   [95% Conf. Interval]
---------+--------------------------------------------------------------------
revert~a |     202    .1485149    .0453613     .644705    .0590698    .2379599
revert~b |     202    .1683168    .0535148    .7605877    .0627945    .2738392
---------+--------------------------------------------------------------------
    diff |     202    -.019802    .0574315    .8162552   -.1330475    .0934436
------------------------------------------------------------------------------
     mean(diff) = mean(revert_count_a - revert_count_b)           t =  -0.3448
 Ho: mean(diff) = 0                              degrees of freedom =      201

 Ha: mean(diff) < 0           Ha: mean(diff) != 0           Ha: mean(diff) > 0
 Pr(T < t) = 0.3653         Pr(|T| > |t|) = 0.7306          Pr(T > t) = 0.6347
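The paired t-test in the Stata output above can be reproduced in Python with SciPy. The revert counts below are made-up illustrative data, not the study's 202 matched pairs; the variable names are likewise hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical revert counts for matched (PPI, regular) editor pairs
reverts_ppi = np.array([0, 1, 0, 0, 2, 0, 1, 0])
reverts_regular = np.array([1, 0, 0, 2, 1, 0, 1, 1])

# Paired t-test on the within-pair differences
t_stat, p_two_sided = stats.ttest_rel(reverts_ppi, reverts_regular)

# One-sided p-value for Ha: mean(diff) < 0, i.e. PPI editors
# are reverted less often than their matched counterparts
p_less = p_two_sided / 2 if t_stat < 0 else 1 - p_two_sided / 2
```

With the study's actual data this yields t = -0.34 on 201 degrees of freedom, which is far from any conventional significance threshold.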

Interpretation[edit]

The t-test shows no significant difference between the revert rates of PPI editors and regular editors (two-sided p = 0.73), which suggests that edits made by PPI editors are no better, but also no worse, than edits made by comparable regular editors.

Future Research[edit]

  • A possible follow-up question is how active PPI editors remain once they have finished their courses. Do they become permanent community members, or do they drop out?

Implications[edit]

This sprint illustrates that the matching methodology is promising for running non-randomized experiments. It is quite easy to find editors with highly similar editing behavior, so the technique can support A/B-style and other experimental testing.