Research:Exploring systematic bias in ORES/Calibration and balance for newcomers and anons
Evaluating bias encoded in ORES
I built a data pipeline to measure ORES's bias against newcomers and anonymous editors. Human labels from the training data are the "ground truth."
I retrieved the human-labeled edits from Wiki_labels, scored the edits using the ORES API, and obtained edit metadata from the Wikimedia API (it turns out that I could have used the data lake instead). Next I pushed this data to the data lake and identified newcomers from their edit histories using a Spark script. I defined "newcomers" as editor accounts that have been active for less than a month and/or have made fewer than 5 edits. Anonymous edits ("anons") are those made by editors who are not logged in. I also defined a group of "normal" editors who are neither newcomers nor anonymous.
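The grouping rule can be sketched as a small function. The field names and signature here are my own illustration, not the schema used in the actual Spark script:

```python
from datetime import datetime, timedelta

# Hypothetical helper illustrating the grouping rule; the argument names
# are my own, not the columns used in the real Spark pipeline.
def editor_group(is_anon, registration_date, edit_count, edit_date):
    """Assign an edit to the "anon", "newcomer", or "normal" group."""
    if is_anon:
        return "anon"
    account_age = edit_date - registration_date
    if account_age < timedelta(days=30) or edit_count < 5:
        return "newcomer"
    return "normal"

# A logged-in account, registered over a year earlier, with many edits:
print(editor_group(False, datetime(2018, 1, 1), 200, datetime(2019, 3, 1)))  # normal
```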
I considered two kinds of ORES models. The first (the damaging classifier) predicts whether the community would consider an edit to be damaging. However, just because an edit is damaging doesn't imply that the editor caused damage on purpose. The second model (the goodfaith classifier) attempts to predict the intent behind the edit: whether it was made in good faith.
After building this dataset, I pulled it out of the data lake and made the plots below to evaluate the fairness of the ORES algorithms. I considered two notions of "fairness" that can be applied to classification systems. Both notions adopt the frame that fairness consists of equal treatment regardless of status. To people thinking about algorithms in the criminal justice system, "status" might refer to a person's race. Here we consider how the algorithm treats editors who might be newcomers or anonymous.
When I think about fairness, I care about more than whether the algorithm says that newcomer and anonymous editors make worse edits on average. This might be true (it almost certainly is), and it would be surprising if an algorithm not explicitly designed to favor such editors didn't scrutinize their edits. Such an algorithm might be considered fair or unfair depending on how we define "fairness," which is not an easy or obvious thing to do, as we will see.
I considered two possible ways of defining fairness. The first is calibration. A well-calibrated classifier predicts similarly accurate probabilities regardless of status. It is free to predict that newcomers (or anons) have a higher probability of making damaging edits compared to normal editors. But if it systematically over-estimates this probability, then we would say that it is biased against newcomers (or anons) in terms of calibration.
The second fairness criterion I consider is balance. As with calibration, a balanced classifier is free to associate newcomers and anons with a greater risk of making damaging edits. But instead of looking at the predicted probabilities, balance considers the kinds of errors the algorithm makes. A false-positive error occurs when the model predicts damage, but the edit was not truly damaging. Since having your edit labeled damaging is a bad thing, a model with a higher false-positive rate (fpr) for newcomers (or anons) compared to normal (neither newcomer nor anonymous) editors is biased in terms of false-positive balance. If ORES has this kind of bias, then good edits by newcomers (or anons) will be more likely to be labeled as damaging than good edits by other editors.
Conversely, a false-negative error occurs when the model predicted that the edit was good, but the edit was actually damaging. A model with a lower false-negative rate (fnr) for newcomers (or anons) is biased in terms of false-negative balance. If ORES has this kind of bias, then damaging edits by newcomers (or anons) will be less likely to be given the benefit of the doubt (labeled as good) than damaging edits by other editors.
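These two error rates can be computed directly from paired labels and predictions. A minimal sketch, using booleans with True meaning "damaging":

```python
# Minimal illustration of the two error rates described above.
def fpr_fnr(true_labels, predicted_labels):
    pairs = list(zip(true_labels, predicted_labels))
    false_pos = sum(1 for t, p in pairs if p and not t)  # good edit flagged
    false_neg = sum(1 for t, p in pairs if t and not p)  # damage missed
    good = sum(1 for t, _ in pairs if not t)
    damaging = sum(1 for t, _ in pairs if t)
    return false_pos / good, false_neg / damaging

# Four good edits (one wrongly flagged) and two damaging edits (one missed):
print(fpr_fnr([False, False, False, False, True, True],
              [True, False, False, False, True, False]))  # (0.25, 0.5)
```

Comparing these rates between groups (newcomers or anons vs. normal editors) is what the balance criterion asks us to do.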
It turns out that, while both of these notions of classifier fairness might seem reasonable and intuitive, it isn't possible in practice to satisfy both (unless your model is a perfect predictor or the status is irrelevant).
The code is in my fork of the editquality repository here, on GitHub.
To assess the calibration of ORES models, for each of the 26 wikis that have enabled ORES models, I first estimated the likelihood that an edit is damaging (or made in good faith) within each group of editors (newcomers, anons, normal) simply by taking the mean of the human labels for each group. I compared these estimates to the mean probability output by the model for each group. The difference between these two means provides a measure of calibration.
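The calibration measure amounts to a difference of two means. A sketch for one group on one wiki (the sign convention here is my own):

```python
from statistics import mean

# Sketch of the calibration measure for one group on one wiki.
# human_labels: 0/1 human judgments (1 = damaging);
# model_probs: ORES probability scores for the same edits.
# Sign convention (mine): a positive gap means the model over-estimates
# the probability of damage for this group.
def calibration_gap(human_labels, model_probs):
    return mean(model_probs) - mean(human_labels)

# E.g. a group where 10% of edits are damaging, but the model's
# mean score is 0.25, is over-scored by 0.15:
gap = calibration_gap([1, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0.25] * 10)
```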
The above evaluation of calibration uses the raw probability scores output by the model. But to measure balance we have to choose a threshold in order to convert probability predictions into discrete classifications. Choosing these thresholds is somewhat arbitrary, but the ORES models are being used by people, and the thresholds they use to make discrete decisions (like defining filters in RecentChanges) are published in Special:ORESModels. These define 4 different thresholds for each model, corresponding to different levels of confidence in the classification. Different Wikipedia communities choose different thresholds according to their preferences, and not every wiki uses 4 thresholds for both models.
To make the plots below I estimated the false positive rate and false negative rate for each group of editors for each wiki using the human-labeled edits.
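That per-group estimation can be sketched as follows, assuming each edit is a (group, human_label, model_prob) record; the record layout is illustrative, not the actual data format:

```python
# Sketch of the per-group, per-threshold evaluation. Each record is a
# (group, human_label, model_prob) tuple; this layout is my own illustration.
def rates_at_threshold(edits, threshold):
    """Return {group: (fpr, fnr)} after thresholding model probabilities."""
    by_group = {}
    for group, label, prob in edits:
        by_group.setdefault(group, []).append((label, prob >= threshold))
    rates = {}
    for group, pairs in by_group.items():
        false_pos = sum(1 for lab, pred in pairs if pred and not lab)
        false_neg = sum(1 for lab, pred in pairs if lab and not pred)
        good = sum(1 for lab, _ in pairs if not lab)
        damaging = sum(1 for lab, _ in pairs if lab)
        rates[group] = (false_pos / good if good else 0.0,
                        false_neg / damaging if damaging else 0.0)
    return rates
```

Running this once per published threshold (and per wiki) yields the kind of estimates plotted below.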
I found evidence that the ORES models typically have a systematic bias against newcomer and anonymous editors. In terms of calibration, all of the damaging models are biased against newcomers and anons except for Arabic, Bosnian, Wikidata, and Finnish. And so are the goodfaith models for Czech, English, Spanish, French, Italian, Korean, Dutch, Portuguese, Romanian, Russian, Albanian, Swedish, and Turkish.
Whether a model is biased in terms of false-positive or false-negative balance can depend on the choice of threshold for mapping between probabilities and classifications. I used the thresholds that wikis are using in practice to power tools like the filters on en:Special:RecentChanges. For most wikis and thresholds, the damaging models are biased in terms of false-positive balance, and the goodfaith models are biased in terms of both false-positive and false-negative balance.
We shouldn't be surprised that the goodfaith models that were unbiased in terms of calibration are biased in terms of balance. In fact, there is an inherent tradeoff between these two notions of algorithmic fairness. Kleinberg et al. present a rigorous proof of this, but I'll try to offer an intuitive explanation here:
The requirements of calibration and balance each place constraints on the types of errors the classifier can make, but both sets of constraints can only be satisfied simultaneously if the model is a perfect predictor, or if the two groups of editors make damaging edits at the same rate.
For the model to be calibrated, the proportion of edits by anons labeled as damaging must equal the true rate of damaging edits by anons, and symmetrically the proportion of edits by non-anons labeled as damaging must equal the true rate of damaging edits by non-anons. This means that the errors within each group of editors have to be symmetrical: errors that over-estimate the probability that an edit is damaging must be offset by errors that under-estimate it.
On the other hand, the balance constraint links errors in one group to errors in the other group. A balanced model can predict a higher rate of damaging edits for anons compared to non-anons, but it must do so without increasing the rate of false positives for anons above that of non-anons. This means that balance requires that, among the edits that are truly non-damaging, anons' edits have the same average score as non-anons'. And the same goes for the edits that are truly damaging.
Since anons truly are more likely to make damaging edits compared to non-anons, a calibrated model will assign all of their edits higher scores. But this means that the model will give non-damaging edits by anons higher scores than non-damaging edits by non-anons. So the model is not balanced. If the model could predict perfectly, then it wouldn't have to do this, since all damaging edits could be assigned a score of 1 and all non-damaging edits a score of 0. But in the tragic world where we can't predict perfectly, we're stuck with a tradeoff between balance and calibration.
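A toy example makes the tradeoff concrete. Consider a perfectly calibrated but completely uninformative model that scores every edit at its group's base rate (the numbers are illustrative, not drawn from the real ORES data):

```python
# Two groups with different base rates of damage (True = damaging).
anon_edits = [True] * 4 + [False] * 6     # 40% truly damaging
normal_edits = [True] * 1 + [False] * 9   # 10% truly damaging

# A calibrated but uninformative model: every edit gets its group's base rate,
# so the mean score exactly matches the true damage rate within each group.
anon_scores = [0.4] * len(anon_edits)
normal_scores = [0.1] * len(normal_edits)

threshold = 0.25  # any cutoff between the two base rates behaves the same

def false_positive_rate(labels, scores):
    """Share of truly good edits that get flagged as damaging."""
    good = [s for lab, s in zip(labels, scores) if not lab]
    return sum(1 for s in good if s >= threshold) / len(good)

print(false_positive_rate(anon_edits, anon_scores))      # 1.0
print(false_positive_rate(normal_edits, normal_scores))  # 0.0
```

Calibration is perfect within each group, yet at this threshold every good anon edit is flagged and no good non-anon edit is, so false-positive balance is maximally violated.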
I also did a [similar analysis] looking for biases against edits to articles on women or about places in the Global South. However, in that case I did not observe a consistent pattern of bias.
- Kleinberg, Jon; Mullainathan, Sendhil; Raghavan, Manish (2016-09-19). "Inherent Trade-Offs in the Fair Determination of Risk Scores". arXiv:1609.05807 [cs, stat]. Retrieved 2019-04-02.