Research talk:Accuracy and quality of Wikipedia entries

Is this finished? Has this been presented?

The project is a few weeks behind schedule because of unforeseen delays with one of the case studies (Arabic) and because the main PI had to drop out of the project entirely due to personal circumstances. A public version of the executive report will be available in May. --DarTar (talk)

Peer review? Final version?

DarTar, could you update us on the status of this paper? I believe it was going through journal submission, and a more final version was planned. Is that still in the works?

Two specific reasons:

  1. Efforts continue to transcribe this on Wikisource; this is a painstaking process, and if the version being transcribed is a draft, it would be good to know (see wikisource:en:Assessing the accuracy and quality of Wikipedia entries compared to popular online encyclopaedias)
  2. The Nature study continues to be cited as the authoritative study of this kind (for instance, in this new video from the News Literacy Project), yet it is getting very old and its methodology was rather weak compared to this study's.

Any further news? Thanks again for your work generating new knowledge in this vital area! -Pete F (talk) 21:06, 2 March 2015 (UTC)

Hi Peteforsyth: thanks for bumping this. I haven't heard back from the Oxford team about the final paper submission; do you want to ping the authors directly? Other than that, the main limitation of this study IMO was its small sample size. The most promising project in the area of quality scoring at the edit, article, and project level is the work EpochFail is leading on revscoring and algorithmic article quality classification. Check it out if you haven't already. --DarTar (talk) 21:57, 6 March 2015 (UTC)
DarTar, I would be happy to. Can you suggest the best way to reach them -- perhaps send me an email address? As to the sample size:
  • The study addressed that, emphasizing that it was aiming to produce a methodology as a first step toward a more comprehensive later study; I believe a peer-reviewed methodology would have great value in itself;
  • The sample size was at least comparable to that of the Nature study (22 articles here vs. 42 articles there)
Additional major benefits:
  • This study is open access; the Nature study is not
  • This study is fairly recent; the Nature study is not
  • This study covers several languages; the Nature study does not
So...I think it's pretty important to maximize the potential of this study. At minimum, if there are significant lessons about the ideal methodology, they would help future efforts in this area -- no?
Regardless, I'd be happy to send a quick email to inquire about the status. Thanks! -Pete F (talk) 00:15, 7 March 2015 (UTC)