Research talk:On the collaboration with Wikimedia Communities in the context of building Machine Learning Systems

From Meta, a Wikimedia project coordination wiki

Insights that would help us get moving

  1. What has your experience been working with the ORES/Scoring Platform/Machine Learning team during the process of building a Machine Learning model for your community? Feel free to tell us the whole story, we would love to hear it!
  • @Theklan:, I think you'd have some great insights here. Could you review what we have for process and leave any comments/make edits here? --EpochFail (talk) 16:26, 18 March 2021 (UTC)
@EpochFail: I'll try to help! -Theklan (talk) 16:29, 18 March 2021 (UTC)

Data collection may involve extensive community contact

The authors may be aware of this already, but I wanted to point it out: in some cases step 3 (collecting data) involves extensive contact and interaction with community members, depending on what data is collected. As a historical example, the edit quality models (damaging and goodfaith) were built from a data set of labeled edits. This labeling was done by community members, since it required speaking the local language and (some) familiarity with the community's norms. A substantial amount of work went into organizing these labeling campaigns. Typically an enthusiastic community member would play an ambassador role, liaising with WMF staff and recruiting other community members to help label edits. For other models, however, data collection didn't involve community interaction at all: figuring out which edits were reverted, or which quality rating each article carried, were things researchers could do by themselves (or write scripts to do). --Roan Kattouw (WMF) (talk) 00:04, 16 March 2021 (UTC)

Article Quality model for the Dutch Wikipedia

Hi, this is my personal story so far for the design of the article quality model for the Dutch Wikipedia.

RonnieV and I met Aaron at the Wikimedia hackathon in Prague in 2019, where he and Roan were trying to train ORES on Dutch curse words and needed the help of native speakers. This was a fun way to get to know what ORES was! I followed the developments of ORES through the wikimedia-l mailing list where possible, and attended a presentation by Aaron at the online 2020 Wikimedia hackathon in May. An article quality model had been discussed several times within the Dutch community, but a manual model seemed way too much work and way too much maintenance for our small community, and was therefore always disregarded. AI might offer us a solution, and since I was attending the hackathon anyway and knew a bit of what ORES could offer, I attended.

From then onward I repeatedly discussed the possibilities within the community and got a green light to set up an opt-in trial version for our Dutch encyclopedia. RonnieV and I created a 5-level quality model to begin with, with a rough outline of how we think the differences in quality could be measured, and examples and categories in which the different quality levels can already be found. Our community gave feedback on this model before we took it to the developers. I started a Dutch translation on our Wikipedia about the ORES interface, where I explain the ORES tool that is already available (revision scoring) and the new article quality tool that is now being developed. I also set up a page where people can express their interest in becoming involved once the model goes into beta, to help test the algorithms. I try to attend the weekly ORES office hour, so the developers can ask questions about the local structure of our Wikipedia: templates used, categories for the specific quality levels, and how to filter out bot-created articles, for instance.
I have already given presentations to the Dutch and Belgian chapters and to our partners in the GLAM field about the upcoming model, because I think they in particular could benefit hugely from it for their writing weeks/months and research, and for seeing where the lacunae in our language version lie.

Best, Ciell (talk) 16:59, 20 March 2021 (UTC)

ORES and WikiProject Women Writers

Thanks for notifying me about this research. I am a long-time power user and champion of ORES on the English Wikipedia. I use it to help me rate articles, mostly for WikiProject Women Writers, a project I founded in 2014. I've discussed how I use ML with EpochFail at various conferences, and we continued our conversations on my en.wp talk page. He subsequently developed a script or two which significantly improved the experience for me, so that I could get my work done more quickly. I'm not very technically inclined, but if it would be helpful, I'd be glad to chat more about my user-experience point of view. For examples of how I use ORES, check out my en.wp edit history during the last pandemic year (March 2020 - February 2021), when I lost the inclination to create Wikipedia articles and instead focused on a massive article review/rating campaign, which also included improvements to the articles being reviewed/rated. --Rosiestep (talk) 18:02, 20 March 2021 (UTC)