Talk:Movement Strategy/Recommendations/Iteration 3/Evaluate, Iterate, and Adapt

The following discussion is closed. Please do not modify it.

Open, facilitated community conversations have finished. You are welcome to continue commenting and discussing, but the discussions will not be as closely facilitated and documented.

Thank you to everyone who shared their input. To learn about what’s next, visit our FAQ page.

Editorship numbers should be the key metric

We don't need fancy impact measurement. We need more editors, and we have easy measurements. The number of editors making 1/5/100 edits per month is a great metric that's easy to measure and easy to target with interventions. Increasing that metric should be the WMF's key concern, and the other metrics should be secondary. People don't donate to Wikipedia because they want knowledge equity. They donate because they like the content that we do provide and hope that the donation allows Wikipedia to get more of the content that it does provide. A/B tests should be routinely used to figure out whether technical changes increase editorship. To the extent that equity is a prime mission of the WMF, WMF fundraising requests shouldn't talk about keeping Wikipedia running but talk about equity. ChristianKl 14:08, 21 January 2020 (UTC)

Hi ChristianKl, thanks for bringing this up. Certainly, numerical metrics are great and have been in use for quite some time within the Wikipedia community. The trouble with numbers is they don't tell a story. By looking at the numbers of edits, what does that tell us? We can draw information from the numbers, but often it's making up a story from our individual point of view. This recommendation is suggesting better and more frequent evaluation across the board, because measuring success by edits alone misses a great deal. I believe this evaluation and the resulting adaptation will increase the number of editors and edits per month. Best, Jackiekoerner (talk) 16:30, 21 January 2020 (UTC)
Your response doesn't seem to address the point I made. I haven't called for looking at the numbers of edits, and I have a history of speaking against using them as a measurement. I have called for counting editors who do edit. Not understanding the difference between the two suggests that you haven't thought seriously about the issue, which is concerning.
Numbers might not tell the kind of stories that people from the humanities want to hear, but they are a better tool. Total numbers of editors don't tell you stories about equity, but they do tell you how many people feel comfortable enough to edit Wikipedia regardless of their gender and personal identity.
All the big websites that are successful today have metrics and optimize to maximize them. It's effective at scaling impact. The WMF, on the other hand, doesn't focus on optimizing and thus spends a lot of money in ways that don't have a significant impact. ChristianKl 18:50, 24 January 2020 (UTC)
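To make the metric discussed above concrete, here is a minimal sketch (not from the discussion itself) of how the 1/5/100 edits/month counts could be computed from raw edit records. The data layout and names are illustrative assumptions, not an actual Wikimedia data format or API:

```python
from collections import Counter, defaultdict

def active_editors(edits, thresholds=(1, 5, 100)):
    """For each month, count distinct editors with at least t edits,
    for each threshold t. `edits` is an iterable of (username, "YYYY-MM")
    pairs -- an illustrative layout, not a real Wikimedia schema."""
    per_month = defaultdict(Counter)  # month -> Counter of edits per editor
    for user, month in edits:
        per_month[month][user] += 1
    return {
        month: {t: sum(1 for n in counts.values() if n >= t) for t in thresholds}
        for month, counts in per_month.items()
    }

# Example: two editors active in January 2020, one of them with two edits.
edits = [("Alice", "2020-01"), ("Alice", "2020-01"), ("Bob", "2020-01")]
print(active_editors(edits))  # {'2020-01': {1: 2, 5: 0, 100: 0}}
```

An A/B test of the kind proposed would then compare these per-month counts between a control group and a group exposed to a technical change.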

What about the WMF & Community?

One additional group of metrics comes to mind: Community/WMF relationship metrics, and that's deliberately in both directions.

Community trust in the WMF, and especially in T&S, took a battering this year, as it has at several points in the past. There were several aspects of several recommendations where a year ago I would have been "merely" extremely reluctant, but I am now firmly against them because I don't trust the WMF with any more ceded authority. Others share these viewpoints.

Conversely, we know comparatively little about how WMF staff really feel about the Community (as a whole); we see only what gets published in releases by the most senior staff. Anonymised information from the whole workforce would be very interesting.

Thus, I suggest the following evaluations (all to be anonymously gathered and presented). Community demographic data must include primary project/language variant to show variation:

  1. Community general trust in the WMF
  2. Community general trust in T&S
  3. Community confidence in WMF's ability to effectively and neutrally carry out the Wikimedia movement recommendations
  4. WMF staff general trust in the Community
  5. WMF staff assessment of the Community's willingness to discuss implementation of recommendations in good faith
  6. WMF staff trust in the WMF's strategic decisions over the previous year

Nosebagbear (talk) 16:40, 23 January 2020 (UTC)


Clarity, conciseness, and proposed edits

Moving discussion here per CKoerner (WMF)'s request. (proposal forthcoming)

"Provide evaluation with resources and experts on any given area"

(moved to subheading) I was editing for clarity, and I can't figure out what this is supposed to mean. Any help? Mdaniels5757 (talk) 21:27, 23 January 2020 (UTC)

@Mdaniels5757: evaluation is critical to understanding current progress in any area, whether it is content coverage, contributors' technical skills, or community health. However, in order to properly evaluate the degree of progress across all the dimensions that an area encompasses (e.g. community health can be measured in terms of the number of conflicts, the types of conflicts, positive communication with newcomers, etc.), we need experts who are able to define, measure, study, and communicate that area on a regular basis. --Marcmiquel (talk) 23:25, 27 January 2020 (UTC)

minor wording thing

Can we please change the non-word "reorientate" to "reorient"? - Jmabel (talk) 17:58, 1 February 2020 (UTC)

That sounds like a very reasonable suggestion. I'll see if we can get it changed --Abbad (WMF) (talk) 19:01, 2 February 2020 (UTC).

This appears to derive from iterative and incremental development

It seems that this recommendation is taken from en:Iterative and incremental development, which is in opposition to the en:Waterfall model.

Iterative and incremental development theory refers to software development. In terms of social architecture, this would be an endorsement of en:Piaget's_theory_of_cognitive_development#Assimilation_and_Accommodation in a collective sense.--Epiphyllumlover (talk) 03:32, 4 February 2020 (UTC)

Feedback from Hindi Community members on Evaluate, Iterate, and Adapt recommendation

There was support for developing an evaluation process to improve diversity and inclusion in technology, policies, and governance systems, in order to build the capacity of movement stakeholders. Support was also expressed for iterating processes that propose changes in technology, policies, and governance systems, promoting validation through research and testing. It was added that best practices are built out of learning experiences, which are important to adapt, iterate, and scale. RSharma (WMF) (talk) 19:27, 16 February 2020 (UTC)

Highlights from the Spanish and Catalan/Valencian Conversations - Evaluate, Iterate, and Adapt

The perception was that this is something basic; it looks more like a goal than an end in itself.

Possible tension point: Amical does not want money for anything that can be done by volunteers or that is perceived as unnecessary bureaucracy. At the same time, the communities around them do not want to be the ones (neither volunteers nor local staff) who have to spend time compiling metrics or providing information; they feel that if it is demanded from a higher level, it is the higher level that should gather it. They are not talking about withholding evaluation information; they do not want the evaluation work to rest on those who do the actual work.

Scale problems were perceived in the Catalan context: they share some context with Spain, but their language/wiki does not share content with Spanish, Basque, or Galician.

Efficiency issue: see Safety. If the WMF is unable (because it was not part of its initial role) to provide security to volunteers in a diverse and changing world, it is probably more efficient to seek to meet that need outside of the WMF rather than spending time and resources trying to create something from scratch.--FFort (WMF) (talk) 16:31, 21 February 2020 (UTC)

Feedback from Wikimedia UK

We felt that there was some overlap here with some of the other recommendations, and that evaluation needs to cut right across the strategy. We wondered how this will relate to local reporting requirements, and what the timetable and process will be for determining those.


The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made in a new section.