Talk:Learning and Evaluation/Evaluation reports/2015/Editathons

From Meta, a Wikimedia project coordination wiki

Is the report useful?

Richard Nevell of Wikimedia UK shared the following opinion with me over email and agreed to let me copy it here: "The report is useful as a benchmark. Within our...corpus of events, we can work out an average for what we might hope an event achieves in terms of characters added and removed, and it is reassuring to see that those benchmarks correspond to those of the wider movement. It helps to shape and manage expectations." Abittaker (WMF) (talk) 21:29, 19 August 2015 (UTC)

Other measures

Sanna Hirvonen of Wikimedia Finland shared the following ideas with me over email and agreed to let me copy them here:

  • Does the GLAM partner have plans to continue collaboration / organise events / donate content / hire a Wikipedian in residence?
  • Number of GLAM professionals among the programme participants: this is important especially if influencing professionals and building partnerships are priority goals.
  • The number of blog posts, press releases and presentations generated by the programme - especially ones by partners or participants. These indicate that we have managed to spread awareness / positive perceptions of Wikimedia projects.
  • Testing new programme formats is valuable because it helps us learn what works and what doesn't. Can this be reported in some way? How about reporting on what we have learned from a programme we have organized?

Abittaker (WMF) (talk) 21:29, 19 August 2015 (UTC)

Richard Nevell of Wikimedia UK also shared with me over email some suggestions on other data that would help tell the story of edit-a-thons, which he agreed to let me copy below:

  • Relating back to global metrics, we're mostly happy with the data currently available. The one drawback is that finding an automated tool to help work out how many articles have been improved (as opposed to just created) has been challenging.
  • It would be useful to know how good editathons are at converting attendees into volunteers who, for example, go on to arrange events themselves. Are editathons a viable gateway to get people volunteering?
  • Readership of articles edited, in a similar way to the functioning of BaGLAMa.
  • The numbers are great for showing impact, and a second prong of having a particular case study to latch onto would be useful as it engages a slightly different audience who don't necessarily respond to numbers. It also helps to put things into context. What does 20,000 bytes of text mean? Having someone explain that they felt empowered to write about women's history, for example, really strengthens the impact of that number.

Abittaker (WMF) (talk) 21:29, 19 August 2015 (UTC)

Potential areas for further investigation

How do different types of editathons change motivation or outcomes?

How do different types of prizes change motivation or outcomes?

What could be done to improve retention rates, especially amongst new users?

What could we do to learn more about featured/good articles across the wikis?

Question about a specific edit-a-thon

Dear evaluation team, I'd like to learn more about a specific edit-a-thon that your report is based upon. According to your graph Participants, articles created or improved, and text pages added or removed, one of the edit-a-thons produced 324 pages of text with little more than 20 participants. That means that, on average, each participant wrote more than 16 pages of text. I'd like to know the secret of how someone can write more than 16 pages in just one afternoon/evening. My experience as a long-term Wikipedian is that even writing 2–3 pages of text would take me up to three or four hours. Would you mind investigating that specific edit-a-thon and providing us here with more information? Thanks a lot in advance, --Frank Schulenburg (talk) 18:14, 16 September 2015 (UTC) (in my personal, not my professional role)
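Frank's arithmetic can be checked with a quick sketch. The figures below (324 text pages, roughly 20 participants) are the ones he quotes from the graph, not values taken independently from the dataset:

```python
# Quick check of the per-participant figure quoted above.
# Both numbers come from Frank's message, not from the underlying report data.
text_pages = 324       # "Absolute Text Pages" for the event in the graph
participants = 20      # "little more than 20 participants"

pages_per_participant = text_pages / participants
print(pages_per_participant)  # → 16.2, i.e. "more than 16 pages" per person
```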

Thank you for noting this, Frank. Let me look into this and get back to you here in the next few days. Abittaker (WMF) (talk) 23:40, 24 September 2015 (UTC)
Hello again Frank--I checked the data, and there are a couple of things going on here. First, this data point was the Seven Sisters Edit-a-thon, which was run as a single program but was a series of edit-a-thons that took place at different universities over the course of three weeks. Also, the "Absolute Text Pages" illustrated in the graph are not just text pages added, but the sum of the number of text pages added and the number of text pages removed. I hope this answer helps--it is always great to see people interested in the data, so please do let me know if you have further questions. All the best, Abittaker (WMF) (talk) 19:19, 25 September 2015 (UTC)
Amanda, thanks for your answer. Shouldn't the edit-a-thons in this series rather be looked at as individual events? The report makes statements about the average amount of text added to Wikipedia ("the average event resulted in about 12.5 total text pages"). Now, if some events that you've included in your report are single events, whereas others are "a series of edit-a-thons", wouldn't that distort the average amount of text that is being reported under outcomes / content production? Best, --Frank Schulenburg (talk) 20:18, 25 September 2015 (UTC)
Also, are there any other edit-a-thons in your dataset that are actually series of edit-a-thons? --Frank Schulenburg (talk) 20:49, 25 September 2015 (UTC)
Hello again Frank--the program leader defines what the program implementation is. Some are several edit-a-thons with the same topic and goals that the program leader organizes all together and chooses to think of as one program. But it is also important to remember that not all events with a single continuous time period are the same either--as you can see in the event length chart, almost as many events lasted over seven hours as lasted three to four hours. There was even one edit-a-thon in Mexico that lasted 48 straight hours. Thus, we cannot draw conclusions about productivity from the number of edit-a-thons in an implementation, but should rather look at productivity per hour--the data for which can be found in the input and output tables of the appendix. I hope this answers your question--but if not, feel free to ping me again :) Abittaker (WMF) (talk) 05:07, 26 September 2015 (UTC)
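The per-hour comparison Amanda suggests could be sketched as follows. The event names and figures here are illustrative placeholders (only the 12.5-page average and the existence of long multi-part events come from the discussion), not values from the appendix tables:

```python
# Hypothetical sketch of the "productivity per hour" normalisation
# Amanda describes: dividing text-page output by event duration so that
# a short afternoon event and a multi-week series become comparable.
# All numbers are illustrative placeholders, not report data.
events = [
    {"name": "Event A", "text_pages": 12.5, "hours": 4},   # single afternoon
    {"name": "Event B", "text_pages": 324,  "hours": 60},  # multi-week series
]

for e in events:
    rate = e["text_pages"] / e["hours"]
    print(f'{e["name"]}: {rate:.2f} text pages per hour')
```

Normalised this way, a long series no longer dominates a per-event average simply because it ran for more hours.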
Amanda, thanks for your answer. I have a follow-up question: how many of the "events" were actually series of events? Thanks, --Frank Schulenburg (talk) 17:44, 28 September 2015 (UTC)
Ping Abittaker (WMF). What's the current status of my request? --Frank Schulenburg (talk) 18:05, 17 October 2015 (UTC) 
Good morning Frank, you can see the affiliation and name of each program in the appendix. There don't appear to be too many series of events, but that was not something we tracked in this report. I'm sorry for the delay in response, and I hope the answer is still useful. Abittaker (WMF) (talk) 18:39, 19 October 2015 (UTC)