Art+Feminism User Group/Reporting/Metrics2018
Overall, we met or exceeded our metrics forecast, with over 315 events, 3,800 participants, and 43,000 content pages created or improved across all Wikimedia projects: 22,500 articles, 18,000 Wikidata items, and over 2,500 images uploaded to Wikimedia Commons (see the note re: Paris below). The content we created was 4 times the number we projected.
As of July 20, we have:
- 286 Events on Dashboard + 2 WiR meetups + ~30 events without wiki/dashboard presence
- 3731 Editors + 77 WiR
- 18.8K Articles Edited + 134 WiR
- 3.09K Articles Created + 376 WiR
- 2.53K Commons uploads + 115 WiR
Linear and exponential growth continues
We continue to see roughly linear growth in events and participants, and exponential growth in articles and overall productivity.
Less than 1% of all articles were deleted
0.67% of all articles were deleted. 69 articles were flagged for deletion via Speedy Deletion, PROD, or AfD; of those, only 23 were actually deleted. See that section of our midpoint report for more detail.
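As a back-of-the-envelope check of that rate (a sketch: the exact denominator behind the 0.67% figure isn't stated here, so we assume it is the articles-created total from the dashboard figures above):

```python
# Hypothetical check of the deletion rate. The denominator is an
# assumption: total articles created (3.09K dashboard + 376 WiR).
articles_created = 3_090 + 376
flagged = 69    # Speedy, PROD, or AfD nominations
deleted = 23    # articles actually deleted

deletion_rate = deleted / articles_created
print(f"{deletion_rate:.2%}")  # ~0.66%, i.e. well under 1%
```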
65% repeat organizers
Of the 167 nodes in Streak that organized in 2017, 108 organized again in 2018.
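The headline percentage follows directly from those two Streak counts:

```python
# Repeat-organizer rate from the Streak counts above.
organized_2017 = 167
organized_again_2018 = 108

repeat_rate = organized_again_2018 / organized_2017
print(f"{repeat_rate:.0%}")  # 65%
```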
Unfortunately, Paris, our largest node last year, was not able to organize an event this year. Last year, about 1,000 people attended over a two-day period; most came for a long-table conversation, while a more modest ~50 people attended and edited. Adjusted for this loss of roughly 1,000 participants, our metrics outcomes tracked with our prediction model.
Wikimedia Egypt / CAI ArtAndFeminism 2018 competition
In processing our outcomes, we noticed that Wikimedia Egypt's CAI ArtAndFeminism 2018 competition was hugely productive. You can see the guidelines of the competition on the meetup page ar:ويكيبيديا:مسابقة المرأة والفنون (in Arabic; we were reading via Google Translate). The dashboard page registers 1.57K articles created and 8.38K articles edited. That is about 50% of the total articles created, and 45% of the total articles improved (excluding Wikidata and Commons work)!
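Those shares can be reproduced from the totals at the top of this report; a sketch, assuming the percentages were computed against the dashboard totals (3.09K created, 18.8K edited) rather than the dashboard + WiR totals:

```python
# CAI competition's share of campaign article work (excluding Wikidata
# and Commons). Denominators are an assumption: the dashboard totals
# quoted at the top of this report.
cai_created, cai_edited = 1_570, 8_380
total_created, total_edited = 3_090, 18_800

created_share = cai_created / total_created  # roughly half
edited_share = cai_edited / total_edited     # roughly 45%
print(f"created: {created_share:.0%}, edited: {edited_share:.0%}")
```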
What does this mean? We have thought about this, and have a few takeaways:
- This project has significant room for growth in the Arab world. This meshes with our focus on Latin America, Africa and Asia.
- Competitions might be a productive format to experiment with. Based on this example, only a small number of people may get deeply involved (22 people in this case), but the gamification and competitive aspects can lead to significant output. We do wonder what labor would be required to administer and adjudicate the process. We may try one or two competitions with groups that have held them before.
- Given the impact of Paris's absence on this year's outcomes, we are going to produce a forecast with some contingency for these large nodes. We will isolate the CAI competition's 9,950 articles (1.57K created + 8.38K edited) from the forecast model, and calculate that aspect of the campaign separately.
Implications of growth in Wikidata and Commons work
Following the logic from the CAI competition above, we are going to be judicious in including the 18,000 Wikidata items and 2,500 Commons items in the growth model. We have not previously included the Commons items, so adding them now would skew the curve. We will continue to model the non-Wikidata/Commons articles separately, and will add the Wikidata and Commons counts in separately.
For our data models, we are going to use 12,500 articles as the base of our forecast. We will also retroactively remove the Paris data from 2017 to better model future growth (though it appears that Paris 2016 was reported as only ~50 people, and did not count the non-editors who came for the long tables...?)
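The 12,500 figure follows from subtracting the isolated competition output from the overall article total; a sketch of that arithmetic, assuming the 9,950 isolated articles are the CAI totals (1.57K created + 8.38K edited) and that the report rounds the result down:

```python
# How the ~12,500-article forecast base falls out of the totals above.
total_articles = 22_500       # all articles created or improved
cai_articles = 1_570 + 8_380  # CAI competition: created + edited
forecast_base = total_articles - cai_articles
print(forecast_base)          # 12550, rounded down to 12,500
```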
Per our midpoint report, we have had increased success implementing the Dashboard in our communities' workflows.
Missing or incomplete data: This year we had fewer events that only created meetup pages, or that had incomplete or missing data. Per our 2017 metrics report, in 2017 we had 30 events with bad data. This year we had fewer, maybe 10; we don't have exact numbers because we collectively decided it was not worth the work to track those events beyond an initial email asking whether we had missed their data.
Undercounting: Per our 2017 metrics report, we continue to struggle with undercounting at large events, like MoMA. No matter what we do, we cannot get everyone in the room to sign in. This is generally true for all events (maybe a 10-15% fall-off), but we think the rate is higher at larger events, regardless of which tool we use: Dashboard, meetup page, etc. We aren't sure what to do about this.