Learning and Evaluation/Evaluation reports/2015/Evaluation initiative
The evaluation initiative in the movement is unique in that it not only gathers data for these reports, but also aims to develop capacity in different communities to build a shared understanding of what learning and evaluation mean.
The Current Initiative
In November 2012, the Wikimedia Funds Dissemination Committee (FDC) proposed that we discuss "impact, outcomes and innovation, including how to measure, evaluate and learn" across the Wikimedia movement. This request raised some important questions.
To address these questions and help measure the impact of our programs, the Wikimedia Foundation’s Program Evaluation and Design initiative started in April 2013, with a small team and a community call to discuss program evaluation. In the first few months, the team worked to identify the most popular Wikimedia programs and collaborated with a first set of program leaders to map program goals and potential metrics. The team then invited a broader community of program leaders to share feedback about their capacity for evaluation through an online survey, to explore what programs were out there, what was important to program leaders, and what they were measuring. Initial survey results indicated that many program leaders were already tracking quite a bit of data about their programs. Informed by these survey results, gathered by August 2013, we launched the first round of data collection in September 2013 and completed our first Evaluation Report (beta). This high-level analysis started to answer many of the questions raised by movement leaders about key programs and their impact. The report was well received by our communities and generated many discussions about the focus of these programs, their diversity, and the data they collected. But it still left room for improvement.
Community engagement
Since releasing the initial beta reports, the Program Evaluation and Design team has conducted activities to grow evaluation capacity and awareness in the movement. The team has:

* Led 11 in-person workshops in conjunction with community gatherings, providing over 35 hours of workshop time
* Organized 24 virtual meet-ups offering more than 30 hours of direct training and support on learning and evaluation
* Shared more than 15 learning and evaluation focused blog posts and worked to connect with over 100 program leaders to develop design resources and learning patterns for programs

In July 2014, we collected a second annual survey of evaluation capacity via the 2014 Evaluation Pulse Survey. In nearly every case, a majority of program leaders reported using each of these resources. Self-reports of evaluation tracking, monitoring, and reporting demonstrated a shift in community capacity and engagement in learning and evaluation.
Evaluation Pulse
Of the 90 program leaders who responded to this year’s Evaluation Pulse survey[1], most reported that, even before the implementation of global metrics in late August 2014, they were already tracking many key data points for understanding program inputs, including: date/time of program[2], input hours[3], program budget/costs[4], and donated resources.[5] They also reported tracking program outputs and outcomes: participant user names[6], number of new accounts created[7], gender of participants[8], number of media uploads[9], number of new articles[10], number of articles edited[11], content areas improved[12], and lessons learned[13].
In addition to tracking more inputs and output counts for their programs, most program leaders also reported tracking the content areas improved (87%)[14] and their lessons learned (94%)[15]. Seventy percent of survey respondents reported having accessed direct consultation with WMF team members. When asked what direct mentoring and consultation they had accessed:
Many had also used a number of Portal Resources:
When asked to share the ways they were monitoring participant outcomes, program leaders, as in the 2013 survey, were most often monitoring what participants contributed at the program events, followed by monitoring what participants contributed after the events (see graph below). Notably, program leaders were more likely to report using participant follow-up surveys in the 2014 Evaluation Pulse than in 2013. Consistent with the increasing number of requests our team has received for survey tool access, self-reports also suggested that surveys were being used more in 2014 than at baseline. In addition to reporting increased program and impact monitoring, nearly twice as many program leaders reported identifying measurable targets for their program goals in 2014, compared to only 39% in 2013 (see graph below). Finally, the majority of program leaders reported feeling prepared to move ahead with most aspects of evaluation: