Introduction to Evaluation is an overview of the evaluation initiative at the Wikimedia Foundation and a guide to our portal, Grants:Evaluation. Thank you to the many volunteers, organizations, and Wikimedia staff who have been involved in this initiative and who continue working on this effort. If you have questions about the portal or would like further assistance, contact eval@wikimedia.org.
In 2013, the Wikimedia Foundation started a conversation about evaluation. Since then, we have worked to help volunteers and organizations begin evaluating their projects and program activities, supporting program leaders and evaluators to:
more clearly identify goals and target outcomes for projects and programs,
measure against those goals and share learning about projects and programs, and
design and implement effective programs to reach their goals.
A program is a group of related projects and activities that: a) share the same objective; b) are repeated on a regular basis; and c) are based on a similar theory of change, using similar processes and interventions to make that change happen.
There are three basic characteristics of a project that make it a “program.” A program:
has shared objectives: a group of related projects and activities that share the same objective / goal
is sustained or repeated: a group of related projects and activities that are repeated on a regular basis / that involve a long-term commitment
is guided by a consistent, or similar model: a group of related projects and activities that share a similar theory of change and that use similar processes and interventions to make that change happen
For this evaluation capacity-building initiative, we are all in it together – this effort will be a continuous process of learning for everybody involved. We will work collaboratively and we understand that context matters.
The current initiative aims to support evaluation practice that will:
Foster program leader choice
Focus on building program leaders’ capacity to help themselves
Support and enhance desired outcomes
Develop program leaders’ evaluation logic and skills
Involve program leaders as key participants – they make the major focus and design decisions and select and commit to process and outcomes
Support program leaders in drawing conclusions and applying them to the design of their programming
Focus on intended uses and users
Actively involve users in all aspects of the evaluation
Measure the extent to which goals and objectives are met
Provide comparable data to inform decisions to continue, expand, or reduce funding based on costs and impact.
Lead to ongoing commitment to using evaluation logic and building a community culture of learning
The portal has four main sections with evaluation resources: Study, Plan, Measure, and Share. Two additional sections, Connect and News & Events, are spaces for networking within the evaluation community through talk pages and both online and face-to-face events.
Navigation menu of the Evaluation portal
Evaluation is a field of study with its own language and theory. This section holds resources to help new evaluators with the terminology, theory, and guides related to evaluation in the Wikimedia movement.
The Glossary, one of the most valuable sections, defines key terms that may come up in conversations about program evaluation. Who is a ‘program leader’? What is ‘program implementation’?
The Learning modules offer step-by-step guides to various resources and tools for evaluating a program, such as writing surveys and using Wikimetrics.
Evaluating a program means planning in advance. One of the first steps in planning an evaluation is to identify goals, their specific outcome targets, and appropriate metrics for measuring those outcomes. This section holds resources to help new evaluators with processes and tools for planning an evaluation strategy.
Once you identify your goal (or goals), check out the links to Program Resources, a basic guide to best practices, associated program goals, and metrics.
Is your program slightly different? The Logic Model is a great process to help map your program’s vision.
Explore Learning Patterns related to implementation, such as how to collect usernames, how to ask about gender, or how to advertise a program.
In order to evaluate, you must know what you will measure and how. Tracking and monitoring a project or program provides data that can help tell the story of whether that program is working as expected and where improvements might be needed. This section holds resources to help new evaluators with strategies for tracking and monitoring, as well as tools and resources for collecting data on outcomes related to Wikimedia programs.
Wikimetrics is a useful tool for easily measuring user contributions on Wikimedia projects; here you will find a how-to guide and resource links. Survey questions are great for measuring participants’ attitudes or behaviors.
In the Measure section you will also find survey tools, including the Survey Question Bank, a repository of questions organized by program or survey goals.
You can also learn about Qualtrics, an online survey platform that Wikimedia program leaders may request access to for piloting large-scale online surveys.