Program evaluation basics: why evaluate in the first place?

This page aims to give the reader some answers to the question "Why should we evaluate our programs?". After reading it, people who run programs should have a better understanding of why program evaluation is valuable and which questions it helps to answer.

Why evaluate in the first place?

The easiest answer to the question of why to evaluate a program is: because you want to know if it works! (Or, even better: because you want to know whether it is efficient, effective, and has impact.) Consider the many hours of work that you and perhaps a bunch of other volunteers have put into your program. You've run a couple of Wikipedia workshops, you've explained to hundreds of newcomers how to upload images to Commons, you've put thousands of hours into developing a great software extension that makes editing a piece of cake… Then what? You know how many people participated in your workshops, but you have no idea how many of them are still editing after half a year (impact). People seemed to enjoy your hours-long explanations of copyright policies on Commons, but you have no clue whether that's the most effective way of getting them to upload the material that's on their hard disks (effectiveness). Your software seems to work great, but you have no idea whether it was worth spending all those endless days getting it up and running (efficiency). Here's where program evaluation comes into play. But there's even more to it. Let's take a look at a couple of other reasons why program evaluation is valuable.
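To make those three questions a bit more tangible, here's a minimal sketch of the kind of arithmetic involved. It's written in Python; the data, field names, budget figure, and the 182-day retention threshold are all made-up illustrations for this page, not a prescribed Wikimedia methodology.

```python
from datetime import date, timedelta

# Hypothetical records for one workshop: each participant's sign-up
# date and the date of their most recent edit (None = never edited).
participants = [
    {"signed_up": date(2012, 3, 10), "last_edit": date(2012, 11, 2)},
    {"signed_up": date(2012, 3, 10), "last_edit": date(2012, 4, 1)},
    {"signed_up": date(2012, 3, 10), "last_edit": None},
]

SIX_MONTHS = timedelta(days=182)  # illustrative retention threshold

# Impact: how many participants were still editing half a year later?
retained = [
    p for p in participants
    if p["last_edit"] is not None
    and p["last_edit"] - p["signed_up"] >= SIX_MONTHS
]
retention_rate = len(retained) / len(participants)

# Efficiency: what did each retained editor cost, given a made-up
# total program budget (venue, travel, materials)?
program_cost = 1500.0
cost_per_retained_editor = (
    program_cost / len(retained) if retained else float("inf")
)

print(f"Six-month retention: {retention_rate:.0%}")                 # -> 33%
print(f"Cost per retained editor: {cost_per_retained_editor:.2f}")  # -> 1500.00
```

Comparing a cost-per-outcome figure like this across different programs is one way to approach the "biggest impact with the least resources" question discussed below.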

Choosing the program that aligns best with your mission. Let's face it: as a grantee, you have many different choices when it comes to deciding what you want to spend time and money on. You could participate in Wiki Loves Monuments, you could teach students how to write Wikipedia articles, you could partner with museums as part of Wikipedians in Residence, etc. But how do you know whether a specific program aligns with the goals of your organization? Ideally, you would have a great deal of information about which programs achieve which kinds of outcomes. You'd also want to know which of the existing programs has the biggest impact with the least amount of resources spent on it. Currently (as of late 2012), this kind of information doesn't exist. That's what's behind our efforts to think more about program evaluation.

Improving or changing your program. Let's assume you're already running programs. Do you already collect data on the efficiency, effectiveness, and impact of your activities? And do you use this data to continuously improve your programs? (If you do, that's great; you might want to share this information online so other grantees can learn from you.) Program evaluation is a great way of improving your programs along the way. You might want to evaluate your program once you're halfway through the program cycle, or even more often. The more you're able to tell whether the things you do as part of your program are working out as planned (or whether they're falling short of what you set out to achieve), the better positioned you will be to take these learnings and put changes into effect that improve your program. And, as long as you don't forget to share these learnings, other grantees will benefit from your insights as well. This is how learning through evaluation can drive the impact of Wikimedia programs across countries.

Making a case for a new program. Let's assume you woke up one day with a great new idea about how to make Wikipedia a better place. Everything seems perfect. But how do you convince others that your idea is worth the effort and money needed to bring your program to life? Gut feelings, perceptions, even anecdotes might not be enough to persuade those who will ask you for more objective evidence. That's where a small pilot program with a bullet-proof evaluation comes into play. Data might not always convince everybody involved, but it will at least help demonstrate the value of your case.

Being accountable to your donors and/or funders. Wouldn't it be great if you had some hard facts to show your donors (e.g. as part of your annual report) about how much impact your programmatic activities have? Not only would this add to your own satisfaction, it would also make your fundraising more likely to succeed. And with the Funds Dissemination Committee (FDC) in place since 2012, there's another reason why program evaluation will be more important than ever: after the first round of funding for 2012/2013, the FDC requested more information about program impact, so that it has a better foundation for making recommendations on what to fund in the future. This means that from now on, funding decisions will rely heavily on the ability of grantees to demonstrate the impact of their programmatic activities. Grantees who plan to apply for movement funds through the FDC process will therefore have to start thinking about program evaluation.

