User:JMitchell (WMF)/Sandbox/Workshop In Budapest

In June 2013, the first Program Evaluation and Design Workshop will take place in Budapest, Hungary. This workshop is being offered by the Wikimedia Foundation, with support from Wikimédia Magyarország.

Over the next couple of years, the Wikimedia Foundation will be building capacity among program leaders around evaluation and program design. A better understanding of how to increase impact through better planning, execution, and evaluation of programs and activities will help us move a step closer to achieving our mission of offering a free, high-quality encyclopedia to our readers around the world. With this in mind, we are pleased to announce the first Program Evaluation and Design Workshop, on 22–23 June 2013 in Budapest, Hungary. Only 20 slots are available for this workshop, so please apply before the application deadline.

For detailed information on visiting Budapest, including weather, visa requirements, getting to and around the city, etc., please check out the "Visiting Budapest" page.

Our long-term goals for the workshop are:

  • Participants will gain a basic shared understanding of program evaluation
  • Participants will work collaboratively to map and prioritize measurable outcomes, beginning with a focus on the most common programs and activities
  • Participants will gain increased fluency in the common language of evaluation (e.g., goals versus objectives, inputs and outputs versus outcomes and impact)
  • Participants will learn and practice how to extract and report data using the UserMetrics API
  • Participants will commit to working as a community of evaluation leaders who will implement evaluation strategies in their programs and activities and report back at the pre-conference workshop at Wikimania 2013
  • Participants will have a lot of fun and enjoy networking with other program leaders!


Agenda

During the workshop in Budapest, we will have only a limited amount of time. Therefore, we will focus on some of the more common programs and activities:

  • Wikipedia editing workshops where participants learn how to edit or edit actively (e.g., edit-a-thons, wikiparties, hands-on Wikipedia workshops)
  • Content donations through partnerships with galleries, libraries, archives and museums (GLAMs) and related organizations
  • Wiki Takes/Expeditions where volunteers participate in day-long or weekend events to photograph site-specific content
  • Wiki Loves Monuments, which takes place in September
  • Education program and classroom editing where volunteers support educators who have students editing Wikipedia in the classroom
  • Writing competitions, which generally take place online in the form of contests (e.g., the WikiCup) and other challenges – often engaging experienced editors to improve content

Friday June 21, 2013

Day 0 Etherpad

19:00
Optional Dinner (no host)

Saturday June 22, 2013

Day 1 Etherpad

09:00
Meet at venue
  • Light breakfast served
  • Evaluation survey
09:15
What Is Program Evaluation and Why We Evaluate
  • Overview and welcome
  • Presentation Content & Process
  • What program evaluation is, isn’t, and why we do it
09:55
Stages and types of Program Evaluation
  • Stages in the Evaluation process
  • Types of Evaluation and associated purposes and strategies
10:20
Short break
10:30
The aims of the current evaluation approach
  • The iterative Evaluation-Design process
  • Evaluation approach
  • Stakeholder roles in evaluation
10:45
Visioning
  • Evaluation visioning group activity
11:30
Program Evaluation Spotlights
  • Select Lightning Talks by participants
12:30
Lunch

Catered lunch

13:30
Theory of Change and Logic Models
  • Theory of Change and Chain of Outcomes
  • A Focus on Outcomes and Impact
  • Continuous Improvement models of programming
  • The Logic Model as an important part of the evaluation toolkit
  • Logic Modeling Basics
14:30
Logic Model Break-out Session 1: Mapping through the chain of outcomes
  • Articulating Theories of Change
  • Mapping of Inputs, Outputs (Activities and Participants), and Outcomes (Short-, Intermediate-, and Long-term Outcomes/Impacts)

16:30
Afternoon break

Coffee/Tea

16:45
Whole group Sharing
  • Sharing
  • Discovery of commonalities and distinctions
18:00
Pre-Dinner break
19:00
Evening dinner

Sunday June 23, 2013

Day 2 Etherpad

08:30
Meet at venue
  • Light breakfast served
  • Day 2 announcements
09:00
Check-in: Takeaways from Day 1
09:20
Data Sources
  • Types of Data
  • Data Sources
  • Overview of UserMetrics API (see the sketch after this agenda item)
    • Creating Cohorts and Data Requests
    • Selecting Metrics and Parameters
    • Interpreting Output
    • Access and Availability
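
As a rough illustration of the kind of request covered in this session, the sketch below pulls a single metric for a registered cohort over HTTP. The endpoint layout, cohort name, metric name, and parameters are assumptions for illustration only; consult the UserMetrics API documentation for the actual interface.

    import requests

    # Minimal sketch of a UserMetrics API request. The endpoint layout
    # (/cohorts/<cohort>/<metric>) and the parameter names below are
    # assumptions modeled on the workshop-era service at
    # metrics.wikimedia.org, not verified documentation.
    BASE_URL = "http://metrics.wikimedia.org"

    def fetch_cohort_metric(cohort, metric, **params):
        """Request one metric for a registered cohort; return parsed JSON."""
        url = "{0}/cohorts/{1}/{2}".format(BASE_URL, cohort, metric)
        response = requests.get(url, params=params)
        response.raise_for_status()
        return response.json()

    # Hypothetical example: bytes added by a workshop cohort in June 2013.
    result = fetch_cohort_metric(
        "budapest_workshop_2013",  # hypothetical cohort name
        "bytes_added",             # an editor metric requested by name
        start="2013-06-22",
        end="2013-06-30",
    )
    for user, values in result.get("data", {}).items():
        print(user, values)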
10:50
Logic Model Break-out Session 2: Identifying Data Sources and Gaps
  • Prioritize Inputs, Outputs, and Outcomes
  • Identify evaluation data measures and gaps
12:00
Lunch
Catered lunch
  • Evaluation survey (should be turned in before leaving lunch)
13:00
Logic Model Break-out Session 3: Prioritizing Outcome Indicators
  • Prioritize input, output, and outcome measures
  • Prioritize needs for additional evaluation strategies/measures
14:00
Whole group presentation and processing
  • Group Sharing
15:00–15:45
Wrap up

Presentations


Workshop in Budapest Outcomes

Participant feedback

Participants in the first Program Evaluation & Design Workshop, Budapest, 2013

Twenty-six international participants came together in June 2013 for the pilot Program Evaluation & Design Workshop in Budapest, Hungary, among them 21 Wikimedians from 15 countries. The participants – all with a track record of doing program work – represented five program types: Edit-a-thons/Editing Workshops, GLAM Content Donations, Photo Upload Contests (Wiki Loves Monuments, WikiExpeditions, WikiTakes), On-wiki Writing Competitions (contests such as the WikiCup), and the Wikipedia Education Program. Participants were asked to complete PRE and POST workshop surveys to assess the workshop's impact against its stated objectives:

  1. Participants gain a basic shared understanding of program evaluation.
  2. Participants will work collaboratively to map and prioritize measurable outcomes, beginning with a focus on the most popular programmatic activities.
  3. Participants will gain increased fluency in the common language of evaluation (e.g., goals versus objectives, inputs & outputs versus outcomes & impact).
  4. Participants will learn about different data sources and how to extract data from the UserMetrics API.
  5. Participants will commit to working as a community of evaluation leaders who will implement evaluation strategies in their programs and report back to the group.
  6. Participants will have a lot of fun and enjoy networking with other program leaders!

The majority of the pilot workshop participants entered the workshop with no or only a basic understanding of most of the ten program evaluation terms included in the survey; only the terms program, qualitative, and quantitative were well known to the group at the beginning. By the end of the workshop, the majority left with an applied or expert understanding of nearly all the key terms on the survey. Importantly, the core concept terms “theory of change” and “logic model,” while still less understood than the other terms, showed highly significant gains along a trajectory similar to the other terms that were little known at PRE survey time.

PED Understanding pre and post survey results June 2013

Specifically, understanding of each of the selected terms demonstrated the following growth from PRE to POST:

  • Cohort: Understanding grew from 19% reporting applied or expert understanding at PRE to 78% at POST
  • Inputs: Understanding grew from 38% reporting applied or expert understanding at PRE to 100% at POST
  • Logic Model: Understanding grew from 25% reporting applied or expert understanding at PRE to 47% at POST
  • Outcomes: Understanding grew from 40% reporting applied or expert understanding at PRE to 84% at POST
  • Outputs: Understanding grew from 30% reporting applied or expert understanding at PRE to 95% at POST
  • Metrics: Understanding grew from 50% reporting applied or expert understanding at PRE to 63% at POST
  • Program: Demonstrated a growth trend from 63% reporting applied or expert understanding at PRE to 74% at POST
  • Qualitative: Understanding was maintained, with 75% reporting applied or expert understanding at PRE and 74% at POST
  • Quantitative: Demonstrated a growth trend from 75% reporting applied or expert understanding at PRE to 84% at POST
  • Theory of Change: Understanding grew from 12% reporting applied or expert understanding at PRE to 53% at POST

In addition to measurable change in understanding of a new, shared vocabulary, participants also demonstrated a high level of success in grasping several core learning concepts that were presented and modeled throughout the workshop. At POST survey time, participants rated their level of understanding of six key learning concepts from the workshop presentations, all of which they rated rather high.

PED Post Survey Self-Ratings June 2013

Furthermore, the majority of the participants were highly satisfied with the process of, and logic models generated by, the break-out group sessions. At both PRE and POST survey time, participants shared one word or phrase that best represented their feelings about evaluation. At PRE survey time, responses, while somewhat “curious,” also reflected feelings of pressure to participate; at POST survey time, participants expressed much more excitement, along with a fair bit of feeling overwhelmed. When asked what next steps they were planning to implement in the next 45 days, the participants’ most frequent responses were:

  • Develop measures for capturing outcomes (47%)
  • Conduct a visioning activity to articulate their specific program’s impact goals and theory of change (42%)
  • Develop their own custom logic model to map their specific program’s chain of outcomes (42%)

Although most participants offered specific ways that the workshop could be improved, the majority felt confident in their ability to implement next steps in evaluating their programs. They also shared the ways that the Program Evaluation and Design team could best support them in those next steps (e.g., broader community involvement, quality tools for tracking data, survey strategies, materials to teach other program leaders, and an online portal for engagement), toward which the team continues to direct progress.

Complete responses are summarized in the Results Summary (see below).

PRE and POST Participant Survey Results

Program Evaluation and Design Workshop Logic Model Drafts

This page is currently a draft. Material may not yet be complete, information may presently be omitted, and certain parts of the content may be subject to radical, rapid alteration. More information pertaining to this may be available on the talk page.


Note to Program Leaders: While the Logic Model tool was mostly successful, many participants were still confused about inputs versus outputs in terms of "activities," as well as about identifying WHO did WHAT in terms of "Participation" outputs. So, in the current version seen below, instead of the former two-category system for outputs (i.e., Activities and Participation), there are now three categories (i.e., Participants, Activities, and Direct Products). You are welcome to share your perspective on this on the talk page, as it is something we are continuing to work on for clarity's sake.

Within each column are the items identified within each program's general theory of change, as delineated and mapped out in the program-based break-out session groups. The items are arranged in columns for the various Input, Output (Participants, Activities, and Direct Event Products), and Outcome (Short-, Medium-, and Long-term) categories. Note: because of confusion and some gaps in the end products related to the confusing output category "participation," we have revised the outputs mapping to include participants, activities, and direct products as separate output prompt categories. Please share on this article's talk page whether this helps with clarity and thorough mapping or somehow further muddies the waters.
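
To make the revised three-category output scheme concrete, here is a minimal sketch of one way a program's logic model could be captured as a simple data structure; the edit-a-thon items are hypothetical illustrations, not one of the workshop's actual drafts.

    # Illustrative logic model for a hypothetical edit-a-thon, using the
    # revised column scheme: Inputs -> Outputs (Participants, Activities,
    # Direct Products) -> Outcomes (Short-, Medium-, Long-term).
    edit_a_thon_model = {
        "inputs": ["venue", "trainers", "laptops", "source materials"],
        "outputs": {
            "participants": ["20 new editors", "5 experienced Wikimedians"],
            "activities": ["account creation", "guided first edits"],
            "direct_products": ["new and improved articles", "event photos"],
        },
        "outcomes": {
            "short_term": ["participants can make a sourced edit"],
            "medium_term": ["participants are still editing after 30 days"],
            "long_term": ["more high-quality content and retained editors"],
        },
    }

    def list_items(model):
        """Flatten the model into (category, item) pairs for reporting."""
        for category, value in model.items():
            if isinstance(value, dict):
                for subcategory, items in value.items():
                    for item in items:
                        yield (category + "/" + subcategory, item)
            else:
                for item in value:
                    yield (category, item)

    for category, item in list_items(edit_a_thon_model):
        print(category, "-", item)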

Program Name: Overarching themes example

Theory of Change Vision: Wikimedia Programs will recruit, retain, and support contributors to create and maintain high quality content across Wikimedia projects.


Click here to see the logic model created at the workshop

Pickles

Pickles are situations or problems you have to solve.
  1. Documentation of Programs/projects/evaluation – Where, how, when, and what to collect?
  2. Including community voices in the dialogue as opposed to imposing WMF’s narrowing focus on the movement
  3. Confusion over unexpected outcomes and whether they are important to take into account
  4. Defining outcomes versus outputs – an outcome being, simply, something that happens after you lose control of participants – many still struggle with the distinction
  5. Too many modes of communication (i.e., mailing lists, wiki, events, etc.)
  6. I was positively surprised that it was possible to get certified Halal food in Hungary

More Links