Wikimedia monthly activities meetings/Quarterly reviews/Grantmaking and Program Evaluation/October 2013
The following are notes from the Quarterly Review meeting of the Wikimedia Foundation's Grantmaking and Program Evaluation teams on October 25, 2013, 2pm-5:20pm, and a followup meeting on October 30, 2013, 3pm-4:40pm.
October 25:
- Present: Siko Bouterse, Katy Love, Jessie Wild, Anasuya Sengupta, Erik Moeller, Sue Gardner, Frank Schulenburg, Tilman Bayer (taking minutes), Haitham Shammaa, Jaime Anstee, Asaf Bartov, Adele Vrana (from 3:49pm)
- Participating remotely: Sarah Stierch, Jonathan Morgan
October 30:
- Present: Siko Bouterse, Jessie Wild, Anasuya Sengupta, Frank Schulenburg, Sue Gardner, Adele Vrana, Asaf Bartov, Tilman Bayer (taking minutes), Jaime Anstee, Haitham Shammaa, Erik Moeller
- Participating remotely: Jonathan Morgan
Please keep in mind that these minutes are mostly a rough transcript of what was said at the meeting, rather than a source of authoritative information. Consider referring to the presentation slides, blog posts, press releases and other official material.
Agenda
Q1 Grantmaking Overview: 45 minutes
- Agenda & structure of the meeting (5 minutes)
- Overall Grantmaking progress (10 minutes)
- Annual Plans, Project & Event, Individual Engagement (30 minutes)
- (Break - 5 minutes)
Evaluation & Learning Strategy: 95 minutes
- High level overview (10 minutes)
- Internal Evaluation (10 minutes)
- Needs Assessment (15 minutes)
- Gaps (20 minutes)
- Learnings (20 minutes)
- (Break - 5 minutes)
- Current & Future plans (20 minutes)
Resources Needed and Questions: 30 minutes
Anasuya:
Welcome
This is our second review
The first one was mostly about structure, processes
Since then, worked with Program Evaluation team. This review is focused on Learning and Evaluation, and the work by our two teams.
Overview: Where Grantmaking is now
Established structures, processes
Gave clarity to movement about difference between the various grant programs. For different constituents of the Wikimedia movement, from individuals to groups to established organizations.
Built trust and understanding through site visits, online conversations, IRC, conferences, ...
As a result, saw some shifts in grantees' thinking
Grantmaking envelope this year: $8 million, largest chunk is FDC
In Q1, gave $1.2M in grants (14% of the annual envelope); 55% went to the Global South. This will change with the first round of FDC allocations.
Sue: the 21.3% in the US, that's mostly Education? yes
Movement roles, new ED, strategic planning... will all impact grantmaking
WMF is seen as a role model. Important for our own frameworks for learning, reporting, communication...
Katy on current round 1 FDC requests
Sue: growth refers to requests, not actually spent money? yes
(Katy:)
average of average increases = ~119%, but the weighted average this round is 39% (the sketch below illustrates how these two figures can diverge)
total Round 1 requests: $5.9m; total available for both rounds: $6m - but the FDC is mindful of that
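To make the gap between those two figures concrete, here is a minimal sketch with invented budget numbers (not the actual FDC data): small entities requesting large percentage jumps pull the simple mean of growth rates far above the movement-wide, dollar-weighted growth.

```python
# Hypothetical illustration: invented budgets, not the actual FDC requests.
prev = [10_000, 20_000, 2_000_000]   # last year's budgets per entity
req  = [30_000, 50_000, 2_400_000]   # this round's requests per entity

rates = [(r - p) / p for p, r in zip(prev, req)]
avg_of_avgs = sum(rates) / len(rates)             # simple mean of per-entity growth
weighted = (sum(req) - sum(prev)) / sum(prev)     # growth of the total envelope

print(f"average of averages: {avg_of_avgs:.0%}")  # ~123% with these numbers
print(f"weighted average:    {weighted:.0%}")     # ~22% with these numbers
```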
Top three things we are learning:
- Impact measurement: takes time, but we see improvement
- PE&D’s influence is appearing: indicators that track beyond outputs (e.g. participant numbers, partner numbers, event numbers) are appearing in reports
- Orgs are getting better, structure and reporting - better metrics
- Shift in culture towards transparency, e.g. reporting failures
- WMF/FDC staff are giving guidance, and entities are having intensive one-on-one discussions about which grantmaking stream may be right for them (FDC versus PEG).
- Conversations around levels of growth are also taking place.
- Not much engagement by non-chapter community members in community review; not even much cross-chapter review
- This issue still needs to be resolved.
Asaf:
median grant is ~$7k (avg. is ~$17k)
Outlier (largest): Wiki Education Foundation - unusual in that it's a WMF program being spun off to an organization that will have (short-term) staff
WLM overall funding (incl. via FDC allocations) is about a quarter million dollars
Erik: what is the bulk of WLM spending?
(Asaf:)
Prizes, travel to jury meetings etc., prize-giving events, exhibitions, some short-term contractors, some microgranting
Erik: is that number publicly tracked?
Asaf: excellent question!
Jaime: working on that, will be published - [1] (has been shared with organizers of WLM for confirmation; status is noted per chapter abbreviation)
(Asaf:) top 3 lessons learned:
1. trend towards content acquisition projects
e.g., besides WLM, the WMUA digitization project
Possible reasons:
- easier to show results for photographs? (fewer policy/syntax barriers..)
- existing strong framework by WLM, well promoted?
- Can we replicate the WLM model?
Sue: additional theory: Wikipedians are getting smarter about what they are good at (e.g. content liberation).
Content liberation is perhaps better suited to many Wikipedians (diplomacy with museums, sorting out licensing...) than e.g. broad outreach or public speeches.
(Asaf:) Reuse of donations was done well with the Bundesarchiv; images from the Czech Mediagrant are also rather well integrated on Wikipedia
Sarah: ...does this on a regular basis as a volunteer and thinks everyone is right in their theories :)
(Asaf:)
2. Even experienced people do not read our documentation and announcements
But personal conversation works well; we see changes in behavior and attitude afterwards. Sarah: +1!
(Asaf:)
3. COI is still a recurring challenge. Grantmaking had to point this out on several occasions; it's sometimes a cultural issue where COI is not well understood or perceived to be a problem
Anasuya: we try to explain that this can hurt organizations and people’s credibility in their own communities
Sue: I had lots of conversations about this - I don't think there is a good shared understanding. Some people say openly that they were motivated to become chapter board members by the hope for employment or consultancy. It comes from an innocent place, a lack of understanding
Asaf: Hope to have a session about this at WMCON in Berlin.
Siko:
21 proposals in round 2 last year, 8 proposals totaling ~$60k in round 1 this time
categories:
- tools
- offline outreach (e.g. content acquisition)
- online community organizing (e.g. adopt-a-user program)
- research
top three lessons:
1. (from midpoint reports:) non-monetary support is important, especially to individuals: planning/reporting/mentoring...
2. # of proposals hasn't increased much, but quality is slightly better. 1/3 were incubated in IdeaLab. Publicizing opportunities is still a challenge
3. participation not just by the usual suspects; editors want to be involved, e.g. 10 of 18 committee members are not chapter-affiliated
Erik: $150,000 = cap for this round? yes
Erik: how comfortable are we about not spending some of it?
Siko: very comfortable
Erik: community gives thumbs up / thumbs down or rating?
Siko: rating first, then shortlist, then due diligence process
Erik: impression that quality is trending up was based on these scores?
Siko: gut feeling so far, scoring will confirm
Erik: many technology related proposals (tools category)? yes
Erik: we should cooperate on impact assessment for these
(Siko:)
Travel and Participation grants: (formerly Participation support program)
new partner: WMCH (joining WMF and WMDE)
funding most requests that come in
13 in Q1
Lessons learned:
full analysis will be published in Q2 and based on that we’ll start to make changes
From survey: Room for improvement in page design, expansion
Erik: these numbers are tiny, how much staff time is going into this?
Siko, Winifred: very, very little
Erik: is there desire to scale this?
Siko: yes
Erik: total # includes all partners? yes
(Siko:)
program needs to be publicized better
Erik: under what conditions would we shut it down?
Anasuya: e.g. if in the survey only 10% said it's useful
Jessie clarifies that survey population = all who edited related pages on Meta
Erik: the reimbursement tracking systems used by Red Hat, SUSE... might be more scalable
Erik: let's discuss general technical solutions for grants tracking
Anasuya: so far just wiki-based
Erik: How far are we with Fluxx?
Asaf: waiting for questions to be resolved before implementation can begin
Value in still having grant pages on Meta, for community participation
Erik : will chapters be able to use Fluxx?
Asaf : We don't know yet, because we might have different procedures, and we still don't know enough about the technical capabilities of Fluxx.
Erik: might be especially suitable for these small transactions, also considering privacy
Red Hat and openSUSE use ticketing systems
Asaf: but they don't have a commitment to community review
Erik: not sure this is true
Anasuya: OK, that's worth looking into
Erik: so, worth thinking about tracking systems, perhaps integrated into MediaWiki - but not going as far as processing receipts
Sue: sounds to me like you are also talking about a conceptual modification: not submitting in advance...
Erik: Red Hat has a large international community; its Fedora ambassadors program recognizes community members as ambassadors and reimburses them. There are other open source communities with similar needs. We should look at communities that are most like us, even larger than us
Anasuya: and it's a perfect time to look into this
Sue: making requests in a timely way, manually, might be a burden on community members. Could imagine a program where you become trusted (until you're not ;-). Finance could play a role in vetting a random sample of expenses, or community (peer) review (?) (see the sketch below)
Erik: also advantage over topic-specific programs like WMDE's (books, travel..)
Anasuya: when scaling, need to be wary of forum shopping between the various programs (also those offered by chapters)
Sue: a little waste (from over-trusting) in a program that is vastly good might be preferable to the burdens of the present system
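A minimal, purely illustrative sketch of the trust-plus-spot-check model Sue describes above; the audit rate, names, and data shapes are all invented:

```python
import random

AUDIT_RATE = 0.2  # hypothetical: fraction of auto-approved requests spot-checked

def process(requests, trusted_users):
    """Auto-approve trusted requesters; flag a random sample for Finance review."""
    audits = []
    for req in requests:
        if req["user"] in trusted_users:
            req["status"] = "auto-approved"
            if random.random() < AUDIT_RATE:
                audits.append(req)  # post-hoc vetting instead of pre-approval
        else:
            req["status"] = "needs pre-approval"
    return audits

reqs = [{"user": "Alice", "amount": 120}, {"user": "Bob", "amount": 80}]
print(process(reqs, trusted_users={"Alice"}))
print(reqs)
```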
Erik: How do I as Wikipedian encounter this stuff, enter this world?
Sarah: I didn't know about all this, even after years, until Liam told me. Still found it intimidating...
Sue: Sarah's comment makes me think about gender, women might be more reluctant to apply because they might be more sensitive to being vetted or questioned in public
anything that pre-approves people is more likely to be used
Siko: my concern is more about the technical solutions, the human solutions are within our reach but tech is not
Erik: yes, that's why we should begin thinking about this now. Not an easy problem; start researching what the experience of Red Hat etc. has been.
We need to think about consolidation of existing programs as a precondition
Evaluation and Learning Strategy
Frank:
where we are: still at the beginning of building capacity around program design and program evaluation
Most grantees and program leaders don't yet fully understand that evaluation is an essential part of planning and executing a program, that it helps with improving your program over time
many are still struggling with basic things
concerns about burden on volunteers
positive signs:
- increased awareness
- eagerness and ability to learn, good participation in workshops
- infrastructure that we build (e.g. Wikimetrics) is being embraced
Imagine a world...
... where people, programs and organizations supported by grants create significant impact for the movement
-Frank
opportunity for support and mentorship by WMF; we'd like to empower people to get better over time
working together with other teams in grantmaking:
joint events, FDC support, infrastructure building (Jonathan), grants (Jessie)
Jessie on evaluation work both by PE team (Jaime, Sarah) and grantmaking learning evaluation team (Haitham, Jonathan, Jessie):
We spend a lot of time supporting the grantmaking program officers in executing their programs in an effective way.
Tons of surveys
Grants evaluation is not only important for us, but also for the whole movement.
Key questions: (More details at our portal page https://meta.wikimedia.org/wiki/Grantmaking_and_Programs/Learning_%26_Evaluation_portal/About )
- org effectiveness (see also later): what are characteristics of successful Wikimedia orgs, what are good offline models
- systems for learning and evaluation: find good reporting requirements, facilitate cross-sharing and learning
- Community mapping and baselines
- Internal processes: effectiveness/efficiency, fairness, transparency of our own programs
Jaime on relation of Grantmaking L&E team and Program Evaluation & Design team:
3 main principles:
- self-evaluation
- foster collaboration
- Build capacity
did assessments:
- document review
- RL events (Budapest workshop etc)
- surveys
- online forums: Programs portal etc.
Document review:
- "landscape report" of reports and chapters reports - what kind of information was shared, what are gaps
- organizational database
RL events:
- Pilot workshop in Budapest: encourage move towards impact reporting
- Pilot group meetup at Wikimania: Wikimetrics, community involvement
- Grantee day post-Wikimania: communications
Surveys:
- Participant surveys for the events
- program leaders survey (69 respondents): 31 chapters (up to 8 people from one chapter) + other orgs, individuals. Broad interest in various topics
Online forums (in order of decreasing participant #):
Facebook, Meta portal, mailing list, Google Hangout (1 per month), IRC Office hour
Jessie on Gaps that emerged from the assessments:
overview:
1. org development and design
2. knowledge about eval and design
3. communication (share info)
4. information resources (get data to do eval)
This is where a lot of our teams' work lies
1.:
- needs assessment and strategic planning
- governance and leadership (e.g. COI issue discussed earlier)
- Lack of time, capacity.
- clarity on movement roles
Jessie: Example: I was talking recently with one Wikipedian in Brazil, and he told me "we spent a lot of time and energy in establishing a chapter, but now I think that the chapter might not be the right thing to do. We haven't thought about what type of form would best support the type of work we want to do in Brazil."
Budgets and growth: (slide 47)
average increase of 39%, but this is distributed unevenly across our movement.
uneven distribution - 3 chapters receiving 58%
Anasuya: 19 of current chapters don't receive funds from us at all - big jump from there to grants/FDC receiving chapters
Erik: even bigger if separate chapter revenues are taken into account
Jessie: this is something we have been considering during FDC proposal evaluation: looking at the overall budget, not only the funds they receive from WMF.
Jaime on eval/program design gaps:
- unclear theory of change
- lack of clear goals and objectives
- inconsistent or no examination of outputs and outcomes
- lack of prioritization
program leaders eager to move forward, but need support
Frank: some don't track the user names of people who participated in their programs, which means they can't track whether participants actually edited Wikipedia after attending a workshop or edit-a-thon (see the sketch below)
We've seen edit-a-thons with one participant supported with chapter money, with a subsequent decision to increase the number of edit-a-thons
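As a sketch of what such tracking could look like: the public MediaWiki API can list a user's contributions after a given date, so an attendee list plus an event date is enough to check follow-up editing. The participant names and event date here are hypothetical:

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def edits_since(username, since_iso):
    """Count a user's edits since the event date (capped at 50 here)."""
    params = {
        "action": "query",
        "list": "usercontribs",
        "ucuser": username,
        "ucend": since_iso,  # contributions list newest-first; ucend is the older bound
        "uclimit": 50,
        "format": "json",
    }
    r = requests.get(API, params=params, headers={"User-Agent": "eval-sketch/0.1"})
    return len(r.json()["query"]["usercontribs"])

# Hypothetical attendee list from a workshop sign-in sheet
for name in ["ExampleUser1", "ExampleUser2"]:
    print(name, edits_since(name, "2013-10-01T00:00:00Z"))
```

Wikimetrics packages exactly this kind of cohort question; the point stands that without the user names, no such query is possible.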
Jaime:
not all goals suitable for all activities (e.g. for editathons - retention rather than amount of content added?)
Jessie on gaps in information resources:
number 1 need: tools
Erik: Wikimetrics should cover a lot now. What else are people asking for?
(Jessie:) end-to-end stuff, e.g. pulling all user names of participants for an event, visualizations
Jaime: also surveys. We use Qualtrics.
Erik: open source alternatives?
Jaime: for the moment, Qualtrics is best for us
Erik: multilingual support?
Jessie, Tilman: This appears to be a strength of Qualtrics currently
Sarah: requests from GLAM community
Erik: Analytics team is currently triaging requirements for pageviews API
Sue: proposes to cut some of the remaining slides to make room for discussion
--break--
(slide 63)
Jonathan:
idea was to create a space to bring community together for conversation about eval
provide good information material (research, learning patterns), tools, forum to ask questions,
still under construction
learning patterns: based on design patterns, easy to create/apply, safe way to discuss failure, not a reporting requirement
Sarah: Jonathan and I worked on this with Heather, positive community response
Erik: this looks cool, did you consider doing a pattern workshop?
Jessie: want to get translations integrated
Anasuya: also, want to connect this with grants reports, see it as an empowering process across the movement.
Discussion
Sue:
Talk about where we are overall
so back months ago, when we started talking about program eval, we had assumptions, theories: e.g. that in the movement there is not a lot of knowledge about eval
from the slides, I see this being confirmed
discussed being a grantmaking vs a partnering organization
if we were a standard grantmaking org (like the Sloan Foundation), we would turn down a lot of the current requests
but we are different. We're not a grantmaking org, we can't turn down applicants. They are our partners, our colleagues.
Discussion points: (summary from comments by Sue, Erik and the teams, including the followup meeting on October 30)
- Don’t want to turn people into good grantwriters. We need consistent formats to have some level of comparison across the movement, but need to support them to tell their stories better. Our core principle is around self-definition and self-assessment: people have their own understanding of what their goals are, in their own contexts, and measure against those. At the same time, we are looking at movement-wide goals that we can analyze for impact; we’re doing a lot of translation back and forth. And we should have explicit counterpressure internally to make sure we’re not becoming too complex and difficult (Jessie’s responsibility for the Grants team, Sarah’s for P&E)
- Let’s make sure to have people talking about what works and what doesn’t in as many simple ways as possible. If that’s bringing them together in a room to talk about it (like Wiki Loves Monuments), let’s do it.
- The more we can decouple information and process architecture, the better we can meet grantees' needs. But right now, there's no consistency in chapter reporting for the movement, which is why a portfolio approach is hard. Some groups are integrating our formats into their own reporting, so they're not duplicating. The more we can do it like this - so that it's not an empty exercise - the better. Including blog posts, learning patterns...
- We also need to support building capacity across the movement - including our own internally - to help us evaluate and learn better. We all want to be effective, and there’s a lot of work we all need to do to understand what impact means at different levels: movement, WMF grants programs, organisations, individual volunteers... Just asking the right questions - What do we want to do? Why? What are the needs of our community? What skills and capacities do we have, what do we need? Who do we ask? - will help make the evaluation process more empowering, which is what we’re trying to do.
- Shouldn’t overburden the grantees for information, and must streamline our requests. At the same time, remember we are stewarding considerable resources - in Anasuya and Katy’s previous worlds, $60,000 would have been a huge amount - and everyone owes it to the movement to be accountable for the impact.
- Need to be able to ask the tough questions about long-term impact of the current activities. 2-5 years from now, shouldn’t still be writing large checks for activities that don’t achieve much. Funding opportunities are limitless, but not our resources.
- Our grantmaking is diverse: from individuals (getting relatively small amounts of money), to groups, to chapters/orgs. 19 chapters not getting funding from us, 9 in the <$100k range ($13k is the median grant size), 9 orgs in the $100-500k range, 3 orgs at >$500k. Not one size fits all.
- Need to look at orgs from a multi-tiered perspective: those getting a lot of $$$ need to show impact and focus; those that are growing need to be thoughtful about modes of growth; those that are just starting out are fine as they are or may ask for support. Also need room for experimentation. All of this needs data, orgs sharing data transparently, relationship building, positive models from each other, pointing out inefficiencies when they exist... How to become more effective orgs?