Talk:Funds Dissemination Committee/Additional Information and Analysis/Interviews/Sue Gardner
These are very thorough notes on Sue's interview. I think they are excellent fodder for comments and discussion. Please add comments, questions, issues below. --Barry Newstead (WMF) (talk) 01:12, 9 May 2012 (UTC)
Question about Arb Com 
I am happy to have people discuss anything from my notes here (and I have posed a couple of questions elsewhere on these pages), but I do have a specific question for anyone who is knowledgeable about Arb Com.
One model for leadership in the Wikimedia movement is the Wikimedia Foundation Board of Trustees, and there are a number of characteristics the FDC might usefully copy from the Board of Trustees. For example: a mixed model of elected/appointed members, staggered expiry of member terms to support continuity, and so forth.
Another model is the Arb Com, which I talked a little about with Bridgespan the other day. But, I am only familiar with the English Arb Com. Although I have met some of their members, I don't know very much about the workings of, e.g., the Dutch Arb Com or other Arb Coms. If there are people reading this page who are familiar with how the Arb Com (any Arb Com, any language) works, I'd appreciate it if you could help me answer any or all of the following:
- How many members does the Arb Com have (minimum/maximum?), what is the duration of terms, and how is membership determined?
- Is there a chair or coordinating member, and if so what's the process for appointing/selecting that person?
- How was the inaugural Arb Com membership determined, and what changes were introduced over time as the Arb Com became established?
- How is it working? What are some pros and cons of the current framework for how members are selected, how they leave, how they're replaced?
- Are there any lessons learned from the establishment of Arb Coms that might usefully be applied to the establishment of the FDC?
I do hope that some non-enwp Arbitration Committees will also comment here, because I'm quite interested to know more about the variations between projects, but I'll try to answer for English Wikipedia.
- As of our last election cycle, Arbcom has 15 seats, and all seats are currently filled; however, we have three members who are inactive at present, and two others who are on-and-off for personal reasons. For the 3 elections prior to that, we had 18 members. For the last 3 annual elections, members have been appointed for 2 years. The committee's overall size is intended to accommodate the fact that most arbitrators will be inactive for a portion of the year (sickness, work commitments, holidays, family matters, etc.), and some may resign, but the committee's workload is fairly constant.
- In theory, we have a co-ordinating member, who is chosen based on willingness rather than any formal process, and is usually someone with at least a year on the Committee. In reality, the role is not well fleshed out. To some extent, its primary function is to prod others to vote, respond to other requests, etc, and to take a lead role in certain situations.
- As best I can tell (it was before I started editing), Arbcom came to be as a creation of Jimmy Wales, who was (quite reasonably) seeking to divest responsibilities for just about all dispute resolution on the site, including bans, desysops, and similar issues. He appointed the first committee and encouraged the community to set up elections for future members.
- How it's working is a tricky question. When originally created, almost all of the work of Arbcom was addressing the behaviour of individual editors. In the past few years, we've dealt with a lot of issues that include religion (the Scientology case even got a mention from Stephen Colbert), science and pseudoscience, and ethnic disputes (Ireland, Eastern Europe, Israel/Palestine, Senkaku Islands, etc.) At their core these all revolve around behavioural problems, but there are usually big-picture issues involved in most Arbcom decisions. The community has pretty well assumed the role of handling the behaviour of individual editors, and holds its own ban/block discussions.
What isn't as obvious are the other responsibilities of the Committee, including the selection and appointment of Checkusers and Oversighters, the functioning of the Audit Subcommittee (sort of a localized Ombudsman dealing with Checkuser/Oversight concerns), decisions to desysop, and being the "court of last appeal" for blocks and bans. The latter is the unseen part of the iceberg when it comes to Arbcom workload, as we get 20 or more requests a month, all of which need to be researched, and there is often extensive communication involved when it's perceived that perhaps a blocked/banned editor can become a productive volunteer. I will note that many other projects have developed their own ways to handle many of these issues, so that they are not part of the Arbcom's portfolio. However, as these functions were all historically within Jimmy's scope, they were devolved to Arbcom, and we as a community have not successfully developed alternate means of addressing them.
As to people coming and going, we start off the year with a full complement of arbitrators. Should any resign during the course of the year, they are not replaced; their seat is filled at the next year's election. I don't think this would be workable for the FDC, if for no other reason than that I'm not persuaded that election is the best method for filling seats on the FDC. Each year, we have fewer candidates for the open seats; my personal belief is that the community has become more aware of the scope of work being done by Arbcom members, while also being aware of the extent to which committee members are viewed "under the microscope", and thus is less inclined to look at Arbcom membership as a privileged position, and more like one with a big workload, lots of scrutiny, and little public reward. I've sometimes said it's like being a regular featured article writer, in that a lot of work goes in and lots of people have an opinion about it. The difference is that the FA writer usually gets a little bronze star at the end of the process.
- I think the most important lesson to be learned from ENWP's Arbcom experience is to resist scope creep, because those little "add-on" tasks can sometimes turn out to require more time and attention than the core functions, and can distract from the core functions.
- This does help, Risker, thanks. I've asked Philippe to ping some non-en ArbCom members, to see if we can get their experiences here too. It doesn't make sense for the FDC to slavishly copy the ArbComs, because the scope/goals/deliverables for Arb Coms are different from the FDC's. But I think where there are existing models that work pretty well, and have been time-tested and iterated, it makes sense for the FDC to benefit from what they've learned. So thank you for this info -- it'll help. Sue Gardner (talk) 20:42, 12 May 2012 (UTC)
Not seeing the correlation between this funds dissemination model and the promulgation of the movement mission/strategy 
I think I may be missing something here. This proposal appears to suggest that the FDC should not be assessing funding requests by considering their component elements or providing feedback to the requesting entity about whether or not individual elements are within the movement mission, are likely to have significant impact, are well-considered, etc.; it specifically excludes line-item reviews. While I appreciate that once the Board has approved an entity (let's be honest, right now we mean chapters), there should be an assumption of trust, at the same time there's no evidence that the Board has really done that kind of legwork with existing grants to the chapters. As well, even with the best intentions it is not difficult for a chapter to develop some mission creep over time. What seems to be missing here is the importance and value of the FDC providing feedback to the chapters that do not receive 100% of their request about *what aspects* of their request may have resulted in reduced funding. Now, it may be that the FDC decides to allocate XX% of its available funds to groups that meet Criteria A, so those in Criteria B will have to share a smaller pie; in that case, the response may simply be that all groups meeting Criteria B are affected proportionately. However, if there are specific elements of a chapter's annual plan that seem to be outside of mission/are not likely to have significant impact/don't appear to be well-considered, why would the FDC not specifically point to those elements and ask for them to be re-examined (or alternately paid for out of the chapter's other funds)? I think this feedback function is pretty important in order to meet the expectations for an FDC "whose sole purpose will be to make recommendations to the Wikimedia Foundation for funding activities and initiatives in support of the mission goals of the Wikimedia movement." Risker (talk) 04:44, 11 May 2012 (UTC)
- I second Risker's point and spoke to this issue in my interview with Bridgespan. While I think that the FDC's actual decision should be an allocation amount, a significant part of their value added should be their critical analysis of the plan. I think their analysis, in the form of the staff report on the plan and the summary points from their deliberations, should be a public document that can form part of our collective body of knowledge. This should include posing tough questions about strategy alignment, potential for impact, or likelihood of successful implementation of specific programs. Over time, the FDC (with the support of the staff around them) will be exposed to most of the programming work in the movement and will see what is working and what isn't. Their analysis should help to push work in all corners of the movement toward higher-impact programs and help us avoid the inertia that comes with more established organizations that want to keep on doing the same things, regardless of impact, at larger and larger scale. It will also assist innovators and folks working on strategic priorities in places with less established chapters or other groups in gaining a foothold in the funding process. --Barry Newstead (WMF) (talk) 16:03, 11 May 2012 (UTC)
- Yeah, I agree that we will need a framework for the FDC to use to evaluate funding requests. My expectation is that it will be some version of the five priorities expressed in the strategy plan: infrastructure, participation, quality, reach and innovation, possibly with specific reference to outcomes as reflected in the targets -- total number of people reached by the projects, number of articles, quality of articles/information offered, number of active editors, and healthy diversity in the editing community (specifically, women and Global South contributors). I am assuming that through this process, Bridgespan will design for us a draft scorecard or framework for the FDC to use in its evaluation. We won't get the scorecard right in the first year, which is why I plan to build in lots of opportunity for iteration, via review processes and via feedback from the FDC advisory group. Even so, I think it would be good if Bridgespan could produce a draft scorecard early, so that we can post it and have it reviewed. (Maybe we could enlist volunteers to help us do some test reviewing of sample projects, or the Wikimedia Foundation budget, or something, in order to work out some of the kinks before the FDC puts the scorecard into practice, for real.) That's what I'm thinking right now. Thanks Sue Gardner (talk) 20:39, 12 May 2012 (UTC)
- In response to the discussion here, I've just edited the section of the proposed recommendation on how the FDC will make its decisions. Here's the current version. Previously, there was a fairly generic list of criteria for how the FDC would make its decisions, which pointed towards the strategic priorities, as well as incorporating some of them (e.g., innovation). Reading the version that had been there, it was not obvious to me how grant requests would be evaluated. If I were an FDC member, I wouldn't have known what I was supposed to be doing. So, I have changed the list so that it now contains the strategic priorities, as well as the targets, rather than just referencing them. I think this is a lot more practical and understandable -- it's clear to me now, reading it, how grant requests would be evaluated and decisions made. Anybody reading this: please take a look and say what you think. Thanks Sue Gardner (talk) 19:14, 14 May 2012 (UTC)