Grants talk:Learning & Evaluation/Survey drafts
Question to add
One question to add, taken from the FDC survey:
Q: On a scale of 1–4, to what extent do you agree with the following statements about the most recent round of the Grants process: (1–4 range “Strongly agree” to “Strongly disagree” + “Not enough information”)
- The most recent round was fair and transparent
- Expectations and deadlines were clearly and effectively communicated
- Requirements of participants were reasonable and achievable
- Participants had clear avenues to provide real-time feedback
Please comment on any high or low scores you gave (open text box)
Depending on whether the IdeaLab expands to accept ideas submitted for Project/Event grants as well, not just IEG, we should add some of the relevant questions here too:
Qa. Did you participate in the IdeaLab?
- Yes (1)
- No (2)
Qb. (if Yes to Qa) Was there anything in particular you liked OR disliked about your experience in the IdeaLab? (Open ended)
Qc. (if Yes to Qa) Did collaborating with others in the IdeaLab improve your idea?
- Yes - definitely!
- It might have
- No way
- No one collaborated on my idea
While this survey is innocent enough on its own and I'm sure that it will provide lots of groovy data to do interesting analysis on, I would be wary of loading applicants down with piles of paperwork and process that isn't directly related to their project. If applicants see that they have to go through a cavalcade of process to get these relatively small amounts of funding, then I think there's a danger that some worthy initiatives will decide that it's not worth their time to engage through this pathway. Craig Franklin (talk) 11:58, 17 December 2013 (UTC).
I can't agree more. It's a lot more work, while the purpose isn't clear enough. Is it worth the extra effort? I'm afraid not. Tell the applicants, in KISS fashion, what we expect in a request for funds provided by the WMF. Be open and transparent so everyone knows the do's and don'ts of pleasing the people involved in the fund-dissemination process, especially the top decision makers. Klaas|Z4␟V: 22:39, 17 December 2013 (UTC)
- Agree with you all that it is important to be careful about how much we ask, and how often. I'll just comment from an IEG perspective: in IEG's first round, running a post-notification survey was really valuable for seeing what did and didn't work in a pilot process, particularly because it was an opportunity to get feedback from people who didn't get funding, not just from those who became grantees. We may decide that after a couple of rounds we're hearing all the same things and don't need to run it each time; I think that would be reasonable. I certainly dislike extra steps myself and don't want to put others through them. On the other hand, those surveyed are asked to spend only about 10 minutes on it, and taking it is optional (they don't lose funding by not taking it, they just miss an opportunity to complain anonymously!). Still, I appreciate the reminder to keep an eye on this, and I'm sure the L&E folks will take it to heart too, in terms of keeping question counts low and flows simple. Siko (WMF) (talk) 02:11, 18 December 2013 (UTC)
IEG post-grant survey: I copy-edited it yesterday, without realising that the changes needed to be marked for transfer to another document. I found the layout and order unnecessarily complicated. It's partially simplified now.
It's a long and winding pathway. The shorter and simpler it is for applicants, generally the better the quality of the information. I can imagine that unsuccessful applicants might be disinclined to bother, so a note at the top saying that we value their input in particular, as well as a shorter survey, might encourage them.
- First, the numbering system could be vastly simplified (why not just Q1, Q2, Q3, etc.?).
- Second, could we avoid the (1), (2), etc. after each answer choice? Or are they merely artefacts of the automated system that will be used? If so, I hope they won't appear to respondents.
- Where each choice of answer leads down to a specific further question, could this be flagged on the spot? So:
Q2 Was this the first grant (of any type) you have submitted to the Wikimedia Foundation (WMF)?
- Yes __ (Go straight to Q3)
- No __ (Go straight to Q4)
At the moment it's a forest of words instructing respondents where to navigate; these instructions are too wordy and not in the right place.
It would be easier to copy-edit directly, as I've done with the first survey on the page. I didn't introduce the "Go straight to Qx" device there, but I believe it's necessary.
Above all, I think it's necessary to ask: "What is the most important information we need?" and "How can we get it with minimum effort by applicants?" That, by the way, is my approach to the whole grant-application procedure: maximising our ability to judge and give feedback while minimising applicants' work in applying. The two aims can in many places (although not all) be made to work together.
PS: Most of the question texts in all three surveys could be simplified and rationalised (especially given that most readers will be second-language speakers). I've done a little more, but the whole page needs a clean-up. Some questions could be conflated: where you ask two separate questions about good things and bad things, a typical technique is to merge them into one question, such as "Please specify the two best and the two worst things about X". Twenty-six questions is just daunting. Some of the navigational complexity and the number of questions could be reduced immediately; for example, in the IEG survey, after the choice table in Q16, why not a single Q17:
Was there anything in particular you disliked or found frustrating, or liked or found useful, about the community support you received?
- Free answer:
Instead of this complicated task:
Q17 If your answer to Q16 was Very dissatisfied, Dissatisfied, or Neutral – Was there anything in particular you disliked or found frustrating about the community support you received?
Q18 If your answer to Q16 was Satisfied or Very satisfied – Was there anything in particular you liked or found useful about the community support you received?
- Free answer:
- Hi Tony. Thanks for your copyedits. Many of them are significant improvements, and I'll incorporate them into the final versions. Thanks for your extensive feedback as well. I believe that many of your comments above refer to artifacts of the way Qualtrics, the system in which the survey was built, outputs a static draft of the survey text. Most of the branching logic you suggest here is already implemented in the survey (for example, your suggestion to skip 2.3 if the answer to 2.2 is "No"). That logic is the text in italics immediately below the question title; those words are not exposed to the user, and neither are the question numbers. Another example: a survey participant would never see both Q4.2 and Q4.3, because they are only prompted to answer 4.2 if their answer to 4.1 is "Very Dissatisfied", "Dissatisfied" or "Neutral" (they can only pick one option). The same goes for 4.3 and "Satisfied" or "Very satisfied".
- The longest possible path through the survey is 17 questions, but only 12 of those require a response (none of the free-text questions do). Similar surveys of comparable length that I've run in the past have been completed in an average of about 4-5 minutes, so I don't think this will be a burden on proposal submitters, as long as the question prompts are clearly worded and the logic is implemented correctly.
- I'm going to add some explanation to the top of this page today to make the format of these survey drafts clearer. Hopefully that will help us make the best use of your time in the future. Basically, if I'm asking for feedback from Committee/community members, and getting detailed and well-thought-out responses, I want to make sure I provide all the information you need to focus your feedback on the aspects of the survey that need it most. Thanks again, and sorry for the lack of clarity. Jmorgan (WMF) (talk) 19:27, 18 December 2013 (UTC)