Grants:Programs/Wikimedia Community Fund/Rapid Fund/Improving Source Reliability Literacy Among Nigerian Wikipedia Editors (ID: 22946664)/Final Report
Application type: Standard application
Part 1: Project and impact
1. Describe the implemented activities and results achieved. Additionally, share which approaches were most effective in supporting you to achieve the results. (required)
Between 1 May and 31 July 2025, we delivered a focused, fully virtual capacity-building program aimed at improving source-evaluation skills among Nigerian English-Wikipedia editors. The core of the project was a six-module curriculum delivered in fortnightly sessions, supported by practical exercises, mentorship, an edit-tracking campaign, a Community Resource Hub on Meta, and the production of a Source Evaluation Handbook (v1.0). The program produced strong measurable outcomes: participants edited ≈1,730 articles, made ≈9,240 total edits, added ≈113,000 words and ≈1,090 references, reached ≈2.54 million article views, and engaged 60 active editors. Post-training survey results show very high satisfaction and clear skill gains: 88% rated the program Excellent, ≈71% said they are Extremely confident in evaluating source reliability after the training, and ≈82% are Very likely to apply the learnings to future edits.
Implemented activities
- Six-module curriculum (virtual, fortnightly):
- Modules: (1) Understanding reliable vs unreliable sources; (2) Navigating the Nigerian media landscape; (3) Investigating & verifying sources; (4) Editing with integrity — source integration; (5) Critical thinking for source evaluation; (6) Building a personal source-evaluation toolkit.
- Each live session combined short presentations with instructor demonstrations of tools (Citer, the Wayback Machine, WHOIS, reverse-image search).
- Practical tool training and workflows:
- Demonstrated and practised a compact verification workflow (WHOIS → Wayback → reverse image → fact-check → archive) and the use of Citer (citer.toolforge.org) to create well-formed references quickly. Participants installed and used browser extensions and archived pages during exercises; a minimal sketch of the archiving step appears after this list.
- Mentorship and peer review:
- Participants were paired with mentors for targeted feedback. Mentors talked through borderline sourcing cases and provided brief follow-up reviews.
- Live edit campaign + tracking hashtag:
- A practical editing campaign ran during the project using the edit-summary hashtag #NSRL so we could monitor applied skills in real articles. Contributions were evaluated for quality as well as quantity.
- Materials and hub for sustainability:
- Produced the Source Evaluation Handbook (v1.0) — a practical handbook consolidating the six modules, checklists, tool guidance (including a Citer mini-guide), and an appendix of quick reference sheets.
- Launched a Resource Hub on Meta to host course materials, curated lists of Nigerian sources, and ongoing guidance.
- Communication, recordings and asynchronous supports:
- Set up a WhatsApp support channel for real-time help and group coordination, recorded sessions for asynchronous review, and issued digital certificates and incentives (prizes to three quality contributors) to encourage participation and quality.
- Monitoring, evaluation and feedback loop:
- Collected post-course survey responses and used edit/metrics monitoring to measure output (edits, references added, article views).
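To make the archiving step of the verification workflow concrete, the minimal Python sketch below checks for an existing Wayback Machine snapshot and requests a capture if none exists, using the public availability and Save Page Now endpoints. Participants performed these steps manually in the browser during training, so this is only an illustration under those assumptions; the source URL is a hypothetical placeholder.

```python
# Minimal sketch of the "archive" step in the verification workflow.
# Uses the Wayback Machine's public availability API and the simple
# Save Page Now GET endpoint; the example URL below is hypothetical.
import requests

def latest_snapshot(url: str) -> str | None:
    """Return the most recent Wayback snapshot URL for `url`, if one exists."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url},
        timeout=30,
    )
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

def request_capture(url: str) -> None:
    """Ask the Wayback Machine to capture `url` now (Save Page Now)."""
    requests.get(f"https://web.archive.org/save/{url}", timeout=120)

if __name__ == "__main__":
    source = "https://example-nigerian-paper.ng/story"  # hypothetical source URL
    snapshot = latest_snapshot(source)
    if snapshot:
        print("Already archived:", snapshot)
    else:
        print("No snapshot found; requesting capture...")
        request_capture(source)
```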
Which approaches were most effective
- Hands-on, applied learning (live editing plus breakout exercises) was the most effective approach.
- A large majority (≈82%) rated the practical exercises as highly effective, and participants’ edits show immediate application (≈1,090 references added). This approach moved participants from “knowing the rule” to “doing the edit correctly.”
- Tool-driven workflows (Citer + verification routine).
- Citer was the most frequently named tool participants planned to keep using; Wayback and WHOIS were also commonly cited. The specific workflow (WHOIS → Wayback → reverse-image → fact-check → archive → Citer) gave participants a quick, repeatable habit that improved the quality and defensibility of citations during live edits.
- Mentorship and quick peer review.
- Mentorship allowed rapid quality control of live edits and provided a soft escalation path for borderline sources. Mentored edits required fewer subsequent reversions and were easier to defend on talk pages; survey comments singled out mentor clarity and patience as a major strength.
- Documented resources + a persistent Meta hub & handbook.
- Participants repeatedly asked for downloadable materials and referenced the handbook as a resource to revisit; recordings and artefacts reduced friction for those who could not attend live sessions and supported long-term behaviour change beyond the grant window.
- Clear tracking mechanisms + incentives (hashtag #NSRL and small prizes for top three quality contributors).
- The hashtag made it straightforward to identify contributions for quality review and prize consideration, and the certificates and recognition increased active participation and timely submissions.
The combination of short, practical modules, a compact verification workflow, mentor support, and durable resources produced measurable improvements in source evaluation and editing quality. The project demonstrated that relatively small, well-designed virtual interventions can produce sustained editing improvements and help make Wikipedia’s coverage of Nigeria more verifiable and robust — directly supporting the Wikimedia movement’s goals of improving knowledge equity and editorial capacity.
2. Documentation of your impact. Please use space below to share links that help tell your story, impact, and evaluation. (required)
Share links to:
- Project page on Meta-Wiki or any other Wikimedia project
- Dashboards and tools that you used to track contributions
- Some photos or videos from your event. Remember to share access.
You can also share links to:
- Important social media posts
- Surveys and their results
- Infographics and sound files
- Examples of content edited on Wikimedia projects
The Nigerian Sources Reliability Literacy Project successfully engaged participants in exploring, understanding, and applying principles of source reliability on Wikimedia projects.
The Project page provides a full overview of objectives, activities, and timelines, while the Resources Hub offers curated learning materials, training modules, and reference guides for ongoing use by the community.
Participation and contributions were tracked via the Outreach Dashboard, which records the number of articles improved, edits made, and other quantitative measures of engagement. Practical examples of this work are visible through #NSRL-tagged edits on the Hashtags tool.
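As a rough illustration of the query behind that hashtag tracking, the sketch below tallies #NSRL-tagged edits per participant via the MediaWiki API’s list=usercontribs module, filtering edit summaries locally. This is an assumption-laden approximation, not how the Hashtags tool is implemented, and the usernames are hypothetical placeholders.

```python
# Hedged sketch: count #NSRL-tagged edits per participant by scanning
# edit summaries with the MediaWiki API (list=usercontribs).
# PARTICIPANTS holds hypothetical placeholder usernames.
import requests

API = "https://en.wikipedia.org/w/api.php"
PARTICIPANTS = ["ExampleEditor1", "ExampleEditor2"]  # hypothetical

def hashtag_edits(user: str, tag: str = "#NSRL") -> list[dict]:
    """Fetch a user's contributions, keeping edits whose summary carries `tag`."""
    edits, cont = [], {}
    while True:
        params = {
            "action": "query", "format": "json",
            "list": "usercontribs", "ucuser": user,
            "uclimit": "500", "ucprop": "title|timestamp|comment",
            **cont,
        }
        data = requests.get(API, params=params, timeout=30).json()
        edits += [c for c in data["query"]["usercontribs"]
                  if tag.lower() in c.get("comment", "").lower()]
        cont = data.get("continue", {})
        if not cont:
            return edits

for user in PARTICIPANTS:
    tagged = hashtag_edits(user)
    articles = {e["title"] for e in tagged}
    print(f"{user}: {len(tagged)} tagged edits across {len(articles)} pages")
```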
Because the project was conducted entirely online, the meeting screenshots archive documents our live sessions. Evaluation was supported by participant reflections and suggestions collected through the Feedback & Learning Experience Survey, which informed both our impact assessment and recommendations for future initiatives.
Additionally, share the materials and resources that you used in the implementation of your project. (required)
For example:
- Training materials and guides
- Presentations and slides
- Work processes and plans
- Any other materials your team has created or adapted and can be shared with others
- Meeting recordings: https://drive.google.com/drive/folders/1XHzmwGQus8ZneQIlWaTtSR5XyKTz0zof?usp=sharing
- Presentation slides: https://drive.google.com/drive/folders/1zOuVf_12-qAsBBpUD3STEYgIF7u-qUch?usp=sharing
- Source Evaluation Handbook: https://drive.google.com/file/d/1CZus-uqfOLNyXvjYWcrqFZGkJ8QNX-iA/view?usp=sharing
- Resource Hub: Nigerian Sources Reliability Literacy/Resources Hub
3. To what extent do you agree with the following statements regarding the work carried out with this Rapid Fund? You can choose “not applicable” if your work does not relate to these goals. Required. Select one option per question. (required)
| Statement | Response |
|---|---|
| A. Bring in participants from underrepresented groups | Strongly agree |
| B. Create a more inclusive and connected culture in our community | Strongly agree |
| C. Develop content about underrepresented topics/groups | Strongly agree |
| D. Develop content from underrepresented perspectives | Strongly agree |
| E. Encourage the retention of editors | Strongly agree |
| F. Encourage the retention of organizers | Strongly agree |
| G. Increase participants' feelings of belonging and connection to the movement | Strongly agree |
| H. Other (optional) | |
Part 2: Learning
4. In your application, you outlined some learning questions. What did you learn from these learning questions when you implemented your project? How do you hope to use these learnings in the future? (required)
As stated in our application: Through this project, we hope to gain deeper insights into the patterns of source reliability issues among Nigerian Wikipedia editors, including how and why unreliable sources are frequently cited. By engaging participants in structured training and mentorship, we aim to assess the effectiveness of different pedagogical approaches in improving source evaluation skills.
Key learning questions include:
- To what extent do Nigerian editors rely on local sources that are considered unreliable by Wikipedia standards?
- What common misconceptions do editors have about source reliability, and how can they be effectively addressed?
- Does targeted training significantly reduce the use of unreliable sources in Nigerian-related Wikipedia content?
- How does mentorship impact long-term improvements in sourcing practices among Nigerian editors?
- What are the best strategies for ensuring sustainable knowledge transfer within the Nigerian Wikimedia community on source reliability?
We will collect qualitative and quantitative data, including feedback from participants, content analysis of articles before and after training, and tracking improvements in sourcing practices over time. This will help refine future interventions and improve our strategies for strengthening Wikipedia’s credibility in Nigerian topics.
- Learning Question 1: To what extent do Nigerian editors rely on local sources that are considered unreliable by Wikipedia standards?
- What we learned: Our article reviews confirmed that a high proportion of Nigerian-topic pages included local news blogs, non-archived newspapers, and commercial press releases. These were often used because they were the most accessible or the only sources covering certain events, even if they didn’t meet WP:RS criteria.
- Learning Question 2: What common misconceptions do editors have about source reliability, and how can they be effectively addressed?
- What we learned: Many editors believed that recency equals reliability — assuming a source is trustworthy if it’s recent and widely circulated on social media. Others thought a source was acceptable if it was “famous locally,” regardless of editorial oversight. Training that used side-by-side comparisons of sources (reliable vs. unreliable) proved most effective in shifting these assumptions.
- Learning Question 3: Does targeted training significantly reduce the use of unreliable sources in Nigerian-related Wikipedia content?
- What we learned: Follow-up tracking on sandbox exercises and live article edits showed a marked improvement — a reduction in unreliable citations within trained editors’ contributions over the following month. However, sustained improvement depended heavily on post-training mentorship.
- Learning Question 4: How does mentorship impact long-term improvements in sourcing practices among Nigerian editors?
- What we learned: Editors who engaged regularly with mentors via the module presentations or WhatsApp groups retained the habits better and self-corrected more often.
- Learning Question 5: What are the best strategies for ensuring sustainable knowledge transfer within the Nigerian Wikimedia community on source reliability?
- What we learned: Combining a structured module library with peer-to-peer review sessions created a living resource and a culture of accountability. Training materials alone weren’t enough — the mix of practical exercises, RSN walk-throughs, and community recognition for good sourcing had the strongest impact.
- How we will use this in the future
- Continue: Maintain mentorship structures and improve our quick-reference checklists.
- Adapt: Build more localised case studies to address Nigeria-specific sourcing challenges.
- Stop: Relying solely on one-off workshops without follow-up.
- Expand: Integrate source-reliability review into edit-a-thons and thematic campaigns, not just special trainings.
5. Did anything unexpected or surprising happen when implementing your activities? This can include both positive and negative situations. What did you learn from those experiences? (required)
One of the most surprising and positive developments during the project was the speed and ease with which participants grasped the concept of source reliability. While we anticipated some level of improvement after training, the depth of understanding displayed by many editors—especially during practical editing sessions—was remarkable. This success was largely due to the hands-on mentorship model we employed, where mentors not only explained concepts but worked side-by-side with mentees on live article improvements. This approach made abstract policies, such as WP:RS and WP:V, more tangible and directly applicable.
From these experiences, we learned that practical, context-specific mentorship is significantly more effective than theory-heavy sessions alone. For future projects, we plan to expand the peer mentorship aspect and provide ongoing post-training “check-in” opportunities to reinforce these positive habits. The unexpected readiness of participants to adopt higher sourcing standards is an encouraging sign that similar approaches could be scaled to other Wikimedia communities with comparable challenges.
6. What is your plan to share your project learnings and results with other community members? If you have already done it, describe how. (required)
Our primary channel for sharing project learnings is the Resource Hub. This hub consolidates all developed modules, session recordings, mentorship notes, and best-practice documentation in a structured, easily navigable format.
We have already taken initial steps by announcing the completion of the hub in our local WhatsApp group and providing quick-start guides to make it easier for others to integrate the materials into their own activities.
Part 3: Metrics
7. Wikimedia Metrics results. (required)
In your application, you set some Wikimedia targets in numbers (Wikimedia metrics). In this section, you will describe the achieved results and provide links to the tools used.
| Wikimedia metric | Target | Result | Comments and tools used |
|---|---|---|---|
| Number of participants | 60 | 60 | Available via Outreach Dashboard |
| Number of editors | 30 | 35 | Available via Outreach Dashboard |
| Number of organizers | 3 | 3 | |
| Wikimedia project | Target | Result - Number of created pages | Result - Number of improved pages |
|---|---|---|---|
| Wikipedia | 300 | 38 | 838 |
| Wikimedia Commons | 20 | 1480 | 0 |
8. Other Metrics results.
In your proposal, you could also set Other Metrics targets. Please describe the achieved results and provide links to the tools used if you set Other Metrics in your application.
| Other Metrics name | Metrics Description | Target | Result | Tools and comments |
|---|---|---|---|---|
| Mentors | Train mentors who will continue guiding editors on sourcing practices after the project ends. | 5 | 5 | From mentorship program. |
| Follow-up discussions | Host follow-up discussions with key Nigerian Wikimedia communities/subgroups/networks to evaluate the long-term impact of the training. | 2 | 2 | We expect editors to keep reaching out whenever they are in doubt; the handbook points them to several venues for raising uncertainties, so follow-up discussions should continue beyond the two hosted during the project. |
9. Did you have any difficulties collecting data to measure your results? (required)
No
9.1. Please state what difficulties you had. How do you hope to overcome these challenges in the future? Do you have any recommendations for the Foundation to support you in addressing these challenges? (required)
N/A
Part 4: Financial reporting
10. Please state the total amount spent in your local currency. (required)
4,396,330.58 NGN
11. Please state the total amount spent in US dollars. (required)
2,924.07 USD
12. Report the funds spent in the currency of your fund. (required)
Upload the financial report
12.2. If you have not already done so in your financial spending report, please provide information on changes in the budget in relation to your original proposal. (optional)
13. Do you have any unspent funds from the Fund?
No
13.1. Please list the amount and currency you did not use and explain why.
N/A
13.2. What are you planning to do with the underspent funds?
N/A
13.3. Please provide details of how you hope to spend these funds.
N/A
14.1. Are you in compliance with the terms outlined in the fund agreement?
Yes
14.2. Are you in compliance with all applicable laws and regulations as outlined in the grant agreement?
Yes
14.3. Are you in compliance with provisions of the United States Internal Revenue Code (“Code”), and with relevant tax laws and regulations restricting the use of the Funds as outlined in the grant agreement? In summary, this is to confirm that the funds were used in alignment with the WMF mission and for charitable/nonprofit/educational purposes.
Yes
15. If you have additional recommendations or reflections that don’t fit into the above sections, please write them here. (optional)
Review notes
Review notes from Program Officer:
N/A
Applicant's response to the review feedback.
N/A