On this page, you will find answers to: What are the key findings to take away from the report?
What are the next steps in understanding the impact of GLAM implementations?
How does the program deliver against stated goals?[1]
The 46 GLAM implementations[2] reported here were held in 10 different countries, with 41 different institutional partners. Some of these implementations were part of new partnerships and some were ongoing. If increasing awareness of the Wikimedia project continues to be a central goal of GLAM partnerships, it will be important to develop and capture measures of awareness.
The GLAM implementations included in this report contributed over 57,000 media files to Commons, and 15% of them have already been used in over 250,000 Wikimedia articles. Some of these media would not have been uploaded without the advocacy and expertise brought by GLAM partnerships.
How this information can apply to program planning
Use the information to help plan for program inputs and outputs.
Keep in mind the participants of a GLAM implementation and design the implementation for that audience. For example, GLAM implementations that aim to train new editors to upload may need a different model than implementations that aim to encourage ongoing GLAM partners to contribute more. Also think about the objectives and expectations of your GLAM partner. Does the agreement serve the objectives of both partners? Do all parties understand how many resources they will need to dedicate? Keeping your partners in mind and using the data in this report can help you set expectations and design a strong GLAM implementation.
The data from different GLAM implementations can help find the right combination of participants for your contribution goals.
This table represents the middle 50% of GLAM implementations for each metric:[3]
Metric                  Lower     Higher
Implementation length   5 days    164 days
Total Participants      1         19
Media uploaded          83        966
Unique Media Used       15        137
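The "Lower" and "Higher" bounds above are the quartiles of each metric, so the table brackets the middle 50% of implementations. As a sketch, the sample figures below are hypothetical, but the calculation shows how such bounds could be derived from a list of per-implementation values:

```python
# Hypothetical media-upload counts for a set of GLAM implementations.
uploads = [40, 83, 120, 250, 400, 700, 966, 1500]

def quartile_bounds(values):
    """Return the 25th and 75th percentiles (linear interpolation)."""
    s = sorted(values)
    def pct(p):
        k = (len(s) - 1) * p
        lo, hi = int(k), min(int(k) + 1, len(s) - 1)
        return s[lo] + (s[hi] - s[lo]) * (k - lo)
    return pct(0.25), pct(0.75)

lower, upper = quartile_bounds(uploads)
# The middle 50% of these hypothetical implementations uploaded
# between `lower` and `upper` media files.
```

Half of the implementations in the sample fall between the two returned values, which is exactly what the table's Lower/Higher columns express for the reported data.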
Reach out and connect to other GLAM leaders.
One of the many benefits of connecting with fellow program leaders is that you can find leaders who ran similar GLAM implementations in similar contexts and ask them about the resources needed and the outcomes they expected. When using budgets presented here for planning purposes, try to find an implementation in a location with an economy similar to your area's, and consider reaching out to a successful program leader to discuss potential resource needs (including possible budget or donated resources). Alternatively, you can find an implementation based on the same model in a different location and talk to the program leader about the costs before translating those expenses into local prices.
Use the distribution statistics as guardrails against costly plans that may not produce scaled results.
Information on cost per media uploaded and cost per unique media used can also be a helpful reference for comparing the cost of your implementation with how much content it produces. As with overall budget information, these figures should be taken in the context of each implementation. If planning a new program, you might expect your costs to fall within the middle 50% of costs per output reported. Programs nearer the bottom of that range are more efficient, creating more outcomes with fewer inputs. We hope, as we continue to evaluate programs and feed the results back into program design, that we can learn more from the programs achieving the most impact with the fewest resources.
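A quick way to use the cost-per-output guidance during planning is to divide your planned budget by your expected output and check whether the result lands inside the reported middle 50%. The bounds and figures below are hypothetical placeholders, not numbers from this report:

```python
# Hypothetical middle-50% bounds for cost per media uploaded (USD);
# substitute the actual figures reported for comparable implementations.
COST_PER_UPLOAD_RANGE = (0.50, 5.00)

def within_middle_50(budget_usd, media_uploaded, bounds=COST_PER_UPLOAD_RANGE):
    """Check whether a plan's cost per upload falls in the reported middle 50%."""
    cost = budget_usd / media_uploaded
    low, high = bounds
    return low <= cost <= high

within_middle_50(2000, 800)   # 2.50 USD per upload: inside the sample bounds
within_middle_50(2000, 100)   # 20 USD per upload: well above them
```

A plan that falls above the upper bound is not necessarily wrong, but it is a signal to revisit the budget or output expectations, or to contact a program leader whose implementation achieved similar goals more cheaply.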
GLAM partnerships and specific GLAM implementations differ in length, subject area and scope, yet they are organized successfully within and across many Wikimedian communities. They are replicable and adaptable to many different contexts in which institutions have content to share on Wikimedia projects.
If you ran a program that delivered excellently against its goals, please speak up! Consider writing a blog post or how-to guide, or contributing to the talk page to share your ideas on why your program was so successful.
If your program surmounted a particularly tricky problem in program design, consider writing a learning pattern!
If you ran a program and want to report key metrics to the Learning and Evaluation team, our collector is always open. Visit our reporting page to learn about the reporting form's contents and to find the link to voluntary reporting.
Connecting with other program leaders, evaluators, and designers
If you are considering running a new program or updating an existing one, consider reaching out to experienced program leaders who have organized a similar program. You can find leaders by program in the appendix, on our Facebook group, or during virtual hangouts.
Join the mailing list for regular updates about program evaluation, tools, etc.
Questions about Evaluation and Impact
What, if any, ideas do you have about other ways we should evaluate GLAM implementations or programs in general?
What questions around program impact or evaluation do you have after reading the reports?
What further data investigations would you like to see (or do!) for this set of GLAM implementations?
Questions about Measures
What, if any, measures have you used that are missing from these reports?
What, if any, tools/bots/programs/strategies do you use to measure the outcomes of your GLAM implementations?
↑Here we examine together all GLAM implementations that reported, while recognizing that they do not all share program goals. We encourage organizers of each implementation to consider the data in terms of what matters most to their priority goals.
↑We use “GLAM implementations” here to mean time-bound implementations of partnership agreements. For example, if a partnership has been ongoing for two years, but had two agreements between August 2013 and January 2015 under which new media were uploaded to Commons, we count that here as two implementations.
↑The lower and higher numbers are the lower and upper quartiles (the 25th and 75th percentiles), respectively.