Read this summary page for a description of the report, data highlights across three core outcome areas, and lessons learned across program implementations. Use the tabs in the navbar to find detailed sections that dive deeper into the data.
A GLAM content release partnership is an agreement in which a gallery, library, archive, or museum (also called GLAM) works with Wikimedia community members to upload media to Wikimedia Commons. This report presents data on seventeen months of GLAM partnerships. It uses metrics shared across a broad spectrum of programs so that goals and outcomes can be discussed across different program types. It can be used for:
designing and planning future GLAM partnership agreements,
exploring program effectiveness, and
celebrating GLAM successes.
The authors recommend caution when drawing conclusions about individual programs based solely on the data presented here, as there is insufficient information about each partnership's unique context and goals.
This report includes data from programs that existed between August 2013 and January 2015. Data were received or mined for a total of 46 GLAM partnership agreement implementations held by eight organizational and individual program leaders during that time. We use “GLAM implementations” here to mean time-bound implementations of partnership agreements. For example, if a partnership has been ongoing for two years, but had two agreements between August 2013 and January 2015 under which new media were uploaded to Commons, we count that here as two implementations.
The GLAM collections uploaded to Wikimedia Commons, and the pageviews of those uploads, highlight the vast contributions of cultural institutions and their importance to Wikimedia projects. The content release partnerships represented in this report included over 57,000 media uploads. While the size of the content collections released varied widely, implementations averaged over 1,200 uploads each, and costs ranged from $0.33 USD to $38.12 USD per media file uploaded.
Of the 46 content release partnerships covered in the report, 14 were related to the 229 GLAM categories for which cumulative pageview data are available. Since they began content releases, the 14 collections with data represented in this report have received over 5.1 billion pageviews across all Wikimedia projects for the entirety of their collections hosted on Wikimedia Commons. This is roughly equivalent to one month of pageviews for the Japanese Wikipedia, the second-largest Wikipedia.
Of the content contributed through the partnership instances included in this report, 15% has already been used in over 255,867 Wikimedia articles.
Of the 12 GLAM implementations that reported on sharing and learning, 11 created knowledge to share about how to run a successful GLAM program, whether through blogs, printed materials, or an experienced program leader who is willing to help support a similar program.
We still need more data to draw stronger conclusions about inputs (e.g. dollars or volunteer hours), outputs (e.g. media uploaded), and outcomes (e.g. retention). This also means that we are limited in determining the following:
Costs. Key costs for GLAM implementations may include staff time and digitization equipment, and other costs may exist depending on local context. In addition to dollar costs, we need more data on the valuable work of volunteers.
Associating costs with outcomes. Without more data, the report cannot draw clear conclusions about how inputs influence outputs and outcomes.
Other measures and outcomes. This report is limited to certain measures and outcomes of GLAM implementations. Other outcomes likely exist that are not captured here, for example: growing volunteers' abilities to run programs or increasing awareness of Wikimedia among GLAM institutions.
Use the report data for planning future GLAM partnership agreements. The report summarizes each input, output, and outcome metric. For planning, program leaders can look to contexts similar to their own and use the range of data to know what is generally a high or low number for each metric. Tables are provided with data from each GLAM implementation; these offer readers access to local context information and contacts.
Increase shared learning about GLAM through program tracking. Having more data and more measures is key to a deeper understanding of programs and their outcomes. This includes online data (e.g. articles created) as well as offline data (e.g. budget, volunteer hours, motivation). To successfully translate data into shared learning we need both more data and more ways to share and interpret it. Two ways to increase data are to build capacity around data collection and to improve data collection tools. Two ways to increase the sharing and interpretation of data are to connect program leaders for peer-to-peer exchange and support, and to improve navigation of existing resources.
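As one illustration of the kind of online tracking data mentioned above, the sketch below queries monthly pageviews for a single article through the public Wikimedia REST pageviews API. This is a minimal, hypothetical example rather than a tool used for this report: the project, article title, and date range are placeholders, the Python `requests` library is assumed, and the API only serves data from mid-2015 onward.

```python
# Minimal sketch: collect monthly pageviews for one article via the
# Wikimedia REST pageviews API (data available from mid-2015 onward).
# The project, article, and dates below are placeholders for illustration.
import requests

API = "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article"

def monthly_pageviews(project, article, start, end):
    """Return (month, views) pairs for one article on one project.

    `article` must use underscores instead of spaces (e.g. "Night_Watch").
    `start` and `end` use the YYYYMMDDHH format expected by the API.
    """
    url = f"{API}/{project}/all-access/user/{article}/monthly/{start}/{end}"
    resp = requests.get(url, headers={"User-Agent": "glam-tracking-sketch/0.1"})
    resp.raise_for_status()
    items = resp.json()["items"]
    return [(item["timestamp"][:6], item["views"]) for item in items]

if __name__ == "__main__":
    # Hypothetical example: three months of pageviews for one article.
    for month, views in monthly_pageviews(
        "en.wikipedia.org", "Rijksmuseum", "2015070100", "2015100100"
    ):
        print(month, views)
```

A program leader could run a script like this across the articles that use a collection's media and combine the result with offline data (budget, volunteer hours) in a shared tracking sheet.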
The data collected here are insufficient to see a comprehensive picture of GLAM programs. If you can, please contribute to the talk page with ideas about how to capture better information on GLAM programs and how we can improve future evaluations. For example:
What ideas do you have about other ways we should evaluate GLAM programs?
What questions around program impact or evaluation do you have after reading this?
What measures are missing from this report?
What strategies for measurement and tracking do you use?