On this page, you will find answers to: How deep do the data go? What are the program's goals?
Response rates, data quality, report limitations 
Data were obtained on 46 GLAM content release partnerships that occurred between September 2013 and January 2015. The metrics in these reports are meant to give a general sense of the landscape of GLAM partnerships. The report represents only a handful of possible measures for understanding these implementations and their results. Further, the content release itself is only one type of activity that typically takes place in GLAM partnerships. The data in this report were both reported by program leaders and mined from Wikimedia project pages.
GLAM data for this report were collected from three sources:
directly from GLAM program leaders;
from publicly available information on organizer websites and on-wiki reports; and
by mining Wikimedia project pages using tools such as Catscan, Quarry, and Wikimetrics.
The data obtained included: number of participants, implementation start and end times, number of media files uploaded, number of unique media used, and ratings of image quality (Featured, Quality, and Valued). Where start or end times were not reported, we estimated the dates from the first or last day a media file was uploaded to an implementation category. Only a minority of implementations reported key inputs such as budget, staff hours, and volunteer hours, and this information cannot be mined. Thus, while we explore these inputs, we cannot draw many strong conclusions about program scale or how these inputs affect program success.
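The date-estimation fallback described above can be sketched as follows. This is a minimal illustration, not the actual mining pipeline; the upload dates here are hypothetical stand-ins for timestamps mined from an implementation's Commons category.

```python
from datetime import date

# Hypothetical upload dates mined from one implementation's category.
upload_dates = [
    date(2014, 3, 5),
    date(2014, 3, 12),
    date(2014, 6, 30),
]

# Where a program leader did not report start or end times, we estimate
# them from the first and last days on which media files were uploaded.
estimated_start = min(upload_dates)
estimated_end = max(upload_dates)
```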
In addition, the data for GLAM implementations are not normally distributed. This non-normality, due partly to small sample size and partly to natural variation, rules out comparisons of means and other analyses that require normal distributions. Instead, we present the median and ranges of metrics and use the term average to refer to the median average, since the median is a more statistically robust average than the arithmetic mean. To give a complete picture of the distribution of data, we include the means and standard deviations as references.
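The robustness point can be illustrated with a small, hypothetical skewed sample: one very large implementation pulls the arithmetic mean far above the typical value, while the median stays representative.

```python
import statistics

# Hypothetical, right-skewed counts of media files uploaded per
# implementation: most are modest, one is very large.
uploads = [40, 55, 60, 75, 90, 120, 5000]

median = statistics.median(uploads)  # 75 — unaffected by the outlier
mean = statistics.mean(uploads)      # ≈ 777 — pulled up by the 5000
```

Reporting the median (with ranges) therefore describes the typical implementation better than the mean for data like these.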
To see the summary statistics of data reported and mined, including counts, sums, medians, arithmetic means, and standard deviations, see the appendix.
Program leaders most often selected “Increasing Awareness of Wikimedia Projects” and “Increasing Contributions” as priority goals; these were selected for 89% of GLAM implementations.
To learn about which program goals were important to GLAM program leaders, we asked them to share their priority goals for each GLAM implementation. Four GLAM organizers reported priority goals for nine implementations. The number of priority goals selected ranged from 4 to 10; the average number selected was nine. The table below presents the priority goals selected by GLAM program leaders, listed by frequency of selection.
↑We use “GLAM implementations” here to mean time-bound implementations of partnership agreements. For example, if a partnership has been ongoing for two years, but had two agreements between August 2013 and January 2015 under which new media were uploaded to Commons, we count that here as two implementations.
↑Many thanks to the builders of these tools, especially: Magnus Manske, who created Catscan; YuviPanda, who created Quarry; and WMF Analytics and data researchers who created Wikimetrics.
↑We provided a list of 19 goals, identified as common priority goals during the 2013 Budapest training, with the option to write in a 20th. Program leaders could select as few or as many as they thought appropriate.