Research:Data
There is a great deal of publicly available, openly licensed data about Wikimedia projects. This page is intended to help community members, developers, and researchers who are interested in analyzing raw data learn what data and infrastructure are available.
If you have any questions, you might find the answer in the Frequently Asked Questions about Data. If you still have questions, you can email your question to the Analytics mailing list (more information).
If you wish to browse pre-computed metrics and dashboards, see statistics.
If this publicly available data isn't sufficient, you can look at the page on private data access to see what non-public data exists and how you can gain access.
If you wish to donate or document any additional data sources, you can use the Wikimedia organization on DataHub.
See also inspirational example uses.
Also consider searching for datasets at Zenodo, Figshare, Dimensions.ai, Google Dataset Search, Academic Torrents, or Hugging Face (see also the curated "Wikimedia Datasets" list on Hugging Face).
Quick glance
By access method
Dumps of all WMF projects for backup, offline use, research, etc.
- Wiki content, revisions, metadata, and page-to-page and outside links
- XML and SQL format
- once/twice a month
- large file sizes
- The dumps.wikimedia.org domain also hosts other data
- The MediaWiki API provides direct, high-level access to the data contained in MediaWiki databases over the web.
- Meta info about the wiki and logged-in user, properties of pages (revisions, content, etc.) and lists of pages based on criteria
- JSON, XML, and PHP's native serialization format
- The Wikimedia REST API provides page content in various formats.
- The Wikimedia Analytics API provides pageviews and aggregate edit stats.
The Toolforge hosting environment allows you to connect to shared server resources and query a copy of the Wikimedia projects' content databases.
- Acts as a standard web server hosting web-based tools
- Command-line tools
- Account required
PAWS is a Jupyter Notebook environment within Toolforge that allows e.g. querying database replicas and APIs for analysis.
Quarry is a public web interface allowing SQL queries to database replicas.
Raw pageviews, unique device estimates, mediacounts, etc.
- Delimited, usually: Project, (Page title,) Count
- Aggregated hourly or daily
- pageviews complete – mediacounts – unique devices
Reports based on data dumps and server log files.
- Unique visits, page views, active editors and more
- Intermediate CSV files available
- Graphical presentation
DBpedia extracts structured data from Wikipedia. It allows users to run complex queries and link Wikipedia data to other data sets.
- RDF, N-Triples, SPARQL endpoint, Linked Data
- Billions of triples of info in a consistent ontology
A collection of various Wikimedia-related datasets.
- Smaller (usually one-time) surveys/studies
- DBpedia-Live and others
- Figshare (datasets tagged 'wikipedia')
Differential privacy homepage
A collection of differentially-private datasets, released daily, weekly, or monthly.
- pageview data
- editor/edit data
- centralnotice data
- search data
By data domain
The table below is a quick reference of data sources organized by data domain. For a more detailed overview of Wikimedia data domains and how to access data in each domain, use the links in the table or see Research:Data introduction.
Data dumps
WMF releases data dumps of Wikipedia, Wikidata, and all WMF projects on a regular basis, as well as dumps of other Wikimedia-related data such as search indices and short URL mappings.
Content
XML/SQL dumps
- Text of current and/or all revisions of all pages, in XML format (schema)
- Metadata for current and/or all revisions of all pages, in XML format (schema)
- Most database tables as SQL files
- Page-to-page link lists (pagelinks, categorylinks, imagelinks, templatelinks tables)
- Lists of pages with links outside of the project (externallinks, iwlinks, langlinks tables)
- Media metadata (image, oldimage tables)
- Info about each page (page, page_props, page_restrictions tables)
- Titles of all pages in the main namespace, i.e. all articles (*-all-titles-in-ns0.gz)
- List of all pages that are redirects and their targets (redirect table)
- Log data, including blocks, protection, deletion, uploads (logging table)
- Misc bits (interwiki, site_stats, user_groups tables)
- Stub-prefixed dumps for some projects, which contain only header info for pages and revisions, without the actual content
See a more comprehensive list of what is available for download.
Other dumps
Dumps.wikimedia.org offers various other database dumps and datasets, including
- Adds/changes dumps (these include no moves or deletes, and have some other limitations) (documentation)
- Wikidata entity dumps – see Wikidata:Data access for more information
- Various analytics datasets (described below)
Download
You can download the latest dumps for the last year (dumps.wikimedia.org/enwiki/ for English Wikipedia, dumps.wikimedia.org/dewiki/ for German Wikipedia, etc.). Download mirrors offer an alternative to the download page.
Due to large file sizes, using a download tool is recommended.
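If you prefer to script the download, here is a minimal sketch (assuming Python with the requests package, and the usual dumps.wikimedia.org/<wiki>/latest/ file naming) that streams the file to disk in chunks instead of loading it into memory:

```python
import requests

# Stream a large dump file to disk in manageable chunks.
# The URL follows the usual dumps.wikimedia.org/<wiki>/latest/ naming pattern.
url = "https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2"

with requests.get(url, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open("enwiki-latest-pages-articles.xml.bz2", "wb") as out:
        for chunk in resp.iter_content(chunk_size=1024 * 1024):  # 1 MiB at a time
            out.write(chunk)
```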
There are also archives of older dumps; many can be found at the Internet Archive.
Data format
XML dumps are in the wrapper format described at Export format (schema). Files are compressed in gzip (.gz), bzip2/lbzip2 (.bz2) and .7z formats.
SQL dumps are provided as dumps of entire tables, using mysqldump.
Some older dumps exist in various formats.
How to and examples
See examples of importing dumps into a MySQL database with step-by-step instructions.
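If you only need to read the XML rather than import it into MySQL, a streaming parser avoids decompressing the whole file in memory. A minimal sketch using only the Python standard library (note that the export schema namespace version can differ between dumps):

```python
import bz2
import xml.etree.ElementTree as ET

# Namespace of the export schema; the version suffix may differ per dump.
NS = "{http://www.mediawiki.org/xml/export-0.10/}"

# Iterate over <page> elements without building the whole tree in memory.
with bz2.open("enwiki-latest-pages-articles.xml.bz2", "rb") as f:
    for _event, elem in ET.iterparse(f):
        if elem.tag == NS + "page":
            print(elem.findtext(NS + "title"))
            elem.clear()  # free memory used by already-processed pages
```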
Existing tools
Some tools are listed on the following pages, but they are mostly outdated and non-functional.
License
All text content is multi-licensed under the Creative Commons Attribution-ShareAlike 3.0 License (CC-BY-SA) and the GNU Free Documentation License (GFDL). Images and other files are available under different terms, as detailed on their description pages.
Support
- Mailing list: xmldatadumps-l
- Bug reports: Dumps Generation project in Phabricator
- Design work on Dumps 2.0 replacement: Dumps Rewrite project in Phabricator
MediaWiki API
The MediaWiki API provides direct, high-level access to the data contained in MediaWiki databases. Client programs can log in to a wiki, get data, and post changes automatically by making HTTP requests.
Content
- Meta information about the wiki and the logged-in user
- Properties of pages, including page revisions and content, external links, categories, templates, etc.
- Lists of pages that match certain criteria
- See the full list of available information
- See also additional information for Wikidata and Wikidata's SPARQL query endpoint
Endpoint
To query the database, you send an HTTP GET request to the desired endpoint (e.g. https://en.wikipedia.org/w/api.php for English Wikipedia), setting the action parameter to query and defining the query details in the URL.
How to and examples
- API Tutorial
- Example: https://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&format=xml&titles=Main%20Page fetches (action=query) the content (rvprop=content) of the most recent revision of Main Page (titles=Main%20Page) of English Wikipedia (https://en.wikipedia.org/w/api.php?) in XML format (format=xml). You can paste the URL into a browser to see the output.
- More examples
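The same query can also be issued from a script. A minimal sketch using Python and the requests package, asking for JSON (format=json) rather than XML because it is easier to handle programmatically:

```python
import requests

# Fetch the wikitext of the most recent revision of "Main Page" on English Wikipedia.
params = {
    "action": "query",
    "prop": "revisions",
    "rvprop": "content",
    "rvslots": "main",
    "titles": "Main Page",
    "format": "json",
    "formatversion": "2",
}
resp = requests.get("https://en.wikipedia.org/w/api.php", params=params, timeout=30)
resp.raise_for_status()

page = resp.json()["query"]["pages"][0]
wikitext = page["revisions"][0]["slots"]["main"]["content"]
print(wikitext[:200])
```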
Existing tools
To try out the API interactively on English Wikipedia, use the API Sandbox.
Access
To use the API, your application or client might need to log in.
Before you start, learn about the API etiquette.
Researchers may be granted special access rights on a case-by-case basis.
License
All text content is multi-licensed under the Creative Commons Attribution-ShareAlike 3.0 License (CC-BY-SA) and the GNU Free Documentation License (GFDL).
Support
- Frequently asked questions (FAQ)
- mediawiki-api mailing list
Toolforge and PAWS
Toolforge hosts command-line or web-based tools that can query copies of the databases. Copies are generally real-time, but replication lag sometimes occurs.
PAWS is a Jupyter Notebook environment within Toolforge that allows e.g. querying database replicas for analysis.
Content
Toolforge hosts copies of the databases of all Wikimedia projects, including Commons. You can use the contents of the databases under the Toolforge rules.
Data format
Explore the database schema of the MediaWiki software.
How to
Using Toolforge requires familiarity with the Unix/Linux command line, SSH keys, SQL/databases, and some programming.
To start using Toolforge, see this Quickstart guide.
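For orientation, a replica query from inside Toolforge or PAWS might look like the sketch below. It assumes the standard replica host naming, the ~/replica.my.cnf credentials file provided with your account, and the pymysql package:

```python
import pymysql

# Connect to the English Wikipedia replica (enwiki_p) using the
# credentials file that Toolforge/PAWS provides for each account.
conn = pymysql.connect(
    host="enwiki.analytics.db.svc.wikimedia.cloud",
    database="enwiki_p",
    read_default_file="~/replica.my.cnf",
    charset="utf8mb4",
)

with conn.cursor() as cur:
    # Count articles: main-namespace pages that are not redirects.
    cur.execute(
        "SELECT COUNT(*) FROM page WHERE page_namespace = 0 AND page_is_redirect = 0"
    )
    print(cur.fetchone()[0])

conn.close()
```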
Existing tools
See https://admin.toolforge.org/
Support
See wikitech:Help:Cloud Services introduction#Communication and support
Recent changes stream
See EventStreams to subscribe to Recent changes on all Wikimedia wikis. This broadcasts edits and other changes as they happen.
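A minimal sketch of subscribing to the recentchange stream, assuming Python with the requests and sseclient-py packages:

```python
import json

import requests
from sseclient import SSEClient  # from the sseclient-py package

# Subscribe to the recent-changes stream and print edits as they happen.
url = "https://stream.wikimedia.org/v2/stream/recentchange"
resp = requests.get(url, stream=True, headers={"Accept": "text/event-stream"})

for event in SSEClient(resp).events():
    if event.event == "message" and event.data:
        change = json.loads(event.data)
        print(change["wiki"], change["title"])
```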
Existing tools
See wikitech:Event Platform/EventStreams/Powered By
Analytics Datasets
Analytics Datasets on dumps.wikimedia.org offers stable and continuous datasets about web request statistics (including page views, mediacounts, unique devices), page revision history, data by country, and Wikidata QRanks.
Pageview statistics
Pageview statistics are one example. Each request for a page reaches one of Wikimedia's Varnish caching hosts; the project name and the title of the requested page are logged and aggregated hourly.
Files whose names start with "project" contain total hits per project per hour.
Per-country pageviews data is also available, sanitized for privacy reasons. See this announcement post (June 2023).
See the README for details on the format.
You can interactively browse the page view statistics at https://pageviews.toolforge.org. More documentation on the Pageviews Analysis tool is available.
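Per-article pageview counts can also be retrieved programmatically through the Wikimedia Analytics REST API; a minimal sketch using Python and requests (the article, wiki, and date range are arbitrary examples):

```python
import requests

# Daily pageviews for one article via the Wikimedia Analytics (AQS) REST API.
article = "Main_Page"
url = (
    "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
    f"en.wikipedia/all-access/user/{article}/daily/20240101/20240131"
)
resp = requests.get(url, headers={"User-Agent": "example-research-script"}, timeout=30)
resp.raise_for_status()

for item in resp.json()["items"]:
    print(item["timestamp"], item["views"])
```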
Clickstream data
The Wikipedia clickstream dataset contains counts of (referrer, resource) pairs extracted from the request logs of Wikipedia.
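The monthly files are tab-separated. A rough sketch of reading one (the file name is illustrative, and the column layout of previous page, current page, link type, and count is taken from the dataset's documentation):

```python
import gzip
from collections import Counter

# Tally the most common referrers for one article in a monthly clickstream file.
# Columns (tab-separated): prev, curr, type, count.
referrers = Counter()
with gzip.open("clickstream-enwiki-2024-01.tsv.gz", "rt", encoding="utf-8") as f:
    for line in f:
        prev, curr, link_type, count = line.rstrip("\n").split("\t")
        if curr == "London":
            referrers[prev] += int(count)

for prev, total in referrers.most_common(10):
    print(prev, total)
```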
Geoeditors
The public "Geoeditors" dataset contains information about the monthly number of active editors from a particular country on a particular Wikipedia language edition (bucketed and redacted for privacy reasons). For some earlier years, similar data is available at [1]/[2]; see also Edits by project and country of origin.
Misc datasets
Additional datasets (mostly irregular or discontinued ones) are published at https://analytics.wikimedia.org/datasets/. These include Caching research data and the AS Performance Report.
WikiStats
Wikistats is an informal but widely recognized name for a set of reports that provide monthly trend information for all Wikimedia projects and wikis.
Content
Many dashboards display trends about reading, contributing, and content, broken down by project, including:
- unique visitors
- page views (overall and mobile only)
- editor activity
- article count
Data format
Data is presented as charts with the option to download the underlying data.
Support
For more details on Wikistats, see wikitech:Data Platform/Systems/Wikistats 2.
DBpedia
DBpedia.org is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia and to link other datasets on the Web to Wikipedia data.
Content
The English version of the DBpedia knowledge base describes millions of things, and the majority of items are classified in a consistent ontology (persons, places, creative works like music albums, films and video games, organizations like companies and educational institutions, species, diseases, etc.). Localized versions of DBpedia in more than a hundred languages describe millions of things.
The data set also features:
- about 2 billion pieces of information (RDF triples)
- labels and abstracts for >10 million unique things in up to 111 different languages
- millions of links to images, links to external web pages, data links into external RDF datasets, links to Wikipedia categories, YAGO categories
- https://www.dbpedia.org/resources/ has download links for all the data sets, different formats and languages.
Data format
- RDF/XML
- Turtle
- N-Triples
- SPARQL endpoint
Access
- https://dbpedia.org/sparql is DBpedia's SPARQL endpoint.
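Queries can be sent to the endpoint with a plain HTTP request. A minimal sketch using Python and requests (the specific properties and category are only illustrative):

```python
import requests

# Ask DBpedia's SPARQL endpoint for ten European countries and their capitals.
query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dct: <http://purl.org/dc/terms/>
PREFIX dbc: <http://dbpedia.org/resource/Category:>
SELECT ?country ?capital WHERE {
  ?country dbo:capital ?capital .
  ?country dct:subject dbc:Countries_in_Europe .
}
LIMIT 10
"""
resp = requests.get(
    "https://dbpedia.org/sparql",
    params={"query": query, "format": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()

for row in resp.json()["results"]["bindings"]:
    print(row["country"]["value"], "->", row["capital"]["value"])
```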
License
- DBpedia data from version 3.4 on is licensed under the terms of the Creative Commons Attribution-ShareAlike 3.0 License and the GNU Free Documentation License.
Support
- Mailing list: DBpedia Discuss
- Forum: https://forum.dbpedia.org/
- DBpedia related publications, blog posts and projects
DataHub
The Wikimedia organization on the Open Knowledge Foundation's DataHub is a collection of datasets about Wikipedia and other projects run by the Wikimedia Foundation.
The DataHub repository is meant to become the place where all Wikimedia-related data sources are documented. The collection is open to contributions and researchers are encouraged to donate relevant datasets.
Wikivoyage also maintains data on its own DataHub:
- Hotels/restaurants/attractions data as CSV/OSM/OBF
- Tourism guide for offline use
Differential privacy
The WMF privacy engineering team uses differential privacy to release data that would otherwise be too sensitive to release. This data currently only includes pageview statistics; in the future, it will include statistics about editors, centralnotice impressions and views, search, and more.
Content
- Pageview data (currently only available as daily TSVs)
Data format
Differentially private data is currently available in static TSV form at https://analytics.wikimedia.org/published/datasets/. Work to make this data available via an API is ongoing.
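A rough sketch of loading one of the published TSVs with pandas (the file name below is a placeholder; check the directory listing for the actual paths):

```python
import pandas as pd

# NOTE: placeholder file name; browse
# https://analytics.wikimedia.org/published/datasets/ for the real paths.
df = pd.read_csv("pageviews_dp_example.tsv", sep="\t")
print(df.head())
```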
License
Differentially private data and code are available under a Creative Commons Zero license.