Wiki database design

From Meta, a Wikimedia project coordination wiki

What data do we have?

  • article versions (lots of data)
  • uploaded files (lots of data)
  • user accounts (little data)

The interesting thing is that neither article versions nor uploaded files are ever modified: new versions are uploaded, while the old ones stay in the archive.
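
Since the store is append-only, a version table needs no UPDATE path at all. A minimal sketch in Python with SQLite standing in for MySQL (the table and column names are illustrative, not an actual schema):

```python
import sqlite3

# Append-only version storage: every edit INSERTs a new row,
# old versions stay in place as the archive.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE versions (
        version_id INTEGER PRIMARY KEY AUTOINCREMENT,
        title      TEXT NOT NULL,
        body       TEXT NOT NULL,
        created    TEXT NOT NULL DEFAULT (datetime('now'))
    )
""")

def save_new_version(title, body):
    """Saving never updates a row: each edit appends a new version."""
    con.execute("INSERT INTO versions (title, body) VALUES (?, ?)",
                (title, body))
    con.commit()

save_new_version("Main Page", "first draft")
save_new_version("Main Page", "second draft")
# Both versions coexist; nothing was overwritten.
count = con.execute("SELECT COUNT(*) FROM versions WHERE title = ?",
                    ("Main Page",)).fetchone()[0]
print(count)  # 2
```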

Read operations

These are record-level:

  • get newest version of an article
  • get contents of newest version of some uploaded file
  • get specific version of an article
  • get history of an article
  • get diff between two versions of an article
  • get user preferences
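
The record-level reads above reduce to simple indexed lookups over the append-only version table. A sketch of two of them, again using an illustrative SQLite schema:

```python
import sqlite3

# Illustrative append-only "versions" table with a few rows.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE versions "
            "(version_id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
con.executemany("INSERT INTO versions VALUES (?, ?, ?)",
                [(1, "Foo", "v1"), (2, "Foo", "v2"), (3, "Bar", "v1")])

def newest_version(title):
    # The highest version_id wins, since versions are only ever appended.
    row = con.execute("SELECT body FROM versions WHERE title = ? "
                      "ORDER BY version_id DESC LIMIT 1", (title,)).fetchone()
    return row[0]

def history(title):
    # Full edit history of one article, oldest first.
    return [r[0] for r in con.execute(
        "SELECT version_id FROM versions WHERE title = ? ORDER BY version_id",
        (title,))]

print(newest_version("Foo"))  # v2
print(history("Foo"))         # [1, 2]
```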

These are not:

  • status of links
  • recentchanges
  • watchlist
  • search
  • list of articles that are longest/shortest/newest/orphaned/etc.
  • general statistics

While record-level read-only operations are going to be quite fast in any sane implementation, the others will need some special structures for reasonable performance.

Write operations

  • uploading a new version of an article
  • uploading a file
  • modifying user preferences
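
Of these, only preference changes overwrite data in place; new article versions and uploaded files are pure inserts. A sketch of the preferences path, with an illustrative schema and preference format:

```python
import sqlite3

# User data is small and mutable, unlike versions and files.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user (user_id INTEGER PRIMARY KEY, prefs TEXT)")
con.execute("INSERT INTO user VALUES (1, 'skin=classic')")

def set_prefs(user_id, prefs):
    # Preferences are the one write that updates a row in place.
    con.execute("UPDATE user SET prefs = ? WHERE user_id = ?",
                (prefs, user_id))
    con.commit()

set_prefs(1, "skin=nostalgia")
print(con.execute("SELECT prefs FROM user WHERE user_id = 1").fetchone()[0])
# skin=nostalgia
```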

Acceleration of RecentChanges

sane

A special SQL table, just for that.

insane

We could store RecentChanges as an in-memory table (recreated on database restart), using a zero-locking queue.
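
A sketch of the zero-locking queue idea in Python, using collections.deque (whose append is atomic in CPython, so readers and the writer need no explicit lock); the names and size limit are illustrative:

```python
from collections import deque

RC_SIZE = 5000
# Bounded in-memory queue: oldest entries fall off automatically.
# On database restart it would be rebuilt from the version table.
recent_changes = deque(maxlen=RC_SIZE)

def record_change(title, user, timestamp):
    # deque.append is atomic in CPython: no lock needed.
    recent_changes.append((timestamp, title, user))

def latest_changes(n=50):
    # Newest first, as RecentChanges displays them.
    return list(recent_changes)[-n:][::-1]

record_change("Main Page", "alice", 100)
record_change("Sandbox", "bob", 101)
print(latest_changes(1))  # [(101, 'Sandbox', 'bob')]
```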

Acceleration of Watchlist

sane

An index on creation time for versions.
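
With such an index, a watchlist query becomes a range scan over only the versions newer than the user's last visit. An illustrative SQLite sketch (schema and names are assumptions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE versions "
            "(version_id INTEGER PRIMARY KEY, title TEXT, created INTEGER)")
# The index on creation time is what makes the watchlist scan cheap.
con.execute("CREATE INDEX idx_versions_created ON versions (created)")
con.executemany("INSERT INTO versions VALUES (?, ?, ?)",
                [(1, "Foo", 100), (2, "Bar", 200), (3, "Foo", 300)])

def watchlist(watched_titles, last_visit):
    # Only versions created after the user's last visit are scanned.
    marks = ",".join("?" * len(watched_titles))
    return [r[0] for r in con.execute(
        f"SELECT DISTINCT title FROM versions "
        f"WHERE created > ? AND title IN ({marks})",
        (last_visit, *watched_titles))]

print(sorted(watchlist(["Foo", "Bar"], 150)))  # ['Bar', 'Foo']
```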

insane 1

One insane-RC-style queue for each user.

insane 2

Store the last requested version of each watchlist in a cache. Use some smart updating when the watchlist is requested again within a short time. This will improve performance mostly for people who use their watchlist frequently.
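
A sketch of this cache, with an illustrative time-to-live and a stubbed-out expensive query:

```python
import time

CACHE_TTL = 30  # seconds; illustrative "short time" window
_cache = {}     # user_id -> (timestamp, cached result)

def get_watchlist(user_id, compute, now=None):
    now = time.time() if now is None else now
    hit = _cache.get(user_id)
    if hit and now - hit[0] < CACHE_TTL:
        return hit[1]              # cheap path for frequent users
    result = compute(user_id)      # the expensive database query
    _cache[user_id] = (now, result)
    return result

calls = []
def compute(uid):
    # Stand-in for the real watchlist query; records each invocation.
    calls.append(uid)
    return ["Foo", "Bar"]

get_watchlist(1, compute, now=0)
get_watchlist(1, compute, now=10)  # within TTL: served from cache
print(len(calls))  # 1
```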

Acceleration of Search

sane 1

Use a MySQL FULLTEXT index.
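
A FULLTEXT index can't be demonstrated here without a MySQL server, so this sketch uses SQLite's FTS5 module as a stand-in for the same idea (the table name and schema are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# FTS5 virtual table: analogous to a MySQL FULLTEXT index on the body.
con.execute("CREATE VIRTUAL TABLE article_text USING fts5(title, body)")
con.executemany("INSERT INTO article_text VALUES (?, ?)",
                [("Wiki", "collaborative editing software"),
                 ("Database", "structured storage of data")])

# Full-text MATCH query against the indexed article bodies.
hits = [r[0] for r in con.execute(
    "SELECT title FROM article_text WHERE article_text MATCH ?",
    ("editing",))]
print(hits)  # ['Wiki']
```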

sane 2

Use an external spider. No locking, but the data is going to be less up-to-date and we will have less control over the results.

sane 3

Use Google. Even less work for us, and even less up-to-dateness and control.

insane

Hack MySQL replication (with its binary log of changes) so that searches are handled by a separate MySQL process. Almost real-time (delay measured in seconds) and no locking involved.

Acceleration of link tables

Two SQL tables:
  • links
  • brokenlinks
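
A sketch of how the two tables might be maintained: saving a page inserts its outgoing links into one table or the other, and creating a page promotes its incoming brokenlinks to links (all names and signatures are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE links       (from_title TEXT, to_title TEXT);
    CREATE TABLE brokenlinks (from_title TEXT, to_title TEXT);
""")

def record_link(src, dst, page_exists):
    # On page save: each outgoing link goes into one of the two tables.
    table = "links" if page_exists else "brokenlinks"
    con.execute(f"INSERT INTO {table} VALUES (?, ?)", (src, dst))

def page_created(title):
    # Links that pointed at the missing page become real links.
    con.execute("INSERT INTO links "
                "SELECT * FROM brokenlinks WHERE to_title = ?", (title,))
    con.execute("DELETE FROM brokenlinks WHERE to_title = ?", (title,))

record_link("Main Page", "Sandbox", page_exists=False)
page_created("Sandbox")
print(con.execute("SELECT COUNT(*) FROM links").fetchone()[0])        # 1
print(con.execute("SELECT COUNT(*) FROM brokenlinks").fetchone()[0])  # 0
```

Lists of wanted pages fall out of brokenlinks for free, and "status of links" becomes a join against these tables instead of a scan of article text.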