Wiki database design
What data do we have?
- article versions (a lot of data)
- uploaded files (a lot of data)
- user accounts (little data)
The interesting thing is that neither article versions nor uploaded files are ever modified - new versions are simply added, while the old ones stay in the archive.
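This append-only property means the storage layer never needs UPDATEs for content. A minimal sketch, using SQLite and invented table/column names (the real schema is not specified here):

```python
import sqlite3

# Hypothetical append-only versions table; names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE article_version (
        id      INTEGER PRIMARY KEY,
        title   TEXT NOT NULL,
        body    TEXT NOT NULL,
        created INTEGER NOT NULL    -- unix timestamp
    )
""")

def save_version(title, body, timestamp):
    # New versions are only ever inserted; old rows are never touched.
    conn.execute(
        "INSERT INTO article_version (title, body, created) VALUES (?, ?, ?)",
        (title, body, timestamp))

save_version("Python", "v1 text", 100)
save_version("Python", "v2 text", 200)
count = conn.execute("SELECT COUNT(*) FROM article_version").fetchone()[0]
print(count)  # 2 -- both versions kept, nothing overwritten
```

Since rows are immutable, caching and replication become much simpler: a cached old version can never go stale.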
The following read-only operations are record-level:
- get newest version of an article
- get contents of newest version of some uploaded file
- get specific version of an article
- get history of an article
- get diff between two versions of the article
- get user preferences
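The record-level reads above reduce to single indexed lookups. A sketch against an assumed versions table (schema and names are illustrative, not the real design):

```python
import sqlite3

# Assumed minimal schema, populated with two versions of one article.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE article_version "
    "(id INTEGER PRIMARY KEY, title TEXT, body TEXT, created INTEGER)")
conn.executemany(
    "INSERT INTO article_version (title, body, created) VALUES (?, ?, ?)",
    [("Wiki", "old text", 100), ("Wiki", "new text", 200)])

def newest_version(title):
    # "Get newest version of an article": one ORDER BY + LIMIT lookup.
    return conn.execute(
        "SELECT body FROM article_version "
        "WHERE title = ? ORDER BY created DESC LIMIT 1",
        (title,)).fetchone()[0]

def history(title):
    # "Get history of an article": all version timestamps, newest first.
    return [r[0] for r in conn.execute(
        "SELECT created FROM article_version "
        "WHERE title = ? ORDER BY created DESC", (title,))]

print(newest_version("Wiki"))  # new text
print(history("Wiki"))         # [200, 100]
```

With an index on (title, created), each of these touches only a handful of rows regardless of database size.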
These are not record-level:
- status of links
- list of articles that are longest/shortest/newest/orphaned/etc.
- general statistics
While record-level read-only operations are going to be quite fast in any sane implementation, the others will need some special structures for reasonable performance.
The write operations are:
- uploading a new version of an article
- uploading a file
- modifying user preferences
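The aggregate queries (longest/shortest/newest articles, statistics) would otherwise require full scans. One possible "special structure" is a precomputed summary table kept in sync on each write; a sketch under assumed names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE article_version
        (id INTEGER PRIMARY KEY, title TEXT, body TEXT, created INTEGER);
    -- Hypothetical summary table: one row per article, updated on write,
    -- so "longest/newest" queries never scan the versions table.
    CREATE TABLE article_stats
        (title TEXT PRIMARY KEY, length INTEGER, last_edit INTEGER);
""")

def upload_version(title, body, timestamp):
    conn.execute(
        "INSERT INTO article_version (title, body, created) VALUES (?, ?, ?)",
        (title, body, timestamp))
    # Keep the aggregate table current as part of the (rare) write path.
    conn.execute(
        "INSERT INTO article_stats (title, length, last_edit) VALUES (?, ?, ?) "
        "ON CONFLICT(title) DO UPDATE SET length = excluded.length, "
        "last_edit = excluded.last_edit",
        (title, len(body), timestamp))

upload_version("A", "short", 100)
upload_version("B", "a much longer text", 200)
longest = conn.execute(
    "SELECT title FROM article_stats ORDER BY length DESC LIMIT 1").fetchone()[0]
print(longest)  # B
```

This trades a little extra work on the infrequent write operations for cheap non-record-level reads.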
Acceleration of RecentChanges
- A special SQL table, just for that.
- Store RecentChanges as an in-memory table (recreated on database restart), using a zero-locking queue.
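A sketch of the in-memory variant (all names are illustrative): a bounded queue that a single writer appends to, rebuilt from the database after a restart. In CPython, `deque.append` is atomic under the GIL, which is one way to get the "zero-locking" property.

```python
from collections import deque

class RecentChanges:
    """Hypothetical in-memory RecentChanges buffer, not the real design."""

    def __init__(self, size):
        # Bounded: old entries fall off the far end automatically.
        self._q = deque(maxlen=size)

    def record(self, title, user, timestamp):
        # Single atomic append; no explicit lock needed in CPython.
        self._q.append((timestamp, title, user))

    def latest(self, n):
        # Snapshot the queue and return the n newest entries, newest first.
        return list(self._q)[-n:][::-1]

rc = RecentChanges(size=3)
for i in range(5):
    rc.record("Article%d" % i, "alice", i)
print(rc.latest(2))  # only the newest entries survive the maxlen bound
```

On restart the buffer could be refilled with one query over the versions table ordered by creation time.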
Acceleration of Watchlist
- An index on creation time for versions.
- One in-memory RC-style queue for each user.
- Store the last requested version of each watchlist in a cache. Use some smart updating when the watchlist is requested again within a short time. This will improve performance mostly for people who use their watchlist frequently.
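The caching idea can be sketched as: remember when each user last asked, and on the next request only query for changes newer than that, prepending them to the cached result. Everything below (names, the fetch callback) is an assumption for illustration:

```python
class WatchlistCache:
    """Hypothetical per-user watchlist cache, not the real design."""

    def __init__(self, fetch_changes):
        # fetch_changes(titles, since) -> changes newer than `since`.
        self._fetch = fetch_changes
        self._cache = {}  # user -> (last_checked, cached_result)

    def get(self, user, titles, now):
        last, cached = self._cache.get(user, (0, []))
        # Only ask the database for edits since the previous request --
        # cheap for people who check their watchlist frequently.
        fresh = self._fetch(titles, last)
        result = fresh + cached
        self._cache[user] = (now, result)
        return result

# Fake change log standing in for the versions table.
log = [("Python", 100), ("Wiki", 150), ("Python", 250)]
fetcher = lambda titles, since: [c for c in log if c[0] in titles and c[1] > since]

wl = WatchlistCache(fetcher)
print(len(wl.get("alice", {"Python", "Wiki"}, now=300)))  # 3 on first call
```

A second request shortly afterwards would fetch only the (usually empty) set of newer changes, which is exactly where the win for frequent users comes from.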
Acceleration of Search
- Use a MySQL FULLTEXT index.
- Use an external spider. No locking, but the data is going to be less up-to-date and we will have less control over the results.
- Use Google. Even less work for us, and even less up-to-dateness and control.
- Hack MySQL replication (with the binary log of changes) to allow searches to be handled by a separate MySQL process. Almost real-time (delay measured in seconds) and no locking involved.
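For the first option, the database maintains an inverted index itself. A sketch using SQLite's FTS5 module as a stand-in for MySQL FULLTEXT (same idea, different engine; table names are illustrative):

```python
import sqlite3

# FTS5 builds and maintains the inverted index transparently on insert,
# analogous to a MySQL FULLTEXT index on the article text.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE article_text USING fts5(title, body)")
conn.executemany(
    "INSERT INTO article_text VALUES (?, ?)",
    [("Python", "a programming language"),
     ("Wiki", "a collaboratively edited site")])

hits = conn.execute(
    "SELECT title FROM article_text WHERE article_text MATCH ?",
    ("programming",)).fetchall()
print(hits)  # [('Python',)]
```

The trade-off named above applies: this keeps search fully up-to-date, but every write now also updates the index inside the same database server.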
2 SQL tables: