- Please visit my user page on Wikipedia for my views and perspectives on that particular project: http://en.wikipedia.org/wiki/User:Metaeducation
Suggestions for Wiki Technology
Many think that allowing any bozo on the internet to update web pages will never work in the long term. Human nature doesn't really seem to be "good" enough to sustain anarchy, especially since one persistent vandal could spoil a whole article (which happens all the time). Even well-meaning editors can argue forever—leading to the infamous "revert wars" where they overwrite what others have contributed in earnest. Plus when it comes to having threaded discussions, the wiki model has a horrible user interface...and it's tough to tell when people alter things you wrote after the fact to make it look like you said something you didn't.
I agree with all these points, and what we have is probably only viable in the short term. Hopefully we can learn something in the meantime, see the potential of the metaphor, and conceive better systems without getting bitter like Ted Nelson. He worked on an early hypertext system that failed, and said:
- "HTML is precisely what we were trying to PREVENT—ever-breaking links, links going outward only, quotes you can't follow to their origins, no version management, no rights management."
Still, he's right about that, and Wikipedia is following the pattern of being too simplistic to stand in the long term. I lean towards a belief in something more like Wikinfo, which allows multiple pages on a topic that reflect different points of view. This is the only desirable solution for pages (or sections, or sentences) that are controversial. Since this can become unwieldy, choosing a default perspective based on the direct (and implied) information stored in w:social networks will be the ultimate solution. But that's probably a distant future, as Wikipedia has a lot of catching up to do technologically with efforts like LiveJournal. (Please don't interpret this as dismissive of the people who write MediaWiki; they're volunteers, and they grapple with serious issues of scalability that I sure wouldn't want to be stuck with solving.)
All criticisms aside, here are some productive suggestions for making Wiki technologically better:
Automation of Disambiguation Pages
The lack of some kind of automatic mechanism for handling disambiguation pages strikes me as a major flaw. People are forced to manually mention at the top of a page that a term has other uses. If this were handled better, there wouldn't be any need to separate out things like Wiktionary from the Wikipedia. The point of the internet is that we are not limited by the boundaries or size of a book. This puts me squarely in the inclusionist camp...and I think a proper handling of disambiguation pages will be key to replacing domain names and countering the growing Google monopoly. We should give ownership of w:Web portals back to "the people", instead of search engines or the squatter who registered the site name first.
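To make the idea concrete, here is a minimal sketch of what an automatic disambiguation mechanism might look like. Everything here is invented for illustration: the `Page` record, its `base_term` field, and the generated wiki markup are assumptions, not anything MediaWiki actually provides.

```python
from dataclasses import dataclass

@dataclass
class Page:
    title: str          # e.g. "Mercury (planet)"
    base_term: str      # the shared ambiguous term, e.g. "Mercury"
    lead_sentence: str  # first sentence, reused as the summary line

def build_disambiguation(term, pages):
    """Collect every page sharing a base term and emit a disambiguation
    listing automatically, instead of hand-maintained 'other uses' notes."""
    matches = [p for p in pages if p.base_term == term]
    lines = [f"'''{term}''' may refer to:"]
    for p in sorted(matches, key=lambda p: p.title):
        lines.append(f"* [[{p.title}]]: {p.lead_sentence}")
    return "\n".join(lines)
```

The point is that the listing is derived from title metadata the wiki already has, so editors never maintain it by hand.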
Rich Formats Precluding Multiple Language Editions
Developing multiple language editions of Wikipedia, which don't even try to keep articles in a consistent format or use the same pictures, does not seem very forward-looking to me. Though I know machine translation isn't ready yet, it would be more worthwhile to have writers composing articles in a format that is more understandable to computers, just as XML is doing for other kinds of data. If a sentence (or part of a sentence) proves impenetrable to the machine, give it a hint and then leave that hint (invisible) in the article. The more hints you have, the better the translation should be able to get, even in languages that aren't explicitly mentioned in the hint structure.
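A rough sketch of how invisible hints could work. The `{{hint|...|sense=...}}` syntax is entirely made up for this example; the idea is only that readers see the plain phrase while a translation engine gets the disambiguating sense.

```python
import re

# Assumed hint syntax (hypothetical): {{hint|phrase|sense=description}}
HINT = re.compile(r"\{\{hint\|([^|]+)\|sense=([^}]+)\}\}")

def render_for_reader(text):
    """Strip the hint wrapper, leaving only the visible phrase."""
    return HINT.sub(r"\1", text)

def extract_hints(text):
    """Collect (phrase, sense) pairs to feed a translation engine."""
    return HINT.findall(text)
```

Because the hints ride along inside the article source, they accumulate as editors fix mistranslations, which is the "the more hints you have, the better" dynamic described above.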
In the radical extreme of using hyperlinks, every single word and phrase would be linked. On Wikipedia this is discouraged because it makes editing and reading articles unwieldy, so writers are encouraged to be selective about which terms to call out within a given context. I am very interested in the idea of making it easy to link *everything*, and then applying a heuristic that decides whether to offer the user a visual cue to a particular article. This could be achieved through a comprehensive analysis of article clusters, highlighting a link only when it leads to a sufficiently tangential subject.
Personal information about the individual browsing can help this process, such as noticing that an American probably doesn't need to see a hyperlink to an article about a U.S. state while someone living in Africa might. Going further, data encoded in social networks could be used so that topics relevant to people you know are visually distinguished. (I'm sure this sort of capability is being explored by many search engines and portals that want to munge pages you are reading instead of handing you the site directly, but the legal issues of doing this to non-free content are extensive.)
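The "link everything, highlight selectively" idea above can be sketched as a single decision function. The relatedness scores and the reader profile are invented placeholders here; a real system would derive them from article-cluster analysis and from the social-network data just described.

```python
def should_highlight(source_topic, target_topic, relatedness,
                     reader_familiar, tangent_threshold=0.4):
    """Decide whether a (universally present) link gets a visual cue.

    relatedness: dict mapping (source, target) pairs to a 0..1 score,
                 where high means the topics sit in the same cluster.
    reader_familiar: set of topics the reader is assumed to know already.
    """
    if target_topic in reader_familiar:
        return False  # e.g. a U.S. state for an American reader
    score = relatedness.get((source_topic, target_topic), 0.0)
    # Tangential subjects (low relatedness) are the interesting ones to cue.
    return score < tangent_threshold
```

The design choice worth noting: the link always exists in the markup; only its visibility is computed per reader, so writers never have to decide what to call out.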
Management of Article Length
I like assembling lots of short, stubby articles into a single article that gives a better overview of how they relate, making the whole easier to grasp. So though I'm not always a fan of disruptive templates that show up on articles, I am very fond of the cute merge tags. Working on this kind of restructuring feels good, and I like the way the refinements become organic...if they merge and split again, that's fine.
A specific feature that I think should be implemented is the ability to transclude the lead section from one article into another. When a page has grown too long and been broken up, people are stuck writing a second summary in the main article, and that summary inevitably drifts out of date. An extension of this facility could let the user express a target article length, and then visually break up or merge the sections of articles automatically.
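A minimal sketch of lead-section transclusion, assuming articles are stored as wiki source where a line beginning with `==` opens the first real section. The `{{lead:Title}}` marker and the `fetch_source` callback are both hypothetical stand-ins for whatever the wiki backend would actually provide.

```python
import re

def lead_section(source):
    """Everything before the first section heading is the lead."""
    lines = source.splitlines()
    for i, line in enumerate(lines):
        if line.startswith("=="):
            return "\n".join(lines[:i]).strip()
    return source.strip()

def transclude_leads(summary_article, fetch_source):
    """Replace {{lead:Title}} markers with the live lead of that article,
    so the parent overview can never drift out of date."""
    def repl(match):
        return lead_section(fetch_source(match.group(1)))
    return re.sub(r"\{\{lead:([^}]+)\}\}", repl, summary_article)
```

Because the summary is fetched at render time rather than copied, every edit to the sub-article's lead shows up in the overview for free.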
Lobby Sites To Surrender Content and Become Wikipedia Subset Mirrors
Many non-wiki sites currently exist whose content overlaps with a subset of the Wikipedia, such as the List of Atari 2600 games. In an ideal world these sites would willingly surrender their content to the Wikipedia, to make it a better information resource. Though profit motives block many from doing so, I believe there are at least a few sites that would like to but are concerned that there will not be a solid "snapshot" providing an archive of the information in a form they have approved of.
Although I have maligned sites which replicate Wikipedia's content, I believe it would be perfectly acceptable for non-profit sites to mirror subsets of Wikipedia. However, I do not think these sites should carry advertising, and there should be a very clear indication of how to reach the current up-to-date version and contribute edits to it. The ideal understanding would be that these sites synchronize their versions to the latest Wikipedia once they are satisfied that the changes are improvements. Running a site this way, with the discipline not to take the content in a separate direction, does require trust that the open process will eventually lead to a version everyone can be happy with.