Researching Wikipedia 
I'm a student at Harvard College currently engaged in a semester-long research project on online communities, and I was wondering if you'd be interested in participating. I'm not yet sure what "participating" might mean, but I hoped you might have some ideas or interest. I couldn't find any direct contact information, so I'm making this post in hopes that you are interested. If so, post to my talk page; otherwise, excise this comment. I won't be offended.
Field-value pairs 
Maybe you can even merge my article into yours? :0))
- Absolute-atively, that's a good application of f-v pairs. I'm going to concentrate on putting the underlying technology in place in MediaWiki, and then try applying it to categorization (like [[category=physics]]). We'll see how that works out, but I think part-whole relationships could be very useful. --Evan 06:46, 8 Dec 2003 (UTC)
- I am willing to be a tester on test.wikipedia.org to help fix bugs :0) Hashar 09:25, 10 Dec 2003 (UTC)
You might be interested in this US patent and the discussion about it, because you might have created w:Prior art by posting "This is a proposal to allow field-value pairs in MediaWiki articles." at 9:40, 7 December 2003.
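For context, the kind of markup the proposal describes ([[category=physics]] and similar) amounts to extracting field-value pairs from wikitext. A minimal sketch of that extraction follows; the exact syntax, regex, and function names here are assumptions for illustration, not MediaWiki's actual implementation.

```python
import re

# Hypothetical pair syntax from the proposal: [[field=value]] embedded in
# wikitext. Field names may contain hyphens (e.g. part-of).
FV_PAIR = re.compile(r"\[\[([\w-]+)=([^\]|]+)\]\]")

def extract_field_values(wikitext):
    """Return a list of (field, value) pairs found in the text."""
    return FV_PAIR.findall(wikitext)

pairs = extract_field_values(
    "Quantum mechanics is a theory... [[category=physics]] [[part-of=science]]"
)
```

From pairs like these, categorization (and part-whole relationships, as mentioned above) could be derived.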
Mutante 10:38, 9 May 2006 (UTC)
Creative Commons dual licensing 
Hi Evan. What do you think of Guide to the CC dual-license? The goal is to enable users to dual-license their Wikipedia entries with CC-sa-at. Do you think we could start a trend? Best, -- Kowey 09:49, 10 Jan 2004 (UTC)
- Well, thanks for pointing it out. I actually think dual-licensing is a doomed pursuit, fraught with hazards. I don't recommend it. See http://www.wikitravel.org/en/article/Wikitravel:Dual_licensing for some details. --Evan 01:17, 11 Jan 2004 (UTC)
Any news about a Debian package? I'd really appreciate one. Thanks :-)
184.108.40.206 19:35, 21 Jul 2004 (UTC)
404 handler caching 
Hi Evan!
I'm looking for an error-404 system to create and handle a cache, in PHP only.
I saw http://meta.wikimedia.org/wiki/404_handler_caching but the links to your work are dead :(
Where can I find your work?
Thanks in advance
--Iubito 06:45, 8 Mar 2005 (UTC)
- The links are back up. Thanks for the reminder. --Evan 20:15, 14 Mar 2005 (UTC)
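For anyone landing here later, the general idea behind 404-handler caching can be sketched briefly: the web server serves pages as static files, and a cache miss falls through to the 404 handler, which renders the page, writes it into the static cache, and returns it, so the next request is served statically. The sketch below is Python rather than PHP, and the paths and renderer are invented for illustration; it is not Evan's actual code.

```python
import os

CACHE_DIR = "/tmp/wiki-cache"  # assumption: where rendered pages are cached

def render_page(title):
    """Stand-in for the real wiki page renderer (assumption)."""
    return "<html><body><h1>%s</h1></body></html>" % title

def handle_404(title):
    """Invoked by the web server's 404 handler on a cache miss.

    Renders the page, writes it into the static cache so the next
    request is served directly by the web server, and returns the HTML.
    """
    path = os.path.join(CACHE_DIR, title + ".html")
    if not os.path.exists(path):
        os.makedirs(CACHE_DIR, exist_ok=True)
        with open(path, "w") as f:
            f.write(render_page(title))
    with open(path) as f:
        return f.read()
```

The payoff is that only the first hit on a page pays the rendering cost; cache invalidation (deleting the file when the page changes) is the part the real system has to get right.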
I'm extremely interested in the Docbook conversion project listed on the Meta-Wiki that you have started. Is there anything to this project beyond the planning on the wiki page?
I need to set up a collaborative documentation system, and MediaWiki with Docbook output would be absolutely ideal.
RDF metadata and other XML things 
Hi - is the RDF metadata stuff still maintained? It seems to be disabled (action=dublincore gives me a blank page), anyway... I would really like to see this work. It would be especially cool if it were possible to list relations between pages in the RDF output (links, reverse links, language links, categories, etc.).
In related news, I have been working to get XML output working for MediaWiki. I have chosen a quick-and-dirty way to do this, avoiding rewriting the parser and such - if you are interested, please have a look at User:Duesentrieb/XML and tell me how you like it. From a representation in XML, it would be a matter of (more or less) simple XSLT conversion to get to Docbook, FO, etc., and from there to PDF, TeX, PS, ...
Please tell me what you think -- Duesentrieb 00:19, 27 May 2005 (UTC)
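The conversion step described above - XML representation in, DocBook out - would be an XSLT stylesheet in practice. As a rough stdlib-only illustration of the same mapping (the wiki-side element names here are invented, not the actual schema from User:Duesentrieb/XML):

```python
import xml.etree.ElementTree as ET

def wiki_xml_to_docbook(wiki_xml):
    """Map a hypothetical <page><title>...<section>... tree to DocBook.

    In practice this mapping would be an XSLT stylesheet; this function
    just illustrates the shape of the transformation.
    """
    src = ET.fromstring(wiki_xml)
    article = ET.Element("article")
    ET.SubElement(article, "title").text = src.findtext("title")
    for sec in src.findall("section"):
        dsec = ET.SubElement(article, "sect1")
        ET.SubElement(dsec, "title").text = sec.get("heading")
        ET.SubElement(dsec, "para").text = sec.text
    return ET.tostring(article, encoding="unicode")

docbook = wiki_xml_to_docbook(
    "<page><title>Example</title>"
    "<section heading='Intro'>Some text.</section></page>"
)
```

Once the content is in DocBook, the existing DocBook toolchains handle the PDF/TeX/PS steps.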
Hey there - I have tried to put together the results of our chat on a new page, RDF. Feel free to change it around or move it to a better name. Thanks again for your input :) -- Duesentrieb 17:38, 3 Jun 2005 (UTC)
I just want to tell you that I have a prototype up and running (not public, though) - it works excellently! Thanks for the pointer to RAP and the DC terms (I knew about the metadata spec, but not the terms). I am using a few more vocabularies now, namely SKOS and EXIF (look at RDF#Links).
I will be away until next Friday (maybe I'll drop by here anyway, not sure) - when I'm back, I hope I can get someone to install it on a test wiki. I can give you the source then, too. There are still a lot of TODOs and FIXMEs in the code...
Right now, I have the following datasets (models) implemented:
- basic: the standard DC/CC stuff (date, publisher, creator, format, etc.). CC is not implemented as a separate dataset, but as a different output format (RDF/XML, CC-style).
- custom: custom RDF stuff extracted from the page.
- media: basic info about the media file associated with an image:xxx page. This includes custom media info on the description page, given in a simplified format:
<media-info> source=blabla | creator=foobar | ... </media-info>, which could also come from a license tag, etc. As of now, only the last uploader is included. Maybe the uploaders of all versions should be there? Or should this be optional?
- exif: EXIF data for images. The exif and media datasets have the file itself as the subject, not the description page.
- authors: full list of contributors. The current default cutoff is 1000 unique authors.
- links: outgoing links for the page. Cutoff applies.
- links: incoming links for the page. Cutoff applies.
- categories: categories of a page (cutoff applies). This is output as both dc:subject and skos:subject (skos:subject is a refinement of dc:subject, but has an inverse, skos:isSubjectOf).
- members: category members (cutoff applies). This uses the skos:isSubjectOf relation (maybe skos:member would be better? or make this an option?). Also, categories are always typed as skos:Concept (except in CC mode, where everything is a cc:Work - not good, because the domain of skos:subject is skos:Concept).
Pending datasets are user contributions and user uploads (and maybe some more...). Also, I'm somewhat concerned about how RAP scales for very large models. As I said before, it would IMHO be better to have something that generates output on the fly, without building models in memory. I'll have to think about that some more, I guess. Using the object cache would be a good idea too, I think - but I haven't looked at that yet. What would have to be done in order to have it flushed automatically when related pages change?
So, best regards, and thanks again! See you in a few days... -- Duesentrieb 14:22, 5 Jun 2005 (UTC)
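Two of the pieces described above are easy to sketch concretely: parsing the simplified <media-info> field-value format, and emitting categories as both dc:subject and skos:subject. The sketch below is stdlib-only and illustrative; the real extension (built on RAP) would produce proper RDF/XML with full namespace declarations, so treat the output shape here as an assumption.

```python
def parse_media_info(text):
    """Parse '<media-info> source=blabla | creator=foobar </media-info>'
    into a dict of field-value pairs, as in the 'media' dataset above."""
    start = text.find("<media-info>") + len("<media-info>")
    end = text.find("</media-info>")
    pairs = {}
    for part in text[start:end].split("|"):
        if "=" in part:
            key, value = part.split("=", 1)
            pairs[key.strip()] = value.strip()
    return pairs

def categories_to_rdf(page_uri, categories):
    """Emit each category as both dc:subject and skos:subject,
    mirroring the 'categories' dataset description above."""
    lines = ['<rdf:Description rdf:about="%s">' % page_uri]
    for cat in categories:
        lines.append('  <dc:subject>%s</dc:subject>' % cat)
        lines.append('  <skos:subject rdf:resource="%s" />' % cat)
    lines.append('</rdf:Description>')
    return "\n".join(lines)
```

Generating line-by-line like this (rather than building an in-memory model first) is exactly the on-the-fly approach the scaling concern above points toward.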
- I think building models in memory is the only way to have models come from lots of different sources and yet still create efficient output. Writing info to stdout as you go would probably be a lot less useful. I think you should make sure it's actually a problem first. --Evan 20:22, 15 Jun 2005 (UTC)
Well Done!!! 
On reading your Friends of gays should not be allowed to edit articles article, I laughed out loud! Well done - you're a talented writer! --Celestianpower
- I'd like to second that - that was brilliant! A wonderfully witty way to address these issues. Thanks! KillerChihuahua
- Me too. Absolutely hilarious. Wonderfully dry. -- 220.127.116.11 03:40, 27 March 2006 (UTC)
- I agree completely. That was absolutely hilarious. Thank you. -- Joshua 17:32, 2 February 2007 (PST)
RDF metadata 
- OK. I think at some point the CC metadata is going to disappear in favor of the RDF extension... we'll see. --18.104.22.168 03:20, 14 February 2006 (UTC)
I'm H. M. Haitham from the Arabic Wikipedia, writing about your request at the embassy. Could you please explain further what tasks need help for the Alexandria bid?
Thank you..--Haitham 02:43, 17 August 2006 (UTC)
Alexandria Bid 
Hello Evan, it would be great if we had Wikimania in Alexandria; it would get Wikipedia much more attention than it has now in Egypt. I'm from Alex and would like to help you with the bid. Contact me on the Arabic Wikipedia IRC channel #wikipedia-ar or through my talk page on the Arabic Wikipedia. Thank you --Mido 10:49, 17 August 2006 (UTC)
Hello Evan, I wanted to tell you that we got a final OK from the library, and they agreed to host the event for free in the Conference Center at the library. I hope to hear from you; any further help on the bid will be appreciated. Thank you. --Mido 03:05, 8 September 2006 (UTC)
In reference to the bid for Alexandria to host Wikimania 2007: they're interested in working with Wikimedia to make it happen, but they have to be 100% sure that the conference will be held in Alexandria.
They also need a person with high authority in Wikimania to contact them and arrange everything with them.
I hope you can get one of the Wikimedia heads to negotiate with the Bibliotheca Alexandrina, to offer all the guarantees they need to hold this conference in Alexandria. As I said, they're interested, but they need to be sure, and they need a person whose rank in Wikimedia is high and who is acknowledged to have legal responsibility.
The contact methods are available on the contact page of their website. H. M. Haitham
- Contact has already been made with the BA, and there's no need for contact from anyone from Wikimedia at the moment; we're already in touch, and they've already agreed to host the event and offer the halls for free, as I told you earlier. Thanks --Mido 19:13, 10 September 2006 (UTC)
Wikimedia Canada Chapter 
I saw that you signed up as a participant. Would you be interested in helping to get the chapter going? We're seeking approximately 10 members to help get things rolling. Leave a note on the WM Can page or on my talk page. v:User:Historybuff 06:21, 11 April 2007 (UTC)
Striking your vote 
Thank you for your interest in the Wikimedia Board Election. The Election Committee regretfully informs you that your previous vote was received in error and will be struck according to the election rules, described below.
The Election Committee regretfully announces today that we will have to remove approximately 220 votes submitted. These votes were cast by people not entitled to vote. The election rules state that users must have at least 400 edits by June 1 to be eligible to vote.
The voter lists we initially sent to Software in the Public Interest (our third-party election partner) were wrong, and one of your accounts was included in our initial list. There was a bug in the edit-counting program, and the list we sent contained every account with 201 or more edits, instead of 400 or more. As a result, large numbers of people were marked as qualified by the software who shouldn't have been. The bug has been fixed, and an amended list has already been sent to SPI.
Our first (and wrong) list contained 80,458 accounts as qualified. The proper number of qualified voters in the SPI list is now 52,750. As of the morning of July 4 (UTC), there are 2,773 unique voters, and 220 people, including you, have voted who are not qualified based upon this identified error.
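In effect, the bug described above is a wrong threshold in the eligibility filter: the list was generated with a cutoff of 201 edits instead of 400. A tiny sketch with made-up account data (the real counting program and its data are not shown here):

```python
def eligible_voters(accounts, min_edits=400):
    """Filter accounts by edit count as of the cutoff date.

    The bug described above amounts to this having been run with
    min_edits=201 instead of the required 400.
    """
    return [name for name, edits in accounts if edits >= min_edits]

# Illustrative accounts only; edit counts are invented.
accounts = [("A", 150), ("B", 250), ("C", 450)]
buggy = eligible_voters(accounts, min_edits=201)  # B wrongly qualifies
fixed = eligible_voters(accounts)                 # only C qualifies
```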
In accordance with voting regulations the Election Committee will strike those approximately 220 votes due to lack of voting eligibility. The list of struck votes is available at https://wikimedia.spi-inc.org/index.php/List_of_struck_votes.
We are aware that some of the people affected may have other accounts with more than 400 edits, and hence may still be eligible to vote. We encourage you to consider voting again from another account, if you have one. If you have no other eligible account, we hope you will meet the criteria for the next election, and we look forward to your participation in future elections.
Your comments, questions, or messages to the Committee would be appreciated; you can leave them at m:Talk:Board elections/2007/en. Other language versions are available at m:Translation requests/Eleccom mail, 07-05.
Again, we deeply apologize for any inconvenience.
For Wikimedia Board Election Steering Committee
I really liked this site. If you ever fancy doing some very low key advertising for a charity then let me know --AndrewCates 19:00, 11 February 2008 (UTC)