User talk:Magnus Manske

From Meta, a Wikimedia project coordination wiki

For bug reports or feature requests concerning my tools on Labs, please use BitBucket.
It helps me keep track of all the requests. Beats spreading them over half a dozen talk pages ;-)

hi,

Is there any chance to block anonymous users from reverting images? It seems a script can compute the link that reverts an image to an old version. Something like this is going on on de.wikipedia.

--141.53.194.251 16:45, 23 Jun 2004 (UTC)

HTML to wiki converter[edit]

I'd like to thank you for this great tool, it makes my life a zillion times easier. Thanks again and keep up the great work! Halibutt


- Could you perhaps also put the converter on WikiMedia - the Uni Bonn may not keep it running on their machines forever... - Thanks Sebastian

Enhanced Recent Changes as Default[edit]

Magnus, amazing work! I am currently running MediaWiki version 1.5beta3 and was wondering how I can make Enhanced recent changes the default for all users when they register their account, while still leaving the option available in user preferences to uncheck it. I went briefly over DefaultSettings.php but couldn't find the parameter related to this setting.

Thanks! --Martin 21:12, 11 August 2005 (UTC)[reply]
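For later readers: in MediaWiki, default preferences for accounts are set via $wgDefaultUserOptions in LocalSettings.php (not DefaultSettings.php, which should stay untouched); a minimal sketch, assuming the 1.5-era preference key usenewrc for enhanced recent changes:

```php
# LocalSettings.php - make enhanced recent changes the default;
# users can still uncheck the option in their own preferences.
$wgDefaultUserOptions['usenewrc'] = 1;
```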

Tables / col[edit]

what do you think? --grin 10:22, 3 Dec 2004 (UTC)

RfA revisited[edit]

Hi Magnus,

Your adminship is up for review (along with everyone else who became an admin in the same n-month period). Do you still want adminship on meta?

Rock on, +sj+ 01:10, 19 Jan 2005 (UTC)

Download the client[edit]

Hi Magnus, am I stupid? Where can I get the software? I've tried your download link but it was not reachable!? Dieter 15:41, 25 Jan 2005 (UTC)

See Talk:MediaWiki feature list#Article validation feature. See Talk:Article validation feature#Where is the code?. Brianjd | Why restrict HTML? | 06:07, 2005 Mar 13 (UTC)

Magnus, please check En validation topics - User:Beland has taken most of the current suggestions and put them together into a form that I think will work very nicely (En_validation_topics#Consolidated_proposal). There are a couple of questions I've asked you on that page as well - David Gerard 22:59, 30 May 2005 (UTC)[reply]

Magnus, I have a few requests for this feature, which taken together, I believe, will substantially improve it:

Article (stable-version) - URL:"http://en.wikipedia.org/wiki/Mathematics/stable"
  • firstly, the review is worthless to a reader unless the reader is somehow affected by it. A reader should be able to see the page with the highest rating.
    • I believe there should be another tab on the top for this, perhaps "stable", or replace article with "draft" and make the new tab "article". "article" is version with highest rating, "draft" is latest version.
    • I believe the stable version should be the default for non-logged-in users, and the draft for logged-in users.
      • (low priority) registered users should be able to choose the default in their preferences.
Click where a thumb would be to flip to thumbs up (approve); click again to flip to thumbs down (disapprove). The overall rating is shown in a horizontal bar. The current stable version (highest rated) is highlighted. (sorry about the bad image editing)
  • there should be a horizontal bar in the history, for each version, showing its rating.
    • (low priority) it would be nice to be able to vote via this bar
    • (low priority) it would be nice to have a similar feature in diffs, and be able to vote from there
  • secondly, this means that a really old version with a high rating might be the "stable" version while a newer, lower-rated, but better version is not; there are solutions to this:
    • logarithmic decay of effective ratings
    • a maximum number of revisions a user can rate on an article at a time - rating a revision you haven't rated removes the rating you cast the longest time ago (not necessarily the one on the oldest revision)
  • the ability to filter out revisions in the history that fall below a specifiable rating threshold.

That's the gist of it. Feel free to discuss. Thanks. Kevin Baastalk 01:15, 4 April 2006 (UTC)[reply]
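The "logarithmic decay of effective ratings" idea above can be sketched as follows; the decay formula and names here are illustrative assumptions, not any deployed feature:

```python
import math

def effective_rating(raw_rating, revisions_since):
    """Decay a revision's raw rating as newer revisions accumulate,
    so a very old, highly rated revision does not stay "stable" forever."""
    # math.log1p(0) == 0, so the newest revision keeps its full rating.
    return raw_rating / (1.0 + math.log1p(revisions_since))

# Pick the "stable" version: the revision with the highest effective rating.
ratings = {"r1": 9.0, "r2": 7.0, "r3": 6.5}  # raw ratings, oldest first
ages = {"r1": 2, "r2": 1, "r3": 0}           # newer revisions since each one
stable = max(ratings, key=lambda r: effective_rating(ratings[r], ages[r]))
# Here r3 wins despite its lower raw rating, because r1's rating has decayed.
```

Any monotonically increasing denominator would do; log1p just makes the decay gentle.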

Historical atlas[edit]

Hi Magnus,

I want to help you create an online historical atlas. At the moment, atlases in book form are better than the online versions. But the internet has such huge possibilities that the online maps should become much better. Maps could be clickable, giving links to other maps and data in text. Maps could give detailed information on areas at small intervals in time. Alternative maps of an area could be made when there is a lack of sources.

I'm studying history in Leiden, Holland, and know little about software. But I have been looking at historical maps since my childhood. In the 1990s an online historical atlas was made, called Centennia, but the information on it is limited (Europe and the Middle East from 1000 until now) and it is not free; it costs $89. Maps in modern historical atlases are mainly derived from German historical atlases from the 19th century. German atlases are the most accurate, but American atlases give a good overview.--Daan 14:02, 27 August 2005 (UTC)[reply]

Google Summer of Code[edit]

Hi, I'm sorry to disturb you. I know you have been working on a wiki2xml export tool. I would like to know whom I have to contact about the export project and how it works :) Thank you for your response.

BuThAnE

missing image tool[edit]

suggestion: add *disambig* to the filter instead of adding many images with that word to the list. drini 01:30, 7 October 2006 (UTC)[reply]

Likewise *Nuvola* and *Noia* which are some widely used icon sets for disambigs and stubs. drini 01:47, 7 October 2006 (UTC)[reply]

Perhaps Crystal*.png and Qsicon*.png ? drini 01:53, 7 October 2006 (UTC)[reply]

missing image tool redux[edit]

Is there a way for the script to work on a given list instead of a random choice? For instance, in an article about a Jules Verne novel, the script found a Jules Verne picture, but I'd then like to search for a frigate or other related items; I don't know how to feed the script the list of articles I want pictures for. drini 17:06, 9 October 2006 (UTC)[reply]

Annoying you again[edit]

I was curious about what the colors on the missing image list represent. drini 21:38, 9 October 2006 (UTC)[reply]

One problem[edit]

Hi :-) I find your missing images tool really useful :-) But I have a small problem: it doesn't seem to work on articles whose names contain special characters. For example, the article ca:Nombre d'Avogadro has an image and interwiki links, but the search doesn't find it... Result number 7 with search parameters:

  • Check in: ca.wikipedia
  • Category : Química
  • Subcat depth : 0

Thanks :-) --Joanjoc 15:20, 5 November 2006 (UTC)[reply]

Thanks for your rapid response :-) Another minor problem: running the same query as above but with a subcat depth of 1, some redirects appear: ca:Elements químics and ca:Química física. The first page contains only the redirect code, without images or categories, but the destination page has interwikis not reported by your magnificent tool... --Joanjoc 20:43, 8 November 2006 (UTC)[reply]

Macedonian for CatScan2[edit]

Hello Magnus, I added the complete Macedonian translation (mk) of CatScan2 on CatScan2/Interface and the manual (CatScan2/mk) and I was wondering if you could tell me when we can expect to see it implemented. Thanks a lot! --B. Jankuloski 02:31, 7 July 2010 (UTC)[reply]

It was "implemented" the moment you put the translation on the interface page. If something's not working, please tell me. --Magnus Manske 07:58, 7 July 2010 (UTC)[reply]

Portuguese Makeref/Toolserver[edit]

First, my apologies for making my request on this talk page instead of via the JIRA system, but I don't know how that system works.

I created a Portuguese version of makeref (here and here), but for some reason the option "Um artigo num jornal académico ou num periódico" is not working.

Can you explain what I am doing wrong?

Thanks in advance--MiguelMadeira 16:34, 27 December 2010 (UTC)[reply]

No more help needed - I discovered what the problem is--MiguelMadeira 18:06, 27 December 2010 (UTC)[reply]

Main categories[edit]

Hi Magnus - Thank you very much for the Catscan tool, it is so very useful! I am currently doing some academic research on English Wikipedia article pages and I would like to categorize them at the Main category level. For this, I was trying to use http://toolserver.org/~magnus/catscan_rewrite.php with categories such as "education", "business" and depth 50 (or so) to generate complete lists of all the articles in each category (so, 10 lists). Obviously some articles will be in multiple lists.

However I suspect this query is difficult for the toolserver because of the sheer amount of nodes it needs to explore in the category tree. Is there any way you can help me get those 10 lists?

Thank you so much!!

Best, Andreea

email: agorbatai (at) hbs (dot) edu

commonshelper2[edit]

Hello. There was a small question on the local ru.wiki village pump about CommonsHelper's work recently, and I remembered that it still doesn't work for ru.wiki. The bug is still open, but you've proposed to update CommonsHelperII instead of trying to fix CommonsHelper. I updated the necessary data in April, but I suppose CHII wasn't updated according to this info. Is it possible to fix this issue? Ru.wiki hasn't been able to transfer images to Commons since the beginning of 2011, if I am not wrong... :( rubin16 14:02, 14 August 2011 (UTC)[reply]

GLAMorous and Atlassian problems[edit]

The whole Atlassian thing is a complete fiasco -- I spent at least 15 minutes at that site, and never managed to log in (at one point, it told me I was logged in, but I actually wasn't).

In any case, GLAMorous has problems, in that:
1) The "Copy the URL of this link to return to this view. This data is also available in XML format." links do not work if you're searching by user. I had to manually munge the URL http://toolserver.org/~magnus/glamorous.php?doit=1&category=&use_globalusage=1&ns0=1&show_details=1&format=xml to http://toolserver.org/~magnus/glamorous.php?doit=1&username=AnonMoos&use_globalusage=1&ns0=1&show_details=1&format=xml
2) An alternative "slightly detailed" mode is really needed. When I clicked on the second of the URLs above, the download stopped after 3.81 megabytes, and the data was still in the middle of reporting on the first image (out of 1339) on the list, File:Flag_of_Turkey.svg. I think people would be interested in a view which did NOT report on individual page uses, so that they could see which of their images were most used in a reasonably-sized overview.
Thanks! -- AnonMoos 16:19, 22 December 2011 (UTC)[reply]

XSS vulnerability[edit]

I found a cross-site scripting (XSS) vulnerability in CatScan. Example: https://toolserver.org/~magnus/catscan_rewrite.php?categories=%3C/textarea%3E%3Cscript%3Ealert(%27XSSed%27);%3C/script%3E (will open an alert box). I don't know how to use JIRA, but it looks like I'll need a toolserver account (I don't have one) and/or to give out personal info, so I'm writing this here. πr2 (tc) 16:28, 28 March 2012 (UTC)[reply]

Several (all?) of your other tools are vulnerable as well. (could it be in some shared code?) πr2 (tc) 16:58, 28 March 2012 (UTC)[reply]
@PiRSquared17 I had just sent Magnus an email regarding a similar vulnerability in PetScan. Kind of obnoxious that issues like this can't get fixed over such a long time (10 years?). 1234qwer1234qwer4 (talk) 21:05, 11 January 2022 (UTC)[reply]
(For reference, the above has been fixed after being submitted as Phabricator tasks.) 1234qwer1234qwer4 (talk) 19:26, 5 June 2022 (UTC)[reply]
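For reference, the bug class reported above is reflected XSS: a query parameter echoed into the page without escaping. The fix is to HTML-escape anything reflected into markup; a minimal sketch in Python (the PHP tools themselves would use htmlspecialchars for the same effect):

```python
import html

def render_textarea(categories):
    """Reflect user input into a <textarea> safely: escape the characters
    that could close the tag and start a <script> element."""
    return "<textarea>%s</textarea>" % html.escape(categories, quote=True)

# The payload from the report above, rendered harmless:
payload = "</textarea><script>alert('XSSed');</script>"
safe = render_textarea(payload)
# `safe` contains no literal <script> tag, only escaped entities.
```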

TreeViews yearly?[edit]

Hi Magnus!

Could TreeViews be expanded to include yearly statistics, or are those better found elsewhere? --Palnatoke (talk) 13:45, 9 May 2012 (UTC)[reply]

Catscan - thanks![edit]

Thanks for the Catscan fix and advice! Allens (talk) 16:26, 22 May 2012 (UTC)[reply]

Cambridge fresher's fair[edit]

Hello Magnus,

I've bid for a stall on behalf of the Cambridge University Wikipedia Society at the Cambridge University fresher's fair, 2-3 October. If you are around, it would be great if you could come and help by staffing the stall for a few hours - having one of the inventors of MediaWiki man the stall will certainly be a boost!

The fair will run, roughly, from 9am to 7pm on the 2nd, and 9am to 4pm on the 3rd; details are yet to be confirmed.[1] You certainly won't be expected to stay the whole day unless you really want to! It'll be a sign-up stall for the campus Wikipedia Society, and we'll give out Wikimedia freebies at the event to promote awareness for Wikipedia-editing and Wikimedia UK.

Please contact me if you're interested in helping, even if it's just a few hours. Thanks! (Apologies for double-posting if you've seen this message on wikimediauk-l already!) Deryck C. 22:30, 5 September 2012 (UTC)[reply]

Wikipedia DVD/offline[edit]

Hi Magnus, I saw your contribution on http://meta.wikimedia.org/wiki/Wikimedia_and_Mandrakesoft. OK, you may be the guy to give me some advice. I am teaching in a country plagued not only by censorship but also by slow internet. We discussed at our expat school whether there is a way to install Wikipedia directly onto our school server, so that we can have the kids work with it via the school network without being subject to the frequent interruptions. Besides that, I have a very strong interest in some advice on how you practically produced that German DVD you mentioned. If you think you can help me, kindly leave me a line at kipala@web.de. --Kipala (talk) 05:51, 25 October 2012 (UTC)[reply]

Cambridge meetup March 2013[edit]

Is that you? IP resolves to Sanger. Deryck C. 21:25, 20 February 2013 (UTC)[reply]

Yes, thanks! Wikimedia sites keep logging me out randomly :-( --Magnus Manske (talk) 22:37, 20 February 2013 (UTC)[reply]

Reasonator[edit]

Do you plan to create the page Reasonator? πr2 (t • c) 19:38, 8 April 2013 (UTC)[reply]

Maybe, eventually, but feel free to go ahead! --Magnus Manske (talk) 21:25, 8 April 2013 (UTC)[reply]
Created. You can check it or add more details... I don't have time to check it. Would you like to upload File:Reasonator screenshot Bach.png (like you've done for File:GeneaWiki screenshot Bach.png)? πr2 (t • c) 05:00, 9 April 2013 (UTC)[reply]

Requests for BaGLAMa[edit]

Hi Magnus!

I was wondering if you could be so kind as to help me by adding the categories I have listed here to BaGLAMa? It would really help WMSE, as some of the GLAMs involved want some statistics to brag about :-).

Best,

John Andersson (WMSE) (talk) 14:47, 22 June 2013 (UTC)[reply]

Added. They will show up on the next run after the current one. A couple of days. --Magnus Manske (talk) 15:07, 22 June 2013 (UTC)[reply]
That's great! Thank you! Could you also add https://commons.wikimedia.org/wiki/Category:Media_created_by_technology_pool_of_Wikimedia_Sverige? It would be interesting to see the effect of this project. Best, John Andersson (WMSE) (talk) 20:37, 22 June 2013 (UTC)[reply]

FYI (thanks so much for this wonderful tool). --Nemo 08:14, 18 August 2013 (UTC) P.s.: The notice at the top is outdated; if there's a better place to track Labs tool reports, please tell me and I'll add notes to the tools' pages here on Meta too.[reply]

Reasonator not working[edit]

Dear Magnus, the Reasonator stopped working this week. Could you please have a look? Perhaps this has something to do with the Wikidata change Daniel Kinzler announced this week. Longbow4u (talk) 09:56, 14 September 2013 (UTC)[reply]

Yup, API upgrade broke it. Should be fixed now. Thanks for alerting me. --Magnus Manske (talk) 11:31, 14 September 2013 (UTC)[reply]
Thank you for the prompt fix! It is a great tool. Longbow4u (talk) 12:05, 15 September 2013 (UTC)[reply]

CatScan2 bug[edit]

(No component named CatScan2 in JIRA) Please use database suffix "wiki" for project "wikimedia" in CatScan2 in tool labs. The original error is Could not connect to incubatorwikimedia.labsdb : Unknown MySQL server host 'incubatorwikimedia.labsdb'(115) --Zhuyifei1999 (talk) 08:16, 7 October 2013 (UTC)[reply]

Should now work for incubator. --Magnus Manske (talk) 08:32, 7 October 2013 (UTC)[reply]
Thanks --Zhuyifei1999 (talk) 23:24, 7 October 2013 (UTC)[reply]
I just got Could not connect to roa-tarawiki.labsdb : Unknown MySQL server host 'roa-tarawiki.labsdb' (115) instead. Thanks. --Elitre (WMF) (talk) 16:04, 26 November 2013 (UTC)[reply]
Fixed.--Magnus Manske (talk) 16:25, 26 November 2013 (UTC)[reply]

CommonsHelper and the Afrikaans wiki[edit]

Hi Magnus, I tried to configure the Data af.wikipedia page so that CommonsHelper2 could transfer files (such as this one) from the Afrikaans Wikipedia, but after countless attempts I still can't get it to work properly. Could you please have a look? Thanks, Underlying lk (talk) 17:00, 2 December 2013 (UTC)[reply]

Flickr2Commons not working[edit]

Hello, could you please take a look here: Talk:Flickr2commons#Transfer_failed_:_null. Thanks! darkweasel94 (talk) 10:40, 10 March 2014 (UTC)[reply]

MakeRef[edit]

I am hoping you can spare a few moments to rename "MakeRef/mr" to "MakeRef/Mr" and undelete "MakeRef/Fr" (if it contains useful information, that is). Thanks! -- 79.67.241.227 18:11, 11 April 2014 (UTC)[reply]

Please also consider these changes. -- 79.67.241.227 18:11, 11 April 2014 (UTC)[reply]

Did the linked changes. Moved the page. Can't resurrect "MakeRef/Fr", as I'm not an admin here. --Magnus Manske (talk) 10:06, 16 April 2014 (UTC)[reply]

Five out of six isn't bad. Many thanks for attending to this. -- 79.67.241.227 17:07, 16 April 2014 (UTC)[reply]

Current CatScan bug[edit]

When trying to get a category scan (e.g. this one) in wiki format, the page names are missing. Can you please fix that? עוד מישהו Od Mishehu 10:27, 9 November 2014 (UTC)[reply]

Fixed. Note that I am rarely on meta, so bug reports here can go unnoticed for a long time. --Magnus Manske (talk) 09:00, 13 November 2014 (UTC)[reply]

Category tree[edit]

Hello. Could you tell me, please: is there any possibility of scanning a Wikipedia in some language (37788 categories) and returning, for each category, an inverse tree of parent categories up to some level? For example, for level 2: category a is in b and in c, category b is in d and e, category c is in e, or something like that? Thank you very much in advance, IKhitron (talk) 18:53, 2 December 2014 (UTC)[reply]

There's no ready-made tool, but it's certainly possible. Which language were you thinking of, and what output format? --Magnus Manske (talk) 19:04, 2 December 2014 (UTC)[reply]
Thanks for your quick answer. It's the he Wikipedia (so, unfortunately, non-ASCII names can be an issue). The output format is not so important, as long as it is compact (meaning the tree for each category is contiguous). If you're just adding some print function and any format is equally easy, I prefer CSV. But I do not want to bother you. Thank you very much, IKhitron (talk) 19:26, 2 December 2014 (UTC)[reply]
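A sketch of the traversal such a tool would need, independent of where the parent data comes from (in practice each category's parents could be fetched via the MediaWiki API, action=query&prop=categories); the data below is toy input mirroring the example in the question:

```python
def inverse_tree(cat, parents, depth):
    """Return {level: set of ancestor categories} for `cat`,
    walking the parent relation up to `depth` levels."""
    levels, frontier = {}, {cat}
    for level in range(1, depth + 1):
        frontier = {p for c in frontier for p in parents.get(c, ())}
        if not frontier:
            break
        levels[level] = frontier
    return levels

# Toy data: a is in b and c; b is in d and e; c is in e.
parents = {"a": ["b", "c"], "b": ["d", "e"], "c": ["e"]}
tree = inverse_tree("a", parents, depth=2)
# → {1: {"b", "c"}, 2: {"d", "e"}}
```

Using sets per level keeps the output compact when several children share a parent, as b and c share e here.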

Broken Link to Missing Images Tool[edit]

https://meta.wikimedia.org/wiki/Missing_images_tool

Same here: https://jira.toolserver.org/browse/MAGNUS

CatScan usage for bots[edit]

Hello Magnus,

I'm a little bit troubled by how to use your cool tool with a bot. As I see it, my bot isn't able to tell your tool which output format to use. Maybe you can give me a pointer to documentation that goes further than the GUI.

Regards --THE IT (talk) 12:12, 29 January 2015 (UTC)[reply]

You can use "format=X", with X one of html, csv, tsv, wiki, json. For example, JSON. --Magnus Manske (talk) 13:51, 29 January 2015 (UTC)[reply]
Thanks a lot. That’s a step further ;-). A brief overview of all options (maybe later, when the tool is finished) would be very helpful. If I hit further problems... well, now I have found your XMPP address. Again, many thanks. --THE IT (talk) 17:31, 4 February 2015 (UTC)[reply]
Hello Magnus, sadly there is further struggle. When I try to access the link with a Python script (via the POST method), access is forbidden (403) with the reason 'Requests must have a user agent'. Can you give me a hint about my mistake? Regards --THE IT (talk) 17:59, 8 February 2015 (UTC)[reply]
I don't run the web server (lighttpd), but it seems you need to specify a user agent? Any should do; it just needs to be present. --Magnus Manske (talk) 10:48, 4 March 2015 (UTC)[reply]
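Putting the two replies together, a Python sketch of such a request; the format parameter name follows the reply above, while the exact URL, the doit parameter, and the User-Agent string are placeholder assumptions. The request is built but not sent:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Build a CatScan request with an explicit User-Agent, since the
# lighttpd frontend answers 403 to requests without one.
params = {"format": "json", "doit": "1"}
url = "http://tools.wmflabs.org/catscan2/catscan2.php?" + urlencode(params)
req = Request(
    url,
    data=b"",  # a request body makes urllib issue a POST
    headers={"User-Agent": "my-bot/0.1 (example@example.org)"},
)
# urllib.request.urlopen(req) would actually send it.
```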

Not in the other language[edit]

Hello Magnus, sorry for the interruption; I have a small question: can your Not-in-the-other-language tool also be used for articles between two language versions of Wikivoyage, or only between Wikipedia articles? Thanks in advance. Regards --Nastoshka (talk) 09:40, 6 March 2015 (UTC)[reply]

3D SVG[edit]

Hello, Magnus Manske. You have new messages at User talk:Meetup/Cambridge/26.
You can remove this notice at any time by removing the {{Talkback}} or {{Tb}} template.


Please fill out our Inspire campaign survey[edit]

Thank you for participating in the Wikimedia Inspire campaign during March 2015!

Please take our short survey and share your experience during the campaign.



Many thanks,

Jmorgan (WMF) (talk), on behalf of the IdeaLab team.

23:34, 13 April 2015 (UTC)

This message was delivered automatically to Inspire campaign participants. To unsubscribe from any future IdeaLab reminders, remove your name from this list

Geograph2commons possible bug[edit]

Dear Magnus, I believe the very wonderful Geograph2Commons tool is one of yours? If so, I have a possible bug. Most images can be uploaded, or tell you they have already been uploaded. But some images fail with a message of "Warning" and nothing more specific. This being an example: as far as I can see it hasn't been uploaded already and has a compatible license, but if I've missed something, would it be possible for the warning to say why images like this don't upload? Many thanks WereSpielChequers (talk) 09:01, 21 May 2015 (UTC)[reply]

Dead links[edit]

The links in the red box at the top of this page lead nowhere. But the bug I actually wanted to report: if you select "Wiki" as the format at http://tools.wmflabs.org/catscan2/catscan2.php, files are rendered at full resolution instead of as links. (The colon is missing.)

    is:        [[File:...
    should be: [[:File:...

Thanks, --McZusatz (talk) 12:04, 24 May 2015 (UTC)[reply]

I would also like to report a bug. Where exactly does one do that? --THE IT (talk) 09:24, 13 August 2015 (UTC)[reply]
Here, on my German or English user page, or (preferably) here, if the tool is in the list there. --Magnus Manske (talk) 10:02, 13 August 2015 (UTC)[reply]

Photo of the house at Prinzipalmarkt 28, Münster[edit]

On the Wikipedia page Joseph Heinrich Coppenrath, the photo of the house at Prinzipalmarkt 28, Münster was removed by CommonsDelinker:

File:Med1743.jpg
Münster, Prinzipalmarkt 28: Coppenrath bookshop, 1933

Reason, from the edit summary: Removed Med1743.jpg, deleted on Commons by JuTa. Reason: No license since 24 June 2015

This photo was taken in 1933; that is 82 years ago. There is no copyright on it at all. The Coppenrath Verlag told me the same. So what did I do wrong? Or better, could the deletion be undone? --12weev (talk) 17:13, 6 July 2015 (UTC)[reply]

CommonsDelinker only removes the entry from the wikitext; the image had already been deleted before that. As to why, you will have to ask JuTa. --Magnus Manske (talk) 18:33, 6 July 2015 (UTC)[reply]

BaGLAMa 2 request[edit]

Hi Magnus, Would it be possible to add Category:Media_from_the_National_Liberation_Museum_1944-1945 to BaGLAMa 2? Best, --AWossink (talk) 09:56, 15 September 2015 (UTC)[reply]

Done.--Magnus Manske (talk) 14:29, 15 September 2015 (UTC)[reply]
Awesome. Thanks! --AWossink (talk) 08:14, 17 September 2015 (UTC)[reply]

Unapproved bot[edit]

Please pay attention to the recommendations on this page and request the bot flag for ListeriaBot on the Portuguese Wikipedia. Unapproved bots are not allowed per local policy.

Let me know if you need any help. Kind regards.—Teles «Talk to me ˱C L @ S˲» 07:50, 2 October 2015 (UTC)[reply]

Could you kindly open the request for Portuguese Wikipedia? You can use English language as well. Thanks.—Teles «Talk to me ˱C L @ S˲» 22:10, 22 January 2016 (UTC)[reply]
Your bot has been blocked on pt.wiki. Please, place a local request in case you want to use it.—Teles «Talk to me ˱C L @ S˲» 03:39, 5 February 2016 (UTC)[reply]

Thanks for the endorsement![edit]

Hi @Magnus,

I would like to thank you for your endorsement on the StrepHit IEG proposal, it is really precious!

Cheers,

--Hjfocs (talk) 15:35, 7 October 2015 (UTC)[reply]

geograph2commons bug[edit]

I'm experiencing a bug trying to use geograph2commons. Certain images return "ERROR: Warning" with no further information. For example:

Other images seem to work ok, but some don't and I can't work out why these particular ones fail. -mattbuck (Talk) 15:06, 9 November 2015 (UTC)[reply]

Copied to [3]. -mattbuck (Talk) 15:07, 9 November 2015 (UTC)[reply]
Could it be that the apostrophe in the name of the photographer is a character that is not found in the basic ASCII character set? --Schlosser67 (talk) 12:58, 3 January 2017 (UTC)[reply]

Flickr2Commons[edit]

Hi Magnus! Firstly, congrats on your huge history of support inside Wikimedia projects! :) So, I would like to upload these great pictures from Flickr to Commons, but Flickr2Commons doesn't allow me because they mostly have bad names (just number sequences). Is there a way to automatically set a prefix for the pictures, like "Alexander Baranov - " or something like that? Other suggestions? Regards, Sturm (talk) 19:09, 17 November 2015 (UTC)[reply]

Yes there is! Once the images are loaded in flickr2commons, there is a green-ish button "Prefix selected named", which will do what you want. It's next to the "Select all" and "Deselect all" buttons. --Magnus Manske (talk) 19:30, 17 November 2015 (UTC)[reply]
Found it! Great! :) But now I found that the prefix is always inserted with a double space, even if I do not insert any, like: "File by Alexander Baranov - (19411107842).jpg". Is there any problem with uploading like this? Is there a way to avoid the double space? Hint: it's between "...anov - (19..."; the double space is only visible in the wikimarkup! ;) Sturm (talk) 21:10, 17 November 2015 (UTC)[reply]
Solved it using this basic tool. ;) Sturm (talk) 23:05, 17 November 2015 (UTC)[reply]
Hmmm, this Flickr account uses CC0 1.0 Universal (CC0 1.0), but Flickr2Commons tells me that "No suitable photos found. Maybe they are not under a free license?". I think this license should be added to the tool's list of acceptable licenses, no? Sturm (talk) 08:56, 21 November 2015 (UTC)[reply]
Try it now. --Magnus Manske (talk) 15:40, 21 November 2015 (UTC)[reply]
Still not working, I've got the same message. Just to let you know if I am really doing it properly, I'm trying do it assuming that its ID is 135941882@N05. Sturm (talk) 16:43, 21 November 2015 (UTC)[reply]
Besides, I drop a few "tip lines" on this last days... Sturm (talk) 16:50, 21 November 2015 (UTC)[reply]
I just transferred this with flickr2commons, from your example user. No problems. Maybe force-reload flickr2commons once to clear the cache? --Magnus Manske (talk) 18:44, 21 November 2015 (UTC)[reply]

Just amazed by your tool! Congrats again! :) But another question: when I load an album with over 600 pictures, Flickr2Commons doesn't show thumbnails, in order to avoid problems with the browser's memory use. But many times it is very important to see the thumbnails in order to select which pictures are worth uploading to the project. So, I would like to suggest a feature that shows the thumbnails of a large album in sets of at most 600 files (like "Page 1", "Page 2"...). What do you think? And finally, any chance of a Picasa equivalent?!?!? :D Regards, Sturm (talk) 18:15, 6 December 2015 (UTC)[reply]

Flickr2Commons is able to identify the files of this album as freely licensed only when they are run and uploaded one by one, not when I try to run the album as a set. Sturm (talk) 08:09, 9 December 2015 (UTC)[reply]

OAuth approval[edit]

Hi Magnus,

I would like to request the approval of my OAuth application Kaspar [1.0]. I already did edits with it on Wikidata. Do I have to provide more information? How does this process work? -- T.seppelt (talk) 23:36, 21 November 2015 (UTC)[reply]

What future IdeaLab campaigns would you like to see?[edit]

Hi there,

I’m Jethro, and I’m seeking your help in deciding topics for new IdeaLab campaigns that could be run starting next year. These campaigns aim to bring in proposals and solutions from communities that address a need or problem in Wikimedia projects. I'm interested in hearing your preferences and ideas for campaign topics!

Here’s how to participate:

Take care,

I JethroBT (WMF), Community Resources, Wikimedia Foundation. 03:34, 5 December 2015 (UTC)[reply]

Ilse Kokula[edit]

I am very pleased and grateful for your help - thank you very much. Would it be possible to add that the picture is from 2014? "Ilse Kokula, 2014"? --Sarita98 (talk) 22:25, 29 December 2015 (UTC)[reply]

ListeriaBot at frwiki[edit]

Hello,

Please request the bot flag for ListeriaBot on frwiki. It can be done at fr:Wikipédia:Bot/Statut.

Regards

--Hercule (talk) 13:11, 21 January 2016 (UTC)[reply]

@Hercule: Could you do that for Magnus? I guess that if he had to do that for every wiki, 200+ times, it would be quite tedious. I would have done it myself but I can't right now. TomT0m (talk) 13:23, 21 January 2016 (UTC)[reply]
@Hercule: I tried, but the template is broken, or something. I don't speak French. --Magnus Manske (talk) 15:41, 21 January 2016 (UTC)[reply]
I changed a few things. Please change in the request:
  • PURPOSE_OF_THE_BOT : explain the purpose of your bot
  • USED_SCRIPT : the script you use, if any (like Pywikipedia)
  • ALREADY_USED_ON_(WITH_BOT_STATUS) : the major wikipedias where you have the bot flag. You can also indicate if it has the global bot flag
  • ALREADY_USED_ON_(WITHOUT_BOT_STATUS) : the major wikipedias where you edit without the bot flag (if any)
On the section "Remarques" you can indicate if you have another bot running on frwiki
Regards
--Hercule (talk) 15:52, 21 January 2016 (UTC)[reply]
Lively discussion: w:fr:Wikipédia:Bot/Statut#ListeriaBot. Looks like the opposers are long-time Wikidata haters: w:fr:Wikipédia:Sondage/Wikidata. Nemo 11:14, 22 January 2016 (UTC)[reply]
@Nemo bis: Lol, we are haters? Thank you for the word. As you can read, the discussions about Wikidata on fr.wikipedia started in July 2015, and since then almost no problem has been solved... Looks like the supporters are long-time Wikidata incompetents. --Consulnico (talk) 11:32, 22 January 2016 (UTC)[reply]
I can't read French, but we have the haters on German Wikipedia as well, not-made-here syndrome and all. --Magnus Manske (talk) 11:54, 22 January 2016 (UTC)[reply]

Future IdeaLab Campaigns results[edit]

Last December, I invited you to help determine future IdeaLab campaigns by submitting and voting on different possible topics. I'm happy to announce the results of your participation, and encourage you to review them and our next steps for implementing those campaigns this year. Thank you to everyone who volunteered time to participate and submit ideas.

With great thanks,

I JethroBT (WMF), Community Resources, Wikimedia Foundation. 23:56, 26 January 2016 (UTC)[reply]

ListeriaBot @ etwiki[edit]

ListeriaBot's regular updating of pages seems not to work on etwiki, for example on the list https://et.wikipedia.org/wiki/Vikipeedia:Vikiandmed/Riigijuhid --WikedKentaur (talk) 18:04, 27 January 2016 (UTC)[reply]

Should be fixed now. --Magnus Manske (talk) 19:27, 27 January 2016 (UTC)[reply]

Thanks. And one more request. We use a coordinate template compatible with the German wiki Template:Coordinate. So calls like

{{Coordinate|text=DMS|NS=53.4139|EW=7.79646|name=Q1775366|simple=y|type=landmark|region=x}}

will work on etwiki, while enwiki-style Coord template calls won't. Could you switch ListeriaBot's coordinate template from Coord to Coordinate for etwiki? --WikedKentaur (talk) 18:09, 30 January 2016 (UTC)[reply]

frwiki[edit]

Hello,

After some problematic uses, ListeriaBot has been denied bot status on the French Wikipedia. I don't know how this is supposed to work now. Is it possible for you to block it from editing on the French Wikipedia? Thanks, Jean-Jacques Georges (talk) 15:58, 4 February 2016 (UTC)[reply]

I don't agree with Jean-Jacques Georges: the majority voted yes for your bot (51% yes, 20% against use in the main namespace only, 29% against), but not enough for the bot flag. The best option is to continue using your bot, just not in the main namespace. Sebk (talk) 16:15, 4 February 2016 (UTC)[reply]
Actually there is a consensus to disable it in article namespace on the template talk page [4], but there is no discussion showing a consensus to fully disable it. If you can throttle it to avoid flooding RC, it should be fine. Akeron (talk) 16:19, 4 February 2016 (UTC)[reply]
Well, your bot needed 75%, which it did not obtain in any case, so normally it shouldn't be able to edit anywhere on frwiki. Jean-Jacques Georges (talk) 16:32, 4 February 2016 (UTC)[reply]
Jean-Jacques Georges, you are wrong... we voted on the bot flag, not on the bot itself, and you know it. Sebk (talk) 16:40, 4 February 2016 (UTC)[reply]

Geograph upload - no text[edit]

Hi, the Geograph upload tool http://toolserver.org/~magnus/geograph_org2commons.php isn't uploading the description text.

E.g. a recent upload of

http://www.geograph.org.uk/photo/3803146

gives

https://commons.wikimedia.org/wiki/File:Cottingham_Road,_Kingston_upon_Hull_(geograph_3803146).jpg

Can it capture the text "Our Lady of Lourdes & St Peter Chanel Catholic Church, No.119 Cottingham Road, known locally as the Marist Church. The Society of Mary (Marist Fathers and Brothers) is an international congregation of priests and brothers which has its headquarters in Rome. A temporary church was erected in Cottingham Road by the Marist Fathers in 1925 and replaced by the present church in 1957 (Architect: John Houghton)."?

Thanks Xiiophen (talk) 15:39, 6 February 2016 (UTC)[reply]

Improve wikishootme with specific wikimetric targeting[edit]

Hello.

May I disturb you? Maybe you remember me from the "wiki needs pictures" grant. I am not here to bother you with those details; I am only doing 20% of the work there, so any detailed questions would be for AlessioMela. I am here because, as stated in the grant, our aim is to create cooperative tools, not competitive ones. Since I am in charge of the "social" aspect of the operation, I show image tools to newbies at wikimedian meetings, ask for feedback, etc.

Well, long story short: wikishootme is great and is a good starting point. We are missing users with specific thematic interests on Wikidata, and fixing information about the territories they live in is a nice introduction; it kinda works. Wikishootme suggests very simple tasks, less complicated than finding images on flickr and checking copyright tags. So it is nice; plus, it is a very concrete way to introduce geo-coordinates, a step which, I have to confess, always takes a good 15 minutes in wikimedia classes for newbies. It is good, no doubt about it. That's why I show it with pleasure to newbies for feedback, since our own tool is not 100% operational.

If you remember, I politely suggested that one problem of wikishootme is that it still has a considerable amount of false negatives and positives. False negatives are created by the lack of coordinate information, false positives by the slow update of the image properties on Wikidata. Newbies all confirmed this impression for their own area while exploring it. This means it will take some years for the tool to be used efficiently, and I would like to fix the gap.

So, this year I've started to contact users about wikishootme and how to improve the data it relies on. About 20% of them seem responsive. It is not great, but it is something, and we can apply this strategy globally if we want.

The point is: we can do more. I can use some basic wikimetrics I know to automatically select "trusted local users" (as I did with itWikipedia; I just remember their names from many other initiatives), then filter them by a minimal activity on Wikidata and send a message pointing out the importance of checking coordinates and image properties. It's not going to close the gap 100%, but it shouldn't take much time compared to the results. I think I can find some users willing to help me with the technical part, if you just want to provide a simple supervision.

Would you support the idea? I was thinking of proposing a grant, but we could start by targeting a group of users on one local Wikipedia whose language I speak. Like the German one, as a test. My data so far with itWiki users are not complete; I need a full-scale test before proposing a grant for all the platforms, so that I know how to refine it. I don't want to waste WMF money... Plus, the Italian Wikipedia is mainly focused on Italian territory, where the copyright laws are a mess. German-speaking countries show very localized activity, but in areas where there is, for example, freedom of panorama, so a message about finding local images wouldn't have to be handled with too much care.

Would you like the idea?--Alexmar983 (talk) 13:25, 19 February 2016 (UTC)[reply]

Hi Alexmar983! Obviously, I like the idea to recruit people to help fill in images on Wikidata (presumably with WD-FIST?). I am a little cautious about mass-spamming, though; maybe a note on the local "village pump" would be less aggressive? Also, be aware that Wikidata currently adds images at ~5x the rate of any other Wikimedia project (see my blog entry), and it has surpassed all but en.wp in total number of items/articles with images. --Magnus Manske (talk) 14:19, 19 February 2016 (UTC)[reply]
I have to study WD-FIST, I don't know anything about it. I am leaving for a long flight, so we'll talk again in a few weeks.
In any case, I believe the rate is huge, but I cannot estimate how many months it will take to reach an operational equilibrium. For countries like Italy it's probably a lot; we are talking about years. In their innocence, newbies don't lie :D On the bright side, they don't lie when they say they like your tool and would use it if it were complete; that's why I want to improve the "coverage" of items with images.
Just remember that the local village pump is, in theory, the one on Wikidata. We are using the wikimetrics from DEwiki, but we are not leaving a message on DEwiki. But if you prefer, we can start with the DEwiki village pump, then inform the Wikidata village pump as a second step. I was hoping to find more de-N interest on Wikidata so they could inform DEwiki later, because my German is good but not as great as my French or English.
Also, the main target are autopatrolled-like users, so it is not a big "mass". We are talking about a subset of users with a minimum activity on Wikidata who are also editors on dewiki, AND recently active, so it is not a big number. It is spam, but very targeted spam. There are at least two cases of "mass" yet targeted messaging I've co-organized on itWikipedia, and there was no problem: one was about the new CX tool and the other about a DRDI information campaign. AP users are traditionally very selective: 10-33% of them are interested and grateful, the rest are not interested and skip the message, but they don't feel harassed. I don't expect DEwiki users to act differently, especially when contacted on a different platform. There were even bigger messaging campaigns about the WMF election on itWiki, and they were not criticized. Recent targeted outreach has been performed by email from FRwiki, and I don't remember any complaints... I think past experience shows us that spreading useful information is welcome; a simple discussion is probably enough to take care of all the suggestions. If you want it to be on DEwiki, ok. I preferred Wikidata so I could write in English, where I master a more technical jargon, but it will just take me 1-2 more days to refine the presentation message.--Alexmar983 (talk) 12:52, 20 February 2016 (UTC)[reply]

I have also started to promote WD-FIST; I show it to all users who have started to show interest in the P18 property. It seems ok. I've contacted a user who is an expert in wikimetrics by email; we'll crunch some numbers in the following days. Maybe, if you prefer, we could target users active on Commons but not on Wikidata. This would also boost the speed of image uploads.--Alexmar983 (talk) 01:20, 22 February 2016 (UTC)[reply]

wdfist improvement[edit]

Hi, it's me again. I was playing with the wdfist tool, and there is one "option" I cannot find or set up after a few minutes of searching, so it is possible that some users with less cross-wiki experience than me will face this problem as well. I am preparing a new message for another two local users and thought I could introduce them to this tool as well, so I wonder if it can be improved. Maybe just a simple shortcut.

So, what "option" are you talking about?

Now, my issue is that the tool is good and very handy, but the suggestions need to be skimmed. That is, there are too many football-related items, for example. Very local users specialized in describing their own territory don't need them, IMHO. Is it possible in the wdfist tool to remove all Wikidata items without geocoordinates, focusing on buildings or places?--Alexmar983 (talk) 14:40, 20 February 2016 (UTC)[reply]

Both WDQ and SPARQL queries can filter out items without geocoordinates. The property is P625. --2001:630:206:FFFF:0:0:3128:B 15:57, 22 February 2016 (UTC)
The "option" is what is expressed in the second part of my comment, that is, the possibility to list only items with geocoordinates. I am sure you can get it somehow, but could you add, in the list of permalinks at the end, a specific box "show items with geocoordinates (P625)", basically the opposite of what you do with P18? It is much easier for a newbie to tick a box, IMHO. I am introducing the tool with the category option, not SPARQL, because SPARQL is too technical.--Alexmar983 (talk) 18:33, 22 February 2016 (UTC)[reply]
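For users comfortable with the SPARQL input mentioned above, a query along these lines would keep only items that have coordinates but still lack an image (a sketch for the Wikidata Query Service; the exact form WDFIST expects may differ):

```sparql
SELECT ?item ?coord WHERE {
  ?item wdt:P625 ?coord .           # only items with geocoordinates
  MINUS { ?item wdt:P18 ?image . }  # ...that do not have an image yet
}
LIMIT 500
```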

Oauth[edit]

Magnus,

I've been working for a while to help my application shed TUSC so that you no longer have to maintain it. I've proposed an OAuth consumer. Could you authorize it, deny it, let me know what other information you need, etc.?

Thanks. Magog the Ogre (talk) 21:30, 12 March 2016 (UTC)[reply]

OK, I think I approved it... --Magnus Manske (talk) 23:45, 12 March 2016 (UTC)[reply]

Open Call for Individual Engagement Grants[edit]

Greetings! The Individual Engagement Grants (IEG) program is accepting proposals until April 12th to fund new tools, research, outreach efforts, and other experiments that enhance the work of Wikimedia volunteers. Whether you need a small or large amount of funds (up to $30,000 USD), IEGs can support you and your team’s project development time in addition to project expenses such as materials, travel, and rental space.

With thanks, I JethroBT (WMF), Community Resources 15:56, 31 March 2016 (UTC)[reply]

Thank you[edit]

Thank you very much for corrections to Bruker:pmt/Bygder --83.243.236.100 16:11, 18 April 2016 (UTC)[reply]

Funny F2C result[edit]

ListeriaBot at iawiki[edit]

iawiki is maybe the first Wikipedia that has infoboxes on all pages in the article namespace and no local data in any article infobox: all information there comes from Wikidata.

The next step in automation is to have the lists populated from Wikidata. All lists are in namespace 102. [5]

ia:Appendice:Lista de statos de Brasil has Listeria templates. How can the bot be set up to run there? 91.9.107.250 10:31, 20 May 2016 (UTC)[reply]

It has been picked up automatically by now. Check the bot status if edits seem to lag behind. --Magnus Manske (talk) 12:12, 25 May 2016 (UTC)[reply]
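For reference, a minimal Listeria list definition on such a page looks roughly like this (a sketch: parameter names follow the English Template:Wikidata list, and the Q-IDs in the query are illustrative assumptions, not taken from the actual page):

```wikitext
{{Wikidata list
|sparql=SELECT ?item WHERE { ?item wdt:P31 wd:Q485258 }
|columns=item,label,P625
|sort=label
}}
<!-- ListeriaBot replaces everything between the two templates -->
{{Wikidata list end}}
```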

Observation regarding geograph2commons[edit]

I have tried to upload some files from geo-en.hlipp.de to Commons. Some worked, others did not. It appears that only those files could be successfully uploaded whose pages (not only the title!) contained no special characters (umlauts, eszett, accented letters) that are not normally used in English. For instance, I never managed to upload pictures from Baden-Württemberg or Köln, or anything with "-straße", "-café" or "-gebäude" in the title or the description. Is this perhaps the reason for the observed inconsistencies? Besides, the script, as it is, still inserts "geograph.co.uk" in the comment section even if another server is used as the source. --Schlosser67 (talk) 10:30, 10 June 2016 (UTC)[reply]

ListeriaBot at eowiki[edit]

Hi Magnus! I am administrator of Wikipedia in Esperanto. Please, ask for bot status in eowiki for ListeriaBot. Also, global bots are allowed in eowiki. Gamliel Fishkin 11:38, 22 June 2016 (UTC)[reply]

Hi, I do not have the bandwidth to ask for bot status on (potentially) 300+ languages, which I do not speak. I applied for a global bot flag, but was told ListeriaBot does not meet the criteria (only specific purposes allowed). --Magnus Manske (talk) 11:48, 22 June 2016 (UTC)[reply]
If you want, I can ask for a bot flag for ListeriaBot on the Esperanto Wikipedia (AFAIK, in many language editions of Wikipedia a request to the bureaucrats can be written in English; also, all current bureaucrats of the Esperanto Wikipedia speak German). And, as a temporary workaround only, I can give ListeriaBot an autopatrolled (autoreview) flag. Gamliel Fishkin 11:35, 26 June 2016 (UTC)

Your Bot has removed Siv-erv_logo1.png from my BNR[edit]

Hi Magnus Manske, your bot has removed "Siv-erv_logo1.png" from my user-namespace page de:Benutzer:Denalos/Software_Industrieverband_Elektronischer_Rechtsverkehr_SIV-ERV. I am a member of this "Industrieverband" and I have permission to use the logo. Please let me know what's wrong. Also, the article hasn't been moved into the article namespace yet because it is under construction. I think it's a bad idea to edit/destroy user-namespace pages without contacting the author. Best regards --Denalos (talk) 07:33, 28 June 2016 (UTC)[reply]

My bot only removes "dead" images, that is, links to images that were deleted on Wikimedia Commons. The deleted file page is here; it was deleted by Jcb. --Magnus Manske (talk) 08:36, 28 June 2016 (UTC)[reply]

ListeriaBot@cswiki[edit]

Hi, would you like a bot flag for ListeriaBot@cswiki? --Martin Urbanec (talk) 19:12, 8 July 2016 (UTC)[reply]

If cswiki wants that, sure. But I can't go through bot applications on dozens of wikis, in languages I don't speak, myself. Only so much bandwidth. --Magnus Manske (talk) 13:13, 9 July 2016 (UTC)[reply]
Granted. Thanks for your service. --Martin Urbanec (talk) 13:20, 9 July 2016 (UTC)[reply]

OAuth link on CommonsHelper[edit]

Hello,

It seems the link to OAuth on CommonsHelper has been removed; I am not sure if this is intentional or not but just wanted to make sure you were aware. MB298 (talk) 19:58, 17 August 2016 (UTC)[reply]

Thanks, fixed! --Magnus Manske (talk) 20:08, 17 August 2016 (UTC)[reply]

LdiF as Mix'n'Match[edit]

Hey Magnus, how can I add {{{}}} to Mix'n'Match? Is it enough to provide a CSV file with the titles and IDs, or something like that? Queryzo (talk) 20:47, 8 September 2016 (UTC)[reply]

Yes, just paste or upload a tab-delimited file (like, "copy Excel into Notepad") here. --Magnus Manske (talk) 19:27, 10 September 2016 (UTC)[reply]
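A minimal sketch of producing such a file programmatically (the three-column layout of external ID, name, and description is an assumption; adjust the columns to whatever your catalog needs):

```python
import csv
import io

# Hypothetical catalog entries: (external ID, name, description)
entries = [
    ("123", "Metropolis", "German silent film from 1927"),
    ("456", "Fritz Lang", "Austrian-German film director"),
]

# Mix'n'match takes plain tab-delimited text ("copy Excel into Notepad");
# the csv module with a tab delimiter does the joining reliably.
buf = io.StringIO()
writer = csv.writer(buf, delimiter="\t", lineterminator="\n")
writer.writerows(entries)
tsv = buf.getvalue()
print(tsv)
```

The resulting text can then be pasted into the import form, or saved as a .tsv file and uploaded.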

Works great! Could you delete (or hide) the first version? [6] For the second version, please remove the " (new)" suffix from the title. [7] Thanks. Queryzo (talk) 06:06, 4 October 2016 (UTC)[reply]

Done. --Magnus Manske (talk) 08:41, 4 October 2016 (UTC)[reply]

Thanks! The auto matches are really good, I can hardly find any errors. Would it be possible to automatically confirm all auto matches for Wikidata items of type film (+ subclasses)? Films that share a name get a disambiguation suffix in parentheses in the Lexikon, just as in Wikipedia. Queryzo (talk) 19:42, 5 October 2016 (UTC)[reply]

I'd rather not do that. Name matches can be very good, but the point of mix-n-match is the manual review. I sometimes make exceptions, but then e.g. for matching via external IDs. --Magnus Manske (talk) 08:57, 6 October 2016 (UTC)[reply]

ListeriaBot[edit]

Looks like ListeriaBot makes unneeded page edits: it only changes the order of rows, but doesn't add data: https://et.wikipedia.org/w/index.php?title=Kasutaja:WikedKentaur/EHAK_asulad&curid=430962&action=history --WikedKentaur (talk) 10:23, 18 September 2016 (UTC)[reply]

Mix'n'Match OFDb[edit]

Hello Magnus, I am ready for the second film catalog. Can we improve anything, e.g. use film as the type, so that all the people and novels don't confuse the auto match? Queryzo (talk) 16:46, 11 October 2016 (UTC)[reply]

"Type" currently only distinguishes between "person" and "other"; no special treatment for "film" etc. --Magnus Manske (talk) 13:50, 13 October 2016 (UTC)[reply]
Exactly! Would it be much effort to add that? Queryzo (talk) 20:31, 13 October 2016 (UTC)[reply]
Done. Specify the Q-ID of the "root element" as the type (here probably Q11424 for "film"), so that subclasses (e.g. "documentary film") are covered as well. I have done that for the LdiF IDs; auto-matches dropped from ~21% to ~17%, but hopefully with higher quality. --Magnus Manske (talk) 09:14, 14 October 2016 (UTC)[reply]
Great, thanks! Queryzo (talk) 17:07, 14 October 2016 (UTC)[reply]

The new catalog is now online: [8] However, despite specifying Q11424, quite a few non-film items have turned up. Could you take a look? Regards Queryzo (talk) 17:57, 19 October 2016 (UTC)[reply]

That is because your import contains junk types, e.g. "bestia magnifica", and not just Q11424. Did you forget to filter out tabs? --Magnus Manske (talk) 18:43, 19 October 2016 (UTC)[reply]
Update: I have fixed the types and re-run the matching. --Magnus Manske (talk) 19:25, 19 October 2016 (UTC)[reply]
Ah, I was already wondering what the cryptic types meant. I assume the tabs were inside the titles, so the type ended up as a fourth column? It would be important to know how to prevent that for the next catalog. Queryzo (talk) 19:59, 19 October 2016 (UTC)[reply]
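One way to prevent this, sketched in Python (function names are made up for illustration): collapse any tabs or newlines inside a field before joining the columns, so a title can never spill into the type column.

```python
def clean_field(text: str) -> str:
    # str.split() with no argument splits on any whitespace run,
    # so tabs and newlines inside a field become single spaces.
    return " ".join(text.split())

def make_row(entry_id: str, name: str, description: str, entry_type: str) -> str:
    # Join the cleaned fields into one tab-delimited import row.
    return "\t".join(clean_field(f) for f in (entry_id, name, description, entry_type))

# A stray tab in the title no longer pushes text into the type column:
row = make_row("42", "Der\tTitel", "ein Film", "Q11424")
print(row)
```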

Flickr2commons[edit]

Flickr2commons stopped working. Please have a look. Thanks. Materialscientist (talk) 10:37, 21 October 2016 (UTC)[reply]

Try it now. --Magnus Manske (talk) 17:05, 21 October 2016 (UTC)[reply]
Tried with two different images (same uploader though), on two different PC platforms - no luck. Materialscientist (talk) 20:56, 21 October 2016 (UTC)[reply]
Works now (something has been reset, as I was asked to re-authorize flickr2commons). Materialscientist (talk) 05:44, 22 October 2016 (UTC)[reply]

Templator formatting[edit]

Hey Magnus, could you please check out the Templator (Vorlagen-Generator) DE and the reference "Internetquelle"? The column in the middle where all the data goes is really small; it's much larger in the EN version. Is it possible to fix this? Also, could you please have a look at the change request over here? A space is missing between the parameter and the |. Thanks in advance, it's a great tool, I use it daily. --Abu-Dun (talk) 08:48, 15 November 2016 (UTC)[reply]

Oh, and would it be possible to link to weblinks in the "Explanation and example" ("Beispiel/Kommentar") column? It would be great to provide further information for some keywords like ISO 639. Right now the links aren't properly formatted, see Vorlagen-Generator: Internetquelle --Abu-Dun (talk) 08:54, 15 November 2016 (UTC)[reply]

New catalogs[edit]

Hello Magnus, two new catalogs again: 293, 294. Please correct the types here again (293 = "person", 294 = Q11424). Also, for 294 I would after all prefer the title "Filmportal Film-IDs". Thaaaanks! Queryzo (talk) 20:54, 16 November 2016 (UTC)[reply]

Done. --Magnus Manske (talk) 22:11, 16 November 2016 (UTC)[reply]
Thanks. Could you run another auto-match pass for 294? Queryzo (talk) 06:38, 17 November 2016 (UTC)[reply]
Running. --Magnus Manske (talk) 09:00, 17 November 2016 (UTC)[reply]

Fixing Mix'n'match matches on merge[edit]

Hello Magnus, could the match in Mix'n'Match be corrected automatically when an item is merged? Currently this is tedious manual work via "Sync", Quick Statements, and the search function in Mix'n'match. Queryzo (talk) 06:53, 14 December 2016 (UTC)[reply]

That was already done automatically, but only for a random part of mix'n'match at a time. I have now built something that "cleans up" once a day. --Magnus Manske (talk) 23:54, 14 December 2016 (UTC)[reply]

Feature gone[edit]

Hi Magnus Manske, I previously used the Autolist tool to quickly download the Q IDs of articles in a category, and this option seems to have gone in Petscan. Can the option to download the results be re-implemented? Thanks! Romaine (talk) 15:37, 30 December 2016 (UTC)[reply]

File:Citation screenshot Kopie.png[edit]

Hello Magnus. Happy new year. I've found an old file by you at File:Citation screenshot Kopie.png and had to tag it with GFDL-presumed. If you could clarify the copyright status of the image, that would be awesome. I'm trying to review all the files hosted here and move the appropriate ones to Commons. Best regards, —MarcoAurelio 16:37, 3 January 2017 (UTC)[reply]

Mix n' Match request[edit]

Internet wrestling database http://www.profightdb.com/atoz.html

Type "Episodes" in Mix'n'Match[edit]

Hello Magnus, yesterday I added a few more catalogs with episodes. I saw that one can sort by type; it might also make sense to introduce a type "Episodes" or similar, so they would be grouped together. There will probably be more to come. Queryzo (talk) 07:35, 17 April 2017 (UTC)[reply]

And that seems to be incorrect. Queryzo (talk) 11:18, 22 April 2017 (UTC)[reply]

Mix'n'match - same name[edit]

Hi,

is it possible to bring back the same name feature in Mix'n'match? It was really useful and a quick way to add matches. Thanks! --Adam Harangozó (talk) 22:40, 4 May 2017 (UTC)[reply]

Would Creation candidates serve as a substitute, or is there anything specific about "same name" you are missing? --Magnus Manske (talk) 08:36, 11 May 2017 (UTC)[reply]

V&A Museum items[edit]

Can you create a catalog of items from http://collections.vam.ac.uk/ to correspond with P3929?

Done. --Magnus Manske (talk) 14:01, 13 May 2017 (UTC)[reply]

Prelinger Archives[edit]

Can you add the active film archive list to Mix n Match? http://www.prelinger.com/titles.html --MistressData (talk) 03:35, 17 May 2017 (UTC)[reply]

Done. --Magnus Manske (talk) 13:26, 18 May 2017 (UTC)[reply]

Board Game Geek Games[edit]

Could you make a catalog based on https://boardgamegeek.com/browse/boardgame to correspond with P2339? --MistressData (talk) 19:00, 17 May 2017 (UTC)[reply]

Done. --Magnus Manske (talk) 07:34, 19 May 2017 (UTC)[reply]

Arcade History[edit]

Could you make a catalog based on this site? https://www.arcade-history.com/index.php?page=database --MistressData (talk) 20:00, 22 May 2017 (UTC)[reply]

Done. --Magnus Manske (talk) 10:26, 23 May 2017 (UTC)[reply]

Mix'n'Match ČSFD[edit]

Hello, I tried to import part of movie database and found some issues:

  • Because the ČSFD has no API, I made a list from the source HTML and got 300 entries. Is it possible to import new entries into this catalog? What happens if there are duplicates?
  • Many items from this catalogue are already matched with WD entries (at least 250/300), but auto-matching gives only 150 results; for the rest, sometimes the correct candidate is offered via SPARQL, but sometimes not, even when the WD entry already has the ČSFD ID filled in. See [9] vs. [10].

JAn Dudík (talk) 07:55, 21 June 2017 (UTC)[reply]

I am trying to import the entire thing now. I believe I can have it auto-update as well. Initial import may take a day or so. --Magnus Manske (talk) 08:35, 21 June 2017 (UTC)[reply]
OK, 71587 imported. I didn't get any more (unique) hits from the website, though it seems there are more. --Magnus Manske (talk) 07:24, 22 June 2017 (UTC)
Great! There is another big set: people, https://www.csfd.cz/tvurce/%id (P2605), example (WD). Can you import this too? JAn Dudík (talk) 07:37, 22 June 2017 (UTC)
Is there a list or search function for people on that site, like there is for movies? I can't see one, and I don't speak cs. --Magnus Manske (talk) 07:48, 22 June 2017 (UTC)
https://www.csfd.cz/podrobne-vyhledavani/tvurci/ JAn Dudík (talk) 19:02, 22 June 2017 (UTC)[reply]

Mix'n'match importing[edit]

I've been playing around with the Mix'n'match tool (which is amazing, thank you!) and want to try importing some catalogs. Is there a way to import catalogs which do not have unique numerical identifiers? In particular I'm thinking of some internet and print reference works, like The Encyclopedia of Science Fiction or the Encyclopedia of Library History. Thanks. Gamaliel (talk) 23:31, 29 June 2017 (UTC)[reply]

Mix n match aranea[edit]

Hi, can you make an araneae.unibe.ch (P3594) Mix n match catalog? Thanks

Done. --Magnus Manske (talk) 07:00, 21 September 2017 (UTC)[reply]

Mix'n'match TGN URLs?[edit]

Hello! I don't know if you respond to pings on other pages, but could the URL which entries in catalogue 49 (TGN IDs) use be changed to "http://www.getty.edu/vow/TGNFullDisplay?find=&place=&nation=&subjectid=$1"? This URL, as opposed to the one ("http://vocab.getty.edu/tgn/$1") which is used at the moment, actually displays synonyms for the given place name (indicating its level of administrative division) and the appropriate position in the geographic hierarchy of the place. To find these with the existing URL requires two more clicks to get to the URL I'm suggesting, which slows down identification of the place. Mahir256 (talk) 14:10, 22 September 2017 (UTC)[reply]

Done. --Magnus Manske (talk) 15:09, 22 September 2017 (UTC)[reply]

Mix'n'match importer example[edit]

Hi, can you give an example of what the Mix'n'match importer should look like when it's completed? Thanks

It should tell you that your catalog has been created, with a link. Import should take <1 min, unless it's especially large (>50000 entries or so). --Magnus Manske (talk) 18:02, 13 October 2017 (UTC)[reply]
I mean an example of creating one. --Walter Klosse (talk) 19:00, 13 October 2017 (UTC)[reply]
Like this? (Gerne auch auf Deutsch!) --Magnus Manske (talk) 11:47, 14 October 2017 (UTC)[reply]
Thanks! btw.: I don't speak German well --Walter Klosse (talk) 17:46, 18 October 2017 (UTC)[reply]
What should I do when I have mistakenly uploaded only 5 items (Link)? --Walter Klosse (talk) 17:51, 18 October 2017 (UTC)[reply]
*sigh* Email me the file so I can do it myself... FIRSTNAMElastname(at)gmail com --Magnus Manske (talk) 22:27, 18 October 2017 (UTC)[reply]
Here is the link, but I will email the file too. --Walter Klosse (talk) 12:28, 21 October 2017 (UTC)[reply]
Done. --Magnus Manske (talk) 13:08, 22 October 2017 (UTC)[reply]

Catholic Encyclopedia ID[edit]

Mix'n'match says the Catholic Encyclopedia catalog does not have a Wikidata property. I guess it's because it's one of the first sets, but P3241 was created a while ago. I assume only you can update the catalog with the Wikidata property association. Could you do that? Ijon (talk) 20:01, 24 October 2017 (UTC)[reply]

That is indeed one of the reasons. The other is that it was taken from Wikisource rather than from newadvent. I have created a new catalog for this, because the original does not use the same IDs. I will try to move matches over automatically where I can, based on names, but I would like to close the old catalog eventually, even if we lose some matches, and start afresh. What do you think? --Magnus Manske (talk) 13:58, 25 October 2017 (UTC)[reply]
Sure, let's start afresh. Ijon (talk) 14:59, 25 October 2017 (UTC)[reply]
OK, I have imported the old matches into the new catalog (as many as I could). The old one is now deactivated. I am running automated matching on it now; it's as good as it gets for the moment. Syncing with Wikidata is under way. --Magnus Manske (talk) 15:32, 25 October 2017 (UTC)[reply]

Bug of ListeriaBot?[edit]

Dear Magnus,
I wanted to take a moment to post a message about ListeriaBot's activities on the French-speaking WP. I have seen here that it removed quite a lot of links to Wikidata for articles that do not exist in French yet, and, as far as I can see, there has been no change to the query definition nor to the Wikidata elements that trigger links to appear in this list (e.g. Q4612975 has been removed, but nothing about it has changed recently). Can you try to have a look to understand the root cause? Regards, Moumou82 (talk) 18:26, 4 January 2018 (UTC)[reply]

The query on that page gets the first 1000 Wikidata items related to Tunisia (country:Tunisia) with no French page associated. But the first 1000 of what? The order is not defined in any way, and is made up by the query service. So if the order changes, you get a different 1000 items. I suspect that the latest items with country:Tunisia are put first; for example, this edit happened just hours before the update.
This query should be a bit more stable, as it defines an order (by item). It also gets you an actual 1000 items through the use of DISTINCT. --Magnus Manske (talk) 10:45, 5 January 2018 (UTC)[reply]
Many thanks Magnus! Moumou82 (talk) 19:11, 5 January 2018 (UTC)[reply]
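The pattern Magnus describes can be sketched as follows (the Q-ID for Tunisia and the exact shape of the original list query are assumptions, since the actual query is not quoted here):

```sparql
SELECT DISTINCT ?item WHERE {
  ?item wdt:P17 wd:Q948 .      # country: Tunisia
  MINUS {                      # ...with no French Wikipedia article
    ?article schema:about ?item ;
             schema:isPartOf <https://fr.wikipedia.org/> .
  }
}
ORDER BY ?item                 # explicit order makes the first 1000 reproducible
LIMIT 1000
```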

Source code for Reasonator?[edit]

Hi! I wanted to try out the source code of Reasonator myself and see how it works. I downloaded it from here: https://bitbucket.org/magnusmanske/reasonator/downloads/ But I get this message: "Warning: require_once(/data/project/reasonator/public_html/php/common.php): failed to open stream: No such file or directory in /srv/disk1/2462897/www/name_of_domain.com/magnusmanske-reasonator-5dfc86796e78/public_html/index.php on line 3". It appears that I must have a file called common.php, but it doesn't exist in your source code. What should I do? /user:bollkalle_55 2018-04-22 23:19 (Swedish time)

You'll need some more code from here. --Magnus Manske (talk) 08:23, 23 April 2018 (UTC)[reply]

GNIS Catalog Property Corrections from P804 to P590 - oops[edit]

Please correct the property for the following GNIS catalogs in Mix'n'Match; I accidentally swapped the properties in my head when filling out the fields. The first import was correct (GNIS Lakes), but I got it wrong for the remainder. I didn't want to import any more until this is corrected. I'm being more careful now.

  1. https://tools.wmflabs.org/mix-n-match/#/catalog/1127
  2. https://tools.wmflabs.org/mix-n-match/#/catalog/1128
  3. https://tools.wmflabs.org/mix-n-match/#/catalog/1129
  4. https://tools.wmflabs.org/mix-n-match/#/catalog/1130
  5. https://tools.wmflabs.org/mix-n-match/#/catalog/1134
  6. https://tools.wmflabs.org/mix-n-match/#/catalog/1135
  7. https://tools.wmflabs.org/mix-n-match/#/catalog/1136
  8. https://tools.wmflabs.org/mix-n-match/#/catalog/1137
I changed all of these properties to P590. Also did 1126, as it was using P804 too, and was created by you. Let me know if I should change it back. --Magnus Manske (talk) 07:45, 2 May 2018 (UTC)[reply]
Thanks for catching 1126. I guess the last task to clean this up is correcting the manual and auto-matches that occurred before the property correction and still use P804 instead of P590. If you have a script to tackle this, cool; otherwise I'll handle it. --Wolfgang8741 (talk) 01:58, 3 May 2018 (UTC)[reply]
I think I've cleaned up the previous auto-matches and manual matches. There weren't too many to do. Thanks. --Wolfgang8741 (talk) 03:45, 3 May 2018 (UTC)[reply]

LoC relators Mix'N'Match catalogue (1277) - ID correction?[edit]

I created the LoC relators catalogue for the P4801 property (my first catalogue), but made a mistake by providing just the relator ID for each entry (e.g. "abc") instead of the fuller path necessary for the LoC vocabularies ("relators/abc").

Would it be possible to update the catalogue to prepend "relators/" to all of the IDs? Dan scott (talk) 19:05, 5 June 2018 (UTC)[reply]

Done. --Magnus Manske (talk) 22:18, 5 June 2018 (UTC)[reply]

Artikel "Günter Jendrich"[edit]

Dear Mr. Manske, JCB removed the photo "Günter Jendrich - Thorsten Jendrich.jpg" from the article "Günter Jendrich" on the German Wikipedia, arguing that no permission for publishing the image was on file. But that is not the case! The photographer and owner of the image, Mr. Thorsten Jendrich (he is the son of Günter Jendrich and took the photo at a private celebration in 1968), sent permission for publication under the Wikipedia terms to both permission-de@wikimedia.org and permission-commons@wikimedia.org. I therefore ask that the image be restored to the article. Best regards, Josefblau

Used at Logo_suggestions#2,_3,_4, any comment for Meta:Requests for deletion, please?--Jusjih (talk) 21:04, 4 August 2018 (UTC)[reply]

Dear Magnus, please look there: https://de.wikipedia.org/wiki/Benutzer_Diskussion:Magnus_Manske#Benutzer:CommonsDelinker Yours sincerely --Sockenschütze (talk) 19:20, 22 August 2018 (UTC)[reply]

Import Mix'n'match catalogues[edit]

Can you help me to import Mix'n'match catalogues? Ilham151096 (talk) 21:55, 4 September 2018 (UTC)[reply]

I can try. What specifically? --Magnus Manske (talk) 09:20, 5 September 2018 (UTC)[reply]

Search from Mix'n'match often not working[edit]

Hi Magnus, using the visual tool, often the wikidata search is not working and gets stuck at "Searching...". Here are some example queries that did not work:

Chang and Eng

John Kasich

L H Myers

Haakon II Sigurdsson

Thomas Hartman


FS100 (talk) 12:25, 19 September 2018 (UTC)[reply]

I tried "John Kasich", which works for me. Mix'n'match as a whole sometimes gets stuck as a result of catalog-combining searches (not from the visual tool), and because the MySQL version on Toolforge is too old to support per-query runtime limits. I was told this will be fixed at some point. --Magnus Manske (talk) 21:29, 19 September 2018 (UTC)[reply]

Mix'n'Match catalog request[edit]

Hey Magnus! How are you? I hope everything is fine; Brazilian Wikimedians send regards :)

Could you please create a catalog for Mix'n'Match on d:Property:P5950? I might have another similar request pretty shortly (d:Wikidata:Property proposal/Museu Nacional ID) Thanks much! --Joalpe (talk) 21:59, 5 October 2018 (UTC)[reply]

Now building here. --Magnus Manske (talk) 13:46, 6 October 2018 (UTC)[reply]

New Mix'n'Match scraper fix?[edit]

Hi Magnus, I already asked this on Twitter but then figured here was probably a better place. I created a new scraper but messed up the URL part when I saved it (basically put "/(.+?).html" instead of "/$1.html"). I corrected it and saved again (with the new catalog ID), but the links still all go to "/(.+?).html". Like, literally. Can it be fixed? Or at least completely deleted so I can start over and do it right this time? Thanks! ---LadiesMakingComics (talk) 21:05, 22 October 2018 (UTC)[reply]

Oh boy, I couldn't leave well enough alone, could I? So I also made another new catalog and this time I messed up the RegEx to swap the names around, so most were getting pulled in as "Lastname, Firstname". While not ideal, I wasn't going to try to fix it until I realized that it might be impeding automatic matches (and with 32k entries, that was a problem). So I added a "new" scraper just to fix that issue. I figured that if nothing else, it would apply to entries added to the site later. But what it actually did was pull all the names again. HELP!!!! --LadiesMakingComics (talk) 20:15, 23 October 2018 (UTC)[reply]

British Museum Thesaurus not working[edit]

Hey, the [11] catalog's links seem dead because they changed the link structure. Unfortunately I don't know how, could you repair the scraper and the wikidata property? Thank you! --Adam Harangozó (talk) 12:32, 29 October 2018 (UTC)[reply]

Also, is it possible to edit existing scrapers to add catalog type where it's set as unknown? (The catalog editor option doesn't let me change it.) --Adam Harangozó (talk) 15:28, 30 October 2018 (UTC)[reply]
I also brought this up on the talk page of the BMT ID property and it unfortunately goes beyond just the link structure, they also changed the ID numbers. --Lexid523 (talk) 19:53, 9 November 2018 (UTC)[reply]

Emory Women Writers Resource Project[edit]

Hi Magnus, any chance you can make a scraper of authors from the Emory Women Writers Resource Project? Whenever you have the time. Thanks!! --Lexid523 (talk) 15:37, 5 November 2018 (UTC)[reply]

Done. --Magnus Manske (talk) 16:15, 5 November 2018 (UTC)[reply]
Great! Thanks again! --Lexid523 (talk) 16:57, 5 November 2018 (UTC)[reply]

Orlando Project[edit]

So, now that that's taken care of, there's another women writers' project database I'd like to import, but its set-up is pretty complicated (to me at least) and I have no hope of figuring it out on my own. I've gotten as far as figuring out that THIS is the link that will get you to the list of names with all the correct parameters. The links in that list all lead to similarly constructed URLs, like this:

<a href="../protected/svPeople?formname=r&subform=1&person_id=[6-LETTER AUTHOR ID]&adt_start_year=0612&crumbtrail=on&dt_end_cal=AD&dt_end_day=05&dt_end_month=11&adt_end_year=2018&dt_start_cal=BC&dts_historical=0612--+BC%3A2018-10-19&dts_lives=0612--+BC%3A2018-10-19&dts_monarchs=0612--+BC%3A2018-10-19&ls_bww=on&ls_iww=on&submit_type=A-Z">[Lastname, Firstname]</a>

So if you can work with that, or can figure out a better back door way, I would greatly appreciate it. But again, no rush! Thanks! --Lexid523 (talk) 21:16, 5 November 2018 (UTC)[reply]

Done, but queued (Toolforge is having a bad day again, just keep reloading...). FWIW, I used the regex
<a href="../protected/svPeople\?.*?person_id=([a-z]+).*?>(.+?)</a>
Note that the Mix'n'match scraper doesn't work with IDs that have trailing spaces, so some URLs, like for "aria", don't work. --Magnus Manske (talk) 09:27, 6 November 2018 (UTC)[reply]
You are the best! --Lexid523 (talk) 16:26, 6 November 2018 (UTC)[reply]
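As an aside, a pattern like the one above can be sanity-checked offline before wiring it into a scraper. A minimal sketch in Python, where the person_id "abcdef" and the displayed name are invented placeholders:

```python
import re

# The scraper regex quoted above, copied verbatim
pattern = re.compile(r'<a href="../protected/svPeople\?.*?person_id=([a-z]+).*?>(.+?)</a>')

# A simplified sample link in the shape shown earlier in this thread
sample = ('<a href="../protected/svPeople?formname=r&subform=1'
          '&person_id=abcdef&adt_start_year=0612">Lastname, Firstname</a>')

m = pattern.search(sample)
print(m.group(1), "->", m.group(2))  # abcdef -> Lastname, Firstname
```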

Mix'n'match Scraper for Filmaffinity.com[edit]

Hi Magnus! I have been trying to add a scraper for Filmaffinity.com (P480) without success. I just get an error while trying to test it (ERROR: Test has failed, for reasons unknown. Could be the server to be scraped is too slow?). These are the parameters I am using; I am hoping you can get it working (or advise me on how to fix it).

Catalog name: Filmaffinity.com
Description: Movies database.
URL: https://www.filmaffinity.com
WD property: P480
Type: Media
Primary language: en
Level 1 keys: 0-9, A...Z (note, "0-9" as is, not a range)
Level 2 range: Start 1, End 2000, Step 1 (I put 2000 because the key T goes up to 1627 (here))
URL Pattern: https://www.filmaffinity.com/en/allfilms_$1_$2.html
RegEx entry: <a href="\/en\/film(\d{6})\.html" title="(.*)">(.*)<\/a> \((\d{4})\) (I tested it on regex101.com and it is working. The output will be ID, movie name (twice) and then year)
id: $1
name: $2
desc: $4 (I am just throwing the year in the description)
url: https://www.filmaffinity.com/en/film$1.html
type: Q11424

I really have no idea what the issue is. Thanks in advance and keep up the good work!
--Cabeza2000 (talk) 23:20, 30 November 2018 (UTC)[reply]

You don't need to (and shouldn't) escape the /, and avoid .*
This works: <a href="/en/film(\d{6}).html" title="([^"]+?)">([^<]+?)</a> \((\d{4})\).
"Compress whitespace" must be checked. Eru (talk) 21:57, 2 December 2018 (UTC)[reply]
Thanks Eru, it worked. I saved it as catalog 2048, it will probably take a while to run.--Cabeza2000 (talk) 00:09, 3 December 2018 (UTC)[reply]
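For the record, the corrected pattern can be checked offline as well. A quick sketch, where the film ID, title, and year are made up for the test:

```python
import re

# Eru's corrected entry regex, as given above
pattern = re.compile(
    r'<a href="/en/film(\d{6}).html" title="([^"]+?)">([^<]+?)</a> \((\d{4})\)'
)

# Hypothetical list row in the shape of a Filmaffinity "allfilms" page
sample = '<a href="/en/film123456.html" title="Some Film">Some Film</a> (1999)'

m = pattern.search(sample)
print(m.groups())  # ('123456', 'Some Film', 'Some Film', '1999')
```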

PetScan be-tarask.wiki issue[edit]

Please have a look at the problem. Thanks! --Red Winged Duck (talk) 12:52, 11 December 2018 (UTC)[reply]

BaGLAMa 2 request[edit]

Hello Magnus, would it be possible to add the Commons categories Images from the Royal Museum of Fine Arts Antwerp and Images from the King Baudouin Foundation to the BaGLAMa 2 list? Is this also possible retroactively, starting from March 2018? Many thanks! Beireke1 (talk) 12:05, 4 January 2019 (UTC)[reply]

Both added now. I don't look at meta often; my Wikidata talk page is a better place to leave requests! --Magnus Manske (talk) 14:07, 27 March 2019 (UTC)[reply]

Wikidata todo[edit]

Wikidata Todo, specifically the duplicity function for Wikispecies, has been inaccessible for 2 days. I also can't get to your bug report page. What happened? Neferkheperre (talk) 18:30, 17 February 2019 (UTC)[reply]

@Neferkheperre: there are issues with the WMFLABS servers that have spread to Tools. Some things are being moved, and will do so gracefully, some will need intervention. Patience is required whilst WMF fixes the servers and recovery occurs.  — billinghurst sDrewth 22:54, 17 February 2019 (UTC)[reply]
Thanks. Let me know. I have been creating Wikidata pages for Wikispecies articles for a couple of months now, using the duplicity function. Neferkheperre (talk) 23:23, 17 February 2019 (UTC)[reply]

Scraper request[edit]

Hi, could you make a scraper for the periodicals in the Hungarian Electronic Periodicals Archive & Database ([12])? Thanks! - Adam Harangozó (talk) 13:49, 27 March 2019 (UTC)[reply]

Should be scraping now, here. I don't look at meta often, please leave future requests on my Wikidata talk page, thanks! --Magnus Manske (talk) 14:06, 27 March 2019 (UTC)[reply]

ERROR: Less than three coulmns on row 1[edit]

Hi Magnus, I'm trying to import a catalog but keep getting "ERROR: Less than three coulmns on row 1" when trying to upload a .csv file with postal codes. Here's a sample of what they look like.

60-0000 以下に掲載がない場合 Ikanikeisaiganaibaai, Sapporo Shi Chuo Ku, Hokkaido 60-0000
64-0941 旭ケ丘 Asahigaoka, Sapporo Shi Chuo Ku, Hokkaido 64-0941
60-0041 大通東 Odorihigashi, Sapporo Shi Chuo Ku, Hokkaido 60-0041

The fields are ID, name, description and URL (not really a URL, I just want to match localities with ZIP codes). Any idea what I could be doing wrong? NMaia (talk) 07:26, 3 April 2019 (UTC)[reply]

You say CSV file, but your example is tab-delimited, which is good. Maybe you have a header line in there? --Magnus Manske (talk) 15:15, 12 April 2019 (UTC)[reply]
I don't think so. Here's the file: https://framadrop.org/r/Omh0aqrMl2#gfPxyrQIRGvrX7evyqNSq2S3BNqTcrerp8r7rAUreGQ= NMaia (talk) 14:07, 29 April 2019 (UTC)[reply]
OK, I downloaded the file. First, it needs to be TAB-separated, not COMMA-separated. Second, no quotes. Third, there is a fourth column that seems to repeat the ID from the first one. Fourth, there are duplicate IDs (e.g. "4-0000") which doesn't work in Mix'n'match. --Magnus Manske (talk) 15:01, 29 April 2019 (UTC)[reply]
Thanks! I'm looking into ways of cleaning up this dataset. Is there anything that can be done about these IDs? They are postal codes, and, although the same, seem to refer to different places in Japan. NMaia (talk) 06:40, 30 April 2019 (UTC)[reply]
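The cleanup steps listed above can be scripted. A rough sketch under those assumptions: the sample row is shortened from the ones earlier in this thread, and simply skipping duplicate IDs is just one possible policy, not the only option:

```python
import csv
import io

def to_mixnmatch_tsv(csv_text):
    """Convert a comma-separated, quoted export to Mix'n'match TSV:
    keep only ID, name, description; drop the repeated 4th column;
    skip rows whose ID was already seen."""
    seen, out = set(), []
    for row in csv.reader(io.StringIO(csv_text)):
        entry_id, name, desc = row[0], row[1], row[2]
        if entry_id in seen:  # duplicate IDs don't work in Mix'n'match
            continue
        seen.add(entry_id)
        out.append(f"{entry_id}\t{name}\t{desc}")
    return "\n".join(out)

sample = (
    '"60-0041","Odorihigashi","Sapporo Shi Chuo Ku, Hokkaido","60-0041"\n'
    '"60-0041","Odorihigashi","Sapporo Shi Chuo Ku, Hokkaido","60-0041"'
)
print(to_mixnmatch_tsv(sample))  # a single TAB-separated line
```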

Mix'n'Match Catalog for PRELIB Collectifs[edit]

Hello, I am trying to create a new Mix'n'Match catalog to scrape the collectifs on PRELIB and to link them to https://www.wikidata.org/wiki/Property:P6696, but I am having some trouble with the method. Could you give me a hand? I tried the following:

Levels
URL Follow level: http://mshb.huma-num.fr/prelib/collectifs_list/?page=$1
regex Follow level: \d+

Scraper
URL pattern: http://mshb.huma-num.fr/prelib/collectifs_list/?page=$1
RegEx entry: <a href="/prelib/collectif/(\d+)/">(.+)</a>

Resolve
id: $1
name: $2

But when I test the scraper, the following error is displayed: ERROR: Test has failed, for reasons unknown. Could be the server to be scraped is too slow?

Thanks, Hopala! (talk) 08:02, 1 May 2019 (UTC)[reply]

I tested other parameters, which seem to work:

I used a range level, since I could not figure out how to manage a follow level.

Scraper
URL pattern: http://mshb.huma-num.fr/prelib/collectifs_list/?page=$1
Regex block: <table>(.+)</table>
RegEx entry: <a href="/prelib/collectif/(\d+)/">([^<]+?)</a>

Hopala! (talk) 10:25, 1 May 2019 (UTC)[reply]
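Those patterns can also be verified offline before saving the scraper. A small sketch against an invented table snippet (the collectif ID and name are placeholders):

```python
import re

# The working PRELIB patterns from above; re.S lets the block regex
# span multiple lines, mirroring how a real page would look
block_re = re.compile(r'<table>(.+)</table>', re.S)
entry_re = re.compile(r'<a href="/prelib/collectif/(\d+)/">([^<]+?)</a>')

# A made-up page fragment in the assumed shape of the collectifs list
page = ('<table><tr><td>'
        '<a href="/prelib/collectif/42/">Some collectif</a>'
        '</td></tr></table>')

block = block_re.search(page).group(1)
print(entry_re.findall(block))  # [('42', 'Some collectif')]
```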

mix n match help, AAAS fellows[edit]

Hi! I'm working on a project to upload all the engineering fellows (current and historic) of the American Association for the Advancement of Science into Wikidata, as they are all notable enough to have Wikipedia biographies. The current fellows listing is here, but only the "Engineering" section (Section M) lists the engineering fellows. I'm not sure how to upload them; I did make a file for them, but I'm not sure the entry ID is correct. Mix'n'match is telling me I didn't list the URL or URL code, but they don't have them. What should I do? Thank you, and thank you for the time you took making this tool! It's super cool. Sbbarker19 (talk) 17:18, 7 May 2019 (UTC)[reply]

I made a scraper for all (not just engineering) here, will take a few hours to fill/show. --Magnus Manske (talk) 12:46, 8 May 2019 (UTC)[reply]

Import large file / new version NKČR to MixNMatch[edit]

Hi Magnus, I'm preparing a larger version of the file (with a better structure) for the NKČR catalog - https://tools.wmflabs.org/mix-n-match/#/catalog/2644. I uploaded 100,000 records, which is about 10% of the whole catalog, and unfortunately with a bad "name" field. Is there a way to upload the file differently, and to remove the current data? The TSV data is at http://jiri-sedlacek.cz/nkcr_mix.tsv - thanks if it's possible. --Frettie (talk) 11:01, 26 June 2019 (UTC)[reply]

Importing now. --Magnus Manske (talk) 09:06, 27 June 2019 (UTC)[reply]
Turns out your file contains 828508 rows but only 300000 unique ones. Did you run into some limit somewhere? --Magnus Manske (talk) 09:43, 27 June 2019 (UTC)[reply]

Can't change scraper after creation on mix'n'match[edit]

Hey there, I created this scraper last night and wanted to change it afterwards because of a mistake I made with the link from the items to the database. I also wanted to add a description with a new regex. So I went here and tried to enter this:

Catalog ID: 2651

Catalog name: OpenCritic game ID

Description: Identifier for a game on OpenCritic.com

URL: https://opencritic.com

WD property: P2864

Type: Video games

Primary language: en


One Range Level: 0-100, Step 1

URL pattern: https://opencritic.com/browse/all/all-time/score/$1

RegEx entry: <a _ngcontent-(.{1,5})="" href="/game/(\d{1,5})/(.{1,100})">(.{1,100})</a></div><div _ngcontent-(.{1,5})="" class="platforms col-auto"> (.{1,50}) </div><div _ngcontent-(.{1,5})="" class="first-release-date col-auto show-year"> (.{1,50}) </div>

id: $2

name: $4

desc: Released: $8 Platform: $6

url: https://opencritic.com/game/$2/-

type: Q7889

And activated "Compress whitespace". Then I tested it, with the right result, and hit "Save Scraper/Catalog". But it doesn't seem to have updated at all. (I also tried to do it without entering anything in the first section except the Catalog ID.) Should I create a new one, or could you help me fix this one? Greetings, and thanks for all the tools --Kristbaum (talk) 10:01, 28 June 2019 (UTC)[reply]
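One way to rule out the regex itself as the problem is to test it outside the tool. A sketch against an invented row, already in "Compress whitespace" form (runs of whitespace collapsed to single spaces); the game ID, slug, platform, and year are all hypothetical:

```python
import re

# The entry regex quoted above, copied verbatim
pattern = re.compile(
    r'<a _ngcontent-(.{1,5})="" href="/game/(\d{1,5})/(.{1,100})">(.{1,100})</a>'
    r'</div><div _ngcontent-(.{1,5})="" class="platforms col-auto"> (.{1,50}) </div>'
    r'<div _ngcontent-(.{1,5})="" class="first-release-date col-auto show-year"> (.{1,50}) </div>'
)

# Hypothetical markup in the assumed shape of an OpenCritic browse page
sample = (
    '<a _ngcontent-abc="" href="/game/1234/some-game">Some Game</a></div>'
    '<div _ngcontent-abc="" class="platforms col-auto"> PC </div>'
    '<div _ngcontent-abc="" class="first-release-date col-auto show-year"> 2019 </div>'
)

m = pattern.search(sample)
# id=$2, name=$4, desc uses $6 (platform) and $8 (release date)
print(m.group(2), m.group(4), m.group(6), m.group(8))
```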

Mix 'n' Match Landsholdsdatabasen[edit]

Would you kindly import {{{}}} into Mix 'n' Match? The complete list of entries can be found here in Danish. --Trade (talk) 23:08, 7 July 2019 (UTC)[reply]

I suspect you meant https://www.wikidata.org/wiki/Property:P6109 ? Import running. --Magnus Manske (talk) 10:13, 8 July 2019 (UTC)[reply]

FIST error running an empty query?[edit]

Hi, just wanted to say I have filed issue #10 for FIST on Bitbucket:

https://bitbucket.org/magnusmanske/fist/issues/10/error-running-the-query

--CiaPan (talk) 10:16, 4 August 2019 (UTC)[reply]

Set existing Mix'n'Match scraper to auto-update?[edit]

Hi Magnus, I put this in the Manual talk a bit ago but realize that might have been fruitless. Is there any way to set an existing scraper to regularly update via automated web page scraping? I made three catalogs with (what I thought was) the exact same process but only LezWatch.TV actors is missing the line that says it will regularly update. Can I change a setting somewhere to have this one auto-update as well? Sweet kate (talk) 19:11, 18 August 2019 (UTC)[reply]

I have switched on auto-update for #2726. --Magnus Manske (talk) 08:56, 27 August 2019 (UTC)[reply]

BaGLAMa 2 request[edit]

Hi Magnus, could you perhaps add commons:Category:Media_contributed_by_Biblioteca_Nacional_Aruba to BaGLAMa 2? Thanks in advance! --BNA-PS (talk) 13:36, 30 August 2019 (UTC)[reply]

Done. --Magnus Manske (talk) 13:36, 5 September 2019 (UTC)[reply]

Baglama2 request[edit]

Hello Magnus, Netha Hussain and I are doing a project to upload all 3000 images from https://smart.servier.com to Commons, currently about 15% done, see c:Category:Media_from_SMART-Servier_Medical_Art. As we would like to track reuse and uptake, could you please add that category to the BaGLAMa 2 tool? Many thanks, --OlafJanssen (talk) 20:36, 30 September 2019 (UTC)[reply]

Done. --Magnus Manske (talk) 07:28, 1 October 2019 (UTC)[reply]
Super, many thanks!! --OlafJanssen (talk) 14:03, 1 October 2019 (UTC)[reply]

Mix'n'match FemBio, remove duplicate[edit]

I created https://tools.wmflabs.org/mix-n-match/#/catalog/2863 but it is a duplicate of https://tools.wmflabs.org/mix-n-match/#/catalog/2404 , so please remove it if you can. Thanks --PeterTheOne (talk) 16:37, 20 October 2019 (UTC)[reply]

Deactivated. --Magnus Manske (talk) 09:32, 21 October 2019 (UTC)[reply]

Add Auckland Museum Images to BaGLAMa 2[edit]

Hi Magnus, would you be able to add Auckland Museum images to BaGLAMa 2 please? We have 100,000+ images on Wikimedia Commons. The category is "Images_from_Auckland_Museum": https://commons.wikimedia.org/wiki/Category:Images_from_Auckland_Museum Best, James

Thanks for adding Magnus! Cheers, James

Deleted images from the German article and Wikimedia "Stuart Wolfe"[edit]

Hi Magnus, your name sounds German, so I'll give my native language a try.

As the maintainer of CommonsDelinker, I appeal to you to undo the removals. For all of the flagged images, all required information was submitted via the permissions wizard. This included, among other things, a personal e-mail link to the artist himself, Stuart Wolfe, as well as a cloud link to a selfie showing the artist in his studio, holding a sheet of paper that authorizes me, the article's author "Smokeonthewater", to use the images on Wikipedia and lists the flagged files. Apparently this information was ignored by your bot or by you. Among the removed items were photos that I personally took in the artist's studio. Given my long-standing status on Wikipedia, I find this simply disrespectful. I therefore ask for a constructive answer and a suggestion on how the state of the article can be restored with as little effort as possible. Including the visit to the artist's studio, I have invested about 20 hours of volunteer work, and I am very annoyed about the situation.

Best regards, --Smokeonthewater (talk)

Hello Smokeonthewater, German is indeed fine! To the matter at hand: CommonsDelinker does not delete any images; it only removes "image links" to already-deleted images, which (at the time of the removal) cannot be displayed anyway. The removal of the "image links" is, moreover, requested by Commons admins. Neither I nor my bot have anything to do with the deletion of the images on Commons, nor with issuing the removal request. The edit on de:Stuart Wolfe would of course be easy to revert, but it would then look like this, because the image is still deleted on Commons. To get the image restored, it is best to talk to the admin who deleted it, or post on one of the admin discussion pages. --Magnus Manske (talk) 08:34, 11 December 2019 (UTC)[reply]
Hello Magnus, many thanks for the explanations and for pointing me to the admin. Sorry if I misjudged your role. --Smokeonthewater (talk)

BaGLAMa 2 Request for National Gallery of Art[edit]

Hi Magnus,

Would you be able to add the Category:Images from the National Gallery of Art to BaGLAMa 2? Thanks. -- brwz (talk) 14:05, 18 December 2019 (UTC)[reply]

Now here. --Magnus Manske (talk) 15:12, 18 December 2019 (UTC)[reply]

Reset Mix'n'match catalog #3216[edit]

Hi Magnus, would you be able to reset (remove all entries from) https://tools.wmflabs.org/mix-n-match/#/list/3216/ ? I have a better regex for the name field now, but it appears that (1) I cannot remove entries from the existing catalog, and (2) I cannot create a new catalog with the same info (perhaps because the WD property is in use?) [it was because the name was taken, so now I've created a replacement catalog]. Also, (3) it appears that "Catalog editor" doesn't save any of my changes (e.g., name/title change). Am I doing something wrong? And would you be able to reset #3216 so I can get started with proper matches? Thanks for a great tool! czar 11:40, 28 December 2019 (UTC)[reply]

Also I wasn't able to convert the surnames to title case with regex tricks, if that's something you can do on your end. czar 23:21, 28 December 2019 (UTC)[reply]
  • One more :) I noticed that when I "confirm" an "Automatically matched" item, the claim is added without a reference, but if I create a "new item" (when a Q doesn't already exist), the claim is added with a reference. czar 00:30, 29 December 2019 (UTC)[reply]

Scaling WikiData[edit]

Hello, thanks for your kind message on the original grant proposal to scale WDQS on FoundationDB (to be deleted soon). I am working on a bolder proposal that aims to scale Wikidata. If you have time, please review the page at Grants:Project/Future-proof_WDQS (talk). I will make substantial improvements over the next few days, so it is worth watching. I will also move the page to change the title soon. Thanks again for your support and patience. --i⋅am⋅amz3 (talk) 23:24, 26 January 2020 (UTC)[reply]

Reset catalogue #3448[edit]

My catalogue #3448 worked within the configuration interface, but hasn't produced any results after a few days, and I cannot find a log to indicate what if anything is wrong. I've given it a fairly wide range of numbers to work through, out of which it should be finding around 17,000 entries; it's not clear to me if it is encountering too many errors and failing silently. Would you mind resetting it, please? AndrewNJ (talk) 14:24, 24 March 2020 (UTC)[reply]

ListeriaBot on Zu.Wikipedia[edit]

Hi Magnus, can you please apply for a local bot flag on zuwiki for ListeriaBot, or stop running it there? It is flooding recent changes and editing someone's userspace daily for seemingly no reason, which makes it hard to patrol; userspace patrolling is necessary due to a significant number of spambots and LTAs using this project. Thanks. Praxidicae (talk) 15:16, 23 April 2020 (UTC)[reply]

Hi Magnus, it's been several months and I've received no response and the bot is still flooding RC on this very small wiki making it very difficult to patrol. Praxidicae (talk) 15:02, 8 June 2020 (UTC)[reply]

Mix 'n' match catalog deactivation request (3521)[edit]

I made an error in the ID formatting of the catalog https://tools.wmflabs.org/mix-n-match/#/catalog/3521 and created a new corrected one, can you please deactivate 3521? --Canley (talk) 00:44, 25 April 2020 (UTC)[reply]

Done. --Magnus Manske (talk) 14:52, 4 May 2020 (UTC)[reply]

Please remove an entry in BaGLAMa2[edit]

The category

Files of the State Library of Western Australia

was added but the category name is wrong, so can this entry be deleted please. The correctly named category has been added successfully. Thanks Kerry Raymond (talk) 21:41, 3 May 2020 (UTC)[reply]

Also, another with the explicit name

Category:Files from the State Library of Western Australia

Thanks Kerry Raymond (talk) 21:47, 3 May 2020 (UTC)[reply]

Both done. --Magnus Manske (talk) 14:50, 4 May 2020 (UTC)[reply]

Guide for using GLAMorgan and BaGLAMa2?[edit]

Hello Magnus,

I hope you are doing well. Could you please help to write a guide for using GLAMorgan and BaGLAMa2? You could write it here or direct me to where I could find it if it has been written already. Thank you for your help. T CellsTalk 21:00, 25 May 2020 (UTC)[reply]

BVLarramendi ID and authority control in Wikipedia[edit]

BVLarramendi ID[edit]

Dear Magnus, I want to thank you for Mix'n'match. In August 2019 the upload of the Polymath Virtual Library catalog to Mix'n'match was completed. Everything went perfectly, and almost all Wikidata entries already have the corresponding BVLarramendi ID. However, 10 months later this link does not appear in the authority control section of Wikipedia articles (as Template:Authority control says, it should be sufficient for the entries to have the "authority control" template). Perhaps it is necessary to carry out some other step that I do not know about.

Please excuse me if this is not the place for this question, I will be very grateful if you can give me any answer or contact to anybody to consult. Many thanks.

FIST down?[edit]

https://fist.toolforge.org/fist.php throws a 404 error. Just in case you did not yet know. Would be cool if it got fixed soon^^ Regards SoWhy 19:57, 19 June 2020 (UTC)[reply]

Just go to https://fist.toolforge.org/ which redirects you to https://fist.toolforge.org/fist/fist.php --Magnus Manske (talk) 17:09, 20 June 2020 (UTC)[reply]

Tool:Sighting not working[edit]

Set of tools "sighting" not working Wargo (talk) 21:38, 12 July 2020 (UTC)[reply]

Should be back now. --Magnus Manske (talk) 13:37, 13 July 2020 (UTC)[reply]

CommonsDelinker error[edit]

Hi Magnus! It looks like your CommonsDelinker bot made a mistake here. Kind regards --Ameisenigel (talk) 20:23, 29 July 2020 (UTC)[reply]

Append records to an existing catalogue[edit]

Dear Magnus, I created a catalogue with a subset of records in CSV; now I would like to add all the records. Is that possible, or should I create a new catalogue? Thanks --Luckyz (talk) 12:04, 25 August 2020 (UTC)[reply]

Hi, that is not possible via web interface, but if you give me the data, I can do it. Just put it somewhere (pastebin, some free upload service, dropbox etc.) and let me know (best on my Wikidata user page). --Magnus Manske (talk) 12:21, 28 August 2020 (UTC)[reply]

Please have a look at T256533[edit]

Dear Magnus,

I would like to bring your attention to phab:T256533, as ListeriaBot is currently responsible for over 20% of all login requests. Please be considerate of other users and of the infrastructure. This has been reported to you in [13] already, hence I'm writing this follow-up message.

Please note the bot will be blocked if this is not fixed soon.

For more context, please see phab:T256533.

Best,
--Martin Urbanec (talk) 18:17, 28 August 2020 (UTC)[reply]

Changing range of URLs for Mix n' match scraper[edit]

Hi, I created and saved a scraper for a mammal database here. I think I got everything right, but I only worked with a small range of URLs. I had never set up a scraper before, and I had thought I'd be able to revise the URLs. Can this be fixed? Or deleted so that I can try again? Apologies and thanks in advance! Enwebb (talk) 22:16, 2 September 2020 (UTC)[reply]

I removed your scraper and wrote a bespoke import script for the CSV file one can download. More metadata this way :-) --Magnus Manske (talk) 18:21, 10 September 2020 (UTC)[reply]

IDs from Wikidata not found in Mix'n'match[edit]

Hello Magnus, is there a way to see the 36 "IDs from Wikidata not found in Mix'n'match"? Together with a good scraper, this could in some cases help identify and remove identifiers that are (or have become) faulty. --Emu (talk) 11:39, 29 September 2020 (UTC)[reply]

Which data should go in «Entry name» and «Entry description»[edit]

Hello Magnus, we have just completed Artsdatabanken takson-ID (Q8707), an identifier for a taxon in the database of the Norwegian Biodiversity Information Centre, a database on species and ecosystems in Norway. I have downloaded a CSV file from (a small kingdom of) the database as a start, to learn how to import and use Mix'n'match. But I am puzzled about which data to put into the import; the CSV file from Artsdatabanken has 32 columns. Should I use the Norwegian name as «Entry name», and the scientific name as «Entry description»? Or will the best match come from «Entry name» = scientific name and «Entry description» = Q78947 for species, and so on? I see both solutions are used in imported catalogs. If I choose the latter, how will I then import the Norwegian names? And, for the same catalog, the two other languages, «Norwegian Nynorsk» and «Northern Sami»? -- Magnefl (talk) 17:11, 18 October 2020 (UTC)[reply]

Hi Magnefl, you should use the scientific name as the entry name, and put everything else you want into the description. If you want Norwegian names etc. as aliases, give me the mapping from entries to names, and I can add them separately. --Magnus Manske (talk) 10:15, 19 October 2020 (UTC)[reply]
Thank you Magnus, it all worked very well. The Norwegian names were so few I added them manually later.
I imported as a start only a tiny kingdom from the database, the Protists. I will now move on to bigger parts, such as the Animals, Fungi and Plants. The ID (Entry ID) is unique throughout the database. Can I merge or import these new parts into the existing ADB_Protozoa and ask you to change the name to «Artsdatabanken», or should I merge all the data locally first into one tab-separated file containing all data from Artsdatabanken, and upload it all as a new import called «Artsdatabanken»? And will it be possible to have an automatic update?
There is currently no mechanism for you to extend the catalog, but if you give me the tabbed/CSV file, I can add it. I can also change the name of the catalog.
When I use the Game mode and find an item that is not applicable, I press N/A, and I can see later at https://mix-n-match.toolforge.org/#/catalog/3908 that one number has gone down in Unmatched and up in N/A. Same thing if I paste a Q-number in the field and press «Set Q»; I will find this in my user contributions. But when I find an item with no matches, I press «New Element» if everything else seems OK. I thought this would create a new element in Wikidata, but the numbers of «Fully matched» and «Unmatched» do not change even after a refresh, and I find nothing about it in my user contributions, as if nothing happened. Am I doing this incorrectly? -- Magnefl (talk) 10:47, 20 October 2020 (UTC)[reply]
Odd. I'll look into that. --Magnus Manske (talk) 13:28, 20 October 2020 (UTC)[reply]
Everything is OK now, Thank You.
Artsdatabanken catalog is working fine; could you please merge the small missing «ADB_Protozoa» into «Artsdatabanken» to make it complete? And I see a misspelling I made: I used «Q2752675» to indicate the «Entry type» for Subkingdom; the correct one is «Q2752679». Not very important, I think.
You will find a tab-separated file at https://www.dropbox.com/s/1zzjb1scvbqomyb/Artsdatabanken_Norwegian.txt?dl=0 set up as «Entry ID» tab «Norwegian name». The language is «norsk bokmål», code no or nb. If you find time to add this, it would be helpful for me. If I had thought of it earlier, or if I import another catalog, I would use the «Entry description» field for something like «description[comma + space]Norwegian name» and then move the name into the element on Wikidata later. -- Magnefl (talk) 09:06, 22 October 2020 (UTC)[reply]
@Magnus Manske: Hello Magnus, can you please remove the catalogs «ADB_Protozoa» and «Artsdatabanken», and allow me to upload this data again as new, with new catalog names if necessary? As it is now, it is too easy to guess wrong and make mistakes in matching, because the same scientific name can occur in Animals, Plants, Bacteria and Viruses. Also, I would like to have a still short but better description to avoid guesses and wrong matches, and because of this it will *not* be necessary for you to put in the Norwegian names later. What is already matched will have to stay as it is, I guess? -- Magnefl (talk) 10:19, 30 October 2020 (UTC)[reply]

Requests for lamantaylaraya[edit]

Hello Magnus!

I was wondering if you could help me by adding the site lamantaylaraya.org as a Mix'n'match catalog. This is a simple WordPress site with a lot of information about a music style in Mexico (in Spanish). I already have an entry in Wikidata about the site (Q100737545), but I want to refer to each article inside the site. Koffermejia (talk) 17:55, 27 October 2020 (UTC)[reply]

Listeriabot[edit]

Hello, ListeriaBot is no longer working automatically, but there are many lists which should be updated from time to time. What about running it once per week for all wikis? This would not cause problems with too many log-ins... JAn Dudík (talk) 20:14, 27 October 2020 (UTC)[reply]

Delete duplicate Mix'n'match catalogs I created[edit]

I didn't realize I wouldn't be able to delete catalogs after creating them. These two are both duplicates of an existing property. Can you please remove them both? They also contain bad data that I'd like to prevent others from matching against. Thanks! - Josh404

https://mix-n-match.toolforge.org/#/catalog/4216 and https://mix-n-match.toolforge.org/#/catalog/4233

– I am asking the same thing (or replacing the «Description» field), three posts up :) -- Magnefl (talk) 17:27, 8 March 2021 (UTC)[reply]

Did you write the software back in 2001 as a volunteer?[edit]

Out of curiosity, exploring the history of Wikipedia, we have a question: did you write the original MediaWiki software as a 'normal' volunteer, like others contributing to articles? Thanks, and no matter in what role you did it, truly awesome! JustB EU (talk) 13:40, 13 March 2021 (UTC) + family[reply]

Hi! Yes I did. I was a student, semester holidays, bored, wanting to try this newfangled PHP... Have been a "normal" volunteer ever since too :-) --Magnus Manske (talk) 08:47, 12 April 2021 (UTC)[reply]

ListeriaBot on arwiki[edit]

Hello, per arwiki bot policy, the bot flag is removed from any bot that has been inactive for 6 months, after alerting the bot owner three days in advance. So if your bot does not return to activity, the bot flag will be removed on 1 June 2021. Thank you --Alaa :)..! 06:12, 28 May 2021 (UTC)[reply]

mix-n-match website not working[edit]

Hello User:Magnus Manske, the website is loading without the search box and without options. It seems some part has been malfunctioning since July 20th, 01:00 GMT. Thanks a lot for creating the tool. -Agyaanapan (talk) 02:40, 20 July 2021 (UTC)[reply]

please delete wrong catalog[edit]

Dear Magnus, we have created a catalog that is not completely functional (ID 4720). Please delete it; we have already created a new catalog (ID 4721). --LutiV (talk) 08:13, 23 September 2021 (UTC)[reply]

Done. --Magnus Manske (talk) 15:14, 23 September 2021 (UTC)[reply]

Please delete Mix'n'match catalog 4727 (replaced by 4731)[edit]

Please delete Mix'n'match catalog 4727. The regex in the scraper I set up to create it caused a few incorrect years of birth in the descriptions. It has been replaced by catalog 4731 which fixed that and made other improvements. I was also planning to use BitBucket to report that the Google search links in Mix'n'match are not working, but someone else already did. Thanks. -- Zyxw (talk) 08:42, 29 September 2021 (UTC)[reply]

Deactivated the catalog. Looking into the Google issue. --Magnus Manske (talk) 08:48, 29 September 2021 (UTC)[reply]

Mix'n'match catalogs I created which can be deleted[edit]

Please delete Mix'n'match catalog 4754, which was replaced by 4755 (the first one used a smaller dataset and the descriptions were too long, exceeding what appears to be a 250 character limit). Please also delete Mix'n'match catalogs 4773 and 4774, both replaced by 4775 (all used a scraper with about 4000 ID numbers in the keys level, but the first two stopped scraping upon encountering a duplicate ID). Thanks again. -- Zyxw (talk) 12:36, 21 October 2021 (UTC)[reply]

Done. --Magnus Manske (talk) 13:22, 21 October 2021 (UTC)[reply]

Import of new catalog in Mix'n'match is not loading[edit]

Hello, Magnus,

I am trying to import a new catalog available here, and it is stuck indefinitely at "Test is running...".

In the Wikidata Telegram group, I saw that another user has been having a similar issue for some time.

Do you know what might be happening?

Thanks! TiagoLubiana (talk) 17:42, 10 November 2021 (UTC)[reply]

Thanks, fixed now. --Magnus Manske (talk) 09:48, 11 November 2021 (UTC)[reply]
I see this should be fixed, but I am having the same problem. Tried both from file and from URL and with several different files of different sizes, and it just hangs on "Test is running...". Thanks! Knr5 (talk) 21:50, 23 November 2021 (UTC)[reply]

WD4WP issue[edit]

Hi, I've been using the WD4WP tool you created (in order to add images to English Wikipedia), and found that around half of the time or so, it will flag articles which actually do include images. It seems that the tool currently only searches for images in the opening section of an article, and ignores any images in infoboxes, or in sections farther down the page. Would you be able to fix this, so I can focus on improving articles which are actually missing images? Apologies if this is the wrong place to ask, but I couldn't find the tool on BitBucket. Thank you, Yitzilitt (talk) 06:55, 24 November 2021 (UTC)[reply]

Hi Yitzilitt, that behaviour is actually deliberate. I used to have a tool called FIST that would scan the entire page. But it turns out there is a lot of "junk" in the list of all images on a page: every icon in some infobox, paintings by a painter as opposed to an image of the painter, etc. So for WD4WP I am using the "pageimage" property that is generated by Wikimedia/MediaWiki; it is the "official", automated definition of "relevant image of this topic on Wikipedia". I know this approach produces some false positives, but if you keep marking them appropriately in the tool, they will not show up again, so over time the list will become more relevant for all users of the tool. If you know of another automated way to get "relevant" images in Wikipedia articles, please let me know! --Magnus Manske (talk) 10:59, 30 November 2021 (UTC)[reply]
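For anyone curious, the "pageimage" property described above can be queried directly through the MediaWiki action API (`prop=pageimages`). This is a minimal sketch of how such a request could be built; the article title is an arbitrary example, and WD4WP's actual internals may differ:

```python
from urllib.parse import urlencode

# English Wikipedia's action API endpoint.
API = "https://en.wikipedia.org/w/api.php"

def pageimage_query_url(title: str) -> str:
    """Build an action-API URL asking for a page's 'official' page image
    (the same MediaWiki-generated property the reply above describes)."""
    params = {
        "action": "query",
        "prop": "pageimages",
        "piprop": "name|original",  # image file name plus original image URL
        "titles": title,
        "format": "json",
    }
    return API + "?" + urlencode(params)

# Fetching this URL (e.g. with urllib.request) returns JSON; pages whose
# entry lacks a 'pageimage' key have no official page image.
print(pageimage_query_url("Rembrandt"))
```

A tool could treat "no `pageimage` in the response" as "article likely missing an image", which matches the trade-off described in the reply: cleaner results than scanning all images on the page, at the cost of occasional false positives.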

Franziska Hartmann[edit]

Hi Magnus, your bot deleted a picture I uploaded for the page of Franziska Hartmann. I specifically asked the photographer for permission to use this picture, which he granted. Could you help me re-upload it in the correct way so it won't be deleted? Thank you

Call For Volunteer Facilitators Across Africa [edit]

The African Narrative

Dear Wikimedian,

The Africa Narrative is seeking volunteer facilitators in Egypt (two volunteers) and Senegal (two volunteers) for the AfroCreatives Wikiproject.

The AfroCreatives Wikiproject aims to support and mobilize creatives and professionals from the African creative sector, as well as enthusiasts more broadly, to contribute and enhance knowledge on Wikipedia about Africa’s creative economy. Its inaugural effort will target the film sectors of Egypt, Nigeria, Rwanda, and Senegal.

The Africa Narrative (TAN) is a non-profit that develops singular, high-impact initiatives in support of African creatives and the African cultural and creative industries with the aim to strengthen capacity in the continent’s creative economy and the global visibility of African cultural soft power.

All volunteers must have full professional proficiency in English. In Senegal, they must have native or full professional proficiency in French; in Egypt, they must have native or full professional proficiency in Arabic. Volunteer facilitators should have a vast knowledge about Wikimedia projects and experience with teaching new editors. All volunteers will be compensated.

Interested participants may sign up on the project's Meta page. You may also indicate your country and email if your Wikipedia account is not associated with any email address.

https://meta.wikimedia.org/wiki/The_Africa_Narrative/Volunteers

We would be glad if you could share this with, or recommend, an Egyptian or Senegalese Wikipedian as well.


Thank you.

Danidamiobi (talk) 10:28, 16 February 2022 (UTC)[reply]

Mix'n'match mistake when importing[edit]

Hello, is there any way to clear this catalog? https://mix-n-match.toolforge.org/#/catalog/5483

I accidentally included the descriptions in the names. I'd like to re-import with the correct names. AntisocialRyan (talk) 20:16, 29 September 2022 (UTC)[reply]

Deleted. --Magnus Manske (talk) 13:10, 18 October 2022 (UTC)[reply]
Thank you! Wd-Ryan (talk) 15:24, 18 October 2022 (UTC)[reply]

ListeriaBOT on cywiki[edit]

Hi Magnus. I've added quite a few new articles on cywiki (around 160k), all of which use your wonderful ListeriaBot, e.g. this one. Can you please automate them so that they will generate the lists? Many thanks for all your wonderful work. Kind regards... Robin - Llywelyn2000 (talk) 09:50, 31 October 2022 (UTC)[reply]

Mix'n'match delete test catalogs[edit]

Hi Magnus, I created a few test catalogs in Mix'n'match before realising that I couldn't delete them myself. Could you please delete them for me? It's catalogs 5586, 5609, 5626 and 5645. Thanks! Likevel (talk) 13:56, 6 December 2022 (UTC)[reply]

Deleted. --Magnus Manske (talk) 14:26, 30 August 2023 (UTC)[reply]

Is it possible to remove entries from a Mix'n'match catalog?[edit]

Hi Magnus, We may want to remove some entries from the catalog we created, #5647. Is this possible? Or is it better to delete the catalog and create a new one? I'd prefer not to do that seeing as some entries have already been fully matched. Thanks. Likevel (talk) 12:39, 12 December 2022 (UTC)[reply]

Would it be possible to get admin rights to clean these up, at least for the ones I submitted? Matthias (talk) 10:59, 30 August 2023 (UTC)[reply]

You are now an admin on Mix-n-match. --Magnus Manske (talk) 14:25, 30 August 2023 (UTC)[reply]

ListeriaBot not running periodically[edit]

Hi! It appears that ListeriaBot is no longer running periodically (last edits in status are from late June). Is it intended? Refreshing a list manually seems to be working. Cheers, Msz2001 (talk) 16:00, 6 September 2023 (UTC)[reply]

Listeriabot on brwiki[edit]

Hello,

Listeriabot seemed inactive on the Breton Wikipedia (I was surprised to see that it made an edit recently), and there are requests to delete the pages it created (see br:Kaozeal:Bretoned ha br:Kaozeal:Roll tud Kembre). Can you please disable the bot on brwiki? Huñvreüs (talk) 07:15, 8 November 2023 (UTC)[reply]

Since there was no reply, I blocked the bot and deleted the article it was maintaining. Huñvreüs (talk) 08:18, 15 November 2023 (UTC)[reply]

Mix'n'match wrong catalog language[edit]

Hi, could you change the language of mixnmatch:6167 to just es? I set it to es-ES which doesn't play nice with some of the URLs generated by the tooling, and I'm unable to change it. Thanks. NoInkling (talk) 05:51, 3 January 2024 (UTC)[reply]

Done. --Magnus Manske (talk) 09:03, 3 January 2024 (UTC)[reply]

Tool Translate doesn't allow me to create a language for Tamil (ta): language code ta not recognised[edit]

Kindly solve this issue with Tool Translate, filed on BitBucket: https://bitbucket.org/magnusmanske/tooltranslate/issues/29/tool-translate-doesnt-allow-me-to-create Sriveenkat (talk) 21:01, 1 February 2024 (UTC)[reply]