Community Wishlist Survey 2016/Categories/Bots and gadgets

From Meta, a Wikimedia project coordination wiki
This is an archived version of this page, as edited by Jeblad (talk | contribs) at 17:51, 21 November 2016 (→‎Abandon media viewer). It may differ significantly from the current version.


Global gadgets

  • Problem: it is very inconvenient to manage multiple forks of a script across WMF projects. It should be possible to have global gadgets that are available by default in the gadget list of every WMF wiki. This is especially useful for keeping a single central copy of popular gadgets such as HotCat, wikEd, navigation popups, hiding global notices, or WikiMiniAtlas.
  • Who would benefit: users and maintainers of popular gadgets
  • Proposed solution: mw:Gadgets 3.0 (once mw:Gadgets 2.0 is live)
  • Phabricator tickets: phab:T22153
  • Proposer: Helder 01:45, 8 November 2016 (UTC)[reply]

Community discussion

There must also be some possibility for translation. Some gadgets on Wikidata already support translation, but only admins can add new translations to the gadget code.

Suppose we have a global meta:Mediawiki:Gadget-foo.js that only Meta admins are allowed to edit, and there is a user on so.wiki who wants to use this gadget. He also writes translations for some parts of the gadget, but how can they be merged into the global gadget?

And a second problem: if the localization lives in the gadget source, there would be gadgets with 20 lines of code and 2,700 lines of translations (270 languages × 10 messages), i.e. 98% unnecessary lines (and downloaded data) for most users. There should be localization subpages (e.g. MediaWiki:Gadget-foo.js/so), but where? If on Meta, only Meta admins can edit them. If on so.wiki, local admins can edit them, but so.wikt admins cannot.

A further point: there are language-specific gadgets that are useful only for wikis in a certain language. How should those be shared and edited? JAn Dudík (talk) 06:51, 8 November 2016 (UTC)[reply]
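The localization problem described above can be illustrated with a small sketch: gadget messages kept in per-language bundles that are resolved through a fallback chain, so a partial translation (say, from so.wiki) falls back to English for missing keys. This is a minimal illustration only; the bundle shape and the resolveMessages name are hypothetical, not an existing Gadgets 2.0/3.0 API.

```javascript
// Sketch: resolve a gadget's interface messages from per-language
// bundles with a fallback chain. Bundle shape and function name are
// hypothetical, not part of any existing Gadgets API.
const bundles = {
  en: { merge: 'Merge', cancel: 'Cancel' },
  so: { merge: 'Isku dar' } // partial translation contributed on so.wiki
};

function resolveMessages(lang, fallbacks = ['en']) {
  // Start from the last fallback and overlay more specific languages,
  // so keys missing from a partial translation fall back gracefully.
  const chain = [...fallbacks].reverse().concat(lang);
  return chain.reduce(
    (msgs, code) => Object.assign(msgs, bundles[code] || {}),
    {}
  );
}

const msgs = resolveMessages('so');
// msgs.merge === 'Isku dar', msgs.cancel === 'Cancel' (fallback to en)
```

Storing each bundle on a localization subpage (e.g. MediaWiki:Gadget-foo.js/so) would keep the main gadget small, while most users would only download their own language's messages.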

Translation is covered by mw:Gadgets 2.0, so mw:Gadgets 3.0 would continue from there. Helder 11:48, 8 November 2016 (UTC)[reply]

Probably related to the Wikimodule project wish. JAn Dudík (talk) 07:02, 8 November 2016 (UTC)[reply]

In my opinion, language-specific gadgets could stay with all the others in the central repository (meta:Mediawiki), with just a prominent banner at the top warning that they might not work properly because of the language constraints.
How about letting localization subpages be edited by a user group other than just Meta admins? I don't know if that is technically possible. --Wesalius (talk) 07:12, 8 November 2016 (UTC)[reply]

In general I strongly support this idea. --Wesalius (talk) 07:12, 8 November 2016 (UTC)[reply]

This proposal would require having a Central Global Repository for Templates, Lua modules, and Gadgets. This was also proposed in last year's wishlist. Anyone interested please also see phab:T121470 for an analysis of impact, feasibility and risk, and its dependencies (in the "task graph"). --AKlapper (WMF) (talk) 09:35, 8 November 2016 (UTC)[reply]
See Also: Community Tech/Central repository for gadgets, templates and Lua modules for a few more non-technical details. Quiddity (WMF) (talk) 19:50, 8 November 2016 (UTC)[reply]

Web-based AutoWikiBrowser alternative

  • Problem:

AutoWikiBrowser is a huge success, helping people make millions upon millions of semi-automated edits and making repetitive tasks much easier to accomplish. However, it has several problems:

  • It is a desktop app, so
    • It's Windows only
    • It requires installation (including a dependency as cumbersome as the .NET Framework)
    • It requires updates
    • It just can't follow you across devices
  • It was created without localization in mind, so making its interface support multiple languages would amount to a rewrite
  • While it's written in a language that is reasonably popular, it's less popular among open source developers and overall less popular than modern web stack languages. As a result, it never had enough developers.
  • Very early on, it had cosmetic "general fixes" introduced to it, that now generate lots of unnecessary controversy, slow down page processing and drain developer time on supporting them.
  • Who would benefit: power users, potentially expanding reach to "simple mortals"
  • Proposed solution:

Create a web-based replacement for AWB. I envision a site on Tool Labs that hosts a mostly-JS AWB that does the editing. The backend would only authenticate users (OAuth), store settings and editing lists, and collect statistics. Most of the JS code could be made reusable in Node.js for use by fully-automated bot developers. Having a web app would also allow users to contribute on mobile platforms (phones and tablets), where typing is hard but clicking "Save"/"Reject" is something people can do while commuting to work, for example.
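As an illustration of the core loop such a web app would need, here is a minimal sketch of AWB-style rule application: an ordered list of find/replace rules is applied to wikitext, and the result tells the UI whether to offer a Save/Reject choice. The rule format and the applyRules name are hypothetical, not AWB's actual implementation.

```javascript
// Sketch of the core of an AWB-style semi-automated editor: apply an
// ordered list of find/replace rules to wikitext and report whether
// anything changed. Rule format is hypothetical.
const rules = [
  { find: /\[\[(\w+)\|\1\]\]/g, replace: '[[$1]]' }, // unpipe [[Foo|Foo]]
  { find: /  +/g, replace: ' ' }                     // collapse runs of spaces
];

function applyRules(wikitext, ruleList) {
  const result = ruleList.reduce(
    (text, rule) => text.replace(rule.find, rule.replace),
    wikitext
  );
  return { text: result, changed: result !== wikitext };
}

const out = applyRules('Some  [[Example|Example]] text', rules);
// out.text === 'Some [[Example]] text', out.changed === true
```

In a web app, this pure function could run unchanged in the browser for interactive review and in Node.js for fully-automated bots, which is exactly the code reuse the proposal envisions.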

  • More comments: Full disclosure: the proposer is a former AWB developer.
  • Phabricator tickets:

Community discussion

Deactivate old skins, old gadgets, old tools // Integrate the most used/useful gadgets and tools

  • Problem: I believe the WMF and the devs have difficulty managing the highly personalized interfaces. If we want to make the developers' work easier, one way is to archive, remove, or deactivate old or useless skins, gadgets, and tools. Another way is to integrate the most-used gadgets and tools by default.

There are around 70 gadgets on wp:en and on wp:fr. I don't know the vast majority of the gadgets on wp:fr (my main wiki). There are so many gadgets that new contributors simply don't want to test each one to see whether one, two, or three of them might be useful to them. I have never checked the gadgets on wp:en, wp:de, wp:es, or any other Wikipedia to see whether there are good ideas that could be deployed on my main wiki.

The archetypal example of a gadget that should be integrated as a default feature is surely the "merge" gadget on Wikidata. Without this gadget, nobody can do a merge. It is as if we needed a gadget just to rename a page on Wikipedia.

It's the same problem on Wikimedia Tool Labs: too many tools. I have never checked the majority of them, but some are very useful.

  • Who would benefit: Everyone
  • Proposed solution: As the title says: "Deactivate old skins, old gadgets, old tools // Integrate the most used/useful gadgets and tools"

The WMF could show the communities how many people use each gadget, as it does for beta features, and could encourage the communities to clean up their gadget lists.

Wikipedia should not allow contributors to use the "Cologne Blue" or "Modern" skins. They are useless skins; removing them is a simple solution.

And one day, the WMF should encourage veteran contributors to switch from Monobook to Vector. Vector has been the main interface for six years. There are two possibilities: either a new skin is developed (which is not the case), so Monobook becomes more and more obsolete; or Vector remains the main skin for a long time, so Monobook becomes more and more obsolete anyway.

Community discussion

I don't understand your argument against keeping alternative skins. Can you please clarify? Legoktm (talk) 04:02, 10 November 2016 (UTC)[reply]

I,personally, like the Monobook skin. What's wrong with that? עוד מישהו Od Mishehu 12:09, 10 November 2016 (UTC)[reply]
Alternative skins require work to maintain them: work by the WMF, and work by volunteer devs to adapt and maintain gadgets and similar tools; work to maintain help pages for the different skins, or to answer questions when a help page is not maintained; effort by entire communities to understand the different skins. And that effort is effort not invested in new features like the visual editor or Wikidata.
When your core community has not used the main skin for six years, how do you roll out features as disruptive as the visual editor, Wikidata, etc.?
To give a more specific example: when I asked the French community to change the design of the French main page a year ago (I tried to push for that change and gave up for many reasons, including this one), I remember a veteran/core contributor telling me very directly that the design was not good because it was adapted to the Vector skin, and that he disliked Vector and disagreed with anything that looked like, or was adapted to, Vector.
When the core community uses an old skin, that community fights against new designs and new features, and Wikimedian communities often fight for exactly that reason (a massive year-long conflict about Wikidata is only now, perhaps, over on wp:fr). One reason is that the new features are not visually consistent with skins like Monobook. Do you consider that normal, and should that situation be preserved indefinitely?
I don't understand what is surprising about saying that Monobook will one day no longer be maintained, just as maintenance of Windows XP is over. --Nouill (talk) 13:16, 10 November 2016 (UTC)[reply]
Regarding "could show to the communities how many people use each gadget", Special:GadgetUsage already exists? --AKlapper (WMF) (talk) 14:25, 10 November 2016 (UTC)[reply]

This proposal seems to be based on an unproven premise: that developers somehow are overtaxed in maintaining the existing skins and gadgets. If this were actually the case, the developers would likely raise the issue themselves. Anomie (talk) 15:53, 11 November 2016 (UTC)[reply]

Proof:


I am also enthusiastically in the Monobook camp, and I'm probably also using old tools that blissfully still work. If it's not broken, don't deliberately destroy it? Samsara (talk) 20:49, 12 November 2016 (UTC)[reply]

certainly agree on that one--I cannot see how it would interfere with anyone .DGG (talk) 08:47, 20 November 2016 (UTC)[reply]

Cat-a-lot improvement

  • Problem: unfortunately, Gadget-Cat-a-lot can't remove files from a specified category while you are viewing any page other than that category's page ("Remove from this category" only works on the category page you are currently viewing).
  • Who would benefit: Commons users trying to sort out files on a search results page; some overcrowded categories would benefit too.
  • Proposed solution: add some new functionality to the gadget, e.g.: when a user inputs a category name (into the gadget's only text input field), a new functional hyperlink (e.g. "Remove from the specified category") should appear on the gadget's panel.
  • Phabricator tickets:

Community discussion

VisualFileChange category processing improvement

  • Problem: VisualFileChange can load files from a specified category, but it doesn't have any category-related functions. To add, change, or delete categories for the loaded files, you need to use the "append any text" and "custom replace" functions and play with text patterns.
  • Who would benefit: Commons users trying to sort out overcrowded categories.
  • Proposed solution: add some new "action" functions to the gadget ("Add/change/delete category").
  • Phabricator tickets:

Community discussion

Abandon media viewer

  • Problem: It's slow to load and offers no real benefit.
  • Who would benefit: Everyone.
  • Proposed solution: Switch it off by default for all users who have not actively opted in, including those not logged in or without an account. Require no cookie to do this.
  • Phabricator tickets:

Community discussion

  • Support it – disable this gadget, because it is very irritating when contributors want to open an image on mobile devices. -- Satnam S Virdi (talk) 04:59, 15 November 2016 (UTC)[reply]
  • Media viewer is a very valuable addition and there is absolutely no reason to ask it to be switched off for everyone. Kruusamägi (talk) 17:02, 16 November 2016 (UTC)[reply]
  • I'd like to know if readers (as opposed to users or non-logged in editors) have a preference. Jo-Jo Eumerus (talk, contributions) 15:48, 17 November 2016 (UTC)[reply]
  • If you see Media Viewer taking a long time to load and have some free time, consider filing a speed report. --Tgr (WMF) (talk) 04:28, 18 November 2016 (UTC)[reply]
    • There are other problems with Media Viewer, such as actively hiding, i.e. not displaying, a great deal of the information, such as the file description and information on when the image was taken or whether the image has been retouched. What little information it does display, it tries to all cram on one line, rather than putting each thing on a separate line with a lead column that identifies the piece of information displayed (e.g. "Description:", "Date:" - and no, it doesn't currently display either of these!) For me, there hasn't been a single time that the information displayed by MV was sufficient for what I wanted to find out. Samsara (talk) 05:16, 18 November 2016 (UTC)[reply]
      • Shouldn't the proposal then be to get the currently missing information displayed in media viewer and increase the speed of it? [i.e. fix the problems with media viewer] Not to mention, that this would align lot more with the idea behind this survey [i.e. the question "what should the Community Tech team work on"] Kruusamägi (talk) 16:20, 18 November 2016 (UTC)[reply]
        • My concern is that the speed problem may never get fixed. I believe that's been a known problem for years now. It gets brought up, they promise to fix it, nothing happens. That's what it looks like to me - and my best explanation is that the problem is not fixable. MV needs a certain amount of JavaScript, and JavaScript is just that slow. I don't know what the design goals for MV were, but if it's just to scale the image to the size of the window, there's some really short and simple JavaScript for that and that's clearly not what they're using. Samsara (talk) 18:04, 18 November 2016 (UTC)[reply]

Quarry maintenance

  • Problem: For many months there has been an unsolved problem with running queries via Quarry. The engine slows down because of queries that appear to be running (though in fact they are not) or that stay queued for weeks or even months without being marked as killed.
  • Who would benefit: All Quarry users
  • Proposed solution: Until the underlying problem is solved, a bot should run once a week that kills queries which have been "running" for more than 30 minutes and restarts queued queries.
  • More comments: It's annoying when a query that under normal conditions runs for 20 or 25 minutes ends up being killed for exceeding the 30-minute limit.
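A minimal sketch of the decision step such a watchdog bot could use, assuming it can list queries with their state and start time. The field names are hypothetical, and the actual kill/restart calls against the Quarry backend are omitted.

```javascript
// Sketch of a watchdog's triage step: given query records from the
// engine, pick which to kill (stuck "running") and which to requeue
// (queued far too long). Record shape {id, state, startedAt} is
// hypothetical; times are in seconds.
const THIRTY_MIN = 30 * 60;

function triage(queries, now) {
  const kill = [], requeue = [];
  for (const q of queries) {
    const age = now - q.startedAt;
    if (q.state === 'running' && age > THIRTY_MIN) kill.push(q.id);
    else if (q.state === 'queued' && age > THIRTY_MIN) requeue.push(q.id);
  }
  return { kill, requeue };
}

const now = 100000;
const { kill, requeue } = triage([
  { id: 1, state: 'running', startedAt: now - 3600 },  // stuck for 1 h
  { id: 2, state: 'running', startedAt: now - 600 },   // still fine
  { id: 3, state: 'queued',  startedAt: now - 86400 }  // queued for 1 day
], now);
// kill === [1], requeue === [3]
```

As the "More comments" note above points out, a fixed cutoff would also kill legitimate long-running queries, so a real watchdog would need some way to distinguish genuinely stuck queries from slow-but-live ones.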

Community discussion

Ping for @Yuvipanda -FASTILY 20:31, 13 November 2016 (UTC)[reply]

Automatic links to Internet Archive

  • Problem: Web pages disappear and we are left with broken links. Adding a permanent link is more work for editors.
  • Who would benefit: Editors that use web-based references, users that want to verify claims that use web pages as references, and users that want to learn more about a subject.
  • Proposed solution: Do the following:
  1. Automatically add a link to the corresponding page in the Internet Archive if a URL and an access-date are provided in a citation.
  2. Automatically add access-dates to citations that lack them, when an edit is saved.
  3. Automatically request archival of web pages in the Internet Archive if they are not available there.
  • Phabricator tickets:
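For step 1, a Wayback Machine link can be constructed directly from the cited URL and the access-date, since a /web/&lt;timestamp&gt;/&lt;url&gt; request resolves to the snapshot nearest that timestamp (as the discussion below observes). This is a minimal sketch; toWaybackUrl is a hypothetical helper name.

```javascript
// Sketch: build a Wayback Machine link from a citation's url and
// access-date by formatting the date as the 14-digit timestamp that
// web.archive.org/web/<timestamp>/<url> expects.
function toWaybackUrl(url, accessDate) {
  const d = new Date(accessDate);
  const pad = n => String(n).padStart(2, '0');
  const ts = d.getUTCFullYear() + pad(d.getUTCMonth() + 1) +
             pad(d.getUTCDate()) + pad(d.getUTCHours()) +
             pad(d.getUTCMinutes()) + pad(d.getUTCSeconds());
  return `https://web.archive.org/web/${ts}/${url}`;
}

const link = toWaybackUrl('http://www.example.org/page', '2015-10-10T01:02:03Z');
// link === 'https://web.archive.org/web/20151010010203/http://www.example.org/page'
```

Because the Wayback Machine redirects such a URL to the nearest available capture, the link stays useful even when no snapshot exists at exactly the access time.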

Community discussion

I have a few questions/concerns:
  • If an editor adds a URL citation & one exists in web.archive.org, presumably we could mine the latter for the most recent archived URL, the archive date (contained in the web.archive.org URL, just would need to be parsed), & set |deadurl=no. Question: would this be done real time by a script, or after the fact via a bot?
  • If a URL was not archived at web.archive.org when an editor placed a citation, would a script or bot then make a request to have it archived & then set a timer to later obtain the archived info?
  • If an existing citation without archive information was discovered by a bot, would the bot then query web.archive.org to see if the URL exists there?
  • If that URL was not archived at web.archive.org, would bot then make a request to have it archived & then set a timer to later obtain the archived info?
  • If that URL did exist at web.archive.org, what would be the criteria to select the archived URL? If it's the most recent, the check must be done to ensure that a 404 error or the like was not archived, because this frequently happens at web.archive.org. Should the bot look for a date similar to the date of the citation placement, so that the archived URL would be most similar to the one that the editor placed?
Peaceray (talk) 22:35, 12 November 2016 (UTC)[reply]
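The snapshot-selection criteria above can be sketched as pure logic: given candidate snapshots (a capture timestamp plus the HTTP status that was archived), discard error captures such as 404s and pick the capture closest in time to the citation date. The record shape is hypothetical, and for simplicity the yyyymmdd timestamps are compared numerically.

```javascript
// Sketch: choose a usable Wayback snapshot for a citation. Captures
// whose archived response was an error (e.g. a 404 page) are skipped;
// among the rest, the capture closest to the citation date wins.
// Record shape {ts, status} is hypothetical; ts is a yyyymmdd number
// compared numerically as a rough proximity measure.
function pickSnapshot(snapshots, citationTs) {
  const ok = snapshots.filter(s => s.status >= 200 && s.status < 300);
  if (ok.length === 0) return null;
  return ok.reduce((best, s) =>
    Math.abs(s.ts - citationTs) < Math.abs(best.ts - citationTs) ? s : best
  );
}

const chosen = pickSnapshot([
  { ts: 20150101, status: 200 },
  { ts: 20151009, status: 404 }, // archived error page, skipped
  { ts: 20160101, status: 200 }
], 20151010);
// chosen.ts === 20150101 (nearest non-error capture)
```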

Ideas:

Archive.org URLs sometimes stop working, such as when a domain changes hands and the new owner puts up a restrictive robots.txt file. Archive.is archives aren't subject to this. Perhaps it's a better choice or both should be used.
The archive-date field should be automatically populated. This can all be done by further enhancing the cite tool that already builds a full citation given just a URL, in most cases.
Thoughts on my and Peaceray's ideas, Aracali? --2601:643:8300:92CB:F860:72D9:4462:D3DC 22:48, 12 November 2016 (UTC)[reply]
I have no preference as to whether it should be done in real time or by a bot, but it would be nice if it were done either way. My main points are to reduce the workload of editors who provide references, and to ensure that references remain accessible permanently, regardless of what the editor does. I do not think that I should prescribe the specific implementation; I am sure that people knowledgeable about bots and Wikimedia processes would know the best or easiest way to accomplish this. If we could check for 404 errors, that would be great. Another important point is that we do not lose the information, and that's why I think it would be a great idea to automatically archive any URL used as a reference in any of the wikis. I tried to query archive.org with an arbitrary date, and it returns the last archival with a date earlier than the requested date: https://web.archive.org/web/20151010010203/http://www.ejercitodelaire.mde.es/ea/pag? produces the page archived on Oct 8, 2015: https://web.archive.org/web/20151008052325/http://www.ejercitodelaire.mde.es/ea/pag. Perhaps we should immediately request an archive and set the date for the URL one minute or one hour into the future, and hope that it is not updated and archived in between. I am fine with doing it in Iceland; perhaps we should send archival requests to both, or to other archival systems too, to make the process more robust. I would prefer not to have to use the cite tool for this, since not everybody – myself included – uses it. Thanks for your interest and your questions, Aracali (talk) 00:31, 13 November 2016 (UTC)[reply]
I confess: I did a little primary research: when the Wayback Machine is queried with a date in between two archivals, it seems to return the archival closest to the requested date. This simplifies things, since we would not have to change the time of the request. Possible method:
  • On edit save, check if the new text has any templates that have a url field. For each of them:
    • If there is an archive-url field, do nothing.
    • Otherwise:
      • Submit archival request(s). Do not wait for termination.
      • Insert an access-date (current time) if there is none, plus archive-url and archive-date fields based on the access-date. (Success-oriented, I confess.)
      • The archive-url can be created automatically for the Wayback Machine, but not for archive.is (as far as I know).
      • Profit.
  • A bot slowly crawls pages to archive other urls, not corresponding to new edits, and adds archive-url and archive-date fields.
  • Editors optionally and manually correct those links when they do not point to the right version.
Aracali (talk) 02:47, 13 November 2016 (UTC)[reply]
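The on-save step in the method above could look roughly like this, using enwiki-style cite parameter names. addArchiveFields is a hypothetical helper; the actual archival request to the archive service is omitted, and the archive-url is built success-oriented, as the method itself concedes.

```javascript
// Sketch of the on-save step: given a citation template's parameters,
// add access-date / archive-url / archive-date when they are missing.
// Parameter names follow enwiki's cite templates; the helper name and
// the plain-object representation of a template are hypothetical.
function addArchiveFields(params, nowIso) {
  if (!params.url || params['archive-url']) return params; // nothing to do
  const updated = { ...params };
  if (!updated['access-date']) updated['access-date'] = nowIso;
  // Derive the 14-digit Wayback timestamp from the ISO access-date.
  const ts = updated['access-date'].replace(/[-:TZ]/g, '').slice(0, 14);
  updated['archive-url'] = `https://web.archive.org/web/${ts}/${updated.url}`;
  updated['archive-date'] = updated['access-date'];
  return updated;
}

const cite = addArchiveFields(
  { url: 'http://www.example.org/', 'access-date': '2016-11-13T00:00:00Z' },
  '2016-11-13T00:00:00Z'
);
// cite['archive-url'] === 'https://web.archive.org/web/20161113000000/http://www.example.org/'
```

Since the Wayback Machine resolves such a URL to the capture nearest the timestamp, the success-oriented link is self-correcting once the requested archival completes.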
The bot should add archive-url, archive-date, and |deadurl=no. Peaceray (talk) 06:29, 21 November 2016 (UTC)[reply]
Excuse me if I misunderstood this proposal, but I believe this is the same proposal as last year's, Community Tech/Migrate dead external links to archives? SamanthaNguyen (talk) 03:06, 13 November 2016 (UTC)[reply]
I think that the difference is that I am proposing to make them available before they die, not after, as soon as an edit with a url is saved. Aracali (talk) 04:36, 13 November 2016 (UTC)[reply]
It surely covers #3 for the English Wikipedia, we should do the same for all the other Wikipedia sites. It would be cool if we could also do #1 and #2, making links to those archival pages available automatically too. Thanks, Aracali (talk) 02:40, 15 November 2016 (UTC)[reply]
@Aracali: The Internet Archive automatically archives URLs linked from all language Wikipedias, but for the record of those archived URLs to be documented on-wiki immediately, you still have to implement the bot, which I believe is in the long-term plan, right @NKohli (WMF):? Sadads (talk) 02:09, 16 November 2016 (UTC)[reply]
That is correct. External URLs added to articles across all ~280 Wikipedias are automatically archived by the Internet Archive. There is a bot, InternetArchiveBot, which goes through all pages, scans for dead links, and replaces them with their IA substitute. The bot is run by @Cyberpower678: and is currently only running on the English Wikipedia. There are plans to launch it on other Wikipedias in the near future. -- NKohli (WMF) (talk) 17:04, 16 November 2016 (UTC)[reply]
That's great. I usually contribute to a non-English WP, so I did not realize that this was being done, and I am looking forward to this great feature being deployed everywhere, including Wikidata and Commons. Some questions: Why do we wait for the link to die before offering a link to IA? Wouldn't it be better to add the link to IA as soon as it is available, so it points to the version of the web page that was used as a reference? Web pages do change with time, and the information that is being used as a reference could disappear in later versions. Should we consider automatically adding access dates if they are not included in the citation? Are we making sure that the version linked to in IA is the one corresponding to the access date? Thanks for your comments, Aracali (talk) 02:50, 17 November 2016 (UTC)[reply]
To answer your questions:
  1. That decision is up to the community, this option can be controlled via an on wiki configuration page IABot uses to determine how to run on the wiki.
  2. This can be done regardless of whether it was added immediately or later.
  3. IABot does this for cite templates missing access dates, or when converting to access dates. It can extrapolate the access date, if it's a plain link.
  4. This again can be controlled via the config page. On enwiki, it's set to get a snapshot as close to the access-date as possible, or whatever snapshot is already saved in IABot's DB of URLs.
Cheers.—cyberpower ChatHello! 04:35, 21 November 2016 (UTC)[reply]
  • This is something I have often thought of implementing. Rich Farmbrough 20:31 17 November 2016 (GMT).

New User Landing Page - Article Creation Workflow

  • Problem: en.Wiki receives around 850 or more new pages every 24 hours, of which up to half are not suitable for publication. New users are unaware of what kind of articles are acceptable for Wikipedia, or of the inclusion criteria, before they start creating. New Page Reviewers are too few to contain the influx of inappropriate pages and to render basic assistance to new, good-faith creators of articles.
  • Who would benefit: (cross-wiki application, core software extension). Above all, the new users whose first attempts at editing are to create a new article. It would also benefit those who maintain the processes for deleting inappropriate pages and/or offering basic help and advice to such users. Such a landing page would greatly reduce the need to answer "Why was my page deleted?" questions and to process AfD discussions. The serious backlogs at AfC and New Page Review would also be significantly reduced.
  • Proposed solution: Resume the development of Article Creation Flow, a WMF project that was begun as a solution after the WP:ACTRIAL project was denied despite an overwhelmingly large community consensus. The purpose was to provide succinct but comprehensive information before users commence creation, and to channel all new articles by first-time new users through the w:WP:Article Wizard, to be reviewed at WP:Articles for Creation or WP:Page Curation before being published in mainspace.
Article Creation Flow was to be the 'other half' of the overall Page Curation project, whose principal objectives are to:
  1. Keep the Wikipedia free of spam and paid advocacy (Foundation policy), vandalism pages, hoaxes, and copyright violations.
  2. To reduce - or even prevent - the loss of new users bitten by deletions of good faith articles with potential, but which at the time of creation do not immediately meet criteria for inclusion in mainspace.
  • Phabricator tickets:

Community discussion

While I don't disagree with the proposal, I wouldn't have titled it "New user landing page". That title might better be applied to this other proposal, over in the Miscellaneous section. Not all new accounts start by trying to create a new article - I remember reading somewhere that only around a quarter of them do. The other three-quarters could with advantage be told some basics when they register their account, as some of us tried to argue here last year Noyster (talk) 13:02, 14 November 2016 (UTC)[reply]

I thought users needed to have some edits under their belt before they could create a new page. In any case, I think this has merit: the first, say, three new pages a user creates could be directed through a confirmation page and reviewed before publication. The delay in publication has downsides for more complicated edits that involve redirects, splitting topics, or see-also hatnotes. Bcharles (talk) 16:41, 15 November 2016 (UTC)[reply]
Unfortunately, any registered user on en.Wiki can create a page directly in mainspace with their very first edit, and many do; that's why it's important to understand why the community already reached an overwhelming consensus for w:WP:ACTRIAL, which was not rolled out, in anticipation of the Foundation's promise to deliver the new Landing Page. Kudpung (talk) 16:53, 15 November 2016 (UTC)[reply]
because it violates a pillar, and was a notfix by jorm. go whine at him. i see english got its revenge with AfC. seriously, we desperately need a real on boarding process, and a cadre of welcomers to engage with newbies. i do not see any sign of a willingness to change the bitey culture. flow is necessary but not sufficient, needs part 2 - teahouse like welcomers. Slowking4 (talk) 15:09, 16 November 2016 (UTC)[reply]
en:WP:AFC is broken, has been for years, as contributions almost always languish for weeks before a volunteer gets around to even looking at them. If anything, the trend has been toward raising the bar to discourage "unqualified reviewers", which has only increased the backlog. If I'm a random anon-IP with articles waiting in WP:AFC for the last half-month, I can create an account or two right now and post them all into mainspace as my very first edits - or I can leave them to languish for another week or more. If you want volunteers to review every new article, how do you intend to recruit these volunteers? Unless that question can be addressed, I don't see a purely-technical solution here. K7L (talk) 16:58, 19 November 2016 (UTC)[reply]
Though there may be some difficulties with the transition, removing under-qualified users will greatly help the workflow: first, because it takes an inordinate amount of time to detect and correct their many errors; second, because many qualified people don't participate when they see their work being swamped in a mass of bad workmanship. Good people don't usually like to participate in a project where most of the people are doing it wrong. DGG (talk) 08:52, 20 November 2016 (UTC)[reply]

Real-Time Recent Changes App for Android

  • Problem: Reviewing changes on Android is clunky and bandwidth-intensive. It requires the desktop version of the site on mobile and multiple key presses before actual work can begin. There is no facility for offline reviewing.
  • Who would benefit: Reviewers using Mobiles
  • Proposed solution: Dedicated Android based stand-alone app for Reviewing activity on wikipedia, (RTRC App for Android)
  • More comments: presently one has to load an Android browser, open Wikipedia, log in, click on the sidebar, choose settings... before starting the work. It's too cumbersome on mobile. Low bandwidth in countries like India (where I'm from) adds to the woes, and the bandwidth issue becomes worse while travelling. I would like to be able to check a few pages when I have 10–20 minutes of spare time on my mobile, for example while travelling. I would like to be kept logged in with my most-used configuration pre-set, with offline support for storing 20–50 unreviewed changes, batch processing, and changes reviewable while offline and synced as soon as I get a good signal or go back online, etc.

Thanks.

  • Phabricator tickets:

Community discussion

  • @Anamdas: Unfortunately, no underlying problem to solve is currently described in this proposal. Also, what does "RTRC" stand for? And does your Android browser not allow storing cookies and keeping you logged in? I am also wondering why you have to click on the sidebar and choose settings every time – why would you have to change the settings within your Wikipedia every time, and which settings are those exactly? I am asking to better understand the underlying problems you are facing... --AKlapper (WMF) (talk) 19:41, 14 November 2016 (UTC)[reply]
I think it means "Real time recent changes". -- NKohli (WMF) (talk) 05:23, 15 November 2016 (UTC)[reply]
Hi, AKlapper (WMF), sorry for the late reply. Thanks, NKohli (WMF), for your help. Yes, I'm talking about a Real-Time Recent Changes tool for reviewers. The underlying problem is that, as a reviewer, I strongly feel the need for an Android-based stand-alone app for reviewing activity on Wikipedia. Regards. --Anamdas (talk) 09:28, 16 November 2016 (UTC)[reply]
It might be fair to say that "Problem = reviewing changes on Android is clunky and bandwidth intensive." Rich Farmbrough 20:35 17 November 2016 (GMT).
Thanks, Rich. Changed as advised.--Anamdas (talk) 09:33, 18 November 2016 (UTC)[reply]

Free Trial of Bots

  • Problem: Bots
  • Who would benefit: anybody who doesn't know how to use them
  • Proposed solution: Allow a trial period for people to have a go at using bots, say for 5 articles, before going through the whole permission thing
  • More comments: I'm an old fart, better at using pen and ink than computers. I can think about possible uses for bots, but am uncertain how to use them. The current rule, as I understand it, is you have to have permission to use a bot; but I don't know if I could use one if I had permission. It would be embarrassing to ask, before knowing if you can use it! I'd like to have a go before asking or promising.
  • Phabricator tickets:

Community discussion

  • @AlwynapHuw: Thanks for your proposal. Could you please edit your proposal by providing the default sections describing "Problem", "Who would benefit", and "Proposed solution" so your proposal has the same structure as others? This would allow everybody to understand better which issues you are facing, and what exactly is being proposed. When it comes to policies and permissions, I am afraid that this is up to each community to come up with social "rules" (see for example w:Wikipedia:Bot policy, Bot policy, or wikidata:Wikidata:Bots), so this proposal might be out of scope as per Community Tech#Scope. phab:T134495 and phab:T149312 might also be slightly related here. --AKlapper (WMF) (talk) 10:52, 16 November 2016 (UTC)[reply]
  • I note we have test.wikipedia.org for experimentation (and it doesn't require permissions). —TheDJ (talkcontribs) 12:26, 16 November 2016 (UTC)[reply]
  • Generally it's fine to test bots in your own user space, maybe even in Wikipedia: project space, but you need to ask for permission to run them on articles (in mainspace). I think "old farts, better at using pen and ink than computers" have a steep learning curve here. A lot of editors are curious about bots and how they work. I suggest that a "bots for beginners" demonstration of the basics, showing how a bot is set up and showing one in action (console output) would make a nice Wikimania presentation. A Community Tech member might make such a presentation. If I make it to Montreal, maybe I could show my bots in action. They run right on my Windows 7 desktop, and I recently set up my Windows 10 laptop to run them as well, if needed for backup should my desktop ever unexpectedly die. I could also demo my bot that runs AutoWikiBrowser, as that's much easier to do for a novice without programming experience. Wbm1058 (talk) 14:01, 16 November 2016 (UTC)[reply]
  • Captain Obvious here. Wouldn't it be easier to have a replica of each* wiki (aka sandbox) specifically for bot testing? I imagine it would be a slightly out-of-date version that would also re-play a dataset of new pages and edits, i.e. there would be actual simulated activity so that page patrolling etc. could be tested. And there'd be no harm done to any "real" data. *as applicable Samsara (talk) 05:00, 17 November 2016 (UTC)[reply]
  • This could be a simple change to BOTPOL on the appropriate wiki. Certainly on en:wp you could make 5 automation-assisted changes from your main account without breaking any rules. Automation assisted means that you are still approving each edit separately. Rich Farmbrough 20:38 17 November 2016 (GMT).
  • This is purely a community issue, many projects already allow for this, some specifically do not. This should be left to the communities to decide. — xaosflux Talk 12:38, 19 November 2016 (UTC)[reply]

Good Article Nominations suggestion bot

  • Problem: Lack of reviewers for GAN, large backlog
  • Who would benefit: Anyone making a GAN, and arguably Wikipedia as a whole, since the process would speed up.
  • Proposed solution: A bot would suggest GANs to a user, either based on what they usually edit, or by letting them select one or more genres and having nominations sent to them, much like RfCs.
  • Phabricator tickets:

Community discussion

Good article nominations. That's the title of this proposal. Wbm1058 (talk) 19:20, 16 November 2016 (UTC)[reply]

Bot that will help in analysing biographies on Wikipedia

  • Problem: We noticed that there is no tool that can be used for analysing or sorting biographies on Wikipedia. In some instances we need biographies broken down by gender, living/dead status, country, etc.
  • Who would benefit: Anyone interested in research on biographies in the movement
  • Proposed solution:
  • Phabricator tickets:
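Assuming biography records have already been retrieved (for example from Wikidata, which stores gender, dates of death, and country as structured claims), the sorting side of this could start with a simple grouping step. The field names below are purely illustrative:

```python
def group_biographies(bios, key):
    """Group biography records (dicts) by an attribute such as
    'gender', 'country', or living/dead status. Records missing
    the attribute are collected under 'unknown'."""
    groups = {}
    for bio in bios:
        groups.setdefault(bio.get(key, "unknown"), []).append(bio["name"])
    return groups
```

In practice the records could come from a Wikidata SPARQL query over instances of human (Q5) with the relevant properties; this sketch only covers the sorting that the proposal asks for.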

Community discussion

Misplaced talk pages

  • Problem: Some talk pages are not where they should be. This causes two major problems:
  1. Talk page of an article redirects to another article's talk page. For example, if you want to discuss the casino en:Kangwon Land, clicking "Talk" redirects you to the talk page of the en:High1 ice hockey team, and you leave your text in the wrong place if you don't double-check the title.
  2. Talk page of an article is practically inaccessible, since the corresponding article is a redirect and there is no way to reach the talk page from the page it essentially belongs to. For example, en:Talk:Battle of Coruscant has some text and history, while the corresponding page en:Battle of Coruscant redirects to en:Coruscant and there is NO link from the latter's talk page to the former.

Pages with such issues can be classified as follows to better find appropriate solution:

  1. Redirecting article with non-redirecting talkpage
    1. Caused by merging two pages each having its own talk page (en:Jumbo jet and en:Wide-body aircraft).
    2. Caused by moving an article to a page which already had talk page (en:Bosnian Genocide case to en:Bosnian genocide case).
  2. Redirecting talk page with non-redirecting article
    1. Caused by moving an article to a new title and later creating a new article over the remaining redirect while the talk page is still a redirect ("Kangwon Land" was moved to en:High1 in 2008, and in 2015 a new article was built at en:Kangwon Land).
  3. Redirecting article with talkpage redirecting to somewhere else
    1. Caused by moving an article to a new title and then retargeting the remaining redirect to another page while leaving the talk page untouched ("More to Life" was moved to en:More to Life (TV series), followed by changing the redirect to en:(There's Gotta Be) More to Life, while the talk page en:Talk:More to Life still points to en:Talk:More to Life (TV series)).
  • Who would benefit: Anyone willing to start/access discussions of those pages
  • Proposed solution: Based on our findings there is no single solution to correct all misplaced talk pages, but by breaking down the problem and making specific smaller lists at least part of the problem can be solved. We managed to further narrow down the problem, but I'm not sure here is the right place to go into more details.
  • Phabricator tickets:
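The three-way breakdown above can be expressed as a small classification routine that a bot could run over (article, talk page) pairs. This is only a sketch of the logic, not an existing tool; each argument is the page's redirect target, or None for a non-redirect:

```python
def classify_pair(article_target, talk_target):
    """Classify an (article, talk page) pair by redirect status.
    Each argument is the page's redirect target, or None if the
    page is not a redirect."""
    if article_target and talk_target is None:
        return "redirecting article, non-redirecting talk page"      # case 1
    if article_target is None and talk_target:
        return "redirecting talk page, non-redirecting article"      # case 2
    if article_target and talk_target and talk_target != "Talk:" + article_target:
        return "article and talk page redirect to different places"  # case 3
    return "consistent"
```

A bot could feed every redirect pair on a wiki through this check and emit the "specific smaller lists" the proposal mentions, one list per case.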

Community discussion

-- Wbm1058 (talk) 05:46, 18 November 2016 (UTC)[reply]

Improve CommonsHelper (Move-to-commons assistant)

  • Problem: The CommonsHelper tool by User:Magnus Manske for transferring files from Wikipedias to Commons is not working well. The file descriptions generated even for the easy cases are horrible (see here or here) and often cannot even be parsed correctly. There are tools specifically designed to clean up the mess generated by the tool, like c:User:Magog the Ogre/cleanup.js, but it would be easier to get it right in the first place. As a result, file transfer from the English Wikipedia to Commons is very slow, because fixing the uploads is so time-consuming. In the worst case, transferred file descriptions are so bad that they are tagged for deletion for lack of a license or proper source. The user notified about the pending deletion is the person or bot that transferred the file, not the uploader, and the file is deleted. Even if someone wants to fix the image, by that point the image on Wikipedia may already be deleted, so it is impossible to correct the issue without getting admins involved.
  • Who would benefit: Wikipedia and Commons users
  • Proposed solution: Start from scratch and design tools which deal with only one wiki at a time. For example, for the English Wikipedia, query for the most commonly used templates in the File namespace and come up with mappings to Commons templates. Ask the Commons community for help with designing the optimal description. Recognize cases that the tool cannot handle, and do not process them. At the end, redirect the transferring user to the newly created page and ask for verification that the description looks good. Also, ideally files would be imported with their complete edit history, so that all contributions can be correctly attributed (a copyright issue) and nothing is lost in the transfer.
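The template-mapping step described above could start from an explicit table, with unknown templates deliberately refused rather than guessed at. The mappings below are made up for illustration; a real table would be built from the most-used File-namespace templates on each wiki:

```python
# Illustrative mapping from source-wiki file templates to Commons templates.
# None marks templates whose files must not be transferred at all.
TEMPLATE_MAP = {
    "PD-self": "PD-self",
    "Cc-by-sa-4.0": "Cc-by-sa-4.0",
    "Non-free fair use": None,
}

def map_license_template(name):
    """Return the Commons equivalent of a source-wiki license template.
    None means 'do not transfer this file'. An unknown template raises,
    so the tool skips cases it cannot handle instead of guessing."""
    if name not in TEMPLATE_MAP:
        raise KeyError("no mapping for %r: leave for manual review" % name)
    return TEMPLATE_MAP[name]
```

Refusing unknown templates implements the "recognize cases that will not be handled by the tool and do not do them" part of the proposal.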

Community discussion

Jarekt, can you put a sentence or two at the top of this proposal, to explain what CommonsHelper does? It could make this proposal more compelling for people during the voting. -- DannyH (WMF) (talk) 18:35, 18 November 2016 (UTC)[reply]

Done Thanks for suggestion. --Jarekt (talk) 18:44, 18 November 2016 (UTC)[reply]

Would it be within the scope of this new tool to help organise all files on a wiki, in a drive to move everything to Commons that can be moved? I know this is being done already on many wikis, but some dedicated maintenance function might help. Even some easily accessible statistics (# files can be moved, # files can't, # we don't know about). --Magnus Manske (talk) 14:26, 19 November 2016 (UTC)[reply]

I've written and released a cross-platform tool, MTC!, which makes it easy to correctly transfer files to Commons from en.wikipedia. This already solves most, if not all, of the inadequacies of the current CommonsHelper. -FASTILY 23:05, 20 November 2016 (UTC)[reply]

Identify high-use gadgets and ensure that they have proper long-term maintenance

  • Problem: Many gadgets start small, often as a proof of concept, but once they reach a certain level of popularity and use they need to be revisited and possibly rewritten with long-term maintenance and stability in mind. Some popular gadgets are written by people who are no longer active, or who are more focused on other things than maintaining old gadgets, and if no one steps up to maintain them, problems and incompatibilities accumulate. Some gadgets are copied to other projects and the multiple versions diverge. Many users rely heavily on tools and gadgets to do their tasks on Wikimedia projects more efficiently, and keeping the tools running is important to keep those users happy and busy.
  • Who would benefit: Users of gadgets
  • Proposed solution: Identify high-use gadgets on different projects and ensure that there is a long-term plan to maintain them. That might require rewriting the code into a more maintainable version, or converting it to an extension. It would also require a link to Phabricator pages where one can report errors and issues. The current approach on Commons, protected edit requests on a gadget's JavaScript talk page, seems quite dangerous.
For example, on Commons I suspect we should look more closely at HotCat, Cat-a-lot, VisualFileChange and ImageAnnotator. I am sure the Wikipedias have other popular gadgets.

Community discussion

I was imagining more of a process where the technical community and/or the WMF takes a more active role in tracking and ensuring that high-use gadgets stay up to date and are properly maintained. We can call it the "too big to fail" list :) . I did not know about Special:GadgetUsage, and it seems like a good way to identify candidates. A community voting process might also be used. Selected gadgets might need to be rewritten as extensions or placed in a central global repository. The wider community would monitor them and assist with related issues. --Jarekt (talk) 19:20, 18 November 2016 (UTC)[reply]

Auto-suggest Templates Tool

  • Problem:

While editing, editors need templates. But newbies, and existing editors with less technical knowledge, cannot make full use of their editing because they are not aware of the templates Wikipedia offers.

  • Who would benefit:

The main beneficiaries will of course be editors, especially those who are new to Wikipedia editing and, as mentioned above, those with less technical knowledge.

  • Proposed solution:

One solution would be to develop a tool that auto-suggests matching templates whenever a user types '{{', and also lists each template's parameters and a short description (what it is used for, etc.).

  • More comments:

Since it is difficult to show everything at once (parameters, description), the details could be shown in a horizontal dropdown on mouse hover.

  • Phabricator tickets:
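The suggestion step itself is a straightforward prefix match against the wiki's template names; descriptions and parameter lists could then be pulled from TemplateData where a template documents them. A minimal sketch of the matching logic, with illustrative names:

```python
def suggest_templates(typed, known_templates, limit=10):
    """Return up to `limit` template names matching what the user
    has typed after '{{', case-insensitively, in alphabetical order."""
    prefix = typed.lower()
    matches = sorted(t for t in known_templates if t.lower().startswith(prefix))
    return matches[:limit]
```

A real gadget would fetch `known_templates` from the wiki and render the matches, with their TemplateData descriptions, in the dropdown described above.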

Community discussion

Find and Replace across wiki pages/projects

  • Problem: In our Tamil wiki projects we have a lot of red links, because references are copied from the English (or other-language) Wikipedia into our wiki. These red links look very bad in our articles. Our policy is that all articles are in the local language, with no redirects from other languages. Similarly, people often misspell certain words. We need find-and-replace across our wiki pages. The same tool would help all other wikis fix typos and repeated misspellings :-)
  • Who would benefit: Anyone who does cleanup work.
  • Proposed solution: I have created a tool that finds and replaces from a list of items. However, I have to copy and paste each article into it, analyze it, and replace manually. I need a tool which can do this for all wiki articles, ideally showing me the page and context for confirmation before replacing. I am looking for something like the "Find in Files" and replace feature available in Notepad++. There might be other tools available that I am not aware of.
  • Phabricator tickets:
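The batch part of this could be built on an ordered rule list applied page by page, with a human confirming each diff before it is saved (Pywikibot's replace.py script already works roughly this way). The core replacement step is simple:

```python
import re

def apply_rules(wikitext, rules):
    """Apply an ordered list of (pattern, replacement) regex rules
    to one page's wikitext and return the result."""
    for pattern, replacement in rules:
        wikitext = re.sub(pattern, replacement, wikitext)
    return wikitext
```

A driving loop would fetch each page from the wiki, call apply_rules, show the diff for confirmation, and save only approved changes.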

Community discussion

@Dineshkumar Ponnusamy: There is a simple "find and replace" built in to both the wikitext toolbar (see screenshot from Tawiki), and in the visual editor options menu (see screenshot from Tawiki).

For a more powerful set of features, the w:en:Wikipedia:AutoWikiBrowser software (Tamil page, which I cannot read and which might not be up to date: w:ta:விக்கிப்பீடியா:விக்கிதானுலவி) might be what you're looking for. (Note: it requires a Windows OS, and I'm not familiar with its usage.) There are some alternatives listed in the "See also" section there, which might also fill your needs.

Does that solve your proposal? If not, please could you update it, based on this new information? Otherwise it is probably not clear enough to enter the voting stage. Thanks! Quiddity (WMF) (talk) 02:00, 19 November 2016 (UTC)[reply]

@Quiddity (WMF), thanks for looking at my proposal. I am not looking for normal find-and-replace inside a page; I am looking for multiple find-and-replace operations across multiple pages in a single click. Look at this change! I want this done with the help of a tool which has a little artificial intelligence. I have created a Windows-based tool with such logic, but it processes one article at a time, similar to AWB. I am not sure whether the latest version of AWB supports this feature! Thanks again. --Dineshkumar Ponnusamy (talk) 04:25, 19 November 2016 (UTC)[reply]
@Dineshkumar Ponnusamy: It is possible with AWB, per the line at the top of the documentation page: "Although AWB does have an automatic mode enabled for some bot accounts, it normally just assists a human." However, it is strongly recommended to use the manual mode (which allows rapid editing through a list of articles) to enable error-checking and to catch any edge cases which don't fit the expected results. Hope that helps. Quiddity (WMF) (talk) 17:41, 21 November 2016 (UTC)[reply]

Need for Auto-congratulatory feature on Urdu Wikipedia

  • Problem: Urdu Wikipedia is a small community with a few dedicated contributors. We often congratulate fellow members on editing achievements manually, yet there are occasions when we forget and thus fail in a minimum courtesy. We would like to avoid that.
  • Who would benefit: Dedicated community contributors.
  • Proposed solution: We need a bot on Urdu Wikipedia which will automatically congratulate (1) Users on completion of every 500 edits & (2) Users on reaching an editing milestone of every 50,000 bytes.
  • Phabricator tickets:
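The bot's core check is just milestone arithmetic: compare a user's edit count before and after their latest activity and report any multiple of 500 that was crossed (the same logic works for the 50,000-byte milestones with a different step). A sketch:

```python
def crossed_milestone(old_count, new_count, step=500):
    """Return the highest milestone (multiple of `step`) crossed
    between two counts, or None if no milestone was crossed."""
    milestone = (new_count // step) * step
    if milestone > old_count:
        return milestone
    return None
```

A bot would poll each tracked user's edit count (or byte total), call this check, and post a congratulation message on the user's talk page when it returns a milestone.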

Community discussion

Urdu village pump discussion to this effect

Import Wikidata text for disambiguation pages

  • Problem: Curating, maintaining, and creating disambiguation pages is a massive chore. Comprehensive disambiguation pages for more popular names/topics take hours of combing through Wikipedia's search results and flipping back and forth among tabs to complete birth and death information. It feels pointless and it's tiring (anyone who's done it would probably agree), and as a result many disambiguation pages are incomplete and out of date, and therefore not serving their function. Wikidata has all the data needed to help editors maintain and update disambiguation pages more easily, and it can readily be used to autocomplete them with minimal editor interaction.
  • Who would benefit: Editors who care about completeness of disambiguation pages, and readers who use them for disambiguation (and are currently not getting that).
  • Proposed solution: Off the top of my head: a gadget that takes input from the editor on what to compile (e.g. articles that start with the name Sarah, or people with the surname Rodriguez), then pulls the article title, birth and death information (if appropriate), and a short description (e.g. Australian actress, Australian town). Fictional characters are kept separate. Geolocation data can assist with "places called Sarah" sections. The details can then be checked, cleaned up, and false positives removed by the editor before saving. An added bonus is that the editor will be able to see what's missing in Wikidata and add it there (it would feel far less pointless to fill this in on Wikidata than to repeat it endlessly for dozens and dozens of disambiguation pages).
  • More comments: There may be a better, streamlined alternative to disambiguation pages, but in the meantime since that's what we have, they should be appropriately maintained. N.b. I welcome iterations and improvements on this idea!
  • Phabricator tickets: Couldn't find any related.
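Once the items are pulled from Wikidata (label, description, dates of birth and death), formatting each entry is mechanical. A sketch of that last step; the wikitext style shown is the common enwiki disambiguation style, and the field layout is assumed:

```python
def format_dab_entry(label, description, birth=None, death=None):
    """Format one disambiguation-page line from Wikidata-style fields.
    Dates are included only when at least one of them is known."""
    dates = ""
    if birth or death:
        dates = " (%s–%s)" % (birth or "", death or "")
    return "* [[%s]]%s, %s" % (label, dates, description)
```

The editor-facing gadget would then present the generated lines for checking and pruning before anything is saved, as the proposal describes.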

Community discussion

Improve citation completion tools

  • Problem: None of the common citation completion tools (ReFill, Reflinks, etc.) seem to go very far in completing a citation, and they often leave too much manual work for the editor.
  • Who would benefit: Wikipedia editors
  • Proposed solution: Put some development resources into improving these tools. Find out where they are lacking (for example, not adding authors or dates, or not using the correct citation templates). Make these citation completion tools more, er, complete.
  • Phabricator tickets:

Community discussion

Bot to fix superfluous blank lines in templates, navboxes

Machine translated from German -- please help improve! Falsche Leerzeilen und Vorlagenformatierungen, Naviblöcke

  • Problem:

Superfluous blank lines, as well as broken renderings of navigation blocks, are often found at the end of article pages, in particular before Vorlage:Gesprochener Artikel (the "spoken article" template), succession boxes, navboxes, and categories.
  • Who would benefit:
  • Proposed solution:

A bot that removes the "wrong" (superfluous) blank lines, and [another bot(?) that] corrects faulty template formatting which leads to unsightly blank lines (such as here) or extra margins (like here) in navboxes.
  • More comments:

Note: This has correspondingly already been suggested at Wikipedia:Bots/Anfragen and Wikipedia:Verbesserungsvorschläge.
  • Phabricator tickets:
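The blank-line half of this is a small regex job; the sketch below removes blank lines directly before category links and collapses longer runs elsewhere, which covers the simplest cases described above (faulty template formatting inside navboxes would need per-template fixes instead). The category prefixes handled are only the English and German ones, for illustration:

```python
import re

def strip_extra_blank_lines(wikitext):
    """Remove superfluous blank lines at the end of articles:
    no blank lines directly before category links, and at most
    one consecutive blank line anywhere else."""
    # Drop blank lines immediately preceding a category link
    wikitext = re.sub(r"\n{2,}(\[\[(?:Category|Kategorie):)", r"\n\1", wikitext)
    # Collapse runs of two or more blank lines into a single one
    wikitext = re.sub(r"\n{3,}", "\n\n", wikitext)
    return wikitext
```

A bot built on this would still want a diff-confirmation step, since blank lines inside templates or preformatted blocks can be significant.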

Community discussion