Talk:Spam blacklist/Archives/2012-08

From Meta, a Wikimedia project coordination wiki
Warning! Please do not post any new comments on this page. This is a discussion archive first created in August 2012, although the comments contained were likely posted before and after this date. See current discussion or the archives index.

Proposed additions

This section is for completed requests that a website be blacklisted

Ledger 123



I recommended this link for the blacklist on English Wikipedia and you can see the comments HERE. I was advised to bring the addition here as it appears that the link is placed on multiple Wikipedias. Looks like a spam link to a retailer site that sells the software being referenced in the article. --Morning277 (talk) 20:44, 17 July 2012 (UTC)

I did not find any crosswiki spam; COIBot's linkreport is empty. This user (174.55.195.168) posted the link but also edited other articles where the edits seem in order; the user only edited en:wp. I do not think blacklisting is necessary at this moment. EdBever (talk) 18:45, 21 July 2012 (UTC)
Ledger123 is neither spam, nor a retailer, but first of all an open source project that is based on the most recent version of SQL-Ledger and provides many bug fixes and extensions to the program. The list of the most important extensions is found here: http://www.ledger123.com/enhanced/. The code itself is hosted on Github at: https://github.com/ledger123/ledger123/ . Additionally, Ledger123 runs a very active mailing list. -- If it's not allowed to mention related projects, I wonder why an old fork of SQL-Ledger still remains in the article, or, to take a better known example, why nobody deletes the link to LibreOffice from the OpenOffice article. -- Vaslovag (talk) 17:40, 30 July 2012 (UTC)
Declined No evidence that it is currently being actively abused. COIBot will monitor and we can keep further watch. — billinghurst sDrewth 05:58, 19 August 2012 (UTC)

slinky.me qwiki.com Citespamming

See WikiProject Spam Item



  • {{LinkSummary|qwiki.com}}
Accounts

Mass multi project en:WP:CITESPAMming --Hu12 (talk) 19:46, 23 July 2012 (UTC)

striking through qwiki as it seems to be a somewhat more accepted link
Added slinky.me only — billinghurst sDrewth 05:54, 19 August 2012 (UTC)

Google redirect spam



Specifically 'google.com/url?'

See http://en.wikipedia.org/w/index.php?title=Wikipedia_talk:External_links&oldid=456669797#Google_redirection_URLs

Explanation:

The first result reads:

[PDF]Public Law 105-298

www.copyright.gov/legislation/pl105-298.pdf

File Format: PDF/Adobe Acrobat - Quick View

PUBLIC LAW 105–298—OCT. 27, 1998. Public Law 105–298. 105th Congress. An Act. To amend the provisions of title 17, United States Code, with respect to ...

If you right-click on the bolded name of the first result (on 'Public Law 105-298'), and copy the url, you get:

  • http:// www.google.com/url?sa=t&rct=j&q=public%20law%20105-298&source=web&cd=1&ved=0CB4QFjAA&url=http%3A%2F%2Fwww.copyright.gov%2Flegislation%2Fpl105-298.pdf&ei=vmahTvikEoib-gadiZGuBQ&usg=AFQjCNH95AzJoEKz83KrtpLkLXENeJ3Njw&sig2=I_64kGBITluwmGNvw619Cg

Which is how these URLs end up here, and which can be used to circumvent the blacklist. --Dirk Beetstra T C (en: U, T) 13:02, 21 October 2011 (UTC)
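A minimal sketch of the mechanism, assuming Python's urllib.parse and a shortened form of the redirect above: the real target sits percent-encoded inside the url= parameter, which is why a domain-only blacklist rule never sees it.

 from urllib.parse import urlsplit, parse_qs
 # Shortened version of the redirect above; parameter values are illustrative.
 redirect = ("http://www.google.com/url?sa=t&q=public%20law%20105-298"
             "&url=http%3A%2F%2Fwww.copyright.gov%2Flegislation%2Fpl105-298.pdf")
 query = parse_qs(urlsplit(redirect).query)
 target = query["url"][0]   # parse_qs percent-decodes the parameter value
 print(target)              # http://www.copyright.gov/legislation/pl105-298.pdf
 # Any blacklisted domain could be hidden in the same encoded form, which is
 # why the redirect itself, not just its targets, needs to be caught.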

  • Done. I am afraid this will annoy people. --Dirk Beetstra T C (en: U, T) 13:06, 21 October 2011 (UTC)

(which are all three meta blacklisted sites). --Dirk Beetstra T C (en: U, T) 13:08, 21 October 2011 (UTC)

(unarchived)
I need some help here, please. This is apparently a problem for all TLDs of google. See en:WT:EL#Google_redirection_URLs.
'google.*?\/url\?' ??
--Dirk Beetstra T C (en: U, T) 10:04, 24 October 2011 (UTC)
maybe this is a bug in the extension? "redtube.com" was a part of the url you tested here. -- 86.159.93.124 17:36, 24 October 2011 (UTC) (seth, not logged in) -- seth 20:17, 1 November 2011 (UTC)
That is what also occurred to me .. anyway it needs blocking as redirect sites should not be used - even when the target is not blacklisted (yet). --Dirk Beetstra T C (en: U, T) 08:29, 25 October 2011 (UTC)
  • I think it's better to allow this "spam" rather than annoy a lot of people. Unless an explicit error message can be printed, this is a bad filter that will trip a lot of users. See w:Wikipedia:ANI#Google blacklisted. Have mörser, will travel 18:26, 25 October 2011 (UTC)
  • It would be better if a bot rewrote those urls instead. Have mörser, will travel 18:29, 25 October 2011 (UTC)
  • I would have to agree it is a major loophole just waiting to be exploited, and should remain blacklisted. Δ 18:30, 25 October 2011 (UTC)
    • en:WP:BEANS. And blacklisting Google is going to really screw with lots of pages. Unlist. - The Bushranger 19:00, 25 October 2011 (UTC)
    • Please unlist this. This is using an axe to solve a problem that needs a scalpel. There must be ways to solve this problem, such as using edit filters, bots, or other methods than the spamblacklist. The spamblacklist should be used only for unambiguous cases where the site should always and without exception be never used. That sometimes, someone may use this method for abuse doesn't mean that every use of Google (or even most) will, and this is likely to generate more problems than it will solve. Please undo this and consider alternate means of addressing this problem. This is a real problem, and it needs to be fixed, but this solution is not a reasonable one. --Jayron32 21:12, 25 October 2011 (UTC)
      • This was grossly disproportional to the scope of the problem, and should be reversed. Georgewilliamherbert 00:02, 26 October 2011 (UTC)
  • Undo this as soon as possible; it is making pages impossible to edit. Highly frustrating - even when you find out what the problem is, it is ridiculously time consuming to try to find the offending link when editing a long page. When you don't know about the blacklisting of google you have no chance at all. Bad decision. Maunus 01:59, 26 October 2011 (UTC)
  • While this sounds good on paper it doesn't work out --Guerillero 02:42, 26 October 2011 (UTC)
  • Comment - A bot request just had a 7 day trial approved to handle the offending URLs (see en:WP:Bots/Requests for approval/AnomieBOT 58). Best regards. - Hydroxonium (TCV) 02:54, 26 October 2011 (UTC)
    Note the bot is enwiki only. Anomie 03:07, 26 October 2011 (UTC)
Guys, calm down. This is blocking a very small number of links (a couple of hundred), not the whole of Google. Many regular editors are NOT going to include these links. Normal google links do NOT include the /url? part, there is no need to link there, and like with the other google loophole (which was abused), this is waiting to be abused (if it has not yet been abused). This is not 'making pages impossible to edit' - it makes it impossible to ADD a link; this is not 'screw[ing] with lots of pages' (as I said, just a couple of hundred), bots can't solve this (if it is used to circumvent blacklisting, then the bot can't repair the link anyway), etc. etc. Have a look at what I have been suggesting and what the problem actually is before making such sweeping comments. Thanks. --Dirk Beetstra T C (en: U, T) 07:06, 26 October 2011 (UTC)
Yes, I already realized that you were not blocking all of Google. I made my above objections fully knowing the exact scope of this blacklisting, and still stand by the fact that this solution is overkill and causes more problems than it solves. I will concur that this eliminates the problem you note. It is not, however, a proper solution in that it also prevents good uses of the google.com/url linking. There are perfectly valid methods to stop this abuse; as noted above someone is already working out a bot solution. The issue here, Beetstra, isn't that you have solved a problem, it's that you have refused to consider alternate solutions which could have far less collateral damage. Your attitude of "I have done this, and you all have to just live with the negative consequences because that's that" isn't terribly helpful. People here have suggested, and are working on, a way to fix this problem in softer ways, and it would be beneficial to try these before merely deciding that your solution is final and cannot be reconsidered, merely because you decided to do it. --Jayron32 15:02, 26 October 2011 (UTC)
What, exactly, would be an example of "good uses of the google.com/url linking"? Anomie 20:24, 26 October 2011 (UTC)
As a quick note, there's really no "good uses" - any use of this link can be seamlessly replaced by a link to the target URL. I don't believe it's been used to any significant extent to avoid the blacklist, but that isn't my major concern - having these URLs as external links means that any time a reader follows them, we're handing off some amount of their reading history to Google, which is a definite contravention of the spirit of the privacy policy if not the letter of it. Shimgray 21:34, 26 October 2011 (UTC)
Jayron32 - that is a pretty blunt statement that you make. You blatantly say that I did not consider other methods of stopping this. First, there is not a single reason to link to a google/url? link. They are redirects; you can link to the real link. Your argument is just saying that there are also good reasons to link to bit.ly or any other redirect site - there are NONE.
Regarding other solutions, I considered:
  • The AbuseFilter - which clearly should be cross-wiki one, since this is a cross-wiki issue
    • Flagging only - as if a spammer would care, they just save (but well, at least people may notice)
    • Blocking - which is just the same as the blacklist.
  • XLinkBot - currently only activated for en.wikipedia.
But as I said elsewhere and here again - this simply should never be linked, there is never a reason. And what other solutions did you have in mind? --Dirk Beetstra T C (en: U, T) 09:11, 27 October 2011 (UTC)
  • (EC) Concur with blacklist; my only suggestion, if it's a real problem for users, is to lift the block for a short time to give time for bots to be readied for all projects. I'm not sure, but it sounds like some people may be confused. For clarity, Google is not blacklisted. You can still link to google.com itself or google search results like [1]. What is blacklisted is www.google.com/url? . The reason is that this functions as a redirect. I can't see any reason why they should ever be on wikipedia (they are simple redirects, they don't allow you to view the cache or something if the page is down); they mostly happen by accident when people copy the links of Google search results. They add another point of failure (Google), may lead to confusion (people thinking the site they're going to is Google and so trustworthy, see for example the previously mentioned search results) and also mean people are forced to go through Google to visit the external link (allowing Google to collect their data). However, as made clear here, the primary reason they were blocked is because they can be abused, as anyone can use them to link to spam sites, overriding the blacklist. Nil Einne 07:11, 26 October 2011 (UTC)

Unarchived again. Still needs to be solved. --Dirk Beetstra T C (en: U, T) 09:52, 1 November 2011 (UTC)

I am going to change the rule to 'google\.[^?#]*\/url\?'. --Dirk Beetstra T C (en: U, T) 11:12, 1 November 2011 (UTC)

Needed to use '\bgoogle\..*?\/url\?' - '\bgoogle\.[^?#]*\/url\?' was not accepted by the blacklist. Testing if other Google links still work: http://www.google.com/search?hl=en&q=Google+Arbitrary+URL+Redirect+Vulnerability. --Dirk Beetstra T C (en: U, T) 11:18, 1 November 2011 (UTC)

Try '\bgoogle\.[^?\x23]*\/url\?', it's choking on trying to interpret the literal "#" character as the start of a comment. But escaped it works fine on my local test installation of MediaWiki. Note that '\bgoogle\..*?\/url\?' will block a URL like http://www.google.com/search?q=Google+/url?+Redirect, as unlikely as that is to occur. Anomie 14:25, 1 November 2011 (UTC)
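A rough check of the two candidate patterns, using Python's re module as a stand-in for the PCRE that the SpamBlacklist extension runs (the sample URLs are made up), illustrates both points: the behaviour difference, and that \x23 is simply an escaped '#', which otherwise starts a comment in the blacklist file.

 import re
 greedy = re.compile(r'\bgoogle\..*?/url\?')
 strict = re.compile(r'\bgoogle\.[^?\x23]*/url\?')   # \x23 is an escaped '#'
 redirect = "http://www.google.com/url?sa=t&url=http%3A%2F%2Fexample.org"
 search   = "http://www.google.com/search?q=Google+/url?+Redirect"
 for url in (redirect, search):
     print(url, "greedy:", bool(greedy.search(url)), "strict:", bool(strict.search(url)))
 # greedy matches both URLs; strict matches only the redirect, because
 # [^?#]* cannot run past the '?' that starts the search query string.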
Hi!
what about \bgoogle\.[a-z]{2,4}/url\?? -- seth 16:01, 1 November 2011 (UTC)
That wouldn't catch domains like google.com.au, or paths like http://www.google.com/m/url?.... Anomie 17:05, 1 November 2011 (UTC)
hmm, ok. So which urls have to be blocked exactly? What is this google.com/m/-thing? If these were the only exceptions \bgoogle(?:\.com)?\.[a-z]{2,4}(?:/m)?/url\? would do.
The Abuse Filter could be a helping compromise, but it still can't be used globally, am I right? Did anybody open a ticket at bugzilla already? -- seth 20:17, 1 November 2011 (UTC)
Basically, what needs to be caught are all google urls (all tlds) where the path ends in /url? - the normal form would hence be 'google.com/url?', but also 'google.com.au/url?' and 'google.at/url?' - and long forms are e.g. 'google.<tld>/archivesearch/url?'. For a full list of links that have been added (but it does not necessarily have to be exhaustive, there may be even more possible) see the post of Anomie in en:Wikipedia_talk:EL#Google_redirection_URLs.
A global filter may be an idea as an alternative, but if it is set to blocking it will have the same effect anyway (though could be more specific since the message could be made informative for specific redirects and how to avoid them) - if set to notify it is probably futile when people start to abuse it (except that we would then notice). There simply is no need to have it, just follow the link (which I hope one needs to do anyway since I hope that people read the document they want to link to), and copy it then from the address bar of your browser. --Dirk Beetstra T C (en: U, T) 08:56, 2 November 2011 (UTC)
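For comparison, a quick coverage check of the narrower candidate from the thread against the forms listed above, again sketched with Python's re; the .de archivesearch URL is just an invented example of the long form.

 import re
 narrow = re.compile(r'\bgoogle(?:\.com)?\.[a-z]{2,4}(?:/m)?/url\?')
 broad  = re.compile(r'\bgoogle\..*?/url\?')   # the rule that is in place
 samples = [
     "http://www.google.com/url?q=x",
     "http://www.google.com.au/url?q=x",
     "http://www.google.at/url?q=x",
     "http://www.google.com/m/url?q=x",
     "http://www.google.de/archivesearch/url?q=x",   # invented long-form example
 ]
 for url in samples:
     print(url, "narrow:", bool(narrow.search(url)), "broad:", bool(broad.search(url)))
 # The narrow pattern misses the archivesearch form; the broad rule catches all five.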
Hi!
I see a big advantage in blocking urls with adapted messages, so that users can modify their link without being surprised by alleged spamming. However, there is still no global AF, is there?
I opened a ticket now: bugzilla:32159. -- seth 22:45, 2 November 2011 (UTC)
(unarchived) -- seth 08:42, 5 November 2011 (UTC)
The sbl extension searches for /https?:\/\/+[a-z0-9_\-.]*(\bexample\.com\b)/. That means our sbl entries always start with a domain part of a (full) url. That's ok because those google-links also include full urls. The problem is that those urls are encoded (see w:en:Percent-encoding) and the sbl extension does no decoding. So ...?url=http%3A%2F%2Fwww.example.com is not resolved as ...?url=http://www.example.com. Solutions could be
1. start the regexp pattern not with /https?:\/\/+[a-z0-9_\-.]*/ but with /https?(?i::|%3a)(?i:\/|%2f){2,}[a-z0-9_\-.]*/ or
2. decode urls before using the regexp matching. -- seth 11:35, 5 November 2011 (UTC)
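Sketched with Python's re and urllib.parse (example.com standing in for a blacklisted domain; the flag handling only approximates the extension's PCRE), the two options look roughly like this:

 import re
 from urllib.parse import unquote
 # A rule in the current style: scheme prefix plus the blacklisted domain.
 rule = re.compile(r'https?://+[a-z0-9_\-.]*(\bexample\.com\b)', re.I)
 wikitext = "http://www.google.com/url?url=http%3A%2F%2Fwww.example.com%2Fspam"
 print(bool(rule.search(wikitext)))             # False: the encoded form slips through
 # Solution 2: decode before matching.
 print(bool(rule.search(unquote(wikitext))))    # True
 # Solution 1: widen the prefix so it also matches %3A and %2F.
 widened = re.compile(r'https?(?::|%3a)(?:/|%2f){2,}[a-z0-9_\-.]*(\bexample\.com\b)', re.I)
 print(bool(widened.search(wikitext)))          # True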
don't archive this. -- seth 21:09, 7 November 2011 (UTC)
Sorry for the problems with the archive bot. Now it should be resolved, please just remove the first template of this section when you will want this request to be archived. Regards, -- Quentinv57 (talk) 18:00, 10 November 2011 (UTC)
thx! :-) -- seth 21:26, 10 November 2011 (UTC)

Note that even when the blacklist would catch the links which redirect to blacklisted domains, this domain should still be blacklisted, as it is still inappropriate and can be used to avoid detection by our bots. Also, it unnecessarily involves Google in your linking, and not everyone may be interested in having their data analysed by Google. --Dirk Beetstra T C (en: U, T) 08:20, 11 November 2011 (UTC)

  • If you say that these links can be restated to avoid blocking, you should EXPLAIN HOW THIS IS DONE, in VERY SIMPLE LANGUAGE in a box at the top here. Most users are not techies. I have no idea how to do it. Otherwise the block should be removed. Johnbod 15:30, 11 November 2011 (UTC)
I wrote a small stupid tool tools:~seth/google_url_converter.cgi which can be used to recover the original urls from the google redirects. -- seth 15:45, 13 November 2011 (UTC)
Johnbod - As goes for practically all redirect sites - follow the link, and copy/paste the url from the address bar of your browser. Don't copy/paste the url that Google is giving you.
To explain it further - the Google search gives you a set of google-redirects which point to the correct websites. You then click one of the redirects from Google, so Google knows that that is the result that is most interesting to you. Next time you search something similar, it will think that that is the result of interest to you, so it will get a higher ranking - what is more, it may also show up higher in rankings on searches by other people, since you thought it was more interesting. Now, as such, that is not a big issue - but if you use that google-redirect on Wikipedia, the Google rankings of that page get improved through Wikipedia. That is a loophole waiting to be abused. It is the very, very essence of Search Engine Optimisation. It is even more efficient than having your website itself on Wikipedia. --Dirk Beetstra T C (en: U, T) 10:49, 15 November 2011 (UTC)
I agree with Beetstra. But it's not always that easy to get the original url, if you want to link an excel-file for example (see w:de:WP:SBL). That's why I created the small tool. -- seth 22:24, 17 November 2011 (UTC)

Also, if you want to avoid this problem and you use Firefox, you can install this extension. MER-C 09:52, 21 November 2011 (UTC)

If... I recall correctly, this kind of loophole can be detected by looking for "usg=" in the url, instead of "url=". es:Magister Mathematicae 15:29, 18 December 2011 (UTC)

I see the point of blacklisting these addresses, but there is some kind of technical problem in this case. Normally you are allowed to keep a url that already exists on a page, but not in this case. I would like to edit the pages sv:Bengt Nordenskiöld and sv:Who Says, but I cannot without removing the already present link. Why? -- Lavallen (talk) 11:25, 4 March 2012 (UTC)

That is not correct. Blacklisting prevents the page being saved; it is a yes/no test at the time of saving. Maybe you are confusing it with AbuseFilter behaviour. billinghurst sDrewth 11:29, 4 March 2012 (UTC)
That's not correct either. If a link was placed in a page before the link was blacklisted, then it's possible to again add the same link in that page. (IIRC that behavior is like that since May 2008.) But that only works if the link is exactly the same. It won't work with another deeplink to a blacklisted domain.
Example: "example.com/something" is placed on a page Foo. "example.com" gets blacklisted. Now it's possible to add link to example.com/something a second time in the page Foo. But it's not possible to add a link to example.com/anything_else.
What was your exact problem, Lavallen? Which link did you want to add there? -- seth (talk) 19:04, 2 June 2012 (UTC)
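A rough model of the behaviour described above, for illustration only (this is not the extension's actual code): the check is applied to links that an edit adds, so an exact URL already present in the old revision passes, while any other deeplink to the blacklisted domain is blocked.

 import re
 BLACKLIST = [re.compile(r'\bexample\.com\b')]
 def edit_allowed(old_links, new_links):
     added = set(new_links) - set(old_links)    # only links added by the edit are checked
     return not any(rule.search(link) for link in added for rule in BLACKLIST)
 old = ["http://example.com/something"]         # link already present on page Foo
 print(edit_allowed(old, old + ["http://example.com/something"]))      # True: same link again
 print(edit_allowed(old, old + ["http://example.com/anything_else"]))  # False: new deeplink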
Is there still a discussion here or can this be archived? Snowolf How can I help? 05:56, 24 August 2012 (UTC)


Proposed removals

This section is for archiving proposals that a website be unlisted.

realblackvoodoo.com



[2] The Haiti Bizango Society Website. The first Haitian Bizango society website on the Internet - Confirmed by Mr. Azangon M. Loko. Address: 432/a Ruelle Sylvia, Port Au- Prince, Haiti. The Website was blacklisted while trying to add it to the Haitian Voodoo section. This website should definitely not be blacklisted by the Wikipedia spamlists.

Best Regards Laura—The preceding unsigned comment was added by Laura-sk (talk)

Eh, you just linked to it - hence it is not blacklisted here. It is however blacklisted locally on en.wikipedia, you might want to ask there. --Dirk Beetstra T C (en: U, T) 05:35, 31 July 2012 (UTC)
I do however note, that this was added by obvious socks - 'Voodoo1212', 'Magic5000', 'Voodoo3849', 'Silvia7000', 'Samuell77', maybe also 'Esptest' and 'Roberto.samos'. That alone suggests that the site definitely should be blacklisted by Wikipedia. I presume the local editors have been warned for this behaviour, and pointed towards the relevant policies and guidelines - maybe they should heed the suggestions in there. Thanks. --Dirk Beetstra T C (en: U, T) 05:41, 31 July 2012 (UTC)
see w:Wikipedia:Sockpuppet_investigations/Esptest. --Dirk Beetstra T C (en: U, T) 05:44, 31 July 2012 (UTC)
Declined nothing to do — billinghurst sDrewth 06:02, 19 August 2012 (UTC)

calculate-linux.org



This is the homepage of the Calculate Linux project, and the link is very useful to its article (namely in the External Links section). The article currently has a link to calculate-linux.com, but that domain no longer works.

It seems the link was added to the list automatically. The log of it can be found at https://meta.wikimedia.org/wiki/User:COIBot/XWiki/calculate-linux.org. The preceding unsigned comment was added by 81.71.110.7 (talk • contribs) 06:37, 10 June 2012‎ (UTC)

Comment The link was not added automatically, but by an administrator after reviewing an automated report. The site is hosted by a company from Saint Petersburg and the link was spammed dozens of times in about 5 days by a person from an IP address in that same city. Someone was clearly spamming wikipedia. EdBever (talk) 20:11, 10 June 2012 (UTC)
I can see how it's unpleasant that it was being spammed, but refusing to unlist the link, now that it is clearly relevant, because of that incident would mean choosing to punish someone over having a better encyclopaedia. Providing a link to the official website is kind of important to an article. Aren't there any workarounds, like allowing the link to be added only with a moderator's permission, or only allowing it on one page?
Even if there aren't any workarounds in the software, depending on how long ago the spamming was, I still think it's worth another try. The preceding unsigned comment was added by 81.71.110.7 (talk • contribs) 01:18, 11 June 2012‎ (UTC)
Declined no evidence provided as to why it should be unlisted after it had been spammed — billinghurst sDrewth 06:07, 19 August 2012 (UTC)

Google redirect 'spam'



Rule: function/c_urlredirect\.asp\?url=

This blacklisting is quite annoying. It makes it very difficult to legitimately link, e.g. in a discussion, to a lot of PDF documents that are found by searching Google. The original URL is mangled in such a way that it cannot easily be extracted from the link address itself. Paul B (talk) 11:02, 13 June 2012 (UTC)

It is, just follow the link, then copy the link that is shown in your address bar - that is the link you want to link to anyway (and I presume you follow the links that google suggests to look at the information they really provide, before suggesting them at a talkpage, or using them in a page - so then the link is displayed in your address bar anyway). This link would make one of the very essences of SEO possible, de-listing would be very unwise. --Dirk Beetstra T C (en: U, T) 11:45, 13 June 2012 (UTC)
The point is that that only works if the PDF document is shown inside your web browser. If it's not, so if Acrobat Reader or any other reader is started up outside the browser window, then good luck, because the mangled URL is all you're going to get. That's also why I'm referring specifically to PDF documents and not to web pages in general. Paul B (talk) 12:06, 14 June 2012 (UTC)
But we can't remove this link of the spam blacklist. And it's always better to use the link itself instead of a redirect. Trijnsteltalk 13:18, 14 June 2012 (UTC)
Paul. The redirect spam and the abuse of that Google service is equally annoying and probably more destructive as people can hide so many evil and malevolent beasties, and your complaints do not address such. That you chose to have a certain browser configuration and do not wish to utilise alternatives unfortunately has to be seen as your issue not ours. How about you complain to Google that they do not obstruct the proper url, and stop rerouting matters through false urls. — billinghurst sDrewth 13:22, 14 June 2012 (UTC)
There are several tools for decoding such urls easily. (The problematic links actually are just the directly downloadable links. The other ones will be converted by your browser automatically and visibly)
For example you can use this tool to convert the urls. For Firefox there's a plug-in for getting the cleaned urls: [3]. -- seth (talk) 14:09, 17 June 2012 (UTC)
Declined broad problems with spammers would exist if the suggested modifications were made — billinghurst sDrewth 06:09, 19 August 2012 (UTC)

db.tt



I can't find the log or request for this addition, so please forgive me if there was a very good reason for this addition. However, this is simply a domain-specific shortener for dropbox.com (which is itself not blacklisted), so I don't see why it would be on the spam blacklist unless there was some temporary abuse of it. Dominic (talk) 14:18, 29 June 2012 (UTC)

Dropbox referrals also use this domain; this is the reason for blacklisting. MER-C (talk) 02:18, 30 June 2012 (UTC)
Has it been abused for that purpose? Dominic (talk) 04:23, 30 June 2012 (UTC)
User:COIBot/LinkReports/db.tt -- redirects to referrals outnumber legitimate use. MER-C (talk) 04:31, 30 June 2012 (UTC)
More generally, many sites operate domain-specific redirect sites. They pose the problem that people who are active on the blacklist, for any specific link (deeplink to a document) on those servers, would also have to blacklist the domain-specific redirect for that specific link. In this case, besides having to blacklist e.g. 'dropbox.com/referrals/NTM2NTQyNTk?src=referrals_twitter9' (if that was a specific spam problem), they would then also have to blacklist 'db.tt/YKaA6JUt', otherwise the blacklisting could be circumvented too easily (and spammers will do that). For that reason, youtu.be is blacklisted as well (there are quite a number of youtube.com links blacklisted over the different wikis; that number of rules would double if the specific youtu.be links also needed blacklisting), and redirect sites are often blacklisted on sight for that reason (sometimes even without abuse, or, for the really common ones, even without their having been used, just to discourage the possibility of abuse). They are not necessary anyway (except that some links get excessively ugly in edit-view - they really have not a single use on Wikipedia - there is always the expanded link one can use). --Dirk Beetstra T C (en: U, T) 10:17, 30 June 2012 (UTC)
Declined full urls have been preferred due to the ability of redirects to circumvent blacklisting of specific urls — billinghurst sDrewth 06:12, 19 August 2012 (UTC)

galerie-obadia.com



I believe this website should not be blocked by the Wikipedia spamlists. This gallery represents a number of prominent artists and Wikipedia articles about them can't be accurately updated (with a link directing to the artist information page on the gallery site). The preceding unsigned comment was added by 79.94.27.180 (talk • contribs) 09:14, 2 July 2012‎ (UTC)

And why do you think that that link should be on those pages? We are writing an encyclopedia, not a linkfarm. There are links on most of the pages to the official website of the artist, and if the gallery is of interest, it will be linked from there. You may want to have a look at local guidelines regarding linking. I hope this explains. --Dirk Beetstra T C (en: U, T) 14:50, 4 July 2012 (UTC)
Declined as above — billinghurst sDrewth 06:13, 19 August 2012 (UTC)

emptynosesyndrome.org



The link is on the blacklist because: "Attached non-notable EL's to at a handful of nose-related articles from at least four different anon IPs. Warned numerous times, blocked during the latest run, but I suspect they'll be back. The warnings aren't taking much effect. --Mdwyer 05:09, 6 February 2007 (UTC)"

I'm not one of the above contributors, but I do want to add the link emptynosesyndrome.org/what_is_ens.php#symptoms in the section External links of the Empty nose syndrome article of Wikipedia, and add as description: “More ENS symptoms on the ENS awareness site, with an audio of Dr. Kern’s presentation about ENS at a nose conference.”

I think the link is needed, because not all ENS symptoms are mentioned in the article, and because the external link to Dr. Houser’s tutorial pages is broken (at least, when I visit it, it links to an internet archive that just keeps reloading without ever showing Dr. Houser’s site). I hope I made the request in the right format. Thanks. Gewell (talk) 14:33, 5 July 2012 (UTC)

You made the request in the right format; for convenience I reformatted those ip-contributor-links slightly however. FWIW, the link to Dr. Houser’s tutorial pages seems to be working perfectly fine for me. I trust an administrator here will look into this case shortly. However, as long as it is only this one link you want to add, it might be easier to request a "whitelisting" (unblock of that particular link) at Wikipedia in English: en:MediaWiki talk:Spam-whitelist Best regards, Finn Rindahl (talk) 20:40, 5 July 2012 (UTC)
I would also suggest to request whitelisting locally (though, this was blacklisted 5 years ago .. we could try to delist and hope that it does not reoccur). Note that "because not all ENS symptoms are mentioned in the article" is not a good reason - why does the article not mention them (we don't link to external information when the article can and maybe should contain the data itself), and when they are included, they should be referenced to appropriate sources (which may be this site, though if this site is a form of conglomerate, then I presume that there are scientific papers that would be better sources). --Dirk Beetstra T C (en: U, T) 10:31, 28 July 2012 (UTC)

Thanks for the advice. I have requested whitelisting locally, but nevertheless also request the removal of the whole site emptynosesyndrome.org from the blacklist. The reasons are: 1. I also see no reason now to keep it blacklisted for the reason that caused it five years ago. 2. Most scientific articles about Empty Nose Syndrome are not free online, so it is hard (for me anyway) to find the right scientific article for the right extra symptom. I agree that it is best to mention extra symptoms in the article itself, that only gives the “two main physical symptoms” and some psychological and emotional symptoms now. If I get the chance, I’ll probably try to include more symptoms with a referral to scientific articles or to the site (if this is allowed?). 3. The ENS site also contains a thorough tutorial on the turbinates, nose and respiratory system (on several pages), in which life threatening complications of ENS are described (see for instance emptynosesyndrome.org/turbinates_tutorial6.php). Also for this description, which I discovered only recently, I don’t know the exactly corresponding scientific articles (yet), but I do know that I am developing one of the complications myself. 4. The existence of ENS is now recognized by only parts of the ENT community, and its symptoms and risks are in progress of being detected and described. Even the name of the syndrome/disease is not fixed yet. Meanwhile turbinectomy is still a regular procedure all over the world. Another fact is that some physicians only look on Wikipedia to learn about ENS. Anyway, the risk of life threatening complications is something any surgeon planning to cut out any portion of turbinate mucosa must take into account, and should be on Wikipedia. 5. This risk is also very important information for both future ENT patients to which turbinectomy will be proposed and physicians who have to treat patients with seemingly spontaneous, atypical, and persistent, lung problems. Perhaps there are more reasons for removal from the blacklist, but this is what I can think of now. Regards, Gewell (talk) 19:44, 1 August 2012 (UTC)

It still means that the Wikipedia article can and should be improved, not that we link to sites about it. --Dirk Beetstra T C (en: U, T) 05:26, 2 August 2012 (UTC)
Removed it is an old spamming, and assuming good faith for now; though I would suggest that linking be per the sites' guidelines, otherwise it is possible it will reappear. — billinghurst sDrewth 06:23, 19 August 2012 (UTC)

www.airtet.in



http://airtet.in Please remove this site from the spam blacklist. I did not know what counts as spam on Wikipedia (I still don't), but I understood that one should not place such links on Wikipedia. I have tested this with my competitor's domain, but I do not want to harm him myself; please remove it, otherwise he can do the same with my domain.—The preceding unsigned comment was added by 49.137.187.54 (talk)

You linked to it, it is not blacklisted here. --Dirk Beetstra T C (en: U, T) 05:23, 2 August 2012 (UTC)
Declined nothing to do at meta. It is listed at enWP, so please see w:en:Wikipedia:Spam_blacklist and ask at that site. — billinghurst sDrewth 06:19, 19 August 2012 (UTC)

www.stage-gate.com & www.prod-dev.com





Please remove the following two websites: http://www.stage-gate.com & http://www.prod-dev.com. I'm new to this Wikipedia contributions thing, so I'm not sure why these websites got blacklisted. I'm simply trying to update and improve the pages related to Stage-gate model, Project portfolio management, and Portfolio management for new products, and to link webpages that relate to the further reading and references mentioned. The reason for this is that the information, in particular on the Stage-gate model, is incorrect, and my revisions keep getting declined and reverted, so I'd like to at least add links to the pages that correctly provide information on this topic. Thanks, 70.25.21.97 15:10, 2 August 2012 (UTC)

You linked to the domains, clearly they are not blacklisted here. Hence, all Done here. --Dirk Beetstra T C (en: U, T) 04:06, 12 August 2012 (UTC)
Hmm, that's interesting. I tried to test it on a page I'm working on under my User page, and for some reason I'm still getting the message that the external link is registered on Wikipedia's blacklist. EmirOzdemir10 (talk) 18:54, 16 August 2012 (UTC)
It seems like the domains are locally blocked on the English Wikipedia. So you have to bring it up there. More information can be found here. -- Tegel (Talk) 19:07, 16 August 2012 (UTC)
Declined nothing to do here — billinghurst sDrewth 07:16, 19 August 2012 (UTC)

philatino.com



While I remember the COI issue back when this site was blacklisted per User:COIBot/XWiki/philatino.com, the site hosts several images which are freely licenced and useful to philatelists and stamp collectors. I'm interested in uploading several images from there while being able to properly source the images when uploaded to the Commons. Should the owner, who was the apparent spammer, return to his COI spam links, then I would not object to blacklisting again. Is it possible to allow links to images and the pages they are linked from but still disallow adding links to articles, or can Commons links be allowed but language wikis be barred? Ww2censor (talk) 03:36, 9 August 2012 (UTC)

I don't think that specific ignoring of certain pages is possible (oh, when is someone going to rewrite the blacklist-part of the mediawiki software), but a generally used solution is to either just remove the http:// from the front, or to nowiki it (it does not need to be a live link). Another option is to whitelist on commons (if the images are free, no need for local uploading generally). However, this is quite some time ago, maybe we should consider removing it and see how it goes? --Dirk Beetstra T C (en: U, T) 04:10, 12 August 2012 (UTC)
Removed let us give the benefit of the doubt and try it without. We can always put it back if it is problematic. — billinghurst sDrewth 11:25, 19 August 2012 (UTC)

galatta.com



This domain was blocked in 2008. See "Adsense spammer" at http://meta.wikimedia.org/wiki/Talk:Spam_blacklist/Archives/2008-03

Link is globally blacklisted by \bgalatta\.com\b

I wanted to expand the en:Jose Thomas Performing Arts Centre article (currently at AfD: en:Wikipedia:Articles for deletion/Jose Thomas Performing Arts Centre (2nd nomination) ) with http://www.galatta.com/malayalam/news/Mohanlal_begins_arts_centre/21413/ and http://www.galatta.com/malayalam/news/Mohanlal_as_Prospero/21812/ , but found that the galatta site was blocked. I saw the same AdSense code ("ca-pub-4598819753511212") mentioned in the 2008 reference in the source for these pages, but the galatta articles themselves seem reliable. galatta.com is a site about Indian movies, and probably covered this live theatre because a well-known film actor is connected with it. I contribute primarily to the English Wikipedia, and have no connection with Galatta and its affiliated websites. My links at English Wikipedia: Eastmain (talkcontribs) My links at Meta: Eastmain (talk) 02:41, 14 August 2012 (UTC)

Declined - Delisting this domain is out of the question given the original reason for blacklisting (massive spamming campaign). You could request whitelisting at en.wp (en:MediaWiki talk:Spam-whitelist). EdBever (talk) 18:09, 15 August 2012 (UTC)

egyptpost.org



Blocked but appears to be a legitimate site of the Egyptian national post office. Trying to add it to the Egypt Post article as requested in user feedback and important for the article. Philafrenzy (talk) 20:12, 15 August 2012 (UTC)

Removed seems realistic to remove it; nothing recorded on its local page. Will leave the egyptptc component. — billinghurst sDrewth 07:20, 19 August 2012 (UTC)

almaty-kazakhstan.net



This website is blocked and I feel that it is a legitimate site for visitors to Almaty City in Kazakhstan. It was blocked due to me adding it to too many pages, as I felt that it was useful for people visiting those pages. I apologise for this; I am new to Wikipedia and it won't happen again. —The preceding unsigned comment was added by 90.54.143.72 (talk)

Removed Thanks for bringing this to our attention, and the mea culpa. Good luck with your editing. — billinghurst sDrewth 12:09, 19 August 2012 (UTC)

hurryupharry.org



Not clear why this blog is blacklisted. It seems to have been blacklisted (by mistake?) before, then whitelisted and now blacklisted again. It's generally a politically centrist UK blog.

It looks to have a number of local blacklistings, but nothing global (yet).
Not done; nothing to do here. — billinghurst sDrewth 15:58, 19 August 2012 (UTC)
I still get "The following link has triggered a protection filter: http://hurryupharry.org" when trying to link to the site (tested in the sandbox). Does "nothing to do" mean it's blocked for a reason? If so, where I can find out what it is? Sorry about this, I'm new here.

MoneyWeek.com



Hi, new to wiki so apologies for any mistakes. I've received an email from someone trying to create a link to moneyweek.com in a wiki article. Sorry, I don't know the specific article. They tell me they can't include the link because links to moneyweek.com are blacklisted.

I can see the URL is blacklisted here {http://en.wikipedia.org/wiki/MediaWiki:Spam-blacklist} but cannot find out the reasons why. Can someone help so I can propose it gets removed from the blacklist. Thanks.

Moneyweek’s wiki page: http://en.wikipedia.org/wiki/MoneyWeek The preceding unsigned comment was added by Andy5058 (talk • contribs) 11:53, 4 July 2012‎ (UTC)

Well, it is blacklisted on en.wikipedia; you were here, so requests regarding that link need to be made on the talkpage on that wiki (here). Nothing can be done on this wiki about it - hence Not done. --Dirk Beetstra T C (en: U, T) 14:48, 4 July 2012 (UTC)
Declined as above — billinghurst sDrewth 12:48, 26 August 2012 (UTC)

lemairesoft.sytes.net



Hello, it seems that sytes.net is blocked, which is preventing lemairesoft.sytes.net from being used. This website provides a lot of useful information on WWII weapons, planes and ships. Could it be possible to enable it? Thank you very much, Gonzolito (talk) 22:09, 21 August 2012 (UTC)

Declined. This domain hosts lots of sites and apparently has been spammed before. It has been blacklisted since at least 2006. Several requests to delist have been denied. You can however request whitelisting of this particular site at en:MediaWiki talk:Spam-whitelist. --EdBever (talk) 07:48, 24 August 2012 (UTC)
Thanks, I'll do that. Regards, Gonzolito (talk) 12:23, 24 August 2012 (UTC)

ascenderfonts.com



The original ban was based on the accusations made in the 2010-06 blacklist request, where COIBot reported Wikipedia user:Rexjoec, user:AC225 and several others, who were accused of having a conflict of interest with Ascender Corporation. If those users do indeed have conflicts of interest, the appropriate response is to punish the accused (if the accused have genuine COI), not to censor references to the entire site.

When the request to remove ascendercorp.com from the blacklist was made in 2010-10, user:Finn Rindahl claimed that additional background provided by user:MER-C was the reason against removing ascenderfonts.com from the blacklist, with user:dferg later denying the request. However, the 'additional background' referred to in the 2010-10 request was never spelled out. If the reason (or links to it) why certain sites should stay on the blacklist is not posted inside the discussion thread, a sound decision cannot be made. The decision reached in 2010-11 is a sign of poor decision making, if not censorship, which is against the goal of Wikipedia.

In the Wikipedia whitelist request made in 2012-06, User:Amatulic suggested whitelisting specific pages, but that is not feasible. The ascenderfonts.com site currently consists of bundles of blog entries, newsletters, and articles, which would require whitelisting multiple sections to be useful. Besides, if the web site organization changes, the whitelist becomes useless. Therefore the only logical solution is to remove the entire site from the blacklist.

Specific site bans aside, is blacklisting links to external sites even a good idea? If there are multiple users on the foreign site and only 1 user is a known spammer, content made by every other user who has nothing to do with the spammer is affected by the ban. Even if the whole site is operated by a known spammer, that does not imply all content on it amounts to spam. In the end, is the current policy really effective at deterring spammers or spamming? Spammers can just change domain names or use different IDs, and spamming can be done outside Wikimedia's jurisdiction. What this blacklisting ends up becoming is a censorship policy.

If you are going to ban a link to a specific site, it should be banned only when specific instances of the link are irrelevant to the respective articles where the site links appear, not simply because the site is 'bad'. The blacklist policy has its priorities backwards. --142.150.48.218 23:42, 28 August 2012 (UTC)

Eh, I see at least 5 accounts spamming this, and a long-term-spamming report earlier. De-blacklisting can be considered when the link is of use, not because there are/were other methods to stop the spamming. For specific links, we can whitelist; for the rest, Declined. --Dirk Beetstra T C (en: U, T) 11:17, 29 August 2012 (UTC)

Troubleshooting and problems

This section is for archiving Troubleshooting and problems.

Discussion

This section is for archiving Discussions.

Thinking aloud really

I am well aware of policy regarding this page, however I do wonder whether making a point of listing all (or almost all) spambot-placed links might not be such a bad idea? The worst that is likely to happen is a howl of protest from site owners who presumably are directly or indirectly behind the placement of the links. I'm sure there will be other views but... I'll spam a few folks' pages in case they miss this :) --Herby talk thyme 15:31, 7 March 2012 (UTC)

That's a practical suggestion, however, the current limitations of the spam blacklist make this less of a good idea. Legitimate links to the spammed sites could be blocked, and this extension gives no record of actions it takes. Perhaps using the global AbuseFilter for this if/when it is enabled? Ajraddatz (Talk) 15:43, 7 March 2012 (UTC)
Afaik Google already takes care of our spam blacklist; publicizing this side effect of SEO on the wikis would be a good idea, but I have two concerns:
  • Spam is a war: an SEO expert (actually a bit smarter than those who are attacking WMF's wikis) could use us to destroy the rank of a competitor
  • A big part of spamming consists of google bombs, apparently meaningless texts which can influence google search results.
So, to me, a wall of shame for spammers could be a good idea, but the main point is making mediawiki less xrumer-friendly, e. g. enforcing captchas and implementing a global abusefilter or many filters on several wikis. --Vituzzu (talk) 21:37, 7 March 2012 (UTC)
Fair points both and thanks for contributing. I guess we will go on the hard way for now :) --Herby talk thyme 16:33, 9 March 2012 (UTC)
Hi!
Vituzzu said: Afaik google already takes care of our spamblacklist. Please give some evidence for this (iow: citation needed!). -- seth (talk) 15:40, 24 March 2012 (UTC)
I can only find one source, and it seems old: "Wikipedia Spam Resulting in Google & Yahoo Penalties" Search Engine Journal πr2 (tc) 17:39, 14 April 2012 (UTC)
Hi!
the "source" is a blog, where a forbes article is cited. that article, written 2007, can be reached via archive.org.[4]
But if you do a further research, then you will find out that that forbes reporter just wrote crap. He cited Jonathan Hochman (a wp administrator) in a wrong way.
The author of that blog is not confirming the rubbish that forbes wrote, no, the author is challenging it. Unfortunately all comments (and back in 2010 there were a lot of comments to that post) are deleted. You can still reach them by using archive.org.[5]
One of the commenters was Jonathan Hcohman himself and he could confirm that the forbes reporter did a mistake. He wrote:
I was there. :-) That Hochman guy actually said, “The blacklist is public, so search engines can read it. You don’t want to get on that list.”
This is one reason why it’s great to attend the conferences, because you can hear what’s actually said, rather than read about it second hand.
In fairness to the Forbes reporter, he tried to call me last night to confirm the quote, but I was putting kids to bed and didn’t get back to him in time.([6])
So there is still no source for that google-uses-sbl-hypothesis. -- seth (talk) 01:41, 21 April 2012 (UTC)
Several spammers have requested delisting claiming that their Google-ranking went down considerably after getting on the sbl. There is no real evidence though. EdBever (talk) 12:09, 21 April 2012 (UTC)
If spammers tried to misuse Wikipedia for their spamming, they probably also did some other blackhat SEO. And many of those tricks can be detected automatically by Google. In those detected cases the rank can decrease rapidly, and all of that still doesn't need to have anything to do with our sbl. -- seth (talk) 14:22, 21 April 2012 (UTC)
This list could be maintained as a meta page or page series independent of the actual blacklist so we keep a human readable record, or maybe this exists? -- とある白い猫 chi? 22:49, 11 June 2012 (UTC)

www.rollingstone.com

Is this web site blacklisted? http://www.rollingstone.com Thanks 88.122.160.176 13:49, 29 June 2012 (UTC)

no, otherwise you would not have been able to link to that site. -- seth (talk) 17:35, 30 June 2012 (UTC)

www.emptynosesyndrome.org

Where can I find why emptynosesyndrome.org is on the list? Thanks. The preceding unsigned comment was added by ‎ 84.83.174.80 (talk • contribs) 10:11, 5 July 2012 (UTC)

It was added per this. Best regards, Finn Rindahl (talk) 10:29, 5 July 2012 (UTC)

sortable attable?

Why is User:COIBot/XWiki formatted with class="wikitable sortable attable" and not class="wikitable sortable"? It's not sortable (for me at least) ATM. Finn Rindahl (talk) 13:22, 6 July 2012 (UTC)

Maybe old - see User:COIBot/Settings.css (I think). --Dirk Beetstra T C (en: U, T) 04:02, 9 July 2012 (UTC)
This should solve it. Done? --Dirk Beetstra T C (en: U, T) 04:04, 9 July 2012 (UTC)