Talk:Spam blacklist

From Meta, a Wikimedia project coordination wiki
The associated page is used by the MediaWiki Spam Blacklist extension, and lists regular expressions which cannot be used in URLs in any page in Wikimedia Foundation projects (as well as many external wikis). Any meta administrator can edit the spam blacklist; either manually or with SBHandler. For more information on what the spam blacklist is for, and the processes used here, please see Spam blacklist/About.
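For illustration, each blacklist entry is a regular expression tested against the external links in a saved edit. A minimal sketch of that matching logic (the extension's real implementation differs in detail; the entries below are examples, not the live list):

```python
import re

# Two entries in the style used on the blacklist (examples only).
blacklist = [r"\bexample\.com\b", r"\bco\.nr\b"]
pattern = re.compile("|".join(blacklist), re.IGNORECASE)

def is_blocked(url):
    """Return True if any blacklist entry matches the URL."""
    return bool(pattern.search(url))

assert is_blocked("http://www.example.com/page")
assert is_blocked("http://mysite.co.nr/")      # every .co.nr subdomain is caught
assert not is_blocked("http://example.org/")
```

Because entries are regexes rather than plain domains, a single rule can cover whole families of hosts, which is also why overly broad rules occasionally produce the false positives discussed further down this page.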
Proposed additions
Please provide evidence of spamming on several wikis and prior blacklisting on at least one. Spam that only affects a single project should go to that project's local blacklist. Exceptions include malicious domains and URL redirector/shortener services. Please follow this format. Please check back after submitting your report, as there may be questions regarding your request.
Proposed removals
Please check our list of requests which repeatedly get declined. Typically, we do not remove domains from the spam blacklist in response to site-owners' requests. Instead, we de-blacklist sites when trusted, high-volume editors request the use of blacklisted links because of their value in support of our projects. Please consider whether requesting whitelisting on a specific wiki for a specific use is more appropriate - that is very often the case.
Other discussion
Troubleshooting and problems - If there is an error in the blacklist (e.g. a regex error) which is causing problems, please raise the issue here.
Discussion - Meta-discussion concerning the operation of the blacklist and related pages, and communication among the spam blacklist team.
#wikimedia-external-links - Real-time IRC chat for co-ordination of activities related to maintenance of the blacklist.
Whitelists - There is no global whitelist, so if you are seeking the whitelisting of a URL on a particular wiki, please raise the matter on that wiki's MediaWiki talk:Spam-whitelist page, and consider using the template {{edit protected}} or its local equivalent to draw attention to your edit.

Please sign your posts with ~~~~ after your comment. This leaves a signature and timestamp so conversations are easier to follow.


Completed requests are marked as {{added}}/{{removed}} or {{declined}}, and are generally archived quickly. Additions and removals are logged · current log 2016/02.


snippet for logging
{{sbl-log|15347768#{{subst:anchorencode:SectionNameHere}}}}


Proposed additions

This section is for proposing that a website be blacklisted; add new entries at the bottom of the section, using the basic URL so that there is no link (example.com, not http://www.example.com). Provide links demonstrating widespread spamming by multiple users on multiple wikis. Completed requests will be marked as {{added}} or {{declined}} and archived.

several shocksites/screamers





1man1jar, the 1man1jar mirror (jarsquatter), and findminecraft (which is a screamer). Links are inappropriate for use. 96.237.20.248 19:12, 17 January 2016 (UTC)

@96.237.20.248: Can you please paste the domain names in a linksummary template; for the domain 'example.com' you would put them here (each on one line) as '{{tl|LinkSummary|example.com}}'. Note that just being inappropriate is not necessarily sufficient reason to blacklist them; we'd need evidence of abuse as well. Thanks! --Dirk Beetstra T C (en: U, T) 03:31, 18 January 2016 (UTC)

1man1jar.com and findminecraft.com (DO NOT CLICK); evidence of abuse here and pretty much just that. Cross out findminecraft.

circlemakers.org



This was blacklisted before on en.wikipedia (m:Wikipedia talk:WikiProject Spam/2015 Archive Jan 1), and has now been re-blacklisted (see m:MediaWiki talk:Spam-blacklist). However, this seems like a Joe-job (m:MediaWiki talk:Spam-blacklist/archives/October 2015 - it is spammed in the strangest of places) with a cross-wiki factor, and therefore here likely more of a global-filter request taking out the IP ranges that add this. --Dirk Beetstra T C (en: U, T) 03:41, 8 February 2016 (UTC)

The right links probably are w:en:Wikipedia_talk:WikiProject_Spam/2015_Archive_Jan_1#circlemakers.org and w:en:MediaWiki_talk:Spam-blacklist/archives/October_2015#circlemakers.org :-) Syum90 (talk) 08:56, 10 February 2016 (UTC)

no2factoronline.com



Spambots (w:Special:Undelete/User:Mardetahed, w:Special:Undelete/User:Bertramkroberts). MER-C (talk) 08:31, 11 February 2016 (UTC)

Added --Syum90 (talk) 09:48, 11 February 2016 (UTC)

drozforskolin.org



Spambot.--Syum90 (talk) 11:26, 11 February 2016 (UTC)

Added --Syum90 (talk) 08:10, 12 February 2016 (UTC)

bulkfatlosssolutions.com



Spambots (w:Special:Undelete/User:Jasnomith, w:Special:Undelete/User:Neahemesy, w:Special:Undelete/User:Amillerarthur). MER-C (talk) 07:45, 12 February 2016 (UTC)

Added --Syum90 (talk) 08:08, 12 February 2016 (UTC)

reviewanalysis.co.uk



Spambot.--Syum90 (talk) 10:23, 12 February 2016 (UTC)

Added --Herby talk thyme 12:47, 12 February 2016 (UTC)

bellaveiphytoceramides.org



Spambot.--Syum90 (talk) 08:01, 13 February 2016 (UTC)

Added --Herby talk thyme 08:43, 13 February 2016 (UTC)

thedropnet.com



Spambots (w:Special:Undelete/User:Ab3951214, w:Special:Undelete/User talk:Sen7402). MER-C (talk) 06:06, 14 February 2016 (UTC)

Added --Dirk Beetstra T C (en: U, T) 07:20, 14 February 2016 (UTC)

myfreegems.com



Spambots. MER-C (talk) 06:11, 14 February 2016 (UTC)

Added --Dirk Beetstra T C (en: U, T) 07:21, 14 February 2016 (UTC)

Proposed additions (Bot reported)

This section is for domains which have been added to multiple wikis as observed by a bot.

These are automated reports, please check the records and the link thoroughly, it may report good links! For some more info, see Spam blacklist/Help#COIBot_reports. Reports will automatically be archived by the bot when they get stale (less than 5 links reported, which have not been edited in the last 7 days, and where the last editor is COIBot).

Sysops
  • If the report contains links to less than 5 wikis, then only add it when it is really spam
  • Otherwise just revert the link-additions, and close the report; closed reports will be reopened when spamming continues
  • To close a report, change the LinkStatus template to closed ({{LinkStatus|closed}})
  • Please place any notes in the discussion section below the HTML comment

COIBot

The LinkWatchers report domains meeting the following criteria:

  • When a user mainly adds this link, and the link has not been used too much, and this user adds the link to more than 2 wikis
  • When a user mainly adds links on one server, and links on the server have not been used too much, and this user adds the links to more than 2 wikis
  • If ALL links are added by IPs, and the link is added to more than 1 wiki
  • If a small range of IPs have a preference for this link (but it may also have been added by other users), and the link is added to more than 1 wiki.
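The criteria above can be sketched as a single decision function. The thresholds, field names, and the 0.5 "mainly adds" cut-off below are illustrative assumptions, not COIBot's actual code (the per-server variant of the first criterion would work analogously):

```python
def should_report(user_share, link_total_uses, wikis_added,
                  all_adds_by_ips, ip_range_prefers_link):
    """Rough sketch of the XWiki reporting criteria listed above."""
    # A user mainly adding a little-used link, to more than 2 wikis
    if user_share > 0.5 and link_total_uses < 100 and wikis_added > 2:
        return True
    # ALL additions made by IPs, on more than 1 wiki
    if all_adds_by_ips and wikis_added > 1:
        return True
    # A small IP range with a preference for this link, on more than 1 wiki
    if ip_range_prefers_link and wikis_added > 1:
        return True
    return False

assert should_report(0.9, 10, 3, False, False)       # cross-wiki single-purpose account
assert not should_report(0.1, 5000, 1, False, False) # popular link, one wiki: no report
```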
COIBot's currently open XWiki reports
List · Last update · By · Site IP · R · Last user · Last link addition · User · Link · User - Link · User - Link - Wikis · Link - Wikis
adamsandler.com 2016-02-13 03:41:50 COIBot 205.178.189.131 R 14rosnering
Breogan2008
Edita Andriasyan
Grzegorzpolska1
Haku
HannaTheKitty12
Hgilmore96
Jack28
Jasiekzar
JasperandJH
M.
Matias Alves
Melody Concerto
Stellath
ابوهايدي
ابوهايدي
188.190.217.56
71.162.176.246
72.64.1.80
80.153.107.145
2016-02-13 03:23:46 28 9
ajaydevgn.com 2016-02-12 22:08:55 COIBot 75.126.102.239 R Hisashiyarouin
Hsik Dhaman
Rasheed143
Sameer Ahmad Abbasi
సుల్తాన్ ఖాదర్
2016-01-26 07:59:40 14 4
amassing2.sakura.ne.jp 2016-02-13 23:46:05 COIBot 182.48.12.134 R 112.69.224.134
119.230.252.246
121.87.35.13
153.187.126.97
180.145.53.125
2016-01-31 13:45:41 60 18
atheism.110mb.com 2016-02-12 00:36:03 COIBot 104.131.61.33 K3v1n2015
Linda fletcher
190.149.31.85
190.56.91.196
98.175.76.130
2016-02-01 17:23:30 26 8
bgtfans.com 2016-02-12 07:50:35 COIBot 104.27.165.210 R SSTflyer 2016-01-26 15:07:40 1836 85 0 0 3
briefstories.blogspot.com 2016-02-14 04:14:23 COIBot 173.194.207.132 R Penwritings
Zayn al aziz
2016-01-27 03:20:01 9 6
chartmasters.org 2016-02-14 01:40:43 COIBot 162.144.12.127 MJDangerous 2016-02-10 02:29:12 123 117 0 0 2
chinatownmainstreet.org 2016-02-11 23:39:12 COIBot 198.71.56.118 R Gastón Cuello
Rider ranger47
72.82.149.82
2016-02-11 14:55:10 7 2
cms.dsusd.k12.ca.us 2016-02-14 01:45:49 COIBot 216.33.93.214 Nis Hoff 2016-02-11 13:03:08 1659 9 0 0 2
codeacademy.com 2016-02-14 10:56:54 COIBot 23.21.104.100 R Ha1230
Imirak
Kevlar33
Kompowiec2
2016-01-20 11:14:53 36 3
collegetimes.tv 2016-02-14 09:11:17 COIBot 104.28.30.103 R Ronz 2016-02-03 00:51:23 1355 145 0 0 4
cyclingup.eu 2016-02-13 16:58:59 COIBot 31.186.169.163 WouterEde 2016-02-14 10:04:18 15 15 0 0 2
davidsmeaton.com 2016-02-13 11:54:15 COIBot 162.255.119.253 Poom40234 2016-01-30 12:03:21 96 5 0 0 4
extraholidays.com 2016-02-13 03:29:56 COIBot 69.20.46.60 R 167.124.124.21
50.195.72.217
2016-02-11 13:59:37 5 2
fifedirect.org.uk 2016-02-14 12:39:44 COIBot 194.247.90.31 R Open-box
PP&PT2014
2016-02-13 07:56:50 79 8
footloosemovie.com 2016-02-13 10:07:28 COIBot 206.220.43.92 MOSIOR 2016-02-07 17:20:58 410 6 0 0 4
fortgreene.patch.com 2016-02-14 04:03:47 COIBot 50.56.4.27 R Stephen.w-wiki 2015-12-17 21:24:54 115 2 0 0 2
hadassah-med.com 2016-02-10 06:46:12 COIBot 194.90.237.150 77.11.17.236 2016-02-08 12:19:55 16 33 0 0 9
hddvd.highdefdigest.com 2016-02-14 03:17:52 COIBot 98.158.194.195 2602:306:33C5:2C90:28B9:AF12:5127:BBA1
David.moreno72
Hy Brasil
2016-02-11 09:40:58 16 2
image1.spreadshirtmedia.net 2016-02-12 22:12:57 COIBot 23.218.91.24 R 213.127.251.69
213.60.135.82
2016-01-26 08:30:19 35 3
inler.ch 2016-02-13 03:24:16 COIBot 217.26.54.18 R Auslaender
Williamluk atletico
2016-02-04 10:24:47 8 7
jackassmovie.com 2016-02-13 14:11:32 COIBot 206.220.43.92 MOSIOR 2016-02-07 18:00:40 410 7 0 0 6
jesusmarie.com 2016-02-13 00:06:32 COIBot 217.70.184.38 R Factory
L'Ariadna
Ludovica1
Reychstan
2016-02-01 08:59:41 6 3
larsloekke.dk 2016-02-13 13:25:09 COIBot 104.18.39.26 R Grey-Fox
Kenti-002
85.81.225.254
2016-02-06 10:11:40 12 7
learnfree.me 2016-02-14 04:40:35 COIBot 104.18.32.178 R Shinkazamaturi
114.109.190.245
2016-02-02 14:38:32 6 2
livermore.patch.com 2016-02-13 19:38:10 COIBot 50.56.4.27 R Namoroka
জয় দাশ
2016-01-05 19:44:21 2 2
m.twitter.com 2016-02-13 07:41:51 COIBot 199.16.156.200 R Benny De Junior Makamu
Kishore boss
NOD32user
Pablo D'Conceição
Rajashekar B Official
SkymasterUK
WenMus
WindraQadriR
Wxzbb
114.125.191.121
8.37.225.210
2016-02-07 09:38:07 48 8
mary-mary.com 2016-02-14 01:21:46 COIBot 104.16.40.155 R Rxy 2016-01-29 11:46:00 14064 6 0 0 2
oligarh.org.ua 2016-02-13 14:12:30 COIBot 93.158.216.195 94.248.53.50 2016-02-13 16:10:29 208 15 0 0 2
pixabay.com 2016-02-13 23:40:40 COIBot 78.47.54.114 2003:6C:CD58:B900:30C0:4D63:F37B:3C95
2A02:2F0B:B04F:FFFF:0:0:BC1A:CBA0
2A02:2F0B:B0CF:FFFF:0:0:BC19:BD94
34 super héros
AdriennePage776
Ahmadvei
AlejandraMatney
Alfonso493
AngelitaLawlor
Apps.educ
Arnulfo42T
BabbaQ
Beau-Holdsworth
Biotecnologo Loco
BobPattondvarnb
CheriVentimiglia
Clusternote
Crazy1880
DanialTrimble
David Condrey
DeboraWilkie9
DougLtc673
EONUrsula70
EliMesser061985
GORIGORU
GenevieChristy
Geo Swan
GerardoX09
Hans Braxmeier
Harvey Stillnot
Heldermpel8
IslaHardiman
JanKeenanqwtqk
Jdrobbie
Jean-Philippe Fleury
JewelMeans
KristyIfpsqv
LilyDriscoll
LisaMChapman
MAKY.OREL
Mel-Merel
MelbaBickford28
MirtaBreedlove
MisterSanderson
MonteRiggins
Pirannu2311
Rauchg13
Sporti
Sun897
Tarmsand
Wayne66Tebltttwg
Whalestate
180.183.70.18
217.174.154.171
79.242.26.53
83.38.172.175
2016-01-31 05:26:54 215 26
pleasanton.patch.com 2016-02-13 21:54:47 COIBot 50.56.4.27 R M Todorovic 2016-01-28 14:58:31 613 3 0 0 2
segurancapublica.net 2016-02-12 23:56:46 COIBot 69.89.31.76 R BOPEfollower
Vauvout
85.186.19.222
2016-02-03 17:09:38 14 2
sipl.com.ua 2016-02-14 04:59:23 COIBot 193.169.188.224 109.237.81.38
178.54.94.22
194.31.46.63
217.76.194.218
94.248.31.168
94.248.49.220
2016-02-13 18:36:59 9 2
todayinsport.com 2016-02-14 03:21:05 COIBot 104.28.26.96 R Discographer
Tennisvine
2016-01-16 00:02:58 18 4
towson.patch.com 2016-02-14 01:37:26 COIBot 50.56.4.27 R 123home123
TheFrog001
龍伯
2016-02-07 13:37:52 14 2
tweetdeck.com 2016-02-14 01:19:41 COIBot 199.59.148.21 R Metamorforme42
摩茲拉
2016-01-10 21:40:21 6 4
upperstclair.patch.com 2016-02-14 10:43:35 COIBot 50.56.4.27 R Oscar1losada 2015-12-29 16:05:01 404 6 0 0 2
woopsa.org 2016-02-14 10:37:22 COIBot 185.31.17.133 193.134.219.68
31.164.124.125
2016-02-10 08:56:21 7 3
www400.jimdo.com 2016-02-13 05:19:35 COIBot 52.18.92.52 R Johndoe971
Sebastian lima
Сергей ДЯГИЛЕВ МЛ
2016-02-02 21:43:42 8 4
youtube.cl 2016-02-14 03:37:00 COIBot 173.194.207.93 R Rodm23 2016-01-18 01:21:40 91 3 0 0 2
youtube.fr 2016-02-14 08:13:07 COIBot 173.194.207.190 R Miss Kan
Salegosse77
Suwa
Voltcorp
89.225.79.71
2016-02-05 00:23:10 13 2

Proposed removals

This section is for proposing that a website be unlisted; please add new entries at the bottom of the section.

Remember to provide the specific domain blacklisted, links to the articles they are used in or useful to, and arguments in favour of unlisting. Completed requests will be marked as {{removed}} or {{declined}} and archived.

See also /recurring requests for repeatedly proposed (and refused) removals.

Notes:

  • The addition or removal of a domain from the blacklist is not a vote; please do not bold the first words in statements.
  • This page is for the removal of domains from the global blacklist, not for removal of domains from the blacklists of individual wikis. For those requests please take your discussion to the pertinent wiki, where such requests would be made at Mediawiki talk:Spam-blacklist at that wiki. Search spamlists — remember to enter any relevant language code

syriadirect.org



This site offers good independent information about the Syrian civil war and it doesn't contain spam or anything like that. Honestly, I don't even know why it was put on the list in the first place but I would like to see it on the green so people can use it as reference in their articles about this conflict.

Seems to be collateral damage for a regex response to spam. We can probably do a lookbehind regex fix for this.  — billinghurst sDrewth 09:01, 18 December 2014 (UTC)

bergspider.net



Sir, 3 months ago my website was blacklisted. I don't know much about wiki policies; that is why I made a mistake. Now I know the wiki policies. Kindly help me to remove my site from the blacklist. My site's name is Bergspider.net —The preceding unsigned comment was added by Thomasfan11‎ (talk)

Declined nothing to do, not globally blacklisted. This was only blacklisted at English Wikipedia, you will need to take your request to w:Mediawiki talk:Spam-blacklist and ask there.  — billinghurst sDrewth 13:08, 10 August 2015 (UTC)

trevorcook.typepad.com - The Magazine of Jackson Wells Morris



I don't quite understand why the link for The Wells: Magazine of Jackson Wells Morris is being blocked/blacklisted. I need to use it as a reference, as this is one of the good sources that I have besides the newsletter from trevorcook.typepad.com. —The preceding unsigned comment was added by Shazrinarmk2015 (talk)

Hmm, is this even blocked? http://trevorcook.typepad.com. --Dirk Beetstra T C (en: U, T) 08:46, 6 October 2015 (UTC)
Indeed, does not seem blocked. --Dirk Beetstra T C (en: U, T) 08:47, 6 October 2015 (UTC)

[23:24] <sDrewth> findrules typepad
[23:24] <COIBot> 1: [global] \bsheetalasingh\.typepad\.com\b (sheetalasingh.typepad.com )
[23:24] <COIBot> 2: [w:en (bl)] \bhotgigs(?:\.typepad)?\.com\b (hotgigs(?:.typepad)?.com )
[23:24] <COIBot> 3: [w:en (rl)] \btypepad\.com\b (typepad.com )
[23:24] <COIBot> 4: [w:ar (bl)] \bhotgigs\.typepad\.com\b (hotgigs.typepad.com )
[23:24] <COIBot> 5: [w:bs (bl)] \bhotgigs(?:\.typepad)?\.com\b (hotgigs(?:.typepad)?.com )
[23:24] <COIBot> 6: [w:sq (bl)] \bhotgigs(?:\.typepad)?\.com\b (hotgigs(?:.typepad)?.com )
[23:24] <COIBot> 7: [w:ta (bl)] \bhotgigs(?:\.typepad)?\.com\b (hotgigs(?:.typepad)?.com )
[23:24] <COIBot> 8: [w:ur (bl)] \bhotgigs(?:\.typepad)?\.com\b (hotgigs(?:.typepad)?.com )
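The `(?:\.typepad)?` group in several of those rules makes the `.typepad` part optional, so one entry covers both the bare domain and its Typepad subdomain. A quick check in Python:

```python
import re

# One of the hotgigs rules from the findrules output above.
rule = re.compile(r"\bhotgigs(?:\.typepad)?\.com\b")

assert rule.search("http://hotgigs.com/jobs")             # bare domain caught
assert rule.search("http://hotgigs.typepad.com/blog")     # typepad subdomain caught
assert not rule.search("http://trevorcook.typepad.com/")  # other typepad blogs pass
```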

Declined nothing to do, not blacklisted  — billinghurst sDrewth 12:27, 25 January 2016 (UTC)

@Shazrinamk2015: I found the problem (digging through the blacklist logs), you tried linking to www.google.com.sg/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&ved=0CCIQFjABahUKEwiL7YG-7qDIAhXPGY4KHeO1B8U&url=http://trevorcook.typepad.com/weblog/files/the_well_newsletter.pdf&usg=AFQjCNHH5AFBzHkmOK_q2UemE-IW6CAgbA&sig2=zrXBfWk5x1-foPGU_c3L-g (the google redirect to http://trevorcook.typepad.com/weblog/files/the_well_newsletter.pdf). You likely copied and pasted the original link from a google search result page. Please use the direct link, not the google redirect. --Dirk Beetstra T C (en: U, T) 12:34, 25 January 2016 (UTC)
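The underlying target of such a Google redirect sits in its `url` query parameter, so it can be recovered mechanically. A sketch using a hypothetical redirect link of the same shape (not the actual link from the report above):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical Google redirect of the same shape as the one discussed above.
redirect = ("https://www.google.com.sg/url?sa=t&rct=j"
            "&url=http://example.com/files/newsletter.pdf&usg=AFQjCNH0")

# The real destination is carried in the "url" query parameter.
target = parse_qs(urlparse(redirect).query)["url"][0]
print(target)  # http://example.com/files/newsletter.pdf
```

Pasting the extracted target instead of the redirect avoids tripping rules that blacklist the Google redirector itself.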

cosplay.de





cosplay.de and cosplay.com are on the spam blacklist. It seems this is because someone was actually spamming this link. Sadly this causes dcm-cosplay.de to get blocked. dcm-cosplay.de is the website of the German cosplay championship. Can cosplay.de get unblocked or dcm-cosplay.de whitelisted? --Y-93 (talk) 21:30, 20 October 2015 (UTC)

I will adapt the rule a bit. --Dirk Beetstra T C (en: U, T) 11:39, 22 October 2015 (UTC)
I adapted the rule, http://dcm-cosplay.de should now save properly. --Dirk Beetstra T C (en: U, T) 11:55, 22 October 2015 (UTC)
Works. Thanks. --Y-93 (talk) 20:47, 26 October 2015 (UTC)
Declined modified Not removed, instead an adaptation to the regex has been applied.  — billinghurst sDrewth 05:53, 24 January 2016 (UTC)

pv-magazine.com



I tried to use http://www.pv-magazine.com/news/details/beitrag/spain-to-link-mallorca--ibiza-_100017146 as a reference, but it was blocked. Why? --Gunnar.Kaestle (talk) 18:35, 29 November 2015 (UTC)

@Gunnar.Kaestle: You just added the link .. so I don't think that it was blocked by the meta blacklist (maybe local?). --Dirk Beetstra T C (en: U, T) 08:54, 30 November 2015 (UTC)
Declined nothing to be done, local issue only  — billinghurst sDrewth 06:00, 24 January 2016 (UTC)

bestmusic.ro



I would like to use [bestmusic.ro/dan-spataru/biografie-dan-spataru/ this] as a reference on this wiki page, but according to this wiki page, it seems to be blocked by the global block \bbestmusic\.ro\b. I cannot find a reason for it, and I would still like to use it as a reference. All Other Usernames Are Bloody Taken (talk) 19:43, 1 January 2016 (UTC)

@All Other Usernames Are Bloody Taken: This link was very spammed before being blacklisted, see w:en:Special:Permalink/172890016#www.bestmusic.ro. Local whitelisting is a more proper way in this case for now.--Syum90 (talk) 08:55, 11 January 2016 (UTC)
It is seven years old, which is an age ago. I would suggest starting with a local whitelisting of the domain name; if that is successful and no spam occurs, then we can look at a global removal.  — billinghurst sDrewth 06:14, 24 January 2016 (UTC)

traveloka.com



This is a legit tech start-up website of id:Traveloka. Back in 2013, maybe when they were expanding, someone used their company website URL to spam and the URL got onto the blacklist. Kenrick95 (talk) 16:03, 3 January 2016 (UTC)

Second this. Spambots are no longer using this link for spam purposes, so I don't think there's enough reason to let it in the list anymore. Muhraz (talk) 16:05, 3 January 2016 (UTC)
Third. I think they didn't mean to spam on Wikimedia projects, so please remove the URL from the list. Thank you. RaymondSutanto (talk) 16:14, 3 January 2016 (UTC)

Hmm, this was beyond 'when they're expanding someone used their company website URL to spam' .. en:Wikipedia:Sockpuppet_investigations/Lauriejackpot1/Archive - that appeared to be quite large scale. I am uncomfortable with global delisting at this time, as this request seems to be from only one Wiki (id). May I suggest local whitelisting first? --Dirk Beetstra T C (en: U, T) 06:16, 4 January 2016 (UTC)

Done. Muhraz (talk) 11:55, 6 January 2016 (UTC)
Declined managed by local whitelist  — billinghurst sDrewth 06:01, 24 January 2016 (UTC)

google.at



The following link has been blocked, but I cannot see any reason, as it is a link to a speech of an honoured person in a democratic environment, without any harm to anybody: google.at/url?sa=t&rct=j&q=&esrc=s&source=web&cd=19&cad=rja&uact=8&ved=0ahUKEwisrvOWu5XKAhVMCBoKHejCBGg4ChAWCEgwCA&url=http://zarja.at/images/galerija/000308.doc&usg=AFQjCNETs8j-LElzmGKz4jLH9DibNTYNyA&sig2=LUk5dDdpPUj57ncC_-11lA&bvm=bv.110151844,d.d24 Could this be unblocked please? Thanks, user: Bojan2005, 06.01.2016, 17:29 (CET)

@Bojan2005: You are linking to a google search result link, not to the actual link you likely want: http://zarja.at/images/galerija/000308.doc. --Dirk Beetstra T C (en: U, T) 03:18, 7 January 2016 (UTC)
Declined google search blocked, which is a known abusing methodology, direct linking will resolve matter  — billinghurst sDrewth 06:04, 24 January 2016 (UTC)
Does this filter block books.google.tld as well? I think there are many of these links in use, and they are kind of useful. —MisterSynergy (talk) 16:09, 27 January 2016 (UTC)

co.nr



The entire top-level domain .co.nr is being blocked by a rule: \bco\.nr\b. This is causing me problems on DoomWiki.org as we use the SpamBlacklist extension. --QuasarEE (talk) 15:57, 28 January 2016 (UTC)

Hi!
co.nr is blocked because otherwise it could be used to circumvent the blacklist. So if you need to link to a specific domain or many subdomains of co.nr in your wiki, then please use your local whitelist. -- seth (talk) 21:13, 29 January 2016 (UTC)

currencyliquidator.com



I want to request the removal of currencyliquidator.com from the Wikipedia blacklist. I am going to contribute the currency exchange rates from this website on the Iranian rial page. This website provides the latest exchange rates, which even the other websites listed in the template on that page have never been able to provide. I have gone through the website's exchange rates and they are the latest ones. So kindly remove currencyliquidator from the blacklist. Thanks.—The preceding unsigned comment was added by LettyAbs (talk)

@LettyAbs: Currency exchange rates are available from elsewhere as well, no need for using this specific site in general. For specific links, please request whitelisting on the specific wiki where you want to use them. Thanks. --Dirk Beetstra T C (en: U, T) 09:24, 19 January 2016 (UTC)
Declined  — billinghurst sDrewth 06:22, 24 January 2016 (UTC)

francescogeminiani.com



has been suspected to be spam in 2008. It doesn't look like spam to me. -- seth (talk) 22:24, 13 February 2016 (UTC)

I'd say it was spammed back then. Looking at the page, I'm not sure it is useful for our projects. It looks like it hosts commercials, such as at http://www.francescogeminiani.com/volumes.php or http://www.francescogeminiani.com/practical.php.
Not sure if it is worth de-blacklisting. —MarcoAurelio 22:40, 13 February 2016 (UTC)
Definitely spammed back then (commercial site being added to many pages throughout - the page mainly exists to sell the material, not to provide Wikipedia with much information about the subject). But it is 8 years since, and I don't see much harm in removing it (and it can always be re-added - the bots will monitor). --Dirk Beetstra T C (en: U, T) 03:24, 14 February 2016 (UTC)
Hi!
There was a local request (w:de:WP:SBL#francescogeminiani.com) for linking to francescogeminiani.com/catalogue/catalogue.php which can probably be used as a reference.
The SBL log (since 2013-09) says that, apart from the failed link additions of the requesting user at WP:SBL and at w:de:Francesco Geminiani, in the largest 20 wiki projects there have been just 2 failed link additions:
2015-02-06, www.francescogeminiani.com, 146.90.87.62; w:en:Catalogues_of_classical_compositions (two times)
So as I understand your answers, we can give it a try (and don't need to whitelist the domain at dewiki), right? -- seth (talk) 11:45, 14 February 2016 (UTC)
Removed Dirk Beetstra T C (en: U, T) 11:52, 14 February 2016 (UTC)

Troubleshooting and problems

This section is for comments related to problems with the blacklist (such as incorrect syntax or entries not being blocked), or problems saving a page because of a blacklisted link. This is not the section to request that an entry be unlisted (see Proposed removals above).

derefer.unbubble.eu deblock





This authority is used 24,923 times in main space on dewiki! It is used to clean up Special:Linksearch from known dead links, by redirecting them over this authority. It is hard to find a better solution for this task. --Boshomi (talk) 16:38, 24 July 2015 (UTC) Ping: User:Billinghurst Boshomi (talk) 16:49, 24 July 2015 (UTC)

Please notice Phab:T89586; while it is not fixed, it is not possible to find the links with the standard Special:LinkSearch. On dewiki we can use giftbot/Weblinksuche instead. --Boshomi (talk) 18:04, 24 July 2015 (UTC)
afaics derefer.unbubble.eu could be used to circumvent the SBL, is that correct? -- seth (talk) 21:30, 24 July 2015 (UTC)
I don't think so; the redirected URL is unchanged, so the SBL works like the archive URLs to the Internet Archive. --Boshomi (talk) 07:44, 25 July 2015 (UTC)
It is not a stored/archived page at archive.org, it is a redirect service as clearly stated at the URL and in that it obfuscates links. To describe it in any other way misrepresents the case, whether deWP uses it for good or not. We prevent abuseable redirects from other services due to the potential for abuse. You can consider whitelisting the URL in w:de:MediaWiki:spam-whitelist if it is a specific issue for your wiki.  — billinghurst sDrewth 10:09, 25 July 2015 (UTC)
What I wanted to say is that the SBL mechanism works in the same way as with web.archive.org/web: a URL that is blocked on its own will still be blocked when the unbubble prefix is prepended to it. --Boshomi (talk) 12:54, 25 July 2015 (UTC)

Unblocking YouTube's redirection and nocookie domains



\byoutube\.com/.*(?:tqedszqxxzs|XePjp-H3TBI|khM48EQyVdc|A4jgXQQns8A|oVBOnv\-xrEY)\b
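That entry blocks only youtube.com URLs containing one of five specific video IDs, not the domain as a whole. A quick demonstration, assuming the rule is applied as a plain regex against the URL:

```python
import re

# The blacklist entry quoted above, as a Python regex.
rule = re.compile(
    r"\byoutube\.com/.*(?:tqedszqxxzs|XePjp-H3TBI|khM48EQyVdc"
    r"|A4jgXQQns8A|oVBOnv\-xrEY)\b")

assert rule.search("https://www.youtube.com/watch?v=tqedszqxxzs")       # listed ID: blocked
assert not rule.search("https://www.youtube.com/watch?v=dQw4w9WgXcQ")   # other videos pass
```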

Just out of curiosity I checked this list, and the entries seem to be pretty obsolete:

https://www.voutube.com/watch?v=tqedszqxxzs Dieses Video ist nicht verfügbar.

https://www.voutube.com/watch?v=XePjp-H3TBI vitruvian man 1 - Leo's it text.wmv

https://www.voutube.com/watch?v=khM48EQyVdc Dieses Video ist nicht verfügbar. (2011 Hunter Mariner)

https://www.voutube.com/watch?v=A4jgXQQns8A Unknown and 44" Hunter Baker Street ceiling fans

https://www.voutube.com/watch?v=oVBOnv-xrEY 48" Hunter Summer Breeze ceiling fan (^-replace voutube with youtube) so someone with write access please remove the whole line (as well as this entry here). Also pretty strange how these came to the list. Are these just example entries? Djamana (talk) 23:52, 2 February 2016 (UTC)





Apparently youtu(dot)be and youtube-nocookie(dot)com, both of which are official YouTube domains owned by Google, are on this blacklist. For over ten years, the SpamBlacklist MediaWiki extension has loaded this blacklist on third-party wikis, big and small. This is quite an issue for third-party sites such as ShoutWiki, a wiki farm, since SpamBlacklist doesn't currently have the concept of "shared" whitelists — blacklists can be shared (loaded from a remote wiki), whitelists cannot. Given that the main YouTube domain isn't blocked, and also that YouTube itself hands out youtu(dot)be links, I don't think that "but it's a redirecting service" is a valid argument against it, and therefore I'd like to propose removing these two entries from the blacklist. --Jack Phoenix (Contact) 23:17, 29 August 2015 (UTC)

There are several links on YouTube blacklisted here on Meta, as well as many, many on local wikis. YouTube has videos that get spammed, and there are videos that should simply not be linked to. Leaving the redirects open means that not only the youtube.com link needs to be blacklisted, but also every redirect to those links. That gives either extra work to the blacklisting editors, or leaves an easy back-door open. On wikis it leaves more material to check. Add to that that redirect services are simply never needed - there is always an alternative. Additionally, Wikipedia has its built-in redirect service which also works (I mean templates, like {{youtube}}).
That there is no meta-analogue of the whitelist is a good argument to push that request of years ago to re-vamp the spam-blacklist system through and have the developers focus on features that the community wants, and certainly not an argument for me to consider not to blacklist something. Moreover, I do not think that the argument that it hampers third-party wikis is an argument either - they choose to use this blacklist, they could alternatively set up their own 'meta blacklist' that they use, copy-pasting this blacklist and removing what they do not want/need.
The problem exists internally as well, certain of our Wikifarms do allow for certain spam, which is however inappropriate on the rest of the wikifarms, and on the majority by far (in wiki-volume) of the wikis. That also needs a rewriting of the spam-blacklist system, which is crude, too difficult. A light-weight edit-filter variety, specialised on this would be way more suitable. --Dirk Beetstra T C (en: U, T) 04:05, 30 August 2015 (UTC)
  • Oppose unblocking for the reasons given above. Stifle (talk) 08:32, 21 October 2015 (UTC)
Declined  — billinghurst sDrewth 06:22, 24 January 2016 (UTC)
youtu.be can only be used for youtube.com, so it's not a redirecting service; remove it from the blacklist. If you need to block a certain YT video (which, btw, I consider a little stupid), just update that entry and include youtube.com as well as youtu.be; that's it.
Djamana (talk) 20:08, 2 February 2016 (UTC)
@Djamana: Why do you consider the blocking of a specific YouTube video a little stupid? --Dirk Beetstra T C (en: U, T) 04:11, 3 February 2016 (UTC)

Partial matches: <change.org> blocks <time-to-change.org.uk>





I tried to add a link to <time-to-change.org.uk>, and was told that I couldn't add the link, as <change.org> was blacklisted. Is this partial-match blacklisting (based, I guess, on an incorrect interpretation of URL specifications) a known bug? Cheers. --YodinT 15:46, 21 October 2015 (UTC)

This is more of a limitation to the regex, we tend to blacklist '\bchange\.org\b', but a '-' is also a 'word-end' (the \b). I'll see if I can adapt the rule. --Dirk Beetstra T C (en: U, T) 07:46, 22 October 2015 (UTC)
change.org is not here, it is on en.wikipedia. That needs to be requested locally and then resolved there. --Dirk Beetstra T C (en: U, T) 07:48, 22 October 2015 (UTC)
Thanks for looking into this; is it worth replacing the regexes globally to fit URL specs? I'm sure I'm not the only one who will ever be/have been affected. --YodinT 11:27, 22 October 2015 (UTC)
@Yodin: Sorry, but there are no global regexes to replace, change.org is only blacklisted on en.wikipedia. You'll have to request a change on en:MediaWiki talk:Spam-blacklist (so there is a local request to do the change, then I or another en.wikipedia admin will implement it there). --Dirk Beetstra T C (en: U, T) 11:38, 22 October 2015 (UTC)
Thanks Dirk; just read this (sorry for the repeat on regexes there!). Isn't the main blacklist here also using '\bexample\.com\b'? I can come up with the general case regex if you like! --YodinT 11:44, 22 October 2015 (UTC)
You mean for every rule to exclude the '<prefix>-' case (i.e. put '(?<!-)' before every rule in the list)? Well, some of them are meant to catch all '<blah>-something.com' sites, so that is difficult. And then there are other combinations which sometimes catch as well. It is practically impossible to rule out every false positive. --Dirk Beetstra T C (en: U, T) 12:01, 22 October 2015 (UTC)
I see... much more complicated in practice than I thought. My idea was to apply it to a wider class of false positives, including the '<prefix>-' rule and more, by replacing "\b" with a regex rule which covers all and only the unreserved URI characters (upper & lowercase letters, decimal digits, hyphen, underscore, and tilde; with "dots" used in practice as delimiters). But this wouldn't cover the '<blah>-something.com' examples you gave, and having read some of the maintenance thread below which covers false positives, I won't try to press the issue! Maybe one day? Until then, I hope this goes well! Cheers for your work! --YodinT 12:26, 22 October 2015 (UTC)
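A sketch of what that boundary replacement could look like (the helper is hypothetical, and as noted above it would not cover the '<blah>-something.com' rules):

```python
import re

# Hypothetical helper: define the rule's boundaries over the URI
# 'unreserved' characters (letters, digits, '-', '_', '~') instead of
# '\b', so a hyphenated prefix no longer ends the match.  Dots still
# act as delimiters, so e.g. 'change.org.uk' would still match.
UNRESERVED = r'A-Za-z0-9_~-'

def strict_rule(domain):
    escaped = re.escape(domain)
    return re.compile(rf'(?<![{UNRESERVED}]){escaped}(?![{UNRESERVED}])')

rule = strict_rule('change.org')
```

With this, the rule no longer matches 'time-to-change.org.uk' but still matches 'www.change.org'.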
@Yodin: If the foundation finally decides that it is time to solve some old bugzilla requests (over other developments, which sometimes find fierce opposition), among them the ones regarding an overhaul of the spam-blacklist system, then these would be nice 'feature requests' for that overhaul. In a way, stripping the edit-filter down to pure regex matching 'per rule', with some other options added (having a regex apply to one page or set of pages; having a regex excluded on one page only; having whitelist requests added to the blacklist rule they affect; whitelisting on one page or set of pages; etc.), would be a great improvement to this system. --Dirk Beetstra T C (en: U, T) 14:25, 23 October 2015 (UTC)
Closed nothing to do, a block at enWP, nothing global.  — billinghurst sDrewth 09:45, 22 November 2015 (UTC)

Discussion

This section is for discussion of Spam blacklist issues among other users.

Expert maintenance

One (soon to be archived) rejected removal suggestion was about jxlalk.com, which is matched by a filter intended to block xlalk.com. One user suggested that this side effect might be as it should be, another suggested that regular expressions are unable to distinguish these cases, and nobody has a clue when and why xlalk.com was blocked. I suggest finding an expert maintainer for this list, and removing all blocks older than 2010. The bots identifying abuse will restore still-needed ancient blocks soon enough, hopefully without any oogle-matching-google cases. –Be..anyone (talk) 00:50, 20 January 2015 (UTC)

No. Removing some of the old rules, from before 2010 or even before 2007, will result in further abuse: some of the rules are intentionally wide so as to stop a wide range of spamming behaviour, and, as I have argued as well, I have 2 cases on my en.wikipedia list where companies have been spamming for over 7 years, have some of their domains blacklisted, and are still actively spamming related domains. Every single removal should be considered on a case-by-case basis. --Dirk Beetstra T C (en: U, T) 03:42, 20 January 2015 (UTC)
Just to give an example to this - redirect sites have been, and are, actively abused to circumvent the blacklist. Some of those were added before the arbitrary date of 2010. We are not going to remove those under the blanket of 'having been added before 2010', they will stay blacklisted. Some other domains are of similar gravity that they should never be removed. How are you, reasonably, going to filter out the rules that never should be removed. --Dirk Beetstra T C (en: U, T) 03:52, 20 January 2015 (UTC)
By the way, you say ".. intended to block xlalk.com .." .. how do you know? --Dirk Beetstra T C (en: U, T) 03:46, 20 January 2015 (UTC)
I know that nobody would block icrosoft.com if what they mean is microsoft.com, or vice versa. It's no shame to have no clue about regular expressions, a deficit we apparently share. :tongue: –Be..anyone (talk) 06:14, 20 January 2015 (UTC)
I am not sure what you are referring to - I am not native in regex, but proficient enough. The rule was added to block, at least, xlale.com and xlalu.com (if it were ONLY these two, \bxlal(u|e)\.com\b or \bxlal[ue]\.com\b would have been sufficient), but it is impossible to find this far back what else was spammed; possibly xlali.com, xlalabc.com and abcxlale.com were also abused by these proxy-spammers. --Dirk Beetstra T C (en: U, T) 08:50, 20 January 2015 (UTC)
xlalk.com may have been one of the cases, but one rule that was blacklisted before this blanket was imposed was 'xlale.com' (xlale.com rule was removed in a cleanout-session, after the blanket was added). --Dirk Beetstra T C (en: U, T) 04:45, 20 January 2015 (UTC)
The dots in administrative domains and DNS mean something, notably foo.bar.example is typically related to an administrative bar.example domain (ignoring well-known exceptions like co.uk etc., Mozilla+SURBL have lists for this), while foobar.example has nothing to do with bar.example. –Be..anyone (talk) 06:23, 20 January 2015 (UTC)
I know, but I am not sure how this relates to this suggested cleanup. --Dirk Beetstra T C (en: U, T) 08:50, 20 January 2015 (UTC)
If your suggested clean-ups at some point no longer match jxlalk.com, the request by a Chinese user would be satisfied. As noted, all I found out is a VirusTotal "clean"; it could still be a spam site, if it ever was one.
The regexp could begin with "optionally any string ending with a dot" or similar before xlalk. There are "host name" RFCs (LDH: letter-digit-hyphen) up to IDNAbis (i18n domains); they might contain recipes. –Be..anyone (talk) 16:56, 20 January 2015 (UTC)
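One way to spell out that "optional labels ending with a dot" recipe, assuming the intent is to keep matching xlalk.com and its subdomains while freeing jxlalk.com:

```python
import re

# Optional dotted labels before the domain: matches xlalk.com and
# sub.xlalk.com, but not jxlalk.com, because the '\b' before the
# optional labels prevents the match from starting inside 'jxlalk'.
rule = re.compile(r'\b(?:[\w-]+\.)*xlalk\.com\b')

assert rule.search('http://jxlalk.com/') is None   # no partial match
assert rule.search('http://xlalk.com/')
assert rule.search('http://foo.xlalk.com/')
```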
What suggested cleanups? I am not suggesting any cleanup or blanket removal of old rules. --Dirk Beetstra T C (en: U, T) 03:50, 21 January 2015 (UTC)
  • I have supported delisting above, having researched the history, posted at Talk:Spam_blacklist/About#Old_blacklisting_with_scanty_history. If it desired to keep xlale.com and xlalu.com on the blacklist (though it's useless at this point), the shotgun regex could be replaced with two listings, easy peasy. --Abd (talk) 01:42, 21 January 2015 (UTC)
    As I said earlier, are you sure that it is only xlale and xlalu, those were the two I found quickly, there may have been more, I do AGF that the admin who added the rule had reason to blanket it like this. --Dirk Beetstra T C (en: U, T) 03:50, 21 January 2015 (UTC)
Of course I'm not sure. There is no issue of bad faith. He had reason to use regex, for two sites, and possibly suspected additional minor changes would be made. But he only cited two sites. One of the pages was deleted, and has IP evidence on it, apparently, which might lead to other evidence from other pages, including cross-wiki. But the blacklistings themselves were clearly based on enwiki spam and nothing else was mentioned. This blacklist was the enwiki blacklist at that time. After enwiki got its own blacklist, the admin who blacklisted here attempted to remove all his listings. This is really old and likely obsolete stuff. --Abd (talk) 20:07, 21 January 2015 (UTC)
3 at least. And we do not have to present a full case for blacklisting (we often don't, per en:WP:BEANS and sometimes privacy concerns), we have to show sufficient abuse that needs to be stopped. And if that deleted page was mentioned, then certainly there was reason to believe that there were cross-wiki concerns.
Obsolete, how do you know? Did you go through the cross-wiki logs of what was attempted to be spammed? Do you know how often some of the people active here are still blacklisting spambots using open proxies? Please stop with these sweeping statements until you have fully searched for all evidence. 'After enwiki got its own blacklist, the admin who blacklisted here attempted to remove all his listings.' - no, that was not what happened. --Dirk Beetstra T C (en: U, T) 03:16, 22 January 2015 (UTC)
Hi!
I searched all the logs (Special:Log/spamblacklist) of several wikis using the regexp entry /xlal[0-9a-z-]*\.com/.
There were almost no hits:
w:ca: 0
w:ceb: 0
w:de: 0
w:en: 1: 20131030185954, xlalliance.com
w:es: 1: 20140917232510, xlalibre.com
w:fr: 0
w:it: 0
w:ja: 0
w:nl: 0
w:no: 0
w:pl: 0
w:pt: 0
w:ru: 0
w:sv: 0
w:uk: 0
w:vi: 0
w:war: 0
w:zh: 1: 20150107083744, www.jxlalk.com
So there was just one single hit at w:en (not even in the main namespace, but in the user namespace), one in w:es, and one in w:zh (probably a false positive). So I agree with user:Abd that removing of this entry from the sbl would be the best solution. -- seth (talk) 18:47, 21 February 2015 (UTC)
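The kind of scan described here can be sketched as follows (the (wiki, timestamp, url) log-entry shape is an assumption; the real data comes from each wiki's Special:Log/spamblacklist):

```python
import re
from collections import Counter

# Count, per wiki, the logged link additions matching the rule under
# discussion.  A match only means the rule fired; whether the hit was
# 'real' or collateral still needs human judgement.
RULE = re.compile(r'xlal[0-9a-z-]*\.com')

def count_hits(log_entries):
    hits = Counter()
    for wiki, timestamp, url in log_entries:
        if RULE.search(url):
            hits[wiki] += 1
    return hits
```

Run over the three hits above, it would report one each for w:en (xlalliance.com), w:es (xlalibre.com) and w:zh (www.jxlalk.com).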
Finally an argument based on evidence (these logs should be public, not admin-only - can we have something like this in a search-engine, this may come in handy in some cases!). Consider removed. --Dirk Beetstra T C (en: U, T) 06:59, 22 February 2015 (UTC)
By the way, Seth, these are actually no hits at all: all three you show here are collateral. Thanks for this evidence; this information would be useful on more occasions to make an informed decision (also, vide infra). --Dirk Beetstra T C (en: U, T) 07:25, 22 February 2015 (UTC)
I am not sure that we want the Special page to be public, though I can see some value in being able to have something at ToolLabs to be available to run queries, or something available to be run through quarry.  — billinghurst sDrewth 10:57, 22 February 2015 (UTC)
Why not public? There is no reason to hide this; this is not BLP- or COPYVIO-sensitive information in 99.99% of the hits. The chance that this is non-public information is just as big as for certain blocks to be BLP violations (and those are visible) ... --Dirk Beetstra T C (en: U, T) 04:40, 23 February 2015 (UTC)

Now restarting the original debate

As the blacklist is long, and likely contains rules that cast too wide a net and that are so old they are utterly obsolete (or may even be causing collateral damage on a regular basis), can we see whether we can set up some criteria (that can be 'bot tested'):

  1. Rule added > 5 years ago.
  2. All hits (determined on a significant number of wikis), over the last 2 years (for now: since the beginning of the log = ~1.5 years) are collateral damage - NO real hits.
  3. Site is not a redirect site (these should not be removed, even if not abused), not a known phishing/malware site (to protect others), and not a true copyright-violating site. (This is hard to bot-test; we may need someone to look over the list and take out the obvious ones.)

We can make some mistakes on old rules if they are not abused (remove some that actually fail #3); if they become a nuisance/problem again, we will see them again, and they can be speedily re-added. Thoughts? --Dirk Beetstra T C (en: U, T) 07:25, 22 February 2015 (UTC)
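The three criteria above could be bot-tested roughly like this (the rule metadata, the per-rule hit counts, and the special-site list are hypothetical inputs; criterion 3 still needs a human pass):

```python
from datetime import datetime, timedelta

def removal_candidates(rules_added, real_hits, special_sites, now):
    """rules_added: {regex: date the rule was added}
    real_hits: {regex: number of non-collateral hits in the log window}
    special_sites: regexes of redirect/phishing/malware/copyvio sites."""
    cutoff = now - timedelta(days=5 * 365)
    return [
        rule for rule, added in rules_added.items()
        if added < cutoff                 # 1. added > 5 years ago
        and real_hits.get(rule, 0) == 0   # 2. only collateral, no real hits
        and rule not in special_sites     # 3. not a site we keep regardless
    ]
```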

@Hoo man: you have worked on clean-up before; some of your thoughts would be welcomed.  — billinghurst sDrewth 10:53, 22 February 2015 (UTC)
Doing this kind of clean-up is rather hard to automate. What might work better for starters could be removing rules that didn't match anything since we started logging hits. That would presumably cut down the whole blacklist considerably. After that we could re-evaluate the rest of the blacklist, maybe following the steps outlined above. - Hoo man (talk) 13:33, 22 February 2015 (UTC)
Not hitting anything is dangerous .. there are likely some somewhat obscure redirect sites on it which may not have been abused yet (though those, too, could be re-added). But we could do test runs easily: just save a cleaned-up copy of the blacklist elsewhere, diff it against the current list, and see what would get removed.
Man, I want this showing up in the RC feeds; then LiWa3 could store them in the database (and follow redirects to show what people wanted to link to ..). --Dirk Beetstra T C (en: U, T) 03:30, 23 February 2015 (UTC)
Hi!
I created a table of hits of blocked link additions. Maybe it's of use for the discussion: User:lustiger_seth/sbl_log_stats (1.8 MB wiki table).
I'd appreciate, if we deleted old entries. -- seth (talk) 22:12, 26 February 2015 (UTC)
Hi, thank you for this; it gives a reasonable idea. Do you know if the rule hits were all 'correct' (for those that do show hits) or mainly/all false positives? (If they are hitting false positives, we could, based on this, also decide to tighten the rule to avoid them.) Rules with all zeroes (can you include a 'total' column?) would certainly be candidates for removal (though still determine first whether they are 'old' and/or no-no sites before removal). I am also concerned that this does not include other wikifarms: some sites may be problematic on other wikifarms, or hit a large number of smaller wikis (which have less control due to low admin numbers). --Dirk Beetstra T C (en: U, T) 03:36, 8 March 2015 (UTC)
Hi!
We probably can't get information of false positives automatically. I added a 'sum' column.
Small wikis: If you give me a list of the relevant ones, I can create another list. -- seth (talk) 10:57, 8 March 2015 (UTC)
Thanks for the sum column. Regarding the false positives, it would be nice to be able to quickly see what actually got blocked by a certain rule; I agree that that then needs a manual inspection, but the actual number of rules with zero hits on the stuff they were intended to block is likely way bigger than what we see.
How would you define the relevant small wikis? That depends on the link that was spammed. Probably the best approach is to parse all ~750 wikis, make a list of rules with 0 hits and a separate list of rules with <10 hits (including the links that were blocked), and exclude everything above that. Then filter the resulting rules down to those added >5 years ago. That narrows the list for now; after a check for obvious no-no links, those could almost be blanket-removed (just excluding the ones with real hits, the obvious redirect sites and others, which needs a manual check). --Dirk Beetstra T C (en: U, T) 06:59, 9 March 2015 (UTC)
Hi!
At User:Lustiger_seth/sbl_log_stats/all_wikis_no_hits there's a list containing ~10k entries that never triggered the sbl during 2013-sep and 2015-feb anywhere (if my algorithm is correct).
If you want to get all entries older than 5 years, it should be sufficient to use only the entries in that list up to (and including) \bbudgetgardening\.co\.uk\b.
So we could delete ~5766 entries. What do you think? Shall we give it a try? -- seth (talk) 17:06, 18 April 2015 (UTC)
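The selection proposed here is just a prefix of that chronologically ordered no-hit list; as a sketch (the marker entry and the ~5766 figure come from the discussion above, the list format is an assumption):

```python
# Take the no-hit entries up to and including the marker entry, i.e.
# those added more than five years ago; everything after the marker
# is newer and stays for now.
MARKER = r'\bbudgetgardening\.co\.uk\b'

def old_no_hit_entries(no_hit_entries, marker=MARKER):
    selected = []
    for entry in no_hit_entries:
        selected.append(entry)
        if entry == marker:
            return selected
    return []   # marker not found: select nothing rather than everything
```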
The question is how many of those are still-existing redirect sites etc. Checking 5800 is quite a job. On the other hand, with LiWa3/COIBot detecting them, it is quite easy to re-add them. --Dirk Beetstra T C (en: U, T) 19:28, 21 April 2015 (UTC)
Following the last few lines, I've now removed 124 kB of non-hitting entries. I did not remove all of them, because some were URL shorteners, and I guess they are a special case even if not used yet. -- seth (talk) 22:25, 16 September 2015 (UTC)

Blacklisting spam URLs used in references

Looks like a site is using the "references" section as a spam farm. If a site is added to this list, can the blacklist block the spam site? Raysonho (talk) 17:45, 5 September 2015 (UTC)

Yes they can.--AldNonymousBicara? 21:56, 5 September 2015 (UTC)
Thanks, Aldnonymous! Raysonho (talk) 00:07, 6 September 2015 (UTC)

url shorteners

Hi!
IMHO the URL shorteners should be grouped in one section, because they are a special group of URLs that needs special treatment. A URL shortener should not be removed from the SBL unless the domain is dead, even if it has not been used for spamming, right? -- seth (talk) 22:11, 28 September 2015 (UTC)

It would be beneficial to have them in a section. Problem is, most of them are added by script, and are hence just put at the bottom. --Dirk Beetstra T C (en: U, T) 04:51, 4 October 2015 (UTC)
Maybe it would be preferable to have "spam blacklist" be a compilation file, made of several files, one of which would be "spam blacklist.shorteners".  — billinghurst sDrewth 12:15, 24 December 2015 (UTC)
This seems like a nice idea. It would certainly help with cleaning it up (which we don't do nowadays). IIRC, it is technically possible to have different spam blacklist pages, so this is doable; it just needs an agreement among us and someone to do it. --Glaisher (talk) 12:17, 24 December 2015 (UTC)

@Beetstra, Lustiger seth, Glaisher, Vituzzu, MarcoAurelio, Hoo man, Legoktm: and others. What are your thoughts on a concatenation of files as described above. If we have a level of agreement, then we can work out the means to an outcome.  — billinghurst sDrewth 12:39, 25 January 2016 (UTC)

  • I am somewhat in favour of this - split the list into a couple of sublists - one for url-shorteners, one for 'general terms' (mainly at the top of the list currently), and the regular list. It would however need an adaptation of the blacklist script (I've done something similar for en.wikipedia (a choice of blacklisting or revertlisting for each link), I could give that hack a try here, time permitting). AFAIK the extension in the software is capable of handling this. Also, it would be beneficial for the cleanout work, that the blacklist itself is 'sectioned' into years. Although being 8 years old is by no means a reason to expect that the spammers are not here anymore (I have two cases on en.wikipedia that are older than that), we do tend to be more lenient with the old stuff. (on the other hand .. why bother .. the benefits are mostly on our side so we don't accidentally remove stuff that should be solved by other means). --Dirk Beetstra T C (en: U, T) 13:05, 25 January 2016 (UTC)
Is it really possible to have different spam blacklist pages? What would happen to the sites that use this very list to block unwanted spam? —MarcoAurelio 14:23, 25 January 2016 (UTC)
It is technically possible. But this would mean that if we move all the URL shortener entries to a new page, all sites using the current list would have to update the extension or explicitly add the new blacklist to their config, or these links would be allowed on their sites (and notifying all these wikis about this breaking change is next to impossible). Another issue I see is that a new blacklist file means a separate network request on cache miss, so there might be a little delay in page saves (but I'm not sure whether this delay would be noticeable). --Glaisher (talk) 15:38, 25 January 2016 (UTC)
Hi!
Before we activate such a feature, we should update some scripts that don't know anything about SBL subpages yet.
Apart from that, I don't think that sectioning into years would be of much use. One can use the (manual) log for that. A subject-oriented sectioning could be of more use, but it would also be more difficult for us. -- seth (talk) 20:49, 27 January 2016 (UTC)

Unreadable

Why is the list not alphabetical, so I can look up whether a certain site is listed and then also look up when it was added? --Corriebertus (talk) 08:55, 21 October 2015 (UTC)

hi!
there are advantages and disadvantages to an alphabetical list. for example, it would be very helpful to group all URL shorteners in one place (see the discussion thread above); sometimes it's better to have a chronological list. additionally, regexps can't really be sorted domain-alphabetically.
if you want to search the blacklist, you can use a tool like https://tools.wmflabs.org/searchsbl/. -- seth (talk) 17:16, 30 October 2015 (UTC)
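To illustrate the last point: a sort key can be recovered for plain '\bexample\.com\b'-style entries, but a genuine regex has no single alphabetical place. The helper below and its limits are illustrative only:

```python
import re

# Parse a plain '\bdomain\b' blacklist entry into a reversed-label
# sort key (so related hosts group together); return None for entries
# that are real regexes (character classes, alternations, ...) and
# therefore cannot be placed alphabetically.
PLAIN = re.compile(r'\\b((?:[a-z0-9-]+\\\.)+[a-z0-9-]+)\\b')

def sort_key(entry):
    m = PLAIN.fullmatch(entry)
    if m is None:
        return None
    domain = m.group(1).replace(r'\.', '.')
    return tuple(reversed(domain.split('.')))
```

Sorting the parseable entries by this key would group e.g. all *.co.uk domains together; the unparseable rest would still need a section of its own, which is roughly the status quo.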