Talk:Volunteer Response Team/Reports/2014

Preserving history

Hi. The new report at <https://tools.wmflabs.org/otrsreports/annual/2014> looks beautiful. Really nice job!

Please be sure to preserve a copy of this report on Meta-Wiki in wiki form and/or Wikimedia Commons in PDF form. Neither Wikimedia Labs nor its predecessor, the Toolserver, is as stable and supported for long-term archival as Meta-Wiki or Wikimedia Commons. --MZMcBride (talk) 06:14, 26 February 2015 (UTC)

+1 --Nemo 13:36, 26 February 2015 (UTC)
Hi MZMcBride and Nemo, yes, we'll upload a PDF version of it soon (but expect nothing fancy; I was planning to re-organize a few things in the HTML [mostly the info breakdown table] and then basically just print it out in PDF form; it doesn't look that bad, though, since both Bootstrap and our own design elements were already written with print in mind). I've also manually archived all subpages of tools.wmflabs.org/otrsreports/annual/ at archive.org. — Pajz (talk) 14:42, 26 February 2015 (UTC)
Great to hear. Thank you! --MZMcBride (talk) 00:13, 27 February 2015 (UTC)
Done. — Pajz (talk) 19:43, 2 March 2015 (UTC)

Response time

WMIT is interested in knowing the response time for info-it. It would also be nice to have this metric for all queues: to me it seems likely to be the most important one. Would that require OTRS development? --Nemo 13:36, 26 February 2015 (UTC)

Hi Nemo, the problem is rather to find a good way to present it; I felt that including more plots for the smaller queues in the annual report would make it hard to read, and I didn't want to compromise on the level of detail (I'm always suspicious when someone only reports the average response time or picks certain percentiles); also, if queues are small, the relevance of these metrics is doubtful. The plan was to include the metrics for all queues (above some size X) in the (or one of the?) more internally focused monthly reports. But, back to your question, here are some percentiles of the first-response time for info-it:
  Percentile      Hours      Percentile      Hours
       0%          0.19          55%         236.66
       5%          2.52          60%         260.90
      10%          6.27          65%         335.96
      15%         14.08          70%         425.57
      20%         20.49          75%         564.31
      25%         26.55          80%         920.74
      30%         40.84          85%        2075.94
      35%         55.39          90%     1.000000e+07
      40%         82.02          95%     1.000000e+07
      45%        130.77         100%     1.000000e+07
      50%        180.21
(Read the 1.000000e+07 as "NA"; I just had to replace it with a very high number for the plots.) This is based on 156 tickets created in 2014 that are in info-it (only counting merged tickets once) and either contain a first response, or do not contain a response but are not yet closed ("NA"/1.000000e+07), as of Feb. 18. I've quickly made a plot for info-it; see https://dl.dropboxusercontent.com/u/22742936/info-it%20first-response%20time%202014%20empdist%202.png. I understand you'd like to see data for all queues, and I'll see what I can come up with (but probably won't get to it before the weekend). — Pajz (talk) 15:52, 26 February 2015 (UTC)
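For illustration, here is a minimal sketch of how such first-response-time percentiles could be computed, assuming the ticket data lives in a pandas DataFrame named tickets with created and first_response timestamp columns (these names, and the use of pandas, are assumptions for the example, not the actual report code):

  import numpy as np
  import pandas as pd

  def first_response_percentiles(tickets: pd.DataFrame, na_sentinel: float = 1e7) -> pd.Series:
      """First-response time in hours for one queue, at 5%-step percentiles."""
      hours = (tickets["first_response"] - tickets["created"]).dt.total_seconds() / 3600.0
      # Tickets that are still open and have no response yet have no first-response
      # time ("NA"); replace them with a very high sentinel, as described above, so
      # they land at the top end of the distribution instead of being dropped.
      hours = hours.fillna(na_sentinel)
      return hours.quantile(np.linspace(0, 1, 21))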
Thanks, very helpful. Looks like we have a problem... As for the presentation, given the number of tickets in such queues, anything above the 90th percentile risks making little sense. Just a table with the 50th, 75th and 90th (or so) percentiles for all queues would be golden. --Nemo 21:07, 26 February 2015 (UTC)
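The compact table suggested here could come from a simple per-queue aggregation; a sketch under the same assumptions as above (a hypothetical DataFrame with an additional queue column, not the actual report code):

  import pandas as pd

  def per_queue_table(tickets: pd.DataFrame, min_size: int = 50) -> pd.DataFrame:
      """50th/75th/90th percentile of first-response time (hours) for each queue."""
      df = tickets.assign(
          hours=(tickets["first_response"] - tickets["created"]).dt.total_seconds() / 3600.0
      )
      # Unanswered tickets are simply excluded here; they could instead be kept
      # with a high sentinel value as in the sketch above.
      table = (
          df.groupby("queue")["hours"]
            .quantile([0.50, 0.75, 0.90])
            .unstack()
            .rename(columns={0.50: "p50 (h)", 0.75: "p75 (h)", 0.90: "p90 (h)"})
      )
      # Only report queues above some minimum size, since percentiles of very
      # small queues say little.
      counts = df.groupby("queue")["hours"].size()
      return table[counts >= min_size]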
From what I understand from the plot and the data, I can say that on info-it:
  • 50% of the messages receive an answer within 8 days (192 h)
  • 75% of messages receive an answer within 24 days (576 h)
  • 85% of messages receive an answer within circa 86 days (2076 h)
  • the remaining 15% of messages (circa 24 in absolute numbers) receive an answer after a longer time or do not receive an answer at all.
I think these three numbers could be compared across all lists, or at least the 50th and the 75th percentile; I fear that the smaller the queue, the lower the percentile at which you still find meaningful data. For example, since there are 156 tickets, 5 percentile points correspond to ~8 messages. I think it could be interesting to learn what causes a message to take such a long time and whether we can do something to help OTRS operators deal with at least some of the problematic patterns that may be present in that 15%. --CristianCantoro (talk) 13:52, 8 March 2015 (UTC)
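For reference, the arithmetic behind those figures, using the numbers quoted above:

  # Hours -> days for the quoted percentiles, and percentile shares -> ticket counts.
  print(192 / 24, 576 / 24, 2076 / 24)   # 8.0, 24.0, 86.5 days
  print(0.15 * 156)                      # ≈ 23.4, i.e. "circa 24" tickets above the 85th percentile
  print(0.05 * 156)                      # ≈ 7.8, i.e. ~8 tickets per 5 percentile points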