Wikimedia monthly activities meetings/Quarterly reviews/Discovery, October 2016
This page is currently a draft. More information pertaining to this may be available on the talk page.
Sadly, the note-taker for this review was unexpectedly absent, so the notes for this quarterly review are incomplete. If you have any questions about the slides, please contact Dan Garry and he will happily answer them.
Notes from the Quarterly Review meeting with the Wikimedia Foundation's Discovery team, 26 October 2016, 11:00–11:45 PDT.
Please keep in mind that these minutes are mostly a rough paraphrase of what was said at the meeting, rather than a source of authoritative information. Consider referring to the presentation slides, blog posts, press releases, and other official material instead.
Present: Deb Tankersley, Wes Moran, Maggie Dennis, Dan Garry, Katie Horn, Yuri Astrakhan, Katherine Maher, Michelle Paulson, Zhou Zhou, Joady Lohr, Lisa Gruwell, Heather Walls, Gretchen Yen
Slide 1
Dan: Generally, our KPIs (key performance indicators) are trending slowly upwards over time. For the first time we can compute year-on-year statistics. The large increase in satisfaction was due to a combination of an implementation change that artificially inflated the numbers and gradual improvements from our changes; see slide 14 for more information. The decrease in the zero results rate was due to the launch of the completion suggester, plus gradual decreases from our other changes.
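(For readers unfamiliar with the metric: the zero results rate is generally defined as the share of search queries that return no results at all. The exact computation used on Discovery's dashboards is not spelled out in these notes, but the usual form is <math>\text{zero results rate} = \frac{\text{queries returning zero results}}{\text{total queries}}</math>, so a drop in this rate means a larger share of searches return at least one result.)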
Slide 2
Deb: Our first goal was to switch to BM25, which should fix a multitude of relevance issues and improve satisfaction with search results. The tests that were done in Q1 were promising, but not conclusive. This was not achieved in Q1 due to our uncertainty about the impact of the change and whether it would be positive. There was a slight decrease in user satisfaction and an increase in the zero results rate this quarter, likely due to the school holidays.
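(For reference, and not taken from the slides: BM25 is the standard Okapi ranking function available in Elasticsearch/Lucene, which CirrusSearch is built on. It scores a document <math>D</math> against a query <math>Q</math> as <math>\operatorname{score}(D,Q) = \sum_{t \in Q} \operatorname{IDF}(t)\cdot \frac{f(t,D)\,(k_1+1)}{f(t,D) + k_1\left(1 - b + b\,\frac{|D|}{\text{avgdl}}\right)}</math>, where <math>f(t,D)</math> is the frequency of term <math>t</math> in <math>D</math>, <math>|D|</math> is the document length, <math>\text{avgdl}</math> is the average document length, and <math>k_1</math> and <math>b</math> are tuning parameters, commonly around 1.2 and 0.75; the values used in production are not stated here. Broadly speaking, its term-frequency saturation and document-length normalisation are the main differences from the older default TF–IDF similarity.)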
Slide 3
An example of what we internally call TextCat: the language of the user's query is dynamically detected and the results are reordered to match. Refactoring of MediaWiki core was performed to make this possible.
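(To illustrate the general idea, here is a minimal sketch of the classic Cavnar–Trenkle n-gram approach that TextCat is based on. This is not the production code; the helper names and sample profiles are hypothetical, and real profiles are trained on much larger per-language corpora.)

<syntaxhighlight lang="python">
from collections import Counter


def ngram_profile(text, max_n=3, top_k=300):
    """Build a ranked character n-gram profile for a piece of text."""
    padded = " " + text.lower() + " "
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(padded) - n + 1):
            counts[padded[i:i + n]] += 1
    # Keep only the most frequent n-grams, ordered by frequency.
    return [gram for gram, _ in counts.most_common(top_k)]


def out_of_place_distance(query_profile, language_profile):
    """Sum of rank differences between two profiles (Cavnar–Trenkle)."""
    ranks = {gram: rank for rank, gram in enumerate(language_profile)}
    max_penalty = len(language_profile)
    return sum(
        abs(rank - ranks[gram]) if gram in ranks else max_penalty
        for rank, gram in enumerate(query_profile)
    )


def detect_language(query, language_profiles):
    """Return the language whose profile is closest to the query's profile."""
    query_profile = ngram_profile(query)
    return min(
        language_profiles,
        key=lambda lang: out_of_place_distance(query_profile, language_profiles[lang]),
    )


# Hypothetical usage; real profiles come from large per-language corpora.
profiles = {
    "en": ngram_profile("the quick brown fox jumps over the lazy dog"),
    "fr": ngram_profile("le renard brun rapide saute par-dessus le chien paresseux"),
}
print(detect_language("renard paresseux", profiles))  # expected: "fr"
</syntaxhighlight>

(The detected language can then be used to adjust which results are shown, as described above.)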
Slide 4
The Discernatron is a query result ranking tool that helps us discern whether the results being shown to users are relevant or not.