Wikimedia Foundation metrics and activities meetings/Quarterly reviews/Discovery, April 2016
Please keep in mind that these minutes are mostly a rough paraphrase of what was said at the meeting, rather than a source of authoritative information. Consider referring to the presentation slides, blog posts, press releases, and other official materials.
- 1 Discovery
- 1.1 Slide 1 (KPIs: User satisfaction, Zero Results Rate)
- 1.2 Slide 2 (Improve intra-wiki relevance)
- 1.3 Slide 3 (Zero results rate over time)
- 1.4 Slide 4 (Zero results rate by search type)
- 1.5 Slide 5 (Generate satisfaction model)
- 1.6 Slide 6 (Improve www.wikipedia.org)
- 1.7 Slide 7 (Upgrade Wikidata Query Service)
- 1.8 Slide 8 (Migrate Wikivoyage to new map tile service)
- 1.9 Slide 9 (Map tile server usage)
- 1.10 Slide 10 (Core workflows and metrics)
- 1.11 Appendix (screenshots)
Slide 1 (KPIs: User satisfaction, Zero Results Rate)
Tomasz: many directions this past quarter
Dan: KPIs: user satisfaction, measured by how long users spend after clicking through on search results. User satisfaction increased from 28% to 35%. The zero results rate decreased, which means the completion suggester had a bigger effect on the zero results rate than previously thought. The zero results rate is presently 22% instead of the 30% reported in the last quarterly review, a 26% relative drop from Q2.
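The "26% drop" quoted above is a relative change in the rate, not percentage points; a minimal sanity check of the arithmetic (assuming the quoted 30% and 22% figures, which are rounded):

```python
# Quoted figures: zero results rate fell from ~30% to ~22%.
prev_rate = 0.30
curr_rate = 0.22

absolute_drop = prev_rate - curr_rate      # drop in percentage points
relative_drop = absolute_drop / prev_rate  # drop relative to the old rate

print(f"absolute: {absolute_drop:.2f}, relative: {relative_drop:.3f}")
```

With these rounded inputs the relative drop comes out near 27%; the quoted 26% presumably reflects the unrounded underlying rates.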
Slide 2 (Improve intra-wiki relevance)
Dan: Improve intra-wiki relevance, roll out the completion suggester. Green: rolled out as a beta feature in Q2; positive results, so it was rolled out to production on all wikis in Q3. Discovery created an infrastructure called Relevance Forge that allows us to test Elasticsearch relevance changes in a testing environment.
Slide 3 (Zero results rate over time)
Dan: The zero results rate was moving up and down, with a slight upward trend before the completion suggester. We've since seen a drop to fewer zero-result searches for prefix search. The zero results rate should never be zero, as there are legitimate cases where we should return zero results.
Slide 4 (Zero results rate by search type)
Wes: Does this graph include bots?
Geoff: What's a prefix search?
Dan: It searches against the start of an article's title. Suboptimal, as it's not looking at the content of the article; not ideal, but it's much faster than full-text search.
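The distinction can be sketched roughly as follows (hypothetical data and toy matching logic, not the actual CirrusSearch implementation):

```python
# Toy illustration: prefix search only compares the query against the
# beginning of each title, while full-text search also scans the body.

articles = {
    "Search engine": "A search engine is a software system for finding information.",
    "Elasticsearch": "Elasticsearch is a search engine based on Lucene.",
}

def prefix_search(query):
    # Cheap: matches only the start of titles, so a query like "lucene"
    # finds nothing even though an article body mentions it.
    return [t for t in articles if t.lower().startswith(query.lower())]

def full_text_search(query):
    # More expensive: also looks inside the article content.
    q = query.lower()
    return [t for t, body in articles.items()
            if q in t.lower() or q in body.lower()]

print(prefix_search("elas"))       # matches the title "Elasticsearch"
print(full_text_search("lucene"))  # matched in the body, not the title
```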
Slide 5 (Generate satisfaction model)
Dan: The goal was to run an on-wiki survey to gather qualitative data on search satisfaction. We didn't get to this and it was de-prioritized.
Katherine: Will this carry over to Q4?
Dan: We are not planning to carry this over to Q4. We'd like to do it, but it is no longer a high priority.
Slide 6 (Improve www.wikipedia.org)
Deb: Goal: understand why users bounce off the site at such a high rate. Two of three tests were run. Page images were added and showed an increase in user sessions from 1.7% to 5.5%.
Katherine: How do the 60/30/10 measures of success line up with sessions?
Deb: We did see a drop in bounce rate but that was likely due to the completion suggester and not A/B tests
Dan: The lack of reliable data comes down to needing to run one more test and complete the analysis of the tests.
Katherine: Need to draw a better line between measures of success and outcomes
Slide 7 (Upgrade Wikidata Query Service)
Dan: Goal: Upgrade to latest BlazeGraph (backend) and allow for geo search.
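For context, a hedged sketch of the kind of geospatial query the upgraded service supports; `wikibase:around` is the WDQS geo extension, and the center point and radius below are illustrative, not figures from the meeting:

```python
# Sketch of a SPARQL geo query against the Wikidata Query Service.
# wdt:P625 is the "coordinate location" property; wikibase:radius is in km.
query = """
SELECT ?place ?location WHERE {
  SERVICE wikibase:around {
    ?place wdt:P625 ?location .
    bd:serviceParam wikibase:center "Point(-122.4 37.8)"^^geo:wktLiteral .
    bd:serviceParam wikibase:radius "10" .
  }
}
"""
print(query)
```

A query like this would be sent to the public WDQS SPARQL endpoint; the point here is only that geo search becomes expressible at all after the BlazeGraph upgrade.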
Katherine: who are the primary users of this?
Tomasz: we have a lot of people that ask us about uptime, and we want to make sure they're supported
Slide 8 (Migrate Wikivoyage to new map tile service)
Tomasz: targeting Wikipedia
Katherine: goal for this year still?
Tomasz: moving tools over, getting hardware ready, then Q1 next year is the goal
Wes: Didn't you prototype the Labs service over the last 6 months?
Tomasz: we worked with Dario to figure out limited testing, then graduate to production support. "mezzolevel services"
Dan: describe to users as a beta, but then describe to ops as production
Slide 9 (Map tile server usage)
Slide 10 (Core workflows and metrics)
Dan: New ops engineer (Guillaume) to support Elasticsearch, WDQS, and maps