Research:Effectiveness of the new participant pipeline for Wiki Loves campaigns
Many photo competitions and other engagement campaigns aimed at the general audience use CentralNotice banners to recruit readers and invite them to participate in the competition or activity. To inform community discussions and decision-making, it would help to better understand the effectiveness of these banners and of the later parts of the engagement pipeline. CentralNotice banners in particular are increasingly regarded as valuable and scarce real estate, where multiple priorities compete for readers' attention. However, most of our understanding of banner effectiveness comes from fundraising.
In this project, we aim to better understand the effectiveness of banners and landing pages in the context of engaging readers to contribute content. Specifically, we will limit ourselves to Wiki Loves-style campaigns. This understanding should help organizers and the community collaboratively make effective choices about banner usage, and help organizers make effective choices in landing page design.
New contributors who join through Wiki Loves Monuments generally go through these steps:
- User sees banner
- User clicks on banner and goes to landing page
- User goes through a number of steps, specific to the country (e.g., identify a monument)
- User creates a Wikimedia account
- User uploads image through Wiki Loves campaign
The assumption is that these contributors can be identified after the fact through their uploads, which are labeled with a dedicated template, combined with the upload date and the registration date. What we don't know is how many people could have participated: how many people did we lose at each of these steps?
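The steps above form a classic conversion funnel. As a minimal sketch of the intended funnel analysis, the snippet below computes per-step retention from hypothetical stage counts; the stage names and numbers are illustrative placeholders, not real campaign data.

```python
# Hypothetical per-stage counts for one country's campaign funnel.
# Real counts would come from the datasets described below.
funnel = [
    ("banner impressions", 1_000_000),
    ("banner clicks / landing-page views", 40_000),
    ("completed country-specific steps", 12_000),
    ("account registrations", 5_000),
    ("image uploads", 2_000),
]

def drop_off(stages):
    """Return (stage, count, retention relative to the previous stage)."""
    rows = []
    prev = None
    for name, count in stages:
        rate = count / prev if prev else 1.0
        rows.append((name, count, rate))
        prev = count
    return rows

for name, count, rate in drop_off(funnel):
    print(f"{name:40s} {count:>9,d}  {rate:6.1%}")
```

The step with the lowest retention rate is the "phase" where the most potential participants are lost.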
There are two initial main questions that we will focus on:
- For each country, in which 'phase' do we lose the most potential participants? (funnel analysis)
- What banner diet (how many banner impressions to show each reader) would be appropriate for future campaigns?
I welcome additional questions that could be answered in the same analysis process, given the required investment of effort.
Some relevant sub-questions that have been identified:
- Given a unique landing page visitor, on which banner view did they first click (the first view, the second view, ..., the m-th view)?
- If a visitor's first click occurs on their m-th banner view, what is the distribution of m? How many people clicked on the first banner they saw, how many on the second, etc.?
- Given a unique uploader (eventual success!), what was the last banner they clicked on?
- How many uploaders clicked on their 6th, 11th, or 21st banner view? (The hypothesis, based on fundraising insights, is that only the first few banner views are effective.)
- Given a banner click, how many people click on a banner a second time? A third time?
- How do these numbers differ between mobile and desktop?
- How do these numbers differ between countries?
- Are certain landing page designs more effective than others?
- What can we say about where users are coming from? (tourists vs residents)
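The distribution-of-m question above can be sketched as follows, assuming per-user banner histories are available as ordered impression sequences with a clicked flag. The event data here is invented for illustration; real data would come from the CentralNoticeBannerHistory EventLogging described below.

```python
from collections import Counter

# Hypothetical per-user banner histories: each list is an ordered
# sequence of impressions, True meaning that impression was clicked.
histories = {
    "u1": [False, False, True, False],
    "u2": [True],
    "u3": [False, False, False],   # never clicked
    "u4": [False, True, True],
}

def first_click_index(history):
    """1-based index m of the first clicked impression, or None."""
    for m, clicked in enumerate(history, start=1):
        if clicked:
            return m
    return None

# Distribution of m over users who clicked at least once.
dist = Counter(
    m for m in (first_click_index(h) for h in histories.values())
    if m is not None
)
print(sorted(dist.items()))
```

The same per-user sequences would also support the repeat-click and last-clicked-banner questions, by scanning for later True values instead of the first.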
These questions need further ironing out, based on what data is available.
We will use several datasets to capture interaction with banners and the respective landing pages:
- The webrequest logs allow us to collect page views to landing pages, along with the referrers of those requests
- Banner activity data contains banner impressions, campaigns, status codes, and more, across all wikis and languages at minute-level resolution
- EventLogging from CentralNoticeBannerHistory: a summary of recent items in the client-side log of CentralNotice banner events, for CentralNotice campaigns with the banner history feature enabled
- (potentially others)
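As a small sketch of how the webrequest logs might be used, the snippet below filters request records down to landing-page views and collects their referrers. The field names (uri_path, referer), the landing-page path, and the sample records are assumptions for illustration, not the actual webrequest schema.

```python
# Hypothetical webrequest-style records; field names are assumed.
records = [
    {"uri_path": "/wiki/Landing_WLM_NL",
     "referer": "https://nl.wikipedia.org/wiki/Hoofdpagina"},
    {"uri_path": "/wiki/Some_article",
     "referer": "-"},
    {"uri_path": "/wiki/Landing_WLM_NL",
     "referer": "https://commons.wikimedia.org/"},
]

def landing_page_referrers(rows, landing_path):
    """Referrers of all requests that hit the given landing page."""
    return [r["referer"] for r in rows if r["uri_path"] == landing_path]

print(landing_page_referrers(records, "/wiki/Landing_WLM_NL"))
```

Aggregating these referrers per landing page would feed both the funnel analysis (banner click to landing-page view) and the question of where visitors come from.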
Proposed timeline:
- Onboarding (1 month)
- Identify relevant datasets for capturing main question (1 month)
- Exploratory analysis (1 month)
- Detailed analysis (3 months)