This page documents work completed for Invitations improvements in phase 2 of the Teahouse project.
Invite experiment #1
What: A/B testing two variants of a bot-delivered invitation.
Testing for: Effectiveness of a HostBot-delivered invite template that also includes the name of a Teahouse host vs. an invite that is from HostBot only. Both of these templates will be compared against the existing hand-delivered template process to determine whether bot-delivered invitations make new editors more or less likely to visit the Teahouse.
- Note: the "Visit the Teahouse" button doesn't work on Meta; the invite will look more like this example in Heather's sandbox.
Would you be willing to have your name used in the test for version A? If so, please list yourself below. It would be nice to have a few names rotated in the invite tests :-)
- SarahStierch (talk) 22:57, 18 July 2012 (UTC)
- Ryan Vesey (talk) 00:51, 19 July 2012 (UTC)
- Rosiestep (talk) 02:28, 19 July 2012 (UTC)
- Writ Keeper (talk) 13:16, 19 July 2012 (UTC)
- Jtmorgan (talk) 21:05, 22 July 2012 (UTC)
- Doctree (talk) 11:45, 31 July 2012 (UTC)
- Hajatvrc (talk) 19:10, 2 August 2012 (UTC)
- Osarius (talk) 14:06, 3 August 2012 (UTC)
- Theopolisme (talk) 21:55, 7 August 2012 (UTC)
- benzband (talk) 20:54, 10 August 2012 (UTC)
- Nathan2055 (talk) 22:34, 13 August 2012 (UTC)
- RB-AXP (talk) 12:20, 28 October 2012 (UTC)
Preliminary Trial: July 23 - August 5, 2012
Between July 23rd and August 5th (inclusive), HostBot sent out 630 invitations to new editors, for an average of 48 invitations per day. 36 of these editors (5.7%) were subsequently blocked from editing, roughly the same 'error rate' as manual inviting yields (5-6%). 19 of these new editors had visited the Teahouse (asked a question, created an intro, or both) as of 8/9/2012, for a response rate of 3.0%. This number may rise a bit over time as invitees trickle in. The full results of the experiment are presented in the table below.
Automatic invitation experiment results
| | invites sent | total visitors | response rate | total Q&A visitors | total guest intro visitors | total visitors (Q&A + intro combined) | total blocked | blocked rate |
| total hostbot invites | 630 | 19 | 3.0% | 9 | 12 | n/a | 36 | 5.7% |
| manual invites (pilot period sample) | 390 | n/a | n/a | n/a | n/a | n/a | 4 | 5.5% |
| manual invites (7/5 - 8/5 sample) | 163 | 5 | 3.1% | n/a | n/a | n/a | 4 | 2.5% |
3.8% of invitees (12) who received a personalized invitation subsequently visited the Teahouse, compared with only 2.2% (7) of invitees who received a generic invitation. Although this difference in response rate is well within the statistical margin of error (due to the overall low response rate and small sample size), we believe these results are encouraging and warrant further experimentation.
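The margin-of-error claim above can be sanity-checked with a standard two-proportion z-test. The per-condition group sizes below are an assumption (the 630 invites split roughly evenly, 315 per condition); only the visitor counts (12 personalized, 7 generic) come from the report.

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Assumed split: 315 invitees per condition (630 total)
z, p = two_proportion_ztest(12, 315, 7, 315)
print(f"z = {z:.2f}, p = {p:.2f}")  # p well above 0.05
```

With a p-value around 0.24, the 3.8% vs. 2.2% gap is indeed consistent with chance at this sample size, matching the report's conclusion.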
Although our overall response rate was in line with previous findings on response rates for new editors invited manually through various channels during the pilot period, we performed an additional analysis of more recent manual response rates to make sure that guests invited by HostBot were not less likely to visit the Teahouse than guests invited manually. In a sample of 163 editors who were manually invited within the last month or so (7/5/2012 to 8/5/2012), only 5 subsequently visited the Teahouse, a response rate of 3.1%.
This result, coupled with the previous findings, makes us confident that automatic invites are no less effective than manual invites at bringing new editors to the Teahouse. Furthermore, the block rate for automatic invitees, although higher than that for the July manual sample, is still lower than the proportion of invited editors who were subsequently blocked in previous samples of manual invitees, suggesting that automatic invites are unlikely to lead to an increase in disruptive editors visiting the Teahouse.
Extended Trial Results: August 15 - September 16th, 2012
On August 15th, HostBot was approved for an extended trial. The A/B testing of automated personalized vs. generic invite templates has continued since August 16th.
During this roughly one-month period, 1805 automatic invites were sent out to new editors. Roughly half of these editors received a generic invitation, and the other half received a personalized variant that was 'signed' by one of the hosts listed above. 73 of these invitees visited the Teahouse before September 17th, a response rate of 4.0%. This response rate is statistically equivalent to the response rate observed from manually-invited new editors during the pilot period.
Editors who received a generic invite template responded at a slightly higher rate than those who received a personalized variant (4.3% versus 3.7%), but these differences were within the margin of error and are not significant.
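One way to see why the 4.3% vs. 3.7% gap is within the margin of error is to compute a 95% confidence interval for the difference in response rates. The per-condition counts below are reconstructed assumptions, not figures from the report: roughly 903 generic invitees with 39 visitors and 902 personalized invitees with 34 visitors, chosen to sum to the 1805 invites and 73 visitors above.

```python
import math

def diff_ci_95(x1, n1, x2, n2):
    """95% confidence interval (normal approximation) for p1 - p2."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    margin = 1.96 * se  # z* multiplier for a 95% interval
    return (p1 - p2) - margin, (p1 - p2) + margin

# Assumed split: ~903 generic (39 visitors), ~902 personalized (34 visitors)
lo, hi = diff_ci_95(39, 903, 34, 902)
print(f"95% CI for the difference: ({lo:.3f}, {hi:.3f})")
```

The resulting interval spans zero, so the observed difference between the two templates is consistent with no true difference at all.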
Although the Guestbook received roughly 1.5 times the traffic of the Q&A board, visitors invited through the two templates were equally likely to participate in either space. Seven guests (5 personalized invitees and 2 generic invitees) both visited the Q&A board and created a Guest profile during the study period.
Of the 1805 invites sent, 126 (7%) were sent to editors who were subsequently blocked from editing. Like the response rate reported above, this block rate is in line with the block rate of manually-invited guests during the pilot period.
Extended automatic invitation experiment results: overview
| | invites sent | total visitors | response rate | total Q&A visitors | total guest intro visitors |
| total hostbot invites | 1805 | 73 | 4.0% | 32 | 48 |
The lack of significant differences between the two invite conditions indicates that this approach to personalization may not be more engaging than a generic invite. The results of previous A/B tests (see section below) suggest that we may not have made the invitations personalized enough: for instance, only the final sentence of the invitation message was changed to first-person voice, and we did not include a specific invitation to ask a question or contact the inviter. If we continue to experiment with invite language, we should incorporate elements like these into the personalized template.
The August metrics report shows a dramatic increase in new editor participation in the Teahouse since automated invites began. Look to the September metrics report (due out in the first week of October) to see whether this pattern of sustained participation has continued. The Guestbook has experienced a more dramatic increase than the Q&A board, suggesting that new editors want to participate even if they don't presently need help. It would be great if we (hosts) could reach out personally to welcome these users (rather than sending them additional automated messages), making them feel more welcome and giving them the opportunities for personal interaction that the Q&A board provides but the Guestbook does not.
Providing Guestbook visitors with suggestions of different ways to get involved after they have created their profiles could also be helpful: many profile creators talk about the editing work they're interested in doing, but finding places to contribute can be a daunting task. Perhaps a link to SuggestBot or to a list of relevant WikiProjects?
Anecdotally, the researcher drafting this report has not observed an increase in disruptive behavior during this time period, although other Teahouse hosts may have other observations. The Teahouse has proven resilient to occasional disruption and vandalism so far.
We see no reason to stop A/B testing as long as automated invitations continue. Any changes to our experimental conditions (such as template language) will be recorded here, and additional results will be published here in another two months or so.