Research:New editor support strategies

Tracked in Phabricator: Task T137987
Created: 18:30, 31 May 2016 (UTC)
Duration: May 2016 – October 2016
This page documents a completed research project.

This project will involve interviews with experienced editors who participate in a range of new editor support, edit review, new page review, and anti-vandalism activities, in order to understand how to design tools that help editors like these provide quick, constructive support to new editors who appear to be struggling. The research will yield insights into the motivations, workflows, strategies, and challenges of editors engaged in these types of work. These insights will inform efforts by the Wikimedia Collaboration Team and others to develop successful, scalable, and generalizable strategies for engaging experienced Wikipedians in new editor support activities and for increasing the retention of good-faith new editors.

The findings from this research will inform the design of the Edit Review Improvements feature.

Background

The Wikimedia Foundation has tried a range of software features to teach new editors, get them help, and support them in common tasks. Examples include Moodbar/Feedback Dashboard, Special:GettingStarted, and the new-user education features in the Visual Editor (as well as, to some extent, the Visual Editor itself), along with grant-funded projects like The Teahouse, The Wikipedia Adventure, and The Co-op. Other products, such as the Drafts namespace and Special:NewPagesFeed, were created to support new article development and review, giving article creators (many of whom are new editors) a better forum for building out draft articles and providing article reviewers with a set of tools to streamline their work.

The current project seeks to join several of these research and development threads, building on the results of previous experiments in order to define requirements for new edit- and article-review tools that experienced editors can use to provide assistance to new editors in the context of their curation work.

Methods

The primary research activity associated with this project will be semi-structured interviews with experienced Wikipedians who have been involved in activities such as the Teahouse, Articles for Creation, new page review (PageTriage), and the Moodbar/Feedback Dashboard.

Timeline

  • May - August 2016: requirements gathering via informal interviews
  • September - October 2016: write up findings and recommendations

Policy, Ethics and Human Subjects Research

This research is conducted, and its data collected and shared, by the WMF Design Research team in accordance with the Wikimedia Foundation's user privacy and data retention guidelines.

Results

We interviewed 10 Wikipedia editors who had participated in the Teahouse, Articles for Creation, PageTriage, and Feedback Dashboard projects. Some interviewees reported on their experience with multiple projects; others shared their experience with only a single project. This section synthesizes these editors' assessments of their motivations for participating in these projects, what makes the projects successful and valuable, and what they found frustrating and challenging about participating in them.

Articles for Creation

What works
  • Accepting AfC drafts is satisfying work; it feels good to move a good-quality draft to mainspace. One editor said that reviewing for AfC feels like a better use of his time, as a content creator, than writing new articles himself, because it gives him the opportunity to add much more content to Wikipedia than he could by writing articles from scratch.
  • Sometimes it takes only a few relatively quick fixes (some formatting, adjusting non-neutral language, correcting grammar, adding an easy-to-find source) to make an AfC draft ready for mainspace.
  • The AFCHelper script is an excellent tool, and it makes the review process much easier than it would be if editors had to perform every step (moving pages, adding categories, removing templates, alerting authors, etc.) manually.
  • AfC reviewers are a small but active community; they collaborate and support one another.
  • Submitters receive an invitation to the Teahouse when their draft is rejected, giving them opportunities to get clarification, second opinions, and next steps. This also reduces the burden on the reviewer, who might otherwise feel pressure to field rejected submitters' questions personally.
What doesn't work
  • The backlog. There are usually hundreds of unreviewed articles in the queue, which can feel overwhelming, and reviewers feel compelled to review quickly. One editor noted that this often leads him to reject drafts that could probably become good articles with a little more work. He estimated that he now accepts only about 10% of drafts, but that with less time pressure he would happily spend more time improving promising drafts and increase that percentage substantially.
  • A lack of subject matter expertise. Reviewers may not feel that they have the subject matter expertise to assess the notability of certain drafts, or the validity of certain sources. Some reviewers address this issue by posting drafts to WikiProjects, but that may not be enough to alert or encourage contribution from relevant subject matter experts.
  • No response from submitters. Submitters often do not respond to feedback from reviewers, which can make reviewers feel that the time they spend suggesting improvements to declined drafts is wasted. This problem is exacerbated by the backlog; some draft authors may give up if their draft is not reviewed within a week or two.
  • No way to screen out invalid or bad-faith drafts. Reviewers must wade through a substantial number of hoaxes, attack pages, and test/unfinished drafts when reviewing. There is no good way to flag/screen these or separate out real/good-faith drafts.
  • Many draft submitters don't understand core content policies. A large proportion of good-faith drafts suffer from the same set of fundamental problems: a lack of notability, a clear conflict of interest, a lack of reliable sources, and a non-neutral tone. Frequent culprits are articles about companies, bands, and private individuals.
  • Little room for re-review. There is little or no incentive to re-review previously declined articles, even if they've been updated: the backlog is too large. And it is not always clear whether a submitter is still working to improve an abandoned or rejected draft (unless the submitter knows to re-submit it).
  • Copyright violation detection. There is no easy way to check for copyright violations in draft text. External tools exist for this, but they are not well integrated. Copyright violations are not a large problem in terms of the number of drafts affected, but copyright violation is a serious policy issue, so it needs to be caught before a draft is accepted.
  • Unfriendly messaging. The AfC rejection templates may come across as impersonal or even hostile to submitters. One reviewer stated that he was concerned that the language of the rejection templates could be dispiriting to newcomers, and worked to have the tone of those templates adjusted.
  • Overly conservative selection criteria. AfC reviewers may be too conservative when deciding what to accept. One reviewer remarked that the bar for sources is set much higher for AfC drafts than for articles that already exist on Wikipedia. Reviewers may be criticized if they accept a draft that is later deleted in mainspace.
  • Content merging is time-consuming. Often, part of a draft could be incorporated into an existing article even if the draft as a whole is not suitable for inclusion. However, merging content into mainspace is a time-consuming, manual process that is not supported by the AFCHelper gadget.

Moodbar/Feedback Dashboard

What works
  • Moodbar provided an effective, easy-to-use feedback mechanism for new editors who may not have known how to use talkpages.
  • The dashboard allowed responders to filter/sort feedback by 'mood' (happy, sad, neutral). This made it easy for a responder to focus on supporting the people they wanted to support: for example, to find people who were having a bad time, and intervene.
  • The dashboard allowed for very rapid feedback, since the feed was updated in real time. This allowed responders to help editors quickly, potentially before a frustrated editor had given up and left the site.
  • New editors were able to click a button to let the responder know that their feedback was helpful, which let responders know that their work was valued.
  • A kind of community of like-minded editors grew up around the dashboard: "moodbar responders".
  • Responding to feedback created a diff. Responses were posted on the new editor's talkpage under the responder's name. Responders appreciated that their work on the Feedback Dashboard was reflected in their edit history.
What doesn't work
  • No curation mechanism. Moodbar feedback was captured in free-text responses that the community couldn't curate, opening up the possibility that people could use this venue for personal attacks, slander, copyvio, etc. in violation of content and behavioral policies.
  • Some responders found the need to respond with free text (rather than templates) burdensome and tedious.
  • There was an unresolved tension around the point of Moodbar: it wasn't clear whether Moodbar was "for" reviewing or mentoring. The type of people who like to review are not always the same people who like to mentor. Both types used the Feedback dashboard, in different ways and for different reasons.
  • The Feedback Dashboard supported relatively "shallow" interaction with new editors. Some responders felt that responding to Moodbar comments was not memorable enough and did not provide the same motivational benefit as more sustained interactions on the Teahouse or OTRS. One responder explained that he wanted to be able to have more extended interactions with struggling editors than Moodbar allowed. A responder's feedback was posted on the new editor's talkpage, and occasionally that new editor would respond, but not very often (or, if they did respond, the responder was not alerted to it).
  • The interactions were not as visible to other Wikipedians as those at (for example) the Teahouse.
  • Low signal-to-noise ratio. A lot of Moodbar feedback consisted of 'test' posts, or was otherwise unactionable and/or unconstructive. There was no good way to weed out these comments and concentrate solely on the feedback from new editors who needed help.
  • Endless feed. Although there was no backlog per se, the feed of new comments was endless. Responders would have appreciated natural stopping points or better cues to their progress, to reduce the feeling of obligation and increase their sense of accomplishment.

Teahouse

What works
  • No backlog: if a particular editor doesn't feel like they have the time or expertise to answer a question, they are confident that another Teahouse host will do so. Almost all questions receive answers, and the overall volume of questions is well-matched to the number of available answerers.
  • High-quality service: most Teahouse hosts give good-quality answers, and let the questioner know that they have responded, even when the question has been asked many times before.
  • Clearly geared towards new editors. New editors and Wikipedians alike understand the purpose of the Teahouse and how they should participate.
  • Enjoyable. Two editors noted that answering questions at the Teahouse doesn't feel like work. In contrast with more bureaucratic and high-stakes projects like AfC, the Teahouse has few rules, requires no commitment, and doesn't put the host in the position of being a sole decision-maker or arbiter.
  • No drama. The Teahouse is relatively free of disputes and 'wikidrama'. Most hosts and guests are polite and friendly.
  • Feedback and validation. Teahouse hosts enjoy being thanked by questioners, and seeing newcomers who visit the Teahouse go on to become Wikipedians.
What doesn't work
  • Occasionally, less experienced hosts give terse, rude, unhelpful or incorrect answers.
  • Occasionally, questioners get upset with hosts, or vent their frustration at the Teahouse.
  • Hosts get a lot of the same questions over and over: newcomers fail to grasp basic concepts like notability, conflict of interest, and reliable sources. Most hosts feel that it is important to give a personal answer, even to very common questions, but it can get tedious.
  • Questioners don't always follow up or respond, which can be demotivating when a host has put a lot of work into their answer.
  • Hosts don't always notify the questioner that they have responded, which may result in the questioner not being aware that their question has been answered.

Page Curation

What works
  • The Page Curation system serves as an effective firewall against obvious spam, attack pages, etc.
  • The Page Curation system provides better metadata about new pages and the editors who create them, better filters, and more effective review mechanisms (the Curation Tools) than Special:NewPages does.
What doesn't work
  • The system is constantly backlogged, with hundreds or thousands of unreviewed pages at any given time.
  • Anyone can review new pages ('patrol'), and many patrollers do not know how to do it effectively. According to one editor, many of them accept pages that should be rejected; there are no effective mechanisms for 'reviewing the reviewers' or supervising the process, and in this editor's view patrolling should require a particular userright to keep inexperienced editors from engaging in review.
  • The Curation Tools menu does not present the full suite of deletion criteria options.
  • The system was intended to work in tandem with a standard new-user landing page that would help teach new users what to do (and discourage them from creating articles), but this part of the system was never developed.
  • There are not enough patrollers. One editor noted that requiring a special userright would incentivize more people to participate, because attaining userrights is a mark of status on Wikipedia.
  • According to one editor, restricting new article creation to Autoconfirmed editors only would have substantially reduced the burden on PageCuration, but this change was not implemented by the Wikimedia Foundation.

Design challenges

The following themes are based on the findings from these interviews.

Framing edit/article review as new editor support activity

Many editors who are actively involved in edit and/or article review processes think of their work as being about quality control or fighting vandals, not about supporting and teaching new editors. In order to build a tool that facilitates a well-established activity (edit review) but with a specific focus on helping newcomers learn how to be better editors, the product team will need to understand how to encourage the people who currently perform these activities in other contexts to adapt their workflows when using the new system. To change the way they work, they need to change how they think about their work: re-framing it in a more proactive, prosocial way.

The product team will also need to understand how to motivate people who do not currently engage in article/edit review activities, but who support new editors in other ways (for example, answering questions at the Teahouse), to engage in this new kind of edit review.

Striking a balance between flexibility and learnability

Wikipedia editors are accustomed to customizing their tools and workflows to suit their particular needs. Reviewers need to be able to easily find, triage, and address the kinds of edits and articles that they enjoy reviewing and/or that they feel are most important to review. ORES provides an efficient and reliable way of filtering out vandalism and displaying only good-faith edits (and vice versa). This should provide a valuable first-stage filter for reviewers, whether they wish to focus primarily on vandalism or on new editor support. Additional faceted filtering options (for example, filtering by topic, recency, or other characteristics of the editor, the edit, or the article) can give reviewers the flexibility to focus on the items most relevant to their particular interests, expertise, or sense of importance.
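
As a concrete illustration of this kind of first-stage filter, the sketch below queries the public ORES v3 scoring API for goodfaith probabilities on a batch of revisions and keeps only those predicted to be good faith. This is a minimal sketch rather than anything built by the project: the function name, the placeholder revision IDs, and the 0.5 threshold are illustrative assumptions.

  import requests

  ORES_URL = "https://ores.wikimedia.org/v3/scores/enwiki/"

  def likely_goodfaith(rev_ids, threshold=0.5):
      """Return the subset of rev_ids that ORES scores as likely good faith."""
      params = {
          "models": "damaging|goodfaith",
          "revids": "|".join(str(r) for r in rev_ids),
      }
      resp = requests.get(ORES_URL, params=params, timeout=10)
      resp.raise_for_status()
      scores = resp.json()["enwiki"]["scores"]
      keep = []
      for rev_id in rev_ids:
          result = scores[str(rev_id)]["goodfaith"].get("score")
          # Skip revisions ORES could not score (e.g. deleted revisions).
          if result and result["probability"]["true"] >= threshold:
              keep.append(rev_id)
      return keep

  # Example with placeholder revision IDs from a recent-changes feed:
  # likely_goodfaith([749000001, 749000002], threshold=0.8)

A reviewer-facing tool could apply such a filter before layering on the faceted options described above, so that vandalism fighters and new editor supporters each start from the slice of edits relevant to their work.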

At the same time, too many filters and options can clutter the interface, making it difficult for new reviewers to learn how to participate, and potentially making it difficult for even experienced reviewers to do their work efficiently and enjoyably.

Striking a balance between review efficiency and meaningful interaction

Efficiency is a particularly important consideration when dealing with a high-throughput process like edit review, because of the huge volume of edits being made in real time on a large project like English Wikipedia. It is also an important consideration when addressing article review workflows, which may experience a lower (though still substantial) volume of new content every day, but where the review process itself is more complex and time-consuming.

In both cases, editors need to be able to perform the necessary actions efficiently so they can move on to the next item. However, the quickest solution is often not the best one from a new-user-support standpoint: reverting an edit, or rejecting an article, and then dropping a template notification with a canned rationale onto a new editor's talkpage discourages that editor from trying again. Ideally, it should be as easy to provide encouragement, explanation, and actionable advice to a good-faith newcomer who makes a mistake as it is to revert and warn them.

Making edit review feel important, but not urgent

One reason article reviewers give for not taking more time on each review (time they could spend improving the draft or advising its creator) is that AfC and NPP have a perpetual backlog of hundreds or thousands of articles. With edit review activities, there can be a similar anxiety that reducing velocity will allow damaging edits to slip into the encyclopedia uncaught. In both cases, reviewers perform the work because they believe it is important to the encyclopedia. They plow through their reviews fast, often to the detriment of the reviewed editor, because the unreviewed backlog and/or the constant flood of incoming edits creates a sense of urgency that drives them to finish each task as quickly as possible. There is also probably something inherently satisfying about accomplishing a large number of tasks in quick succession.

The product team needs to communicate that offering support to newcomers in the context of review is an important activity and a valuable contribution to the encyclopedia, and to give reviewers a sense of satisfaction for taking the time to do it well. At the same time, the team will need to reduce the sense of urgency that makes reviewers feel they must move as fast as possible all the time. Ideally, reviewers should feel that completing one detailed review, which provides encouragement and personalized feedback, is at least as valuable and satisfying as speeding through five or ten reviews with a quick accept/decline. They should also feel that Wikipedia as a whole won't suffer if they choose to take a day (or a month) off from their reviewing activities: others will cover for them, and they can jump back in at any time without feeling backlogged or out of touch.
