IP Editing: Privacy Enhancement and Abuse Mitigation/Improving tools/Local discussions 202006
As part of the project to build tools that offset potential negative aspects of hiding IP addresses, the team has been reaching out to communities to understand their local workflows, find out whether they do anything we're unaware of and need to address, and learn whether they have concerns. This is to make sure we understand the needs and fears of different communities in different situations. All posts were translated into the local languages.
This project is the result of changing norms around what can be done with IP addresses and how information about them can be published. The Wikimedia Foundation is engaged in a multi-year project to find solutions to these issues, so that IP masking is not thrown at unprepared communities whose patrollers and investigators have been given no extra support. This involves discussions on Meta and conversations with especially affected groups – such as the stewards – to understand the additional burdens masking might mean for them, and how we can either counter those burdens or, if we can't, make other work easier so that the total burden doesn't grow.
- There was limited feedback on the tools, but generally positive feedback on the IP info feature, mixed feedback on the tool for finding similar editors, and mainly scepticism about a database for documenting long-term abusers.
The IP info feature feedback was generally "sounds good", with few specific requests.
- Quote: “Would be good if the masked identities are interrelated if they belong to the same range.”
One editor raised concerns about our ability to properly train the system for finding similar editors, and that it would entrench existing admin behaviour.
- Quote: “So the system would be based on machine learning about vandal behaviour. There’s a significant risk the system will learn what admins are watching out for. Pages that are heavily watched today will be watched by the technology. To give an example: When administrators revert the removal of “far-right” from descriptions, the machine will learn this is an undesirable behaviour.”
Most editors who commented on the database for long-term abusers thought it wouldn’t be useful. They were concerned about limited access, the local tradition of not writing down information about vandals to avoid feeding and encouraging their behaviour, and the ability to find the right information.
- There was a longer discussion around whether this would mean that we should disallow unregistered users from editing. The consensus was that we should continue to allow unregistered editing, and that IP masking was no reason to stop.
When asked outright, no one seemed very concerned about this or worried that it would threaten the wiki's ability to handle vandals, provided that masked IPs have persistent aliases and effective range blocks remain possible.
- Quote: “Agreed. And IPv6 addresses are impossible to memorise anyway.”
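For readers unfamiliar with range blocks: a single block on a CIDR range applies to every address inside that range, which is why persistent per-IP aliases combined with range blocks can still contain vandalism even when individual IPv6 addresses are impractical to memorise. A minimal sketch of the underlying membership check, using Python's standard ipaddress module (the addresses are documentation examples, not real cases):

```python
import ipaddress

# A hypothetical range block on an IPv6 /64: one block entry covers
# all 2**64 addresses in the range (2001:db8::/64 is a documentation prefix).
blocked_range = ipaddress.ip_network("2001:db8::/64")

def is_blocked(ip: str) -> bool:
    """Return True if the address falls inside the blocked range."""
    return ipaddress.ip_address(ip) in blocked_range

print(is_blocked("2001:db8::1234"))  # inside the /64 -> True
print(is_blocked("2001:db9::1"))     # outside it -> False
```

The same containment test works for IPv4 ranges (e.g. a /24); MediaWiki's actual block logic is more involved, but the range-membership idea is the core of it.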
The French Wikipedia had an early exposure to the IP masking conversation back in August 2019, when one of their CheckUsers started a thread following the launch of the IP masking consultation on Meta-Wiki. At that time, the key concern of the community was that masking IPs posed a great risk to counter-vandalism. The conversation initiated on 19 February 2020 built upon that previous thread while inviting the community to comment on the three key features envisioned: IP details, finding similar contributors and the vandal database.
Overall, while some still questioned the relevance of masking IPs and wondered whether community input would make a difference, the project and new features received favorable reactions. The envisioned improvements were generally welcomed, though some concerns were voiced regarding the vandal database and AI-powered suggestions.
The discussion centered around some key ideas. The IP details feature looks promising and useful: regardless of whether the IP is masked, it's crucial for the IP details to include which institution is behind the IP, as this is vital for spotting self-promotion, for example. Leveraging AI to predict suspicious IP ranges and similar users has huge potential, but it bears risks inherent to AI: false positives and bias, as has been noticed with ORES on-wiki and with other AI products elsewhere. The vandal database will be hard to achieve due to the complexity of registration: registering vandals involves not only their IPs and topics of interest, but also idiosyncrasies and editing patterns, which are hard to map. Some explained that it would be preferable to improve the existing abuse filter. Others commented that as long as some users, such as CheckUsers, could still access IPs, masking would make sense.
Tamil Wikisource, Tamil Wikipedia, Punjabi Wikipedia
We wanted to reach out to some smaller communities that picked up speed slightly later than the others we asked. We honestly didn't get any feedback worth mentioning here, beyond a general comment supporting the work because protecting the integrity of IP editors is important.
The Chinese discussion didn't reach a clear consensus either way, but focused more on the idea in general than on the specific tools, partly because participating users had already left detailed feedback on the tools on Meta. One concern raised was the lack of local CheckUsers: they would be able to see the IP addresses, and if no local user had that access, what would it mean for the community's ability to fight vandals? Our main impression was that the community would prefer to know more about the plans before commenting on them.
Thirty users commented in the Arabic Wikipedia discussion (as of when we summarised it, a month in). Fifteen of these were newcomers, who might have more limited experience of fighting vandalism and of what masking could entail. In general the Arabic community was strongly in favour of masking IPs on privacy grounds, though concerns were raised – a specific point of interest was who would have access to the IPs that are public today, and how access to that information would be monitored. There were also fears that tools for finding similar editors, or even just determining location, might be misleading: IPs in the Arabic-speaking region tend to be very dynamic and proxies are more widely used there. By surfacing this information we might give editors false confidence in it, whereas those who know enough about IPs to make use of them today, when the information is more hidden away, have a clearer understanding of its limitations.
Most of the communities we surveyed seem generally confident in their continued ability to protect the wikis, but this confidence depends on tools that are still at a very early stage, and this feedback in no way invalidates feedback given elsewhere. The feedback on the IP info feature was the most positive. Based on this feedback and the feedback on Meta, we will either abandon or de-prioritise the plans for the LTA database, which seems to be less universally useful across wikis.
The Anti-Harassment Tools team is currently wrapping up its work on CheckUser improvements. Our goal is to build the anti-vandalism tools that the communities need in order to operate in the absence of IP addresses. This work needs to happen before we can consider masking IPs. We will be building new tools and improving existing ones so they work independently of IP addresses.
As the next step, we are preparing to research patrolling workflows on several projects in order to determine how we can make improvements, especially with regard to IP patrolling. In the meantime we have begun digging into the technical work needed to show patrollers the relevant IP information they need to do their work. You can read more about it on the IP Info feature project page.
Your feedback is more than welcome on the talk page or on either of the project pages. If you wish to write to us privately, you can do so by emailing the product manager responsible for this project at niharika@wikimedia.org.