Please do not post any new comments on this page. This is a discussion archive first created on January 7th 2014, although the comments contained were likely posted before and after this date. See current discussion or the archives index.
The following discussion is closed: close given change from Geoff and lack of response, will archive in a couple of days unless reopened. Jalexander--WMF 22:42, 18 December 2013 (UTC)
The policy says that information will be shared on a need to know basis, but doesn't describe how this need is determined or how the list of those on the "need to know" list is managed. Is it anticipated that all employees will be on the list? The policy mentions ArbCom needing to contact someone whose rights have expired or been removed, but this is hard to really envision. What ArbCom, what project? Are the de.wp ArbCom members allowed to individually request a copy of the IDs of en.wp checkusers? Do you really imagine that ArbCom is going to... write a letter to the address on the ID, or something? The scenario posed is just really strange and unlikely, and points up a serious weakness in the policy as a whole: no real definition as to who can get access and under what conditions. Nathan T 19:56, 14 October 2013 (UTC)
Thanks for taking the time to ask about this, Nathan. Generally, there is only so much we can do in the policy to limit who can get access and when. People who are trusted under this policy have a lot of different forms of power, and can do a lot of different - unpredictable - things with it. Because of that, there must be some flexibility in the policy, and there must be some trust in the judgment of the WMF employees who will make calls on the spot when information is needed.
This specific phrase means what it means - access will be given to the people who need to know, on a case-by-case basis. If people don't need to know, they won't be given access. That includes employees; tech won't get access to information needed to solve a non-technical problem; HR won't get access to information needed to solve technical problems; etc. Hope that helps clarify. -LVilla (WMF) (talk) 20:24, 1 November 2013 (UTC)
If I'm understanding your response correctly, then there was a comment made earlier by someone either on this page or on a mailing list that I'd like to reiterate: can we make it a mandatory part of the policy that if you (the WMF) give access to any personal details of any person under this policy to anyone that "needs to know", then that person is notified with whom their information was shared, and for what reason, in a timely manner? Thehelpfulone 22:14, 2 November 2013 (UTC)
The following discussion is closed: closing this section as it seems that discussion has stopped and the topics are being discussed elsewhere on the page. Will archive in a couple days unless someone reopens. Jalexander--WMF 22:27, 23 December 2013 (UTC)
As has been repeatedly pointed out, OTRS "volunteers" often have access to much more sensitive and private information than admins/CUs do, so controls on them should be much stricter than they are; currently, the screening of candidates is very poor, and (shockingly) there isn't even a public election process for these "volunteers" to have access to such private information.
The real problem lies in who has access to private information. It should be mandatory to go through a public process, to obtain OTRS access, wherein the community would have the chance to express concerns about the candidates. (If this is required of admins and checkusers, why isn't it required for OTRS access, where the stakes are so much higher?)
This policy proposal is, IMO, weak, and at this rate will get even weaker. I note the proposed period of retention of "submitted materials" (identification) has now been reduced from 3 years to 6 months. Needless to say, it often takes years for victims to even realize that their private information has been misused, or to detect abuses. (This isn't exclusive to identity-theft victims.) Even if abuses can be detected in just 6 months, the victims still need time to prepare their defense, ask for help, and be informed on what to do, etc., so the now-standing period of 6 months is ridiculous. The original proposal, 3 years, was much more reasonable. In no case should it be shorter than one year (minimum).
Some "volunteers" have threatened, and will continue to threaten, to leave if they are forced to identify themselves to the WMF (something they think will ruin their "hobby"). I fail to see how this is a bad thing. Indeed, it is a very good and desirable thing that such people stop having access to sensitive private information, as soon as possible. In particular, OTRS volunteers at Commons often have access to not just full names, dates of birth, etc., but also full ID copies—again, access to such documents should obviously be much more strictly controlled than it is; as things are, it seems that even OTRS volunteers from other wikis can read tickets sent to Commons, and this needs to change.
Also: In the original proposal, it was very clear that only people of 18+ years of age could have access to such documents. Now, again, the proposal changed, and an addendum was added: "except email response team members who must be at least sixteen (16) years of age." I do not pretend to know the meaning of this change. If "email response team members" include OTRS members, then this is very concerning, as they are the ones who have access to the most private documents. This exception for 16-year-old kids is thus absolutely incomprehensible. DanielTom (talk) 16:09, 21 November 2013 (UTC)
The exception for 16-18s already exists. It isn't something new in this proposal. --Krenair(talk • contribs) 16:47, 21 November 2013 (UTC)
Hello @DanielTom: There is an ongoing discussion about our options regarding the access policy. There are a number of different perspectives expressed there, so we appreciate that you are sharing your point of view. You may want to join the discussion above as well. Thanks, Stephen LaPorte (WMF) (talk) 22:06, 3 December 2013 (UTC)
Hi DanielTom. I just wanted to add that the inclusion of OTRS members to the policy scope is new, not the 16 year old age requirement. The reason why the exception language was added to the policy was because OTRS members are now being addressed in the policy also, not because we were creating a new exception to the age minimum. Hope that clarifies things. Mpaulson (WMF) (talk) 22:58, 3 December 2013 (UTC)
@Stephen: there is no public election process for OTRS "volunteers". Apparently we must have elections for CheckUsers, but not for OTRS access. This logic, which for sure is not that of Aristotle, nor of Condillac, must baffle any rational person. Perhaps Mpaulson could explain to me the meaning of her words in the discussion you linked me to, where it is said that an "alternative" to identification would be to make it "clear to the general public that certain information that is otherwise nonpublic may be shared with members of the community who have been granted certain access rights by the community". It seems to me that what is meant here by "the community" is a group of less than 10 people. Or will the WMF insist that the community be actually given a choice in who is allowed to have access to sensitive private information? I insist that the WMF should raise the minimum age for OTRS access to 18, and make elections mandatory for this sort of privilege. I have spoken. DanielTom (talk) 12:44, 4 December 2013 (UTC)
I wonder why you use quotation marks when writing about OTRS agents, @DanielTom. Are you trying to suggest that people get paid for responding to e-mails sent to Wikimedia? As far as your insistence is concerned, have you ever had anything to do with OTRS (have you ever e-mailed one of the addresses, have you ever had access to OTRS), or are these all merely theoretical assumptions? I'm especially interested in learning how you know that OTRS agents often have access to dates of birth and copies of IDs, and what made you think that users from other wikis have access to e-mails sent to Commons. Thank you. odder (talk) 12:54, 4 December 2013 (UTC)
@odder: your comment adds absolutely nothing to this discussion, other than your bad-faith insinuations. If you are so curious (or want to confirm what you already know), you may ask me those sorts of questions at my talk page, but having already gotten a few threats I am not going to let this degenerate into a section about my person. DanielTom (talk) 13:32, 4 December 2013 (UTC)
I asked you those questions here, and would like to receive your replies here as well, so that everything can be seen in one place. It seems to me at this point that you are trying to influence a process you have never had anything to do with, and one you don't know anything about. The fact that you are posting erroneous assumptions and are using low-quality rhetorical tricks (such as using quotation marks when referring to OTRS agents) and are trying to rebuff other people's comments by accusing them of bad faith is disrespectful, and I wish you would stop doing that, as it only hurts your point. odder (talk) 13:48, 4 December 2013 (UTC)
While helping on OTRS, I have never asked any correspondent for such stuff as dates of birth or copies of ID; I would positively advise those emailing in to avoid doing it. Off the top of my head I cannot think of any scenario where this would either be needed or desirable as OTRS is *not* guaranteed to be secure or confidential, in particular the WMF makes absolutely no guarantee of these sorts. In practice a bit of domain name checking, Googling for background (such as Employer sites, Facebook, Linkedin profiles etc.) provides plenty of information to make most correspondents seem credible for any claim being made. Push comes to shove, one might (out of sympathy for a troubled correspondent) offer to have an undocumented phone-call/Skype chat to discuss a problem and then follow up with email evidence as needed. Anything much more complex than this starts to look "above my pay grade", as an unpaid volunteer, and suitable to pass on to a WMF employee or sometimes a local chapter who might offer relevant support through a local network of 'real people', for example by offering training about the projects or just having social wiki-meets where they can discuss problems face to face with volunteers or employees. --Fæ (talk) 15:02, 4 December 2013 (UTC)
@odder: This is supposed to be a page for community feedback. If I am wrong, feel free to correct me. If you think people with access to private information shouldn't have to be > 18, and shouldn't have to be elected, then feel free to present your point of view. But this discussion page is not limited to contributions from people with OTRS access – although you could not tell from looking at it, as most comments here seem to be from people who think their hobby will be ruined if they have to identify themselves to the WMF, and threaten to resign their posts (which is the true pressure WMF is facing). Where are the regular users, or those whom this policy was originally supposed to defend, who dare to face up to the countless admins and pressure groups who want to keep their privileges unchecked? Where? Nowhere, and I can say from personal experience they are wise in not participating in these discussions, but until someone shows me why unelected and unchecked 16 year old kids should be allowed to have access to other people's private information, I won't shut up. (I've already made my points, and don't intend to waste more time here as I find this environment hostile, not to mention the off-wiki threats. Unfortunately, I don't believe any effective policy will come out of this – as the title of this section indicates –, despite WMF's good intentions. Have it your way!) DanielTom (talk) 15:38, 4 December 2013 (UTC)
@DanielTom: of course this is a page for community feedback, and I don't think anyone here is of the opinion that it should be limited to people with OTRS access. The fact that they are the most vocal ones is because the current draft requires them to identify to the WMF, something that is not obligatory at this point. That, as you say, regular users are not taking an active part in this particular discussion is quite obvious to me: they simply are not subject to this policy, as they do not have access to nonpublic information (though I agree with you that their personal information is what this policy is trying to protect). You might have made your points, but it doesn't change the fact that they are based on wrong assumptions, or at least assumptions which you are unable to defend ("16 year olds cannot be trusted with private information" is a good example). Finally, if you have received off-wiki threats because of your involvement in this discussion, please report them to appropriate bodies; I'm sure that, should you be able to identify the wrongdoers, the WMF would be more than willing to help you; and in the case of e-mails sent through the on-wiki interface, local admins or stewards could assist you, too. odder (talk) 16:07, 4 December 2013 (UTC)
How would you justify removing the current OTRS agents who would not meet the new requirements? What have we done wrong? I also object to the assumption that I must by default be irresponsible with private information I can access because of my age, and that I will magically become trustworthy at your stupid arbitrary age line. --Krenair(talk • contribs) 00:09, 9 December 2013 (UTC)
The following discussion is closed: close given lack of response/staleness, will archive in a couple days unless reopened. Jalexander--WMF 22:44, 18 December 2013 (UTC)
Past practice of retaining our IDs (or not)
The following discussion is closed: closing given lack of response since Michelle's comment, will archive in a couple days if no further response 22.214.171.124 19:32, 2 January 2014 (UTC)
Answering Snowolf's question of why the ID has to be retained over retaining the data in it, LVilla wrote: "(...) the ID portion of the proposal was mostly based on past practice." I would really like to know what practice this is, because I've always been told that the WMF did not retain any of our IDs. If anyone from the Foundation could explain the meaning of this, or perhaps tell us that it was just a mistake on Luis's part… odder (talk) 17:51, 13 December 2013 (UTC)
Hi odder! I think it was just a miscommunication. I think what Luis meant by "mostly based on past practice" was that we previously verified someone's identity under the current Access to nonpublic data policy by checking identification documents they submitted to us. "Submitting" in this case could mean someone emailing a copy of their ID, showing us their ID at Wikimania, mailing a copy of their ID to us, etc. However, it was also past practice (as I understand it) to not retain copies of the identification documents submitted to us after we checked them. Hope that clears things up! Mpaulson (WMF) (talk) 22:53, 18 December 2013 (UTC)
Explicitly excluding administrators
The following discussion is closed: closing because of lack of additional comments and changes earlier on the admin question, please reopen if more, will archive in a couple days if not Jalexander--WMF 23:45, 7 January 2014 (UTC)
Under Community members covered by this Policy it states "Community members with access to any tool that permits them to view nonpublic information about other users (such as the CheckUser tool);". Technically speaking, this could include any deleted nonpublic information that administrators can view, i.e. deleted user contributions. Can we tweak this wording or add another sentence where appropriate to explicitly exclude this policy applying to nonpublic information viewable by any (wiki) administrator? Thehelpfulone 22:41, 5 September 2013 (UTC)
Hi, helpful one. We definitely have been struggling with the wording on this one. Note that any changes in the policy may need to be made in the Confidentiality agreement as well, which presently reads:
Nonpublic information. Nonpublic information is private information, including private user information, which you receive either from tools provided to you as an authorized Wikimedia community member or from other such members. Authorized Wikimedia community members may include, for example, oversighters, checkusers, functionaries, volunteer developers, and other similar authorized roles. Nonpublic information includes IP addresses and other personally identifying information that are not publicly available. It does not include information about a user that that user otherwise makes public on the Wikimedia projects.
I see your point and think it is a good one. I was hoping that you might draft some language for consideration. It will need to be phrased in a way flexible enough to stand the test of time and changing environment within the Wikimedia movement. If you have no time to do this, I fully understand. We will try to figure it out. Thanks so much for your comments (which are always helpful). :) Geoffbrigham (talk) 02:31, 6 September 2013 (UTC)
Thanks Geoff. It is indeed a tricky one, and I added the word "wiki" because information viewable by Bugzilla and OTRS administrators for example shouldn't be excluded. The last sentence It does not include information about a user that that user otherwise makes public on the Wikimedia projects. does partially cover this, but given that the policy should cover anything that a user makes public but is then suppressed (so it's only viewable by oversighters), I'm not too sure what the best wording would be! Thehelpfulone 12:34, 9 September 2013 (UTC)
Hi Thehelpfulone. As an update, we are having internal discussions on this and will return with a proposal sometime soon. Thanks! Geoffbrigham (talk) 10:46, 17 September 2013 (UTC)
Looks like no progress on this in the last month. I'm a little out of practice about how user-rights related to these roles can be assigned, but assuming the rights associated with the administrator right are fairly stable across projects (i.e. advanced rights from oversight, CU, etc. can't be devolved to them) then you could just explicitly exclude administrators. That's as simple as saying "This policy does not require that users with the administrator role [potentially specifying a standard set of rights] identify to the Foundation." Then maybe build non-public information protection into the general terms of service.
Any deleted revision could potentially have information that a user might not want to be public - for instance, personally identifying information posted to a talk page by a user misunderstanding the implications, or a logged-out edit that creates a link between a user account, a name, and an IP address, etc. Since anyone can encounter this type of thing even before the revision gets deleted, it's not possible to perfectly ringfence information - any more than it is to require literally everyone to ID. Nathan T 19:31, 14 October 2013 (UTC)
So I'm trying to come up with some appropriate wording for the administrator part. My concern is that just saying "administrator role" could be confusing, because I know many projects and community members who consider CU/OS 'administrative/administrator' roles, and I don't want to confuse that if we can avoid it. Would sysop, as the official 'technical' term that is in the system on all projects, be too technical a word to use? Within the community it may not be, as long as it's translatable; something like "This policy does not require that users with the rights normally granted to local sysops, such as the ability to view standard deleted revisions, identify to the Foundation" (to cover people like global sysops as well). Jalexander--WMF 02:39, 19 October 2013 (UTC)
Hi All! We talked about this a bit and wanted to know what you would think about removing "functionaries" and saying “This policy does not require users whose rights only include the ability to view standard deleted revisions to identify to the Foundation.”? Mpaulson (WMF) (talk) 22:30, 25 October 2013 (UTC)
Hi Michelle, apologies for the delayed response - I didn't notice that you'd replied. That seems fine, but I just noticed the previous sentence "For illustrative purposes only, some examples of groups of community members to which this Policy applies include: OTRS administrators, email response team members, and Stewards." Can we be consistent in so far as choosing the terminology OTRS administrators and team members or email response team leaders and team members, not a mix of both? Thehelpfulone 01:52, 8 December 2013 (UTC)
So the difference between "OTRS administrators" and "email response team members" is on purpose and, in my opinion, correct to be different. The team itself is the 'email response team' and not necessarily just OTRS (which is just the software) but administrators are separated out because they are administrators on the OTRS software and the access that, specifically, entails. Jalexander--WMF 19:01, 9 December 2013 (UTC)
@Jalexander: I find your answer confusing. Is this supposed to suggest that there is a different volunteer response team than the one which uses OTRS? Or is this just a safeguard in case we change the software to something else? In that case, I think we shouldn't be using the term OTRS administrators, either… Would you mind clarifying this a little, please? odder (talk) 19:26, 9 December 2013 (UTC)
Sorry, I'll try to explain my thinking a bit more, it's mostly based on 'why' I think people are/should be covered. Basically:
OTRS administrators: Are a special category specifically because of their administrative access to the OTRS and the access that entails. If we changed software at some point they would likely have similar access but I don't know that. They are also, obviously, covered as members of the volunteer response team.
Volunteer response team members: Are a special category because of their role as response team members who, while they generally use the OTRS software, do not always. There is certainly the possibility that they could use other tools at some point (for example, I could foresee them helping the fundraisers, who use a different system, and I've always wanted to use another system for OTRS, but I don't see that happening anytime soon), but I think the bigger thing is that there are certainly members who answer emails (or occasionally phone calls) outside of the OTRS system at times. Usually these come from staff members or Jimmy/board members, but it can also be because people reach out to them outside of OTRS because of their OTRS userbox or something like that.
New Version of the Access to Nonpublic Information Policy Draft
The following discussion is closed: Closing this section for lack of response for a while within the section, please reopen if that is not the case. Will archive in a couple days if not reopened. Jalexander--WMF 00:38, 8 January 2014 (UTC)
In light of the feedback we have received over the past couple of months, we have redrafted the Access to Nonpublic Information Policy draft. This new draft addresses many of the bigger concerns expressed by the community. Below, I have outlined some of the major changes from the old draft as well as potentially contentious points that require further attention and discussion by the community. Please read over the new draft and let us know what you think!
We have announced this new version via Wikimedia-L and Announce-L. However, if you believe there are other places or lists we should utilize to alert the community about these changes, please let us know and we’ll be happy to send out additional alerts. Mpaulson (WMF) (talk) 22:39, 20 November 2013 (UTC)
Major Changes & Points of Discussion
No requirement to submit identification documents. You will only be required to submit your real name, date of birth, email address, and mailing address. You may submit this information to us in whatever manner is most convenient for you.
Questions to the community:
Do you believe this level of identification is sufficient? If not, would submission (but not retention) of a copy of an identification document of the community member’s choosing be sufficient?
Do you believe that there is any reasonable method of verifying names and dates of birth, in the case of submission without accompanying identification documents?
Do you believe that there is any reasonable method of verifying names and dates of birth, in the case of submission with accompanying identification documents from all over the world?
If we are able to verify that the identification document submitted is valid, is there a way to verify that the identification document belongs to the particular community member who submitted the identification document?
Storage & security of submitted materials. We will electronically store the submitted materials at the same or greater level of security provided to the personal, nonpublic information of WMF staff. We will destroy the original copy of the information you submitted to us (for example, destruction of the paper you mailed us or permanent deletion of the email you sent us containing the information) once it has been recorded.
Verification & duty to update. We may ask you to verify your email address or mailing address (for example, by returning a postage-prepaid verification postcard sent to your submitted mailing address). We ask that you notify us in a timely manner if there is a change to your name, email address, or mailing address.
Retention of submitted materials. We will store the submitted materials only for the period in which you have access rights plus six months.
Disclosure of submitted materials. We have narrowed the language describing the scenarios where we may disclose your submitted materials to third parties.
Questions to the community:
What do email response team members think of their inclusion in this policy draft?
What do other members of the community think of the inclusion of the email response team?
What do you think about the email response team having a lower minimum age (16) than other community members with access rights (18)?
I think we have answered your questions. Let me know if we have not. Cheers. Geoffbrigham (talk) 21:50, 3 December 2013 (UTC)
I am an OTRS administrator and just wanted to note that we are aware of the update which would include all agents and are working on a plan which would of course include contacting the Agents and informing them ASAP. Now, speaking for myself only, I have no issue with the new revised policy as it relates to members of the email response team.
As for the lower age for OTRS members: There isn't much we can do about that, unless we remove all users who are under age 18 (which I definitely am opposed to). In my opinion it should be all or nothing - we would need everybody to identify. Even a 16 year old Agent. There might be more complications on the Legal side of things there but that's for another discussion.
There is definitely a large amount of nonpublic information available to OTRS agents. It is, in my opinion, most definitely comparable to oversight/checkuser/etc. in terms of the sensitivity of the information available to those with access. This is one of the primary reasons I don't have an objection to the Foundation making this change. Other admins and agents may disagree - that's fine. Just my opinion. :-) Rjd0060 (talk) 02:04, 21 November 2013 (UTC)
Overall I am happy with the changes made. All of the major bases have been hit, it seems, and I don't think that it is unreasonable to get us (identified users) to send back a prepaid postcard. I am curious to see how that works across international borders, though. WRT the age limit, I would still favour removing it completely. If a 16 year old can handle OTRS, they can handle CU/OS. I'm not really expecting that to make it into the policy though. Ajraddatz (Talk) 02:07, 21 November 2013 (UTC)
Speaking personally also, I have no issue with the amendments, particularly now that you don't have to actually submit documents like in the old days, just name + date of birth + contact details. Hopefully this assuages some concern. A big point for me, that I thought I should mention to clarify for OTRS respondents who might be concerned, is that OTRS respondents can still use pseudonyms (i.e. real-sounding fake names) as their OTRS name to respond with/have listed in the system (e.g. I could have 'Damien Broderickil' in the OTRS system as my name if I chose). That doesn't change, from my reading of the policy. Daniel (talk) 02:33, 21 November 2013 (UTC)
@Rjd0060: - Thank you for your comments. I apologize, but I'm a little confused about your comments with regards to age. What do you mean by "it should be all or nothing - we would need everybody to identify. Even a 16 year old Agent."? Under the new draft, everyone who has access rights would need to identify (regardless of age). The age requirement is separate -- all volunteers with access rights need to be at least 18, with the exception of email response team members, who only have to be at least 16. Does that address your comment or did I misinterpret what you said? Mpaulson (WMF) (talk) 22:25, 3 December 2013 (UTC)
Yes, I know it would apply to all. I was just supporting that statement. Rjd0060 (talk) 23:30, 4 December 2013 (UTC)
Ajraddatz - Thank you for sharing your opinion. I'm curious as to what others think about lowering the overall age minimum to 16 (rather than just allowing email response team members to be at least 16). Mpaulson (WMF) (talk) 22:25, 3 December 2013 (UTC)
Daniel - Thank you for your comments. You are correct that OTRS respondents can still use pseudonyms in their responses to people who write in to OTRS. This policy only asks that a real name be submitted to the Foundation. Mpaulson (WMF) (talk) 22:25, 3 December 2013 (UTC)
Still, one of my earlier questions hasn't been answered either. What is all this personal information needed for? As I wrote before, the government of the Netherlands warns frequently about identity theft, and since identity theft is relatively easy with this kind of information, they recommend giving out as little information as possible, and only the information that is really needed (also because many privacy leaks have occurred at companies in recent years (digital burglary, missing USB sticks, etc.)). And as I asked before, I still don't understand why my date of birth or postal address is needed, or anyone else's. Maybe "Our users expect that we know who has access to their nonpublic information.", but that doesn't answer the question of why the date of birth or postal address is needed. (I know that someone needs to be 16 or older, but that does not require a date of birth.) As leaks occur often in the digital world, I still do not see how the WMF is preventing that and what means are used to secure the stored information.
This policy says something about access to nonpublic information and the identification of community members, but the personal information users provide under this policy is also nonpublic information that should be covered by this policy, and thus the staff with access to this information should identify themselves too. As I think there should be some segregation of duties, staff identifying to themselves sounds odd to me, and I think the WMF staff with access to this nonpublic information should identify themselves somehow to the community.
Further, I think the users who supply personal information have the right to know what happens with their information. If the information is stored, and later accessed, the user in question should be informed about who accessed it and what it was for.
Concerning the OTRS users, I do not think they need to fall under this policy. The information stored in the OTRS system is information the sender disclosed themselves by actively choosing to make contact, while the information a checkuser handles goes deeper. However, I do think that some policy, a lighter version, is needed to assure confidentiality.
To answer the question "Do you believe this level of identification is sufficient?": I think it is more than sufficient for what this identification is needed for.
The government of the Netherlands gives a specific list of ways an identity card or passport can be checked, and considers it possible to verify the authenticity of the document only by visual inspection (holograms, watermarks, etc); checking authenticity is not possible from a copy or scan. So the question which starts with "If we are able to verify that the identification document submitted is valid" rests on an assumption that isn't as reliable as the question suggests.
"is there a way to verify that the identification document belongs to the particular community member who submitted the identification document?" -> I think it is very easy to know someone else's information and send that in instead of your own. Faking a copy of an ID document is relatively easy as well. When someone needs to identify themselves (to get a social allowance, to open a bank account, etc), that requires face-to-face identification: you come to a place with your ID and show it to a trained person from that organisation, who checks that the picture is really you and that all the security features are in place. If you want to be sure that someone is the actual person who has access to an account, they would show their ID, get a token from the employee, log in locally there with their account, and add that token to a specific page. That is the only way to be sure of this.
I still have questions about the effectiveness of this policy. Where does the need for this new policy come from? What is the current policy (the one in use before this proposed one becomes active), what is wrong with it that this change is needed, and what exactly will change? Perhaps I have missed it, but I haven't seen it...
So I have some questions, I consider them basic questions, and I hope someone from WMF can answer those. Thanks! Romaine (talk) 03:25, 21 November 2013 (UTC)Reply[reply]
Speaking for myself as an OTRS agent, I have no problem with providing my real name, date of birth and home address — as long as the information is for the Foundation's use only, of course, and they don't send mail to my home address. Some tickets contain highly sensitive material, so I can understand that the Foundation wants some confirmation that you are who you say you are. Natuur12 (talk) 12:36, 22 November 2013 (UTC)Reply[reply]
Hi Natuur12. Thank you for taking the time to share your opinion! Just to let you know, we will not be using your submitted address to send you things. The only time we would mail you something would be in relation to this clause: "complete verification of email address or address processes (such as returning a postage-prepaid verification postcard sent to their submitted address or responding to a confirmation email sent to their submitted email address) if requested to do so". Mpaulson (WMF) (talk) 22:47, 3 December 2013 (UTC)Reply[reply]
Let's assume someone is granted permission on 2014-01-01 and resigns on 2015-01-01. Then what is the expiry date of data retention: 2014-07-01 or 2015-07-01? The expression "the period in which you have access rights plus six months" seems quite misleading. – Kwj2772 (msg) 09:05, 27 November 2013 (UTC)Reply[reply]
That doesn't strike me as ambiguous. The period is from 2014-01-01 to 2015-01-01 — a period being an interval of time, including the starting moment, ending moment, and all the moments between. So that means the materials are stored for all of those moments, plus all the moments extending for six additional months, which would presumably be until 2015-07-01. (I'm honestly not fussed about whether it's interpreted to mean the end date plus six calendar months, or the end date plus 183 days.) --Pi zero (talk) 14:04, 27 November 2013 (UTC)Reply[reply]
Hi Kwj2772. Pi zero correctly interprets this provision. The period in which you have access rights in the scenario posed is 2014-01-01 to 2015-01-01, which means WMF can retain the information for 6 calendar months following 2015-01-01, that is, until 2015-07-01. Mpaulson (WMF) (talk) 22:38, 3 December 2013 (UTC)Reply[reply]
Hi Everyone! Thank you again for your input on this thread. I just wanted to check in to make sure everyone's questions were addressed. I think we've got everything, but if you feel like we missed something, please let us know and we'll do our best to address your question or concern. Mpaulson (WMF) (talk) 21:48, 18 December 2013 (UTC)Reply[reply]
Hi Michelle; I still need to reply on Pajz but that shouldn't make any difference as I'm still waiting for your reply here. Trijnsteltalk 22:25, 18 December 2013 (UTC)Reply[reply]
No, there are still a lot of open questions and complaints on this page, and I haven't even managed to write something about the raised question of whether the WMF should require identification at all. --MF-W 22:41, 18 December 2013 (UTC)Reply[reply]
Sorry guys! I think my comment was not very clear. I meant within this particular sub-thread entitled "New Version of the Access to Nonpublic Information Policy Draft" not the entire discussion page! Mpaulson (WMF) (talk) 23:09, 18 December 2013 (UTC)Reply[reply]
Briefly: I think lowering the overall age to 16 makes sense. –SJtalk 13:47, 28 December 2013 (UTC)Reply[reply]
The Foundation will share submitted materials…
The following discussion is closed: closing for lack of discussion and response after Rubina's last change, will archive in a couple days if not reopened. Jalexander--WMF 00:42, 8 January 2014 (UTC)Reply[reply]
The new draft continues to mention that the Foundation will share submitted materials with third parties when such disclosure is:
(B) required by law;
(C) needed to protect against immediate threat to life or limb;
(D) needed to protect the rights, infrastructure, or safety of the Wikimedia Foundation, its employees, or contractors.
First of all, I'm not sure whether the phrase "submitted materials" is the best one to use here since the Foundation promises to destroy the original media in which the materials were submitted "immediately upon transfer to the electronic database." Perhaps the word data or information might be better here; but that might as well be just my wrong understanding of English as a non-native speaker.
Secondly, and that's my main point here, I'm very concerned about the scope of (A) and (D), and somehow second Snowolf's opinion that our personal information should ideally only be shared when required to do so by law (or even a court order, but I understand Michelle's point above).
If the Foundation is planning to share that very sensitive information with other parties, then I'd like to be informed immediately (and preferably beforehand) why they are sharing it, with whom they are sharing it, for what reasons, what the information will be used for, and what will happen to it once that task, research or whatever is finished, so that I have time to withdraw that information immediately if I feel I am at risk.
In general, I loathe the idea of the Foundation sharing my personal information with anyone other than law enforcement agencies, and even that isn't something I'm comfortable with, because that information could then be retained by those agencies forever, and as a non-US citizen I'd have no way whatsoever to influence its removal.
In my opinion, all those situations except perhaps (B) are too general, and I would like the Foundation to work on them in a way that would limit those possibilities to a few, precisely named situations; removing (A) might be a good start. odder (talk) 09:29, 21 November 2013 (UTC)Reply[reply]
I fully agree with odder. Especially (A) makes me cry. I really see no reason at all to share this private information with someone else. Even if an NDA is signed by the third party, it increases the risk of a leak. I would like to hear the ideas behind (A). If (A) should stay, a sentence like "The community will be informed about any disclosure, including the name of the third party and the reason" should be added as a minimum requirement. Raymond (talk) 10:01, 21 November 2013 (UTC)Reply[reply]
I agree with the above comments. If my personal details are being revealed to a third party (not that I would ever want them to be), I would expect to be notified when it happens and given a justification why. Ajraddatz (Talk) 17:56, 21 November 2013 (UTC)Reply[reply]
+1. (A) is a can of worms, as it can allow anyone with an idea of how to make money out of our information to use an NDA to get hold of it all. (D) for the reasons expressed earlier on this page, i.e. "rights" in this context could be turned around to mean many different things, including property, the value of contracts, and potential rights that a future WMF might believe it has, such as over the projects, project assets or "brand value". --Fæ (talk) 17:02, 22 November 2013 (UTC)Reply[reply]
Seeing that no one from the WMF has responded to this in almost two weeks, I'm assuming you don't care about my (and other people's) concerns, which leaves me baffled as to why you requested our feedback at all in the first place. odder (talk) 12:46, 4 December 2013 (UTC)Reply[reply]
Sixteen days have passed since I started this section, and still there hasn't been any response whatsoever from the WMF. (Just sayin'.) odder (talk) 13:48, 7 December 2013 (UTC)Reply[reply]
Hi odder. We will be responding in the next few days. Thank you for your patience. In the meantime, we'd like to hear what others think about this section. Mpaulson (WMF) (talk) 00:34, 10 December 2013 (UTC)Reply[reply]
Hi odder and all those who commented on this issue, thanks for your patience. We’ve been working on tightening the language within this section.
Odder, your first question regarding the phrase “submitted materials” has caused us to reflect further on refining this phrase. How do you feel about rephrasing it as “submitted materials and information”? The reason we distinguish “materials” from “information” is because, for instance, confidentiality agreements would need to be retained in their original form. The definition in d(1) would be altered to read: “In consideration of the privacy of the community members with access rights, the materials, documents, and identification information submitted to the Wikimedia Foundation under this Policy (collectively “submitted materials and information”) will be kept confidential.” We would redraft d(3) to read: “The submitted materials and information will be transferred to a secure electronic database upon receipt, and the original medium in which the materials were submitted will be destroyed immediately upon transfer to the electronic database, except in the case of signed legal documents, such as confidentiality or non-disclosure agreements, which must be retained in their original form.”
As to the question of sharing submitted materials and information with third parties, the community has expressed special concern over provisions d(1)(A) and d(1)(D). Regarding d(1)(A), we have thought hard about how to balance the concerns of the community with the need to rely on certain third parties (people and services) to function as an organization and a website. For example, this provision is necessary for use of third-party communications channels such as e-mail storage providers (albeit for temporary storage), third-party backup services, and independent identification verification services (if that is something the community believes is a good alternative to collecting identification documents or simply recording identification information), services which the Foundation is ill-equipped to handle on its own. On other occasions, we may need to share submitted materials and information with our outside counsel, or allow security auditing firms to access the system as a whole. Therefore, we can’t remove these provisions altogether, but we’ve attempted to narrow them as follows -- WMF will not share submitted materials or information unless: (1) a non-disclosure agreement has been reviewed and approved by WMF legal, a process which entails a review of the nature of the information to be shared and the third party with whom we are sharing the information; (2) the third party warrants in the agreement that non-public information such as submitted materials and information will be used only for the purposes contemplated by the agreement; (3) disclosure is deemed reasonably necessary; and (4) third parties agree to return or permanently destroy the materials and information as soon as reasonably feasible after the need for the submitted materials or information ceases.
Regarding d(1)(D), the reference to “property” has previously been removed and replaced with “the rights, infrastructure, or safety of the Wikimedia Foundation, its employees, or contractors.” This could be rephrased to include only “the safety of individuals or Wikimedia technical infrastructure.” How do you feel about this proposed change? Rkwon (WMF) (talk) 23:22, 12 December 2013 (UTC)Reply[reply]
Hi Rkwon (WMF), thanks for such a detailed answer. I think that the change in wording you proposed is good; as I said, it's just a side issue, not of much importance to me, but I'm glad you decided to make it more clear. Your answer to the second part of my query is much more important, and I'd like to focus on this, because your response raises as many questions as it answers.
@Odder: Thanks odder, conversations like these definitely help us to think deeply about issues. Again, thanks very much for your support as we work to improve this draft. Please see my responses to your comments inline. Rkwon (WMF) (talk) 00:37, 20 December 2013 (UTC)Reply[reply]
The proposed wording of that part of the policy does indeed change some details, but not its essence, and does not address the many questions and worries voiced by many different people on this talk page. I would have thought it should be clear by now that there is general opposition to the possibility of the Foundation sharing our personal non-public information with any third party.
I see no reason for you to share my birth date and address with any of the entities you mentioned: e-mail storage providers, third-party backup services, independent identification verification services, not even security auditing firms. None of them have any business in accessing my personal information, and given the amount of resources at the Foundation's disposal, it should be relatively easy to set up a secure e-mail address and a backup service without the need to refer to third parties, I imagine.
If you are still planning to share that information with third parties, then, as I wrote at the top of this section, I'd want to be informed immediately (or, preferably, even beforehand) about (1) why you are sharing it (I mean, is it really necessary for other parties to have that information about me?), (2) with whom you are sharing it, (3) for what reasons you are sharing it, and (4) what that information will be used for. That is, I believe, quite a reasonable expectation, seeing that you do not plan to give us the ability to withdraw our data immediately, a right that is only natural for many of the volunteer functionaries who live in the European Union (or other jurisdictions with similar legal rights, for that matter). And that's already a lot, without even questioning whether, as you put it, disclosure is deemed reasonably necessary, which is so vague a statement you could fit an elephant in it.
@Odder: We generally have no reason to share submitted materials and information with third parties unless d(1)(B) or d(1)C are implicated, or in certain narrow circumstances such as sharing with outside counsel. However, when materials are submitted to us, they become part of a technical and organizational infrastructure that necessarily involves third parties and (as minimal as feasible) sharing. Using a “secure e-mail address and a backup service” still constitutes sharing with third party platforms, i.e., parties that provide services which the Foundation cannot provide for itself but that have agreed to take measures to keep non-public information secure and confidential. This sort of sharing is technologically unavoidable if submitted materials and information are to be collected at all.
This raises the question of why they must be collected. Currently, collection of submitted materials and information are required in order for us to comply with current Board-approved policy, which states that “[o]nly persons whose identity is known to the Wikimedia Foundation shall be permitted to have access to any nonpublic data or other nonpublic information…” However, Michelle Paulson asked here: “Should we recommend repealing the current policy to the Board? Should we keep working at editing the Access to Nonpublic Information Policy draft until it creates meaningful accountability for community members with access rights? Some other idea? Please note that the Board, rather than staff or the community, has the sole power to repeal the current policy or to adopt any draft for a new version of the policy that may be created as a result of this discussion.” But there was a lack of community discussion and consensus regarding whether that policy should be repealed. Would you consider voicing your thoughts in that thread? We are open to the idea of changing this draft of the policy such that it no longer contains an identification requirement, which would help to lessen concerns regarding submission methods, storage, and disclosure of such information. However, we would keep portions that deal with responsibilities of community members with access rights.
Again, we have the option of not taking any information, which frankly we think may be the best approach given the pushback. That would render moot this and other issues of concern. Rkwon (WMF) (talk) 00:37, 20 December 2013 (UTC)Reply[reply]
As far as d(1)(D) is concerned, I think that wording it as the safety of individuals or Wikimedia technical infrastructure is a step in the right direction, as long as you replace Wikimedia with Wikimedia Foundation. odder (talk) 17:27, 13 December 2013 (UTC)Reply[reply]
@Odder: I see how that’s confusing. We will change that to “the technical infrastructure of the Wikimedia Foundation and Sites.” Rkwon (WMF) (talk) 00:37, 20 December 2013 (UTC)Reply[reply]
The following discussion is closed: closing because it appears the conversation has drawn to a close, if not please reopen, will archive in a couple days if not Jalexander--WMF 00:48, 8 January 2014 (UTC)Reply[reply]
Has the WMF considered Jumio's "Netverify" as a solution? DanielTom (talk) 21:56, 22 November 2013 (UTC)Reply[reply]
I would like to hear more about this. My suggestion is that you raise it on the talk pages. Thanks. Geoffbrigham (talk) 23:41, 22 November 2013 (UTC)Reply[reply]
Thanks, I did so, and suggested that you contact Jumio directly to see what they have to say about this "problem". ~ DanielTom (talk) 16:29, 28 November 2013 (UTC)Reply[reply]
It's software that verifies IDs in real time. The process is exceedingly secure (encrypted), supports IDs/passports/driver licenses from over 90 countries, and is PCI Level 1 compliant. The idea is that you hold your ID in front of a webcam, and Netverify will read and extract the data from the document in a few seconds to confirm your identity and send it to the WMF. I could go on about its virtues, accreditations, and which large companies use it (though I will mention Pokerstars), but I'm not a salesperson—and, in any case, you can find this information easily yourself. The WMF can contact Jumio and see if they can be of assistance, as I believe they can. ~ DanielTom (talk) 19:37, 28 November 2013 (UTC)Reply[reply]
Part of being PCI compliant means that data is stored, securely, meeting regulatory requirements. (Regarding "image", as far as I know, no print screen is taken, it just tells you whether the ID is valid or not.) As I said, they use strong encryption, and it's a safe system with perfect track record, but the WMF can ask them (Jumio) for details, discuss with them, and then decide if this is a system worth implementing—I'm the wrong person to ask. DanielTom (talk) 21:54, 28 November 2013 (UTC)Reply[reply]
To me, the idea of a third party storing our non-public, personal information of this kind is absolutely unacceptable. The Wikimedia Foundation has an annual budget of over US$50 million, surely they can spare a few staff hours to handle this themselves without the need of relying on a third party software. odder (talk) 09:40, 4 December 2013 (UTC)Reply[reply]
Hi DanielTom. We will get in contact with Jumio and get more details about their service to see if it's a viable option, but I'm curious as to whether you (or anyone else out there) know of any similar services that are open source that we could look into as well. Thanks for the suggestion! Mpaulson (WMF) (talk) 21:21, 3 December 2013 (UTC)Reply[reply]
Hi. Two obvious reminders: 1) Jumio has experience (and must comply) with industry-specific legislation, so I would suggest you ask them directly how they see the Netherlands "problem", given that they list the Netherlands as a "supported country" on their website. 2) As to pricing, Netverify is supposed to save companies money by eliminating manual costs (&c.), but this may only be true for those with high levels of "transactions", so obviously you should make sure the WMF would only be paying for the number of verifications they'd run. (I trust this is needless advice.) DanielTom (talk) 22:57, 3 December 2013 (UTC)Reply[reply]
Thank you for suggesting Jumio! I looked into using NetVerify and found several potential issues. The first is that they do not collect information from anyone whom they know to be under the age of 18. This means that community members between the ages of 16 and 18 who have access to nonpublic information would not be able to use the service. Second, Jumio’s data retention policy states “We don’t keep personal data longer than absolutely necessary for our services, unless we are obliged to do so by law.” This is somewhat vague and we do not have a good indicator of how long data is actually kept. Data may be deleted immediately after verification or kept for some indefinite period of time. We emailed Jumio for more details about their policy but have not received a response. Third, users of their service also consent to the disclosure of their personally-identifying information in more situations than Wikimedia asks its users to consent to. Jumio reserves the right to disclose data “for any purposes related to Jumio’s business, including, but not limited to providing you with Jumio’s services and personalizing and improving those services for you.” This includes disclosures to companies that Jumio partners with to provide this service for Wikimedia as well as disclosures that are “reasonably necessary to protect the property or rights of Jumio, other users, third parties or the public at large.” “Property or rights of Jumio” is not defined, and this is language that many users were uncomfortable with when it was in our proposed Access Policy. Furthermore, in the event of a merger, acquisition, or bankruptcy or other sale of all or a portion of Jumio’s assets, Jumio may transfer your information to a successor organization or acquirer. 
Finally, it’s worth noting that Jumio’s product is not open source; Wikimedia strives to use open source products whenever possible, and we would like to hear from the community whether anyone knows of an open source product that provides the same service. What do people think about these issues? Should we look into Jumio further? Or are these issues something that would prohibit us from using their service? RPatel (WMF) (talk) 18:46, 9 December 2013 (UTC)Reply[reply]
Just to update the community, we spoke to a representative from Jumio last week and he will be getting back to us on whether we would be able to use the service for individuals under the age of 18 and with some more information about their data retention policies. RPatel (WMF) (talk) 23:04, 18 December 2013 (UTC)Reply[reply]
Another update: Jumio responded that they will allow us to use NetVerify for users that are 16-18 years old. RPatel (WMF) (talk) 00:15, 19 December 2013 (UTC)Reply[reply]
Thank you to those who participated in the NetVerify discussion! We looked into using Jumio as DanielTom proposed, and although there may be some benefits to using their service to verify IDs, there are also disadvantages, some of which are noted above. Overall, it does not appear that the community is interested in us pursuing this option, so we will be closing this part of the discussion for now. RPatel (WMF) (talk) 19:59, 2 January 2014 (UTC)Reply[reply]
More about the ID procedure
The following discussion is closed: closing since it seems discussion has died in this section, please reopen if still needed. Will archive in a couple days if still closed. Jalexander--WMF 22:36, 22 January 2014 (UTC)Reply[reply]
I approached James Alexander and he advised that I should put my thoughts on this talk page, so here they come. My opinion has changed a bit after the changes, but the identification rule still stands after all. The WMF still wants to make a list of everyone with CU/OS/steward etc. rights. I do understand that it's important, but why do you want to know our address? Isn't the place and email address enough (besides name/age; I don't like date of birth either, to be frank)? And you will keep the info for as long as someone has those rights, plus 6 months. Why the extra 6 months? To be honest, I'm still not fully comfortable with this policy.
Another question: where do you stand right now? James said that you are considering to get rid of the identification rule; what can we expect? Trijnsteltalk 16:18, 10 December 2013 (UTC)Reply[reply]
As a clarification, would a post office box be accepted for a mailing address? --Rschen7754 19:06, 10 December 2013 (UTC)Reply[reply]
Hi Trijnstel, I'd be interested in why you're asking why the Foundation wants to know your address. I mean, here's what I do not fully understand about some of the criticism raised on this page: I think we are all aware that the primary reason why personally identifiable information is requested is to be able to pursue violations of WMF policies through legal means and/or to be able to facilitate legal proceedings initiated by third parties against volunteers. But, based on this, I do not quite understand your question. Isn't it clear that to achieve these goals, they also need (at the very least) your address and that your bare name is insufficient? You're also asking why the information is needed for six additional months. Well, it's been pointed out that some violations of relevant policies may be uncovered only after your resignation from your role (or the removal of your access rights). This certainly sounds plausible given the intent of the policy. If a user with CheckUser privileges intentionally violates the privacy rights of a Wikipedia user, hopefully the community will remove his rights as soon as possible; but if the extra six months are removed from this Policy, such a commendably quick reaction by us users could turn out to just benefit the infringer. In fact, if he is clever, he'll immediately request the removal of his access rights the moment he learns that some legal action is being considered. That is to say, I share some of the criticism brought forward on this page (mostly regarding disclosure of the information). But I think that if you subscribe to the general intent of the policy (perhaps I'm getting you wrong at this point), you cannot really, I think, demand to reduce the amount of data needed from volunteers under the current version of the draft. Best, — Pajz (talk) 22:12, 15 December 2013 (UTC)Reply[reply]
@Stephen LaPorte (WMF), I certainly respect Pajz, but he doesn't answer my questions. Why do the staff need my address? Isn't the other information (name/email) enough? And I understand the 6 months, but when there are concerns about someone, they certainly arose before that person resigned. You could add a clause saying that you may retain the information for as long as needed when there are such concerns. This would mean that you need to destroy it for everyone else (I don't expect this will be needed often, as we have only had a few of those cases, very few). And that last question: where do you stand right now? Will you get rid of the identification rule or not? Trijnsteltalk 18:46, 22 December 2013 (UTC)Reply[reply]
@Trijnstel: The WMF would retain addresses to increase the accountability for users with access to nonpublic data. The address information would not be disclosed, except for the limited circumstances described in the policy, but it would make it easier to identify the user that is associated with a specific account. With a name and email address, there are only limited ways to verify someone's identity. We proposed retaining information for six months after a user resigns, based on feedback we received that cases of abuse may be difficult to immediately detect. Six months was chosen as a reasonably short period, but your alternative may also address the same potential issue. You mentioned that this is rarely an issue -- do you think that retaining information for six months would add any benefit? Currently, we are still evaluating feedback and following the open threads on this page, so the identification requirement is still open in this draft. Do you think we should have an identification rule? If so, what do you think it would look like? Thanks again for your feedback. Best, Stephen LaPorte (WMF) (talk) 23:44, 23 December 2013 (UTC)Reply[reply]
The following discussion is closed: closing since it seems like discussion has ended here. Please reopen if needed and will archive in a couple days if still closed. Jalexander--WMF 22:37, 22 January 2014 (UTC)Reply[reply]
Suppose I want to edit some political article on Chinese Wikipedia with a new account, to avoid my identity being found. (This is allowed in zh:WP:SOCK#LEGIT.) If the edits make the Communist Party mad, they could find the CUer and make him check the IP. Then they could find me and prevent me from making future edits. According to this proposal:
"(v) law enforcement, administrative bodies, or other governmental agencies if required by law, provided that the community member notifies the Wikimedia Foundation unless restricted by law from doing so;"
"The specific scopes and categories of state secrets shall be stipulated by the state secret-guarding department together with the Ministries of Foreign Affairs, Public Security and State Security and other central organs concerned."
So, the CUer can give them my IP without telling Wikimedia, and without breaking any Wikimedia rule, while I vanish from Wikipedia? --維基小霸王 (talk) 16:34, 17 December 2013 (UTC)Reply[reply]
Hi 維基小霸王! Thank you for your question. The short answer to your question is unfortunately, under certain circumstances, yes. When any person or entity holds or has access to the personal information of others, there is always a risk that they may be compelled under local law to disclose that information. But depending on the jurisdiction and the specific circumstances involved, that person or entity may have methods of recourse or ways to legally challenge demands for disclosure.
And while we generally require (under this policy draft) the disclosing party to notify us if they disclose information to law enforcement, administrative bodies, or other governmental agencies if required by law, there may be certain situations where the disclosing party is legally required not to notify us, in which case we do not expect them to do so. These kinds of situations are, of course, not ideal, but there is little that can be done to change that without making major changes to the way the Wikimedia projects are supported by the community. Mpaulson (WMF) (talk) 22:41, 18 December 2013 (UTC)Reply[reply]
This might be a radical idea, but why not just disallow people in countries with situations like this from becoming CUs? PiRSquared17 (talk) 22:54, 18 December 2013 (UTC)Reply[reply]
Interesting idea, PiRSquared17. But how would we decide which countries to disallow? To a certain extent, this kind of situation could happen anywhere. Mpaulson (WMF) (talk) 23:04, 18 December 2013 (UTC)Reply[reply]
Unfortunately, there are already 5 CUs on Chinese Wikipedia, and they are all from the PRC. Though we don't limit other Chinese Wikipedians from becoming CUs, this is how things currently are.--朝鲜的轮子 (talk) 13:21, 5 January 2014 (UTC)Reply[reply]
Well, I think that since Wikimedia is located in the U.S., the CU would only be following Chinese laws/rules and would not be breaking any U.S. and/or Wikimedia laws/rules. I think the CU should be harder to find, and you could help make them harder for the authorities to find. I also think your suggestion about zh's CUs engaging in dialogue about their situation with Wikimedia would clarify a few things that this proposal is currently lacking. TeleComNasSprVen (talk) 12:44, 26 December 2013 (UTC)Reply[reply]
You say: "the CU could only follow the Chinese laws/rules and would not break any U.S. and/or Wikimedia laws/rules". To be clear, that's what the "applicable" law is for - if a CU is in China, US law is unlikely to apply; and vice-versa.
"I think that the CU should be harder to find": Did you have an example in mind of how they are not hard to find? We're of course in favor of protecting CU privacy.-LVilla (WMF) (talk) 23:22, 2 January 2014 (UTC)Reply[reply]
By the way, it is also very possible--perhaps even more feasible--for the government to simply search the ISP database and filter for the IPs that connect during the time periods when you are editing Wikipedia, and find your IP that way, rather than finding a CU and forcing him/her to run a check.--朝鲜的轮子 (talk) 05:18, 6 January 2014 (UTC)Reply[reply]
As far as we know, the Pierre-sur-Haute situation occurred not because the French government did "strange things" to identify the CU who was involved, but because he was a very publicly self-identified member of the Wikimedia.fr leadership. We obviously can't (and shouldn't!) prohibit CUs from identifying themselves publicly. As for searching the ISP connection database: for CUs connecting through https, ISPs can see only which domain is being accessed, not which pages; and the CU log is private/protected, so they can't correlate timestamps from that log with access to the domain. So I don't think CUs can be discovered that way, unless they are using http to access the site.
Disclosure of IPs on SPI cases
The following discussion is closed: closing because it looks like this question was answered (by LVilla near the end). Please reopen if needed and will archive in a couple days if still closed. Jalexander--WMF 22:38, 22 January 2014 (UTC)Reply[reply]
I have a yes or no question relating to the issue LFaraone has talked about above. The answer to the question may significantly alter procedures on the English Wikipedia.
If a sockpuppet investigation determines a positive link between an account and an IP, does stating this connection publicly (and therefore disclosing a user's IP) count as "a necessary and incidental consequence of blocking a sockpuppet"?
@Deskana: Hard for a non-SPIer to know, I suspect. Why is this done? Why is it done publicly? Can you link me to an example of the current practice? I think when we drafted this we were under the impression that CUs only disclosed IPs under fairly specific circumstances, and that's what we were trying to capture here. But if the language doesn't match current practice then obviously we should think about fixing the language. (More likely the language than the practice, but of course we should also review the practice.) —LVilla (WMF) (talk) 22:13, 3 January 2014 (UTC)Reply[reply]
@LVilla (WMF): Thanks for the response. Disclosure of IPs is not generally done on SPI cases. That said, very rarely, the known IPs or ranges of a sockpuppeteer may be given out. This may be done, for example, so that admins can keep an eye on the range for sockpuppetry because a rangeblock is infeasible due to collateral damage. This is why I asked my question; depending on whether the answer is yes or no, this practice is either allowed to continue or not, respectively. Does that help you answer my question? --Deskana (talk) 22:20, 3 January 2014 (UTC)Reply[reply]
Is the reason this is done publicly just because there is not an alternative private channel? Or just habit? Or... ? —LVilla (WMF) (talk) 22:47, 3 January 2014 (UTC)Reply[reply]
Sometimes they do it to publicly shame or harass the alleged "sockpuppeteer". ~ DanielTom (talk) 22:51, 3 January 2014 (UTC)Reply[reply]
@LVilla (WMF): There is not an alternative private channel. CheckUsers have the CheckUser wiki, but there's no private space to communicate this information to administrators efficiently.
@LVilla (WMF): What Keegan said. There is no private channel to contact just administrators and it would be infeasible to get all admins to sign up to a private admin mailing list. Even if there were such a mailing list, I think it would be misguided to describe it as a "private list" given the number of people on it (over 1,000, or even more if you're considering any admin on any project) and the fact that all that's required is an RfA. --Deskana (talk) 13:30, 5 January 2014 (UTC)Reply[reply]
In that case, I think Avi's analysis is basically right - current practices would be permitted as necessary and incidental, given the (bad) options for non-public communication. —LVilla (WMF) (talk) 23:24, 7 January 2014 (UTC)Reply[reply]
If I am reading the new policy properly, it would be allowed under b(vi) which states: "the public, when it is a necessary and incidental consequence of blocking a sockpuppet or other abusive account." This is actually less restrictive than the current policy, and should certainly allow release to limited sectors of the public when it is a "a necessary and incidental consequence of blocking a sockpuppet or other abusive account" such as filing an abuse case with an ISP or conversing with other CUs or admins whilst trying to minimize or prevent abuse to a project. -- Avi (talk) 01:45, 5 January 2014 (UTC)Reply[reply]
Rethinking the access policy: Response to recent feedback
The following discussion is closed: closing for lack of comments in this section, please reopen if needed more. Will archive if still closed in a couple days. Jalexander--WMF 22:35, 22 January 2014 (UTC)Reply[reply]
We would first like to thank the community for all of the feedback it has provided on this Access to Nonpublic Information Policy draft, particularly over the last week and a half. You have given us a lot to think about, and we understand many of your concerns, particularly regarding the privacy of those community members to whom this policy would apply if adopted by the Board.
After extensively discussing the feedback received over the past week, we think it may be more productive to first take a step back and discuss why this draft was created and whether replacing the current Access to nonpublic data policy is really the direction we should be moving in.
The primary reasons why we created this new draft are:
We do not believe that the current practices regarding collection and retention of community member identification are in compliance with the Board’s current Access to nonpublic data policy and hoped to bring the policy and practices closer to fulfilling the original intent of the policy;
We hoped to provide more guidance to community members who have access to nonpublic information as to when they should access or share nonpublic information; and
We hoped that the draft would better explain to the rest of the community (including inexperienced users) why certain community members have access to nonpublic information regarding other community members and the obligations (such as confidentiality) community members in those roles have with regard to that nonpublic information.
We believe the latter two goals are worth further exploration and discussion with the community. However, we think that the Foundation and the community should squarely address the first issue before diving into the latter two goals, particularly in light of the feedback we have received thus far.
The established practice to fulfill this obligation has generally been for such volunteers to submit an identification document of their choice to a Foundation staff member in the medium of their choice and the Foundation would not retain a copy of the identification document, nor the information contained in the identification document. When this policy was adopted, the Foundation was an exceedingly small organization and did not have the administrative capacity to do much more. Given those constraints, the Foundation believed that this practice met the bare minimum requirements set forth by the policy. But we do not believe that such a practice ever fulfilled the intent of the policy.
Community members in these roles have access to sensitive information. Some are entrusted with personally identifying information about other users. Others have the ability to view material that has been suppressed for sometimes serious reasons. The intent of the policy was to provide personal and legal accountability for the rights held by these select individuals. The first step in having the ability to provide for such accountability is knowing who exactly is accountable. However, the current practice of checking and immediately destroying identification documents means that the Foundation honestly does not know who these individuals are. And it means that we are inaccurately representing that "[o]nly persons whose identity is known to the Wikimedia Foundation [are] permitted to have access to any nonpublic data or other nonpublic information produced, collected, or otherwise held by the Wikimedia Foundation" to the general public and to those who trust us with their personal information.
So, where does that leave us? We believe there are two basic options (but, of course, are open to hearing more possibilities from the community):
Present the Board with a draft Access to Nonpublic Information Policy that includes: (a) the collection, verification, and retention (or whatever would be required to provide meaningful accountability) of the identification documents by those who have access to nonpublic information in a manner that is acceptable to the community; and (b) a confidentiality agreement that obligates those individuals to use their access rights and the information they have access to as a result of those rights in a clearly prescribed manner.
In the spirit of full disclosure, we had not thought of the former option until we read your feedback and went back to the drawing board over this past week. We had been so focused on what we could do to ensure that the spirit of the current policy was met that we created a new Access to Nonpublic Information draft before we really thought through whether having this kind of policy at all still made sense for the Foundation, the community members with access rights, or the community at large.
We are open to considering either of these options and believe each has its own benefits and disadvantages.
But we would like to hear your thoughts on the matter. Should we recommend repealing the current policy to the Board? Should we keep working at editing the Access to Nonpublic Information Policy draft until it creates meaningful accountability for community members with access rights? Some other idea?
Please note that the Board, rather than staff or the community, has the sole power to repeal the current policy or to adopt any draft for a new version of the policy that may be created as a result of this discussion. Right now, we want to hear the community's thoughts and ideas on a tough subject, so that we can make an appropriate recommendation to the Board. We would also like to hear the thoughts of the Ombudsman Commission on this matter and encourage them to participate in this discussion.
We thank you in advance for your feedback and patience as we work together to find a solution that makes sense for the community. Mpaulson (WMF) (talk) 23:13, 23 October 2013 (UTC)Reply[reply]
↑ Before we get into serious discussions about those possibilities, we would like to address whether we should repeal the current policy, which mostly deals with identification.
Michelle, I think this response is actually talking past the biggest issue people have raised on this page. You're speaking to us as if the objection is us being known to the Foundation; for the most part, it's not. The main objection raised by commentors here has been the manner in which you propose to retain our personal details, and the format in which you propose to require that identification (i.e. photocopies of actual ID documents containing all personal information). It has nothing to do with whether or not we want the Board policy repealed or changed, but everything to do with how you propose to implement that policy. I would be much more interested in hearing the WMF's responses to the data security concerns that have been raised on this page than in being presented again with "Well, either we collect your information (in some way, with some protection, maybe, not specified, that doesn't matter) or we Change The Entire Policy". Fluffernutter (talk) 23:35, 23 October 2013 (UTC)Reply[reply]
I disagree - quite a few users (including myself) have raised concerns over the need for the change. Mpaulson's explanation does, in my mind anyway, do a lot to settle my concerns in that area. The concerns over secure storage of the information and the timeframe of that storage are important, but I feel that clearly establishing the rationale behind the proposed change is a good starting point for any discussion. Ajraddatz (Talk) 23:39, 23 October 2013 (UTC)Reply[reply]
I see your point, but I would feel better knowing that certain things are being removed from the table before weighing in here. --Rschen7754 23:54, 23 October 2013 (UTC)Reply[reply]
Well, it comes off as a bit binary, like "either we identify this way exactly as proposed, or we don't identify at all." I think if Michelle addressed the specific concerns expressed recently, it would go a long way. --Rschen7754 04:55, 24 October 2013 (UTC)Reply[reply]
Hi Fluffernutter and Rschen7754. The reason I haven't responded to the individual objections and concerns raised by commentators above is because I think it's important to first figure out whether a policy requiring some form of identification to the Foundation is needed at all. If not, we should then discuss whether there should be other policies or guidelines regarding the terms under which certain community members have and use access rights and whether those policies or guidelines should be drafted by the community or the Foundation (obviously with community consultation). If so, as James said, literally everything is on the table for discussion, including what type of retention is used, what kind of security would be adequate, what kind of confidentiality obligations there are, etc. The draft presented here was really just a starting point. Mpaulson (WMF) (talk) 18:17, 24 October 2013 (UTC)Reply[reply]
Identity documents, especially passports, have a lot of information that the WMF doesn't need to meet the requirements of the policy. When someone identifies, keep a record of the name, address and phone number (and maybe age, but not date of birth). That might alleviate a substantial portion of the concerns about identity theft. Nathan T 01:51, 24 October 2013 (UTC)Reply[reply]
Hi Nathan. Thank you for your comment. This is definitely something that should be discussed as a possibility if the policy is to be rewritten. As a preliminary matter though, I'd love to hear your thoughts on whether you believe that identification to the Foundation should continue. Mpaulson (WMF) (talk) 18:26, 24 October 2013 (UTC)Reply[reply]
As I wrote on the mailing list, I'd like to hear from the Board. This is the Board's policy and the Board's prerogative. Is the Board interested in updating the access to non-public info policy? If so, why and in what ways? If not, we can simply close this discussion and move on, right? The suggestion that Wikimedia Foundation staff appeal to the Board to repeal the current policy and then write a new policy itself seems like a non-starter. --MZMcBride (talk) 03:00, 24 October 2013 (UTC)Reply[reply]
Hi MZ. I think there may be a miscommunication here. The reason why we are re-evaluating this policy is because we do not believe that we are in compliance with the policy as it is currently written. Noncompliance should be addressed and resolved, which means that we should not simply close the discussion and move on.
Repealing the policy is one option available to the Board and I'm sure they would be interested in hearing the community's thoughts on such a repeal prior to making that decision. Again, this option does not mean that the community or the Foundation (with community consultation) could not create other policies or guidelines addressing how access rights are held and used -- just that such policies would not include the need for identification.
Another option -- which I didn't discuss previously for the reasons discussed below -- to ensure that "[o]nly persons whose identity is known to the Wikimedia Foundation shall be permitted to have access to any nonpublic data" is to change the practices (but not the policy) by which identification and retention is done, such that the Foundation actually knows who these individuals are. The policy as written does not specify how identification should be done, any security provided to the submitted identification documents, under what circumstances the submitted documents could be disclosed to third parties, or how long we should keep them. Given the feedback received over the past week and a half or so, those very things seem to be the biggest concerns of the community and to leave them unaddressed in the policy (but change the practices) seems to be unwise. Submitting any kind of identification is a serious matter and the circumstances surrounding such identification should be clearly described. Do you agree?
And yet another option is for the Board to replace the current policy with a new policy that the Foundation and the community will be in compliance with. If the community believes this to be the best option, we will work together with the community to address the issues in this draft and make it into something that works for the community, and then present it to the Board. Mpaulson (WMF) (talk) 18:52, 24 October 2013 (UTC)Reply[reply]
Thank you, Michelle :) That's a very helpful summary for me. I'd be interested to read more about the importance & impact of two specifics which seem to be not only a clarification, but a strengthening of the previous policy.
The switch from "identity is known" (which could be, say, a confirmation by a trusted party that User is RealName) to "a copy of a unique government ID issued for an identity is stored on a server in the US" (which still requires someone to confirm that the RealName on the ID belongs to User).
Hi SJ. This is a good question. The switch was more meant to specify how the identity is known and ensure that it is consistently known. What had been done in the past frequently meant that the identity was known at some point by someone at WMF. We were not recording the identities when identification documents were checked, so unless the person who checked the identification document in question remembers what was on that piece of identification, it's not known to WMF. This also presumes that the WMF staffer that checked the identification document is still at WMF. That is not really the same as the identity being known to WMF. Does that make sense? Mpaulson (WMF) (talk) 22:30, 30 October 2013 (UTC)Reply[reply]
The switch from "may include proof of majority" to "must include proof of majority".
The switch here was meant to get rid of the ambiguity. My understanding was, in practice, that all community members with access to nonpublic information (except OTRS agents) were required to be 18 years old and the age of majority for their jurisdiction. The reason for this in the current policy, as I understand, was to ensure personal accountability. From a legal standpoint, it is easier to hold an individual personally accountable if they are the age of majority. I saw no reason to leave the ambiguity of "may" in this draft policy if the purpose was to actually have an age minimum. That said, the age minimum (like anything in this policy draft) is not set in stone. It can be raised, lowered, or done away with all together. I'd like to hear what other community members think about the age minimum. Mpaulson (WMF) (talk) 22:30, 30 October 2013 (UTC)Reply[reply]
Thanks, that makes sense. Just to get a more explicit picture of the possibilities: information could look like
"name, age, email, username, address, photo, phone; type of ID, ID date, related files; type of verification, verifier, verification date" ?
Where type of verification could include many things, such as "looked at ID over videochat", "received scan of ID by email", "ran through ID checker", "third-party identification service", "sent PIN to phone", "Paypal micropayment", "postcard to address".... and for most roles a single confirmation could suffice. Related files could include the email sent with a document attached; or a videoconf transcript; a copy of the verification; other extended metadata, or the text of the documents used. I haven't heard a reason to keep a copy of the documents themselves.
Until now, as I understand it, of the above items only the username and verification date were kept? What standard do we use for roles that require background checks? –SJtalk 08:31, 3 November 2013 (UTC)
Imho a longer IP retention (for people with higher privileges) + proof of legal age + an upper tier of content suppression reserved for *some* staff members would be enough. Basically, our definition of "sensitive information" is much, much wider than the one used by most judicial systems around the world. To me it's overkill to treat us as if we were dealing with tax declarations and medical records. Also keep in mind that identification via IP is *always* needed, since the WMF cannot ensure accounts cannot be stolen. --Vituzzu (talk) 14:05, 25 October 2013 (UTC)Reply[reply]
Though at least for me, identification via IP would be useless as I can edit from several IPs a day (college student) and do occasionally travel. --Rschen7754 19:17, 25 October 2013 (UTC)Reply[reply]
IMO requiring a committed identity or some other form of public key cryptography would be the best solution for that. But what do I know. PiRSquared17 (talk) 19:21, 25 October 2013 (UTC)Reply[reply]
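For readers unfamiliar with the "committed identity" idea mentioned above: it is a hash commitment. A user publishes the cryptographic hash of a secret string (for example, their real contact details plus a random nonce) in a public place; if their account is ever compromised, the real owner can reveal the secret string, and anyone can check that it hashes to the previously published value. Below is a minimal sketch in Python — the string format, field names, and example identity are purely illustrative and not any project's actual scheme:

```python
import hashlib
import secrets

def make_commitment(real_name: str, email: str) -> tuple[str, str]:
    """Build a 'committed identity': keep the preimage secret, publish
    only the hash. Revealing the preimage later proves prior ownership."""
    # Random nonce so the preimage can't be brute-forced from guessable details.
    nonce = secrets.token_hex(16)
    preimage = f"{real_name} <{email}> nonce:{nonce}"
    digest = hashlib.sha512(preimage.encode("utf-8")).hexdigest()
    return preimage, digest

def verify_commitment(preimage: str, published_digest: str) -> bool:
    """Check a revealed preimage against the previously published hash."""
    return hashlib.sha512(preimage.encode("utf-8")).hexdigest() == published_digest

# Illustrative use: publish `public_hash` now; reveal `secret` only if needed.
secret, public_hash = make_commitment("Alice Example", "alice@example.org")
assert verify_commitment(secret, public_hash)
assert not verify_commitment("some other string", public_hash)
```

English Wikipedia has a user template built on exactly this idea ("committed identity"), which likewise recommends a strong hash such as SHA-512 so the commitment cannot be reversed or forged.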
Why even require proof of age? Sj brings up a very good point somewhere, that there are some very mature under 18 year olds out there who could do the job well. Why must they be over 18? Obviously it isn't a legal requirement since OTRS volunteers can be 16... Ajraddatz (Talk) 19:40, 25 October 2013 (UTC)Reply[reply]
@Rschen7754: but you can always say "hey, my account has been hacked, I'm not responsible for this!". Actually, the 16-year requirement is badly conceived; it should be the age of majority in one's country of residence, since being 16 in most of the world outside the USA doesn't mean anything. --Vituzzu (talk) 12:05, 26 October 2013 (UTC)Reply[reply]
I read this talk. As an OTRS volunteer I am a stakeholder in this. I wanted to acknowledge this conversation. I have no opinion on this matter at this time. Blue Rasberry (talk) 23:59, 25 October 2013 (UTC)Reply[reply]
I congratulate the WMF on making this statement that it is also possible to discuss changing the current policy in the completely opposite direction from this proposal (not requiring identification at all anymore). It would have been nice if this option had been on the table from the beginning, if the legal team believes the current policy needs to be changed at all, but even so it is very good to see that the outrageous draft really is merely a draft. --MF-W 12:04, 8 November 2013 (UTC)Reply[reply]
Just a note that above (in a section that address the old draft so will be closed soon) there are a couple comments in support of keeping some kind of policy. Jalexander--WMF 20:11, 9 December 2013 (UTC)Reply[reply]
Protect the volunteers by warning those who write in that their emails will be read by anonymous volunteers
The following discussion is closed: closing because it looks like the discussion has ended, please reopen if needed and will archive in a couple days if not Jalexander--WMF 22:54, 22 January 2014 (UTC)Reply[reply]
As I don't do much OTRS work, the extra effort required in sending the Foundation my ID again is not worth it, so I wouldn't continue - and to be fair that's not much of a loss as I do so little work. However, I wonder how many other volunteers will also not bother submitting their ID, for one reason or another. Some users may be put off by the implied position that the Foundation is protecting themselves and/or those writing in rather than protecting the volunteers. It may be simpler and more in keeping with the spirit of co-operation between the community and the Foundation, if those writing in were warned that their emails are being read by volunteers not Foundation staff, and to be aware of that when disclosing private information. If people wish to safely disclose private information, with all the legal consequences, they should write directly to the paid staff of the Foundation. SilkTork (talk) 10:08, 22 November 2013 (UTC)Reply[reply]
Those who write in to OTRS are also volunteers. For example, at Commons there is a standard email template for users releasing files (permissions) where their identity proof is required (including real names). Anonymous "volunteers", who refuse to identify themselves to the WMF, should not have access to that kind of information. It's fine if some people don't want to identify themselves, they have every right to stay anonymous, just don't let them have access to OTRS and other people's identifiable information. DanielTom (talk) 14:39, 22 November 2013 (UTC)Reply[reply]
What do you mean when you say "refuse to identify themselves to the WMF"? How would you define "identify"? Natuur12 (talk) 15:10, 22 November 2013 (UTC)Reply[reply]
A government-issued ID should be required (e.g., government-issued ID card, passport, or driving license). Recent (< 3 months) household bills or bank/credit card statements work as proof of address. This is common practice—I leave it to you to search for which verification documents are required on other large websites (vide Pokerstars' FAQs, for instance). The original WMF policy proposal required a "government-issued form of photo identification containing [...] name, date of birth, and the issuing agency number", and that is how it should be. DanielTom (talk) 15:58, 22 November 2013 (UTC)Reply[reply]
I would suggest Jumio's "Netverify" as a solution. The Netherlands appear in their Supported Countries list. If interested, the WMF can contact Jumio and see what they have to say about this problem. DanielTom (talk) 16:26, 28 November 2013 (UTC)Reply[reply]
@SilkTork: - Thank you for taking the time to comment here. Adding clear indicators to the public that if they write to OTRS, they are writing to volunteers rather than WMF staff, is a good suggestion. We might consider doing that irrespective of whether the Access to Nonpublic Information Policy draft ends up being applicable to email response team members. I'd like to hear whether other community members think such indicators would be sufficient or whether email response team members should still be covered by the Policy draft. Mpaulson (WMF) (talk) 21:15, 3 December 2013 (UTC)Reply[reply]
@DanielTom: - Thank you for your suggestions and opinions. We will look into your Netverify suggestion and respond in the Netverify thread below (which was copied from your post to Geoff's page). Mpaulson (WMF) (talk) 21:15, 3 December 2013 (UTC)Reply[reply]
Where do people use utility bills to prove addresses? In my country, the usual thing to do is to look up the person in the citizen register (e.g. http://www.ratsit.se/). Also, utility bills are often electronic and do not necessarily contain an address. --Stefan2 (talk) 16:14, 19 December 2013 (UTC)Reply[reply]
This is not uncommon in the US (which does not have a citizen register) for some purposes. The idea is similar to that of the prepaid postcard suggested elsewhere on this page, but making use of mail that it is assumed many people would already have available. Anomie (talk) 14:34, 20 December 2013 (UTC)Reply[reply]
In the UK this happens because there is no 'citizen ID' of any kind. The authorities recognize that someone might have multiple identities but don't care that much; the background to this is a UK civil liberties issue, originally a reaction to the authoritarian regimes around World War II, and suggestions by government ministers and bureaucrats since then to introduce a standard ID card have always met with overwhelming public resistance. The only time it matters is if you are using IDs for criminal purposes (such as money laundering or fraud); apart from that you are free to call yourself multiple names, pretend to be several people, and even pay your utility bills using a bank account under your pet's name or that of a long-dead relative. --Fæ (talk) 10:38, 21 December 2013 (UTC)Reply[reply]
The standard disclaimer already does this, it is supposed to be on the footer of every email from OTRS volunteers:
Disclaimer: all mail to this address is answered by volunteers, and responses are not to be considered an official statement of the Wikimedia Foundation. For official correspondence, please contact the Wikimedia Foundation by certified mail at the address listed on https://www.wikimediafoundation.org/
We could have an automated reply to every first email from a correspondent to OTRS explaining this in more detail, and then give them the option of deleting their request from the database in the first 24 hours, however this would be more "belt and braces" rather than an improvement. --Fæ (talk) 10:49, 9 December 2013 (UTC)Reply[reply]
Thanks for that Fae. If I understand correctly, that message goes out to people in response to an email they have already sent. So they may have already sent private information which they would not have disclosed had they been aware they were not sending it to a secure address. Indeed, if senders were aware they were sending it to a system where their information can be read simultaneously and anonymously by many anonymous volunteers, I doubt they would be sending in some of the information they do send. Any OTRS volunteer can read emails sent in - it's not just limited to the one who responds. Any OTRS volunteer who then inadvertently misuses information puts every other volunteer under suspicion - in particular the one who actually responded. Any disclaimer has to be explicit that private information is being sent to hundreds of volunteers (currently anonymous volunteers), and that disclaimer has to be BEFORE the information is sent. The information on the "Contact Wikipedia" link says: "The links on the left should direct you to how to contact us or resolve problems. If you cannot find your issue listed there, please email us at email@example.com." No clear indication that "us" is not the Foundation, nor that such an email will be accessible to (how many - several hundred?) anonymous users. On the Subjects link, the advice is a little more helpful: "While your email will likely remain confidential, certain unforeseen circumstances, including but not limited to security breaches, subpoenas, or leaks, might compromise that confidentiality." Though that message is still not being clear that emails are accessible to a wide range of anonymous volunteers.
How does one remove oneself from the list of OTRS volunteers? SilkTork (talk) 15:27, 15 December 2013 (UTC)Reply[reply]
Removal works by writing in to OTRS (probably info-en would do), or you can ask on the OTRS wiki cafe page and an OTRS admin will sort you out. --Fæ (talk) 07:25, 19 December 2013 (UTC)Reply[reply]
Thank you all for the constructive discussion and suggestions. We will be putting up the following disclaimer on pages that link to firstname.lastname@example.org:
Disclaimer: E-mail to this address is reviewed and responded to by volunteers from our user community. Please understand that the Foundation cannot guarantee confidential treatment of any sensitive information you include in your message.
If you see any pages we haven't gotten to, feel free to reproduce the disclaimer there (but only for e-mail addresses that don't actually go directly to the Foundation) AVoinigescu (WMF) (talk) 23:46, 18 December 2013 (UTC)Reply[reply]
This is more an issue of legal advice than something to vote on. I would expect that if anyone has better ways of expressing a disclaimer, or a better way of achieving the same outcome, they would be seriously considered. --Fæ (talk) 09:15, 19 December 2013 (UTC)Reply[reply]
Wait what? As OTRS volunteers, we agreed to the otrswiki:Project:General disclaimer which says we can't reveal any information. Why does this need to be added? It's only going to turn away people we can actually help and force them to contact email@example.com, which is just going to refer them back to us anyways. Legoktm (talk) 01:24, 23 December 2013 (UTC)Reply[reply]
This may be an undesirable outcome, but I see this as a distinct improvement to transparency. OTRS is frequently misrepresented as being confidential or suitable for highly personal information, and we OTRS volunteers ought to be a little more proactive in correcting this whenever we spot it, by separating our good practices for confidentiality from the impression of a guarantee of confidentiality. By the way, pointing to a closed wiki for an issue of transparency is not ideal. --Fæ (talk) 05:51, 23 December 2013 (UTC)Reply[reply]
I agree, people should not misrepresent OTRS in that way. But I don't think putting up a disclaimer that says "by the way, your email is being sent to people who might leak it publicly" is the way to solve the issue. As for the policy being on a closed wiki, I'm not aware of any copies of it being available publicly, otherwise I would have linked to those instead. And copying it to somewhere public would, awkwardly, violate the agreement itself. Legoktm (talk) 10:12, 23 December 2013 (UTC)Reply[reply]
It would be nice to put the disclaimer in a positive way. Though it should be completely clear that there is no guarantee of confidentiality or protection of private information when people write to the normal OTRS queues, at the bottom of the "legal" disclaimer, we could link to a public OTRS community written statement of the level of reasonable confidentiality and respectful behaviour we require of OTRS volunteers, perhaps even saying a little about how this is monitored. Would anyone care to suggest a current page on meta that does this, or consider drafting one? --Fæ (talk) 10:28, 23 December 2013 (UTC)Reply[reply]
I would like to see this reworded to at least acknowledge that, you know, we're not all unprofessional goofs out to get the people sending us emails. How about "Disclaimer: E-mail to this address is reviewed and responded to by volunteers from our user community. Please understand that although all volunteers on this team have agreed to a confidentiality agreement, the Foundation cannot guarantee confidential treatment of any sensitive information you include in your message." (my bold). Sven Manguard (talk) 20:17, 23 December 2013 (UTC)Reply[reply]
Yes, I know I used agreed and then agreement. If someone has a better word for either of those slots, go ahead. What's important is that the fact that we're bound by a confidentiality agreement is in there, not how that's worded. Sven Manguard (talk) 20:18, 23 December 2013 (UTC)Reply[reply]
"Anonymous unelected volunteers" would be more accurate. ~ DanielTom (talk) 22:08, 23 December 2013 (UTC)Reply[reply]
We're not anonymous though, the OTRS admins and presumably the WMF has our real names plus ages. Legoktm (talk) 23:27, 23 December 2013 (UTC)Reply[reply]
Right, and they know you didn't give them a fake name because they check your ID. Hmm. Oh, wait, they don't. (Although I always thought they did. Wikipedia:Identification says anyone with "access to nonpublic data policy"—which according to WMF policy includes OTRS volunteers—must provide a "copy or scan of driver's license/passport" to the WMF. Is this being followed? If so, why all the resistance?) ~ DanielTom (talk) 00:17, 24 December 2013 (UTC)Reply[reply]
The OTRS confidentiality agreement currently only covers information shared by users of the OTRS wiki on the OTRS wiki -- it doesn't cover information they receive through info@ and other similar e-mail addresses. Moreover, while I'm sure OTRS volunteers as a group are conscientious people who take their responsibility seriously, people may not be comfortable sharing certain information with a group of anonymous volunteers regardless of any confidentiality obligations. Hence the disclaimer. I'm also sympathetic to trying to frame the disclaimer positively, but it's already longer than we'd prefer. Since it ideally has to appear everywhere people are directed to info@, it needs to be concise. AVoinigescu (WMF) (talk) 00:08, 24 December 2013 (UTC)Reply[reply]
That's certainly not how I read it. IANAL, but (quoting from the disclaimer, which I assume is OK to do) "Confidential, Proprietary or Protected Information" "shall include, but not be limited to non-public information got from OTRS". OTRS is never explicitly defined, so I would assume it includes ticket.wikimedia.org which is the actual "OTRS" interface. If I'm reading it wrong, maybe we should fix that rather than discouraging people from contacting the OTRS team.
You also say "anonymous volunteers", except as I stated earlier, OTRS volunteers aren't anonymous. If you feel that people are giving OTRS admins fake names, I think that's a much bigger issue that should be discussed separately. Legoktm (talk) 07:28, 24 December 2013 (UTC)Reply[reply]
Hi Legoktm. Sorry for not responding to this earlier. You're right that the definition is a little ambiguous. Given that the NDA is intended for users of the OTRS wiki (see the Preamble), for something to meet the definition of "Confidential, Proprietary or Protected Information" I was reading it as having to satisfy (i) -- i.e. be disclosed through the OTRS wiki -- with (ii) just specifying a particular example. There's no "or" between (i) and (ii), so I don't think (ii) is a separate criterion. In any event, OTRS volunteers will be covered under an updated confidentiality agreement under the new Access Policy, so we'll make sure to clear up any ambiguity there. As for the "anonymous" volunteer comment, I should've been more precise -- I meant that a person sending an e-mail to info@ has no way of knowing which OTRS volunteer(s) reviews and responds to that e-mail. So to the e-mail sender, the reader is unknown, even if they aren't anonymous in the strict sense. Sorry for any confusion.--AVoinigescu (WMF) (talk) 12:56, 8 January 2014 (UTC)Reply[reply]
Comments on the current draft
The following discussion is closed: closing as the discussion appears over, please reopen if necessary. Will archive shortly if not. Jalexander--WMF 19:23, 11 February 2014 (UTC)Reply[reply]
Since I last reviewed the policy, significant changes have been made. As it stands, while I disagree with the direction the policy has gone, it is a much more agreeable policy, though enough significant questions remain open to make me rather uncomfortable with submitting my information to the WMF under this policy. I will try to analyze the significant portions of the policy and offer my comments. SnowolfHow can I help? 20:00, 13 December 2013 (UTC)Reply[reply]
@Snowolf: Thank you for taking the time to review another draft of the policy. If you are still uncomfortable providing identification under this policy, please consider raising that point in the conversation above entitled Rethinking the access policy: Response to recent feedback. We are still discussing whether it is best to continue with a policy that requires submission of identification material, and your perspective would be helpful. (As a process note, I will respond inline below, but I copied your signature to make it easier for others to follow). Stephen LaPorte (WMF) (talk) 20:40, 19 December 2013 (UTC)Reply[reply]
I will comment in that section hopefully soon, but yes, I cannot possibly provide private information to the Wikimedia Foundation under these conditions. SnowolfHow can I help? 21:50, 27 December 2013 (UTC)Reply[reply]
(a) Minimum age. We value our community members, no matter what their age. However, access to nonpublic information requires legal accountability in part because of the need to ensure confidentiality with respect to others’ nonpublic information. For this reason, any community member who applies for access rights must be at least eighteen (18) years of age, except email response team members who must be at least sixteen (16) years of age.
That seems a bit arbitrary to me: according to the English Wikipedia, there are over 20 countries where the age of majority is 19 or older. What is the advantage offered by going to a 18+ from the 18+ and legally adult in the country of residence/citizenship? SnowolfHow can I help? 20:00, 13 December 2013 (UTC)Reply[reply]
@Snowolf: We are open to different age requirements, because the age of majority varies, and the age of majority is not always determinative of whether someone can be held legally accountable for their responsibilities. Legal accountability for minors may depend on a number of factors, as well as the jurisdiction. Our goal is increasing the likelihood of legal accountability, and more importantly, the general maturity of users entrusted with significant roles. I understand that on a case-by-case basis, some young users may be more mature than older users, and all minimum age requirements may be arbitrary in some respect. We can consider your proposal to raise the requirement to 18 years old as well as legally an adult in the country of residence -- I would encourage you to join the conversation above on lowering the age to 16, so that we can come to a common understanding of how the minimum age should be set. Stephen LaPorte (WMF) (talk) 20:40, 19 December 2013 (UTC)Reply[reply]
Your reply is not really addressing the point. The current policy's requirement is to be over 18 as well as the age of majority in the country of residence. I asked why you seek to lower it. I am still interested in hearing the rationale behind this. SnowolfHow can I help? 21:50, 27 December 2013 (UTC)Reply[reply]
Hello @Snowolf: ah, thanks for clarifying. In a response below, Manprit explains that this was removed to reduce ambiguity in the requirement. Now we require "legal accountability", which technically may be slightly different from "age of majority". I believe that legal accountability was the reason that "age of majority" was used in the policy previously. Thanks, Stephen LaPorte (WMF) (talk) 02:25, 16 January 2014 (UTC)Reply[reply]
(b) [...] Community members with access rights must meet the following identification requirements:
submit to the Wikimedia Foundation their name, date of birth, current valid email address, and mailing address;
have an account linked to a valid e-mail address; and
complete verification of email address or address processes (such as returning a postage-prepaid verification postcard sent to their submitted address or responding to a confirmation email sent to their submitted email address) if requested to do so; and
inform the Wikimedia Foundation of any change to their name, address, or email address within a reasonable time following such change.
How is the WMF gonna ensure that the name and date of birth are accurate? How are they preventing somebody from impersonating me when identifying to the WMF, going rogue and me getting sued by the WMF for it? It is remarkably easy to provide proof of a physical address, but even more so when that is not even required (see the "or").
Between the choice of the old draft and this change, this change makes me a lot more comfortable, sure, but it also doesn't seem to provide all that much point to identification. If somebody is acting in bad faith, gaining advanced userrights with the intention of abusing them and/or cause havoc, they'll submit false information. SnowolfHow can I help? 20:00, 13 December 2013 (UTC)Reply[reply]
@Snowolf: We may not be able to completely confirm that a name or date of birth is accurate, but we currently face the same potential problem with identification cards. We proposed this change to the draft to avoid an impression that we were capable of verifying that information is accurate. We received pushback when we proposed a more stringent verification requirement, and so we are discussing this approach as a better description of what we are capable of doing. We are still open to using an identification verification system, as being discussed under Netverify. Stephen LaPorte (WMF) (talk) 20:40, 19 December 2013 (UTC)Reply[reply]
So if I understand you correctly, it is not legally relevant whether the information is accurate or not? Yes, you cannot verify whether a passport is false or not, but surely you will agree that it is less likely for a passport to be falsified than for a letter to be falsified... Is it the WMF's position that it's not worth the extra work? Is it because of legal issues in Germany and the Netherlands? What I've been trying to understand is the reasons for these changes.
Yes, it is important whether the information is accurate, but we are considering how to accomplish this in practice and communicate it accurately in our policies. The initial proposed change was to retain certain information to make it more likely that users with access to non-public data could be accountable for their actions. Michelle explains more above -- we have some concern about whether the current practice is consistent with the Board's policy on access to non-public data. The changes are still under consideration at the moment, so I appreciate that you have taken the time to share your views. Stephen LaPorte (WMF) (talk) 23:11, 31 December 2013 (UTC)Reply[reply]
(c) Confidentiality. To ensure that community members with access rights understand and commit to keeping the nonpublic information confidential, they will be required to read and agree to a short Confidentiality Agreement that outlines
what community members should treat as confidential information;
when they are allowed to access nonpublic information;
how community members may use nonpublic information about other users; and
when and to whom they may disclose the nonpublic information and how they must otherwise refrain from disclosing nonpublic information to anyone, except as permitted under applicable policies
What pecuniary sanctions, or any other penalties, are provided under this agreement for users who break it?
When can we see a draft of this agreement? Surely, it should be an integral part of this discussion, as if the confidentiality agreement contains unacceptable terms this whole policy would be impacted by that.
@Snowolf: The “Confidentiality agreement for nonpublic information” draft is available here. Regarding pecuniary sanctions and other penalties, the draft states: “The Wikimedia Foundation may pursue available legal remedies, including injunctive relief or, in the case of willful intent, monetary damages.” We anticipate that, most often, the only sanction pursued against violating users would be injunctive relief. For example, while we hope such an event would never occur (and has not occurred to date), we would want to be able to legally restrain someone who was sharing the personal information of others in violation of this Policy from continuing to do so. Rkwon (WMF) (talk) 20:51, 19 December 2013 (UTC)Reply[reply]
"and has not occurred to date" but that is not the case, at least in the view of the English Wikipedia's Arbitration Committee (even though at the time the WMF's Counsel disagreed with their position on this). You have to realize that the community policies regulating checkuser and oversight are murky at best and were developed before revision delete was even implemented... Possible violations of them happen daily. Different projects have different views on what can and cannot be disclosed... The specter of the WMF suing its functionaries for damages because the Czech Wikipedia's interpretation of the policy (to name one random wiki) and the Counsel's office interpretation of the policy differ is not something out of this world. We functionaries have to do the best we can with what little policy we have, and the idea that we can be sued if the WMF disagrees is very, very chilling. SnowolfHow can I help? 21:50, 27 December 2013 (UTC)Reply[reply]
@Snowolf: Just to clarify, when I stated that such an event "has not occurred to date," I was referring only to the fact that WMF has never felt it necessary to legally enforce the terms of the confidentiality agreement against any volunteers with access rights. As to differing interpretations, if there is doubt regarding checkuser and oversight policies or regarding actions which a user believes may violate the terms of his or her signed confidentiality agreement, he or she is more than welcome to reach out to the WMF legal department. Rkwon (WMF) (talk) 09:54, 31 December 2013 (UTC)Reply[reply]
In the past, some WMF staffers have made known in private that the WMF's lawyers disagree with community interpretation of certain policies (namely, the Oversight policy). If we are to be bound by an agreement forcing us to obey policies, it would be nice for all of the WMF's views on what these policies are to be public, rather than raised at times unexpectedly in private forums and then never followed up on when questioned about it. Confidentiality of the venue where this issue was raised about a year ago prevents me from giving more details on the matter, sadly.
The Wikimedia project is fairly unique among websites in that 100% of its content, or close to it (if we count Danny's old reward board stuff and content coming from agreements negotiated by the Board), comes from volunteers, and in that volunteers man most of the administrative positions on the projects and globally. I already raised the issue with the very senior staffer involved, who said he would get back to me and never did, after which I considered this outlandish interpretation to be just that. SnowolfHow can I help? 21:50, 27 December 2013 (UTC)Reply[reply]
@Snowolf: Unfortunately, I am still unfamiliar with the situation to which you refer, so I cannot offer you any specific guidance on WMF Legal's interpretation of any particular policy. But, as much as possible, we wish to help community members adhere to WMF policies. We are open to suggestions on how to do this better. Other than responding to specific questions via legal@ or legal department talk pages, do you have any suggestions in mind that may address your concern? Rkwon (WMF) (talk) 09:54, 31 December 2013 (UTC)Reply[reply]
(i) Secure & confidential storage. In consideration of the privacy of the community members with access rights, the materials, documents, and identification information submitted to the Wikimedia Foundation under this Policy (collectively “submitted materials”) will be kept confidential, and access to these materials will be limited to a “need to know” basis within the Wikimedia Foundation. Submitted materials will be recorded electronically and held at the same or a greater level of security granted to the personal information of Wikimedia Foundation staff.
(iii) Submission methods. Community members with access rights may submit the required materials to the Wikimedia Foundation electronically, in-person, or through the public or private mail carrier of their choosing. The submitted materials will be transferred to a secure electronic database upon receipt and the original medium in which the materials were submitted will be destroyed immediately upon transfer to the electronic database.
The WMF's storage solution for the private information of their staffers is not a matter of great concern to us volunteers. That is a matter that ought to be negotiated between the staffers, the WMF, etc. I would like specific assurances about where our information is stored. I am still greatly disappointed that the WMF is unwilling to talk on this matter (see MPaulson's complete unwillingness to discuss it last time I brought it up). We are asked to blindly trust that the WMF knows best where and how to store our stuff. I for one am unwilling to do that. I would like specific assurances and details.
It seems now that the WMF has moved away from the locked cabinet, which is a good thing, but it has moved into a very nebulous electronic system. A "secure electronic database" could be anything from a zipped access archive with a 6 digit password to an off-line computer with all the data encrypted, all access logged, etc. Anything less than a fully offline system which has the storage of this information as its only purpose is not acceptable to me. Some people in this movement have given a lot to it, and in return received real life harassment and threats (thankfully not I); they deserve to be fully protected. In light of the fact that I was somewhat recently contacted by the Wikimedia Foundation telling me that they disclosed my hashed password and email address to pigs+dogs who had access to the Wikimedia Labs servers, I would like specific assurances from the WMF about how it's gonna handle our data, not some vague statement.
Regarding destruction methods, I would like some assurance that the WMF intends to shred documents before throwing them out (as it is my understanding that people have no right of privacy in American trash bins). What about information submitted electronically? Is the WMF committed to erase all documents with 3 passes at the very least?
@Snowolf: Please note this response addresses all 3 of the questions above regarding storage and submission methods.
We can assure you that the information will be stored in a secure manner, it will be limited to people with a need to have access to it, it will only be disclosed in limited circumstances, and we particularly describe that the level of security will be consistent with our storage of internal personal data. The electronic database that stores our internal data is password-protected, requires frequent password changes, and uses secure encryption certificates. We are willing to discuss further if you have other specific questions about how the information is stored, but I hope you can understand that there are reasonable limits to the commitments we can make regarding specific storage methods.
The Legal team’s practice is to shred sensitive materials (or use a similarly secure destruction method). We also shred any sensitive information provided by users. As for electronically submitted documents, we delete emails as soon as we verify the information that they contain, but we are not able to technically guarantee the number of passes. Rkwon (WMF) (talk) 20:51, 19 December 2013 (UTC)Reply[reply]
I am still missing an answer on whether this secure storage system is connected to other systems. I am happy to finally get some details, and yes, LVilla, I would be okay even with one pass, but I would like said one pass to happen, as otherwise the data is trivial to recover. I would like some more details as to who has access to it. Is it every member of the LCA team? I would assume not. Is it every member of the legal team? My main interest is that there is a log of who accessed what, so that data is only accessed if really needed. SnowolfHow can I help? 21:50, 27 December 2013 (UTC)Reply[reply]
@Snowolf: Only those who need to have access to the information will have access - this includes some but not all LCA team members and some but not all members of the legal team, depending on the situation. Rkwon (WMF) (talk) 09:54, 31 December 2013 (UTC)Reply[reply]
The Wikimedia Foundation will not share submitted materials with third parties, unless such disclosure is:
(B) required by law;
(C) needed to protect against immediate threat to life or limb; or
(D) needed to protect the rights, infrastructure, or safety of the Wikimedia Foundation, its employees, or contractors.
(A) That opens the door to the WMF giving data to a third party for statistical analysis of the geographical location of its functionaries on behalf of the WMF, to the WMF giving access to functionaries' email addresses to third parties for survey purposes on behalf of the Foundation, etc.
@Snowolf: Please see the responses to odder’s question here for our proposed changes to A. As for your specific points regarding third party statistical and survey analysis, could you clarify your concerns? If the changes do not address your concern, how do you propose to adjust the provision? Please note however that we will not give data to third parties for statistical analysis of the geographical location of functionaries, nor would we provide data to third parties for survey purposes. Rkwon (WMF) (talk) 20:51, 19 December 2013 (UTC)Reply[reply]
(B) law is a very vague term. Is the WMF referring to US and Californian law, or any government that lodges a request with the WMF?
@Snowolf: We are referring to applicable U.S. law. As a matter of policy, we do not comply with foreign subpoenas or law enforcement requests that are not valid under U.S. law. However, we reserve the right to disclose under (C) if a non-U.S. authority provides us with credible evidence of immediate threat to life and limb. Rkwon (WMF) (talk) 20:51, 19 December 2013 (UTC)Reply[reply]
(C) I am not very comfortable with this, given it is not uncommon for law enforcement to make false claims of imminent danger to get around due process.
@Snowolf: We are very aware of this potential issue. However, if we have credible evidence sufficient to persuade us of an imminent physical threat to an individual, in some circumstances it may be irresponsible for us to withhold information. Unfortunately, there are circumstances where individuals make specific threats about their intention to harm others (or themselves), and in those circumstances, it may be reasonable to share information with law enforcement. Rkwon (WMF) (talk) 20:51, 19 December 2013 (UTC)Reply[reply]
I understand your position but I respectfully disagree. The Wikimedia Foundation benefits from volunteers who come from countries such as Iran or the People's Republic of China, where freedom of expression is a major issue and where disclosure of a volunteer's identity to local law enforcement could cause physical harm and threat to them. It should not be that hard for such a government to conjure up credible information... I think the WMF should strongly consider a policy by which it provides volunteer information only when forced to.
(D) Again, this to me sounds like there's nothing to prevent the WMF from disclosing a volunteer's personal information if they criticize the WMF or one of its officers.
@Snowolf: Please see our response to odder’s comment here in which we propose a revision of the language in D. The new draft would permit disclosure if necessary to protect “the safety of individuals or Wikimedia Foundation and Sites’ technical infrastructure.” Mere criticism of WMF or its officers would not be sufficient to allow disclosure under D absent some threat to safety or Wikimedia infrastructure. Rkwon (WMF) (talk) 20:53, 19 December 2013 (UTC)Reply[reply]
All together, I still think that the WMF should retain volunteer information for one purpose: to be able to release it when asked by a court of law, valid subpoena, etc. - the chief example being the WMF being sued over a volunteer's abuse.
@Snowolf: Thanks again for taking the time to consider the policy in detail and provide feedback. I hope you can understand that there are feasible limits to the types of guarantees that we can make about how we handle or store data, partly for security reasons, and partly because it would be impractical to enshrine into policy specific technical details regarding storage as technology evolves and improves rapidly. The current version of the policy goes beyond the standard guarantees in most other policies. If there are specific policies or practices that you have seen elsewhere which better address these issues, I would be interested in hearing more about them. Please also note that we are continuing to evaluate the draft as a whole: if you think that this draft is not going in the right direction, please let us know what you would prefer to see. Best, Stephen LaPorte (WMF) (talk) 20:40, 19 December 2013 (UTC)Reply[reply]
I understand it to an extent. Some of the issues raised here, as last time around, seem to run into answers that do not address the question, plainly and simply. In several of my questions, I tried to ask if the WMF would be willing to disclose the reasons behind some of the proposals. This is important to me, because some things do not seem to make much sense as they stand, and maybe they do in the proper context. I really think we would all benefit from some discussion of the context and the reasons as to why some things came to be proposed this way. I do not necessarily seek to change the policy, as it is a big step up from the first draft. On the security of our data, I see we have made some progress in getting assurances, and I have offered some more questions that I hope you can answer, as they seem important to me. You have to realize that it wasn't long ago that our hashed passwords and emails were disclosed by mistake to users on Wikimedia Labs... It's not easy to trust, and it's not easy to trust when the first steps were going down the wrong path (that mistake and the flat-out refusal that I got the first time I asked about this) :) SnowolfHow can I help? 21:55, 27 December 2013 (UTC)Reply[reply]
I am extremely concerned about point (b). The current identification process is in a thousand ways better (and more secure) than what is being proposed. I did not see any issue in sending a digital scan of my passport to the Foundation so that they could verify my personal details, as long as they deleted it at a later date or stored it in a secure place. But the current draft says that providing my name, date of birth and mailing address is enough; under this standard, almost anyone could be identified, since anyone could submit false information. So, I'd stick with requesting a scan of an ID (or passport or driver's license, or whatever). — ΛΧΣ21 06:13, 16 December 2013 (UTC)Reply[reply]
Identifying without an ID scan is not really identifying. Is it? Anyone can claim anything. Without the scan, there is no point in "identifying" at all. --Sue Rangell (talk) 20:46, 13 January 2014 (UTC)Reply[reply]
Change in disclosure criteria
The following discussion is closed: closing as the discussion appears over, please reopen if necessary. Will archive shortly if not. Jalexander--WMF 19:22, 11 February 2014 (UTC)Reply[reply]
The new policy appears to be much more restrictive in the situations where data can be disclosed by a functionary. Specifically, the only rules that allow the disclosure of data to another member of the project (i.e., nongovernmental, non-ISP, etc.) are:
"other community members with the equivalent or greater access rights to fulfill the duties outlined in the applicable policy for the access tool used;"
"the public, when it is a necessary and incidental consequence of blocking a sockpuppet or other abusive account."
"When necessary for investigation of abuse complaints"
"Where the user has been vandalizing articles or persistently behaving in a disruptive way, data may be released to… other third-party entity to assist in the targeting of IP blocks…"
So, for example, the new policy would make all of the Wikipedia:Long-term abuse pages on enwiki inconsistent with policy, because details on commonly used ranges by long-term abuse cases would not be disclosable to non-Checkusers.
Furthermore, there is commonly collaboration between the CheckUser and Oversight teams; data may need to be shared between members of the teams to work effectively. The policy as currently worded appears to prohibit this. Could the criteria provided in the new policy perhaps be revisited? LFaraone (talk) 01:50, 2 January 2014 (UTC)Reply[reply]
@LFaraone: Could you elaborate on why you think the CheckUser <-> Oversight collaboration is blocked by this? Our intent was that that sort of thing would be permitted by "equivalent or greater access rights", but if it isn't clear, we can fix that.
I don't think ranges should be covered under this policy, since they're not "nonpublic information" in the same way specific addresses are. But I admit I haven't given that angle much thought. Can you link to a specific LTA page that gives an example or two of the kind of thing you're concerned about, LFaraone? Thanks! —LVilla (WMF) (talk) 22:13, 3 January 2014 (UTC)Reply[reply]
Is CU considered to be "greater" than OS? And some wikis have an Arbitration Committee; however, most of them do not handle private information (except for en.wikipedia) - yet a few wikis do have the authority to appoint CU and OS (fr, nl, ru). --Rschen7754 01:49, 4 January 2014 (UTC)Reply[reply]
@Rschen7754: Is CU considered to be "greater" than OS? Depends on what the information is. For a lot of the things CUs and OSs discuss (say, IP addresses), CU and OS have "equivalent access rights", so that discussion is allowed. There are a few cases where CU does not have access to OS-level information, and in those cases the policy does not allow OSes to share that information with CU - but as far as Philippe knows, this is already what happens (e.g., to some CU questions, OS will respond with simply "yes" or "no" instead of giving detailed responses that would violate this policy). As for ArbComs - the key, again, is what information they have access to, not perceived status/priority/etc. If two groups have access to the same data, then they have access and can share; if they don't have access to the same data, then they can't share. I'm totally happy to have suggestions for making this more clear if you have them, or just keep answering questions :) Thanks! —LVilla (WMF) (talk) 23:18, 7 January 2014 (UTC)Reply[reply]
Another question: What about checkuser-l? Each wiki has their own set of CUs (well, all the big wikis, anyway), and I'm aware that IP information is emailed across wikis, and stored on the CU wiki... --Rschen7754 02:00, 8 January 2014 (UTC)Reply[reply]
@Rschen7754:@LFaraone: I think I'd like to tweak the language to say: "(i) other community members with the same access rights, or who otherwise are permitted to access the same information, to fulfill the duties outlined in the applicable policy for the access tool used". The first part ("other community members with the same access rights") addresses the fact that CUs have the same rights on different wikis, and the second part ("who otherwise are permitted to access the same information") is supposed to address the CU <-> OS problem. Do you think that works? —LVilla (WMF) (talk) 01:28, 23 January 2014 (UTC)Reply[reply]
The following discussion is closed: closing as the discussion appears over, please reopen if necessary. Will archive shortly if not. Jalexander--WMF 19:22, 11 February 2014 (UTC)Reply[reply]
Where a minimum age is set for legal reasons, it should be replaced with "the person must be considered a legally competent adult for all [or just contractual/business?] purposes under the laws of the State of California [or is it Florida, I forget], the United States of America, and of the country and all political subdivisions in which the person holds citizenship and/or residency."
This would allow fully-emancipated minors and others who meet all of the above requirements to have such roles, while explicitly requiring those who live in states or countries where the age of majority is above 18 to wait and explicitly forbidding those who are otherwise not considered "legally competent adults."
As a matter of pure administrative convenience an additional minimum age of 18 can be attached to the above. This would make a few people wait a few months in exchange for not having to deal with the paperwork and legal research required to verify the eligibility of emancipated minors and those from countries whose default emancipation age is under 18. However, in no case should someone who is not considered a legal adult in the "legal location" of the Foundation and the "legal location" of the individual be allowed to volunteer where an age restriction exists purely for legal reasons.Davidwr/talk 20:17, 10 January 2014 (UTC)Reply[reply]
Thank you Davidwr for your comment :) Currently, the minimum age requirement is set at 18 under the policy draft (except for email response team members who may be 16), which is the age of adulthood in the State of California (the “legal location” of the Foundation). We recognize that the age of adulthood in other countries may be different, however, as I mentioned in the Lowering age to 16 discussion, we felt it would create ambiguity if we required community members to reach the age of legal accountability in the “legal location” of the individual. Additionally, the administrative cost of policing these differing requirements would be staggering, even if the onus is on the individual to verify the eligibility of emancipated minors in their jurisdiction. So, as you mentioned, setting a uniform minimum age of 18 would avoid the inconvenience and ambiguity of a more flexible age requirement. MBrar (WMF) (talk) 22:57, 23 January 2014 (UTC)Reply[reply]
Lowering age to 16
The following discussion is closed: closing as this discussion appears to have run its course as far as the current policy discussion is concerned. Since there does not appear to be consensus, the legal team has decided to keep the current policy (16 for OTRS, 18 for other positions under the policy) Jalexander--WMF 11:18, 12 February 2014 (UTC)Reply[reply]
Hi all, one of the WMF staffers above wanted to know what other people thought of reducing the age requirement to 16, so please share your thoughts!
The legal implications could probably be solved by some sort of parental permission.
The reasoning for doing this is two-fold:
There have been many competent admins that are under 18. People should be judged on their abilities, not automatically disallowed due to age. I trust the community (used generally) to continue to elect only the most mature and best candidates.
OTRS agents have access to more private information than many functionaries. I see more private info on the info-en queue than I do as an oversighter at Wikidata. If they can be trusted at 16, why not others?
I look forward to hearing some other perspectives on the issue. Ajraddatz (Talk) 22:36, 3 December 2013 (UTC)Reply[reply]
Except 16 year olds cannot be trusted with private information. The minimum age for OTRS access must be raised to 18. What we currently have is a glaring breach of security which the WMF should rectify as soon as possible. DanielTom (talk) 23:26, 3 December 2013 (UTC)Reply[reply]
Why can't 16 year olds be trusted with private information? There are already 16 year old OTRS volunteers who haven't gone posting it everywhere. If you are going to make a blanket statement that all 16 year olds can't be trusted with private information, you'd better be ready to back it up with proof that every 16 year old can't be trusted. Ajraddatz (Talk) 23:46, 3 December 2013 (UTC)Reply[reply]
I would be concerned about a blanket lowering of the age to 16; I would not want to have a member of en.wikipedia's arbcom be under 18. --Rschen7754 02:16, 4 December 2013 (UTC)Reply[reply]
Why? It seems like a lot of people have problems with it, but none are able to give any sort of logical reasoning (yet, anyway). Does a person's judgement really turn from bad to good the day they turn 18? Also, this is about identifying and having access to non-public data. Enwiki could continue to have an 18+ restriction for arbcom if it wanted to. Ajraddatz (Talk) 03:04, 4 December 2013 (UTC)Reply[reply]
Some permissions and accesses to non-public data are more sensitive than others; also, I think there are legal reasons as well (note that they also must be of age in their locality). --Rschen7754 03:18, 4 December 2013 (UTC)Reply[reply]
The legal reasons can be easily removed (if they exist in the first place). I was granted top secret security clearance in Canada when I was 17. My parents might have signed off on it, but either way, the legal implications of that are significantly higher than any information that might cross my path on Wikimedia. Could be different in the US, I suppose. Ajraddatz (Talk) 03:29, 4 December 2013 (UTC)Reply[reply]
I don't think OS on Wikidata is a very good comparison; we have less than 10 suppressions a month. OS on en.wikipedia, or CU, or steward, or en.wikipedia ArbCom (note w:en:User:AGK/ACE2012) are all very high-profile positions that do regularly expose those who hold the permissions to real-life harassment. en.wikipedia ArbCom does regularly do investigations into editors alleged to be pedophiles, too. --Rschen7754 03:44, 4 December 2013 (UTC)Reply[reply]
You forget that the smaller-wiki OS like what is found on Wikidata is the norm, not the enwiki situation. Projects could definitely make it more restrictive if there was consensus for that, but for the majority of projects, I'm still not seeing an argument against 16 year olds having access to the tools. OTRS volunteers are also harassed (as I myself have been in that capacity), yet they can be 16. Also, there are many people 18+ who could not handle the stress from harassment that can happen with these or other permissions (like admin), and likewise many 16 and 17 year olds who probably could. This proposal is about removing a barrier and putting the focus on a case-by-case basis instead. Ajraddatz (Talk) 03:49, 4 December 2013 (UTC)Reply[reply]
Because of checkuser-l and the CU wiki though, as well as #wikimedia-checkuser/privacy, the lowest barrier to entry will be effective everywhere. --Rschen7754 04:02, 4 December 2013 (UTC)Reply[reply]
That's a very good point, and does separate OTRS from functionary roles to some extent. The OTRS mailing list does deal with hard cases and harassment, but not to the extent that those roles would, I suppose. However, you haven't addressed my point about some 16/17 year olds being able to handle the harassment/stress and some people 18+ clearly not being able to (see some of the enwiki arbcom resignations) - surely there are some people in that age range who could do the job just as well as you or I? I'm probably fighting my usual losing battle of case-by-case selection over fixed rules here, but hopefully it's at least food for thought. Ajraddatz (Talk) 04:09, 4 December 2013 (UTC)Reply[reply]
At least in my opinion, not being of age would make it harder to use the legal process if necessary, to get relief from harassment. --Rschen7754 10:46, 4 December 2013 (UTC)Reply[reply]
There should be only one rule: being of legal age in one's own country; for example, being 16 in most EU countries is meaningless. --Vituzzu (talk) 13:38, 13 December 2013 (UTC)Reply[reply]
I don't believe that harassment should be an argument in this matter for OTRS agents. If a case is really hard you can just leave it to someone else or ask someone to help you. I know an OTRS agent who is younger than I am but who handled a difficult case way better than I would have done. And believe me, that case was really nasty. Vituzzu here has a valid argument: what if a 16 year old person abuses the information? Who is legally responsible? I think that those questions are more interesting than deciding if 16 year olds are capable of dealing with really difficult cases. Natuur12 (talk) 14:46, 17 December 2013 (UTC)Reply[reply]
Thank you all for your discussion on whether the minimum age for access to nonpublic information should be lowered to 16. The discussion raises some very important points, like the fact that some 16 year olds are incredibly resourceful and responsible enough to handle such access, and that OTRS agents may be 16 years old despite being able to see certain private information. We would just like to note that we did not set the minimum age for OTRS agents, as it is just a reflection of standing practice and has no legal significance. We are curious to hear the thoughts of other members of the community on this issue, so please feel free to contribute further to this discussion. :) MBrar (WMF) (talk) 00:28, 19 December 2013 (UTC)Reply[reply]
MBrar (WMF): One of the issues that Wikipedia in general struggles with, and that this policy seems to imply but not directly tackle, is how to handle contributions from people younger than the age at which they can be an independent party to a legally binding contract/agreement (henceforth: "age of majority"). If I recall correctly, we've had images deleted from Commons because the uploader, who was not of the age of majority at the time of the upload, got upset with Commons over some petty fight, realized that because he was too young to be party to the licensing agreements they weren't valid, meaning that the release under a free license was not irrevocable, and was therefore able to take his images with him when he left. Had that person been over the age of majority when they uploaded their photos, they wouldn't have been able to force the deletion. When dealing with a policy that requires certain groups to become signatories to confidentiality agreements, the age of majority problem must be explicitly addressed. I don't have an issue with a 16 year old OTRS agent if they are a citizen of one of the small number of territories that consider 16 to be the age of majority, and at the same time, I would strongly object to an OTRS agent that was 18 in one of the small number of territories that consider the age of majority to be 19, 20, or 21. The salient point is that they must be able to enter into a legally binding contract - the confidentiality agreement - independently and in their home country. The Access to nonpublic information policy must, in my opinion, make that clear in the wording of the policy. Sven Manguard (talk) 17:44, 28 December 2013 (UTC)
Thank you Sven for your comment and I sincerely apologize for the delay in responding. You raise a very interesting point! The policy’s current minimum age requirements represent a cut-off point to simply ensure that community members younger than that do not apply for access rights. When an individual requests additional responsibility, their ability to enter into a confidentiality agreement is assessed before the request is approved. However, the "age of majority" is ambiguous in a lot of countries. Many allow minors to be considered an "adult" in relation to contract accountability, marriage, crimes, etc. at very different ages. We chose to include a specific age in the policy because age 18 was our prior standard, most countries will hold someone legally accountable at that age, and it is unambiguous. The second sentence of the “(a) minimum age” paragraph attempts to convey that the age requirement is needed to ensure community members can be legally held to the confidentiality agreement. Do you feel this sentence needs further clarification within this paragraph? MBrar (WMF) (talk) 21:54, 6 January 2014 (UTC)Reply[reply]
MBrar (WMF) - To me "access to nonpublic information requires legal accountability in part because of the need to ensure confidentiality with respect to others’ nonpublic information" is not the same as "the confidentiality agreement is a legally binding contract, and the ability to enter into a legally binding contract in your home jurisdiction is a requirement". It needs to be unambiguous. Personally, I think that there are several sections of the policy where the authors got caught up in trying to make the policy seem nice, and as a result, wound up with diluted or unclear meaning (see the comments in the "Change in disclosure criteria" and "Comments on the current draft" sections). While being nice is good, a policy with legal considerations has to first and foremost be unambiguous. To accept anything less would open the door to years of fighting over the boundaries. Sven Manguard (talk) 22:09, 6 January 2014 (UTC)Reply[reply]
I understand your point on making the policy language as clear as possible. Like I mentioned above, the age of legal accountability is often ambiguous within certain jurisdictions. For this reason, we set a concrete minimum age in the policy. We felt it would introduce more ambiguity to require community members to simply reach the age of majority required in their home jurisdiction, since this will be different in different contexts. Also, it would be difficult to police such a requirement and would likely introduce additional legal complications. MBrar (WMF) (talk) 01:22, 8 January 2014 (UTC)Reply[reply]
Also, the determination should be made according to the age of consent wherever the servers and offices are located (I'm not sure where that is), because that is where the lawsuit will take place if, say, somebody suffers identity theft or other harm because we allowed a minor to have access to somebody's personal info. --Sue Rangell (talk) 19:27, 8 February 2014 (UTC)Reply[reply]
Suggestion about adding some hash algorithms for verification of private data, such as Checkuser data
The following discussion is closed: closing as the discussion appears over, please reopen if necessary. Will archive shortly if not. Jalexander--WMF 00:22, 13 February 2014 (UTC)Reply[reply]
Is there already a hash algorithm or something that helps verify a piece of private data? If not, I believe a hash algorithm could greatly reduce the possibility that a user with access falsifies data or makes typos while using private data, and increase the verifiability of private data.
For those who need more clarification, let me outline what the hash algorithm should look like:
The hash algorithm itself should be stored by Wikimedia privately, and would calculate a hash from a given piece of private data. The hash generator only allows input from existing, legitimate private data--in other words, you can't input your own data and get a hash.
As a common hash algorithm would do, it would be easy to verify whether a piece of data matches its hash, and it would be difficult to deduce the data from the hash or to create two pieces of data with the same hash.
It should be difficult to deduce this hash algorithm from a series of known input/output pairs and/or the verification algorithm (if that is made public). The hash algorithm may also be changed over time to prevent accidental cracks.--朝鲜的轮子 (talk) 13:23, 5 January 2014 (UTC)Reply[reply]
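A keyed hash (HMAC) is one standard construction with the properties outlined above: only the holder of a secret key can compute digests, so outsiders cannot hash candidate values and compare, yet any data/digest pair can be checked by the key holder. This is an illustrative sketch only, not anything the Foundation actually runs; the key value and function names are assumptions.

```python
import hmac
import hashlib

# Illustrative only: the secret key stands in for the "privately stored
# algorithm" described above. How such a key would really be stored and
# rotated is not specified anywhere in the proposal.
SECRET_KEY = b"held-privately-by-the-foundation"

def private_hash(nonpublic_value: str) -> str:
    """Keyed digest of a piece of nonpublic data (e.g. an IP address).

    Without SECRET_KEY, an attacker cannot hash guesses and compare,
    so a published digest does not let outsiders confirm the data.
    """
    return hmac.new(SECRET_KEY, nonpublic_value.encode("utf-8"),
                    hashlib.sha256).hexdigest()

def matches(nonpublic_value: str, digest: str) -> bool:
    """Constant-time yes/no check of a data/digest pair."""
    return hmac.compare_digest(private_hash(nonpublic_value), digest)
```

Note that rotating the key over time, as the comment above suggests, invalidates all old digests, so any rotation scheme would need to re-register digests kept for long-term records.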
朝鲜的轮子, I'm not sure I understand your proposal, so it would be nice to get some clarification. We couldn't make the hashes public since that would correlate private data, even if the values weren't known. If the hashes are secret, then for example, a checkuser says multiple accounts were editing from the same IP address, and the hash of that IP was ABCDXXX. Then the users could generate a hash of their IP and verify it was indeed ABCDXXX, or if not, somehow protest the claim. But we would have to rate limit the hashing mechanism to something very low, so any random user can't hash random IPs and attempt to find one that hashes to ABCDXXX. As Geoff says, nothing like this exists currently. Maybe elaborate with an example, and we can evaluate if something like that should be implemented? CSteipp (talk) 01:51, 9 January 2014 (UTC)Reply[reply]
I'd elaborate: the hash-related activities would be limited to people with access to private data, to verify data and ensure it is not false--but not available to common users. There are cases where the original data is lost so that it can no longer be verified (for example, CU data become stale after 3 months). In regard to the question you asked about random generation attacks, I think first we can have a hash verification page which only privileged users can access and which returns "yes" or "no" on a given pair of data and hash, while the algorithm is kept private. We can also put some kind of logging and anti-flooding in place to prevent abuse (for example, let's say you wouldn't normally query the verification page for one specific hash 100 times a day). Also, the hash is not required for every piece of CU data; it should only be used if a CU thinks they need verification when they want to use data in a discussion.--朝鲜的轮子 (talk) 03:07, 9 January 2014 (UTC)Reply[reply]
Hi, 朝鲜的轮子 - thanks for thinking about this problem. It is a problem we take seriously - if there was a solution that allowed CUs to do their job without access to IPs, that would both protect users, and protect CUs from pressure from governments and other sources.
Because of those questions, earlier this year we hired a consultant to investigate how Checkusers use IPs, and whether or not we could use hashes (or some other approach) in order to reduce the exposure of IPs to Checkusers. Unfortunately, their conclusion was that the problem would be very difficult to solve, for many reasons. Chris pointed out one (that we'd have to build a rate-limited hashing service so CUs could have conversations with each other); there are several others as well - for example, we would have to build internal replicas of all the third-party tools Checkusers use for things like whois, geolocation, etc. Because of the difficulty, I don't think that we can mandate such a radical change - it could only be done after there is extremely detailed research into the requirements.
Letting CUs see the hashes instead of actual IPs could be a possible variant of hash planning, but this is not what my current proposal means. I just mean a method to verify whether a piece of data is correct. For example, if one Checkuser wants to back up a claim about data seen before the 3 months expired, or some users ask for their CU results to be reviewed after 3 months have passed, then hashes can help verify the data and make the problem clearer.--朝鲜的轮子 (talk) 08:59, 11 January 2014 (UTC)Reply[reply]
Which begs the question why that problem needs to be solved through the Access Policy. As far as I can tell, what you describe is a proposed solution to a trust issue between the community and checkusers. It doesn't contribute to the protection of users' privacy. — Pajz (talk) 11:08, 11 January 2014 (UTC)Reply[reply]
If there is a way to ensure a piece of data is true, not falsified or typoed, and that way is better than mere trust, why shouldn't we do it? And better trust between the community and checkusers will benefit users, of course.--朝鲜的轮子 (talk) 01:34, 13 January 2014 (UTC)Reply[reply]
Lack of specifics for respecting the right to be forgotten
The following discussion is closed: closing as this discussion appears done, please reopen if necessary. Will archive shortly if not. Jalexander--WMF 11:19, 12 February 2014 (UTC)Reply[reply]
In a nutshell, the current policy proposal lacks specifics regarding the time factor, which is at the core of the daily routine of Check User operations. Thus more specifics are needed to strike a balance between protecting users' privacy and fighting vandalism.
Note that under the Wikimedia Foundation's current use of nonpublic information, Check User operations generally rely on the most recent data within three months, which is akin to an implementation of the "Right to be forgotten".
Nonetheless, for Check User operations, some data that is more than 90 days old is kept in the Checkuser Wiki, but it remains opaque when and what kind of data can be kept longer than 90 days.
It is also unclear whether and how the Check User administrators can download, exchange, and actively store such data on their own devices outside the Checkuser Wiki.
It is also unclear whether and how such data can be used as reliable evidence to fight vandalism.
There is also no technical and legal risk assessment of how the Check User administrators can keep the data safe on their own devices.
Hence, I have the following policy proposal for consideration.
Regarding Check User data that is more than three months old, in order to ensure the integrity of en:digital evidence, while balancing the need between data protection and fighting vandalism,
Check User administrators can only base their Check User judgements on (a) the current user activity data/server log data (less than three months old), and (b) the data stored and properly registered in the Checkuser Wiki.
At any given time, all other Check User working data that cannot be validated against the said datasets is not admissible for Check User judgements.
The community of global Check User administrators must set, review and implement standard procedures to "selectively" keep some of the about-to-expire IP data in the Checkuser Wiki for future use. Any single record must be signed off by two Check User administrators with reasons why such a record needs to be kept and for how long.
To protect users' privacy against excessive storage, an oversight mechanism is needed to regularly review whether the community of global Check User administrators stores too much unnecessary data.
Proactively storing such data is strongly discouraged.
Using secure official channels for communication among Check User administrators is strongly encouraged.
To decrease the frequency and risks of data leaking, it is advised to share only coded anonymized references that can lead to the records stored in the Checkuser Wiki and the current user activity, instead of the raw data themselves.
To decrease the legal and political risks to Check User administrators, they are encouraged to seek technical support from the Wikimedia Foundation to limit their footprints on their own devices when conducting Check User operations.
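The "coded anonymized references" item above could work roughly like this sketch, in which the raw record stays in a single central store (standing in for the Checkuser Wiki) and only an opaque token travels between administrators. Every name here is a hypothetical illustration, not an existing tool:

```python
import secrets

# Hypothetical central store standing in for the Checkuser Wiki; only the
# token, which reveals nothing about the underlying data, is ever shared.
_records = {}

def deposit(raw_record: dict) -> str:
    """Register a record centrally and return an anonymized reference code."""
    token = "CU-" + secrets.token_hex(8)  # unguessable, carries no information
    _records[token] = raw_record
    return token

def resolve(token: str, requester_has_access: bool) -> dict:
    """Only those with the relevant access rights may dereference a code."""
    if not requester_has_access:
        raise PermissionError("access to nonpublic information required")
    return _records[token]
```

If an email or IRC log leaks, the attacker learns only the code; turning it back into an IP still requires the central store and its access checks, which is the risk reduction this part of the proposal aims at.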
Regarding Users' Right to their own information
Normal users can request the records of who placed their registered accounts under Check User operations, when, and why.
Normal users cannot have access to the records based on their IP addresses.
Under special circumstances, normal users may request the relevant and/or derivative records of Check User operations on their own registered accounts, but the IP addresses must be redacted.
After consulting the Chinese Wikipedia community, I have the following to report:
One Check User administrator (an active one) expressed that he does not need to store data on his own devices to be a productive Check User administrator.
One Check User administrator (the recently elected one) expressed his concerns that the proposal is impractical or impossible to implement. He argued that he is confident in taking technical measures, including data encryption, to secure the locally stored records.
Several editors have expressed general concerns that the Check User administrators may have to respond to local authorities. (Currently all five Check User administrators are from the People's Republic of China.)
As a past recipient of a Yahoo Fellowship myself, I feel compelled to express my concerns based on the precedent of en:Shi_Tao, a mainland Chinese journalist who was jailed because Yahoo! China cooperated with Chinese authorities by providing private information about his account. Yahoo!'s role in this case has been examined and studied.
Hence, although the above policy proposal is in no way tailored or designed for a certain jurisdiction, I personally believe that the policy proposal can limit the legal and political risks, not only to normal users who participate in Wikimedia projects, but also to Check User administrators themselves. By keeping identifiable data within the Checkuser Wiki with proper administrative and technical measures to ensure its data integrity, by replacing necessary data exchange with coded anonymized references, and by letting normal users check whether, when, and by whom their accounts are checked, this proposed policy should fill some of the gaps in the current Access_to_nonpublic_information_policy proposal.
--Hanteng (talk) 18:59, 14 January 2014 (UTC)Reply[reply]
Thanks for putting this up at last. I'd wait for further responses at this time.--朝鲜的轮子 (talk) 03:07, 15 January 2014 (UTC)Reply[reply]
Hi Hanteng. Thank you for these suggestions. While I think these are topics worthy of discussion, I do not think they fall within the purview of the Access to Nonpublic Information Policy draft discussion. These types of decisions are generally handled by local communities (and in some cases, the global community) and thus, if these ideas were to be adopted at all, they would be adopted as community policies rather than in a broader Board policy like the Access policy. Mpaulson (WMF) (talk) 22:31, 22 January 2014 (UTC)Reply[reply]
Do you think any part of this proposal may relate to any policy we are currently discussing? For example, I am quite interested in the question of whether and how a user can know by whom and when they have been checkusered, as this is not addressed in the current policies. (Not to say I don't care about the rest of the proposal, but I think it is better to discuss it by parts)--朝鲜的轮子 (talk) 09:23, 23 January 2014 (UTC)Reply[reply]
About nine days have passed, and there have not been any comments on the proposal itself. So I'll just leave some of the comments I made when I discussed this issue with Hanteng.
Though Hanteng himself considered most of my comments "not in scope of this proposal", I think that part of my comments relate to the likely consequences of enforcing this proposal. Given that he also said he will consider discussing this at Wikimania in London later this year, I'd like to express my view first.
"Check User administrators can only base their Check User judgements on (a) the current user activity data/server log data (within three months old), and (b) the data stored and properly registered in the Checkuser Wiki." "Proactively storing such data is strongly discouraged." To what extent should we enforce those two ideas? Does it mean "If a piece of data is expired and it is not stored in CUwiki, you should STRICTLY NEVER USE IT EVEN IF USING IT WILL SAVE THE WORLD"? Should we punish someone for saying or implying "I remember from a piece of stale data (which is not stored in CUwiki) that..." or someone who is suspected of making use of any stale data not stored in CUwiki? And also, checkusers can give other reasons unrelated to stale data to avoid controversy if they are tempted to use stale data not stored in CUwiki.
Secondly, the wiki system as it stands now is not the best option for storing such data. Assuming the checkuser wiki works the same as the Wikipedia sites except that only checkusers can view the pages, where there is no log of each visit and the data does not expire until people agree to remove it, it means anyone with checkuser access can see everything stored on CUwiki. Also, given that people need to discuss retaining and destroying data, it is completely reasonable for anyone to visit a piece of data even if they do not have anything to do with that data at all. This greatly increases the exposure of private data, which I believe is excessive. In the past, there has been at least one CUer who had his/her privilege removed for doing more-than-necessary checking, so I believe there is a more than unlikely chance that some checkusers do gather data for their own interests. Actually, given that people have the freedom to remember and forget anything as they wish, the best way to protect privacy is not a complicated rule which we have no means to enforce and which causes even more problems, but simply to reduce the exposure of data to the minimum.
If this part is to be adopted, I would advise, at a minimum, a system that has an access log (like the CU tool) and separation between local and global data (for example, if a piece of data relates only to someone who has been active only on the Chinese Wikipedia, there is no point in checkusers of other languages knowing it).
The parts of this proposal I would currently support are using a hash mechanism to ensure data integrity, and allowing users to obtain a record of by whom and when they have been checkusered (but not the details of each check).--朝鲜的轮子 (talk) 07:40, 24 January 2014 (UTC)
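One illustrative way the hash mechanism mentioned above could work is a keyed digest stored alongside each CheckUser wiki record, so that later tampering with the record is detectable. This is only a sketch under stated assumptions: the record format, the secret key, and the function names are invented for illustration and are not part of any actual Wikimedia tooling.

```python
import hashlib
import hmac

# Hypothetical server-side secret; in practice this would be held by the
# operator and rotated according to policy.
SECRET_KEY = b"server-side secret"

def record_digest(record: str) -> str:
    """Return a keyed SHA-256 digest of a stored record, so that any
    later modification can be detected by recomputing and comparing."""
    return hmac.new(SECRET_KEY, record.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_record(record: str, digest: str) -> bool:
    """Check a record against its stored digest using a constant-time
    comparison (avoids timing side channels)."""
    return hmac.compare_digest(record_digest(record), digest)

# Illustrative usage with an invented record format.
entry = "2014-01-24 | ExampleUser | checked by ExampleCU"
d = record_digest(entry)
assert verify_record(entry, d)
assert not verify_record(entry + " (edited)", d)
```

A keyed digest (HMAC) rather than a plain hash means that someone who can alter the stored record but does not hold the key cannot forge a matching digest.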
Also mention actions to improve positive aspects
The following discussion is closed: closing as this discussion appears done, please reopen if necessary. Will archive shortly if not. Jalexander--WMF 11:19, 12 February 2014 (UTC)
The "Use and disclosure of nonpublic information" enumerates types of work and specific activites solely for preventing problems (vandalism, socking, etc.). However, some roles also access this info to promote positive aspects, for example, OTRS to accept or confirm license permission or simple helpdesk emails, and developers to improve and debug the software. At least one positive aspect might be a nice addition to the intro of this policy section. More importantly, the first sentence of part (a) implicitly excludes all such things, setting the scope as "help prevent, stop, or minimize damage to the Sites and its users." The next sentence could allow positive-impact activities ("only use their access rights and the subsequent information they access in accordance with the policies that govern the tools they use to gain such access"), but if so it contradicts the first sentence. And I think it really is up to those who design the tools and write their policies what activities are allowable with them. Therefore, that second sentence should be first and the focus, rather than a secondary position. DMacks (talk) 04:40, 15 January 2014 (UTC)Reply[reply]
Hi DMacks, thank you for the suggestions! What do you think of the following language?
Community members with access rights provide valuable services to the Sites and its users -- they fight vandalism, respond to helpdesk emails, ensure that improperly disclosed private data is removed from public view, confirm license permissions, investigate sockpuppets, improve and debug software, and much more. But community members’ use of access rights is limited to certain circumstances and contexts.
(a) Use of access rights & nonpublic information. All community members with access to nonpublic information may only use their access rights and the subsequent information they access in accordance with the policies that govern the tools they use to gain such access. For example, community members with access to the CheckUser Tool must comply with the global CheckUser Policy, and, unless they are performing a cross-wiki check, they must also comply with the more restrictive local policies applicable to the relevant Site. Similarly, community members with access to a suppression tool may only use the tool in accordance with the Suppression Policy. When a community member’s access is revoked, for any reason, that member must destroy all nonpublic information that they have.
That new wording definitely resolves my concerns. Thanks! One bit of clarification for the "all...they have" at the end of (a): there are lots of independently available (and revocable) tools that allow access to independent and/or overlapping sets of nonpublic information. We need to avoid saying that if someone with Tool A and Tool B loses Tool A, he would need to destroy the info he has from Tool B. DMacks (talk) 19:18, 25 January 2014 (UTC)
Hi DMacks, the language has been changed to: "When a community member’s access to a certain tool is revoked, for any reason, that member must destroy all nonpublic information that they have as a result of that tool." Does that make sense? Thanks! RPatel (WMF) (talk) 23:23, 3 February 2014 (UTC)
Recommendation to the board regarding access policy
The community consultation for the Access to Nonpublic Information Policy draft is coming to a close, so, in this announcement, we would like to outline the following:
A recap of the consultation process
The impact of the consultation on the draft
Our findings as a result of this consultation
Our recommendation to the Board to repeal the current Access to nonpublic data policy and adopt a modified version of the Access to Nonpublic Information Policy draft that does not include an identification requirement
The Access to Nonpublic Information Policy draft (“draft”) has been under community consultation since 03 September 2013. As the closing of the consultation period (currently set for 14 February 2014) approaches, I think it would be helpful to recap what happened during the consultation, the changes the draft underwent, and what the Wikimedia legal department has learned as a result. We sincerely thank the community members who took the time to read the draft constructively, share their legitimate concerns, and contribute useful suggestions.
The first draft was originally created to replace the current Access to nonpublic data policy (“Policy”), which requires that community members who have access to nonpublic information be “known” to the Wikimedia Foundation (“Foundation”). As I noted in the discussion, the reason why we believe that the Policy needs to be updated is that - both historically and currently - Wikimedia Foundation staff have found it difficult to balance privacy of identification with the Policy's requirement to assure accountability, meaning that current identification practices do not comply with the Policy. Current identification practices largely consist of: (1) community members who have been selected for particular access rights providing an identification document of their choice (or a copy of such identification document) to a designated member of the Foundation’s staff; (2) the staff member “checking” it; and (3) the staff member either returning the identification document or destroying the copy of the identification document without retaining any information contained in the identification document. Because identification is only required once in most cases and staff at the Foundation changes over time, the identities of many community members with access rights have become unknown to the Foundation over the years.
In an attempt to address this problem, we released our first draft of a new policy, which specifically addressed how copies of identification documents were to be submitted by community members with access rights and how they were to be retained by the Foundation. The draft was also intended to provide guidance to community members with access rights as to when they should access or share nonpublic information and to explain to the greater community why certain community members have access rights and the obligations individuals in those roles have with regard to that nonpublic information.
There were parts of the draft the community liked; there were parts that the community thought could be improved and indeed helped us improve; and there were parts that caused great controversy. Submission of copies of identification documents and retention of those copies by the Foundation was perhaps the most controversial topic for a variety of reasons: concerns over local restrictions on copying or transferring government-issued identification documents; discomfort with the Foundation retaining copies of the identification documents; disagreement over the circumstances in which the Foundation could disclose the contents of the identification documents; questions as to whether submission of identification documents would be sufficient to verify someone’s identity; and many more. What we learned was that verification of identity is a complex problem with no simple solution for our community and the Wikimedia Foundation.
With these issues in mind, we asked the community whether it made sense to simply revoke the current Access to nonpublic data policy. Although no consensus developed as a result of this discussion (as was the case with the other proposed alternatives), it seemed that many community members continued to want a policy that: (1) addressed access to nonpublic information; and (2) required community members with access rights to be known to the Wikimedia Foundation. But those participating also continued to have serious reservations about the submission and retention of identification documents. Over the course of the consultation, the draft was accordingly revised, including by:
Explaining which community members were covered by the draft – those that have access to a tool that permits them to view nonpublic information about other users or members of the public; those with access to content or user information which has been removed from administrator view; and developers with access to nonpublic information;
Requiring community members with access rights to be at least 18 (or 16 in the case of email response team members) in accordance with current practices;
Requiring community members with access rights to provide their name, date of birth, current valid email address, and mailing address to the Wikimedia Foundation – without providing any identification documents or copies thereof – which would be retained by the Foundation for the amount of time the community member had access rights plus an additional six months; and
Outlining when community members with access rights may use their rights, how they may use the nonpublic information they access, when and to whom they may share the nonpublic information, and what their confidentiality obligations are.
Recommendation to the Board
We believe that there is value in having a global policy that sets the minimum requirements that community members with access to the nonpublic information of others or restricted content must meet. We also believe that there is great value in providing guidance to the individuals in these important roles about how such access rights and the resulting information should be used. The establishment of such a policy will not only serve the community members in these roles, but the greater community as well by providing better understanding about how access rights and nonpublic information are used.
Local communities are, of course, free to create and adopt more restrictive guidelines and requirements, but a global policy will ensure the establishment of a consistent foundation upon which local rules may be built.
That said, we do not believe that requiring identification in the manner that is currently laid out in the draft is an effective way to ensure accountability. Without proper identification and verification procedures in place, it would be disingenuous for the Wikimedia Foundation to claim complete and accurate knowledge of the identities of all community members with access rights. And, as noted by the community, there are various logistical and cultural obstacles in creating and maintaining proper identification and verification procedures.
This modified draft would still provide for minimum age and confidentiality requirements (which community members currently in possession of access rights and community members applying for access rights will have to certify and agree to, respectively). It would also provide rules surrounding the use of access rights and the resulting nonpublic information. However, it eliminates the identification requirement currently present in both the Access to nonpublic data policy and the draft.
We recognize that this is a controversial issue and people will have understandably varying and conflicting positions on the matter. We make this recommendation to the Board with the understanding that it may reasonably choose a different approach or solution to this unique challenge. However, after analysis of the relevant discussions throughout the community consultation period and considerable research into alternatives, we believe that this recommendation provides a sensible and honest approach to how access rights are handled.
Replace IP addresses of unregistered edits with unique guest names
The following discussion is closed: closing as this discussion appears done, please reopen if necessary. Will archive shortly if not. Jalexander--WMF 11:20, 12 February 2014 (UTC)
While looking into privacy matters, it would be worth considering changing the current situation in which an unregistered or logged-out user reveals their IP address. Many users are not aware of the consequences of revealing their IP address. By replacing each individual IP address with a unique guest name, we would preserve the privacy of unregistered editors yet still allow the tracking of their edits. It would also protect those registered users who sometimes accidentally edit while logged out. Each unique guest name could be identified with the tag User:UGN, followed by a random sequence of letters and numbers applied to that IP address: User:UGN005abd77-43a. By using such a system, privacy and accountability are both covered, and we treat everyone with the same level of respect. SilkTork (talk) 17:30, 30 January 2014 (UTC)
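As a rough sketch of how such guest names could be derived deterministically from an IP address (so the same IP always maps to the same name without the IP being publicly revealed), one might use a keyed hash. This is only an illustration of the idea in the proposal above: the secret key, the "UGN" name format, and the truncation lengths are all assumptions, not anything specified by the proposal or by MediaWiki.

```python
import hashlib
import hmac

# Hypothetical server-side secret; whoever holds it could link guest
# names back to IPs (e.g. checkusers), but readers of the wiki could not.
SECRET = b"server-side secret"

def guest_name(ip: str) -> str:
    """Map an IP address to a stable pseudonym in an invented
    User:UGN... style, using a keyed SHA-256 digest."""
    digest = hmac.new(SECRET, ip.encode("ascii"), hashlib.sha256).hexdigest()
    # Keep a short slice of the digest; long enough that two different
    # IPs collide only with negligible probability.
    return "UGN" + digest[:8] + "-" + digest[8:11]

# Same IP always yields the same guest name; different IPs differ.
assert guest_name("198.51.100.7") == guest_name("198.51.100.7")
assert guest_name("198.51.100.7") != guest_name("203.0.113.9")
```

Note this sketch does not answer the objections raised in the replies below it in the original discussion (rangeblocks, open-proxy detection, block-length decisions), which depend on seeing the actual IP or its network neighborhood.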
However, that reduces our ability to handle open proxies, and to conduct rangeblocks. --Rschen7754 18:55, 30 January 2014 (UTC)Reply[reply]
In addition to what Rschen said, some services like ACC require IPs to be viewable to us to make close calls about what to do. So unless a CheckUser-type tool can be developed in which users can look up 'UGN' sequences, this is going to do more harm than good. John F. Lewis (talk) 23:59, 3 February 2014 (UTC)
You also forget that knowing the IP address is important for marking vandalism sources such as schools or public hotspots, which helps admins in making blocks.--Jasper Deng (talk) 05:20, 4 February 2014 (UTC)Reply[reply]
Also, IP information is very relevant when choosing the length of a block. Blocking an IP without being able to see it would inevitably lead to a lot of mistakes, and the proposal would create an unnecessary extra workload for checkusers and stewards alike. Snowolf (How can I help?) 15:51, 8 February 2014 (UTC)
The community consultation for this Access to Nonpublic Information Policy draft has closed as of 14 February 2014. We thank the many community members who have participated in this discussion since the opening of the consultation 03 September 2013. Your input has provided valuable insight about this sensitive and complicated topic. You can read more about the consultation and the next steps for the Policy draft on the Wikimedia blog. Mpaulson (WMF) (talk) 20:28, 14 February 2014 (UTC)
Wow, no public notice for a draft proposal that will repeal the identification requirement for those who have access to users' most sensitive information--the only safeguard keeping that information out of the hands of immature 12-year-olds or other malefactors. Something like this definitely deserved more intense notification than a stupid ArbCom or stewardship election or an RfC on paid editing. --ColonelHenry (talk) 02:25, 5 March 2014 (UTC)
Obviously the period for community consultation has long ended; frankly, the final proposal had somewhat slipped my attention. Having noticed the posting of the updated draft document today, I decided nevertheless to share the following comments (written back in February). Feel free to consider them a waste of time.
Writing solely in my individual capacity as a user of the Wikimedia Sites, I would like to share a few observations on this matter. To summarize, I respectfully disagree with the proposal of the Legal Counsel and her interpretation of the community discussion as well as the LCA team's overall approach toward the objections by community members brought forward during the consultation period. The Access Policy should include a requirement to submit identifying information.
“writ[e] most of the policies and select from amongst themselves people to hold certain administrative rights. These rights may include access to limited amounts of otherwise nonpublic information […] They use this access to help protect against vandalism and abuse, fight harassment of other users, and generally try to minimize disruptive behavior on the Wikimedia Sites […] These user-selected administrative groups are accountable to other users through checks and balances: users are selected through a community-driven process and overseen by their peers through a logged history of their actions.”
The Foundation ensures that my IP address is kept strictly confidential? Cool! Any tweaks? — Well, some community members may access it of course. — Oh? Who would that be? — Actually, we have no clue. But they have to abide by our tough rules. — Great, I figure there will be serious consequences if they don't! — Oh yes, there are very powerful bodies that enforce it. — Puh. And what can they do if someone messed up? — They can report it to the Foundation. — Wow. And what can the Foundation do? — Well, of course they can't do real-life stuff but they absolutely could ban him from Wikimedia! — Hurray.
(2) The debate on this page is fundamentally biased, and it is my impression that the Legal Counsel does not adequately recognize that in her recommendation. An estimated 90 percent of those who comment here represent just one of the stakeholders: those with access to nonpublic information. Of course they have an inclination to resist identification requirements. Just as I do. It actually seems downright stupid to be eager to provide the Foundation with your real name and your address if there's only a downside but never ever any upside to it. A typical example of other stakeholders are customers who write to firstname.lastname@example.org. You cannot claim with honesty that they have had any voice in this discussion. However, in 2012 alone, more than 20,000 tickets were created in the info-en queue, and putting the number of unique customers at at least 15,000 should be a rather conservative estimate. OTRS agents can easily retrieve real-name information (as provided in the emails), sometimes IP addresses, email addresses, etc. from these people. Checkusers can potentially access inordinate amounts of IP address information from registered users on Wikimedia wikis. But neither OTRS customers nor “ordinary” Wikipedians are significantly represented in this debate. That, of course, is no surprise, and I don’t think one can really change it. But when Michelle writes that “after analysis of the relevant discussions throughout the community consultation period and considerable research into alternatives, we believe that this recommendation provides a sensible and honest approach to how access rights are handled”, I doubt that is the right way of looking at it if you strive to balance the conflicting interests. And that also brings me to the next point:
(3) I am unhappy about the way the Foundation has handled some substantial criticism on this page. The truth is that, yes, there were those who expressed their resistance to identification requirements per se. I do not agree with them, and I've outlined why above. But there were, from my impression, even more who accepted the general idea of identification but criticized the extent to which the LCA team reserves the right to use it under the current draft Access Policy. I will give two examples of that.
ii) Requirements, (d)(i)(A), (d)(i)(C), (d)(i)(D): Another suggestion brought forward by users was to limit the information sharing to disclosures requested by law-enforcement authorities. Under this proposal, the Foundation would generally keep the information for the sole purpose of being able to help with legal investigations. Under a somewhat less restrictive proposal, it was suggested to include an exception that would additionally allow for disclosure of the identifying information for the purpose of protecting the interests of the Wikimedia Foundation provided that other members of the relevant Community body agree with that. No matter which of the alternatives you prefer, they would all be significantly more restrictive than the clauses in the current draft.
In both examples, a more restrictive approach was, in my opinion, feasible and, most importantly, it would still have been in the spirit of the purpose of the Access Policy (see 1.). That is why I am unhappy about how this was handled: it just seems to me that despite existing possibilities to create an Access Policy that does establish some accountability, the LCA team has chosen to insist on more wide-ranging disclosure scenarios and now, in regard to user accountability, has effectively left everyone with a take-it-or-leave-it proposition. If the LCA team were really interested in addressing the interests of the Community, it should have presented to the Board at least a version of the draft Access Policy with identification requirements but with less comprehensive possibilities for disclosure. (Even more so if you bear in mind that, as explained above, the Community is inadequately represented on this page in the first place.)
(4) My favorite argument, as always, is that people can fake their submitted IDs. This is not just some argument, it is the ultimate, unchallenged favorite argument of the “net community.” My current working hypothesis is that this is the great argumentative undertaking of the internet age, and its ultimate purpose is the replacement of common-sense logic with the authority of the techies. The structure is well-known and always identical: Measure A can be circumvented. => You should not implement A. This is usually “proven” by providing great examples of how easy it is to circumvent A; at conventions, this is the point where the internet activist audience bursts into laughter. Gotcha! Needless to say, this is nonsense. You would either have to show that A will (necessarily) be circumvented, or (as usual) that the cost of implementing A outweighs the benefit of implementing A. Neither is the case here. Even if a criminal volunteer who already has access might provide a counterfeit statement of identity, identification requirements at least act as a deterrent. OTRS is a good example here: many volunteers have access over a long period of time. I, for instance, got access in 2007. If I had been forced to submit identifying information in 2007, got angry about a fellow Wikipedian last week, and now had malicious intentions, the submission from 2007 would still constrain me. Also, less responsible people might be deterred altogether from requesting access in the first place. Finally, a requirement to submit identifying information underscores that the Foundation and the Wikimedia movement in general take the privacy of users seriously. Not having any identification requirement for anyone (not even ArbCom members dealing with highly sensitive cases, stewards, and OTRS admins) sends a clear message that the Foundation does not really care about user privacy (as for the user it is certainly irrelevant whether a volunteer or a WMF employee is responsible for a privacy breach).
(5) If the LCA team had shown some more flexibility in setting the means of identification, concerns by Dutch and German users could have been addressed. For instance, it would be possible to simply include an exemption clause to the effect that if the scanning/copying and/or transmission of governmental identification documents is prohibited by law, WMF staff will provide other means of identification that are customary in the respective country. Of course these do exist (for instance, German volunteers could identify via PostIdent, which can also be used by foreign organizations directly). You could also offer to provide different means of identification on request (at the discretion of Foundation staff) in case someone does not want to submit a copy of their ID.
In light of the above, it is my opinion that the Board should reject the Recommendation. — Pajz (talk) 09:38, 18 March 2014 (UTC)
This policy page was fully translated into German at one point (I was among those who worked it out), yet it currently says only "18%" has been translated. So what happened to the rest? — Pajz (talk) 22:13, 8 May 2014 (UTC)
This was translated when it was a draft (and it is still a draft, even if discussions about the core elements are closed, until the Foundation finalizes it with its lawyers; in the US, but also to perform further international legal checks, because OTRS members are not all in the US and there may be additional rules for some countries to fill the remaining gaps).
Look at the history of the translations and you'll find a lot of simplifications, many of them minor ones for clarity. Almost all of them were performed before the draft was submitted to the Foundation, and things have evolved back and forth during community discussions. Now we only see extremely minor edits, like punctuation and phrasing.
You may still use the obsolete German versions and correct them to match; most of these edits are simple to make, and the document is not so long that it cannot be done in less than 2 minutes. verdy_p (talk) 01:35, 12 May 2014 (UTC)
PS: do Roshni, Pajz ping folks the way the U/replyto/@ templates do? I've used them here as a test.--Elvey (talk) 17:40, 4 June 2014 (UTC)
I've never heard these hacking allegations before, and I suggest that any information regarding them should be forwarded to the OC. Extraordinary claims require extraordinary proof. Snowolf (How can I help?) 18:55, 15 May 2014 (UTC)