Research talk:Anonymity and conformity over the net

From Meta, a Wikimedia project coordination wiki


  • Could you provide us with either a link to the survey or list of its questions? --EpochFail 16:42, 22 September 2011 (UTC)

Yes, I'm going to give a couple of links with invite codes to try it. Remember the survey is invite-only, so once somebody uses a code it becomes inactive. If you need more, ask for more either here or through my talk page. My convenience/snowball sample through Facebook showed that people generally won't conform (as far as statistical significance goes) and will propose their best idea even when the rest of the group opposes them, contrary to the common belief that people act differently under anonymity. A random sample will show better results and accuracy, though.

The three topic scenarios are shuffled along with the anonymity states. Put simply, for each scenario there is a version for each anonymity state. The text doesn't change much, only at the point where the way of communication for the online meetings is determined. The plan is to check for differences between the same scenarios under different anonymity states, so I am talking about between-groups testing.

Just a note: what I am trying to establish is whether the critical key information, the best idea that a participant has, would be revealed even when the group supports a different idea. Since the inflow of alternatives is important for avoiding groupthink, having participants contribute is essential. I don't care so much about median values and percentages for each group as about comparing statistically, with Kruskal-Wallis and Friedman tests, whether there is a statistically significant difference or correlation; basically, whether a certain anonymity state would produce more or less disclosure than another one. The control here is when people use their real names, and according to previous literature people are then expected to conform with the group rather than state their opinion.
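The between-groups comparison described here can be sketched in a few lines. This is a minimal pure-Python illustration of the Kruskal-Wallis H statistic (tie-averaged ranks, but no tie correction factor); the actual analysis would of course use a proper statistics package, and the example group data are made up:

```python
from itertools import chain

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic for k independent samples.
    Ranks are tie-averaged, but no tie correction is applied;
    for illustration only."""
    pooled = sorted(chain.from_iterable(groups))
    # assign each distinct value the average of its rank positions
    rank_of = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    n = len(pooled)
    return 12 / (n * (n + 1)) * sum(
        sum(rank_of[v] for v in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)

# hypothetical likelihood-to-dissent scores under three anonymity states
real_name = [2, 3, 3, 4]
pseudonym = [3, 4, 5, 4]
anonymous = [5, 6, 6, 7]
h = kruskal_h(real_name, pseudonym, anonymous)
```

A large H relative to the chi-squared distribution with k-1 degrees of freedom would indicate a significant difference between the anonymity states.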

The links are:

--Michael Tsikerdekis 08:38, 23 September 2011 (UTC)

  • I do not seem to be able to open any of the links provided above. Has anything changed? Where can I see the survey questions? Goran S. Milovanovic 20:43, 16 November 2011 (UTC)
  • It seems that the answer options for some of the questions are constraining. You should probably give the survey taker some instructions about choosing the best answer available if their own solution doesn't appear.
    • I guess you are talking about the second page. Indeed you are right about that. The problem is I need them to choose what they perceive as the best answer available. In theory they should propose their own, but I prefer quantitative research over qualitative. Maybe I'll add a parenthesis to each scenario saying "pick the option that you believe is best out of the three options available". --Michael Tsikerdekis 16:58, 28 September 2011 (UTC)
  • It also seems strange that "Not Important" doesn't show up as an option. Are you looking at that question as a likert scale?
    • I preferred not to use the "not important" option. The questions of importance are 3-point Likert scales, and their purpose is to help me divide the group answers based on the level of importance: those who think that the topic is extremely important, and so on. The problem is that the more importance levels I offer, the more groups I will have after the division and therefore the fewer people in each group, which could make the results useless. Drawing conclusions from a group of 20 people won't be a solid result. I am expecting that people who might deem the questions not important will probably go for the "somewhat important" option. --Michael Tsikerdekis 16:58, 28 September 2011 (UTC)
  • When I got to page 3 I found a bug. The progress bar went blank, but the percentage read "75%". Windows 7 64bit + Chrome 14.0.
    • Yes, nice catch. Indeed there was a bug there. The progress bar should appear now. --Michael Tsikerdekis 16:58, 28 September 2011 (UTC)
  • It concerns me that you state what you are trying to prove with the survey at the end, but I don't see any ethical/norms issue with it. I'd recommend simply stating your research questions, but this is a matter of preference.
--EpochFail 14:12, 28 September 2011 (UTC)
    • I preferred to keep this as a thank-you note for the participants, so they understand how the survey works; hopefully some of them will also give me feedback which I can later use in my final paper. --Michael Tsikerdekis 16:58, 28 September 2011 (UTC)
  • Hi Michael,

I have a few concerns.

  1. I am worried that the data you are going to collect will not be very useful for your research, as you signal to people that you expect them to behave differently under different conditions of anonymity. One way of solving this is not to present 3 different cases to each person, but to show each person one randomly chosen case and ask how they would act.
You are right, and I have thought about that. Presenting just one case may be ideal in order to avoid any participant bias, but this would triple the sample needed to cover the three cases. I could, on the other hand, use just one case, but then I wouldn't be able to see what happens in each case, and the cases differ (some have higher trade-offs and penalties, others lower). In the pilot, most people told me that they responded to each situation as they thought best regardless of anonymity state, which is good news in the sense that they really didn't try to satisfy what I want to prove. Besides, they don't learn the inner workings of the experiment until they complete the survey. What I am thinking of doing, though, is to add a couple of qualitative questions that will improve the internal validity of the research; basically questions that will provide some "why" answers to accompany what I get from the quantitative questions. --Michael Tsikerdekis 07:08, 12 October 2011 (UTC)
I want to add to the above that I am thinking about adding 2 more questions for each scenario: the first qualitative and the second a 10-point Likert scale. The first might provide some interesting "why" answers, and the second will show how the individual perceives his own anonymity. Public awareness can be manipulated through accountability and anonymity, which the scenarios do manipulate, but there is no way to know how this translates into a number unless the individual provides it. Based on that I can run correlation tests and evaluate: basically, the higher the perceived anonymity, the more likely an individual is to go against the crowd (which is well established in the literature). This way I will also know how perceptions differ across anonymity states and cases, but also within the same cases and anonymity states. The questions: * Could you please explain your choice? (Optional) * According to the scenario above, how did you perceive your level of anonymity? --Michael Tsikerdekis 08:59, 12 October 2011 (UTC)
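The perception-vs-dissent correlation test mentioned here could look like the following sketch: Spearman's rank correlation written out in plain Python, assuming no tied values; the variable names and data are hypothetical:

```python
def spearman_rho(x, y):
    """Spearman rank correlation via 1 - 6*sum(d^2)/(n(n^2-1));
    assumes no tied values (illustration only)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# hypothetical data: 10-point anonymity perception vs. dissent likelihood
perceived_anonymity = [2, 9, 5, 7, 1, 8]
dissent_likelihood = [1, 8, 4, 6, 2, 9]
rho = spearman_rho(perceived_anonymity, dissent_likelihood)
```

A rho near +1 would support the expectation that higher perceived anonymity goes with more willingness to oppose the crowd.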
  1. In addition, I think you should have a clear introduction to your survey, and think very carefully about whether you are collecting all necessary control variables. If you forget important information now, you won't be able to add it later.
I am thinking about this all the time, but I can't think of anything else other than the qualitative questions that I want to add. I get gender, age, how important each case is, and then the likelihood of proposing an alternative solution under the three anonymity states. The control is the real name. In that sense it is really similar to Asch's conformity experiment, and it is straightforward. If I isolate each case and run a Kruskal-Wallis test, any differences between the anonymity states within the same case should appear. If anyone believes something else might be worth adding to the survey, please feel free to write it here. Many minds are better than one :-) --Michael Tsikerdekis 07:08, 12 October 2011 (UTC)
PS, I think I just understood what you meant about the clear introduction. You see, I built the survey to retain a user's last step so that in case of a power outage you can resume from where you left off. The problem is that I posted the links here and everyone can access them. Once all of them have been accessed and people click continue on the first screen, the survey automatically resumes on the second page of the questionnaire, and that is probably the first page that you saw, but I assure you it wasn't the actual first page. Besides, there has to be an introductory first page in order to obtain informed consent; otherwise it would be unethical. --Michael Tsikerdekis 14:47, 12 October 2011 (UTC)
  1. Finally, why do you want to use the Wikipedia context, this survey seems quite generic and can be run in many different online contexts.
My main research is about design and social media. There are two reasons for conducting the research here. First, although these principles may apply to the Internet at large, the problem is a scientific one: in order to make inferences with confidence I need a random sample. There is no global Internet directory of users from which to draw one, but Wikipedia as a community has such a directory, so random samples can be drawn. The generalisation will be for Wikipedia, but after that you could argue that Wikipedians are Internet users and that Internet users are likely to act similarly elsewhere. The second reason is one of importance. Since this is a collaboration problem and Wikipedia is a community of collaboration, the benefits from the results are significant. For example, I use my full name here, but others prefer nicknames. What happens when people are called to vote on a specific topic? Would anonymity, or the lack of it, affect their opinion and make them side with the crowd or not? I want to see the likelihood of that happening. According to the literature and experiments there was an effect: people willingly followed the other participants' wrong answer. Of course, here I want to see what happens when someone proposes a new idea; I am more interested in the influx of new ideas into a conversation. --Michael Tsikerdekis 07:09, 12 October 2011 (UTC)

Drdee 20:52, 11 October 2011 (UTC)

  • Just a comment re "you could argue that wikipedians are internet users and internet users are likely to act like that everywhere". In some respects Wikipedians are a very long way from being typical internet users. Altruism is very big here, so is respect for copyright. They probably aren't the only differences. Our readership may well be more reflective of the Internet, and we know that our active editors are quite different to our readers. WereSpielChequers 23:32, 12 October 2011 (UTC)
    Yes. This is not a random sample for the social media space. Wikipedians and Wikipedia users are not a representative sample of social media users. There has been a great deal of ongoing discussion about how the Wikimedia experience frustrates the expectations of social media users. ~ Ningauble 00:06, 13 October 2011 (UTC)
    I believe both of you are right. Wikipedians are certainly different from the average Internet user. But in the specific case I am examining, what are the chances that Internet users will behave differently from Wikipedians in the disclosure of information under different anonymity states? It may be that altruism has a different effect on them, but I am not measuring means or percentages; I am measuring differences between anonymity states. In theory, if there are differences between anonymity states here, then Internet users will probably display the same differences (different means and percentages in every state, but still differences between states). At least I am hoping that this will be the case. As additional support I already have a convenience/snowball sample from Facebook, which is indicative, but a purely random sample over a population should increase the validity of the results. In fact I believe that a sample of Wikipedians will be more valid than a class of students, and of course the results will indicate what is happening on talk pages here when controversial matters are discussed (basically, whether people are susceptible to groupthink due to a lower influx of ideas caused by conformity). If, on the other hand, people behave differently here, then I will have an interesting case, and maybe altruism counts for more than we originally thought :-) --Michael Tsikerdekis 09:26, 13 October 2011 (UTC)
I don't know how we differ from other Internet users regarding anonymity, and I'm not sure how to find out. Amongst those of us who are pseudonymous or anonymous, potential reasons include the real-life consequences that have occurred when people have tracked down fellow editors, both vandals and lawyers. I suspect that we are more vulnerable to that than some sites. But the important thing with such a skew is to make you aware of it so that you can put appropriate caveats on your results; it is a possible reason why you might not go ahead with this, not a reason for us to decline the project. WereSpielChequers 19:57, 13 October 2011 (UTC)
I understand, and thank you very much for bringing this to my attention. Statistics should definitely be added in the final paper regarding the population of Wikipedia and how it might differ from the general Internet population. --Michael Tsikerdekis 21:01, 13 October 2011 (UTC)


  • Privacy and security
    • What privacy mechanisms does your surveying system support (SSL, encrypted/password protected storage)? --EpochFail 16:42, 22 September 2011 (UTC)
As far as privacy mechanisms go, an SSL certificate recognized by browsers is a paid service, and a self-signed one won't work, so basically I can't offer SSL. The results are stored on a private server in an SQL database. The MySQL server is accessed only locally through SSH, or through a password-protected OpenVPN connection with self-signed certificates. Intercepting the data through SSH or OpenVPN is, as far as I know, really hard. --Michael Tsikerdekis 08:38, 23 September 2011 (UTC)
I'll have to ping RCom on this one. As a computer scientist, I'm not sure I feel comfortable with private data being sent without encryption. However, the most private thing asked in the survey is my age group, which is likely to be of concern to only a few people.
Eagerly waiting to hear what they have to say. I know that according to US and European laws these are not sensitive data [1] [Wikipedia:Data_Protection_Directive], but I have no idea if it is a privacy concern. As I see it, someone who could intercept packets on a respondent's network could see that someone aged 21-30 (although he/she would have to guess what age_group 1,2,3,4,5 maps to) sent a couple of answers to a site the attacker can't access, because it is invite-only, and he would therefore have to guess what answer 2 to question 1 means. (I use identifiers for questions and answers, and they are sent via POST, which at least keeps them out of URLs, though it is not encryption.) In fact, unless you take part in the survey there is no way you could even guess what it is about. --Michael Tsikerdekis 16:58, 28 September 2011 (UTC)
The outcome of the discussion on the RCom list is that encryption is recommended as a best practice but we do not wish to enforce it as a requirement for surveys at this stage. --DarTar 18:53, 11 October 2011 (UTC)
    • How will you anonymize the private data you collect so that it can be released publicly? --EpochFail 16:42, 22 September 2011 (UTC)
The MySQL table has a field for the user's unique invite code but also a unique id. The simplest way to handle this is to hide the invite code and display each row with the id automatically assigned by the MySQL database. --Michael Tsikerdekis 08:38, 23 September 2011 (UTC)
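The release step described here can be sketched as follows; the actual scripts would presumably run as PHP against the MySQL table, so this Python version is only an illustration, and the field names are made up:

```python
def prepare_public_rows(rows):
    """Hide the invite code and keep only the auto-assigned id plus
    the answer fields, as described above (field names are made up)."""
    return [{k: v for k, v in row.items() if k != "invite_code"}
            for row in rows]

raw_rows = [
    {"id": 1, "invite_code": "a9f3k2", "age_group": 2, "q1": 3},
    {"id": 2, "invite_code": "77bcx9", "age_group": 4, "q1": 1},
]
public_rows = prepare_public_rows(raw_rows)
```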
If the dataset reflects what I saw in the survey, that sounds great. --EpochFail 14:34, 28 September 2011 (UTC)
  • Ethics
    • Does your university have a human subjects research ethics and review board and if so, have they reviewed your research proposal? I imagine something like an IRB or an independent research ethics committee. --EpochFail 16:42, 22 September 2011 (UTC)
Yes, my university (Masaryk University) does have an ethics board, but they haven't reviewed the proposal. The purpose and all relevant information are in Czech, but you can use them if you want to get a general idea. I personally don't see the need for a review of the proposal, since the Wikipedia:Ethics_Committee_(European_Union) has directives specifically for clinical trials, and even the Wikipedia:Institutional_Review_Board has exemptions from the reviewing process.
  • While IRBs can be more inclusive or restrictive, under the statute, exemptions to IRB approval include research activities in which the only involvement of human subjects will be in one or more of the following categories:
    • 2. Research involving the use of educational tests (cognitive, diagnostic, aptitude, achievement), survey procedures, interview procedures or observation of public behavior, unless:
      • information obtained is recorded in such a manner that human subjects can be identified, directly or through identifiers linked to the subjects; and
      • any disclosure of the human subjects' responses outside the research could reasonably place the subjects at risk of criminal or civil liability or be damaging to the subjects' financial standing, employability, or reputation. --Michael Tsikerdekis 08:38, 23 September 2011 (UTC)
  • Credibility
    • Do you have an advisor and/or research lab (other than the dept of your university)? --EpochFail 16:42, 22 September 2011 (UTC)
Simply put, no, I don't, aside from my supervisor. Cooperation is always more than welcome as long as the publishing is done in a computer science journal. (A PhD in computer science requires computer science publications, or at least multidisciplinary ones that cover computer science as well.) --Michael Tsikerdekis 08:38, 23 September 2011 (UTC)
I'm confused. Are you saying that you don't have an advisor or that your advisor is not a part of this work? I've sent an email requesting your university email address to confirm your credentials.--EpochFail 14:34, 28 September 2011 (UTC)
Sorry for the confusion; I hope I can make it clearer. My advisor is my supervisor for my PhD studies. He is informed about my general research and also about this specific survey, which was also described in my thesis proposal. Basically, he is the man with the answers when I am stuck with my research. When you say advisor, I presume you mean the same as supervisor? --Michael Tsikerdekis 16:58, 28 September 2011 (UTC)
That is indeed what I was hoping for with "advisor". Could you share his contact? Just the name would suffice. It should be listed on the project page. (We should probably add that as a field to the template.)
    • Do you have any relevant previous publications? --EpochFail 16:42, 22 September 2011 (UTC)
I do have a publication in peer-reviewed conference proceedings; by the way, it is cited here: wikipedia:Pseudonymity#cite_note-tsikerdekis-15. Other than that, I am now trying to get a publication into a journal, hopefully this one :-) --Michael Tsikerdekis 08:38, 23 September 2011 (UTC)
    • Will anyone other than yourself be responsible for data collection and analysis? --EpochFail 16:42, 22 September 2011 (UTC)
Short answer: no, unless it is a serious problem for the community here. The data collection is automatic through the invite process, with the website storing responses in MySQL. The data themselves will be public for anyone to use, as long as they cite the research and don't claim it as their own. Hence, the analysis can be done by virtually anyone, but the analysis for publishing will be done by me. Of course, findings from other people's analyses of the data will be considered for publication, as I wouldn't want to miss any important piece of information. --Michael Tsikerdekis 08:38, 23 September 2011 (UTC)
That sounds fine. I was really intending to ask if you would be the only one to see the raw dataset in its non-anonymized form. --EpochFail 14:34, 28 September 2011 (UTC)

Re. Recruitment mechanism: User_talk vs. Special:EmailUser[edit]

I've been a fan of requiring recruitment requests to be posted on user talk pages so that the process is more transparent, but I don't feel too strongly. There are obvious benefits (some cited by the researcher) to sending email directly to Wikipedians. How does the rest of RCom feel about this? See my thoughts below --EpochFail 16:42, 22 September 2011 (UTC)



  • Increased accountability and trackability
    Wikipedians can use the templatelinks system and Special:Contributions to track where recruitment messages have been posted to easily find out just how much recruiting a researcher has been doing.
  • Less intrusive to Wikipedia
    Many Wikipedians may prefer not to be solicited via their email and keep wiki-communications on wiki.
Is there any survey that covers which way Wikipedians prefer? (Just wondering, because using the optimum would be best.) --Michael Tsikerdekis 08:38, 23 September 2011 (UTC)
  • Wide access
    For many Wikipedians, the User_talk page is the only available means of communication


  • Visibility
    Some Wikipedians who would be willing to participate may be less active or won't notice a post on their talk page
  • Lack of privacy
    For some studies, just the fact that an editor was sent a recruitment request could be a breach of privacy depending on how the results of the study are presented.



  • Private message
    Sending a user an email is more like a "private message" than a broadcast for others to see. This is generally preferred for studies with privacy elements.
  • Better response rate
    Anecdotally, using Special:EmailUser has been reported to give a better response rate.
  • Reduced negative attention
    Some scrutiny is good, but sometimes uninvolved editors will take it too far. See [2] for example.


  • Untrackable
    No feasible way to track the recruitment requests made by an editor
Do you mean that Wikipedia has no means of tracking which account sends emails to another account? As for response rates, since every user has an invite code, the response rate can be tracked. --Michael Tsikerdekis 08:38, 23 September 2011 (UTC)
Sorry I didn't see this message earlier. Though admins and WMF employees can track use of Special:EmailUser, the logs are not available to the general public (to my knowledge). I find it very valuable that any research activities are completely transparent to the general community in order to support the self-policing mechanisms of the community. --EpochFail 20:42, 11 October 2011 (UTC)
  • Intrusive
    Many Wikipedians would prefer Wiki matters to remain on wiki. Few people enjoy unsolicited messages in their email.

There have been no comments on this issue, but I'm sure that some people will take issue with sending hundreds of emails to Wikipedians. I've pinged RCom to try to get a discussion going, but if we don't get some participation here, the next step would be to bring in a wider audience by posting on Wikipedia. --EpochFail 17:25, 28 September 2011 (UTC)

For this specific study, and given the target sample size, I second EpochFail and recommend posting recruitment requests via user talk pages. We can have a new discussion after the second round of recruitment is completed, as proposed in the project page, if this method results in a low response rate. --DarTar 18:50, 11 October 2011 (UTC)
This sounds reasonable to me. I'd like to start a conversation within the Wikipedia community about recruitment methods like these, but I'm not sure how to do that productively. That's part of a bigger topic for RCom though. In this case, I'm interested in proceeding with User_talk postings to gauge the reaction and adjust appropriately. --EpochFail 20:42, 11 October 2011 (UTC)
The discussion could happen on a page like Wikipedia:Wikipedia_talk:User_pages, although as far as I can tell it's only for the discussion of guidelines :-S. By the way, that could be a whole survey by itself: asking what Wikipedians' preferred channels of communication are and what is acceptable to receive where. There is definitely literature to support research on that. --Michael Tsikerdekis 07:27, 12 October 2011 (UTC)
I also think that talk page recruitment is to be preferred here, since it does not make the research impossible, but is more compatible with the community culture here. -- Daniel Mietchen 21:26, 11 October 2011 (UTC)
I'd prefer that we use talk page messages rather than email. Talk page messages have the advantage that there is already an opt-out mechanism in the No Bot template, but the only feasible way a user has to opt out of emails like this is to disable email, and we don't want them to do that because they'd lose the ability to do password resets etc. Also, as we have a large minority who haven't enabled email, you would introduce a potential skew to this exercise. My expectation is that those who withhold their email from Wikimedia will tend to have different attitudes towards anonymity than those who trust Wikimedia with their email address. WereSpielChequers 23:20, 12 October 2011 (UTC)
Excellent point! I never thought about it this way. Maybe those who are sensitive about their emails and privacy here will be more likely to propose their alternative solutions when anonymous, or maybe the opposite will happen. And excluding them from the sample won't cover the whole population. --Michael Tsikerdekis 09:26, 13 October 2011 (UTC)


I made a couple of corrections in the survey so that it will be linguistically solid and so that I can get all the information needed from the participants. I even included qualitative questions so that I can triangulate between the quantitative and qualitative data. Hopefully, this will give me better insight into what the results are trying to tell me. Can I proceed with the invites through people's talk pages? --Michael Tsikerdekis 09:09, 10 November 2011 (UTC)

I have a few remaining concerns:
  1. I see that you've changed the language in Methods to specify that you'll be making talk page postings. It looks like you still need to update the Timeline section.
  2. How many requests for participation do you plan to post overall? Will you be making all of the postings at once?
  3. Can you give us an example of what your request to participate will look like?
Otherwise, everything looks good to me. I'm excited to see the results of your work. --EpochFail 15:15, 10 November 2011 (UTC)

  1. I updated the Timeline and Methods so that they are more specific and reflect a more accurate timeline. I still haven't decided when the data should be made public. Should I release them before publishing or after? What do you think?
  2. For correlation and difference studies the rough rule is 30 participants per group. I test 3 categories of anonymity times 3 levels of importance, so 270 would be ideal. Since importance is a matter of choice for the individuals, there is no way for me to control how many participants fall under each importance category, and I am not expecting many to select the "Not at all important" category. Given that, I think even 200 should do it. If the response rate were 100%, then 200 invites would be enough. I have no idea how well Wikipedians respond to surveys, but for Internet surveys response rates are usually within 20%-40%. At roughly 30% I would need to send around 600 invites. Using a random algorithm I can get 600 random people from the total of active users (150,000). In order for each member to have an equal chance of being selected (so that the sample is truly random), I need to send the invites to the whole list of 600 users; I can't just pick 200, see how it goes, and then decide whether to send another 200. If you know the response rate for Wikipedia, I can revise the number of 600 invites. Another approach would be for me to send 10 invites as part of a pilot study: if I get a 7/10 response rate, then I can conclude that I need about 285 invites instead of 600. So it depends on the committee. If you are comfortable with me sending, say, 500 invites expecting a 40% response rate, then I will go right ahead. If you want me to investigate whether I can lower the number of invites, I can proceed with a 10-person exploratory invite to determine the response rate. Of course, if you do know the response rate, even better! :-)
  3. As for the invite message, I decided to use the invite of a fellow researcher who recently posted another research project on Wikipedia. I will make an extra section here so that we can improve the message, and who knows, it might even become a template for future researchers. --Michael Tsikerdekis 16:47, 10 November 2011 (UTC)
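The invite arithmetic in point 2 above can be written out in a few lines; the numbers are the ones quoted in the discussion (note the text rounds somewhat, e.g. 200/0.30 is closer to 667 than 600):

```python
# 3 anonymity states x 3 importance levels, ~30 per group (rule of thumb)
ideal_responses = 3 * 3 * 30          # 270 completed responses
relaxed_target = 200                  # few "not at all important" picks expected

assumed_rate = 0.30                   # typical internet-survey response rate
invites_needed = relaxed_target / assumed_rate   # ~667; the text rounds to ~600

pilot_rate = 7 / 10                   # hypothetical pilot outcome from the text
invites_after_pilot = relaxed_target / pilot_rate  # ~286, matching the "285" estimate
```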
  1. You should try to make the data public as soon as you can without causing yourself trouble. Unless you have a reasonable concern that your work might get scooped (pretty unlikely given survey work), I'd post the anonymized dataset as soon as you have it ready.
  2. I like the idea of doing a pilot to check how much of a response you will get (and ensure there aren't any other issues). Could you propose a size and timeline for a pilot? --EpochFail 20:21, 13 November 2011 (UTC)
  1. I do agree; since it is online, why not make it interactive and available to everyone. As soon as I get the first responses back (from the pilot study) I can create PHP scripts that will print all the results and tables. I will strip the ids and randomize the records so that they are not reported in the chronological order in which they were received in the db.
  2. Usually pilots are really small, 10-20 people; my proposal would be 15. It should be enough to test everything and give response rates. There might also be a way to include these first participants in the rest of the survey, but I am going to research that a bit and get back to you; the issue is mostly perceptual. Anyway, with a 15-member pilot study, I would give about 15-20 days for them to respond. After that, depending on the response rate, I can post the results back here and you can give me the okay to complete the rest of the survey. --Michael Tsikerdekis 11:27, 14 November 2011 (UTC)
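The release plan in point 1 above (strip the ids, randomize the record order) can be sketched like this; the field names and the fixed seed are illustrative only, and the real scripts would be PHP:

```python
import random

def prepare_release(records, seed=42):
    """Drop the row id and shuffle, so the published order no longer
    reveals the chronological order of arrival in the database."""
    rng = random.Random(seed)  # fixed seed only so the sketch is reproducible
    released = [{k: v for k, v in r.items() if k != "id"} for r in records]
    rng.shuffle(released)
    return released

records = [{"id": i, "q1": i % 3, "q2": (i * 7) % 10} for i in range(6)]
published = prepare_release(records)
```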
This sounds good. If you will update your project to reflect the pilot study plan, I'll call for a straw poll so we can build consensus. --EpochFail 14:02, 14 November 2011 (UTC)


"differ- ent"? "eas- ier"? Are these intentional? I cleaned it up because it looked like leftover formatting from text copied and pasted from somewhere else. Ottava Rima (talk) 16:10, 10 November 2011 (UTC)

You guessed right about the errors. I was already writing the paper for publication and copied the references from there... sadly, PDFs split words like that when copied. Thanks for fixing them :-) --Michael Tsikerdekis 16:48, 10 November 2011 (UTC)

Invitation message[edit]

Your invitation to participate in a Wikimedia-approved survey in online behavior.[edit]

Hello, my name is Michael Tsikerdekis, currently a student involved in full-time academic research at Masaryk University. I am writing to kindly invite you to participate in an online survey about interface and online collaboration on Wikipedia. The survey has been reviewed and approved by the Wikimedia Foundation Research Committee.

I am contacting you because you were randomly selected from a list of active editors. The survey should take about 7 to 10 minutes to complete, and it is very straightforward.

Wikipedia is an open project by nature. Let’s create new knowledge for everyone! :-)

To take part in the survey please follow the link: www.[to be defined].

Best Regards, ~~~~

PS: The results from the research will become available online for everyone and will be published in an open access journal.

I made some minor wording fixes to the invitation and added links. Forgive me if I've been too bold. I think you may want to make it more apparent that you are a student. You may also want to include the fact that you intend to share the results with the community once you have completed the study. Otherwise, this looks good to me. --EpochFail 20:19, 13 November 2011 (UTC)
I made the changes that you proposed. I think it is better now. --Michael Tsikerdekis 11:05, 14 November 2011 (UTC)
FYI: I made another correction from "Wikipedia Foundation" to "Wikimedia Foundation". --EpochFail 20:21, 16 November 2011 (UTC)

Poll: RCom support for this project[edit]

I'd like to take a poll to determine if there is consensus among RCom and other discussion participants for this study to move forward and for RCom to give its formal approval.

  • Support Conditional support: This project is of the canonical type that the RCom's subject recruitment group was intended to handle. The survey is likely to be both interesting and non-intrusive for Wikipedians. The results should prove to be interesting academically and practically (i.e. learn how Wikipedians prefer to deal with troublesome people). We'll be testing the waters a bit with the mass subject recruitment needed for this project, but it's about time for that. --EpochFail 14:41, 14 November 2011 (UTC)
    • I've updated my !vote to "conditional support" based on Steven's suggestion to bring this to the village pump. I fully support this so long as the Wikipedians at the pump don't bring up major concerns about the recruitment method. I'd recommend that the right time to visit the village pump would be between the pilot and the proper study. --EpochFail 21:15, 16 November 2011 (UTC)
  • Support, agree with Aaron.--Ymblanter 17:16, 14 November 2011 (UTC)
  • Support though if you haven't taken this to the Village Pump of the relevant project yet, I strongly encourage you to do so after the RCOM support comes through. I also made some bold changes to copyedit, including making the title more inviting by starting with "your invitation" instead of the slightly ambiguous "kind". Also, most important of all, you can't say "Wikipedia approved", because this was approved by a Wikimedia body. Steven Walling (WMF) • talk 18:07, 14 November 2011 (UTC)
  • Support: I do not see a single problem with this. It sounds interesting indeed. Good luck with your research. Goran S. Milovanovic 20:40, 16 November 2011 (UTC)

Per a discussion on the RCom-l mailing list, this poll does not appear to have the consensus of the RCom members. A new poll will be opened shortly. --EpochFail 14:59, 27 November 2011 (UTC)

I read the messages that were written on the mailing list and I felt that I should answer some of the concerns that were raised. I don't want anyone signing off on the project if everything is not covered (or at least what is humanly possible in scientific research). I removed who posted what because I wanted to focus on the content, and I sorted everything numerically. I want to make some of my points even clearer, as I also felt that some of the issues/responses mostly depended on perception reflecting one's personal opinion. You are more than encouraged to ask and reply with any other questions you might have. --Michael Tsikerdekis 18:18, 27 November 2011 (UTC)

  • we will be approving for the first time some kind of large-scale recruitment approach via user talk pages for a student project: this is something we've never done before and we should only do it if there's a good reason.
I do agree that I am still a student and the project constitutes a "student project", but it is not a bachelor's or a master's study; it is a PhD study. In any case, I believe that the value of a project lies not only in the academic degrees that one holds but also in the value of what is to be researched. Knowing whether there is any effect on the way people vote under different anonymity states is important. Consider that the voting that takes place for this project is governed by the same rules that I want to investigate: some of you use pseudonyms and others real names. In addition, aside from the research itself, I presented you with literature on the topic and talked a bit about the process and analysis. If you want me to tell you exactly which variables are going to be obtained quantitatively and qualitatively, as well as the methods of analysis that I plan to use, I can create a separate section here documenting the process. --Michael Tsikerdekis 18:18, 27 November 2011 (UTC)
  • the advisor of the proponent doesn't seem to be involved at all in this project and is not even named in the proposal. Aaron asked the proponent to share the name of his supervisor in September, but he hasn't done so (yet?)
I hadn't yet read Aaron's request to post information about my advisor. My advisor has an extremely busy schedule and, although not a co-researcher on what I am trying to do, he completely approves of and supports the project, which was also described in my PhD thesis proposal; and of course, when there is something that I don't know, I consult with him and other experts first. For this survey I was also fortunate enough to have another professor, who teaches social science research methods, review the project. --Michael Tsikerdekis 18:18, 27 November 2011 (UTC)
  • the proponent says that no funding is supporting this research and that this study is "conducted with the author's own efforts"
Yes, I don't have a budget for my research. In fact, as a PhD student I have a small enough budget as it is to pay my bills, tuition fees, etc. Surveys such as this are ideal when the budget is limited, and they are also quite efficient for obtaining concrete results. Offering economic incentives may sometimes produce biased results. --Michael Tsikerdekis 18:18, 27 November 2011 (UTC)
  • no one else other than the applicant will be implied in the data collection and analysis and the proponent doesn't seem to have an actual research record
I have already published and presented a scientific paper at an international conference this past summer in Rome, Italy, and currently have a paper under peer review for another scientific journal. It is of course no match for a professor's research record. I left an open invitation, which still stands, for anyone who would want to cooperate in this research, and as I have written before, I can provide you with the variables and the analysis I am going to follow (measuring differences between each treatment, correlation, and maybe ordinal logistic regression analysis, the latter depending on the results). --Michael Tsikerdekis 18:18, 27 November 2011 (UTC)
Can you link us to your previous publication? --EpochFail 19:00, 29 November 2011 (UTC)
Yes, here it is: [[3]]. It can also be found through Google Scholar. The conference has a time limit before it opens the data to the public, but if you want to have a look I can send the paper via email. --Michael Tsikerdekis 19:36, 29 November 2011 (UTC)
Update: My article that was under peer review has been accepted by the Eminds International Journal on Human-Computer Interaction, which is an open journal (gold open access). It was about a research survey that I conducted earlier this year on Anonymity and Aggression. --Michael Tsikerdekis 07:34, 14 December 2011 (UTC)
  • there is no trace in this proposal of an approval by an ethics committee. The proponent says that this is not applicable (and it's true that IRB policy is very different between the US and other countries), but some official record would help us assess the credibility of the proposal.
Masaryk University has an Ethics Review Board which handles complaints about researchers and enforces punishment depending on the "crime" that one committed. It expects all researchers to adhere to the latest code of ethics. I follow the American Sociological Association's Code of Ethics for all of my research projects. --Michael Tsikerdekis 18:18, 27 November 2011 (UTC)
  • I don't want to be fussy about this particular request, and we should move forward if there is consensus, but the general problem with our passive support of SR requests is not settled yet as far as I am concerned. I have limited time and resources to allocate to reviewing research collaboration requests and it's fair to say that 80-90% of my RCom activity is taken by reviewing surveys of any kind. I don't think we've ever followed up with previous surveys to see if they were successfully completed, if they produced any interesting results, if their results were ever shared with the community. What I am trying to say is that we could spend our effort more wisely to maximize the usefulness of our research outreach program and help researchers help us in this process.
100% agree with this. I don't want the RCom to only approve the research but also to be able to evaluate its progress over time. Personally, I think the results will be valuable: if I do not find significant differences, I believe that the voting process is not affected by anonymity states; if I do find significant differences between treatments, then we have something interesting on our hands. Needless to say, sharing the results with the community would be beneficial to all parties involved. I believe both the community and I have something to gain by making the research work and, of course, by keeping future research similar to this going. --Michael Tsikerdekis 18:18, 27 November 2011 (UTC)
  • I'd like to recommend that Michael withhold his dataset until he feels comfortable publishing it, but I have been advising him that he is unlikely to get scooped on the results of a survey he performed. I don't feel comfortable suggesting that he risk the rewards of his work against his own judgement.
I am skeptical about completely releasing everything before publishing, but I would definitely want to give the review committee members access as results start coming in. I could make an interactive page randomizing the results and give the link to the RCom.
The more I think about it, the more comfortable I feel allowing the resulting dataset to be withheld at least until submission of the manuscript. WSC brought up the concern on the mailing list that, if we require researchers to release their datasets before they are allowed to publish their results, it will only be a matter of time before a researcher gets scooped and their efforts are wasted. --EpochFail 19:00, 29 November 2011 (UTC)
  • As for the reuse rights. It seems like we have discussed this and decided that studies that simply come to RCom for help vetting their proposed research would not need to provide rights to distribute the manuscript. As a side-note, it is important that the language he references is changed to specify that the right to "adapt" the manuscript is *not* passed on to the Wikimedia Foundation. I'd like to tell him that this requirement does not apply to him since he did not receive substantial support, but on the other hand, his research plan involves contacting a rather large amount of Wikipedians. Thoughts?
I'm not particularly fussy about this. If Wikimedia allows me to conduct my research and obtain results, it stands to reason that I will return the favor to the whole community. As far as I understand the logistics of reuse rights, and please correct me if I am wrong, it is not any different from having a co-author on the project. If you think that I should extend the rights to Wikimedia, I would be more than happy to do so. I only asked because I do not have knowledge about the logistics of the process itself. I know that publishers such as Elsevier have a policy that covers "work for hire", so it should cover something like this as well. --Michael Tsikerdekis 18:18, 27 November 2011 (UTC)
  • Well, I think we should react somehow. Michael, would it be an option to sample a small fraction first? Would it be useful in any way?--Ymblanter 18:52, 29 November 2011 (UTC)
Yes, actually this is what I mentioned to Aaron over email while I was waiting for the votes. The smallest fraction I could sample would be the 15-person pilot. It won't give me any conclusive results, but it will give me the response rate and show me any potential problems with the survey. For the next step I could sample anywhere from 100 to 300 people (that is, assuming a 100% response rate). Keeping the sample at the minimum of 100 will give me 33 participants for each treatment, enough to see if there are any significant differences between the anonymity states. With this, of course, if I want to see differences between genders, and if theoretically each group of 33 participants is split 50-50 into males and females, I will have ~16 participants for comparisons. This is below the recommended limit for testing differences, but it could still give an indication, in theory. The true reason I wanted more than 100 is that I would like to measure whether the importance of a problem for an individual affects behavior across anonymity states. The questionnaire has 3 importance levels and 3 anonymity states, and ideally you need 20-30 people for each subcategory-category combination... therefore 200-300 participants are needed to have conclusive results about importance in relation to anonymity. So, to answer your question, I definitely believe that it would be useful to sample a small portion first (the 15-person pilot) and after that take it a step further to 100-150. At that point I can look at the data and probably determine whether the importance of a topic for an individual plays any role (and whether it might be worth adding another 100 people to the list to investigate further). I believe that even a sample of 100-150 participants should be enough to ascertain whether anonymity states indeed affect the way people conform with others. PS: I do have qualitative data in the survey which could help me understand if importance plays any role.
For example, if I see that someone says 'I don't care about the topic so I will go with the flow', that should also indicate to me that importance might play a role. Maybe I could even use the qualitative results to support the theory about topic importance in the paper without the need for an additional sample. --Michael Tsikerdekis 19:26, 29 November 2011 (UTC)
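The sample-size arithmetic above can be sketched as a rough check (illustrative only; the 20-30 per-cell figure is the rule of thumb quoted in the comment, and the resulting totals land just below the rounded 200-300 range mentioned):

```python
# Back-of-envelope sample-size arithmetic for the 3x3 design discussed above:
# 3 importance levels x 3 anonymity states, 20-30 respondents per cell.
cells = 3 * 3
low, high = cells * 20, cells * 30
print(f"{low}-{high} participants for full per-cell coverage")

# With the minimum sample of 100 split evenly across 3 treatments:
per_treatment = 100 // 3
print(f"~{per_treatment} participants per treatment")
```

The exact multiples come out to 180-270; the comment rounds these up to 200-300 to leave some margin.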
  • Well, one thing that I do not understand about this project is related to its methodology, but putting that question aside for a moment, I have to reiterate that I do not see any procedural problems with it. Someone mentioned that this will be the first time we try to recruit a significant number of participants via their talk pages. Since I am definitely not an expert on Wikimedia/Wikipedia-specific problems related to this approach to subject recruitment, I will not interfere with that discussion. I cannot estimate how users of Wikipedia would react to such an approach. The question related to methodology is the following: am I missing something important about this study, or are participants being asked to rate their degree of perceived anonymity after explicitly being informed that they are either listed with their full names or completely anonymous in the hypothetical situations encompassed by each page/question? Please elaborate. Thx, Goran S. Milovanovic 22:26, 29 November 2011 (UTC)
Goran, you understood correctly :-). Users are asked to rate how anonymous they perceive themselves to be (1-10) after they are clearly informed about their anonymity level. When I was presenting in a class about a previous study in which I was identifying the relationship between anonymity and aggression, a student pointed out that users who are more technically acquainted with computers might perceive anonymity differently. A user might say that while he is anonymous on the website, the website can still log his IP address, and someone could trace him through his ISP to his home. So, keeping that in mind, such a user might behave differently from a user who just sees that he is anonymous and says “okay, that's a 10, I am completely anonymous”. I plan to do the following with this variable:
  1. Perform correlation tests within anonymity states. In theory, I expect to see that as perception of anonymity increases, so does the likelihood of one sticking to one's original best choice.
  2. Perform correlation tests across anonymity states. Since the scale (1-10) runs from completely known to completely anonymous, I could argue that the correlations can be performed across different anonymity states. I do, though, need to be cautious with the results from this kind of correlation test across anonymity states. It will also depend on the data. If someone rates his or her perception as 10 (completely anonymous) in the scenario where s/he uses his/her real name, that would challenge the results from this correlation analysis. It is, in a way, an outlier.
  3. Finally, the obvious test is descriptive statistics. We can see the perceived level of anonymity that people have within anonymity states. As far as I know, nobody has yet done research measuring the perceived level of anonymity that individuals have across different anonymity states on a scale from one to ten.
In short, this is basically a measure to increase internal validity for the results of the study.
--Michael Tsikerdekis 12:22, 30 November 2011 (UTC)
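The within-state correlation test in point 1 could be run with Spearman's rank correlation, which suits the ordinal 1-10 perception and 1-7 Likert scales. A minimal sketch with SciPy, using made-up illustrative responses (none of these values come from the actual survey):

```python
# Sketch of the within-state correlation test described in point 1,
# using Spearman's rank correlation (appropriate for ordinal scales).
# The data below are hypothetical, for illustration only.
from scipy.stats import spearmanr

perceived_anonymity = [1, 2, 3, 5, 6, 8, 9, 10]   # 1-10 perception scale
reveal_likelihood   = [3, 4, 4, 5, 5, 6, 7, 7]    # 1-7 Likert item

rho, p = spearmanr(perceived_anonymity, reveal_likelihood)
print(f"rho={rho:.3f}, p={p:.4f}")
```

A strongly monotone relationship like this toy one yields a rho near 1; the real survey data would of course be far noisier.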
  • Thank you, Michael. As far as I am concerned, I have no more questions and nothing to add to our discussion, so I will wait for the next poll on this research to be called. Goran S. Milovanovic 05:26, 7 December 2011 (UTC)
  • I agree with Goran. As far as I am concerned, the research can go through.--Ymblanter 21:48, 9 December 2011 (UTC)
I'm still waiting on User:DarTar to chime in before calling another poll, since many of the comments that Michael responded to were his. I've pinged him a couple of times recently. I'll create the poll on Monday if we don't hear back from him by then and no one has beaten me to it. --EpochFail 22:50, 9 December 2011 (UTC)

Poll: RCom support for this project (attempt #2)[edit]

Once again, I'd like to take a poll to determine if there is consensus among RCom and other discussion participants for this study to move forward and for RCom to give its formal approval. --EpochFail 23:30, 13 December 2011 (UTC)

  • Support: See reasoning in previous poll above. --EpochFail 23:30, 13 December 2011 (UTC)
  • I support provided Michael starts with a smaller sample as discussed.--Ymblanter 10:43, 14 December 2011 (UTC)

Given the lack of concerns raised despite the long-standing notifications about this poll, I am advising Michael to move forward with his recruitment plan. --EpochFail 14:44, 3 January 2012 (UTC)

  • Support --Lilaroja 15:25, 28 January 2012 (UTC)

Pilot Study Results[edit]

General results: 8/15 people responded (53% response rate). For internet surveys this is considered a high percentage. There were also 4 people who opened the link but quit on the first page of the survey. It is not unlikely that people were simply curious; however, I contacted them and tried to find out if there was anything wrong with the survey. One responded and said that they considered gender and age to be sensitive information. The survey was edited and a "Don't want to say" option was added to the questionnaire for the questions on age and gender. I don't expect many people to choose this, and I don't expect it to affect the main cases for checking differences between the anonymity states, which is the most essential part of the study.

The community seems to have responded well to the invitations on their talk pages. I received no complaints about them.

I am posting the results of the pilot study below. I ran a couple of tests, but with a sample of 8 people it would be careless to try to reach a conclusion.

An interesting thing is that 66.7% of the answers consisted of the top two likelihood choices. This is consistent with previous computer-based studies of conformity. This means that the study seems to be on a good path.

Below is the table with the results. Each individual answered one of the three scenarios, whose text I haven't included in the table to avoid clutter. Based on the comments and the variables, though, you can get a general idea of the dynamics involved.

Based on the comments, people believe that anonymity would make people a lot bolder than when they are known, but as far as I can see, participants didn't care if their names were known. Patricia Wallace in her book The Psychology of the Internet has an explanation for this based on a comparison of Asch's experiment in 1955 (people conformed: 25%) and the Smilowitz, Compton, and Flint computer experiment in 1988 (people conformed: 69%). Basically, people feel safer behind a computer screen and feel less threatened. But I wonder if this means that anonymity states don't affect conformity within a computer environment at all. Can't wait for the full results. :-)

ASTATE: 1 - Real names, 2 - Nicknames, 3 - Completely anonymous

QREVEAL: 1-7; the higher the value, the higher the likelihood. The Likert scale also has a middle point (4) which stands for uncertain.

QSTRENGTH: 1 - not important, 2 - very important, 3 - extremely important

QPERCEPTION: 1-10, with 10 being fully anonymous and 1 completely known

ID ASTATE QREVEAL QSTRENGTH QPERCEPTION Comment
1 1 6 3 3 Situation must be brought to a head
1 2 6 3 8 You want to get paid fairly
1 3 6 3 3 You feel that it's right and you are that person's boss
2 1 5 3 7
2 2 5 3 6
2 3 5 3 6
3 1 6 2 2 I don't perceive much threat in response from management in this scenario.
3 2 6 3 7 I still perceive little incentive for someone to bother breaching the mediocre anonymity and little chance of retaliation by management. Retaliation by the misbehaving employee is likely to force the issue.
3 3 6 2 9 I would be willing to stand on this issue even without the anonymity.
4 1 5 3 2 Real names invite retribution.
4 2 7 2 5 In this case anonymity is not an issue. Employees should be compensated for overtime as a legal issue. I would speak up even using my real name.
4 3 7 2 8 Even a valued employee can improve. Being completely anonymous allows for sharper and more poignant responses.
5 1 4 3 1 There are too many unquantified variable here to evaluate.
5 2 7 3 1 I would advocate my opinion in the debate. If it were clear that I had lost the debate, I would abstain in the voting.
5 3 3 1 3 I would leave the company if I didn't think my compensation was appropriate to my work.
6 1 6 2 1 It's very unlikely that that person will chamge his behaviour if nothing is done.
6 2 6 3 8 that's only fair
6 3 6 3 10 it is nececairy for that person to change his behavior.
7 1 7 3 1 I am suggesting a solution to a problem. It would be illegal to fire me for expressing this (reasonable) view.
7 2 3 3 3 It would be best for me to do what senior management want! If they value the employee, and didn't like the decision I took on my own, and this resulted in some pseudo-democratic process, where everybody else also seemed to value the employee, I guess I would keep my head down, even if I wanted them fired.
7 3 6 2 8 Well it's very difficult because in real life, I would have made my own assessment of the problem tenant's character. Obviously, if I felt they were beyond reason, I would very probably suggest the alternative. If I did not, then I would be less likely to.
8 1 7 3 1
8 2 7 3 2 its my job
8 3 4 1 8 don't care
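The 66.7% figure quoted above can be checked directly from the QREVEAL column of this table; a quick sketch (the list below is just that column transcribed, in row order):

```python
# Share of QREVEAL answers in the top two likelihood choices (6 or 7),
# transcribed from the pilot table above (8 respondents x 3 rows each).
qreveal = [6, 6, 6, 5, 5, 5, 6, 6, 6, 5, 7, 7,
           4, 7, 3, 6, 6, 6, 7, 3, 6, 7, 7, 4]

top_two = sum(1 for v in qreveal if v >= 6)
print(f"{top_two}/{len(qreveal)} = {top_two / len(qreveal):.1%}")  # -> 16/24 = 66.7%
```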

Projection for main invitations[edit]

Based on the results from the pilot study, I think a sample of 100-150 is sufficient for establishing whether anonymity states affect the voting process or not. Based on the 53% response rate, this will require 188-283 invitations to be sent. I believe that starting with 200 invitations is a good choice here. If response rates are as good as in the pilot, the target is going to be surpassed for sure. --Michael Tsikerdekis 09:45, 25 January 2012 (UTC)
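The invitation arithmetic above can be sketched as follows, using the exact pilot rate of 8/15 (the small difference from the quoted 283 comes from rounding the rate to 53% first):

```python
# Invitations needed to hit a target number of completed surveys,
# given the pilot response rate of 8/15 (~53%).
import math

response_rate = 8 / 15
targets = (100, 150)
invites = [math.ceil(t / response_rate) for t in targets]
for t, n in zip(targets, invites):
    print(f"target {t} completed surveys -> ~{n} invitations")
```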

Possible extension and additional invitations[edit]

At this point in the study, 72 of the 200 invited participants have successfully completed the survey all the way to the end (36%).

Briefly, the results in general, across all scenarios, indicate no correlation between the anonymity state and the likelihood of a participant revealing information (rho=0.038, p=0.528). The result was similar even with the more sensitive measure of anonymity based on the individual's perception of anonymity on a scale from 1-10 (rho=-0.011, p=0.871). However, there was a statistically significant correlation between the anonymity state and how participants rated their perception of anonymity on the 1-10 scale (rho=0.471, p<0.01). This was expected and shows that the participants must have correctly understood the anonymity state they were under for each scenario. Therefore, in all probability, their chance of going against the crowd occurs independently of their anonymity state.

Even though these are promising results for Wikipedia, I would like to end up with at least 100 participants, which will give me at least 30 cases for each scenario, so that I can check for variance in the means of each anonymity group per scenario. This way I can be confident that I have enough cases to conduct all possible variations of the analysis and assert the accuracy of the final results. I would like to ask the RCom and the members participating in this discussion to give me permission to invite another 50 people to the survey and extend its period until mid-to-late March. Michael Tsikerdekis (talk) 07:41, 21 February 2012 (UTC)
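The per-scenario check described here (comparing the anonymity groups' ratings within a scenario) is the kind of between-groups comparison a Kruskal-Wallis test handles for ordinal data, as mentioned in the project description. A minimal sketch with SciPy on made-up ratings (illustrative values only, not survey data):

```python
# Sketch of the planned between-groups comparison: a Kruskal-Wallis
# test across the three anonymity states for one scenario.
# The ratings below are invented for illustration.
from scipy.stats import kruskal

real_names = [3, 4, 4, 5, 3, 4]       # QREVEAL ratings, state 1
nicknames  = [4, 5, 5, 6, 4, 5]       # state 2
anonymous  = [5, 6, 6, 7, 5, 6]       # state 3

H, p = kruskal(real_names, nicknames, anonymous)
print(f"H={H:.2f}, p={p:.4f}")
```

With clearly shifted toy groups like these the test comes out significant; the point of collecting ~30 cases per group is to have enough power to detect (or rule out) much subtler shifts.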

  • I do not see any problems with inviting 50 more people on top of the 200 already invited.--Ymblanter (talk) 10:50, 21 February 2012 (UTC)
  • Agreed. I spoke with User:DarTar and he'll concur. --EpochFail (talk) 14:55, 21 February 2012 (UTC)
  • Ditto, happy to support this --DarTar (talk) 19:32, 21 February 2012 (UTC)
    • Thank you very much! I've sent the invitations today and updated the timeline. It will prolong the study, but I believe it's essential for getting accurate results for comparisons of anonymity states for each scenario. --Michael Tsikerdekis (talk) 12:26, 22 February 2012 (UTC)

Results: Highlights[edit]

An update for everyone involved in the project. The paper is ready and has already been sent to a journal for the long process of review, revisions and, eventually, publishing. In the meantime I wanted to present here the highlights of the results for everyone who can't wait to read the paper (but hopefully you will read the paper as well when it comes out :-) )

The highlights when adding up data from all scenarios are:

  • A medium correlation effect was found between the anonymity state and the participant's self-reported perception of anonymity (ρ = .466, p < .001). Perception of anonymity is dependent on the anonymity state, which is not really surprising.
  • Partial correlations of this relationship, controlling for the scenario and level-of-importance variables, showed no big differences in the coefficient.

  • A small negative effect was found between perception of anonymity and the level of importance of resolving a particular problem (ρ = −.103, p < .05).
  • Based on all the above, I proceeded with an ordinal regression analysis which predicts probabilities for perception of anonymity based on the level of importance and the anonymity state that an individual is under. To the best of my knowledge this is the first model of its kind that measures this.

Now for the really interesting parts of the study:
  • There was a small effect between the likelihood of not conforming and anonymity (perception of anonymity and anonymity state respectively) (ρ = .101, p < .05) (ρ = .102, p < .05).
  • Within each scenario this value increases a bit, although one scenario seems to be unaffected by anonymity but affected by the level of importance.

Scenario                     Noisy Neighbor   Unpaid Overtime   Bad Employee
A.State * Lik.Conf.          ρ = .178*        ρ = .001          ρ = .160*
Perc.of.Anon. * Lik.Conf.    ρ = .071         ρ = .031          ρ = .182*
Lev.of.Import. * Lik.Conf.   ρ = −.099        ρ = .222**        ρ = .056
A.State * Perc.of.Anon.      ρ = .431**       ρ = .530***       ρ = .432***

Note: * p < .05, ** p < .01, *** p < .001

Without wanting to make this post too long: in general, the results show that there is a really small effect of anonymity on the likelihood of conforming. Based on the quantitative results, along with the qualitative ones (which I omitted here to save some space), the evidence shows that people base their decisions on other factors and for the most part are really vocal and fearless about stating their opinion and not conforming with the group. However, there was a minority of people who didn't behave like the rest of the group. For about 10% of the people (based on the qualitative results), anonymity was critical for voicing their opinions. If anyone wants to read the preprint I would be more than happy to send it to you :-) Michael Tsikerdekis (talk) 18:33, 1 April 2012 (UTC)

  • Thanks for sharing the results. Good luck with the referees of the paper.--Ymblanter (talk) 02:15, 2 April 2012 (UTC)