
Research:Understanding the Dynamics of Hackathons for Science

This page documents a completed research project.


Key Personnel

  • Anna Filippova (annafil@cmu.edu)
  • Erik Trainer (etrainer@cs.cmu.edu)
  • Arun Kalyanasundaram
  • Jim Herbsleb

Project Summary


The project will enhance the sustainability of free and open source software by understanding how engagements with code build community, and by disseminating knowledge and tools that allow stakeholders to plan and conduct successful engagements, building strong, cohesive open source communities that will maintain and enhance the software they use.

To supplement the work we’ve already been doing, we’d like to gain more insight into how participants use online tools to supplement their face-to-face collaboration, how they transition between media, and how this might affect their group dynamics.

Dissemination


We will publish the results of our research with Open Access permissions in social computing conferences and journals such as Computer-Supported Cooperative Work (CSCW), ECSCW, and GROUP.

We will publicly share aggregate versions of our datasets so that other online communities planning similar events can learn and benefit.


Wikimedia Policies, Ethics, and Human Subjects Protection


This study has been approved by Carnegie Mellon University's Institutional Review Board (IRB). The date of approval is:

Participants will be asked to read and sign a consent form that describes their role in the research study, and how their identifying information and responses will be collected and shared.

The project is subject to the Wikimedia Foundation's open access policy. The researcher filed a memorandum of understanding, acknowledging the terms of the policy for the dissemination of the results and any published output. No NDA was required for this project since no private data is involved and data is collected by participants on an opt-in basis.

Benefits for the Wikimedia community


We will put together a report following the event, sharing the aggregate findings, inferences, and insights from these different modes of inquiry with the Wikimedia community to help in planning future events. In particular, we will compile and share the aggregate, anonymized free-response feedback from the survey about what participants thought were the best parts of the event and what they hoped could be improved.

Timeline

  • June 2016: Complete survey data collection, interviews, and archival analysis
  • August 2016: Begin submitting papers to conferences and journals that support Open Access (continuous activity). As of September 2016, under review.
  • September 2016: Release aggregated survey data


Methods


Our overall research approach is to use a mix of survey, interviews, and analysis of archival data. With help from the organizers, we will administer a survey to participants of WikiCite 2016 after the event. The purpose of the survey is to understand participants' perceptions about the event, for instance what worked well, and what could be improved for future iterations. As part of the survey, participants may provide us their name and e-mail address so that we may contact them afterward for interviews.

Interview topics will focus on learning more about some of the patterns in the survey data, and how well groups are following up on activities they started during the event.

We will extract archival data (e.g., wiki posts, records from the issue tracker, version control logs, mailing list discussions, etc.) before, during, and after the event to construct a complete and detailed picture of activities.
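
As an illustration of this kind of archival extraction, the sketch below (a minimal sketch, not the project's actual pipeline) counts version control activity in windows before, during, and after an event. The repository path and dates are placeholders, not values from the study.

    # Minimal sketch: count commits in a local clone before, during, and
    # after an event using `git log`. Path and dates are placeholders.
    import subprocess
    from datetime import date

    REPO = "/path/to/project/repo"                                  # hypothetical clone
    EVENT_START, EVENT_END = date(2016, 5, 25), date(2016, 5, 27)   # placeholder dates

    def commit_count(since, until):
        """Count commits with commit dates in [since, until)."""
        out = subprocess.run(
            ["git", "-C", REPO, "log", "--oneline",
             f"--since={since.isoformat()}", f"--until={until.isoformat()}"],
            capture_output=True, text=True, check=True).stdout
        return len(out.splitlines())

    before = commit_count(date(2016, 4, 25), EVENT_START)
    during = commit_count(EVENT_START, EVENT_END)
    after = commit_count(EVENT_END, date(2016, 6, 27))
    print(f"commits before/during/after: {before}/{during}/{after}")

The same windowing idea applies to wiki edits, issue tracker records, and mailing list messages, with the commit count replaced by the relevant activity count.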

Survey instrument design


Where possible, we utilized survey measures that have been validated in prior literature. For latent constructs such as satisfaction, that is, constructs that are not directly observable, we employed multi-item scales that utilize several related statements to describe a complex idea. Below we present examples of multi-item scales used, as well as their sources in literature. For the complete instrument, please refer to the linked document.

a) Satisfaction with Process

To evaluate the extent to which participants were satisfied with the process of working in their group, we utilized Reining’s [30] Satisfaction with Process scale. The scale consisted of 4 items and was evaluated on a 5-point semantic differential scale. We asked, “Would you describe your group/session’s work process as more:” and provided 4 response pairs: Inefficient-Efficient, Uncoordinated-Coordinated, Unfair-Fair, Confusing-Easy to Understand.

b) Satisfaction with Outcome

We also drew on Reining’s [30] methodology to evaluate Satisfaction with Outcome, that is, the extent to which participants were satisfied with the final product of their group/session. The scale consisted of 7 items and was evaluated on a 5-point Likert scale, from “Strongly Disagree” to “Strongly Agree”. We asked participants to indicate their level of agreement with statements such as “I am satisfied with the work completed in my group/session” and “I am satisfied with the quality of my group/session’s output”.

c) Perceived Participation

To evaluate participants’ perceptions of participation we modified an existing scale from Paul et al. [43] measuring perceived participation in group decision making. The scale consisted of 6 items, and was evaluated on a 5-point Likert scale, from “Strongly Disagree” to “Strongly Agree”. We asked participants to indicate their level of agreement with statements such as, “I always felt free to voice my comments during the session” and “Everyone had a chance to express his/her opinion”.

d) Goal Clarity

To evaluate the extent to which participants felt the goals of their group were clear to them, we modified Sawyer’s [32] goal clarity scale. The scale we used consisted of the 4 items most relevant to our study context and was evaluated on a 5-point Likert scale, from “Strongly Disagree” to “Strongly Agree”. We asked participants to indicate their level of agreement with statements such as “I was unclear about the goals and objectives for my work in this session/group” and “I was unsure how my work relates to the overall objectives of my group/session”. Negatively worded statements were reverse coded, so that higher values on the resulting scale correspond to greater goal clarity.
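
For readers unfamiliar with reverse coding, the following minimal sketch shows how negatively worded 5-point items can be reverse coded and averaged into a per-respondent scale score. The column names and response values are hypothetical, not the study's data.

    # Minimal sketch: reverse code negatively worded 5-point items, then
    # average items into a scale score. Data are illustrative only.
    import pandas as pd

    responses = pd.DataFrame({
        "goal_clarity_1": [2, 4, 5],   # negatively worded ("I was unclear about...")
        "goal_clarity_2": [1, 3, 5],   # negatively worded
        "goal_clarity_3": [4, 4, 2],
        "goal_clarity_4": [5, 3, 1],
    })

    # On a 1-5 scale, reverse coding maps 1<->5 and 2<->4: reversed = 6 - original.
    for col in ["goal_clarity_1", "goal_clarity_2"]:
        responses[col] = 6 - responses[col]

    # Scale score = mean of the (now consistently oriented) items per respondent.
    item_cols = ["goal_clarity_1", "goal_clarity_2", "goal_clarity_3", "goal_clarity_4"]
    responses["goal_clarity"] = responses[item_cols].mean(axis=1)
    print(responses["goal_clarity"])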

e) Brainstorming

To the best of our knowledge, a reliable measure of brainstorming processes aligned with Osborn’s [21] original propositions and appropriate for our study context was not yet available, so we designed our own scale consisting of 7 questions. Appendix “A” presents the full list of questions used in our scale. We asked participants to what extent each statement in the scale reflected the way their group decided what to work on, and evaluated their responses on a 5-point Likert scale from “Not at all” to “Completely”.

All items in our scale worked together to produce sufficient inter-item reliability, with the exception of one reverse-coded question: “Group members criticized ideas proposed during the group/session”. After verifying that the reverse coding was performed correctly, we dropped this question from our scale.
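
Inter-item reliability here refers to statistics such as Cronbach's alpha. A minimal sketch of computing alpha before and after dropping an item follows; the data and column names are illustrative only, not the study's responses.

    # Minimal sketch: Cronbach's alpha for a multi-item scale, re-checked
    # after dropping one item. Data are hypothetical.
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """alpha = (k/(k-1)) * (1 - sum(item variances) / variance of item total)."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Rows = respondents, columns = scale items (illustrative values).
    items = pd.DataFrame({
        "bs_1": [4, 5, 3, 4],
        "bs_2": [4, 4, 3, 5],
        "bs_3": [3, 5, 2, 4],
        "bs_4": [2, 3, 4, 2],   # suppose this item does not track the others
    })
    print(cronbach_alpha(items))                              # alpha with all items
    print(cronbach_alpha(items.drop(columns=["bs_4"])))       # alpha after dropping the item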

f) Minority Identification

We chose to explicitly ask participants if they identified as a minority, instead of indirectly measuring this by asking participants to self-report demographic variables or counting group proportions. We asked, “Do you consider yourself a minority? (For example in terms of race, gender, expertise or in another way)”. Answer options were either “Yes”, “No” or “Rather not say”. Of participants who definitively answered this question, 23% identified themselves as a minority in some way.

g) Covariates

We also asked participants to report their level of software self-efficacy, that is, the extent to which participants were comfortable in learning and using new software tools. To evaluate self-efficacy, we used a scale modified from the work of Holcomb et al. [44]. We also asked participants to indicate the number of years of programming experience they had. Finally, to control for potential confounding effects of different leadership styles in teams, we asked participants whether their team had a well-defined leader role (Yes or No).

Results


In this section we present the aggregate findings from our survey of the event.

Multi-item scale results


The table below presents aggregate results for the psychometric variables examined, as well as outcomes of the event. All items (except "New connections made") are on a 5-point scale from Strongly Disagree to Strongly Agree, with 3 representing a neutral response. Overall, results suggest participants were somewhat satisfied with the outcomes of the event and with the process of working together. Individuals made over 3 new connections on average with whom they may start new collaborations. Groups reported a participative or highly participative environment, and some use of brainstorming techniques to source ideas from all group members. Individuals also reported being somewhat satisfied with goal clarity.

More detailed inferential statistics will be made available via an open access publication that combines this data with data from a second event, allowing greater statistical power for interpretation.

Question | Number of responses | Mean | Standard deviation | Minimum | Maximum
Satisfaction with Outcome | 22 | 3.86 | 0.74 | 1.86 | 5
Satisfaction with Process | 21 | 3.65 | 0.68 | 2.25 | 5
Number of New Connections Made | 21 | 3.48 | 1.21 | 1 | 6
Perceived Participation | 21 | 4.37 | 0.52 | 3 | 5
Brainstorming | 21 | 3.33 | 0.51 | 2.33 | 4.17
Goal clarity of session/group | 22 | 3.49 | 1.17 | 1 | 5
Software use Self-efficacy | 21 | 3.65 | 0.7 | 2.5 | 5
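
The summary columns above (number of responses, mean, standard deviation, minimum, maximum) can be reproduced from per-respondent scale scores; a minimal sketch with hypothetical data:

    # Minimal sketch: derive the summary columns from per-respondent scale
    # scores. The DataFrame and its values are hypothetical.
    import pandas as pd

    scores = pd.DataFrame({
        "satisfaction_outcome": [3.9, 4.1, 1.9, 5.0, 4.3],
        "satisfaction_process": [3.5, 4.0, 2.3, 5.0, 3.4],
    })
    summary = scores.agg(["count", "mean", "std", "min", "max"]).T
    print(summary.round(2))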

Experience with Wikipedia

  • I have a user account: 9 participants (41%)
  • I am an active editor (5+ edits a month): 6 participants (27%)
  • I am a very active editor (100+ edits a month): 7 participants (32%)

Participants who identify as a minority

  • Identify as a minority: 5 participants (23%)
  • Do not identify as a minority: 17 participants (77%)

Leadership breakdown

  • I was the session leader: 2 participants (9%)
  • Someone else was the session leader: 14 participants (61%)
  • There was no one session leader: 7 participants (30%)

Satisfaction with organization of event


The table below presents aggregate feedback about participants' satisfaction with various aspects of event organization. Items are on a 5-point scale from Strongly Disagree to Strongly Agree, with 3 representing a neutral response.

Overall, results show participants were satisfied or very satisfied with most aspects of organization. Facilities received a score slightly below neutral (2.91); however, qualitative feedback suggests this was associated with WiFi issues attributed to the venue rather than to event organization.

Question | Number of responses | Mean | Standard deviation | Minimum | Maximum
Help for any problems | 21 | 4.05 | 0.8 | 3 | 5
Communication by organizers | 22 | 4.45 | 0.96 | 2 | 5
Facilities | 22 | 2.91 | 1.31 | 1 | 5
Refreshments | 22 | 4.23 | 0.81 | 2 | 5
Accommodation | 21 | 4 | 0.84 | 3 | 5
Outings | 21 | 4.24 | 0.83 | 3 | 5
Session variety | 21 | 4.24 | 0.83 | 2 | 5
Session quality | 22 | 4.32 | 0.84 | 2 | 5
Overall organization | 22 | 4.41 | 0.67 | 3 | 5
Event duration | 22 | 4.09 | 0.87 | 2 | 5

Event preparation


Frequency of preparation activities performed by participants, such as reading up on literature, preparing/learning new tools, coming up with use cases and proposals, data cleaning, and reading online communication:

Figure: WikiCite preparation activities

Inferential statistics


Given the small sample size, we combine these results with those from a second observed event in order to provide reliable inferential statistics about relationships among the variables examined. Results will be presented in an open access publication, currently under review and shared with the organizing team.
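
One plausible way to pool two events for such an analysis (a minimal sketch, not necessarily the authors' exact model) is to concatenate the per-event responses with an event indicator and control for it in a regression; the data and variable names below are hypothetical.

    # Minimal sketch: pool two events and control for the event when relating
    # a process measure to an outcome. Data and names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    event_a = pd.DataFrame({"satisfaction_outcome": [3.9, 4.2, 3.1],
                            "brainstorming": [3.3, 3.8, 2.9]})
    event_b = pd.DataFrame({"satisfaction_outcome": [4.0, 3.4, 4.5],
                            "brainstorming": [3.5, 3.0, 4.1]})

    pooled = pd.concat([event_a.assign(event="A"), event_b.assign(event="B")],
                       ignore_index=True)

    # OLS with an event fixed effect; larger pooled N gives more statistical power.
    model = smf.ols("satisfaction_outcome ~ brainstorming + C(event)", data=pooled).fit()
    print(model.params)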

References

  • Erik H. Trainer, Arun Kalyanasundaram, Chalalai Chaihirunkarn, and James D. Herbsleb. 2016. How to Hackathon: Socio-technical Tradeoffs in Brief, Intensive Collocation. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (CSCW '16). ACM, New York, NY, USA, 1118-1130.
  • Erik H. Trainer, Chalalai Chaihirunkarn, Arun Kalyanasundaram, and James D. Herbsleb. 2015. From Personal Tool to Community Resource: What's the Extra Work and Who Will Do It?. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW '15). ACM, New York, NY, USA, 417-430.


Funding


Supported by a grant from the Alfred P. Sloan Foundation.


Contacts
