
Research:Online harassment resource guide


This page documents a research project in progress.
Information may be incomplete and change as the project progresses.
Please contact the project lead before formally citing or reusing results from this page.


Online harassment has been a persistent and intractable problem for digital communications since the earliest days of the Internet. Despite substantial efforts in many corners over recent decades, online harassment remains a central problem, with very real harms and risks for many people, particularly for women and minorities.

Valuable scholarship on harassment and responses to it has emerged in recent years, but it is scattered across disciplines and methodologies, and it often stands at a scholarly distance from the work of practitioners and advocates. This literature review covers research on understanding and responding to online harassment. We connect and summarize these disparate efforts, with a focus on scholarly literature and reports by organizations.

Taken together, this review offers a broad overview and starting point to scholars, advocates, and practitioners working to understand and respond to online harassment.

About this resource guide


In this guide, we summarize and suggest first readings for anyone getting started in understanding the issues surrounding online harassment. Our short list is based on a much larger list of articles in a shared Zotero group. We encourage you to go beyond these introductory readings and browse the full list (link TBA).

The literature considered in this review was solicited from more than twenty scholars and practitioners conducting research related to online harassment. Each researcher was invited to submit bibliographies of relevant work, as well as suggestions of other scholars to invite. This initial sample was extended through citation analysis of the bibliographies they shared. Submitted scholarship was clustered into themes, which were evaluated in workshops that also identified missing literatures and lists of further scholars.

Citing this resource guide


If you make use of this resource in the course of your academic research, please cite it as follows:

How to contribute


This resource guide is a collaborative effort; if you have suggestions, please share them! We meet semiregularly for working groups and check-ins, and we would love to welcome you. Here are several ways to contribute:

  • Tell us what you need to know if you're a practitioner or researcher who has questions or problems not covered in this guide.
  • Suggest sections by contacting us. Our bibliography includes many sections not yet in this document, so there may be an opportunity to collaborate.
  • Share your bibliography or suggest papers by contacting us so we can add you to the Zotero group.

Starting points for understanding online harassment


Understanding online harassment


What is online harassment? Who engages in it, and how common is it in society? The following papers make good efforts to answer these questions and summarize the state of academic knowledge.

What is online harassment?


In a study for Pew, Duggan surveyed U.S. residents on their experiences of online harassment, including being called an offensive name, being purposefully embarrassed, being threatened, being harassed over a sustained period, being sexually harassed, and being stalked. Cyberbullying has been the focus of much of the research on online harassment, an issue covered in the Berkman review "Bullying in a Networked Era," which defines bullying, identifies the people involved, and describes the norms around bullying and help-seeking among young people. The law sometimes draws categories differently than people experience them: Marwick and Miller offer a clear, accessible outline of U.S. law on obscenity, defamation, "fighting words," "true threats," unmasking, hate speech, and hate crimes, with a substantial effort to define hate speech.

Who are harassers?


Most research on harassers focuses on very specific populations or contexts, so we encourage you to consult our Zotero group (TBA) for more detail. Harassers and receivers of harassment aren't always mutually exclusive groups, argue Schrock and boyd in their literature review of research on solicitation, harassment, and cyberbullying. In a review of psychology literature, Foody, Samara, and Carlbring summarize research on the psychology of young and adult bullies, and the psychological risks of engaging in bullying behavior online.

Trolls and trolling culture


The label "troll" is used differently by different people, often to achieve particular ends. Coleman considers the history of troll culture in transgressive politics like phreaking and hacking. Phillips argues that troll culture's engagement in spectacle interacts with and relies on online media's incessant demand for scandal. Ryan Milner's work on trolling on 4chan and Reddit shows how the "logic of lulz" is used to justify racist and sexist discourse, as well as counter-speech that critiques that sexism. Do trolls have something in common? Buckels and colleagues paid people $0.50 on Amazon Mechanical Turk to take extensive personality tests about the enjoyment of trolling, though asking people to self-report trolling limits the quality of the results, and recruiting trolls via Mechanical Turk limits the sample. Finally, in Bergstrom's study of responses to a "troll" on Reddit, we see accusations of trolling used to justify violating someone's privacy and shutting down debate about important community issues.

Flagging and reporting systems


Platforms often offer systems for flagging or reporting online harassment. These readings describe this approach, its effects, and its limitations.

What is a flag for?


Crawford and Gillespie "unpack the working of the flag, consider alternatives that give greater emphasis to public deliberation, and consider the implications for online public discourse of this now commonplace yet rarely studied sociotechnical mechanism."
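
To make the mechanism concrete, the core of a flagging system can be pictured as a small data model: a report linking a reporter, a piece of content, and a reason code to a review queue. The sketch below is purely illustrative; every name in it is hypothetical rather than drawn from any platform's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class FlagReason(Enum):
    # Hypothetical reason codes; real platforms define their own taxonomies.
    HARASSMENT = "harassment"
    HATE_SPEECH = "hate_speech"
    THREAT = "threat"
    SPAM = "spam"


@dataclass
class Flag:
    reporter_id: str    # who raised the flag
    content_id: str     # what was flagged
    reason: FlagReason  # a coarse category chosen from a fixed menu
    note: str = ""      # optional free-text context from the reporter
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    resolved: bool = False  # set once a reviewer (human or bot) acts on it


def enqueue(flag: Flag, review_queue: list) -> None:
    """Append a flag to a review queue; prioritization policies vary widely."""
    review_queue.append(flag)
```

The narrowness of the FlagReason menu is exactly the limitation Crawford and Gillespie highlight: a flag compresses a complex grievance into a thin, pre-defined vocabulary, with little room for the public deliberation they argue for.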

What is the flagging process like for those involved?


The report by Matias, Johnson, et al. describes the kinds of reports submitted to Twitter through a nonprofit service, what the process of responding to reports is like, the social, legal, and technical problems that prevent flags from being handled, and the risks that responders face. These systems are not always so formal: Geiger and Ribes describe the role of distributed cooperation and DIY bots in responding to vandalism on Wikipedia.

What is the role of Terms of Service in flagging systems?


Flagging systems often rely on terms of use or other platform ("intermediary") policies, which means that flagging is as much a legal approach as a technical one. Wauters and Citron both address the legal and company-policy questions from the perspective of user safety and protection.

How do interest groups mobilize to achieve political goals through flagging and reporting?


Thakor and boyd look at the work of anti-trafficking advocates to carry out their goals through platform policies. Chris Peterson's thesis looks at how groups have gamed flagging and voting systems to promote their own ideas and bury opposing views.

Volunteer moderators


One approach to dealing with online harassment is to recruit volunteer moderators or responders to take a special role on a platform or in a community. This is the approach taken by Google Groups, Meetup.com, Reddit, Facebook Groups, and many online forums.

What actions could moderators be supported to take?


Grimmelmann's paper offers a helpful taxonomy of moderation strategies, focusing on the "verbs of moderation" and the kinds of powers you might give moderators. Grimmelmann also cites many papers and articles relevant to these possible actions. Quinn offers an alternative to Grimmelmann's systematic approach, describing the "ethos" that is created through community and moderation by a few. In ongoing work, Matias is researching the work of Reddit's moderators.
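
As a rough software illustration of what granting such powers might look like, consider the enumeration below. It is our own illustrative list of common concrete actions, not Grimmelmann's taxonomy:

```python
from enum import Enum, auto


class ModeratorAction(Enum):
    # An illustrative set of powers a platform might grant volunteer
    # moderators; not Grimmelmann's categories, just common concrete actions.
    REMOVE_CONTENT = auto()   # delete or hide a specific contribution
    LOCK_THREAD = auto()      # prevent further replies to a discussion
    MUTE_USER = auto()        # temporarily silence an account
    BAN_USER = auto()         # exclude an account from the community
    APPROVE_CONTENT = auto()  # affirmatively clear a held contribution
    PIN_POST = auto()         # organize by giving content prominence
```

Which of these powers to grant, to whom, and with what oversight is the design space the readings in this section map out.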

Is asking volunteers to moderate online conversation asking them to do free work?


Postigo's paper offers an overview of AOL's community leaders and the Department of Labor investigation into the work of moderators in the early 2000s.

Is self-governance democratic or oligarchic?


Shaw and Hill's quantitative research across 683 different wikis shows that "peer production entails oligarchic organizational forms," in line with a broader tendency for large democracies to become oligarchic. This issue is taken up in Nathaniel Tkacz's book, where he outlines the kinds of contention that occur in "open organizations"; the book is as much about the idea of Wikipedia as about the way Wikipedia actually works.

Why do people do volunteer moderation?


In behavioural economics experiments, Hergueux finds that Wikipedia's administrators are motivated more by social image than by reciprocity or altruism. Butler, Sproull, Kiesler, and Kraut offer survey results showing a diversity of formal and informal community work in online groups, and that people's participation can be related to how well they know other community members.

Automated detection and prediction of social behaviour online


Machine learning systems play many roles on social platforms, from spam filtering and vandalism detection to automated measures of trust and reliability. Building effective models is the first hard problem. It's no less hard to figure out how to respond when a machine learning system makes a judgment about a person or their speech.

Detecting high-quality contributions


The technical and ethical contours of automated detection systems are illustrated by systems that try to find high-quality conversation. In "How Useful Are Your Comments?", Siersdorfer et al. describe which kinds of comments YouTube users upvote, show how those preferences vary across topics, and illustrate the problem of "comment rating variance," where voters disagree strongly. In "The Editor's Eye," Diakopoulos analyzes New York Times comments to identify their "article relevance" and "conversational relevance." Finally, Castillo et al. analyze tweet content and sharing patterns to try to detect "information credibility" during fast-moving news events.
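
The "comment rating variance" problem can be made concrete with a small computation: code each rating as up = 1 and down = 0 and measure how strongly raters disagree. This sketch illustrates the general idea rather than reproducing Siersdorfer et al.'s actual measure:

```python
def rating_variance(upvotes: int, downvotes: int) -> float:
    """Sample variance of a comment's ratings, coding up = 1 and down = 0.

    Variance is maximal (0.25) when raters split evenly -- the
    high-disagreement case -- and approaches 0 when raters agree.
    """
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n     # fraction of positive ratings
    return p * (1 - p)  # Bernoulli variance


# A unanimously liked comment has low variance...
assert rating_variance(upvotes=95, downvotes=5) < 0.05
# ...while a contested one sits at the maximum of 0.25.
assert rating_variance(upvotes=50, downvotes=50) == 0.25
```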

Detecting deviant behavior


Is it possible to detect harassment? One of the hardest problems is formally defining the deviant behavior. Myle Ott has a pair of papers on detecting deception in hotel reviews, building Review Skeptic and then using it to estimate the prevalence of deception in online reviews, part of a wider literature on opinion spam detection. Sood employs workers on Mechanical Turk to train machine learning systems to distinguish insults from profanity. Dinakar redefines the problem as "detection of sensitive topics," differentiates bullying within those topics, and trains per-topic harassment classifiers. The paper by Tran outlines a language-agnostic approach for detecting Wikipedia vandalism, trained on the moderation behavior of anti-vandalism bots and large numbers of volunteers. Cheng et al. identify characteristics of antisocial behavior in three diverse online discussion communities that make it possible to predict, very early in a user's history, whether that user will eventually be banned.
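
To illustrate the general shape of these detection systems, rather than any particular paper's pipeline, here is a minimal supervised text classifier for distinguishing insulting comments from acceptable ones. The tiny training set and its labels are placeholders; real systems rely on thousands of crowdsourced labels, and defining the labels is itself the hard part:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data; a real corpus would be far larger and
# the insult/profanity distinction carefully operationalized.
comments = [
    "you are an idiot",
    "what a thoughtful analysis",
    "nobody wants you here",
    "thanks for sharing the source",
]
labels = [1, 0, 1, 0]  # 1 = insult directed at a person, 0 = acceptable

# Character n-grams are somewhat robust to creative spellings of insults.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(comments, labels)

print(model.predict_proba(["get lost, loser"])[0][1])  # estimated P(insult)
```

Even a well-trained version of this sketch sees no conversational context, which is part of what motivates approaches like Dinakar's per-topic classifiers.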

Detection- and punishment-based approaches may also fuel harassment. In the final two papers, we pair work by Ahmad et al. on automatically detecting gold farmers in World of Warcraft with work by Nakamura that unpacks the racism in the sentiment toward, and vigilante attacks on, gold farmers.

Speech and the law


Many of the actions associated with online harassment are illegal or, as some argue, should involve government intervention. How do we draw the lines between what communities should handle on their own, what platforms should do, and where governments should be involved? Answering this question requires considering a complex set of rights and risks in the U.S. (where many large platforms are based) and internationally, and the issue of online harassment has prompted many to revisit principles of free speech and the relationships between platforms and governments.

If you have limited time, a good starting point is Danielle Citron's recent book on this topic, Hate Crimes in Cyberspace.

Principles of free speech and hate speech


In the United States, civil harassment orders are one common response to "words or behavior deemed harassing." Caplan offers an overview of this approach and of how to balance it with speech rights. Tsesis offers an overview of the relationship between free speech rights under the U.S. Constitution and laws about defamation, threats, and support for terrorism. Writing for the Cato Institute, Kuznicki critiques and summarizes the view that "we must balance free expression against the psychic hurt that some expressions will provoke." Segall focuses on "individually-directed threatening speech" in the United States, describing the state of the law and arguing for greater clarity in U.S. legal interpretation.

The role of platforms in regulating speech

Legal efforts to address online harassment require governments to collaborate closely with platforms, often introducing systems of surveillance and censorship that are unchecked by constitutional rights in the United States, argues Balkin. Tushnet worries that some approaches to creating hate speech liability for platforms might harm speech while failing to serve the wider goal of diversity. Citron argues that platforms have far more flexibility than governments in responding to hate speech online, since platform behavior is only loosely regulated, offering powerful alternatives to the legal system. Gillespie points to how the word "platform" is deployed by companies as they try to influence information policy to "seek protection for facilitating user expression, yet also seek limited liability for what those users say."

International approaches to speech rights


In a collected volume, Hare and Weinstein offer an overview of legal issues of speech rights and hate speech in Australia, Canada, France, Germany, Hungary, Israel, the United Kingdom, and the United States. Herz and Molnar's book offers another international perspective, with an emphasis on international law, defamation of religion, and human rights.

Voting and distributed moderation


Distributed moderation and voting systems that invite users to upvote, downvote, or remove content have become common features on many social sites. In some systems, content is given greater or lesser prominence depending on the votes it receives. In others, like Wikipedia, any user can remove unacceptable material, often with the help of specialized quality-control systems. Do these systems work? Early scholarship asked whether distributed moderation could produce high-quality outcomes and whether enough participation could be attracted to make it work. More recent research has examined and questioned the effects of these systems on users and communities.

Distributed moderation is rarely suggested as a response to extreme forms of harassment, such as threats of violence, attacks, or the release of private information, where there may be a need to respond swiftly beyond just demoting the prominence of information.
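
As a concrete example of the aggregation step, consider ranking content by the lower bound of the Wilson score interval on its upvote fraction rather than by the raw fraction, so that a handful of early votes cannot confer a top rank. This is our illustration of a common technique, not a rule drawn from the papers below:

```python
import math


def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the true upvote fraction.

    With few votes the bound stays low: an item with 2 up / 0 down
    ranks below one with 90 up / 10 down, despite a perfect raw ratio.
    """
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    spread = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - spread) / denom


assert wilson_lower_bound(2, 0) < wilson_lower_bound(90, 10)
```

The choice of aggregation rule matters precisely because, as the readings below show, the supply of votes is uneven and sometimes too thin to trust.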

Can we trust distributed moderation?


Lampe's paper (part of a series) offers the classic analysis of distributed moderation via voting systems, unpacking how voters on Slashdot managed to agree on ratings and rate comments well. Gilbert's study of Reddit looks at a serious risk to these systems: what happens when there aren't enough votes? Geiger and Halfaker look at one response to the underprovision of ratings: the use of bots on Wikipedia. Finally, Chris Peterson offers important case studies on the coordinated use of voting systems to censor ideas that some groups want to make disappear.

What effect does moderation activity have on users whose contributions are rejected?


Even if voting systems are effective at identifying the best and worst contributions, what are their effects on people? In a study of Wikipedia, Halfaker shows how sometimes-overzealous deletion of newcomers' contributions has pushed people away from becoming engaged editors, contributing to a decline in participation on the site. In a study of political comments, Cheng shows how downvotes can drag a community down, as downvoted users respond by contributing more of what the community dislikes.

How can we design principled and effective distributed moderation?


One option for improving the quality of community ratings is to offer a wider range of options than just up or down, an idea that Lampe explores in "It's all news to me." But greater nuance may not address the problems created by distributed moderation. In the "Snuggle" paper, Halfaker evaluates a system that offers peer mentorship and support to people who make sub-par contributions rather than taking negative action against them and their contributions.

Bystander interventions


Research on online harassment sometimes looks at interventions by non-expert bystanders—people who observe a situation or who are close to the people involved. This bystander activity is different from moderation in that it's not carried out by people in a formal moderation role. It also differs from affordances like distributed moderation, where observers are indirectly involved. In some cases, harassment reporting systems are designed to invite participation by those bystanders. In this section, we present some of the history of academic and policy debates on the role of bystanders in responding to violence, alongside research specific to online harassment.

What is a bystander and what is bystanding?


Debates about bystanders often reach back to completely false and discredited accounts of the rape and murder of Kitty Genovese in New York City in 1964. Newspaper articles erroneously reported that 38 bystanders had watched and done nothing, misinformation that is still commonly repeated in social psychology textbooks and other popular psychology sources. Manning, Levine, and Collins argue that the Genovese story has been used to limit research on helping behavior in emergencies. Dillon offers a five-stage model of bystanding in cases of cyberbullying: (1) noticing that something is happening, (2) interpreting the event as an emergency, (3) taking responsibility for providing help, (4) deciding how to provide help, and (5) taking action to provide help. She offers early-stage experimental evidence for the effect of designs in these areas on the probability of bystander intervention. In a nationally representative U.S. sample of young people, Jones and colleagues found high levels of both positive and negative bystander intervention in cases of online harassment. Finally, Bastiaensens and colleagues present results from a cyberbullying experiment in which bystanders were more likely to intend to intervene as bullying became more severe. They also found that bystanders were more likely to intend to join the bullying if they shared friendships and social identity with other bystanders who supported the bully.

What kind of help can bystanders offer?


On Twitter, one kind of bystander response to online threats is to use social media to organize speech that interrupts and critiques rape culture, a practice documented and put in historical context by Rentschler. Researchers from Yale and Berkeley have taken on consulting work for Facebook to design systems that support bystander intervention, but they had not published any results from their research as of November 2015. In interviews, they describe this work as introducing "research-based strategies" rather than conducting research. Evaluation of these systems reportedly focused on completion rates for reporting forms rather than on outcomes for the people involved. Unfortunately, we have not been able to find much research on the effects of different kinds of bystander interventions in online harassment.

Secondary, vicarious trauma for people who help


The work of responding to harassment can introduce serious risks into the lives of people who help others, even when they don't become targets of harassment themselves. Although there is limited research on secondary trauma in content moderation, parallel findings from journalism and counseling offer sobering accounts. In a study of journalists who work with user-generated content, Feinstein, Audet, and Waknine found that journalists who review violent images daily experience higher levels of PTSD, depression, and psychological distress than those who review such content less frequently. They link these outcomes to the frequency rather than the duration of exposure, and recommend that journalists be exposed to violent images less frequently.

The effects on responders reach very deep. VanDeusen and Way find that people who provide treatment to survivors or perpetrators of sexual abuse experience disruptions in their capacities for intimacy and trust, and that this disruption was greatest for people who were newer to the work. Furthermore, people with a personal history of maltreatment experienced greater disruptions than others. On the other hand, a review article by Elwood, Mott, Lohr, and Galovski questions whether secondary trauma effects reach clinical levels of concern, arguing for further, better-coordinated research on the issue. They offer a clear overview of the research and make important distinctions between burnout and secondary trauma. Finally, Bober and Regehr find that clinicians often fail to engage in coping strategies, and that clinicians who do use coping strategies don't show fewer negative effects than those who fail to engage in self-care. Like the journalism study, they argue for distributing the workload rather than encouraging self-care.

Racism and sexism online


If you are interested in the experiences of marginalized people online, a good first step is to listen directly to those groups, who are often vocal about their experiences. Academic conversations do offer ways to think about those experiences and voices; we have assembled some resources here. This section is still in progress and needs substantial improvement. Please contact the authors if you would like to contribute.

How to think about sexism and racism online


In her compelling, helpful review of research on race and racism online, Daniels (author of Cyber Racism) outlines a way to think about these issues, including the infrastructure and history of the Internet, debates about digital divides, online platforms, information wars, social movements, law, hate speech, surveillance, and internet cultures. She concludes by arguing that it is important to understand them in terms of the "deep roots of racial inequality in existing social structures." When we try to expand the participation of marginalised groups through appeals to civility, we often make the same assumptions that excluded those groups in the first place, a history that Fraser traces. Within responses to violent and objectionable speech, the problems of gender, race, and class intersect, making some people even more vulnerable, as Crenshaw shows across several case studies. Gray describes how those intersecting identities shape discrimination and harassment of women of color in online gaming.

Discrimination online


Given that racism and sexism are widespread social problems, it should not be surprising that we see evidence of them online. Yet certain design features make sexism and racism more likely. Doleac and Stein conducted a field experiment showing that buyers paid less, and trusted sellers less, when the hand holding an iPod in a classified ad was black rather than white. Jason Radford showed the effect on gender discrimination of introducing marital status as a field on the charitable giving site DonorsChoose. Data mining and algorithmic systems can learn discrimination from their users, an issue that Solon Barocas reviews. Matias and colleagues reflect on the ethical and political challenges of creating systems that try to correct problems of discrimination online.

Online misinformation


The prevalence of misinformation (broadly construed to include rumors, hoaxes, urban legends, gossip, myths, and conspiracies) has garnered attention from researchers at the intersection of psychology, communication, and political science since World War II. At least three interrelated findings have been reliably replicated across settings and methods: (1) the complexity of people's heuristics for evaluating the credibility of information, (2) the persistence of belief in misinformation in spite of factual debiasing attempts, and (3) the social, rather than epistemic, functions of misinformation.

Information credibility


Metzger (2007) provides an excellent summary of "the skills that Internet users need to assess the credibility of online information". The paper reviews checklist approaches, which provide a good list of content and behavioral features that could be used to develop automated methods, as well as a "dual processing" cognitive model of web credibility assessment that moves from exposure through evaluation to judgment.

Several research papers have examined credibility assessments specifically on Twitter. Schmierbach & Oeldorf-Hirsch (2012) use experiments to show that information on Twitter is judged as less credible and less important than similar stories appearing in newspapers. Morris et al. (2012) use surveys to evaluate users' perceptions of tweet credibility and find a disparity between the features users attend to and those surfaced by search engines. The authors then perform two experiments, finding that users are poor judges of truthfulness based on content alone and instead rely on heuristics such as the username of the tweet's author. Castillo et al. (2012) use a supervised machine learning approach to automatically classify credible news events and find differences in how Twitter messages propagate based on their newsworthiness and credibility.
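
The feature-based approach that Castillo et al. describe can be sketched roughly as follows. The feature set here is a simplified stand-in for illustration, not their published feature list, and the Tweet fields are hypothetical:

```python
import math
from dataclasses import dataclass


@dataclass
class Tweet:
    text: str
    author_followers: int
    author_account_age_days: int
    retweet_count: int


def credibility_features(t: Tweet) -> dict:
    """Illustrative features for a credibility classifier.

    Castillo et al. combine message, user, topic, and propagation
    features; these few stand-ins only gesture at those categories.
    """
    return {
        "length": len(t.text),                                # message feature
        "has_exclamation": "!" in t.text,                     # message feature
        "has_url": "http" in t.text,                          # message feature
        "log_followers": math.log10(t.author_followers + 1),  # user feature
        "account_age_days": t.author_account_age_days,        # user feature
        "retweets": t.retweet_count,                          # propagation feature
    }
```

Feature dictionaries like these can be vectorized (for example, with scikit-learn's DictVectorizer) and fed to any standard supervised classifier.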

Earlier work by Fisher (1998) and Bordia & Rosnow (1998) provides a strong basis in the psychological theory of misinformation, in the quaint context of online systems before the turn of the century. Fisher discusses how grassroots social movements use language, technology, institutional access, and related resources. Bordia & Rosnow (1998) examine an early rumor chain and connect its propagation to existing theories of "rumormongering as a collective, problem-solving interaction that is sustained by a combination of anxiety, uncertainty, and credulity" that hold across face-to-face and computer-mediated settings.

Debiasing


Debiasing refers to efforts to correct or retract misinformation. Lewandowsky et al. (2012) provide the definitive review of misinformation's origins, strategies for debiasing, and the persistence of beliefs, as well as backfire/boomerang effects in which strength of belief increases. The paper's biggest contribution is an exhaustive review of strategies for reducing the impact of misinformation (pre-exposure warnings, repeated retractions, providing alternative narratives) and alternative strategies for correction (emphasizing facts, keeping corrections brief, affirming worldview, affirming identity).

Berinsky (2012) explores ideological and demographic correlates of American voters' beliefs in various (American) political conspiracies and rumors, and performs experiments showing that many strategies for correcting mistruths lead to confusion. Garrett (2011) uses a survey method and finds that Internet use promotes exposure to both rumors and rebuttals, and that rumors emailed to friends and family are more likely to be believed and shared. Garrett and Weeks (2013) find evidence that real-time correction may cause users to be resistant to factual information. Nyhan and Reifler (2010) conduct four experiments that replicate findings about corrections' failure to reduce misperceptions, as well as "backfire effects" in which corrections increase belief in misinformation.

Social functions


The prevalence and persistence of gossip, rumor, conspiracy, and misinformation in spite of debiasing efforts can be attributed to the important social functions this information fulfills. Rosnow is one of the most influential post-war empirical researchers of rumor; his 1988 article reviews rumor as a "process of explanation" whose generation and propagation are shaped by personal anxiety, general uncertainty, credulity, and topical importance. Donovan provides a comprehensive review of the concept of rumor throughout the 20th century as a psychological, organizational, and literary/folkloric phenomenon, and provides some useful definitions to differentiate rumors, hoaxes, urban legends, gossip, and myths. Donovan's most persuasive argument concerns the role of rumors as dialogic acts in which "both believers and skeptics build rumor" (p. 69). Foster provides a review of the social-psychological bases of gossip as having substantive social, evolutionary, and personal functions rather than being an epistemic defect.

DiFonzo et al. run experiments on medium-sized networks to evaluate how structural properties like clustering drive the emergence of consensus, continuation, and confidence about rumors and hearsay, and how rumors spread through social networks as social influence processes. Earlier work by Bordia and DiFonzo (2005) examined how rumors were transmitted in different online discussion groups, finding differences between "dread" and "wish" rumor types, 14 content categories, and different "communicative postures" such as explanation, information reporting/seeking, and directing/motivating.

Upcoming sections we plan to add


We are thinking about adding the following sections from our bibliography. Contact us if you think you can help!

  • Debates about anonymity
  • The experience of online harassment
  • Defining harassment as "online"
  • Sexism, racism, and hate speech online
  • Civility debates
  • Information cascades
  • International dimensions of online harassment
  • Platform policies
  • Responses to online harassment
  • DIY responses
  • Peer governance
  • Mediation and dispute resolution
  • Open questions for research and action
  • Contention among publics, counter-publics, and anti-publics

Acknowledgments


Many researchers generously contributed their personal lists of literature to this effort. We are grateful to everyone who helped us bring this together by contributing literature and resources:

  • Whitney Erin Boesel
  • Willow Brugh
  • Danielle Citron
  • Katherine Cross
  • Maral Dadvar
  • David Eichert
  • Sarah Jeong
  • Ethan Katsh
  • Lisa Nakamura
  • Joseph Reagle
  • Carrie Rentschler
  • Rachel Simons
  • Bruce Schneier
  • Cindy Southworth