In any community there’s bound to be friction, but some… take it further than others. Reddit is a platform for thousands of online communities (known as “subreddits”), where community members can submit content, and upvote, downvote, or comment on content that others have submitted. Topics of discussion on Reddit run the gamut of human interest, but one of Reddit’s favorite topics to talk about is, unsurprisingly, Reddit itself.
A recent post on AskReddit posing the question – “What popular subreddit has a really toxic community?” – surged to the top of the front page with 4,000 upvotes and over 10,000 comments as Redditors voiced their opinions on which Reddit communities they found to be the most abhorrent (the “/r/” prefix denotes a subreddit):
- /r/parenting: “there are a TON of sanctimommies in there”
- /r/MakeUpAddiction: “[has] cult-like tendencies” encouraging a “race to be the palest and most translucent snowflake”
- /r/leagueoflegends: “[we] made professional players quit the game, that’s how lovely we are”
As I sifted through the thread, my data geek sensibilities tingled as I wondered “Why must we rely upon opinion for such a question? Shouldn’t there be an objective way to measure toxicity?”
With this in mind, I set out to scientifically measure toxicity and supportiveness in Reddit comments and communities. I then compared Reddit’s own evaluation of its subreddits to see where they were right, where they were wrong, and what they may have missed. While this post is specific to Reddit, our methodology here could be applied to offer an objective score of community health for any data set featuring user comments.
Defining Toxicity and Supportiveness
So what is Toxicity? Before we could do any analysis around which subreddits were the most Toxic, we needed to define what we would be measuring. At a high level, Toxic comments are ones that would make someone who disagrees with the viewpoint of the commenter feel uncomfortable and less likely to want to participate in that Reddit community. To be more specific, we defined a comment as Toxic if it met either of the following criteria:
- Ad hominem attack: a comment that directly attacks another Redditor (e.g. “your mother was a hamster and your father smelt of elderberries”) or otherwise shows contempt/disagrees in a completely non-constructive manner (e.g. “GASP are they trying CENSOR your FREE SPEECH??? I weep for you /s”)
- Overt bigotry: the use of bigoted (racist/sexist/homophobic etc.) language, whether targeting any particular individual or more generally, which would make members of the referenced group feel highly uncomfortable
However, the problem with measuring only Toxic comments is that it biases against subreddits that are simply more polarizing and tend to evoke more emotional responses. To account for this, we also measured Supportiveness in comments – defined as language that directly addresses another Redditor in a supportive (e.g. “We’re rooting for you!”) or appreciative (e.g. “Thanks for the awesome post!”) manner.
By measuring both Toxicity and Supportiveness we are able to get a holistic view of community health that can be used to more fairly compare and contrast subreddit communities.
Comments were pulled via the Reddit API from the top 250 subreddits by number of subscribers, in addition to any subreddit mentioned in the AskReddit thread with over 150 upvotes. For each subreddit, comments were pulled from articles on its front page; 1,000 comments were then randomly chosen for analysis, and any subreddit with fewer than 1,000 comments was excluded from the analysis.
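The sampling and exclusion step above can be sketched as follows. The fetch itself would go through a Reddit API client (for example, the praw library), so this sketch covers only the sampling logic, with hypothetical comment lists standing in for real API results:

```python
import random

def sample_subreddit_comments(comments, n=1000, seed=0):
    """Randomly sample n comments from one subreddit's front-page
    comments; return None if the subreddit has too few comments
    and should be excluded from the analysis."""
    if len(comments) < n:
        return None
    rng = random.Random(seed)  # seeded for reproducibility
    return rng.sample(comments, n)

# Toy check: a subreddit with 1,200 comments is sampled down to
# 1,000, while one with only 400 comments is excluded.
many = [f"comment_{i}" for i in range(1200)]
few = [f"comment_{i}" for i in range(400)]
print(len(sample_subreddit_comments(many)))  # 1000
print(sample_subreddit_comments(few))        # None
```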
The Fun Stuff: Machine Learning
Idibon specializes in combining machine learning with human annotation of text, and for this task I was able to take advantage of our technology to improve both the efficiency and accuracy of our experiment. A task as nuanced as labeling comments as Toxic/non-Toxic under our definition requires human annotation, but annotating all 250 subreddits at 1,000 comments each, with 3 annotations per comment (to get multiple opinions for consensus) at about 11 seconds per annotation (the average time for our contributors), would have required roughly 2,300 person-hours.
Instead, we were able to use Idibon’s Sentiment Analysis model to narrow down the comments human annotators would need to see to only those most likely to carry negative or positive sentiment (a good high-level proxy for Toxicity/Supportiveness), and only within subreddits whose comments skewed strongly negative or positive overall. Using this tool, we narrowed our dataset to 100 subreddits and 100 comments per subreddit, cutting the number of comments requiring annotation from 250,000 to 10,000, a decrease of 96%.
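As an illustration of the filtering idea (not Idibon’s actual model, which is proprietary), here is a sketch in which a toy lexicon scorer stands in for the real sentiment model; the `threshold` and `cap` values are invented for the example:

```python
def toy_sentiment(text):
    """Stand-in for a real sentiment model: average of hits against
    a tiny hypothetical lexicon, in [-1, 1]."""
    lexicon = {"thanks": 1.0, "awesome": 0.9, "idiot": -1.0, "hate": -0.8}
    hits = [lexicon[w] for w in text.lower().split() if w in lexicon]
    return sum(hits) / len(hits) if hits else 0.0

def select_for_annotation(comments, score_fn, threshold=0.5, cap=100):
    """Keep only comments with strongly positive or negative model
    sentiment (a rough proxy for Supportive/Toxic), most extreme
    first, capped per subreddit."""
    extreme = [c for c in comments if abs(score_fn(c)) >= threshold]
    extreme.sort(key=lambda c: abs(score_fn(c)), reverse=True)
    return extreme[:cap]

comments = ["thanks for the awesome post", "meh", "what an idiot"]
print(select_for_annotation(comments, toy_sentiment))
# ['what an idiot', 'thanks for the awesome post']
```

Neutral comments like “meh” are dropped before annotation, which is where the 96% reduction comes from.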
OH THE HUMANITY! – Human Text Annotation
At Idibon, we have three primary ways of engaging a third party to annotate text: the crowd, a global network of analysts, and experts who are analysts for our clients. In this case, we took our 10,000 comments to the crowd with CrowdFlower, an online human annotation service, where nearly 500 annotators from around the globe labeled our Reddit comments based on our criteria, until each comment had been labeled 3 times.
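With 3 labels per comment, consensus can be taken by simple majority vote. A minimal sketch follows; the label names and the tie handling are assumptions for illustration, not CrowdFlower’s actual aggregation:

```python
from collections import Counter

def consensus(labels):
    """Majority vote over one comment's crowd labels (3 per comment
    here); returns None when all annotators disagree."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 2 else None

print(consensus(["Toxic", "Toxic", "Neither"]))       # Toxic
print(consensus(["Toxic", "Supportive", "Neither"]))  # None
```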
In determining what makes a subreddit community Toxic or Supportive, simply counting the number of Toxic and Supportive comments wouldn’t be sufficient. One of the unique aspects of Reddit is that members of the community have the ability to upvote and downvote comments, which gives us a window into not only what individual commenters are saying, but whether or not and to what extent the community as a whole supports those comments. With this in mind, overall Toxicity/Supportiveness of a subreddit was determined as a function of the scores¹ of all the Toxic and Supportive comments in a subreddit².
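The exact formula is footnoted below; purely as an illustration, here is one plausible version of score-weighted labeling, assuming the log-scaling mentioned in footnote 1 (via `log1p`, floored at zero) and a share-of-total normalization. Both choices are guesses, not the post’s actual formula:

```python
import math

def log_score(c):
    # Footnote 1: comment scores were logged because they follow a
    # roughly exponential distribution. Floor at 0 so downvoted
    # comments don't produce negative logs; log1p is our choice here.
    return math.log1p(max(c["score"], 0))

def label_share(comments, label):
    """Share (in %) of a subreddit's total log-scaled comment score
    carried by comments with the given label. A guess at the
    footnoted formula, not the author's exact one."""
    total = sum(log_score(c) for c in comments)
    part = sum(log_score(c) for c in comments if c["label"] == label)
    return 100 * part / total if total else 0.0

sub = [
    {"label": "Toxic", "score": 40},       # heavily upvoted attack
    {"label": "Supportive", "score": 12},
    {"label": "Neither", "score": 3},
]
print(round(label_share(sub, "Toxic"), 1))
```

A highly upvoted Toxic comment moves the score much more than an ignored one, which is the whole point of weighting by score.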
Here are the results for subreddits plotted by Toxicity and Supportiveness:
In the interactive chart above, the red bubbles represent subreddits mentioned in the “What popular subreddit has a really toxic community?” thread in a comment with a score (upvotes minus downvotes) greater than 150, while those in gray were picked from the top 250 subreddits by subscribers. Subreddits toward the top right of the chart were found to be more Toxic and less Supportive, while those in the bottom left are the least Toxic and most Supportive. Bubbles are sized by the subreddit’s number of subscribers.
So how good was Reddit at picking out its most Toxic communities? Well, it seems they got most of the big ones, with a few exceptions. The winner by far, with 44% Toxicity and 1.7% Supportiveness, was /r/ShitRedditSays, whose mention in the thread received 4,234 upvotes. /r/ShitRedditSays is, somewhat ironically, a subreddit dedicated to finding and discussing bigoted posts around Reddit – the term “Redditor” is often used there as an insult, and its Toxicity was generally directed at the Reddit community at large. However, it’s also important to note that a significant portion of its Toxicity score came from conversations between SRS members and other Redditors who come specifically to disagree and pick fights with the community, a trap many members fall into and one that led to some rather nasty and highly unproductive conversations.
While many of the most Toxic subreddits were mentioned in the thread, there were also a number of highly Toxic subreddits that Reddit seemed to miss, such as /r/SubredditDrama, /r/TumblrinAction (a subreddit dedicated to mocking Tumblr – where marginalized groups, particularly LGBTQ, post about their experiences), /r/4chan, and /r/news.
On the other end of the spectrum, it seems that some of the subreddits that were picked out as being Toxic were found to be some of the most Supportive communities by our study. In particular, /r/GetMotivated, with 50% Supportiveness and 6% Toxicity, seemed far from the Toxic community described by /u/LookHardBody as comprised of “two type[s] of people […] The people that post content to motivate others or because it motivated them and commenters who comment why it’s bullshit, stupid and unmotivational because it wasn’t specifically tailored to them.”
However, upon inspection of the data, these types of negative posts certainly were present in /r/GetMotivated as claimed, but they were not supported by the community at large. In fact, the average score for Supportive posts in /r/GetMotivated was 41, while Toxic posts had an average score of only 1.4. Overall, /r/GetMotivated fits in nicely next to /r/loseit and /r/DIY as a subreddit built specifically for members to seek and give advice and support, an unsurprisingly supportive bunch.
Another example of why it’s important to look at comment scores comes when we look at bigotry across subreddits:
Looking specifically at bigoted comments, the importance of taking score into account rather than number of comments becomes even more apparent. For a small number of communities (/r/Libertarian, /r/Jokes, /r/community, and /r/aww) the total aggregated score of comments that our annotators labeled as bigoted was actually negative – so despite having bigoted comments present in their communities, those bigoted comments were rejected by the community as a whole. On the other end of the spectrum we see /r/TheRedPill, a subreddit dedicated to proud male chauvinism³, where bigoted comments received overwhelming approval from the community at large.
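The sign test described above is trivial but worth making explicit; a sketch with invented scores:

```python
def community_verdict(scores):
    """Net score (upvotes minus downvotes) summed across a
    subreddit's bigoted comments: a negative total means the
    community, on balance, downvoted them even though they were
    posted; a positive total means the community endorsed them."""
    net = sum(scores)
    return "rejected" if net < 0 else "endorsed"

print(community_verdict([-12, -5, 3]))  # rejected
print(community_verdict([40, 22, -1])) # endorsed
```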
In researching this post, I have delved deep into the darkest recesses of the interwebs, I have read comments that cannot be unread, seen things that cannot be unseen… but for good cause!
Sentiment analysis is only the tip of the iceberg in understanding how people relate to one another, how communities form and what characteristics make up a community abstracted from its individual members. In the case of subreddits, hopefully this post will give you some idea of what communities you’d want to be a part of and which you might want to avoid.
On a broader scale, these methods help answer larger questions like, “How can we build communities that we’re proud of and that encourage effective communication?”, and “How should we structure our discourse so that people really hear one another?” Answering these questions will allow us to strengthen our connections with those around us and improve our daily experiences in an increasingly digital world.
– Ben Bell (@BenSethBell)
PS Like this article? Check out our AMA on it!
1. Logged because scores followed an exponential distribution
2. Specifically, for each subreddit:
3. Not a place I’d recommend spending your time; if you’d like better reading, I’d recommend my colleague Jana’s recent post on Idibon’s efforts to get more women in tech