Chrum, 25, became fluent in such viciousness thanks to her job moderating online commentary and trolling. As a content specialist for Debate.org, a site that invites users to discuss controversial topics, Chrum had to wade into the dark, sometimes poisonous muck of Internet comments. Each day, she sifted through 50 to 200 questionable posts, trying to decide what to publish and what to let rot.
Sitting at her desk in Swansea, Illinois, she alternated between relishing and loathing her role. Many of the site’s discussions involved vibrant, positive conversations about important political and social issues, and Chrum thrived on encouraging debate. Yet, as a moderator, she watched in real time as some users built their arguments around slurs, launching ad hominem attacks based on gender, race and sexuality.
Most of us don’t see this version of the Internet. Unless you’re the target of an attack by so-called trolls, avoiding the dregs of social media is rather simple. You click away from a thread that turns nasty, unfollow a friend who says something reprehensible, or avoid sites infamous for their Lord of the Flies approach to social interactions. People like Chrum are the stopgap between innocent users and thrill-seekers who want to test the boundaries of common decency. But trying to protect the rest of us online can be a personal sacrifice — one that drains the mind and spirit.
Just a few weeks later, Anita Sarkeesian, the creator of a web series that analyzes the representation of women in video games, was again threatened with rape and murder after her latest episode aired. This time, though, an online attacker apparently discovered the location of her home, prompting Sarkeesian to involve the authorities.
Elizabeth Englander, a professor of psychology at Bridgewater State University who focuses on cyberbullying in her research, said that’s because we haven’t yet specifically studied what it’s like to moderate abusive behavior and speech online.
On the Internet, that can include identifying child pornography or reviewing rape or death threats. Over time, Englander added, most professionals find a way to compartmentalize, separating themselves from the trauma they see, and perhaps even becoming desensitized to it.
Though their stories are rarely heard, there is a class of tech workers whose lives are directly affected by these unanswered questions. Facebook, for example, receives about one million reports of abusive content every week. Staff members in Menlo Park, California; London; and Hyderabad, India, investigate these claims 24 hours a day. On Instagram, users upload an average of 60 million photos per day, and the company employs moderators to ensure that the service isn’t used to “defame, stalk, bully, abuse, harass [or] threaten” other users. Twitter has a “trust and safety” team that reviews the aggressive behavior of users.