The internet today is a strange place, and some of its strangest corners are the comment sections—public forums where people are able and encouraged to voice their opinions. While many choose to do so under their own names, others remain anonymous, and their cloak of anonymity allows them to post in a more raw, unfiltered way. The consequence? An online community that is at once democratic and inclusive, but also hostile, counter-productive, and often downright weird.
Few know internet comment sections better than Joseph Reagle, Professor of Communication Studies at Northeastern University and faculty associate at the Berkman Center for Internet and Society at Harvard University. He's the author of Good Faith Collaboration: The Culture of Wikipedia, and the forthcoming Reading the Comments: Likers, Haters, and Manipulators at the Bottom of the Net, due out next year.
Excerpts from our talk with Prof. Reagle appeared in Reviewed.com's weekly column in USA Today last week, but his comments were insightful enough that we feel compelled to share the entire interview.
I previously wrote a book about Wikipedia called Good Faith Collaboration. One of its premises involves an old internet adage called Godwin’s Law, which says that the longer a conversation goes on, the closer the probability of someone calling someone else a Nazi or Hitler approaches one. And this goes back to Usenet, even before the web, so it’s not a wholly new phenomenon. It’s kind of inevitable that people are going to say nasty things about one another online. Good Faith Collaboration was about how something like Wikipedia could be produced despite this tendency.
I don’t think it’s a fundamental right. I think if people want to post comments they can set up their own blog. Indeed, one of the ironies here is how Dave Winer—who has some claim to being both one of the first bloggers and one of the first bloggers to have a comments section on his website—eventually turned comments off as well. And he turned them off because he basically believed, “you’re supposed to be here to comment on a particular article in a substantive way. If you want to say something stupid or if you want to rant you can go somewhere else.”
On one hand I do think people have a right to say we don’t want nastiness here. Similarly, people have a right to create a page elsewhere and say whatever they’d like. The question really is, how do you build a community? Because it is very useful to have substantive comments and civil disagreements and varied opinions expressed.
For instance, maybe people sign up with their real name. There’s some evidence that seems to work well, however there’s also plenty of evidence that people do horrible things even under their real identities, perhaps unknowingly. People say horrible things on Twitter all the time under their own names. There was a case in which the feminist website Jezebel—after President Obama was elected—went and collected a series of Tweets from teenagers saying horrible, racist things under their own names. And there’s the Steubenville rape case—a lot of those guys were Tweeting horrible things under their real names. Even if they didn’t have anonymity, maybe they thought that they did.
Absolutely. As to social research, there are lots and lots of theories, over decades, as to why it is that anonymity might prompt people to act horribly—if it is in fact the anonymity and not something else.
You can go back to Plato and the story of Gyges: Imagine a man who has a ring that can make him invisible. In this parable the man uses the ring to seduce the queen, kill the king, and take over the kingdom. So there’s long been this suspicion that if we’re anonymous, we lose accountability and are more likely to act horribly.
But there are two things going on from a social sciences point of view: One, does anonymity make good or otherwise decent people behave worse? That’s the thing most people focus upon. Or two: Are bad or evil people—however you define that—more attracted to commenting and trolling and griefing than normal people? When you look at a thread and you see all this horrible stuff, is this normal people behaving badly, or is this bad people behaving promiscuously? And there’s evidence for both of those, too.
A month ago there was a paper published, "Trolls Just Want to Have Fun," in which the authors tried to correlate personality indicators—things like sadism—with a propensity to comment and troll. They found a correlation.
Conversely, people sometimes behave well in anonymous circumstances, which gets back to Wikipedia: How is it that people—with their real names, but most often with pseudonyms or even anonymously—can contribute something like Wikipedia?
Yes. For instance, MetaFilter has been around for a long time, and it has a comment culture that has been fairly successful for many years. It requires a one-time fee for people to sign up, and that seems to work really well there.
Other things have been tried and abandoned: Boing Boing has experimented with turning off comments at various times and coming back with different systems. They had something called a disemvoweler: A moderator could remove all the vowels of a comment they didn’t like, and the theory was that abusive commenters would be made to feel ridiculous and leave or not want to participate. It didn’t last long, but there are many, many techniques that people have tried.
Yes, I think Slashdot really popularized that—they called it Karma—but no system is really perfect. Even with that people found ways to abuse it, not necessarily to say horrible things but to sort of control what’s happening on that site. On Digg there were these things called Bury Brigades, where people would form cabals—which again is a very ancient notion going back to Usenet—of people who would upvote one another and bury other people who they didn’t want to see succeed on Digg. On Slashdot they had this notion called Karma Whoring.
Reputation systems are often a favorite of academics because they think, “we’ve just designed the perfect system.” But almost any system you deploy becomes the subject of some type of abuse.
I haven’t seen any recent research about attitudes to anonymity and privacy, but my suspicion is that all this news is making people feel that their attempts to protect their privacy are useless. I think people feel like it’s just impossible.
Hero Image: Flickr user "dirt3_monster" (CC BY-NC-ND 2.0)