The fact that the internet seems to have a large number of people who indulge in abusive behavior towards others is hardly news. Why people behave this way is not clear. My own theory is similar to what I think about sports. My high school in Sri Lanka had a strong sports emphasis, since it was modeled on British public schools that took seriously the motto mens sana in corpore sano (“a healthy mind in a healthy body”). One constantly heard the claim that playing sports builds character, but I always doubted that. It seemed to me that what sports did was reveal character, since on the athletic field one’s behavior was now visible to a large number of people.
I had similar disagreements later with colleagues in education who bemoaned the fact that young people today seemed more inclined than those in generations past to do stupid and even dangerous things. It seemed to me that young people have always done stupid and dangerous things, except that BI (Before the Internet) the only people who knew about them were those actually present at the time, while AI (After the Internet) it became possible to display one’s idiocy to the world, so that all of us are now aware of things that would once have gone unnoticed.
Is abusive behavior of the same kind, something inherent in the person that would normally be exhibited only in smaller circles, with the internet merely allowing a wider range of targets? Or does the internet itself encourage people to engage in behavior they would not normally exhibit, its anonymity enabling some to let loose the Hyde-like side of their personalities?
J. Nathan Matias has looked at this question and suggests that even if one had a system whereby people had to use their real names, this kind of bad behavior would not only not go away, it might even get worse. He says that the idea that online anonymity might be the source of the problem is based on misinterpretations of early research that have taken hold in the popular mind, suggesting that:
(a) social problems could be attributed to the design of computer systems
(b) anonymity is to blame.
These ideas aren’t reflected in the research. In 2016, a systematic review of 16 lab studies by Guanxiong Huang and Kang Li of Michigan State University found that on average, people are actually more sensitive to group norms when they are less identifiable to others.
While some non-causal studies have found associations between anonymity and disinhibited behavior, this correlation probably results from the technology choices of people who are already intending conflict or harm.
He lists nine points suggested by the research, with the following headings; his essay elaborates on each:
- Roughly half of US adult victims of online harassment already know who their attacker is.
- Conflict, harassment, and discrimination are social and cultural problems, not just online community problems.
- Revealing personal information exposes people to greater levels of harassment and discrimination.
- Companies that store personal information for business purposes also expose people to potentially serious risks.
- It’s not just for trolls: identity protections are often the first line of defense for people who face serious risks online.
- Requirements of so-called “real names” misunderstand how people manage identity across multiple social contexts, exposing vulnerable people to risks.
- Clear social norms can reduce problems even when people’s names and other identifying information aren’t visible.
- People sometimes reveal their identities during conflicts in order to increase their influence and gain approval from others on their side.
- Many hate groups operate openly in the attempt to seek legitimacy.
He also lists things that designers and communities can do.
Sociologist Harry T. Dyer has posted a response to Matias that says in part (where I have eliminated the citations):
Rowe for example has looked at the comment section of the Washington Post which allows users to post anonymously, and compared the comments to those left on the Washington Post’s Facebook site where users had to use personal Facebook accounts to leave a comment. Rowe found that the Washington Post website had far more incivility and impoliteness as well as a greater likelihood for purposefully directed hurtful comments than the Facebook page. Similar findings have been found by other researchers. It appears that interactive affordances, such as comment sections, are not used in uniform manners. Context, it seems, matters. The affordance of anonymity can affect how these spaces are utilised.
Nonetheless, it has been noted recently that a sense of community and engagement is apparent and notable in comment sections and platforms that allow anonymity. This suggests that being social online does not necessarily require a non-anonymous platform, and that social interaction can thrive in anonymous platforms.
Dyer says that an excessive focus on the Facebook model can mislead us about the nature of online communities, and that we need to look at a broader spectrum.
As such, our approach towards social media and anonymity needs to be equally broad. Whilst we should not forget the many cases of abuse in anonymous platforms (Yik Yak’s problematic past alone should serve as a lesson in abuse and anonymity), we should not treat anonymity as a pantomime villain, and acknowledge its usefulness and potential as well as its problems and difficulties. As my data suggests, anonymity can be freeing to certain users and communities, and allow them to openly discuss and deal with complex topics. However, anonymity can also be a smoke-screen that allows the continuation, manifestation, and exacerbation of the worst aspects of offline abuses that all-too-often are aimed at ‘othered’ groups such as women, LGBTQ+ users, Muslims, and non-white users. This should not be forgotten or side-lined, and we should continue to push for social change, online and offline, to change these narratives.
…As the J. Nathan Matias article notes “design cannot solve harassment and other social problems on its own”. I would agree with this BUT I would also suggest that we as users should hold designers’ feet to the fire to make sure that they deal with abuse in all forms. They cannot be allowed to ignore the fact that design affects how a user will act and interact, and cannot be allowed to ignore their responsibility to continue to assess the sorts of interactions their platform design creates, a point which the J. Nathan Matias article also makes in its conclusion. As Twitter have found out throughout 2016, there needs to be a robust response to abuse. This response not only includes the designers continually assessing their platforms, but also includes users holding designers accountable for the problems with their platforms that may exacerbate existing social divides, as well as a continual push for challenging offline social Discourses that propagate this abuse.
So the upshot seems to be that while the designers and providers of social media platforms have a responsibility to build their systems in ways that can reduce online abuse without necessarily eliminating anonymity, we cannot ignore the fact that abusive online behavior is also driven by deeper social and cultural dynamics that no amount of design can eliminate.
Caine says
No, we can’t. People in general don’t behave well, and that’s reflected in so many ways. In offline life, people tend to create hierarchies, and people strive to be in the top tiers, also known as wanting to be one of the cool kids. People create those same hierarchies online too, and use the same tactics to push someone up or drive someone else down. One of the really big benefits of the ‘net is that those people who are almost always pushed down can find safe havens, and anonymity has a lot to do with that.
Using FB as a model of “real nameness” is absurd, because a great many people on FB don’t use their real name. Back when I had an account, I didn’t use my real name.
screechymonkey says
Relying on “real names” to regulate behavior usually means encouraging or at least counting on social policing tactics of dubious morality.
If it’s an online forum that is of interest only to a small, localized community (a local newspaper, a small college, etc.), then sure, your peers will automatically know what you posted because they’re already reading that forum, and that may cause you to regulate your behavior accordingly. But otherwise, what difference does a real name policy make? The implied threat is that someone who is dissatisfied with your postings will complain to your family/friends/employer/school/commanding officer — whoever is deemed to have authority or influence over you.
That gets into some dicey areas ethically. I can think of scenarios where I believe it’s appropriate to alert third parties regarding someone’s online comments in order to warn them of some potential harm; I can also think of plenty more where people have attempted to do so simply to punish opinions they don’t like.
That method of policing also has differential impacts. Some people are, so to speak, privileged when it comes to speaking their minds online under their real names. Perhaps they’re independently wealthy, or they own their own business, and have no employer to answer to. Or they’re a tenured professor. Or they work in an industry where even the most obnoxious online behavior would be treated with a shrug if not a high-five. Or they’re teenagers or college students and have little to lose. Even “real names” is a relative thing. Forcing someone to use their real name online is a bigger deal if their real name is Albus Quentin Wanklemore III than if it’s John Smith. It’s no accident that so many of the early bloggers were either tenured professors or young political activists, and not a lot of middle management types living at the mercy of corporate America. In fact, I remember the debates about whether it was foolhardy for an untenured professor to publish a blog under his or her own name.