Sunday 6 April 2014

Tackling bad behaviour online: What's in a name?

By David Glance, Published in The Straits Times, 5 Apr 2014

EVERY day, millions of Internet users leave comments on websites and social networks covering any topic imaginable. At its very best, commenting fosters a social community of people sharing an interest. The community can work to create new knowledge, expand and explain, bring different perspectives or just be supportive and encouraging.

At its worst, however, commenting can sink to depths of excoriating and vile invective. It can simply provide yet another opportunity for different groups to hurl abuse at one another and cement even further their respective entrenched positions.

The average commenting community usually falls somewhere in between, and for some sites this has been sufficient incentive to get rid of comments altogether. The online magazine Popular Science decided to ban comments, citing research showing that rude comments raised the likelihood of readers questioning the content of the article.

In this case, readers exposed to uncivil comments evaluated nanotechnology as riskier than did readers exposed only to civil comments. The editors of Popular Science felt this unreasonably and arbitrarily distorted the science being presented in their articles, and felt justified in taking the step of banning comments altogether.

Incivility may have effects on our attitudes to the content that is being commented upon, but in the case of YouTube, comments had simply become generally unpleasant. In an attempt to deal with this, Google has progressively enforced a "real name" policy, including having to use a Google+ account, in order to comment.

The idea that anonymity increases bad behaviour on forums is one that is supported by research. Through a process of online disinhibition, behaviour tends to be more uncivil and groups become more polarised when individuals are anonymous.

Contrasting this is the observation that uncivil comments were more prevalent on the online version of the Washington Post, where commenters can be anonymous, than on its Facebook version. Interestingly, however, where rudeness did occur, on the Washington Post site the incivility was directed at other participants, whereas on Facebook it was directed at political figures and others not directly involved in the commentary.

This observation has been taken up by the Huffington Post, which, despite employing comment moderators, has observed an increase in trolling, griefing and generally bad behaviour over recent years. It has introduced a requirement for commenters to use their real names.

How exactly it will do this is unclear. Relying on Facebook or Google+ accounts is one way, but this assumes those services have an effective means of ensuring their own users are using their real names. We know this is not really the case: at present it is still possible to set up accounts under false names on any of these services.

The other question is whether this will effectively stop bad behaviour. YouTube's experience indicates real names haven't eradicated the torrid comments. This may be in part because, on a service like YouTube, there are so many people commenting that a real name operates in the same way as a pseudonym anyway. If nobody in a social group can be expected to know who you are, the social norming effects that moderate behaviour do not apply.

Certainly when the articles are on emotive subjects and the problem becomes one of warring factions, real names are a non-issue because being identified ceases to be an inhibitor. Declaring your allegiance to the group is part of the motivation for engaging in the argument (or fight).

Reducing the problems of commenting to a single factor of identity versus anonymity misses the point. As with all relationships, and especially those encompassed in social groups, the situation is far more complicated.

Having a commenting community on a site is much more about the social network aspects of that community than about simply engaging customers. To have an effective community organised around an interest, it largely falls to the site owners to cultivate that community.

Turning to the social sciences for an understanding of how to build effective online communities, researchers have outlined a slew of evidence-based design features that can be used to build and regulate those communities. These design suggestions are based on normalising behaviour to achieve a constructive, informative and civil conversation.

Some of these suggestions may actually surprise comment moderators currently dealing with the task of shaping an online community. For example, some sites leave a trace of comments that were removed by moderation. This works as what is called a "descriptive norm", a signal of how people in the community typically behave. But research has shown that if there are too many of these traces, they can actually elicit even more bad behaviour.

Another suggestion is that commenters who display bad behaviour are far more likely to moderate that behaviour if they are given warnings and face-saving ways of amending their behaviour.

Although anonymity and identity are one aspect of the overall healthy functioning of an online community, they are only a small part, and, as with all relationships, the truth is that "it's complicated".


The writer is director of innovation at the Faculty of Arts, and director of the Centre for Software Practice, at the University of Western Australia

This article first appeared in The Conversation, a website which carries analysis by academics and researchers in Australia and Britain.
