Why Brands Need Social Media Community Guidelines

We’ve likely all excitedly published a post on social media and then quickly clicked the comments only to be horrified by an off-colour message. Be it abusive, racist, homophobic, sexually explicit, or plain ignorant, inappropriate comments can feel like a punch to the gut, and are embarrassing to have underneath your post.

Though these comments sometimes come from your always-inappropriate uncle or a friend with a weird sense of humour, they can also come from trolls – strangers who post incendiary comments just to get a rise out of others. And though trolls can (and do!) target posts from regular people, brands with name recognition and large followings often receive much of their focus. Trolls will work to skew the conversation a brand is hoping to nurture online, using inflammatory rhetoric that ignites outrage and shifts attention away from the brand's message or intent.

The implications of trolling for users and brands

Worryingly, these types of comments can cause users to turn not on the brand, but on each other. Suddenly, users will attack and bully those with differing opinions, and comment threads can devolve into virtual boxing matches that turn personal and leave real mental and emotional wounds.

There is ample research surrounding the turmoil cyberbullying can bring on a personal level, including an increased likelihood of self-harm or suicidal behaviour. But brands, too, can see digital ire move into the real world in very dangerous ways. In 2018, a shooter upset about the company's policies opened fire at YouTube's headquarters in California. That same year, CNN received a pipe bomb in the mail from a man who was disgruntled about its political coverage.

In order to discourage and curtail language that can lead to dangerous behaviour, social media platforms like Facebook, Twitter and Tumblr have put community guidelines in place that lay out the types of language and visual content users are not permitted to post. Users who violate these guidelines run the risk of being suspended from the platform.

Though social media platforms do follow through and suspend accounts, trolls and cyberbullies are often barely discouraged by these measures: if they are suspended, they can simply set up a new account using a new email address. Plus, it sometimes isn't clear what does and does not cross the prohibited content line, because everyone has different personal standards – not to mention that sarcasm and dark humour can be hard to discern online.

Strong social media guidelines curtail online vitriol

In order to keep comment sections from turning into cesspools, brands active on social media are now establishing their own community guidelines.

For instance, in March, the British royal family announced social media community guidelines, seemingly in response to abusive, hateful and threatening comments made toward both Kate Middleton, Duchess of Cambridge, and Meghan Markle, Duchess of Sussex. The family's guidelines detail their expectations for courteous and respectful engagement, and clearly state that they will use their discretion to determine if someone is in violation of their standards. Additionally, the guidelines establish the actions the family may take against those in violation, including deleting comments, blocking users and alerting law enforcement if comments are threatening.

Unfortunately, the existence of such guidelines does not mean that trolls will suddenly clean up their act. But the guidelines do give brands the opportunity to condemn hate speech, communicate their expectations for engagement to their followers, and establish transparent protocols for when and why they delete comments and block users.

If your brand does not already have social media community guidelines in place and is looking to establish a set, here are some things to consider:

  • Be clear in your policies: Revisit your company handbook or human resources policies and use their language about the behaviours that are not tolerated as a starting point for drafting your social media community guidelines.
  • Condemn hate, not criticism: Though no one enjoys being criticised, criticism can be useful – especially when brands are hearing directly from their audience and learning how they respond to different messaging. For example, when Pepsi released a campaign featuring Kendall Jenner that tied into the #BlackLivesMatter movement, the response was swift and the ad was deemed tone deaf – a lesson Pepsi needed to learn. Though this type of negative feedback can be harsh, it is very different in nature from comments that are abusive and offensive, and should not be discouraged.
  • Enforce your community guidelines: Once your guidelines are announced and in place, do not be complacent. Your social media teams should monitor comments and enforce the guidelines consistently. After all, there's no use putting guidelines in place if they exist only in theory – you have to practise them, too.

Taking a stand against hate speech and cyberbullying is a just cause, and one that your brand can champion. So if the comments on your posts make you want to log out, it's time to get your community guidelines in place to protect your brand, your message and your audience.

Need help fighting the trolls? We can help: hello@mutant.com.sg