I wish I could forget the morning I saw a picture of a dead child on my Facebook news feed.
The picture was graphic and brutal, posted to raise awareness about a war. I was stunned for the rest of the day. Why had someone decided to post such an image, and more importantly, why hadn’t Facebook removed it?
The same thing happened more recently when someone I know shared a picture, complete with detailed instructions, telling people to kill themselves if they don’t support U.S. President Donald Trump. Whether it was meant as humor or as a political statement, the post exceeded the boundaries of common decency.
Why would someone post this picture and text and why hasn’t Facebook removed it?
These posts aren’t the only ones that have made me wonder about the moderation guidelines on social media websites. At times, discredited and dangerous medical advice is shared, or hatred of an identifiable group is encouraged.
Before the days of social media, if such content had been shared on an internet forum or in a discussion group, a moderator would have deleted it. Social media sites, by contrast, have traditionally taken a hands-off approach.
Things are starting to change. Some of the social media giants are now paying attention to the content posted on their websites.
Last week, Facebook announced that it would crack down on vaccine misinformation on its platform. This announcement follows previous steps by Facebook to exercise more control and add fact-checking tags to posts promoting inaccurate or misleading content. Other major social media platforms, including Twitter, Instagram, and YouTube, are also doing more to monitor content.
Tools to monitor social media content have existed for years. The major platforms have long had community standards, along with the ability to remove posts or ban users who violate them. Why weren’t these tools used in the past?
Some will argue that Facebook’s latest move to tackle vaccine misinformation is nothing more than a calculated censorship scheme – an attempt to silence all speech except that of those who follow a certain ideology. If anti-vaccine posts are banned today, will content about religion or politics be banned tomorrow?
Such speculation misunderstands the nature of both censorship and social media platforms. Censorship is a government’s attempt to control speech and other content. Facebook, YouTube, Twitter and Instagram are private companies, not government agencies.
A decision by social media platforms to fact-check or block certain content does not necessarily silence voices. The major platforms aren’t the only options available to social media users.
There are some US platforms modeled after Facebook, YouTube, and Twitter that allow registered users to post with little or no moderation. These unfiltered social media platforms attract a specific segment of the public, particularly the alt-right segment, extremists, and those who post content that would be pulled down by the dominant social media platforms.
Right now, these are small players in the social media world, but if demand for an unfiltered platform grows strong enough, they could eventually become the dominant social media sites.
The question is whether the majority of social media users want an unfiltered platform, regardless of what it shows them, or whether they would prefer some level of moderation on their social media feeds.
John Arendt is the editor of the Summerland Review.
To report a typo, send an email to:
news@summerlandreview.com.