Fixing the Problem of Social Media

Unlike a huge section of the population, I don’t have a Facebook account.  I did have one, but chose to delete it many years ago, once I realised how much the company was abusing the personal details of its users.  My decision was further vindicated by the stance Facebook has taken on being classed as a media outlet, which it clearly is.  The company curates the content that appears in front of users, and surely any curation process makes it a media organisation.  What’s worse is the way in which the curation is done.  Decisions on taste and decency are made arbitrarily by Facebook, based on its own internal standards and the manual its moderators follow.

Following yet another awful terrorist attack in London, the UK government has started to go after the social media giants, promising stricter requirements on their operation.  I’m not holding my breath.  What could governments actually do in order to regulate content published on sites such as YouTube, Facebook and Twitter?


An obvious step is to require validation of all content before it is published – in other words, moderation.  Each video, or any other content that cannot easily be validated by software, should be reviewed before being made available.  The social media companies will of course cry foul, saying how hard it will be to police every piece of content; however, this is the position they find themselves in, and they have a moral responsibility to fix a problem they caused.  In line with other industries (such as banking), restrictions could be relaxed for validated accounts – a validated account being one where additional ID verification has been provided.

Naturally the sheer scale of the task involved in watching and validating every YouTube or Facebook upload would be enormous, but just because the problem is hard (and potentially costly) doesn’t mean it shouldn’t be tackled.  We should not step away from our principle of wanting to prevent morally questionable content being shared with millions of people.


So what is the basis for a definition of morality?  Ask people around the world and you will get different views.  Clearly the view on women’s rights of someone from Saudi Arabia will be radically different from that of someone in the United States.  However, there is (hopefully) a set of common values we can all agree on, such as not threatening to kill someone, not posting videos of people being killed or taking their own life, and not sharing other types of violence or certain types of pornography.  Outside that core (and I’m not defining it, just giving some examples), the boundaries become blurred.  The way to find out what is acceptable is either for social media sites to follow the publication rules each country already applies to media organisations, or simply to ask their subscribers what they feel is right or wrong.


Will the social media giants change their approach?  I’m not hopeful.  There’s a natural aversion to change, driven by the business model of social media: revenue is generated from customer data.  You the user are the product, with Facebook and others selling your private data to companies, mainly through targeted advertising.  The last thing these companies want is to slow down their viewing hits, because hits = revenue.  Personally I say: so what?  If Facebook makes less revenue, the world will not end tomorrow, but we might all feel better about ourselves, knowing we did the right thing by pushing unwanted content off the Internet.