Recent news about the social media giants and their opaque algorithms has once again hit the headlines. Harvesting our data for political gain is one thing (looking at you, Facebook), but allowing your algorithms to push dangerous content to vulnerable users is another.

Authenticity, a word thrown about a great deal lately, is something these sites are largely disregarding, yet they are the ones who should be leading the fight against inauthenticity: fake news and misinformation.

Anti-vaccination narratives, suicide and self-harm are just a few of the topics that platforms like Facebook, YouTube, Instagram and Pinterest have reportedly recommended to users.

Facebook has been accused of spreading anti-vaccination content and favouring groups that promote unscientific theories claiming vaccinations lead to illness. It has been argued for years that YouTube's and Pinterest's recommendation systems have also routinely surfaced this kind of controversial content above factual, accurate information.

Now, only after a decade of complaints and mounting pressure, YouTube has said that changes will be made to its algorithm: a crackdown on conspiracy-style video suggestions and the removal of monetisation and ads from related content. Sure, this will have some effect going forward, but hasn't the damage already been done?

While researching this article, I searched 'Should I vaccinate my baby' on YouTube. Initially, pro-vaccination videos surfaced. A few videos in, however, a video about autism popped up: a disorder the anti-vaccination community claims is a result of vaccinations.

It may be that the more obvious anti-vaccination content has been blocked by the algorithms, but subtler, more understated propaganda is still there, bubbling under the surface. YouTube, it seems, has a long way to go.

Self-harm and suicide have also been central to the debate over these algorithms, and cases like the tragic death of Molly Russell in the UK have further fuelled the argument.

Despite the clear rules these sites claim to stand by, disallowing harmful content, enforcement is inconsistent. If you are a young person dealing with depression or an eating disorder and you repeatedly search for related content, the platform's algorithm will send you down a rabbit hole, serving up ever more damaging, persuasive material.
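
To make that feedback loop concrete, here is a minimal sketch of how a naive engagement-driven recommender might behave. This is an illustrative toy, not any platform's actual system; the catalogue, tags and scoring rule are all invented for demonstration.

```python
# Toy model of an engagement-driven recommendation loop.
# Assumption: the ranker scores items purely by topical similarity
# to what the user already engaged with, with no notion of harm.

from collections import Counter

CATALOGUE = [
    {"id": 1, "tags": {"fitness", "diet"}},
    {"id": 2, "tags": {"diet", "weight-loss"}},
    {"id": 3, "tags": {"weight-loss", "extreme-dieting"}},
    {"id": 4, "tags": {"extreme-dieting", "pro-ed"}},  # harmful content
    {"id": 5, "tags": {"cooking", "recipes"}},
]

def recommend(history_tags: Counter, seen: set) -> dict:
    """Pick the unseen item whose tags best match past engagement."""
    return max(
        (item for item in CATALOGUE if item["id"] not in seen),
        key=lambda item: sum(history_tags[t] for t in item["tags"]),
    )

# A user starts with one innocuous interest...
history, seen = Counter({"diet": 1}), set()
for _ in range(4):
    item = recommend(history, seen)
    seen.add(item["id"])
    history.update(item["tags"])  # each click reinforces the topic
    print(item["id"], item["tags"])
# Output drifts from "fitness"/"diet" toward "pro-ed": every click
# pulls the ranker one step further along the topic cluster, which
# is the rabbit hole described above.
```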

Perhaps the most worrying part of this issue is the length of time it has taken for these influential sites to come forward, admit wrongdoing and attempt to prevent this kind of damaging misinformation. Understandably, monitoring content at this scale is a mammoth task, but it should not take the death of a young girl merely to open a conversation about these practices.

Without questioning or thinking twice, the public places unwavering trust in these social media sites, allowing them a window into their lives and the opportunity to predict their behaviour.

With the power these platforms hold comes an obligation to protect their users from damaging content and information. Most users are likely unaware of how these algorithms work, or even that they play a part in their daily social media scroll.

It will be interesting to see what steps these sites take next to rein in harmful recommendations, and whether more damaging ones surface. These days we may all be about the #nofilter life, but these social media giants undeniably need to start filtering detrimental content, and at scale, to protect the very users they so heavily rely upon.