Our data, our decision?
Written by Laura Browne, Digital Project Manager at bigdog
We need to talk about the amount of control we’ve thrust upon algorithms, particularly those that influence our quality of life.
Depending on data to make crucial decisions places a huge amount of trust in non-human reasoning. Whether it’s our financial, legal or medical needs, any misjudgement could severely impact daily life – and isn’t it that human touch, a gut feeling, that leads us to make so many right choices against the odds?
You’ll experience some form of Artificial Intelligence (AI) every single day; most commonly it will come in the form of Machine Learning, which shapes your online experience. Have you ever had Facebook’s facial recognition suggest who to tag in your photos? Or perhaps you’ve had e-commerce shopping recommendations pop up that are eerily close to something you’ve been searching for recently? Both are examples of Machine Learning.
Because we’re quickly acclimatising to this new era of ultra-convenience, the demand for a faster, easier and more personal digital experience has never been higher.
The exploration into Machine Learning is driven by the primal instinct that’s gotten us this far, evolutionarily speaking: problem solving. But what exactly is the problem? It all boils down to decision making. Can you confidently profile human needs individually, yet at scale, without making any mistakes? Absolutely not – so we rely on data to do it for us.
Decision paralysis is a plague on the human mind: avoid making a decision and you will avoid blame for a mistake. Passing this responsibility onto an algorithm is a modern solution that works most of the time, but data can get it wrong too.
In 2018, Facebook was found to be targeting job adverts for higher-paying roles – mostly manual labour and driving positions – at men rather than women. While Facebook didn’t consciously make this data-driven choice, that didn’t stop targeted discrimination reaching its mass audience. Despite the social media giant having policies in place to actively tackle this kind of gender bias, is Facebook ultimately liable for this data discrimination?
It’s not all bad news though – advanced technology has broken ground within the healthcare industry. Over half of NHS trusts have started investing in AI, with a view to using broader data ranges to assess patient risk factors and ultimately provide a quicker diagnosis. This, of course, can be crucial to patient survival.
Information provided by AI is being utilised by medical professionals, who ultimately make treatment decisions on a case-by-case basis. The important distinction here, however, is how the data is used: a human draws on the information provided to make an informed choice, rather than relying on the data itself to deliver the ‘best’ option.
Machine Learning answers the demands of a world that’s constantly outpacing itself; it analyses broad ranges of data more quickly and more accurately than any person could. The boundaries of what AI can tackle are still being pushed, with impressive results that will come to shape the way we live our lives. Undeniably, these technologies have their place, and for myriad good reasons.
However, pouring trust into this type of software to make our decisions for us will always carry a high, unempathetic risk that shouldn’t be ignored.