Terminator and The Matrix got it wrong. We all thought the villains of the AI era would be killer robots or hostile programs. Instead, they’re face-ageing apps and YouTube creators with too much time on their hands.
Turns out the biggest problem with machine learning isn’t the machines at all - it’s the people behind them. FaceApp may seem like just another fun viral craze, but read those terms and conditions closely and you’ll realise you’ve signed away your image to a little-known Russian business forever.
The fears around FaceApp are just another reminder that technologies like AI aren’t blank slates free from the intentions of their creators. It’s no secret that, for all their potential to do good, algorithms are as prone to bias as the people who create them. Cognitive technologies are designed to imitate the learning process of the human mind - it’s only natural that they should also emulate its flaws.
There’s another, more sinister problem - what happens when bad actors exploit those weaknesses, or purposefully create something intended to do harm?
Recently, YouTube has come under fire for pushing users towards harmful and extremist viewpoints over time. Then there’s Twitter’s much-publicised struggle with white supremacist accounts. We’ve gone from Tay, the racist chatbot, to psychopathic AIs and Russian hackers.
The problem is that extremists know exactly how to manipulate search algorithms to expose their worldviews to more people. While legitimate organisations are still getting to grips with the fourth industrial revolution, criminals have been comfortably using automation to commit fraud for years. Put the wrong people in charge of facial recognition technology and suddenly you’ve got a real threat to personal privacy.
In the wrong hands, machine learning can become a force for harm. It’s up to the rest of us to make sure it lives up to its incredible potential to do good and solve problems.
Setting out to do no harm is all well and good, but it’s not enough anymore. Organisations are waking up to the fact that the only way to prevent their platforms from being used to do harm is to proactively do good - a fitting change in the age of fake news and picture-perfect video forgery. For some, this means fighting fire with fire. The UK government has a machine learning tool designed to automatically detect ISIS propaganda, and Twitter and Google are employing their own AI tools to comb through and scrub their platforms.
Many are also trying to define a universal set of ethics for machine learning and AI - an updated version of Asimov’s Laws of Robotics, which you might remember from the film I, Robot. The EU and the OECD are among those working towards a set of AI principles, and most major tech players have their own ethics boards.
But we shouldn’t rely on the Googles, governments and Will Smiths of the world to do the heavy lifting. Every organisation has a role to play in shaping AI as a force for good. From the smallest chatbot to Watson itself, we should be working to create algorithms that offer transparency and accountability.
By their very nature, artificial intelligence technologies are somewhat unpredictable, but there are ways to make sure you’re always on the right track. There are many strategies, guidelines and even Asimov’s Laws of Robotics to lean on, but for every organisation, I believe there are three key principles to throw into the mix:
Ultimately, the best way to extract real value and outcomes from technology is to make sure there are good people with good intent and accountability behind it. Because if you’re not willing to take ownership, someone else will find a way.