The transformative potential of algorithms means that developers are now expected to think about the ethics of technology — and that wasn’t part of the job description.
The tech industry is entering a new age, one in which innovation has to be done responsibly. “It’s very novel,” says Michael Kearns, a professor at the University of Pennsylvania specialising in machine learning and AI. “The tech industry to date has largely been amoral (but not immoral). Now we’re seeing the need to deliberately consider ethical issues throughout the entire tech development pipeline. I do think this is a new era.”
AI technology now informs high-impact decisions, from court rulings and recruitment to profiling suspected criminals and allocating welfare benefits. Such algorithms should, in principle, make decisions faster and better, assuming they are built well. But the world is increasingly realising that the datasets used to train these systems often encode racial, gender or ideological biases which, as per the saying “garbage in, garbage out”, lead to unfair and discriminatory decisions. Developers might once have believed their code was neutral, but real-world examples show that the use of AI, whether because of the code, the data used to train it or even the very idea of the application, can cause real-world harm.
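To see how “garbage in, garbage out” gets measured in practice, here is a minimal sketch of one common audit: comparing favourable-outcome rates across demographic groups and computing a disparate-impact ratio (a widely used rule of thumb flags ratios below 0.8). The data and group labels below are entirely hypothetical, invented for illustration; real audits use the system’s actual decision logs.

```python
from collections import defaultdict

def selection_rates(records):
    """Rate of favourable outcomes per group.

    `records` is a list of (group, outcome) pairs, where outcome is
    1 for a favourable decision (e.g. shortlisted) and 0 otherwise.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical screening decisions from a system trained on skewed data:
# group "A" is favoured 3 times out of 4, group "B" only 1 time out of 4.
decisions = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

rates = selection_rates(decisions)
# Disparate-impact ratio: min group rate / max group rate.
ratio = min(rates.values()) / max(rates.values())
print(rates, ratio)  # ratio well below the 0.8 threshold: a red flag
```

A check like this says nothing about *why* the disparity arises, only that it exists; tracing it back to biased training data or proxy features is the harder part of the audit.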
From Amazon’s recruitment engine penalising resumés that included the word ‘women’s’, to UK police profiling suspected criminals using criteria indirectly linked to their racial background, the shortcomings of algorithms have given human rights groups ample reason to worry. What’s more, algorithmic bias is only one part of the problem: the ethics of AI is a multifaceted picture.
To mitigate the unwelcome consequences of AI systems, governments around the world have been drafting guidelines and frameworks designed to inform developers and help them build algorithms that respect human rights.