---
title: The real ethics of AI are about the labour underpinning it
layout: post
---
Even as militaries worldwide develop autonomous killer robots, when we think of the ethics of AI we often turn to Asimov's principles:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These seem like sound principles if we wish to avoid the robot takeover feared by Elon Musk and others.
Further, we know that training language models on broad corpora tends to [reproduce oppressive racist and gendered structures](https://nyupress.org/books/9781479837243/). This, too, is an important ethical area.
Perhaps, though, what we need to think about more in the ethics of AI is the way we treat the human data processors who prepare material for training artificial neural networks and other machine learning systems. For instance, staff on precarious contracts at Facebook and Google [are paid $0.02 for each image that they moderate](https://www.theguardian.com/commentisfree/2017/dec/24/facebook-google-youtube-dirty-work-social-media-inappropriate-content), meaning that they must sift through heaps of scarring images of child abuse for a pittance. Attention to this area has grown in recent years, with [the first conference on the subject held last year](https://atm-ucla2017.net/).
My point is this: we tend to think that the ethics of AI are about restricting the actions of advanced machine-learning systems so that they operate within specific normative moral bounds. What we rarely acknowledge is that such learning still depends upon vast quantities of human labour to filter the datasets. This work is repetitive, mentally scarring, and very badly paid. Those who preach the need for AI ethics principles are also, often, Silicon Valley billionaires. Yet their wealth relies on the exploitation of the people who filter and moderate the content that feeds AI. Perhaps we should address the ethics of this before we heed the calls for ethics to be confined solely to the realm of machine regulation.