Ethical bot-making

These are some guidelines on how to write bots in an ethical way, written in response to some horrendous mess-up or other. I'm pretty sure that nothing I say here is particularly ground-breaking; these are ideas which I've absorbed from the bot-making community, particularly everyone who hangs out in the #botALLY Slack. Hopefully some of them are helpful to you.

First, do no harm

As an ethical principle, this might seem obvious to the point of uselessness. Harming people is bad, and we shouldn't do it. I hope that's self-evident to anyone reading this. Still, this is a good place to start.

Every time we make a decision while making a bot, it's useful to think about whether this could hurt someone. For some decisions, this might be very clear. A bot which generates knock-knock jokes probably shouldn't have racial slurs in its wordlist. For other decisions, it could be more subtle. Sometimes there are unforeseen consequences. Originally, Darius Kazemi's @TwoHeadlines generated occasional transphobic jokes, but he was able to mitigate this with specific code.
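
To make that concrete: the @TwoHeadlines problem arose because swapping a name into a headline about someone of a different gender could turn an innocent mash-up into a joke at trans people's expense. I don't know the details of the actual fix, but a mitigation in that spirit might look something like the sketch below; the term lists and function names here are mine, purely for illustration.

```python
# Sketch of a gender-mismatch check for a headline-swapping bot.
# The term lists and helpers are illustrative, not anyone's production code.
# A real version would strip punctuation and use much fuller word lists.

GENDERED_TERMS = {
    "masculine": {"he", "him", "his", "mr", "man", "men", "king", "actor"},
    "feminine": {"she", "her", "hers", "ms", "mrs", "woman", "women", "queen", "actress"},
}

def gender_markers(text):
    """Return the set of gender categories whose terms appear in the text."""
    words = set(text.lower().split())
    return {category for category, terms in GENDERED_TERMS.items() if words & terms}

def swap_is_safe(original_headline, replacement_entity):
    """Reject swaps that would move a name across an apparent gender boundary."""
    headline_genders = gender_markers(original_headline)
    entity_genders = gender_markers(replacement_entity)
    # If both sides carry gender markers and they don't overlap, skip this swap.
    if headline_genders and entity_genders and not (headline_genders & entity_genders):
        return False
    return True
```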

Sometimes, it's better to not make something. No matter how clever and cool our idea is, we still need to consider whether or not it could hurt people, and if so, whether we really think the benefits outweigh that. Not every cool idea needs to be made real.

Often, when a bot does something bad, like parroting racism or harassing people repeatedly, someone suggests that it's impossible to protect against this sort of thing, that there's no way to prevent all harm. This misses the point. Sometimes we can't do what we want to do without hurting people. The conclusion to draw is not that hurting people is unavoidable; it's that sometimes we shouldn't do what we want to do. A bot which does not exist harms no-one.

A bot is an extension of its creator's will

The ultimate responsibility for what a bot does lies with its creator. We may create a persona for the bot, in order to achieve some effect, but fundamentally it is nothing but a set of instructions we give to a computer. If those instructions lead it to do bad things, then the responsibility is with us, not with the persona we created.

Bots are powerful amplifiers, in a way which is difficult to grasp intuitively. When we insert a machine into a social environment, we must bear in mind that the machine is in some respects superhuman. It does not tire or grow bored, and it is capable of acting on a scale no human could match, for very little expenditure of effort. If I accidentally make an off-colour joke, I correct myself and move on. A bot can make the same joke hundreds of times per day, for years at a stretch, with no capacity for correction. What was a simple mistake from a human becomes a sustained campaign of harassment when automated.

For many years, Robot J McCarthy would reply to Twitter mentions of "communism", "socialism", "Marxism", etc., with put-downs in the style of a mid-20th century American anti-Communist. I'm sure this seemed like a funny joke to its creator. Two and a half million tweets later, I've yet to come across anyone who found the bot less than annoying. In many cases people felt harassed by the bot, which inserted itself into their conversations in the guise of a figure many would rather forget.

You are what you eat

Choosing and editing sources is one of the most difficult parts of bot-making. Often we rely on input data which we have not vetted completely. We have to decide how comfortable we are with the possibility that something horrible may lurk within our corpus.

It's particularly important to be careful when dealing with "open" sources of data. When we use data from the internet, we're giving up control over an element of our bot. This can lead to wonderful results, but it also means we must be prepared for our bot to receive unsavoury input.

This is doubly true when bots interact directly with their audience, regurgitating supplied text or images. Try to imagine how a bot could be used as a harassment tool, or what kind of things a malicious individual could make it do.

We can question who is to blame when a bot is abused for malicious purposes. Of course, the abuser bears responsibility, but this does not absolve the bot-maker. We are all well aware that the internet is home to griefers and trolls. If we build systems without considering how they could be abused, we must bear some responsibility when that inevitably happens.

This balance of responsibility shifts somewhat according to how much you modify the source data. A bot which recites the text of Adventures of Huckleberry Finn can probably get away with quoting racial slurs. A bot which assembles sentences randomly from the novel's vocabulary will have a harder time justifying this.

Bots should punch up

Leonard Richardson, in his essay Bots Should Punch Up, talks about bots as comedians. In comedy, there's a widely-held belief that we should always direct jokes so that they're at the expense of those with high social status, rather than low: punching up, rather than punching down. The same attitude is often expressed in journalism, through the dictum to "comfort the afflicted and afflict the comfortable".

Of course, not all bots do comedy, and fewer still do journalism. Nevertheless, we can always consider our bot's actions in this light. Who, if anyone, benefits from this bot? Who, if anyone, is disadvantaged? Are we punching up, or are we punching down? Are we punching at all?

All of this presupposes an understanding of the relative social standings of the author, the bot, the target and the audience. The social status of a bot is often unclear. There may be jokes that we, as authors, can make because of our particular situation, which come across badly from the more nebulous position of a bot.

Some miscellaneous advice

Monitor your bots. Keep track of what they do, and if you don't like it, fix them. If they can't be fixed, remember you can always take them down, either temporarily or permanently.
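
In practice, "keep track of what they do" can be as simple as writing every post to a local log before it goes out, so there is always something to review later. Here's a minimal sketch; the log file name and the wrapper function are placeholders, not part of any particular bot framework.

```python
import logging

# Append every outgoing post to a local file so the bot's history can be reviewed.
logging.basicConfig(filename="bot_output.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def post_with_logging(post_function, text):
    """Log the text, then hand it to whatever function actually publishes it."""
    logging.info("POSTING: %s", text)
    return post_function(text)
```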

You will miss things, so make it easy and obvious how people should get in touch with you if the bot does something wrong. Just knowing there's a human paying attention can go a long way towards minimizing the hurt people feel.

Word filters are a crude tool, and they miss a lot. That doesn't mean we shouldn't use them. People are much more forgiving of a bot which unwittingly implies something unpleasant than one which straight-up repeats a slur. It's easy to check output against a list of "bad words", such as this one from Darius Kazemi.
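
The core of such a filter is just a case-insensitive substring check against the list. A minimal sketch, assuming the list is kept in a local badwords.txt file with one entry per line:

```python
# Crude word filter: reject any output containing a blacklisted substring.
# "badwords.txt" stands in for wherever you keep your list (one word per line).

def load_badwords(path="badwords.txt"):
    with open(path) as f:
        return [line.strip().lower() for line in f if line.strip()]

def is_blocked(text, badwords):
    lowered = text.lower()
    return any(word in lowered for word in badwords)

# Usage: check generated output before posting, and regenerate or skip if it fails.
# badwords = load_badwords()
# if not is_blocked(candidate_post, badwords):
#     post(candidate_post)
```

A substring check like this will over-block (the classic Scunthorpe problem), but for a bot it's usually better to throw away a good output than to post a bad one.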

It is a mistake to assume that the audience for a bot understands what the bot is doing. Often, the audience will have little understanding of bots in general. Sometimes they develop ideas about the bot which are false.

The output of a bot can be shared, retweeted, screenshotted and recontextualized beyond your wildest imaginings. Someone who is hurt by seeing something out of context is still hurt.

Finally, remember that you are going to make mistakes. Learn from them, and those of others. What matters is not that you mess up, but how you respond when you do.

If you've enjoyed this piece, please consider contributing on Patreon so I can do more things like this.