Notes Storyboard

Framing Bots

The best way to envision potential futures for social automation lies in accepting the paradoxical nature of bots. Yes, they contain values encoded by the people who build them, but they also live — and perform — on an unpredictable Internet of nearly limitless input and output. This doesn’t mean that responsibility doesn’t exist. It means that it’s complicated and should be addressed as such.

The Data & Society workshop on the questions that bots raise provides a valuable introduction for thinking about responsibility and culpability for creators of automated systems.

Bots are challenging because of their semi-autonomy. They combine the intentions and style of their creators with generative methods based on combining and recombining data. This semi-autonomy can lead to surprising and unpredictable outcomes, which requires makers to explicitly acknowledge their own values and their responsibility for their creations.

One of the unexpected difficulties I ran into with my first attempt at #NaNoGenMo2015 was dealing with some of the more dubious emergent properties of scraping a large corpus of old adventure stories from Project Gutenberg. After starting to generate new writing based on random traversals of these texts, I discovered that many of the books used explicitly racist language and character archetypes—not to mention insidious and divisive levels of sexism, orientalism and colonialism embedded in many of the narrative situations.
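The "random traversal" approach described above can be sketched as a simple Markov chain over words: index the corpus by n-grams, then walk the index at random to produce new text. This is a minimal, hypothetical illustration of the technique, not the actual NaNoGenMo code; the function names and parameters are my own.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each n-gram of words to the words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=50, seed=None):
    """Randomly traverse the chain to produce new text in the corpus's style."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length - len(key)):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:  # dead end: this n-gram only appears at the corpus's end
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Note the trade-off this exposes: every generated word comes verbatim from the source texts, so whatever language those texts contain, the output can reproduce.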

From a historical or literary perspective, this isn’t surprising or shocking. Influenced by post-structuralism, we tend to look at such features as reflective of the milieu in which the books were published. But what happens when we yank these features out of context and reproduce them in new works? Such re-contextualisation becomes the generative literature equivalent of blackface.

Eventually, I ended up ditching the project altogether, in part because I didn’t have a robust method for dealing with these issues. Generating texts littered with racial slurs isn’t something I find amusing or comfortable.

So I learned that the vast potential for generating creative works from large data sets has to be constrained by authorial responsibility. “I didn’t intend to offend” is not an acceptable answer. Creators cannot necessarily escape the impact of their work by explicitly relinquishing control.

Further Reading

In Bots Should Punch Up, Leonard Richardson describes his experience managing the ethical risks and potentially offensive output that came with collecting thousands of movie reviews for the @RealHumanPraise bot.