Author’s Statement on the Use of an AI Text-to-Image Tool

On the ethics of using an ‘AI’ tool: 

It seems that the current generation of ‘AI’ tools designed to mimic creative tasks are basically Mechanical Turks – at first glance they might seem impressive and autonomous, but look closer and you realise that they’re fundamentally contingent on concealed human labour. Accordingly, I don’t believe it’s ethical to make a profit from any project which uses the tools in their current state – where the workings behind the scenes are so opaque, and it’s not possible to attribute or compensate the artists whose work was (often unknowingly) used to train them. Thankfully, voidspace agrees – so we’re going to be donating anything above the costs of production and postage to a charity which supports human artists.

So why use it, then? 

This project is an experiment, intended to facilitate discussion and critique of the use of these images and tools. I think we’re at a really confusing moment: these tools have been released into the world without any considered or extended dialogue between the companies working on them and the creative professionals who can feel threatened, insulted, or exploited by them (I include myself in that camp, as a writer who’s been spooked by certain developments in LLMs, especially the announcements about GPT and Ghostwriter). I hope it’s clear that this project is an exploration of the tools, not an endorsement. The format of the piece is intended to raise questions about the effects of the tools. For example: does presenting hundreds of images of similar quality change the way in which the viewer reacts to each one? And thinking about bias: do the underlying trends in the dataset become apparent – is it clear that the tool is representing, and failing to represent, certain things?

So why cyberpunk? 

Cyberpunk often deals with grotesque attempts to marry man and machine – and this project is, in meta terms, an exploration of getting a human and a machine to work together. I’m a huge sci-fi fan, and spent my years as a teenage media sponge reading Philip K. Dick, William Gibson, Paul Di Filippo, J.G. Ballard, Robert Sheckley, Harlan Ellison, and Isaac Asimov. The future I saw in fiction was one in which robots and AIs (or things which claimed to be robots or AIs, but turned out not to be…) co-existed uneasily with humans, often working towards their own (unclear) ends, operating with opaque reasoning and unintended consequences. The artificially intelligent things I can name off-hand (HAL, AM, Multivac, the Nexus-7s) did not necessarily end well for their human colleagues. So, erm, as a sci-fi geek who comes with this pre-installed idea that automated tools might have unintended consequences, the idea that in 2022–2023 it would even be possible to try writing a graphic novel and asking a machine to generate the illustrations is… well, it doesn’t feel plausible, does it? (But, then, so much in 2022–2023 feels implausible…)

Links to find out more: 
DACS (a group which campaigns to ensure that artists receive fair compensation for their work) – report on artists’ experiences of AI  
The Oxford Internet Institute has a really interesting report on AI and the arts: How Machine Learning is Changing Creative Work 
An accessible article by Chloe Xiang on some of the issues with the LAION-5B dataset used to train some text-to-image tools.