#ArtistsCompute2016 Festival, Coventry



Disenchanted Eyes and Hopeful Horizons

Text by Jake Watts

Written to coincide with #ArtistsCompute2016 Festival

Disenchanted Eyes and Hopeful Horizons: You compute, I compute, we all compute.

You are most likely reading this on some form of screen. That screen hosts a graphical interface, and this interface is designed to allow your device to communicate information to you from the algorithmic processes it performs in the background. These processes, and the links between them, are obfuscated with the intention of creating a ‘frictionless’ user experience – the most banal form of magic. Frictionless is a byword for removing potential failure; it is about making invisible the labour of machines and of the humans who program them. More specifically, it is about limiting the frustration of users – that irritation which arises from not understanding, or being understood by, our devices.

Do you truly understand how your computer works? How computer programs or apps operate? Could you read and comprehend the code that makes your devices function? If you’re like me then probably not, or perhaps you have a rudimentary understanding of a small element of one of the myriad processes occurring within your device at any given time. In most instances thinking about these concerns is secondary, tertiary, or not a concern at all. Yet all of these acts (human and non-human) can be understood as forms of computation.

When we use the term compute it’s often to imply a mechanical quality of processing information: to calculate, to be precise. Yet compute’s etymological root goes back to computāre, which means to think with, to consider with, or to apprehend with. Though this use of the term is now rare, it is a handy way of considering what it means to compute when making art.

To further understand computing as a longstanding element of making art I will try to briefly unpack art’s intertwined relationship with the development of technology. Since we as a species began crafting things (for pleasure or out of functional needs), we have honed, advanced, and innovated the technologies necessary to do so. In the process of doing this we have enacted, or expressed, our thinking through technologies – we have always found ways to compute.

This may at first seem an oversimplification, but when thinking about making art I find it freeing to consider that a goldsmith’s crucible can perform the same role as a Raspberry Pi. Many technologies, like the forms of artistic thinking they articulate, remain black boxes. We understand the underlying processes of some of these black boxes better than we do others. Regardless of the depth of our comprehension of these technologies, we tend to use them in the following way: content is channelled through them, in the process the content is transformed, and this transformed content then emerges and is expressed as something that can be experienced by ourselves and others. What inflects the nature of the art produced through this process is who decides what type of content will be computed through the black box of technology.

Despite being formed through and acting upon the complex ebbs and flows of our culture, technological objects are in and of themselves neutral entities. For example, we know that a pen can be used to write a sonnet or hate speech and everything in between. To extend this logic to a more current technology, a Twitter bot may be coded to produce magic realist literature – or it might be designed to manipulate the stock market. The pen is neither a poet nor a bigot; the bot is neither a surrealist nor a thief. But the user who acts through either technology may be any of the above, or have other intentions entirely. This is the very problem that has begun to confront those who work in the field of AI, currently one of the most prominent fields of technological advancement.

AI’s ascendancy is wrapped up in our current desire to reverse the flow of making (as I have described it) and instead imbue technology with the ability to understand us and the agency to respond to us, in the hope that this technology will unburden us in some way. These aims have underpinned recent developments in machine learning, the specific intention being for AI to act as a human would. This aim litters the language that informs AI debates, to the point that Google’s neural networks are modelled on the functionality of the human brain (an entity of whose workings we know relatively little).

There are plenty of reasons to anthropomorphise technologies; we have often attempted to socialise them through the familiarity of human voices, gestures or appearance. Yet, as soon as we force them to be more human, more ‘like us’, they become culturally laden with prejudices and biases which elicit distrust or a sense of abjectness. The AI of Twitter bots, for instance, is palatable due to its botness rather than its human-ness, to the point that people are more likely to perform actions when directed by an entity discernible as a bot than when they suspect it is a bot masquerading as a human. For all the developments made in AI in recent years, the programs remain barely even human-like. Measured against this aim they are rendered disappointing facsimiles, ones that fall drastically short of our 1950s sci-fi expectations. Despite Hollywood’s fascination with the concept, a supposed likeness to our human-ness as a marker of intelligence is fundamentally the wrong yardstick by which to understand technological development. This is because computers are not, have never been, and probably never will be human, nor should they need to be. Once you begin to alleviate this pressure of expectation upon machine learning, it can be understood as technically complex and impressive by its own standards.

Interestingly, our insistence upon AI’s human-ness is fast becoming a grim mirror that reflects back the unjust biases inherent within our society. AI has begun to identify and replicate racist and sexist tendencies, but – as with the pen – this is not because AI programs as things are fundamentally racist or sexist. It is because the people who choose what content AI programs learn from are predominantly representative of one perspective – that of white males. As a result, the content fed into AI processes carries inherent biases that are then deductively identified, replicated, and pronounced by algorithmic logic. For reasons that are hopefully apparent, the emergence of these biases shouldn’t really shock anyone. They are disparities that are ingrained into our daily lives and are felt keenly and presently by many people every single day.

The re-articulation through AI of our cultural shortcomings should give us pause for thought: we should consider why we strive to create technologies that reproduce our failings. Instead we could learn from how machines think differently to us. We could task these complex technologies with producing other ways of comprehending the world, and allow them to propose other ways to move forward. AI programs are forcing engineers to address these disparities and consider their own personal complicity within these systems. Instead of assuming that the disenchanted eyes of non-human neural networks can provide insights into the ‘creative process’ of humans through their algorithmic aggregation of image processing, we should understand them as what Benjamin Bratton calls ‘ascendant ocular subjects’. If we are going to create and nurture other forms of intelligence, we should accept that there are alternative ways of thinking with technology that are not human-centred. We should entertain the idea that there are different ways of computing.

For this reason AI engineers could do worse than to look at the way artists learn and make work. Artists understand that not all forms of knowing, or understanding, are empirical or deductive. They experience the process of learning and developing as a speculative journey of exploration, one that is unpredictable. They know that we must be open to the unforeseeable changes these experiences will impress upon us. It is an unknowable future to be worked towards, not a theory to be worked backward from.

What a festival such as #ArtistsCompute2016 offers is a chance to experience alternative ways of apprehending the world through technology. This is because artists often work abductively; that is to say, they proceed from the uncomfortable position of not-knowing, embracing moments of not-yet as a hopeful means of presenting alternatives of ‘what could be’ rather than mere reflections of what currently ‘is’. It is a way of apprehending the world that understands there is always friction, and that this friction can be good, or even necessary for us to move forward. This is how technologies actually develop. We must perceive the lack in our means by imagining what else might be possible. Only when we do this can we invent ways to make these alternatives a reality. This is how artists who compute present us with hopeful horizons that we can engage with, and create through, technology.

Jake Watts is a Glasgow-based artist and currently Writer-in-Residence for the Office of Art and Design Technology. Jake will be giving a talk as part of the #ArtistsCompute2016 Festival on 10 September. The festival runs 9-11 September at venues across Coventry.
