Hito Steyerl and Lawrence Lek discuss the implications of deep learning AI for the practising visual artist.
London-based Lawrence Lek is an artist working in the fields of virtual reality and simulation, known for his ongoing series of CGI films, soundtracks, games and installations, many of them set within his own meticulously realised and ever-expanding ‘Sinofuturist cinematic universe’. Steeped in the spirit of speculative sci-fi cinema from Blade Runner to Akira, with lashings of Detroit techno’s post-human machine-melancholia and hauntology’s depressive nostalgia for ‘lost futures’, Lek’s narrative world-building borrows both the structural underpinnings and the surface-dazzle of video games to create uncanny, ultra-absorbing AV environments – in which we, the viewers, are invited to confront existential questions about who we are, what we have been and what we might be on the way to becoming.
The recent work of German artist, writer and thinker Hito Steyerl explores what it means to be human at a time when reality is not just mediated, but quite literally constructed, by screen-based images. With a darkly satirical and rigorously unsentimental eye, Steyerl unpicks the means by which big tech and ethically-untroubled capitalism aim to colonise the digital sphere and so control our future – and suggests practical strategies for us to resist this. She has been described as the #1 most influential person in the contemporary art world (in the 2017 iteration of ArtReview’s notorious list) but has refused co-option into the mainstream: in 2021 she turned down Germany’s none-more-prestigious Federal Cross of Merit in protest at the government’s handling of the pandemic, specifically its prolonged closure of universities (during which time she moved some of her teaching at Berlin University of the Arts onto Minecraft). A pioneer of the video-essay form, she is presently engaged in research into the social biases of AI image generation.
Fact invited the two artists to talk to each other about their creative processes and current preoccupations – including the implications of deep learning AI for visual artists and the gamification of everyday life.
This feature was originally published in Fact’s F/W 2022 issue, which is available to buy here.
LAWRENCE LEK: Generally, I start with an environment, a particular ‘site’ or sense of place. And the question I think about first is: Who’s looking at this place? This is partly a cinematographic or documentary filmmaking approach, but it also comes from video games: Who’s the player? How much agency or lack of agency do they have? Where are they trying to get to? Each work is an exploratory journey. In Theta, which I showed at 180 Studios, a self-driving car is lost in a smart city: that’s the character and that’s the setting. Sometimes the journey might have no narration or literal language, but in this case I’m interested in the interior monologue – which happens to be a self-driving car talking to its built-in AI therapist.
HITO STEYERL: Narrative can be a very simple and effective tool, a way to guide people through a set of questions, an environment or a proposition. It lets people answer their own questions, but at the same time it puts the questions out there. Language is part of my work, but it’s not what I start with. For me, there needs to be at least two components to combine: a real situation and an interesting visual, which somehow come together. I think finding the visual mode, which usually comes with its own technological setup, its own history of consequences and ramifications, is really the most challenging thing. Of course, I’m not saying that the rest simply takes care of itself…
LL: I do a lot of CGI and video game stuff, and unlike in a lot of other forms and approaches to art-making, I try to achieve a level of directness as opposed to abstraction. When I’m making a CGI animation of a car, it’s a car. It looks like a car. It sounds like a car. It’s meant to be mimetic. Same with the cityscapes in Theta. The entire work is representational rather than abstract. Which isn’t necessarily a conscious thing. But I do have a desire for my work to be accessible – I want people to be able to understand the representational means in order to get to the deeper message or concepts. And language helps to remove the layers of abstraction.
HS: Yes, if you work in making CGI visuals, then you cannot simply show an idea of a car. It’s got to be a car. Otherwise it makes no sense to the viewer. So then the question becomes: How to extrapolate a level of possible abstraction for the viewer? But even then you still need to deal with very concrete things: Where is the car? What are the people going to say in the voiceover? How am I going to animate this object? And I think that’s where the most interesting questions are: in the problems of implementation.
These are the sorts of questions I’m facing right now. I’m trying to observe the shift in AI visual production from GANs [generative adversarial networks] to language-based models. In terms of creating visuals, the previous model could be described as a pixel probability calculation: Which pixel is going to be located next to another pixel? All of which depends on pre-existing training material.
But technologically this model is being superseded by language-based models such as GPT-3 [an AI language model that uses deep learning to produce human-like text], the logic of which is now imposing itself onto visual production – like DALL·E, for example, where you input a text/language prompt and an illustration is generated.
How is this going to affect visual production from now on? And is visual production going to be integrated even more into the logic of language?
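(For readers who want to see what this looks like in practice, here is a minimal sketch of text-prompted image generation, using the open-source Hugging Face diffusers library as a stand-in – an editorial assumption; neither artist names a specific tool, and the checkpoint and prompt are placeholders.)

```python
# A minimal sketch of text-prompted image generation, using the open-source
# Hugging Face diffusers library as a stand-in for the kind of system
# discussed above. The checkpoint name and prompt are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any compatible text-to-image checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# A single language prompt is enough to produce a picture: the logic of
# language imposing itself on visual production.
image = pipe("a self-driving car lost in a smart city at night").images[0]
image.save("prompted_image.png")
```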
LL: I’ve had some AI-generated sequences in previous works, and looked at things such as game AI, which is quite different from deep learning and the current generation of machine learning, because it’s built more for performance and entertainment. But these are all interesting developments for a visual artist and storyteller – not only technologically, but politically and personally too. And it’s a two-fold thing: How does it inform the content within our work, and then also, in what way does it inform our process outside the work?
At the moment I’m trying to write a new film script, and I’m experimenting with different AI language models, some of which I find very useful, and some I feel really ambivalent about. As a visual artist you recognise that pretty much all the stages of production can be augmented, if not replaced, by some level of automation. It’s a big question.
When I was studying architecture, about 20 years ago now, one of the big questions was: What do you do with generative work? Do you design something, or do you design a system that creates options, and then you curate those options? I always felt resistant to the curatorial process because it felt somehow like being divorced from the process of fabrication, of actually making the work. But now it’s become a challenge to keep resisting certain conveniences, in terms of automation.
Still, many filmmakers continue making stuff on film, when shooting digitally is possible. In fact, the advent of digital filmmaking made some filmmakers really precious about celluloid filmmaking, the physical object, the analogue medium. Maybe in years to come people will be proud to say, ‘I made this work without AI…’
HS: Let’s be clear: being able to conjure up a visual by simply inputting a text prompt is close to magic. On the one hand, it fulfils all the wildest dreams of filmmaking, but on the other hand, it appears to take away the productive obstacle of reality. And if you want to implement a visual, regardless of whether you’re filming for real, or creating it in CGI, you have to deal with some degree of reality.
And in this process of AI-based automation, reality is still there: in the operations of the code. If you look at the code for this automated creation of visuals, you will see a lot of social filters being applied. Yesterday I spent the whole day trying to remove not-safe-for-work filters in the code for text-prompted automatic image generation. All these social censors are written into the code. They’re acting almost like the classical model of the psyche: the superego censoring certain articulations from the social unconscious.
So reality or social reality imposes itself on, and inside, the automated operations of these generative algorithms. It’s still there and you can still contend with it.
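(As a concrete illustration of the ‘social filters’ Steyerl describes – again using diffusers as a stand-in, not necessarily the code she was working with – the not-safe-for-work check is an explicit, removable component of the pipeline.)

```python
# In the open-source diffusers library (an illustrative example, not
# necessarily the code Steyerl refers to), the not-safe-for-work check is a
# named component of the pipeline and can be switched off when loading it.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    safety_checker=None,             # drops the built-in NSFW image filter
    requires_safety_checker=False,   # suppresses the warning about removing it
)
image = pipe("an illustrative prompt").images[0]
```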
LL: The idea of gaming as a vernacular – not just video games but gaming in general – is something I grew up with. This isn’t true for everyone of course, but as someone who grew up in a Chinese family, at our family gatherings there was a love of gambling: playing cards, collective gaming. So I’ve always found gaming most interesting as an active, dynamic social activity: everything from esports to online games or exhibited games in a physical environment where people don’t just play, but watch other people playing.
And even if the play is solitary, the conventional critique of gaming as a passive addiction is quite misleading, because the agency or freedom of the player isn’t really externalised, it happens inside their own head. From the outside you might look like you’re just bashing away mindlessly. But inside your head you’re somewhere else entirely. It’s like reading a book: you’re in another world, but if you were being observed as an experimental test subject, you would look like a zombie [laughs].
What’s interesting to me about this boundary between filmmaking and gaming, as an artist, comes back to that question: What role is everyone playing? Sometimes I might make a fragment of a game and then make a film out of that, or I’ll make a film and then a game related to it later. Either way, the starting point is always a place and a character or viewpoint.
I would like to experiment more with the idea of multiplayer games and what it means to interact with these social situations. I like the idea of taking a framework of a social or multiplayer game environment and turning it into more of an introspective meditation on what that world might mean.
HS: Gaming or, rather, game mechanics are implemented at a large scale in reality and that’s what I think is interesting. For example: economic simulations being implemented as shock-and-awe austerity policies for whole populations. The logic of game theory is widespread, as is the gamification of any sort of competitive behaviour. These are mechanics that abound in our reality, or which have been implemented for at least the last 40 years. So my question is: Is there any sort of counter-game mechanics which one could experiment with?
I have done many experiments in this area of counter-simulations so I can tell you with some confidence: it’s not simple. The brain wants some kind of gratification, which it can get quite easily with competitive game mechanics, so introducing non-competitive game mechanics into any social environment is quite a task – because in order for them to work, people need to completely relearn the things that they think are gratifying.
But it is useful to experiment with these alternative game mechanics in the realm of simulation, because trying to implement them in reality straight away is even more tricky. I think we have seen this big time with the crypto and NFT booms: what happens when these kinds of gamified mechanisms of reward get implemented into reality very quickly via blockchain mechanisms, and how this reorganises or messes with or disrupts or – I’m not saying this necessarily in a negative way – massively restructures existing realities. I think that simulating these things for a while, rather than immediately implementing them onto huge cohorts of digital artists living in precarity, would be a good idea.
In my research I’m trying to think through game mechanics in several ways: Firstly, to redesign existing game mechanics which are usually based on competition and winning and quantified ways of measuring progress. Secondly, to create environments in which alternative mechanics can be tested. It has been an ongoing experiment: I think we started working on this around 2016 with different groups, and I was able to see how strong traditional game mechanics are, how they work and how they creep back into almost any situation almost unconsciously.
To unlearn the sort of gratification which comes with traditional game mechanics is quite a daunting task. How do you get people to feel gratified by sharing points? Or by doing something for other people, or for the environment? That’s just not a learned behaviour.
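(A purely hypothetical sketch of the contrast Steyerl draws – not a description of her actual experiments – might reduce the two mechanics to two scoring rules: one that rewards the individual alone, and one that distributes every reward across the whole group.)

```python
# Hypothetical sketch contrasting a conventional competitive reward with a
# shared-points "counter-mechanic" of the kind described above. All names
# and rules here are illustrative assumptions, not Steyerl's own designs.

def competitive_reward(scores: dict[str, float], player: str, points: float) -> dict[str, float]:
    """Classic mechanic: points accrue to the scoring player alone."""
    updated = dict(scores)
    updated[player] = updated.get(player, 0.0) + points
    return updated

def shared_reward(scores: dict[str, float], player: str, points: float) -> dict[str, float]:
    """Counter-mechanic: whatever one player earns is split across everyone,
    so individual progress is only ever visible as collective progress."""
    share = points / max(len(scores), 1)
    return {name: total + share for name, total in scores.items()}

players = {"a": 0.0, "b": 0.0, "c": 0.0}
print(competitive_reward(players, "a", 9))  # {'a': 9.0, 'b': 0.0, 'c': 0.0}
print(shared_reward(players, "a", 9))       # {'a': 3.0, 'b': 3.0, 'c': 3.0}
```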
LL: People have had life experiences over years and decades, so to unlearn these things in weeks or days or even months is such a challenge in terms of learned behaviour, a huge question of individual and group psychology.
This idea of unlearning things is a problem with trained AI models as well. How do you unlearn the biases? I was reading some interesting research about the importance of forgetting in memory formation – a lot of neuroscience research finds its way into deep learning, because it’s one of the ripest fields for cross-technology transfer. For the human mind, it’s crucial not to remember everything but to highlight certain things, in order to have what we conventionally understand as memory – you can’t have this continuum of total recall or total knowledge. Memory is very partial, very concentrated in certain ways, and forgetting is a crucial part of this.
HS: Yes, and this is an area where different games or simulations make sense as training grounds to unlearn certain reflexes or reward paths. I mean, it sounds a bit scary, maybe it’s not such a good idea, who knows?! It sounds like total reprogramming. Of course, total reprogramming is a possibility, and I’m sure there are whole departments of people somewhere out there thinking about how to completely reprogramme people’s reward paths using gamification.
INTRODUCTION: Kiran Sande
INTERVIEW: Hito Steyerl & Lawrence Lek
Lawrence Lek’s largest exhibition to date, LAS Art Foundation presents Lawrence Lek: NOX, runs at Kranzler Eck, Berlin, from 27 October 2023 to 14 January 2024. Find more information on the show at the LAS Art Foundation website.