Jake Elwes on A.I. Art and Queering the Dataset
Artalogue · October 10, 2025
01:26:21 · 59.37 MB


Can Artificial Intelligence be art? And if so, can it be good?

Today on the Artalogue, Madison Beale sits down with artist Jake Elwes to discuss their practice as an early adopter of A.I., using it to challenge how we think about the world and technology. Elwes doesn't separate art and tech; instead, they treat it as an innovative and generative medium, creating diffusion models that transform faces, words, and gestures into code. Bias, embedded in the datasets and diffusion models now used by almost everyone, becomes something you can see when the work breaks down.

We discuss some of their works, like a diffusion‑driven interpretation of Sontag's Against Interpretation. The Zizi Show, the work that first introduced Madison to Elwes' practice, "queers the dataset" by building a unique dataset with consenting and compensated drag performers, producing moving images that transform into drag kings, queens, monsters and things that dance and lip-sync. The result is a cabaret that prompts us to think about bias, consent, and what it means to make art with tools that reflect us back in troubled but revealing ways.

Elwes and Beale talk about decolonizing data, art, tech, and the problems that arise in the gaps between them. We compare US and UK art education, unpack how normativity creeps into "perfect" generated images, and explore how far we can take Artificial Intelligence in the art world. The thread tying it all together is intention: tools are powerful, but the human voice is the point. If you care about AI, art, drag, ethics, or how culture can absorb new technology without losing its soul, this conversation offers a sharp, hopeful starting point for deeper thinking.

Follow Jake on Instagram @jakelwes and check out their website.

Connect with the Artalogue:

Madison Beale, Host

Be a guest on The Artalogue Podcast

SPEAKER_00:

Hi Jake, welcome to the Artalogue. Can you tell me a bit about yourself and how you got into art?

SPEAKER_03:

I'm Jake. Born in 1993. Jake Elwes came out of the womb painting. When I was a kid growing up in the UK in the 90s, I think I was probably one of the first generations of digital natives, I guess. I remember as a toddler lying under my mum's desk and hearing the modem go blah blah blah blah blah, you know, all of that stuff. I think from a very, very young age, I was always really, really interested in how things worked, in pulling things apart and building them up again. And whether that was concepts or computers or talking to my parents' friends, I don't know, whatever it is. I always wanted to understand the world, talking to strangers on the streets. I was always just going and talking to everybody and trying to learn about the world. I think I was born at a perfect moment. I think a lot of art is about when you were born. I think a lot of it is about being in the right place at the right time, which is really unfortunate because that obviously feeds into elitism and certain people having the opportunities and privileges to be in the right places at the right times. But that having been said, I think for me, it was being around computers, being in London. My dad had one of the fairly early bubble Macs, and I remember going on it and getting obsessed with Kid Pix when I was very young. It was, you know, a really early kids' version of Photoshop. And then my parents had a German graphic designer, Claudia, who rented a room downstairs when I was a kid. And she started to teach me Adobe Photoshop. I think that was the big moment for me. I became addicted to Photoshop. I was really getting into this program and trying to understand how I could manipulate images. And before long, as a teenager, I wanted to download just about every single creative program I could get my hands on, just to see how they worked.
Like advanced compositing programs, to understand how I could manipulate and create virtual worlds and environments better. For me, it didn't make any sense disconnecting the art from the computer science. It was always connected for me. I see them all as connected: the art side and the science and the philosophy and the music and the maths. My family are very artistic. I think that was a huge privilege that I had. My dad is a painter. He's quite a spiritual, monastic person in a way, a wonderful painter who paints his kind of spiritual experience of going to places and really, I think, pushing himself to the edge of nature. I took a lot from that. Growing up, we were both in London and also spent quite a lot of time on the Essex Marshes on the coast. We used to make garden sculptures. We had this giant statue in the back of our garden that we made out of rubbish. We used to get giant mushrooms and put them on pieces of paper to make spore prints, fill them with plaster of Paris, catch crabs and find crab shells and make little sculptures out of those and do all those sorts of cute things, find old shopping bags and weave them into rugs. My great love became photography when I was quite young. I think when I was 14 I got my first photography job, taking photographs of the kitchen at a local pub in Suffolk, which they hung on the walls. And I did some wedding photography and things. My brother also loved photography. So whenever we went traveling as a family, we'd both be taking a million photos of the same thing and competing to take the better one. He's actually become a professional cinematographer now, a brilliant DP. And my mum really helped me develop my visual language, I think, sitting down with me and going through all these thousands and thousands of photos, thinking which are the really good ones and which are the bad ones.
That's the hard bit in a way, is being able to curate and see which ones are the ones that are successful photographs.

SPEAKER_00:

Well, that was what I was gonna say. Like, let's talk about how technology got into your practice.

SPEAKER_03:

Working with AI and machine learning, it took a little while for photography curators to really embrace it and see it as photography. But for me, it kind of obviously is, in a way. It's all about the photographic data set, lens-based media, how we see the image, and then training neural networks on images: this area of computer science called computer vision, teaching computers to see the world. Computers have always been my art, really, all the way through school when I was doing exams, GCSEs and A levels in the UK. It was always for me about the interplay of technology and aesthetics, and creating form using digital software or code. I did a foundation at Central St. Martin's, which was very exciting. I actually wanted to go and study computer science and philosophy, which was a new course at Oxford, but instead I had a kind of mental breakdown. I got a little bit carried away and got very, very overexcited and then had to go to a psych ward, and had depression for many years. I mean, I'm an artist. That's what we do. I didn't want to pursue academia, I wanted to pursue art. It made total sense: go to art school. I could carry on doing maths and philosophy and music and things in my art. So I went to St. Martin's for foundation, and I spent a while mapping my Facebook pages algorithmically to turn them into these kind of giant squid-like organisms where all the connections between all of my friends were visualized in these graphs. Then I got into the Slade School of Art. It's connected to University College London. They take a very small number of artists, which is great because it doesn't feel as competitive in a way. Everyone is allowed to have very different, unique practices. The critiques and discussions we were having were more amongst our peers in many ways than with our tutors. I remember all the seminars at the Slade. I went and did an exchange in Chicago briefly. Chicago is where they invented glitch art.
So I was literally being taught by the people who had created glitch art. And I got to work with the Sandin Image Processor, which is like the first video synthesizing machine. I came back from Chicago incredibly excited that I'd found a real new medium. I'd met some of the people who had created the programming languages that artists were starting to use to make art. And I had one tutor, Carey Young, who is a brilliant conceptual artist, but has quite a conceptual, minimalist approach to art making. And she was very critical. In a way it was good to hear that, but it was hard to hear. I remember it did make me break down and cry. She basically said: think really hard about what you're doing and why you're doing it. Because in a way, what you've made is pretty, but they're screensavers. They're demonstrating what this technology is capable of doing, but they're not saying anything in themselves. What are you trying to say? What's your artist's voice here? And to be honest, I don't think I've found that still. I think that's probably a lifetime pursuit, finding what your artist's voice is. I mean, she was quite right. It was really hard to hear because I was really happy and excited. And actually, everyone in America was incredibly like, this is brilliant, carry on doing this stuff. I think art schools in the US, you know, SAIC is a brilliant art school, but you pay a lot more money to be there. So you have many more classes in the day, because you expect more for your money, which means that they've shifted the education paradigm to a more didactic approach. They're really teaching you how to do stuff: skills, craft, you know. The thing is, that was completely opposite to UK art schools, or European, especially German, art schools.
And we really value having a space to form your own practice, which can be incredibly difficult, but I think a fine art degree is one of the only undergraduate degrees in Europe you can do which is effectively research-led. It's all about developing your own practice without really being given much structural guidance at all. You are guided, in the sense that you get a lot of conversation and discourse, tutorials and individual guidance on what you might be trying to do and say. But in terms of being taught skills, there was very little, really. And I think studying art in America, you come out with a lot more skills. So you could probably get a paid job, and a lot more people will probably get paid jobs after studying art in America. But I think it's much harder to develop your own independent practice with much maturity unless you're really just given the space to go and make your work. And to be fair, I was in the art and tech department, already a more applied arts department at SAIC. And I think a lot of the people that I was working with probably had in mind that they were going to go on to become creative technologists and designers and work with technology with clients to make money. I wasn't working so much in the kind of studio practice fine art department. But it can be really tough to have four years of unstructured time where you don't really know what you're doing or whether you're doing the right thing. It can really break people down. And a lot of people come out of it deciding they don't want to do art. But quite a lot of people come out of that art school actually doing really interesting art. The tutors were really excellent as well. Carey was very harsh that time, but she was right. And I think it did make my work more considered and mature: what do I actually want to be doing and saying as an artist? What's my role as an artist? Why am I making art?
I could just go and become a computer programmer and make extraordinary images, but that wasn't really going to communicate things. And art can do so much more than that. And I think that's what I started to realize. It was just before my last year at the Slade; I'd just finished my dissertation. I think the Slade being part of UCL is a bit rare in London. It does have a slightly more academic side: alongside the art practice degree, you also have to do a sort of independent, research-led thesis. And my thesis was all about artificial intelligence, without me quite realizing it. It was more about the philosophy of creativity and consciousness and thinking, and whether machines could be thinking machines, what that would even mean, looking at early ideas in philosophy of different forms of intelligence and thinking and how we might approach that with a machine. Now, to be honest, I'm very against this discourse. I don't think machines can think. I think thinking is a human ability. I think intelligence is a human ability. So, in fact, artificial intelligence for me is the wrong language, and it actually adds to the fear-mongering and mystification in this field. I think calling it data science or calling it machine learning would be better. As humans, we use language to anthropomorphise these machines so that we can understand them. It was like, finally, I'd found a way of using art to communicate something about the future that we're heading towards, in terms of the conceptual, philosophical, moral issues within this field of AI that was about to explode. And to be honest, I felt quite paralyzed for a while. I was like, how the hell do I make art from this personally? Because it was really quite technically complicated back then. You know, you had to be a hacker, effectively, to work out how to use these technologies.
I had to download open source code from MIT, from Google Research, from different universities around the world, and change lines of code to make it do things that I wanted it to do. But to get that code to run in the first place, you had to open up the command line and install about 50 or 5,000 or however many Python libraries. It just got very technical: all the dependencies, all the layers of abstraction that had to be running on your machine to be able to run AI back then, because computers weren't really set up to be training AI models in 2016. So it was quite a fun technical challenge. But also, I was like, well, that aside, how do you make art about it? So anyway, when I was at the Slade, I thought about it for a while. And I actually had a brilliant, inspirational collaborator and artist called Roland Arnold. And we talked a lot about how to use these models. We came up with an artwork together called Auto Encoded Buddha. It's the first artwork that we really made with this thing. And that artwork is a Buddha, a little wooden statue of a Buddha, looking at an old school analog television with a CCTV camera looking straight back at the Buddha. So effectively, it became a feedback loop of a Buddha meditating and looking at its own reflection, mediated through technology. Now, we thought it would be quite fun to train an AI to try and create the ultimate image of the Buddha, having been trained on 8,000 images of the Buddha. We trained one of the earliest models, DCGAN it was called: Deep Convolutional Generative Adversarial Network is what it stands for. The computer failed to create images of the Buddha. And this for us was poetry. This, for me, feels like the queer art of failure, the Jack Halberstam book. It failed to create the Buddha, and that was beautiful.
So all you saw was the kind of noise, the artifacts, the guts of this neural network failing to create the essence of the Buddha from 8,000 images, because our data set wasn't big enough, our computer wasn't powerful enough, we didn't know what we were doing. So you had the wooden statue of the Buddha looking at a very early AI in its infancy, failing to create an image of the Buddha. And what happened looked like Rothko paintings. They looked like abstract expressionism. They were really beautiful. This kind of void, looking into the void. Anyway, when I came back to the Slade, I showed these works. I remember I got a good reception. And then I worked on some new pieces for my final degree show. One of them was a conversation called Closed Loop: an image-generating model having a conversation with a language-generating model. So it was this kind of conversation, a feedback loop between these two AIs. I made a few other pieces there as well. I think there was one where it's like dig whispering tweets in this kind of strange soundscape. And there was one where it's tech CEOs just speaking in numbers. These are all my degree show pieces. Anyway, after that, it was about going and making art. I was very lucky. I got some opportunities to show my art quite early on after that, because a few people saw my degree show and invited me to start doing exhibitions. I got an art agent, and yeah, there we go.

SPEAKER_00:

That's such an incredible story. I feel like I've just been on a whole journey. That was awesome.

SPEAKER_02:

I'm sorry, I've probably been talking for far too long, but I don't normally go that far back.

SPEAKER_00:

So it's quite fun to hear. Let's ground the conversation going forward in some definitions. Can you define what artificial intelligence is?

SPEAKER_03:

I think artificial intelligence is this field of computer science that suggests something that maybe we'll never achieve. It's always, that thing is not really AI, it's not really intelligent. There's this sense that we're searching for something that is equatable to human intelligence. And I think that's a real issue, actually, because this thing is very radically different from human intelligence. I think Jaron Lanier had a lovely way of talking about this. He's an extraordinary guy, quite eccentric, and a brilliant thinker around technology and technological futures, who coined the term virtual reality. He speaks very articulately about a lot of these things. His view on artificial intelligence was that, in a way, it doesn't make any sense to call it artificial intelligence, because it somehow suggests that we are trying to create something that might be equatable to or surpass human intelligence. But the thing is that this thing is built of human ability. It's built by us and it reflects us. It's not outside of us; it's an extension of us, it's a tool. And in this sense, it's a question of semantics, but it's also how we understand these technologies. And it does change the public's perception of them. So for cars, we wouldn't say a car is a better runner than a human. We would say a car goes faster than a human, but it's a tool built to extend our abilities. A submarine's not a better swimmer than a human, but it's something we can get in to go underwater, you know. And machine learning isn't thinking; it's not intelligent, it's not creative in the way that a human is, but it's built of our data, to process data far better than we could. I think machine learning is probably a far more pragmatic term to describe what we're talking about here: that we're creating machines that can effectively teach themselves. And again, that sounds quite human, but it's not that scary.
It comes down to actually very simple mathematics, just happening millions of times in parallel. So you give it the data, and then all these weights in the neural networks rearrange themselves and shift around to basically just try and plot data in a space. That's all we're talking about with AI: data in space. So actually, this is a metaphor I much prefer to intelligence: thinking about space. All AI models are basically taking this data and giving it vectors, giving it spatial coordinates, which correspond to how we might then read something out of it. Let's say that we are training an AI model for facial recognition, right? And we're training it to classify male faces from female faces. Let's imagine we're training it to classify images of Jake Elwes or images of women in their 30s. Let's say that the data set we have here is 100,000 images of faces. Now, 50,000 are male, 50,000 are female. If we run those images all through a neural network, effectively what happens is it scans through all of these images at the same time, looking for what they all have in common or not in common, and trying to basically plot a space of where to put all these faces, and then it's trying to divide that space into whatever kind of categorization you're asking it to do. So it will effectively just plot all these images in this multi-dimensional latent space. We can't really visualize this, because it's not two dimensions or three dimensions; it might be like 500 dimensions. And we don't really know why the neural networks are starting to decide that that's where it makes sense to look at that little bit of lip or that bit of red pixel data and put it over there. But effectively it's brilliant, this, because it's teaching computers to see the world. As humans, we're very good at doing facial recognition.
But actually, it's a very difficult task for a computer, because how does it know that that image from that angle and that image from that angle are both images of faces? So this is where this idea of computer vision using a neural network comes in. And it will plot these images in space. Now we call this a latent space. After the AI has been trained, the images no longer exist. We throw away the data set. All we have left is a mathematical space called a latent space. So what now happens is, if we put in a new image that it's never seen before, it will say that new image corresponds to this point in space. 512 numbers, let's say. That, with 90% accuracy, probably means it's an image of a woman. Or with 92% accuracy, probably means it's an image of a man. It's probably an image of Jake Elwes, depending on what words we've given it and what words we've asked it to output. Now, there's also what's called unsupervised machine learning, and it's a beautiful concept. Let's say we just give it 200,000 images of faces, and it's just raw pixel data. We're not saying this is male or female anymore. It will still plot all of these images in a space, but unsupervised. It'll say, well, those ones have something in common, and those ones have something else in common, right? Now it's just reducing these images to mathematics, and we give it a new image, and it will say it exists at this point in space. Now that's a beautiful concept, because for me it means there's a queerness to AI. Because in this latent space, this multidimensional space, this feels to me like queer theory. You can move through this space, and it's all these kind of continuous spectrums between male and female, right? So if we're not giving it the human label, or reading the human label out, then there is an inherent queerness to how AI is understanding and seeing the world. It's reducing it to these high-dimensional mathematical coordinates.
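The latent-space idea described above can be sketched in a few lines of Python. This is a toy illustration, not the models from the conversation: the "embedding" is faked with random vectors, and the dimensionality, cluster counts, and labels are all illustrative assumptions. It only shows the core mechanic: a new point is classified by where it lands in the space.

```python
import numpy as np

# Toy sketch (assumption: a trained network has already reduced each face
# image to a point in a latent space; here we fake that with random vectors).
rng = np.random.default_rng(0)
dim = 512                      # "512 numbers", as in the conversation

# Two fake clusters standing in for the two labels the model learnt.
centre_a = rng.normal(0, 1, dim)
centre_b = rng.normal(0, 1, dim)
faces_a = centre_a + rng.normal(0, 0.1, (50, dim))
faces_b = centre_b + rng.normal(0, 0.1, (50, dim))

def classify(latent_point):
    """Label a new point by which cluster centre it lies nearest to."""
    da = np.linalg.norm(latent_point - faces_a.mean(axis=0))
    db = np.linalg.norm(latent_point - faces_b.mean(axis=0))
    return "A" if da < db else "B"

# A new, unseen "face" landing near cluster A gets label A.
new_face = centre_a + rng.normal(0, 0.1, dim)
print(classify(new_face))
```

Interpolating between the two cluster centres gives exactly the kind of in-between points the conversation describes: positions in the space that neither training label quite covers.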
Now, this is why I think space is a much better metaphor when talking about AI than humans or human brains or thinking or consciousness, because we don't really understand what consciousness is. We just get ourselves into these circular arguments about whether it's thinking or not. It doesn't matter. What matters is that it's policing us and it's governing us. Who's building these systems, and who are they building them for? What tasks do they actually want them to do? So what's beautiful here is that this latent space contains all the potential of what could be. But there is bias, because if it's never seen an image of a black trans person, that black trans person will exist outside of the latent space of everything it's learnt about the world. So we can look at the in-between spaces, but we can't look outside of its boundaries, of its worldview, right? It's bounded by the data it's been given, effectively. You can also create a latent space for language, which is a beautiful idea. So back in 2013, I think, someone wrote a paper called word2vec. A brilliant Turkish artist called Memo Akten worked a lot with this. You can train an AI on, I think it was at the time maybe 100,000 words from the English language, which had been taken from data sets from Wikipedia, I believe. And effectively, from the usage of those words, it started to give them spatial coordinates. Now, this is so extraordinary, because it would reduce the word human to 512 numbers, which I love. It would understand how that word gets used, so it will plot that word in the space near the words person, man, woman, being, you know. And the word queer will be plotted near the words faggot and gay and LGBT and all of these other words. Engineers weren't necessarily thinking about which words were problematic. They need more humanities people working on these things, because of course this can be problematic.
But it can also be used to point out language bias. So Memo Akten did this beautiful thing quite early on, where you can move through this space and do arithmetic to look at where words get used. What I loved is that he just used arbitrary words rather than making a political call on it himself. He'd take the relationship from the word man to the word doctor, I don't know, really boring example, but this was an actual one, and apply it to the word woman; the equivalent it came out with was nurse. For man, it was doctor and surgeon; for woman, it was nurse: bias in gender. And the other way round: take woman to nurse, apply it to man, and it would come up with doctor. So it had this kind of bias baked in, which makes sense, because it's looking at the way that we use language. So it's human at every level. And as Jaron Lanier points out, it's not so much that it's going to replace us; it's more that it's going to make us mutually unintelligible to each other. With the perpetuation of fake news, AI is going to make it so difficult for us to distinguish reality from fiction that we might have complete breakdowns in communication. I think we're already starting to see this happening with social media. I think especially of the conflict happening in the Middle East; there's so much misinformation that people are now just fighting. They almost aren't able to communicate rationally anymore, because they are so embedded in Mark Zuckerberg's algorithm.
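The man-to-doctor, woman-to-nurse arithmetic described here can be reproduced with a toy example. This is not real word2vec output: the four 3-dimensional vectors below are hand-made assumptions, constructed so the doctor-minus-man offset lands exactly on nurse when added to woman. Real embeddings are learnt from corpora and have hundreds of dimensions, but the arithmetic is the same.

```python
import numpy as np

# Hand-made toy embedding (illustrative assumption, not learnt vectors).
emb = {
    "man":    np.array([1.0, 0.0, 0.0]),
    "woman":  np.array([0.0, 1.0, 0.0]),
    "doctor": np.array([1.0, 0.0, 1.0]),
    "nurse":  np.array([0.0, 1.0, 1.0]),
}

def analogy(a, b, c):
    """Solve a : b :: c : ? by vector arithmetic, word2vec-style."""
    target = emb[b] - emb[a] + emb[c]
    # Return the nearest remaining word to the target point.
    candidates = {w: v for w, v in emb.items() if w not in (a, b, c)}
    return min(candidates, key=lambda w: np.linalg.norm(candidates[w] - target))

print(analogy("man", "doctor", "woman"))   # → nurse, in this toy space
```

The bias the conversation points out lives in exactly this arithmetic: the offset between two words encodes how a corpus uses them, prejudices included.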

SPEAKER_00:

I think you raise a really interesting point about biases in the data sets. How do you think we create more equitable data sets in the future?

SPEAKER_03:

I think we need to program more uncertainty into AI. I think we need to de-binarize AI, effectively. Maybe the data sets need to be larger, but the problem is, if we go larger with data sets, this becomes a colonizing project. Who's creating this data set? If it's Google, and they're going around the world because they think they need to create data sets to represent everything, do those people from small indigenous tribes want to be in the data set? Maybe they feel that Google have no right to take their culture and train an AI on it. So, you know, we need to be careful here. What if we put a big pause on all AI production, and the only people allowed to work on AI development are people who are genuinely thinking about queering and decolonizing these systems? Then they get to choose the data and how the data goes in. I think that would be great. I think that's a start, and that's happening a bit. I've got quite a few friends working at Google DeepMind who are brilliant at thinking queerly. I think the techno-activist mode of dirtying data sets is quite interesting. So it's like, if we give Facebook and Instagram and Google a bunch of bad data that doesn't tell a true story about us. Like click on the wrong advert, you know. What happens if you click on the wrong ad? That will confuse their AIs and give them kind of fake training points. And that could actually help with bias. There's also this idea of federated AI, where there's no centralized data set. If we could decentralize how AI is made, the idea is that we all have our own personal little AIs that are being trained on our own personal data. And then the weights of those neural networks all come together. So Google don't actually get my data; they get the result of my data being trained locally on an AI system. That's quite an interesting concept. So there are a few ways of tackling it, possibly, but I think my main one is: we need more queers.
We need more queers and we need more people from marginalized oppressed communities building these systems. And as an artist, I kind of feel like that's part of my role now. So I'm like just wanting to get more and more people in the conversation and thinking about these things and thinking about how they can enter in.
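The federated idea mentioned above, weights rather than raw data leaving each device, can be sketched minimally. This is a simplified FedAvg-style illustration, not any real platform's implementation: the linear model, the participant count, and the synthetic data are all assumptions made for the example.

```python
import numpy as np

# Minimal federated-averaging sketch: each participant fits a tiny linear
# model on private local data; only the resulting weights are averaged
# centrally. The aggregator never sees anyone's raw data.
rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])   # the pattern hidden in everyone's data

def local_train(n_samples):
    """One participant fits a least-squares model on private local data."""
    x = rng.normal(0, 1, (n_samples, 2))
    y = x @ true_w + rng.normal(0, 0.01, n_samples)
    w, *_ = np.linalg.lstsq(x, y, rcond=None)
    return w                     # only the weights leave the device

# Five participants train locally; the centre averages their weights.
local_weights = [local_train(100) for _ in range(5)]
global_w = np.mean(local_weights, axis=0)
print(np.round(global_w, 2))
```

The averaged model recovers the shared pattern without any raw data being pooled, which is the privacy argument the conversation gestures at.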

SPEAKER_00:

Yeah, absolutely. I think as a woman, one of the things that I've been thinking about when I see AI in the news and AI art, especially recently, has been the rise of deep fake pornography, for example, which is something that you've been addressing in your work since 2016 with your project Machine Learning Porn. So I'm wondering if you think these sorts of issues will be solved or exacerbated with the development of AI in the future.

SPEAKER_03:

This is a question of humanity, really. I don't think it's a question of the machines. It's a question of who's going to build them. And if it's only being built by men in America who are maybe quite misogynistic and patriarchal and homophobic and queerphobic and xenophobic and classist, or whatever other prejudices they might have, the AI is also going to be built patriarchal, misogynistic, classist, ableist. You know, it's going to inherit the prejudices and biases of the people building it. And the problem is systems of power here. So it's actually more of a political problem, I'd say, than a technical problem. In terms of deepfake porn, it's obviously really horrendous and unethical, and it can be incredibly traumatic for the targets of such technology. On the one hand, I want all of this stuff to be open source and accessible, because I think it's important that anyone, whoever you are, can work with this stuff; you don't have to be in a tech company or an ivory tower. I think that's vital. But at the same time, it obviously means that anybody can just go and put a celebrity's face on a pornographic video. This is what governments right now are trying to work out and grapple with in policy. As an artist, what's lovely is that you don't have to have the answers. You can just pose the questions. And I honestly feel that has been the most brilliant privilege of working in this field as an artist. I guess I've found one way of not making much money from artificial intelligence: making art with it. But as an artist, you can stand back and have that critical distance. You don't have to create something that's purposeful or functional. As a scientist, you kind of have to make something that has a function, that has a reason, that has a purpose you can back up, that you can say why this is good or bad. I think for me, I don't have to have that at all.
I can just get this AI thing that a scientist has built, almost like an appropriated ready-made, change a few lines of code and see when it breaks. And when it breaks is poetry. Could be poetry. It could actually reveal some political or ethical issues or biases that the creators hadn't thought of. And I've seen that a few times now, where the artists have influenced the technological and scientific research. I started off looking much more at this AI stuff in art in terms of the sort of metaphysical and epistemological questions, the kind of big existential "is this thing thinking, is this thing creative" stuff that maybe now I'm less interested in. I think my thinking shifted and moved more and more into the human side. Who's building these systems and who are they building them for? Where's this bias coming from? What would a queer AI be? What can AI teach us about drag performance? And what can drag performance teach us about AI? As an artist, it's great that you can really ask the silly questions and not be responsible.

SPEAKER_00:

I think it was Kutlug Ataman, a Turkish artist, who said that artists need to ask questions. That's the purpose of the artist: it's not to be didactic or moralistic, it's to pose these questions and have your audience reflect on them. And I think what's so interesting as well is art history being a circle, in a way, because I'm hearing you talk about AI as a ready-made. And I think now it's just a time where we are moving into a new space in art. Do you think that we are moving into a space where now we'll have post-artificial-intelligence art? Do you think we're moving into a space where there will be a definitive pre- and post-adoption of AI in art?

SPEAKER_03:

You made me think of a few things there, actually. The first is the question of whether art just poses the questions rather than having to be didactic or moralistic. I think art can do many things. It can actually be great for art to be really didactic, because again, we don't necessarily have responsibility or accountability, so we can say things without evidence. It feels like it's more than just a straightforward tool at all. Possibly one of the most complex tools that artists have had access to. So the question then becomes: what do you say with it? What's your message? What are you doing with it? We have this magic. I mean, it is magic. Any other period in history would call this thing magic. I think there's real magic around us all the time. I don't really believe in having to create narratives of gods or supernatural things happening when planets align. But I believe that there's real, genuine magic that happens all around us in nature, in art, in the complexity of the planet, even on a quantum level. There's a lot of things we don't understand and maybe never will understand. So AI does display that kind of magical emergence, right? That you really do not know. And I think that's what's sad now, slightly, that we're all beholden to big tech companies when using these AIs. For me, I was able to use open source code and run it myself on my own data and literally change whole chunks of the code. Now you don't even have access to that code a lot of the time. You're literally just using OpenAI's platform and typing into it. It's lost that sense of really interfacing with this magical thing, which is a neural network, computers processing things in huge parallel. This sense of emergence in art isn't new. I think you're quite right. It's got a whole history. 
We can trace it back to fortune tellers and witchcraft, and to people who thought radically differently and could really read into the universe using different tools, using emergence, using tarot cards. In more conceptual art history, I'd probably trace it back to Duchamp. A better artist, I think, maybe is Nam June Paik. Nam June Paik was a wonderful artist who embraced Eastern spirituality and magic alongside Western conceptualism and had a sense of humor. That is the best combination. He did that in beautifully poetic ways, where he often worked with emergence and randomness, but often telling jokes as well and not being too serious about it. But that means that his work transcends its time. Even though it's completely embedded in the technology of its time and hacking that technology of its time, he also played with timeless concepts of magic and humor and poetry and spirituality and emergence. This sense of randomness that's almost too complex for the human to understand: that's when I feel like it becomes an emergent property. So I feel like AI is doing that all the time. And I think a lot of artists probably are a bit scared of that, because they don't really know how to interface with that, how to work with that. And I think I'm very lucky, in a way, that I've been coding these systems since their inception. So I'm not scared of them. I'm actually quite bored of them. So yeah, your last question, the post-AI art bit. Yeah, I hope so. I'm bored of it. You probably are. I think most people are a bit bored of it. I don't see myself as an AI artist. I think that's firstly a stupid term, because I'm not an AI. I would probably say I'm an artist, just an artist. If I had to be more specific, then I'd probably say I'm a media artist, maybe, or a video artist. Some people have started to frame me inside movements like glitch art or surrealism. 
I like seeing how curators might recontextualize some of my work alongside other brilliant artists that I love or don't know about and then discover through shows. But yeah, I'm hoping we're reaching a time where we're no longer so excited by AI that it's AI art.

SPEAKER_00:

I mean, it's been massively in the public consciousness recently. Years ago, there was that Jason Allen work that won first prize at the Colorado State Fair. It was for a special digital category, but still, it won first place. It was all over the news and people were really up in arms about it. So I wonder, do you think there's a line between what is and isn't art when it comes to AI image generation?

SPEAKER_03:

Well, that's an interesting question. I mean, really that question comes down to what is art.

SPEAKER_00:

Absolutely.

SPEAKER_03:

And we can't really... we don't have an answer for that. Everyone would disagree on the answer. I would probably say, for me, art is about self-expression. It's about expressing something that might be inexpressible by any other means. I think we also often need artists to channel something that we're feeling around us. I think often it's an energy thing. Fundamentally, art has to be human. Because what's the point otherwise? And maybe, you know, people will argue, but what about an AI making art for another AI? And I'm like, yeah, that's an interesting question, maybe. But it's probably not going to be something that a human will appreciate or understand. And the thing is, an AI has no intentionality in a human sense. So it wouldn't ever have a logical reason to do that thing for another logic board. Logic boards don't really want to do something that's completely stupid and meaningless, like make a pretty picture. Maybe it will send another AI a beautiful bit of code that humans could never see the beauty in. Maybe that's art for an AI. But for humans, the question becomes: is it about the art-making for the person, or is it about receiving the art? And if we're talking about art as something that someone might have on their wall to make them feel happy, then I guess, absolutely. But for me as an artist, the question is: where's the human intentionality? What are you saying? If that's just a pretty thing on a wall, fine. But for me to have any role, any purpose, is for me to try and communicate stuff. Or for me to try and articulate things that are running through my human computer. This is why I'm thinking a lot at the moment about hopeful apocalypses. An apocalypse can be hopeful. What's a sort of messy utopia? What's a queer, messy utopia that's also in a hopeful apocalypse? I don't know. I mean, we're hurtling towards it. People living in the industrial revolution were living through an apocalypse. 
You know, the world was changing in ways that they couldn't possibly imagine. So right now we are living probably in another one of those sorts of times, and they're going to keep on happening and they're going to keep on accelerating. And I think we just need to make sure the right people are in power when it happens. And that's looking a bit scary right now. But I like to keep hope and joy. That's important for my art.

SPEAKER_00:

That's such a breath of fresh air right now, especially with everything that's happening with the climate, right? I think so many people get bogged down in the scary and sad side of it. But I like that idea of hope and I like that idea of moving forward together. What you were saying earlier about AI art and having something on the wall and whether someone thinks it's pretty: I'm interested in this idea of an aesthetic or conceptual framework emerging by which we can judge or interpret AI, which brings me to a piece of yours called AI Interpreting "Against Interpretation" (Sontag, 1966), which I just think is incredible. Next to the Zizi Show, probably my second favorite work of yours. Or maybe they're tied, I don't know. But I think it's so interesting to have a program that's trained on data and biases go and interpret a piece of critical theory. That's such great satire, because a lot of the time we think that our opinions, especially about art, are our own, but actually they're influenced and informed by the biases around us without us even realizing; everything that we say and think is interpreted through the lived experience that we each have as individuals. Do you think there's going to be an aesthetic framework that's gonna come about anytime soon?

SPEAKER_03:

Oh my goodness. That's a fascinating question. Just to quickly talk about that artwork you mentioned: this is a more recent piece of mine, and it's actually the only artwork that I've made using large diffusion models. I did use an open source one; I didn't want to use one beholden to a big tech company. I think for that work I used Disco Diffusion, an open-source diffusion model. Effectively, all my earlier artworks working with images created by artificial intelligence were using an algorithm called the generative adversarial network. You don't really need to know the difference. To be honest, it's a very similar thing. DALL-E or Midjourney are built on top of those things. They start with noise and then they try and create a new image that could exist at a point in latent space. That's all these image things are doing, right? With my early ones, to go back to this metaphor of the latent space, you can pick a point in latent space and say, create me a new image here, between man and woman. What's nice about these diffusion models, though, is that they are also conditioned on language. So they understand this space in terms of images, but also in terms of words. So instead of the input being 500 numbers that could correspond to the place in between a man and a woman, or to an elephant eating a cheeseburger, you write down those words and it can interpret that for you as well. And then it uses noise, they all use noise, and it will start to get better and better and better and create this image. So I guess for me, it just seemed like an art theory joke to get an AI to interpret the seminal Sontag essay, Against Interpretation. It's just a joke: an AI interpreting Against Interpretation. So there you go. I think maybe Sontag would have found it funny. I hope so. I love Sontag. 
I think what's interesting here is that there are lots of levels, in a way, to that piece, but just to explain its form: it effectively was taking a sentence from Susan Sontag's essay that might say, Plato's theory of art focused on the forms, something like that. And then it will create an image of Plato, maybe a sort of big Greek statue of Plato lying down on his side looking really depressed, with a kind of huge balloon thing saying "art" above it, and maybe a Rothko painting in the background. It's just so naive. And that's lovely. So the AI is interpreting Susan Sontag's words. And Susan Sontag, as we know, hence the title, was very against interpretation, in the sense that she didn't like people reading too much into things. Like, can all these white men please stop reading too much into things? In a way I've kind of flipped that, and maybe done a bit of a Derrida move on it, to literally unpick her words and turn them into images, in satire. It's devoid, really, of human meaning, because there's no human interpreting. It is this artificial thing, based on humans, trying to interpret her literal words into an image. And then I've gone the natural surrealist step on, which is to reinterpret that image back into words. And this is kind of like the surrealist game of telephone, or exquisite corpse, you know, turning one thing into another thing and seeing how far removed it becomes from the original thing. So it will turn that original statement, Plato talked about art in terms of forms, into, you know, Plato looking depressed with a Mark Rothko painting. And then it will turn that into, I don't know, "this image depicts the failure of the Western artist" or something, you know. And it's always polemic, which I love, because this was one thing that I did. 
Normally I quite like to leave these things quite uncurated and unconditioned, but I actually conditioned that large language model. It was a GPT model, but an open source version, which I was able to condition to have a bias towards polemic language. Because Sontag was one of the ultimate polemicists, I kind of made it this polemic AI. So it was constantly saying really outrageous things, which were hilarious, because it shows just how wrong this AI can be. Susan Sontag would be talking about why we shouldn't be interpreting art, and about the power of the form and the symbol in art, and then it will create an image of this kind of black and white abstract form, and then it will say, why are there so many swastikas? And you're just like, what? And it kind of was constantly getting stuck in these little feedback interpretations of itself. So that was that piece. What do you think? Why don't you talk on it for a sec? I'm interested.
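
[Editor's note: the "pick a point in latent space between man and woman" idea Elwes describes can be sketched in a few lines of numpy. This is a minimal, hedged illustration, not code from any of Elwes' works; the 512-dimensional latent size and the "man"/"woman" labels on the random vectors are assumptions for illustration.]

```python
import numpy as np

def slerp(z1, z2, t):
    """Spherical interpolation between two latent vectors.

    GAN latents are drawn from a high-dimensional Gaussian, where samples
    concentrate near a hypersphere, so interpolating along the sphere
    (rather than a straight line) keeps intermediate points in the region
    the generator was trained on.
    """
    z1n = z1 / np.linalg.norm(z1)
    z2n = z2 / np.linalg.norm(z2)
    omega = np.arccos(np.clip(np.dot(z1n, z2n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return z1  # vectors are (nearly) identical
    return (np.sin((1 - t) * omega) * z1 + np.sin(t * omega) * z2) / np.sin(omega)

rng = np.random.default_rng(0)
z_a = rng.standard_normal(512)  # e.g. a latent that a generator decodes as "man"
z_b = rng.standard_normal(512)  # e.g. a latent it decodes as "woman"

# A point halfway between the two concepts in latent space;
# a trained generator network would turn this vector into an image.
z_mid = slerp(z_a, z_b, 0.5)
```

In a text-conditioned diffusion model, as Elwes notes, the words themselves replace the hand-picked latent coordinates, but the underlying idea of navigating a learned space is the same.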

SPEAKER_00:

I think that as AI becomes more sophisticated, because every time we use it, it gets smarter, there will be a framework that comes about. I just don't know what that is right now. Because when I look at a lot of, not necessarily AI art, but images that are being created with diffusion models, in my own personal opinion, I think a lot of them are very ugly. But that's so subjective, right? That's true for any sort of medium. There was a trend of a bunch of people being like, oh, what would my pet look like in a movie about itself, make it a Pixar movie. I wonder if that has trained the dataset in that way.

SPEAKER_03:

You find them ugly because they're too generic, because they are trained that way. In a way, this is the problem with AI: it's always biasing towards normativity. If the center of our latent space is the most derivative, generic idea of what art could be or looks like, then yeah, models that have been trained on billions of images of artworks from the internet are going to trend towards normativity. And I think for you, as someone who works in the arts, you're constantly looking for something that inspires you, that feels more out of the box, that feels more original, or is doing something provocative or new, different, critical. I don't know. A lot of people haven't been exposed to as much art as you and I, probably. So for those people, I guess, it probably is, in a way, aesthetically perfect. A lot of this AI-generated art is aesthetically perfect. It's learnt from how humans compose images, how humans compose and take photographs of the world: the rule of thirds, things that make us feel good because they exist in nature, the three-to-two ratio and the golden mean, natural harmonics, and how we use those things in the composition of images. A lot of painters will do that subconsciously or consciously. A lot of photographers will do that subconsciously or consciously. If we train an AI, or let's say that OpenAI and Elon Musk steal millions of images from the internet without really asking permission of a lot of artists who have spent a lifetime developing their own visual language, which I'm sure a lot of those technologists have not really thought about, then of course that AI will create very good formal compositions. And then someone might want to print one of those off and put it in their house. And that's replaced the human artist and the human labor. And that's very problematic. But I guess, you know, it's also nice that art can become more accessible and democratized. But the question then is: what's the art for? 
Is it for decoration, so that someone can make their house look nicer? Or is it to make us think? Is it to try and communicate something? I don't know. I mean, I don't have the answer. It's different for every single person.
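
[Editor's note: the "center of latent space is normative, the edges break down" idea has a concrete counterpart in GAN practice, the so-called truncation trick: scaling latents toward or away from the average latent. A minimal numpy sketch follows; the zero mean and the psi values are illustrative assumptions, not taken from a specific model.]

```python
import numpy as np

def truncate(z, psi, mean):
    """StyleGAN-style 'truncation trick': pull a latent toward the average
    latent. psi < 1 biases output toward the normative centre of the
    distribution (safer, more generic images); psi > 1 pushes it out past
    the training distribution, where a generator starts to break down in
    glitchy, revealing ways."""
    return mean + psi * (z - mean)

rng = np.random.default_rng(1)
z = rng.standard_normal(512)        # a random latent vector
mean = np.zeros(512)                # the average latent (illustrative)

z_generic = truncate(z, 0.5, mean)  # closer to the centre: normative output
z_glitchy = truncate(z, 1.8, mean)  # out toward the edges: breakdown
```

The single scalar psi is, in effect, a dial between the "aesthetically perfect" images most users see and the divergent outer bounds that interest artists like Elwes.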

SPEAKER_00:

And that's what I was thinking as well about the AI images that are being created for decorative and commercial purposes. I actually prefer when it fails. Like you were saying, the queering of the dataset: when it fails, that's actually what I like. So when I was messing around with AI diffusion models, I did, like, Clement Greenberg eating a hot dog or something like that, or Anthony Bourdain talking with, I don't know, a random person, right? And they don't know what these people really look like. My mom was messing around with AI as well. She loves it. And what she did was put my image in and tell the machines to make me in the form of a Matisse, or make me in the form of a so-and-so. And none of them really looked like the work, right? They weren't able to capture that human hand, or the human aspect of it.

SPEAKER_03:

I think it totally makes sense that you wanted it to do something else. This kind of comes down to what we want from these systems, and that kind of comes down to the individual person's experience and view of the world. What are they trying to do with the way that they decorate their house? What's the function of art for them? Do they want to go to museums and look at things that make them think? Do they want to go to galleries and look at beautiful images? Do they want to just have something on their wall that makes them feel like their space is more homely? I think for me and you, we don't want conformity. We're interested in divergence, in when this image might not be in the center of the latent space but shoots out to the outer bounds of the latent space, which is where the AI is less sure of things and where the AI can actually break down. And that's when you actually get to see the conceit of the AI itself, when you start to see the artifacts, the messiness of this AI trying to construct a perfect image but failing. Interestingly, it really struggles with hands. It's just like painters. Painters have always historically struggled with hands; it's such a complex form. Seen from different angles and different sides, the hand is a very difficult thing to make look lifelike. As humans, our brains have got very good at seeing hands, at understanding our own hands, so that we don't cut our fingers off. But actually, it's quite a complex form. When these things fail, I totally agree, that's when you see the guts of the neural network. Actually, one of my favorite examples is just silly and funny and political and taking the piss out of itself: RuPublicans. Have you seen RuPublicans?

SPEAKER_01:

No, what's this?

SPEAKER_03:

Oh, this is brilliant. This is brilliant. So this is somebody on Instagram who decided she'd start using either DALL-E or Midjourney, I suppose, to basically put Republican politicians in drag. So it's Ru, like RuPaul's Drag Race, plus Republicans: RuPublicans. Brilliant. And you see Steve Bannon as this kind of fat, beautiful, Las Vegas drag queen. And it's just hilarious. So yeah, I think these things can be used brilliantly, politically, and can point a lot of things out. I think we just need to keep on staying radical with them and doing interesting things with them, because otherwise we would just fall into normativity. And then, as humans, we'll get bored.

SPEAKER_00:

You know, it's interesting that you bring up the RuPublicans thing, because even though I haven't seen it, one of the things that had been kicking around in my brain as I was preparing for this episode was the fact that both drag and AI have been in the public consciousness an insane amount recently. They've been very politicized; they've become very polarizing. And I was wondering if you could speak to that. Where do you think the crossover is? Why do you think people are getting so up in arms about both drag and AI?

SPEAKER_03:

There's a link for me, for sure, because I'm researching both of those fields. Just brief context: I've been working with AI and machine learning now for probably coming up to 10 years, a while before it was so trendy and cool. And I've also been very much embedded in the drag community in London for probably about five years. My boyfriend runs drag venues in London, and my boyfriend's partner, we're in a very 21st-century polyamorous relationship, is my main drag collaborator; we work together, and he's sort of the matriarch of the London drag scene. It is funny, because both of these things have become so topical now. And I feel I just kind of organically stumbled upon them, because they both were things that massively appealed to me, my brain, or whatever. They are both controversial right now. I think they both represent things that scare people. I think drag and queer performance and queerness and transness and non-binariness threaten older people in the UK, especially, actually, second-wave feminists, interestingly, who I think are afraid that the world is radically changing, this sense of apocalypse. And for them, there's this idea of "gender ideology": that, you know, a lot of our generation don't really believe we have to conform to this gender thing that's thrown on us without us really choosing it or needing it or wanting it, and all of the oppressions that come with it. Let's get rid of it. In terms of AI, there's a lot of fear for similar reasons. Because it again feels like we're hurtling towards this apocalypse that we don't fully understand, that these big tech companies are running too fast, and we don't really understand what they're building and how these things are working. Most people just have no idea what AI even is. It's literally this magic thing in a computer that is starting to govern us. Like, that's scary. Why? 
You know, what's been really interesting for me, showing my AI drag, my deepfake drag show, the Zizi Show, in the Victoria and Albert Museum in London, is that the curators have reported to me that the people who often sit with it the longest are the kids and the people over the age of, like, 65. And that was really interesting to me, because I kind of thought that our generation would be the ones that were really intrigued. And I think, you know, they're intrigued, but they also kind of get it and then move on. The thing is, the kids are very excited because they're like, oh wow, look, drag queens dancing. And then the older people who come and see the show, I think they sit there for a while and they watch these drag performers, the cabaret, the performance. We've got things in that show that they're very familiar with: the fashion dancing, the lip syncing, the Shirley Bassey, the cabaret, the Beyoncé, the Bowie, whoever else we have. And they'll sit there watching that whilst at the same time kind of having these existential crises, possibly. But also it's very familiar and very joyful. So what's going on here? It's almost a big juxtaposition, a contradiction of things. So I think that's why they often sit with it for 45 minutes. You know, it's a half-hour loop, and they'll sit and watch the whole loop. I think it's almost like a meditation space to think about where we're heading with AI, but using drag. And why is it using drag? That's what I want people to ask. It's using drag because drag will engage those people. Because drag is accessible, it's playful, it's joyful, and it's performance. And people like that, people get that, as opposed to this very cold, number-crunching machine that we don't really know how to wrap our heads around. But also, drag is the ultimate form of gender non-conformity, I think. 
And it's exploring performance and performativity in gender through something that's really fun and joyful. Drag kings as well as drag queens: we're not just talking about gay men wearing dresses here. We're also talking about women performing masculinity, or a non-binary person performing an ant or a clown or a monkey or a fish. I mean, as long as there's some form of gender within this, looking at how your gender relates to the character's gender, and often working in underground, working-class popular performance, you know, cabaret performance, then for me it's very much in the area of drag. And I think drag has expanded massively now. We're not just talking about drag queens; we're talking about drag clowns, drag monsters, and drag things. My friend is a drag barbarian, and other friends are drag clowns, you know. Oh my goodness, what they sang, it was so sad. Basically, instead of "I'm a creep," it's, oh, what is it? "I'm a fruit, I'm a tomato." Those were their lyrics. And it was so beautiful, because it was about being queer, and it's all about feeling judged for not being a vegetable.

SPEAKER_00:

That's what it is, right? Drag is this abstraction of gender and all the ways in which we can exist in the world. And I think, for me, when it comes down to drag and AI and that kind of equivalency there, not that they're necessarily the same thing, this kind of moral panic that we're having about both of these things comes down to the unknown, the fear of the unknown, and this willful disengagement that many people are performing with both of these things, because it's scary and it's different and it's new.

SPEAKER_03:

Yes, or willful negative engagement as well, where a lot of, you know, right-wing activists seem to be picketing my friends who are literally reading books to kids. The world's gone mad. Anyway, I still blame Mark Zuckerberg. I mean, I know maybe we shouldn't blame the individual, we should blame the systems of power, but I do think that he has reprogrammed our brains with the help of some of the best minds in the world. And I think people do make a difference. And I do genuinely think that he could change the engagement algorithm at Facebook to prioritize compassion in the world over conflict. So, going back to the drag, because working with drag in AI had, for me, these multiple layers of meaning. Drag is already a constructed identity; it's playing with this kind of exaggerated idea of gender performance. And then it pushes that. You know, you have the real human, and then we have the drag character on top of that, doing a drag act, often impersonating another person, so you have another layer there. And then on top of that we have my layer, which is turning them into deepfakes, creating digital avatar versions of these performers in a virtual space generated using artificial intelligence. So we have a construction on top of a construction on top of a construction, you know; it keeps going. And then I hope to deconstruct the whole thing by showing when this thing fails and breaks down, to make people less scared of it, to make it funny, to make it silly, to make it more accessible again. So for us, we were talking about deepfake ethics quite a few years ago, actually. And it was fascinating to me that it became such a topical issue with the actors' strike and the writers' strike, with extras saying that they don't want to just be mapped by a computer and then used without their permission for, like, all of time. Oh, what, you mean you actually want to be paid to do acting work? 
So, yeah, I mean, we were talking about this quite early on. How do we make sure that all of our drag performers are properly paid for their data, that they are fully on board and understand the concept of how the AI is going to take their form and recreate their image, and that they could withdraw from this project at any point if they wanted to? The only people allowed to reanimate or re-puppet each other's forms are people within our community, people within the cast, people who are seen in the project as well. So the people doing the movement for the project are also the people who are the deepfake characters. This was important for us, because I didn't want it to be anyone coming into the Victoria and Albert Museum, waving their arms around and controlling the body of a drag queen. That, for me, felt really, really problematic, quite fascistic, actually. You know, who has permission, really, to control whose form? And I think that's the crux of the issue of deepfakes. I'm not giving someone permission to make deepfake porn of me. And if they do, that's a huge violation, really. Often people haven't considered why they're doing this thing, why they're building this thing, and no one can really fully consider it. That's the thing. It's always gonna have bias. But maybe we can still program good bias in. Can we program in more decolonizing and queer bias? I don't know. It's always gonna have bias. All brains have biases, and artificial neural networks trained on large amounts of data are going to have those biases too, depending on what data they're given, in the same way that throughout our lives we have experiences and resources that form our own worldview. Of course it's got bias, but it doesn't have to be negative bias. So I guess this is a big question here. For us, it's got a positive bias towards drag performers. Because drag is, yeah, again, the gender thing. 
It's talking about class, it's talking about gender, but in a joyful way, and we're reintroducing that back into the system. It's fairly easy to say, okay, there's gender bias in this system, or there's racial bias in this system. But class bias, that's much harder. How do you see class bias? That's much more insidious, much more complex, interwoven into society, and it needs a lot of unpicking by a lot of people with humanities degrees, not just coders. And also decolonization, colonial bias: how is someone on the other side of the world even supposed to engage? I'm talking a lot to Indian queer activists at the moment. How is some brilliant, genius queer thinker in a village in India, maybe a Hijra person who is starting to think about AI on the other side of the world, even supposed to start doing that when programming languages are written in English? And not just the programming languages, the documentation for those languages too.

SPEAKER_00:

No, that's an excellent point about the language. Colonization really comes back there. It's so insidious, it's in everything we do. And I'm particularly cognizant of that living in Canada, right? It's in everything. But to go back a little bit, let's talk about the Zizi Show. What is the Zizi Show? Where does the name come from? How did it come about?

SPEAKER_03:

The Zizi Show is my deepfake drag cabaret. It's effectively the project that I've been working on for the last few years, and it's about queering artificial intelligence. It started with a project called Queering the Dataset, and this was back in 2019. I guess it was around the time that these image generators were starting to create hyper-realistic images of fake faces, and it was the first time that was really happening. The model was called StyleGAN. And what occurred to me was that these faces all felt very normative. Which I guess they would, because at the time I was hanging out mostly with drag performers with deep blue foundation and bright red wigs or whatever. So it's like, well, you know, they don't look alien enough. What's going on here? And digging down, you realize that this system was initially trained on a dataset called CelebA-HQ: celebrity faces in high quality, largely American celebrities. It was literally a whole dataset of just American celebrities, and it was a standardized face dataset that American universities and the government used to test their facial recognition systems. Now, of course, there's going to be a huge issue with diversity if you're only training on images of American celebrities. It's probably going to be quite white. Interestingly, the engineers then tried to improve on that, for the model I was using, by introducing a dataset called FFHQ that took royalty-free images of faces from Flickr. This is also hugely problematic in terms of colonization, because a lot of those images were taken by American tourists who had gone around the world taking pictures. So for me, it was like, okay, well, obviously there are issues with these datasets. So creating new images out of these datasets obviously carries those issues too. 
The first step of this mission I've been on was to take a thousand images of drag kings and drag queens and drag monsters and inject them into a neural network that had been trained to create new fake faces from having seen a hundred thousand images of faces. So I only injected a thousand new images, but it completely shifted the weights in that neural network, from a place of normativity into a place of otherness. You can imagine the latent space: the whole thing just moved. The normative core just didn't exist anymore. The whole thing shifted. For me, that's almost like a queer utopia. That's this idea of the potentiality that AI has to represent a queer utopia, a queer otherness, a different future, a different vision of humanity or of us. Or, more politically, just breaking their system. You know, this idea of obfuscation, of dirtying their data, of making it not make sense anymore. So effectively, the AI would generate these perfect faces, and then after I gave it these images, the whole thing started to break down. A lip turned into a giant eyebrow, some people had blue faces, whole faces changed into these multicolored, crazy, sort of fractal images, and the eyes would appear and disappear and pop in and out and do these really strange, surreal things. This work actually got shown recently next to Max Ernst's paintings in the Max Ernst Museum, contextualized within surrealism, which was quite nice, by the great curator Patrick Blumel in Germany. So I guess that's where the project started. You can make videos from that. You can move through this latent space. This is an extraordinary idea because it's a non-linear video, a video that can move in any direction at any time. It's just moving through these maybe 512 dimensions, and one of those dimensions might loosely correspond to eyebrowiness, though really no single dimension maps neatly onto one feature. 
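For the technically curious, the "non-linear video" idea can be sketched in miniature: wander between random waypoints in a high-dimensional latent space, and decode each point into a frame. Everything below is an invented toy, not Elwes's code; the waypoint count, step counts, and the smoothness check are illustrative assumptions, and in the real work each point would be fed to the trained generator.

```python
import math
import random

random.seed(1)
DIM = 512           # StyleGAN-era latent spaces were typically 512-dimensional
STEPS_PER_LEG = 30  # frames interpolated between successive random waypoints
LEGS = 4

def random_point():
    return [random.gauss(0, 1) for _ in range(DIM)]

def lerp(a, b, t):
    """Linear interpolation between two latent points."""
    return [x + t * (y - x) for x, y in zip(a, b)]

# Wander between random waypoints; in the real project each point on the
# path would be decoded by the generator into one video frame, giving a
# video that can move in any direction at any time.
waypoints = [random_point() for _ in range(LEGS + 1)]
path = []
for a, b in zip(waypoints, waypoints[1:]):
    for step in range(STEPS_PER_LEG):
        path.append(lerp(a, b, step / STEPS_PER_LEG))

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Consecutive frames stay close together in latent space, so the decoded
# faces morph smoothly rather than cutting between unrelated images.
max_jump = max(dist(p, q) for p, q in zip(path, path[1:]))
print(f"{len(path)} frames, largest per-frame jump: {max_jump:.2f}")
```

The design point the sketch illustrates is that a "video" here is just a path: near the middle of the space you get typical faces, and the further the path strays toward the outer bounds, the stranger the decoded images become.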
So I would just tell it to go on these random paths through this mathematical space that it's now learned. You know, when it gets to the outer bounds, the face almost disappears completely and does really strange things. Right in the middle, it's probably biased towards the more drag-queeny, to be honest. So in the middle of that space you get quite a normal-looking picture of a quite traditional RuPaul drag queen, maybe, and then it gets weirder and weirder the further out you go. Then the next obvious step was to work with my drag community friends in London directly. So the question became: how do we now get this AI to perform? The obvious thing was to use what was quite recent at the time, a conditional generative adversarial network, which was the kind of earliest version of a deepfake. It basically says: instead of starting with random noise and a random coordinate in a space and creating me a new queer drag face from this latent space, start with a skeleton. So we basically train an AI to get really, really good at turning skeletons into images. It's now got two kinds of data, right, skeletons paired with images, rather than just images of faces. And it will again build a latent space, and it will say, okay, this type of skeleton corresponds to an image of this drag queen doing this thing. And the machine learning will get really, really good at turning those skeletons into a body that could have existed. It was important that we made our own data. We filmed drag queens, in lockdown, in drag venues, moving around the stage. So let's take Lily Snatch Dragon, an amazing drag queen. She's actually a female, AFAB drag queen and a brilliant burlesque performer. We filmed her moving through the space, and the AI will learn Lily's form. Effectively, I will film her for, let's say, three minutes. That will correspond to 7,000 frames, so that's an image dataset of 7,000 images. 
I will then turn each of those images into corresponding skeletons, literally just black and white, using an AI to motion-track the pose in each frame. Then you train an AI that basically only gets to see the skeleton, and it gets given a score on how close its output was to the original image. So effectively there are two AIs here: one that gets to see the original image and gives a score, and another which tries to create new images. It's like: given this skeleton, make an image. And it goes, okay, here, a bunch of pink pixels. And the other says, well, that's 5% close to the original image. Try again. And it will try again and again and again, millions and millions of times, until it can create a new photorealistic image of Lily Snatch Dragon that's 99.9% close to the original image, right? At this point, you have an AI, and you can take away all the images. That's it. You can then feed in new skeletons and it will create new images of Lily Snatch Dragon. So at this point, I can film a friend performing Shirley Bassey and get Lily Snatch Dragon to perform that Shirley Bassey number when she never did it in the first place. So that's the deepfake. The Zizi Show has, by the way, a real diversity of drag performers, drag kings and queens and things, diversity across race and gender and sexuality, and they perform David Bowie and Shirley Bassey for you. We have three sort of full-length acts and then seven close-up ones, where we've filmed different drag performers lip-syncing a David Bowie song or a Shirley Bassey song, and then got ten of their friends to all perform it at the same time, the same movement at the same time. So effectively, after the AI has been trained to see Lily Snatch Dragon and create new images of Lily Snatch Dragon, we can then feed in new movement and get these performers to do these new performances. 
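The two-network loop described here (one AI generates an image from a skeleton, the other scores how close it is to the original frame, try again until the score is high) can be caricatured in a few lines. This is a deliberately tiny stand-in, not a real conditional GAN: the "generator" is one weight and one bias, the "critic" is a fixed closeness score rather than a learned network, and every name and number is invented for illustration.

```python
import random

random.seed(2)

# Toy stand-ins: a 'frame' is a flat vector of pixel values and a
# 'skeleton' is the pose extracted from it. This hidden rule plays the
# role of the real correspondence the network has to discover.
def true_appearance(skeleton):
    return [0.8 * s + 0.1 for s in skeleton]

dataset = []
for _ in range(50):  # 50 (skeleton, frame) pairs standing in for 7,000
    skel = [random.random() for _ in range(8)]
    dataset.append((skel, true_appearance(skel)))

def critic(fake, real):
    """Score how close a generated frame is to the original (0..1)."""
    err = sum(abs(f - r) for f, r in zip(fake, real)) / len(real)
    return max(0.0, 1.0 - err)

# 'Generator': a single weight and bias, nudged only when the critic's
# score improves -- millions of adversarial rounds compressed into a
# crude random search.
w, b = 0.0, 0.0

def generate(skel):
    return [w * s + b for s in skel]

def avg_score():
    return sum(critic(generate(s), img) for s, img in dataset) / len(dataset)

initial = avg_score()
for _ in range(2000):
    old = avg_score()
    dw, db = random.gauss(0, 0.05), random.gauss(0, 0.05)
    w, b = w + dw, b + db
    if avg_score() < old:        # critic says 'try again'
        w, b = w - dw, b - db    # revert and keep searching

final = avg_score()
print(f"critic score went from {initial:.2f} to {final:.2f}")

# Once trained, the original images can be thrown away: feed in a NEW
# skeleton and out comes a new 'performance'.
new_frame = generate([0.5] * 8)
```

The structural point survives the simplification: after training, the mapping from pose to appearance is all that remains, which is exactly why a friend's filmed movement can drive another performer's image.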
Now, one of my favorite moments happens when Karamel, who's an amazing drag queen, a Black trans American drag queen, came and performed Beyoncé's Sweet Dreams. She's an amazing dancer, and she dropped into the splits fairly early on in the act. And I had no idea what was going to happen. I mean, I let it happen, obviously, I wanted it to happen, but I had no idea what would happen when the other 21 deepfake drag performers tried to drop into the splits. Now, the reason is that this is AI bias, right? We never filmed our drag performers in that position for the original datasets used to train the deepfakes. Partly because a lot of them couldn't do the splits, but basically we asked them to move through the space, to lean over, to walk towards the camera and away from the camera for three minutes. We never asked them to do a falling drop into the splits in the middle of a Beyoncé song. So effectively, when you see these performers try to follow Karamel's skeleton and drop into the splits, the AI has never had a skeleton in that position. The AI breaks down and fails, because that data point, that pose, exists outside of the latent space. So when Karamel drops into the splits, you see T T Bang, you see Lily Snatch Dragon, Chio, Mark Anthony, Luke Syka, and their wigs take off into the air and their outfits disintegrate into the floor, like the melting witch in The Wizard of Oz. It's really weird, and it's a kind of beautiful illustration of what happens when you don't have the right data. For me, it's a funny, joyful illustration of that thing. But it is, in a way, AI bias, in the same way that if a facial recognition system has never seen a Black trans person, it will fail when you then show it a Black trans person. It just won't know what to make of that data, in the same way that our data never saw our drag performers doing the splits. 
So it just melts into the floor. It becomes this really strange, amorphous, non-human blob. And for me, that also recalls some queer theory: there's a wonderful book by Jack Halberstam called The Queer Art of Failure. It reminds me of that, that we're taking systems of oppression that are used to control us, sprinkling them with glitter, and looking at what happens when they break, as a kind of queer move.
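The splits failure is, at heart, an out-of-distribution problem: the incoming skeleton sits far from every pose in the training data, so there is nothing nearby for the model to interpolate from. A hypothetical back-of-envelope check makes the gap visible; the two-number "poses" and their ranges here are entirely invented, not measurements from the project.

```python
import math
import random

random.seed(3)

# A 'pose' here is just (hip_height, leg_spread) -- a cartoon of the
# skeleton keypoints the motion tracker produces.
def training_pose():
    # Performers walked, leaned, and turned: hips high, legs fairly close.
    return (random.uniform(0.8, 1.0), random.uniform(0.0, 0.4))

training_set = [training_pose() for _ in range(7000)]  # ~3 min of frames

def nearest_distance(pose):
    """Distance from a query pose to its closest training pose."""
    return min(math.dist(pose, p) for p in training_set)

lean = (0.85, 0.3)    # well inside the filmed repertoire
splits = (0.05, 1.0)  # hips on the floor, legs fully spread

d_lean, d_splits = nearest_distance(lean), nearest_distance(splits)
print(f"lean: {d_lean:.3f}  splits: {d_splits:.3f}")
# The splits pose sits far outside the training distribution, which is
# why the deepfake has nothing to draw on -- and melts.
```

A facial recognition system that has never seen a Black trans person fails for the same structural reason: the query lands in a region of the input space the training data never covered.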

SPEAKER_00:

Where do you see your art going from here?

SPEAKER_03:

Oh gosh, that's a huge question. I've been quite inspired this year. I turned 30. I was away with the Radical Faeries, the Radical Faeries being a kind of international queer movement of radical queer thinkers and creatives, often quite neurodivergent, nature-loving people who think radically differently. And I've been very inspired by a lot of decolonizing Indian queer activists who are thinking about similar questions to me. I spent some time in India. I got invited to be part of the first institutional queer art exhibition in Delhi, as pretty much one of the only white people, but they wanted a kind of AI queer futures perspective for their exhibition. And I just met some really brilliant, inspirational people. I'm thinking about whether we can create a much larger-scale project, collaborating with some of them, to think about what AI or technological digital futures would look like if they weren't being built by Mark Zuckerberg and Elon Musk. What if we could get some funding to actually run workshops and talk to really interesting, queer, working-class thinkers in India, get their ideas, and work out with them ways of documenting it and platforming it and kind of curating it? So that's one project: trying to funnel some art funding out to some really interesting thinkers in India. We'll see where that goes. That might be a long-term project. I'm also writing a letter to Mark Zuckerberg as a philosophical thought experiment, a sort of sincerely ironic, or ironically sincere, exercise to see if we could get him to actually empathize with another human being. I think he is so set on this sort of Greek odyssey of colonizing the entire planet for his digital platforms that I don't know if he will ever listen to anyone. But I think it would be interesting to think about whether he could be convinced to prioritize compassion over conflict in his engagement algorithms, to change that little one to a little zero. 
Maybe then the world would stop eating its own head, especially the left, who seem to have a good habit of it. I've got quite a few little projects in the works. The one I'm working on in relation to the AI drag project is called Zizi and Me. This is a project where, effectively, we want to stage a double act between a real-life drag queen and a deepfake AI doppelgänger drag queen, which has been trained on the physical drag queen. They're on stage together and they compete. We've already done a few little video tests of this project. We did one called Anything You Can Do, I Can Do Better, which is such a stupid song for an AI and a human to sing together, because effectively it's this song, you know, from Annie Get Your Gun: anything you can do, I can do better, I can do anything better than you. Yes, you can; no, you can't. Yes, I can; yes, I can. But seeing an AI sing that next to their human counterpart is something, because obviously the AI has been completely trained on that human counterpart: their image is from their human counterpart, their movement is from their human counterpart, but still it's being run through a neural network and generated by AI. My drag queen collaborator is called Me the Drag Queen, very confusingly for a lot of people. They actually performed the movement as Ben, as their non-drag character. They created the movement out of drag, and then we trained the AI on them in drag, with lighting, in a photo studio. Then we can effectively take their out-of-drag movement and run it through our deepfake algorithm to create a virtual body. So effectively, the AI is applying the makeup. And then we get them to perform these double-act songs, which are all about satirizing the idea that an AI is going to take over from a human artist, I guess. So Anything You Can Do, I Can Do Better is perfect for that. 
It's like, will the AI take over from a human drag queen? Probably not, is the answer. But it's fun to talk about and maybe slightly make fun of that whole argument.

SPEAKER_00:

Yeah, I saw that and I thought that that song was perfect for what you were doing. My final question is what advice would you have for people looking to pursue a career in art?

SPEAKER_03:

It's a very difficult question in a way, because everyone comes at art from such different angles and backgrounds, and with such different things they want to say or get from making art. Fine art can seem very elitist, and in a way it is quite elitist, because a lot of people aren't given the resources or support to be allowed to just be a full-time fine artist. But if you want to focus on making your own art, just for self-expression, then I would say: don't be too influenced by the art market. It's a very difficult line to toe, because you want to be making work that you are genuinely interested in. And the thing is, if you're genuinely interested in the work you're making, then other people will be too. I think that's the thing. Don't try to make artwork because you think other people will be interested in that thing. Don't try to make it for any reason other than that you want to make it. I think that's quite key. As soon as you start calculating and thinking, what would be a good thing to make?, I think you often lose what makes your art interesting and original. I've actually kind of stopped looking so much at other contemporary artists. I love looking at art, and occasionally I love seeing what my peers are making. And when I'm exhibited with them and meet them, I find having conversations with other artists actually much more stimulating than seeing their final output and form, because I feel that sometimes it might influence me in negative ways. I kind of just want to be in my own head a bit, which is also quite selfish and self-absorbed, but I guess artists are somewhat self-absorbed. The important thing, though, is to stay empathetic and compassionate and interested. To be an artist, you have to have quite a high level of empathy and compassion and sensitivity to what's going on in the world. 
I think often artists are actually overly sensitive, and that's why artists often suffer from depression, or sometimes get a little overly hyper and excited, because they've got so many stimuli and ideas coming in; they're very sensitive to everything going on around them, and they look very hard at things and think very deeply about things. But you want to keep that level of empathy with other humans, I'd say. Because for a very long time we've put on a pedestal the idea that the male tortured artist in a studio is the best way of making art, and I think those sorts of artists can often lose a sense of empathy for others. They are just in their own world, making their own work, because it's the most important thing in the world. And I think that can become very self-indulgent, verging on the sociopathic or psychopathic, really: you're the only thing that matters. I'd say if you want to be a more financially successful artist in the art world, then you need to find ways of making your work resonate with other people. It's about engaging the public and people who maybe aren't always engaged by art, but it's also often about trying to engage the people who are the gatekeepers: the galleries, the museums, the curators, the art critics, the press. Often you just have to speak their language. You have to work out ways of effectively making friends with these people, because often they're the most interesting people as well. Curators have an interest in art often because they are very creative-thinking people themselves. And then, basically, just allow things to be things in themselves. It's almost an Eastern-philosophy way of thinking about it: not trying to become anything, not trying to become a better artist or a more successful artist, just being present in the moment and thinking about things as you think about them. 
And obviously this is also a privilege and a luxury, because class plays into it. If you are working four jobs at the same time, you might not have the energy to be focusing on your art as well. And that's just capitalism; that's the problem of systems of power and structure. It would be lovely if AI could take down capitalism for us and fix the environment and allow us to just make art for art's sake, but I don't know, maybe that's not going to happen. You can't necessarily make art and expect to make money from it. You have to go and talk to people and meet people, often in big cosmopolitan cities, which is where these people tend to be and where the galleries and museums and funding tend to be. Try to go to lots of shows and chat to lots of interesting artists; make friends with other artists, make friends with interesting curators. There are so many interesting young curators who aren't old white men who hate everything that doesn't fit their opinion of what art should be. There are a lot of interesting young curators you can talk to. They're gatekeepers, of course they are, but often they're gatekeeping the right sorts of things: they're trying to actually give platforms to people who haven't been given platforms before. So if you can find those sorts of people to talk to, you might have some really, really interesting things to talk about. And then maybe you'll get in a show, and then maybe eventually you'll get in another show, and then maybe you'll get in ten more shows, and then maybe you'll be in MoMA. Who knows?

SPEAKER_00:

Fantastic. Thank you so much, Jake, for coming on the Artalogue today.

SPEAKER_03:

You're welcome. It's been great to chat.