== The Zizi Project - a deepfake drag cabaret ==
The Zizi Project is a series of works by Jake Elwes exploring the interaction of drag and A.I. Currently, The Zizi Project is made up of multiple artworks.
=== Zizi - Queering the Dataset (2019) ===
Knowing that facial recognition technology statistically struggles to recognise black women or transgender people, Elwes set out to "Queer the Dataset" through an open-source generative adversarial network (GAN), a type of machine learning model and an early form of generative artificial intelligence. Elwes added a dataset of 1,000 photos of drag kings and drag queens to the 70,000 faces of Flickr-Faces-HQ (FFHQ), the standardised facial recognition dataset on which the GAN was trained. The retrained network then generated new simulacra faces, known as deepfakes. The first act, which features a digital lip-sync duet to Anything You Can Do (I Can Do Better), satirises the idea of A.I. being mistaken for a human, using drag performance and cabaret to critique societal narratives about A.I. and its role in shaping identity.
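The retraining step can be illustrated with a minimal sketch. The following PyTorch code is a generic stand-in, not Elwes's actual pipeline (the project fine-tuned a StyleGAN at much higher resolution): it mixes a hypothetical folder of drag portraits into an FFHQ image folder and trains a small DCGAN-style generator and discriminator on the combined set. The paths and the tiny architecture are illustrative assumptions.
<syntaxhighlight lang="python">
# Generic stand-in, NOT Elwes's pipeline: a tiny DCGAN-style pair trained on
# an FFHQ image folder mixed with a hypothetical folder of drag portraits.
# "data/ffhq" and "data/drag" are assumed paths (ImageFolder expects class
# subdirectories, e.g. data/drag/all/*.jpg).
import torch
from torch import nn, optim
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms, utils

device = "cuda" if torch.cuda.is_available() else "cpu"
tfm = transforms.Compose([
    transforms.Resize(64), transforms.CenterCrop(64),
    transforms.ToTensor(), transforms.Normalize([0.5] * 3, [0.5] * 3)])
ffhq = datasets.ImageFolder("data/ffhq", tfm)  # ~70,000 standard faces
drag = datasets.ImageFolder("data/drag", tfm)  # ~1,000 added drag portraits
loader = DataLoader(ConcatDataset([ffhq, drag]), batch_size=64, shuffle=True)

z_dim = 100
G = nn.Sequential(  # latent vector -> 64x64 RGB face
    nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh()).to(device)
D = nn.Sequential(  # image -> real/fake logit
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
    nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
    nn.Conv2d(256, 1, 8, 1, 0), nn.Flatten(0)).to(device)

bce = nn.BCEWithLogitsLoss()
opt_g = optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

for epoch in range(5):
    for real, _ in loader:
        real = real.to(device)
        b = real.size(0)
        fake = G(torch.randn(b, z_dim, 1, 1, device=device))
        # Discriminator: separate dataset faces from generated ones.
        d_loss = (bce(D(real), torch.ones(b, device=device)) +
                  bce(D(fake.detach()), torch.zeros(b, device=device)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator: produce faces the discriminator accepts as real.
        g_loss = bce(D(fake), torch.ones(b, device=device))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    utils.save_image(G(torch.randn(16, z_dim, 1, 1, device=device)),
                     f"samples_{epoch}.png", normalize=True)
</syntaxhighlight>
Because the added drag photos are a small fraction of the combined dataset, the generator keeps the broad statistics of FFHQ while drag features begin to appear across its outputs, which is the "queering" effect the work describes.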
=== The Zizi Show - A Deepfake Drag Cabaret (2020) ===
The Zizi Show is a deepfake drag act based on artificial intelligence (AI). It has been presented live and as an interactive online artwork, and is an exploration of queer culture and of the algorithms, philosophy and ethics of AI.
The Zizi Show was exhibited as the inaugural exhibition in the digital gallery at the V&A’s Photography Centre from 2023 to 2024.
=== Zizi in Motion: A Deepfake Drag Utopia (Movement by Wet Mess) (2023) ===
"Zizi in Motion" is a multichannel silent video installation featuring AI-generated deepfake performances, dynamically re-animated through the movements of London drag artist Wet Mess. Wet Mess's movements cause the AI-generated visuals to glitch and distort, showcasing the interaction between drag performance and artificial intelligence. The work explores the potential for queer communities to ethically and creatively reclaim and repurpose deepfake technology, using it to celebrate queer bodies and identities.
== Art in the Cage of Digital Reproduction (2024) ==
In an act of protest on 26 November 2024, Elwes facilitated indirect access to an early-access token for OpenAI’s Sora text-to-video model through a Hugging Face frontend under the account "PR Puppets". The accompanying statement called on the public to 'denormalize the exploitation of artists by major AI companies for training data, R&D, and publicity'. The incident attracted international press coverage questioning whether artists shape the future of generative AI or merely serve as data and credibility providers for tech giants.
== Installations exploring interpretation and feedback loops between neural networks ==
Elwes has created works based on the interpretations and misinterpretations between different neural networks and training datasets, including:
A.I. Interprets A.I. Interpreting ‘Against Interpretation’ (Sontag 1966) from 2023,
Closed Loop from 2017, and
Auto-Encoded Buddha from 2016.
=== A.I. Interprets A.I. Interpreting ‘Against Interpretation’ (Sontag 1966) (2023) ===
A.I. Interprets A.I. Interpreting ‘Against Interpretation’ (Sontag 1966) is a three-channel video artwork in which one AI interprets Susan Sontag’s essay into images, and another AI then reinterprets those images back into language. The piece highlights how AI-generated art can misinterpret its source and introduce bias.
=== Closed Loop (2017) ===
Closed Loop is a two-channel video in which two neural networks engage in a continuous feedback loop, one generating images from the other's text output and the other generating text from the image output. The work explores how AI models misinterpret and evolve in a surreal, self-perpetuating conversation without human input.
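The text-to-image/image-to-text structure behind both Closed Loop and the Sontag piece can be sketched with present-day open models. The code below is a structural stand-in, not the networks used in the original works: it assumes the transformers and diffusers packages, a CUDA GPU, and illustrative model IDs (BLIP for captioning, Stable Diffusion for generation).
<syntaxhighlight lang="python">
# Structural stand-in for the works' feedback loop, NOT the original
# networks: BLIP captions the image, Stable Diffusion renders the caption,
# and the loop repeats with no human input. Model IDs and the seed prompt
# are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline
from transformers import BlipForConditionalGeneration, BlipProcessor

device = "cuda"
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to(device)
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base").to(device)

prompt = "against interpretation"  # seed text; the loop soon drifts away
for step in range(10):
    image = pipe(prompt).images[0]                      # text -> image
    inputs = processor(images=image, return_tensors="pt").to(device)
    ids = captioner.generate(**inputs, max_new_tokens=30)
    prompt = processor.decode(ids[0], skip_special_tokens=True)  # image -> text
    image.save(f"loop_{step:02d}.png")
    print(step, prompt)
</syntaxhighlight>
Each pass compounds the small misreadings of the previous one, which is the drift and mutual misinterpretation the works put on display.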
=== Auto-Encoded Buddha (2016) ===
Auto-Encoded Buddha is a mixed-media piece in which an AI, trained on 5,000 images of Buddha statues, attempts to generate an image of a Buddha statue. The AI struggles to represent the Buddha accurately, highlighting the limitations of early generative neural networks. The work is a tribute to Nam June Paik’s TV Buddha (1974).
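A minimal sketch of the underlying technique follows, assuming a hypothetical folder data/buddha of statue photographs; Elwes's 2016 network is not public, so a generic convolutional autoencoder stands in. It compresses each image to a small bottleneck and reconstructs it, and the reconstruction error is precisely the "struggle" the work displays.
<syntaxhighlight lang="python">
# Generic convolutional autoencoder as a stand-in for the unpublished 2016
# network. "data/buddha" is an assumed path holding ~5,000 statue photos in
# class subdirectories, as ImageFolder requires.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize(64), transforms.CenterCrop(64),
                          transforms.ToTensor()])
loader = DataLoader(datasets.ImageFolder("data/buddha", tfm),
                    batch_size=32, shuffle=True)

model = nn.Sequential(
    # Encoder: squeeze each photo into a small bottleneck...
    nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
    nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
    nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
    # ...decoder: reconstruct the image; the lost detail is the point.
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid())

opt = optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(10):
    for x, _ in loader:
        recon = model(x)                   # the network's imperfect Buddha
        loss = nn.functional.mse_loss(recon, x)
        opt.zero_grad(); loss.backward(); opt.step()
</syntaxhighlight>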
== CUSP (2019) ==
In their video work CUSP (2019), Elwes places marsh birds generated using artificial intelligence into a tidal landscape. These digitally generated, constantly shifting birds are recorded in dialogue with native birds. The video work is accompanied by a soundscape of artificially generated birdsong.
== Latent Space (2017) ==
Latent Space is one of the earliest examples of generative AI in art. The video artwork uses a neural network trained on 14.2 million images from the ImageNet database to explore "latent space", a mathematical representation in which the AI maps learned image categories, such as trees or birds, into specific regions. Once trained, the network represents all images of trees in one area and all images of birds in another. By reverse-engineering the network, it becomes possible to generate synthetic images from coordinates within this space. The video illustrates the AI's process of creating novel images not by moving directly between recognisable categories, but by navigating the transitional spaces between them, highlighting the network's ability to generate unique and unexpected visual forms. The project draws on research from Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space (2016) and the ImageNet database (2009), with special acknowledgment to Anh Nguyen and the Evolving AI Lab for their contributions.
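The navigation the video performs can be sketched with a modern stand-in. Elwes's piece used a Plug & Play Generative Network; the code below instead assumes the pytorch-pretrained-biggan package, and the ImageNet categories "robin" and "daisy" are illustrative choices. Fixing one point in the noise space and blending two class vectors renders the transitional images between categories rather than jumping from one to the other.
<syntaxhighlight lang="python">
# Sketch of a latent-space walk using the pytorch-pretrained-biggan package
# as a modern stand-in; Elwes's work used a Plug & Play Generative Network.
# The categories "robin" and "daisy" are illustrative assumptions.
import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                       truncated_noise_sample, save_as_images)

model = BigGAN.from_pretrained("biggan-deep-256")
truncation = 0.4

# One fixed point in the noise space, and one one-hot vector per category.
noise = torch.from_numpy(truncated_noise_sample(truncation=truncation, batch_size=1))
robin = torch.from_numpy(one_hot_from_names(["robin"], batch_size=1))
daisy = torch.from_numpy(one_hot_from_names(["daisy"], batch_size=1))

frames = []
with torch.no_grad():
    for t in torch.linspace(0, 1, steps=24):
        # Blending the class vectors renders the "transitional spaces"
        # between categories rather than cutting from one to the other.
        frames.append(model(noise, (1 - t) * robin + t * daisy, truncation))
save_as_images(torch.cat(frames), file_name="latent_walk")
</syntaxhighlight>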