Early generative image models
Experiments with some of the first text-to-image models like VQGAN+CLIP, ruDALLE and Disco Diffusion.
2021-22
Exploring 3D animation, face filters, and collages, I researched ways to play with AI-generated images.
The following images were created with VQGAN+CLIP, a combination of two neural networks: VQGAN generates the image, while CLIP scores how well it matches the text prompt and steers the generation. I used it through Katherine Crowson’s colab (click here to try it out).
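To make the pairing concrete: CLIP measures how well an image matches a piece of text, and an optimizer repeatedly nudges the image to raise that score. Below is a minimal sketch of that feedback loop, assuming PyTorch and the openai/CLIP package. For simplicity it optimizes raw pixels, whereas Crowson’s notebook optimizes VQGAN’s latent codes (which is what gives the results their characteristic texture); the prompt and hyperparameters here are illustrative, not taken from the pieces shown.

```python
# Minimal sketch of CLIP-guided image optimization.
# Assumes: pip install torch, and the openai/CLIP package
# (pip install git+https://github.com/openai/CLIP.git).
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()  # keep weights in fp32 so gradients flow cleanly

# Encode the prompt once; it stays fixed during optimization.
prompt = "a lighthouse on a cliff, oil painting"  # illustrative prompt
with torch.no_grad():
    text_features = model.encode_text(clip.tokenize([prompt]).to(device))
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

# Start from noise and treat the pixels themselves as the parameters.
# (The real notebook optimizes VQGAN latents and adds augmentations
# and CLIP's input normalization, all omitted here for brevity.)
image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(300):
    optimizer.zero_grad()
    image_features = model.encode_image(image.clamp(0, 1))
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    # Maximize cosine similarity between image and prompt embeddings.
    loss = -(image_features * text_features).sum()
    loss.backward()
    optimizer.step()
```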
The output can be adjusted by appending specific “styles” to the prompt, which are often called modifiers.
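For example, the same subject can come out looking like completely different artworks depending on the modifiers attached to it (these prompts are illustrative, not the ones behind the pieces shown here):

“a lighthouse on a cliff, oil painting, chiaroscuro”
“a lighthouse on a cliff, vaporwave, 3D render”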
Coming from the 3D field, I applied AI-generated images to 3D objects as textures: these were generated separately and composited in Cinema 4D.
Using Spark AR, I experimented with embedding AI images in face filters. Both are available on Instagram.
Coming back to 2D, I also researched ways to build compositions from different AI-generated images. The first two are experiments, while the other two are pieces from my AI-generated book “Digital Folktales”, whose stories and illustrations were both made with generative models.
Since I was getting very excited about the possibilities of this medium, I created a video essay on what I had learned and thought. It explores the medium’s connection with semiotics and hyperreality, while explaining how the technology works and how other artists collaborate with it.
ruDALLE
A different image generator is ruDALLE, a Russian text-to-image AI. While it doesn’t offer the artistic freedom seen in VQGAN+CLIP, its strength is creating coherent and “realistic” images.
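For reference, this is roughly how ruDALLE was run at the time through Sber’s ru-dalle Python package. The sketch follows the package’s published example; function names and defaults may differ across versions, so treat it as a guide rather than a definitive recipe. Note that prompts must be written in Russian, since the model was trained on Russian captions.

```python
# Minimal sketch using the ru-dalle package (pip install rudalle),
# following its published example circa 2021; names may have changed
# in later releases.
from rudalle import get_rudalle_model, get_tokenizer, get_vae
from rudalle.pipelines import generate_images

device = "cuda"
dalle = get_rudalle_model("Malevich", pretrained=True, fp16=True, device=device)
tokenizer = get_tokenizer()
vae = get_vae().to(device)

# Prompt in Russian: "a rainbow over a night city".
pil_images, scores = generate_images(
    "радуга на фоне ночного города",
    tokenizer, dalle, vae,
    top_k=1024, top_p=0.99, images_num=4,
)
pil_images[0].save("rudalle_sample.png")
```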