This video is intended to prove two things: you can easily train an AI model on your own artwork, ideas, and public and archival material on your own computer, AND you can animate your model's output, frame by frame, to generate a video that can be used as an asset. As somebody who creates visuals for music, I find the range of new workflows these AI tools enable fascinating. I am less than a month into learning Stable Diffusion and its extensions, so this video is a snapshot of what I have discovered so far.
Long story short, I trained a LoRA (compatible with SDXL base models) to produce Rorschach inkblots, using a set of 58 different 1024x1024 inkblot images (public domain and stock images) that I prepared in Photoshop before training. After a total of 2,320 training steps, my model produces more interesting and believable inkblot shapes than the SDXL base model can on its own.
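For context, 2,320 steps over 58 images works out to 40 passes per image at batch size 1 (e.g., 10 repeats x 4 epochs in kohya's folder-naming convention). The sketch below shows the general shape of launching an SDXL LoRA training run with kohya's sd-scripts from Python; the paths, network size, and learning rate are placeholder assumptions, not my exact settings.

```python
# Hypothetical launcher for a kohya sd-scripts SDXL LoRA run.
# All paths and hyperparameters below are assumptions for illustration.
import subprocess

IMAGES, REPEATS, EPOCHS, BATCH = 58, 10, 4, 1
steps = IMAGES * REPEATS * EPOCHS // BATCH  # = 2320, matching the run described above
print(f"expected training steps: {steps}")

# Dataset layout assumption: train_data/10_inkblot/ holds the 58 prepped
# 1024x1024 images, where the "10_" prefix is kohya's per-image repeat count.
cmd = [
    "accelerate", "launch", "sdxl_train_network.py",
    "--pretrained_model_name_or_path", "sd_xl_base_1.0.safetensors",
    "--train_data_dir", "train_data",
    "--output_dir", "output",
    "--output_name", "rorschach_inkblot",
    "--network_module", "networks.lora",
    "--network_dim", "32",
    "--resolution", "1024,1024",
    "--train_batch_size", str(BATCH),
    "--max_train_epochs", str(EPOCHS),
    "--learning_rate", "1e-4",
    "--mixed_precision", "bf16",
    "--save_model_as", "safetensors",
]
subprocess.run(cmd, check=True)
```

The kohya_ss GUI linked below builds an equivalent command for you; the point is just that a small, carefully prepped dataset and a modest step count are enough for a style this narrow.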
Using a loopback script in the img2img tab, I can generate a series of frames from an initial inkblot image to create a morphing inkblot animation, which can then be interpolated for smoother motion using the Interpolate Pics feature of the Deforum extension for Stable Diffusion. I can then use the resulting video as an asset in VJ software like Resolume and mix it with others... if you follow my work, you already know the rest of the story :)
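The loopback idea is simple: each generated frame becomes the init image for the next, with a moderate denoising strength so the blot drifts rather than jumps. Below is a minimal sketch of that loop against the A1111 web UI API (start the UI with --api); the prompt, denoising strength, and LoRA name are assumptions, and the built-in Loopback script does essentially the same thing from the img2img tab.

```python
# Minimal loopback sketch against the A1111 API (run the web UI with --api).
# Assumptions: local UI at 127.0.0.1:7860, a LoRA named "rorschach_inkblot",
# and a starting image "inkblot_000.png". Not the exact script used in the video.
import base64
import requests

URL = "http://127.0.0.1:7860/sdapi/v1/img2img"
FRAMES = 60

with open("inkblot_000.png", "rb") as f:
    frame = base64.b64encode(f.read()).decode()

for i in range(1, FRAMES + 1):
    payload = {
        "init_images": [frame],
        "prompt": "rorschach inkblot, symmetric, black ink on white paper "
                  "<lora:rorschach_inkblot:0.9>",
        "denoising_strength": 0.45,  # low enough to morph, high enough to move
        "steps": 25,
        "width": 1024,
        "height": 1024,
    }
    r = requests.post(URL, json=payload, timeout=300)
    r.raise_for_status()
    frame = r.json()["images"][0]  # feed this frame back in as the next init image
    with open(f"inkblot_{i:03d}.png", "wb") as out:
        out.write(base64.b64decode(frame))
```

From there, the PNG sequence can be interpolated (in Deforum or any frame interpolator) and assembled into a video file for VJ use.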
Stable Diffusion A1111: github.com/AUT...
Kohya_ss (for training LoRAs): github.com/bma...
Resolume: resolume.com/s...
Music: downstate.band...
#vj #aianimation #stablediffusion #aiart #glitchart
Oct 4, 2024