
Generate AI Images with Stable Diffusion + Audio Reactive Particle Effects - TouchDesigner Tutorial 

Torin Blankensmith
9K subscribers
21K views

Published: 21 Oct 2024

Comments
@rahsheedamcrae2381 1 year ago
This is revolutionary, can’t wait for more 🤩
@elekktronaut 1 year ago
incredible, thank you ❤ excited to experiment with TD + SD!
@plyzitron 1 year ago
Super fascinating for this AI integration in TD, thanks so much!
@benchaykin4286 1 year ago
Fantastic work. So exciting to see such an AI-integrated TD project
@therob3672 1 year ago
Brilliant work Torin, I will share the news on your amazing integration and tools and grab them from your Patreon. It’s especially impressive that you made it so straightforward with the use of the API service and created an excellent way to create a constant animation between the generated frames. Using these as textures composited into the base color of a PBR texture on 3D objects also generated by AI would be an interesting way for this to evolve. What an incredible holiday gift for the community! ❤
@blankensmithing 1 year ago
Thanks Rob! I'm glad you've been enjoying the tutorials! Yeah, I think it'd be really interesting to use these to generate HDRI maps, or as a texture map for a 3D model. I should make an example of applying the image output to a 3D model. Looks like Spline added that into their web editor ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ma91lA51UJ8.html
@hudsontreu 1 year ago
Hey Torin, thanks so much for the tutorial and project file! I am having a few issues though and wondering if you can answer a question. So when I open the project the noise objects that are used for altering the img2img function do not have anything in them, and it seems like img2img is required for everything to work. How exactly do you get the noise populated with an image or noise data and running correctly? Thank you!
@therob3672 1 year ago
Torin, I was wondering if you could use this and computerender to generate images in HD or 4K resolution with Midjourney, DALL·E 2, or Stable Diffusion models, and if the environment could show a count of how many images have been generated, to keep track of the run cost as it accrues?
@smon1127 1 year ago
I love you so badly! Thanks for that. Huge fan ❤
@GianTJ 1 year ago
Hey, Torin! This is absolutely stunning... could this potentially be used in a live setting? For example could I get audio in from Ableton Live and then project the reactive visuals in real-time?
@blankensmithing 1 year ago
Hey Gian, yeah, you could use an Audio Device In CHOP to get the microphone input and map the audio analysis to the particle system.
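A minimal sketch of that mapping, assuming a CHOP Execute DAT watching a hypothetical audio analysis CHOP and a particle component named 'particles' with a custom 'Force' parameter (both names are placeholders, not taken from the project file):

```python
# CHOP Execute DAT callback in TouchDesigner (Python).
# Assumes the watched CHOP carries a low-frequency energy channel and that
# a 'particles' COMP exposes a custom 'Force' parameter -- both names are
# illustrative, not the ones used in the actual project file.

def onValueChange(channel, sampleIndex, val, prev):
    # Scale the incoming audio-analysis value into a particle force amount.
    op('particles').par.Force = val * 5.0
    return
```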
@bennettgrizzard5527 1 year ago
This is fantastic. Could the image generation be done live as well, so that prompts could be entered during a performance rather than pre-recorded? @blankensmithing
@Nanotopia 1 year ago
Amazing! Thank you for sharing this. I wonder if it would be possible to make the particles interactive through webcam or Kinect movement.. going to try :)
@blankensmithing 1 year ago
Hey, glad you’re enjoying it! Yes, absolutely, you could do it with both 😁
@JannatShafiq-fr4lr 5 months ago
Thank you so much for this amazing video and for the link to the file. My API component is not working; can you please tell me whether it is due to a TouchDesigner version difference, and if so, which version you used for this?
@blankensmithing 5 months ago
It works fine for me on the latest TD version. Just make sure you create an API key on computerender.com/ and swap out your key in the project.
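If you want to verify the key outside TouchDesigner first, here is a rough sketch of a direct call from Python; the endpoint path and auth header format are assumptions and should be checked against computerender's API docs:

```python
# Rough sketch: verify a computerender API key outside TouchDesigner.
# The endpoint URL and auth header format below are assumptions about the
# hosted API -- confirm them against the official computerender docs.
import requests

API_KEY = "sk_your_key_here"          # placeholder key
prompt = "audio reactive particle landscape"

resp = requests.get(
    f"https://api.computerender.com/generate/{prompt}",
    headers={"Authorization": f"X-API-Key {API_KEY}"},
)
resp.raise_for_status()

# Save the returned image bytes to inspect the result.
with open("test_generation.jpg", "wb") as f:
    f.write(resp.content)
```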
@ricardcantm 1 year ago
Great work bro! Do you know if it can work on Mac machines? I know there are some spec limitations on those.
@ricardcantm 1 year ago
nvm, I just saw that you use a Mac 😅😅
@stiffyBlicky 1 year ago
Is it possible to use multiple images as inputs? Maybe like around 30?
@AnderrGraphics 1 year ago
Great tutorials, keep up the good work! Is there a way to generate images through this method but with live audio coming from an external device, like a turntable?
@blankensmithing 1 year ago
Thanks Anderr! Yeah, totally, you can use an Audio Device In CHOP instead of an Audio File In CHOP. With that operator you can select your computer's built-in microphone, or, if you're able to connect your turntables to your computer through an audio interface, you can select your audio interface.
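A small sketch of switching that input from a script, assuming an Audio Device In CHOP named 'audiodevin1' whose device menu parameter is 'device' (both names are assumptions; check the operator's parameter page in your network):

```python
# Sketch: point an Audio Device In CHOP at an external audio interface.
# 'audiodevin1' and the 'device' parameter name are assumptions -- check
# the actual operator and parameter names in your own network.
audio_in = op('audiodevin1')

# List the device names TouchDesigner currently sees.
print(audio_in.par.device.menuLabels)

# Pick the interface your turntables feed into (placeholder device name).
audio_in.par.device = 'Scarlett 2i2 USB'
```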
@DSJOfficial94 1 year ago
very creative
@unveil7762 1 year ago
Would be cool to have a depth map so that the particles become 3D… ❤
@xthefacelessbassistx 1 year ago
How can I run Stable Diffusion on a live video feed?
@blankensmithing 1 year ago
You can just pass a Movie File In TOP into the component. It's not going to convert frames in real time, since generation takes some time to process, but every time you generate a new image it'll snag the current frame from the TOP.
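A rough sketch of that frame grab, assuming the generation is kicked off from a script and the input TOP is named 'video_in' (the operator name, helper function, and save path are placeholders, not the project's actual internals):

```python
# Sketch: snapshot the current frame of an input TOP before a generation.
# 'video_in', the helper below, and the file name are illustrative only.
import os

def snapshot_current_frame():
    top = op('video_in')
    # TOP.save() writes the current frame to disk and returns the path.
    return top.save(os.path.join(project.folder, 'img2img_source.jpg'))

# Call this right before sending the img2img request so the generation
# uses whatever frame the TOP is showing at that moment.
src = snapshot_current_frame()
```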
@Fonira 1 year ago
thanks !!
@ddewwer23 1 year ago
this isn't a tutorial?
@carlottorose 1 year ago
please give me the project :D
@elekktronaut 1 year ago
it's in the description