Oleg Chomp
Comments
@simont733 12 days ago
Any updates?
@erdbeerbus 1 month ago
Nice, what hardware do you use? RTX 3060 or better?
@dzikriamrulloh517 2 months ago
Damn, it's so amazing.
@marshala.mathew2131 2 months ago
Can you make a detailed tutorial on how to build this, please?
@syedhannaan2974 1 month ago
Did you find any content on this?
@Yar_dar 3 months ago
Hi, Oleg. Thanks for the great video. I'm sure you spent dozens of hours building this project. I have a roughly similar task right now, and of course a pile of questions at the start of this path. Is there any chance to talk and learn from your experience?
@CrazyUnleashed-wd1gx 4 months ago
Can you provide the code, please?
@petertsai2801 5 months ago
Hello, may I ask what your computer specifications are? What GPU, CPU, and memory do you use?
@PALTUBABY 6 months ago
Loved it :)
@madcatandrew 6 months ago
This is really cool to see. I've been working on a very similar pipeline, but running everything locally on my machine with Whisper, Mistral 7B, and Edge-TTS with RVC. I came across this while looking to see if anyone else had already done Audio2Face in a similar fashion.
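A minimal sketch of the kind of fully local loop described above, assuming Whisper for speech-to-text, a Mistral 7B GGUF served with llama-cpp-python for the reply, and Edge-TTS for synthesis; the model path and voice name are placeholders, and the RVC voice-conversion step mentioned in the comment is omitted:

import asyncio
import whisper                      # pip install openai-whisper
import edge_tts                     # pip install edge-tts
from llama_cpp import Llama         # pip install llama-cpp-python

asr = whisper.load_model("base")
# Placeholder path to a locally downloaded Mistral 7B Instruct GGUF file.
llm = Llama(model_path="mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

def reply_to_audio(wav_path: str) -> str:
    """Transcribe the user's audio and generate a short text reply."""
    text = asr.transcribe(wav_path)["text"]
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": text}],
        max_tokens=128,
    )
    return out["choices"][0]["message"]["content"]

async def speak(text: str, out_path: str = "reply.mp3") -> None:
    """Synthesize the reply with Edge-TTS (the voice name is just an example)."""
    await edge_tts.Communicate(text, voice="en-US-AriaNeural").save(out_path)

if __name__ == "__main__":
    answer = reply_to_audio("user_question.wav")
    asyncio.run(speak(answer))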
@vj-baker-88 6 months ago
Thanks for the video on the VJ School channel. Do you know of other optimisations to reduce the webcam delay (choice of models, LCM LoRA, use of CUDA/DirectML in ComfyUI, ...)?
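One latency-reduction route named in that comment is an LCM LoRA; below is a minimal sketch of it on top of a diffusers SD 1.5 img2img pipeline. The model IDs are the public Hugging Face ones, and this is illustrative rather than the exact setup from the video:

import torch
from diffusers import AutoPipelineForImage2Image, LCMScheduler
from PIL import Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

frame = Image.open("webcam_frame.png").convert("RGB").resize((512, 512))
out = pipe(
    prompt="portrait of a cyberpunk android, cinematic lighting",
    image=frame,
    # img2img runs roughly num_inference_steps * strength denoising steps,
    # so this is about 4 LCM steps instead of the usual 25-50.
    num_inference_steps=8,
    strength=0.5,
    guidance_scale=1.0,      # LCM-LoRA prefers little or no CFG
).images[0]
out.save("stylized_frame.png")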
@stillwalker2077 7 months ago
Hi, is it possible to make this generation faster?
@HansHeidman 4 months ago
Use StreamDiffusion instead of Stable Diffusion.
@stillwalker2077 3 months ago
@HansHeidman Are you using StreamDiffusion NDI?
@0streetsurfing0 8 months ago
Is there any guide on how to do this, and is it possible to do with a cloud GPU?
@maximreshetov8759 8 months ago
Hi, friend. I couldn't message you directly. I really liked your code on GitHub. I work in a library and wanted to create a virtual companion for visitors. I tried your code, but it looks like it is outdated. I'm asking for your help: if possible, please update it or help me with two errors. Thank you, friend.
@domain3973 9 months ago
Sir, please make a tutorial on it ❤ This is excellent 👍
@hopeonelove4703 10 months ago
I’ve always wondered about these things. How do you increase response time?
@aerx13 10 months ago
I wonder, what if I use my own chatbot connected to MetaHuman?
@progification 1 year ago
Hey, that's awesome! Could you share your process or some resources that might help with building this? I'd love to play around with this in my bachelor's thesis project :) Thanks a lot!
@user-hf4iq7nc7u 1 year ago
Amazing.
@sinanrobillard2819 1 year ago
Amazing! Congrats on the pipeline! I was trying to do something similar but couldn't manage to get the emotions onto the audio stream. How did you do it? Do you pass a fixed audio clip and then save the blend shapes to the MetaHuman character at each iteration?
@ginogarcia8730 1 year ago
Holy smokes!
@IvoAngelani 1 year ago
Hi! I'm really interested in how you got the webcam to work as input for Stable Diffusion in real time! This would be perfect for my college project. If you would be so kind as to tell me, I would be very grateful. Greetings from Argentina!
@SchlafGutGeschichte 1 year ago
Have you started on the project? Greetings.
@HansHeidman 4 months ago
Use StreamDiffusion in TouchDesigner.
@t0nj0uRs 1 year ago
Try Convai :)
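For the webcam-as-input question that opens this thread, here is a minimal sketch that grabs frames with OpenCV and pushes them through the same fast LCM-LoRA img2img setup as above; again an illustration of the general idea, not the pipeline from the video:

import cv2
import numpy as np
import torch
from diffusers import AutoPipelineForImage2Image, LCMScheduler
from PIL import Image

# Fast img2img setup (same assumptions as the LCM-LoRA sketch earlier).
pipe = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

cap = cv2.VideoCapture(0)            # default webcam
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV yields BGR numpy arrays; convert to a 512x512 RGB PIL image.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        image = Image.fromarray(rgb).resize((512, 512))
        result = pipe(
            prompt="anime portrait, soft lighting",
            image=image,
            num_inference_steps=8,   # ~4 effective LCM steps at strength 0.5
            strength=0.5,
            guidance_scale=1.0,
        ).images[0]
        # Display the stylized frame; latency depends on the GPU and step count.
        cv2.imshow("stylized", cv2.cvtColor(np.array(result), cv2.COLOR_RGB2BGR))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    cap.release()
    cv2.destroyAllWindows()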
@user-og7sl6gy2d 1 year ago
It's so cool 😮😮😮 I'm learning from you.
@mendicott 1 year ago
#virtualbeings