
ControlNet Paper Explained - Adding Conditional Control to Text-to-Image Diffusion Models

AI Bites
8K subscribers · 1.9K views

Published: 4 Aug 2024

Comments: 6
@abcd45058 · 4 months ago
Great work. Interesting paper read indeed. At 7:27, Bayes' theorem is incorrect; it should be P(X|Y) = P(Y|X)·P(X) / P(Y). The rest of the math that follows is fine.
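For reference, the correction the commenter is pointing out is the standard statement of Bayes' theorem:

```latex
P(X \mid Y) = \frac{P(Y \mid X)\, P(X)}{P(Y)}
```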
@AIBites · 4 months ago
Well spotted, thank you. I noticed it after the video was published and left it as-is, since YouTube doesn't allow replacing a video with a newer version. I think I should start posting errata in the comments :)
@frazuppi4897 · 7 months ago
Great video, but it's not clear how one trains it. You need pairs of ControlNet input and image output, right?
@AIBites · 5 months ago
Yes, we need depth or pose datasets, and computer vision already has several of them. The problem is that these datasets are tiny compared to the scale at which LLMs or LVMs are trained. ControlNet is the solution to this: we simply add a few trainable layers on top of the frozen model and can then train with these "small" datasets. As a result, we can control the spatial layout of the generated image at inference time. Hope that clarifies :)
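The "few trainable layers on a frozen model" idea above can be sketched in a few lines. This is a toy NumPy illustration, not the paper's code: the function and weight names (frozen_block, trainable_copy, W_zero, etc.) are hypothetical stand-ins for a frozen Stable Diffusion block, its trainable ControlNet copy, and the zero-initialised projection ("zero convolution") that joins them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen backbone block (stands in for a Stable Diffusion encoder block).
W_frozen = rng.standard_normal((16, 16))

def frozen_block(x):
    return np.tanh(x @ W_frozen)

# Trainable copy: initialised from the frozen weights, and it also
# receives the spatial condition (e.g. an encoded depth or pose map).
W_copy = W_frozen.copy()
W_cond = rng.standard_normal((16, 16))

def trainable_copy(x, cond):
    return np.tanh((x + cond @ W_cond) @ W_copy)

# "Zero convolution": a projection initialised to all zeros, so at the
# start of training the ControlNet branch contributes nothing and the
# combined model behaves exactly like the frozen one. Its gradients are
# nonzero, so it can grow as we train on the small paired dataset.
W_zero = np.zeros((16, 16))

def controlnet_forward(x, cond):
    return frozen_block(x) + trainable_copy(x, cond) @ W_zero

x = rng.standard_normal((4, 16))     # latent features
cond = rng.standard_normal((4, 16))  # encoded condition (depth/pose map)

# Before any training, adding the control branch changes nothing:
assert np.allclose(controlnet_forward(x, cond), frozen_block(x))
```

The zero initialisation is the key design choice: it lets the small conditioning dataset steer the model gradually without ever degrading the frozen backbone's output at the start of training.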
@frazuppi4897 · 5 months ago
@AIBites Yeah, but I guess ControlNet is around 50M parameters.
@AIBites · 4 months ago
That's the upper bound, I guess. Not sure what the lower bound is for training.