
So you think you know Text to Video Diffusion models? 

Neural Breakdown with AVB
9K subscribers
1.2K views

Published: 18 Sep 2024

Comments: 7
@openroomxyz 5 days ago
Thanks for creating this video, nicely explained. Keep creating!
@avb_fj 5 days ago
Thanks for the words of encouragement! 🙏🏼
@aamirmirza2806 2 days ago
I was wondering if the same kind of spatial-temporal compression can be applied to colorization of B/W or grayscale videos, or is there something like this out there? I mean, to preserve color and lighting continuity. FYI: NTIRE 2023 Video Colorization Challenge.
@avb_fj 2 days ago
Yeah, that should be possible to do. The good thing about video colorization is that acquiring training data is very straightforward: we just take color videos and convert them to B/W, then train a neural net to reverse it (i.e., go from B/W to color). You probably won't need diffusion for this because it's not a generative task; we could just treat it as a sequence-to-sequence prediction task.
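
A minimal sketch of that self-supervised setup, assuming PyTorch (the model, `make_training_pair`, and `TinyColorizer` are hypothetical illustrations, not anything from the video):

```python
import torch
import torch.nn as nn

def make_training_pair(color_frames):
    """color_frames: (T, 3, H, W) in [0, 1]. Returns (grayscale input, color target)."""
    # Standard ITU-R BT.601 luma weights for RGB -> grayscale.
    weights = torch.tensor([0.299, 0.587, 0.114], device=color_frames.device)
    gray = (color_frames * weights.view(1, 3, 1, 1)).sum(dim=1, keepdim=True)  # (T, 1, H, W)
    return gray, color_frames

class TinyColorizer(nn.Module):
    """Toy per-frame colorizer; a real model would add temporal layers for continuity."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, gray):  # (T, 1, H, W) -> (T, 3, H, W)
        return self.net(gray)

# One training step: plain regression (sequence-to-sequence, no diffusion needed).
model = TinyColorizer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
color = torch.rand(8, 3, 64, 64)  # stand-in for 8 video frames
gray, target = make_training_pair(color)
loss = nn.functional.l1_loss(model(gray), target)
loss.backward()
optimizer.step()
```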
@Patapom3 4 days ago
5:24 Shouldn't the "high semantic information" arrow point to the center of the UNet, rather than to the end, where semantic features are once again converted back into detailed info?
@avb_fj 4 days ago
Great point! You are right, high semantic information is indeed captured within the bottleneck layers. In the illustration, however, my goal was to show how deep layers in convnets capture high semantic information because their local receptive field expands to capture global features from the input image. The purpose was to show how the skip connections allow combining the low-level, highly localized details (at the beginning of the UNet) with global-level features from the deep layers. Those features are highly semantic too, because they're derived partly from the bottleneck layers. Note that we can't directly add the bottleneck features to the initial feature maps with skip connections because of a shape mismatch. Instead, we essentially upsample the bottleneck feature maps to the correct size before combining them with the initial features. Hope that made sense.
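
A minimal UNet-style sketch of that shape handling, assuming PyTorch (`TinyUNet` and its layer sizes are hypothetical, not the video's architecture): the deep features are upsampled back to the input resolution before being fused with the early, high-resolution features via a skip connection.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, ch=3, width=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(ch, width, 3, padding=1), nn.ReLU())  # local, low-level details
        self.down = nn.Conv2d(width, width * 2, 3, stride=2, padding=1)          # halve resolution
        self.bottleneck = nn.Sequential(                                          # large receptive field,
            nn.Conv2d(width * 2, width * 2, 3, padding=1), nn.ReLU())             # high semantic content
        self.up = nn.ConvTranspose2d(width * 2, width, 2, stride=2)               # upsample back to input size
        self.dec = nn.Conv2d(width * 2, ch, 3, padding=1)

    def forward(self, x):
        early = self.enc(x)                       # (B, 32, 64, 64): high-res, low-level features
        deep = self.bottleneck(self.down(early))  # (B, 64, 32, 32): global, semantic features
        up = self.up(deep)                        # (B, 32, 64, 64): resized so shapes match
        fused = torch.cat([early, up], dim=1)     # skip connection: combine local + global
        return self.dec(fused)

x = torch.rand(1, 3, 64, 64)
print(TinyUNet()(x).shape)  # torch.Size([1, 3, 64, 64])
```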
@saurabhsswami 4 days ago
gg