
So you think you know Text to Video Diffusion models? 

Neural Breakdown with AVB

Published: 18 Sep 2024

Comments: 7
@openroomxyz · 4 days ago
Thanks for creating this video, nicely explained. Keep creating!
@avb_fj · 4 days ago
Thanks for the words of encouragement! 🙏🏼
@aamirmirza2806 · 2 days ago
I was wondering whether the same kind of spatio-temporal compression could be applied to colorizing B/W (grayscale) videos, i.e. to preserve color and lighting continuity across frames, or whether something like this already exists. FYI: the NTIRE 2023 Video Colorization Challenge.
@avb_fj · 2 days ago
Yeah, that should be possible. The good thing about video colorization is that acquiring training data is very straightforward: we just take color videos, convert them to B/W, and then train a neural net to reverse that (i.e. go from B/W to color). You probably won't need diffusion for this because it's not a generative task; we could just treat it as a sequence-to-sequence prediction task.
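
[Editor's note] A minimal sketch of the self-supervised setup described in the reply above: grayscale clips are derived from color videos, and a small network learns to predict the color frames back. The class name ColorizeNet, the 3D-conv architecture, and all sizes are illustrative assumptions, not the commenter's actual method.

# PyTorch sketch: train a net to reverse a color -> B/W conversion
import torch
import torch.nn as nn

class ColorizeNet(nn.Module):
    def __init__(self):
        super().__init__()
        # 3D convs so the model can use neighboring frames for temporal consistency
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 3, kernel_size=3, padding=1), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, gray):          # gray: (B, 1, T, H, W)
        return self.net(gray)         # color: (B, 3, T, H, W)

model = ColorizeNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

color_clip = torch.rand(2, 3, 8, 64, 64)           # stand-in for real video clips
gray_clip = color_clip.mean(dim=1, keepdim=True)   # "convert to B/W": average the channels

pred = model(gray_clip)
loss = loss_fn(pred, color_clip)                   # supervised by the original colors
loss.backward()
optimizer.step()

Because the targets come for free from the color videos themselves, no manual labeling is needed; the temporal (3D) convolutions are one simple way to encourage the color and lighting continuity the question asks about.
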
@Patapom3 · 3 days ago
5:24 Shouldn't the "high semantic information" arrow point to the center of the UNet, rather than to the end, where semantic features are once again converted back into detailed info?
@avb_fj · 3 days ago
Great point! You are right, high semantic information is indeed captured within the bottleneck layers. In the illustration, however, my goal was to show how deep layers in convnets capture high semantic information, because their local receptive field expands to capture global features from the input image. The purpose was to show how the skip connections allow combining the low-level, highly localized details (at the beginning of the UNet) with global-level features from the deep layers, which are highly semantic too because they are derived partly from the bottleneck layers. Note that we can't directly add the bottleneck features to the initial feature maps with skip connections because of a shape mismatch; instead we essentially upsample the bottleneck feature maps to the correct size before combining them with the initial features. Hope that made sense.
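
[Editor's note] A minimal sketch of the point in the reply above: bottleneck features can't be combined with the early, full-resolution feature maps directly, so a UNet upsamples them first and then merges them via a skip connection. TinyUNet and all channel counts are illustrative assumptions, not the exact network from the video.

# PyTorch sketch: skip connection after upsampling the bottleneck
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())          # local details
        self.down = nn.MaxPool2d(2)
        self.bottleneck = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())  # global / semantic
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)                # fixes the shape mismatch
        self.dec = nn.Conv2d(16 + 16, 3, 3, padding=1)                               # skip = concatenate

    def forward(self, x):                  # x: (B, 3, H, W)
        e = self.enc(x)                    # (B, 16, H, W)     low-level, highly localized
        b = self.bottleneck(self.down(e))  # (B, 32, H/2, W/2) larger receptive field
        u = self.up(b)                     # (B, 16, H, W)     upsampled to match e
        return self.dec(torch.cat([e, u], dim=1))  # combine local + semantic features

out = TinyUNet()(torch.rand(1, 3, 64, 64))   # -> (1, 3, 64, 64)

The decoder output is a mix of both paths, which is why the features at the end of the UNet carry semantic information (inherited from the bottleneck) on top of the restored spatial detail.
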
@saurabhsswami · 3 days ago
gg