You have explained it so well. I have seen so many videos, but the way you explain from start to end is very relevant to what we are learning. Very, very good explanation of the Stable Diffusion workflow.
I'm sure you get a lot of comments like this, but I've been binging MKBHD vids and saw him recommend your video about compression. I watched it and was so impressed by how well you explained it and am equally impressed with this one. Especially to someone like me who has very basic understanding of the concepts. Can't wait to binge more of your videos now! Subscribed 😃
Crystal clear explanation, thanks a lot! With the recent release of Meta's SAM, I was wondering whether it would be feasible to make an improved text embedding model (i.e., CLIP) by, instead of classifying the image with a sentence, creating bounding boxes and applying a mask with different weights to indicate exactly where a specific object is in the image. For example, in the image with the white dog on the beach, for the description "samoyed dog", pixels "making up" the dog would have a weight of 1.0, while others would have a weight of 0. I'd be interested to know what you think; I'm quite unfamiliar with how these embedding models work :)
Thanks! That's an interesting idea! Given the scale of the training data, my guess is that it wouldn't make much of a difference for CLIP. It may be useful when training domain-specific models with limited data, though. With dense segmentation labels, we would get more information from fewer images.
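The weighting idea above could be sketched as mask-weighted pooling of patch features before the CLIP-style contrastive comparison. This is only a minimal illustration, not how CLIP is actually trained; all names, shapes, and the random features are hypothetical:

```python
import numpy as np

def masked_image_embedding(patch_feats, mask_weights):
    """Pool patch features into one image embedding, weighting each patch
    by how much of the described object it contains (0.0 to 1.0)."""
    w = mask_weights / (mask_weights.sum() + 1e-8)
    emb = (patch_feats * w[:, None]).sum(axis=0)
    return emb / (np.linalg.norm(emb) + 1e-8)  # unit-normalize, as in CLIP

patches = np.random.default_rng(0).normal(size=(49, 128))  # hypothetical 7x7 patch grid
dog_mask = np.zeros(49)
dog_mask[20:30] = 1.0  # patches covering the "samoyed dog" get weight 1.0
emb = masked_image_embedding(patches, dog_mask)
```

The cosine similarity between `emb` and a text embedding would then reflect only the masked region rather than the whole image, which is one way the dense labels could inject extra supervision per image.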
Leo, you are absolutely amazing. Could you make a video mathematically explaining text classification: single-task classification, multi-task classification, and multi-task transfer learning? Please!
Thanks, Leo, for the video. The concept of converting a noised image into a clear image is understood, but how does the model create an image that doesn't exist in its training data? It is understood that the model doesn't understand the concepts in the image and only focuses on patterns. But how are the operations below performed?
1. Creating a cartoon image of a cat based on a caption, e.g., "Place a hat on top of a cat". How does it create a cartoon image of a cat? How does it know the exact location of the cat's head? How does it know to place the hat exactly on the head?
2. "A close-up shot of a dog facing the sun". How does it know to create a close-up shot of a dog? How does it know to place the sun in the background? How does it make the subject turn towards the sun?
No videos exist that explain this concept. It would be of great help if you could make a video on it.
Sure :) The short answer to your questions is cross-attention. The U-Net-based generator is conditioned on text embeddings. Spatial attention, softmax(QKᵀ), determines which image regions attend to which input tokens (e.g., "cat", "hat", etc.).
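The cross-attention described above can be sketched in a few lines of numpy. Here the flattened image feature map provides the queries and the text token embeddings provide the keys and values; the projection matrices are random stand-ins for learned weights, and all shapes are made up for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(image_feats, text_embeds, d_k=64, seed=0):
    """Each spatial position of the image attends over the text tokens."""
    rng = np.random.default_rng(seed)
    n_pix, c = image_feats.shape   # flattened spatial positions
    n_tok, d = text_embeds.shape   # tokens from the text encoder
    # Learned projections in a real model; random here for the sketch
    W_q = rng.normal(size=(c, d_k))
    W_k = rng.normal(size=(d, d_k))
    W_v = rng.normal(size=(d, c))
    Q = image_feats @ W_q
    K = text_embeds @ W_k
    V = text_embeds @ W_v
    # attn[i, j]: how much pixel i attends to token j (rows sum to 1)
    attn = softmax(Q @ K.T / np.sqrt(d_k))
    return attn @ V, attn

feats = np.random.default_rng(1).normal(size=(16, 32))   # 4x4 feature map, flattened
tokens = np.random.default_rng(2).normal(size=(5, 48))   # 5 text tokens
out, attn = cross_attention(feats, tokens)
```

Each row of `attn` is a distribution over the tokens, which is why, e.g., "hat" can end up influencing only the spatial positions near the cat's head.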
Hello, teacher, I have a question for you. Would it make sense to study software engineering at a public university in Turkey and go abroad right after graduating (I have a passport)? I mean, does it have a future, or would dentistry make more sense?
Thanks! There are many paths to becoming an expert in computer vision or any other field, including taking courses, reading books and research papers, and practicing coding. If you already have some background in the field, I would recommend looking at papers with code. Good luck!
@@leoisikdogan Thank you for your responses. As you are an expert, when you have time, please prepare videos about future trends and applications of deep learning. Thanks!
Thanks for the feedback. This video was indeed a bit denser than usual since I tried to fit a lot of information in 10 minutes. You can check out my Deep Learning Crash Course and Image and Video Processing series for more introductory videos.
This video doesn't show up in your YouTube channel. I got here from a web page that embeds your video. I assume you have the video set to "unlisted". If you want more views, you should change it to "public"; otherwise very few people will find it.