Computer Vision with Hüseyin Özdemir
Here, you can find information about Computer Vision, Deep Learning, Machine Learning and Artificial Intelligence. Rest assured that your time will not be wasted.

Multi-Head Attention
10:58
3 months ago
Comparison of CNN and ViT
2:45
3 months ago
Vision Transformer
20:18
3 months ago
Greetings
1:06
1 year ago
Supervised Learning
5:00
1 year ago
Unsupervised Learning
5:28
1 year ago
Semi-Supervised Learning
4:02
1 year ago
Self-Training
3:03
1 year ago
Self-Supervised Learning
9:01
1 year ago
Autoencoder
9:47
1 year ago
Logit and Probability
8:03
1 year ago
Binary Classification
7:05
1 year ago
Multi-Class Classification
6:49
1 year ago
Multi-Label Classification
7:16
1 year ago
Loss Function
2:28
1 year ago
Cost Function
2:16
1 year ago
Binary Cross-Entropy Loss
7:09
1 year ago
Categorical Cross-Entropy Loss
6:12
1 year ago
Linear Transformation
7:22
1 year ago
Affine Transformation
11:40
1 year ago
Projective Transformation
17:50
1 year ago
Homogeneous Coordinates
11:42
1 year ago
Rigid Transformation
2:49
1 year ago
Similarity Transformation
3:07
1 year ago
Splatting
3:34
1 year ago
Image Rotation
9:46
1 year ago
Comments
@Ashish-sp4hw 11 days ago
Mathematics from scratch was something I couldn't find anywhere else. Thank you for making this awesome video ❤. But I didn't understand the following: 1. the reparameterization part, 2. how the sum of normals was calculated.
@TekesteTesfay 23 days ago
Very helpful. Thanks.
@utkuerdogan6551 1 month ago
Nice explanation. You helped me solve the calibration problem in a grid-detection task. In OpenCV, there are methods called ".getPerspectiveTransform" and "warpPerspective". If you know the math behind them, two lines of code solve the problem.
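The math the commenter refers to can be sketched directly: cv2.getPerspectiveTransform solves an 8×8 linear system for the homography that maps four source points to four destination points. Below is a minimal NumPy version of that computation (the function name and sample points are illustrative, not from the video):

```python
import numpy as np

def perspective_transform(src, dst):
    """Solve for the 3x3 homography H mapping 4 src points to 4 dst points,
    mirroring the system cv2.getPerspectiveTransform solves internally."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h00*x + h01*y + h02) / (h20*x + h21*y + 1), similarly for v;
        # multiplying out gives two linear equations per point pair
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)   # fix h22 = 1

# Map the unit square to an arbitrary convex quadrilateral
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (2, 0), (2.5, 1.5), (-0.5, 1.0)]
H = perspective_transform(src, dst)

# Applying H to a source corner (in homogeneous coordinates) recovers its image
p = H @ np.array([1.0, 1.0, 1.0])
p /= p[2]   # dehomogenize
```

Warping an image with this H is then what cv2.warpPerspective does for every pixel.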
@SplendidKunoichi 1 month ago
For years, I've wished to see it explained in just this way!!
@huseyin_ozdemir 1 month ago
Thank you for your comment
@marufahmed3416 2 months ago
Very good visual explanation, thanks very much.
@huseyin_ozdemir 1 month ago
Glad it was helpful!
@talon6277 2 months ago
Very helpful and well explained. Thank you!
@huseyin_ozdemir 1 month ago
You're welcome!
@ercancetin6002 3 months ago
Nice work
@huseyin_ozdemir 3 months ago
Thanks
@ajkdrag 3 months ago
Can you do a video on DETR and the new YOLO models?
@doublesami 3 months ago
Very good explanation. Could you please make a video on VMamba (Vision Mamba) to understand it in depth, e.g. how the 2D selective scan works? Looking forward to it.
@user-rz8qb7gm2t 3 months ago
Thanks for your detailed explanation!
@huseyin_ozdemir 3 months ago
You're welcome
@zaharvarfolomeev1536 4 months ago
Thank you! I liked your video more than anyone else's on the topic of momentum.
@huseyin_ozdemir 4 months ago
Glad it's helpful.
@ivannasha5556 5 months ago
Thanks! I was experimenting with IFS fractals 30+ years ago. I didn't remember much, and Google was no help. Everyone just lists the well-known basics, and nobody explains the math so you can make your own.
@huseyin_ozdemir 4 months ago
You're welcome!
@arinmahapatro61 7 months ago
Insightful!
@dhirajkumarsahu999 9 months ago
Thanks a lot
@huseyin_ozdemir 9 months ago
👍
@gneil1985 9 months ago
Great insights into the perspective transformation. Very clear explanation.
@huseyin_ozdemir 9 months ago
Thank you.
@sixface20 9 months ago
Great tutorial
@huseyin_ozdemir 9 months ago
Thank you!
@user-ro8kx2dc6g 9 months ago
Perfect presentation!
@huseyin_ozdemir 9 months ago
Thank you!
@thatguy5787 9 months ago
This is fantastic. Very well done.
@huseyin_ozdemir 9 months ago
Thank you very much!
@ercancetin6002 10 months ago
It's sad that such careful work gets so little attention. I wish you success, brother.
@huseyin_ozdemir 10 months ago
Thank you for your comment. Looking beyond the work I do for my channel, in a broader sense, one of the things life has taught me is that every effort and every deed has a return. Sometimes it comes right away, sometimes it takes time. Sometimes it comes directly, sometimes by indirect paths.
@mehmetozkan1075 10 months ago
ABSOLUTELY GOOD JOB. THANK YOU SO MUCH
@huseyin_ozdemir 10 months ago
Thank you 👍
@mehmetozkan1075 10 months ago
It's great that you added this lesson as well. Thanks a lot.
@huseyin_ozdemir 10 months ago
Thank you. I think YOLOv1, YOLOv2, and YOLOv3 are important for understanding how to address object detection in a single pass by formulating it as a regression problem.
@mehmetozkan1075 10 months ago
It is really a simple, understandable series that is easy to follow. It would be great if you could include courses on OpenCV, advanced computer vision, and Kaggle project solutions. Thank you for all your hard work.
@huseyin_ozdemir 10 months ago
Glad you like the videos
@krimafarjallah7553 11 months ago
💯🤍
@huseyin_ozdemir 11 months ago
👍
@denischikita 1 year ago
I didn't get it. How did the input depth go from 3 to 32?
@huseyin_ozdemir 1 year ago
Those are two different examples. In the first one, at 09:22 of the video, an RGB image is convolved with a 3×3 filter. Since an RGB image has 3 channels, the convolution filter must also have 3 channels. This is a typical filtering operation in an image processing application. The second example, at 12:08 of the video, is more generic: it illustrates a convolution operation at a convolutional layer. That's why, in the video, it's written "Let our input image depth be 32".
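The channel-matching rule in that reply can be made concrete with a small sketch: a filter always has as many channels as its input, one filter yields one output map (channel contributions are summed), and stacking 32 filters gives a depth-32 output. This NumPy example is illustrative only (shapes and the helper name are not from the video):

```python
import numpy as np

rng = np.random.default_rng(0)

# An RGB image has 3 channels, so a single 3x3 filter must also have 3 channels
image = rng.standard_normal((3, 32, 32))   # (C_in, H, W)
filt = rng.standard_normal((3, 3, 3))      # (C_in, k, k)

def conv2d_single(image, filt):
    """Valid cross-correlation of a multi-channel image with one
    multi-channel filter; channels are summed into a single output map."""
    c, h, w = image.shape
    _, k, _ = filt.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[:, i:i + k, j:j + k] * filt)
    return out

# One filter -> one output map, regardless of input depth
single_map = conv2d_single(image, filt)

# A conv layer with 32 such filters produces an output of depth 32,
# so the next layer's filters would each need 32 channels
filters = rng.standard_normal((32, 3, 3, 3))   # (C_out, C_in, k, k)
feature_map = np.stack([conv2d_single(image, f) for f in filters])
```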
@denischikita 1 year ago
Thank you. I respect your original approach to teaching such a complex topic. It helped me put things in the right place in my mind.
@huseyin_ozdemir 1 year ago
Glad it was helpful.
@vivekrai1974 1 year ago
Very informative video. I see that you have covered various topics like the mathematics of transformations, supervised learning, etc. across your videos. If you created playlists, it would be easier for viewers.
@huseyin_ozdemir 1 year ago
Thank you for the comment
@mfatihaydogdu7 1 year ago
It would be very helpful to create playlists.
@muhittinselcukgoksu1327 1 year ago
Congratulations on your Digital Image Processing videos. When commercial products are everywhere, detailed and explanatory videos are a valuable, accessible resource. Thank you so much.
@huseyin_ozdemir 1 year ago
Thank you for the comment
@dinezeazy 1 year ago
Man, I really love how you fuse different topics into a single video, and then give each topic its own separate video. This is great.
@huseyin_ozdemir 1 year ago
Glad you like the videos. Thanks for the comment.
@milanm4772 1 year ago
Nicely done. Best explanation.
@huseyin_ozdemir 1 year ago
Thanks a lot :)
@dinezeazy 1 year ago
This is amazing, please do more of these. Camera calibration too, with an example, and from there what can be achieved using the calibration, like solving the parallax problem, estimating object distance, etc. With your slow and steady style of explanation, everyone will be able to understand.
@huseyin_ozdemir 1 year ago
Thank you very much.
@mizzonimirko 1 year ago
Honestly, I do not fully understand how it works. Given a batch, shouldn't the output of that hidden layer have dimensions batch_size × output_dim? Doesn't it follow that the mean and variance should be vectors?
@huseyin_ozdemir 1 year ago
Hi, batch normalization can be confusing at first glance. Never mind. Let's say we have a fully connected layer with n neurons. If the batch size is m, then each neuron outputs m values for 1 batch of inputs. The mean and variance for that neuron for that batch are computed using those m outputs, as described at 09:01 of the video. So the mean and variance are scalars and are computed for each batch during training. One important thing to note is that while computing the mean and variance for 1 neuron, only the outputs of that neuron are used.
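Both views in this exchange are consistent: per neuron the batch mean and variance are scalars, and stacking them across the n neurons gives the length-n vectors frameworks return. A minimal NumPy sketch of the per-batch statistics described in the reply (sizes and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 4                         # batch size, number of neurons
x = rng.standard_normal((m, n))     # layer outputs for one batch

# Per-neuron statistics: each neuron's mean/variance uses only
# that neuron's m outputs (one column of x)
mean = x.mean(axis=0)               # one scalar per neuron -> shape (n,)
var = x.var(axis=0)

eps = 1e-5                          # numerical-stability constant
x_hat = (x - mean) / np.sqrt(var + eps)   # normalized activations

# Learnable per-neuron scale and shift (identity initialization here)
gamma, beta = np.ones(n), np.zeros(n)
y = gamma * x_hat + beta
```

After normalization, each neuron's outputs over the batch have (approximately) zero mean and unit variance.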
@irshadirshu0722 1 year ago
Nice explanation ❤
@huseyin_ozdemir 1 year ago
Thank you :)
@villagelifebangladesh9636 1 year ago
I don't hear any audio... don't know why.
@huseyin_ozdemir 1 year ago
I prepared some videos without voiceover. But that's not an issue :) Each video is fully self-contained.
@srihithbharadwaj3421 1 year ago
Does forward warping need depth information?
@wolfgangbierling 1 year ago
Great work! Thank you for this clear explanation!
@huseyin_ozdemir 1 year ago
Thank you. Glad it was helpful.
@cathycai9167 1 year ago
Thank you for such a clear video! It really saved me :)
@huseyin_ozdemir 1 year ago
You're welcome! Glad it helped :)
@z3515535 1 year ago
This is a good video. I am currently researching implementations of deconvolution using TensorFlow. Did you use TensorFlow for your implementation? If so, can you share the code?
@FelLoss0 1 year ago
Silent video?
@huseyin_ozdemir 1 year ago
When I first started my channel, I prepared some videos without voiceover. But I can assure you that those videos, too, include all the necessary information and detail, as text, diagrams, and images, to understand the related concepts.
@waterspray5743 1 year ago
Thank you for making everything concise and straight to the point.
@huseyin_ozdemir 1 year ago
Thank you for your comment. Glad you liked the video.
@aaryannakhat1004 1 year ago
Thanks a lot! I was having difficulty understanding how mini-batch standard deviation helps prevent mode collapse until I saw this video! Really appreciate it! Great work!
@huseyin_ozdemir 1 year ago
Thank you :)
@dyyno5578 1 year ago
Thank you very much for the clear explanation!
@huseyin_ozdemir 1 year ago
You are welcome
@mohammadyahya78 1 year ago
A third question, please: at 5:13, what do you mean by modulation weights?
@mohammadyahya78 1 year ago
Thank you again. You mentioned at 4:10 that a dimension is reduced by the reduction ratio r.
@huseyin_ozdemir 1 year ago
The reduction ratio r is used to create a bottleneck. This way, the network is forced to learn which channels are important. Unimportant channels are then suppressed by scaling them with modulation weights.
@mohammadyahya78 1 year ago
Thank you very much. May I ask what the modulation weight at 2:11 is?
@huseyin_ozdemir 1 year ago
A modulation weight scales a channel depending on the importance of that channel, so the following layers focus on important information.
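The bottleneck and modulation weights discussed in this thread follow the squeeze-and-excitation pattern: global average pooling produces one descriptor per channel, a bottleneck of width C // r forces the network to learn channel importance, and a sigmoid yields one modulation weight per channel in (0, 1). A minimal NumPy sketch under those assumptions (w1 and w2 are hypothetical stand-ins for the two fully connected layers):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def squeeze_excite(x, w1, w2):
    """Squeeze-and-Excitation sketch: global average pool, bottleneck of
    width C // r, then a per-channel modulation weight in (0, 1)."""
    s = x.mean(axis=(1, 2))            # squeeze: (C,) channel descriptor
    z = np.maximum(w1 @ s, 0.0)        # reduce to C // r, ReLU
    m = sigmoid(w2 @ z)                # excite: one modulation weight per channel
    return x * m[:, None, None], m     # rescale each channel map

rng = np.random.default_rng(2)
C, r = 16, 4                           # channels, reduction ratio
x = rng.standard_normal((C, 8, 8))     # (C, H, W) feature map
w1 = rng.standard_normal((C // r, C)) * 0.1   # hypothetical bottleneck weights
w2 = rng.standard_normal((C, C // r)) * 0.1
y, m = squeeze_excite(x, w1, w2)
```

Channels with small modulation weights are suppressed in y, which is the scaling the reply describes.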
@muhtasirimran 1 year ago
Any link to understand why the 2nd part works?
@AJ-et3vf 1 year ago
Awesome video. Thank you
@huseyin_ozdemir 1 year ago
You're welcome
@balajiharidass4997 1 year ago
Thanks for a great video. It is beautiful to see the clarity of the info without audio. Awesome! Love your other videos too :) Keep going...
@huseyin_ozdemir 1 year ago
Thanks a lot
@sivuyilesifuba 1 year ago
Nice
@huseyin_ozdemir 1 year ago
Thanks
@speedbird7587 1 year ago
Excellent explanation, thanks.
@huseyin_ozdemir 1 year ago
You're welcome