Gorthi Subrahmanyam
I am Dr. Subrahmanyam Gorthi, a faculty member in the Department of Electrical Engineering, Indian Institute of Technology (IIT) Tirupati, India. You may refer to my website, subrahmanyamgorthi.weebly.com/, for more details. I hope you will find some useful content on this channel!
MLIP L31 - Backpropagation Part-2
50:29
2 years ago
MLIP L30 - Backpropagation Part-1
41:14
2 years ago
MLIP L29 - Nonlinear Classifier Part-2
43:52
2 years ago
MLIP L28 - Nonlinear Classifier Part-1
42:07
2 years ago
MLIP L27 - Linear Classifier Part-2
45:38
2 years ago
MLIP L26 - Linear Classifier Part-1
42:51
2 years ago
Comments
@RamaKrishnaReddySaikam-u4c 1 month ago
Sir, please make new videos and keep uploading to YouTube. Your videos are very interesting and easy to understand.
@AdithyaVardhanReddy 2 months ago
Your clear explanations and engaging examples make topics much easier to grasp. Eagerly anticipating more of your content! I'm grateful to be guided by you, sir.
@user-rr3ki3gg5k 2 months ago
Thank you so much. It helped me a lot!
@ConPara1 5 months ago
Super helpful! Thank you!
@user-ju5fw8nc4t 6 months ago
Well-explained lecture! Really helpful, to be honest.
@kartiksaroha7976 7 months ago
Aren't you misinterpreting the subscript of lambda?
@gustavohenriquenascimentod3992 8 months ago
Thanks. I was following the book and didn't understand the part on brightness adaptation. I tried watching some other videos, but they just explain it exactly as it is in the book without making any analogy; luckily, I found this video. Regards from Brazil.
@GorthiSubrahmanyam 8 months ago
Felt happy reading your encouraging comment. Thank you.
@haiderali-wr4mu 10 months ago
Thank you for uploading these videos. They're very helpful.
@malepatilokeswarinaidu3628 1 year ago
Thank you very much for the lectures. The explanations are very clear.
@wasimawan7204 1 year ago
Sir, how can we calculate the gradient of an image using finite differences?
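There is no reply in the thread, so here is a minimal sketch of the standard approach, not taken from the lectures; `finite_diff_gradient` is a hypothetical helper name, and the forward-difference scheme with edge replication is just one common choice:

```python
import numpy as np

def finite_diff_gradient(img):
    """Approximate the image gradient with forward differences."""
    img = img.astype(float)
    # Appending the last row/column replicates the border so the
    # output keeps the input's shape; border differences become 0.
    gy = np.diff(img, axis=0, append=img[-1:, :])  # change along rows (d/dy)
    gx = np.diff(img, axis=1, append=img[:, -1:])  # change along columns (d/dx)
    return gy, gx

# A ramp whose intensity increases left to right by 1 per pixel.
ramp = np.tile(np.arange(5.0), (4, 1))
gy, gx = finite_diff_gradient(ramp)
print(gx)  # 1.0 everywhere except the replicated last column, which is 0.0
```

For central differences in the interior (often a better approximation), `np.gradient(img)` does the same job in one call.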
@ashwinkumar5223 1 year ago
Thank you, sir.
@nitishgupta3590 1 year ago
Sir, please share the slides of the lectures, and thank you for uploading these lectures for us.
@GorthiSubrahmanyam 1 year ago
subrahmanyamgorthi.weebly.com/medical-imaging.html Hope that helps!
@a.fratcobanoglu5526 1 year ago
Thanks a lot :) It really helped me out in many ways. By the way, thanks for explaining in English, since it really helps people like me who don't understand Indian languages :)
@nurpadil-ch3oe 1 year ago
Hello sir, do you know how to test the accuracy of the Canny method?
@amianifineug1353 1 year ago
Thank you. Please, do you have the MATLAB code? Thank you.
@studywithjeth 1 year ago
Sir, at 6:47, why is p_s(s) = p_r(r) * |dr/ds|? I can't figure out why dr/ds is there. Why do we multiply by it? Your answer is very important to me. Please help me out. Thank you <3
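For anyone else stuck on the same step, this is the standard change-of-variables result for probability densities (my summary, not a transcript of the video):

```latex
% For a monotonic intensity mapping s = T(r), probability mass is
% preserved between corresponding intervals:
%   p_s(s)\,|ds| = p_r(r)\,|dr|
% Dividing both sides by |ds| gives the relation used in the lecture:
\[
  p_s(s) \;=\; p_r(r)\left|\frac{dr}{ds}\right|
\]
```

Intuitively, |dr/ds| rescales the density: where T stretches the intensity axis, the same probability mass is spread over a wider interval, so the density must shrink accordingly.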
@mohamedalphakamara5665 1 year ago
Hello sir, I am very new to Machine Learning, and I have been following your videos. Please, I want to ask what the questions look like for these topics.
@GorthiSubrahmanyam 1 year ago
Please take a look at the assignments and exam papers given at the following web page: subrahmanyamgorthi.weebly.com/machine-learning-for-image-processing.html
@mohamedalphakamara5665 1 year ago
@@GorthiSubrahmanyam Alright sir, thank you for your reply. You are so good at these topics; I hope I can understand them like you.
@ashiksuresh8977 1 year ago
Thank you
@panpanchen5645 2 years ago
Thank you! This is so helpful!!!
@ShivamSingh-wp1vs 2 years ago
Thank you, sir.
@rajcet04 2 years ago
Thank you so much for the medical imaging series.
@ShopperPlug 2 years ago
Good explanation of motion estimation.
@mohsen865 2 years ago
Fantastic. I watched this video twice; it was perfectly clear.
@mohsen865 2 years ago
Long live, man. Thanks for the beautiful explanation.
@GorthiSubrahmanyam 2 years ago
Thanks a lot for your warm, encouraging comment 🙏🏼
@sandravinaykumar570 2 years ago
Excellent, sir.
@amarparajuli692 2 years ago
Sir, by looking at the histogram, can we always infer that the image has high contrast? Suppose we have an image with some values in the band [0-9] but the rest all in the bands [236-245] and [246-255]. Would we still call it a high-contrast image? Or do we need a uniform distribution of values along the intensity axis, achieved through contrast stretching?
@amarparajuli692 2 years ago
Sir, regarding aliasing and the other keywords: your course on Digital Signal Processing is enough, right? I took a course on Convolutional Neural Networks earlier, and the instructor never talked about the Nyquist theorem or how much downsampling can safely be done. These concepts are so important and yet undervalued. Thank you for the videos. Also, sir, do tell us about the prerequisites for this course, such as whether the DSP and Medical Imaging courses are required for a better understanding.
@aedty9844 2 years ago
Thanks so much.
@aedty9844 2 years ago
Very good lecture. Thank you, sir.
@quadriquadri1655 2 years ago
Sir, how can detection by template matching be done using Java?
@kanakachary2312 2 years ago
Dear sir, good lecture.
@aniruddhaupadhye759 2 years ago
Very educational. Thank you so much, sir!
@surojit9625 2 years ago
Excellent explanation and presentation. Thank you so much for making the content open source!
@GorthiSubrahmanyam 2 years ago
Thank you for the encouraging words. I am glad that the content is useful.
@sachinbhardwaj2679 2 years ago
errors
@hariharannair3281 2 years ago
Sir, I passed the Medical Image Analysis course on NPTEL (IIT Kharagpur) with decent marks. I am indebted to you, sir, for all the doubts you cleared, and that too in record time; because of that, I could succeed. Thanks a tonne, sir. I don't have your email ID, so I am using this public forum to thank you. You are doing a great service to the youth of today, sir. God bless you.
@GorthiSubrahmanyam 2 years ago
It is very gratifying for me to know that the shared videos are useful. Thank you for your appreciation and for giving your feedback. Hearty congratulations on clearing the exam, and I wish you all the very best.
@ridhiarora1417 2 years ago
Sir, your videos are very good and easy to understand.
@amarparajuli692 2 years ago
Sir, do we require domain knowledge while working with Deep Learning models as well? Can you give examples where DL models might falter when used without domain knowledge?
@GorthiSubrahmanyam 2 years ago
Deep learning models should never be seen as a substitute for domain knowledge. In fact, deep learning models should be the last resort, or at least definitely not your first resort! For complex problems, ideally, your model should elegantly combine domain knowledge with deep learning approaches. Consider a trivial example where, from domain knowledge, you know that the object of interest in a given image is circular. If you use a simple parametric model like the Hough transform, you can find those objects at minimal computational cost, even when there are occlusions. On the other hand, if you try to achieve the same by blindly feeding the image to a Deep Learning (DL) model, it is no longer a trivial problem; you may need a monster-like DL network to achieve the same result. We could go on with many such examples. Hope that gives some clarity!
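To make the Hough-transform example in the reply above concrete, here is a NumPy-only toy sketch of centre voting for circles of a known radius. It is an illustration under my own assumptions (fixed radius, a synthetic edge map), not the lecture's code, and `hough_circle_centers` is a made-up name:

```python
import numpy as np

def hough_circle_centers(edge_img, radius, n_angles=180):
    """Accumulate votes for circle centres at one fixed, known radius."""
    h, w = edge_img.shape
    acc = np.zeros((h, w), dtype=np.int32)
    ys, xs = np.nonzero(edge_img)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for y, x in zip(ys, xs):
        # Each edge point votes for every centre lying `radius` away from it.
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # unbuffered accumulation
    return acc

# Synthetic edge map: a circle of radius 10 centred at (32, 32).
img = np.zeros((64, 64), dtype=np.uint8)
t = np.linspace(0.0, 2.0 * np.pi, 360)
img[np.round(32 + 10 * np.sin(t)).astype(int),
    np.round(32 + 10 * np.cos(t)).astype(int)] = 1

acc = hough_circle_centers(img, radius=10)
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
print(cy, cx)  # accumulator peak lands at (or next to) the true centre (32, 32)
```

Occlusion robustness falls out of the voting: removing part of the circle only lowers the peak, it does not move it. OpenCV's `cv2.HoughCircles` implements a production version of this idea.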
@amarparajuli692 2 years ago
A few takeaways from the lecture:
1. What is a feature vector? Image processing vs. computer vision (I had no idea about the inference role of CV; I never looked at it that way).
2. Correlation vs. causation (very, very important).
3. Domain knowledge is the key; ML is the support for making inferences once we have the domain knowledge.
4. Recent trends: lightweight ML, and ML models that train on fewer instances.
5. ML applications: computer-aided diagnosis (doing really great in predictions) and speech recognition (a huge application in India).
6. "Three months to become a Data Scientist": great that you debunked some of the general myths.
Thank you for the lecture. P.S.: The memes were cool.
@amarparajuli692 2 years ago
Sir, can we have two instances with the same feature vector but different classes? How do we respond to that? Also, is it good practice not to start by looking for any duplicates that might occur, if they occur at all? Can duplicates happen in image-related problems?
@GorthiSubrahmanyam 2 years ago
If two instances from different classes have the same feature vector, it can mean that those features are NOT the right ones for distinguishing samples of the different classes. Hence, we need to relook at the feature vectors we use for that specific application so that they represent the samples effectively.
@amarparajuli692 2 years ago
A great introduction, sir. A few points I would like to highlight that were really creative and new in this introduction:
1. What not to expect (a really important thing before taking any course).
2. Great resources that you have tried yourself and found helpful.
3. Telling students not to worry about passing or failing the course.
4. Explaining why we need to understand the basics, using the meme video.
5. Ask any doubts you have, even if the answer is trivial. Probably, I will email you when I have a doubt.
Thanks for the online content. Keep us posted, sir.
@GorthiSubrahmanyam 2 years ago
I am glad you liked the introduction. Nice summary of points. All the best for going through the rest of the course as well.
@amarparajuli692 2 years ago
@@GorthiSubrahmanyam Sure Sir.
@GorthiSubrahmanyam 2 years ago
@@amarparajuli692 By the way, you can simply post your doubts/questions, if any, under the respective video lectures themselves as a comment, and I will try to answer them as a reply.
@amarparajuli692 2 years ago
@@GorthiSubrahmanyam Yeah, I was thinking about that. This will help future viewers as well.
@hariharannair3281 2 years ago
Sir, at 24:05, when we say mu_2, is it calculated by taking the mean of x_22 and x_12?
@GorthiSubrahmanyam 2 years ago
I am not sure I understood your question completely... mu_2 is the mean feature vector of all class-2 samples. In the example that was discussed, the size of the feature vector is 2 (i.e., x1 and x2) and hence, mu_2 is also a 2D vector. In other words, the average of x1 values of all class-2 labels is the first element of the mu_2 vector, and the average of x2 values of all class-2 labels is the second element of the mu_2 vector. Hope that clarifies.
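As a tiny numeric illustration of that reply (the sample values are made up, not from the lecture), mu_2 is just the per-feature average over the class-2 samples:

```python
import numpy as np

# Hypothetical class-2 samples; each row is one sample's feature vector (x1, x2).
class2_samples = np.array([[2.0, 3.0],
                           [4.0, 5.0],
                           [6.0, 7.0]])

# First element: mean of all x1 values; second element: mean of all x2 values.
mu_2 = class2_samples.mean(axis=0)
print(mu_2)  # [4. 5.]
```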
@hariharannair3281 2 years ago
@@GorthiSubrahmanyam Sure, sir. My doubt is clarified.
@hariharannair3281 2 years ago
@@GorthiSubrahmanyam Sir, it absolutely clarified my doubts.
@hariharannair3281 2 years ago
Sir, at 26:55, how is it that, if the covariance matrix is the same, the x1^2 and x2^2 terms get cancelled? That means they are the same in g_i and g_j. How, sir?
@GorthiSubrahmanyam 2 years ago
To find the decision boundary between class i and class j, you set g_i(x1, x2) = g_j(x1, x2). You can then notice from the equation that both the x1^2 and x2^2 terms cancel out, can't you?
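A sketch of why those quadratic terms cancel, using the standard Gaussian-discriminant form with a shared covariance (my summary, not transcribed from the video):

```latex
% With a common covariance matrix \Sigma, the discriminant of class i is
\[
  g_i(\mathbf{x}) = -\tfrac{1}{2}\,\mathbf{x}^{\top}\Sigma^{-1}\mathbf{x}
                    + \mathbf{x}^{\top}\Sigma^{-1}\boldsymbol{\mu}_i
                    - \tfrac{1}{2}\,\boldsymbol{\mu}_i^{\top}\Sigma^{-1}\boldsymbol{\mu}_i
                    + \ln P(\omega_i)
\]
% The term -\tfrac{1}{2}\,\mathbf{x}^{\top}\Sigma^{-1}\mathbf{x}, which is the
% source of the x_1^2 and x_2^2 terms, does not depend on i, so it drops out
% of g_i(\mathbf{x}) = g_j(\mathbf{x}), leaving a linear decision boundary.
\]
```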
@hariharannair3281 2 years ago
Sir, what have we achieved by applying g(x), a monotonically increasing function, to f(x)? Was it to simplify the equation by taking the log of the Gaussian equation? What is the larger purpose? Please tell us, sir.
@GorthiSubrahmanyam 2 years ago
Not just for simplifying the equation. In this process, there is no further need to begin the modeling from the Bayesian framework or to explicitly compute the likelihoods, MAPs, etc. We can now start applying "any" monotonically increasing function empirically, as long as you see a reason to do so!
@hariharannair3281 2 years ago
@@GorthiSubrahmanyam Thanks, sir.
@hariharannair3281 2 years ago
Sir, how did you frame the equation at 8:38? What does P(x|w1) * P(w1) signify?
@GorthiSubrahmanyam 2 years ago
P(x|w1) * P(w1) is proportional to the posterior probability of class w1; by Bayes' rule it differs from P(w1|x) only by the common factor P(x). At the decision boundary, the posterior of w1 equals the posterior of w2, so P(x|w1)P(w1) = P(x|w2)P(w2), and that is how we got that equation. Hope that clarifies your doubt.
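For reference, the standard decision-boundary step being discussed can be written out as follows (my summary of the textbook derivation, not a transcript):

```latex
% The boundary is where the two posteriors are equal:
\[
  P(\omega_1 \mid x) \;=\; P(\omega_2 \mid x)
\]
% Bayes' rule gives P(\omega_i \mid x) = P(x \mid \omega_i)\,P(\omega_i) / P(x);
% the common factor P(x) cancels from both sides, leaving
\[
  P(x \mid \omega_1)\,P(\omega_1) \;=\; P(x \mid \omega_2)\,P(\omega_2)
\]
```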
@hariharannair3281 2 years ago
@@GorthiSubrahmanyam Thanks so much, sir.
@yasasvyguntur4109 2 years ago
Great lecture, sir.
@Janamejaya.Channegowda 2 years ago
Thank you for sharing, looking forward to more lectures, especially when you start covering PCA later on in the course. Keep up the great work.
@kanakachary2312 2 years ago
Good
@Janamejaya.Channegowda 2 years ago
Thank you for sharing, keep up the good work.
@GorthiSubrahmanyam 2 years ago
Many thanks, Dr. Janamejaya, for the encouraging words.
@gopalsharma7402 2 years ago
Very nice explanation. I have a question: if we want to calculate this in 3D, what are the assumptions?
@053_abdulhannanbhat8 2 years ago
Just awesome. Thanks, sir, it cleared up my concepts.