
SVM Kernels : Data Science Concepts 

ritvikmath
163K subscribers
72K views

A backdoor into higher dimensions.
SVM Dual Video: • SVM Dual : Data Scienc...
My Patreon : www.patreon.co...

Published: Sep 5, 2024

Comments: 121
@cassie8324 · 1 year ago
You have been teaching me the fundamentals of SVMs better than my expensive professor at my university. Thank you, man.
@mitsuinormal · 6 months ago
same
@KoriKosmos · 4 months ago
+1
@Gibson-xn8xk · 2 years ago
I started learning SVMs looking for material that would provide an intuitive understanding of how this model works. By now I have covered all the mathematics behind it in depth, and I have spent almost a month on it. It sounds like an eternity, but I can't feel confident until I've considered everything in detail. In my opinion, basic intuition is the most important thing in exploring a model, and you did this extremely well. Thank you for your time and work. For those who are new to this channel, I highly recommend subscribing. This guy makes awesome content!
@matattz · 1 year ago
Hey, I love that everything we learn in the video is already written on the board. It's so clean and compact, yet so much information. Just great, man.
@ritvikmath · 1 year ago
Thanks so much!
@bztomato3131 · 1 month ago
When someone has tried hard to understand something, he can explain it much better than others. Thanks a lot.
@flvstrahl · 1 year ago
By far the best explanation of kernels that I've seen/read. Fantastic job!
@1MrAND · 2 months ago
Dude, you are a legend. Finally I understood the power of kernel functions. Thanks!
@undertaker7523 · 1 year ago
I'd love to see a video on Gaussian Process Regression, or just Gaussian Processes in general! Thanks for this video - very helpful.
@norebar5848 · 1 year ago
You are blowing my mind, sir, thank you for this amazing explanation! No one else has been able to teach the subject of SVMs this well.
@samruddhideshmukh5928 · 3 years ago
Amazing explanation!! Finally kernels are way clearer to me than they have been in the past.
@the_stat_club · 6 months ago
Was stuck on kernels for 3 days, looking at numerous lectures online. You just made it clear. Thank you so much!
@CodeEmporium · 3 years ago
Good stuff
@ritvikmath · 3 years ago
Thanks for the visit!
@DavidLolxe · 1 year ago
As someone who's searched everywhere for an explanation of this topic, this is the only good one out there. Thanks so much!
@twincivet9668 · 1 year ago
Note: to get the inner product after the transformation to be equivalent to (1 + x_i · x_j)^2, the transformation needs to include some constants. Specifically, the transformation should be [x1, x2] --> [1, sqrt(2)*x1, sqrt(2)*x2, x1^2, x2^2, sqrt(2)*x1*x2].
@durgeshmishra-fn6kx · 7 months ago
Alternatively, ignore the coefficients: the expansion will have a term like 2*xi^(1)*xj^(1), so only consider xi^(1)*xj^(1) and drop the 2; then you will get the match.
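A quick numerical check of the claim above (a sketch in NumPy; the helper names phi and poly_kernel are illustrative, not from the video):

import numpy as np

def phi(x):
    # Explicit degree-2 feature map for a 2-D point, including the sqrt(2) constants.
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1,
                     np.sqrt(2) * x2,
                     x1 ** 2,
                     x2 ** 2,
                     np.sqrt(2) * x1 * x2])

def poly_kernel(xi, xj):
    # Degree-2 polynomial kernel evaluated directly in the original 2-D space.
    return (1.0 + xi @ xj) ** 2

rng = np.random.default_rng(0)
xi, xj = rng.normal(size=2), rng.normal(size=2)

# Both paths give the same number, but the kernel never builds the 6-D vectors.
print(phi(xi) @ phi(xj))      # explicit transform, then inner product
print(poly_kernel(xi, xj))    # kernel trick: inner product in 2-D, then square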
@adelazhou5900 · 5 months ago
The two paths diagram explains everything so clearly! Thank you!!
@ritvikmath · 5 months ago
You're very welcome!
@AndBar283 · 3 years ago
Huge, big thank you for your hard work and for spreading the knowledge. Nice, brave explanation.
@ritvikmath · 3 years ago
My pleasure!
@DevanshKhandekar · 2 years ago
Great man. After months of stumbling over convex optimization theory, KKT, and whatnot, this video made everything clear. Highly appreciated. 👏👏
@anurondas3853 · 1 year ago
Much better than other YouTubers explaining the same concept.
@guygirineza4001 · 3 years ago
Might be one of the best videos I have seen on SVMs. Crazy.
@danielbriones6171 · 2 years ago
Been struggling to grasp this even after watching a bunch of YouTube videos. Finally understand! Must be the magic of the whiteboard!
@obakasan31 · 1 year ago
This is the clearest explanation of this topic I've seen so far. Thank you.
@BiKey91 · 9 months ago
Dude, I like before even watching the vids because I know I won't be disappointed.
@johnstephen8041 · 5 months ago
Bro, thanks so much!! The way you teach and your understanding are crazy!
@ritvikmath · 5 months ago
Happy to help!
@mehdi_mbh · 8 months ago
You are amazing! Thank you so much for explaining the math and the intuition behind all of this. Fantastic teaching skills.
@amaramar4969 · 5 months ago
Amazing, amazing, you are my true guru while I prepare for the university exam. You are far, far above my college professors, whom I barely understand. Hope you get your true due somehow. Subscribed already. 🙏
@DeltaPi314 · 3 years ago
Marketer studying Data Science here. Amazing content!
@ritvikmath · 3 years ago
Glad you enjoy it!
@alimurtaza4904 · 1 year ago
This explanation cleared up everything for me! Amazing work, I can't thank you enough!
@aalailayahya · 3 years ago
Absolutely great!
@Palapi_H · 1 year ago
Can't thank you enough for explaining it so simply.
@gufo__4922 · 2 years ago
I found you by chance and this was a damn miracle; I will constantly check for new videos.
@qiguosun129 · 2 years ago
You summed up all the needed knowledge about SVMs, and the discussion in this episode is more philosophical. Thank you very much for the course.
@martian.07_ · 11 months ago
Very underrated video.
@xKikero · 9 months ago
This is the best video I've seen on this topic. Thank you, sir.
@Daily_language · 4 months ago
Clearly explained! Thank you!
@hareemesahar6140 · 4 months ago
That's a great video. Thank you for making this.
@SiddhantSethi02 · 1 year ago
Hey man, just wanted to commend you for your beautiful work on making some of the key complex fundamentals, such as this one, easy to grasp. :D
@zzzzzzzmr9759 · 1 year ago
Very clear and well-organized explanation. Thank you!
@ritvikmath · 1 year ago
Glad it was helpful!
@liat978 · 1 year ago
This is the first time I get it! Thank you.
@thecamelbackfiles3685 · 2 years ago
Smart AND fit - these videos are like candy for my eyes and brain 🧠 😂
@uoohknk6881 · 2 years ago
You spittin knowledge, GD! This needs to go viral.
@Ranjithbhat444 · 2 years ago
Can't get any better explanation than this 👌🏼
@alessandro5847 · 2 years ago
Such a great explanation. First time I get it after many attempts.
@abdelrahmantaha9785 · 1 year ago
Very well explained, thank you!
@yt-1161 · 2 years ago
Your data science concepts video series is one of a kind.
@process6996 · 3 years ago
Awesome explanation. Thank you!
@ritvikmath · 3 years ago
Glad it was helpful!
@morisakomasaru8020 · 3 years ago
I finally understood what a kernel does! Thanks!
@manishbolbanda9872 · 3 years ago
We get inner products of the high-dimensional data without even converting the data into the higher dimension; that's the conclusion I drew. Correct me if I'm wrong.
@ritvikmath · 3 years ago
Yup, that's exactly the main point!
@pranavjain9799 · 2 years ago
This is an incredible explanation. It helped me a lot. Thank you so much.
@axadify · 3 years ago
That's the best video I have seen on kernels on YT! Great content.
@asdadasasdsaasd · 1 month ago
Nice explanation.
@geogeo14000 · 1 year ago
Very insightful, thanks a lot.
@eyuelmelese944 · 1 year ago
This is amazing.
@JOHNREINKER · 4 months ago
This video is goated.
@eacd2743 · 1 year ago
Great video man, thanks a lot!
@giantplantofweed6061 · 1 year ago
That was well explained. Thank you.
@ritvikmath · 1 year ago
Glad it was helpful!
@zahratebiyaniyan1592 · 1 year ago
You are GREAT!
@loveen3186 · 1 year ago
Amazing teacher.
@ritvikmath · 1 year ago
Glad you think so!
@javiergonzalezarmas8250 · 1 year ago
Beautiful!
@ritvikmath · 1 year ago
Thank you! Cheers!
@lechx32 · 1 year ago
Thank you. I just imagined what a hard time I would have if I tried to grind through all of this math on my own. It is not a good idea for a beginner)
@softerseltzer · 3 years ago
Your videos are of exquisite quality.
@e555t66 · 1 year ago
Really well explained. If you want the theoretical concepts, one could try doing the MIT MicroMasters. It's rigorous and demands 10 to 15 hours a week.
@hazema.6150 · 2 years ago
Masha'Allah man, like really Masha'Allah. This is just beautiful and truly a piece of gold. Thank you for this.
@shaktijain8560 · 2 years ago
Simply amazing 🤩
@jasonwang9990 · 2 years ago
Amazing explanation!
@zwitter689 · 1 year ago
You have done a very good job here - thank you! How about a list of the YouTube videos you have done? (I just subscribed)
@mahdimoosavi2109 · 2 years ago
Dude, I love you.
@dungtranmanh7820 · 1 year ago
Thank you very much ❤, you save us a lot of time and effort. Hope I can work with you someday.
@harshitlamba155 · 2 years ago
Hi Ritvik, this is an excellent explanation of the kernel trick concept. I have a doubt though. When we apply the degree-2 polynomial trick to the dot product of the two vectors, we apply the (a+b+c)^2 formula. Doing this introduces a factor of 2 for a few terms. Is it ignored since it just scales the dot product?
@durgeshmishra-fn6kx · 7 months ago
Ignore the coefficients: the expansion will have a term like 2*xi^(1)*xj^(1), so only consider xi^(1)*xj^(1) and drop the 2; then you will get the match.
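For reference, the worked expansion behind this exchange (my own algebra, not transcribed from the video). Squaring the kernel shows where the factors of 2 come from, and why the explicit map can absorb them as sqrt(2) constants:

\[
\begin{aligned}
\left(1 + x_i^{(1)} x_j^{(1)} + x_i^{(2)} x_j^{(2)}\right)^2
&= 1 + 2\,x_i^{(1)} x_j^{(1)} + 2\,x_i^{(2)} x_j^{(2)}
 + \left(x_i^{(1)} x_j^{(1)}\right)^2 + \left(x_i^{(2)} x_j^{(2)}\right)^2
 + 2\,x_i^{(1)} x_j^{(1)} x_i^{(2)} x_j^{(2)} \\
&= \phi(x_i) \cdot \phi(x_j),
\qquad
\phi(x) = \left(1,\ \sqrt{2}\,x^{(1)},\ \sqrt{2}\,x^{(2)},\ \big(x^{(1)}\big)^2,\ \big(x^{(2)}\big)^2,\ \sqrt{2}\,x^{(1)} x^{(2)}\right).
\end{aligned}
\]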
@ireoluwaTH · 1 year ago
Your videos rank pretty high on the 'binge-ability' matrix...
@ritvikmath · 1 year ago
Thanks!
@nimeesha0550 · 3 years ago
Great job! Thank you so much!!
@kevinmeyer3863 · 3 years ago
Hi Ritvik, in the end you have to sum the values in the 6-tuple to get the equivalent of the kernel output, right? (in order to get a proper scalar from the scalar product)
@GAZ___ · 3 months ago
This is a good explanation, but I'm a bit confused about the terms in the bottom right corner. Did we reach those by squaring the parentheses and then taking...? That's going to result in the sum of the terms, so what did we do next - take each term independently and set it as a term?
@moravskyvrabec · 1 year ago
Dude, like the other commenters say, you are so good at just laying stuff out in plain English. Just for this and the prior video I'm going to hit subscribe... you deserve it!
@ritvikmath · 1 year ago
Wow, thanks!
@arvinds7182 · 1 year ago
quality 👏
@jalaltajdini7959 · 2 years ago
Thanks, this was just what I wanted 😙
@maged4087 · 2 years ago
I love you man. I am a VT student. I wish I had known this a month ago :(
@oscargonzalez-barrios9502 · 2 years ago
Wow, thank you so much!
@MauroAndretta · 1 month ago
What is not clear to me is: is the output of the kernel function a scalar?
@victorsun9802 · 3 years ago
Amazing explanation! Thanks for making this series of videos on SVMs. One question: can the kernel trick also be applied to other models like logistic regression? I saw some online posts saying kernels can be applied to logistic regression, but it seems very unpopular. I wonder if it's because logistic regression and other models can't really get the dot product term, which makes the computation expensive, or if there are other reasons? Thanks!
@durgeshmishra-fn6kx · 7 months ago
A little late, but still: it can be applied to any ML algorithm, for example kernelized linear regression and so on, to include higher-dimensional polynomial features instead of linear attributes.
@thirdreplicator · 2 years ago
Ritvik for president!
@ritvikmath · 2 years ago
haha!
@walidghazouani9427 · 2 years ago
What is x_j exactly? Am I understanding it right if I consider it as the triangle data point and the x_i as the x data points...? So x_j is like the feature variables within our data...?
@mainakmukherjee3444 · 1 year ago
Why do we calculate the inner products? I understand the data points need to be transformed into higher dimensions so that they can be linearly separable. But why do we calculate the 6-dimensional space for that? Say we have a 2D space (the original feature space); we could transform it to a 3D space to get things done.
@moatzmaloo · 2 months ago
That's correct; applying a polynomial kernel (quadratic, for example) will convert it to 3 dimensions, but RBF can convert it to infinite dimensions.
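On that last point, the standard argument for "infinite dimensions" (my own summary, not from the video) comes from expanding the RBF kernel:

\[
K_{\mathrm{RBF}}(x_i, x_j) = e^{-\gamma \lVert x_i - x_j \rVert^2}
= e^{-\gamma \lVert x_i \rVert^2}\, e^{-\gamma \lVert x_j \rVert^2}\, e^{2\gamma\, x_i \cdot x_j}
= e^{-\gamma \lVert x_i \rVert^2}\, e^{-\gamma \lVert x_j \rVert^2} \sum_{n=0}^{\infty} \frac{(2\gamma)^n \,(x_i \cdot x_j)^n}{n!},
\]

so the kernel contains polynomial terms of every degree n, which is why its implicit feature space is infinite-dimensional.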
@manishbolbanda9872 · 3 years ago
What do you mean by inner products of the original data?
@samuelrojas3766 · 2 months ago
I am still confused about how you developed the kernels in the first place. I know what they do but don't know how to obtain them without using the transformed space.
@damialesh2109 · 2 years ago
If we plugged the kernel function output (the similarity of our points in the higher-dimensional space) into the primal version of the cost function, i.e. used the similarity instead of the inputs themselves, would it be equivalent to solving the dual? Just a lot more inefficient?
@PF-vn4qz · 3 years ago
Thank you!
@Fat_Cat_Fly · 9 months ago
magic
@mattkunq · 2 years ago
Can someone elaborate on how a kernel does that exactly? At the end of the day, we still need the higher-dimension data, no? I'm confused.
@murilopalomosebilla2999 · 2 years ago
Thanks!
@Kirill-xp9jq · 3 years ago
What is the purpose of finding the relationship between two separate vectors? Why can't you just take the polynomial of a vector with respect to itself, (xi_1^T xi_1 + c)^2? Wouldn't your number of terms just blow up when you have to find K(xa, xb) for every a and b in X?
@hussameldinrabah5018 · 2 years ago
Why do we add the 1 term to the dot product in the kernel?
@richardbloemenkamp8532 · 1 year ago
He did not derive the kernel. He showed that if you use (1 + <x_i, x_j>)^2 as a kernel, then if you work it out, you get exactly the same terms as when you explicitly compute <phi(x_i), phi(x_j)> (except for a few factors of 2). If you took the kernel <x_i, x_j>^2, you would not get the same terms. Probably some clever person invented the kernel (1 + <x_i, x_j>)^2, but it is not explained here how he/she found it. Note there are also other kernel functions that work well for SVMs, but with different basis functions.
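To that last point, a minimal sketch of swapping kernels on the same data (assuming scikit-learn; the toy dataset and parameter values are made up for illustration):

import numpy as np
from sklearn.svm import SVC

# Toy data: class 1 is a ring around class 0, so no linear boundary exists in 2-D.
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 100)
inner = rng.normal(scale=0.3, size=(100, 2))                # class 0: blob at the origin
outer = np.c_[np.cos(angles), np.sin(angles)] * 2.0         # class 1: ring of radius 2
X = np.vstack([inner, outer])
y = np.array([0] * 100 + [1] * 100)

# Same data, different kernels; each one implies a different implicit feature space.
for kernel, kwargs in [("poly", dict(degree=2, coef0=1)),   # (gamma * x_i.x_j + coef0)^degree
                       ("rbf", dict(gamma=1.0))]:           # exp(-gamma * ||x_i - x_j||^2)
    clf = SVC(kernel=kernel, **kwargs).fit(X, y)
    print(kernel, clf.score(X, y))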
@revycayolivia · 2 years ago
Sorry, may I ask: what if we have 4 or 5 classes? How do we describe or use it then?
@ccuuttww · 3 years ago
The phi is always impossible to compute directly. If you don't mind, I can give you a simple kernel PCA example to help viewers, because this concept is hard to understand if you are new to these topics.
@ritvikmath · 3 years ago
Sure! Any resources are always welcome.
@iidtxbc · 3 years ago
What does the 1 mean in the transformed matrix?
@ritvikmath · 3 years ago
The 1 is just for the "intercept". It's like the "b" term in the linear equation "y = mx + b".
@KernaaliKehveli · 3 years ago
Hey, I know your videos follow the current theme, but it would be great to have a projector matrix/subspace video at some point in the future! Keep up the great content.
@skelgamingyt · 9 months ago
Are you from India, bro?
@nuclearcornflakes3542 · 10 months ago
let him cook