
Live Day 6 - Discussing KMeans, Hierarchical and DBSCAN Clustering Algorithms

Krish Naik
998K subscribers
125K views

Join the community session at ineuron.ai/course/Mega-Community. All the materials will be uploaded there.
Live ML Playlist: • Live Machine Learning
The OneNeuron lifetime subscription has been extended.
On the OneNeuron platform you will be able to get 100+ courses (at least 20 courses will be added monthly based on your demand).
Features of the course:
1. You can raise a demand for any course (fulfilled within 45-60 days).
2. You can access the innovation lab from iNeuron.
3. You can use our incubation centre for your ideas.
4. Live sessions coming soon (most likely by Feb).
Use coupon code KRISH10 for an additional 10% discount.
And many more...
Enroll now
OneNeuron Link: one-neuron.ineuron.ai/
Call our team directly in case of any queries:
8788503778
6260726925
9538303385
866003424

Published: 1 Aug 2024

Comments: 87
@AmirAli-id9rq · 2 years ago
Excellent session. In easy terms: BIAS is the inability of an ML algorithm to capture the exact relationship. To understand bias, think about why we need ML in the first place. In mathematics or physics we have an absolute relationship or formula between dependent and independent variables, like s = ut + ½at² (7th-grade physics) or SI = P·R·T, so for cases where we have an absolute formula we don't need any ML algorithm. ML tries to do the same thing, i.e. estimate a formula. Say I want to calculate purchasing power (P), so I train a model with variables like income, age and family income, and my model fetches a formula P = w0 + b1·income + b2·age + b3·family income. This formula is not absolute or universal, as it is derived by a specific ML algorithm from specific data. But say, by some miracle, we derive a formula that calculates purchasing power with 100 percent accuracy; for that model, bias is 0, as the model accurately captures the relationship.
Variance: in short, the difference in fits between data sets is called variance. Imagine we used that same miracle formula on test data and it fits 100 percent, i.e. we get 100 percent accuracy on a different test set; then we can say the variance is 0, which means the ML formula is perfect. Or say that, when we use the same miracle formula on the test set, we get 50% accuracy: the bias was low but the variance is high, as the formula didn't work well on unseen (test) data. So in an imaginary world, if bias is 0 and variance is also 0, then, my friend, you have discovered a formula, not an estimation. In the practical world we aim for a model with low bias and low variance. Subscribe to Krish's channel if this helped.
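The trade-off described above can be sketched numerically: a model that ignores the inputs (high bias) has a large error on both training and test data, while a model that memorises the training set (low bias, high variance) is perfect on train but degrades on unseen points. A minimal pure-Python sketch on toy data (all names and numbers are illustrative, not from the video):

```python
import random

random.seed(0)

# Toy data: y = x^2 plus noise. Train and test come from the same
# underlying relationship but are different samples.
train = [(x, x * x + random.gauss(0, 1)) for x in range(10)]
test = [(x + 0.5, (x + 0.5) ** 2 + random.gauss(0, 1)) for x in range(10)]

def mse(pairs, predict):
    """Mean squared error of a prediction function on (x, y) pairs."""
    return sum((y - predict(x)) ** 2 for x, y in pairs) / len(pairs)

# High-bias model: ignores x entirely and predicts the training mean.
# It cannot capture the x -> y relationship, so it is bad everywhere.
mean_y = sum(y for _, y in train) / len(train)
def constant(x):
    return mean_y

# Low-bias, high-variance model: memorises the training set (predict
# the y of the nearest training x). Perfect on train, worse on test.
def memorise(x):
    return min(train, key=lambda pt: abs(pt[0] - x))[1]

print("constant:", mse(train, constant), mse(test, constant))  # large on both
print("memorise:", mse(train, memorise), mse(test, memorise))  # 0 on train
```

The gap between train and test error for `memorise` is the variance at work; the uniformly large error of `constant` is bias.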
@vikascbr · 2 years ago
Good morning Krish. You have really made my foundation very strong; before this I knew nothing about statistics and machine learning, since I come from a non-technical background. Now I can read very high-level books and really understand them. You are a great value addition to my learning path.
@KamalSingh-rt2bb · 2 years ago
Hello sir, I start every morning with a new session of machine learning, and the last 6 days have taught me a lot about machine learning algorithms. Thank you very much for this playlist.
@akhilbez88 · 1 year ago
You are the best teacher I have had in my life in this domain. Thanks a lot for sharing this kind of knowledge.
@kumarnityanand4731 · 2 years ago
Excellent, knowledge-gaining session; every second spent was a gain. Thanks a lot 😊 Keep helping and sharing the knowledge & concepts 💐💐💐
@geekyprogrammer4831 · 2 years ago
Good evening Krish. Your content is an absolute gold mine. Please arrange Deep Learning sessions next :)
@Dovahkiin7994 · 1 year ago
Thanks for this great tutorial.
@krishnadhawalapure · 1 year ago
You are one of the best teachers any student can have. ❤
@yusmanisleidissotolongo4433 · 4 months ago
Excellent, just excellent. Thanks
@gayanath009 · 4 months ago
Super explanation, as always. Hats off!
@kaustubhkapare807 · 2 years ago
Thank You
@gh504 · 2 years ago
Amazing explanation, thank you sir.
@parth.mandaliya · 2 years ago
A humble request to you @Krish: make the next live streams on Deep Learning.
@Ishaheennabi · 2 years ago
ya
@prashantkandarkar8993 · 2 years ago
Yes
@kkevinluke · 2 years ago
I would go for EDA, because that is more applicable in job scenarios. It depends on the role, but generally most roles require strong EDA knowledge, so I would go for 7 days of EDA next.
@parth.mandaliya · 2 years ago
@kkevinluke looks like your opinion won. And I agree with you.
@kkevinluke · 2 years ago
Hello @Krish, thank you for the explanations. Please do in-depth EDA sessions next. I appreciate your efforts very much; thanks again.
@rahulalladi2086 · 2 years ago
I got placed at Tiger Analytics. Credit goes to you, Krish; your videos helped me crack the interview.
@rafibasha4145 · 2 years ago
Hi Rahul, congrats! Please share the interview questions.
@pollypravir5378 · 2 years ago
Thanks
@harshitsamdhani1708 · 7 months ago
Thank you for the lecture
@pankajkumarbarman765 · 2 years ago
Thank you so much sir❤️
@abhishekpatil1106 · 2 years ago
First things first: great session 👏 👌 👍
@harshgupta3641 · 2 years ago
This video is incredible and very well explained. But if we have more than one feature in our dataset, should we do feature selection first and then perform the elbow test?
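For context, the elbow method just runs k-means for several values of k and compares the within-cluster sum of squares (WCSS, a.k.a. inertia); and since k-means is distance-based, it is indeed common to do feature selection and scaling before it, as irrelevant features distort the distances. A minimal pure-Python sketch of the WCSS computation (toy data, illustrative only):

```python
import random

def dist2(a, b):
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means; returns (centroids, WCSS a.k.a. inertia)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: dist2(p, centroids[c]))].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = tuple(sum(vals) / len(cl) for vals in zip(*cl))
    wcss = sum(min(dist2(p, c) for c in centroids) for p in points)
    return centroids, wcss

# Two well-separated blobs: WCSS drops sharply from k=1 to k=2 and then
# flattens, so the "elbow" of the curve suggests k=2.
points = [(0, 0), (0, 1), (1, 0), (1, 1), (9, 9), (9, 10), (10, 9), (10, 10)]
wcss = {k: kmeans(points, k)[1] for k in (1, 2, 3)}
print(wcss)
```

Plotting `wcss` against k gives the elbow curve discussed in the session.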
@piyushsonekar1225 · 1 year ago
Thanks! I really wanted to know the exact definitions of bias & variance. Great teaching.
@kkevinluke · 2 years ago
Is the silhouette score applicable to hierarchical clustering, since some clusters are within other clusters? How do we differentiate a(i) from b(i) then?
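For what it's worth, the silhouette score only needs the final labels and a distance metric, so it applies to any hard clustering, hierarchical included: a(i) is the mean distance from point i to its own cluster, b(i) the smallest mean distance to any other cluster, regardless of how the labels were produced. A minimal pure-Python sketch (toy data, illustrative only):

```python
from math import dist

def silhouette(points, labels):
    """Mean silhouette coefficient, s(i) = (b(i) - a(i)) / max(a(i), b(i)).
    a(i): mean distance from point i to the rest of its own cluster.
    b(i): smallest mean distance from point i to any OTHER cluster.
    Only the final labels are used, so this works for k-means,
    hierarchical clustering, or any other hard clustering."""
    scores = []
    for i, (p, lab) in enumerate(zip(points, labels)):
        same = [dist(p, q) for j, (q, l) in enumerate(zip(points, labels))
                if l == lab and j != i]
        if not same:  # singleton cluster: s(i) is defined as 0
            scores.append(0.0)
            continue
        a = sum(same) / len(same)
        b = min(
            sum(dist(p, q) for q, l in zip(points, labels) if l == other)
            / labels.count(other)
            for other in set(labels) if other != lab
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two tight, well-separated blobs -> mean score close to +1.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
labs = [0, 0, 0, 1, 1, 1]
print(round(silhouette(pts, labs), 3))
```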
@akarkabkarim · 1 year ago
Thank you sir Krish.
@rafibasha4145 · 2 years ago
Please cover XGBoost, GBM and CatBoost in live videos so we can understand and learn better.
@hamzasabir6480 · 1 year ago
Hello Krish! How is it possible to have 3 centroids when k=2 is specified, as you said at 32:00 while introducing k-means++?
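For reference, k-means++ with k=2 always returns exactly 2 centroids; the "++" part only changes how the initial centroids are seeded (spread out, with probability proportional to squared distance from the centroids already chosen). A minimal pure-Python sketch of that seeding step (toy data, illustrative only):

```python
import random

def kmeanspp_init(points, k, seed=0):
    """k-means++ seeding: the first centroid is uniform at random; each
    later centroid is drawn with probability proportional to D(x)^2,
    the squared distance to the nearest centroid chosen so far.
    Exactly k centroids are returned for n_clusters = k."""
    rng = random.Random(seed)
    centroids = [rng.choice(points)]
    while len(centroids) < k:
        # D(x)^2 for every point, against the nearest existing centroid.
        d2 = [min(sum((pi - ci) ** 2 for pi, ci in zip(p, c))
                  for c in centroids) for p in points]
        # Weighted draw: walk the cumulative weights until we pass r.
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centroids.append(p)
                break
    return centroids

pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 10)]
print(kmeanspp_init(pts, 2))
```

Because far-away points get larger weights, the second centroid tends to land in the other blob, which is exactly the point of the scheme.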
@raghavsharma8512 · 2 years ago
Superb!
@tanwilliam7351 · 2 years ago
Yes DEEP LEARNING NEXT!
@gummalasaiteja961 · 2 years ago
1.75x speed is the best way to watch: a lot of information covered in less time.
@dataanalyst1012 · 2 years ago
Hello sir. Do you, by any chance, know about the assumptions of k-means cluster analysis in the case of large variance?
@sandipansarkar9211 · 2 years ago
Finished watching.
@ramdasprajapati7884 · 1 year ago
Beautiful, sir.
@pankajgoikar4158 · 1 year ago
You are just amazing Sir. 😊
@LearningWithNisa · 6 months ago
Hello sir, you are doing a great job. Do you have any video on OPTICS clustering?
@dataanalyst1012 · 2 years ago
In k-means clustering, is there an assumption about the number of observations and variables? Would having more variables than observations affect the results of the clustering and make it less accurate?
@md.ishtiakrashid1523 · 6 months ago
The video was very good. But how do you calculate feature importance after k-means clustering?
@rakeshliparefms2 · 1 year ago
Hi Krish sir, it's great learning from you. Can you please make a detailed video on Principal Component Analysis?
@rafibasha4145 · 2 years ago
Please start mock interview sessions as well
@navalsehgal1015 · 9 months ago
Keep it up.
@ridoychandraray2413 · 1 year ago
Krish Naik Sir is Awesome
@sejalkale67 · 2 years ago
A humble request to you @Krish: make the next live streams on machine learning practice and practicals.
@rafibasha4145 · 2 years ago
Please let me know which algorithm works better on which kind of data (linear, non-linear, etc.).
@ishwarsalunke1838 · 1 year ago
Depends on the data points.
@ankan54 · 2 years ago
What types of biases can there be in a dataset? How should one answer this question?
@shubhamgupta09 · 1 year ago
Hi sir, at 1:11:00 I think you mistakenly swapped the terms for high bias & low bias. It should be: high bias -> does not perform well, low bias -> performs well. We want low bias & low variance for a generalized model, as it performs well. Correct me if I am wrong.
@ashutoshmishra6920 · 1 year ago
We know sir said it by mistake; there was no need to comment on it.
@AmirAli-id9rq · 2 years ago
At 1:11:31, I guess it's wrong: if the model captures the relationship (between dependent and independent variables) in the data well, then it has low bias, not high bias. Low bias means the formula the model outputs is flexible enough to capture the relationship; high bias means the accuracy is low and the model is unable to capture the actual data points. Please verify, guys.
@user-yc7zi3gy9v · 1 year ago
Hello sir, take care of your health.
@cloudengineer1348 · 2 years ago
Hi Krish, are you planning to take an ML (Deep Learning) session?
@mdyounusahamed6668 · 1 year ago
Please make some videos on soft clustering algorithms (e.g. Fuzzy C-Means).
@dukesoni5477 · 2 years ago
Found the channel to learn ML, brother! Really enjoyed it, sir.
@zahrasiraj766 · 2 years ago
Sir, can you make an urgent lecture on the cluster labeling problem, i.e. document cluster labeling? And what if we extend this to hierarchical cluster labeling?
@amritakaul87 · 2 years ago
@KrishNaik sir, kindly provide the DBSCAN video link.
@minhaoling3056 · 2 years ago
Will you do a deep learning series?
@rohanwaghulkar3551 · 9 months ago
Sir, please make a video on homogeneity, completeness, V-measure and the Davies-Bouldin index.
@shreyasnatu3599 · 2 years ago
Does anyone know where I can get data science/ML internships? I am in my third year of computer engineering.
@BhavyaArora-co2wd · 2 months ago
Could someone share the GitHub link which is referenced at 51:51?
@RumiAnalytics2024 · 2 years ago
I didn't find the GitHub link, sir.
@rahulaher3874 · 1 year ago
10/10 rating
@deepsarkar2003 · 2 years ago
Where is the GitHub link for this?
@paneercheeseparatha · 1 year ago
K-means clustering is not mathematically clear here. The line you're drawing connecting the two centroids is OK, but how is that perpendicular line drawn, i.e. how is the perpendicular decided? Also, for any new point, will that line be used for classification, or is k-nearest-neighbours used?
@user-wg4ms3xk3p · 8 months ago
How do you find eps and minPts in DBSCAN?
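A common heuristic (suggested in the original DBSCAN paper) is to pick minPts first (a rule of thumb for 2-D data is minPts = 4), then sort every point's distance to its minPts-th nearest neighbour in descending order and look for the "knee" in that curve; the distance at the knee is a reasonable eps. A pure-Python sketch of the k-distance computation (toy data, illustrative only):

```python
from math import dist

def k_distances(points, k):
    """Distance from each point to its k-th nearest neighbour, sorted in
    descending order. Plotted, the 'knee' of this curve is a common
    choice for eps, and k itself doubles as minPts."""
    out = []
    for p in points:
        ds = sorted(dist(p, q) for q in points if q != p)
        out.append(ds[k - 1])
    return sorted(out, reverse=True)

# Four dense points and one obvious outlier: the sorted curve jumps
# from a large value down to about 1, so an eps just above 1 separates
# the noise point from the dense cluster.
pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]
print(k_distances(pts, 2))
```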
@basavarajag1901 · 1 year ago
Can I get the material link?
@darshanvala9224 · 2 years ago
10 out of 10
@harshavardhansvlkkb2290 · 2 years ago
10/10
@user-mo4xq3zp2j · 3 months ago
Sir, can you please provide the GitHub link?
@ALLINONEMEDIA33 · 1 month ago
Can I have the GitHub link here please 😵‍💫
@arpita0608 · 1 year ago
I don't understand: after knowing the clusters, we draw the dendrogram in hierarchical clustering, and you are showing that we need to draw a horizontal line, and the number of vertical lines it intersects is the number of clusters? I mean, we are already drawing the dendrogram based on the clusters. What you said doesn't make sense to me.
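For what it's worth, the dendrogram is not drawn from already-chosen clusters: it records the full merge history (every merge and the distance at which it happened), and the horizontal cut then decides how many clusters you read off, because the vertical lines it crosses are exactly the merges it undoes. A minimal pure-Python single-linkage sketch (toy data, illustrative only):

```python
from math import dist

def single_linkage_merge_heights(points):
    """Agglomerative (single-linkage) clustering: repeatedly merge the
    two closest clusters and record the distance at which they merged.
    These merge distances are the heights of the horizontal bars in a
    dendrogram."""
    clusters = [[p] for p in points]
    heights = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        heights.append(d)
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]  # j > i, so this index is still valid
    return heights

def clusters_at_cut(heights, n_points, h):
    """Cutting the dendrogram at height h undoes every merge above h, so
    the cluster count is: points minus merges performed below the cut."""
    return n_points - sum(1 for d in heights if d <= h)

pts = [(0, 0), (0, 1), (10, 10), (10, 11), (20, 0)]
hs = single_linkage_merge_heights(pts)
print(hs, clusters_at_cut(hs, len(pts), 5.0))
```

Cutting low (e.g. h = 0.5) leaves every point its own cluster; cutting above the tallest merge leaves a single cluster.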
@sandeepagarwal8566 · 2 years ago
Yes, Deep Learning course.
@tom-shellby · 1 year ago
Sir, if low bias / high variance is overfitting, and high bias / high variance is underfitting, then what is high bias / low variance?
@shubhamnaik9555 · 1 year ago
That is practically not possible, because you will not get a model that performs badly on training data but somehow performs well on test data.
@mainakseal5027 · 10 months ago
East or west, Naik sir is the best!
@chitrranshia7765 · 2 years ago
Quick question: does high bias mean better accuracy?
@secretsoul9319 · 2 years ago
No. Low bias & low variance.
@sidindian1982 · 1 year ago
The silhouette code is damn tough to understand, sir 😞
@sridharbajpai420 · 10 months ago
51:27 K-means can't form clusters like this; k-means creates convex clusters in the data.
@ishwarsalunke1838 · 1 year ago
Silhouette score
@anubhabsaha3760 · 1 year ago
Krish Naik sir is the Andrew Ng of India.
@siddhantkohli5063 · 2 years ago
Sir, please make a video on PCA.
@parthshah5482 · 1 year ago
Silhouette score