Saptarsi Goswami
S4DS Ideathon
3:12
3 years ago
GAN using Tensorflow
16:08
3 years ago
GAN - A simple introduction
20:47
3 years ago
Genetic Algorithm Overview
22:08
3 years ago
Data warehouse Overview
45:38
3 years ago
Insertion, Deletion, Update Anomaly
7:18
3 years ago
Smote with Python
16:34
3 years ago
Oracle LiveSQL Overview
6:26
3 years ago
DBMS SQL Tutorial
36:34
3 years ago
Python Lists
26:23
3 years ago
Python List Class 1
10:04
3 years ago
Data Mining Class 2021 01 06 Clustering
1:00:07
3 years ago
K Means and Hclust using R Programming
23:13
3 years ago
A Tutorial on Semi Supervised Learning
13:37
3 years ago
Applying kNN using R
16:56
3 years ago
Lec 22 CNN Architectures 2 AlexNet
17:34
3 years ago
Comments
@muhammadrauhan3727 1 month ago
Great video! Sir, what are your PC or laptop specifications?
@RamandeepSingh_04 3 months ago
Thank you sir, really helpful.
@vijaykumar-od7kx 3 months ago
Thanks Saptarsi, this video helped me understand SMOTE much better. Thanks again.
@nk-dy4hc 4 months ago
Very good explanation. You deserve more subscribers, sir. YouTube Shorts might bring some users. Unfortunately, that's how the algorithm works. All the best.
@sumanbasak3883 4 months ago
What a formidable subject this is!! 🙂🙂
@iitncompany 7 months ago
Wrong at 10:40, the S1 and S2 matrix calculations are both incorrect.
@shatiswaranvigian9349 8 months ago
Sir, any idea how VGG performs on noisy samples?
@malihehheydarpour104 9 months ago
Thanks for your video. Could you please help me find the follow-up video where you discussed the different SMOTE variants?
@aadiljamshed5239 10 months ago
Sir, when applying ANOVA for feature selection, do we need a normality test to demonstrate that our data follows a normal distribution? Or can we apply it to any dataset without checking for normality? Could you please clarify?
@abhay9994 10 months ago
Wow, this video on Linear Discriminant Analysis (LDA) by Instructor Saptarsi Goswami is incredibly informative and well-explained. I truly appreciate how he breaks down the concepts and compares LDA to PCA, highlighting the advantages of LDA. The explanations of the Fisher discriminant ratio, inter-class scatter, within-class scatter, and eigenvalue decomposition have given me a solid understanding of LDA. Thank you, Instructor Saptarsi, for sharing your expertise and helping me improve my knowledge in this area!
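The mechanics mentioned in this comment (within-class scatter, between-class scatter, and eigenvalue decomposition) can be sketched in a few lines of numpy. This is an illustrative toy example with made-up data, not the instructor's exact derivation; the helper name `lda_directions` is hypothetical.

```python
import numpy as np

def lda_directions(X, y):
    """Sketch of LDA: directions maximizing the Fisher ratio come from the
    eigendecomposition of pinv(Sw) @ Sb."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    overall_mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between- (inter-) class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - overall_mean).reshape(-1, 1)
        Sb += len(Xc) * diff @ diff.T
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]  # largest Fisher ratio first
    return eigvals.real[order], eigvecs.real[:, order]

X = [[1, 2], [2, 3], [3, 3], [6, 5], [7, 8], [8, 8]]
y = [0, 0, 0, 1, 1, 1]
vals, vecs = lda_directions(X, y)
# with 2 classes, Sb has rank 1, so only one discriminant direction is meaningful
```

This also answers a recurring question in this thread: the number of LDA components is at most (number of classes - 1) because that is the rank of the between-class scatter matrix.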
@SumitGoswami 1 year ago
Hey @Saptarsi, amazing!! How can I get your contact details?
@tyronefrielinghaus3467 1 year ago
The English is too painful to listen to, I'm afraid.
@memoonashehzadi9660 1 year ago
In SMOTE, on what basis do we identify a point from the minority class in step 1?
@sinan_islam 1 year ago
Did anyone have a case where SMOTE made an ML model's performance even worse?
@UMARFARUK-qu3vc 1 year ago
Thanks sir, it is very useful.
@TheJuniorApollo 1 year ago
Thank you sir
@user-cu6dq2qe6w 1 year ago
Hello sir, how can I solve a problem with three classes using the SMOTE algorithm?
@nintishia 1 year ago
Excellent exposition, thanks.
@komalsangle8259 1 year ago
Are the Mahalanobis and Manhattan distances the same?
@mostafakhazaeipanah1085 1 year ago
Great video. I can't understand how the scores get calculated, can you help me?
@neoblackcyptron 1 year ago
Wow this is one of the most insightful deep explanations on the origins and mechanics of GMM, EM algorithm. Great job.
@SaptarsiGoswami 1 year ago
Thanks a lot
@TobiShoyinka 1 year ago
How can I add random_state to AgglomerativeClustering? My cluster numbers keep changing every time I re-run the model.
@lipe5331 1 year ago
Thank you very much sir.
@SaptarsiGoswami 1 year ago
Thanks for your comments
@bulusuchanakyachandrahas7380
top tier and simple
@SaptarsiGoswami 1 year ago
Thanks a bunch
@theNeuo13 1 year ago
In the SWAP stage, do we drop the medoid that is replaced by a non-medoid, so that it is not SWAP-checked against the other medoids? In other words, does the replaced medoid become a non-medoid (and get SWAP-checked against the other initial medoids), or is it removed from both the medoid and non-medoid lists?
@Nadia-db6nb 1 year ago
Hi. May I know why the number of components for LDA is based on the number of classes when we're trying to reduce the number of features?
@awon3 1 year ago
What are X_train1 and y_train1? You use them but they were never defined.
@abhijit777999 1 year ago
Nice explanation in the ANOVA lectures you have given.
@SaptarsiGoswami 1 year ago
Thank you very much
@thejaswinim.s1691 1 year ago
Great job...
@SaptarsiGoswami 1 year ago
Thank you
@pcooi7811 1 year ago
Thank you sir.
@SaptarsiGoswami 1 year ago
Thanks a lot
@nehabhullar913 2 years ago
Sir, how to use VGG-16 for grayscale images?
@hsumin3302 2 years ago
Thank you for sharing this video. I have learned much from it.
@SaptarsiGoswami 1 year ago
Thank you so much
@bruteforce8744 2 years ago
Excellent video... just a small correction: the mean of the y components of X1 is 3.6, not 3.8.
@solwanmohamed9400 2 years ago
I need the material.
@piyukr 2 years ago
It was a very helpful lecture. Thank you, Sir.
@waqaralam7519 2 years ago
Finally my doubt is resolved, keep going sir!
@SaptarsiGoswami 1 year ago
Thank you so much
@RupshaliDasgupta 2 years ago
Can anyone please provide the link to the dataset?
@chrisleivon8567 2 years ago
Why is there 22 in acc = np.empty(22)? I mean, can we put some lower number instead of 22? I am stuck re-training on labelled and pseudo-labelled data.
@iheleanbeefpatty 2 years ago
Thank you for this video sir.
@sajadms4121 2 years ago
Thank you so much for the video, but I have a question: in ADASYN, do we give the harder instances a higher chance of being sampled to avoid overfitting? If yes, what if the chosen instance is a noisy one?
@musmanmureed3728 2 years ago
Thanks, very informative, but my question is: can we use any CSV file with t-SNE?
@jahanvi9429 2 years ago
Thank you, very helpful.
@SaptarsiGoswami 2 years ago
You are welcome, thanks for your comments
@junaidali5129 2 years ago
You didn't share the notebook for this code, sir.
@startrek3779 2 years ago
Very informative and clear. Thank you for your effort! The following are the steps of the self-learning algorithm:
1. Train a supervised classifier on the labelled data.
2. Use the resulting classifier to make predictions on the unlabelled data.
3. Add the most confident of these predictions to the labelled data set.
4. Re-train the classifier on both the original labelled data and the newly obtained pseudo-labelled data.
5. Repeat steps 2-4 until no unlabelled data remain.
There are two hyperparameters to set: the maximum number of iterations and the number of unlabelled examples to add at each iteration. One issue with self-learning is that if we add many examples with incorrect predictions to the labelled data set, the final classifier may be worse than a classifier trained only on the original labelled data. I hope this answer helps someone interested in semi-supervised learning.
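The steps above can be sketched in a few lines of plain Python. This is a minimal illustration using a 1-nearest-neighbour "classifier" on 1-D points so that re-training is trivial; the data and the names `nn_predict`/`self_train` are made up for the example.

```python
def nn_predict(labeled, x):
    """Predict the label of x from its nearest labelled point; confidence is
    the negative distance (closer means more confident)."""
    best = min(labeled, key=lambda p: abs(p[0] - x))
    return best[1], -abs(best[0] - x)

def self_train(labeled, unlabeled, per_round=1):
    """Steps 2-5: predict on unlabelled points, pseudo-label the most
    confident ones, grow the labelled set, repeat until none remain."""
    labeled = list(labeled)
    unlabeled = list(unlabeled)
    while unlabeled:
        scored = sorted(
            ((x,) + nn_predict(labeled, x) for x in unlabeled),
            key=lambda t: t[2], reverse=True)
        for x, label, _ in scored[:per_round]:
            labeled.append((x, label))   # pseudo-label the confident point
            unlabeled.remove(x)
        # "re-training" is implicit: 1-NN simply uses the grown labelled set
    return labeled

labeled = [(0.0, "a"), (10.0, "b")]
result = self_train(labeled, [1.0, 2.0, 9.0])
```

Note how `per_round` is exactly the second hyperparameter the comment mentions (how many unlabelled examples to add per iteration), and how a wrong pseudo-label early on would propagate to later rounds, which is the failure mode described above.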
@SaptarsiGoswami 2 years ago
Thanks a lot for the addition
@rajatrautela6257 2 years ago
Thanks a lot sir. I have a doubt. I used LDA on the Water Potability dataset from Kaggle. I did all the data cleaning and proceeded with the methods you taught. Since my problem was binary classification, I had only 1 component. So which graph should I plot in such cases to show the classification, and how? Also, the accuracy on my dataset was quite low, around 61%. Any suggestions on why it is so low?
@saswatisahoo6235 2 years ago
Sir, it's fantastic. Thanks a lot.
@SaptarsiGoswami 2 years ago
Thank you very much
@arghyakusumdas54 2 years ago
Thanks sir for the video, which was very easy to understand. However, I was wondering: if the labelled dataset contains samples of only 2 classes (no sample of a possible 3rd class) and the unlabelled data contains samples of that 3rd class, then I think the classifier trained on the labelled data cannot predict them properly, and its confidence for both classes would be low. Can any strategy be adopted in this case?
@AnkitSingh-cg3rp 2 years ago
Thank you very much for such an informative video.
@SaptarsiGoswami 2 years ago
Thanks for your comments
@zaheerabbas4718 2 years ago
The new point I learnt is how to calculate cluster purity with respect to the ground-truth labels. Thanks for sharing the knowledge, and please keep going!
@SaptarsiGoswami 2 years ago
Thanks for your encouragement dear
@KaushikJasced 2 years ago
Thank you sir for giving a wonderful lecture. Can you tell me how I can set the sampling ratio to a value of my choice instead of 1:1 when using SMOTE?
@SaptarsiGoswami 2 years ago
Please go through the parameters: class imblearn.over_sampling.SMOTE(*, sampling_strategy='auto', random_state=None, k_neighbors=5, n_jobs=None). The sampling_strategy parameter will give you the handle.
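To make the idea behind a custom sampling ratio concrete, here is a small numpy sketch of SMOTE-style interpolation with a chosen minority:majority ratio. This is an illustration of the technique, not imblearn's implementation: with imblearn you would simply pass, e.g., SMOTE(sampling_strategy=0.5). The function name `smote_ratio` and the toy data are made up.

```python
import numpy as np

def smote_ratio(minority, n_new, k=2, rng=None):
    """Create n_new synthetic points by interpolating each sampled minority
    point toward one of its k nearest minority neighbours (SMOTE's core step)."""
    rng = np.random.default_rng(rng)
    minority = np.asarray(minority, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        d = np.linalg.norm(minority - minority[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbours)
        gap = rng.random()                    # random position along the segment
        synthetic.append(minority[i] + gap * (minority[j] - minority[i]))
    return np.array(synthetic)

# choose the ratio yourself: make the minority class 50% of the majority size
minority = [[0, 0], [1, 0], [0, 1], [1, 1]]
n_majority = 20
target_ratio = 0.5
n_new = int(target_ratio * n_majority) - len(minority)   # 6 synthetic points
new_points = smote_ratio(minority, n_new, rng=0)
```

The `n_new` arithmetic is exactly what sampling_strategy=0.5 expresses declaratively: the desired minority-to-majority count ratio after resampling.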