
Tutorial 2 - Feature Selection - How To Drop Features Using Pearson Correlation

Krish Naik
1.1M subscribers
154K views

Published: 29 Oct 2024

Comments: 213
@waytolegacy 2 years ago
I think instead of dropping "either of" 2 highly correlated features, we should check how each of them correlates with the target as well, and then drop the one that is less correlated with the target variable. That might increase accuracy a bit, instead of dropping whichever comes first. Again, that's just my thinking.
@djlivestreem4039 2 years ago
good point
@beautyisinmind2163 2 years ago
You can check the importance value of each using RF (random forest) and drop the one with the lower importance value.
@niveditawagh8171 2 years ago
Good point
@niveditawagh8171 2 years ago
Can you please tell me how to drop the less correlated variable with the target variable?
@beautyisinmind2163 2 years ago
@@niveditawagh8171 You only drop when two feature variables are highly correlated with each other; you don't have to drop a feature that is less correlated with the target variable, because a feature with low correlation to the target can still be a good predictor in combination with other features.
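A minimal sketch of the idea discussed in this thread: for each highly correlated pair, keep the feature with the stronger absolute correlation to the target and drop the other. The function name, the 0.9 threshold, and the X_train/y_train names are assumptions for illustration, not the video's exact code.

    import pandas as pd

    def drop_weaker_of_pair(X: pd.DataFrame, y: pd.Series, threshold: float = 0.9):
        """For each pair with |r| above threshold, drop the feature whose
        absolute correlation with the target is weaker (illustrative sketch)."""
        corr = X.corr().abs()
        target_corr = X.apply(lambda col: col.corr(y)).abs()
        to_drop = set()
        cols = corr.columns
        for i in range(len(cols)):
            for j in range(i):
                if corr.iloc[i, j] > threshold:
                    a, b = cols[i], cols[j]
                    if a in to_drop or b in to_drop:
                        continue
                    # keep whichever feature correlates more strongly with the target
                    to_drop.add(a if target_corr[a] < target_corr[b] else b)
        return X.drop(columns=list(to_drop)), to_drop

    X_train_reduced, dropped = drop_weaker_of_pair(X_train, y_train, 0.9)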
@rukmanisaptharishi6638 4 years ago
If you are transporting ice cream in a vehicle, the number of ice cream sticks that reach the destination is inversely proportional to temperature: the higher the temperature, the fewer the sticks. If you want to effectively model the temperature of the vehicle's cooler and make it optimal, you need to consider these negatively correlated features, outside air temperature and number of ice cream sticks at the destination.
@andyn6053 1 year ago
In which order should you do the feature selection steps?
0. Clean the dataset, get rid of NaN and junk values. Check formats/datatypes in the test set etc.
1. Use the z-score method to eliminate outliers.
2. Normalize the train_X data.
3. Check correlation between x_train variables and y_train. Drop variables that have a low correlation with the target variable.
4. Use Pearson's correlation test to drop highly correlated variables from x_test.
5. Use the variance threshold method to drop x_train variables with low variance. All variables that have been removed from the x_train data should be removed from x_test as well.
6. Fit x_train and y_train to a classification model.
7. Predict y(x_test).
8. Compare the predicted y(x_test) output with y_test to calculate accuracy.
9. Try different classification models and see which one performs best (has the highest accuracy).
Is this the right order? Have I missed something?
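A rough sketch of that flow with scikit-learn, mainly to illustrate the "fit on the training set, then apply the same drops/transforms to the test set" point. The correlation() helper is the one from the video; the 0.85 threshold, the model choice, and the X/y variable names are assumptions.

    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_selection import VarianceThreshold
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # correlation filter learned on the training set only
    corr_features = correlation(X_train, 0.85)
    X_train = X_train.drop(columns=list(corr_features))
    X_test = X_test.drop(columns=list(corr_features))   # mirror the drops on the test set

    # variance threshold fitted on train, applied to both
    vt = VarianceThreshold(threshold=0.0).fit(X_train)
    X_train, X_test = vt.transform(X_train), vt.transform(X_test)

    # scaling fitted on train only
    scaler = StandardScaler().fit(X_train)
    X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(accuracy_score(y_test, model.predict(X_test)))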
@prakash564 4 years ago
Sir your channel is a perfect combination of sentdex and statquest. You are doing great work 🙌more power to you!!
@ashishkulkarni8140 4 years ago
Sir, could you please upload more videos on feature selection to this playlist? It is very amazing. I followed all the videos from the feature engineering playlist. You are doing great work. Thank you.🙏🏻
@alphoncemutabuzi6949 3 years ago
I think the abs is important since it's like having two rows, one being the opposite of the other.
@MrKaviraj75 3 years ago
Yes, I think so too. If changes to one feature affect another feature, they are dependent; in other words, they are correlated.
@rhevathivijay2913 3 years ago
Being in the teaching profession, I can assure you this is the best explanation of Pearson correlation. Please give it more likes.
@KnowledgeAmplifier1 3 years ago
I want to point out a very important concept missing from this video's discussion: if 2 input features are highly correlated, it's not the case that I can drop either of the 2. I have to check which of the 2 has the weaker correlation with the output variable, and that is the one to drop.
@siddharthdedhia11 3 years ago
what do you mean by weaker? do you mean the most negative?
@KnowledgeAmplifier1 3 years ago
@@siddharthdedhia11 Here, weaker means a lower correlation with the output feature.
@siddharthdedhia11 3 years ago
@@KnowledgeAmplifier1 So for example, between -0.005 and -0.5, -0.005 is the one with the lower correlation, right?
@KnowledgeAmplifier1 3 years ago
@@siddharthdedhia11 Yes, correct: a correlation value towards 0 is considered weak, and towards 1 or -1 means a strong relationship :-)
@amankothari5508 3 years ago
@jayesh naidu
@shubhambhardwaj3643 4 years ago
No words are sufficient to thank you for your work, sir ....🙏🙏
@parms1191 4 years ago
I write the threshold code simply like [df.corr()>0.7 OR df.corr()
@codertypist 3 years ago
Let's say variables x, y and z are all strongly correlated to each other. You would only need to use one of them as a feature. By saying [df.corr()>0.7 or df.corr()
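A common loop-free way to express the same idea (a sketch; it assumes a feature DataFrame df, and the 0.7 cutoff is just an example). Taking the absolute value covers both the > 0.7 and < -0.7 cases, and looking only at the upper triangle means one column from each correlated group is kept:

    import numpy as np

    corr = df.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > 0.7).any()]
    df_reduced = df.drop(columns=to_drop)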
@nurnasuhamohddaud728 2 years ago
Very comprehensive explanation for someone from a non-AI background. Thanks, sir, keep up the good work!
@gurdeepsinghbhatia2875 4 years ago
I think it all depends on the domain whether to involve the negative correlation or not, or we can train two different models and compare their scores. Thanks, sir.
@sukanyabag6134 4 years ago
Sir, the videos you uploaded on feature selection helped a lot! Please upload the rest of the tutorials and methods too! Eagerly waiting for them!
@suhailsnmsnm5397 7 months ago
amazing teaching skills you have bhaai ... THNX
@neelammishra5622 2 years ago
Your knowledge is really invaluable. Thanks
@suhailabessa9901 2 years ago
Thank you sooo much, perfect explanation :) Good luck with your channel, it is recommended.
@abhishekd1012 2 years ago
In this video it's said that negatively correlated features are both important. Let's take an example: when we have both percentage and rank in a dataset, for 100% we have rank 1, and for 60% let's say rank 45 (last). Both convey the same information in the dataset. So what I think is we can remove one feature among those 2, otherwise we will be giving double weightage to that particular feature. Hope someone can correct this if I am wrong.
@Moiz_tennis 2 years ago
I have a doubt. Suppose A and B have a correlation greater than the threshold and the loop includes column A from the pair. Further, B and C are highly correlated (although C is not highly correlated with A) and the loop includes B in the list. Now if we drop A and B, wouldn't that affect the model, as both A and B will be dropped?
@abinsharaf8305 3 years ago
Since we are giving only one positive value for the threshold, the abs() in the code allows the check against the threshold for both negative and positive values, so I feel it's better if it stays.
@perumalelancgoan9839 2 years ago
Please clarify the below: if any independent variables are highly correlated, we shouldn't remove them, right? Because they give a very positive outcome.
@elvykamunyokomanunebo1441 1 year ago
Thanks Krish, you've earned a rocket point from me :) It would have been nice if the function also printed which feature each one was strongly correlated with, because from the code you dropped all the features that met the threshold; not one was kept.
@yashkhant5874 4 years ago
GREAT CONTRIBUTION SIR.... THIS CHANNEL SHOULD HAVE 20M SUBSCRIBERS🤘🤘
@SuperNayaab 3 years ago
Watching this video from Boston (BU student).
@JenryLuis 1 year ago
Hi friend, I think the correlation function is removing more than expected, because while the for loops iterate it does not check whether, for a value > threshold, the column or index was already removed before. I corrected the function, and in this case the features removed are: {'DIS', 'NOX', 'TAX'}. I also tested by creating the correlation matrix again and verified that there are no values > threshold. Please can you check it?

    def correlation(dataset, threshold):
        col_corr = set()
        corr_matrix = dataset.corr()
        for i in range(len(corr_matrix.columns)):
            for j in range(i):
                if abs(corr_matrix.iloc[i, j]) > threshold:
                    if (corr_matrix.columns[i] not in col_corr) and (corr_matrix.index.tolist()[j] not in col_corr):
                        colname = corr_matrix.columns[i]
                        col_corr.add(colname)
        return col_corr
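If that corrected version is used, the call site presumably stays the same as in the video; a sketch (the 0.7 threshold is just an example):

    corr_features = correlation(X_train, 0.7)             # learn the drops from the training data only
    X_train = X_train.drop(columns=list(corr_features))
    X_test = X_test.drop(columns=list(corr_features))      # mirror the same drops on the test set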
@pratikjadhav1242 3 years ago
We check the correlation between inputs and the output, so why do you drop the output column and then check correlation? We use VIF (variance inflation factor) to check the relationship between inputs, and the preferred threshold value is 4.
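For reference, a minimal VIF check with statsmodels looks roughly like this (a sketch; the cutoff of 4 mentioned above is one convention, 5 or 10 are also commonly used, and adding a constant column first is another common variant):

    import pandas as pd
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    def vif_table(X: pd.DataFrame) -> pd.Series:
        """VIF for each column of a numeric DataFrame."""
        values = X.values
        return pd.Series(
            [variance_inflation_factor(values, i) for i in range(values.shape[1])],
            index=X.columns,
            name="VIF",
        )

    # features with VIF above the chosen cutoff are candidates to drop, one at a time
    print(vif_table(X_train).sort_values(ascending=False))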
@nmuralikrishna4599 2 years ago
General question - what if we drop a few of the important features from the data and train again? Will the accuracy drop? Or the precision?
@hibaabdalghafgar 1 year ago
Again, I wish you would explain how to handle the test set... but the explanation is excellent, I am really grateful.
@JithendraKumarumadisingu 3 years ago
Great tutorial, it helps a lot, thanks @Krish Sir
@josephmart7528 2 years ago
The abs takes care of both positive and negative numbers. If not specified, the function will only take care of positively correlated features.
@dinushachathuranga7657 8 months ago
Thanks a lot for the very clear explanation.❤
@СалаватФайзуллин-щ3д
Should negative values of correlation such as -0.95 be deleted, or are they good for training our model and should they stay in the data frame?
@suneel8480 4 years ago
Sir, can you make a video on how to select features for clustering?
@youcefyahiaoui1465 5 months ago
Great tutorial, but I think you're mistaken about the abs(). You're actually considering both with abs(). If you remove abs() and you keep the > inequality then a 0.95 would be > Thresh=0.9, but -0.99 would not satisfy this condition! If you want to remove abs(), then you need to test 2 conditions, like if corr_matrix.iloc[i,j] > +1*thesh (assuming thres is always +ve) and corr_matrix.iloc[i,j]
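For what it's worth, the abs() form and the two-condition form select the same pairs (the two conditions combine with "or"); a tiny check, with an arbitrary threshold:

    thresh = 0.9
    for r in (0.95, -0.99, 0.5):
        with_abs = abs(r) > thresh
        two_conditions = (r > thresh) or (r < -thresh)
        print(r, with_abs, two_conditions)   # the two flags always agree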
@RandevMars4 3 years ago
Well explained. Really great work sir. Thank you very much
@yasharthsingh805 4 years ago
Sir, can you please tell me which website I should refer to if I want to start reading white papers.... Please, please do reply.... I follow all your videos!!
@ActionBackers 3 years ago
This was incredibly helpful; thank you for the great content!
@gabrielegbenya7479 2 years ago
great video. very informative and educative. Thank you
@siddhantpathak6289 3 years ago
Hi Krish, I checked it somewhere and I think that if the dataset has perfectly positively or negatively correlated attributes, then in either case there is a high chance that the performance of the model will be impacted by multicollinearity.
@niveditawagh8171 2 years ago
Nice explanation.
@nkechiesomonu8764 2 years ago
Thanks sir for the good job you have been doing. God bless you. Please sir, my question is: can we use correlation on image data? Thanks
@nahidzeinali1991 7 months ago
Thanks so much! very useful. you are so good
@amarkumar-ox7gj 4 years ago
If the idea is to remove highly correlated features, then both highly positive and highly negative correlations should be considered!!
@waatchit 3 years ago
Thank you for such a nice explanation. Does having 'abs' preserve the negative correlation ??
@pankajkumarbarman765 2 years ago
Very helpful . Thank you sir.
@laxmanbisht2638 3 years ago
Hi, thanks for the lecture. What if we have a dataset in which both categorical and numeric features are present? Will Pearson's correlation be applicable?
@Jnalytics 1 year ago
Pearson's correlation only works with numeric features. However, if you want to explore the categorical features, you can use Pearson's chi-square test. You can use SelectKBest from scikit-learn with chi2. Hope it helps!
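A small sketch of that suggestion (chi2 requires non-negative features, e.g. one-hot or count encoded; k=10 and the variable names are assumptions):

    from sklearn.feature_selection import SelectKBest, chi2

    selector = SelectKBest(score_func=chi2, k=10).fit(X_train, y_train)
    X_train_sel = selector.transform(X_train)
    X_test_sel = selector.transform(X_test)
    print(dict(zip(X_train.columns, selector.scores_)))   # chi-square score per feature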
@salihsarii 11 months ago
Thanks Krish 😊
@ireneashamoses4209 4 years ago
Great video!! Thank you!👍👍💖
@kalvinwei19 3 years ago
Thank you man, good for my assignment
@antoniodefalco6179 3 years ago
Thank you, so useful, good teacher.
@kjrimer 2 years ago
Hello, nice video. How do we do feature selection if we have more than one target variable, i.e. in the case of a multi-output regression problem? Do we have to perform the Pearson correlation individually on each target variable, or is there another convenient way to solve the problem?
@shivarajnavalba5042 3 years ago
Thank you Krish,
@tigjuli 3 years ago
Nice! please upload more on this topic!! thank you!
@hirakaimkhani3338 2 years ago
wonderful tutorial sir!!
@levon9 3 years ago
Two quick questions: (1) Why not remove redundant features, i.e. highly correlated variables, from X before splitting it into training and test? What would be wrong with this approach? (2) If one feature is correlated with a value of 1 and another variable with a value of -1 with regard to a given feature, are these also considered redundant?
@doggydoggy578 2 years ago
Hello, can I ask a question? Is Pearson Correlation the same as Correlation-based Feature Selection?
@sanketargade3685 1 year ago
Why are we dropping highly correlated features after splitting into train and test? Wouldn't it be easier to drop the features from the original dataset and then simply split it?❓😕🤔
@deepanknautiyal5725 4 years ago
Hi Krish, please make a video on complete logistic regression for interview preparation.
@Learn-Islam-in-Telugu 3 years ago
The function used in the example will not deliver high correlation with the dependent variable, because at the end you dropped the columns without checking the correlation with the dependent variable.
@SachinModi9 2 years ago
If absolute is not used, then the threshold cannot be 0.85: if any features are highly negatively correlated, like -0.85, they still won't qualify for the drop. Hence the absolute value is necessary.
@drshahidqamar 2 years ago
LOL, you are just amazing, Boss
@ankitmahajan3674 3 years ago
Hi Krish, while removing the correlated features we haven't checked whether the independent variable is correlated with the dependent variable. As you said at the start, we should not remove features that are highly correlated with the dependent variable, so while generating the heatmap should we include the dependent variable also? Let me know if my understanding is correct.
@prateekkhanna4590 3 years ago
Hi Ankit, if we include the dependent variable in this feature selection process, the accuracy of our model might get compromised. Also, as you can see in the video, if 2 features are highly correlated we only remove 1 of them. So if the remaining feature has a good correlation with the dependent variable, which we don't know yet, it is still in the dataset (as we have dropped only one feature out of those 2).
@amitmodi7882 3 years ago
Wonderful explanation. Krish, as mentioned in the video, you said you would upload 5-6 videos on feature selection. Can you please share the links to the rest of them?
@World_Exploror 1 year ago
Can we drop features by comparing the correlation of the dependent variable with the independent variables, taking some threshold....?
@phyuphyuthwe670 3 years ago
Dear teacher, may I ask a question? In my case, I want to predict sales of 4 products with weather forecast information, season, and public holidays one week ahead. So, do I need to organize weekly-based data? When we use SPSS, we need to organize weekly data; how about machine learning? I feel confused about that. In my understanding, ML will train the data with respect to weather information, so we don't need to organize weekly data because we don't use time series data. Is that correct? Please kindly give me a comment.
@erneelgupta 2 years ago
What is the importance of random_state in train_test_split? How do the values of random_state (0, 42, 100, etc.) affect the estimation?
@rahuldevnath14792 3 years ago
Krish, can we not use VIF for collinearity?
@marcastro8052 2 years ago
Thanks, Sir.
@aayushdadhich4840 4 years ago
Should I practice by writing my own full code, including the hypothesis functions, cost functions, and gradient descent, or fully use sklearn?
@YS-nc4xu 4 years ago
If you're a student and have time to explore, please go ahead and implement it from scratch. It'll really help you to not only understand the basic working but also the software development aspect of creating any model (refer sklearn documentation and source code) and get to know more about industry level coding practices.
@raghavkhandelwal1094 3 years ago
waiting for more videos in the playlist
@StanleySI 2 years ago
Hi sir, there's an obvious flaw in this approach: you can't drop all correlated features, only some of them. E.g. perimeter_mean & area_se are highly correlated (0.986507), and they both appear in your corr_features. However, you can't drop both of them, because from the pairplot you can see perimeter_mean has a clear impact on the test result.
@mariatachi8398 4 months ago
Amazing content!~
@aritratalapatra8452 3 years ago
If I have 3 correlated columns, I should drop 2 out of 3, right? Why do you drop all correlated features from the training and testing set?
@killerdrama5521 2 years ago
What if some features are numerical and some features are categorical, against a categorical output? Which feature selection method will be helpful?
@thecitizen9747 1 year ago
You are doing a great job, but can you please do a similar series on categorical features in a regression problem?
@TejusVignesh 2 years ago
You are a legend!!🤘🤘
@jannatunferdous103 1 year ago
Sir, regarding what you've shown at the end of this video, in that big data project, after deleting those 193 features, how can I deploy the model? Please share a video (or a link if you have one in your playlist) on the deployment phase after deleting features. Thanks. ❤
@bishwasarkarbishwaranjansarkar 4 years ago
Hello Krishna, thanks for your video, but along with it please explain the real-life use as well. Where can we use this in real life?
@Egor-sm4bl 3 years ago
Perfect defence on 3rd place!
@fongthomas3246 1 month ago
Pearson correlation coefficient only measures the linear relationship between features. This approach may not be effective if there are non-linear relationships between features.
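One partial mitigation is to switch the correlation method: rank-based correlations at least capture monotonic non-linear relationships. A sketch, assuming a feature DataFrame df:

    spearman = df.corr(method="spearman")   # rank-based, handles monotonic non-linear relations
    kendall = df.corr(method="kendall")     # another rank-based option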
@arunkrishna1036 2 years ago
Hi Krish.. how about using VIF to find the correlated features?
@venkatk1591 3 years ago
Do we need to use the entire dataset for correlation testing? Are we not missing something by considering the train set only?
@omi_naik 3 years ago
Great explanation :)
@World_Exploror 1 year ago
Multicollinearity has been checked, but what about the correlation of the dependent variable vs. the independent variables?
@HumaidAhmadKidwai17 4 months ago
How do we check the correlation between a numerical column (input) and a categorical output (in the form of 0s and 1s)?
@asha4545 2 years ago
Hello Sir, my dataset contains 17000 features. When I execute corr() it takes more than 5 minutes, and generating the heatmap raises a memory-related error. Can you help solve the issue?
@ajaykushwaha-je6mw 2 years ago
Hi everyone, I need one piece of help. This technique selects numerical features only. Suppose we have done one-hot encoding on categorical data and converted it into numerical form; can we then apply this technique to those features as well (the entire dataset with numerical columns and categorical columns converted to numerical with some encoding technique)? Kindly help me understand.
@rafibasha4145 2 years ago
Hi Krish, how do we check this in the case of categorical variables?
@Eric-bq1jo 2 years ago
Is there any way to apply this approach to a classification problem where the target variable is 1 or 0?
@marijatosic217 3 years ago
What do you think about feature reduction using PCA: looking for a correlation between each feature and the principal components, and then using those that have the most correlations greater than 50% (or any other cutoff)?
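One way to look at the feature-to-component correlations mentioned here is through the loadings; a sketch, assuming standardized features (where loadings approximate those correlations) and an arbitrary choice of 5 components and a 0.5 cutoff:

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    X_std = StandardScaler().fit_transform(X)
    pca = PCA(n_components=5).fit(X_std)

    # loadings: correlation of each original feature with each principal component
    loadings = pd.DataFrame(
        pca.components_.T * np.sqrt(pca.explained_variance_),
        index=X.columns,
        columns=[f"PC{i+1}" for i in range(pca.n_components_)],
    )
    print((loadings.abs() > 0.5).sum(axis=1))   # how many components each feature loads on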
@megalaramu 4 years ago
Hi Krish, in multicollinearity concepts we have both the correlation matrix as well as VIF to remove collinearity. Which method is best, or does that depend upon the data?
@krishnaik06 4 years ago
Both are good...u can use any of them
@megalaramu 4 years ago
@@krishnaik06 I worked on a dataset which had highly correlated features, and both these methods gave me different results. Hence I was confused about which method to use; that's why this question. Thanks.
@krishnaik06 4 years ago
But I have found VIF was much better.
@PraveenKumar-pd9sx 4 years ago
Hi megala.. What is VIF? Can you pls tell me?
@arjundev4908 4 years ago
@@PraveenKumar-pd9sx In short, VIF is the Variance Inflation Factor, which also helps in finding multicollinearity between independent variables.
@conceptsamplified 3 years ago
Of the highly correlated columns, should we not keep one of the columns in our X_train dataset?
@rmrz2225 1 year ago
Hi, sorry for my question, but why is he dropping the features that are most correlated? Shouldn't he keep those features and drop the less correlated ones?
@__SHRUTHISRINIVASAN 4 months ago
Same doubt here
@clintpaul6653 2 years ago
Bro, I get an error after the line.. Corr_features = correlation(X_train, 0.8) ### the error is 'dataframe' object has no attribute
@prabhusantoshpanda5259 4 years ago
While dropping the columns using the list of all correlated columns, aren't we deleting all of them and not even retaining the ones we actually want? For example, suppose we get 3 correlated columns in the list and then apply: correlated_columns = [f1, f2, f3] (corr > 0.8), x_train = x_train.drop(correlated_columns, axis=1). Then all 3 get dropped, whereas we want to drop only 2 and retain one?? Please clarify.
@YS-nc4xu 4 years ago
That's a great question! I believe we would want to retain one and drop the rest. Dropping all would be a loss of information imo. I would also suggest adding the i and j columns to the 'set' as well. This would help get pairs of correlated columns rather than just a list. For example, replacing col_corr.add(colname) with col_corr.add((corr_matrix.columns[i], corr_matrix.columns[j])) will give us the pairs, and then we can decide which one to keep. Again this is just my opinion, I might be wrong. Happy learning!
@prabhusantoshpanda5259 4 years ago
@@YS-nc4xu Actually this approach of getting correlated pairs is correct, but there is one flaw. I myself have faced this flaw and it's quite problematic when tackling a dataset with more than 500 feature columns. What happens is we get too many combinations of correlated pairs, and they are double in number because while iterating we will get both orderings, e.g. a correlated list like [f4,f9],[f9,f4],[f5,f9],[f8,f7],[f4,f8],[f8,f9],[f7,f8],[f9,f5]. Check the kaggle.com/MoA prediction competition and run Pearson's corr on the dataset; you will be shocked. Also, going through the whole list and finding the correlated columns for each feature while handling duplicate pairs is going to be very difficult if done manually. I am in the process of trying to figure out a solution to this and hopefully I will. Peace.
@YS-nc4xu 4 years ago
@@prabhusantoshpanda5259 Sure, my response was just for your point of dropping all correlated cols in the given problem. Additionally, the for loops shown in the video take care of the repetition mentioned by you: 'for j in range(i):' considers only the lower triangular matrix, thus eliminating the repetitions. Furthermore, for data with more than 500 cols, obviously one wouldn't want to go with Pearson's corr. I believe this video was meant to give a basic use case of corr on simple data and not on high-dimensional data. In my opinion, PCA / SVD might help for your problem. Peace out!
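A sketch of the pair-collecting variant discussed in this thread, using the same lower-triangle loop so each pair appears only once (the function and variable names are assumptions):

    def correlated_pairs(dataset, threshold):
        """Return (feature_i, feature_j, r) for every pair with |r| above threshold."""
        corr_matrix = dataset.corr()
        pairs = []
        for i in range(len(corr_matrix.columns)):
            for j in range(i):                      # lower triangle only -> no (A, B)/(B, A) duplicates
                r = corr_matrix.iloc[i, j]
                if abs(r) > threshold:
                    pairs.append((corr_matrix.columns[i], corr_matrix.columns[j], r))
        return pairs

    # inspect the pairs, then decide per pair which feature to keep
    for a, b, r in correlated_pairs(X_train, 0.8):
        print(f"{a} ~ {b}: r = {r:.2f}")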
@piyushdandagawhal8843 4 years ago
Instead of doing the X_train, X_test split, what if we find the correlation on the whole data, then compare each correlated column's correlation with the dependent feature, and drop only those features among the correlated columns which are less correlated? ...does my question make sense? If it does, would it affect the model?
@PraveenKumar-pd9sx 4 years ago
Same doubt
@YS-nc4xu 4 years ago
I believe those should be two separate questions. Regarding the split, it is necessary to split before getting correlation to understand its effect on the test data. If you do not split, then when testing, you're already assuming the correlation to be present in the test data and thus overfitting. Remember, the actual "test" data will always be unknown to us, and the split helps us validate the model and generalize it for the future unknown data. For the second question: Yes, that makes sense to me. After getting the "multi-correlated" columns, we can calc the correlation of each with the target, and drop the ones with low absolute correlation.
@PraveenKumar-pd9sx 4 years ago
@@YS-nc4xu Why should we split before the correlation check?
@piyushdandagawhal8843 4 years ago
@@YS-nc4xu YES!! I get it now, thank you for sorting out the issue!
@piyushdandagawhal8843 4 years ago
@@PraveenKumar-pd9sx If we check the correlation on the whole data rather than after splitting (X_train, X_test), there is a chance that the correlation of the whole data will be slightly different than the correlation if we had split. This might give us a better result on the validation set (X_test) but would not perform as well on the actual test data when we deploy it in the real world. This is my understanding from @Y S's comment.
@athulsuresh736 2 years ago
You shouldn't remove abs; you should consider the negative correlation as well. Also, you have to check each feature's correlation with the target to decide which feature to remove. Please pull down this video or correct it, or else people will learn the wrong things.
@lwasinamdilli 1 year ago
How do you handle correlation for categorical variables?
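For two categorical variables, one common option is Cramér's V built on a chi-square test; a rough sketch (assumes two pandas Series of categories, no bias correction applied):

    import numpy as np
    import pandas as pd
    from scipy.stats import chi2_contingency

    def cramers_v(x: pd.Series, y: pd.Series) -> float:
        """Cramér's V association between two categorical series (0 = none, 1 = perfect)."""
        table = pd.crosstab(x, y)
        chi2 = chi2_contingency(table)[0]
        n = table.values.sum()
        r, c = table.shape
        return float(np.sqrt(chi2 / (n * (min(r, c) - 1))))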
@antonyjoy5494 3 years ago
Sir, I have a query regarding this... features which are highly correlated give the same information, right, i.e. they are duplicate features. Where in this code does it remove only the duplicate features?? From the code, I feel like it is removing all the features which show a value above the threshold.
@ruturajjadhav8905 3 years ago
"I feel like this is removing all the features which show a value above the threshold" - I had the same doubt. Analyze the code: col_corr.add(colname) --> this part takes care of it.
@souvikghosh6509 3 years ago
Sir, kindly make a video on embedded methods of feature selection..
@meshmeso 6 months ago
These are for numeric features; what about correlation between categorical features?