
Machine Learning Tutorial Python - 9 Decision Tree 

codebasics
1.1M subscribers
526K views

Published: 29 Sep 2024

Comments: 1K
@codebasics
@codebasics 2 года назад
Check out our premium machine learning course with 2 Industry projects: codebasics.io/courses/machine-learning-for-data-science-beginners-to-advanced
@honeymilongton8401
@honeymilongton8401 2 года назад
It would be better for us if you could please provide the slides as well, sir.
@adiflorense1477
@adiflorense1477 2 года назад
Cool
@kisholoymukherjee
@kisholoymukherjee Год назад
Hi Dhaval sir, please note I tried to register in Python course. But the link is not working on the site
@Swormy097
@Swormy097 Год назад
@codebasics Hello Sir, Regarding the encoding approach (label encoding) used in the video, I read on the sklearn documentation that it should be used only on the target variable (output "y") and not the input feature ("x"). The documentation stated that for input feature one should use either onehotencoder, ordinalencoder, or dummy variable encoding. Also, I was expecting that you use onehotencoder(OHE) since the input features (company, job and degree) are nominal and not ordinal variables. Is it best practice to use OHE for nominal variables or it just doesn't matter? Please could you clarify for me??? Thank you.
@franky0226
@franky0226 4 года назад
Got an accuracy of 78.92 Thanks for the Lovely tutorial !
@kartikeyamishra4641
@kartikeyamishra4641 5 лет назад
This is by far the most straight forward and amazing video on decision trees I have come across! Keep making more videos Sir! I am totally hooked to your channel :) :)
@codebasics
@codebasics 5 лет назад
Thanks kartikeya for your valuable feedback. 👍
@MunnaSingh-dx3or
@MunnaSingh-dx3or 4 года назад
Simple explanation, thank you! The exercise you have given got a score of 98.18%... and it's predicting pretty well 👍 Thank you once again
@niyazahmad9133
@niyazahmad9133 4 года назад
Best_params_ plz
@洮云陇草
@洮云陇草 4 года назад
This is unbelievable. I saw someone use Random Forest, SVM, Gradient Boosting etc. The best score on testing data was 84%. With a simple Decision Tree, the best score would be around 82%, I think.
@AnilAnvesh
@AnilAnvesh 2 года назад
Thanks for this video. I have used train and test csv files of titanic. Cleaned both datasets and implemented Decision Tree Classifier and got a test score of 0.74 ❤️
@codebasics
@codebasics 2 года назад
That’s the way to go anil, good job working on that exercise
@ritamsadhu2873
@ritamsadhu2873 Год назад
Score is 97.75% for exercise dataset. Filled the null values in Age column with median value
@RohithS-ig4hl
@RohithS-ig4hl Год назад
I did the same thing, but i still get accuracy around 79%. Any suggestions?
@istiakahmed3033
@istiakahmed3033 11 месяцев назад
@@RohithS-ig4hl Hey, I got 80% accuracy. I also got low accuracy like yours.
@mukulborole
@mukulborole 2 года назад
Thank you for this awesome tutorial, Sir. I got an accuracy of 97.98%. I replaced the missing age values with the mean of the whole age column.
@surajraika9245
@surajraika9245 2 года назад
Where did you get that dataset?
@mukulborole
@mukulborole 2 года назад
@@surajraika9245 You can find the dataset on his github repo
@surajraika9245
@surajraika9245 2 года назад
@@mukulborole thanks
@ashishbirajdar5
@ashishbirajdar5 2 года назад
Amazing video, thank you so much! I have a question.. In the dummy variable video, you had mentioned that when we do One Hot Encoding we should create separate columns, i.e. if Monroe Township = 1, Robbinsville = 2 and West Windsor = 3, we want to avoid confusing the model, which may assume Monroe Township < Robbinsville < West Windsor.. But in this video, you're assigning company names Google = 0, ABC Pharma = 1 and Facebook = 2. Is it the right thing to do?
@codebasics
@codebasics 2 года назад
Decision tree is one of those algorithms where label encoding works OK in some cases like ours, and you can save some memory by not using OHE. Check this for some insights: datascience.stackexchange.com/questions/9443/when-to-use-one-hot-encoding-vs-labelencoder-vs-dictvectorizor Having said that, since the number of categories is small we can use OHE, as there is no concern with sparsity. If I had to re-record this session, I'd probably use OHE.
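A minimal sketch of the two encodings discussed in this reply. The DataFrame below is a made-up stand-in for the video's salaries data; the column names and values are assumptions, not the original file.

    import pandas as pd
    from sklearn.preprocessing import LabelEncoder
    from sklearn.tree import DecisionTreeClassifier

    # Illustrative stand-in for the salaries dataset
    df = pd.DataFrame({
        "company": ["google", "google", "abc pharma", "facebook", "facebook", "abc pharma"],
        "job": ["sales executive", "computer programmer", "business manager",
                "sales executive", "computer programmer", "computer programmer"],
        "degree": ["bachelors", "masters", "bachelors", "masters", "bachelors", "masters"],
        "salary_more_than_100k": [0, 1, 0, 1, 1, 0],
    })
    inputs = df.drop("salary_more_than_100k", axis="columns")
    target = df["salary_more_than_100k"]

    # Option 1: label encoding, as in the video -- fine for tree-based models
    inputs_label = inputs.copy()
    for col in ["company", "job", "degree"]:
        inputs_label[col] = LabelEncoder().fit_transform(inputs_label[col])

    # Option 2: one-hot encoding -- avoids implying google < abc pharma < facebook
    inputs_ohe = pd.get_dummies(inputs)

    for X in (inputs_label, inputs_ohe):
        model = DecisionTreeClassifier(random_state=0)
        model.fit(X, target)
        print(model.score(X, target))  # training accuracy; both encodings work here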
@vaibhavdhand1140
@vaibhavdhand1140 4 года назад
Thank you, sir, the exercise that you gave at the end of your lectures help us to experiment and get an in-depth knowledge of the algorithm. accuracy achieved =0.87
@codebasics
@codebasics 4 года назад
Perfect. thats a pretty good score. Good job.
@jaihind5092
@jaihind5092 4 года назад
@@codebasics sir, i got 97.7% accuracy
@HarshalDev
@HarshalDev 4 года назад
@@jaihind5092 How did you achieve a score of 97.7%? I only achieved 82 :( even after removing all NaN values from Age and converting Age and Fare to int, my score went from 74 to 80 and finally flattened at 82! Help me improve.
@HarshalDev
@HarshalDev 4 года назад
How did you achieve a score of 87%? I only achieved 82 :( even after removing all NaN values from Age and converting Age and Fare to int, my score went from 74 to 80 and finally flattened at 82! Help me improve. Thanks
@zainhana2968
@zainhana2968 2 года назад
I'm starting to learn about machine learning and your videos help me so much with understanding it.
@moushmi_nishiganddha
@moushmi_nishiganddha 2 года назад
Thank you for this ML playlist... your way of teaching is the best; anybody can understand if they watch the videos in sequence. My model score is 1. I replaced all the NaN values in Age with the mean value of Age by Pclass.
@abhishekgoyal7580
@abhishekgoyal7580 2 года назад
You didn't split the dataset into training and test, and maybe that's why it's 1, because your test set is the same as your training data. Split the dataset and check the score.
@moushmi_nishiganddha
@moushmi_nishiganddha 2 года назад
@@abhishekgoyal7580 I split the data, but I used x_train, y_train as the parameters in the score method. Now my score shows 0.79. Thanks for correcting me.
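A minimal sketch of the evaluation pattern discussed in this thread: always score on the held-out split. The data here is a synthetic stand-in for the prepared Titanic features, not the real exercise file.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Toy stand-in for the prepared features/target
    rng = np.random.RandomState(0)
    X = rng.rand(200, 3)
    y = (X[:, 0] + X[:, 1] > 1).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=10)
    model = DecisionTreeClassifier().fit(X_train, y_train)

    print("train score:", model.score(X_train, y_train))  # often close to 1.0 for a deep tree
    print("test score:", model.score(X_test, y_test))     # the honest estimate of performance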
@abhishekgoyal7580
@abhishekgoyal7580 2 года назад
@@moushmi_nishiganddha just saw your profile. You’re from houston too?
@moushmi_nishiganddha
@moushmi_nishiganddha 2 года назад
@@abhishekgoyal7580 yes
@piyushjha8888
@piyushjha8888 4 года назад
Sir, accuracy for the given exercise = 98.20 percent. Thanks once again for the great video.
@codebasics
@codebasics 4 года назад
Great that's an excellent score Piyush. Good job :)
@piyushjha8888
@piyushjha8888 4 года назад
@@codebasics Thanks sir. Your ML series is a great source to learn from. I do all your exercises.
@kabirnarayanjha
@kabirnarayanjha 5 лет назад
Wohooooo once again new video thank you so much sir
@Egitam-ow7ih
@Egitam-ow7ih 5 месяцев назад
A question I have is: aren't we supposed to do OneHotEncoding since the variables are not ordinal? Or do decision trees take care of it, since they don't consider the magnitude of features but rather the values of the features to determine the rules?
@dmcg_creative
@dmcg_creative 3 месяца назад
This is a wonderful video, very clear overview, thank you! Is there a way to predict a continuous variable vs just a binary one (yes/no)? For example if I wanted to take purchase amount, gender, and whether or not they started a subscription, how much is this person likely to spend over the next year? Thanks in advance!
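For a continuous target like next-year spend, scikit-learn has DecisionTreeRegressor. A hedged sketch of that idea follows; the data and column names are invented to mirror the question, not taken from the video.

    import pandas as pd
    from sklearn.tree import DecisionTreeRegressor

    # Hypothetical data: predict next-year spend from purchase amount, gender
    # and subscription status (all names/values made up for illustration).
    df = pd.DataFrame({
        "purchase_amount":  [120.0, 85.5, 300.0, 42.0, 510.0, 95.0],
        "gender":           [0, 1, 0, 1, 1, 0],      # already encoded
        "has_subscription": [1, 0, 1, 0, 1, 0],
        "next_year_spend":  [900.0, 300.0, 2100.0, 150.0, 3500.0, 400.0],
    })
    X = df.drop("next_year_spend", axis="columns")
    y = df["next_year_spend"]

    reg = DecisionTreeRegressor(max_depth=3, random_state=0)
    reg.fit(X, y)
    new_customer = pd.DataFrame([[200.0, 1, 1]], columns=X.columns)
    print(reg.predict(new_customer))   # predicted continuous spend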
@zerotoherofacts7106
@zerotoherofacts7106 3 года назад
Got a score of 0.9845 using decisiontree. Thanks for the great tutorials.
@zerotoherofacts7106
@zerotoherofacts7106 3 года назад
I also used the same dataset for training and for the score, but I did not get a score of 1. Why?
@codebasics
@codebasics 3 года назад
That’s the way to go ashish, good job working on that exercise
@saisanthosh8370
@saisanthosh8370 Год назад
For testing 0.79, for training 0.985. Thank you for the lectures, they make machine learning smooth to learn.
@kalaipradeep2753
@kalaipradeep2753 Год назад
How to fill empty value on age feature
@myslates2854
@myslates2854 11 месяцев назад
@codebasics Why did we use sklearn LabelEncoder instead of pd.get_dummies? Since company name, job and degree are nominal categorical data, we should have used pd.get_dummies instead of LabelEncoder. LabelEncoder should mostly be used for target variables, and then only when the data is ordinal categorical data, e.g. low < medium < high. Please help to clarify my doubt.
@r0cketRacoon
@r0cketRacoon 7 месяцев назад
I thought so too: using OneHotEncoder for the company and job columns and OrdinalEncoder for degree? Have you figured out the answer?
@КоробкаРобота
@КоробкаРобота 3 года назад
My score is: Without Train Test Split - 0.97 With Train Test Split - 0.77 Thanks for your video!
@codebasics
@codebasics 3 года назад
Good work. Thanks for working on the exercise.
@nnennaumelloh8834
@nnennaumelloh8834 3 года назад
Thank you for the straightforward explanation!
@codebasics
@codebasics 3 года назад
Glad it was helpful!
@MLLearner
@MLLearner 6 месяцев назад
81% accuracy Sir! Thanks, a lot.
@nihalchidambaram3395
@nihalchidambaram3395 2 года назад
Hello Sir, Great tutorial. My model's accuracy for the titanic dataset came out to be 82%. Thank you.
@viwygervitq3
@viwygervitq3 2 года назад
this really helped me. Thank you
@amalsunil4722
@amalsunil4722 4 года назад
Got an accuracy of average 97-99%(for the different test/validation Dataset...using different values for randomstate) for the titanic dataset. Features used-->Age,Sex,Fare,Pclass
@mithunjain4834
@mithunjain4834 3 года назад
can you please say why fit_transform () is used with labelencoder?
@spicytuna08
@spicytuna08 3 года назад
you explain so well. thanks.
@usmanasad3146
@usmanasad3146 5 лет назад
As usual, all your videos are awesome to watch. Thanks for the same :)
@60pluscrazy
@60pluscrazy 2 года назад
Amazing explanations 👌
@KallolMedhi
@KallolMedhi 5 лет назад
Can anyone tell me why we didn't use OneHotEncoding in this example? Does it mean that we need dummy variables only in regression algorithms?
@daisydiary1895
@daisydiary1895 5 лет назад
I also have the same question. I'd appreciate it if somebody could help.
@daisydiary1895
@daisydiary1895 5 лет назад
Maybe here is the answer: "Still there are algorithms like decision trees and random forests that can work with categorical variables just fine". datascience.stackexchange.com/questions/9443/when-to-use-one-hot-encoding-vs-labelencoder-vs-dictvectorizor
@Bobette_2409
@Bobette_2409 4 года назад
use pandas.get_dummies
@amalsunil4722
@amalsunil4722 4 года назад
Using One hot encoding worsens the accuracy of trees...therefore it's recommended to use label encoding
@mihirsheth9918
@mihirsheth9918 3 года назад
Sir, I have a doubt regarding the .score() method of sklearn.tree.DecisionTreeClassifier and accuracy_score() from sklearn.metrics. You have computed the performance of the model on the basis of .score(). What if we compute it on the basis of accuracy_score()? Are they identically the same? And what if, for a certain classifier, accuracy is not the best parameter to measure performance, i.e. the best parameter might be precision or recall or something else?
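A small sketch of the point raised here: for classifiers, .score() returns mean accuracy, which is the same number accuracy_score() gives; precision and recall come from the metrics module. The iris data is just a convenient stand-in.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score, classification_report

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
    y_pred = model.predict(X_test)

    print(model.score(X_test, y_test))            # mean accuracy
    print(accuracy_score(y_test, y_pred))         # identical value
    print(classification_report(y_test, y_pred))  # per-class precision/recall/F1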
@mohamadalomair2028
@mohamadalomair2028 4 года назад
Simple and short, many thanks
@lenaazimi1386
@lenaazimi1386 2 года назад
Thanks for this great tutorial. I replaced Nan with 29 which is approximately average of age. My score is 97.41
@memama2174
@memama2174 2 года назад
How did you replace them? Using fillna, it is converting the column into str.
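A minimal sketch of the numeric fill discussed here. fillna on a numeric column stays numeric; if it appears to become str, the column was likely object dtype to begin with, so coercing first helps. The tiny frame is a stand-in; in the exercise the data would come from the Titanic CSV (file name assumed).

    import pandas as pd

    df = pd.DataFrame({"Age": [22, None, 38, None, 26]})   # stand-in for the real data

    df["Age"] = pd.to_numeric(df["Age"], errors="coerce")  # guard against a str/object column
    df["Age"] = df["Age"].fillna(df["Age"].median())       # or .mean(); dtype stays float
    print(df["Age"])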
@leoadi3833
@leoadi3833 3 года назад
Sir, I want to ask: is it possible to get multiple target values, e.g. if I have more than 1 target value for my input? This tree is giving only 1.
@financewithsom485
@financewithsom485 3 года назад
You are a gem. Being at Bloomberg, the workload (and the salary) may be a lot, yet you still help the community with your videos.
@mariatereza4348
@mariatereza4348 2 года назад
Thank you very much!!!! I'd like to learn how to build the charts too 😅
@haziq7885
@haziq7885 2 года назад
Hi, wouldn't a label encoder mean you're assigning some sort of ordering to the values?
@israelgonzalez677
@israelgonzalez677 3 года назад
Awesome explanation! Just kindly allow me to ask one question: why did you use label encoding instead of dummies?
@codebasics
@codebasics 3 года назад
For decision tree one can use label encoding. But you can very well use dummies as well, you will get same result
@israelgonzalez677
@israelgonzalez677 3 года назад
@@codebasics thanks for your reply. ;)
@TarunSingh-je9my
@TarunSingh-je9my 3 года назад
Why is one-hot encoding not used for all three columns after labeling?
@mingzeli1770
@mingzeli1770 3 года назад
should we drop the na rows in exercise? since the ages are not correlated to each other, and, in my opinion, fillna with the mean value may affect the accuracy of the final model.
@adi5187
@adi5187 5 лет назад
You have created 3 objects in cell 7 (i.e. le_company, le_job, le_degree), but you have used only one object while creating the new columns in cell 8 (i.e. le_company). Is it necessary to create 3 objects, or can we get the job done with only one object like you did?
@eagleax9480
@eagleax9480 3 года назад
why we did not use 'one hot encoding' here ? please reply sir.
@itsme.samrat
@itsme.samrat 3 года назад
i guess, using LabelEncoder might make inputting the prediction data easy rather than in OneHotEncoding.. however, we can also use both
@azizalbastaki
@azizalbastaki 4 года назад
Great explanation, thanks a lot!!!!!
@codebasics
@codebasics 4 года назад
👍😊
@AmruthamOriginals
@AmruthamOriginals Год назад
Accuracy score is 0.748, training score is 0.977. Replaced the null values in the Age column with the median and dropped the unwanted features as mentioned in the video.
@ajaykushwaha4233
@ajaykushwaha4233 3 года назад
To convert a categorical variable into numeric we have techniques such as dummy variables, one-hot encoding and label encoding. My question is: here we have used label encoding, why not another technique?
@aakashp7808
@aakashp7808 4 года назад
I didnt get the label encoder part could u explain that in comment ?
@Otaku-Chan01
@Otaku-Chan01 Год назад
ValueError: could not broadcast input array from shape (2,712) into shape (1,712). I'm getting this error whenever I'm trying to fit (xtrain, ytrain) in the model. Can anyone please resolve it?
@ИванПетрович-г6ю
@ИванПетрович-г6ю 2 года назад
You said, that LabelEncoder might mislead our model. Shouldn't we use get_dummies or OneHotEncoder instead?
@codebasics
@codebasics 2 года назад
Decision tree is one such classifier where using labelencoder also works ok. But in general I agree, one should use OHE only. You can modify code in this tutorial using OHE and it works perfectly ok.
@TheHelghastkilla
@TheHelghastkilla 4 года назад
Why did you use regular encoding instead of One Hot Encoding? When do you know to use which?
@codebasics
@codebasics 4 года назад
You can use any of those. Both are same.
@krupagajjar5410
@krupagajjar5410 Год назад
What is the datatype of the target variable? I executed model.fit(inputs_n, target) and it threw the error ValueError: Unknown label type: 'unknown'. Please help.
@Shubham365
@Shubham365 3 года назад
Why you are not using get_dummies method in this case ?
@aleynahukmet5996
@aleynahukmet5996 2 года назад
Thanks for the video so much I love your work! However I want to ask a question, should not we use nominal encoding for company name ?
@ogochukwustanleyikegbo2420
@ogochukwustanleyikegbo2420 Год назад
I think where the number of categories is not much greater than 2, we can stick to ordinal encoding.
@Pride_Of_Ultras
@Pride_Of_Ultras 3 года назад
you are great!
@jaysoni7812
@jaysoni7812 4 года назад
I have one question: why don't you do One Hot Encoding after Label Encoding?
@arpitdalal214
@arpitdalal214 Месяц назад
My model got a score of 97.9%
@codebasics
@codebasics 4 года назад
Step by step roadmap to learn data science in 6 months: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-H4YcqULY1-Q.html Exercise solution: github.com/codebasics/py/blob/master/ML/9_decision_tree/Exercise/9_decision_tree_exercise.ipynb Complete machine learning tutorial playlist: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-gmvvaobm7eQ.html 5 FREE data science projects for your resume with code: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-957fQCm5aDo.html
@bestineouya5716
@bestineouya5716 4 года назад
97.97% accurate
@rahulpatidar9905
@rahulpatidar9905 4 года назад
@@bestineouya5716 i also got the same accuracy
@praveenkamble89
@praveenkamble89 4 года назад
Great explanation Sir, thanks a lot for your efforts and help. I got 97.76% accuracy. I did not map male and female to 1, 2; instead I used them as is. Is it necessary to do that? Is there any significance to it?
@harris7294
@harris7294 3 года назад
Exercise results: Accuracy: 0.8229665071770335. I actually used your csv file for training and the test.csv provided on Kaggle as test data, which increased my training data (it would have been less if I had split my data), increased accuracy (as we have more data to train on) and reduced the chance of overfitting compared to using the same data for both training and testing... Thank you for the great video.
@anonym9158
@anonym9158 3 года назад
0.98
@The_TusharMishra
@The_TusharMishra 9 месяцев назад
Accuracy: 0.811111111. But I have a question: how do we know when to use linear regression vs when to use a decision tree on a dataset? Please answer this.
@g.scholtes
@g.scholtes 2 года назад
In In [8] you use the "le_company" LabelEncoder object 3 times and never use the 'le_job' and 'le_degree' objects. It still works, so my guess would be that you only need one LabelEncoder object to do the job.
@rajubhatt2
@rajubhatt2 2 года назад
A label encoder basically converts categorical values to numerical ones; since job and degree are categorical, you still need them to be label encoded. And he did use them, look carefully at the fit_transform() calls.
@omdusane8685
@omdusane8685 Год назад
@@rajubhatt2 he encoded them using company object Only though
@AkhileshKumar-mg9vs
@AkhileshKumar-mg9vs Год назад
Well, here it worked because Sir used fit_transform, but if he had split the data into test and train sets, then he would have used transform on the remaining test set, and for that different instances would be required for each column.
@PAWANKELA-rh7yj
@PAWANKELA-rh7yj 3 месяца назад
When I use only one object, my first 2 rows are dropped from the dataset. Why?
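A minimal sketch of the point made in this thread: reusing one LabelEncoder is fine when you only call fit_transform column by column, but keeping one fitted encoder per column is what lets you transform new or test data consistently later. The values below loosely mirror the video's salaries dataset and are assumptions.

    import pandas as pd
    from sklearn.preprocessing import LabelEncoder

    df = pd.DataFrame({
        "company": ["google", "abc pharma", "facebook", "google"],
        "job": ["sales executive", "business manager", "computer programmer", "sales executive"],
        "degree": ["bachelors", "masters", "bachelors", "masters"],
    })

    encoders = {}
    for col in ["company", "job", "degree"]:
        encoders[col] = LabelEncoder()                       # one encoder per column
        df[col + "_n"] = encoders[col].fit_transform(df[col])

    # Encoding a new record later needs each column's own fitted mapping:
    new_row = {"company": "facebook", "job": "business manager", "degree": "masters"}
    print([int(encoders[c].transform([new_row[c]])[0]) for c in ["company", "job", "degree"]])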
@Koome777
@Koome777 10 месяцев назад
My model got a score of 98.6%. I dropped all the Age Na values which reduced the sample size from 812 to 714. I label-encoded the Sex column and then used a test size of 0.2 with the remainder of 0.8 as the training size. I am all smiles. Thanks @codebasics
@fairoossahfeerulwasihf1139
@fairoossahfeerulwasihf1139 7 дней назад
i did the same thing and got 82% accuracy only. why is it?
@nikhilrana668
@nikhilrana668 3 года назад
For those wondering what 'information gain' is, it is just the measure of decrease of entropy after the dataset is split.
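A small worked example of that definition: information gain is the parent node's entropy minus the weighted entropy of the children after a candidate split (labels below are made up for illustration).

    import math

    def entropy(labels):
        n = len(labels)
        probs = [labels.count(c) / n for c in set(labels)]
        return -sum(p * math.log2(p) for p in probs)

    parent = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]   # 5 yes / 5 no -> entropy 1.0
    left   = [1, 1, 1, 1, 0]                  # one child after a candidate split
    right  = [1, 0, 0, 0, 0]                  # the other child

    weighted_children = (len(left) / len(parent)) * entropy(left) \
                      + (len(right) / len(parent)) * entropy(right)
    print(entropy(parent) - weighted_children)   # information gain ≈ 0.278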
@proplayerzone5122
@proplayerzone5122 2 года назад
Hi sir, I am a 10th grade student and I am learning ML and in the exercise My model got 81% accuracy😀 sir. Will Make many models while learning and share with you. Thanks for the tutorials sir.
@codebasics
@codebasics 2 года назад
It is OK to learn ML, but make sure you find time for outdoor activities, sports and some fun things. Childhood will never come back, so do not waste it in search of some shiny career. If you are so concerned, I would advise focusing on math and statistics at this stage and worrying about ML later.
@proplayerzone5122
@proplayerzone5122 2 года назад
@@codebasics ok sir. Thanks for guidance!
@kalaipradeep2753
@kalaipradeep2753 Год назад
Hi bro now what doing....
@kalaipradeep2753
@kalaipradeep2753 Год назад
How to fill empty value on age feature
@toxiclegacy5948
@toxiclegacy5948 8 месяцев назад
@@codebasics Absolutely correct, it's great to learn new things. But this is not the right age to be learning all of this. Make more and more memories in childhood. I am 23 and trust me, life is very painful…
@anujack7023
@anujack7023 3 года назад
I got 74.4% accuracy. It is good to do everything on my own...
@codebasics
@codebasics 3 года назад
That’s the way to go anujack, good job working on that exercise
@gaganbansal386
@gaganbansal386 3 года назад
Why have we not created dummy variables here, as we did for Logistic Regression using OneHotEncoder?
@mohitb5230
@mohitb5230 3 года назад
In the one-hot encoding tutorial you mentioned it's better because then we don't have an encoding where the values have a relation to each other. Please clarify. These videos are teaching me a lot.
@anshulagarwal6682
@anshulagarwal6682 2 года назад
Yes same doubt. Have you cleared your doubt? If yes, then please tell.
@anshulagarwal6682
@anshulagarwal6682 2 года назад
I think company should be given one hot encoding while job and degree should be label encoded.
@WestCoastBrothers_
@WestCoastBrothers_ 3 года назад
Incredible video! Thank you for sharing your knowledge. Scored an 83.15%. I changed the hyperparameter "criterion" to entropy instead of gini and it consistently performed better. Looking forward to seeing how changing other hyperparameters affects accuracy.
@codebasics
@codebasics 3 года назад
That’s the way to go niko, good job working on that exercise
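A sketch of the kind of hyperparameter comparison described in that comment (criterion plus tree depth), using cross-validation. The breast-cancer dataset is a generic stand-in, not the Titanic exercise file.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)

    for criterion in ("gini", "entropy"):
        for max_depth in (3, 5, None):
            model = DecisionTreeClassifier(criterion=criterion, max_depth=max_depth, random_state=0)
            score = cross_val_score(model, X, y, cv=5).mean()
            print(f"criterion={criterion:7s} max_depth={str(max_depth):4s} cv accuracy={score:.3f}")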
@ansh6848
@ansh6848 2 года назад
This man has actually made learning machine learning easy for everyone, whereas other channels show big mathematical equations and formulas, which makes beginners uncomfortable learning ML. Thanks to this channel. ♥️🥰
@bhawnaverma5532
@bhawnaverma5532 2 года назад
Very true. Complex concepts explained in a very understandable way. Hats off, really.
@ajaykumaars2154
@ajaykumaars2154 4 года назад
Hi Sir, Thanks for the great video. I've a question, why didn't we use one hot encoding here for our categorical variables?
@codebasics
@codebasics 4 года назад
We can but for decision tree it doesn't make much difference that's why I didn't use it
@ajaykumaars2154
@ajaykumaars2154 4 года назад
@@codebasics Ohh, OK Sir. Thank you
@whatever_5913
@whatever_5913 3 года назад
@@codebasics But then doesn't the model give a higher priority(value) to Facebook than to google on the basis of the number assigned in Label Encoding ...just confused here.
@moeintorabi2205
@moeintorabi2205 4 года назад
There are some NaN values in the Age column. I filled them through padding. Also, I split my data for testing, and at the end I got an accuracy of 0.8.
@piyushtale0001
@piyushtale0001 2 года назад
Use fillna with median and accuracy will be 0.9777 by normal method
@tejassrivastava6971
@tejassrivastava6971 2 года назад
@@piyushtale0001 I have used median() for Pclass, Age and Fare but got a score of around 78. How can I improve?
@irmscher9
@irmscher9 5 лет назад
for x in features.columns:
    features[x] = le.fit_transform(features[x])
@prabur3296
@prabur3296 5 лет назад
How do I write the predicted values into a csv file? For example, for model.predict(test_data), I want the output array in a csv file submission.csv.
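A hedged sketch of one way to do that with pandas, in the Kaggle Titanic submission layout (the PassengerId/Survived column names assume that competition; the values below are placeholders for your real test frame and model output).

    import numpy as np
    import pandas as pd

    # Stand-ins: replace with your test dataframe and the array from model.predict(...)
    test_df = pd.DataFrame({"PassengerId": [892, 893, 894]})
    predictions = np.array([0, 1, 0])

    submission = pd.DataFrame({"PassengerId": test_df["PassengerId"], "Survived": predictions})
    submission.to_csv("submission.csv", index=False)
    print(submission)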
@sujankatwal9255
@sujankatwal9255 4 года назад
Thank you so much for the tutorial. I'm doing all the exercises. I got an accuracy of 81% on the Titanic dataset.
@codebasics
@codebasics 4 года назад
Sujan, that's a decent score. Good job 👍👏
@kirankumarb2190
@kirankumarb2190 3 года назад
Why didn't we use dummy column concept here like we did for linear regression?
@naveedarif6285
@naveedarif6285 3 года назад
Since trees have many levels, the dummy variable concept doesn't work as well here, so we try to avoid it.
@snehagupta-xz1fs
@snehagupta-xz1fs 3 года назад
@@naveedarif6285 how can we train and split dataset in this? Please help
@bhumitbedse8156
@bhumitbedse8156 3 года назад
Hello sir, at 7:50 a LabelEncoder is created for all the columns (company, job and degree), but in fit_transform why is only le_company used? For job and degree should we write le_job.fit_transform() and le_degree.fit_transform()? Am I right? Please answer 😶
@ss57hd
@ss57hd 5 лет назад
Your videos are always awesome! Can you suggest some websites where I can find questions like those in your exercises?
@codebasics
@codebasics 5 лет назад
Hey, honestly I am not aware of any good resource for this. Kaggle.com is there but it is for competition and little more advanced level. Try googling it. Sorry.
@valapoluprudhviraj9778
@valapoluprudhviraj9778 4 года назад
Hurray! Sir i got an accuracy of 97.38% by using interpolate method for Age column.😍✨
@HipHop-cz6os
@HipHop-cz6os 4 года назад
Did u use train_test_split method
@codebasics
@codebasics 4 года назад
Good job Prudhvi, that’s a pretty good score. Thanks for working on the exercise
@jixa2109
@jixa2109 2 года назад
It was easy.. i got 98.3%
@vanshoberoi2154
@vanshoberoi2154 Месяц назад
Can you walk me through what extra you did for a 97 score? Normally I'm getting 82. I found the right random state..
@udaysai2647
@udaysai2647 5 лет назад
Great tutorials, keep going, but I have a doubt: why haven't you used OneHotEncoder for company here, as it is a nominal variable? And please make a tutorial on what exactly these parameters are, and on random forests.
@Bobette_2409
@Bobette_2409 4 года назад
True, one-hot encoding is better than LabelEncoder, as assigning categories would result in prediction errors if that feature is chosen, because a higher category is considered better than the others. So in this case, if Google = 0 and FB = 1, then FB > Google.
@aravindabilash151
@aravindabilash151 4 года назад
@@Bobette_2409 Thank you for the clarification, actually i was trying it with OneHotEncoder and resulted in mis-prediction.
@ganeshyugesh9559
@ganeshyugesh9559 2 года назад
i have only started to learn about data science using python and i have a question: Why use labelencoder rather than getting dummy variables for the categorical variables? Is it more efficient using labelencoder?
@yourskoolboy
@yourskoolboy Год назад
I prefer the .get_dummies()
@larrybuluma2458
@larrybuluma2458 4 года назад
Thanks for this tutorial mate, it is the best straight forward DTC tutorial. Using entropy i got an 81% accuracy and, using gini i have a 78% accuracy
@codebasics
@codebasics 4 года назад
That’s the way to go Larry, good job working on that exercise
@kuldeepsharma7924
@kuldeepsharma7924 4 года назад
Got an accuracy of 97.20% Dropped all rows whose values were missing. Thank you, Dhaval sir..
@codebasics
@codebasics 4 года назад
Kuldeep, that is indeed a nice score. good job buddy.
@elvenkim
@elvenkim 2 года назад
Mine is 98.459%. Likewise I removed all missing data for Age.
@ShubhamSharma-qb1bw
@ShubhamSharma-qb1bw 2 года назад
@@elvenkim Why are you removing the missing values when it is possible to fill them with the mean or median? Which one to use depends on the outliers present in the Age column.
@naveenkalhan95
@naveenkalhan95 4 года назад
really appreciate your work. learning a lot... just want to confirm something from the tutorial @7:40 you are using fit_transform with le_company object for all the other columns and did not use le_job object and le_degree object. is it ok? or should we do it? Thank you very much again.
@sadiqabbas5239
@sadiqabbas5239 3 года назад
That's just the variable name you can use that way too..
@eliashossain9849
@eliashossain9849 4 года назад
Exercise result for the titanic dataset: Score: 0.77 (using Decision Tree Classifier)
@cyberversary262
@cyberversary262 3 года назад
Dude, can you please share the code? I'm getting an accuracy of 1.0.
@prakashdolby2031
@prakashdolby2031 3 года назад
@@cyberversary262 You are giving the entire dataset to be trained on. Better to try with a test_size of 0.2-0.3 to get better results.
@cyberversary262
@cyberversary262 3 года назад
@@prakashdolby2031 dude I have asked this question 3 months ago 😂😂😂
@rahulkambadur147
@rahulkambadur147 5 лет назад
Do you have anything related to sentiment analysis / text mining / text analysis? Please make a tutorial on text analytics, as the other videos are so good. I also request you to create charts for AUC and a model evaluation according to the CRISP-DM model.
@abhishekkhare6175
@abhishekkhare6175 3 года назад
Got 97.4% accuracy; I filled the empty blocks in Age with the mean. Thanks a lot for the perfect tutorial.
@nitinmalusare6763
@nitinmalusare6763 3 года назад
How to calculate accuracy for the above dataset mentioned in the video
@muskanagrawal9428
@muskanagrawal9428 7 месяцев назад
thanks it helped me increase my accuracy
@durjoybarua9520
@durjoybarua9520 День назад
@@nitinmalusare6763 model.score
@stephenngumbikiilu3988
@stephenngumbikiilu3988 2 года назад
Thank for these awesome videos. I have been learning a lot through your ML tutorials. I replaced the missing values in the 'Age' column with the median. My test set was 20% and my accuracy on test data was 99.44%.
@AnanyaRay-ct8nx
@AnanyaRay-ct8nx Год назад
how? can u share the solution?
@vikassengupta8427
@vikassengupta8427 6 месяцев назад
There is high chance that the model is overfitted, it is not generalized
@vikassengupta8427
@vikassengupta8427 6 месяцев назад
And chances are that your model has already seen your test data; better to rerun from the first cell once and check...
@mohammedalshen3147
@mohammedalshen3147 4 года назад
Thank you so much for making it very simple. As an ML learner, do we need to understand the code behind each of these sklearn functions?
@codebasics
@codebasics 4 года назад
Not necessary. If you know the math and internal details, it can help if you want to write your own customised ML algorithm, but otherwise no.
@areejbasudan4732
@areejbasudan4732 2 года назад
@@codebasics can you recommend videos for understanding the math behind it, thanks
@mandeep8696
@mandeep8696 Год назад
@codebasics I have a doubt here: for the different companies and jobs we should have used get_dummies or one-hot encoding, so why did we use Label Encoder here? Won't our model internally assume that Google is better than Facebook and the pharma company, and vice versa? Please clarify if I understood it correctly.
@slainiae
@slainiae 7 месяцев назад
It might indeed affect the accuracy. One-Hot-Encoding should be used when dealing with nominal categories - i.e. no inherent ordering. Similarly for the Titanic exercise, male or female should (in theory) also use One-Hot-Encoding and not Label Encoding.
@patelshivam1965
@patelshivam1965 5 лет назад
Please can any one tell me how to increase our model's accuracy? i.e. Score
@codebasics
@codebasics 5 лет назад
Increasing the score is an art as well as a science. If your question is specific to decision trees, try fine-tuning model parameters such as the criterion, tree depth etc. You can also try some feature engineering and see if it helps.
@samitpatra8615
@samitpatra8615 5 лет назад
I tried with increasing training data and score is increased.
@vikaskhugshal50
@vikaskhugshal50 2 года назад
So, in this example, why aren't we converting categorical features to numbers? We did convert them to numerical values but we are not doing OneHotEncoding here like we did in one of the previous video. Do we need to convert Categorical features to different numerical columns only in case of linear models?
@O_BALLE_BALLE
@O_BALLE_BALLE 2 месяца назад
Tree ensembles do not need one hot encoding
@musicsense2799
@musicsense2799 4 года назад
Amazing video! But I have some doubts, please help me here: 1. We made three LabelEncoder instances here. Can't we use just one to encode all three? 2. We use label encoding and not OneHotEncoding; however, the latter makes more sense, as our model might assume that our variables have some order/precedence. It would be great if you could clarify my doubts. Thanks!
@paulkornreich9806
@paulkornreich9806 2 года назад
It is necessary to understand the underlying logic of the algorithm. In regression, the algorithm tries to fit to a line, curve (or higher dimensional object in SVM), so, what the relative value (order, or where it is on the axis) is matters. In decision tree, the algorithm is just asking Yes/No questions, such as Is the company Facebook?, Does the employee have only a bachelors degree?, etc, so the order is not significant. Therefore, a the Label encoder is valid for decision tree. While it could have been possible to lump the label encoders into one, say by using a power of 10 to distinguish them, it would have given too much weight to the highest power of 10 (the algorithm understands numbers, so it is going to ask >/< /= questions), but the whole point of using decision tree was for *the algorithm* to find the precedence of features that will give the quickest prediction. Therefore it is better to have more features (i.e. more Label encoders). Then, if more features is better, one could re-ask the question of why not one-hot encoding, that would give even more encoders. Now, the issue is the tradeoff of accuracy vs conciseness. Here, there were only 3 companies, but there could be a case where a problem was examining over 100 companies. Having a one-hot encoder for all the companies would get quite cumbersome.
@MounirAhmedou
@MounirAhmedou 11 месяцев назад
Hello @codebasics, thank you for the video, but I have a question. Why did you use the LabelEncoder instead of one-hot encoding, knowing that these values are not ordinal? If you guys know the answer, thanks for sharing with us.
@antimuggle_ridhi2565
@antimuggle_ridhi2565 11 месяцев назад
same question, why we did not use one hot encoding after label encoding? @codebasics
@alexplastow9496
@alexplastow9496 3 года назад
Thanks for helping me get my homework done, by God it was a mistake to wait till the last day
@yeru2480
@yeru2480 3 года назад
oh i couldn't agree more
@kavibharathi1547
@kavibharathi1547 Год назад
Sir, why are you creating the LabelEncoder object three times? Why can't we create one common encoder? 8:10 Anyone who knows the answer, let me know! :)
@pablu_7
@pablu_7 4 года назад
I got 98.4 % in titanic data set . Thank you Sir , you are the best.
@codebasics
@codebasics 4 года назад
Oh wow, good job arnab 👍😊
@jayrathod2172
@jayrathod2172 3 года назад
I don't want to hurt your feelings, but 98.4% is only possible if you are checking the model score on train data instead of test data.
@blaze9558
@blaze9558 8 месяцев назад
true@@jayrathod2172
@vikassengupta8427
@vikassengupta8427 6 месяцев назад
@@jayrathod2172 Yes, I was about to say that. It's also possible if you have changed the random state multiple times and your model has seen all your data and is now overfitted.
@vanshoberoi2154
@vanshoberoi2154 Месяц назад
how the fuckkkkk
@tejobhiru1092
@tejobhiru1092 3 года назад
Thank you for such amazing, well detailed and easy to understand tutorials! I'm following your channel exclusively for learning ML, along with Kaggle competitions, and also recommending your channel to my peers. Great work! PS: I got 75.8% as the score of my model for the exercise. Any tips to improve the score?
@shreyansengupta2594
@shreyansengupta2594 2 года назад
take test_size=0.5 it increases to 78.15%
@pranav9339
@pranav9339 Год назад
Re-execute the train test split function, as it generates rows randomly. Then fit the model again and execute. Continue this 4-5 times until you get somewhere around 95% accuracy. That set of data is then the most accurate for training the model.
@siddhantapuvenkatabhuvanch7597
@siddhantapuvenkatabhuvanch7597 3 года назад
I got this, thanks for your detailed explanation. Score = 0.7541899441340782
@noorameera26
@noorameera26 3 года назад
Will never get tired to say thank you at every video I watched but honestly, you're the best! :) Keep posting great videos
@codebasics
@codebasics 3 года назад
I am happy this was helpful to you.
@krijanprajapati6816
@krijanprajapati6816 5 лет назад
Thank you so much sir, I really appreciate your tutorial, I learnt a lot
@codebasics
@codebasics 5 лет назад
Krijancool, thanks for the comment. By the way your name is really cool 😎
@ghzich017
@ghzich017 2 года назад
At 7:28, why do you have to create multiple LabelEncoder() objects?
@WorldsTuber13
@WorldsTuber13 5 лет назад
Your videos are absolutely awesome.... Those who want a career transition into DS typically spend more than 3k US dollars on certification, and what they ultimately get is a diploma or degree certificate in Data Science, not what is actually happening in data science. But when a scholar like you trains us, we come to know what's happening in it.
@codebasics
@codebasics 5 лет назад
K Prabhu, thanks for your kind words of appreciation.
@study_with_thor
@study_with_thor 3 года назад
Help me explain this: I used different methods to encode the strings in the Sex column (1: LabelEncoder, 2: get_dummies, 3: map), then I filled NaNs with the mean() method and kept test_size the same for all 3 encoding methods, BUT I got different accuracies. Tell me why?
@yashchavan1350
@yashchavan1350 3 года назад
Sir, in the exercise you performed map on the Sex column and I did it using LabelEncoder. I liked that you give us a different approach to performing the same task. And one more question, Sir: instead of the mean, why can't we use the mode on the Age column? Btw, my score is 79%.
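A minimal sketch of the two options raised in that comment: encoding Sex via map or via LabelEncoder, and filling Age with the mode instead of the mean. The tiny frame and the 1/2 mapping are illustrative assumptions, not the exercise solution itself.

    import pandas as pd
    from sklearn.preprocessing import LabelEncoder

    df = pd.DataFrame({"Sex": ["male", "female", "female", "male"],
                       "Age": [22.0, None, 30.0, 22.0]})

    # Option A: explicit mapping
    df["Sex_mapped"] = df["Sex"].map({"male": 1, "female": 2})

    # Option B: LabelEncoder
    df["Sex_encoded"] = LabelEncoder().fit_transform(df["Sex"])

    # Mode also works for filling Age; the median is the more common default
    # for a numeric column with outliers.
    df["Age"] = df["Age"].fillna(df["Age"].mode()[0])
    print(df)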
@yashdewan3633
@yashdewan3633 2 года назад
my score : 0.8044692737430168
@minsaralokunarangoda4251
@minsaralokunarangoda4251 4 месяца назад
Thanks for the awesome tutorial.... Dropped all na values in Age column which reduced the sample size from 812 to 714 and ran the model couple times, the best accuracy I got was 83.21%
@learnerlearner4090
@learnerlearner4090 4 года назад
Thanks so much for these tutorials! These are the best tutorials I've found so far. The code shared by you for examples and exercises are very helpful. I got score 76% for the exercise. How is it possible to get a different score for the same model and the same data? The steps followed are the same too.
@codebasics
@codebasics 4 года назад
train_test_split will generate different samples every time, so even when you run your code multiple times it will give a different score. Specify random_state in the train_test_split method, let's say 10; after that, when you run your code, you get the same score. This is because your train and test samples are now the same between different runs.
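A minimal sketch of that point: fixing random_state makes the split, and hence the score, reproducible across runs. The iris data is just a stand-in for the exercise dataset.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)

    for run in range(3):
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=10)
        model = DecisionTreeClassifier(random_state=10).fit(X_train, y_train)
        print(run, model.score(X_test, y_test))   # identical score on every run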
@learnerlearner4090
@learnerlearner4090 4 года назад
@@codebasics Got it. Thanks!
@anujvyas9493
@anujvyas9493 4 года назад
Same, I too got an accuracy of 76% but was aware about the random_state attribute! :)