
Decision Tree Hyperparameters: max_depth, min_samples_split, min_samples_leaf, max_features

Bhavesh Bhatt
105K subscribers
34K views

In this video we will explore the most important hyperparameters of the decision tree model and how they impact the model in terms of overfitting and underfitting.
The important hyperparameters of a decision tree are max_depth, min_samples_split, min_samples_leaf, max_features, and criterion.
The explanation of the difference between min_samples_split & min_samples_leaf is taken from an amazing answer on Stack Overflow, link: stackoverflow.com/questions/4...
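Below is a minimal sketch (not from the video; the dataset and values are illustrative) of how these hyperparameters are set on scikit-learn's DecisionTreeClassifier, with comments summarizing what each one constrains:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = DecisionTreeClassifier(
    max_depth=3,           # cap on tree depth; smaller -> simpler tree, less overfitting
    min_samples_split=10,  # an internal node needs >= 10 samples before it may split
    min_samples_leaf=5,    # every leaf a split creates must keep >= 5 samples
    max_features=2,        # number of features considered when searching for each split
    criterion="gini",      # impurity measure; "entropy" is the other common choice
    random_state=42,
)
clf.fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))
print("test accuracy: ", clf.score(X_test, y_test))

Comparing train vs. test accuracy while varying one parameter at a time is a simple way to see each parameter's effect on overfitting and underfitting.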
If you enjoy these tutorials and would like to support them, the easiest way is to simply like the video and give it a thumbs up; it's also a huge help to share these videos with anyone who you think would find them useful.
Please consider clicking the SUBSCRIBE button to be notified of future videos, and thank you all for watching.
You can find me on:
Blog - bhattbhavesh91.github.io
Twitter - / _bhaveshbhatt
GitHub - github.com/bhattbhavesh91
Medium - / bhattbhavesh91
#DecisionTree #Hyperparameters #maxdepthdecisiontree

Published: Jul 30, 2024

Comments: 27
@Howtosolveprobem 4 years ago
Nice video, but if it were explained pictorially it would have been more interesting.
@yasokavi 4 years ago
It's really helpful!
@suhailchougle7315 3 years ago
Thank you so much for the video, really appreciate it
@bhattbhavesh91 3 years ago
I'm glad you liked the video :)
@yscosta 2 years ago
Hello! Do you have any advice about hyperparameters to use in a regression random forest?
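A minimal sketch, assuming the question refers to scikit-learn's RandomForestRegressor: the tree-level knobs from the video apply to each tree in the forest, plus ensemble-level ones like n_estimators (the values below are illustrative, not recommendations):

from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

reg = RandomForestRegressor(
    n_estimators=200,     # number of trees; more trees -> lower variance, slower fit
    max_depth=None,       # let individual trees grow; the ensemble averages out variance
    min_samples_leaf=2,   # per-tree regularization, just as for a single decision tree
    max_features=1.0,     # fraction of features tried per split; lower it to decorrelate trees
    random_state=0,
)
reg.fit(X, y)
print("R^2 on training data:", reg.score(X, y))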
@MrYURIBP 1 year ago
Thanks bro.. greetings 🇨🇱
@bhargav7476 3 years ago
Does a highly skewed feature affect the AUC of a decision tree classifier? And do I have to remove outliers? I have a feature with 80% of values as 0 and a maximum of 13; I have tried log and sqrt transformations but it's still highly skewed.
@texasfossilguy 2 years ago
Try binning, or reduce the skew by IQR.
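A minimal sketch of those two suggestions (the feature below is simulated to mimic the one described; bin edges and thresholds are illustrative). Worth noting that decision tree splits are invariant to monotone transforms like log/sqrt, which is why those transformations leave the tree's splits, and typically its AUC, unchanged:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Simulate the described feature: ~80% zeros, the rest positive values up to ~13.
x = pd.Series(np.where(rng.random(1000) < 0.8, 0.0,
                       np.minimum(rng.exponential(3.0, 1000), 13.0)))

# Option 1: binning - collapse the skewed values into a few ordinal buckets.
binned = pd.cut(x, bins=[-np.inf, 0.0, 1.0, 5.0, np.inf], labels=[0, 1, 2, 3])

# Option 2: IQR capping - clip values beyond Q3 + 1.5*IQR. With 80% zeros the
# plain IQR collapses to 0, so compute the fences on the nonzero values only.
nonzero = x[x > 0]
q1, q3 = nonzero.quantile(0.25), nonzero.quantile(0.75)
x_capped = x.clip(upper=q3 + 1.5 * (q3 - q1))

print(binned.value_counts().sort_index())
print("max before:", x.max(), "-> max after capping:", round(x_capped.max(), 2))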
@RajaSekharaReddyKaluri 4 years ago
Nice one!!
@shahrukhahmad4127 1 year ago
Insightful video...thank you
@bhattbhavesh91 1 year ago
You are welcome 😊
@trinkesh8423 4 years ago
Great, very helpful!
@bhattbhavesh91 4 years ago
Glad it was helpful!
@jamalnuman 6 months ago
Do you have a similar presentation for the hyperparameters of random forest and XGBoost?
@murilopalomosebilla2999 3 years ago
Nice explanation!
@bhattbhavesh91 3 years ago
Glad it was helpful!
@alipaloda9571 3 years ago
Love the information you shared, sir. Can you please make a video on pre-pruning and post-pruning in RF?
@nopenope5949 2 years ago
What aspects should be considered to decide which hyperparameters we should use in our decision tree classifier?
@kkamit0106 2 years ago
Can you talk about CP values in a decision tree?
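A minimal sketch, assuming "CP" refers to the cost-complexity pruning parameter, called cp in R's rpart and exposed as ccp_alpha in scikit-learn (dataset and alpha choice are illustrative):

from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Effective alphas at which subtrees get pruned, from weakest-link pruning.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)
print("first few candidate alphas:", path.ccp_alphas[:5])

# Larger ccp_alpha -> more aggressive pruning -> smaller tree.
mid_alpha = path.ccp_alphas[len(path.ccp_alphas) // 2]
pruned = DecisionTreeClassifier(ccp_alpha=mid_alpha, random_state=0).fit(X, y)
print("leaves after pruning:", pruned.get_n_leaves())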
@anish_k 4 years ago
I'm badly stuck here; please explain with a practical implementation.
@kurrucharan9376 4 years ago
Hey Bhavesh! When max_features=10, what is the difference between selecting the best attribute/feature from 10 versus from 50 (the total number of features), given that in both cases the feature is selected based on information gain or Gini gain?
@venkatvicky570 3 years ago
When the best feature is selected from all 50 features, the same topmost feature will be chosen to split the internal nodes every time a split is needed in the sub-trees. When max_features=10, however, the tree randomly chooses 10 features for each split, and the best feature from those randomly selected 10 is used to split that internal node.
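A minimal sketch illustrating that answer (the dataset is synthetic and the numbers are illustrative): with max_features=10, each split draws a fresh random subset of features, so the tree typically ends up splitting on more distinct features than one that always scans all 50:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=50, n_informative=5,
                           random_state=0)

full = DecisionTreeClassifier(random_state=0).fit(X, y)
sub = DecisionTreeClassifier(max_features=10, random_state=0).fit(X, y)

# Distinct feature indices actually used for splits (leaves are marked with -2).
used_full = np.unique(full.tree_.feature[full.tree_.feature >= 0])
used_sub = np.unique(sub.tree_.feature[sub.tree_.feature >= 0])
print("features used, all 50 available:    ", len(used_full))
print("features used, 10 sampled per split:", len(used_sub))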
@jamalnuman 6 months ago
Very useful, but it would be better with more figures and images to explain what each parameter means.
@ajinkyakoshti2411 4 years ago
Do a similar video with LightGBM instead of decision trees, where more hyperparameters come into the picture.
@bhattbhavesh91 4 years ago
Sure!
@Ankitsharma-vo6sh 3 years ago
Can you show it on a dataset?
@bhattbhavesh91 3 years ago
Sure, I'll create a video on it soon!