
XGBoost Part 2 (of 4): Classification 

StatQuest with Josh Starmer
1.2M subscribers
221K views

In this video we pick up where we left off in part 1 and cover how XGBoost trees are built for Classification.
NOTE: This StatQuest assumes that you are already familiar with...
XGBoost Part 1: XGBoost Trees for Regression: • XGBoost Part 1 (of 4):...
...the main ideas behind Gradient Boost for Classification: • Gradient Boost Part 3 ...
...Odds and Log(odds): • Odds and Log(Odds), Cl...
...and how the Logistic Function works: • Logistic Regression De...
Also note, this StatQuest is based on the following sources:
The original XGBoost manuscript: arxiv.org/pdf/1603.02754.pdf
The original XGBoost presentation: homes.cs.washington.edu/~tqch...
And the XGBoost Documentation: xgboost.readthedocs.io/en/lat...
For a complete index of all the StatQuest videos, check out:
statquest.org/video-index/
If you'd like to support StatQuest, please consider...
Buying The StatQuest Illustrated Guide to Machine Learning!!!
PDF - statquest.gumroad.com/l/wvtmc
Paperback - www.amazon.com/dp/B09ZCKR4H6
Kindle eBook - www.amazon.com/dp/B09ZG79HXC
Patreon: / statquest
...or...
RU-vid Membership: / @statquest
...a cool StatQuest t-shirt or sweatshirt:
shop.spreadshirt.com/statques...
...buying one or two of my songs (or go large and get a whole album!)
joshuastarmer.bandcamp.com/
...or just donating to StatQuest!
www.paypal.me/statquest
Lastly, if you want to keep up with me as I research and create new StatQuests, follow me on twitter:
/ joshuastarmer
Corrections:
14:24 I meant to say "larger" instead of "lower".
18:48 In the original XGBoost documents they use the epsilon symbol to refer to the learning rate, but in the actual implementation, this is controlled via the "eta" parameter. So, I guess to be consistent with the original documentation, I made the same mistake! :)
#statquest #xgboost
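As a rough numerical sketch of what the video covers (an initial prediction of 0.5, fitting a tree to the residuals using similarity scores and gain, and scaling the leaf output value by the learning rate, eta, before updating the log(odds)), here is a small Python example. The data, lambda, and eta values are made up for illustration and are not the example used in the video.

import numpy as np

# Hypothetical setup: 4 training rows, initial predicted probability of 0.5 for everyone.
y = np.array([0, 0, 1, 1])            # observed classes
p = np.full(4, 0.5)                   # previous predicted probabilities
residuals = y - p                     # [-0.5, -0.5, 0.5, 0.5]
lam, eta = 0.0, 0.3                   # lambda (regularization) and eta (learning rate), illustrative values

def similarity(res, prev_p, lam):
    # Classification similarity score: (sum of residuals)^2 / (sum of p*(1-p) + lambda)
    return np.sum(res) ** 2 / (np.sum(prev_p * (1 - prev_p)) + lam)

def output_value(res, prev_p, lam):
    # Leaf output value: (sum of residuals) / (sum of p*(1-p) + lambda)
    return np.sum(res) / (np.sum(prev_p * (1 - prev_p)) + lam)

# Suppose a candidate split sends rows 0-1 to the left leaf and rows 2-3 to the right leaf.
left, right = slice(0, 2), slice(2, 4)
gain = (similarity(residuals[left], p[left], lam)
        + similarity(residuals[right], p[right], lam)
        - similarity(residuals, p, lam))          # compared against gamma when pruning

# New prediction for a row that lands in the right leaf:
log_odds = np.log(0.5 / (1 - 0.5)) + eta * output_value(residuals[right], p[right], lam)
new_p = 1 / (1 + np.exp(-log_odds))               # logistic function: log(odds) -> probability
print(gain, new_p)                                # 4.0, ~0.65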

Published: Jul 3, 2024

Comments: 405
@statquest
@statquest 4 года назад
Corrections: 14:24 I meant to say "larger" instead of "lower". 18:48 In the original XGBoost documents they use the epsilon symbol to refer to the learning rate, but in the actual implementation, this is controlled via the "eta" parameter. So, I guess to be consistent with the original documentation, I made the same mistake! :) Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/
@parijatkumar6866
@parijatkumar6866 3 года назад
Very nice videos. God bless you man!!
@rahul-qo3fi
@rahul-qo3fi Год назад
15:27 The similarity equations are missing residual**2. (Thanks for the detailed explanations. Love your content!)
@statquest
@statquest Год назад
@@rahul-qo3fi At 15:27 we are calculating the output values for the leaf, not similarity scores, and the equation in the video at this time point is the correct equation for calculating output values.
@rahul-qo3fi
@rahul-qo3fi Год назад
@@statquest aah got it, thanks:)
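For anyone else comparing the two quantities discussed in this thread, the classification formulas used in the video differ only in the numerator: Similarity score = (sum of residuals)^2 / (sum of previous_p * (1 - previous_p) + lambda), while Output value = (sum of residuals) / (sum of previous_p * (1 - previous_p) + lambda); the summed residuals are squared for the similarity score but not for the leaf output value.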
@pedromerrydelval7260
@pedromerrydelval7260 4 месяца назад
Hi Josh, I don't understand the mention of the parameter "min_child_weight" at 12:58. Is that a typo or am I missing something? Thanks!
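For context on the parameter name: in the XGBoost library itself, the minimum "cover" a leaf must have (discussed in this part of the video) is set with the min_child_weight parameter, so the two names refer to the same threshold. A minimal, illustrative use with the scikit-learn wrapper (the value 1 is just the library default, not a recommendation):

from xgboost import XGBClassifier

# min_child_weight is the library's name for the minimum cover required in each leaf.
model = XGBClassifier(min_child_weight=1)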
@TY-il7tf
@TY-il7tf 4 года назад
How do I pass any interviews without these videos? I don't know how much I owe you Josh!
@statquest
@statquest 4 года назад
Thanks and good luck with your interview. :)
@TheParijatgaur
@TheParijatgaur 4 года назад
did you clear ?
@guneygpac6505
@guneygpac6505 4 года назад
I got a few academic papers under review thanks to Josh. I watch his videos first before studying the other sources. Without his videos it would be Xhard to understand those sources. I put his name in the acknowledgements for helpful suggestions (he did actually reply to me several times here). I wish I could cite some of his papers but they are very unrelated to my area (economics). Unfortunately that's all I can do because the exchange rate would make any donations I can make look very stupid...
@arda8206
@arda8206 3 года назад
@@guneygpac6505 I think you are from Turkey :D
@jiangtaoshuai1188
@jiangtaoshuai1188 2 года назад
so you also yell BAM!! ?
@manuelagranda2932
@manuelagranda2932 4 года назад
With this video I finished the whole playlist. I am from Colombia and it is hard to pay to learn about these concepts, so I am very grateful for your videos. And now my mom hates me when I say Double Bam for nothing!! jajaja
@statquest
@statquest 4 года назад
That's awesome! I'm glad the videos are helpful. :)
Год назад
From Vietnam, and hats off to your talent in explaining complicated things in a way that I feel so comfortable to continue watching.
@statquest
@statquest Год назад
Thank you very much! :)
@wucaptian1155
@wucaptian1155 4 года назад
You are a nice guy, absolutely! I can't wait for part 3. Although I have already learned XGBoost from the original paper, I can still get more interesting things from your video. Thank you :D
@statquest
@statquest 4 года назад
Thank you! :)
@yukeshdatascientist7999
@yukeshdatascientist7999 3 года назад
I have come across all the videos from gradient boosting till now, you clearly explain each and every step. Thanks for sharing the information with all. It helps a lot of people.
@statquest
@statquest 3 года назад
Glad it was helpful!
@alihaghighat1244
@alihaghighat1244 11 месяцев назад
When we use fit(X_train, y_train) and predict(X_test) without watching Josh's videos or studying the underlying concepts, we learn nothing, even if we get good results. Thank you Josh for simplifying this hard material for us and creating these perfect numerical examples. Please keep up this great work.
@statquest
@statquest 11 месяцев назад
Thank you very much!
@prathamsinghal5261
@prathamsinghal5261 4 года назад
Josh! You made a machine learning a beautiful subject and finally I m in love with these Super BAM videos.
@statquest
@statquest 4 года назад
Hooray! :)
@shaelanderchauhan1963
@shaelanderchauhan1963 2 года назад
Josh, On a scale of 5 you are a level 5 Teacher. I have learned so much from your videos. I owe so much to Andrew Ng and You. I will contribute to Patreon Once I get a Job. Thank you
@statquest
@statquest 2 года назад
Wow, thanks!
@seanmcalevey4566
@seanmcalevey4566 4 года назад
Yo fr these are the best data science/ML explanatory vids on the web. Great work, Josh!
@statquest
@statquest 4 года назад
Thank you very much! :)
@wongkitlongmarcus9310
@wongkitlongmarcus9310 2 года назад
as a beginner of data science, I am super grateful for all of your tutorials. Helps a lot!
@statquest
@statquest 2 года назад
Glad you like them!
@amalsakr1381
@amalsakr1381 4 года назад
Million Thanks Josh. I can not wait to watch other videos about XGBoost, lightBoost, CatBoost and deep learning. Your videos are the best.
@statquest
@statquest 4 года назад
Part 3 on XGBoost should be out on Monday.
@joshisaiah2054
@joshisaiah2054 3 года назад
Thanks Josh. You're a life saver and have made my Data Science transition a BAM experience. Thank You!
@statquest
@statquest 3 года назад
Glad to help!
@lakhanfree317
@lakhanfree317 4 года назад
Finally yay waited for these video. For long but worth the wait. Thanks for everything.
@statquest
@statquest 4 года назад
Thank you! :)
@changning2743
@changning2743 3 года назад
I must have watched almost every video at least three times during this pandemic. Thank you so much for your effort!
@statquest
@statquest 3 года назад
Wow!!! Thank you very much! :)
@madhur089
@madhur089 3 года назад
Josh you are saviour...thanks a ton for making these fantastic videos...your video lectures are simple and crystal clear! Plus I love the sounds you make in between :)
@statquest
@statquest 3 года назад
Bam! :)
@chelseagrinist
@chelseagrinist 3 года назад
Thank you so much for making Machine Learning this easy for us . Grateful for your content . Love from India
@statquest
@statquest 3 года назад
Glad you enjoy it!
@saptarshisanyal4869
@saptarshisanyal4869 2 года назад
All the boosting and bagging algorithms are complicated algorithms. In universities, I have hardly seen any professor who can make these algorithms understand like Joshua does. Hats off man !!
@statquest
@statquest 2 года назад
Thank you!
@allen8376
@allen8376 3 месяца назад
The little calculation noises give me life
@statquest
@statquest 3 месяца назад
beep, boop, beep!
@lambdamax
@lambdamax 4 года назад
Thanks for boosting my confidence in understanding. There was this recent Kaggle tutorial that said LightGBM model "usually" does better performance than xgboost, but it didn't provide any context! I remember that xgboost was used as a gold standard-ish about 2-3 years ago(even CERN uses it if I'm not mistaken). Anyhoo, I hope I can keep up with all of this. I need to turn my boosters on.
@statquest
@statquest 4 года назад
I'm happy to boost your confidence! Part 3 will explain the math if you are interested in those details - they are not required - and Part 4 will describe a lot of optimizations that XGBoost uses to be efficient (making it easier to find good hyper-parameters).
@hassaang
@hassaang 3 года назад
Bravo! Thanks for making life easy. Thanks and appreciation from Qatar.
@statquest
@statquest 3 года назад
Hello Qatar!! Thank you very much!
@thebearguym
@thebearguym 9 месяцев назад
Enjoyed it! Cool explanation
@statquest
@statquest 9 месяцев назад
Thanks!
@gerrard1661
@gerrard1661 4 года назад
Thank you! Can’t wait for part 3.
@statquest
@statquest 4 года назад
Thanks! Part 3 should be out soon.
@gerrard1661
@gerrard1661 4 года назад
​@@statquest Thanks for your reply. I am a stats PhD student. These days the industry prefer machine learning and deep learning. However, I feel like stats people are not strong at programming compared to CS people. We know lots of theory, but when solve real problems CS seems better? You have any idea on this? Thanks!
@statquest
@statquest 4 года назад
@@gerrard1661 It really just boils down to the type of job you want to work on. There are tons of jobs in both statistics and cs and machine learning.
@furqonarozikin7157
@furqonarozikin7157 3 года назад
thanks buddy, its hard for me to know how xgboost works in classification, but this tutorial has explained well
@statquest
@statquest 3 года назад
Thanks!
@globamia12
@globamia12 4 года назад
Your videos are so funny and smart! Thank you
@statquest
@statquest 4 года назад
Thanks! :)
@yusufbalci4935
@yusufbalci4935 4 года назад
Very well explained!! Awesome..
@statquest
@statquest 4 года назад
Thank you! :)
@maruthiprasad8184
@maruthiprasad8184 8 месяцев назад
hats off all my doubts clarified here, superb cooooooooooooool Big BAAAAAAAAMMMMMMMMMM!
@statquest
@statquest 8 месяцев назад
Hooray! :)
@paligonshik6205
@paligonshik6205 4 года назад
Thanks a lot, keep doing an awesome job
@statquest
@statquest 4 года назад
Thank you very much! :)
@parinitagupta6973
@parinitagupta6973 4 года назад
All the videos are awesome and this is THE BAMMEST way to learn about ML and predictive modelling. Can we also have some videos about time series and the underlying concepts. That would be TRIPLE TRIPLE BAM!!!
@statquest
@statquest 4 года назад
Thank you very much! :)
@jamemamjame
@jamemamjame 2 года назад
Ty very much, will buy your song within tomorrow morning from Thailand :)
@statquest
@statquest 2 года назад
Wow! Thank you!
@superk9059
@superk9059 2 года назад
Awsome!!!👍👍👍very very very very good teacher!!!
@statquest
@statquest 2 года назад
Thank you! 😃
@munnangimadhuri3334
@munnangimadhuri3334 2 года назад
Excellent explanation Brother!
@statquest
@statquest 2 года назад
Thanks!
@jingo6221
@jingo6221 4 года назад
life saver, cannot thank more
@statquest
@statquest 4 года назад
Thanks! Part 3 should be out soon.
@anggipermanaharianja6122
@anggipermanaharianja6122 3 года назад
Awesome vid!
@statquest
@statquest 3 года назад
Thanks!
@yulinliu850
@yulinliu850 4 года назад
Awesome bang. Happy 2020
@statquest
@statquest 4 года назад
Thank you! :)
@abylayamanbayev8403
@abylayamanbayev8403 2 года назад
Thank you very much professor! I would love to see your explanations of statistical learning theory covering following topics: concentration inequalities, rademacher complexity and so on
@statquest
@statquest 2 года назад
I'll keep that in mind.
@ducanhlee3467
@ducanhlee3467 4 месяца назад
Thank Josh for your knowledge and funny BAM!!!
@statquest
@statquest 4 месяца назад
Thank you!
@lfalfa8460
@lfalfa8460 Год назад
Classification is not a vacation, it is not a sensation, but it's cooooool! 🤣
@statquest
@statquest Год назад
bam!
@teetanrobotics5363
@teetanrobotics5363 4 года назад
Best Professor on the planet. Could you please make a playlist for DL or RL ?
@statquest
@statquest 4 года назад
I'm working on them.
@zzygyx9119
@zzygyx9119 11 месяцев назад
Awesome explanation! I bought your book "The StatQuest Illustrated Guide to Machine Learning" even though I have already understood all the concepts.
@statquest
@statquest 11 месяцев назад
Thank you so much!!! I really appreciate your support.
@nurdauletkemel8155
@nurdauletkemel8155 2 года назад
Wow, I just discovered this channel and will use it to prep for my interview BAM! But the interview is in 2 hours Smal BAM :ccccccc
@statquest
@statquest 2 года назад
Good luck!
@muralikrishna9499
@muralikrishna9499 4 года назад
After a long time..... BAMMM!
@statquest
@statquest 4 года назад
Thanks! :)
@itisakash
@itisakash 4 года назад
Hey thanks for the videos. Can't wait for the remaining parts in the XGboost series. When are you gonna release the next part?
@statquest
@statquest 4 года назад
Since you are a member, you'll get early access to part 3 this coming monday (January 27). Part 4 will be available for early access 2 weeks later.
@santoshkumar-bz9mg
@santoshkumar-bz9mg 3 года назад
U r awesome Love from INDIA
@statquest
@statquest 3 года назад
Thank you!
@karangupta6402
@karangupta6402 3 года назад
Awesome :)
@statquest
@statquest 3 года назад
Thanks 😁
@andrewwilliam2209
@andrewwilliam2209 4 года назад
Hey Josh, you might not see this, but I really look up to you and your videos. I got sucked into machine learning last month, and you have made the journey easier thusfar. If I get an internship or something in the following months, I'll be sure to donate to you and hit you up on your social media to thank you :). Hopefully one day I will have enough knowledge to share it widely like you. Cheers
@statquest
@statquest 4 года назад
Thank you very much! Good luck with your studies! :)
@andrewwilliam2209
@andrewwilliam2209 4 года назад
@@statquest thanks Josh, will definitely update you in a year or two about the progress I've made😀
@statquest
@statquest 4 года назад
Bam!
@mehdi5753
@mehdi5753 4 года назад
Thanx for this simplification, can you do the same this for LGBM and CatBoost ?
@mrcharm767
@mrcharm767 Год назад
concepts going straight to my head as if u shot arrows bam!!!!!
@statquest
@statquest Год назад
Hooray! :)
@chrisalvino76
@chrisalvino76 Год назад
Thanks!
@statquest
@statquest Год назад
HOORAY!!! Thank you so much for supporting StatQuest!!! :)
@dc33333
@dc33333 2 года назад
The music is fantastic.
@statquest
@statquest 2 года назад
bam!
@rajdipsur3617
@rajdipsur3617 3 года назад
Infinite BAAAAAAAAAAAAAAAMMMMMMMMMM for these amazing videos bosss... :-)
@statquest
@statquest 3 года назад
Thanks! :)
@osmanparlak1756
@osmanparlak1756 3 года назад
Thanks a lot Josh for making ML algorithms understandable. I am learning a lot from your videos. Just one question on the order when splitting to create the trees. I think it doesn't matter whether you start from the last two or first two as we check all.
@statquest
@statquest 3 года назад
That is correct.
@henkhbit5748
@henkhbit5748 3 года назад
Love this series of xgboost. I read your answer about finding the best gamma value parameter using cross validation. According this video xgboost does not create new leaves when the gain < 0. When is extra pruning necessary? I suppose pruning can be done using lambda and additionally use gamma to prevent overfitting...?
@statquest
@statquest 3 года назад
Trees, in general, are notorious for over fitting the data. Random chance can easily result in a gain < 0 and adding an extra parameter for pruning will help prevent over fitting. For more details about the need for pruning trees in general, see: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-D0efHEJsfHo.html
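A tiny sketch of the pruning rule described in this reply, with made-up numbers (both the gain and gamma values here are hypothetical):

gain = 1.3     # gain of the lowest branch in the tree (hypothetical)
gamma = 2.0    # tree-complexity / pruning parameter (hypothetical)

# XGBoost-style pruning: a branch is removed when gain - gamma is negative.
if gain - gamma < 0:
    print("prune this branch")   # 1.3 - 2.0 = -0.7 < 0, so this branch would be pruned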
@himanshumangoli6708
@himanshumangoli6708 2 года назад
I wish you had been my teacher back in my college days. Then, instead of just watching your videos, I would be able to create them.
@statquest
@statquest 2 года назад
:)
@Kevin7896bn
@Kevin7896bn 4 года назад
Hit that like button before watching it.
@statquest
@statquest 4 года назад
Thank you!!!! :)
@devran4169
@devran4169 Год назад
The best
@statquest
@statquest Год назад
Thank you! :)
@rrrprogram8667
@rrrprogram8667 4 года назад
Hit and like first... Then later i am gonna watch video... MEGAAAA BAMMMM
@statquest
@statquest 4 года назад
Awesome!!! :)
@61_shivangbhardwaj46
@61_shivangbhardwaj46 3 года назад
Thnx sir😊
@statquest
@statquest 3 года назад
bam! :)
@bharathjc4700
@bharathjc4700 4 года назад
What statistical tests do we need to perform on the training data and how do we validate the data
@asabhinavrock
@asabhinavrock 4 года назад
Hey Josh. Your videos are really informative and easy to understand. I have joined your channel today and look forward to more exciting content coming up. I was also eager to see your third video in the XGBoost Series. When will that be live?
@statquest
@statquest 4 года назад
If you go to the community page, you may be able to find a link to part 3 since you are a channel member. Here's the link to the community page: ru-vid.comcommunity
@asabhinavrock
@asabhinavrock 4 года назад
@@statquest Finally. Made my day!!!
@statquest
@statquest 4 года назад
@@asabhinavrock Awesome!!! Thank you very much.
@manojbhardwaj27
@manojbhardwaj27 4 года назад
@Josh Starmer: I would like to know about the PRUNING concept in XGB. Are Gamma and Cover used for pre-pruning or post-pruning? In sklearn, we generally use pre-pruning, which makes more sense to me. However, from your tutorial it seems like we are doing post-pruning (after the full tree is built). Can you please specify, with a reason?
@statquest
@statquest 4 года назад
These videos on XGBoost describe how XGBoost was designed from the ground up. Thus, the reason for anything in these video is "that's the way they designed XGBoost."
@user-ng1hs4lx4u
@user-ng1hs4lx4u 3 года назад
Thank you for the marvelous video! I have some questions about what's explained:
1. Can the number of trees we make be controlled by what we call an 'epoch' in ML?
2. When the model runs through epochs, is there any chance some epochs go the other way from the answer value? I understood that by setting the learning rate too high, the new prediction will bypass the answer, causing the learning procedure to fluctuate a lot.
3. The ways we can slow down the learning speed, I think, are 1) larger cover, 2) larger gamma, 3) larger lambda. Is that right, or are there more ways to control the speed?
Thanks, as always, for all the effort you put into these materials!
@statquest
@statquest 3 года назад
1) I think you can use that terminology if you want, but I don't know of anyone else who does. In xgboost, the parameter you set for the number of trees is "num_boost_round", and generally speaking, building trees is called "boosting". 2) I don't know. 3) Although not mentioned in the original paper, XGBoost contains a few other ways to slow down learning (add regularization). For full details, see the manual: xgboost.readthedocs.io/en/latest/parameter.html
@user-ng1hs4lx4u
@user-ng1hs4lx4u 3 года назад
@@statquest Thanks for kind reply! :)
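For readers who want to see where that parameter goes, here is a minimal sketch with the native training API (the tiny random dataset and the parameter values are placeholders, not recommendations):

import numpy as np
import xgboost as xgb

# Made-up data, only to show where num_boost_round is passed.
X = np.random.rand(20, 3)
y = np.random.randint(0, 2, size=20)
dtrain = xgb.DMatrix(X, label=y)

params = {"objective": "binary:logistic", "eta": 0.3, "max_depth": 6, "lambda": 1.0}
booster = xgb.train(params, dtrain, num_boost_round=100)   # num_boost_round = number of trees to build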
@vijaykrish64
@vijaykrish64 4 года назад
Must watch videos.Just a small question,why do we need both cover and gamma for pruning?
@statquest
@statquest 4 года назад
Although gamma is thoroughly discussed in the original manuscript, cover is never mentioned. So my best guess is that while both cover and gamma do similar things, there are still differences in how they do them and the types of leaves they prune. For example, you could have a leaf with a lot of residuals in it (and thus, a relatively high "cover", so cover would not prune), but if they are not very similar, you will have a low similarity score and a low gain (so gamma would prune).
@shalinirajanna4281
@shalinirajanna4281 4 года назад
Thank you such good videos. I see that XGBoost has boot alpha and lambda parameters. you've explained about lambda, where would alpha fit in ?
@statquest
@statquest 4 года назад
Alpha was added after the original publication, so I didn't cover it. Presumably alpha is just like lambda and makes the trees shorter and shrinking the output values. And presumably it can shrink output values all the way to 0, just like lasso regression (and presumably lambda can not, just like ridge regression).
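For anyone looking for the two knobs in code, this is where they live in the scikit-learn wrapper; the values shown are the library defaults and purely illustrative:

from xgboost import XGBClassifier

# reg_lambda is the L2 (lambda) penalty and reg_alpha the L1 (alpha) penalty on the leaf output values.
model = XGBClassifier(reg_lambda=1.0, reg_alpha=0.0)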
@n0pe1101
@n0pe1101 4 года назад
Can you do a video about Gaussian Process Regression/Classification?
@jamiescotcher1587
@jamiescotcher1587 3 года назад
Hi Josh, Specifically, the gradient of the training loss is used to predict the target variables for each successive tree, right? Therefore, does a steeper gradient imply it is going to try harder to correctly predict a specific sample that has been mis-classified, or does it mean it will work harder to predict any member of a certain true class? Thanks!
@statquest
@statquest 3 года назад
For details on how XGBoost treats misclassified samples and how, exactly, it tries harder to correctly classify them, see ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-oRrKeUCEbq8.html
@yjj.7673
@yjj.7673 4 года назад
That's great. BTW is there a video that only contains songs? ;)
@statquest
@statquest 4 года назад
Not yet! :)
@dhruvbishnoi8840
@dhruvbishnoi8840 4 года назад
Hi Josh, What happens if after splitting the node, one leaf has cover lower than the set threshold and the other leaf has cover greater than the set threshold. Splitting would not be performed, right?
@statquest
@statquest 4 года назад
That is correct.
@EvanZamir
@EvanZamir 4 года назад
You really should write a book.
@statquest
@statquest 4 года назад
Thanks! :)
@Brandy131991
@Brandy131991 2 года назад
Hi Josh, thank you for your amazing videos. They are really helping me a lot. One thing I still don't get is how xgboost predicts multiple classes (e.g. "most likely drug to use" with drugs 1, 2 and 3). Does this work like multinomial logistic regression, where each class is checked against a baseline class? Or is it something like a random forest when using xgboost?
@statquest
@statquest 2 года назад
When there are multiple classes, XGBoost uses the softmax objective function. I explain softmax in my series on Neural Networks: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-CqOfi41LfDw.html
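A minimal sketch of that multi-class setup with the native API (the random data, the three classes, and the parameter values are made up):

import numpy as np
import xgboost as xgb

X = np.random.rand(30, 4)
y = np.random.randint(0, 3, size=30)        # three classes, e.g. drugs 1, 2 and 3
dtrain = xgb.DMatrix(X, label=y)

# "multi:softprob" uses the softmax objective and returns one probability per class.
params = {"objective": "multi:softprob", "num_class": 3, "eta": 0.3}
booster = xgb.train(params, dtrain, num_boost_round=50)
probs = booster.predict(dtrain)             # shape (30, 3): one probability per class for each row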
@ahmedabuali6768
@ahmedabuali6768 3 года назад
please could you do more video, i am in love with your lectures, I want a video in how we use negative binomial in estimating the sample size
@statquest
@statquest 3 года назад
I'll keep that in mind.
@ahmedabuali6768
@ahmedabuali6768 3 года назад
@@statquest do you have lecture notes for these videos? I start downloading the video and prepare my slides. may if you have lecture notes for each video will help me documenting these as a book for me.
@statquest
@statquest 3 года назад
@@ahmedabuali6768 I have PDF study guides for some of my videos here: statquest.org/studyguides/ and I am writing a book that I hope will come out next year.
@ahmedabuali6768
@ahmedabuali6768 3 года назад
@@statquest That is good. Can I pay for all of them at once? It will take some time for me. Also, I see you forgot to talk about multinets, done by Nir Friedman; that is very important.
@statquest
@statquest 3 года назад
@@ahmedabuali6768 I'm not familiar with Multinet, so I can't say if it is important or not. And you are more than welcome to buy all of the study guides! That would be awesome. Thanks for your support.
@kamaldeep8257
@kamaldeep8257 3 года назад
Hi Josh, thank you for such a great explanation. Just want to clarify one thing: does this cover concept apply specifically to XGBoost trees, or is it a normal method for all tree-based algorithms? Every tree-based algorithm has this min_child_weight parameter in the sklearn library.
@statquest
@statquest 3 года назад
Every tree-based method has a way of filtering out leaves that do not have enough samples going to them; however, the way XGBoost does it is unique.
@omreekapon2465
@omreekapon2465 Год назад
Great explanation like always! just a small question, at 10:12 you mentioned that the cover is defined as the similarity score minus lambda, but it looks in the equation that is plus, so what is the right answer? thanks for such an amazing explanations!
@statquest
@statquest Год назад
The denominator = [Sum(previous * (1 - previous))] + lambda. Cover = Sum(previous * (1 - previous)). Thus, cover = denominator - lambda = [Sum(previous * (1 - previous))] + lambda - lambda = Sum(previous * (1 - previous)).
@hubert1990s
@hubert1990s 4 года назад
Since cover can make a leaf insufficient to stay in the tree, is that also a kind of pruning?
@statquest
@statquest 4 года назад
That is correct. Cover is a way to enforce pruning and not over fitting the training data.
@pierrebedu
@pierrebedu Год назад
great explanations! and how does this generalize to multiclass classification? Thanks (one vs all classif repeated n_classes times? )
@statquest
@statquest Год назад
That's one way to do it. I believe that you can also swap out the loss function and use cross entropy.
@sudhanshuchoudhary3041
@sudhanshuchoudhary3041 3 года назад
BEST BEST BEST!!!!!!!!
@statquest
@statquest 3 года назад
Thanks!
@raj345to
@raj345to 2 года назад
Which video-making tool do you use? It's so cool.
@statquest
@statquest 2 года назад
I answer these questions in this video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-crLXJG-EAhk.html
@karangupta6402
@karangupta6402 3 года назад
Hi Josh: Can it be possible to make some video on the scale_pos_weight feature of XGBoost and how it can help in solving imbalanced datasets problems?
@statquest
@statquest 3 года назад
I'll keep that in mind.
@FF4546
@FF4546 2 года назад
Hello Josh, thank you for your video. How would this work with more than one variable? Does each variable end up with only one threshold? Thank you!
@statquest
@statquest 2 года назад
You test every variable to find the optimal thresholds and use the one that does the best. However, XGBoost has some optimizations explained here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-oRrKeUCEbq8.html
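A rough sketch of the search described in this reply: every candidate threshold of every variable is scored, and the split with the largest gain wins. The similarity formula is the classification one from this video, and the little dataset is invented:

import numpy as np

def similarity(residuals, prev_p, lam=0.0):
    # Classification similarity score: (sum of residuals)^2 / (sum of p*(1-p) + lambda)
    return np.sum(residuals) ** 2 / (np.sum(prev_p * (1 - prev_p)) + lam)

def best_split(X, residuals, prev_p):
    best = (None, None, -np.inf)                  # (feature index, threshold, gain)
    root = similarity(residuals, prev_p)
    for j in range(X.shape[1]):                   # try every variable...
        values = np.unique(X[:, j])
        for t in (values[:-1] + values[1:]) / 2:  # ...and every threshold between adjacent values
            left = X[:, j] < t
            gain = (similarity(residuals[left], prev_p[left])
                    + similarity(residuals[~left], prev_p[~left])
                    - root)
            if gain > best[2]:
                best = (j, t, gain)
    return best

# Hypothetical data: 4 rows, 2 features, previous predicted probability 0.5 everywhere.
X = np.array([[3.0, 10.0], [7.0, 20.0], [12.0, 15.0], [18.0, 5.0]])
residuals = np.array([-0.5, -0.5, 0.5, 0.5])
print(best_split(X, residuals, np.full(4, 0.5)))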
@dikshantgupta5539
@dikshantgupta5539 3 года назад
For pruning the tree, is gain - gamma the same as the cover value? You remove the leaf when you calculate the cover value and also when you calculate gain - gamma.
@statquest
@statquest 3 года назад
For details on cover (and everything else in XGBoost), see: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ZVFeW798-2I.html
@Patrick881199
@Patrick881199 3 года назад
Hi, Josh, when building the trees, does xgboost like the random forest which bootstrap the dataset and choose random subset of features for each tree?
@statquest
@statquest 3 года назад
You can do that with XGBoost, but it's not as fundamental to the algorithm as it is to Random Forests.
@Patrick881199
@Patrick881199 3 года назад
@@statquest Thanks, Josh
@user-kf7vg3bq8z
@user-kf7vg3bq8z 2 месяца назад
These videos are being truly helpful. Many thanks for sharing them! I do have a question RE XGBoost usage context. You mentioned that XGB is designed for large, complicated datasets; does this mean that it performs poorly with smaller datasets? Thanks in advance
@statquest
@statquest 2 месяца назад
I'm not sure - I just know that it has tons of optimizations for large datasets. To learn more about them, see: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-oRrKeUCEbq8.html
@yuchenzhao6411
@yuchenzhao6411 4 года назад
8:04 If two thresholds have the same 'Gain', why would we pick "Dosage < 15" rather than "Dosage < 5"? Does it matter for a larger dataset? 13:23 Since in part 1 we set gamma=130 and in part 2 we set gamma=3, I'm wondering how we choose the value for gamma?
@statquest
@statquest 4 года назад
1) If 2 or more thresholds have the same "best Gain", then just pick one, it doesn't matter. Since this is a greedy algorithm it does not look ahead to see if one of those choices is better in the long run. 2) When we use XGBoost for regression, the residuals can be relatively large, so gamma may need to be relatively large. When we use XGBoost for classification, the residuals are relatively small, so gamma may need to be relatively small. You can always just build a few trees to get a sense of what values for gamma make sense for pruning.
@yuchenzhao6411
@yuchenzhao6411 4 года назад
@@statquest thank you very much Josh! Really enjoy your video!
@lucaslai6782
@lucaslai6782 4 года назад
Hello Josh, Could you tell me how I can decide the value of Tree Complexity Parameter (Y Gamma)?
@statquest
@statquest 4 года назад
Cross validation.
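One common way to run that cross-validation, sketched with the scikit-learn wrapper (the candidate gamma values and the random data are arbitrary):

import numpy as np
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

X = np.random.rand(100, 5)
y = np.random.randint(0, 2, size=100)

# Try a few gamma values and keep the one with the best cross-validated score.
search = GridSearchCV(XGBClassifier(), {"gamma": [0, 0.5, 1, 3, 5]}, cv=5)
search.fit(X, y)
print(search.best_params_)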
@gabrielpadilha8638
@gabrielpadilha8638 2 года назад
Josh, good morning, let me ask you a question. You said that we can set the initial probability to a value different from 0.5 if, for example, the training dataset is unbalanced. Does that mean that xgboost can deal with unbalanced datasets without needing to balance the training dataset before submitting it to the model?
@statquest
@statquest 2 года назад
I'm not really sure. It probably depends on how imbalanced the data are.
@tudormanoleasa9439
@tudormanoleasa9439 3 года назад
What do you do if the cover of a left leaf is less than 1, but the cover of a right leaf is greater than 1? Do you only remove the left leaf or the entire subtree made of root, left leaf, right leaf?
@statquest
@statquest 3 года назад
If the cover value for one of the leaves is too small, we remove both leaves.
@muralik98
@muralik98 7 месяцев назад
Rule No 1 before watching statquest video. Like and then click on play button
@statquest
@statquest 7 месяцев назад
bam! :)
@khaikit1232
@khaikit1232 Год назад
Hi Josh, At 19:20, it is written that: log(odds) Prediction = 0 + (0.3 x -2) = -0.6 However I was just wondering since the tree is predicting the residuals, isn't the output of the XGBoost tree a probability? So shouldn't we convert the output from probabilities to log(odds) before we add it to the initial guess of 0?
@statquest
@statquest Год назад
The tree predicts residuals, but the output values from the leaves are not residuals; instead, they are calculated as shown at 14:58. Now, to be honest, I have no idea why that particular formula results in a log(odds), but it must, because that is what both XGBoost and Gradient Boost do, and neither of them does anything else before calculating the final log(odds).
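To finish the arithmetic from the numbers quoted in this thread (0 is the initial log(odds), 0.3 the learning rate, and -2 the leaf's output value):

import math

log_odds = 0 + 0.3 * (-2)               # initial log(odds) + eta * leaf output value = -0.6
prob = 1 / (1 + math.exp(-log_odds))    # logistic function converts the log(odds) back to a probability
print(log_odds, round(prob, 3))         # -0.6 0.354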
@priyabratbishwal5149
@priyabratbishwal5149 3 года назад
Hi Josh, how do you make a tree with multiple predictors using XGBoost? Here you showed only a single variable, called Dosage. How do you do it for multiple variables? Thanks
@statquest
@statquest 3 года назад
For each variable in your dataset, you go through the process shown here. You then select the variable that results in the best similarity score.
@davidlo2247
@davidlo2247 3 года назад
At around 11:00, could you explain further why the cover, meaning the minimum number of residuals in each leaf, is 0.25, and why it cannot allow a leaf with 1 residual? Isn't 1 > 0.25?
@statquest
@statquest 3 года назад
I answer this question in the StatQuest that explains the math behind XGBoost: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ZVFeW798-2I.html
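In short, for classification the cover of a leaf is not a count of residuals but the sum of previous_probability * (1 - previous_probability) over the residuals in it (the same quantity discussed in the cover thread elsewhere in these comments). With the initial predicted probability of 0.5, a single residual contributes 0.5 * (1 - 0.5) = 0.25, which is below the default minimum cover of 1; that is why a leaf containing only one residual is not allowed with the default settings.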
@somalkant6452
@somalkant6452 4 года назад
Hi josh, Can you help and tell me whether similarity score and entropy are different things or same?
@statquest
@statquest 4 года назад
They are different.
@LL-hj8yh
@LL-hj8yh 9 месяцев назад
Hey Josh, how does the similarity score here relate to the gini/entropy we use for XGBoost's classification?
@statquest
@statquest 9 месяцев назад
I'm not sure I understand your question. Are you wanting to compare the similarity score for XGBoost to how classification is done (with GINI or entropy) for a normal decision tree? If so, they are not related. This similarity score is derived from loss function, whereas GINI and entropy are just used because they work. For details on the XGBoost similarity score, see: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ZVFeW798-2I.htmlsi=iv2nJpFE41ijE3zo
@LL-hj8yh
@LL-hj8yh 9 месяцев назад
@@statquest thanks Josh! I was earlier under the impression that we need to specify gini or entropy in a xgboost classifier, which seems incorrect as they are only for decision tree, not XGBoost’ classifier. Yet is it true that the similarity score and gini/entropy serve the same purpose, that is to calculate the similarity/purity therefore determine the split? Thanks again and congrats on 1M subscribers, that says a lot!
@statquest
@statquest 9 месяцев назад
@@LL-hj8yh Yes, the similarity score and GINI serve the same purpose, but we can't use them (Gini or entropy) here since we are fitting the tree to continuous values (even for classification). Thanks!
@hubert1990s
@hubert1990s 4 года назад
Can we apply gini instead of gain in XGBoost?
@statquest
@statquest 4 года назад
This is an interesting question. In part 3 (which will be out in a few weeks), you'll see how the similarity scores and regularization are all derived from a single formula and I'm not sure how it would work if we swapped in GINI. So check back in in a few weeks and watch the next video in the series the reason GINI is not used may make more sense.
@siddhantk007
@siddhantk007 4 года назад
You have used example where x (variable/feature) is continuous. How are the unique regression trees made when x is discrete or ranked ? Like the candidate selection using gain and similarity scores ?
@statquest
@statquest 4 года назад
When the feature is discrete or ranked, we use the exact same method described in this video. This is because we are fitting the tree to the residuals, which will still be continuous, regardless of whether the feature is discrete or continuous.
@siddhantk007
@siddhantk007 4 года назад
@@statquest thanks for the quick response ! your videos are simply amazing...
@user-fj7cb5dx7u
@user-fj7cb5dx7u 4 года назад
your lecture is triple bamm! do you have any plan to teach deep learning?
@statquest
@statquest 4 года назад
As soon as I finish with XGBoost.
@amirsayyed2158
@amirsayyed2158 4 года назад
Where can I get your Xgboost slides???
@statquest
@statquest 4 года назад
I'll try to make a study guide soon.
@dylangaldes7044
@dylangaldes7044 3 года назад
I've been researching how to use XGBoost for image classification; unfortunately, I did not find a lot of research papers on this. Is it a good algorithm for this job? The classification has multiple different classes that are either various types of diseases on plant leaves or a healthy leaf. Thank you
@statquest
@statquest 3 года назад
I've never done that myself, but I've heard of people who have and been successful.
@ankurbhattacharjee3912
@ankurbhattacharjee3912 3 года назад
I have a question: for the initial predicted output we have taken 0.5, but this is a classification problem, so why did we choose 0.5 as the default value? I mean, why couldn't the initial predicted value have been any other value, say 1 or 0? Probably my question seems stupid, apologies in advance.
@statquest
@statquest 3 года назад
You can set the initial predicted value to be whatever you want, but, by default, it is 0.5. To be honest, this seems fairly reasonable for classification (since the goal is to have probabilities between 0 and 1 and 0.5 is halfway between them). However, it seems totally crazy for regression, but that's the way it is and the guy that made XGBoost is totally fine with it.
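In the library, that default initial prediction is exposed as the base_score parameter; a minimal, illustrative override (the 0.1 here is an arbitrary example, e.g. for a very unbalanced dataset):

from xgboost import XGBClassifier

# base_score is the initial prediction every sample starts from; the default is 0.5.
model = XGBClassifier(base_score=0.1)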
@KUNALVERMAResScholarDeptofMath
@KUNALVERMAResScholarDeptofMath 2 года назад
Hi Josh, Why are we taking the last two values at 6:04?
@statquest
@statquest 2 года назад
I'm not sure I understand your question. At 6:04, we put 3 residuals in the leaf on the left and 1 residual in the leaf on the right.
@aneesarom
@aneesarom Год назад
5:47 If we consider the root node as Dosage < 15, then the similarity will not be 0, right? Since it has 3 elements less than 15.
@statquest
@statquest Год назад
No, the similarity of the root node stays the same, regardless of the threshold we use, because it still contains all of the residuals. However, the similarities in the leaf nodes will change.