Thanks for noticing! I did want to use random forest for classification; I just used "lm" in the name of the object where I saved the model. Thanks for watching!
What I love most about this YouTube channel is that the quality of the free tutorials is much better than many paid ones. I must admit you have a talent for turning such a complex topic into something very easy. A true professor you are. I really don't know how to thank you. I would be so grateful if you created a tutorial on machine learning using the tidymodels package. From the bottom of my heart, thank you. ❤❤❤❤
Wow, thanks! The most positive feedback I've ever gotten! 🙏🙏🙏 I already tried to get my hands on tidymodels, but it is still a bit chaotic for me. I do plan to cover it in the future, though. So, stay tuned!
Kudos to our Boss, thanks a million, sir, one of the data science gurus of the universe! Sir, could you make a tutorial on handling class imbalance when dealing with binary classification?
Good suggestion. I just did random forest with massive class imbalance (lots of data, many zeros and few ones) and solved it this way: library(randomForest); set.seed(1); fit <- randomForest(response ~ predictor1 + predictor2 + ..., data = data, importance = TRUE, mtry = 5, ntree = 10000, sampsize = c(1000, 1000)). So, "sampsize" is the argument that helps: it draws a fixed number of cases per class for every tree, so each tree sees a balanced sample. I then got results similar to the logistic regression in terms of the importance of variables and their interactions.
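A runnable sketch of the idea above, with simulated data and invented column names (the original comment elides the real predictors, so everything here is illustrative); sampsize is capped at the minority-class count so the balanced per-tree draw is always feasible:

```r
# Hedged sketch: balanced down-sampling per tree via 'sampsize'.
# Data and column names are simulated, not from the original comment.
library(randomForest)

set.seed(1)
n <- 5000
dat <- data.frame(predictor1 = rnorm(n), predictor2 = rnorm(n))
# roughly 5% ones: heavy class imbalance
dat$response <- factor(rbinom(n, 1, plogis(-3.5 + 1.5 * dat$predictor1)))

# draw the same number of cases from each class for every tree,
# never more than the minority class actually has
nmin <- min(table(dat$response))
fit <- randomForest(
  response ~ predictor1 + predictor2,
  data       = dat,
  importance = TRUE,
  ntree      = 500,
  sampsize   = c(nmin, nmin)
)
print(fit)        # OOB confusion matrix
importance(fit)   # permutation and Gini importance
```

Without sampsize, the bootstrap samples would be dominated by zeros and the forest would rarely learn to predict ones; balancing each tree's sample is a simple in-model alternative to oversampling the data itself.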
Very well explained 👍 Thanks for sharing 🙏 Just to be clear, for cross-validation, which model is going to run on the test data created at the very beginning?
@@yuzaR-Data-Science thanks for the feedback 👍 I was asking you to specify, if you may: since there will be N models for the N folds, there will be a statistic for each parameter of each model, won't there? If so, which one is used?
Hey, I actually did a video on bootstrapping before this one, so just browse my YouTube channel and you'll find it. And the link is working perfectly, I just checked it. Thank you for watching!
As shown at the beginning of the video with the train-test split: CV and bootstrapping are used to find the best model using ONLY the training set. You then apply this final model to the test set to predict the response variable, which gives you the final R2 or RMSE. Hope that helps!
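The workflow described above can be sketched with rsample (the package from the video) and a plain linear model; mtcars and the formula stand in for real data and are purely illustrative:

```r
# Hedged sketch: CV for model selection on the training set only,
# then one final score on the held-out test set.
library(rsample)

set.seed(1)
split <- initial_split(mtcars, prop = 0.75)
train <- training(split)
test  <- testing(split)

# 5-fold CV on the TRAINING set, e.g. to compare candidate models
folds <- vfold_cv(train, v = 5)
cv_rmse <- sapply(folds$splits, function(s) {
  fit  <- lm(mpg ~ wt + hp, data = analysis(s))
  pred <- predict(fit, assessment(s))
  sqrt(mean((assessment(s)$mpg - pred)^2))
})
mean(cv_rmse)  # resampling estimate, used only to pick the model

# the chosen model is refit on ALL training data and scored ONCE on the test set
final <- lm(mpg ~ wt + hp, data = train)
sqrt(mean((test$mpg - predict(final, test))^2))
```

So no per-fold model ever touches the test set; the N fold statistics are averaged for model selection, and a single refit model produces the final test metric.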
@@yuzaR-Data-Science Exactly, that definitely helps. I wish there were a method to combine the values obtained from CV, a kind of ensembling. Thanks for the reply and the videos you share!
Could I use this approach to remove some BIAS arising from different sampling efforts? Example: I have a dataset of monthly video records of animal interactions with vertebrate latrines. However, some spots have recordings for different numbers of months, because some latrines weren't there yet when I installed my equipment, meaning different latrines were sampled with different effort. I am looking for a way to correct for this, and since stats are new to me, I was wondering whether the approach you use could work in my situation. Please don't stop making these, we need this kind of informative and didactic video 🙏🙏🙏. Thank you again for this one!
Thanks for the cool feedback! With the months, it depends. If it sounds right that you want to capture the variance (differences) from month to month, then the "strata =" argument is the way to go. That's for resampling. It is also useful to check out mixed-effects models, where Month would be your random effect; that accounts for month too. But if you don't care about the month and just want to average out the monthly effect, then bootstrapping 1000-2000 times would do the trick. Cheers!
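A small sketch of the stratified-resampling option with rsample; the data frame and column names are invented for illustration, not taken from the commenter's study:

```r
# Hedged sketch: bootstrap resampling stratified by Month, so every
# resample keeps roughly the same monthly mix. Simulated data.
library(rsample)

set.seed(1)
dat <- data.frame(
  Month        = rep(month.abb[1:6], each = 30),
  interactions = rpois(180, lambda = 3)
)

# strata = Month keeps the month proportions similar in each resample
booted <- bootstraps(dat, times = 1000, strata = Month)

# check the monthly mix inside the first bootstrap sample
prop.table(table(analysis(booted$splits[[1]])$Month))
```

For the unequal-effort problem itself, a mixed-effects model with latrine and/or Month as random effects (e.g. via lme4) is the more direct fix; stratified resampling only ensures the imbalance is reproduced consistently across resamples.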
Even though I know very little about modeling, it is clear that this package kicks ass! The animations in this video illustrating the concepts are superb!
Regarding this, yuzaR, I was wondering whether something exists along the lines of keeping the percentage of classes in our train-test split, but for numerical values, in order to draw two samples with the most similar means, SDs and all of that. Thank you :D
Oh, it's a great question! I honestly never tried or needed it, and I would be interested myself. I've never seen this in the tutorials; I could ask the creators of the package, but at the moment I don't know a case where I would need it. The thing is, if you stratify on the numeric predictor and use a few breaks (strata = horsepower, breaks = 10), you'll make the testing and training sets (their distributions, actually) very similar, and then the means and SDs will be very similar too. Hope that helps. Thank you for watching!
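The suggestion above can be tried directly; here is a minimal sketch using mtcars and its hp column as a stand-in for "horsepower" (rsample bins a numeric strata variable into quantiles, and may pool bins that end up too small on a dataset this tiny):

```r
# Hedged sketch: stratifying a train/test split on a NUMERIC variable,
# so the two sets get similar distributions (hence similar means/SDs).
library(rsample)

set.seed(1)
split <- initial_split(mtcars, prop = 0.75, strata = hp, breaks = 4)

# compare the two distributions
c(train = mean(training(split)$hp), test = mean(testing(split)$hp))
c(train = sd(training(split)$hp),   test = sd(testing(split)$hp))
```

More breaks give a finer match but need enough rows per bin; on small data, fewer breaks (or none) are safer.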
@@yuzaR-Data-Science Yep, I just edited it to add the thank-you, because I just watched that part! You, sir, are really fast at answering comments!! Thank you again and have a good day.
Good to know about Monte Carlo CV in that package. Where does Monte Carlo CV fall on the variance|bias continuum vis-a-vis bootstrapping and k-fold cross-validation? I tend to use 'caret' instead, where I dial in these model performance tests via trainControl. We need a vid on subspace clustering. 😉
Thanks for the many suggestions! I'll do my best to cover ML in tidymodels ASAP, including subspace clustering ;). For now, I'm not sure about your Monte Carlo question; I think it will depend on the data. tidymodels was created by Max Kuhn, the same guy who created caret, so it should contain everything caret has.
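For reference, Monte Carlo CV in rsample looks like this (mtcars is a stand-in dataset; the commented caret lines show what I believe is the rough equivalent, "leave-group-out" CV in trainControl):

```r
# Hedged sketch: Monte Carlo cross-validation with rsample.
# Unlike v-fold CV, the random assessment sets may overlap across repeats.
library(rsample)

set.seed(1)
mc <- mc_cv(mtcars, prop = 0.8, times = 25)  # 25 random 80/20 splits
mc

# roughly the same scheme in caret (not run here):
# library(caret)
# ctrl <- trainControl(method = "LGOCV", p = 0.8, number = 25)
```

With many repeats, Monte Carlo CV tends to give low-variance performance estimates, at the cost of some pessimistic bias from training on only a fraction of the data each time.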
@@yuzaR-Data-Science Another topic suggestion: Tweedie GLMs, for working with hard right-skewed distributions with and without (hard & soft) zeros. Zero-inflated models.
Thanks, mate! Zero-inflated models are definitely on the list, because I use them often. I'm not sure I'll do Tweedie anytime soon though, because nobody in my field (medicine) uses them.
@@yuzaR-Data-Science Yeah, no need to look at the whole Tweedie family, but if you face zeros frequently and use zero-inflated models, the Tweedie family with power parameter 1 < p < 2, which handles zero and positive continuous data, may be worth a look.
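For anyone curious, a minimal sketch of such a Tweedie GLM using statmod's tweedie() family; the data are simulated (a crude mixture of exact zeros and a skewed positive part) and the power value 1.5 is just a plausible choice, not an estimate:

```r
# Hedged sketch: Tweedie GLM (1 < power < 2) for data with exact zeros
# plus right-skewed positive values. var.power = 1.5 is assumed, not fitted;
# link.power = 0 gives a log link. Simulated data.
library(statmod)

set.seed(1)
n <- 500
x <- rnorm(n)
# crude zero-inflated positive outcome for illustration only
y <- ifelse(runif(n) < 0.4, 0, rgamma(n, shape = 2, rate = exp(-0.5 * x)))

fit <- glm(y ~ x, family = tweedie(var.power = 1.5, link.power = 0))
summary(fit)
```

In practice the power parameter can be estimated rather than assumed, e.g. with tweedie::tweedie.profile() or mgcv's tw() family.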