Welcome to my VLOG! My name is Yury Zablotski & I love to use R for Data Science = "yuzaR Data Science" ;)
This channel is dedicated to data analytics, data science, statistics, ML and AI! Join me as I dive into the world of data analysis and coding. Whether you're interested in business analytics, data mining, data visualization, or pursuing an online degree in data science, I've got you covered. If you are curious about Google Data Studio, data centers, or certified data analyst and data scientist programs, you'll find the necessary knowledge right here, and you'll greatly improve your odds of earning an online master's degree in data science or data analytics. Boost your knowledge and skills with my engaging content. Subscribe to stay up to date with the latest and most useful data science programming tools. Let's embark on this data-driven journey together!
If you wish to support me, please join the channel 🙏 ru-vid.com/show-UCcGXGFClRdnrwXsPHi7P4ZAjoin
Excellent video and easy to learn. Thank you so much. I have one query: after plotting this graph, when I save it with the proper dimensions, the text size appears very small. I have tried many times with a manually fixed font size, but without success. Could you provide any idea how to fix it? Note: this problem only appears with ggstatsplot function graphs like pairwise comparison, violin plot, etc. Thank you in advance
Hey sir, just for information, it seems like the package is under maintenance or has regressed, because the feature no longer works. I tried several datasets and variables, even copied your example character by character, but it always shows the same error: `stat_xsidebin()` with `bins = 30`. Choose a better value with `binwidth`. `stat_ysidebin()` with `bins = 30`. Choose a better value with `binwidth`. Error in `plot_theme()`: ! The `ggside.axis.minor.ticks.length` theme element must be a <unit> object. I've tried to troubleshoot it but no success yet, and I know it's out of your control, but I just wanted to give you a heads up. PS: I noticed one drawback to this feature: it only has 4 types of correlations, and you cannot use e.g. Kendall's, Gaussian or Shepherd's correlation, which is not bad in itself, but it would be great to be able to test these other types of correlations as well. PS2: I found a sort of alternative in the easystats correlation package (easystats.github.io/correlation/), which offers a large number of methods and a very similar plot (like plot(cor_test(iris, "Sepal.Width", "Sepal.Length"))), but it only shows the frequentist calculation at once (as far as I know). Would you consider doing a review of this package or even the other easystats packages (you have already done some 😉)? As always, thank you for your labor and fast responses.
hey man, thanks for the update. Interestingly, ggstatsplot works perfectly on my computer. So, it might be some dependency which is not updated. Try to update all the packages you have (especially ggside) and your R version. Sure, I also wanted to suggest the "correlation" package as I was reading your message. I love the whole easystats environment, and I was thinking about doing further package reviews, but decided to wait and do modelling first, which is what I am working on right now. I might do those packages eventually in the future :) cheers
@@yuzaR-Data-Science it solved my problem: ggside was not up to date. Rookie mistake hahahaha. Thank you so much. I am looking forward to seeing the modeling reviews. Tidymodels is a marvelous but overwhelming world. Thanks sir
Hey there! I'm wondering where you get the information about the conventional thresholds for interpretation (like for p-values, Bayes, etc). There are so many different versions from different authors out there, which one should we trust? I'm really struggling to make up my mind! I already know about the effectsize package, but should we trust their frames of reference? Thanks in advance sir.
Oh man, I totally get it! It also drove me crazy when there were two different interpretations of the same effect size. Where is the truth? Not even statisticians know; they can only defend their opinion. Thus, I also decided to take the one which makes the most sense to me, with a reference to it. The reference is important, because then you have a source you trust, and others can reproduce and build on your knowledge. When you ask RStudio in this way, `?interpret_eta_squared()`, you'll get all the references you need. Hope that helps! Cheers
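For instance, here is a minimal sketch with the effectsize package (the eta-squared value 0.12 and the rule set name are just illustrative; the help page shows what is actually available in your version):

```r
library(effectsize)

# Interpret an eta-squared value using a named, citable rule set
interpret_eta_squared(0.12, rules = "field2013")

# The help page lists the available rule sets with their literature references
?interpret_eta_squared
```

That way, whichever thresholds you pick, you can cite the exact source they come from.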
Hi Dominus, unfortunately it was blocked for too much traffic. I'll try to reopen it ASAP with a free alternative, but in the meantime please just rewatch the videos, because my blog is literally the script for them, so you won't miss anything. I know that it might be cumbersome to write down the code from the videos, so I am sorry for that! But if you want to get the whole written code now, consider joining my channel to become a member, because I post the code to members upon request. And members can see the other code posts I've already posted on other videos. Cheers
Hello Sir. Is it possible to display only one category of the dichotomous variable provided with the "by" parameter? For example, I only want to display the percentage of people who said Yes (sport-Oui).

```r
library(gtsummary)
library(questionr)

data("hdv2003")

hdv2003 %>%
  tbl_summary(
    include = c("sexe", "relig"),
    by = "sport",
    percent = "row",
    statistic = all_categorical() ~ "{p}%"
  )
```

The objective is that I would subsequently like to combine several tables, where in the column I will have the percentages of several variables.
Yes, it's possible. Please check out the arguments of the function yourself, as I am away from my computer for a week. And you can combine several tables easily via tbl_merge.
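A minimal sketch of tbl_merge, using gtsummary's built-in trial data (the variables and spanner labels here are just placeholders, not from the question above):

```r
library(gtsummary)

# Build two separate summary tables from the built-in trial data
t1 <- trial %>% tbl_summary(include = age,   by = trt)
t2 <- trial %>% tbl_summary(include = grade, by = trt)

# Combine them side by side, with one spanning header per table
tbl_merge(tbls = list(t1, t2), tab_spanner = c("**Age**", "**Grade**"))
```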
Thank you for this video. I just have two questions. 1. You showed the plot of the variable importance (plot('model_name', type = "s")). Is there a way to extract the names of the variables and/or interactions using a threshold (e.g. 75%)? I need them as a list, e.g. "variab_interactions_plus_75 <- ????". 2. I used the example data "mtcars". Is an interaction "hp:cyl" equal to an interaction "cyl:hp"?
Hi Hendrik, not that I know of. When I needed it for my paper, I created the data frame manually and only put in the things I wanted. But I think it's not that much more work after the algorithm has done the heavy lifting ;) cheers
Wow! Thank you! So much important information. I will have to watch this video several times. But one question: in which order would you use the packages "janitor" and "dlookr"? It would be interesting to teach people how to load and handle a "dirty" Excel table, fix some Excel problems (e.g. dates stored as numbers or entries like "no data" in numerical columns etc.), and, once those problems are fixed, use "dlookr" to diagnose, explore and repair the data.
Thanks a lot for your nice comment, Hendrik! I would use janitor first and dlookr on top. I guess you have already seen the janitor video on my channel. If not, feel free ;) I also have one video on tidy data, where I show a dirty table, but there is not much R programming in it. Thanks for watching!
Wow, thanks for such generous feedback! If you know some folks who would also benefit from it, feel free to share it! I wish I had had something like this video when I started to learn R. I hope the other videos are helpful too! Thanks again! Cheers!
This is an excellent video!! I was thinking, a nonparametric alternative for linear regression could be LOESS regression, and bootstrapping could be done without problem. But, because LOESS is nonparametric, could the means be used properly instead of the medians, or should the medians be used in this case as well?
While resampling does allow a better use of means, I am a big fan of medians, because if the distribution of anything after bootstrapping does not become normal, as in the case of p-values, I would trust the median but not the mean. So, I would use the median as much as I can.
@@yuzaR-Data-Science I had another question. Although bootstrapping is not exactly an option for handling outliers, could it be the case that the more resamples are used, the more robust the model is to outliers?
Yes, because then you would resample the most frequent cases more often, so their density would be higher, and the outliers ... hmm, we would not get rid of them, but they would be resampled very rarely. Hope that helps. Cheers
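To make that concrete, here is a small sketch (my own toy example, not from the video) showing that an extreme outlier distorts bootstrapped means far more than bootstrapped medians:

```r
set.seed(1)
x <- c(rnorm(29, mean = 10), 100)  # 29 typical values plus one extreme outlier

# 2000 bootstrap resamples: the outlier shows up in some resamples, but rarely often
boot_means   <- replicate(2000, mean(sample(x, replace = TRUE)))
boot_medians <- replicate(2000, median(sample(x, replace = TRUE)))

median(boot_means)    # pulled upwards by the outlier
median(boot_medians)  # stays near the typical value of 10
```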
you actually can, here is what I think about them:

Close relatives: connect the dots

Ironically, there is nothing exact about the Exact Binomial test. It is called "exact" because it calculates the p-value directly from the probability, not from any kind of statistic like the Chi-Square. The Chi-Square Goodness-of-Fit test, however, only approximates the p-value, which is why the exact binomial test is recommended.

Proportion test

If you have lots of data (N > 30) or more than two outcomes, use a proportion test, which is highly similar to the exact binomial test. In fact, the exact binomial test gives nearly identical results to the proportion test with Yates continuity correction, which the proportion test uses by default. Below, I explicitly wrote down that correction:

```r
prop.test(x = 7, n = 10, p = 0.5, correct = T)

# 	1-sample proportions test with continuity correction
#
# data:  7 out of 10, null probability 0.5
# X-squared = 0.9, df = 1, p-value = 0.3428
# alternative hypothesis: true p is not equal to 0.5
# 95 percent confidence interval:
#  0.3536707 0.9190522
# sample estimates:
#   p
# 0.7
```

One sample Chi-Square test

However, as you can see above, the proportion test calculates the chi-squared statistic, so it is actually calling a chi-squared test.
And interestingly, a proportion test without Yates continuity correction gives identical results to a Goodness-of-Fit One-sample Chi-Square test:

```r
prop.test(x = 7, n = 10, p = 0.5, correct = F)

# 	1-sample proportions test without continuity correction
#
# data:  7 out of 10, null probability 0.5
# X-squared = 1.6, df = 1, p-value = 0.2059
# alternative hypothesis: true p is not equal to 0.5
# 95 percent confidence interval:
#  0.3967781 0.8922087
# sample estimates:
#   p
# 0.7

chisq.test(c(7, 3))  # total is 10 and p = 0.5 for both numbers by default

# 	Chi-squared test for given probabilities
#
# data:  c(7, 3)
# X-squared = 1.6, df = 1, p-value = 0.2059
```

(Simplest) Logistic regression

If you are not overwhelmed yet, you can also go further and conduct the simplest logistic regression possible (no need to understand it now! I'll cover it in different videos). Below you'll find the log-odds output of the logistic regression (0.847), which can be expressed as a probability using the plogis function. You'll see that the probability is exactly 0.7, as in the tests above, and the p-value is similar. By the way, the p-value represents the probability of observing a result as extreme or more extreme than the one you got, assuming the null hypothesis is true.

```r
m <- glm(c(rep(1, 7), rep(0, 3)) ~ 1, family = binomial())
broom::tidy(m)

# # A tibble: 1 × 5
#   term        estimate std.error statistic p.value
#   <chr>          <dbl>     <dbl>     <dbl>   <dbl>
# 1 (Intercept)    0.847     0.690      1.23   0.220

plogis(0.8472979)
# [1] 0.7
```
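For completeness, here is the exact binomial test itself on the same data (binom.test is base R; this block is an addition for comparison, not part of the original reply):

```r
# Exact binomial test on 7 successes out of 10 trials
binom.test(x = 7, n = 10, p = 0.5)

# p-value = 0.3438, very close to the corrected prop.test above (0.3428)
```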
@@yuzaR-Data-Science Thank you for your attention and complete answer. This was very insightful. Also thanks for your video about regressions vs statistical tests, which I had wondered about for months or more (since I don't have a strong background in statistics; I'm a Biologist who (tries to ^^) enjoy statistics). So, the proportion test calculates the p-value from the Chi-square statistic, as in the Chi-square test, unlike the binomial test? At least I can see a clear advantage, given that the proportion test gives the confidence intervals. I would have expected the Exact binomial test to be similar to the binomial glm, but in fact it's similar to the Chi-square p-value. Is this due to the Yates continuity correction? I found some sources saying it can be a bit conservative. For some reason, when I tried the Chi-square test it gave me the same result with continuity correction (which seems to be the default, though it doesn't specify which) and without correction, equal to the proportion test without correction. When I used the p-value by Monte Carlo simulation, it gave something closer to the Yates continuity correction:

```r
# 	Chi-squared test for given probabilities with simulated p-value
# 	(based on 2000 replicates)
#
# data:  c(7, 3)
# X-squared = 1.6, df = NA, p-value = 0.3543
```

What is best? Using continuity corrections or alpha adjustments for multiple outcomes, or both?
Great video! Is it possible to only show the pairwise comparisons between one group, e.g. 'Original', and Synthetic1, Synthetic2, Synthetic3, etc.? It also does pairwise comparisons between those synthetic groups, which I don't want to show, and for which I don't want to conduct tests except against the original one. Having separate one-by-one figures takes up a lot of space, so I'm wondering if this is possible?
I think it's difficult with this function, although not impossible. But it's much more practical to model it, with for example quantile regression, and use the tab_model function from the sjPlot package 📦 I have videos on both if you need some assistance for a start
Very nice video! It helped me a lot.🤓 I have a question: when I try to build a function for mixed effect models and continue to the next step of glmulti, it gives me "Error during wrapup: unused argument (REML = F) Error: no more error handlers available (recursive errors?); invoking 'abort' restart". Could you please tell me how this might happen and how to solve this problem? That would be very appreciated!😳
hi, thanks for the feedback! First, have you installed all the important packages (lme4, lmerTest ... etc.)? And secondly, have you tried this? Note that glmer() has no REML argument (REML applies only to linear mixed models via lmer()), which is exactly what the "unused argument (REML = F)" error complains about, so it is dropped from the wrapper here:

```r
glmer.glmulti <- function(formula, data, random = "", ...) {
  glmer(paste(deparse(formula), random), data = data, ...)
}

mixed_model <- glmulti(
  y = response ~ predictor_1 + predictor_2 + predictor_3,
  random = "+(1|random_effect)",
  crit = aicc,
  data = data,
  family = binomial,
  method = "h",
  fitfunc = glmer.glmulti,
  marginality = F,
  level = 1
)
```
then I guess there are too many predictors. If I try it with ca. 20, it collapses and I need to restart RStudio. So, reduce the number of predictors to the most sensible ones and then run glmulti
sure, let me know whether it worked. By the way, if you use method = "d" instead of "h", you can see how many models you are going to fit, and if it goes over six figures, I would reduce the number of predictors first and then use glmulti
Thanks man! I greatly appreciate your positive feedback. My blog was shut down, since they want me to pay. I refuse to pay, because I actually do something useful for the world for free. So, I hope for your understanding. The good news is, YouTube is still free and will stay free, so you can just stop the video at any time and type out the code for free. However, if you want to see the whole code from any of the videos, you could join the channel (ru-vid.com/show-UCcGXGFClRdnrwXsPHi7P4ZAjoin), because for the members I do provide the whole code.
Thank you very much! Will definitely do. I just made some videos on linear regression in R. Logistic will follow, and then the rest of the models, including Survival and ML, one day. Three years ago I did two videos on survival already, but they are old, theoretical and low quality. I'll redo them in a more concise and R-focused way. Thanks for the nice feedback and for watching!
Great suggestion! I'm on it; I just made some videos on linear regression in R. Logistic will follow, and then the rest of ML one day. Thanks for the nice feedback and for watching!
Excellent content quality! You, sir, always find a way to keep us interestingly attached to the video. You are like our Statistics and Data Science dealer. Thanks for your labor. We'll keep growing.
Wow, thank you! Your comments, my friend, are the most supportive and motivating! So, after reading them I just wanna jump straight into creating a new video on one of the 1000 ideas I have :) For instance, I am finally starting the logistic regression series. One of my favorite topics ;) Genuinely thankful for your continuous support! Cheers!
Thanks for this, but is this not the classic case where p-value adjustments for multiple testing need to be applied? Why wasn't it applied? Also, in a situation where we have only 2 possible outcomes, when should the binomial test be used versus Chi-square? Would this be based on preference, or is one objectively more appropriate than the other?
Man, you ask good questions! ;) First, yes, you are absolutely right, p-value adjustment would be the right thing to do, but at the time I did the paper, I used it irregularly. For the video I also try to keep the focus, so it's concise.
And since I try to keep videos short, some info does not end up there but was considered while I was writing the script. For instance, check these parts below; I hope you'll find them useful:

Intuition

Ironically, there is nothing exact about the Exact Binomial test. It is called "exact" because it calculates the p-value directly from the probability, not from any kind of statistic like the Chi-Square. The Chi-Square Goodness-of-Fit test, however, only approximates the p-value, which is why the exact binomial test is recommended.

Proportion test

If you have lots of data (N > 30) or more than two outcomes, use a proportion test, which is highly similar to the exact binomial test. In fact, the exact binomial test gives nearly identical results to the proportion test with Yates continuity correction, which the proportion test uses by default. Below, I explicitly wrote down that correction:

```r
prop.test(x = 7, n = 10, p = 0.5, correct = T)
```

One sample Chi-Square test

However, as you can see above, the proportion test calculates the chi-squared statistic, so it is actually calling a chi-squared test. And interestingly, a proportion test without Yates continuity correction gives identical results to a Goodness-of-Fit One-sample Chi-Square test:

```r
prop.test(x = 7, n = 10, p = 0.5, correct = F)
chisq.test(c(7, 3))  # total is 10 and p = 0.5 for both numbers by default
```

(Simplest) Logistic regression

If you are not overwhelmed yet, you can also go further and conduct the simplest logistic regression possible (no need to understand it now! I'll cover it in different videos). Below you'll find the log-odds output of the logistic regression (0.847), which can be expressed as a probability using the plogis function. You'll see that the probability is exactly 0.7, as in the tests above, and the p-value is similar. By the way, the p-value represents the probability of observing a result as extreme or more extreme than the one you got, assuming the null hypothesis is true.

```r
m <- glm(c(rep(1, 7), rep(0, 3)) ~ 1, family = binomial())
broom::tidy(m)
plogis(0.8472979)
```
Hi Richmond, thanks a lot for your nice feedback! I do not share my email, but that's no problem, because you can ask anything here in the comments section of the videos and I will do my best to answer as quickly and as well as I can. The channel members get quicker and more insightful responses though, and the higher their level, the more time I can invest into answering questions, so if you wish, join my channel: ru-vid.com/show-UCcGXGFClRdnrwXsPHi7P4ZAjoin
@@yuzaR-Data-Science Thank you very much for such informative videos. I spent several years in class and didn't understand all these concepts, but watching this video has made things much easier to comprehend. I have a few questions I would like to ask: when performing a statistical test, we use a parametric test if the data or variable in question is normally distributed, and a non-parametric alternative if it is not. My question is: when does the central limit theorem come into play here? Also, a colleague of mine told me to always use parametric tests even if the data is not normally distributed. His explanation was that parametric tests are more powerful than non-parametric tests. So, should I straightforwardly use the non-parametric alternative when I observe that my data is not normally distributed, or should I take the CLT into consideration and use the parametric test?
I am not sure the CLT helps too much, but using a parametric test for highly skewed data is absolute nonsense. The power difference is minimal and overrated. I also have colleagues who use non-parametric tests by default. Another extreme of nonsense and laziness. Just for the sake of the learning effect, please take skewed data and calculate the mean and the median to see how much difference you'll get. And if your colleague really cared about power, he/she would use multivariable models, not univariable tests. And this is what I would recommend to you: the tests are fine in the beginning, but try to learn models and their assumptions when you want to go to the next level. Cheers and thanks again for the nice feedback!
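A quick sketch of that suggestion (my own toy example), using log-normal, i.e. strongly right-skewed, data:

```r
set.seed(1)
income <- rlnorm(1000, meanlog = 10, sdlog = 1)  # right-skewed, income-like data

mean(income)    # dragged far up by the long right tail (around exp(10.5))
median(income)  # stays near the typical value (around exp(10))
```

The gap between the two numbers shows how poorly the mean summarizes skewed data, which is exactly why a mean-based parametric test can mislead there.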
@@yuzaR-Data-Science I'm really grateful that you found time in your busy schedule to reply to me. So please, when should I use the CLT, or shouldn't I use it at all? Thank you
Amazing question! Thanks :) I was actually thinking of putting it into a video, but deleted it as a "boring" and "less useful" part of the script :) Here is what I was going to say: ironically, there is nothing exact about the Exact Binomial test. It is called "exact" because it calculates the p-value directly from the probability, not from any kind of statistic like the Chi-Square. The Chi-Square Goodness-of-Fit test, however, only approximates the p-value, which is why the exact binomial test is recommended. Hope it answers the question :)