Hey, I'm Emma, a former data scientist now helping you reach your data science career goals.
► You Deserve Your Dream Job
When you know how to LAND interviews, PREPARE for interviews efficiently, and EXCEL in your interviews, you will have the confidence to rapidly advance your career and land the job of your dreams. All of our content is designed to help you with those three elements, using practical strategies and resources.
► Is this you?
Many data science professionals feel overwhelmed by the numerous skills required to land a data scientist job, often spending significant time searching for interview questions without a clear action plan or system to follow that leads to their dream job offer.
► We offer:
📍 Instant Interview System to Secure Interview Opportunities
📍 Data-driven Interview Preparation
📍 3E System to Excel in Data Science Interviews
📍 Offer Negotiations
Welcome to the channel once known as Data Interview Pro. Let's land that dream job together!
There is a mistake in the representation of the algorithm: the equations for r_i, L(Y, F(X)), and grad r_i = Y - F(X) can't all hold true at the same time. I think r_i = Y - F(X), and grad r_i should be something else (right?)
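For anyone else confused by this: in Friedman's gradient boosting, the pseudo-residual is defined as the negative gradient of the loss with respect to the current model's prediction, and it only coincides with Y - F(X) for squared-error loss:

```latex
% Pseudo-residual in gradient boosting: the negative gradient of the loss,
% evaluated at the current model F_{m-1}
r_{im} = -\left[\frac{\partial L\bigl(y_i, F(x_i)\bigr)}{\partial F(x_i)}\right]_{F = F_{m-1}}

% For squared-error loss L(y, F) = \tfrac{1}{2}(y - F)^2, this reduces to
r_{im} = y_i - F_{m-1}(x_i)
```

So the slide's equations are consistent only under squared-error loss; for other losses (absolute error, logistic, etc.) the residual Y - F(X) and the negative gradient are different quantities.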
Thank you for the video. There is one confusing part. You say principal component 1 is in the direction that captures the maximum variance, and principal component 2 is in the direction that captures the second-highest variance. But if you tilt PC1 just a little, it would then be the component that captures the second-highest variance, right? I understand that you mean PC2 captures the highest variance that remains after removing the variance PC1 has captured. Or you could say that PC2 is the one that captures the second-highest variance and is also orthogonal (perpendicular) to PC1. I hope you continue to make videos like these that explain data science concepts well, even though they get fewer views than your other types of videos. Hopefully, one day, anyone who wants to become a good data scientist can just watch your videos, at least for ideas that aren't usually explained clearly, like gradient boosting in decision trees, etc.
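The orthogonality point is easy to check numerically. Here's a minimal sketch (synthetic data, variable names my own) that recovers the principal axes from the covariance matrix and verifies that PC2 is orthogonal to PC1 and captures less variance:

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 2-D data: PC1 should align with the stretched direction.
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 1.0], [1.0, 0.5]])
Xc = X - X.mean(axis=0)  # center before PCA

# Eigendecomposition of the covariance matrix gives the principal axes.
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending order
order = np.argsort(eigvals)[::-1]       # sort by variance, descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

pc1, pc2 = eigvecs[:, 0], eigvecs[:, 1]
print(np.dot(pc1, pc2))  # ~0: PC2 is orthogonal to PC1
print(eigvals)           # variance captured by each PC, decreasing
```

Because the covariance matrix is symmetric, its eigenvectors come out mutually orthogonal automatically, which is exactly the "second-highest variance AND perpendicular to PC1" formulation.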
I'm glad I found your YouTube channel today, and I'm now following you on LinkedIn. Thank you for making this informative video, especially for aspiring and passionate data professionals like me. Please continue to mentor people. Wishing you more success!
Hello Emma, thank you so much for this amazing A/B testing series. I have a quick question about the ramp-up plan: on the first day, we only use 5% of traffic for each variant, so it's like we use 5% of traffic for variant 1, another 5% for variant 2, and the remaining 90% of traffic goes to the control group? I'm just really confused about this part. Much appreciated if you can give me some hints! :)
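That reading matches one common ramp-up design: each variant gets a small slice and everything else stays in control. A hypothetical sketch (bucket sizes, hashing scheme, and user IDs all made up for illustration) of a day-1 5%/5%/90% split using a stable hash so each user always lands in the same group:

```python
import hashlib

def assign(user_id: str) -> str:
    """Map a user to one of 100 stable buckets via a hash of their ID."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    if bucket < 5:
        return "variant_1"   # buckets 0-4  -> 5% of traffic
    elif bucket < 10:
        return "variant_2"   # buckets 5-9  -> 5% of traffic
    return "control"         # buckets 10-99 -> remaining 90%

counts = {"variant_1": 0, "variant_2": 0, "control": 0}
for i in range(100_000):
    counts[assign(f"user-{i}")] += 1
print(counts)  # roughly 5% / 5% / 90%
```

On later ramp-up days you would just widen the bucket thresholds (e.g. 25/25/50), keeping the hash stable so users don't hop between groups.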
Under "user spending 5 minutes on the web page", do you happen to capture a user who spent 5 minutes yesterday but not today? Or by "spending 5 minutes on the web page" are you talking about a user spending 5 minutes on the web page at least today?
Super helpful info. Thank you!! Where can we see the complete pie charts for ML and stats? Is there also one for Python coding? I've been searching your channel and haven't had any luck finding them.
*Usually* not but "never" is too strong. Layoffs also target based on compensation, age, business strategy changes -- these are independent of performance. If you are top 10% and your company (e.g. a large bank) shifts away from your business or closes your division it doesn't matter. These things also happen to top 10% employees during M&A.
Is DSA asked in data science interviews? As I understand it, a data scientist's job is to find insights from the data and build models in Python using the pandas library.
I think you should have chosen a different example than the YouTube one to explain metric selection; it would have been easier to understand.
Good explanation, but I think the last error is incorrectly handled... Imagine you run an experiment and it is significant (you haven't checked the observed power yet). If you accept it, then it is wrong; but if you rerun it, you have just nearly doubled the p-value. We should only be looking at the rerun, or let the experiment have sufficient power (probably more than our threshold).
Can you explain how this query works?

SELECT COUNT(p) OVER(ORDER BY p ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS p_count, p, c, d FROM `d.t`

SELECT COUNT(p) OVER(ORDER BY p ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS p_count, p FROM `d.t`

Why is the window affected when I specify the c and d columns? I would expect the frame to count one row before, the current row, and the next one. So the first row would be 2 and the following rows mostly 3, because the window is made of 3 items, so it will count 3 most of the time. But no, it returns different counts that don't make any sense to me. I read the BigQuery documentation and I really don't get it.
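Not the author, but two things often explain surprising counts with this frame: COUNT(p) only counts non-NULL values of p inside the frame, and BigQuery's ORDER BY p ASC sorts NULLs first. Merely selecting extra columns c and d shouldn't change the frame itself. A plain-Python simulation of the frame (hypothetical data, not BigQuery itself) shows both effects:

```python
# Simulating COUNT(p) OVER (ORDER BY p ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING)
def windowed_count(ps):
    # BigQuery's ORDER BY p ASC puts NULLs first; mimic that ordering here.
    ordered = sorted(ps, key=lambda v: (v is not None, v if v is not None else 0))
    out = []
    for i in range(len(ordered)):
        frame = ordered[max(0, i - 1): i + 2]          # 1 preceding .. 1 following
        out.append(sum(v is not None for v in frame))  # COUNT(p) skips NULLs
    return out

print(windowed_count([3, 1, 2, 4, 5]))        # [2, 3, 3, 3, 2]
print(windowed_count([None, 1, None, 2, 3]))  # [0, 1, 2, 3, 2] -- NULLs shrink counts
```

Note the edge rows too: the first and last rows only have two physical rows in their frame, so even with no NULLs at all you'd see 2 at both ends, never 3.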