Emma Ding
Hey, I'm Emma, a former data scientist now helping you reach your data science career goals.

► You Deserve Your Dream Job
When you know how to LAND interviews, PREPARE for interviews efficiently, and EXCEL in your interviews, you will have the confidence to rapidly advance your career and land the job of your dreams. All of our content is designed to help you with those three elements, using practical strategies and resources.

► Is this you?
Many data science professionals feel overwhelmed by the numerous skills required to land a data scientist job. They often spend significant time searching for interview questions without a clear action plan or system to follow that leads to their dream job offer.

► We offer:
📍 Instant Interview System to Secure Interview Opportunities
📍 Data-driven Interview Preparation
📍 3E System to Excel in Data Science Interviews
📍 Offer Negotiations

Welcome to the channel once known as Data Interview Pro. Let's land that dream job together!
I SUCK AT EVERYTHING..
3:08
1 year ago
Comments
@FauziFayyad 2 days ago
Hi Emma, please update your content ❤
@geoffreyz5466 2 days ago
5:18 - 6:04 To increase awareness:
• Increase the size of the component
• Use a popup window
• Send emails or push notifications
11:55 ML
@karlvandesman 7 days ago
Great! You chose interesting characteristics to point out the differences between the methods, congrats!
@afuturemodern 8 days ago
Hi Emma, thanks for the video. Why are you determining statistical significance based on the confidence interval and not the t-statistic?
@observer698 10 days ago
thank you
@babatundeonabajo 11 days ago
Found this very inspiring.
@bossrui 13 days ago
Looks like this is how I failed.
@aaronsayeb6566 15 days ago
There is a mistake in the representation of the algorithm: the equations for ri, L(Y, F(X)), and grad ri = Y - F(X) can't all hold true at the same time. I think ri = Y - F(X), and grad ri should be something else (right?)
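A minimal numeric sketch (my own, not from the video) of the identity the comment above is poking at: for squared-error loss, the *negative* gradient of the loss with respect to the prediction equals the residual Y - F(X), which is why gradient boosting fits residuals.

```python
# For squared-error loss L(y, F) = 0.5 * (y - F)**2, the gradient
# dL/dF = -(y - F), so the NEGATIVE gradient equals the residual.

def loss(y, F):
    return 0.5 * (y - F) ** 2

def neg_gradient(y, F):
    # -dL/dF = y - F, i.e. the residual
    return y - F

y, F = 3.0, 1.0
residual = y - F
assert neg_gradient(y, F) == residual  # both are 2.0

# Finite-difference check that dL/dF is approximately -(y - F)
eps = 1e-6
numeric_grad = (loss(y, F + eps) - loss(y, F - eps)) / (2 * eps)
print(numeric_grad)  # close to -2.0, i.e. minus the residual
```

So writing ri = Y - F(X) and calling it "the gradient" conflates the residual with the negative gradient; they coincide only up to sign for this particular loss.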
@user-yn8zn2jr4l 16 days ago
Fellow Chinese exam grinders, you've worked hard.
@100_IQ_EQ 17 days ago
Thank you for the video. There is one confusing part: you say principal component 1 is in the direction that captures maximum variance, and principal component 2 is in the direction that captures the 2nd-highest variance. But if you tilt PC1 just a little, it will be the component that captures the second-highest variance, right? I understand you mean that PC2 captures the highest variance that remains after removing the variance PC1 has captured. Or you could say that PC2 is the one that captures the second-highest variance and is also orthogonal (perpendicular) to PC1. I hope you continue to make videos like these that explain data science concepts well, even though they get fewer views than your other types of videos. Hopefully, one day, all one will have to do to become a good data scientist is watch your videos, at least for ideas that aren't usually explained clearly, like gradient boosting in decision trees.
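The orthogonality point in the comment above can be checked numerically. A small sketch (my own illustration on toy data, assuming NumPy): PC2 captures less variance than PC1, and a slightly tilted PC1 captures less variance than PC1 itself but more than PC2, which is why "second-highest variance" only pins down PC2 together with the orthogonality constraint.

```python
import numpy as np

# Toy data: correlated 2-D points
rng = np.random.default_rng(0)
x = rng.normal(size=200)
X = np.column_stack([x, 2 * x + rng.normal(scale=0.5, size=200)])
Xc = X - X.mean(axis=0)

# Principal components = eigenvectors of the covariance matrix,
# sorted by eigenvalue (variance captured), largest first.
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
pc1, pc2 = eigvecs[:, 0], eigvecs[:, 1]

# PC2 is orthogonal to PC1 and captures less variance.
print(np.dot(pc1, pc2))          # essentially 0
print(eigvals[0] > eigvals[1])   # True

# Tilt PC1 by a small angle: it beats PC2 but not PC1, so without
# the orthogonality constraint it would be the "second-highest"
# direction -- exactly the commenter's observation.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
tilted = R @ pc1
var_along = lambda v: np.var(Xc @ v)
print(var_along(pc1) > var_along(tilted) > var_along(pc2))  # True
```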
@lilpahedchix 20 days ago
I am glad that I found your YouTube channel today, and I am now following you on LinkedIn. Thank you for making this informative video, especially for aspiring and passionate data professionals like me. Please continue to mentor people. I wish you more success!
@ramanadeepsingh 23 days ago
Great video... what happens when the sample size is less than 30 and the population distribution is not normal? What kinds of tests are used in practice?
@n.r.a4933 23 days ago
Are you for real?? Is this simple?... I didn't understand a thing 😂
@alienfunbug 25 days ago
This video is so condensed and packed full of concise, well-explained material. Thank you for the fantastic content.
@mohammedsadiq69 28 days ago
Nice explanation, but it would be even better if you worked on your English pronunciation, because you pronounce "a", "e", and "o" the same.
@rezamahmoudi163 29 days ago
Please could you share the slides or PowerPoint?
@yiminglee4372 1 month ago
Hello Emma, thank you so much for this amazing A/B testing series. I have a quick question about the ramp-up plan part: on the first day, we only use 5% of traffic for each variant, so it's like we use 5% of traffic for variant 1, another 5% for variant 2, and the remaining 90% of traffic goes to the control group? I am just really confused about this part. Much appreciated if you can give me some hints! :)
@fishboobmaximus6062 1 month ago
Fix your hair, please
@markdotinc8371 1 month ago
The top 10% doesn't need your help, lady
@zbynekba 1 month ago
Your content is good, but your strong accent needs improvement.
@nihalnetha96 1 month ago
is there a way to get the notion notes?
@mizutofu 1 month ago
Why do you have 3 master's degrees?
@binuraanthony1983 1 month ago
Regarding a user spending 5 minutes on the web page: do you happen to capture a user who spent 5 minutes yesterday but not today? Or by "spending 5 minutes on the web page" do you mean the user spending 5 minutes at least today?
@FelipeCampelo0 1 month ago
I have been storing so many CTEs using the WITH statement lol. I think these window functions could make my queries less verbose. Great content!
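As a sketch of what the comment above is getting at (a hypothetical toy table, run through Python's sqlite3 just to have a self-contained engine; SQLite >= 3.25 supports window functions): a per-user running total that would otherwise need a CTE plus a self-join or correlated subquery collapses into a single OVER clause.

```python
import sqlite3

# Hypothetical orders table: (user_id, ts, amount)
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (user_id INT, ts INT, amount INT);
INSERT INTO orders VALUES (1, 1, 10), (1, 2, 20), (2, 1, 5);
""")

# One window function instead of a verbose WITH ... JOIN pattern:
rows = conn.execute("""
SELECT user_id, ts,
       SUM(amount) OVER (PARTITION BY user_id ORDER BY ts) AS running
FROM orders
ORDER BY user_id, ts
""").fetchall()
print(rows)  # [(1, 1, 10), (1, 2, 30), (2, 1, 5)]
```

The PARTITION BY restarts the sum per user and the ORDER BY makes it cumulative, which is the typical CTE-replacing use case.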
@MrFromminsk 1 month ago
Ensemble methods?
@rudroroy1054 1 month ago
A/B testing seems like a resource-consuming kind of testing. Are there any alternatives? Maybe something more automated?
@jacob-zs9wc 1 month ago
The weight is unbalanced. So does the heavier side land face down?
@user-hq4ge6no3p 1 month ago
An excellent video
@sharanupst 1 month ago
Great primer
@TechWithAbee 1 month ago
Thank you very much! 🔥
@mohamedsamy2895 1 month ago
So bad
@bakus_naur 1 month ago
are you an angel??
@user-ds8gv5hi4v 1 month ago
Super helpful info. Thank you!! Where can we see the complete pie charts for ML and stats? Is there also one for Python coding? I've been searching in your channel and didn't have the luck to find them.
@clarencestephen 2 months ago
*Usually* not but "never" is too strong. Layoffs also target based on compensation, age, business strategy changes -- these are independent of performance. If you are top 10% and your company (e.g. a large bank) shifts away from your business or closes your division it doesn't matter. These things also happen to top 10% employees during M&A.
@guiltycrown6024 2 months ago
Is DSA asked in data science interviews? As I understand it, a data scientist's job is to find insights from the data and build models in Python using the pandas library.
@livingbyheart8510 2 months ago
I think you should have chosen a different example than the YouTube one to explain metric selection; it would have been easier to understand.
@user-in8ic3xq8c 2 months ago
❤❤❤❤
@kunalmehta4051 2 months ago
Great work, Emma. Appreciate your efforts.
@geoffreyz5466 2 months ago
2:11 4:50 7:55
@kandiahchandrakumaran8521 2 months ago
Excellent video, many thanks. Could you kindly make a video on time-to-event analysis with survival SVM, RSF, or XGBLC?
@jacobdsk1381 2 months ago
amazing thank you!
@user-wy4ge3yu4h 2 months ago
Good explanation
@user-vq6bw2je7r 2 months ago
Great tutorial! Thanks for sharing this valuable content.
@jasdeepsinghgrover2470 2 months ago
Good explanation, but I think the last error is incorrectly handled... Imagine you run an experiment and it is significant (you haven't checked the observed power yet). If you accept it, then it is wrong; but if you rerun it, you just nearly doubled the p-value. We should only be looking at the rerun, or let the experiment have significant power (probably more than our threshold).
@muse3324 2 months ago
1:41 "It should not be obscure like what you see in Wikipedia" 😅😁😁
@tinos0330 2 months ago
Wow, it's very informative, Emma.
@emmysway96 2 months ago
I think the green and blue parameters are swapped for the normal distribution diagram.
@Kheekhee_khakha 2 months ago
Really bad video. It keeps repeating the same thing.
@Cplanet-uo1sf 2 months ago
Can you explain how this query works?

SELECT COUNT(p) OVER(ORDER BY p ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) as p_count, p, c, d FROM `d.t`

SELECT COUNT(p) OVER(ORDER BY p ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) as p_count, p FROM `d.t`

Why is the window affected when I specify the c and d columns? I would expect it to count 1 before, the current one, and the next one. So for the first row it would be 2, then 3 for the next ones, and 3 for the last one, because the window is made of 3 items, so it will count 3 most of the time. But no, it returns different counts that don't make any sense to me. I read the BigQuery documentation and I really don't get it.
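For reference, here is what a ROWS frame of 1 PRECEDING AND 1 FOLLOWING counts on a toy table (a sketch using Python's sqlite3 as a stand-in engine, not BigQuery, and not the commenter's actual `d.t` data): at most 3 rows, and only 2 at either edge of the partition. If a similar query returns other counts, the effective row ordering (e.g. ties in ORDER BY) and the frame actually in effect are worth re-checking.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (p INT)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,), (4,)])

# Each row's frame is: the row before it, itself, and the row after it.
rows = conn.execute("""
SELECT p,
       COUNT(p) OVER (ORDER BY p
                      ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS p_count
FROM t ORDER BY p
""").fetchall()
print(rows)  # [(1, 2), (2, 3), (3, 3), (4, 2)]
```

The first and last rows have no preceding (resp. following) row, so their frames hold only 2 rows; every interior row's frame holds exactly 3.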