Wow, thank you so much for this video. I am a software engineer by trade, but increasingly big tech companies include ML system design as one of their interview rounds. Your content was amazingly helpful in preparing for those interviews!
Awesome content, but an unrelated question: what are your camera settings? I especially like your camera setup. Could you share the details: which lens, what aperture, and anything else needed to replicate the same light/room setup? Thanks 🙂
Thank you so much for this video. I have two questions about model evaluation, sir. First, is there a way to use the confusion matrix to identify the exact datapoints in our dataset that the model got wrong during prediction? Second, when we deploy our model to a web app using Streamlit, can we apply a confusion matrix to the final predictive system's output in the web app to figure out which exact datapoints the model predicted wrongly?
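A confusion matrix only stores aggregate counts per class pair, so it cannot by itself point to individual rows, but you can recover the misclassified indices by comparing predictions with the true labels directly. Here is a minimal sketch assuming NumPy arrays of labels (the `y_true`/`y_pred` values are hypothetical); the same comparison works inside a Streamlit app on the deployed model's outputs:

```python
import numpy as np

# Hypothetical true labels and model predictions for illustration.
y_true = np.array([0, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 0, 0, 1, 1])

# Indices where the model's prediction disagrees with the true label.
wrong_idx = np.where(y_true != y_pred)[0]
print(wrong_idx)  # → [2 5]
```

With those indices in hand you can look up the original rows of your dataset (e.g. `df.iloc[wrong_idx]` if it lives in a pandas DataFrame).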
Please ma, can you share the code you used to plot the true positive rate vs false positive rate graph and the PR curve? It looks so beautiful and I can't reproduce it exactly. Please help. Thanks in advance.
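While waiting for the original code, here is a hedged sketch of how such curves are typically produced with scikit-learn and Matplotlib: `roc_curve` gives the false/true positive rates and `precision_recall_curve` gives the PR pairs. The dataset and classifier below are stand-ins, not the ones from the video:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve, roc_curve
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; substitute your own X and y.
X, y = make_classification(n_samples=500, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]  # probability of the positive class

fpr, tpr, _ = roc_curve(y_te, scores)
prec, rec, _ = precision_recall_curve(y_te, scores)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(fpr, tpr)
ax1.set_xlabel("False positive rate")
ax1.set_ylabel("True positive rate")
ax2.plot(rec, prec)
ax2.set_xlabel("Recall")
ax2.set_ylabel("Precision")
fig.savefig("curves.png")
```

The visual styling (colors, grid, fonts) in the video is likely just Matplotlib theming on top of these same two calls.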
What do you think about repeated random data splitting, e.g. splitting the data 80 percent train / 20 percent test at random while preserving the class structure, vs k-fold cross-validation? Edit: yep, I now know this is worse.
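For anyone comparing the two approaches mentioned above: scikit-learn implements the class-preserving random split as `StratifiedShuffleSplit` and the cross-validation variant as `StratifiedKFold`. A key difference, sketched below on toy data, is that random splits can test some samples repeatedly while never testing others, whereas k-fold guarantees each sample lands in a test fold exactly once:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, StratifiedShuffleSplit

# Toy data: 10 samples, balanced classes.
X = np.arange(20).reshape(10, 2)
y = np.array([0] * 5 + [1] * 5)

# Repeated class-preserving random splits: test sets may overlap,
# and some samples may never appear in any test set.
sss = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
sss_tested = set()
for _, test_idx in sss.split(X, y):
    sss_tested.update(test_idx.tolist())

# Stratified k-fold: every sample is tested exactly once.
skf = StratifiedKFold(n_splits=5)
kf_tested = []
for _, test_idx in skf.split(X, y):
    kf_tested.extend(test_idx.tolist())

print(sorted(kf_tested))  # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

This is why k-fold usually gives a lower-variance estimate of generalization error for the same compute budget.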
Depending on the language you're using you'd need different resources, but for Python and the scikit-learn library, here is some really good and comprehensive documentation explaining the implementation of each metric: scikit-learn.org/stable/modules/model_evaluation.html
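As a small taste of what that documentation page covers, here is a sketch using a few of the classification metrics on hypothetical labels:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Hypothetical true labels and predictions for illustration.
y_true = [0, 1, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1]

print(accuracy_score(y_true, y_pred))   # fraction correct → 0.833...
print(precision_score(y_true, y_pred))  # TP / (TP + FP) → 1.0
print(recall_score(y_true, y_pred))     # TP / (TP + FN) → 0.75
```

Each metric on that page follows the same `metric(y_true, y_pred)` calling convention, so they are easy to swap in and out.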