Incredible presentation! I semi-disagree with precision and recall being good evaluation metrics for a recommendation system that uses a masking technique to evaluate model performance during the offline training phase. Those metrics require the model's output to be binary, whereas masked prediction in this case is more of a regression problem, which makes RMSE a more valuable evaluation metric. Great presentation though, very clear explanations.
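To make the point concrete, here is a toy sketch of RMSE on masked ratings (all the numbers are made up for illustration): the model predicts a graded value for each held-out rating, and RMSE compares them directly without any binarization.

```python
import math

# Toy illustration: masked-rating prediction is a regression task,
# so RMSE compares graded predictions to the held-out true ratings.
held_out = [4.0, 2.5, 5.0, 3.0]    # true ratings that were masked
predicted = [3.8, 3.0, 4.5, 2.9]   # model's reconstructions

rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(held_out, predicted))
                 / len(held_out))
```

Precision/recall would first need a threshold to turn these graded values into relevant/not-relevant labels, which throws away exactly the magnitude information RMSE keeps.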
I found this video while searching for more info on how to make noise in Python - I'm a hobbyist programmer looking to procedurally generate terrain for a 2D top-down game I'm currently working on. This helped me to understand the general way that noise is used to render terrain, so thank you very much 👍
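For anyone else landing here with the same goal, here is a minimal 1-D value-noise sketch (not Perlin proper) in plain Python, using a sin-based hash — a common shader trick — to stay dependency-free; the scale factor and tile threshold are arbitrary choices:

```python
import math

# Pseudo-random value at each integer lattice point (sin-hash trick)
def lattice(i, seed=0):
    return math.sin(i * 127.1 + seed) * 43758.5453 % 1.0

# Smoothly interpolate between neighboring lattice values
def value_noise(x, seed=0):
    i = math.floor(x)
    t = x - i
    t = t * t * (3 - 2 * t)          # smoothstep easing
    return lattice(i, seed) * (1 - t) + lattice(i + 1, seed) * t

# Terrain use: sample noise per tile and threshold into terrain types
row = ['water' if value_noise(x * 0.3) < 0.5 else 'land'
       for x in range(10)]
```

The same idea extends to 2-D by hashing (x, y) pairs and interpolating along both axes, which is how top-down terrain maps are usually generated.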
Thanks for the explanation and example code on how WSGI and the web server work together. The code demonstrates how they work together step by step in a very clear way.
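For anyone skimming the comments, the division of labor can be sketched with the standard library's reference server (this is a minimal sketch, not the talk's actual code):

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # WSGI application side: receives the parsed request as `environ`,
    # reports status/headers via start_response, returns an iterable body
    body = f"Hello from {environ.get('PATH_INFO', '/')}".encode()
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]

if __name__ == '__main__':
    # Server side: wsgiref's reference server accepts sockets, parses
    # HTTP, builds `environ`, and calls the app for each request
    with make_server('', 8000, app) as server:
        server.serve_forever()
```

The server owns the socket and HTTP parsing; the app only ever sees the `environ` dict and the `start_response` callable — that interface is the whole WSGI contract.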
Great talk for a general overview of recommendation systems! From there I could dig deeper into the subjects I found interesting or didn't know about. In my opinion it's a great video for people with general ML knowledge, or with some experience in other applications who have never touched recommendation systems. Just one thing that isn't clear to me in the pre-processing part: when she talks about normalization, she mentions applying mean normalization to the users' ratings, which is clear, but the slides show a formula with a "user-item rating bias" that she skips explaining. Can someone explain where that formula comes from, and whether it's something you need to subtract from every cell? The fact that there is a variable for the "global average" and another for the "item's average rating" kind of confuses me — does the global average refer to the whole dataset of movies? Thanks!
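Since the slide itself isn't shown here, this is only my best guess at the formula she skipped — the common baseline-bias normalization, where yes, the global average mu is taken over every rating in the whole dataset, and the correction is subtracted from every observed cell:

```python
# Guessed bias-corrected normalization (toy data, made-up ratings):
#   normalized(u, i) = r(u, i) - (mu + b_u + b_i), where
#   mu  = global average over ALL ratings in the dataset
#   b_u = user u's average rating minus mu   (user bias)
#   b_i = item i's average rating minus mu   (item bias)
ratings = {('alice', 'film_a'): 5, ('alice', 'film_b'): 3,
           ('bob', 'film_a'): 4}

mu = sum(ratings.values()) / len(ratings)

def user_bias(u):
    vals = [r for (uu, _), r in ratings.items() if uu == u]
    return sum(vals) / len(vals) - mu

def item_bias(i):
    vals = [r for (_, ii), r in ratings.items() if ii == i]
    return sum(vals) / len(vals) - mu

def normalized(u, i):
    return ratings[(u, i)] - (mu + user_bias(u) + item_bias(i))
```

The intuition: b_u removes "this user rates everything high/low" and b_i removes "this movie is rated high/low by everyone", so what's left is the user's preference for that specific item relative to both baselines.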
Suddenly matrix factorisation comes up. Why? What are its benefits and limitations? OK, I never studied this, but it looks to me like either I'm missing a lot of background or the speaker jumps over a lot of issues.
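In case it helps anyone equally confused: the core idea can be sketched in a few lines — approximate the sparse rating matrix R as a product of two small factor matrices, fit only on the observed entries (all numbers and hyperparameters here are arbitrary toy choices):

```python
import numpy as np

# Tiny matrix-factorization sketch: approximate ratings R ~ U @ V.T
# with low-rank factors, fit by gradient descent on observed entries
# only (0 marks "missing" in this toy matrix).
R = np.array([[5., 3., 0.],
              [4., 0., 1.],
              [0., 1., 5.]])
mask = R > 0
k = 2                                   # latent dimension (assumed)
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(3, k))  # user factors
V = rng.normal(scale=0.1, size=(3, k))  # item factors

for _ in range(5000):
    err = (U @ V.T - R) * mask          # error on observed cells only
    U -= 0.01 * (err @ V)
    V -= 0.01 * (err.T @ U)
```

The benefit is that `U @ V.T` also fills in the zero (missing) cells, which is the recommendation; the limitation is that the rank k and the optimization details are design choices the talk glossed over.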
Awesome talk! There is just so much content on the web that tries to explain these topics but somehow ends up missing the point entirely. The actual simple implementation/example is what helped me the most, thank you!
How do you make predictions? In kNN we normally need train and test data from a split, but here we're using a different approach, so how do we make predictions for this dataset?
I learned it the hard way! I went over the Django and Gunicorn source code to understand it. But this is a gem — I wish I had found this video earlier. Inspired by this talk, I rebuilt a WSGI server and the application side. I added a few more features, like handling GET requests with query params, POST requests, etc. The code is pretty well documented and follows a similar design. I will try to post the link to the GitHub repo once I push it there.
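I don't know this commenter's actual code, but the query-param feature they describe usually boils down to something like this on the application side — the server puts the raw query string into the environ, and the app parses it with the standard library:

```python
from urllib.parse import parse_qs

# Sketch of GET query-param handling in a WSGI app: the server
# exposes the raw string under environ['QUERY_STRING'].
def app(environ, start_response):
    params = parse_qs(environ.get('QUERY_STRING', ''))
    name = params.get('name', ['world'])[0]   # ?name=... with a default
    body = f"Hello, {name}!".encode()
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [body]
```

POST handling is the same pattern, except the data is read from `environ['wsgi.input']` instead of the query string.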
The points in this video are good; I will try to add to them a bit. Tests add more code. The hope is that you are:
- not affecting the design negatively
- writing bug-free tests
- writing against a valid spec
- increasing maintainability
- increasing faith in the code
- reducing points of regression

The reality, however, is this:
- designs can be affected negatively when you place testability above other things such as encapsulation and black-box design
- test code can have bugs too
- tests can be invalid
- tests can increase maintenance
- tests that fail incorrectly or pass incorrectly can reduce faith
- you can break tests with refactoring and not reap the reward of regression tests

For tests to be a net positive, the spec should be correct and both the test and the code written perfectly. Yet you can achieve that while ignoring many other great coding principles: you could have low cohesion and the tests pass; you could have high cyclomatic complexity and the tests still pass. By this, you should understand that the tests are only as good as all the other factors — good code writing, feedback, spec, requirements gathering, etc. Add the other negatives and test code may never be a net positive:
- increased learning curves
- additional dependencies
- cognitive load of code + tests + dependency injection
- additional change chain reactions
- additional dependencies (test harnesses) to manage

On the other side of the coin: what if you increased your code-reading skills, used design by contract, added fault tolerance and atomicity to your code, reduced the call stack, and had aspect-oriented dynamic checks, state validation, isolation, etc.?
Very nice tutorial. I guess my question is (5:16): if eventually we are going to link the posterior probability to a p-value, why do we want to conduct a Bayesian A/B test in the first place?
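For context on what that posterior probability is, here is a self-contained sketch of a Bayesian A/B test on conversion counts with Beta(1, 1) priors — the counts are made up, and Monte Carlo sampling stands in for the exact integral:

```python
import random

random.seed(42)
a_success, a_fail = 30, 70    # variant A: 30 conversions out of 100
b_success, b_fail = 45, 55    # variant B: 45 conversions out of 100

# With a Beta(1, 1) prior, each conversion rate's posterior is
# Beta(successes + 1, failures + 1); estimate P(rate_B > rate_A)
# by sampling both posteriors.
samples = 100_000
wins = sum(
    random.betavariate(b_success + 1, b_fail + 1)
    > random.betavariate(a_success + 1, a_fail + 1)
    for _ in range(samples)
)
prob_b_better = wins / samples
```

The appeal of the Bayesian framing is that `prob_b_better` answers the business question directly ("how likely is B actually better?"), which a p-value does not — so translating it back into a p-value does seem to give up that advantage, which is a fair question.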