
Why You Shouldn't Trust Your ML Models (...too much) 

ritvikmath
163K subscribers
5K views

Published: Sep 8, 2024

Comments: 15
@mndhamod · 4 months ago
I have a PhD in computer science with a focus on deep learning, and I still learn something new from your videos. I'm grateful for all the neat insights I get from your teaching!
@wenhanzhou5826 · 4 months ago
This phenomenon occurs in Deep Q-learning and SARSA, where you need the target Q-value in order to update the current Q-function, especially in problems with continuous state spaces, where the target Q-value is typically estimated using the same model. So the algorithm essentially tries to predict a target and then learns from that prediction. One way to reduce this effect is to implement an epsilon-greedy policy, which chooses a random action depending on the value of epsilon; that is conceptually similar to the video's idea of keeping a small amount of randomness in the model's actions.
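A minimal sketch of the epsilon-greedy selection this comment refers to (the function name and arguments are illustrative, not from the video):

```python
import random

def epsilon_greedy_action(q_values, epsilon=0.1):
    """Explore with probability epsilon, otherwise exploit the best-known action."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                   # random action (explore)
    return max(range(len(q_values)), key=lambda a: q_values[a])  # greedy action (exploit)

# e.g. q_values = [0.2, 0.8, 0.5] -> usually action 1, occasionally a random one
```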
@climbscience4813 · 4 months ago
I've had this same effect in some models I trained, and I think there is one thing you can do that gets rid of the effect almost completely: eliminate the model's influence from your data. In my case it was possible to calculate what the outcome would have been if the model hadn't influenced the process. In the case you explained, I would divide the watch counts by the percentages of recommendations to compensate for the effect of the recommendations. It's essentially Bayesian statistics, where you try to determine the probability of the film getting watched given that it has been recommended. Hope this makes sense to everyone!
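A small sketch of the correction described here, dividing observed watch counts by how often the model recommended each film (all numbers and film names below are made up for illustration):

```python
# Hypothetical data: raw watch counts conflate appeal with recommendation exposure.
watch_counts = {"film_a": 900, "film_b": 300, "film_c": 50}
recommend_share = {"film_a": 0.60, "film_b": 0.30, "film_c": 0.10}

# Divide out the exposure to estimate popularity per unit of recommendation.
adjusted = {film: watch_counts[film] / recommend_share[film] for film in watch_counts}
print(adjusted)  # {'film_a': 1500.0, 'film_b': 1000.0, 'film_c': 500.0}
```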
@mango-strawberry · 4 months ago
Hi Ritvik, I've been watching a lot of your videos; you explain things very well. I have one request: could you do some videos purely on the math topics required for ML, especially something like stats?
@iSJ9y217 · 4 months ago
Your channel is a treasure! Thank you!
@emmang2010 · 4 months ago
Thank you.
@Jack-cm5ch · 4 months ago
Really unique video! I loved it. Do similar feedback-loop issues occur with demand or price forecasting? And if so, how? I was thinking high demand for an item on one day might bias the forecast toward that item in the future?
@paull923 · 4 months ago
Insightful video, thank you!
@user-sb9oc3bm7u · 4 months ago
Probably one should borrow concepts from genetic algorithms to make sure the next iteration's training set includes elements that were excluded from the output of iteration i-1.

Question: You start with N unique values and sample N values with replacement (basically bootstrapping). Then you iterate this process, where the input vector of iteration i (always of length N) is the output of iteration i-1. How many iterations would you need until you converge on a single value?

Example (N = 5):
[1, 2, 3, 4, 5]
i_1 = [1, 2, 2, 4, 5]
i_2 = [1, 2, 4, 5, 5]
...
i_(n-1) = [4, 4, 4, 4, 5]
i_n = [4, 4, 4, 4, 4]

Vector size: 5. Converged value: 4 (the value itself doesn't mean much; it could be colors as well). Number of iterations: n.

Answer: The larger N (the size of the vector), the closer the expected number of iterations gets to 2N (so for a vector of size 50 it will take, on average, about 100 iterations).
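A quick simulation sketch of the resampling experiment in this comment (the function name and trial count are my own choices); for large N the average should approach 2N, matching the classic Wright-Fisher/coalescent fixation-time result:

```python
import random

def iterations_to_fixation(n, trials=500, seed=0):
    """Average number of bootstrap-resampling rounds until the vector
    collapses to a single repeated value."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        values = list(range(n))  # N unique starting values
        steps = 0
        while len(set(values)) > 1:
            values = [rng.choice(values) for _ in range(n)]  # sample with replacement
            steps += 1
        total += steps
    return total / trials

print(iterations_to_fixation(50))  # roughly 100, i.e. about 2 * N
```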
@user-sl6gn1ss8p · 4 months ago
Would you also take the diversity directly into account when training the next model? Like, say, if you can measure that the diversity had a different effect from the predicted one, that indicates something, right?
@MorseAttack · 4 months ago
Missed a good opportunity to plug a “like and subscribe to train the model” 😂
@ritvikmath · 4 months ago
Haha good one!
@zlucoblij · 4 months ago
Imagine you're paying out profit share to the authors of your content based on popularity, and your own recommendation model drives that popularity. Sucks to be the content creator... The way the diversity is implemented seems to be absolutely key...
@InfiniteQuest86 · 4 months ago
Yeah, you shouldn't ever be training a new model on a previous model's output. And that's setting aside the fact that you shouldn't just be recommending whatever is popular, which is already wrong. You should recommend movies related to what the user likes, which avoids all of this.
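A minimal sketch of the related-items approach this comment advocates, using cosine similarity over toy genre vectors (the feature vectors and function are hypothetical, not anything from the video):

```python
import numpy as np

def recommend_similar(item_vectors, liked_index, k=2):
    """Return the k items most cosine-similar to one the user already liked."""
    liked = item_vectors[liked_index]
    sims = item_vectors @ liked / (
        np.linalg.norm(item_vectors, axis=1) * np.linalg.norm(liked)
    )
    sims[liked_index] = -np.inf  # never recommend the item itself
    return np.argsort(sims)[::-1][:k]

# Toy feature vectors: [action, comedy, drama]
movies = np.array([[0.9, 0.1, 0.0],
                   [0.8, 0.2, 0.1],
                   [0.0, 0.9, 0.3],
                   [0.1, 0.8, 0.2]])
print(recommend_similar(movies, liked_index=0))  # -> indices of similar action films
```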
@nononnomonohjghdgdshrsrhsjgd · 4 months ago
Very unpleasant channel, starting with the loud background music.
Up next:
Kernel Density Estimation : Data Science Concepts (25:52)
I Used Data Science to Buy the Dip (19:32, 8K views)
Discovering Communities: Modularity & Louvain #SoMe3 (41:34)
Hidden Markov Model : Data Science Concepts (13:52, 118K views)
Multi-Tenant: Database Per Tenant or Shared? (8:55, 11K views)
Maximum Likelihood : Data Science Concepts (20:45, 36K views)
Explaining nonparametric statistics, part 1 (10:59, 20K views)