Victor Lavrenko
IR20.9 Learning to rank: features
1:11
8 years ago
IR20.5 SVM explained visually
10:01
8 years ago
IR20.2 Large margin classification
6:01
8 years ago
IR20.1 Centroid classifier
6:16
8 years ago
LM.9 Jelinek-Mercer smoothing
1:07
8 years ago
LM.7 Good-Turing estimate
11:03
8 years ago
LM.4 The unigram model (urn model)
1:45
8 years ago
LM.14 Issues to consider
2:39
8 years ago
LM.2 What is a language model?
2:07
8 years ago
LM.10 Dirichlet smoothing
2:21
8 years ago
LM.11 Leave-one-out smoothing
2:30
8 years ago
LM.5 Zero-frequency problem
2:03
8 years ago
LM.3 Query likelihood ranking
5:03
8 years ago
LM.1 Overview
2:17
8 years ago
BIR.17 Modelling term frequency
3:12
8 years ago
BIR.16 Linked dependence assumption
14:00
8 years ago
BIR.12 Example
3:52
8 years ago
BIR.3 Probability of relevance
2:14
8 years ago
Comments
@DataWiseDiscoveries 20 hours ago
Great collection of videos, thoroughly loved it.
@archismanghosh7283 3 days ago
You just cleared up every doubt I had on this topic. It's 10 days before my exam, and watching your video got everything cleared up.
@glitchAI 5 days ago
Why does the covariance matrix rotate vectors towards the direction of greatest variance?
@TheTechPhilosopherTTPVLOGS 16 days ago
Great explanation, simple and visualized. Thanks! =)
@amalalmuarik5160 17 days ago
Thanks, you've answered a lot of the questions in my mind with your amazing explanation!
@einstorical 23 days ago
You sound like Gale Boetticher from Breaking Bad.
@NickLilovich 1 month ago
This video has (by far) the highest knowledge-per-minute of any video on this topic on RU-vid. Clear explanation of the math and the iterative method, along with an analogy to the simpler algorithm (k-means). Thanks Victor!
@ankitkusumakar7237 1 month ago
The content is good, but please amplify the audio.
@tejareddy74 1 month ago
When Andrew Tate explains math
@raihanpahlevi6870 2 months ago
Sir, we can't see your cursor, omg
@raihanpahlevi6870 2 months ago
How do we know the values of P(b) and P(a)?
@wajahatmehdi 2 months ago
Excellent explanation
@DereC519 2 months ago
ty
@tazanteflight8670 2 months ago
It's amazing this works at all, because the first step is to turn a 2-D image that makes sense into a 1-D representation that has lost ALL spatial information. A 1-D stream of pixels is not an image.
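For readers curious what that flattening step looks like, here is a minimal NumPy sketch (the 4x4 array and its values are made up for illustration): once the image is reshaped into a vector, neighbourhood relations between pixels are no longer represented anywhere except in our knowledge of the original shape.

import numpy as np

# A toy 4x4 grayscale "image"; shape and values are illustrative only.
image = np.arange(16).reshape(4, 4)

# Flattening yields a 1-D vector of 16 intensities; which pixels were
# vertical neighbours is no longer encoded in the data itself.
vector = image.flatten()            # shape (16,)

# The 2-D layout is recoverable only because we still know the shape.
restored = vector.reshape(4, 4)
assert (restored == image).all()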
@deepakjoshi7730 2 months ago
Splendid. The example portrays the algorithm stepwise very well!
@theclockmaster 2 months ago
Thanks for this. Your video helped bring clarity to the problem statement.
@saunakroychowdhury5990 2 months ago
But isn't the projection (y · e)e, where y = x − μ?
@raoufkeskes7965 3 months ago
At 3:08 the variance estimator should be divided by (n − 1) to get the corrected estimate, not by n; that's what we call Bessel's correction.
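For reference, a small numerical sketch of the correction the commenter describes, using NumPy's ddof parameter (the data points are made up): dividing by n − 1 instead of n enlarges the estimate, compensating for deviations being measured from the sample mean rather than the true mean.

import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # illustrative sample

biased = np.var(x, ddof=0)    # divides by n:     maximum-likelihood estimate
unbiased = np.var(x, ddof=1)  # divides by n - 1: Bessel's correction

print(biased, unbiased)       # 4.0 4.571428571428571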
@yeah6732 3 months ago
Great tutorial! But why are the slopes of the two eigenvectors expected to be the same?!
@DrKnowsMore 4 months ago
Outstanding!
@johanesalberto6136 5 months ago
thanks brother
@azuriste8856 5 months ago
Great explanation, sir. I don't know why, but it motivated me to show appreciation and comment on the video.
@samarthpardhi7307 6 months ago
Andrew Tate of machine learning
@guyteigh3375 6 months ago
This course / lecture series has been staggeringly useful, thank you. It was also explained in a way that I could understand easily, and I have been struggling with the explanations from others. You simplified things superbly. I did get lost when we started talking about mathematical functions et al., but the information I needed was more to do with concepts and ideas, so I could safely let the maths part slip by, while noting the different efficiencies of course. Thank you. I sincerely appreciate you sharing your work.
@virgenalosveinte5915 6 months ago
Amazing, thank you!
@mtushar 6 months ago
Enlightening, thank you!
@martijnhuijnen1 7 months ago
Thanks so much! I will refer my students to your webpage!
@m07hcn62 7 months ago
This is an awesome explanation. Thanks!
@Bulgogi_Haxen 7 months ago
Studying at TUM. I admire the German students who can follow the lecture content at uni. I'm taking an ML course at the moment, but here the lectures just dump the whole set of concepts on you, regardless of whether students can understand them or not... Such nice explanations in every video of the ML-related playlist. I fcking regret that I did not choose the UK for my master's.
@JD-rx8vq 7 months ago
Wow, you explain very well, thank you! I was having a hard time understanding my professor's explanation in our class.
@jonathanfiscus3221 7 months ago
Thanks for posting these Victor. I'm working on understanding the prior bias of precision and this helped. I hope things are going well!
@ArjunSK 8 months ago
Great tutorial!
@dinar.mingaliev 9 months ago
Incredibly insightful! Your teaching style, peppered with examples, made complex topics like inverted index data structure and MapReduce algorithms easy to grasp. The way you broke down the compression techniques was particularly eye-opening, and I gained a newfound appreciation for the mechanics behind large-scale search engines and big data management. Before watching your lectures, I was quite overwhelmed by these concepts. However, your clear and structured approach has removed that uncertainty and replaced it with genuine interest and understanding. Thank you for your dedication to spreading knowledge. Your work has had a significant impact on my learning journey, and I am truly grateful for that. Please continue to share your wisdom; you are making a real difference in the lives of your viewers!
@josmbolomnyoi2498 9 months ago
How do we block the Japanese hack?
@razakpapi 9 months ago
Andrew Tate?
@sachinshettyvs2848 9 months ago
Thank you 😃
@mightyduckyo 9 months ago
Step 1 is centering; should we also scale so that the variance = 1?
@user-ns8rn8fu3z 10 months ago
Hi sir, are the k-means and k-nearest-neighbours algorithms the same?
@Isomorphist 10 months ago
Great playlist.
@michael_bryant 10 months ago
This is the first time that PCA has actually made sense mathematically. Great video
@conradsnowman 10 months ago
I can't help but notice the middle dotted line looks like a logistic regression curve. I should know this... but is there any relation?
@sedgeleyp 10 months ago
Sounds just like Andrew Tate
@gulsumyldrm4039 2 months ago
Thanks, I don't want to listen any more…
@anubratadas1 11 months ago
As mentioned by @omidmo7554, I was in exactly the same situation. You have explained it so lucidly. Thank you so much, Victor!
@jospijkers1003 11 months ago
SVD 3:08
@vinaykumardaivajna5260 1 year ago
Great explanation
@rodrigoma7422 1 year ago
ChatGPT sent me here 😳
@pereeia9048 1 year ago
Amazing video, perfectly explained the concepts without getting bogged down in the math/technical details.
@jacobrafati4200 1 year ago
Andrew Tate?
@mikelmenaba 1 year ago
Great explanation mate, thanks!
@babyroo555 1 year ago
incredible explanation!