
The Viterbi Algorithm : Natural Language Processing 

ritvikmath
168K subscribers
106K views

How to efficiently perform part of speech tagging!
Part of Speech Tagging Video : • Part of Speech Tagging...
Hidden Markov Model Video : • Hidden Markov Model : ...
My Patreon : www.patreon.co...

Published: 20 Oct 2024

Comments: 203
@DistortedV12 3 years ago
NLP students in the future are going to love this
@utkarshgarg8783 3 years ago
I am an NLP student and I am lovin' this :)
@LeonPark 2 years ago
lovin it
@thepriestofvaranasi 2 years ago
Lovin it bro, you're a visionary
@xzl20212 2 years ago
I do
@bedoor109 1 year ago
True
@raghavarora3044 3 years ago
This was honestly one of the few videos where someone has actually explained something so clearly and efficiently! Great job! Keep up the good work!
@ritwikgoel 2 years ago
Deadass
@r2d2b3c4 2 years ago
Mate, I just had a lecture on Viterbi in an NLP context at uni and I was nearly having a breakdown due to all the smart formulas our teacher gave us. I couldn't understand it from the lecture. But you have shown it and explained it so clearly, I am amazed and shocked at the same time. You are a legend! Please carry on with the videos, you are saving and changing lives with this
@muradal-ahmad4048 3 years ago
Awesome video, very informative! The Viterbi explanation starts at 07:28, if you're already familiar with the basic concepts of HMMs.
@sashankvemulapalli6238 2 years ago
This literally has to be the best resource for understanding the Viterbi algorithm on the whole internet. Even the inventor themselves wouldn't have been able to explain it better!!! THANKS A TON
@libertylocsin4666 2 years ago
NLP student here. Love this. You're my hero. :D
@kewtomrao 2 years ago
I had the same doubt you had at around 13:30, but you cleared it up without causing any confusion!!! Awesome explanation!!! Hopefully your channel becomes more popular!! Cheers and a good day ahead!!
@Mew__ 1 year ago
Exactly. I couldn't be convinced when I was told Viterbi isn't greedy, but it makes sense now. Essentially, there's a big difference between the argmax of the next connection and the argmax of the cumulative previous connections.
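That distinction can be written out. A minimal sketch in formulas, assuming the usual HMM notation (A for transition probabilities, B for emission probabilities, w_t for the t-th word): a greedy tagger commits at each step to

$$\hat{s}_t = \arg\max_{s} A(\hat{s}_{t-1}, s)\, B(s, w_t),$$

while Viterbi keeps, for every state, the best cumulative score

$$\delta_t(s) = B(s, w_t)\, \max_{s'} \delta_{t-1}(s')\, A(s', s),$$

and only takes the argmax over $\delta_T$ at the very end, backtracking to recover the full path.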
@unlimitedsky8506 1 year ago
The proof of the Dijkstra invariants is very similar to how you would prove the corresponding statement for the Viterbi algorithm, in case you're interested in the exact proof!
@croncoder862 1 year ago
Very well explained. I actually came from the Coursera NLP specialization since I had many doubts over there, but after watching this, everything is clear.
@devindoinmonkmode 3 months ago
Me too, this guy is damn crazy
@yurimuniz7 2 months ago
Same here. My expectations for that course were high given the quality of the deep learning specialization, but I'm kinda disappointed so far. I've been learning much more from materials like this on the internet.
@YuryGurevich 3 years ago
This is the best explanation I have encountered. Thank you very much.
@tarunr130 9 months ago
The best explanation of anything academic I have ever seen on YouTube. Amazing!! Thanks a lot.
@fjoell 18 days ago
Nicely explained, definitely. One thing I could imagine helping a bit would be to show that in the example we're technically always considering DT, NN and VB, but since the emission probabilities are non-zero only for certain words in the sentence, only the nodes with non-zero emission probabilities are written down. Technically, from the start, for example, we would need to consider whether the first word is a definite article, a noun or a verb. But the emission probabilities for the starting word are 0 for both verbs and nouns, so they would never be part of the path with maximum probability. In the next step only NN and VB are written down because DT has a 0 emission probability for "fans", and so on.
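A minimal code sketch of that bookkeeping, with toy probabilities assumed rather than the video's exact numbers: states whose emission probability for the current word is zero can be left out of the trellis, since any path through them scores zero.

```python
# Toy emission table (assumed, not the video's exact numbers); any
# (tag, word) pair not listed has probability zero.
emission = {
    ("DT", "the"): 0.5,
    ("NN", "fans"): 0.1,
    ("VB", "fans"): 0.2,
}

def candidate_tags(word, tags=("DT", "NN", "VB")):
    """Keep only the tags that can actually emit this word."""
    return [t for t in tags if emission.get((t, word), 0.0) > 0.0]

print(candidate_tags("the"))   # ['DT']       -> NN and VB never enter the trellis
print(candidate_tags("fans"))  # ['NN', 'VB'] -> DT is skipped, and so on
```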
@hossainalif3909 1 year ago
I'm not gonna say I can solve all the problems of the Viterbi algorithm from now on, but I can say I have a clear concept after watching this. Thank you sir....
@ritvikmath 1 year ago
Of course!
@anirudhgangadhar6158 1 year ago
One of the best explanations I have ever come across. I was struggling with POS tagging a bit but now it's crystal clear. Thanks a lot :)
@farnaznouraei8351 3 years ago
The best explanation I've seen on this topic. Thank you!
@SajjadZangiabadi 1 year ago
The teacher does an excellent job of explaining the Viterbi Algorithm and providing a clear example. It's always great to see educators who are passionate about their subject and able to convey complex concepts in an understandable way.
@flyingsnowball9224 1 year ago
This is THE BEST lesson on the Viterbi algorithm ever. THANK YOU!
@NaifAlqahtani 2 years ago
My guy... you have a way with words. Ma Sha Allah. Beautifully explained.
@brendanking6326 3 years ago
Super helpful explanation of exactly when you can discontinue candidate paths. I've seen a few explanations of that point and this one is definitely the clearest.
@LIS75608 26 days ago
The best lecture ever about this concept.
@ritvikmath 25 days ago
Thanks!
@noorulaminbhat 9 months ago
One of the best videos on the subject that I have ever watched. Great work, and keep it up!
@thealiens386 1 year ago
I have 11 hours until my algorithms exam; this video helped so much, thank you!
@utkarshgarg8783 3 years ago
This video can never have a dislike. Simply amazing, Ritvik. Thanks!
@raamdemaas 2 years ago
The explanation is amazing. Couldn't wrap my head around it earlier with the textbook.
@ginkei 1 year ago
Genuinely the best explanation there is. Enlightenment reached!
@naveen_malla 1 year ago
Dude, this is awesome. I came here because I did not understand the explanation in a Coursera course. No offense to them, but you did a great job. Thank you.
@ritvikmath 1 year ago
Glad it was helpful!
@leonardliu1995 2 years ago
Very good video! Cleared up my doubts about why we can't prune the lower-probability branch! Thanks a lot!
@AnhNguyen-df1nm 11 months ago
God damn, what a thorough explanation. Respect, brother
@MattRosinski 2 years ago
Beautifully clear explanation! Thanks Ritvik!
@LiIRobJr 1 year ago
I understood this the first time he explained it. Great video man
@Ram-vu4lk 2 years ago
Calmly explained to make this algorithm understandable in an intuitive way. :)
@smangla07 1 year ago
Awesome video. Clearly explains a difficult-to-comprehend algorithm.
@ritvikmath 1 year ago
Glad it was helpful!
@simiouch5128 2 years ago
Very clear and instructive explanation. You're a great teacher :) Thank you for these videos
@mycodeKK 2 years ago
Thanks man, you're a thousand times better than my prof.
@wonbulzhang2240 11 months ago
Teaching slowly, but it is very clear. Good job!
@SaiGollapudi 1 year ago
Awesome job! Simple explanation. Please continue to make such videos.
@ritvikmath 1 year ago
Thanks a lot!
@trejohnson7677 2 years ago
This algorithm is appealing to the sensibilities. You can feel the author's nature & propensities.
@supervince110 2 years ago
Your explanation is brilliant
@agnelomascarenhas8990 2 years ago
The Viterbi algorithm was well explained. The Hidden Markov Model video was a bit difficult to follow because the contrast on the whiteboard was low. I like the way you explain, keeping formal clutter out of the way.
@MGtvMusic 2 years ago
Amazing video! Was stuck at this part for so long
@kanikagupta6103 2 years ago
Bhaiya, thank you so much! Had so much difficulty because everyone explained only the easy half of it... but you did an amazing job!!!!
@elaine19931120 5 months ago
Thank you, it's much clearer than my professor
@sarasterlie4799 1 year ago
This channel is a lifesaver! Thank you!!!
@aryasharma9379 1 year ago
Really well explained! You are a great teacher.
@jannisn.3600 1 year ago
Insanely well explained, thank you very much!
@DemianUsul 1 year ago
This is incredibly well explained. Thank you and congratulations 🎉
@xraymao5405 2 years ago
Huge thank you for this great content. It is truly gold.
@totoro_r1668 2 years ago
Really like your video, it's super clear even for me, coming from a linguistics background! Thank you
@ngocquangle5975 8 months ago
Thank you so much; it took me so long to learn this.
@rxs5556 5 months ago
This video finally nailed it for me. Thanks!
@HadiAhmed7546 1 year ago
Hey, thanks for the video - it's super helpful - just one question. When you branched off the start node you only considered DT as a possible state, but isn't there also a 0.2 probability that the first state is an NN?
@richardfutrell9668 9 months ago
The transition from start to NN is possible, but the emission probability for "the" given NN is zero, so we can skip it.
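In formulas, using the 0.2 transition probability from the question and the zero emission probability from the reply, the first-step score for NN is

$$\delta_1(\text{NN}) = A(\text{start}, \text{NN}) \cdot B(\text{NN}, \text{``the''}) = 0.2 \times 0 = 0,$$

so the NN branch is dead on arrival and omitting it from the trellis changes nothing.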
@sudhanshurai1146 1 year ago
You earned a subscriber, man. Very well explained.
@proteus333 1 month ago
This video was so good, I loved it. Thank you so much!
@sethcoast 2 years ago
Amazing explanation, thank you!
@prepotente_irreale 1 month ago
Thanks for the amazing, structured and pretty damn good video and explanation :)
@vipulmishra8682 2 years ago
That was a great video! Your explanation is very clear! Thanks a lot!
@darshansolanki5535 3 years ago
Great content, excited to see more videos on parse trees and sentence-correctness grammar
@Abhishek-pb8kt 6 months ago
Wonderful 🎉 very engaging and beautifully explained.
@lechx32 1 year ago
This is a very clear explanation, thank you
@wenerjy 1 year ago
Thank you so much, I had the same questions as you!!
@meghajohn 2 years ago
Very well explained. Clear and succinct!
@SignalFlux 11 months ago
Great video, you are a great explainer. One note: are you sure that the reason Viterbi is fast (O(L*P^2) rather than O(P^L)) is that you can discard paths (13:49)? It seems to me that the generic Viterbi formulation does not discard any paths (judging from the pseudocode on Wikipedia); rather, its efficiency comes from the very nature of dynamic programming, where the algorithm builds on previous work in a smart way (overlapping subproblems etc...). As you yourself say at the very end (19:57), you look at all nodes in the previous layer at each step. At each layer there are P nodes and at each node there are P options, which repeated L times means there are L*P^2 ops to do. So I guess it's not even necessary for Viterbi to prune paths to reach that good of a runtime.
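For reference, a generic Viterbi sketch (a standard formulation, not necessarily the video's exact one) that makes the O(L*P^2) structure visible: no path is discarded outright; each state simply keeps its single best incoming score.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """obs: observation indices (length L); pi: initial probs, shape (P,);
    A: transition probs, shape (P, P); B: emission probs, shape (P, V)."""
    P, L = len(pi), len(obs)
    delta = np.zeros((L, P))             # best score of any path ending in state s at step t
    back = np.zeros((L, P), dtype=int)   # best predecessor of state s at step t
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, L):                      # L steps ...
        for s in range(P):                     # ... times P states ...
            scores = delta[t - 1] * A[:, s]    # ... times P predecessors = O(L * P^2)
            back[t, s] = np.argmax(scores)
            delta[t, s] = scores[back[t, s]] * B[s, obs[t]]
    # Backtrack from the best final state to recover the best path.
    path = [int(np.argmax(delta[-1]))]
    for t in range(L - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```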
@danieldrew2356 8 months ago
Thanks, a very intuitive and well-planned video - it was really easy to follow!
@ritvikmath 8 months ago
Glad it was helpful!
@coupmd 2 years ago
You're a legend. Thank you so much!
@matthewmoore6116 2 years ago
You, my friend, are very good at teaching
@kanhabansal524 1 year ago
Best explanation on the internet
@ritvikmath 1 year ago
Wow, thanks!
@funskill-relaxationsounds7521
This helped a lot bro! God bless you
@emid6811 2 years ago
Thank you so much! I learned a lot. 👏
@jerilynliu8956 2 years ago
Thank you for this amazing video! It is so informative and time-saving :D
@guppy13 7 months ago
You made this very easy to understand, thank you
@ritvikmath 7 months ago
You're very welcome!
@selwia.771 1 year ago
Excellent explanation. Thank you very much
@souravganguly5680 3 years ago
Beautiful explanation
@gigabytechanz9646 3 years ago
Very clear and detailed explanation! Thanks
@anandarajguntuku1037 22 days ago
Thanks a lot... Short and very clear
@yangwang9688 3 years ago
Looking forward to coding the Viterbi algorithm from scratch!
@monarch0251 3 years ago
Such a helpful video! It really helped me a lot. I just have one suggestion: instead of the green marker, try a darker marker like brown. Green shines a little too much.
@monarch0243 2 years ago
I agree
@houchj0372 2 years ago
Excellent video, I like the way you explain complex things very understandably. Would you please continue and talk about Maximum Entropy Markov Models?
@Kishan31468 2 years ago
0 dislikes on the video tells everything. Damn amazing.
@marouanebenbetka 6 months ago
Clear explanation, thank you!
@茱莉-x2o 1 year ago
Very clearly explained. Thank you very much
@hhehe24 1 year ago
Very clear and direct, thanks
@일언-s2p 3 years ago
Hey, thank you so much for sharing all of these helpful videos with us. I really appreciate it! I can see you explained the decoding algorithm for HMMs. Could you also explain the evaluation and learning algorithms?
@jetlime08 2 years ago
Amazing video! Thanks for the great contribution :)
@cicidu8577 2 years ago
This is so clear! Thank you so much!
@josy4767 1 year ago
Really great - I was really able to follow along
@maxkasper7766 1 year ago
Great video! I think this actually helped me better understand a different algorithm called PELT (for changepoint detection). Still, I am not 100% sure about PELT, so if you covered it in a different video I would be very grateful ☺❤
@tanaykamath1415 1 year ago
So well explained, thank you!!
@user-wr4yl7tx3w 1 year ago
Great video, but I didn't quite get the big O for Viterbi. Where do you get the P^2?
@Xnshau 2 years ago
Brilliantly explained 👍
@murkyPurple123 3 years ago
Brilliant explanation!
@xy11021 9 months ago
Great video, greetings from Munich, LMU! Thank you
@kiran5918 1 year ago
So good, it makes me cry
@ritvikmath 1 year ago
Thanks!
@boosterfly 2 years ago
Just subscribed! This is awesome!
@אליהשלמה-ב8י 9 months ago
Excellent explanation
@caitlynkailinchen1216 3 years ago
Thank you so much for this! I finally understand this
@ArminBishop 2 years ago
Finally, I understood it. Thanks.
@datatalkshassan 2 years ago
My question is: here the word "The" has only "DT" as a possible part of speech. What if your sentence had started with, say, "Fans", which has more than one possible part of speech? The Viterbi algorithm will always pick the part of speech with maximum probability (and it will always be the same no matter what the sentence is). Wouldn't that be wrong?
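For what it's worth, in the standard formulation the first step is not sentence-independent: every tag is scored against the actual first word,

$$\delta_1(s) = A(\text{start}, s)\, B(s, w_1),$$

so a sentence starting with "Fans" would keep both the NN and VB branches alive, and the winner is decided only later by the cumulative scores along each path.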
@mananlalit 2 years ago
Very nice explanation!
@francisst-amour646 2 years ago
Very good presentation
@ramtinfarajollahi7250 2 months ago
Not used to commenting, but thank you; subscribed as well
@secondtonone2284 2 years ago
Thank you so much for posting this awesome tutoring video. It really helped me understand the algorithm. Can I ask a question? We have two probability matrices in the example. In reality, when we have a sequence dataset, do we use transition and emission probabilities produced by a model trained with the EM algorithm, or probabilities we can calculate from the empirical data?
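On the empirical option in that question: with a tagged corpus, the two matrices are just normalized counts (maximum-likelihood estimates); EM (Baum-Welch) is needed only when the tags are unobserved. A minimal sketch on an assumed toy corpus:

```python
from collections import Counter

# Toy tagged corpus (assumed for illustration, not real data).
tagged = [[("the", "DT"), ("fans", "NN"), ("watch", "VB"),
           ("the", "DT"), ("race", "NN")]]

trans, emit = Counter(), Counter()
prev_counts, tag_counts = Counter(), Counter()
for sent in tagged:
    prev = "<s>"                      # sentence-start pseudo-state
    for word, tag in sent:
        trans[(prev, tag)] += 1       # count transitions prev -> tag
        prev_counts[prev] += 1
        emit[(tag, word)] += 1        # count emissions tag -> word
        tag_counts[tag] += 1
        prev = tag

# Normalize counts into probability estimates.
P_trans = {k: v / prev_counts[k[0]] for k, v in trans.items()}
P_emit = {k: v / tag_counts[k[0]] for k, v in emit.items()}
print(P_trans[("<s>", "DT")])  # 1.0 in this toy corpus
print(P_emit[("NN", "fans")])  # 0.5 (NN emits "fans" once out of two NNs)
```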