Hidden Markov Model : Data Science Concepts 

ritvikmath
163K subscribers
118K views

Published: 5 Sep 2024

Comments: 190
@paulbrown5839
@paulbrown5839 3 years ago
To get the probabilities in the top right of the board, you keep applying P(A,B) = P(A|B)P(B), e.g. A = C3, B = C2, C1, M3, M2, M1. Keep applying P(A,B) = P(A|B)P(B) and you will end up with the same probabilities as shown in the top right of the whiteboard. Great video!
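For anyone who wants to see that repeated factorization numerically, here is a small sketch (the starting, transition, and emission probabilities are the ones from the whiteboard; the function and variable names are my own):

```python
# Repeatedly applying P(A,B) = P(A|B)P(B) to the three-day example gives
# P(m1) P(m2|m1) P(m3|m2) P(c1|m1) P(c2|m2) P(c3|m3).
start = {'h': 0.4, 's': 0.6}
trans = {('h', 'h'): 0.7, ('h', 's'): 0.3, ('s', 'h'): 0.5, ('s', 's'): 0.5}
emit = {('h', 'r'): 0.8, ('h', 'g'): 0.1, ('h', 'b'): 0.1,
        ('s', 'r'): 0.2, ('s', 'g'): 0.3, ('s', 'b'): 0.5}

def joint(moods, clothes):
    # P(m1) * P(c1|m1), then one transition and one emission per later day
    p = start[moods[0]] * emit[(moods[0], clothes[0])]
    for prev, cur, c in zip(moods, moods[1:], clothes[1:]):
        p *= trans[(prev, cur)] * emit[(cur, c)]
    return p

print(round(joint('ssh', 'gbr'), 5))  # 0.018
```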
@ritvikmath
@ritvikmath 3 years ago
Thanks for that!
@ummerabab8297
@ummerabab8297 1 year ago
Sorry, but I still don't get the calculation at the end. The whole video was explained flawlessly, but the calculation was left out, and I don't understand it. If you can, please help further. Thank you.
@toyomicho
@toyomicho 1 year ago
@@ummerabab8297 Here is some code in Python showing the calculations. In the output, you'll see that the hidden sequence s->s->h has the highest probability (0.018):

def get_most_likely():
    starting_probs = {'h': .4, 's': .6}
    transition_probs = {'hh': .7, 'hs': .3, 'sh': .5, 'ss': .5}
    emission_probs = {'hr': .8, 'hg': .1, 'hb': .1,
                      'sr': .2, 'sg': .3, 'sb': .5}
    mood = {1: 'h', 0: 's'}  # for generating all 8 possible choices via bit masking
    observed_clothes = 'gbr'

    def calc_prob(hidden_states: str) -> float:
        res = starting_probs[hidden_states[:1]]                        # P(m1)
        res *= transition_probs[hidden_states[:2]]                     # P(m2|m1)
        res *= transition_probs[hidden_states[1:3]]                    # P(m3|m2)
        res *= emission_probs[hidden_states[0] + observed_clothes[0]]  # P(c1|m1)
        res *= emission_probs[hidden_states[1] + observed_clothes[1]]  # P(c2|m2)
        res *= emission_probs[hidden_states[2] + observed_clothes[2]]  # P(c3|m3)
        return res

    # Use bit masking to generate all 8 combinations of hidden states 's' and 'h'
    for i in range(8):
        hidden_states = []
        binary = i
        for _ in range(3):
            hidden_states.append(mood[binary & 1])
            binary //= 2
        hidden_states = "".join(hidden_states)
        print(hidden_states, round(calc_prob(hidden_states), 5))

get_most_likely()

Output:
sss 0.0045
hss 0.0006
shs 0.00054
hhs 0.000168
ssh 0.018
hsh 0.0024
shh 0.00504
hhh 0.001568
@AakashOnKeys
@AakashOnKeys 3 months ago
@@toyomicho I had the same doubt. Thanks for the code! Would be better if author pins this.
@13_yashbhanushali40
@13_yashbhanushali40 1 year ago
Unbelievable explanation!! I have referred to more than 10 videos where the basic working of this model was explained, but I must say this is the easiest explanation one can ever find on YouTube. The practical approach to the explanation was much needed, and you did exactly that. Thanks a ton, man!
@user-xj1pi5ec6x
@user-xj1pi5ec6x 6 months ago
True experts always make it easy.
@beyerch
@beyerch 3 years ago
Really great explanation of this in an easy to understand format. Slightly criminal to not at least walk through the math on the problem, though.
@mohammadmoslemuddin7274
@mohammadmoslemuddin7274 3 years ago
Glad I found your videos. Whenever I need some explanation for hard things in Machine Learning, I come to your channel. And you always explain things so simply. Great work man. Keep it up.
@ritvikmath
@ritvikmath 3 years ago
Glad to help!
@chadwinters4285
@chadwinters4285 3 years ago
I have to say you have an underrated way of providing intuition and making difficult to understand concepts really easy.
@pinkymotta4527
@pinkymotta4527 2 years ago
Crystal-clear explanation. Didn't have to pause video or go back at any point of video. Would definitely recommend to my students.
@zishiwu7757
@zishiwu7757 3 years ago
Thank you for explaining how HMM model works. You are a grade saver and explained this more clearly than a professor.
@ritvikmath
@ritvikmath 3 years ago
Glad it was helpful!
@stevengreidinger8295
@stevengreidinger8295 4 years ago
You gave the clearest explanation of this important topic I've ever seen! Thank you!
@nathanielfernandes8916
@nathanielfernandes8916 1 year ago
I have 2 questions: 1. The Markov assumption seems VERY strong. How can we guarantee the current state only depends on the previous state? (e.g., person has an outfit for the day of the week instead of based on yesterday) 2. How do we collect the transition/emission probabilities if the state is hidden?
@songweimai6411
@songweimai6411 1 year ago
Really appreciate your work. Much better than the professor in my class who has a pppppphhhhdddd degree.
@coupmd
@coupmd 2 years ago
Wonderful explanation. I hand calculated a couple of sequences and then coded up a brute force solution for this small problem. This helped a lot! Really appreciate the video!
@mirasan2007
@mirasan2007 3 years ago
Dear ritvik, I watch your videos and I like the way you explain. Regarding this HMM, the stationary vector π is [0.625, 0.375] for the states [happy, sad] respectively. You can check that this is the correct stationary vector by multiplying the transpose of the transition probability matrix with it; the result is the same vector:

import numpy as np

B = np.array([[0.7, 0.3],
              [0.5, 0.5]])
pi_B = np.array([0.625, 0.375])
np.matmul(B.T, pi_B)
# array([0.625, 0.375])
@jirasakburanathawornsom1911
@jirasakburanathawornsom1911 2 years ago
Im continually amazed by how well and easy to understand you can teach, you are indeed an amazing teacher
@ahokai
@ahokai 2 years ago
I don't know why I had paid for my course and then came here to learn. Great explanation, thank you!
@Infaviored
@Infaviored 1 year ago
If there is a concept I did not understand from my lectures, and I see there is a video by this channel, I know I will understand it afterwards.
@ritvikmath
@ritvikmath 1 year ago
thanks!
@Infaviored
@Infaviored 1 year ago
@@ritvikmath no, thank you! Ever thought of teaching at a university?
@Dima-rj7bv
@Dima-rj7bv 3 years ago
I really enjoyed this explanation. Very nice, very straightforward, and consistent. It helped me to understand the concept very fast.
@ritvikmath
@ritvikmath 3 years ago
Glad it was helpful!
@slanglabadang
@slanglabadang 6 months ago
I feel like this is a great model to use to understand how time exists inside our minds
@seansanyal1895
@seansanyal1895 4 years ago
hey Ritvik, nice quarantine haircut! thanks for the video, great explanation as always. stay safe
@ritvikmath
@ritvikmath 4 years ago
thank you! please stay safe also
@clauzone03
@clauzone03 3 years ago
You are great! Subscribed with notification after only the first 5 minutes listening to you! :-)
@ritvikmath
@ritvikmath 3 years ago
Aw thank you !!
@caspahlidiema4027
@caspahlidiema4027 3 years ago
The best ever explanation on HMM
@ritvikmath
@ritvikmath 3 years ago
thanks!
@rssamarth099
@rssamarth099 10 months ago
This helped me at the best time possible!! I didn't know jack about the math a while ago, but now I have a general grasp of the concept and was able to chart down my own problem as you were explaining the example. Thank you so much!!
@mengxiaoh9048
@mengxiaoh9048 1 year ago
thanks for the video! I've watched two other videos but this one is the easiest to understand HMM and I also like that you added the real-life application NLP example at the end
@ritvikmath
@ritvikmath 1 year ago
Glad it was helpful!
@totomo1976
@totomo1976 1 year ago
Thank you so much for your clear explanation!!! Look forward to learning more machine-learning related math.
@1243576891
@1243576891 3 years ago
This explanation is concise and clear. Thanks a lot!
@ritvikmath
@ritvikmath 3 years ago
Of course!
@Aoi_Hikari
@Aoi_Hikari 4 months ago
I had to rewind the video a few times, but eventually I understood it, thanks.
@VascoDaGamaOtRupcha
@VascoDaGamaOtRupcha 1 year ago
You explain very well!
@claytonwohl7092
@claytonwohl7092 3 years ago
At 2:13, the lecturer says, "it's not random" whether the professor wears a red/green/blue shirt. Not true. It is random. It's random but dependent on the happy/sad state of the professor. Sorry to nitpick. I definitely enjoyed this video :)
@ritvikmath
@ritvikmath 3 years ago
Fair point !! Thanks :)
@kanhabansal524
@kanhabansal524 1 year ago
Best explanation on the internet!
@ritvikmath
@ritvikmath 1 year ago
Thanks!
@sarangkulkarni8847
@sarangkulkarni8847 1 month ago
Absolutely Amazing
@shivkrishnajaiswal8394
@shivkrishnajaiswal8394 17 days ago
Nice explanation!! One of the use cases mentioned was NLP. I am wondering if HMMs are still helpful given that we now have Transformer architectures.
@Molaga
@Molaga 3 years ago
A great video. I am glad I discovered your channel today.
@ritvikmath
@ritvikmath 3 years ago
Welcome aboard!
@mousatat7392
@mousatat7392 1 year ago
Amazing, keep it up. Very cool explanation.
@ritvikmath
@ritvikmath 1 year ago
Thanks!
@mihirbhatia9658
@mihirbhatia9658 3 years ago
I wish you went through Bayes nets before coming to HMMs. That would make the conditional probabilities so much easier to understand for HMMs. Great explanation though!! :)
@gopinsk
@gopinsk 2 years ago
I agree, teaching is an art, and you have mastered it. Applications to real-world scenarios are really helpful. I feel so confident after watching your videos. Question: how did we get the probabilities to start with? Are they arbitrary, or is there a scientific method to arrive at those numbers?
@OskarBienko
@OskarBienko 1 year ago
I'm curious too. Did you figure it out?
@ananya___1625
@ananya___1625 1 year ago
As usual, an awesome explanation... After referring to tons of videos, I understood it clearly only after this one... Thank you for your efforts and time.
@ritvikmath
@ritvikmath 1 year ago
You are most welcome
@laurelpegnose7911
@laurelpegnose7911 2 years ago
Great video to get an intuition for HMMs. Two minor notes: 1. There might be an ambiguity of the state sad (S) and the start symbol (S), which might have been resolved by renaming one or the other 2. About the example configuration of hidden states which maximizes P: I think this should be written as a tuple (s, s, h) rather than a set {s, s, h} since the order is relevant? Keep up the good work! :-)
@pibob7880
@pibob7880 1 year ago
After watching this, I was left with the impression that local maximization of conditional probabilities leads to global maximization of the hidden Markov model. Seems too good to be true... I guess the hard part is finding out the hidden state transition probabilities?
@awalehmohamed6958
@awalehmohamed6958 2 years ago
Instant subscription, you deserve millions of followers
@anna-mm4nk
@anna-mm4nk 2 years ago
appreciate that the professor was a 'she' took me by surprise and made me smile :) also great explanation, made me remember that learning is actually fun when you understand what the fuck is going on
@linguipster1744
@linguipster1744 3 years ago
oooh I get it now! Thank you so much :-) you have an excellent way of explaining things and I didn’t feel like there was 1 word too much (or too little)!
@SuperMtheory
@SuperMtheory 4 years ago
Great video. Perhaps a follow-up could be the actual calculation of {S, S, H}.
@ritvikmath
@ritvikmath 4 years ago
thanks for the suggestion!
@5602KK
@5602KK 3 years ago
Incredible. All of the other videos I have watched left me feeling quite overwhelmed.
@ritvikmath
@ritvikmath 3 years ago
glad to help!
@shahabansari5201
@shahabansari5201 3 years ago
Very good explanation of HMM!
@ritvikmath
@ritvikmath 3 years ago
Glad it was helpful!
@mia23
@mia23 3 years ago
Thank you. That was a very impressive and clear explanation!
@ritvikmath
@ritvikmath 3 years ago
Glad it was helpful!
@shubhamjha5738
@shubhamjha5738 3 years ago
Nice one
@ritvikmath
@ritvikmath 3 years ago
Thanks 🔥
@zacharyzheng3610
@zacharyzheng3610 1 year ago
Brilliant explanation
@ritvikmath
@ritvikmath 1 year ago
Thanks!
@MegaJohnwesly
@MegaJohnwesly 1 year ago
Oh man, thanks a lot :). I tried to understand here and there by reading, but I didn't get it. This video is gold.
@ritvikmath
@ritvikmath 1 year ago
Glad it helped!
@tindo0038
@tindo0038 2 months ago
Here is my quick implementation of the discussed problem:

index_dict = {"happy": 0, "sad": 1}
start_prob = {"happy": 0.4, "sad": 0.6}
transition = [[0.7, 0.3], [0.5, 0.5]]  # rows: yesterday's mood, columns: today's
emission = {
    "happy": {"red": 0.8, "green": 0.1, "blue": 0.1},
    "sad": {"red": 0.2, "green": 0.3, "blue": 0.5},
}
observed = ["green", "blue", "red"]
cur_sequence = []
res = {}

def dfs(cur_day, cur_score):
    if cur_day >= len(observed):
        res["".join(cur_sequence)] = cur_score
        return
    cur_observation = observed[cur_day]
    for mood in ["happy", "sad"]:
        new_score = cur_score
        new_score *= emission[mood][cur_observation]
        # at the start, there is no previous mood
        if cur_sequence:
            new_score *= transition[index_dict[cur_sequence[-1]]][index_dict[mood]]
        else:
            new_score *= start_prob[mood]
        cur_sequence.append(mood)
        dfs(cur_day + 1, new_score)
        cur_sequence.pop()

dfs(0, 1.0)
print(res)
@ashortstorey-hy9ns
@ashortstorey-hy9ns 2 years ago
You're really good at explaining these topics. Thanks for sharing!
@wendyqi4727
@wendyqi4727 1 year ago
I love your videos so much! Could you please make one video about POMDP?
@froh_do4431
@froh_do4431 3 years ago
really good work on the simple explanation of a rather complicated topic 👌🏼💪🏼 thank you very much
@Roman-qg9du
@Roman-qg9du 3 years ago
Please show us an implementation in python.
@ritvikmath
@ritvikmath 3 years ago
Good suggestion!
@Sasha-ub7pz
@Sasha-ub7pz 3 years ago
Thanks, amazing explanation. I was looking for such a video, but unfortunately the other authors have bad audio.
@spp626
@spp626 2 years ago
Such a great explanation! Thank you sir.
@srijanshovit844
@srijanshovit844 9 months ago
Awesome explanation! I understood it in one go!!
@jinbowang8814
@jinbowang8814 1 year ago
Really nice explanation! Easy and understandable.
@kiran10110
@kiran10110 3 years ago
Damn - what a perfect explanation! Thanks so much! 🙌
@ritvikmath
@ritvikmath 3 years ago
Of course!
@minapagliaro7607
@minapagliaro7607 7 months ago
Great explanation ❤️
@kristiapamungkas697
@kristiapamungkas697 3 years ago
You are a great teacher!
@ritvikmath
@ritvikmath 3 years ago
Thank you! 😃
@shaoxiongsun4682
@shaoxiongsun4682 1 year ago
Thanks a lot for sharing. It is very clearly explained. Just wondering why the objective we want to optimize is not the conditional probability P(M=m | C = c).
@louisc2016
@louisc2016 2 years ago
I really like the way you explain something, and it helps me a lot! Thx bro!!!!
@ResilientFighter
@ResilientFighter 3 years ago
Ritvik, it might be helpful if you add some practice problems in the description
@GarageGotting
@GarageGotting 3 years ago
Fantastic explanation. Thanks a lot
@ritvikmath
@ritvikmath 3 years ago
Most welcome!
@Justin-General
@Justin-General 2 years ago
Thank you, please keep making content Mr. Ritvik.
@souravdey1227
@souravdey1227 3 years ago
Really crisp explanation. I just have a query. When you say that the mood on a given day "only" depends on the mood the previous day, this statement seems to come with a caveat. Because if it "only" depended on the previous day's mood, then the Markov chain will be trivial. I think what you mean is that the dependence is a conditional probability on the previous day's mood: meaning, given today's mood, there is a "this percent" chance that tomorrow's mood will be this and a "that percent" chance that tomorrow's mood will be that. "this percent" and "that percent" summing up to 1, obviously. The word "only" somehow conveyed a probability of one. I hope I am able to clearly explain.
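That conditional reading can be made concrete with the video's transition numbers (a small sketch; the dict layout is my own):

```python
# Each row is the conditional distribution of tomorrow's mood given today's.
# "Only depends on yesterday" means conditional independence of everything
# before yesterday, not determinism: each row sums to 1 rather than
# assigning probability 1 to a single outcome.
transition = {
    'happy': {'happy': 0.7, 'sad': 0.3},
    'sad':   {'happy': 0.5, 'sad': 0.5},
}
for today, tomorrow_dist in transition.items():
    assert abs(sum(tomorrow_dist.values()) - 1.0) < 1e-12
    print(today, '->', tomorrow_dist)
```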
@gnkk6002
@gnkk6002 3 years ago
Wonderful explanation 👌
@ritvikmath
@ritvikmath 3 years ago
Thank you 🙂
@ls09405
@ls09405 9 months ago
Great video. But how did you calculate that {S, S, H} is the maximum?
@mansikumari4954
@mansikumari4954 11 months ago
This is great!!!!!
@hichamsabah31
@hichamsabah31 3 years ago
Very insightful. Keep up the good work.
@arungorur3305
@arungorur3305 4 years ago
Ritvik, great videos. I have learnt a lot, thanks. A quick question re: HMMs: how does one create the transition matrix for hidden states when, in fact, you don't know the states? Thanks!
@silverstar6905
@silverstar6905 4 years ago
Very nice explanation. Looking forward to seeing something about quantile regression.
@skyt-csgo376
@skyt-csgo376 2 years ago
You're such a great teacher!
@user-or7ji5hv8y
@user-or7ji5hv8y 3 years ago
This is a really great explanation.
@juanjopiconcossio3146
@juanjopiconcossio3146 1 year ago
Great great explanation. Thank you!!
@alecvan7143
@alecvan7143 2 years ago
Very insightful, thank you!
@beckyb8929
@beckyb8929 3 years ago
beautiful! Thank you for making this understandable
@NickVinckier
@NickVinckier 3 years ago
This was great. Thank you!
@ritvikmath
@ritvikmath 3 years ago
Glad you enjoyed it!
@srinivasuluyerra7849
@srinivasuluyerra7849 2 years ago
Great video, nicely explained
@SPeeDKiLL45
@SPeeDKiLL45 2 years ago
Great Video Bro ! Thanks
@curiousredpand90
@curiousredpand90 3 years ago
Ah you explained so much better than my Ivy League professor!!!
@user-or7ji5hv8y
@user-or7ji5hv8y 4 years ago
Great video
@ritvikmath
@ritvikmath 4 years ago
thanks !
@jijie133
@jijie133 1 year ago
Great video!
@0xlaptopsticker29
@0xlaptopsticker29 4 years ago
Love this and the GARCH Python video.
@ritvikmath
@ritvikmath 4 years ago
thanks :)
@chia-chiyu7288
@chia-chiyu7288 3 years ago
Very helpful!! Thanks!
@ritvikmath
@ritvikmath 3 years ago
Glad it was helpful!
@barhum5765
@barhum5765 1 year ago
God bless your soul man
@otixavi8882
@otixavi8882 2 years ago
Great video! However, if the hidden state transition probabilities are unknown, is there a way to compute them based on the observations?
@deepshahsvnit
@deepshahsvnit 9 months ago
Please share the next videos' content in the description.
@nicolas12189
@nicolas12189 2 years ago
Hey in future videos could you provide an unobstructed view of the board, either at the beginning or end of the video, just for a few seconds? Sometimes it’s helpful to screenshot your notes
@yvonneruijia
@yvonneruijia 3 years ago
Please share how to implement it in python or matlab! Truly appreciate it!!
@user-ls3bi6jk8u
@user-ls3bi6jk8u 7 months ago
Good explanation. But the last part, determining the moods, is left out. How did you get s, s, h?
@dingusagar
@dingusagar 1 year ago
nice explanation
@ritvikmath
@ritvikmath 1 year ago
🙏 thanks
@qiushiyann
@qiushiyann 4 years ago
Thank you for this explanation!
@newwaylw
@newwaylw 1 year ago
Why are we maximizing the joint probability? Shouldn't the task to find the most likely hidden sequence GIVEN the observed sequence? i.e. maximizing the conditional probability argmax P(m1m2m3| c1c2c3)?
@PeteThomason
@PeteThomason 2 years ago
Thank you, that was a very clear introduction. The key thing I don't get is where the transition and emission probabilities come from. In a real-world problem, how do you get those?
@jordanblatter1595
@jordanblatter1595 2 years ago
In the case of the NLP example with part of speech tagging, the model would need data consisting of sentences that are assigned tags by humans. The problem is that there isn't much of that data lying around.
@paulbrown5839
@paulbrown5839 3 years ago
@ritvikmath Any chance of a follow-up video covering some of the algorithms like Baum-Welch and Viterbi, please? I'm sure you could explain them well. Thanks a lot.
@ritvikmath
@ritvikmath 3 years ago
Good suggestion! I'll look into it for my next round of videos. Usually I'll throw a general topic out there and use the comments to inform future videos. Thanks!
@hex9219
@hex9219 3 months ago
awesome
@froh_do4431
@froh_do4431 3 years ago
Is it possible to describe in a few words, how we can calculate/compute the transition- and emission probabilities?
@RezaShokrzad
@RezaShokrzad 3 years ago
BIG LIKE, absolutely awesome. Could you just explain the interpretation of {SSH}? Should we compute all 8 cases of m_i, then compare them?
@ritvikmath
@ritvikmath 3 years ago
Thanks! And yes, exactly, we can do that. In practice, of course, with many time periods and states this gets too expensive, so we have more efficient ways to compare them, but at the end of the day we are still getting the maximum.
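For the curious, one of those more efficient ways is dynamic programming in the style of the Viterbi algorithm. A minimal sketch using the video's numbers (my own code, not the author's):

```python
# Viterbi-style DP: for each day and each mood, keep only the
# best-probability hidden path ending in that mood, instead of
# enumerating all 2^n paths.
start = {'h': 0.4, 's': 0.6}
trans = {'h': {'h': 0.7, 's': 0.3}, 's': {'h': 0.5, 's': 0.5}}
emit = {'h': {'r': 0.8, 'g': 0.1, 'b': 0.1},
        's': {'r': 0.2, 'g': 0.3, 'b': 0.5}}

def viterbi(observed):
    # best[m] = (probability, path) of the best path ending in mood m
    best = {m: (start[m] * emit[m][observed[0]], m) for m in 'hs'}
    for c in observed[1:]:
        best = {m: max((p * trans[prev][m] * emit[m][c], path + m)
                       for prev, (p, path) in best.items())
                for m in 'hs'}
    return max(best.values())

p, path = viterbi('gbr')
print(path, round(p, 5))  # ssh 0.018
```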
@kanchankrishna3686
@kanchankrishna3686 5 months ago
Why are there 8 possible combinations (6:10)? I got 9 from doing M1/G, M1/B, M1/R, M2/G, M2/B, M2/R, M3/G, M3/R, M3/B ?
@hmyswonderland4532
@hmyswonderland4532 3 years ago
Great video! But I was wondering about P(c2|m3,m2,m1): why would m3 be related to c2?