
Markov Chain Monte Carlo and the Metropolis Algorithm

Jeff Picton
228K views

Published: 29 Sep 2024

Comments: 120
@Ash338 12 years ago
Excellent presentation. Very clear, with nice examples and simple codes. Thank you.
@badbad_ 8 years ago
Sir, you are a hero. I read a bunch of definitions, explanations and examples and only yours can make me really understand MCMC. Now I can continue my final assignment
@mayankpj 9 years ago
Nice work! You explained very clearly and the recording is also very nicely done...
@sethtrowbridge9122 8 years ago
Yeah I see you, League of Legends. hiding out there in the task bar-- thinking you'll just chill until Mr. Picton gets some free time. Well this great intellect has moved on. When given a choice between toxicity and flaming or creating helpful videos, I'll have you know, Jeff Picton chose the high road.
@NasusTCotS 5 years ago
This video might be the only thing saving my thesis. Thanks :D
@svetoslavbliznashki1710 9 years ago
A great lecture indeed! Thanks very much :) The matlab code you shared really made it as clear as it gets. Keep them coming :)
@ohrfeigenbaumhauweg 7 years ago
Thank you. This really helped my understanding of the model and the applications.
@chx75 5 years ago
The Markov condition is not "x4 depends only on x3", but "if we know x3, x4 becomes independent of x2 and x1"
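For readers skimming the thread: the condition the commenter describes is a conditional-independence statement. In the video's x1..x4 notation it can be written as

    P(x_4 \mid x_3, x_2, x_1) = P(x_4 \mid x_3)

i.e. x4 may still be correlated with x1 and x2 marginally, but all of that dependence is carried through x3.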
@premratan7511 9 years ago
Great video, Jeff Picton. It was really helpful. Thank you very much.
@aliabdollahzadeh1748 9 years ago
Great work, almost answered all my questions. Thanks
@GabiRav 10 years ago
Great explanation, but... MONTE CARLO IS IN MONTE CARLO, not in LAS VEGAS :-)
@TanguyI 9 years ago
You Americans, so egocentric :-P Very clear video BTW. Thanks!
@JP-re3bc 7 years ago
Ah the legendary quality of American public education. Yes! Monte Carlo is in Africa, and Africa is some place in the south of Europe. No?
@RalphDratman 4 years ago
The town of Monte Carlo is in the tiny principality of Monaco (that is, a territory originally ruled by a prince) on the Mediterranean coast of France. Monte Carlo was -- and still is -- famous for its iconic, palatial gambling casino.
@ddaniel5857 11 years ago
It is really being of great help for me, thank you very much!
@QuantCoder 12 years ago
Nicely done. Would have been better if the Hastings correction to alpha was discussed. It was mentioned and even kept in the presentation, but then neglected. Seems either losing it, and justifying the loss would be good, or leaving it out would be better.
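Since the Hastings correction comes up here and never gets used in the video, a minimal sketch of where it enters the acceptance ratio may help. This is not the video's MATLAB code; it is a generic Python sketch, and the names log_p, propose and log_q are placeholders chosen for illustration:

    import numpy as np

    def mh_step(x, log_p, propose, log_q, rng=np.random.default_rng()):
        """One Metropolis-Hastings step.
        log_p(x): log of the (possibly unnormalized) target density.
        propose(x): draws a candidate x_c from q(. | x).
        log_q(a, b): log density of proposing a when the chain is currently at b."""
        x_c = propose(x)
        # Hastings correction: the q-ratio compensates for an asymmetric proposal.
        log_alpha = (log_p(x_c) - log_p(x)) + (log_q(x, x_c) - log_q(x_c, x))
        if np.log(rng.random()) < min(0.0, log_alpha):
            return x_c   # accept the candidate
        return x         # reject, stay at the current point

For a symmetric proposal such as a Gaussian random walk, log_q(x, x_c) equals log_q(x_c, x), the correction cancels, and the rule reduces to the plain Metropolis acceptance used in the rest of the presentation.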
@tamerkhraisha6974 7 years ago
Excellent explanation
@ankitranjan8292 8 years ago
This is an awesome lecture that clears up the MCMC concept. I am curious to know how we can apply it to partitioning jobs on 2 parallel machines in order to minimize makespan?
@metalismystyle 10 years ago
Great video! Do you know how I would use the Metropolis algorithm to select random points from the tails of a Normal Distribution (or do we always have to sample from a Uniform distribution?) at a higher probability than selecting points close to the mean? i.e. I need the target distribution to be a Normal Distribution and the proposal distribution to be the tails ((-4*sigma, -3*sigma) and (3*sigma, 4*sigma)) of the Normal Distribution? Is this possible? Thanks a lot!
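One way to read the question above is that the target should be the normal density restricted to the tail band 3*sigma <= |x| <= 4*sigma, with zero density elsewhere; the Metropolis algorithm handles that directly. A Python sketch under that assumption (the band edges, step size and chain length are illustrative; the negative tail is recovered by symmetry, since a small random-walk step will essentially never jump between the two tails):

    import numpy as np

    rng = np.random.default_rng(0)

    def target(x):
        # Standard-normal density restricted to 3 <= x <= 4 (the positive tail only);
        # an unnormalized density is fine for Metropolis.
        if 3.0 <= x <= 4.0:
            return np.exp(-0.5 * x**2)
        return 0.0

    x = 3.5                                      # start inside the support
    samples = []
    for _ in range(50_000):
        x_c = x + 0.3 * rng.standard_normal()    # symmetric random-walk proposal
        if rng.random() < min(1.0, target(x_c) / target(x)):
            x = x_c
        samples.append(x)

    # Recover the negative tail by flipping signs at random (the target is symmetric).
    signs = rng.choice([-1.0, 1.0], size=len(samples))
    both_tails = signs * np.array(samples)       # points in (-4,-3) U (3,4)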
@ribaat2024 11 years ago
I couldn't agree more with you! Well done, author!!
@nautiyogi8386 6 years ago
Brilliant tutorial !
@ateoc9246 4 years ago
At 31:41, do you have any justification for choosing the accept/reject test function like this? If yes, where can I find it?
@antonmarkov3715 6 years ago
Thank you very much, that helped me a lot!
@renzocoppola4664 7 years ago
You made it sound easy.
@VisajDesai 5 years ago
Hey Jeff, how does the software construct the normpdf of x(i) and x_c in the gaussian code example? Considering we start off with only a single x(i) value and then sample a single point x_c, how can one create an entire pdf to be used in the equation?
@SaulBerardo 11 years ago
I'm also confused. A clarification about it would be welcome.
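On the normpdf question in this thread: normpdf does not build a whole distribution out of the samples; it simply evaluates the known target density at a single point, and the acceptance ratio only ever needs those two point evaluations. A tiny Python equivalent with made-up values for x_i and x_c:

    from scipy.stats import norm

    x_i, x_c = 0.8, 1.1          # current sample and proposed candidate (illustrative values)
    p_i = norm.pdf(x_i)          # N(0,1) density evaluated at the single point x_i
    p_c = norm.pdf(x_c)          # density evaluated at the candidate
    alpha = min(1.0, p_c / p_i)  # Metropolis acceptance ratio for a symmetric proposal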
@bobcrunch 8 years ago
Good job, but you missed the punch line at 7:10 that a histogram of the number of times you land in an interval matches the shape of the curve; i.e., the number of times is a maximum in an interval centered at 0 and falls off in both directions. Maybe it was obvious to others, but maybe I'm a little slow.
@haseebshehzad2372 7 years ago
I need the document presented in the video. Any help? Thanks
@MaxKesin 8 years ago
Great video - do you have any more from this class?
@WoeiPatrickP90 6 years ago
Hey you play League of Legends too bro??? me too hahaaa
@jacobm7026 5 years ago
Jeff, you're fantastic for doing this. I've been struggling all semester trying to grasp this concept intuitively. I've finally seen the light
@lukechen8606 8 years ago
This video is cool! I really like the two examples you give, illustrating the idea of MCMC concretely and clearly. Thanks!
@cdclaxton 7 years ago
Just in case it helps someone watching this very good video, here is some R code to demonstrate the Metropolis algorithm: # Metropolis algorithm -- Gaussian distribution library(ggplot2) mu
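The R snippet above appears to be cut off by the page capture, so only its first line survives. For reference, here is a comparable self-contained sketch in Python (not the commenter's code): random-walk Metropolis targeting a Gaussian, with mu, sigma, step size and chain length chosen purely for illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    mu, sigma = 0.0, 1.0                        # parameters of the assumed Gaussian target

    def log_target(x):
        return -0.5 * ((x - mu) / sigma) ** 2   # log N(mu, sigma^2), up to a constant

    x = 0.0
    chain = np.empty(20_000)
    for i in range(chain.size):
        x_c = x + 0.5 * rng.standard_normal()   # symmetric random-walk proposal
        if np.log(rng.random()) < log_target(x_c) - log_target(x):
            x = x_c
        chain[i] = x

    # Discard a burn-in; the remaining draws should have mean near mu and std near sigma.
    print(chain[2_000:].mean(), chain[2_000:].std())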
@GoodTechConf 7 years ago
When you present Markov chains, it seems to me that your Xi means two things. Xi as a vector is the GLOBAL state of the automaton at time i. And you say Xi is also a single state of the automaton. A better way would be to say Xi is the global state, and to name the individual states Sj: Xi = {S1, S2, ..., Sn}
@gerarudnik9534 4 years ago
was looking for this comment. thank you!
@ahme0307 11 years ago
At 15:33 the first product of X0=[0.5 0.2 0.3] with T is not equal to [0.2 0.6 0.2]. Actually it is [0.18 0.64 0.18], and it converges to [0.2213 0.4098 0.3689]. Am I missing something?
@RodrigoSilva-yn4on 5 years ago
I guess you're right! I also realized that, that's why I decided to read the comments!
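The transition matrix T from 15:33 is not reproduced in the comments, so the specific numbers cannot be checked here, but the iteration itself is easy to reproduce: repeatedly multiply the state row vector by T until it stops changing. A short Python sketch with a made-up row-stochastic T, purely to show the mechanics:

    import numpy as np

    T = np.array([[0.5, 0.3, 0.2],      # hypothetical transition matrix (rows sum to 1)
                  [0.2, 0.6, 0.2],
                  [0.3, 0.3, 0.4]])
    x = np.array([0.5, 0.2, 0.3])       # starting state distribution from the comment

    for _ in range(100):
        x = x @ T                       # one step of the chain: row vector times T

    print(x)                            # approximates the stationary distribution pi = pi T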
@jollyrogererVF84 1 year ago
Monte Carlo named after a casino in Las Vegas? You need to get out of the US a bit more 😂 What about Monte Carlo, the gambling city in Europe predating Las Vegas by a couple of hundred years! Otherwise a great video. Cheers
@papiedra 6 years ago
I didn't understand the difference between the Metropolis algorithm and MCMC?
@spurious 3 years ago
Your history of the naming is wrong. Monte Carlo methods get their name from the gambling den in Europe where Ludwig Boltzmann lost his savings trying to use math to win, using methods that now qualify as Monte Carlo techniques
@czarekkawecki6548 1 year ago
The video is great, but why would you think that the name comes from a casino in Las Vegas and not from the original one in Monaco, which the American one was named after?? 😂😂
@hmsn22 8 years ago
One of the best explanations of MCMC I have seen on the web. Wonderful job. Wonderful.
@paulfrischknecht3999 9 years ago
@3:00 Wiki says it's from Monte Carlo in Monaco.
@picjeffton 11 years ago
Well there is a Monte Carlo in Vegas... but ya you're right.
@zilezile4942 4 years ago
Learn more about logistic regression with R drive.google.com/file/d/1qcq_186AMe2XK9aNiSLxLbvXlAmryWXX/view?usp=sharing
@arnaldopereira8435 2 years ago
Make more videos, Jeff!
@Mark-IamNum1 1 month ago
It is named after the casino in Monte Carlo - not in Las Vegas.
@juliusctw 9 years ago
Thanks for the video. I have some questions. Let's say that we didn't know that the distribution was Gaussian: how do we decide what proposal distribution to use? Even if we knew that the distribution is Gaussian, how did you know to use normpdf (which already centers at 0 with sigma of 1)? If the actual distribution were N(2,1) instead, would you still use normpdf?
@great2816 5 months ago
The Monte Carlo name came from the famous casino in Monaco, not Vegas, I believe.
@콘충이 3 years ago
Thank you so much! This vid is really helpful. Can you explain why the algorithm (22:28) creates N(0,1) instead of N(0,10) or N(0,140), etc.? Is it because the normpdf is based on N(0,1)?
@lauramanuel7619 8 years ago
Thanks for the code. As a programmer, seeing how something would be coded makes a lot more sense than seeing a mathematical formula. :) The last example was also quite useful and a great way to tie it all together.
@bv9613 5 years ago
Interesting. About the climate example: wouldn't cloud formation be important, since albedo was, and perhaps that would be more important than the feedback, or just as important?
@DreamWorker-jm5xn 5 years ago
Some "professors" teach students just to show how much they know about the topic, by using alien language (edit: but some are good profs). I spent hours on that language, but here I can understand MCMC within 36 minutes. You're a superhero!!
@abdullahalsulieman2096 1 year ago
Jeff, I have an algorithm that I need help interpreting.
@Overdose21127 12 years ago
I spent dozens of hours reading papers about MCMC. All that is sh... RU-vid is the best source of any knowledge. Evidence of this is the lecture above. Well done, author, well done... Thanks
@olofsamuelsson8759 10 years ago
Thank You for putting effort into making this presentation. However, it is presented like theory, theory, theory, practice, practice. That, from my experience, is not as effective as theory, practice, theory, practice... From every bit of information it should be clear how to apply it. Watching this, I lost it somewhere in the middle, because there was no possibility to instantly test my understanding. When looking at the MatLab examples, it is not clear what is observed, what is predicted, why it is done like that, and how You plotted that graph when there is no code for it. Yes, You have presented part of this information in the theory, but it is like reading a book about a bicycle and hoping that it will teach you to ride a bicycle.
@paulfrischknecht3999 9 years ago
You say the method will visit the nodes a number of times proportional to "their probability". But we don't give any probability to the nodes a priori, so really the output of the method *defines* this "per node probability", no?
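On the question above: the per-node probability is not defined by the sampler's output; it is whatever (possibly unnormalized) target p you plug into the acceptance ratio. The Metropolis rule is constructed so that detailed balance holds with respect to that p. Sketching the standard argument for a symmetric proposal q (the middle equality uses q(x -> x') = q(x' -> x)):

    p(x)\, q(x \to x')\, \min\!\left(1, \frac{p(x')}{p(x)}\right)
        = q(x \to x')\, \min\bigl(p(x), p(x')\bigr)
        = p(x')\, q(x' \to x)\, \min\!\left(1, \frac{p(x)}{p(x')}\right)

so p is a stationary distribution of the chain, and (given irreducibility and aperiodicity) the long-run visit frequencies converge to p.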
@RAP4EVERMRC96 2 years ago
Nice lecture, what's your Elo? :p
@vidyashankar1389 9 years ago
Everything was brilliant!! Great job. I'm also interested in knowing your approach to the functions step_param and ebm_model, as it could paint a clearer picture. Thanks in advance.
@SergioHernandez-wd7mb 7 years ago
Hi, great tutorial, thanks. I have a couple of doubts. At 29:30, about the initial guess: what literature can I read to determine such a value for the initial guess? At 30:00, about the proposal distribution and the cost function: is there any other tutorial or literature on how to design such a proposal distribution, or should using exp(-cost) suffice for a wide range of phenomena and datasets? Thanks again
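On the exp(-cost) part of the question: one common reading, and it appears to be the one the video uses, is that the cost defines the unnormalized target directly. Under that assumption, for a symmetric proposal the acceptance probability depends only on the change in cost:

    p(\theta) \propto e^{-C(\theta)}
    \quad\Longrightarrow\quad
    \alpha = \min\!\left(1, \frac{p(\theta')}{p(\theta)}\right)
           = \min\!\left(1, e^{-(C(\theta') - C(\theta))}\right)

so any proposed move that lowers the cost is always accepted, and uphill moves are accepted with a probability that decays with the size of the cost increase.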
@Paivren 6 years ago
So at 19:30, the q distribution is equivalent to the transition matrix T from the markov chain formalism at 14:00, right?
@harmonyliu8239 7 years ago
One question: how do we choose the proposal q? Are there any requirements for this choice?
@jeremyjacobsen4300 9 years ago
Great lecture. Thanks for showing code. This is the most straight forward MCMC tutorial that I've seen on youtube thus far.
@gauthamchandra2081 4 years ago
very coherently explained, most videos go into unnecessary esoteric detail.
@paradox9086 10 years ago
Thank you so much for a very clear explanation
@francisbaffour-awuahjunior3099 3 years ago
What is the explicit equation for the energy balance model?
@paulfrischknecht3999 9 years ago
I don't see the difference between irreducible and aperiodic. IMO the graph is aperiodic (in the sense that there is no subgraph where we will get stuck) iff it is irreducible (for every pair of states (x,y), x and y are mutually reachable with nonzero probability).
@ahealey5961 9 years ago
Paul Frischknecht: irreducible means the probability of reaching any state, starting from any other state, is positive. The periodicity, d, is the largest integer such that returning to a certain state i always takes a multiple of d steps, i.e. if you can reach i after {2,4,6,8,10} steps then d=2, since these are {2, 2(2), 2(3), 2(4), ...}. An aperiodic MC would have return times like {2,3,4,6,7}: here no d greater than 1 generates all the return times, so d=1.
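A quick way to check the period described above is to take the gcd of the possible return times; a tiny Python check:

    from functools import reduce
    from math import gcd

    print(reduce(gcd, [2, 4, 6, 8, 10]))   # 2 -> period 2
    print(reduce(gcd, [2, 3, 4, 6, 7]))    # 1 -> aperiodic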
@yuanyuan3056 7 years ago
Very clear explanation!
@waguebocar9680 7 years ago
very programm monte carlo
@piotrbjastrzebski 10 years ago
Something that presents MCMC in a concise and clear way. Like it a lot.
@picjeffton 11 years ago
Typically all of the molecules would be altered at once, as the position of each molecule is a variable parameter and the collection of these constitutes a state of the system. I described moving them individually to simply convey the intuition of making small changes to the system. But my intuition tells me that selecting single molecules with random reselection would be fine and preserve ergodicity.
@gumbo64 1 year ago
easily the best MCMC explanation I've seen, huge thanks
@dsm5d723 3 years ago
Taleb brought me here; the Kali Yuga keeps me grinding.
@picjeffton 11 years ago
I agree. I just didn't feel like opening latex to write out the equation and just took a screen cap of it from a paper I had.
@grandeterra1698 8 years ago
Jeff, thank you for these videos. I am self-studying MCMC; is there any chance that you could share the simulation code?
@Mooorifo 10 years ago
Have you got a written program for the disks?
@SandroBoschetti 11 years ago
Thank you very much for your great lecture. It is really being of great help for me.
@picjeffton 11 years ago
You're quite right. For the purposes of this video though, let's just pretend that is how arithmetic works.
@PedroRibeiro-zs5go 6 years ago
Very very good explanation!! Thanks! :D
@xenonmob 3 years ago
snazzy intro music
@rafaellinhares153 5 years ago
you must play dota.
@undertehlaw 11 years ago
At 9:58, when "another" molecule is chosen, was that through a process that had a chance of reselecting the first molecule again?
@marcosmetalmind 4 years ago
very good
@chloeduan8301 8 years ago
This is so great, thank you!
@ruili6415 4 years ago
Clear explanation. Thank you, Jeff. A question on my mind is: how do we set the judgement criteria during the model iteration?
@JuliaLondonChannel 5 years ago
Great video 👍🏻
@hannahshen2907 4 years ago
That is a really good explanation! Thank you!!!
@ablack0 8 years ago
Thanks for this great explanation!
@cliffwang5481 7 years ago
Thanks so much for your inspiring explanation!
@yonatan1myers 10 years ago
At last a clear explanation of this
@jonathansmall4573 7 years ago
I tried running that matrix program. Unfortunately it doesn't converge to (0.2, 0.4, 0.4) as you said. I don't know what I am doing wrong.
@picjeffton 7 years ago
Jonathan Small I messed up the arithmetic in that example.
@jonathansmall4573 7 years ago
He he. Actually I tried again. This time using in-built matrix multiplication function in Python. It worked. Thanks :)
@FA-tq9ip 4 years ago
@@picjeffton When I take the product of the starting state X0 and the Markov transition matrix, I do not get the probabilities of the next state X1 as shown, [0.2, 0.6, 0.2], but rather [0.18, 0.64, 0.18]. Am I doing the multiplication wrong, or is that part of the arithmetic error? Thanks for your help and the video.
@leonardomaffeidasilva9774 3 years ago
thank you. Really helped me
@TheGoodInquisitor 10 years ago
Thank you for your clearness. Now I really have an idea.
@momnaahsan8079 4 years ago
Great lecture. Thank you.
@MrGeorgerififi 7 years ago
nice simple examples. thank u
@stipepavic843 7 years ago
thx alot ! also good old league of legends days XD
@harmonyliu8239 7 years ago
So nicely explained!!!!! Thank you !!!!
@225kirt 11 years ago
I liked the song
@bobcrunch 11 years ago
I get the same answer.
@SoumakBhattacharjee08 5 years ago
nice video.
@MrFenh 7 years ago
Great video. Thank you, Jeff!
@dannyndnyad4182 6 years ago
18:24 u are welcome
@scottmacnevin3555 7 years ago
Well done! Thank you
@rafaellima8146 7 years ago
Thank you so much!
@GabiRav 11 years ago
Can someone explain this?