Importance Sampling

Mutual Information
73K subscribers · 62K views

Published: 25 Sep 2024

Comments: 233
@guilhermecorrea3604 2 years ago
I think people underestimate how good this channel is. Can't wait for it to blow up! Good job
@Mutual_Information 2 years ago
lol I'm going with the slow and steady strategy
@pradkadambi 2 years ago
The quality of these videos is always phenomenal.
@nikoskonstantinou3681 2 years ago
I'm still confused... why haven't you blown up yet?? Your content is levels higher than a lot of stuff on RU-vid!
@Mutual_Information 2 years ago
lol I hear it takes time for the algorithm to like you. I'm not terribly worried. Slow and steady for now
@marcinelantkowski662 2 years ago
This must be the best explanation of importance sampling available online, or at least on YT. And this channel in general is such a gem. Can't wait for more of your content
@Mutual_Information 2 years ago
Second donation ever! Thank you! And yes more is coming. I'm working on a big fat series, hence no recent vids. But they're coming
@slayerxyz0 1 year ago
One interesting use of importance sampling is in path tracing (similar to ray tracing) in computer graphics, since path tracing is a Monte-Carlo method for computing the rendering equation. You can use importance sampling to get a better (less noisy) image with the same number of samples by using a sampling distribution which provides more frequent samples where the contribution from the BRDF/BSDF is higher, essentially sampling fewer dim paths which don't contribute to the total lighting of a pixel.
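The path-tracing use described above can be sketched in one dimension. This is an illustrative toy, not renderer code: `brdf`, `light`, and the proposal are invented stand-ins. It also shows the limiting case the comment hints at: when the sampling density q is proportional to the integrand, every weighted sample returns the same value and the estimator has zero variance, which is why renderers like to sample in proportion to the BRDF.

```python
import numpy as np

# Toy 1-D "rendering" integral: estimate I = integral over [0, pi/2] of
# brdf(theta) * light(theta) dtheta. brdf and light are invented stand-ins.
brdf = lambda t: np.cos(t)         # lobe concentrated near theta = 0
light = lambda t: np.ones_like(t)  # constant incoming radiance

rng = np.random.default_rng(0)
n = 100_000

# Proposal q(theta) = cos(theta) on [0, pi/2] (integrates to 1, so a density).
# Inverse-CDF sampling: F(theta) = sin(theta), so theta = arcsin(u).
u = rng.uniform(0.0, 1.0, size=n)
theta = np.arcsin(u)

# Importance-sampling estimator: average of integrand / proposal density.
est = np.mean(brdf(theta) * light(theta) / np.cos(theta))
# Here q is proportional to the integrand, so every sample contributes the
# exact answer I = 1 and the estimator has zero variance.
```

With a uniform proposal instead, the same estimator would still be unbiased, just noisier.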
@jobiquirobi123 2 years ago
I like how you really go deep on uncommonly shown but very powerful techniques.
@Mutual_Information 2 years ago
hell yea! Trying to make this for people who actually want to use this stuff one day. All these details become important.
@AlexG4MES 2 years ago
Every single word you say is the bare minimum needed to convey and explain the full meaning of the formulas. Congratulations, and thank you for being an excellent teacher.
@Mutual_Information 2 years ago
Thanks a lot - glad you appreciate the script
@usptact 3 months ago
Hallelujah! Finally got a simple explanation of what Importance Sampling is! Thanks a ton!
@bernardosantosrocha6629 9 months ago
Just sending a thanks for the clarity of the graphs. Painting the samples the color of the distribution is a great touch.
@pjamshidian8 1 year ago
Fantastic video. It's clear that you put a massive amount of effort into your graphical representations and explanations!!
@Mutual_Information 1 year ago
Yea I'm hoping that'll make the difference in the long run
@yehiamoustafa9801 2 months ago
Best explanation of importance sampling I have found, thanks a lot.
@jeremywinston4447 2 years ago
This is how you're supposed to make an explanation video. Very, very clear and concise. Well scripted, well organized; keep up your great work!!
@Mutual_Information 2 years ago
Ha yea the script is the hard part!
@Recessio 4 months ago
This is THE best explanation of importance sampling I have come across. I'm studying for a PhD in Astrophysics, I've been linked to so many textbooks and college courses that make it really confusing. This was so simple and has really helped me understand this and move on to further topics. Thank you so much!
@Mutual_Information 4 months ago
Thank you for telling me - I love hearing about those cases where this stuff hits just right!
@kafaayari 2 years ago
Well, I was trying to understand variational inference with no luck. This gem helped me. To be honest, this is the best video on the topic, and this guy is a brilliant teacher. Please make more videos of this kind.
@Mutual_Information 2 years ago
Thanks! Variational inference will be covered one day - promise!
@Eric-jh5mp 1 year ago
Wow, that's an awesome explanation. I'm taking a Monte Carlo stats class right now, and this was far clearer than my professor was about what is actually happening here. Great video!
@Mutual_Information 1 year ago
Happy to hear it, Eric!
@_Mute_ 2 years ago
You earned this sub. Fantastic quality! This is also the most intuitive explanation of a concept like this I've ever seen! I sometimes think other channels with similar topics either ramble a bit much or go too fast in parts and I get lost, but this is just the right amount of building the foundation slowly and confidently to arrive at the final idea. Keep going with these videos and you are sure to get algorithmed eventually 👍
@Mutual_Information 2 years ago
Thank you very much! It's a work in progress too. I'm learning the rhythm and what does/doesn't need to be said. Things will get better and I'm sure it'll get recognized.
@sanjaythorat 2 years ago
I second your opinion @Mute. Thanks @Mutual Information for the video.
@giuliomosemancuso9478 26 days ago
I've never seen someone in pajamas explain things so well! Good job!
@1ssbrudra 11 months ago
This is exceptionally well explained. Just one suggestion: when walking through the analytical steps, remove yourself from the frame, then bring yourself back. It grabs attention instantly.
@Mutual_Information 11 months ago
Smart idea, I'll try that. Seriously, you'll see in the next vid, thanks!
@tobiasopsahl6163 2 years ago
Excellent video! I find myself lost in graduate statistics books, since they often explain concepts like this in terms of many other statistical concepts that I don't always understand well. This certainly helped broaden the perspective a bit. It is easy to find excellent resources on the most common and hyped methods, but not on important yet often overlooked topics like this. Thanks!
@Mutual_Information 2 years ago
Thank you - that's a big point of the channel. All the basic topics get covered at a high quality level, but there's clearly a real appetite for a few steps beyond them.
@mingtianni 1 year ago
Such a beautiful talk! I was searching for an intro on importance sampling. And this is beyond my expectation. Thank you.
@wuchunricardo4846 1 year ago
My professor said nothing about importance sampling; this clip really helped me understand.
@BehrouzMousavi 3 months ago
Perfect intro. Please share more of the available methods for finding q(x)!
@natnaeldaba7317 1 year ago
The best explanation of Importance Sampling I've seen so far. Good job!!
@grahamjoss4643 2 years ago
thanks for sharing. I'm an undergrad CS student and this was cool
@Mutual_Information 2 years ago
Glad it helped - there's plenty more to come!
@Mutual_Information 2 years ago
Also, if this topic is covered in any of your classes... I would greatly appreciate the favor of sharing this vid with the class :)
@yli6050 1 year ago
Amazing visualization and lucid explanation ❤ This is the kind of video that brings you the joy of understanding and an appreciation of the beauty of the math and the people behind the original idea! Bring your favorite wine to watch this!
@Mutual_Information 1 year ago
You're too kind Y Li - thank you!
@vaek_54 2 years ago
Nice video, thank you! The last condition for "When is Importance Sampling used" is a sufficient condition for the use of IS rather than a necessary one, in my opinion. In Reinforcement Learning we try to evaluate values (the f(x)) for a target policy (the p(x)) using a sampling policy (the q(x)). IS is used because using p is not sample efficient, as p can only be used with recently sampled data. Using q allows us to make use of all the data sampled since the beginning of training. But we are not at all choosing q to be high where |pf| is.
@Mutual_Information 2 years ago
!! It's wild you mentioned that. I actually made this vid as a pre-req to my RL sequence. Yes! The IS case I mentioned here is not the full story. I tried to allude to that a bit in the intro :)
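The off-policy reinforcement-learning use discussed in this thread can be sketched with an invented two-armed bandit (all numbers here are made up for illustration): actions are logged under a behavior policy q, and each observed reward is reweighted by p(a)/q(a) so the average estimates the target policy p's expected reward.

```python
import numpy as np

# Off-policy evaluation via importance sampling on a toy two-armed bandit.
# behavior (q) generated the data; target (p) is the policy we evaluate.
rng = np.random.default_rng(1)
behavior = np.array([0.5, 0.5])     # q(a): data-collection policy
target = np.array([0.9, 0.1])       # p(a): policy we want to evaluate
arm_reward = np.array([1.0, 0.0])   # deterministic reward per action

n = 100_000
actions = rng.choice(2, size=n, p=behavior)    # logged behavior-policy actions
rewards = arm_reward[actions]
weights = target[actions] / behavior[actions]  # p(a)/q(a) per logged action

est = np.mean(weights * rewards)  # estimates E_p[reward] = 0.9
```

Note this matches the comment's point: q was not chosen to be high where |pf| is high; it is simply the distribution the data happened to come from, and the weights correct for that.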
@alexmtbful 2 years ago
Wow - this must have been a lot of work to do. A clear structure, so many details, theoretical knowledge as well as practical tips, astonishing/valuable graphics and super clear audio. Thank you!
@Mutual_Information 1 year ago
You nailed it - it was a lot of work lol. Thanks for noticing :)
@FlorentinoDing 9 months ago
I spent nearly two days trying to work this out, and all you did was show me some figures. That's incredible, thanks!
@Mutual_Information 9 months ago
My job is done ;)
@CYQ-sg2yu 8 months ago
Very professional explanation on every detail of IS!
@apah 4 months ago
What an excellent explanation. Glad to see your latest video is performing well!
@tslau8022 2 years ago
Among all the videos I've found on YouTube about importance sampling, this video is so far the best explanation.
@Mutual_Information 2 years ago
That's a win!
@Siroitin 5 months ago
This channel is so haunting. It's like no matter what I search, this channel always returns
@tolkienfan1972 3 months ago
This was SSSOOOOOO much easier to understand than the Wikipedia page! Thank you!
@stijnh1974 1 year ago
Thank you very much for the great intuition on this technique! I am using it to understand the SMC algorithm, where importance sampling is a key ingredient.
@Mutual_Information 1 year ago
Excellent, glad it helps
@LuddeWessen 2 years ago
Somehow you manage to give intuition _and_ technical detail. Fantastic video, like all your other videos! 😎
@yodarocco 1 year ago
I think this is the kind of video you have to watch when you already have a rough idea of what the algorithm does; then it helps you summarize and understand better.
@zuhair95 2 years ago
OMG, YOU SAVED ME from excessive thoughts about how to bring the theoretical side into practice (in particle-filter-based SLAM algorithms for probabilistic mobile robotics systems). Many thanks.
@Mutual_Information 2 years ago
excellent! Glad I could help
@jiangpengli86 3 months ago
Thank you for this fantastic tutorial video. It really helps a lot.
@stergiosbachoumas2476 1 year ago
That was actually a very nice way of presenting Importance Sampling. Thank you!
@Mutual_Information 1 year ago
Glad you liked it and thanks for watching ;)
@jacoblynd2808 1 year ago
Fantastic video! I'm giving an internal lit review on quasi-adiabatic path integrals and this really helped me get some perspective on the core of the method! Super clear lecture and great use of visuals! Thank you so much!
@Mutual_Information 1 year ago
Excellent, glad it helped!
@123ming1231 1 year ago
I subscribed to the channel because of this video; the quality is insane.
@Mutual_Information 1 year ago
Thank you Ming ;)
@dexio85 2 years ago
These topics are widely used in computer graphics, but they are usually explained in a convoluted way. For example, I only understood what "unbiased" means thanks to your explanation. You have a talent for explaining things!
@Mutual_Information 1 year ago
Thank you RexDex!
@joaofrancisco8864 11 months ago
That is absurdly well explained. Very high quality in every aspect of the video!
@Mutual_Information 11 months ago
Thank you - more good stuff coming!
@wendyqi4727 1 year ago
Omg, I struggled with these concepts for a while. Thank you so much for the explanation and visualization!
@Mutual_Information 1 year ago
The struggle is over Wendy! Happy it helped :)
@ArnaldurBjarnason 1 year ago
I stumbled upon your kelly criterion video some time ago and liked it. Now, properly looking at your channel, I'm blown away. Really high quality explanations (props to the usage of manim as well) of hard to understand ideas 👏👏👏
@Mutual_Information 1 year ago
Oh yea, the quality is improving. Took me a long time but I think I'm getting the essentials. I'm also not using Manim.. maybe I should but I've always wanted to build something bespoke for this.
@shounakdesai4283 7 months ago
Great video. I bounced off a lot of videos on importance sampling, and this was the best of all.
@olofjosefsson4424 1 year ago
Great video! If I could add anything, it would be 2-3 questions at the end of the presented material to check whether you grasped the key points of the video (with answers in the description)! Thank you.
@Mutual_Information 1 year ago
That's... a good idea. OK, I think I'll give that a shot in a future video. I need some ways to build interaction with the audience. Thanks!
@jeroenritmeester73 1 year ago
I think the pace of this video is great, but I missed the motivation for it up until the very end. The why should generally come first: "why do I need this explanation?"
@spyder5052 1 year ago
Like many others, I’m surprised you’re not bigger than you are! I’ve been binging your videos and they’re all very high quality. Liked and subbed 😊
@062.jannatulferdausanu7 7 months ago
This is the best video for understanding importance sampling. Thank you ❤
@cwaddle 1 year ago
Great intuitive recap of Jensen's inequality!
@hw5622 1 year ago
Nice video! Thank you for the succinct explanation for a first understanding!
@sunilmathew2914 1 year ago
Great video. Really liked the visualizations.
@Mutual_Information 1 year ago
Thanking me in dollars - thank you very much!
@mdnafi3650 7 months ago
Man! I wish I could learn real-time analysis from you!! Superb!!!
@pepinzachary 4 months ago
Fantastic video, well done! I'm watching for path tracing rather than ML :)
@djfl58mdlwqlf 2 years ago
Great to see you again. I have no idea why your video has so few views... This deserves millions.
@Mutual_Information 2 years ago
lol thank you, we'll see! Millions is a very, very high bar for technical stuff. I'm happy with a lot less.
@wasifhaidersyed3813 2 years ago
Awesome! Keep it up, man! Your dedication level is touching the seventh sky!
@covers3212 1 year ago
impressive teaching skills, this was an amazing lesson
@manolisnikolakakis7292 1 year ago
Thank you so much for this. A topic I considered very complex is now crystal clear thanks to you!
@Arkantosi 1 year ago
Great channel! Lucky I found this. I like the quality of the presentation and the LaTeX math displayed. Well done sir!
@Mutual_Information 1 year ago
I'm for the people who think LaTeX looks beautiful.
@SamChiu-m9b 10 months ago
Amazing explanation. Top-notch delivery!
@jessicasumargo6547 1 year ago
Thanks for making statistics feel comprehensible for me.
@aliasziken7847 1 year ago
high quality, excellent tutorial, thx
@samsonyu5679 1 year ago
Very useful, the intuition, visualizations and math have a nice combined flow!
@Mutual_Information 1 year ago
Thanks Samson - glad you liked it. Come back anytime ;)
@JesusRenero 2 years ago
Excellent explanation and video! Congratulations on that, and THANKS!
@iloraishaque2594 2 years ago
Fantastic explanation, thank you.
@posthocprior 1 year ago
A good explanation. Thanks.
@draggerkung4847 2 years ago
Thank you. It's very clear.
@cziffras9114 4 months ago
Now the true question is: how can one be clearer than that? Wonderful work, thank you so much
@flooreijkelboom1693 2 years ago
Amazing video, thank you for this.
@RahmanIITDelhi 2 years ago
One of the best explanations I have seen so far. If you could show how to code it in Python, that would be helpful. Thanks!
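Responding to the request above, here is one possible sketch in Python. The distributions are arbitrary illustrative choices, not from the video: target p = N(0, 1), proposal q = N(0, 2²), and f(x) = x², so the exact answer E_p[f] = 1 is known.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def importance_sampling(f, n=200_000, seed=0):
    """Estimate E_p[f(X)] for p = N(0, 1) by sampling from q = N(0, 2^2)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 2.0, size=n)                       # samples from q
    w = normal_pdf(x, 0.0, 1.0) / normal_pdf(x, 0.0, 2.0)  # weights p(x)/q(x)
    return np.mean(w * f(x))                               # unbiased estimate

est = importance_sampling(lambda x: x ** 2)  # true E_p[x^2] = 1
```

Swapping in a q that is small where |p·f| is large would make the same code much noisier, which is the failure mode the video warns about.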
@monuk4594 1 year ago
Loved the vid. Thanks a lot, and I appreciate the effort that went into making this. Keep up the good work; hoping this channel grows big.
@Mutual_Information 1 year ago
Thank you - glad you like it!
@migueliglesiasalcazar8334 9 months ago
Absolutely great video. Keep making this kind of content please. It is very helpful!
@wqwq2024 6 months ago
Excellent job. Thank you!
@kirar2004 1 month ago
Very clear explanation! Thanks!
@welcomeaioverlords 2 years ago
Well done! And thank you.
@raphaelbaur4335 2 years ago
Wonderful animations!
@ruslansergeev4061 11 months ago
An absolute phenomenon 💪💪💪 Beautiful explanation.
@MeshRoun 2 years ago
I think what I like most about your videos is the reference book on your right, always showing up :) Do you have a complete list of your recommended/favorite books?
@Mutual_Information 2 years ago
lol you noticed! Yea these textbooks are essential :) I think one day I'll put together a list of my favorites. I can tell you a few of them here: Machine Learning: A Probabilistic Perspective is definitely my number 1. There's actually a new edition available for pre-order on Kevin Murphy's site. Second would be The Elements of Statistical Learning, a classic. Then Deep Learning by Goodfellow et al. And, just because I'm reading it right now, I really love Reinforcement Learning by Sutton and Barto. It does a great job creating a unifying framework for a wide and rapidly evolving field.
@sjpbrooklyn7699 7 months ago
You said: "The dimension of x is high ... This integral is impossible to calculate exactly ... A small set of samples have an outsize impact on the average." This describes my doctoral dissertation problem in polymer chemistry. I wanted to determine average thermodynamic properties of a desirable variable like the end-to-end distance or radius of gyration (average distance of molecular units from the center of mass) of a very long polymer molecule of, say, several thousand units. In thermodynamics this is the integral of [the end-to-end distance times exp(-U/kt)] d(tau), where U is energy, t is temperature, and k is Boltzmann's constant, divided by the integral of exp(-U/kt) d(tau), also called the partition function. Tau is the volume element of the phase space for the molecule and represents all possible geometric conformations or shapes of the molecule. The conformation of the polymer is completely defined geometrically by listing the dihedral angles about successive backbone atoms from one end to the other (ignoring side chains for simplicity). Given such a list, you can generate all of the molecule's coordinates in three dimensions, from which you can then calculate the energy of the molecule, U, using any number of standard chemical functions. Each of the dihedral angles can vary continuously from 0 to 2pi. The multi-dimensional "phase space" defined by tau is unwieldy because tiny changes in any dihedral angle can bring distant atoms together in energetically unpredictable ways, and there is no analytical solution to the integral. In the 1950s Monte Carlo methods were used to generate coordinates for a single polymer molecule by using a random number generator to create a list of dihedral angles and then calculating (a) the end-to-end distance of the polymer whose angles corresponded to the list and (b) its energy.
In a single computer "experiment" researchers could generate thousands of polymers and calculate the average end-to-end distance using the exponential function as the weighting function. In principle this worked, but in practice polymers with very high energies due to atomic overlaps, and therefore very low weights, dominated the outputs, so the averages converged too slowly to be useful. In the 1960s Moti Lal, a chemist at Unilever Labs in the UK, became aware of Metropolis's seminal 1953 paper in the Journal of Chemical Physics that laid out the statistical ideas of importance sampling, and applied them to the polymer problem. However, the available computing power (IBM 360/50) confined his polymers to 30 monomer units on a 2-dimensional spatial lattice. As a graduate student at NYU in 1968 I had access to a Control Data Corp. CDC 6600 supercomputer at the Courant Institute and used the Metropolis-Lal method to generate more realistic polymers in 3 dimensions with free rotation about their backbone bonds (i.e., not restricted to an artificial lattice). Just as you pointed out, samples generated with this method tended to represent the more "important" regions of polymer conformation space, so it took fewer samples to get stable averages. This also allowed me to generate the numerical distributions of end-to-end distances of polymers of several hundred units, with sufficient accuracy to determine which of a number of theoretical analytic functions best described those distributions.
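The Metropolis scheme described in the comment above can be sketched in a few lines. This is a toy stand-in, not polymer code: the energy U here is an invented 1-D harmonic potential, so the Boltzmann weight exp(-U/kT) with kT = 1 is a standard normal density, and the chain concentrates on low-energy states just as the polymer sampler concentrated on important conformations.

```python
import numpy as np

def metropolis(u, n=100_000, step=1.5, kt=1.0, seed=0):
    """Sample states with probability proportional to exp(-u(x)/kt)."""
    rng = np.random.default_rng(seed)
    x = 0.0
    out = np.empty(n)
    for i in range(n):
        prop = x + rng.uniform(-step, step)  # propose a local move
        # Accept with probability min(1, exp(-(U(prop) - U(x)) / kT));
        # otherwise keep the old state (it is recorded again).
        if rng.random() < np.exp(-(u(prop) - u(x)) / kt):
            x = prop
        out[i] = x
    return out

# Toy energy U(x) = x^2 / 2, so exp(-U) is the standard normal (mean 0, var 1).
samples = metropolis(lambda x: 0.5 * x * x)
```

The chain needs no normalizing constant (no partition function), which is exactly what made the method usable for the polymer problem.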
@jameshimelic4454 1 year ago
This is a great video. Thank you!
@ZarakJamalMirdadKhan 2 years ago
Very informative channel.
@BilalTaskin-om6il 1 year ago
Thank you. Great video.
@bukharifaraz 11 days ago
Mind-blowing!! You hit the nail on the head!!
@geraltofrivia9424 1 year ago
The CLT is one of the wonders of the universe.
@tylernardone3788 2 years ago
Outstanding as always. Really a standout in this space. Thanks!
@Mutual_Information 2 years ago
Thanks Tyler, the appreciation goes a long way
@jakob6628 1 year ago
Exceptional explanation! Thanks a ton!
@caedknight1218 2 years ago
Excellent as always.
@minhtriet6873 2 years ago
No need to discuss the quality of this video - it's incredible!
@Mutual_Information 2 years ago
Thank you!
@lordsykes6310 2 months ago
Thank you for the excellent tutorial! I have a little problem: what is V_p[f(x)] called, if it's not just variance, I assume? I want to search for the calculation method, but I don't know the keyword to search for.
@Mutual_Information 2 months ago
It is just variance, basically. It's the variance of the random variable defined by sampling x (according to the distribution p(x)) and plugging it into f(x). So you can do it this way: sample x's from p(x), plug them into f(x), giving many f(x) samples, and calculate the empirical variance of those samples. If you want to compute it exactly, do the p(x)f(x) integration to get the mean, and then the integration for the variance. Does this answer your question?
@lordsykes6310 2 months ago
@Mutual_Information That's clear! Thanks for the explanation!
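The recipe in the reply above can be checked numerically. A minimal sketch, assuming for illustration p = Uniform(0, 1) and f(x) = x, where the exact value V_p[f] = 1/12 is known:

```python
import numpy as np

# Empirical V_p[f(X)]: sample x ~ p, apply f, take the sample variance.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=500_000)  # samples from p = Uniform(0, 1)
fx = x                                   # f(x) = x
v_emp = fx.var()                         # approaches 1/12 ≈ 0.08333
```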
@TripImmigration 4 months ago
We can feel how much you have to control yourself not to ask, "class, are you understanding?" 😂 Thanks for reminding me of Monte Carlo, though.
@saraheslami7795 10 months ago
Awesome visualizations!
@Mutual_Information 9 months ago
Thank you Sarah!
@neithane7262 9 months ago
I like your channel A LOT; the explanations are really clear and the animations are really cool. A bit sad that you don't always explain the math.
@Mutual_Information 9 months ago
Vids take a lot longer than I'd like... stuff gets cut to save time :(
@konn81 8 months ago
Excellent explanation.
@avishkarsaha8506 2 years ago
God, these videos are invaluable.
@Throwingness 2 years ago
Somehow I am able to follow this.
@王莫敌 2 years ago
That's a great video which helped me a lot! Could you please also introduce a little bit about Resampled Importance Sampling (RIS)? Thank you so much!
@lenoken7894 1 year ago
Great video ❤
@adityaghosh8601 1 year ago
It would be great if you made individual videos for the topics discussed in this video.
@Mutual_Information 1 year ago
I'm finishing up part 6 of the RL series, which is a huge vid that's taking a long time (~30 min). BUT, after that I've committed to making shorter, more bite-sized videos (4-12 minutes). I can upload more frequently that way and it's less of an ask of the audience.
@kafaayari 2 years ago
Thanks again for the great content. I have two questions though. * At 10:04, how did our new function (p(x)/q(x))*f(x) fit p(x) so perfectly? * In variational autoencoders, q(z) must be selected such that KL(q||p) is minimized. But as far as I understand, that's another concept, right? Because in your example q is a very different distribution than p.
@Mutual_Information 2 years ago
Hey Ali, thanks! To your questions: 1) It's not terribly surprising they are close. p(x) is a component of (p(x)/q(x))*f(x). Imagine another case (just as arbitrary as what I was showing on screen) where f(x) = 1 for all x and q(x) is the uniform distribution... then (p(x)/q(x))*f(x) and p(x) would be the same thing up to a constant! 2) Yes, q in variational inference is a separate concept. There are connections, but the criteria for selecting q in this case vs. that case are different, so you should think about the q's differently.
@kafaayari 2 years ago
@@Mutual_Information Thanks! Now it's crystal clear.
@nicolabranchini9903 2 years ago
@kafaayari Variational autoencoders are an algorithm which is a special case of variational EM, where both model parameters (usually denoted theta, the decoder) and "variational" parameters (usually denoted phi, the encoder) are learned at the same time. For variational *inference*, where the theta are known and one only has to "learn" the variational parameters, there is a deep connection with importance sampling. In fact, it is nothing more (or less) than a special case of *adaptive* importance sampling, where the adaptation metric is the KL divergence to some posterior distribution. Other adaptive IS methods are more sophisticated; for example, they always use *all* samples previously generated in the adaptation, and *not* just the ones from the latest iteration (as popular VI techniques do). These concepts are surprisingly not well understood in the VI literature (although they are better understood in the AIS literature). I can recommend proceedings.mlr.press/v151/kviman22a/kviman22a.pdf for some recent work on these connections, but there is more. By the way @Mutual Information, nice video :)
@kafaayari 2 years ago
@@nicolabranchini9903 Thank you very much for the details. I'll check the paper as well.