Generable
The latest advances in Bayesian computing allow users to estimate models that were previously out of reach for most practitioners in industry and academia. Today, we can fit models with full Bayesian inference that jointly estimate hundreds of thousands, and sometimes millions, of parameters. As model complexity grows, we need tools to make sense of these models so we can better understand their strengths and, more importantly, their weaknesses. By analogy, if we are building a plane, it is our responsibility to test the conditions under which it can and cannot fly. We owe this much to our p̶a̶s̶s̶e̶n̶g̶e̶r̶s̶ users.

On this channel, we focus on understanding and explaining uncertainty in the broadest sense of the word, including interesting model structures, model inferences, predictions, causal inference, decision analysis, and communicating models and uncertainty.


Aki Vehtari: On Bayesian Workflow
1:05:33
3 years ago
Comments
@mehmetb5132
@mehmetb5132 1 month ago
Wondered why we have the '-1' in "2 * Phi(asin((R - r) / x) / sigma) - 1" in the golf example model. (Min 51:38)
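The -1 is just the standard two-sided normal probability, not anything specific to golf. If the angle error is normal with scale sigma and the putt succeeds when the angle is within ±t of the target, with t = asin((R - r) / x), then P(-t < angle < t) = Phi(t / sigma) - Phi(-t / sigma) = 2 * Phi(t / sigma) - 1. Without the -1, the expression would approach 2 rather than 1 as sigma shrinks to zero.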
@jujuchristov1693
@jujuchristov1693 4 months ago
Anyone know why loo_predict(blm) and predict(blm[-obs,], data[obs,]) are giving me predicted odds of 0.28 and 0.8 respectively? These estimates are so far apart. 0.8 seems more accurate to me, but the event's true outcome was 0, so loo_predict did a better job. Does loo_predict just not work with high Pareto k values; is that why?
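High Pareto k values do mean the PSIS importance weights behind loo_predict are unreliable for those observations, so the two can disagree badly there. A minimal way to check, assuming blm is an rstanarm fit as in the comment (a sketch, not from the video):

library(rstanarm)
loo_res <- loo(blm)                     # PSIS-LOO; carries Pareto-k diagnostics
print(loo_res$diagnostics$pareto_k)     # values above 0.7 flag unreliable weights
kf <- kfold(blm, K = 10)                # exact CV by refitting; no importance sampling

For the flagged observations, refitting (k-fold, or the brute-force leave-one-out you did by hand) is the trustworthy answer.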
@Grapesleadtowaffles
@Grapesleadtowaffles 10 months ago
I came across this after explaining how square roots and exponents correspond to 2D and 3D structures. As I understand it, a 4D anything would appear as a 3D object to us. However, though we cannot see it, are we able to interact with it? If so, how would we measure those interactions? Furthermore, I'm interested in tools that could measure what we cannot perceive.
@musiknation7218
@musiknation7218 1 year ago
How do you choose priors in Bayesian regression when you have some data?
@Eizengoldt
@Eizengoldt 7 months ago
Don't know
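For what it's worth, rstanarm (used in the talk) lets you state priors explicitly in the fitting call. A minimal sketch with illustrative names (y, x, d are placeholders, not from the video):

library(rstanarm)
fit <- stan_glm(y ~ x, data = d,
                prior = normal(0, 1),           # prior on the slope(s)
                prior_intercept = normal(0, 5)) # prior on the intercept
prior_summary(fit)                              # shows the priors actually used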
@chriskroell6956
@chriskroell6956 1 year ago
She’s awesome
@prod.kashkari3075
@prod.kashkari3075 1 year ago
Lmfao this guy's so funny
@josephjohns4251
@josephjohns4251 1 year ago
Just beginning to learn about Bayesian analysis ... thanks for the great video and everyone for links in comments ... Question: Is it correct to say that, in World Cup example 1, the only parameters estimated by Stan are b (real), sigma_a >= 0, and sigma_y >= 0? In other words, Stan figures out (simultaneously/jointly):
(1) the best b and sigma_a for the equation a = b*prior_scores + sigma_a*[eta_a ~ N(0,1)], and
(2) the best sigma_y so that student_t(df = 7, a[team_1] - a[team_2], sigma_y) best predicts sqrt_adjusted[score(team_1) - score(team_2)].
It seems kind of weird to me that after we figure out the formula for a, it kind of boils down to just one parameter, sigma_y.
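Not quite: Stan also samples eta_a, one value per team, and a is then a deterministic transform of b, sigma_a, and eta_a, so there are N_teams + 3 parameters rather than three. Roughly, the model described in the talk looks like this in Stan (a sketch with illustrative names, not the actual worldcup.stan file):

data {
  int<lower=1> N_teams;
  int<lower=1> N_games;
  vector[N_teams] prior_score;
  array[N_games] int<lower=1, upper=N_teams> team_1;
  array[N_games] int<lower=1, upper=N_teams> team_2;
  vector[N_games] sqrt_score_diff;  // signed square root of the score differential
}
parameters {
  real b;
  real<lower=0> sigma_a;
  real<lower=0> sigma_y;
  vector[N_teams] eta_a;            // sampled jointly with b and the sigmas
}
transformed parameters {
  vector[N_teams] a = b * prior_score + sigma_a * eta_a;  // team abilities
}
model {
  eta_a ~ normal(0, 1);
  sqrt_score_diff ~ student_t(7, a[team_1] - a[team_2], sigma_y);
}

Writing a = b*prior_score + sigma_a*eta_a with eta_a ~ N(0,1) is the non-centered parameterization: it is equivalent to a ~ N(b*prior_score, sigma_a) but usually samples better.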
@macanbhaird1966
@macanbhaird1966 2 years ago
Thanks for this! Most interesting and useful
@macanbhaird1966
@macanbhaird1966 2 years ago
Wow! Brilliant - this really helped me a lot. Thank you.
@siriyakcr
@siriyakcr 2 years ago
Lots of useful information being shared 👍🏻
@cyruskavwele5304
@cyruskavwele5304 2 years ago
Is it possible to include a factor variable in the model? If yes, any examples please.
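Yes: factors go through the usual R formula interface, which expands them to indicator columns automatically. A minimal sketch with illustrative names (not from the video):

library(rstanarm)
d$group <- factor(d$group)                # a categorical predictor
fit <- stan_glm(y ~ x + group, data = d)  # one coefficient per non-reference level
summary(fit)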
@stevebronder9985
@stevebronder9985 2 years ago
Most excellent!!!
@mocatrade
@mocatrade 2 years ago
Yeah, he Roks
@RoungYul
@RoungYul 2 years ago
Great
@michaelwiebe8273
@michaelwiebe8273 2 years ago
What's with the blurred box at the bottom of the slides?
@mocatrade
@mocatrade 2 years ago
There was a pop-up on Lizzie's screen that we had to edit out.
@nickhockings443
@nickhockings443 2 years ago
As a clinician treating patients, CATE (conditional average treatment effect) is not adequate. What is needed is the conditional treatment effect distribution (C-TED). We need to know what the risk of bad outcomes is for each treatment option, including varying the timing, dose, and protocol. We need to know whether the tails of the C-TED can be anticipated, detected, and mitigated. For this reason we don't want to compute average outcomes; we need to propagate distributions through the model, from the observed variables, through the hidden (latent) variables, to the outcomes. Knowing the shape of the distribution is critically important. In pathophysiology and therapeutics the causal effects may often be non-linear.
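In a Bayesian model the posterior predictive distribution carries exactly this information: instead of averaging, keep the draws per treatment option and read tail risk off them directly. A sketch assuming a fitted brms outcome model named fit, with patient_row and bad_outcome_threshold as illustrative placeholders:

library(brms)
patient_A <- transform(patient_row, treatment = "A")    # same patient, option A
patient_B <- transform(patient_row, treatment = "B")    # same patient, option B
draws_A <- posterior_predict(fit, newdata = patient_A)  # full outcome distribution
draws_B <- posterior_predict(fit, newdata = patient_B)
mean(draws_A < bad_outcome_threshold)                   # tail risk under option A
mean(draws_B < bad_outcome_threshold)                   # tail risk under option B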
@rodolpho_santos
@rodolpho_santos 2 years ago
Very good explanation, Thanks!
@XShollaj
@XShollaj 3 years ago
That was beautiful - Thank you for the wonderful package, Paul!
@emf1775
@emf1775 3 years ago
Gelman is quite nice to listen to. His real-life voice sounds different from his blog voice somehow
@doug_sponsler
@doug_sponsler 3 years ago
(1:35) "A lot of us...a lot of us are." The melancholy of that statement was so tangible :)
@Sycolog
@Sycolog 3 years ago
Thank you so much for building brms. You saved my master's thesis, got me into Bayesian statistics, and made me learn R, which is now a staple tool of my professional career.
@iirolenkkari9564
@iirolenkkari9564 2 years ago
Very valuable package indeed! I'm wondering how to model the covariance structure in a Bayesian longitudinal setting, similar to covariance patterns such as compound symmetry, autoregressive, Toeplitz, etc. in the frequentist world. In the frequentist world, taking serial correlation into consideration narrows the confidence intervals of the parameters.

I'm wondering if a Bayesian random intercept always introduces compound symmetry, similar to a random intercept in a frequentist linear mixed effects model? I suspect taking serial correlation into account would narrow the posterior distributions of the model parameters, strengthening the Bayesian inference. However, I'm not at all sure if my thoughts are anywhere near correct.

The brms package is a very valuable resource. However, the parts about covariance structures seem to be still in progress. If anyone has good theoretical (and why not practical) Bayesian references regarding these covariance modeling issues (serial correlation etc.), I would appreciate them very much.
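On the first question: yes, a subject-level random intercept induces a compound-symmetry (exchangeable) marginal covariance, just as in a frequentist LMM. And recent brms versions do expose residual correlation structures directly in the formula; a sketch with illustrative names (y, time, subject, d):

library(brms)
# AR(1) serial correlation within subject, on top of a random intercept:
fit_ar <- brm(y ~ time + (1 | subject) + ar(time = time, gr = subject), data = d)
# Compound symmetry, or a fully unstructured residual correlation matrix:
fit_cs <- brm(y ~ time + cosy(time = time, gr = subject), data = d)
fit_un <- brm(y ~ time + unstr(time = time, gr = subject), data = d)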
@JesseFagan
@JesseFagan 4 years ago
What was the bug he fixed? I want to know how he solved the problem.
@omarfsosa
@omarfsosa 4 years ago
There was a factor of 2 missing. Full story is here: statmodeling.stat.columbia.edu/2014/07/15/stan-world-cup-update/
@crypticnomad
@crypticnomad 4 years ago
I know this is a rather old video but it is still highly relevant and useful. At 47:02 I don't think a standard EV calculation really does that situation justice. With high-payout/low-loss situations like that, I think it is better to weight the payouts by their utility. For example, losing $10 may have basically no subjective utility loss compared to the subjective utility gained from having $100k. Let's say that to me, having $100k has 20k times as much utility as losing $10 does. When you switch from an EV calculation based on the win/loss to one based on the subjective utility of the payouts, there is a drastic increase in the EV (although it is still negative in this case). E.g.:

win = 100000 dollars, lose = 10 dollars
(win * 5.4e-06) - ((1 - 5.4e-06) * lose) = ~-$9.46

win_util = 20000 utility points or "utils", lose_util = 1 util
(win_util * 5.4e-06) - ((1 - 5.4e-06) * lose_util) = ~-0.89 utils

This is a simple example and we could for sure argue about the subjective utility values, but I think overall it shows that the normal EV calculation doesn't really do the situation justice when you think about the utility of the win versus the utility of the loss. One could also flip this around and talk about the subjective utility of losing samples versus winning samples: say this was overall +EV, but the subjective value of winning so rarely was less than the subjective value lost from losing so often.

I got this concept from game theory. There are plenty of examples, especially in poker, where doing something that is -EV right now could lead to a +EV situation later on. Poker players call that implied EV; an example could be calling with some sort of draw when the current raw pot odds don't justify it, but you know that when you do make your hand the profits gained will make up for the marginal loss now. So for example, let's say I have some idea for a product or service that would earn $50k a year off a $100k investment. Using a fairly standard 10x income valuation, I could say the subjective utility of winning that $100k is actually worth 50k utility points versus the 10k utility points implied by an even weighting. This specific situation would still be -EV though. All of that leads me down the path of seriously doubting most of rational economics.
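The arithmetic, for anyone who wants to check it (values copied from the comment above):

p_win <- 5.4e-06
ev_dollars <- 100000 * p_win - (1 - p_win) * 10  # about -9.46 dollars
ev_utils   <- 20000  * p_win - (1 - p_win) * 1   # about -0.89 utils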
@arnabghosh8843
@arnabghosh8843 4 months ago
And even just going with a constant valuation of money, you get _multiple entries_. So, sure, the probability of one contribution winning is whatever he got, but you can submit multiple entries whose combination could definitely lead to a win, especially when you consider that you can make about 10k submissions. It made me a little sad to see that reductive analysis (and with all the floating) when I was excited to see a really interesting decision problem at hand :/
@yoij-ov3sd
@yoij-ov3sd 4 years ago
At 16:27 you talk about checking posterior predictive distributions for games against their actual results, to check if they are within their respective 95% CIs. Are these games training data or unseen data?
@Houshalter
@Houshalter 4 years ago
He didn't say they were held-out samples, so probably they were in the training data. Ideally you shouldn't do that, because it would hide overfitting; however, Bayesian methods are much less prone to overfitting. In his example he found a completely different problem.
@yoij-ov3sd
@yoij-ov3sd 4 years ago
@@Houshalter thanks
@crypticnomad
@crypticnomad 4 years ago
@@Houshalter I've heard people argue that Bayesian methods don't overfit, but the developers sometimes do have incorrect assumptions about priors and their distributions, which can lead to situations that look similar to overfitting in the classic sense of the word. For example, say we naively look at some time series data and think we have a solid basic understanding of the distributions of the processes that formed it. We fit a model and it seems to do well on the training set but fails pretty horribly on unseen data. There are many reasons this could happen, and almost all of them come down to the training sample not including enough data to really estimate the distributions and their parameters, picking the wrong distributions/priors for those processes, or the processes that generate our data varying over time.
@Houshalter
@Houshalter 4 years ago
@@crypticnomad Bayesian inference can absolutely suffer from a bad model. But it's a different problem from normal overfitting, and a validation test would not necessarily show any difference from the training set
@mattn2364
@mattn2364 5 years ago
"Soccer games are random, it all depends how good the acting is"
@johnnyedwards1948
@johnnyedwards1948 5 years ago
Also really liked the golf putt example.
@erwinbanez6442
@erwinbanez6442 7 years ago
Thanks for this. Any link to the slides?
@SrikantGadicherla
@SrikantGadicherla 6 years ago
www.dropbox.com/s/sfi0pcf7hais91r/Gelman_Stan_talk.pdf?dl=0 This talk (matching the given slides) was given at Aalto University in October 2016.
@NikStar210
@NikStar210 7 years ago
Prof. Gelman: At 19:00 you talk about checking how the model fits the data; are there any tools in Stan to avoid overfitting?
@generableHQ
@generableHQ 7 years ago
There are no "tools", but this may help: andrewgelman.com/2017/07/15/what-is-overfitting-exactly/
@SpaceExplorer
@SpaceExplorer 7 years ago
Thanks Dr. Gelman
@KyPaMac
@KyPaMac 7 years ago
That golf putting model is just about the coolest thing ever.
@RobetPaulG
@RobetPaulG 7 years ago
Thanks a lot for making this code available for download. That was really helpful for getting started in Stan.
@swadeshibiden6912
@swadeshibiden6912 3 years ago
Where is the code?
@usptact
@usptact 7 years ago
Thanks for the great presentation and explanations on real models. This made me laugh: "working with live posterior"
@khiemnguyentho5847
@khiemnguyentho5847 7 years ago
The data set from (bit.ly/lc-loans) was modified for this video illustration. Could you provide the modified one so that I can follow along? Thanks
@generableHQ
@generableHQ 7 years ago
There is a CSV file in this directory: bit.ly/rstanarm-share
@shneazy
@shneazy 7 years ago
Hi, using the code in your Google Drive folder I only get 2,964 observations. When I drop the fileEncoding = "latin1" portion of the get_data.R script and then run the code on slide 12 of the presentation, I get 38,607.
@willtudor-evans6055
@willtudor-evans6055 7 years ago
Nice fix, I get 41,279, but maybe the dataset has since increased.