CausalAI
We'll be posting videos on causal inference and related topics, including AI, Machine Learning, and Data Science.

For more information about our latest results, see causalai.net.
Comments
@kurtbecker3827 2 months ago
What a shame... this video is barely audible. A topic like this requires top-notch audio, and I am unable to make out all the words; sometimes I cannot understand an entire sentence.
@raminsafizadeh 9 months ago
Can barely understand a word! It is borderline rude to be so nonchalant about pronunciation and clarity of speech.
@edupignatelli 1 year ago
Is there any publication that puts these presentations in formal writing?
@ericfreeman8658 1 year ago
53:45 For counterfactual decision-making: "Agents usually act in a reflexive manner without considering the reasons or the causes for behaving in a particular way. Whenever this is the case, they can be exploited without ever realizing it." I think this is just what people do in RL as exploration, e.g., ε-greedy. Is there any difference, or did I miss anything?
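For reference, a minimal sketch of the ε-greedy rule mentioned above (illustrative Python; names and parameters are not from the talk). The contrast the talk draws, as I read it, is that ε-greedy randomizes blindly, without using the agent's own intended action as evidence, whereas counterfactual randomization conditions on that intent.

import random

def epsilon_greedy(q_values, epsilon=0.1):
    # Explore: with probability epsilon, pick an arm uniformly at random,
    # ignoring both the value estimates and the agent's own "intent".
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    # Exploit: otherwise pick the arm with the highest estimated value.
    return max(range(len(q_values)), key=lambda a: q_values[a])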
@user-wr4yl7tx3w 1 year ago
7:50
@PengZhenwu 1 year ago
very good course!
@mbomba4415 1 year ago
Thanks For Sharing
@jimmychen4796 1 year ago
hard to follow, really badly explained
@PengZhenwu 1 year ago
very interesting course!
@Fun-bz7ou 1 year ago
What's the difference between X and do(X)?
@rugdeeplearn7420 10 months ago
X represents a variable whose state is observed (i.e., you don't decide its value; you get it from data). By contrast, do(X=x) means that you deliberately set the value of that variable to x (i.e., you DO an action that makes the variable take the value x).
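A minimal numeric sketch of this distinction (illustrative Python; the toy SCM with confounder Z is invented, not from the video):

import random

def sample(do_x=None):
    # Toy SCM: Z -> X, Z -> Y, X -> Y.
    z = random.gauss(0, 1)                                     # unobserved confounder
    x = do_x if do_x is not None else z + random.gauss(0, 1)   # do(X=x) cuts Z -> X
    y = 2 * x + 3 * z + random.gauss(0, 1)
    return x, y

n = 200000
# Seeing: E[Y | X ~ 1], keep observational samples where X happens to be near 1.
obs = [y for x, y in (sample() for _ in range(n)) if abs(x - 1) < 0.05]
# Doing: E[Y | do(X=1)], set X to 1 ourselves, severing Z's influence on X.
do = [sample(do_x=1)[1] for _ in range(n)]
print(sum(obs) / len(obs))  # ~3.5 here: conditioning lets Z leak in via E[Z | X=1]
print(sum(do) / len(do))    # ~2.0 here: only X's causal effect on Y remains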
@MrKrtek00 2 years ago
Is it just me, or does he have the best porn name ever? Anyway, it is a great talk
@syedhasan773 8 months ago
bruh.
@syedhasan773 8 months ago
he's got a point though
@c.s.842 2 years ago
Terrible sound. Is it beyond MIT's capabilities to furnish the speaker with a body microphone? I have missed half of this very interesting lecture. What a shame!
@kurtbecker3827 2 months ago
Yes, every word is important because the subject matter is quite difficult to understand.
@EmperorsNewWardrobe 2 years ago
35:06 THE 7 PILLARS OF CAUSAL WISDOM
47:12 Pillar 1: graphical models for prediction and diagnosis
57:08 Pillar 2: policy analysis deconfounded
1:19:15 Pillar 3: the algorithmization of counterfactuals
1:23:29 Pillar 4: formulating a problem in three languages
1:36:35 Pillar 5: transfer learning, external validity, and sample selection bias
1:50:19 Pillar 6: missing data
1:50:55 Pillar 7: causal discovery
@davidrandell2224 2 years ago
AI will never 'know' the cause of gravity. Even though Galilean relative motion gives 50/50 odds that the earth approaches the released object: gravity. Cause of gravity: the earth is expanding at 16 feet per second constant acceleration. Common knowledge since 2002: "The Final Theory: Rethinking Our Scientific Legacy", Mark McCutcheon. Try to keep up.
@washedtoohot 1 year ago
Your point being…?
@kamalakbari5609 2 years ago
Thanks for the nice talk, Elias!
@ewertondeoliveira1540 3 years ago
What is the intuition behind the "remainder" at 1:13:50?
@AjayTalati 3 years ago
What does he mean when he says the agent's causal graph G captures the "invariants" of the SCM M of the environment? Any simple example?
@spitfirerulz 3 years ago
I think it means that the causal relationships among the variables in an SCM M (e.g., which variables "listen" to which others) can be adequately described by a graph G. This captures the key properties of the causal relationships, which do not vary across circumstances. We would still need M because, for instance, we need to describe whether the functions are linear, complicated, etc.
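A small illustration of that reading (hypothetical Python; the models are invented): two different SCMs share the same graph G, so G encodes only the invariant "who listens to whom" structure, while the functional details live in M.

import random

G = {"X": ["Z"], "Y": ["X"]}   # the graph: who listens to whom (the invariant part)

def scm_linear():              # one SCM compatible with G
    z = random.gauss(0, 1)
    x = 2 * z + random.gauss(0, 1)
    y = x + random.gauss(0, 1)
    return z, x, y

def scm_nonlinear():           # a different SCM, same graph G
    z = random.gauss(0, 1)
    x = z ** 3 + random.gauss(0, 1)
    y = max(x, 0.0) + random.gauss(0, 1)
    return z, x, y

# Both models agree on G (the qualitative, invariant structure) but disagree on
# the functional details, which is why M carries more information than G alone.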
@olivrobinson 3 years ago
Really clear and awesome material. Thank you for this! I'll be moving on to the next video...
@michaeltamillow9722 3 years ago
29:00 - the example doesn't make any sense, since it says people exercise MORE as they get older. In fact, based on the chart, ALL 50-year-olds exercise more than ALL 20-year-olds. The logic of the eXercise axis is conveniently ignored to prove a point. Not a good example, and hopefully not how you conduct science...
@michaeltamillow9722 3 years ago
I should mention that I understand Simpson's paradox; I am simply commenting on this specific, contrived example, which does not work. I am not even fully convinced that cholesterol (the latent variable) would be correlated with age between the ages of 10 and 50 if all other factors were held constant.
@SterileNeutrino 1 year ago
@michaeltamillow9722 Good point. This diagram is actually on page 212 of "The Book of Why"; something has gone wrong with that example. Maybe the 40 and 50 clouds should be shifted to the left? But it's all about projecting a high-dimensional point cloud onto fewer dimensions the wrong way, yielding a meaningless result, here one about the "typical person". (Cholesterol is also probably mostly correlated with sugar intake IRL, but that's for some other time 🙂) Fun: "Yule-Simpson's paradox in Galactic Archaeology"
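For anyone who wants the arithmetic rather than the scatter plot, a minimal Simpson's-paradox sketch in Python with invented numbers (not the book's data): the exercise-to-cholesterol slope is negative within each age group but flips positive when the groups are pooled.

groups = {  # age -> [(exercise_hours, cholesterol), ...]; numbers invented
    20: [(1, 160), (2, 150), (3, 140)],
    50: [(4, 230), (5, 220), (6, 210)],
}

def slope(pairs):
    # Ordinary least-squares slope of cholesterol on exercise.
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    num = sum((x - mx) * (y - my) for x, y in pairs)
    den = sum((x - mx) ** 2 for x, _ in pairs)
    return num / den

for age, pairs in groups.items():
    print(age, slope(pairs))                   # -10.0 within each age group
pooled = [p for pairs in groups.values() for p in pairs]
print("pooled", slope(pooled))                 # ~ +15.7 when age is ignored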
@BenOgorek 3 years ago
51:15 head hurting thinking about the distinction between watching an agent do() something and watching an agent do something
@CausalAI 3 years ago
Hey Ben, I think the discussion after this summary slide may provide further elaboration, but let me know... -E
@WannabePianistSurya 3 years ago
God that question session was painful to watch.
@JamesWattMusic 4 years ago
Interesting talk. I have a question about the vaccine example around 24:00. Why would you say the vaccine is "good" if it killed more people than the disease? Why would eradicating a disease with a more deadly cure be a good solution? For example, if a disease kills X people every year, should we kill 2X, 3X... 10X, 100X the people at once to eradicate it? It's an ethical problem. Thanks
@loljustice31 4 years ago
Very informative, thank you for uploading.
@sujith3914 4 years ago
It is unfortunate that the mindset that scaling up is sufficient to achieve the most sophisticated AI is rather prevalent, and not one adopted by only a few. I guess the bright side is that it gives people with very few resources a chance to make good contributions as well, because just scaling up is not sufficient.
@CausalAI 4 years ago
Hi Sujith, my hope with the tutorial is that if the examples and tasks are minimally clear, the understanding that scaling up is not the only issue will follow naturally. In other words, there is no controversial statement; it's just basic logic. We deliberately designed the minimal, or easiest possible, examples so that this point could be understood; obviously, things get more involved in larger settings.
@silent_monk 4 years ago
Thanks for the great talk. Is there a rough timeline for when we can expect the survey paper to be released? Looking forward to it.
@CausalAI 4 years ago
Hi Rootworn41, we are working on it, I am hoping to have good news soon! Thanks!
@kennethlee143 4 years ago
This is an inspirational talk. I wish I could meet Elias in person one day!
@sujith3914 4 years ago
I know, right? When he goes off on a tangent, the talk gets even more interesting. I wish there were a platform where he is asked to just speak his mind, without any time limit, simply outlining his interests, passion, vision, etc.
@Ewerlopes 4 years ago
Amazing. I have been trying to digest the causal inference literature for quite a while now. I was not happy with the limitations of "association" methods. I really think that CI will offer us the next level in the development of general AI. Thanks for the talk, Prof. Elias!
@shashank7601 4 years ago
I'm not sure if this is the right place to ask, but if hypothetically you give an arbitrary SCM to an RL agent, will it then be able to perform all layers of the ladder of causation, including counterfactuals? And what would this arbitrary SCM look like (i.e., how is it robust enough to perform counterfactuals)? Is this SCM just hard-coded if-then statements given to the agent?
@CausalAI 4 years ago
Hi there, that's an excellent and somewhat popular question; thank you for the opportunity to clarify. I hypothesize this is the case since it goes against our strongly held belief that all we need is more data, or that data is enough, which is not the case in causal inference. I'll try to elaborate next.

Given a fully specified SCM, all three layers (i.e., any counterfactual) are immediately computable through definitions 2, 5, and 7, as discussed in the PCH chapter (causalai.net/r60.pdf). Call this SCM M. Unfortunately, there is NOTHING about the output of M's evaluation that makes it more or less related to the actual SCM that underlies the environment, say M*. The first main result in the aforementioned chapter is called the "Causal Hierarchy Theorem" (CHT) (page 22), which says that even if we train the SCM M with layer 1 data, it still doesn't say anything about layers 2 or 3. I will leave you to check this statement (hint: the chapter should help). In other words, it makes little sense to ask about the "robustness" of M's predictions, given that they are unrelated to M*. Cheers, Elias
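To make the first half of this reply concrete, here is a hypothetical Python sketch (the tiny SCM M is invented, not taken from the chapter) of how a fully specified SCM mechanically answers queries at all three layers of the hierarchy:

import random

def scm(u=None, do_x=None):
    # A fully specified SCM M: exogenous U in {0,1}, X := U, Y := X xor U.
    u = random.randint(0, 1) if u is None else u
    x = u if do_x is None else do_x
    y = x ^ u
    return x, y

N = 100000
# Layer 1 (association): P(Y=1 | X=1), read off observational samples.
l1 = [y for x, y in (scm() for _ in range(N)) if x == 1]
# Layer 2 (intervention): P(Y=1 | do(X=1)), evaluate the mutilated model.
l2 = [scm(do_x=1)[1] for _ in range(N)]
# Layer 3 (counterfactual): P(Y_{X=0}=1 | X=1), abduct U, then intervene.
l3 = [scm(u=1, do_x=0)[1] for _ in range(N)]  # observing X=1 implies U=1 in this M
print(sum(l1) / len(l1), sum(l2) / len(l2), sum(l3) / len(l3))  # ~0.0, ~0.5, 1.0

The three queries return different answers from the same M, which is the sense in which the layers are distinct; the CHT point above is that none of these outputs need bear any relation to the true M*.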
@nihaarrshah 4 years ago
Hi Professor, are there slides available for this talk? Thank you! - Nihaar (your student)
@CausalAI 4 years ago
Hi Nihaar, I just saw your msg. See slides here: crl.causalai.net . Thanks, EB
@nihaarrshah 4 years ago
@CausalAI thanks prof!
@zeeeeeeeeeavs 4 years ago
Does anyone know where I can find the mathematical proof that each level needs information from that level or above, for the 3-level hierarchy (association, intervention, counterfactuals) of causality?
@CausalAI 4 years ago
Hi Kalyana, I think you will enjoy this chapter -- causalai.net/r60.pdf .
@zeeeeeeeeeavs 4 years ago
@CausalAI Thank you so much! This is great!
@zeeeeeeeeeavs 4 years ago
Thank you so much for the talk. Would you share the slides with us?
@fairuzshadmanishishir8171 4 years ago
nice speech
@Wavams 4 years ago
40:39 add ] on first line
@Wavams 4 years ago
At 38:18: should "2: off-policy learning" read "agent learns from other agents' experiments", or some other word instead of "actions"? Trying to see how to motivate the difference between samples from do(x) and x between points 2 and 3.
@fairuzshadmanishishir8171 4 years ago
Good Speech