
14. Causal Inference, Part 1 

MIT OpenCourseWare

MIT 6.S897 Machine Learning for Healthcare, Spring 2019
Instructor: David Sontag
View the complete course: ocw.mit.edu/6-S897S19
YouTube Playlist: • MIT 6.S897 Machine Lea...
Prof. Sontag discusses causal inference, examples of causal questions, and how these guide treatment decisions. He explains the Rubin-Neyman causal model as a potential outcomes framework.
License: Creative Commons BY-NC-SA
More information at ocw.mit.edu/terms
More courses at ocw.mit.edu
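To make the Rubin-Neyman potential outcomes framework mentioned above concrete, here is a minimal simulation (hypothetical numbers, not from the lecture): each unit has two potential outcomes Y0 and Y1, only one is ever observed, and a confounder that drives both treatment and outcome biases the naive treated-vs-control difference, while stratifying on the confounder recovers the average treatment effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder: "severely ill" vs not, affecting both
# treatment assignment and outcome.
severe = rng.binomial(1, 0.5, n)

# Potential outcomes: Y0 = outcome if untreated, Y1 = outcome if treated.
# Treatment helps every unit by exactly 1.0.
y0 = 2.0 - 3.0 * severe + rng.normal(0, 1, n)
y1 = y0 + 1.0

# Confounded assignment: severely ill patients are treated more often.
t = rng.binomial(1, np.where(severe == 1, 0.8, 0.2))

# The fundamental problem: we observe only one potential outcome per unit.
y_obs = np.where(t == 1, y1, y0)

true_ate = np.mean(y1 - y0)                          # 1.0 by construction
naive = y_obs[t == 1].mean() - y_obs[t == 0].mean()  # biased by confounding

# Stratifying on the confounder recovers the ATE; the simple average of
# strata works here because P(severe = 1) = 0.5.
adjusted = np.mean([
    y_obs[(t == 1) & (severe == s)].mean() - y_obs[(t == 0) & (severe == s)].mean()
    for s in (0, 1)
])

print(true_ate, naive, adjusted)
```

The naive difference comes out strongly negative (treatment looks harmful) even though the true effect is +1.0, because sicker patients are both more likely to be treated and more likely to have poor outcomes.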

Published: 26 Jun 2024

Comments: 40
@bobo0612 · 2 years ago
Brilliant! Thank you for the video. I feel very blessed to be born in this age, when such brilliant lectures are available for free for everyone!
@junqichen6241 · 2 years ago
This is the most intuitive and comprehensive guide on causal inference. Thank you Prof. Sontag.
@turboblitz4587 · 2 years ago
Really nice explanations; you kept it simple in the beginning but explained the gist! Thanks for uploading these lectures.
@deepaksehra · 2 years ago
What a fantastic teacher, and a fantastic lecture. Thanks for posting this, although I am pretty late getting to it!
@yogeshsingular · 2 years ago
The best lecture on causal inference online
@AradAshrafi · 3 years ago
Thank you so much for sharing it with us. It was amazing :)
@GarveRagnara · 3 years ago
Simply a great lecture. I just recently started diving into this field, and with this lecture I think I have learned the most so far.
@TheRetrobek · 1 year ago
wow, awesome intro to causal inference!
@borisn.1346 · 2 years ago
Amazing lecture - Thanks!
@edwardeikman3496 · 2 years ago
Superb introduction for a non-mathematician "domain expert" to understand what the technical expert needs. Unfortunately, the underlying quality of the real-world data we work with is often insufficiently standardized or machine-actionable. This technology is needed for the problems that actually occupy most of a physician's time: predicting and assessing the effects of treatments, particularly once we get off the original guidance from guidelines, which might not work in an individual patient.
@CodeEmporium · 2 years ago
Fantastic explanation! Imma make a video on this topic too.
@angelakoloi983 · 3 years ago
Perfect!
@juliocardenas4485 · 2 years ago
Wonderful!!
@fanlin31415 · 2 years ago
Really a great lecture!
@sanjav · 3 years ago
Great explanation. I wish I had teachers like him.
@alexandersumer4295 · 3 years ago
You do. Right here on YouTube.
@williamrich3909 · 1 year ago
Fantastic!
@TheRilwen · 2 years ago
I have two questions and will be grateful for expert and practitioner answers: 1) When calculating CATE you subtract two regressions. This must increase the error considerably. Do we do anything about it? 2) I think in practice, when defining the parameters/independent variables, there's a risk of Simpson's paradox. E.g., where's the line between exercising (1) and not exercising (0)? What can one do about it to sleep calmly? Could we do some sort of "hyperparameter tuning" to find the best parameter definitions? It can be tricky...
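On point 1 above: subtracting two separately fitted regressions is what the literature calls a "T-learner", and the two fits' errors do add up, which is one motivation for single-model (S-learner) and doubly robust estimators. A numpy-only sketch on hypothetical simulated data (not from the lecture), with a randomized treatment and a true CATE of 1 + x:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical data: one covariate x, randomized binary treatment t,
# true CATE(x) = 1 + x (the effect grows with x).
x = rng.uniform(-1, 1, n)
t = rng.binomial(1, 0.5, n)
y = 2.0 * x + (1.0 + x) * t + rng.normal(0, 1, n)

def fit_linear(xs, ys):
    """OLS fit of y = a + b*x, returned as a prediction function."""
    X = np.column_stack([np.ones_like(xs), xs])
    coef, *_ = np.linalg.lstsq(X, ys, rcond=None)
    return lambda xq: coef[0] + coef[1] * xq

# T-learner: separate outcome models for treated and control,
# CATE estimated as the difference of their predictions.
mu1 = fit_linear(x[t == 1], y[t == 1])
mu0 = fit_linear(x[t == 0], y[t == 0])

xq = np.array([-0.5, 0.0, 0.5])
cate_hat = mu1(xq) - mu0(xq)  # close to the true values [0.5, 1.0, 1.5]
print(cate_hat)
```

Each prediction carries its own estimation error, so the variance of the difference is roughly the sum of the two variances; doubly robust and R-learner style estimators are common ways to tighten this in practice.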
@shiyanliu1039 · 3 years ago
Nice!
@allena794 · 2 years ago
What if a confounder variable only influences the outcome? Is that a violation or not?
@acceleratebiz · 2 years ago
At the 12:30 mark, X₂←X₁→X₃ is described as a v-structure that can be distinguished from a chain structure with data. That's not a v-structure in that sense; you would need X₂→X₁←X₃.
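The comment above is right: the fork X₂←X₁→X₃ is Markov-equivalent to the chain X₂→X₁→X₃, so observational data alone cannot tell them apart, while the collider X₂→X₁←X₃ leaves a distinct independence signature. A small simulation with hypothetical Gaussian data (crude conditioning by restricting to a narrow slice of the conditioning variable) illustrates both patterns:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Fork (common cause) X2 <- X1 -> X3: X2 and X3 are marginally
# dependent, but independent once we condition on X1.
x1 = rng.normal(0, 1, n)
x2 = x1 + rng.normal(0, 1, n)
x3 = x1 + rng.normal(0, 1, n)
fork_marginal = corr(x2, x3)                 # clearly nonzero (about 0.5)
mask = np.abs(x1) < 0.1                      # crude conditioning on X1 near 0
fork_conditional = corr(x2[mask], x3[mask])  # close to 0

# V-structure (collider) X2 -> X1 <- X3: X2 and X3 are marginally
# independent, but become dependent once we condition on X1.
a = rng.normal(0, 1, n)
b = rng.normal(0, 1, n)
c = a + b + rng.normal(0, 1, n)
v_marginal = corr(a, b)                      # close to 0
maskc = np.abs(c) < 0.1
v_conditional = corr(a[maskc], b[maskc])     # clearly negative ("explaining away")

print(fork_marginal, fork_conditional, v_marginal, v_conditional)
```

The collider's induced negative dependence under conditioning is the "explaining away" effect, and it is exactly the asymmetry that lets constraint-based methods orient v-structures from data.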
@McStevenF · 11 months ago
Where does the counterfactual data come from?
@7vrda7 · 2 years ago
I would say that Y1 is the red pill and Y0 the blue one, not the other way around.
@off4on · 3 years ago
Question: how do we infer the graphical causal model from data? In this lecture, and the one that follows, we assume a model already exists and use data to answer questions about that model; there is no model selection or model checking involved. Is there a way to infer the causal model from observational data?
@off4on · 3 years ago
@dl5017 Thanks for the suggestions, although that does not answer my question.
@dl5017 · 3 years ago
It did; I pointed you to the EconML documentation, which describes many tools for using ML to infer causal models from observational data. There is not one single method to explain on YouTube, there are many. Check the work of Susan Athey et al. on YouTube for forest-based approaches, for one; you can implement those approaches with EconML...
@dl5017 · 3 years ago
DoWhy is about taking a causal graph model (maybe you drew it yourself in DAGitty) and applying the potential outcomes framework to it, which is what you see taught in these lectures.
@off4on · 3 years ago
@dl5017 I see what you mean now. AFAIK, Athey still assumes that we know what we are looking for, e.g., how a drug affects a clinical outcome. What I was asking about is inference on the models themselves, e.g., deciding from data whether rooster crowing causes sunrise or the other way around. The former is identification and estimation; the latter is something else.
@off4on · 3 years ago
OK, replying to myself, but also to share with people new to causal inference: the process of inferring causal graphs from (perhaps observational) data is called "causal discovery".
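To add a sketch to the "causal discovery" pointer above: one classic idea (LiNGAM-style) is that with linear mechanisms and non-Gaussian noise, the regression residual is independent of the regressor only in the true causal direction; in the reverse direction it is merely uncorrelated. The toy check below uses a crude correlation-of-squares dependence measure on hypothetical data; real implementations of PC, GES, and LiNGAM live in libraries such as causal-learn.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# True model: X -> Y, linear mechanism, NON-Gaussian (uniform) noise.
x = rng.uniform(-1, 1, n)
y = 2.0 * x + rng.uniform(-0.5, 0.5, n)

def resid(a, b):
    """Residual of the OLS fit b ~ a (with intercept)."""
    A = np.column_stack([np.ones_like(a), a])
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return b - A @ coef

def sq_corr(a, b):
    """Correlation of squares: a crude nonlinear dependence check,
    near zero for independent variables even when corr(a, b) == 0."""
    return float(np.corrcoef(a**2, b**2)[0, 1])

fwd_dep = sq_corr(x, resid(x, y))  # y ~ x residual vs x: near 0
bwd_dep = sq_corr(y, resid(y, x))  # x ~ y residual vs y: clearly nonzero

print(fwd_dep, bwd_dep)
```

The asymmetry (residual dependence only in the wrong direction) is what lets these methods orient edges that mere correlation cannot, answering the rooster-vs-sunrise style question from purely observational data under the stated assumptions.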
@ninadgandhi9040 · 1 year ago
Are the problem sets for the course available? I can't seem to find them.
@mitocw · 1 year ago
The problem sets are not available for this course. Some instructors do not want to publish them because they are currently in use in the course.
@ninadgandhi9040 · 1 year ago
@mitocw Oh okay. Thanks for the reply!
@Tashildz · 3 years ago
Can I translate it into Arabic (dubbing) for our students?
@davidsontag88 · 3 years ago
Yes
@habibmrad8116 · 3 years ago
@davidsontag88 Even in Chinese, Arabic, or whatever language, I can understand this amazing explanation... it seems very clear :)
@Joel-ru4nm · 12 days ago
Less than 10 minutes into the video, I feel like there would be overtreatment of the patient given predictive results. Doing more is better than doing less.
@maggieselbstschopfer1956 · 2 years ago
This professor is so handsome.
@fazlfazl2346 · 2 months ago
Bad teaching. Incoherent. Difficult to follow.