
Double Machine Learning for Causal and Treatment Effects 

Becker Friedman Institute University of Chicago

Victor Chernozhukov of the Massachusetts Institute of Technology provides a general framework for estimating and drawing inference about a low-dimensional parameter in the presence of a high-dimensional nuisance parameter using a generation of nonparametric statistical (machine learning) methods.
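For reference, the running example in the talk is the partially linear model from the Chernozhukov et al. double/debiased ML paper; the sketch below follows that paper's notation and may differ slightly from the slides:

```latex
% Partially linear model: \theta_0 is the low-dimensional target parameter,
% g_0 and m_0 are high-dimensional nuisance functions fit with ML.
\begin{aligned}
Y &= D\,\theta_0 + g_0(Z) + U, \qquad \mathbb{E}[U \mid Z, D] = 0,\\
D &= m_0(Z) + V,               \qquad \mathbb{E}[V \mid Z] = 0.
\end{aligned}
% Partialling out Z from both Y and D yields the Neyman-orthogonal,
% residual-on-residual ("Robinson-style") estimator:
\hat\theta
  = \frac{\sum_i \hat V_i\,\bigl(Y_i - \hat\ell(Z_i)\bigr)}
         {\sum_i \hat V_i\,\bigl(D_i - \hat m(Z_i)\bigr)},
\qquad
\hat V_i = D_i - \hat m(Z_i),\quad
\hat\ell(Z) \approx \mathbb{E}[Y \mid Z],
\end{latex-sketch}
```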

Published: 18 Jun 2024

Comments: 15
@ForeverSensei2030 · 7 years ago
Appreciate your works, Professor.
@mengxiazhang93 · 3 years ago
The presentation is very helpful! Thank you!
@jicao9205 · 2 years ago
The presentation is awesome. Thank you!
@mastafafoufa5121 · 3 years ago
Aren't we looking at predicting E[Y|(D,Z)] (in other words, how D and Z jointly influence Y) as a first step, and then E[D|Z] as a second step? In the slide at 10:52, they predict E[Y|Z] instead of E[Y|(D,Z)], which is a bit confusing since the treatment is not controlled and is stochastic as well...
@MrTocoral · 3 years ago
E[Y|D,Z] would be the ultimate goal (predicting the outcome as a joint function of treatment and covariates). This is what standard ML methods do, as presented at the beginning, but in this case it doesn't provide a good estimator of the treatment effect. I think the approach here is similar to multiple linear regression, where we first regress D on Z, obtain a residual, and then use that residual to isolate the effect of D independently of Z. So the question here is rather: why do we regress Y-E[Y|Z] on D-E[D|Z] instead of regressing Y itself on that residual? In multiple linear regression, the first step ensures that the residual is uncorrelated with Z, so regressing Y or Y-E[Y|Z] on it is equivalent. But here, since the model is only semilinear (I think, and perhaps also because we use ML methods), some effect of g(Z) on Y may remain correlated with D even after we take the residual D-E[D|Z]. So we need to work with Y-E[Y|Z] to approach the real treatment effect.
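The partialling-out logic discussed in this thread can be sketched numerically. This is a minimal simulation, not the talk's implementation: the binned-mean "learner" is a crude stand-in for whatever ML regressor (lasso, random forest, etc.) one would actually use for the nuisance functions, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
Z = rng.normal(size=n)
g = np.sin(Z)                 # nonlinear nuisance in the outcome
D = 0.8 * Z + rng.normal(size=n)   # confounded treatment: E[D|Z] = 0.8*Z
theta = 2.0                   # true treatment effect
Y = theta * D + g + rng.normal(size=n)

def fit_conditional_mean(z, t, bins=50):
    """Crude estimate of E[t|z] via quantile-binned means (ML stand-in)."""
    edges = np.quantile(z, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, z, side="right") - 1, 0, bins - 1)
    means = np.array([t[idx == b].mean() for b in range(bins)])
    return means[idx]

Y_res = Y - fit_conditional_mean(Z, Y)   # Y - E[Y|Z]
D_res = D - fit_conditional_mean(Z, D)   # D - E[D|Z]

# Residual-on-residual regression recovers theta (Robinson / FWL logic).
theta_dml = (D_res @ Y_res) / (D_res @ D_res)

# Naive regression of Y on D alone is biased upward by confounding via Z.
theta_naive = (D @ Y) / (D @ D)

print(theta_dml, theta_naive)
```

With this seed the residual-on-residual estimate lands close to the true value of 2, while the naive slope absorbs part of the sin(Z) effect through the correlation between D and Z. (The full DML recipe would additionally use cross-fitting, i.e. estimate the nuisances on one fold and form residuals on another.)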
@darthyzhu5767 · 7 years ago
great talk, wondering where to access the slides.
@ruizhenmai1194 · 4 years ago
@mathieumaticien Hi, the slides have already been removed.
@VainCape · 2 years ago
@ruizhenmai1194 Why?
@patrickpower7102 · 3 years ago
In "perfectly set-up" randomized control trials, m_0 wouldn't vanish, but rather would be a constant value of 0.5 for all values of Z, no? (6:25)
@PrirodnyiCossack · 3 years ago
Yes, though one can assume that that constant had been partialed out, which would give zero.
@gwillis3323 · 3 years ago
No, because D isn't binary; D is continuous. D = m(z) + V, where V is a random variable that does not depend on z. In a perfect trial, D = V, so for example D might be drawn from a Gaussian distribution with sufficient support to make the inferences you wish to make. You could go further and say that in a "perfect" trial, V is drawn from a uniform distribution over some sufficiently large domain. I think here, "perfect" just means "not confounded at all".
@marcelogallardo9218 · 3 years ago
Most impressive.
@user-zj1kz6mh6g · 3 months ago
I am safe in my knowledge and curiosity
@MrRestorevideos · 3 months ago
Machine learner who worked back in the 30's 🤣
@chockumail · 2 months ago
"I resisted to call it ML and I gave up" and machine learners in the 30's :) Hilarious