Hi Leslie, I'm a medical student in Indonesia in the process of conducting relatively "new" research about travel health. This video helps me a lot, since the research contains many independent variables and I have not yet found any control variables. I'm planning to use sensitivity analysis to cover this weakness of the research. Thank you Leslie :)
I have a question about propensity scores (PS). First, why does weighting minimize bias more than stratification or matching methods? Second, should the choice among these techniques be guided by the research question?
Stratification (subclassification) "coarsens" the grouping (less similar units get grouped together), and with matching, not all units can get a good match. In general, the choice of technique should be guided by features of the dataset.
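For intuition on the weighting point, here is a toy sketch in Python/numpy (not from the video; the propensity model and all coefficients are made-up assumptions, and the true propensity score is used directly for simplicity rather than estimated):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Confounder x raises both the probability of treatment and the outcome.
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-1.5 * x))            # true propensity score (assumed known here)
treat = rng.binomial(1, p)
y = 2.0 * treat + 3.0 * x + rng.normal(size=n)   # true treatment effect = 2.0

# Naive comparison of group means is confounded by x (biased upward).
naive = y[treat == 1].mean() - y[treat == 0].mean()

# Inverse-probability weighting: each unit is reweighted by 1/P(its treatment),
# which balances x across groups and recovers the true effect.
mu1 = np.average(y[treat == 1], weights=1 / p[treat == 1])
mu0 = np.average(y[treat == 0], weights=1 / (1 - p[treat == 0]))
ipw = mu1 - mu0

print(round(naive, 2), round(ipw, 2))
```

Because weighting uses every unit at its exact propensity value (no coarsening into strata, no discarding unmatched units), the weighted estimate lands near the true effect of 2.0 while the naive difference stays well above it.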
Hi Leslie, I understand that we vary the strength by varying the coefficients. But what does U look like? Is it just some arbitrary normal distribution with a mean and std dev? Thanks :)
Yes exactly! In general, when people want to assess the impact of a continuous U, assuming a normal distribution for U is common. The mean and SD of U then depend on parameters governing the relationships between U and the treatment, and between U and the outcome.
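Concretely, one simple way this looks in code (a minimal numpy sketch; the parameter values are illustrative assumptions, not from the video) is a normal U whose mean shifts with treatment:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

treat = rng.binomial(1, 0.5, n)   # a toy binary treatment

# Hypothetical sensitivity parameters (assumptions for illustration):
delta = 0.8   # how much U's mean shifts between treatment groups
sd_u = 1.0    # SD of U within each treatment group

# U is normally distributed, but its mean depends on treatment,
# so U and treatment are associated by construction.
u = rng.normal(loc=delta * treat, scale=sd_u)

diff = u[treat == 1].mean() - u[treat == 0].mean()
print(round(diff, 2))   # close to delta
```

Varying `delta` (and, analogously, a coefficient linking U to the outcome) is exactly what "varying the strength" of U means.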
I didn't actually use tipr in making this video (learned about it later). I used custom code available here: lmyint.github.io/causal_fall_2020/sensitivity-analyses-for-unmeasured-variables.html
@wataru_fukuokaya Yes, that code uses a useful general approach: simulation of the unmeasured confounder. It simulates an unmeasured variable U that is a common cause of treatment and outcome (and is independent of the measured confounders). In this way, you can use U in any subsequent analysis as if it were a measured confounder. The code creates multiple U's with different strengths of association with the treatment and outcome. When you use U in your analysis to estimate causal effects (e.g., Cox regression), you can include these U's in the model to see how your estimates change.
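The recipe above can be sketched end to end in a few lines of Python/numpy. This is a simplified linear-regression stand-in (not the Cox regression or the actual course code): the data-generating model and the `delta` strengths are illustrative assumptions, and U is drawn conditional on both treatment and outcome so that it behaves like a common cause of the chosen strength:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Toy "observed" data with a true treatment effect of 1.0 on the outcome.
treat = rng.binomial(1, 0.5, n).astype(float)
y = 1.0 * treat + rng.normal(size=n)

def treat_coef(columns, y):
    """OLS via least squares; returns the coefficient on the first column (treatment)."""
    X = np.column_stack([np.ones(len(y))] + columns)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

unadjusted = treat_coef([treat], y)

# Simulate U's of increasing strength: delta controls how strongly U is
# associated with BOTH treatment and outcome (values are illustrative).
estimates = {}
for delta in [0.0, 0.5, 1.0]:
    u = rng.normal(loc=delta * treat + delta * y, scale=1.0)
    # Refit the outcome model with the simulated U included, as if it
    # had been a measured confounder all along.
    estimates[delta] = treat_coef([treat, u], y)

print(round(unadjusted, 2), {d: round(e, 2) for d, e in estimates.items()})
```

As the simulated confounder's strength grows, the U-adjusted treatment estimate shrinks away from the unadjusted one; scanning over strengths shows how strong an unmeasured confounder would have to be to change, or explain away, your conclusion.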