An effect size is “a standardized measure of the size of an effect”. Unlike p values, effect sizes can be objectively compared to determine whether a treatment had any practical usefulness. Cohen’s d is the most commonly used measure of effect size for t tests. This video makes three points:
(a) Using an example from Rosnow & Rosenthal, we learn how very different p values can result from exactly the same effect size.
(b) We learn about Jacob Cohen’s conventions for interpreting d, including practical examples and the overlap of the distributions.
(c) We discover the basis for conducting a power analysis before beginning data collection.
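Points (a) and (b) can be sketched with a short Python example. The group means, standard deviations, and sample sizes below are made up for illustration (they are not the numbers from the video); the p values use a normal approximation to the t distribution, which is fine for rough comparison:

```python
import math
from statistics import NormalDist

def cohens_d(m1, m2, sd1, sd2, n1, n2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# Hypothetical groups: a 5-point mean difference with SD = 10 gives d = 0.5,
# a "medium" effect by Cohen's conventions. d does not depend on sample size.
d = cohens_d(105, 100, 10, 10, 20, 20)
print(f"d = {d}")

# Overlap of two equal-SD normal distributions separated by d: 2 * Phi(-|d|/2).
# For d = 0.5 the distributions overlap by roughly 80%.
overlap = 2 * NormalDist().cdf(-abs(d) / 2)
print(f"overlap = {overlap:.1%}")

# Same d, very different p: with equal groups, t = d * sqrt(n_per_group / 2),
# so t (and hence p) depends on n even though the effect size does not.
for n in (20, 200):
    t = d * math.sqrt(n / 2)
    p = 2 * (1 - NormalDist().cdf(t))  # two-sided, normal approximation
    print(f"n = {n:3d} per group: t = {t:.2f}, p = {p:.5f}")
```

With n = 20 per group the result is nonsignificant (p around .11), while with n = 200 per group the same d = 0.5 yields p far below .001, which is exactly Rosnow and Rosenthal's point.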
Finally, I give you four reasons why we should report the effect size of a study (Neill, 2008):
• because the APA says so,
• when generalization is not important, effect sizes provide context,
• when sample size is small, effect sizes give meaning,
• when sample size is large, effect sizes lend clarity.
In short, there is no reason why you should fail to report effect size.
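The a-priori power analysis mentioned in point (c) can also be sketched in a few lines. This uses the normal approximation rather than the exact noncentral-t computation, so the sample sizes it returns are slight underestimates of what a tool like G*Power would report:

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample t test
    (normal approximation; the exact t-based answer is slightly larger)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # 1.96 for alpha = .05
    z_beta = z.inv_cdf(power)           # 0.84 for power = .80
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Cohen's conventional benchmarks: small = 0.2, medium = 0.5, large = 0.8
for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: about {n_per_group(d)} participants per group")
```

The pattern is the practical payoff: detecting a small effect (d = 0.2) takes hundreds of participants per group, while a large effect (d = 0.8) needs only a couple dozen, which is why the expected effect size must be chosen before data collection begins.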
References
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates. (p. 12)
Sawilowsky, S. (2009). New effect size rules of thumb. Journal of Modern Applied Statistical Methods, 8(2), 467-474.
Effect size calculator for t Tests: drive.google.com/drive/folder...
This video teaches the following concepts and techniques:
Cohen’s d effect size for t tests
Link to a Google Drive folder with all of the files that I use in the videos, including the Effect Size Calculator for t Tests and datasets. As I add new files, they will appear here as well.
drive.google.com/drive/folder...
Jun 27, 2017