*My takeaways:*
1. Probability sampling: simple random sampling 0:58
2. A data analysis example 6:11
 - How to tighten the standard deviation: take larger samples, not more samples 14:40
3. How to visualise and understand the data: error bars 17:30
 - When confidence intervals don't overlap, we can conclude that the means are statistically significantly different 18:30
4. Standard error 25:04
 - Standard error vs standard deviation 29:33
 - Problem with standard error: we don't have the population standard deviation 30:46, but we can use the sample standard deviation to get a close estimate
 - Three different distributions and their skews 36:15; when we use the sample standard deviation to estimate the population standard deviation, more samples are needed for more skewed distributions
5. The good results always line up with the confidence intervals 43:35
Standard deviation: a measure of spread around the mean; for a normal distribution, ~68% of the samples fall within one standard deviation of the mean.
Confidence interval: mean ± 1.96*sd gives a symmetric interval around the mean that accounts for ~95% of the samples.
Standard error (of the mean): the standard deviation of the population divided by the square root of the sample size. Using the sample standard deviation in place of the population standard deviation gives a close approximation.
37:00 - The more extreme values a distribution has, the larger the error between sample and population will be for smaller sample sizes.
40:00 - The size of the population doesn't matter, but the skew and the size of the sample do.
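A small sketch of that last definition (a hypothetical normal population; the specific numbers are just illustrative): the standard deviation of many sample means empirically matches sigma / sqrt(n), which is what the standard error formula claims.

```python
import random
import statistics

random.seed(17)

# Hypothetical population: 100,000 values from a normal distribution.
population = [random.gauss(100, 15) for _ in range(100_000)]
pop_sd = statistics.pstdev(population)

# Standard error of the mean for samples of size n: sigma / sqrt(n).
n = 100
se = pop_sd / n ** 0.5

# Empirical check: the standard deviation of many sample means
# should be close to the analytic standard error.
sample_means = [statistics.mean(random.sample(population, n))
                for _ in range(1000)]
empirical_se = statistics.stdev(sample_means)

print(round(se, 2), round(empirical_se, 2))  # the two values should be close
```

This is the same idea as at 29:33: the standard error tells you how much sample means wobble around the population mean without actually drawing many samples.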
It would be useful to carefully explain the difference between a sample (a random draw of size n from the population), individual sample elements (each member of the sample), and replications (the number of samples drawn). Otherwise it's easy to confuse which one you're talking about.
I would also recommend reading the recommended book and taking some time to digest the information there (like thinking and staring at the wall). It can be really confusing if you just watch the lecture; this one in particular requires reading on the student's side too, since it builds on the few lectures before it. The lectures before this one were easy to follow. Nonetheless, the material is very important and extremely interesting once you understand it well.
+1 for reading the text - I've been doing the assigned readings after each lecture, and the way they cover the same material in a slightly different way with different examples has really helped set the knowledge in my mind.
I am a bit surprised he didn't comment on how the sample standard deviation is a biased estimator of the population standard deviation, and how you should divide by (n-1) instead of n (n being the sample size) when making this estimate...
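For anyone curious, the bias is easy to see in a quick simulation (hypothetical population, numbers only illustrative). Note that dividing by (n-1), Bessel's correction, makes the *variance* estimate unbiased; the standard deviation is still slightly biased, but much closer on average:

```python
import random
import statistics

random.seed(42)
population = [random.gauss(0, 10) for _ in range(100_000)]
true_sd = statistics.pstdev(population)  # divides by N

n = 5  # small samples make the bias visible
biased, unbiased = [], []
for _ in range(20_000):
    sample = random.sample(population, n)
    biased.append(statistics.pstdev(sample))   # divides by n
    unbiased.append(statistics.stdev(sample))  # divides by n - 1

mean_b = statistics.mean(biased)
mean_u = statistics.mean(unbiased)

# The /n estimator systematically undershoots the true SD;
# the /(n-1) estimator lands noticeably closer.
print(round(true_sd, 2), round(mean_b, 2), round(mean_u, 2))
```

With large samples the difference between /n and /(n-1) becomes negligible, which is probably why the lecture glosses over it.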
@aidenigelson9826 Actually, we always take just one sample, but the idea is that if you kept taking infinitely many samples, the mean of those sample means would be the true population mean. For experiments we take one sample and compute the p-value and the confidence interval to get an idea of how well that sample represents the true population.
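The "one sample is enough" idea can be checked with a simulation (hypothetical normal population, illustrative numbers): build a 95% confidence interval from each single sample, and across many repetitions roughly 95% of those intervals should contain the true population mean.

```python
import random
import statistics

random.seed(0)
population = [random.gauss(50, 12) for _ in range(100_000)]
true_mean = statistics.mean(population)

n, hits, trials = 100, 0, 1000
for _ in range(trials):
    sample = random.sample(population, n)
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5  # estimated standard error
    # 95% confidence interval built from this single sample
    if m - 1.96 * se <= true_mean <= m + 1.96 * se:
        hits += 1

cover = hits / trials
print(cover)  # roughly 0.95
```

So any one interval might miss, but the procedure is calibrated: that's what the 95% refers to.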
Thank you MIT for making these courses open! To verify whether we chose the right sample size, we check what fraction of the time we break the empirical rule. But I'm not clear on why it is fine to use the estimated standard error (rather than the population standard error) when computing this fraction. Say the population is skewed and we choose a small sample: won't the estimated standard error be inaccurate? If so, how can it be used to verify the distance between the population and sample means?
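Your worry is well-founded: with a skewed population and a small sample, the estimated standard error *is* inaccurate, and the 95% rule does get broken more often. A sketch with a hypothetical exponential (skewed) population shows the coverage dropping for tiny samples and recovering as n grows, which is exactly why larger samples are needed for skewed distributions:

```python
import random
import statistics

random.seed(1)
# Skewed population: exponentially distributed values.
population = [random.expovariate(1.0) for _ in range(100_000)]
true_mean = statistics.mean(population)

def coverage(n, trials=2000):
    """Fraction of 95% CIs (built from the estimated SE) containing true_mean."""
    hits = 0
    for _ in range(trials):
        sample = random.sample(population, n)
        m = statistics.mean(sample)
        se = statistics.stdev(sample) / n ** 0.5  # estimated standard error
        if m - 1.96 * se <= true_mean <= m + 1.96 * se:
            hits += 1
    return hits / trials

c5, c200 = coverage(5), coverage(200)
# Tiny samples from a skewed population break the 95% rule noticeably;
# larger samples restore it.
print(c5, c200)
```

So the estimated SE is not used because it is always trustworthy; the point of the check is precisely to reveal at which sample size it *becomes* trustworthy.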
I am wondering whether numTrials plays a part in calculating the confidence interval at the end. A sample size of 200 gave 95%, but how does numTrials affect this?
Well, when he is comparing the different distributions vs population size... why does the uniform distribution show a difference of roughly 7.5% at sample size 25 (37:21), but about 25% on the next slide (39:12)?