Thank you for responding, Nuka! I agree with everything you're saying. :)
Staghorne also misses an important point about the interpretation of statistics: there is a world of difference between statistically significant and practically significant.
Thank you for pointing this out! I'm so used to this by now that I don't find the distinction between statistically and practically significant paradoxical anymore, but it is indeed important to keep in mind.
Just because two means are statistically significantly different does not mean it's practical to call them "different": with sample sizes that large [5,000], you could end up talking about utterly minuscule effect sizes!
Actually, I think the fact that statistics can pinpoint such minute effects is a testament to the power of the approach. In a practical situation, however, this specificity is often blunted by incorrect assumptions and various inescapable biases. It's a bit sad that all these beautiful numbers we can compute are so sensitive to factors we cannot treat or control with math alone... :/
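To make that concrete, here's a quick Python sketch (the 0.05-standard-deviation effect size is invented purely for illustration): with 5,000 samples per group, even a tiny difference in means tends to come out statistically significant, while the standardized effect size shows it's practically negligible.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000  # per group, matching the sample size mentioned above

# Two populations whose true means differ by a tiny 0.05 standard deviations
a = rng.normal(loc=0.00, scale=1.0, size=n)
b = rng.normal(loc=0.05, scale=1.0, size=n)

t, p = stats.ttest_ind(a, b)

# Cohen's d: the mean difference in units of the pooled standard deviation
cohens_d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)

print(f"p-value   = {p:.4f}")       # usually below 0.05 at this n
print(f"Cohen's d = {cohens_d:.3f}")  # around 0.05: practically negligible
```

At n = 5,000 per group the standard error of the difference is about 0.02, so even a 0.05-SD gap sits roughly 2.5 standard errors from zero: "significant" on paper, trivial in practice.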
In the interests of being positive and pragmatic, I'll mention that it's easy to point out all of the reasons why something is bad, but something else entirely to posit a better system.
Oops! I didn't mean to come off as negative; apologies all around.
Furthermore, it was not at all my intention to imply that significance testing is a bad idea. On the contrary, I agree with all your points here—not to mention that I think the research is super-interesting and that I loved reading about it here on Flayrah.
Personally, I think significance testing is one of the best tools in statistics, and I love seeing it used in research (yours very much included), as long as one is aware of the limitations. The distinction between p-values and hypothesis probabilities, in particular, is one of those things I think it's beneficial to know in order to discuss quantitative scientific results effectively; hence my original comment.
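For anyone curious about that last distinction, here's a toy base-rate calculation (all three numbers are made up for illustration): a p-value tells you how often you'd see data this extreme *if the null were true*, not how likely the null is. Even among results that clear p < 0.05, the null can still be true a substantial fraction of the time once you factor in how plausible the alternative was to begin with.

```python
# Toy numbers, purely illustrative: among results that reach p < 0.05,
# how often is the null hypothesis actually true?
alpha = 0.05     # false-positive rate: P(significant | H0 true)
power = 0.80     # P(significant | H1 true)
prior_h1 = 0.10  # assumed prior plausibility of the alternative

# Bayes' rule over the two ways a result can come out significant
p_sig = alpha * (1 - prior_h1) + power * prior_h1
p_h0_given_sig = alpha * (1 - prior_h1) / p_sig

print(f"P(H0 true | significant result) = {p_h0_given_sig:.2f}")  # ~0.36
```

So under these (invented) assumptions, roughly a third of "significant" findings would still be false positives, which is exactly why a p-value of 0.05 shouldn't be read as "only a 5% chance the null is true."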