I agree entirely with Staghorne here =) Statistics are always to be taken with a grain of salt; I include them not because I'm trying to say "therefore these are undisputed conclusions", but in the interest of transparency and "showing my work". Were I to just make a whole bunch of claims about "what the data showed", I would be subject to immense criticism (for good reason): who am I to decide what counts as "significant" or not? At least by using t-tests and regression analyses, I can say "well, according to this procedure, it is statistically significant".
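For the curious, here's roughly what that procedure looks like in practice. This is a minimal sketch in Python with SciPy, and the numbers are made up purely for illustration, not actual survey data:

```python
from scipy import stats

# Hypothetical scores for two groups -- placeholder numbers, not survey data
group_one_scores = [4.2, 3.8, 4.5, 4.0, 3.9, 4.4, 4.1]
group_two_scores = [3.5, 3.7, 3.2, 3.9, 3.4, 3.6, 3.3]

# Welch's t-test (doesn't assume the two groups have equal variances)
t_stat, p_value = stats.ttest_ind(group_one_scores, group_two_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # small p -> "statistically significant"
```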
As Staghorne mentioned, this is certainly not a perfect process: t-tests and regression analyses are riddled with assumptions about the distribution of the data. I run with those analyses primarily because they're the ones most frequently used by social psychologists. I'm also partial to structural equation modeling which, while again not perfect, has the advantage of being a bit better at accounting for the simultaneous relationships among multiple predictor variables, multiple outcome variables, shared covariance, and so on.
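To make the point about assumptions concrete, here's a rough sketch of the kind of checks involved (again with simulated placeholder data, not survey results): a Shapiro-Wilk test for normality and Levene's test for equal variances, both of which the classic t-test leans on.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=4.0, scale=0.5, size=50)  # simulated placeholder scores
group_b = rng.normal(loc=3.6, scale=0.5, size=50)

# Normality: the t-test assumes (roughly) normally distributed data
print(stats.shapiro(group_a))
print(stats.shapiro(group_b))

# Homogeneity of variance: the classic (Student's) t-test assumes equal variances
print(stats.levene(group_a, group_b))
```

When these checks fail badly, the p-values from the usual tests can't be taken at face value, which is exactly the sort of caveat Staghorne is raising.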
Staghorne also misses an important point about the interpretation of statistics: there is a world of difference between statistical significance and practical significance. It's less of an issue in this case, given that our sample size isn't enormous; but with a sample of, say, 5,000, the fact that two means are statistically significantly different doesn't make it practical to call them "different": at sample sizes that large, you can end up talking about utterly minuscule effect sizes!
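A quick simulation illustrates this (the numbers here are invented purely for demonstration): with 5,000 respondents per group, even a tiny true difference will usually cross the p < .05 threshold, while the standardized effect size (Cohen's d) stays far below even a "small" effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000
a = rng.normal(loc=0.00, scale=1.0, size=n)
b = rng.normal(loc=0.06, scale=1.0, size=n)  # truly minuscule difference

t_stat, p_value = stats.ttest_ind(a, b)

# Cohen's d: mean difference in units of the pooled standard deviation
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
cohens_d = (b.mean() - a.mean()) / pooled_sd

print(f"p = {p_value:.4f}, Cohen's d = {cohens_d:.3f}")
# Typically p < .05 here, yet d is around 0.06 -- well below the
# conventional cutoff of 0.2 for even a "small" effect.
```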
I would say that I agree with your comments in general, Staghorne, but in the interest of being positive and pragmatic, I'll mention that it's easy to point out all the reasons why something is bad, and something else entirely to posit a better system. In place of these statistics, can you think of a better way to present and analyze the data, in a manner that's both reasonably simple for people to interpret and space-friendly? The problems you're describing have indeed long existed with inferential statistics. But until I'm able to survey the entirety of the furry population, they're the best we can do =P