Reference Collection to push back against "Common Statistical Myths"

Note: This topic is a wiki, meaning that this main body of the topic can be edited by others. Use the Reply button only to post questions or comments about material contained in the body, or to suggest new statistical myths you’d like to see someone write about.


I am not claiming to be the leading authority on any or all of the things listed below, but several of us on Twitter have repeatedly floated the idea of creating a list of references that may be used to argue against some common statistical myths or no-nos.

By posting in thread format, I will start but others should feel free to chime in. However, it MAY be easier if the first post (or one of the first posts) is continually updated so all references on a particular topic are indexed at the top. I am not sure of the best way to handle this - either I will try to periodically edit the first post to keep the references updated, or maybe Frank will have a better idea for how to structure and update this content.

I was hoping to organize this into a few key myths/topics. While I am happy to add any topic that authors think is important, the intent here is not to recreate an entire statistical textbook. I’m hoping to provide an easy to navigate list of references so when we get one of the classic review comments like “the authors should add p-values to Table 1” we have some rebuttal evidence that’s easy to find.

I’ve listed a few below to start. Please feel free to email, Twitter DM, or comment below. This will be a living ‘document’ so if there’s something you think is missing, or if I cite a paper that you feel has a fatal flaw and does not support its stated purpose, let me know. We’ll see how this goes.

TOPIC: P-Values in Table 1 of Randomized Trials

Rationale: In RCTs, it is a common belief that one should always present a table of p-values comparing the baseline characteristics of the randomized treatment groups. This is not a good idea, for the reasons given in the references below.

Altman DG, Dore CJ. Randomisation and baseline comparison in clinical trials. Lancet 1990; 335: 149-153. (https://www.ncbi.nlm.nih.gov/pubmed/1967441)

Begg CB. Significance tests of covariate imbalance in clinical trials. Controlled Clin Trials 1990; 11: 223-225. (https://www.ncbi.nlm.nih.gov/pubmed/2171874)

Senn SJ. Baseline comparisons in randomized clinical trials. Stat Med 1991; 10: 1157-1160 (https://www.ncbi.nlm.nih.gov/pubmed/1876802)

Senn SJ. Testing for baseline balance in clinical trials. Stat Med 1994; 13: 1715-1726. (https://www.ncbi.nlm.nih.gov/pubmed/7997705)

TOPIC: Covariate Adjustment in RCT

Rationale: Somewhat related to the above, many consumers of randomized trials believe that there is no need for any covariate adjustment in RCT analyses. While it is true that covariate adjustment is not required for an RCT to be valid, there are benefits to adjusting for baseline covariates that have strong relationships with the study outcome, as explained by the references below. If a reader/reviewer questions why you have chosen to adjust, these may prove helpful.

Canner PL. Covariate adjustment of treatment effects in clinical trials. Controlled Clin Trials 1991; 12: 359-366. (https://www.ncbi.nlm.nih.gov/pubmed/1651207)

Neuhaus JM. Estimation Efficiency with Omitted Covariates in Generalized Linear Models. J Am Stat Assoc 1998; 93: 1124-1129.

Hauck WW, Anderson S, Marcus SM. Should We Adjust for Covariates in Nonlinear Regression Analyses of Randomized Trials? Controlled Clin Trials 1998; 19: 249-256. (https://www.ncbi.nlm.nih.gov/pubmed/9620808)

Steyerberg EW, Bossuyt PMM, Lee KL. Clinical trials in acute myocardial infarction: should we adjust for baseline characteristics? Am Heart J 2000; 139(5): 745-751. (https://www.ncbi.nlm.nih.gov/pubmed/10783203)

Hernandez AV, Steyerberg EW, Habbema JDF. Covariate adjustment in randomized controlled trials with dichotomous outcomes increases statistical power and reduces sample size requirements. J Clin Epi 2004; 57(5): 454-460. (https://www.ncbi.nlm.nih.gov/pubmed/15196615)

Hernandez AV, Eijkemans MJC, Steyerberg EW. Randomized controlled trials with time-to-event outcomes: How much does prespecified covariate adjustment increase power? Ann Epi 2006; 16(1): 41-48. (https://www.ncbi.nlm.nih.gov/pubmed/16275011)

Gray LJ, Bath P, Collier T. Should stroke trials adjust for functional outcome for baseline prognostic factors? Stroke 2009; 40: 888-894. (https://www.ncbi.nlm.nih.gov/pubmed/19164798)

Kent DM, Trikalinos TA, Hill MD. Are unadjusted analyses of clinical trials inappropriately biased toward the null? Stroke 2009; 40(3): 672-673. (https://www.ncbi.nlm.nih.gov/pubmed/19164784)

Lingsma H, Roozenbeek B, Steyerberg E. Covariate adjustment increases statistical power in randomized controlled trials. J Clin Epi 2010; 63(12): 1391. (https://www.ncbi.nlm.nih.gov/pubmed/20800991)

Groenwold RHH, Moons KGN, Peelen LM, Knol MJ, Hoes AW. Reporting of treatment effects from randomized trials: A plea for multivariable risk ratios. Contemp Clin Trials 2011; 32(3): 399-402. (https://www.ncbi.nlm.nih.gov/pubmed/21195797)

Ciolino JD, Martin RH, Zhao W, Jauch EC, Hill MD, Palesch YY. Covariate imbalance and adjustment for logistic regression analysis of clinical trial data. J Biopharm Stat 2013; 23(6): 1383-1402. (https://www.ncbi.nlm.nih.gov/pubmed/24138438)
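To make the efficiency argument concrete, here is a hedged simulation sketch (not drawn from any of the papers above; the sample size, effect sizes, and variable names are all made-up assumptions): adjusting for a strongly prognostic baseline covariate shrinks the standard error of the treatment effect even though randomization already makes the unadjusted estimate unbiased.

```python
# Simulated 1:1 parallel-group trial. All numbers are illustrative assumptions.
import numpy as np

def ols_se(X, y):
    """OLS coefficient estimates and their standard errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])
    return beta, np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))

rng = np.random.default_rng(1)
n = 200
treat = np.repeat([0.0, 1.0], n // 2)           # randomized arm indicator
baseline = rng.normal(size=n)                   # strongly prognostic covariate
y = 0.3 * treat + 1.5 * baseline + rng.normal(size=n)

_, se_unadj = ols_se(np.column_stack([np.ones(n), treat]), y)
_, se_adj = ols_se(np.column_stack([np.ones(n), treat, baseline]), y)
# se_adj[1], the adjusted treatment SE, is markedly smaller than se_unadj[1]
print(se_unadj[1], se_adj[1])
```

Nothing here depends on the arms being "balanced"; the gain comes purely from explaining outcome heterogeneity.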

TOPIC: Analyzing “Change” Measures in RCTs

Rationale: Many authors and pharmaceutical clinical trialists make the mistake of analyzing change from baseline instead of making the raw follow-up measurements the primary outcomes, covariate-adjusted for baseline. Computing change scores requires many assumptions to hold (for more detail, see Frank’s post on this: https://www.fharrell.com/post/errmed/#change). It is generally better to analyze the follow-up measurement as the outcome with a covariate adjustment for the baseline value, as this seems to better match the question of interest: for two patients with the same pre-trial value of the study outcome, one given treatment A and the other treatment B, will the patients tend to have different post-treatment values?

Vickers AJ, Altman DG. Analysing controlled trials with baseline and follow up measurements. BMJ 2001; 323: 1123.

TOPIC: Using Within-Group Tests in Parallel-Group Randomized Trials

Rationale: Researchers often analyze randomized trials and other comparative studies by separately analyzing changes from baseline in each parallel group. Sometimes they will incorrectly conclude that their study proves a treatment effect exists because the within-group test for the treatment group yields a “significant” p-value, even though this ignores the control group entirely (what is the purpose of including a control group if you never compare the treated group against it?).

Bland JM, Altman DG. Best (but oft forgotten) practices: testing for treatment effects in randomized trials by separate analyses of changes from baseline in each group is a misleading approach. Am J Clin Nutr 2015; 102(5); 991-994.
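A tiny simulation illustrates how this goes wrong (all numbers are assumed for illustration): both arms drift upward over time and the treatment has zero true effect, yet the within-arm paired test in the treated group will typically be "significant"; only the between-group comparison of changes makes use of the randomization.

```python
# Hypothetical trial with a common time trend and NO true treatment effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 50
drift = 0.8                                     # improvement common to both arms
pre_t = rng.normal(size=n)
post_t = pre_t + drift + rng.normal(size=n)     # treated arm: no extra benefit
pre_c = rng.normal(size=n)
post_c = pre_c + drift + rng.normal(size=n)     # control arm: same drift

# Within-arm paired test in the treated group: detects the drift, not the drug
p_within = stats.ttest_rel(post_t, pre_t).pvalue
# Between-group test of the changes: the comparison the randomization supports
p_between = stats.ttest_ind(post_t - pre_t, post_c - pre_c).pvalue
```

The within-arm p-value answers "did the treated patients change?", not "did the treatment cause the change?" — the control arm answers the latter.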

TOPIC: Sample Size / Number of Variables for Regression Models

Rationale: It is common to see regression models with far too many variables included relative to the amount of data (as a reviewer, I’ll see papers that report a “risk score” that includes 20+ variables in a logistic regression model with ~200 patients and ~30 outcome events). A commonly cited rule of thumb is “10 events per variable” in logistic regression, but in fact the appropriate number is more complex than any fixed ratio, though the rule may function as a useful “BS test” at first glance.

Courvoisier DS, Combescure C, Agoritsas T, Gayet-Ageron A, Perneger TV. Performance of logistic regression modeling: beyond the number of events per variable, the role of data structure. J Clin Epi 2011; 64(9): 993-1000. (https://www.ncbi.nlm.nih.gov/pubmed/21411281)

van Smeden M, de Groot JA, Moons KG, Collins GS, Altman DG, Eijkemans MJ, Reitsma JB. No rationale for 1 variable per 10 events criterion for binary logistic regression analysis. BMC Medical Research Methodology 2016; 16(1): 163. (https://www.ncbi.nlm.nih.gov/pubmed/27881078)

Ogundimu EO, Altman DG, Collins GS. Adequate sample size for developing prediction models is not simply related to events per variable. J Clin Epi 2016; 76:175-82. (https://www.ncbi.nlm.nih.gov/pubmed/26964707)

van Smeden M, Moons KG, de Groot JA, Collins GS, Altman DG, Eijkemans MJ, Reitsma JB. Sample size for binary logistic prediction models: beyond events per variable criteria. Stat Methods Med Res 2018 (epub). (https://www.ncbi.nlm.nih.gov/pubmed/29966490)

Riley RD, Snell H, Ensor J, Burke DL, Harrell FE, Moons KG, Collins GS. Minimum sample size for developing a multivariable prediction model: PART II – binary and time-to-event outcomes. Stat Med 2019; 38(7): 1276-1296. (https://www.ncbi.nlm.nih.gov/pubmed/30357870)
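The arithmetic behind the reviewer example in the rationale above is worth spelling out (a back-of-the-envelope sketch only; the papers above show that the right sample size depends on far more than this ratio):

```python
# Numbers taken from the hypothetical "risk score" example in the rationale:
# ~30 outcome events, 20+ candidate predictors.
events, candidate_predictors = 30, 20
epv = events / candidate_predictors        # events per variable: 1.5
allowed_by_10_epv_rule = events // 10      # even the lax old rule allows only 3
print(epv, allowed_by_10_epv_rule)
```

So the example model fails even the crude screen by an order of magnitude, before any of the subtler criticisms in the references apply.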

TOPIC: Stepwise Variable Selection (Don’t Do It!)

Rationale: though stepwise selection procedures are taught in many introductory statistics courses as a way to make multivariable modeling easy and data-driven, statisticians generally dislike it for several reasons, many of which are explained in the reference below:

Smith G. Step away from stepwise. Journal of Big Data 2018; 5: 32 (https://journalofbigdata.springeropen.com/articles/10.1186/s40537-018-0143-6)

TOPIC: Screening covariates to include in multivariable models with bivariable tests

Rationale: People sometimes decide to include variables in multivariable models only if they are “significant” predictors of the outcome when included in the model by themselves (i.e., they are crudely associated with the outcome). This is a bad idea, much like stepwise variable selection.

Sun GW, Shook TL, Kay GL. Inappropriate use of bivariable analysis to screen risk factors for use in multivariable analysis. Journal of Clinical Epidemiology. 1996. 49, 8:907-16 (https://www.sciencedirect.com/science/article/pii/089543569600025X)

Greenland S. Modeling and variable selection in epidemiologic analysis. American Journal of Public Health. 1989. 79, 3:340-9. (https://doi.org/10.2105/AJPH.79.3.340)

TOPIC: Post-Hoc Power (Is Not Really A Thing)

Rationale: in studies that fail to yield “statistically significant” results, it is common for reviewers, or even editors, to ask the authors to include a post hoc power calculation. In such situations, editors would like to distinguish between true negatives and false negatives (concluding there is no effect when there actually is one, and the study was simply too small to detect it). However, reporting post-hoc power is nothing more than restating the p-value in a different way, and therefore cannot answer the question editors want answered.

Hoenig JM, Heisey DM. The Abuse of Power: The Pervasive Fallacy of Power Calculations for Data Analysis. The American Statistician 2001; 55(1): 19-24. (https://www.vims.edu/people/hoenig_jm/pubs/hoenig2.pdf)

Lenth RV. Post Hoc Power: Tables and Commentary. (https://stat.uiowa.edu/sites/stat.uiowa.edu/files/techrep/tr378.pdf)

Goodman SN, Berlin JA (1994) The use of predicted confidence intervals when planning experiments and the misuse of power when interpreting results. Ann Intern Med 121:200-206 (https://annals.org/aim/fullarticle/707593/use-predicted-confidence-intervals-when-planning-experiments-misuse-power-when)
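A short numerical sketch of the "restating the p-value" point (assumed setup: a two-sided z-test; the function name is mine, not from the references): "observed power" is a deterministic, monotone transformation of the p-value, and p = 0.05 always maps to observed power of almost exactly 50%.

```python
# "Observed power": power computed as if the true effect equaled the observed one.
from scipy import stats

def observed_power(p, alpha=0.05):
    z_obs = stats.norm.isf(p / 2)       # |z| implied by the two-sided p-value
    z_crit = stats.norm.isf(alpha / 2)  # critical value at level alpha
    # two-sided power at the observed effect size
    return stats.norm.sf(z_crit - z_obs) + stats.norm.cdf(-z_crit - z_obs)

print(round(observed_power(0.05), 2))   # 0.5 at the significance boundary
```

Since it carries no information beyond p, asking for it after a null result adds nothing; the Goodman & Berlin paper above explains what to report instead (confidence intervals).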

TOPIC: Misunderstood “Normality” Assumptions

Rationale: one of the pieces of information that many folks who have taken an introductory statistics class retain is that the Normal distribution is basically everything, and they often assume that the data need to be normally distributed for ALL statistical procedures to work correctly. However, in many of the procedures and tests that we use, the normality of the error terms (or residuals) matters, not the normality of the data points themselves.
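A small simulated example of the distinction (all distributions and numbers are my own assumptions): the outcome data can be heavily skewed while the regression errors, the thing the assumption is actually about, are perfectly normal.

```python
# Skewed data, valid model: y inherits skewness from x, not from the errors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
x = rng.exponential(scale=2.0, size=1000)      # heavily right-skewed predictor
y = 1.0 + 3.0 * x + rng.normal(size=1000)      # errors ARE normal by construction

slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)

skew_y = stats.skew(y)           # large: a normality check on y itself "fails"
skew_resid = stats.skew(resid)   # near zero: the model assumption is fine
```

Checking the marginal distribution of y here would wrongly condemn a correctly specified model; it is the residuals that matter.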

TOPIC: Absence of Evidence is Not Evidence of Absence

Rationale: It’s also common to break results into “significant” (p<0.05) and “not significant” (p≥0.05); when the latter occurs, many interpret the phrase “no significant effect” as evidence that there is no effect, when this is not really true (thanks to @davidcnorrismd for adding another reference below).

Altman DG, Bland JM. Statistics Notes: Absence of Evidence is Not Evidence of Absence. BMJ 1995; 311; 485. (https://www.bmj.com/content/311/7003/485.full)

Braithwaite R. EBM’s six dangerous words. JAMA 2013; 310(20): 2149-2150. doi:10.1001/jama.2013.281996

Gelman A, Stern H. The Difference Between “Significant” and “Not Significant” is Not Itself Statistically Significant. The American Statistician 2006; 60(4): 328-331.
(https://www.tandfonline.com/doi/abs/10.1198/000313006X152649)
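A numerical sketch of the point (the effect estimate and standard error below are made-up summary statistics): a “non-significant” result can be entirely consistent with a large benefit, which the confidence interval makes obvious while the p-value hides it.

```python
# Hypothetical small study: "not significant" but nowhere near evidence of no effect.
from scipy import stats

est, se = 0.40, 0.385                        # assumed effect estimate and its SE
p = 2 * stats.norm.sf(abs(est / se))         # about 0.30: "not significant"
lo, hi = est - 1.96 * se, est + 1.96 * se    # roughly (-0.35, 1.15)
# The interval excludes neither zero nor a substantial benefit:
# the study is inconclusive, not evidence of absence.
print(round(p, 2), round(lo, 2), round(hi, 2))
```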

TOPIC: Inappropriately Splitting Continuous Variables Into Categorical Ones

Rationale: people often choose to split a continuous variable into dichotomized groups or a few bins (e.g. using quartiles to divide the data into four groups, then comparing the highest versus the lowest quartile). There are select and limited reasons why one may choose to partition continuous variables into categories, but more often than not this is a bad idea and done simply because it’s believed to be “easier” to perform or understand.

Naggara, O. et al. Analysis by categorizing or dichotomizing continuous variables is inadvisable: an example from the natural history of unruptured aneurysms. AJNR. American journal of neuroradiology 32, 437–40 (2011). (http://www.ajnr.org/content/32/3/437)

Royston, P., Altman, D. & Sauerbrei, W. Dichotomizing continuous predictors in multiple regression: a bad idea. Statistics in medicine 25, 127–41 (2006). (https://www.ncbi.nlm.nih.gov/pubmed/16217841)

Dawson, N. & Weiss, R. Dichotomizing continuous variables in statistical analysis: a practice to avoid. Medical decision making : an international journal of the Society for Medical Decision Making 32, 225–6 (2012). (https://journals.sagepub.com/doi/10.1177/0272989X12437605)

Altman, D. Problems in dichotomizing continuous variables. American journal of epidemiology 139, 442–5 (1994). (https://academic.oup.com/aje/article-abstract/139/4/442/78599?redirectedFrom=fulltext)

Thoresen, M. Spurious interaction as a result of categorization. BMC medical research methodology 19, 28 (2019). (https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-019-0667-2)

Altman, D. & Royston, P. The cost of dichotomising continuous variables. BMJ (Clinical research ed.) 332, 1080 (2006). (https://www.bmj.com/content/332/7549/1080.1)
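The information cost is easy to demonstrate by simulation (a sketch under assumed conditions: a normal predictor with a true linear relationship to the outcome): a median split recovers only about 64% of the R² available from the original measurement, the classic (√(2/π))² loss.

```python
# Median-splitting a continuous predictor discards predictive information.
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=5000)
y = x + rng.normal(size=5000)                # true linear dose-response

r2_continuous = np.corrcoef(x, y)[0, 1] ** 2
high = (x > np.median(x)).astype(float)      # "high" vs "low" halves
r2_median_split = np.corrcoef(high, y)[0, 1] ** 2
print(r2_median_split / r2_continuous)       # about 0.64
```

The Royston, Altman & Sauerbrei paper above works through this and worse cases (e.g. data-driven "optimal" cutpoints).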

TOPIC: Use of normality tests before t tests

Rationale: This is commonly recommended to researchers who are not statisticians.

Example Citation: Ghasemi A, Zahediasl S. Normality tests for statistical analysis: a guide for non-statisticians. Int J Endocrinol Metab 2012; 10(2): 486-489. doi:10.5812/ijem.3505 (link)

“The assumption of normality needs to be checked for many statistical procedures, namely parametric tests, because their validity depends on it.”

Problem: The nominal size and power of the unconditional t test is changed with the combined procedure in unknown ways.

Rasch D, Kubinger KD, Moder K (2011): The two-sample t test: pre-testing its assumptions does not pay off. Statistical Papers 52(1): 219-231 (link)

Rochon J, Kieser M (2010): A closer look at the effect of preliminary goodness-of-fit testing for normality for the one-sample t-test. Br J Math Stat Psychol 64: 410-426 (link)

Rochon J, Gondan M, Kieser M (2012): To test or not to test: Preliminary assessment of normality when comparing two independent samples. BMC Med Res Methodol 12: 81 (link)

Schoder V, Himmelmann A, Wilhelm KP (2006): Preliminary testing for normality: some statistical aspects of a common concept. Clin Exp Dermatol 31: 757-761 (link)

TOPIC: I2 in meta-analysis doesn’t refer to an absolute measure of heterogeneity

Rationale: when reporting heterogeneity results in a meta-analysis, the value of I2 is often misinterpreted and treated as an absolute measure of heterogeneity, when in fact it is not.

Borenstein M, Higgins JPT, Hedges LV, Rothstein HR (2017) Basics of meta-analysis: I2 is not an absolute measure of heterogeneity. Res Syn Meth 8:5-18. https://doi.org/10.1002/jrsm.1230

TOPIC: Number Needed to Treat (NNT)

Andrade C (2015) The numbers needed to treat and harm (NNT, NNH) statistics: what they tell us and what they do not. The Journal of clinical psychiatry 76:e330-3 https://doi.org/10.4088/JCP.15f09870

Citrome L, Ketter T (2013) When does a difference make a difference? Interpretation of number needed to treat, number needed to harm, and likelihood to be helped or harmed. International journal of clinical practice 67:407–11 https://doi.org/10.1111/ijcp.12142

TOPIC: Propensity-Score Matching - Not Always As Good As It Seems

Rationale: Conventional covariate adjustment is adequate in most cases with sufficient sample size, and propensity-score matching is not necessarily superior.

Elze MC, et al. (2017). Comparison of Propensity Score Methods and Covariate Adjustment: Evaluation in 4 Cardiovascular Studies. Journal of the American College of Cardiology 69(3):345-357. https://doi.org/10.1016/j.jacc.2016.10.060

Gary King on “Why Propensity Scores Should Not Be Used for Matching” https://www.youtube.com/watch?v=rBv39pK1iEs

Brooks JM, Ohsfeldt RL. Squeezing the balloon: propensity scores and unmeasured covariate balance. Health services research. 2013 Aug;48(4):1487-507.

Ali MS, Groenwold RH, Klungel OH. Propensity score methods and unobserved covariate imbalance: comments on “squeezing the balloon”. Health services research. 2014 Jun;49(3):1074-82.

TOPIC: Responder Analysis

Rationale: In some cases, authors attempt to dichotomize a continuous primary efficacy measure into “responders” and “non-responders.” This is discussed at length in another thread on this forum, but here are some scholarly references:

Snapinn SM, Jiang Q. Responder analyses and the assessment of a clinically relevant treatment effect. Trials 2007. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2164942/

TOPIC: Significance testing in pilot studies

Rationale: Authors often perform null-hypothesis testing in pilot studies and report p-values. However, the purpose of pilot studies is to identify issues in all aspects of the study, ranging from recruitment to data management and analysis. Pilot studies are not usually powered for inferential testing. If testing is done, p-values should not be emphasized and confidence intervals should be reported. Results on potential outcomes should be regarded as descriptive. A CONSORT extension for pilot and feasibility studies exists, and is a useful reference to include in submissions and cover letters. Editors may not be aware of this extension of CONSORT.

Eldridge SM et al. CONSORT 2010 statement: extension to randomised pilot and feasibility trials, BMJ 2016

Moore CG et al. Recommendations for Planning Pilot Studies in Clinical and Translational Research, Clin Transl Sci 2011

Additional Requested Topics (will add refs when practical, but feel free to add your own suggestions here)

Feel free to add your own suggestions here; we are happy to revisit and update whenever practical.


Really good initiative Andrew. Sorry for the shameless self-promotion but I do have a thread with some misconceptions and references that might be helpful for getting this list together


Wow, this is incredible Andrew. In a moment I’m going to make my first attempt at converting a topic to a wiki topic that anyone can edit. Let’s see if that’s a good approach for growing this resource which you so nicely started.

Update: it’s now a wiki. Apparently you click on a small orange pencil symbol inside a small orange box to edit the topic. Then you’ll see another option to Edit Wiki. Perhaps others will reply here with more pointers.


Would this be the appropriate thread to add references on the issues related to using parametric assumptions on ordinal data? This has always bothered my mathematical conscience.

Prof. Harrell had posted a great link to a recent paper in another thread:

A draft copy can be found here (I assume it is OK to post a link to the draft):

Analyzing Ordinal Data with Metric Models: What Could Possibly Go Wrong?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2692323


What is the scope of material that should be contributed? The theme of this site in general and this list so far is med stats / clinical trials / epi. Would materials from other disciplines be appreciated, or would they be redundant or off topic; e.g., there are publications on post hoc power in subfields of biology (ecology, evolution, and animal behavior) that present the issues from that field’s perspective, but have nothing to do with the core themes of this site.


I’ll leave that for Frank to answer, as this website is his brainchild. I don’t think it’s unreasonable to present a paper from a different field that still addresses a core statistical topic (the examples you cite are good ones). Here is the site description from the home page, so perhaps this can be our guide, though of course there is some judgement in what exactly fits into this:

This is a place where statisticians, epidemiologists, informaticists, machine learning practitioners, and other research methodologists communicate with themselves and with clinical, translational, and health services researchers to discuss issues related to data: research methods, quantitative methods, study design, measurement, statistical analysis, interpretation of data and statistical results, clinical trials, journal articles, statistical graphics, causal inference, medical decision making, and more.


Amazing initiative. It will be really great to break down the misconceptions and myths that have been plaguing the field for a while, especially in applied contexts. Thanks for the effort of putting this wiki page up with references!


What are your thoughts on also including informative and well written blogs and shiny apps?


I’ve got no problem with it. I was initially trying to prioritize scientific publications if only because when you’re appealing to an editor they might be more inclined to take that seriously versus a blog (fairly or not…) but there are certainly some excellent blog posts that may be useful here as well.


Would Gelman and Stern’s point in this paper be appropriate to add to the list?

The Difference Between “Significant” and “Not Significant” is not Itself Statistically Significant
https://www.tandfonline.com/doi/abs/10.1198/000313006X152649


This is a great resource. Thank you.

2 other prevalent myths come to mind:

  1. Matching - or not all that has intuitive appeal is actually good (or worthy)
  2. The unbearable lightness of NNTs

Some excellent suggestions in the last few posts - please feel free to add your favorites on those (after all, I want this to be crowd-sourced, not just my favorites!) I’ll also try to add a few when I get a chance.

Great initiative. Look forward to seeing this build.

I have some quibbles about the advice on covariate adjustment in RCTs. A classic paper is Pocock et al, Subgroup analysis and other (mis)uses of baseline data in clinical trials. However, things have moved on a little since then. But I’ll start from the beginning.

The reason you do not adjust an RCT is because you have randomised. If you have not introduced any bias through the conduct of the trial, then any difference between the groups must be due to a treatment effect or chance. The p-value accounts for those instances where you got unlucky. If you’re going to start adjusting, why randomise?

It is fine, of course, to do exploratory analysis beyond the primary endpoint but emphasis should always be on the unadjusted result, because you randomised. You have the luxury of actually random samples, you don’t need to resort to tricks from other designs.

The big problem with adjusting results is the sheer scope for finding models that give you the answer you want. Measure two dozen baseline characteristics, chuck them all in and see what comes out in the wash (see also: subgroups, and pre-specification as an unreliable marker of biological plausibility).

Pre-specification could help, but why would you pre-specify a baseline imbalance? If you knew it was an important prognostic factor, why didn’t you stratify the randomisation? Why did you leave yourself scope to cheat when you could have designed out the problem?

And that brings us onto a more recent development on covariate adjustment. The key to an RCT is to “analyse as randomised” (see also: intention-to-treat). So if you stratified the randomisation, you should stratify the analysis by exactly the same factors. The unadjusted p-value is accounting for a lot of possible outcomes that you made impossible by design. So you are fully entitled to take account of that (but it may be wise to report the unadjusted results also, and/or stick in a reference for the statto reviewer).

There’s a nice empirical reference for that last point: Reporting and analysis of trials using stratified randomisation in leading medical journals: review and reanalysis

And the EMA guideline (https://www.ema.europa.eu/en/documents/scientific-guideline/guideline-adjustment-baseline-covariates-clinical-trials_en.pdf) may also be of interest (aimed at Pharma and device manufacturers).

Thanks again for this. It’s a really useful initiative.

I have to strongly disagree with that. Much has been written about this. Briefly, you have to covariate adjust in RCTs to make the most out of the data, i.e., to get the best power and precision. It’s all about explaining explainable outcome heterogeneity, and nothing to do with balance. And concerning stratification, Stephen Senn has shown repeatedly that the correct philosophy is to pose the model and then randomize consistent with that, not the other way around as you have suggested.


Does anyone think a section on using normality tests before doing a t-test is needed? I see it frequently in the rehabilitation literature.

Example:

I looked up the paper and this is what they did:

The Kolmogorov-Smirnov-Lilliefors test was applied to evaluate the normal distribution for each
investigated group. To detect the presence of outliers, the Grubb’s test was performed. Levene’s test was used to test for variance homogeneity. For normally distributed data and variance homogeneity, Student’s t-test was applied to access gender-specific differences and a one-way analysis of variance (ANOVA) followed by post hoc Scheffé’s test to analyze differences between cohorts. The subjects were later grouped based on the variability in the sacrum orientation and lumbar lordosis during different standing phases. Due to small size of the individual sub-groups, the non-parametric Friedman test was performed to assess the differences between repeated measurements in the subgroups, followed by post hoc Nemenyi test. Additionally, a regression analyses was applied and the coefficient of determination (R2) was calculated. P-values of <0.05 were considered statistically significant. The statistical analyzes were performed with R 3.2.5 (R-Core-Team, 2016).

In defense of the authors – their hypothesis was that there would be greater variance in the low back pain group vs asymptomatic participants, so some of these methods were understandable.

Their study found a large amount of variability in sacral orientation and lordotic curvature.

I thought the following Stack Exchange threads were appropriate:

Anyone have more scholarly references?


I’m aware of at least four papers on this topic:

  • Rasch D, Kubinger KD, Moder K (2011): The two-sample t test: pre-testing its assumptions does not pay off. Statistical Papers 52(1): 219-231 (link)
  • Rochon J, Kieser M (2010): A closer look at the effect of preliminary goodness‐of‐fit testing for normality for the one‐sample t‐test. Br J Math Stat Psychol 64: 410-426 (link)
  • Rochon J, Gondan M, Kieser M (2012): To test or not to test: Preliminary assessment of normality when comparing two independent samples. BMC Med Res Methodol 12: 81 (link)
  • Schoder V, Himmelmann A, Wilhelm KP (2006): Preliminary testing for normality: some statistical aspects of a common concept. Clin Exp Dermatol 31: 757-761 (link)

Great thread. Thanks for listing the attendant articles.


I think this would be a fine addition; though I think it is somewhat related to a topic already listed, Misunderstood “Normality” Assumptions. You and @COOLSerdash should feel free to edit and add things to this section, including the “Rationale” at the top as well as the references. As noted in a reply above, while scholarly references are preferred due to the goal of this resource, well-written blog posts also are welcome as they may provide additional useful ammunition for authors in their efforts to reply to reviewers and/or editors.


IIRC, Box (or maybe Rozeboom?) said testing for some of these assumptions was like putting a rowboat out on the ocean to see if it’s calm enough for the Queen Mary.