Note: This topic is a wiki, meaning that this main body of the topic can be edited by others. Use the Reply button only to post questions or comments about material contained in the body, or to suggest new statistical myths you’d like to see someone write about.
I am not claiming to be the leading authority on any or all of the things listed below, but several of us on Twitter have repeatedly floated the idea of creating a list of references that may be used to argue against some common statistical myths or no-nos.
I will start the thread, and others should feel free to chime in. However, it may be easier if the first post (or one of the first posts) is continually updated so that all references on a particular topic are indexed at the top. I am not sure of the best way to handle this: either I will periodically edit the first post to keep the references updated, or perhaps Frank will have a better idea for how to structure and update this content.
I was hoping to organize this into a few key myths/topics. While I am happy to add any topic that authors think is important, the intent here is not to recreate an entire statistical textbook. I’m hoping to provide an easy-to-navigate list of references so that when we get one of the classic review comments, like “the authors should add p-values to Table 1,” we have rebuttal evidence that’s easy to find.
I’ve listed a few below to start. Please feel free to email, Twitter DM, or comment below. This will be a living ‘document’ so if there’s something you think is missing, or if I cite a paper that you feel has a fatal flaw and does not support its stated purpose, let me know. We’ll see how this goes.
TOPIC: P-Values in Table 1 of Randomized Trials
Rationale: In RCTs, it is a common belief that one should always present a table with p-values comparing the baseline characteristics of the randomized treatment groups. This is not a good idea, for the reasons explained in the references below.
Altman DG, Dore CJ. Randomisation and baseline comparison in clinical trials. Lancet 1990; 335: 149-153. (https://www.ncbi.nlm.nih.gov/pubmed/1967441)
Begg CB. Significance tests of covariate imbalance in clinical trials. Controlled Clin Trials 1990; 11: 223-225. (https://www.ncbi.nlm.nih.gov/pubmed/2171874)
Senn SJ. Baseline comparisons in randomized clinical trials. Stat Med 1991; 10: 1157-1160 (https://www.ncbi.nlm.nih.gov/pubmed/1876802)
Senn SJ. Testing for baseline balance in clinical trials. Stat Med 1994; 13: 1715-1726. (https://www.ncbi.nlm.nih.gov/pubmed/7997705)
TOPIC: Covariate Adjustment in RCT
Rationale: Somewhat related to the above, many consumers of randomized trials believe that there is no need for any covariate adjustment in RCT analyses. While it is true that adjustment is not required for an RCT to yield a valid treatment comparison, there are benefits to adjusting for baseline covariates that have strong relationships with the study outcome, as explained by the references below. If a reader/reviewer questions why you have chosen to adjust, these may prove helpful.
Canner PL. Covariate adjustment of treatment effects in clinical trials. Controlled Clin Trials 1991; 12: 359-366. (https://www.ncbi.nlm.nih.gov/pubmed/1651207)
Neuhaus JM. Estimation Efficiency with Omitted Covariates in Generalized Linear Models. J Am Stat Assoc 1998; 93: 1124-1129.
Hauck WW, Anderson S, Marcus SM. Should We Adjust for Covariates in Nonlinear Regression Analyses of Randomized Trials? Controlled Clin Trials 1998; 19: 249-256. (https://www.ncbi.nlm.nih.gov/pubmed/9620808)
Steyerberg EW, Bossuyt PMM, Lee KL. Clinical trials in acute myocardial infarction: should we adjust for baseline characteristics? Am Heart J 2000; 139(5): 745-751. (https://www.ncbi.nlm.nih.gov/pubmed/10783203)
Hernandez AV, Steyerberg EW, Habbema JDF. Covariate adjustment in randomized controlled trials with dichotomous outcomes increases statistical power and reduces sample size requirements. J Clin Epi 2004; 57(5): 454-460. (https://www.ncbi.nlm.nih.gov/pubmed/15196615)
Hernandez AV, Eijkemans MJC, Steyerberg EW. Randomized controlled trials with time-to-event outcomes: How much does prespecified covariate adjustment increase power? Ann Epi 2006; 16(1): 41-48. (https://www.ncbi.nlm.nih.gov/pubmed/16275011)
Gray LJ, Bath P, Collier T. Should stroke trials adjust for functional outcome for baseline prognostic factors? Stroke 2009; 40: 888-894. (https://www.ncbi.nlm.nih.gov/pubmed/19164798)
Kent DM, Trikalinos TA, Hill MD. Are unadjusted analyses of clinical trials inappropriately biased toward the null? Stroke 2009; 40(3): 672-673. (https://www.ncbi.nlm.nih.gov/pubmed/19164784)
Lingsma H, Roozenbeek B, Steyerberg E. Covariate adjustment increases statistical power in randomized controlled trials. J Clin Epi 2010; 63(12): 1391. (https://www.ncbi.nlm.nih.gov/pubmed/20800991)
Groenwold RHH, Moons KGN, Peelen LM, Knol MJ, Hoes AW. Reporting of treatment effects from randomized trials: A plea for multivariable risk ratios. Contemp Clin Trials 2011; 32(3): 399-402. (https://www.ncbi.nlm.nih.gov/pubmed/21195797)
Ciolino JD, Martin RH, Zhao W, Jauch EC, Hill MD, Palesch YY. Covariate imbalance and adjustment for logistic regression analysis of clinical trial data. J Biopharm Stat 2013; 23(6): 1383-1402. (https://www.ncbi.nlm.nih.gov/pubmed/24138438)
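For the special case of a linear model, the efficiency gain from adjustment has a simple closed form: adjusting for a baseline covariate with outcome correlation rho shrinks the standard error of the treatment effect by a factor of sqrt(1 - rho^2) (randomized treatment, large samples). A minimal sketch; the function name is my own:

```python
import math

def adjusted_se_ratio(rho):
    """Ratio of the adjusted to the unadjusted standard error of the
    treatment effect in a linear model, where rho is the correlation
    between the baseline covariate and the outcome (randomized
    treatment, large n)."""
    return math.sqrt(1 - rho ** 2)

# A strongly prognostic covariate buys a meaningfully smaller SE:
for rho in (0.3, 0.5, 0.7):
    print(rho, round(adjusted_se_ratio(rho), 2))  # 0.7 -> 0.71, ~29% smaller SE
```

For nonlinear models (logistic, Cox), the mechanics differ because of non-collapsibility, but the references above show that prespecified adjustment for strongly prognostic covariates still increases power.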
TOPIC: Analyzing “Change” Measures in RCTs
Rationale: Many authors and pharmaceutical clinical trialists make the mistake of analyzing change from baseline instead of making the raw follow-up measurements the primary outcomes, covariate-adjusted for baseline. Computing change scores requires many assumptions to hold (for more detail, see Frank’s post on this: https://www.fharrell.com/post/errmed/#change). It is generally better to analyze the follow-up measurement as the outcome with covariate adjustment for the baseline value, as this better matches the question of interest: for two patients with the same pre-trial value of the study outcome, one given treatment A and the other treatment B, will the patients tend to have different post-treatment values?
Vickers AJ, Altman DG. Analysing controlled trials with baseline and follow up measurements. BMJ 2001; 323: 1123.
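The relative precision of the three common analyses has a well-known closed form when baseline and follow-up have equal variances and baseline-outcome correlation rho (this is the comparison made by Vickers & Altman). A minimal sketch, with my own function name:

```python
def relative_variance(rho):
    """Variance of the estimated treatment effect under three analyses,
    relative to analyzing follow-up alone, assuming equal baseline and
    follow-up variances with correlation rho."""
    follow_up_only = 1.0
    change_score = 2 * (1 - rho)   # worse than ignoring baseline when rho < 0.5
    ancova = 1 - rho ** 2          # never worse than either alternative
    return follow_up_only, change_score, ancova

print(tuple(round(v, 2) for v in relative_variance(0.3)))  # (1.0, 1.4, 0.91)
```

Note that with a weak baseline-outcome correlation (rho = 0.3 here), the change-score analysis is actually less precise than throwing the baseline away entirely, while ANCOVA is always at least as precise as both.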
TOPIC: Using Within-Group Tests in Parallel-Group Randomized Trials
Rationale: Researchers often analyze randomized trials and other comparative studies by performing a separate analysis of the change from baseline in each parallel group. Sometimes they will incorrectly conclude that their study demonstrates a treatment effect because the within-group test for the treatment group has a “significant” p-value, even though this ignores the control group entirely (what is the purpose of having a control group if you are not going to compare the treated group against it?).
Bland JM, Altman DG. Best (but oft forgotten) practices: testing for treatment effects in randomized trials by separate analyses of changes from baseline in each group is a misleading approach. Am J Clin Nutr 2015; 102(5): 991-994.
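A small numeric illustration of why the within-group tests mislead, using hypothetical numbers and plain z-tests: one arm’s change can be “significant” and the other’s not, while the direct between-arm comparison, the analysis that actually answers the question, is far from significant.

```python
from math import sqrt
from statistics import NormalDist

def two_sided_p(estimate, se):
    """Two-sided p-value for a z-test of estimate / se against zero."""
    z = abs(estimate) / se
    return 2 * (1 - NormalDist().cdf(z))

# Hypothetical mean changes from baseline (and SEs) in each arm:
p_treated = two_sided_p(5.0, 2.2)   # within-group: "significant"
p_control = two_sided_p(2.0, 2.2)   # within-group: "not significant"
# The correct comparison: difference in changes between arms
p_between = two_sided_p(5.0 - 2.0, sqrt(2.2**2 + 2.2**2))

print(round(p_treated, 3), round(p_control, 3), round(p_between, 3))
```

Here the treated arm’s within-group p-value is below 0.05 while the between-arm p-value is well above 0.3: the pattern “significant in one group, not the other” does not demonstrate a treatment difference.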
TOPIC: Sample Size / Number of Variables for Regression Models
Rationale: It is common to see regression models with far too many variables included relative to the amount of data (as a reviewer, I’ll see papers that report a “risk score” built from 20+ variables in a logistic regression model with ~200 patients and ~30 outcome events). A commonly cited rule of thumb is “10 events per variable” for logistic regression, but the appropriate sample size is more complex than that, though the rule may function as a useful “BS test” at first glance.
Courvoisier DS, Combescure C, Agoritsas T, Gayet-Ageron A, Perneger TV. Performance of logistic regression modeling: beyond the number of events per variable, the role of data structure. J Clin Epi 2011; 64(9): 993-1000. (https://www.ncbi.nlm.nih.gov/pubmed/21411281)
van Smeden M, de Groot JA, Moons KG, Collins GS, Altman DG, Eijkemans MJ, Reitsma JB. No rationale for 1 variable per 10 events criterion for binary logistic regression analysis. BMC Medical Research Methodology 2016; 16(1): 163. (https://www.ncbi.nlm.nih.gov/pubmed/27881078)
Ogundimu EO, Altman DG, Collins GS. Adequate sample size for developing prediction models is not simply related to events per variable. J Clin Epi 2016; 76: 175-182. (https://www.ncbi.nlm.nih.gov/pubmed/26964707)
van Smeden M, Moons KG, de Groot JA, Collins GS, Altman DG, Eijkemans MJ, Reitsma JB. Sample size for binary logistic prediction models: beyond events per variable criteria. Stat Methods Med Res 2018 (epub). (https://www.ncbi.nlm.nih.gov/pubmed/29966490)
Riley RD, Snell KIE, Ensor J, Burke DL, Harrell FE, Moons KG, Collins GS. Minimum sample size for developing a multivariable prediction model: Part II – binary and time-to-event outcomes. Stat Med 2019; 38(7): 1276-1296. (https://www.ncbi.nlm.nih.gov/pubmed/30357870)
TOPIC: Stepwise Variable Selection (Don’t Do It!)
Rationale: Though stepwise selection procedures are taught in many introductory statistics courses as a way to make multivariable modeling easy and data-driven, statisticians generally dislike them for several reasons, many of which are explained in the reference below:
Smith G. Step away from stepwise. Journal of Big Data 2018; 5: 32 (https://journalofbigdata.springeropen.com/articles/10.1186/s40537-018-0143-6)
TOPIC: Screening covariates to include in multivariable models with bivariable tests
Rationale: People sometimes decide to include variables in multivariable models only if they are “significant” predictors of the outcome when entered in a model by themselves (i.e., they are crudely associated with the outcome). This is a bad idea, much like stepwise variable selection.
Sun GW, Shook TL, Kay GL. Inappropriate use of bivariable analysis to screen risk factors for use in multivariable analysis. Journal of Clinical Epidemiology. 1996. 49, 8:907-16 (https://www.sciencedirect.com/science/article/pii/089543569600025X)
Greenland S. Modeling and variable selection in epidemiologic analysis. American Journal of Public Health. 1989. 79, 3:340-9. (https://doi.org/10.2105/AJPH.79.3.340)
TOPIC: Post-Hoc Power (Is Not Really A Thing)
Rationale: In studies that fail to yield “statistically significant” results, it is common for reviewers, or even editors, to ask the authors to include a post-hoc power calculation. The motivation is understandable: editors would like to distinguish true negatives from false negatives (concluding there is no effect when there actually is one, and the study was simply too small to detect it). However, post-hoc power is nothing more than the p-value re-expressed on a different scale, so it cannot answer the question editors want answered.
Hoenig JM, Heisey DM. The Abuse of Power: The Pervasive Fallacy of Power Calculations for Data Analysis. The American Statistician 2001; 55 (https://www.vims.edu/people/hoenig_jm/pubs/hoenig2.pdf)
Lenth RV. Post Hoc Power: Tables and Commentary. Technical Report 378, Department of Statistics and Actuarial Science, University of Iowa. (https://stat.uiowa.edu/sites/stat.uiowa.edu/files/techrep/tr378.pdf)
Goodman SN, Berlin JA (1994) The use of predicted confidence intervals when planning experiments and the misuse of power when interpreting results. Ann Intern Med 121:200-206 (https://annals.org/aim/fullarticle/707593/use-predicted-confidence-intervals-when-planning-experiments-misuse-power-when)
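The Hoenig & Heisey point can be made concrete: for a z-test, “observed power” is a deterministic one-to-one function of the p-value, so reporting it adds no information. A sketch; the function name is my own:

```python
from statistics import NormalDist

def post_hoc_power(p_two_sided, alpha=0.05):
    """'Observed power' implied by a two-sided z-test p-value: the power a
    replicate study would have if the true effect equalled the observed one.
    A pure function of the p-value, so it carries no new information."""
    z = NormalDist()
    z_obs = z.inv_cdf(1 - p_two_sided / 2)   # |z| implied by the p-value
    z_crit = z.inv_cdf(1 - alpha / 2)        # 1.96 for alpha = 0.05
    return (1 - z.cdf(z_crit - z_obs)) + z.cdf(-z_crit - z_obs)

print(round(post_hoc_power(0.05), 2))  # 0.5: p exactly at alpha => ~50% power
```

In particular, every “non-significant” result maps to a post-hoc power below about 50%, so the calculation can never rescue a null study.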
TOPIC: Misunderstood “Normality” Assumptions
Rationale: One of the pieces of information that many folks retain from an introductory statistics class is that the Normal distribution is everywhere, and they often assume the data must be normally distributed for ALL statistical procedures to work correctly. However, for many of the procedures and tests we use, it is the normality of the error terms (residuals) that matters, not the normality of the raw data themselves.
TOPIC: Absence of Evidence is Not Evidence of Absence
Rationale: It is also common to dichotomize results into “significant” (p<0.05) and “not significant” (p≥0.05); when the latter occurs, many interpret the phrase “no significant effect” as evidence that there is no effect, which is not really true (thanks to @davidcnorrismd for adding another reference below).
Altman DG, Bland JM. Statistics Notes: Absence of Evidence is Not Evidence of Absence. BMJ 1995; 311; 485. (https://www.bmj.com/content/311/7003/485.full)
Braithwaite R. EBM’s six dangerous words. JAMA 2013; 310(20): 2149-2150. (https://doi.org/10.1001/jama.2013.281996)
Gelman A, Stern H. The difference between “significant” and “not significant” is not itself statistically significant. Am Stat 2006; 60(4): 328-331.
TOPIC: Inappropriately Splitting Continuous Variables Into Categorical Ones
Rationale: People often split a continuous variable into dichotomized groups or a few bins (e.g., using quartiles to divide the data into four groups, then comparing the highest versus the lowest quartile). There are a few limited situations in which partitioning continuous variables into categories is defensible, but more often than not it is a bad idea, done simply because it is believed to be “easier” to perform or understand.
Naggara O, et al. Analysis by categorizing or dichotomizing continuous variables is inadvisable: an example from the natural history of unruptured aneurysms. AJNR Am J Neuroradiol 2011; 32: 437-440. (http://www.ajnr.org/content/32/3/437)
Royston P, Altman DG, Sauerbrei W. Dichotomizing continuous predictors in multiple regression: a bad idea. Stat Med 2006; 25: 127-141. (https://www.ncbi.nlm.nih.gov/pubmed/16217841)
Dawson N, Weiss R. Dichotomizing continuous variables in statistical analysis: a practice to avoid. Med Decis Making 2012; 32: 225-226. (https://journals.sagepub.com/doi/10.1177/0272989X12437605)
Altman DG. Problems in dichotomizing continuous variables. Am J Epidemiol 1994; 139: 442-445. (https://academic.oup.com/aje/article-abstract/139/4/442/78599?redirectedFrom=fulltext)
Thoresen M. Spurious interaction as a result of categorization. BMC Med Res Methodol 2019; 19: 28. (https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-019-0667-2)
Altman DG, Royston P. The cost of dichotomising continuous variables. BMJ 2006; 332: 1080. (https://www.bmj.com/content/332/7549/1080.1)
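One way to quantify the loss: for a normally distributed predictor, a median split retains a correlation of only sqrt(2/pi) with the original variable, discarding about 36% of its linear information (in variance terms). A short check of that arithmetic:

```python
import math

# For a standard-normal predictor X and its median split D = 1[X > 0]:
# corr(X, D) = E[X * D] / (sd(X) * sd(D)) = phi(0) / 0.5 = sqrt(2 / pi)
corr = math.sqrt(2 / math.pi)   # ~0.80
variance_lost = 1 - corr ** 2   # ~0.36 of the linear signal discarded

print(round(corr, 2), round(variance_lost, 2))  # 0.8 0.36
```

Splitting at a point away from the median, or comparing only the extreme quartiles, discards even more.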
TOPIC: Use of normality tests before t tests
Rationale: Researchers who are not statisticians are commonly advised to run a normality test before performing a t-test, on the grounds that the validity of parametric tests depends on the normality assumption.
Example citation: Ghasemi A, Zahediasl S. Normality tests for statistical analysis: a guide for non-statisticians. Int J Endocrinol Metab 2012; 10(2): 486-489. doi:10.5812/ijem.3505 (link)
Problem: Conditioning the choice of test on a preliminary normality test changes the nominal size and power of the unconditional t-test in unknown ways.
Rasch D, Kubinger KD, Moder K (2011): The two-sample t test: pre-testing its assumptions does not pay off. Statistical Papers 52(1): 219-231 (link)
Rochon J, Kieser M (2010): A closer look at the effect of preliminary goodness-of-fit testing for normality for the one-sample t-test. Br J Math Stat Psychol 64: 410-426 (link)
Rochon J, Gondan M, Kieser M (2012): To test or not to test: Preliminary assessment of normality when comparing two independent samples. BMC Med Res Methodol 12: 81 (link)
Schoder V, Himmelmann A, Wilhelm KP (2006): Preliminary testing for normality: some statistical aspects of a common concept. Clin Exp Dermatol 31: 757-761 (link)
TOPIC: I2 in Meta-Analysis Is Not an Absolute Measure of Heterogeneity
Rationale: When reporting heterogeneity in a meta-analysis, the I2 value is often misinterpreted and treated as an absolute measure of heterogeneity, when in fact it is not.
Borenstein M, Higgins JPT, Hedges LV, Rothstein HR. Basics of meta-analysis: I2 is not an absolute measure of heterogeneity. Res Syn Meth 2017; 8: 5-18. (https://doi.org/10.1002/jrsm.1230)
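The Higgins-Thompson relation makes the point concrete: I2 compares the between-study variance tau^2 to the typical within-study variance, so the same absolute heterogeneity yields very different I2 values depending on how precise the component studies are. A sketch; the function name and the numbers are mine:

```python
def i_squared(tau2, s2):
    """Approximate I^2 (%) from the between-study variance tau2 and a
    'typical' within-study variance s2 (Higgins-Thompson relation)."""
    return 100 * tau2 / (tau2 + s2)

# Identical absolute heterogeneity (tau2 = 0.04), different study sizes:
print(round(i_squared(0.04, 0.16), 1))  # small, noisy studies: I^2 = 20.0
print(round(i_squared(0.04, 0.01), 1))  # large, precise studies: I^2 = 80.0
```

The absolute heterogeneity (tau2) is unchanged between the two lines; only the precision of the included studies differs.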
TOPIC: Number Needed to Treat (NNT)
Rationale: The NNT and its companion, the number needed to harm (NNH), are popular summaries of treatment effects, but they are frequently misinterpreted; the references below discuss what these statistics do and do not tell us.
Andrade C (2015) The numbers needed to treat and harm (NNT, NNH) statistics: what they tell us and what they do not. The Journal of clinical psychiatry 76:e330-3 https://doi.org/10.4088/JCP.15f09870
Citrome L, Ketter T (2013) When does a difference make a difference? Interpretation of number needed to treat, number needed to harm, and likelihood to be helped or harmed. International journal of clinical practice 67:407–11 https://doi.org/10.1111/ijcp.12142
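As a reminder of the arithmetic: NNT is the reciprocal of the absolute risk reduction, so the same relative effect can yield wildly different NNTs at different baseline risks, which is one reason an NNT quoted without its baseline risk can mislead. A sketch with hypothetical numbers:

```python
def nnt(risk_control, risk_treated):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1 / (risk_control - risk_treated)

# Same relative risk (0.75) at two different baseline risks:
print(round(nnt(0.20, 0.15)))  # 20 patients treated to prevent one event
print(round(nnt(0.04, 0.03)))  # 100 patients treated to prevent one event
```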
TOPIC: Propensity-Score Matching - Not Always As Good As It Seems
Rationale: Conventional covariate adjustment is sufficient in most cases with adequate sample size, and propensity-score matching is not necessarily superior.
Elze MC, et al. Comparison of propensity score methods and covariate adjustment: evaluation in 4 cardiovascular studies. J Am Coll Cardiol 2017; 69(3): 345-357. (https://doi.org/10.1016/j.jacc.2016.10.060)
Gary King on “Why Propensity Scores Should Not Be Used for Matching” https://www.youtube.com/watch?v=rBv39pK1iEs
Brooks JM, Ohsfeldt RL. Squeezing the balloon: propensity scores and unmeasured covariate balance. Health Serv Res 2013; 48(4): 1487-1507.
Ali MS, Groenwold RH, Klungel OH. Propensity score methods and unobserved covariate imbalance: comments on “squeezing the balloon”. Health Serv Res 2014; 49(3): 1074-1082.
TOPIC: Responder Analysis
Rationale: In some cases, authors attempt to dichotomize a continuous primary efficacy measure into “responders” and “non-responders.” This is discussed at length in another thread on this forum, but here are some scholarly references:
Snapinn SM, Jiang Q. Responder analyses and the assessment of a clinically relevant treatment effect. Trials 2007. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2164942/
Additional Requested Topics
Feel free to suggest additional topics or references below; we will revisit and update this list whenever practical.