Reference Collection to push back against "Common Statistical Myths"

Note: This topic is a wiki, meaning that this main body of the topic can be edited by others. Use the Reply button only to post questions or comments about material contained in the body, or to suggest new statistical myths you’d like to see someone write about.


I am not claiming to be the leading authority on any or all of the things listed below, but several of us on Twitter have repeatedly floated the idea of creating a list of references that may be used to argue against some common statistical myths or no-nos.

I will start by posting in thread format, but others should feel free to chime in. However, it MAY be easier if the first post (or one of the first posts) is continually updated so that all references on a particular topic are indexed at the top. I am not sure of the best way to handle this - either I will try to periodically edit the first post to keep the references updated, or maybe Frank will have a better idea for how to structure and update this content.

I was hoping to organize this into a few key myths/topics. While I am happy to add any topic that authors think is important, the intent here is not to recreate an entire statistical textbook. I’m hoping to provide an easy-to-navigate list of references so that when we get one of the classic review comments like “the authors should add p-values to Table 1,” we have some rebuttal evidence that’s easy to find.

I’ve listed a few below to start. Please feel free to email, Twitter DM, or comment below. This will be a living ‘document’ so if there’s something you think is missing, or if I cite a paper that you feel has a fatal flaw and does not support its stated purpose, let me know. We’ll see how this goes.

Reference collection on P value and confidence interval myths

https://www.tandfonline.com/doi/full/10.1080/00031305.2016.1154108

https://www.tandfonline.com/doi/full/10.1080/00031305.2019.1583913

Are confidence intervals better termed “uncertainty intervals”? (PubMed)

Reverse-Bayes analysis of two common misinterpretations of significance tests

https://amstat.tandfonline.com/doi/full/10.1080/00031305.2018.1529625

P-Values in Table 1 of Randomized Trials

Rationale: In RCTs, it is a common belief that one should always present a table with p-values comparing the baseline characteristics of the randomized treatment groups. This is not a good idea, for the reasons explained in the references below.
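A minimal simulation sketch (my own toy example, not taken from the references): under proper randomization, any baseline difference between arms is due to chance by construction, so a Table 1 significance test is testing a null hypothesis that is already known to be true and flags “imbalance” only at its nominal error rate.

```python
# Sketch: simulate many two-arm RCTs and test a hypothetical baseline covariate ("age").
# Because randomization guarantees both arms are drawn from the same population,
# the Table 1 p-values are approximately uniform: ~5% are "significant" by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_per_arm = 5000, 100
pvals = []
for _ in range(n_trials):
    age_a = rng.normal(60, 10, n_per_arm)   # baseline covariate, arm A
    age_b = rng.normal(60, 10, n_per_arm)   # same distribution in arm B (randomized)
    pvals.append(stats.ttest_ind(age_a, age_b).pvalue)

print(f"Proportion of 'significant' baseline tests: {np.mean(np.array(pvals) < 0.05):.3f}")
```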

Covariate Adjustment in RCT

Rationale: Somewhat related to the above, many consumers of randomized trials believe that there is no need for any covariate adjustment in RCT analyses. While it is true that adjustment is not needed for an RCT to be valid, there are benefits to adjusting for baseline covariates that have strong relationships with the study outcome, as explained by the references below. If a reader/reviewer questions why you have chosen to adjust, these may prove helpful.
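A hedged sketch of the precision argument (simulated numbers and covariate names are mine, not from the references): in a two-arm trial with a strongly prognostic baseline covariate, the covariate-adjusted estimate of the treatment effect has a markedly smaller standard error than the unadjusted estimate, even though both are unbiased thanks to randomization.

```python
# Sketch: compare unadjusted vs covariate-adjusted analysis of a simulated RCT.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
treat = rng.integers(0, 2, n)                       # 1:1 randomization
x = rng.normal(0, 1, n)                             # hypothetical prognostic baseline covariate
y = 0.3 * treat + 1.5 * x + rng.normal(0, 1, n)     # true treatment effect = 0.3

unadj = sm.OLS(y, sm.add_constant(treat)).fit()
adj = sm.OLS(y, sm.add_constant(np.column_stack([treat, x]))).fit()

print(f"unadjusted estimate {unadj.params[1]:.3f}, SE {unadj.bse[1]:.3f}")
print(f"adjusted   estimate {adj.params[1]:.3f}, SE {adj.bse[1]:.3f}")
# The adjusted SE is much smaller because the covariate explains outcome heterogeneity.
```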

Analyzing “Change” Measures in RCTs

Rationale: Many authors and pharmaceutical clinical trialists make the mistake of analyzing change from baseline instead of making the raw follow-up measurements the primary outcomes, covariate-adjusted for baseline. Computing change scores requires many assumptions to hold (for more detail, see Frank’s blog post on this: Statistical Thinking - Statistical Errors in the Medical Literature). It is generally better to analyze the follow-up measurement as the outcome with a covariate adjustment for the baseline value, as this better matches the question of interest: for two patients with the same pre-trial value of the study outcome, one given treatment A and the other treatment B, will the patients tend to have different post-treatment values?
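A minimal simulation sketch (my own assumptions, not the blog post’s example): when the baseline-to-follow-up regression slope is well below 1, the change-score analysis is noisier than ANCOVA (follow-up as outcome, adjusted for baseline), because subtracting the baseline implicitly forces a slope of 1.

```python
# Sketch: change-from-baseline analysis vs ANCOVA in a simulated two-arm trial.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
treat = rng.integers(0, 2, n)
baseline = rng.normal(50, 10, n)
# follow-up depends on baseline with slope 0.5 (not 1), plus a true treatment effect of 3
followup = 25 + 0.5 * baseline + 3 * treat + rng.normal(0, 8, n)

# ANCOVA: follow-up ~ treatment + baseline
ancova = sm.OLS(followup, sm.add_constant(np.column_stack([treat, baseline]))).fit()
# change-score analysis: (follow-up - baseline) ~ treatment
change = sm.OLS(followup - baseline, sm.add_constant(treat)).fit()

print(f"ANCOVA       estimate {ancova.params[1]:.2f}, SE {ancova.bse[1]:.2f}")
print(f"change score estimate {change.params[1]:.2f}, SE {change.bse[1]:.2f}")
# Both are unbiased under randomization, but the ANCOVA standard error is smaller.
```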

Using Within-Group Tests in Parallel-Group Randomized Trials

Rationale: Researchers often analyze randomized trials and other comparative studies by performing separate analyses of change from baseline in each parallel group. Sometimes they incorrectly conclude that their study demonstrates a treatment effect because the within-group test for the treatment group yields a “significant” p-value, even though this ignores the control group entirely (what’s the purpose of having a control group if you’re not going to compare the treated group against it?).
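A toy illustration (simulated numbers of my own): if both arms improve over time (natural recovery, regression to the mean), a within-group paired test in the treated arm can easily be “significant” while the between-group comparison, the one the trial was designed for, shows no evidence of a treatment effect.

```python
# Sketch: within-group vs between-group tests in a parallel-group trial with no true treatment effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 60
natural_improvement = 5                                    # both arms improve by the same amount
pre_t = rng.normal(50, 10, n); post_t = pre_t + natural_improvement + rng.normal(0, 6, n)
pre_c = rng.normal(50, 10, n); post_c = pre_c + natural_improvement + rng.normal(0, 6, n)

within = stats.ttest_rel(post_t, pre_t)                    # within-group test, treated arm only
between = stats.ttest_ind(post_t - pre_t, post_c - pre_c)  # between-group comparison of changes

print(f"within-group p  = {within.pvalue:.4f}  (looks impressive, but ignores the control arm)")
print(f"between-group p = {between.pvalue:.4f}  (the comparison the trial was designed for)")
```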

Sample Size / Number of Variables for Regression Models

Rationale: It is common to see regression models with far too many variables included relative to the amount of data (as a reviewer, I’ll see papers that report a “risk score” that includes 20+ variables in a logistic regression model with ~200 patients and ~30 outcome events). A commonly cited rule of thumb is “10 events per variable” in logistic regression; the reality is more nuanced than any single number, but the rule may still function as a useful “BS test” at first glance.
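A back-of-the-envelope check using the hypothetical numbers in the example above (this is a crude screening heuristic, not a substitute for a proper sample-size justification):

```python
# Sketch: events per candidate variable for the hypothetical risk-score example above.
events = 30            # outcome events in the dataset
candidate_vars = 20    # candidate predictors considered for the risk score
epv = events / candidate_vars
print(f"Events per variable: {epv:.1f}")   # 1.5, far below even the crude '10 EPV' rule of thumb
```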

Stepwise Variable Selection (Don’t Do It!)

Rationale: Though stepwise selection procedures are taught in many introductory statistics courses as a way to make multivariable modeling easy and data-driven, statisticians generally dislike them for several reasons, many of which are explained in the reference below.
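As a separate toy illustration (my own simulation, not from the reference): run forward stepwise selection by p-value on an outcome that is pure noise. With a couple of dozen candidate predictors it is common for one or more noise variables to be “selected” as significant, which is one of several reasons the procedure is distrusted.

```python
# Sketch: forward stepwise selection by p-value applied to pure noise.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n, p = 200, 25
X = rng.normal(size=(n, p))          # 25 candidate predictors, all pure noise
y = rng.normal(size=n)               # outcome unrelated to every predictor

selected = []
while True:
    best_p, best_j = 1.0, None
    for j in range(p):
        if j in selected:
            continue
        fit = sm.OLS(y, sm.add_constant(X[:, selected + [j]])).fit()
        if fit.pvalues[-1] < best_p:                  # p-value of the newly entered variable
            best_p, best_j = fit.pvalues[-1], j
    if best_j is None or best_p > 0.05:
        break
    selected.append(best_j)

print(f"Noise predictors 'selected' at p < 0.05: {len(selected)}")
```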

Screening covariates to include in multivariable models with bivariable tests

Rationale: People sometimes decide to include variables in multivariable models only if they are “significant” predictors of the outcome when examined one at a time (i.e., crudely associated with the outcome). This is a bad idea, partly for the same reasons as stepwise regression (it is essentially a manual variant of stepwise selection) and partly because it ignores the joint structure of the covariates: a variable’s association with the outcome can look very different in isolation than when several variables are considered simultaneously.
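A contrived sketch (my own construction, not from the references): a predictor can show essentially no association with the outcome on its own yet be indispensable once a correlated covariate enters the model, so univariable screening can discard exactly the wrong variable.

```python
# Sketch: a variable that "fails" univariable screening but matters jointly.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 500
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(0, 0.5, n)              # strongly correlated with x1
y = 2 * x1 - 2 * x2 + rng.normal(0, 1, n)    # both predictors matter jointly

uni = sm.OLS(y, sm.add_constant(x1)).fit()                           # x1 screened alone
joint = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()  # x1 and x2 together

print(f"x1 screened alone:       p = {uni.pvalues[1]:.3f}")    # usually far from 'significant'
print(f"x1 with x2 in the model: p = {joint.pvalues[1]:.1e}")  # clearly needed once x2 is included
```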

Post-Hoc Power (Is Not Really A Thing)

Rationale: In studies that fail to yield “statistically significant” results, it is common for reviewers, or even editors, to ask the authors to include a post hoc power calculation. In such situations, editors would like to distinguish between true negatives and false negatives (concluding there is no effect when there actually is one, and the study was simply too small to detect it). However, reporting post-hoc power is nothing more than reporting the p-value a different way, and will therefore not answer the question editors want answered.
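A small sketch under a simplifying assumption of mine (a two-sided z-test with alpha = 0.05): “observed” post-hoc power is a deterministic, one-to-one function of the p-value, so it carries no information beyond the p-value itself; in particular, a result with p = 0.05 always has roughly 50% observed power.

```python
# Sketch: post-hoc ("observed") power as a function of the p-value alone (z-test approximation).
from scipy.stats import norm

def observed_power(p, alpha=0.05):
    """Power computed by plugging the observed effect size back into the power formula."""
    z_obs = norm.ppf(1 - p / 2)            # |z| implied by the two-sided p-value
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf(z_obs - z_crit) + norm.cdf(-z_obs - z_crit)

for p in [0.01, 0.05, 0.20, 0.50]:
    print(f"p = {p:0.2f}  ->  observed power = {observed_power(p):.2f}")
```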

Misunderstood “Normality” Assumptions

Rationale: One of the things that many people retain from an introductory statistics class is that the Normal distribution is central to everything, and they often assume that the data themselves must be normally distributed for ALL statistical procedures to work correctly. However, in many of the procedures and tests that we use, it is the normality of the error terms (or residuals) that matters, not the normality of the data points themselves.
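A toy sketch (my own simulated example): an outcome can look badly non-normal marginally simply because it mixes two groups with different means, while the residuals from the model that accounts for the group effect are, by construction, plain normal noise. It is the residuals that the normality assumption concerns.

```python
# Sketch: marginally non-normal outcome, perfectly normal residuals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
group = rng.integers(0, 2, 500)
y = 10 + 8 * group + rng.normal(0, 1, 500)     # strongly bimodal marginal distribution

# residuals after removing the group means (equivalent to fitting the two-group model)
resid = y - np.where(group == 1, y[group == 1].mean(), y[group == 0].mean())

print(f"normality test on raw y:      p = {stats.normaltest(y).pvalue:.2e}")
print(f"normality test on residuals:  p = {stats.normaltest(resid).pvalue:.2f}")
```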

Absence of Evidence is Not Evidence of Absence

Rationale: It’s also common to split results into “significant” (p < 0.05) and “not significant” (p ≥ 0.05); when the latter occurs, many interpret the phrase “no significant effect” as evidence that there is no effect, which does not follow (thanks to @davidcnorrismd for adding another reference below).

Inappropriately Splitting Continuous Variables Into Categorical Ones

Rationale: People often choose to split a continuous variable into dichotomized groups or a few bins (e.g., using quartiles to divide the data into four groups, then comparing the highest versus the lowest quartile). There are a few limited situations in which partitioning continuous variables into categories is defensible, but more often than not it is a bad idea, done simply because it is believed to be “easier” to perform or understand.
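A rough illustration with simulated data of my own (not from the references): dichotomizing a continuous predictor at the median discards information, which shows up directly as lost power relative to the analysis that keeps the variable continuous.

```python
# Sketch: power of a continuous-predictor analysis vs a median-split analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
reps = 2000
reject_cont, reject_split = 0, 0
for _ in range(reps):
    x = rng.normal(size=80)
    y = 0.3 * x + rng.normal(size=80)          # modest true association
    r, p_cont = stats.pearsonr(x, y)
    reject_cont += p_cont < 0.05
    hi = x > np.median(x)
    reject_split += stats.ttest_ind(y[hi], y[~hi]).pvalue < 0.05

print(f"power, continuous predictor: {reject_cont / reps:.2f}")
print(f"power, median split:         {reject_split / reps:.2f}")
```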

Use of normality tests before t tests

Rationale: This is commonly recommended to researchers who are not statisticians.

Example Citation: Ghasemi A, Zahediasl S. Normality tests for statistical analysis: a guide for non-statisticians. Int J Endocrinol Metab. 2012;10(2):486–489. doi:10.5812/ijem.3505

The assumption of normality needs to be checked for many statistical procedures, namely parametric tests, because their validity depends on it.

Problem: The nominal size and power of the unconditional t-test are changed by the combined (pre-test, then test) procedure in ways that are difficult to characterize.
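A side illustration of why the pre-test does not deliver what users hope for (the simulation settings are mine, not from the cited papers): the verdict of a normality pre-test is driven largely by sample size. The same mildly skewed population is often not flagged at n = 15 but is flagged almost every time at n = 500, even though the t-test’s robustness runs in the opposite direction (non-normality matters most when n is small and least when n is large).

```python
# Sketch: Shapiro-Wilk rejection rate for the same skewed population at two sample sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
reps = 2000
reject_small = np.mean([stats.shapiro(rng.lognormal(0, 0.5, 15)).pvalue < 0.05 for _ in range(reps)])
reject_large = np.mean([stats.shapiro(rng.lognormal(0, 0.5, 500)).pvalue < 0.05 for _ in range(reps)])

print(f"Shapiro-Wilk rejection rate, n = 15:  {reject_small:.0%}")
print(f"Shapiro-Wilk rejection rate, n = 500: {reject_large:.0%}")
```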

I² in meta-analysis doesn’t refer to an absolute measure of heterogeneity

Rationale: When reporting heterogeneity results in a meta-analysis, the value of I² is often misinterpreted and treated as an absolute measure of heterogeneity when in fact it is not.
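A short worked relation may make the distinction concrete (standard definitions from Higgins & Thompson, not something stated in the original post): with Cochran’s Q computed over k studies,

```latex
I^2 = \max\!\left(0,\ \frac{Q - (k - 1)}{Q}\right) \times 100\%,
\qquad
I^2 \approx \frac{\tau^2}{\tau^2 + \tilde{\sigma}^2}
```

where τ² is the between-study variance (an absolute measure of heterogeneity) and σ̃² is a “typical” within-study sampling variance. Because σ̃² shrinks as the included studies become larger and more precise, the same absolute heterogeneity τ² yields a larger I²; I² is therefore a relative quantity, not an absolute one.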

Number Needed to Treat (NNT)

Propensity-Score Matching - Not Always As Good As It Seems

Rationale: Conventional covariate adjustment is sufficient in most cases with an adequate sample size, and propensity-score matching is not necessarily superior.

Responder Analysis

Rationale: In some cases, authors attempt to dichotomize a continuous primary efficacy measure into “responders” and “non-responders." This is discussed at length in another thread on this forum, but here are some scholarly references:

Significance testing in pilot studies

Rationale: Authors often perform null-hypothesis testing in pilot studies and report p-values. However, the purpose of pilot studies is to identify issues in all aspects of the study, ranging from recruitment to data management and analysis. Pilot studies are not usually powered for inferential testing. If testing is done, p-values should not be emphasized and confidence intervals should be reported. Results on potential outcomes should be regarded as descriptive. A CONSORT extension for pilot and feasibility studies exists, and is a useful reference to include in submissions and cover letters. Editors may not be aware of this extension of CONSORT.

P-values do not “trend towards significance”

Rationale: It is common for investigators to observe a “non-significant” result and say things like the result was “trending towards significance”, suggesting that had they only been able to collect more data, surely the result would have been significant. This misunderstands the volatility of p-values when there is no effect of the treatment under test. Simply put, p-values don’t “trend”, and “almost significant” results are not guaranteed to become significant with more data - far from it.
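A toy simulation under assumptions of my own (no true effect; the sample size is doubled after an interim look): start from datasets whose interim p-value looks like it is “trending” (0.05 < p < 0.10), then collect as much data again. Only a minority of these “trends” end up significant, because under the null the p-value simply wanders rather than converging toward significance.

```python
# Sketch: what happens to "trending" p-values when more data are collected and there is no effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
trending, became_sig = 0, 0
while trending < 1000:
    a1, b1 = rng.normal(0, 1, 50), rng.normal(0, 1, 50)        # interim data, no true effect
    if not (0.05 < stats.ttest_ind(a1, b1).pvalue < 0.10):
        continue                                               # keep only "trending" interim results
    trending += 1
    a2 = np.concatenate([a1, rng.normal(0, 1, 50)])            # double the sample size
    b2 = np.concatenate([b1, rng.normal(0, 1, 50)])
    became_sig += stats.ttest_ind(a2, b2).pvalue < 0.05

print(f"'Trending' results that became significant after doubling n: {became_sig / trending:.0%}")
```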

Additional Requested Topics

Feel free to add your own suggestions here; we are happy to revisit and update whenever practical.

63 Likes

Really good initiative Andrew. Sorry for the shameless self-promotion but I do have a thread with some misconceptions and references that might be helpful for getting this list together

8 Likes

Wow, this is incredible Andrew. In a moment I’m going to make my first attempt at converting a topic to a wiki topic that anyone can edit. Let’s see if that’s a good approach for growing this resource, which you so nicely started.

Update: it’s now a wiki. Apparently you click on a small orange pencil symbol inside a small orange box to edit the topic. Then you’ll see another option to Edit Wiki. Perhaps others will reply here with more pointers.

5 Likes

Would this be the appropriate thread to add references on the issues related to using parametric assumptions on ordinal data? This has always bothered my mathematical conscience.

Prof. Harrell had posted a great link to a recent paper in another thread:

A draft copy can be found here (I assume it is OK to post a link to the draft):

Analyzing Ordinal Data with Metric Models: What Could Possibly Go Wrong?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2692323

2 Likes

What is the scope of material that should be contributed? The theme of this site in general and this list so far is med stats / clinical trials / epi. Would materials from other disciplines be appreciated, or would they be redundant or off topic; e.g., there are publications on post hoc power in subfields of biology (ecology, evolution, and animal behavior) that present the issues from that field’s perspective, but have nothing to do with the core themes of this site.

1 Like

I’ll leave that for Frank to answer, as this website is his brainchild. I don’t think it’s unreasonable to present a paper from a different field that still addresses a core statistical topic (the examples you cite are good ones). Here is the site description from the home page, so perhaps this can be our guide, though of course there is some judgement in what exactly fits into this:

This is a place where statisticians, epidemiologists, informaticists, machine learning practitioners, and other research methodologists communicate with themselves and with clinical, translational, and health services researchers to discuss issues related to data: research methods, quantitative methods, study design, measurement, statistical analysis, interpretation of data and statistical results, clinical trials, journal articles, statistical graphics, causal inference, medical decision making, and more.

2 Likes

Amazing initiative. It will be really great to break down the misconceptions and myths that have been plaguing the field for a while, especially in applied contexts. Thanks for the effort of putting this wiki page up with references!

2 Likes

What are your thoughts on also including informative and well written blogs and shiny apps?

1 Like

I’ve got no problem with it. I was initially trying to prioritize scientific publications if only because when you’re appealing to an editor they might be more inclined to take that seriously versus a blog (fairly or not…) but there are certainly some excellent blog posts that may be useful here as well.

5 Likes

Would Gelman and Stern’s point in this paper be appropriate to add to the list?

The Difference Between “Significant” and “Not Significant” is not Itself Statistically Significant
https://www.tandfonline.com/doi/abs/10.1198/000313006X152649

5 Likes

This is a great resource. Thank you.

2 other prevalent myths come to mind:

  1. Matching - or not all that has intuitive appeal is actually good (or worthy)
  2. The unbearable lightness of NNTs
3 Likes

Some excellent suggestions in the last few posts - please feel free to add your favorites on those (after all, I want this to be crowd-sourced, not just my favorites!) I’ll also try to add a few when I get a chance.

Great initiative. Look forward to seeing this build.

I have some quibbles about the advice on covariate adjustment in RCTs. A classic paper is Pocock et al, Subgroup analysis and other (mis)uses of baseline data in clinical trials. However, things have moved on a little since then. But I’ll start from the beginning.

The reason you do not adjust an RCT is because you have randomised. If you have not introduced any bias through the conduct of the trial, then any difference between the groups must be due to a treatment effect or chance. The p-value accounts for those instances where you got unlucky. If you’re going to start adjusting, why randomise?

It is fine, of course, to do exploratory analysis beyond the primary endpoint but emphasis should always be on the unadjusted result, because you randomised. You have the luxury of actually random samples, you don’t need to resort to tricks from other designs.

The big problem with adjusting results is the sheer scope for finding models that give you the answer you want. Measure two dozen baseline characteristics, chuck them all in and see what comes out in the wash (see also: subgroups, and pre-specification as an unreliable marker of biological plausibility).

Pre-specification could help, but why would you pre-specify a baseline imbalance? If you knew it was an important prognostic factor, why didn’t you stratify the randomisation? Why did you leave yourself scope to cheat when you could have designed out the problem?

And that brings us onto a more recent development on covariate adjustment. The key to an RCT is to “analyse as randomised” (see also: intention-to-treat). So if you stratified the randomisation, you should stratify the analysis by exactly the same factors. The unadjusted p-value is accounting for a lot of possible outcomes that you made impossible by design. So you are fully entitled to take account of that (but it may be wise to report the unadjusted results also, and/or stick in a reference for the statto reviewer).

There’s a nice empirical reference for that last point: Reporting and analysis of trials using stratified randomisation in leading medical journals: review and reanalysis

And the EMA guideline (https://www.ema.europa.eu/en/documents/scientific-guideline/guideline-adjustment-baseline-covariates-clinical-trials_en.pdf) may also be of interest (aimed at pharma and device manufacturers).

Thanks again for this. It’s a really useful initiative.

I have to strongly disagree with that. Much has been written about this. Briefly, you have to covariate adjust in RCTs to make the most out of the data, i.e., to get the best power and precision. It’s all about explaining explainable outcome heterogeneity, and nothing to do with balance. And concerning stratification, Stephen Senn has shown repeatedly that the correct philosophy is to pose the model and then randomize consistent with that, not the other way around as you have suggested.

11 Likes

Does anyone think a section on using normality tests before doing a t-test is needed? I see it frequently in the rehabilitation literature.

Example:

I looked up the paper and this is what they did:

The Kolmogorov-Smirnov-Lilliefors test was applied to evaluate the normal distribution for each investigated group. To detect the presence of outliers, the Grubb’s test was performed. Levene’s test was used to test for variance homogeneity. For normally distributed data and variance homogeneity, Student’s t-test was applied to access gender-specific differences and a one-way analysis of variance (ANOVA) followed by post hoc Scheffé’s test to analyze differences between cohorts. The subjects were later grouped based on the variability in the sacrum orientation and lumbar lordosis during different standing phases. Due to small size of the individual sub-groups, the non-parametric Friedman test was performed to assess the differences between repeated measurements in the subgroups, followed by post hoc Nemenyi test. Additionally, a regression analyses was applied and the coefficient of determination (R2) was calculated. P-values of <0.05 were considered statistically significant. The statistical analyzes were performed with R 3.2.5 (R-Core-Team, 2016).

In defense of the authors – their hypothesis was that there would be greater variance in the low back pain group vs asymptomatic participants, so some of these methods were understandable.

Their study found a large amount of variability in sacral orientation and lordotic curvature.

I thought the following stack exchange threads were appropriate:

Anyone have more scholarly references?

2 Likes

I’m aware of at least four papers on this topic:

  • Rasch D, Kubinger KD, Moder K (2011): The two-sample t test: pre-testing its assumptions does not pay off. Statistical Papers 52(1): 219-231 (link)
  • Rochon J, Kieser M (2010): A closer look at the effect of preliminary goodness‐of‐fit testing for normality for the one‐sample t‐test. Br J Math Stat Psychol 64: 410-426 (link)
  • Rochon J, Gondan M, Kieser M (2012): To test or not to test: Preliminary assessment of normality when comparing two independent samples. BMC Med Res Methodol 12: 81 (link)
  • Schoder V, Himmelmann A, Wilhelm KP (2006): Preliminary testing for normality: some statistical aspects of a common concept. Clin Exp Dermatol 31: 757-761 (link)
5 Likes

Great thread. Thanks for listing the attendant articles.

1 Like

I think this would be a fine addition; though I think it is somewhat related to a topic already listed, Misunderstood “Normality” Assumptions. You and @COOLSerdash should feel free to edit and add things to this section, including the “Rationale” at the top as well as the references. As noted in a reply above, while scholarly references are preferred due to the goal of this resource, well-written blog posts also are welcome as they may provide additional useful ammunition for authors in their efforts to reply to reviewers and/or editors.

1 Like

IIRC, Box (or maybe Rozeboom?) said testing for some of these assumptions was like putting a rowboat out on the ocean to see if it’s calm enough for the Queen Mary.