Subgroup analysis and meta-regression in meta-analysis

The variance reduction relative to the single arcsine transformation will be very small in the vast majority of cases compared with other sources of error. This is a case of taking normal theory (which is only an approximation) too literally. The possibility of a combined estimate that is not even within the range of the data means the analyst must check whether the implied sample size used in the Freeman–Tukey back-transformation is sensible.
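
To make the "implied sample size" issue concrete, here is a minimal Python sketch (my own illustration, not taken from any paper) using the Freeman–Tukey double arcsine transform t = ½[arcsin√(x/(n+1)) + arcsin√((x+1)/(n+1))] with approximate variance 1/(4n+2), pooled with fixed-effect inverse-variance weights and back-transformed via Miller's (1978) closed-form inverse, which forces a choice of a single n:

```python
import numpy as np

def ft_double_arcsine(x, n):
    # Freeman–Tukey double arcsine transform of x events out of n.
    return 0.5 * (np.arcsin(np.sqrt(x / (n + 1.0)))
                  + np.arcsin(np.sqrt((x + 1.0) / (n + 1.0))))

def miller_inverse(t, n):
    # Miller's (1978) closed-form back-transformation; note that it
    # requires a single "sample size" n for the pooled estimate.
    s = np.sin(2.0 * t)
    return 0.5 * (1.0 - np.sign(np.cos(2.0 * t))
                  * np.sqrt(1.0 - (s + (s - 1.0 / s) / n) ** 2))

# Two hypothetical studies with very different sizes.
x = np.array([1.0, 50.0])     # events
n = np.array([10.0, 1000.0])  # sample sizes

t = ft_double_arcsine(x, n)
w = 4.0 * n + 2.0             # inverse of the approximate variance 1/(4n+2)
t_pooled = np.sum(w * t) / np.sum(w)

# The back-transformed estimate depends heavily on which n is plugged in.
n_harmonic = len(n) / np.sum(1.0 / n)    # a common convention
n_implied = (np.sum(w) - 2.0) / 4.0      # the n implied by the pooled variance

print("observed proportions:", x / n)
print("back-transform, harmonic-mean n:", miller_inverse(t_pooled, n_harmonic))
print("back-transform, implied n:      ", miller_inverse(t_pooled, n_implied))
# With these numbers the harmonic-mean choice yields an estimate below
# both observed proportions, i.e. outside the range of the data.
```

The point is not these particular numbers but that the back-transformed answer moves materially with an essentially arbitrary choice of n.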

On simple minimax grounds, the FTT fails to dominate the single arcsine, since there exist cases where the latter is better. If errors are weighted, a small chance of a large, embarrassing error with the FTT doesn’t seem worth the effort of using it, especially since the notion of a single “sample size” for multiple proportions has no theoretical justification.

Röver and Friede produce an example in which the FTT reverses the order of two data points on the combination scale. This reminds me of a problem I noted with the use of parametric models on ordinal data:

> The early critics of parametric models on ordinal data noted that arbitrary scale transformations could change the observed sign of the effect…
> The implication is that no information is communicated by parametric models on ordinal data.

I don’t think this method is as bad as that, but I see no reason to use something that has a small chance of reversing the ordering of the data points, which destroys information. The sketch below illustrates such a reversal.
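
Here is a small numerical sketch of the kind of reversal Röver and Friede describe (my own numbers, not their published example): because the transform depends on n as well as on x/n, a study with the smaller observed proportion can land higher on the FTT scale.

```python
import numpy as np

def ft_double_arcsine(x, n):
    # Freeman–Tukey double arcsine transform of x events out of n.
    return 0.5 * (np.arcsin(np.sqrt(x / (n + 1.0)))
                  + np.arcsin(np.sqrt((x + 1.0) / (n + 1.0))))

p_a = 0.0 / 10.0      # study A: 0/10   -> proportion 0.000
p_b = 1.0 / 1000.0    # study B: 1/1000 -> proportion 0.001

t_a = ft_double_arcsine(0.0, 10.0)
t_b = ft_double_arcsine(1.0, 1000.0)

print(p_a < p_b)   # True:  A has the smaller observed proportion
print(t_a < t_b)   # False: yet A is *larger* on the FTT scale
```

The single arcsine, arcsin√(x/n), is monotone in x/n and cannot do this; the reversal comes entirely from the n-dependence of the double transform.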
