Standardized mean difference on very small dataset

Hello everyone,

I am currently working on a meta-analysis for my internship.

I want to know whether the difference observed between the bioaccessibility of As in the raw product (control) and in the cooked product (treatment) is consistent across articles. My first idea was to calculate a Cohen's d for each cooking treatment in each article (raw vs grilled, raw vs boiled, ...) and then to produce a forest plot for each of these cooking treatments.

The problem is that I'm limited by the structure and the scarcity of my dataset: each bioaccessibility mean was obtained from only a triplicate of measurements (n = 3).

So, to summarize: my Cohen's d is computed from the difference between means, each of which comes from a triplicate.
From a statistical point of view, I'm not sure how reliable this calculation is. What do you think?
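For what it's worth, with n = 3 per group the standard advice is to at least apply Hedges' small-sample correction to Cohen's d, since the uncorrected d is badly biased at such sizes. A minimal sketch (the means and SDs below are hypothetical, just to show the magnitude of the correction):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Cohen's d with Hedges' small-sample correction (g), plus its variance."""
    df = n1 + n2 - 2
    # pooled standard deviation
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    d = (m1 - m2) / sp
    # Hedges' correction factor J; with n1 = n2 = 3, J = 0.8 (a 20% shrinkage!)
    j = 1 - 3 / (4 * df - 1)
    g = j * d
    # large-sample variance of g
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * df))
    return g, var_g

# hypothetical bioaccessibility means/SDs (% As), triplicates
g, var_g = hedges_g(62.0, 4.0, 3, 48.0, 5.0, 3)
print(round(g, 2), round(math.sqrt(var_g), 2))  # prints: 2.47 1.09
```

Note how large both the correction (J = 0.8) and the standard error are: that is the reliability problem you are sensing, made explicit.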

Thank you for your answers.


Think carefully about using the standardized mean difference, as it confounds the objective effect of the intervention with the design of the study: the within-study standard deviation sits in the denominator, so differences in measurement precision across articles change the effect size even when the true effect is identical. Heterogeneity is always a critical factor in the conclusions you can draw from retrospective meta-analyses.

If your measurement has a natural unit (and bioaccessibility, as a percentage, arguably has one), it is generally agreed that aggregating studies on the raw, unstandardized mean difference in that unit is a valid approach.
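As a sketch of what that looks like: compute the raw (cooked − raw) difference and its sampling variance per study, then pool by inverse-variance weighting. The per-study numbers below are hypothetical; this is a plain fixed-effect pool, and you would likely want a random-effects model in practice:

```python
import math

def mean_diff(m_t, s_t, n_t, m_c, s_c, n_c):
    """Raw mean difference (treatment - control) and its sampling variance."""
    md = m_t - m_c
    var = s_t**2 / n_t + s_c**2 / n_c
    return md, var

def fixed_effect_pool(effects):
    """Inverse-variance (fixed-effect) pooled estimate and standard error."""
    w = [1.0 / v for _, v in effects]
    pooled = sum(wi * e for wi, (e, _) in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return pooled, se

# hypothetical per-study (cooked - raw) bioaccessibility data, % As, n = 3 each
studies = [mean_diff(62, 4, 3, 48, 5, 3),
           mean_diff(55, 6, 3, 47, 6, 3),
           mean_diff(60, 3, 3, 50, 4, 3)]
pooled, se = fixed_effect_pool(studies)
print(round(pooled, 1), round(se, 1))  # prints: 10.9 2.1
```

The pooled effect then stays in percentage points of bioaccessibility, which is directly interpretable, instead of being rescaled by each study's (very noisy, n = 3) standard deviation.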

You would not know this from the easily accessible literature on how to do a meta-analysis. I’ve collected a lot of papers in a separate thread, but the most important ones are described here: