The minimal clinically important difference (MCID) is frequently used to judge the value of medical treatments as well as to compute power in a frequentist design. It is reasonable to compute the sample size of a randomized trial so that there is a high probability (e.g., 0.9) that the null hypothesis of no treatment effect can be rejected when the true effect equals the MCID.

Bayesian effectiveness assessment is quite different from frequentist assessment. One often captures evidence for benefit by computing the posterior probability that the treatment effect \Delta is in the right direction given the data and the prior, e.g., \Pr(\Delta > 0 | \text{data, prior}). For some studies, one may want evidence that the treatment effect **exceeds** the MCID, so it is reasonable to compute \Pr(\Delta > \delta | \text{data, prior}) where MCID=\delta. But what if a standard prior is used, the sample size is huge, and the posterior mean/median/mode for \Delta equals \delta? The posterior probability of \Delta > \delta will be about 0.5, whereas many clinicians would consider the treatment worthwhile.
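A minimal numerical sketch of this point, using made-up numbers: suppose the MCID is \delta = 2 on some outcome scale, and a very large trial yields an approximately normal posterior for \Delta centered exactly at \delta with a tiny standard deviation. The posterior mean, SD, and MCID below are all assumed values for illustration only.

```python
from scipy.stats import norm

delta = 2.0                      # MCID (assumed value for illustration)
post_mean, post_sd = 2.0, 0.1    # assumed approximately normal posterior for Delta

# Evidence that Delta exceeds the MCID: exactly 0.5 when the posterior
# is centered at delta, no matter how large the sample size
p_exceeds_mcid = 1 - norm.cdf(delta, loc=post_mean, scale=post_sd)
print(round(p_exceeds_mcid, 3))   # 0.5: unimpressive despite huge n

# Evidence that Delta is merely in the right direction
p_right_direction = 1 - norm.cdf(0, loc=post_mean, scale=post_sd)
print(round(p_right_direction, 3))  # essentially 1.0
```

The contrast between the two probabilities is the crux: the data strongly support a benefit, yet \Pr(\Delta > \delta) alone makes the result look equivocal.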

For Bayesian power calculations, it is often best to allow for uncertainty in the MCID by examining evidence over a distribution of MCID values. But it may be reasonable to desire a probability of at least 0.9 (Bayesian power) that the posterior probability will exceed a decision threshold (e.g., 0.95) were \Delta to exactly equal \delta.
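This kind of Bayesian power is easy to estimate by simulation. A sketch under simplifying assumptions: a two-arm trial with normal outcomes, known outcome SD \sigma, and a flat prior on \Delta, so the posterior for \Delta given the observed mean difference is normal. The values of \delta, \sigma, n, and the decision threshold are all hypothetical.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Assumed design: two-arm trial, normal outcomes with known SD sigma,
# flat prior on Delta, so posterior(Delta | dbar) = Normal(dbar, sigma*sqrt(2/n))
delta, sigma, n = 2.0, 10.0, 300   # MCID, outcome SD, per-arm n (all assumed)
threshold = 0.95                   # posterior-probability decision threshold
nsim = 5000                        # number of simulated trials

se = sigma * np.sqrt(2 / n)
# Simulate observed mean differences when the true effect exactly equals the MCID
dbar = rng.normal(delta, se, nsim)
# Posterior probability that Delta > 0 for each simulated trial
post_prob = 1 - norm.cdf(0, loc=dbar, scale=se)
# Bayesian power: fraction of trials whose posterior probability clears the threshold
bayes_power = np.mean(post_prob > threshold)
print(round(bayes_power, 3))
```

One would then increase n (or change the decision threshold) until the estimated Bayesian power reaches the desired level such as 0.9.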

Returning to evidence for treatment effectiveness, it would be reasonable to compute \Pr(\Delta > \epsilon) where \epsilon < \delta. The value \epsilon could be called a threshold for a trivial treatment effect (TTTE) or a minimally noticeable (to the patient) or measurable treatment effect (MNTE). It may be reasonable to set \epsilon = \frac{\delta}{k} for some k such as 2 or 3. When the treatment effect is stated as an effect ratio (e.g., a hazard or odds ratio), the division by k should be done on the log ratio scale.
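To make the log-scale division concrete, here is a tiny hypothetical example: an MCID expressed as a hazard ratio of 0.8, with k = 2 (both values assumed for illustration).

```python
import math

# Hypothetical MCID stated as a hazard ratio, with k = 2
hr_mcid = 0.8
k = 2

# Divide by k on the log ratio scale, then transform back:
# exp(log(0.8) / 2) = sqrt(0.8) ~= 0.894
hr_epsilon = math.exp(math.log(hr_mcid) / k)
print(round(hr_epsilon, 3))   # 0.894
```

Note that naively computing 0.8 / 2 = 0.4 on the ratio scale would yield a far more extreme hazard ratio than the MCID itself, which is why the division belongs on the log scale.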

Once \epsilon is chosen, one may compute \Pr(\Delta > \epsilon) to measure evidence for a non-trivial treatment effect. In this way, an observed effect equal to the MCID \delta will not result in an unimpressive posterior probability.
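Continuing the earlier hypothetical posterior (normal, centered at the MCID \delta = 2 with SD 0.1, all assumed numbers), taking \epsilon = \delta / 2 restores an impressive posterior probability:

```python
from scipy.stats import norm

delta = 2.0                      # MCID (assumed)
epsilon = delta / 2              # TTTE/MNTE with k = 2 (assumed choice)
post_mean, post_sd = 2.0, 0.1    # assumed posterior for Delta, centered at the MCID

# Evidence for a non-trivial effect: near 1, unlike Pr(Delta > delta) = 0.5
p_nontrivial = 1 - norm.cdf(epsilon, loc=post_mean, scale=post_sd)
print(round(p_nontrivial, 4))
```

With the posterior centered at \delta, \Pr(\Delta > \delta) is only 0.5, while \Pr(\Delta > \epsilon) is essentially 1, matching the clinical intuition that such a treatment effect is worthwhile.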

I hope that we can formalize these ideas, coming up with general definitions for MCID and TTTE/MNTE that are useful when using Bayes to get direct evidence for treatment benefit.

As an aside, it would be a mistake to couch any of these measures in terms of how much patients improve from baseline. They should be couched in terms of what a parallel-group design does: estimate the difference in outcomes were the patient to get treatment A vs. were she to get treatment B.