Article: Going from evidence to recommendations: Can GRADE get us there?

I thought the following was worth reading:

Mercuri M, Baigrie B, Upshur REG. Going from evidence to recommendations: Can GRADE get us there? J Eval Clin Pract. 2018;24:1232-1239.

From the abstract:

In this paper, we reveal several issues with the underlying logic of GRADE that warrant further discussion. First, the definitions of the “grades of evidence” provided by GRADE, while explicit, are functionally vague. Second, the “criteria for assigning grade of evidence” is seemingly arbitrary and arguably logically incoherent. Finally, the GRADE method is unclear on how to integrate evidence grades with other important factors, such as patient preferences, and trade-offs between costs, benefits, and harms when proposing a clinical practice recommendation…It is our view that the issues presented in this paper undermine GRADE’s justificatory scheme, thereby limiting the usefulness of GRADE as a tool for developing clinical recommendations.

I find the following things very strange:

  1. Rather than consulting the experts on decision making under uncertainty (statisticians) and using formal decision-theoretic tools, “evidence based medicine” invented all sorts of heuristics that do not withstand close examination.

  2. Other fields look to medicine as a model of “evidence based decision making.”
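To make point 1 concrete, here is a minimal sketch of what a formal decision-theoretic tool looks like: instead of assigning a qualitative evidence grade, one takes a (posterior) distribution over the treatment effect and picks the action with the lowest expected loss. All distributions and loss functions below are illustrative assumptions, not from any cited paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior over the treatment effect, e.g. from a
# Bayesian analysis of trial data (numbers are made up).
effect_draws = rng.normal(loc=0.3, scale=0.5, size=50_000)

# Losses for each action as a function of the true effect:
# treating is costly only when the effect is negative (harm);
# not treating is costly only when the effect is positive (forgone benefit).
def loss_treat(effect):
    return np.where(effect > 0, 0.0, -effect)

def loss_no_treat(effect):
    return np.where(effect > 0, effect, 0.0)

expected_loss = {
    "treat": loss_treat(effect_draws).mean(),
    "do not treat": loss_no_treat(effect_draws).mean(),
}

# The recommendation is the action minimizing expected loss --
# a quantitative answer, with no evidence grade required.
recommendation = min(expected_loss, key=expected_loss.get)
print(recommendation, expected_loss)
```

The point is not the specific numbers but the mechanics: uncertainty enters through the distribution of draws, and the recommendation falls out of the loss comparison rather than from a heuristic grading scheme.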

Contrast the GRADE approach with this decision analysis using value of information:

Claxton, K. and Posnett, J. (1996), An economic approach to clinical trial design and research priority-setting. Health Econ., 5: 513-524. (link)

This paper presents a decision-analytic approach to trial design which takes explicit account of the costs of sampling, the benefits of sample information and the decision rules of cost-effectiveness analysis. It also provides a consistent framework for setting priorities in research funding and establishes a set of screens (or hurdles) to evaluate the potential cost-effectiveness of research proposals. The framework permits research priority setting based explicitly on the budget constraint faced by clinical practitioners and on the information available prior to prospective research. It demonstrates the link between the value of clinical research and the budgetary restrictions on service provision, and it provides practical tools to establish the optimal allocation of resources between areas of clinical research or between service provision and research.
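The "benefits of sample information" idea in the abstract can be illustrated with the standard expected value of perfect information (EVPI) calculation: the gap between deciding with current information and deciding with perfect knowledge of the true state. The distributions and monetary values below are invented for illustration and are not taken from Claxton and Posnett.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical net benefit (in monetary units) of two options, with
# uncertainty about the true effects represented by simple distributions.
nb_standard = rng.normal(loc=1000, scale=200, size=n)  # current care
nb_new      = rng.normal(loc=1100, scale=400, size=n)  # new treatment

# Value of deciding now: pick the option with the highest expected
# net benefit given current information.
value_current_info = max(nb_standard.mean(), nb_new.mean())

# Value with perfect information: in each possible state of the world
# we could pick the better option, so average the per-draw maximum.
value_perfect_info = np.maximum(nb_standard, nb_new).mean()

# EVPI: an upper bound on what further research (e.g. a trial) can be
# worth per patient, to compare against the cost of sampling.
evpi = value_perfect_info - value_current_info
print(f"EVPI per patient: {evpi:.1f}")
```

Multiplying a per-patient EVPI by the affected population gives the kind of research-priority screen the abstract describes: research whose cost exceeds the population EVPI cannot be worth funding.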


An empirical study of “evidence hierarchies”

The limited predictive validity of the EPC (Evidence-based Practice Center) approach to GRADE seems to reflect a mismatch between expected and observed changes in treatment effects as bodies of evidence advance from insufficient to high strength of evidence (SOE). In addition, many low or insufficient grades appear to be too strict.
