Individual response

This formulation of the problem seems equivalent to the approach to statistical inference recommended by Seymour Geisser, which I discussed in this thread:

On the first two pages of Chapter 1 of *Predictive Inference: An Introduction*, he summarizes a number of your cautions in one lengthy paragraph (bolded sections are my emphasis):

> …This measurement error convention, often assumed to be justifiable from central limit theorem considerations … was seized upon and indiscriminately adopted for situations where it is dubious at best and erroneous at worst. … applications of this sort regularly occur … much more frequently in the softer sciences. The variation here is rarely of the measurement error variety. As a true physical description, the statistical model used is often inappropriate if we stress testing and estimation of “true” entities, the parameters. If these and other models are considered in their appropriate context they are potentially very useful, i.e. their appropriate use is as models that can yield approximations for prediction of further observables, presumed to be exchangeable in some sense with the process under scrutiny. Clearly hypothesis testing and estimation as stressed in almost all statistics books involve parameters. Hence this presumes the truth of the model and imparts an inappropriate existential meaning to an index or parameter. Model selection, contrariwise, is a preferable activity, because it consists of searching for a single model (or a mixture of several) that is adequate for the prediction of observables even though it is unlikely to be the true one. This is particularly appropriate in the softer areas of application, which are legion, where the so-called true explanatory model is virtually so complex as to be unattainable…

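To make that concrete for myself, here is a minimal sketch (my own illustration, not Geisser's) of model selection in the sense he describes: several admittedly false approximating models are compared by their out-of-sample predictive adequacy for future observables, rather than by testing whether any parameter is "true". The data-generating process, the candidate polynomial models, and the cross-validation scoring are all assumptions chosen purely for illustration.

```python
# Minimal sketch: choose a model by predictive adequacy, not by testing "true" parameters.
# The data-generating process and candidate models are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)  # the "true" model is not in the candidate set
X = x.reshape(-1, 1)

# Candidate approximating models: polynomials of increasing degree.
candidates = {d: make_pipeline(PolynomialFeatures(d), LinearRegression()) for d in (1, 2, 3, 5)}

# Score each candidate by out-of-sample predictive error (10-fold cross-validation).
for degree, model in candidates.items():
    mse = -cross_val_score(model, X, y, cv=10, scoring="neg_mean_squared_error").mean()
    print(f"degree {degree}: CV mean squared error = {mse:.3f}")

# None of these polynomials is the true model, but the one with the lowest predictive
# error is adequate for predicting further observables exchangeable with the sample.
```
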
Geisser later points out (page 3):

> As regards statistical prediction, the amount of structure one can reasonably infuse into a given problem could very well determine the inferential model, whether it be frequentist, fiducialist, likelihood, or Bayesian. Any one of them possesses the capacity for implementing the predictive approach, but only the Bayesian mode is always capable of producing probability distributions for prediction.

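The last sentence of that quotation can be stated compactly. Under a Bayesian model with parameter $\theta$ and observed data $y$, the posterior predictive distribution for a future observable $y^{*}$ integrates the parameter out, so a full probability distribution for prediction is always available (the display below is my own summary, not Geisser's notation):

$$
p(y^{*} \mid y) = \int p(y^{*} \mid \theta)\, p(\theta \mid y)\, d\theta
$$

Frequentist or likelihood approaches typically have to fall back on plug-in predictions $p(y^{*} \mid \hat{\theta})$ or purpose-built prediction intervals, which is how I read the asymmetry Geisser is pointing to.
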
Given that background, what am I to make of the criticisms of statisticians by causal inference proponents? It seems to me that none of their criticisms applies to either a Bayesian approach or the frequentist approach recommended by @f2harrell in *Regression Modeling Strategies*. At best, they criticize a practice of statistics that no one competent would advocate.