This sounds perfectly valid (assuming you can defend the claim that it is a good enough control arm). The intention of any such study would, in "my" framework, still be clearly causal, and calling it association would in my understanding only muddy the waters.
Do you agree with the notion that there’s no association for its own sake and everything is either descriptive/predictive/causal?
I don’t quite understand the history of the drug you’re describing. If some patients given the drug lived much longer than all historical patients with the same disease, and this is why it was granted accelerated approval (even without support from typically designed RCTs), then why would the same drug be withdrawn from the market for safety-related reasons? Wouldn’t patients who know they are certain to die in a short time without treatment be willing to accept treatment with a risky but possibly efficacious drug? The fact that it was withdrawn voluntarily suggests that the basis for the approval was suspect…
I mean “predict” not in the sense of wanting to predict future observations but more in the sense of estimation.
In the provided paper by Kezios, she differentiates thoroughly between causal (also “active prediction”) and prediction (also “passive prediction”).
I read the introduction and Reflections as an agreement with my view.
Would what I call descriptive fall under passive prediction?
What do you think about the idea of mentally downgrading everything that only talks about association to descriptive? And would you push back against association studies that claim their goal was to assess “the association between X1 and Y (and then some adjustment set),” given they’re likely misaligned?
The Kezios paper gives you the empirical foundation: among studies framed in associational language with a specific exposure-disease relationship of primary interest — her “seemingly causal” category — full alignment of goal, methods, and interpretation occurred in 4% of cases. The associational framing is not a neutral description of the study’s epistemic position. It is a framing that reliably predicts methodological misalignment, outcome-focused variable selection, and coefficient over-interpretation.
The specific target of the pushback should be the adjustment set claim. “We examined the association between X and Y adjusting for Z” contains an implicit causal structure. Adjustment is not a neutral act. It is a causal operation that changes what the estimate means — conditioning on a confounder removes a backdoor path, conditioning on a mediator blocks the effect of interest, conditioning on a collider opens a new non-causal path. You cannot perform adjustment without implicitly committing to a causal model of the relationships among X, Y, and Z. The association framing allows authors to perform this causal operation while avoiding accountability for the causal assumptions. That is the lack of accountability Kezios is describing.
So the pushback is not “you shouldn’t have adjusted” — it is “you adjusted, which means you had a causal model in mind, and you are obligated to show it.” If they cannot produce a DAG or a principled defense of the adjustment set in causal terms, the adjustment should be removed and the finding reported as a crude association, or the study should be reframed explicitly as causal and held to causal standards. Those are the two honest options. The current middle ground (adjust without justification, report as association, interpret as causal) is what I think she pushes against.
A paper that says “we found that X was independently associated with Y after adjustment for Z” and then recommends management changes based on that finding has silently upgraded itself from descriptive to causal somewhere between the Results and the Discussion.
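The claim that the same act of conditioning on Z can remove bias or create it, depending on Z’s causal role, is easy to verify by simulation. Below is a minimal sketch (hypothetical variables, numpy only, linear models with a true X→Y effect of zero in both scenarios): adjusting for a confounder recovers the null, while adjusting for a collider manufactures an association out of nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def slope(x, y):
    """OLS slope of y on x (single regressor; intercept handled by centering)."""
    xc = x - x.mean()
    return float(xc @ (y - y.mean()) / (xc @ xc))

def adjusted_slope(x, y, z):
    """Coefficient on x when y is regressed on [1, x, z]."""
    X = np.column_stack([np.ones_like(x), x, z])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(beta[1])

# Scenario 1: Z is a confounder (Z -> X, Z -> Y); true X -> Y effect is 0.
z = rng.normal(size=n)
x = z + rng.normal(size=n)
y = z + rng.normal(size=n)
crude_conf, adj_conf = slope(x, y), adjusted_slope(x, y, z)
print(crude_conf, adj_conf)  # ~0.5 (biased), ~0.0 (backdoor path removed)

# Scenario 2: Z is a collider (X -> Z <- Y); true X -> Y effect is again 0.
x = rng.normal(size=n)
y = rng.normal(size=n)
z = x + y + rng.normal(size=n)
crude_coll, adj_coll = slope(x, y), adjusted_slope(x, y, z)
print(crude_coll, adj_coll)  # ~0.0 (unbiased), ~-0.5 (non-causal path opened)
```

The code cannot tell the two scenarios apart from the data alone — only the assumed causal structure distinguishes them, which is exactly why the adjustment set needs an explicit justification.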
I don’t see adjustment as solely a causal action.
Yes, I didn’t word that accurately; I was primarily extrapolating from the paper. Adjustment has roles in descriptive inference, improving precision by accounting for covariate-related variation, and in prediction, where causal structure is irrelevant and inclusion is justified by predictive performance.
The context I had in mind was narrower: the observational study with a specific named exposure, a disease outcome, and an adjustment set chosen without stated justification — what Kezios calls the “seemingly causal” study. In that setting, adjusting for Z while asking specifically about X→Y is hard to interpret as anything other than an implicit causal operation, because the only coherent reason to condition on Z in that context, rather than simply describing Y or predicting Y from all available variables equally, is to isolate the X–Y relationship from Z’s influence.
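The non-causal precision role of adjustment is also easy to show. In this sketch (hypothetical variables, numpy only), x is a randomized binary exposure and w is a prognostic covariate independent of x; adjusting for w does not change what the coefficient on x means, it only shrinks its standard error by soaking up outcome variance.

```python
import numpy as np

rng = np.random.default_rng(1)

def coef_and_se(X, y):
    """OLS coefficients and classical (homoskedastic) standard errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, p = X.shape
    sigma2 = resid @ resid / (n - p)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta, np.sqrt(np.diag(cov))

n = 5_000
x = rng.binomial(1, 0.5, size=n).astype(float)  # randomized exposure
w = rng.normal(size=n)                          # prognostic covariate, independent of x
y = 1.0 * x + 3.0 * w + rng.normal(size=n)      # w explains most outcome variance

ones = np.ones(n)
_, se_crude = coef_and_se(np.column_stack([ones, x]), y)
_, se_adj = coef_and_se(np.column_stack([ones, x, w]), y)

print(se_crude[1], se_adj[1])  # SE on x shrinks substantially after adjusting for w
```

Because w is independent of x here, no backdoor path is involved; the adjustment is purely about precision, which is the benign case. The trouble starts when the same syntax is used on an observational exposure without saying which of the two roles is intended.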
Thank you for your questions. I selected this drug as an example randomly — one that was approved on this basis. I agree that withdrawing the drug for safety concerns is inconsistent with the context of terminal disease. I’m not familiar with the indication or with what other drugs may have been approved after that approval. It was just an example of how observational studies can be acceptable in some cases; there are many others. (Worth noting that HIV advocates pushed the FDA to consider conditional approvals based on surrogate endpoints.)
AI Query response: Between 2002 and 2021, the FDA granted 116 accelerated approvals for oncology indications based on single-arm trials.
- Endpoints: Roughly 98% of these approvals relied on response rate (RR) or duration of response (DOR) as surrogate endpoints.
- Line of Therapy: Most (approx. 74%) were for second-line or later treatments for metastatic disease.
- Confirmatory Trials: Manufacturers are required to conduct post-approval randomized controlled trials (RCTs) to verify clinical benefit (often using overall survival or progression-free survival).
- Success Rate: About 38–43% of these indications eventually fulfill their requirement to verify clinical benefit and convert to traditional approval.
I agree with Erin that this seems strange (and interesting). That drug might make a worthy discussion under its own heading.
On a related note, here is a case report we published last year on a potential safety signal with the EZH2 inhibitor tazemetostat in a patient with SMARCB1-deficient renal medullary carcinoma (RMC). The patient was treated with tazemetostat and subsequently developed a brain metastasis whose morphology, immunohistochemistry, and methylation profile had drifted toward a glial phenotype while still carrying the same PPP2R5E::KLC1 fusion and SMARCB1 loss as the pre-treatment tumor, largely confirming clonal identity. So the same tumor lineage appeared to have undergone glial transdifferentiation after exposure to an epigenetic modulator.
The mechanistic story is appealing: EZH2 inhibition by tazemetostat relaxes H3K27me3-mediated lineage constraints, and SMARCB1-null RMC cells already sit on a precarious epigenetic landscape, as we also recently showed here.
But, sticking to the point of this thread, this is exactly the setting where one should be most disciplined: we have a single patient, an unmeasured counterfactual, and a temporal association that could just as easily reflect the natural history of an aggressive tumor with rare CNS tropism whose survival outcomes are improving thanks to our ongoing research efforts. We therefore tried to be careful in the paper to frame it as a hypothesis-generating observation rather than evidence that tazemetostat caused the transdifferentiation, and I think it is a useful concrete example of a case that is descriptively striking (glial mimicry was previously unheard of, certainly in RMC) but causally only a conjecture.
With tazemetostat now off the market, and given its overall weak signal of efficacy in RMC, it is unlikely that many more data points will be generated in this setting.