It reports a very low odds ratio for death in patients taking ACE inhibitors (but not ARBs), compared to those not taking these medications, among hospitalized patients who tested positive for COVID-19. The point estimate is 0.33.
The odds ratio is so low that I strongly suspect an artifact explains the finding. I have been thinking about what the explanation might be.
I've actually been wondering about this study myself. The Table 2 fallacy is an issue when we adjust for lots of variables without considering the underlying causal model.
I think this is not as big an issue when looking at HTN-indicated drugs, as it offers a unique comparison of drugs with similar indications but different effects. Thus, if ACE-I look protective, but ARBs and other anti-HTN drugs don't, then I think the results are more believable.
The critical question is whether there is some other variable that selectively influences the choice of ACE-I over ARBs in a way that could introduce bias.
Thanks, Raj. This is the sentence that got my attention:
"A multivariable logistic-regression analysis was performed to ascertain the effects of age, race, coexisting conditions (coronary artery disease, congestive heart failure, cardiac arrhythmia, diabetes mellitus, COPD, current smoking, former smoking, hypertension, immunocompromised state, and hyperlipidemia), hospital location (according to country), and medications (ACE inhibitors, ARBs, beta-blockers, antiplatelet agents, statins, insulin, and oral hypoglycemic agents) on the likelihood of in-hospital death."
It seems to me that a proper causal model would treat some of these variables as having a fundamentally different relationship with outcomes compared to the drugs of interest.
Yes, you are right. They should ideally only select patients with an indication due to HTN, and exclude all others.
As written, there will be issues with pts on ACE-I for, say, CHF/DM, where selection into hospitalization could introduce a collider bias that falsely makes ACE-I look protective.
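A toy simulation (all numbers hypothetical, not from the paper) can show how this collider mechanism produces a spuriously "protective" odds ratio even when the drug has no effect on death at all:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical population: ACEi use and disease severity are independent,
# and ACEi has NO true effect on death in this simulation.
acei = rng.random(n) < 0.10
severity = rng.random(n)

# Collider: hospitalization depends on BOTH severity and ACEi use
# (e.g., ACEi patients admitted more readily because of their comorbidities).
hosp = rng.random(n) < (0.05 + 0.60 * severity + 0.25 * acei)

# Death depends only on severity, never on ACEi.
death = rng.random(n) < 0.40 * severity

def odds_ratio(exposure, outcome):
    a = (exposure & outcome).sum()
    b = (exposure & ~outcome).sum()
    c = (~exposure & outcome).sum()
    d = (~exposure & ~outcome).sum()
    return a * d / (b * c)

or_full = odds_ratio(acei, death)              # close to 1.0: no true effect
or_hosp = odds_ratio(acei[hosp], death[hosp])  # below 1.0: artifact of selection
print(or_full, or_hosp)
```

Conditioning on hospitalization (the collider) makes hospitalized ACEi users less severe on average than hospitalized non-users, so ACEi looks protective in that stratum despite having no causal effect.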
Interesting question! But it seems Table 2 here reports the "crude" unadjusted numbers, and they would imply a similar OR as in the adjusted analysis (ACEi use in survivors vs non-survivors: roughly 9% vs 3%).
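If the exposure prevalences really are roughly 9% among survivors vs 3% among non-survivors (approximate figures from the thread, not exact counts from the paper), the crude OR can be back-computed, since the odds ratio is symmetric in exposure and outcome:

```python
# Approximate ACEi exposure prevalences ("9 vs 3% ish"); not exact counts.
p_exp_surv = 0.09     # ACEi use among survivors
p_exp_nonsurv = 0.03  # ACEi use among non-survivors

# OR symmetry: the odds of exposure in non-survivors vs survivors
# equals the odds of death in exposed vs unexposed.
odds_nonsurv = p_exp_nonsurv / (1 - p_exp_nonsurv)
odds_surv = p_exp_surv / (1 - p_exp_surv)

crude_or = odds_nonsurv / odds_surv
print(round(crude_or, 2))  # 0.31, in line with the adjusted 0.33
```

So the crude and adjusted estimates do indeed roughly agree, which supports the point that the adjustment itself is not what drives the low OR.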
Therefore I don't think it's the adjustment; probably some confounding by indication is in play.
Given this is based on EHR data from 169 hospitals in many countries, and the exposure is "CV drug therapy recorded at the time of hospital admission": it is always difficult to disentangle medications that have been stopped at admission. I wonder if this may capture those who were ADMITTED on ACEi, not those who were on ACEi before admission but had it paused at admission (due to various reasons such as AKI or low BP). That does not really explain the difference from ARBs, though…
Thank you so much, @PerPersvensson! I also think confounding by indication is likely, but I cannot yet explain why this would be the direction of the odds ratio. Perhaps the main conclusion is: the causal model here is complex!
Fully agree @byrdjb! This is complex, and it is difficult to understand why ACEi would be "protective" (both crude and adjusted), especially when HF was associated with poorer outcome (which is not so difficult to understand). The methods are not very clear about exactly how treatment exposure is defined/collected in time: how are patients on ACEi before admission but not after admission handled? If a patient on ACEi passes through the ED and is kept on ACEi when admitted to hospital, to me that flags a healthier patient compared to those in whom ACEi treatment is withdrawn at admission (usually because of a lot of bad things). That could in theory explain why ACEi looks protective if only medication at admission counts as exposure, but that is speculation of course. There were 26 non-survivors with HF (an indication for ACEi) but only 16 non-survivors on ACEi. Treatment withdrawn at admission? Or on an ARB…
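That withdrawal mechanism can also be sketched as a toy simulation (all numbers hypothetical): if ACEi is more often stopped at admission in sicker patients, and exposure is defined as "on ACEi at admission", the recorded exposure looks protective even with no true drug effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000  # hypothetical cohort of admitted patients

severity = rng.random(n)
acei_before = rng.random(n) < 0.10  # ACEi before admission, independent of severity

# Hypothetical mechanism: the sicker the patient (AKI, low BP, ...),
# the more likely ACEi is withdrawn at admission.
withdrawn = acei_before & (rng.random(n) < 0.80 * severity)
acei_at_admission = acei_before & ~withdrawn  # exposure as recorded in the EHR

death = rng.random(n) < 0.40 * severity  # death depends only on severity

def odds_ratio(exposure, outcome):
    a = (exposure & outcome).sum()
    b = (exposure & ~outcome).sum()
    c = (~exposure & outcome).sum()
    d = (~exposure & ~outcome).sum()
    return a * d / (b * c)

or_before = odds_ratio(acei_before, death)          # close to 1.0: no true effect
or_recorded = odds_ratio(acei_at_admission, death)  # below 1.0: looks protective
print(or_before, or_recorded)
```

Defining exposure by pre-admission use gives a null OR, while the admission-recorded exposure is biased downward, because withdrawal selectively moves the sickest users into the "unexposed" group.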
It is crude in the sense that I did not really try to color-code anything, but merely put the pieces on the board as I see them. Are there other pieces that should be on the board? Are some of these in the wrong relationship with one another?
Very good improvements. I wonder whether there's something we could do with this conceptual model that would be of value, some formalization of it? In the meanwhile, I continue to work on our site of REPLACE COVID, a multi-center RCT on this topic with its data coordinating center at the University of Pennsylvania.
I am back, @PerPersvensson, with a heightened level of concern after the correction to the Lancet paper with overlapping authors, and after the questions about its data, which came from the same data source as this paper.
I could not agree more, @byrdjb, having followed this unfolding story on Twitter. https://twitter.com/PerPersvensson/status/1266811990720339971?s=20
Or I'd say "concerned" is an understatement if you read the link.
I have worked a fair amount with EHR data in recent years. In Sweden, where I live, we are very lucky to have many nationwide, high-quality health care and quality-of-care registers. Still, you need to be very careful with missing data and validity, and be aware of all kinds of bias, and this takes time…
And that is with data from one country; these papers claim to have collected data from all over the world in a couple of months. There are SO many questions: ethics, GDPR, data quality, etc. I actually don't believe it anymore. Very sad.
Credibility would not be so badly strained if the authors were able to disclose which institutions contributed data, were willing to name countries, were willing to share analytical code, and/or were stepping forth with at least a partial audit trail of the astonishing amount of work required to assemble these data. It's alarming, very alarming.
Not often, but from time to time con men actually show up in medical research; we had a very convincing one at my university (KI) a couple of years ago… Not my dept, of course.
We should not rule out that possibility here. Probably some very nervous coauthors, in that case.
Unfortunately this topic may have moved from a Table 2 fallacy to a fallacy on a much higher level…