The End of the "Syndrome" in Critical Care

Thank you. I am sorry for the delay in responding. Do I understand correctly that the current differential diagnosis for the symptoms suggestive of obstructive sleep apnoea or hypopnoea is:

  1. No obstructive sleep apnoea or hypopnoea (Evidence: AHI <5 events per hour)
  2. Obstructive sleep apnoea or hypopnoea (Evidence: AHI >4 events per hour)
  3. Other possible diagnoses

Are you suggesting that there should be a more detailed differential diagnosis (with the addition of the sentences in italics) to avoid missing Arousal Failure with Recovery and Arousal and Recovery Failure?

  1. No sleep apnoea or hypopnoea (Evidence: AHI <5 events per hour but no prolonged apnoea pattern of arousal failure)
  2. Sleep apnoea or hypopnoea with: (A) Repetitive Reduction in Airflow / RRA (Evidence: AHI >4 events per hour without a prolonged apnoea pattern of arousal failure), or (B) Arousal failure with recovery (Evidence: >0 events per hour of prolonged apnoea), or (C) Arousal and recovery failure (Evidence: >0 events per hour of very prolonged apnoea)
  3. Other possible diagnoses

Some points:

  1. The treatment for RRA is an oral device or CPAP, weight reduction, etc. Should the treatment for Arousal Failure with recovery and Arousal and Recovery Failure be expected to be the same as for RRA?
  2. Do we know from observational studies how prevalent Arousal Failure with Recovery and Arousal and Recovery Failure are in patients with symptoms suggestive of obstructive sleep apnoea or hypopnoea, both when the AHI is <5 events per hour and when it is >4 events per hour? Is it possible to suspect Arousal Failure with Recovery and Arousal and Recovery Failure clinically (e.g. with additional evidence of neurological dysfunction)?
  3. In order to establish the AHI threshold at which treatment for RRA provides a probability of benefit, I would do a study to estimate the probability of symptom resolution within a fixed time interval at different AHI values, both on no treatment or sham treatment and on treatment. This might be done by fitting a logistic regression function or some other model to each arm’s data: on no treatment the curve might be zero (or near zero) for all values of AHI, while on treatment the probability of symptom resolution should rise as the AHI rises. Treatment should then be considered where the treatment curve appears to rise above the control curve. This separation may well happen at an AHI of 5 events per hour, or above or below 5 events per hour (e.g. 3 events per hour). This would be an approach that sets the threshold based on evidence (as opposed to consensus guesswork).
  4. The above would apply to ‘RRA Obstructive Sleep Apnoea / Hypopnoea’. However, for Obstructive Sleep Apnoea / Hypopnoea with Arousal Failure with Recovery or Arousal and Recovery Failure the curve might be different, with perhaps a clear probability of benefit at any AHI >0. Note therefore that a study of this kind may create a number of different AHI treatment indication thresholds. The symptoms alone might provide criteria for a diagnosis of ‘Clinical Obstructive Sleep Apnoea / Hypopnoea’, but for a ‘physiological’ diagnosis there may be three different criteria for (i) RRA, (ii) Arousal Failure with Recovery and (iii) Arousal and Recovery Failure. Each of these would also be sufficient to diagnose ‘Physiological Obstructive Sleep Apnoea / Hypopnoea’ (i.e. each might be a ‘sufficient’ criterion for the diagnosis) as well as prompting the doctor to offer treatment options. However, the probability of benefit from each treatment (based on the logistic regression curve, with the AHI as a measure of disease severity) and the adverse effects of treatment would have to be discussed with the patient during shared decision making.
  5. Although I consider Sleep Apnoea / Hypopnoea in my differential diagnoses in internal medicine and endocrinology and have some understanding of its investigation and management, I have never personally conducted Polysomnography or personally treated patients with CPAP etc., so please correct any misunderstandings. However, based on my work of trying to improve diagnostic and treatment indication criteria in endocrinology, the above is how I would approach the problem for Sleep Apnoea / Hypopnoea. I agree with you that this is a problem that needs close collaboration between clinicians and statisticians. The advice of statisticians such as @f2harrell or @stephen or someone similar in your area would be essential. I think that this type of work to improve diagnosis and treatment selection criteria is a huge growth area for future close collaboration between clinicians and statisticians. I am trying to encourage students and young doctors (and their teachers) to do this in the Oxford Handbook of Clinical Diagnosis, especially in the forthcoming 4th edition.
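As a sketch of the threshold-setting study in point 3, the curve-fitting step might look like the following (Python, simulated data only). The data-generating assumptions, effect sizes, and the 0.10 separation margin are hypothetical illustrations chosen for the example, not estimates from any real study:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

def simulate_arm(n, treated):
    """Hypothetical data: P(symptom resolution) rises with AHI on
    treatment and stays near zero on sham/no treatment (as posited)."""
    rows = []
    for _ in range(n):
        ahi = random.uniform(0.0, 30.0)
        p = sigmoid(-3.0 + 0.25 * ahi) if treated else 0.02  # assumed curves
        rows.append((ahi, 1 if random.random() < p else 0))
    return rows

def fit_logistic(rows, iters=25):
    """Fit P(resolution) = sigmoid(b0 + b1*AHI) by Newton's method
    (standard IRLS for a one-predictor logistic regression)."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in rows:
            p = sigmoid(b0 + b1 * x)
            w = p * (1.0 - p)
            g0 += y - p          # gradient terms
            g1 += (y - p) * x
            h00 += w             # Fisher information terms
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

bt = fit_logistic(simulate_arm(2000, treated=True))
bc = fit_logistic(simulate_arm(2000, treated=False))

# Evidence-based threshold: the lowest AHI at which the treatment curve
# clearly separates from the control curve (0.10 absolute margin, assumed).
threshold = next(a for a in range(0, 31)
                 if sigmoid(bt[0] + bt[1] * a) - sigmoid(bc[0] + bc[1] * a) > 0.10)
print("suggested AHI treatment threshold:", threshold)
```

In a real study one would of course use an established fitting routine with confidence bands rather than this minimal Newton loop, and the separation margin would itself be a clinical judgement made during shared decision making.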
1 Like

I appreciate that your response is primarily designed to show a clinical/statistical approach to disentangling different diseases that have historically been lumped under one syndrome umbrella (e.g., “sleep apnea”). This is certainly very valuable. But for this particular condition, I’m not sure how much further ahead this disentangling would put us going forward.

I sense that sleep doctors are frustrated by systematic reviews which seem to call into question the value of treating sleep apnea. It costs a lot for the machinery needed to treat this condition properly. And I imagine that any time a payer can point to a list of “non-positive” trials as justification to withhold coverage for treatment, they are tempted to do so. This is presumably why the consequences of non-positive RCTs done historically in this field have been so frustrating for sleep physicians.

Lawrence suspects (I think) that one reason why RCTs of sleep apnea have not been able, historically, to show that treating apneas reduces a patient’s risk of death or cardiovascular outcomes, is that patients enrolled in previous trials should never have been lumped together in the first place. Using an arbitrarily-defined AHI cutoff, people with very mild underlying disease(s) causing their apneas were likely lumped together with patients with more severe/prognostically worse underlying disease(s), an approach to trial inclusion that was destined to produce “noisy” results (and therefore non-positive RCTs).

But even if sleep researchers wanted to rectify this problem going forward, knowing what we know today about the pitfalls of using syndromes as trial inclusion criteria, I really doubt that we’d have the ethical equipoise to do so (?)

Once we know that an existing treatment makes people feel and function better (and perhaps decreases the risk of motor vehicle accidents substantially…), it becomes ethically indefensible to leave them untreated for periods of time long enough to show benefits with regard to less common clinical outcomes (e.g. death, cardiovascular events, car accidents). This would be like denying people with chronic pain their analgesics for 5 years in order to show that those with untreated pain have a higher risk of suicide: unacceptable.

For the field of sleep medicine, this seems like a real dilemma (?) We have a strong clinical suspicion that if we were to design RCTs more rigorously, using patients with more homogeneous underlying pathology and worse (untreated) prognoses, we would be able to show significant benefits of treating their apneas with regard to “higher level” endpoints (mortality/cardiovascular events). However, because the treatment works so well for shorter-term patient-level endpoints, we are ethically prevented from designing the longer-term trials needed to show these higher-level benefits…

3 Likes

Thank you @ESMD. There are at least two issues here. The one that I tried to address is how to set, for a particular test and its results, thresholds for probabilities of benefit that make it worthwhile offering a treatment such as CPAP to a patient. This will vary from treatment to treatment and for different target outcomes (e.g. reduction of daytime sleepiness, snoring, alarm to the spouse because of episodes of apnoea, etc.). I would have thought that treating these alone should be justification for the use of CPAP. If, as @llynn suggests, the diagnosis might be missed and CPAP not provided because of failure to detect dangerous episodes that occur less frequently than 5 episodes per hour, then this should be worth correcting. This could be tested with an RCT in the three types of Obstructive Sleep Apnoea / Hypopnoea as I suggested.

If, in addition, there is a risk from Obstructive Sleep Apnoea / Hypopnoea of cardiac and vascular complications down the line (as suggested already by observational studies) and a possibility that CPAP might reduce this risk, then this is an extra bonus at no extra cost, whether it is true or not. As you say, it is not possible to test this with an RCT by randomising patients to CPAP or no CPAP. However, it might be possible to use other techniques, for example following up a cohort of those with an AHI below a threshold and not treated with CPAP, and also a cohort of people with an AHI above a threshold and given CPAP.

1 Like

On a related note, the idea of using a more comprehensive outcome scale that gives credit for reduction in, e.g., daytime sleepiness but also gives credit for any mortality reduction seems to be in order.

1 Like

Sorry for the delay, and thanks for all the thoughtful comments and suggestions. In my response I will present some fundamental concepts for the broader audience, so please forgive the basic nature of a portion of this discussion. Here we are using “sleep apnea” as the prototypic set of diseases defined as a synthetic syndrome under the 1980s-2022 era of pathologic consensus, but the fundamental considerations for the timely extrication of this pitfall from critical care science apply broadly.

I agree. There are multiple pathologies associated with the sleep apnea hypopnea syndrome (SAHS). Obstructive SAHS (OSAHS) is a subset of the syndrome for which CPAP is indicated. Clean separation of central and obstructive sleep apnea using the AHI is not possible, but the AHI is used for both of them, and there is considerable overlap, particularly because the low respiratory drive associated with central sleep apnea often causes the upper airway to collapse, producing mixed central and obstructive apneas.

Here you can see the first aspect of the problem: disease overlap when the diseases are defined by a common criteria set, with only weakly objective terms (such as requiring that at least 50% of events have an obstructive component) separating them.
Using a new diagnostic paradigm (that I am proposing here), the various sleep apnea diseases are considered pulmonary arrhythmias, defined and quantified by the time series components of the arrhythmia. These are then studied to identify pathology correlates, looking for those providing severity functions. Considering the example of arousal failure: this is a pulmonary arrhythmia with a sentinel pattern, and its incidence could be determined by retrospective review of archived time series. Arousal failure has been theorized as the cause of the not uncommon “opioid-associated unexpected hospital death”, so this is important.

Yes, as noted above, just as we do for cardiac arrhythmias. Interestingly, in the 80s counting premature ventricular contractions was once a way of quantifying a cardiac arrhythmia. The cutoff for treatment was 5 per minute. Studies showed excess death in the treatment group, so the counting approach was abandoned. So the issue would be simply: does the patient have the sentinel pattern of arousal failure or not, the AHI being irrelevant. This is not much different from your approach. It just eliminates all of the known problems of repeatability of the AHI from lab to lab in the same patient.
For all the reasons above it is pivotal to start fresh, as if the AHI did not exist, and think about how to define each disease in the “syndrome”. This is appropriate since the AHI was a guess, based on the number of fingers on one’s hands (a 10-second threshold for an apnea or hypopnea), and the original cutoffs of 10, 20, and 30 (which have been changed to 5, 15 and 30 to increase sensitivity) were capricious and based on the metric system, not human pathophysiology. The selection of cutoffs of 5, 10, 20 or 100 was common 1980s Pathological Consensus.

The decision to diagnose all of the sleep apnea diseases AND severity-index them by the same simple sum of 10-second airflow cessations or attenuations was the apical error from which all the subsequent pathologic consensus followed. So in my view the first thing to do is abandon all anchoring bias toward the original guess and look at the time series patterns themselves to define the diseases and severity indices. Here we have the thought experiment “what if the AHI did not exist?” First we would identify the highest diagnostic measurement type achievable for each target pathology.

Diagnostic measurements in descending order of statistical relevance (the order of 2 and 3 is debatable):

  1. Specific measurement which varies as a function of the severity of the target pathology
  2. Non-specific measurement which varies with the severity of the target pathology
  3. Specific measurement rendering only a true/false state (e.g. PCR testing)
  4. Non-specific measurement which does not vary with the severity of the target pathology

The guessed measurements used in 1980-2022 pathological consensus (the AHI, SIRS, and Sepsis III) are of type 4 (AKI is type 2). Type 4 measurements may not “rise to the statistical level” if they are set too sensitively (e.g. an AHI cutoff of 5), as this produces a profound signal-to-noise problem that is not mitigated by severity, since severity is not a function of a type 4 measurement such as the AHI.

If there are no type 1 measurements then a type 2 measurement must be determined by formal research. So, starting as if the AHI does not exist, we would examine the time series patterns for severity correlation, first seeking specific patterns not present in health. Again, separating the health state from the disease state by quantifying an excess of events which also occur in the health state (e.g. a 10-second apnea) is particularly problematic, so we seek sentinel time series patterns which do not occur in health. This is the approach taken by analogous cardiac arrhythmia diagnostics.
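As a toy illustration of this signal-to-noise point (every number below is invented for the example, not drawn from real data): a threshold count that also accumulates benign short events in health tracks latent disease severity poorly, while a sentinel-style measure that is near-absent in health and grows with severity tracks it well.

```python
import math
import random

random.seed(1)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical model: each subject has latent disease severity s in [0, 1].
# Healthy sleep also contains occasional short pauses, so a 10-second
# threshold count (type 4, AHI-like) mixes health noise with disease,
# while the longest event duration (a candidate type 1/2 measure) grows
# with severity and contributes little in health.
severity, ahi_like, longest = [], [], []
for _ in range(500):
    s = random.random()
    benign = max(0.0, random.gauss(3.0, 1.5))  # benign short pauses/h in health
    severity.append(s)
    ahi_like.append(benign + 2.0 * s + max(0.0, random.gauss(0.0, 1.5)))
    longest.append(10.0 + 40.0 * s + random.gauss(0.0, 3.0))

print("corr(severity, AHI-like threshold count):", round(pearson(severity, ahi_like), 2))
print("corr(severity, longest-event duration):  ", round(pearson(severity, longest), 2))
```

Under these assumed parameters the threshold count correlates weakly with severity while the duration measure correlates strongly, which is the behaviour the type 1-4 ordering above predicts.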

Yes. It is likely that the severity of excessive daytime sleepiness (EDS) will be a function of different pathological time series patterns than, for example, hypertension or opioid-associated sleep apnea death due to arousal failure.

Arousal is generally required to rescue the patient if the airway is obstructed during sleep. So a treatment which prevents obstruction may “treat” arousal failure, because the arousal would not be required. This is exactly what we must learn. I recommend CPAP post-op for OSA patients with prolonged apneas during sleep and low SpO2 nadirs (arousal failure) precisely because we do not know the efficacy of either approach, but CPAP is considered more effective.

No. One theory of arousal failure is that it is due in some cases to plasticity of the arousal response, as one eventually is not aroused from sleep when living by a nearby train track. If this is true then RRA eventually induces arousal failure, and cases with arousal failure are therefore likely to be more severe. This is an unknown area.

Pretest probability is unknown. “At risk” patients include those receiving opioids (which increase the arousal threshold), those with central nervous system injury, genetic conditions such as Arnold-Chiari malformation, and congenital or acquired diseases of hypoventilation. However, some of these are often occult. At the present time arousal failure is generally occult and only identified by screening overnight oximetry.

Absolutely, and not just for OSAHS but for all the guessed “synthetic syndromes” presently defined by the 1980s technique of pathological consensus. This entire problem of measurement (which is profound) needs to be formally addressed by the top statisticians. Only then will we begin to move to the next paradigm.

Linking the pathological time series patterns (severity indexing of RRA, arousal failure and recovery failure, for example) to specific pathologies (EDS, hypertension) would seem to be a second step, after identification of measurements for all the pathological time series patterns.

Finally, is there a place for the old guessed measurements like the AHI? I have to say no. There might be value in counting apneas, but counting threshold 10-second apneas has proven to be completely inadequate and would probably add nothing to more robust counting measures which consider duration and RRA patterns.

The new future of clinical measurement must begin with an understanding that the apical mistake was to consensus-guess a type 4 diagnostic measurement and base 35 years on this guess as a measurement standard, mandated by grant and publication gatekeepers. A lesson should also not be missed about the harm of central gatekeepers and the failure to seek the required measurement oversight from mathematicians (statisticians).

It’s very difficult for researchers in the field to do anything about this. A formal mathematical presentation of this problem, in a paper by respected statisticians, is required to release the researchers from the 1980s pathological consensus paradigm.

1 Like

The premature ventricular contractions (PVCs) example is an even better example than it seems. Work done at Duke University Cardiology in the late 1980s showed that the frequency of PVCs is not independently prognostic once the amount of permanent ventricular damage is taken into account. But what is prognostic is the nature of the PVCs: if they are the R-on-T type of PVC, they are independently prognostic. You need frequency and morphology.

3 Likes

Excellent. The morphology of a time series is responsive to the pathophysiology just as it is in cardiac arrhythmia.

Let us now look at what happens to a field of science under “Pathological Consensus”. The problem I am about to present has been present for the past 35 years in the other science fields of sepsis and ARDS. Perhaps also in AKI, but I don’t follow that literature.

Attached is a Cochrane review of opiates and sedating drugs in sleep apnea. Opiate-associated sudden sleep apnea death is the scourge of hospital post-op wards. Occurring without warning, its effects are devastating, and its occurrence, while not common, is also not rare.

From case reports of the few sleep apnea patients who were monitored during sleep apnea death or near-death, the cause is arousal failure, and we know that opiates delay the arousal response. So the population to study would be those with pre-existing arousal failure, in whom such opioid-induced delay could be deadly, and the primary endpoint would be delay in arousal or the emergence of incomplete recovery (the wide-complex-tachycardia and R-on-T analog).

Here we see that they instead used the guessed metric, the AHI, looking for increased severity as a rise in AHI. This is not even expected based on the pathophysiology, but that is what the consensus agrees severity is, so that’s what they looked for. They treat the AHI like a blood pressure, as if it rises as severity increases. Of course, that is what the pathological consensus says it does: 5-15 mild, 16-30 moderate, greater than 30 severe. Those are the consensus rules. They also use the ODI (oxygen desaturation index), an old metric based on counting 3 or 4% dips in SpO2. It’s another guessed threshold-counting metric like the AHI.

Finally, they talk about the duration of the apneas in the conclusion, but it was not an endpoint. What is “duration”? The mean? In any case the study provides no information useful to determine how to screen, what to look for, or anything else useful. It is an exercise in the statistical processing of useless guessed numbers.

This is the state of the art of sepsis, sleep apnea, and ARDS: useless research which looks real, just as Langmuir described in 1953, but now not just in some poor person’s lab; it is mandated worldwide by funding and publication gatekeepers.

Feynman’s cargo cult research on a worldwide scale, with the fires beautifully laid out, the hapless statistician dutifully doing the work, and the Cochrane gatekeeper of the science never looking beyond the runway. For 35 years the cargo planes never land, yet the natives do not lose faith. They dance around their editorials talking about the next, more perfect, runway-lining fires they will light.

The entire affair reads right out of the descriptions of the known potential pathological sociality of science. We have known since the 1960s how this happens, and it happened anyway, persisting for decades unabated.

The diligent, enthusiastic Ptolemaic scientists have been vindicated by the near-identical failings of their counterparts in the 21st century.

2 Likes

Yikes! Mortifying to think that there might be physicians who applied the findings from the linked Cochrane review to their prescribing practices for patients with sleep apnea…

1 Like

Yes, the unfortunate consequence of 1980s “Pathological Consensus” is that research using the guessed metric may be used for high-risk medical decision making. It is very likely that many physicians made decisions based on this review, as it has been highly cited.

Furthermore, the Cochrane Collaboration is considered a gatekeeper. They are, and should be, highly respected. They did not know; neither did the authors or the statisticians. This is a tragedy on a grand scale.

This is why I have called for a recall of the use of Pathological Consensus-based metrics in clinical trials. They may be useful for clinical care, but they were guessed and have no place as valid measurements in clinical trials, where their use can lead to wrongful conclusions and be harmful to risk-defining decision making. The recall should be promulgated by the leadership soon.

Finally, a notation of the fact that the AHI and ODI were used for severity indexing in this study, and that this is not a valid surrogate for sleep apnea severity (especially as it relates to opiates), should be provided promptly, as this article is still being highly cited, has been included in at least one guideline, and is (apparently) still considered reliable evidence.

1 Like

Quoting the conclusion of the article above, specifically relating to opioid, hypnotic, and sedating medication use in OSA.

“The findings of this review show that currently no evidence suggests that the pharmacological compounds assessed have a deleterious effect on the severity of OSA as measured by change in AHI or ODI”.

Since the AHI is the standard measure of OSA severity this is a powerful statement.

Most here at this forum are outlier expert statisticians, mathematicians and clinicians. Most are here because we love science and math. There are no citations which are going to generate CV expansion.

However, this paper is a reminder that we are all at the bedside. What we do or do not do affects decision making and patient care. We see a methodological mistake like this and we want to turn away; we don’t want to get involved. It’s like seeing bad care at the hospital: healthcare workers want to turn away. We should not. It is a responsibility we bear as healthcare scientists. We have a great purpose. Let us embrace it without undue deference to expediency.

Thanks to all and especially Dr. Harrell for this wonderful, scientifically rich, forum…


1 Like

To deny the negative influence of opioids and sedatives on the patency of the upper airway is to deny the years of solid, real world, pathophysiologic evidence (aka ‘common sense’) that supports the notion that these commonly used drugs worsen SDB and may do more harm than good. (see refs below).

But the saving grace of the above statement may be the ending: “…as measured by change in AHI or ODI”. The AHI has unjustly assumed the perch atop as the gold standard for diagnosis of OSA, and the bold untruth of the Cochrane conclusion statement exemplifies the danger of unchecked ‘pathological consensus’. So the qualifier at the end of the statement may actually be its saving grace.

Cochrane Reports strive to be the final word on safety and efficacy evidence.
My favorite from Cochrane is ref (4): “…studies of perioperative monitoring with a pulse oximeter were not able to show an improvement in various outcomes.” I’m delighted the bean counters haven’t snatched pulse oximeters from our hands because they “don’t offer benefit”. I shudder to think what would happen.

Let’s try another: “Tom Brady is not deserving of his title as the G.O.A.T., as measured by grimaces on the faces of the fans in the stands” (away games only).

I’m enjoying the discourse.
FJO

  1. Overdyk, Frank J., et al. “Association of opioids and sedatives with increased risk of in-hospital cardiopulmonary arrest from an administrative database.” PloS one 11.2 (2016): e0150214.
  2. Overdyk, Frank J., and David R. Hillman. “Opioid modeling of central respiratory drive must take upper airway obstruction into account.” The Journal of the American Society of Anesthesiologists 114.1 (2011): 219-220.
  3. Izrailtyan, Igor, et al. “Risk factors for cardiopulmonary and respiratory arrest in medical and surgical hospital patients on opioid analgesics and sedatives.” PloS one 13.3 (2018): e0194553.
  4. Pedersen, Tom, Ann M. Møller, and Bente D. Pedersen. “Pulse oximetry for perioperative monitoring: systematic review of randomized, controlled trials.” Anesthesia & Analgesia 96.2 (2003): 426-431.
2 Likes

This December 17th article extends the conversation above by presenting the consequences of Pathological Consensus during the pandemic. It is likely that many patients died during the pandemic due to adherence to False Evidence-Based Medicine (FEBM). False EBM is EBM based on RCTs performed using fake (guessed) measurements. Here we see again that we are all at the bedside and that patient harm can be the consequence of RCTs performed using fake consensus measurements. The action is not benign. ARDS: hidden perils of an overburdened diagnosis | Critical Care | Full Text

Here it’s interesting to see the focused consequences of the broad systemwide failure of the platform approach of pathological consensus. Dr. Tobin’s article exposes one dimension, the adverse bedside care of “ARDS” driven by those wishing to standardize criteria for RCT within pathological consensus, the present failed domain of critical care syndrome science.

Of course, Dr. Tobin is only lamenting in the vertical space, and as the thread above shows, the problem is much broader than that. Agonizing over the vertical, even with the eloquence of Dr. Tobin, renders little more than the appearance of alternative thought, making the science appear more robust. It does not disturb the root cause: the domain itself.

Now, after the pandemic, the enlightened mind can readily perceive the broad counter-instances the trialists still discount as isolated anomalies. We know they do because they have introduced a new synthetic syndrome they call “NON COVID ARDS”. They are allowed to simply create new synthetic syndromes, without discovery of their measurements, within the domain of pathological consensus.
In other words, since severe COVID pneumonia met the guessed Berlin criteria for ARDS but did not do well under its protocolized treatment defined by RCTs for ARDS, they will now simply exclude COVID by defining two new synthetic syndromes, “COVID ARDS” and “NON COVID ARDS”. Other domains require discovery of new syndromes; not so the domain of pathological consensus. In that domain leaders are free to simply pivot on a whim in preparation for the next RCT and protocol set.

I have shown that pathological consensus is the domain in which critical care syndrome science operates, and that the variable SETS of different diseases with disparate average treatment effects, captured by the guessed measurements (criteria) of the pathological consensus, cause non-reproducibility of the RCTs. Here is an image which shows the problem, which they now seek to solve by administrative action (excluding COVID rather than reconsidering the failed dogma).

Pathological consensus replaces the more difficult task of discovery: they guess, rather than discover, the measurements, allowing RCTs to proceed. At the bedside the protocols replace the more difficult patient-focused care, allowing bedside care to be easily performed without prolonged bedside attendance by the academic; this also allows easy pseudo-quality measurement as a function of compliance with the surrogates derived from the domain.

It should now be obvious to all that pathological consensus is a learned (1980s) top-down administrative technique; it is not science, and it renders only pathological science. In this domain the leaders are free to adjust the top whenever needed, as if one can simply guess a new orbit for Mars.

The new variables of the pandemic exposed the pathological science of ARDS as a simplistic technique built from the top down and grounded in invalid, ephemeral assumptions. ARDS science is simply built on the pathological consensus domain, the same domain upon which sepsis and sleep apnea were built, so all are freely amendable. This is not a benign construct. Rather, it is a dangerous, fundamental, standardized error in methodology which leads to false EBM and patient harm.

Now is the time to put forward a set of guidelines defining how clinicians will interpret the work of trialists and statisticians who perform RCTs under Pathological Consensus. In NASA, astronauts were following the rules of the scientists when a lethal fire (the pandemic equivalent) changed everything. The astronauts were scientists themselves, well trained in physics and, more importantly, in the live expression of it. They set up their own rules and announced they would not consume the work of the scientists unless certain guidelines to assure the quality of the science (safety) were met. This is what is required now in critical care.

In reality, the free-thinking, physiology- and pathophysiology-trained clinician is, ironically, the more academic of the players, given that the trialists and their acolytes cling to the simplistic thinking of the failed dogma. The math they present seems so empowering, but without solid measurements the math is an embellishment, a sleight of hand by the trialists luring in the math-enamored masses despite repeated decadal cycles of failure and top-down amendments to suit the output, like “NON COVID ARDS”.

So what might clinicians set as rules defining the minimum standards for research they are willing to consume? Here are a few starters.

  1. Measurements (e.g. threshold criteria) for RCT cannot be guessed.

  2. SETS comprised of many different diseases with potentially disparate average treatment effects cannot be combined as a syndrome for RCT just because they meet a guessed set of criteria and have similar clinical presentations.

  3. All present standard consensus derived measurements (criteria) must be investigated to determine if they were guessed and to define the scientific rigor of their derivation.

  4. RCTs performed using Pathological Consensus (guessed measurements) are not just wasteful; they lead to false evidence-based medicine and are therefore dangerous to the health of the public. Discovery first, RCT second. No guessing.

If you haven’t watched the 20-minute talk describing the unbelievable, tragic history of fake measurements, false RCTs, false EBM, and Pathological Consensus, don’t miss it. It will remind you that, even after the lessons of Galileo, science still operates as a collective which often mandates failed dogma, and this is potentially dangerous for our patients, for our families when they become sick, and to us all.


The methods of Pathological Consensus are simple… The consensus group is free to update and change the measurements for RCT…

Don’t miss the sister discussion of a new update at

Changing SOFA would presumably change the criteria for RCT of the “Synthetic Syndrome” of sepsis, which is based on the old 1996 SOFA. Here you see how variable the synthetic syndrome actually is and why the RCT are not reproducible.

Not only does the guessed set of thresholds capture a variable mix of diseases but the guessed set itself also changes at the will of the consensus group…

RCT reproducibility using guesses (especially freely changeable guesses) as measurements should not be expected…

Very exciting developments.

After 35+ years of refusal to budge, the thought leaders associated with the sleep apnea hypopnea syndrome (SAHS) have decided to convene a discussion about the need for new severity measurements in sleep apnea.

It is likely that this thread (11K views) and the sister thread What is a fake measurement tool and how are they used in RCT - #46 by llynn (with over 13K views) had much to do with this welcome nascent acquiescence. Thanks to everyone who forwarded the links to these threads.

The power of social media is enhanced by the fact that most researchers want to do the right thing; they are simply blinded by dogma and by the drive to win grants by conforming to prevailing thought. They are first trained within, and then contribute to, generational cycles of indoctrination.

This is why relentless pressure for introspection and reform from social media can work. Most want to get it right; they just have to be convinced of the need to open their intellectual horizons…

Sign up to join today. This is an opportunity for statisticians to see how measurements for RCT were derived near the end of the 1970s–2020s era of #PathologicalConsensus.

January 27, 2023 (Friday) at 12noon EST/18:00 CET @ATSSRN @EuroRespSoc
https://t.co/ZCDiVqta10


The Smoking Gun
Proving the Need for Guidelines for Development of RCT Measurements

Up until now I did not have proof that RCT measurements were simply guessed, and I’m sure many of you did not believe me. It is hard to believe that three decades of sleep apnea, ARDS, sepsis, and other critical care RCT and observational trials have used guessed (not discovered) threshold sets as measurements. Recall that in my video I presented the Apnea Hypopnea Index (AHI) as the prototypic pathological consensus, as it is a set of thresholds guessed in the 1980s and adopted as the standard (indeed mandated) measurement for sleep apnea research.

Here, in the ATS and European Respiratory Society seminar, is “The Smoking Gun.”
Watch at least these five minutes: scroll to 22:00 and watch at least to 27:30.
http://ow.ly/pbaT50MJK66

I contend that guidelines are necessary now because standard and mandated RCT measurements are often guessed (without discovery) and that this started in the 1980s when trialists decided they could implement their own guessed threshold sets as measurements and thereby move directly to perform RCT, effectively bypassing the pivotal discovery phase of science. This I call "Pathological Consensus".

I appreciate the courageous candor those in the seminar display. Recall that I mentioned the Chicago consensus meeting discussed in my video… Only statisticians will be able to help them by setting up guidelines for RCT measurements because most trialists in the fields of sleep apnea and critical care were indoctrinated in this 1970-1980s methodology just like I was until the 1990s.

It is only about five minutes, so please watch the section of the seminar from 22:00 to 27:30.
Now that you have seen “the smoking gun”… Please. Help.

It’s up to us. There is no backup…

I began the campaign teaching about the pitfalls of deriving “fake RCT measurements” for the performance of RCT in the study of synthetic syndromes before the pandemic.

What is a fake measurement tool and how are they used in RCT.

It is amazing that in the middle of these teachings, the emergence of the pandemic provided a natural social experiment to show how critical care researchers would respond to a clear counter-instance to RCT applied to synthetic syndrome science.

Acute Respiratory Distress Syndrome (ARDS) is one of those “synthetic syndromes”. ARDS was defined nearly 50 years ago by a pulmonologist named Tom Petty. Editorial: The adult respiratory distress syndrome (confessions of a "lumper") - PubMed

Dr. Petty was a confessed lumper, and he included severe viral pneumonia in the ARDS definition he made up. Unbelievably, nearly 50 years later, Dr. Petty’s decision to include severe viral pneumonia in ARDS would have a severe impact on perceived EBM care for COVID pneumonia in 2020. It also provides the basis for us to see how synthetic syndrome scientists think when faced with a disease (COVID pneumonia) that never existed before Dr. Petty’s guess but meets the criteria for the syndrome he made up nearly 50 years earlier.

So here you see them, over 30 years, studying corticosteroid treatment of Tom Petty’s ARDS.
https://www.minervamedica.it/en/freedownload.php?cod=R02Y2010N06A0441

Then, about 12 years later in 2019 (one year before the pandemic), a Cochrane review provides a meta-analysis examining the mortality endpoint associated with corticosteroids in ARDS. The review states: "We found insufficient evidence to determine with certainty whether corticosteroids… were effective at reducing mortality in people with ARDS.…" This RCT evidence caused perceived “evidence-based” opposition to corticosteroid treatment of “ARDS” due to severe COVID-19 pneumonia (remember, severe COVID pneumonia was called ARDS because Tom Petty included severe viral pneumonia in the ARDS definition he made up in the 1970s).

In contrast to the 35 years of “ARDS” RCT, this is the study of corticosteroids in severe COVID pneumonia, which nevertheless meets the 1970s definition and the 2012 criteria for ARDS.

Now those clinging to the RCT and meta-analyses for ARDS (the Tom Petty synthetic syndrome) were totally confused.

So this is what has emerged from this social experiment: the splitting of the ARDS synthetic syndrome into a dichotomy.

But why a dichotomy? Why should we believe that NON-COVID ARDS is a syndrome? If we had a pandemic of pancreatitis, would we be splitting off ARDS secondary to pancreatitis based on its effect on the results of a meta-analysis wherein so-called pancreatitis-associated ARDS dominated?

It is interesting to watch and read the effects of the cognitive flagellations as the application of RCT to Tom Petty’s 1970s guessed synthetic syndrome is exposed as non-science by the emergent pandemic. But couldn’t we see it was non-science all along? We bundled viral pneumonia-induced pulmonary dysfunction due to a novel virus in 2020 with pulmonary dysfunction associated with trauma for an RCT because one pulmonologist thought that was a good idea in 1976?

Worse though is the emergence of the new synthetic syndrome for RCT “Non COVID ARDS”. This shows the lesson of the pandemic was not learned. RCT are not applicable to guessed cognitive buckets of diseases because the bucket mix changes all the time. Sometimes that change is due to a new virus. Scientists must learn the lesson the new virus has taught about the futility of guessing syndromes and making up measurements for RCT.

Now think again about this process:

  1. Prior to 1975, Tom Petty guesses a syndrome, “ARDS,” and includes severe viral pneumonia in it.
  2. 1987 RCTs begin investigating corticosteroids in the treatment of Tom Petty’s guessed syndrome…
  3. Over 3 decades the results of the RCT for “ARDS” are mixed but largely weak or negative.
  4. COVID Pneumonia emerges as the greatest acute killer since WW2.
  5. Academics decide that, since COVID pneumonia meets Tom Petty’s 1970s definition (as amended) and RCT of Tom Petty’s ARDS have been negative, corticosteroids should not be used in severe COVID pneumonia.
  6. An RCT is performed only on severe COVID pneumonia and it shows efficacy.

This RCT of the synthetic syndrome “obstructive sleep apnea” has it all.

  1. Use of two guessed measurements for endpoints (the AHI and ODI)
  2. The guessed measurements are both comprised of guessed thresholds from the 1980s.
  3. Selection of two arbitrary (guessed) threshold changes from baseline for endpoints. (50% & 25%)
  4. Selection of an arbitrary threshold as an endpoint. (AHI of 20, which previously was considered moderate severity but for this RCT is selected as an endpoint target).

“Responder rates were used to evaluate efficacy end points, with responders defined by a 50% or greater reduction in AHI to 20 or fewer events per hour and a 25% or greater reduction in oxygen desaturation index (ODI) from baseline.”
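To make the arbitrariness of this endpoint concrete, here is a minimal sketch (my own illustration, not the trial's analysis code; the function name and example values are hypothetical) of the quoted responder definition. Note how responder status can flip on a one-event-per-hour margin at the guessed AHI cutoff of 20:

```python
def is_responder(ahi_baseline: float, ahi_followup: float,
                 odi_baseline: float, odi_followup: float) -> bool:
    """Responder per the quoted definition: >=50% AHI reduction,
    AHI at or below 20 events/hour, and >=25% ODI reduction."""
    ahi_halved = ahi_followup <= 0.5 * ahi_baseline      # guessed 50% change
    ahi_below_cutoff = ahi_followup <= 20                # guessed AHI threshold
    odi_quartered = odi_followup <= 0.75 * odi_baseline  # guessed 25% change
    return ahi_halved and ahi_below_cutoff and odi_quartered

# A patient falling from AHI 44 to 21 achieves a >50% reduction yet fails
# the arbitrary AHI<=20 cutoff; one fewer event per hour flips the label.
print(is_responder(44, 21, 30, 20))  # False
print(is_responder(44, 20, 30, 20))  # True
```

The point of the sketch is that each of the three comparisons rests on a separately guessed number, so the reported "responder rate" is a function of those guesses as much as of the treatment.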

Remember sleep apnea is the prototypic synthetic syndrome of those discussed in this thread. Hypoglossal nerve stimulation with an implanted device is presently marketed as an optional treatment for Obstructive sleep apnea.

Remember, in the seminar of the ATS and ERS, the AHI was identified as non-reproducible and poorly correlated with morbidity.

If you missed it earlier here in the ATS and European Respiratory Society Seminar they discuss the AHI
Watch at least these five minutes: scroll to 22:00 and watch at least to 27:30.
http://ow.ly/pbaT50MJK66

As I have pointed out in the past, in “synthetic syndrome science” the measurements for the RCT endpoints are arbitrary and comprised of guessed thresholds, so the researcher is free to pick any arbitrary threshold value, or any threshold change from baseline, as an endpoint. In fact, here they guess two different arbitrary changes from baseline for two different guessed measurements in the same RCT.


If you have followed this thread please join in the discussion at the new linked thread which discusses Syndrome Science based research of ARDS, which was perceived as relating to the critical care management of COVID pneumonia.

I was wondering whether acute chest syndrome might be another example of this lumping together of disparate entities into a syndrome. It seems acute chest syndrome can occur due to fat embolism, atelectasis secondary to vaso-occlusive pain, infection, or asthma, which seem to be pretty different causes.


This is an excellent question because it brings into focus the relationship of clinical syndromes and the group I will call “Synthetic Syndromes”.

Synthetic syndromes are legacy syndromes guessed in the 1960s and 70s which are comprised of variable sets of diseases having diverse pathophysiology but which are combined by pathological consensus. These include ARDS, sepsis, and sleep apnea. They exist today as synthetic syndromes because they present clinically in a strikingly similar way, and one or more physicians in the remote past lumped them together, thinking they had a common pathophysiologic basis. Over many decades the synthetic syndrome became the platform dogma for funding, research, and protocolization.

Specifically, synthetic syndromes lack a common fundamental (apical) pathophysiologic basis.

In contrast, Acute Chest Syndrome (ACS) is a clinical syndrome which comprises a set of pathologies caused by sickle cell disease (SCD). Here the syndrome (as a cognitive entity) is really a memory and teaching tool, useful to herald the need for testing and intervention responsive to the common cause (SCD) and one or more associated triggers such as infection.

Clinical syndromes like ACS are highly useful. Object thinking and teaching is efficient and effective, so lumping the different manifestations into a syndrome aids teaching, memory, and awareness. Here there is a common pathophysiologic basis, so the measurements defining a clinical syndrome may rise to the statistical level for the study of treatment targeting the apical pathophysiology or a substantial downstream component of it.

However, a methodological error occurs when trialists and statisticians assume that guessed measurements defining synthetic syndromes (things that look similar but lack a common pathophysiologic basis), like ARDS, rise to the statistical level. There may be some value in teaching a given synthetic syndrome, but it is cognitively hazardous to treat a synthetic syndrome as an object for a common protocol or for RCT.

This is the pivotal point that trialists must understand. Clinical syndromes with a common pathophysiologic basis are objects for which measurements may rise to the statistical level.

In contrast, legacy syndromes which, after 40–50 years, have been found to lack a common pathophysiologic basis and which are comprised of a set of disparate diseases are “synthetic syndromes.” They look like a single object, but guessed measurements of synthetic syndromes do not rise to the statistical level. The mix of diseases captured by the guessed measurements will change with each RCT, and since the diseases lack a common pathophysiologic basis, this generates nonreproducibility.
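The nonreproducibility argument can be sketched numerically. Assuming, purely for illustration (the effect sizes and mixes below are made up, not taken from any trial), two diseases captured by the same guessed criteria, one helped and one harmed by a treatment, the trial-level average treatment effect depends entirely on the enrolled mix:

```python
def trial_effect(frac_disease_a: float,
                 effect_a: float = -0.10,  # disease A: treatment reduces mortality
                 effect_b: float = +0.05   # disease B: treatment increases mortality
                 ) -> float:
    """Average treatment effect of a 'syndrome' RCT as a weighted mix
    of two diseases with opposite true effects (noise-free idealization)."""
    return frac_disease_a * effect_a + (1 - frac_disease_a) * effect_b

# Three "RCT" enrolling under identical guessed criteria but different mixes:
for mix in (0.8, 0.5, 0.2):
    print(f"fraction disease A = {mix:.1f} -> ATE = {trial_effect(mix):+.3f}")
```

Even with zero sampling noise, the estimated effect ranges from clearly beneficial to harmful as the mix shifts, so successive trials of the same "syndrome" need not agree.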

If, in the future, a common pathophysiologic basis for one or more of these synthetic syndromes and/or reproducible signals responsive to a common pathophysiology is(are) discovered, then measurements derived from that discovery will likely rise to the statistical level.

Alternatively the synthetic syndrome may be divided into subsets but only wherein a common pathophysiologic basis or fundamental specific signal for each such subset is discovered (as is true for some types of sleep apnea).

Most importantly your question brings the discussion of the difference between a “clinical syndrome” and a “synthetic syndrome”. That’s a pivotal point for all to consider.
