The End of the "Syndrome" in Critical Care

I have been periodically discussing the state of RCTs in critical care. While there have been few online comments, I have been encouraged by the offline comments and approval of what I am trying to teach here. This work comes from 40 years in the ICU watching the evolution of the science, and I speak primarily to the young, who may still be developing an overarching cognitive construct of critical care science.

This is the evolution I observed over those 40 years:

  1. The emergence of “threshold decision making” in the 1980s (The threshold approach to clinical decision making - PubMed),

  2. The emergence of “Consensus Syndromes” defined by guessed threshold sets like SIRS and SOFA (e.g., Sepsis, ARDS, Sleep Apnea) in the 1980s-90s,

  3. The non-reproducibility of RCTs applied to most of the above “syndromes”,

  4. The progressive protocolization of the medical care of the consensus syndromes since the turn of the century (often despite the non-reproducibility of RCTs).

Without this perspective it would be hard to see how all of this happened. After all, critical care syndromes seem so scientific on the surface. However, while there was much evidence that the science, mathematically embellished by statistics, was poorly productive, it was not until the pandemic that the dangers of the weak science were exposed. Here is a paper discussing the evolution of thought away from the idea that severe COVID pneumonia was the “syndrome” Acute Respiratory Distress Syndrome (ARDS). You see the authors struggle with how to present the discovery that treatment effective for ARDS is not effective for severe COVID pneumonia, even though it meets the consensus criteria for ARDS.

Reading the discussion is like rewinding 30 years of the evolution of the science which pinned together 1, 2, 3 & 4 above. It's strange that scientists think their consensus automatically applies to some future disease, yet virtually all did. Scientists who were taught these syndromes were real cannot process the concept that a disease which meets the criteria for the syndrome is not in the syndrome. It produces cognitive dissonance. Not even the lexicon works anymore. Is it ARDS or not? A silly question really, but one that makes sense to them in their made-up world of consensus syndromes.

Every mistake in medical science has a companion mathematical mistake. The language of medical science is math. The construct of the human time matrix is comprised of time series of mathematical signals.

Here you see an alternative argument against critical care syndromes, derived from the same underlying problem but framed as a more technical critique of the critical care syndrome construct.

This argument was made before the COVID pandemic and, while still valid, the argument is transcended by the evidence derived from the pandemic that the 1990s consensus syndromes themselves and their mathematical embellishments are the problem.

For critical care science we stand at a pivotal point. Yet, are the older thought leaders and their acolytes prepared to openly introspect? Imagine standing there and suddenly realizing at age 62 that the geocentric model you have taught your students and studied for 30 years is not valid. That's why this is an opportunity for the young to escape the paradigm generated by their thought leaders when those thought leaders were young. It's your turn now. Time to make your own path.

This is the end of ARDS. Everyone knows severe COVID pneumonia cannot be combined into an RCT with all the other traditional ARDS cases. It's not a great cognitive leap to realize that ARDS, as defined by the criteria, is not a valid pathophysiologic entity that will stand the test of time as it exists today.

If the pandemic precipitates new thinking and open reconsideration of the fundamental dogma of critical care, we can expect a revolution in critical care science. If not, we can anticipate another decade of thought leaders who secretly do not really believe the syndrome dogma they teach.

If you are a young scientist or mathematician observing all of this, this is your opportunity. In the translated words of Louis Pasteur: “In the fields of observation, chance favors only the prepared mind.”
Today, after this pandemic, in critical care science, fortune favors the bold youth.


Thank you for raising this here Ilynn. My science career began anew after a 15-year hiatus when I was brought in to help run the first RCT using a urinary kidney injury biomarker as an inclusion criterion for a trial of EPO in AKI. As I learnt more I became more uncertain about AKI: syndrome or disease? Certainly an injury with multiple different pathways, and with a somewhat arbitrary definition based largely on a surrogate marker. What you write resonates with my experience in AKI research.


Thanks for your comments Dr. Pickering.

AKI (Acute Kidney Injury) is exactly the type of consensus “syndrome” which is under discussion.

“Acute kidney injury is defined as an increase in serum creatinine levels by at least 0.3 mg/dl within 48 hours or 1.5-fold the baseline, which is known or presumed to have occurred within the preceding 7 days, or—according to the urine output criterion—urine volume less than 0.5 ml/kg/hour for at least 6 hours.”

This definition has great value for clinical awareness of the potential significance of small changes in creatinine, but at the cost of generating the perception that it represents a “syndrome” when in fact it is a lab-value perturbation. So the conversion of what might in the past have been called clinically “a minor rise in creatinine” into the “syndrome” AKI is good for clinical, pragmatic reasons, but it has consequences for research.
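For illustration, the quoted criterion set can be written as a literal any-of-three threshold check. This is a sketch only; the function and parameter names are my own invention, and it is not a clinical tool:

```python
def meets_aki_criteria(scr_rise_48h_mg_dl: float,
                       scr_fold_change_7d: float,
                       urine_ml_per_kg_per_h: float,
                       oliguria_hours: float) -> bool:
    """Illustrative sketch of the quoted consensus definition of AKI.

    The "syndrome" is simply the union of three threshold crossings
    over quite different physiologic signals.
    """
    absolute_rise = scr_rise_48h_mg_dl >= 0.3        # >= 0.3 mg/dl within 48 h
    relative_rise = scr_fold_change_7d >= 1.5        # >= 1.5x baseline within 7 d
    oliguria = (urine_ml_per_kg_per_h < 0.5
                and oliguria_hours >= 6)             # < 0.5 ml/kg/h for >= 6 h
    return absolute_rise or relative_rise or oliguria
```

Note that three very different patients, each meeting a different arm of the disjunction, all become the same “object” AKI.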

The article cited shows the myriad diseases which can cause a perturbation of creatinine. For example, AKI due to sepsis is sometimes called AKI-S. One might suspect that any biomarker might behave differently in different diseases.

For RCTs, all consensus syndromes are best considered to comprise a set.

Therefore, when RCTs are applied to a consensus syndrome, the treatment or biomarker discovery is not under test for one disease with one primary pathophysiology (with comorbidities), but rather for a set of diseases with a set of primary pathophysiologies (each with comorbidities).

If the composition and quantification of the set of diseases is not determined or mathematically considered in the trial (as it generally is not with sepsis or ARDS), then the composition of the set may change with the next RCT, rendering the results non-reproducible.

RCTs deal with heterogeneity within the disease and the population, but the application of an RCT to a cognitive bucket of “mixed fruit”, wherein the mixture is unknown and fluidic, has been the fundamental cause of the failure of RCTs in acute care medical syndrome research for three decades.

I don’t know if you saw my other post but here is the article which we authored to explain the consequences to public health of this type of oversimplified thinking. ARDS was the one shining star. The one syndrome to which an applied RCT was perceived to render reliable results. The pandemic proved that was a false perception.

So a statistician could write a paper which models the application of an RCT to a set of many diseases (the diseases being hidden within the set), wherein each disease has a different mean primary outcome. The model would then show how a hidden change in the composition and quantification of the diseases in the set varies the outcome of the RCT, and the outcome of a protocol of treatment responsive to a prior RCT (as occurred in dramatic fashion with COVID-19).
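That model can be sketched with a small Monte Carlo simulation. The disease names, mixture weights, baseline mortality, and effect sizes below are all invented for illustration:

```python
import random

def simulate_trial(mixture, effects, n_per_arm=20000, seed=0,
                   baseline_mortality=0.40):
    """Simulate one two-arm RCT on a 'syndrome' that is really a hidden
    set of diseases. `mixture` maps disease -> enrolment probability;
    `effects` maps disease -> true absolute mortality reduction.
    Returns the observed control-minus-treatment mortality difference."""
    rng = random.Random(seed)
    diseases = list(mixture)
    weights = [mixture[d] for d in diseases]
    deaths = {"treat": 0, "control": 0}
    for arm in deaths:
        for _ in range(n_per_arm):
            d = rng.choices(diseases, weights)[0]
            p_death = baseline_mortality - (effects[d] if arm == "treat" else 0.0)
            deaths[arm] += rng.random() < p_death
    return (deaths["control"] - deaths["treat"]) / n_per_arm

# Identical inclusion criteria and treatment, but the hidden mixture shifts:
effects = {"disease_A": 0.10, "disease_B": 0.00}          # only disease A responds
trial_1 = simulate_trial({"disease_A": 0.7, "disease_B": 0.3}, effects, seed=1)
trial_2 = simulate_trial({"disease_A": 0.2, "disease_B": 0.8}, effects, seed=2)
```

With the mixture shifted from 70% to 20% responders, the first trial shows a clear benefit while the second appears near-null: the “same” RCT fails to reproduce even though nothing about the enrolment criteria or the treatment changed.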

Trialists need this help because this is a hard concept at the intersection between a simple cognitive and pragmatic construct with clinical value and the complexity of the statistical math it creates when the treatment of such a construct is under test.


Thanks again, this is interesting. I think you are right about AKI, but alas there are other issues that still compromise trial quality.

By the way, the definition you quote of AKI is not entirely accurate. It is identified by those things (I've challenged the validity of some of them), but that is not the definition. The definition is a rapid loss of GFR. The first consensus definition included a reduction of GFR of >25% as the minimum requirement. This occurs days before the increase in SCr. In the first consensus definition the idea was introduced that a loss of 25% of GFR was equivalent to a 1.5-fold increase in Cr. Unfortunately, the mathematics was wrong, and my first contribution in the field was to point this out (it should be 33%; GFR shot by RIFLE: errors in staging acute kidney injury - PubMed). The subsequent definitions (including the one quoted above) dropped the GFR change [because there is currently no direct way of measuring that in critical care]. This was a mistake, I believe, and it has added to the confusion. Currently, the only successful treatment for AKI is “call the nephrologist”: trial after trial has failed, partly for the reasons you outline, but also, I believe, because of the use of surrogate biomarkers of actual renal failure. SCr changes are only useful if we can guarantee constant production of SCr, which is unlikely (I have evidence that it almost stops in the case of cardiac arrest, for example), and the urine output criteria were drawn from chronic conditions and applied to the acute situation without good evidence. They are also heavily compromised by ICU fluid-loading practices.
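For readers following that correction, the arithmetic works out as follows, assuming steady-state creatinine kinetics (where serum creatinine is inversely proportional to GFR):

```python
# At steady state, SCr is proportional to 1/GFR, so a 1.5-fold rise in
# creatinine implies GFR_new = GFR_old / 1.5.
fold_rise_in_scr = 1.5
gfr_fraction_remaining = 1 / fold_rise_in_scr   # ~0.667 of the original GFR
gfr_reduction = 1 - gfr_fraction_remaining      # ~0.333, i.e. a 33% loss, not 25%
```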


One more point which might be useful.
When a threshold value or threshold perturbation of a signal defines the population “with the condition”, and the signal is responsive to the function of an organ and NOT specific to injury of the organ, the syndrome is particularly broadly defined, so the number of different diseases (or pathophysiologic disorders) falling within the set is much larger.

Here you can contrast high-sensitivity (HS) troponin with creatinine. Troponin is released by injured cardiac cells, whereas creatinine is cleared by the function of the kidney. The term “Acute Renal Dysfunction” would therefore have been a more correct name, and this dysfunction (and the associated creatinine rise) may be due to renal flow and have nothing to do with the kidney itself.

You can see how wide the set actually is. It is similar in breadth to the sets for the syndromes called “Sepsis” or “ARDS”, which are predominantly based on threshold values or, in the case of sepsis, threshold perturbations of markers of organ function.

Finding biomarkers responsive to injury or to the mediators of organ injury is of course the goal, but using organ function as a gold-standard surrogate for the study of those will introduce the set problems I noted above.

There is no easy way to solve this. When a functional surrogate is used, the study has to be structured, and the time series data sufficient, to identify the target pathophysiology and the confounding pathophysiology. This is why we use the human time series matrix model to separate these.


Thank you for those corrections. Happy to be speaking with one of the enlightened experts. Most I have met are still arguing for their failed 20th-century guessed metrics like SOFA.

Anyone desiring to learn about “syndromes” should read this amazing story of the apnea hypopnea index (AHI), and contemplate the massive number of trials performed over the last 40 years which were wasted using this fluidic and capricious “gold standard” metric.

The rise and fall of syndrome criteria is a common theme. The political impetus for the research center (and for academic careers) is much greater for the performance of RCTs using the standard surrogates than for the investigation of the surrogate itself. For this reason, failed surrogates last 30-40 years, as there is no accountability. In fact, compliance is exactly what is required to get the grants.

So much of this research is unknowingly dead on initiation. I always feel bad for the young researchers excitedly presenting posters of their research using these useless surrogates at the conferences. It's a very sad part of these often otherwise amazing academic conferences. I try to teach these young researchers to be wary of dogma, but it's generally hopeless, as they have been indoctrinated by mentorship.

Arming those who seek the truth, and have the courage to resist political expediency, with a unifying mathematical explanation for the common failures of all of these “syndrome trials” would be a great step. Publication of a simulation of an RCT applied to a set, as described above, would be a breakthrough event. I hope someone will publish such a simulation, because this level of understanding needs to enter the mainstream so that young researchers are no longer mentored into this trap.


To be fair, I have linked the response to the above “rise and fall” article by Pevernagie et al., published one year later.

These authors are undisputed thought leaders of OSA. Many have studied OSA using “the AHI” for decades. That this article describing 40 years of failed measurement dogma starts out with a quote about measurement is profoundly ironic, and it sets the stage for the rest of the reading, which many of you will find quite remarkable. These are not writings about chiropractic but about a mainstream field of science.

This is relevant for the young researchers in this forum because the absolute fluidity of “the AHI” has been known for many decades, so this is not a new discovery. Yet you can see here an example of how thought leaders have perceived their own dogma in the past. As Kuhn says: "Though they may begin to lose faith and then to consider alternatives, they do not renounce the paradigm that has led them into crisis. They do not, that is, treat anomalies as counter-instances, though in the vocabulary of philosophy of science that is what they are."

Therefore, the many variations of the AHI were always acceptable despite the counter-instances. The AHI is just evolving. There is no alternative to “the AHI”; flawed as it is, the AHI is the gold standard.

Yet even the terminology of that flawed thinking is wrong. The term “the AHI” is a misnomer, as there are untold species of AHIs, so there is NO “the AHI” to study, use for RCTs, or even speak of. The entire 47-year process was a perpetual observational trial with recurrent non-reproducible RCTs using variations of the capricious, fluidic AHI counting metric. Why? No hard math.

What is lacking in this article is an explanation of why, mathematically, 4 decades of OSA RCTs were not reproducible. It's not clear that the authors understand this (or perhaps I missed it). They note the AHI's variability but cite only the need to specify its derivation in the future. After 40+ years, surely they would have called a moratorium on the use of AHIs in RCTs if they understood the problem mathematically.
This addition-based “science” (summing various 10-second threshold events to render secondary thresholds of 5, 15, and 30) was the prototypic “Threshold Science” of the 20th century. With “threshold science”, an entire medical science is built on a set of guessed thresholds as a gold standard for RCTs.

I was ready to dismiss the entire article as propaganda until I read this quote below.

"3.2.3 Merging a test result with a clinical condition

"In the literature so far, the concepts OSA as a test result (i.e. AHI above threshold) on the one hand and OSA as a clinical disorder on the other, while being fundamentally different, have been merged inadvertently. The entanglement between the AHI and clinical manifestations of OSA has been a source of confusion in both clinical practice and clinical research. By using the term “syndrome”, this linking has been accentuated. The word “syndrome” (from Greek language: συν-δρóμος meaning “con-currence”) denotes a clinical presentation of different phenomena that together invoke a common cause on which the eventual diagnosis is established. However, this expression may introduce a bias when a cause is inferred but cannot be proven. Causal inference may be spurious when clinical manifestations are non-specific, as in OSA. In that case, a gold-standard reference test is needed to demonstrate that the presumed cause is present. If such a test is not available, the “syndrome” definition has no solid foundation and the disease construct may be nothing more than a black box. While the defective AHI is currently used to confirm the diagnosis, there is as yet no gold-standard (or “ground truth”) to define the real disease state of OSA. Therefore, the causative role of OSA in provoking symptoms and signs is hard to ascertain, even in the presence of very high AHI values."

That is introspection! That is hard after 40 years of teaching the “gold standard”. Hats off. True respect.

Finally, these thought leaders with opened minds now need courageous, self-confident statisticians to step up and bring some hard math to explain why the many AHIs cannot go forward as a gold standard (or gold standards) for RCTs.

Of course, I am always hopeful some thought leaders will join the discussion in this forum. It would be great to have their perspective, or the perspective of the statisticians who use AHIs as gold standards for defining OSA study populations in RCTs or observational trials. Please consider forwarding this link to them. One can only hope.


Interesting to ponder this phenomenon. As I would normally think about it, a “syndrome” is a curious hint that “something is going on here”. It should represent a notable constellation of findings that makes you want to ask more — and deeper — questions. As I recall, at least some part of the stimulus for the development of the DSM was to provide formal criteria for enrolling subjects in research studies.

But of course DSM is nowadays a compendium of (reified?) diagnoses, rather than conjectural targets of research. Has the evolution of “consensus syndromes” shared anything in common with that of DSM? Might DSM have even set the stage?


Excellent. This provides greater perspective on the origin and purpose of syndromes.
The DSM and ICD provide the basis for quality improvement and revenue assignment (billing codes).

With consensus “syndromes” like AKI, this assignment serves an important role by changing a blood biochemical perturbation into an “object” which can then be formalized for early recognition and consultation, as well as defining a billing event for revenue assignment.

The problem occurs when trialists misconstrue the defined “objects” (i.e., the threshold-criteria-defined acute care consensus syndromes such as Sepsis, ARDS, AKI and OSA), thinking they “rise to the statistical level” (a phrase from @R_cubed).

So this summarizes the problem.

  • Consensus Syndromes in acute care are useful for clinical awareness, quality improvement and billing.

  • Since they comprise sets of different diseases, they do not rise to the statistical level for defining a population for an RCT or for the training sets of ML and AI.

This explains why 30+ years of RCTs which use consensus syndrome threshold criteria to define the study population are non-reproducible.

Considering an action plan to deal with this problem here are five questions. Hopefully others will have more…

  1. What are the options going forward?
  2. What is the best way to determine mathematically if this assessment is correct?
  3. How can trialists be made aware of this problem?
  4. How can we simulate the outcome variabilities associated with the application of an RCT to a set of different diseases?
  5. The recognition that there is a fundamental problem with the longstanding research methodology is a politically sensitive issue; how can we mitigate the social anxiety it causes so it can enter mainstream academic discussion?

Thanks for initiating this conversation llin. On your point 5, I might suggest that the problem goes wider than the trialists; there are a lot of people in ICM who identify as educators, and we need to persuade them that there is a lot of non-science built into their syllabuses that needs to be identified and replaced. It is now 12 years since I realised why biophysical colloid osmotic pressure therapies do not do what Victorian Starling fluid physiology predicts (thanks to my friend Charles Michel), but there are some big personal and institutional reputations out there that choose to ignore change.

Thanks for the comment Dr. Woodcock. I agree, there are many dogmas in acute care for which there seems to be little potential to achieve reconsideration.

Yet there is a distinct advantage to achieving focused reform of the 20th-century “RCT of syndrome” research construct. I think we can prove mathematically that an RCT of a large set of diseases (specified as a syndrome by a consensus set of threshold criteria) cannot be expected to render reproducible results. We can show that despite the clinical phenotypic similarities which made them appear to comprise a common condition in the simple 20th-century way of perceiving syndromes, they cannot be studied as if they comprise one adverse condition.

Such a mathematical proof would revolutionize acute care medicine by setting free a new generation of scientists. It would save young careers and open young minds to the intellectual benefit and gratification found in the antidogmatic quest for truth “against the wind”, pursuing relevant mechanisms such as those you describe.

I suspect many of you have been contemplating how to solve this problem, because everybody (except the trusting, naive young researchers) knows that 20th-century syndrome science is an intellectual cul-de-sac. Yet no one knows what to do. Most are afraid even to comment publicly, because this “syndrome science” is the linchpin of modern critical care science.

First we must step back and ask: “If we do not do anything, what will stop syndrome science from proceeding as it is for another 20+ years?” The answer is: “Probably nothing.” This will have career-wasting consequences for the next generation of critical care scientists and impede progress in a field which we are entrusted to move forward as rapidly as reasonably possible for the health of the public.

So silence is not an answer. One could go to the public, because the futility of doing research using sets of guessed 20th-century thresholds as measurements would be fairly easy for them to understand, but many of the public are already doubtful of the general integrity of medical science, and that has had unfavorable consequences for vaccination rates, among other things.

After we wrote a widely cited review of the pattern of unexpected hospital death to kick the education process off (Patterns of unexpected in-hospital deaths: a root cause analysis - PubMed), I built an education area at our research center and started an archive of relational time patterns, planning to teach nurses and young physicians the relational time series patterns and to contrast the actual time patterns with the standard thresholds.

However, interest lagged as SIRS was replaced by SOFA as the measurement for sepsis in 2016, another 20th-century threshold set thought to be much more robust for clinical use and for research. Furthermore, simple threshold summation scores from the 1990s, like the Modified Early Warning Score (MEWS), were embedded into the electronic medical record for clinical decision making. This “shoring up” of the 20th-century threshold sets, rather than a move toward teaching relational time patterns, was unanticipated, and I found little interest in the real patterns. Then the pandemic preempted further education efforts.

The problem is, of course, that clinical research generally originates from clinicians. They study using the metrics they use. It's an endless cycle of failure if the metrics they use are not science-based.

So I have thought of funding an education program which teaches the relational time patterns and contrasts them with simple 20th-century threshold research, threshold thinking, and threshold-based protocolized decision making.

These are the steps I believe are necessary to prevent continued stagnation. If anyone desires to help, please offer alternative ideas in this thread or indicate an interest in participating in the development of the relational time pattern education curriculum.

If, on the other hand, you have a defense of 20th-century syndrome science, and the courage to promulgate it here, your comments, criticisms and/or debate are welcome in this thread, because all we ever see is silence.


This link shows a very courageous scientist exposing the Syndrome Science Model in his field (like the courageous scientists in Germany in “The Rise and Fall of the AHI” article regarding the sleep apnea syndrome science model). You don't get favored for grants by writing unpopular truths. Therefore, it takes courage, perhaps to the point of losing career opportunities in the US.

Yet these expert authors, perhaps siloed in their respective fields, don't seem to realize (or perhaps they do) that the entire Syndrome Science Model is actually a common fundamental pathological science, in Langmuir's sense, flowing from a single common 20th-century cognitive error. This error held that different diseases with similar overt clinical presentations could be combined into a mathematical SET and then studied with RCTs as if the mathematical SET were one disease.

Unfortunately, in the heady days of late 20th-century medical science, there was also a view that randomization solves for substantially all hidden heterogeneity.

So a single, fundamental 20th-century cognitive error (like the single error of the geocentric model) produced the pathological “Syndrome Science Model” which, amazingly, 40 years later, dominates critical care science today. Then again, the geocentric model lasted 1500 years.

Someday this will be considered a sentinel event (like the meat processing plants of Upton Sinclair's The Jungle), providing a window into the social forces which can disrupt modern medicine on a massive scale, far beyond wrong theories like the hydrogen ion theory of peptic ulcer disease.

Let us promulgate the lesson and help bring an end to the waste of this pathological science…

Thanks Lawrence- interesting article. A couple of comments/questions on the text:

“One clear advantage of lumping together different diseases into a single entity was that large, randomized trials of treatment became feasible. Indeed, since the number of trial patients required to show significant clinical differences of 5–6% for key variables might require more than two thousand patients, the need to adopt definitions broad enough to allow enrolment of a sufficient number of patients became evident . We believe that this has been a main driver for modifications to the ARDS definition that occurred over subsequent decades…Such definitional simplification facilitated study enrollment. For many clinicians, each definitional refinement solidified ‘ARDS’ as a descriptor of a distinct “disease-like” entity.”

Question: If it is the fear of slow RCT recruitment that has been driving potentially inappropriate “lumping” of subjects with wildly variable primary diseases under “syndrome” umbrellas, then hopefully researchers are looking at alternate ways to recruit patients with more homogeneous pathology (e.g., international, ongoing/adaptive platform trials)?

“COVID-19 has clearly taught us that this “atypical” form of ARDS requires different treatment than “typical” ARDS….This progressive migration towards “personalized” medicine implies the loss of the primary therapeutic advantage that initially led to lumping that justifies a uniform approach.”

Comment: Maybe the word “personalized” isn't the right descriptor for the desired goal here(?) For many (most?) readers, the term “personalized” conjures some futuristic approach to therapy that is tailored to each individual patient. I'm not talking about reasonable qualitative modifications to treatment (e.g., adjusting drug dose based on a patient's renal function). Rather, I'm talking about the more pie-in-the-sky types of “personalization” that some researchers seem tempted to aim for (e.g., let's try to identify 49 different treatments for the 49 different “classes” of acute pancreatitis…). In fact, the authors aren't proposing a truly “personalized” approach here, but are simply making a much more reasonable, achievable recommendation, namely, that we should probably restrict application of RCT methods to contexts in which they have traditionally borne fruit, i.e., the study of treatments directed at discrete/well-defined diseases/clinical events (e.g., acute occlusion MI; strep pharyngitis; pneumococcal pneumonia).

“It seems more logical simply to label the diseases as they are: for example, pneumococcal respiratory distress, herpes respiratory distress, pancreatitis respiratory distress, etc. This de-lumping’ approach would push our thinking towards truly personalized medicine, realizing that not only the etiological treatment but also the appropriate respiratory approach might well be different in different situations and at different stages of the disease process.”

Comment: Same comment as above re the term “personalized,” but otherwise :clap: :clap:


Yes, I believe a recent driver (over the past two decades) of the syndrome science model has been to increase the size of the SET under test: the inclusion of viral pneumonia in ARDS, for example. It's a strange way to think, however. There is no reason to believe that a common pathophysiology exists for pneumonia and “shock lung” after trauma, but it increased n substantially.

In the 1990s, larger RCTs became THE goal. That's not surprising. A trialist was once perceived as the pinnacle of medical scientists. That has faded, because most were operating under the “syndrome science model”, so the adverse consequences of making RCTs themselves the goal in critical care were increasingly perceived. False benefit often initially rides high, but it is always short-lived.
I think you are right about the term “personalized”. I think the best meaning here is as you say: EBM is provided for those things subject to that type of study (DVT, stress ulceration, hypertension, TTP, COVID pneumonia; the list goes on and on).

However, we cannot create false sets of diseases with the goal of achieving “false EBM”, fooling ourselves. That's what was done with ARDS before COVID exposed it for the false EBM it was. It was false all along, and many outliers before COVID (of insufficient number to be recognized) paid the price for easy one-size-fits-all protocols.

However, we as physicians know pathophysiology and physiology and can respond to outlier cases with physiology-based treatment (as surgeons respond to outlier anatomy). I think that's what he means by “personalized medicine”.

However, in the era of protocols, many were never taught the skills of pathophysiology-based medicine. It also requires the experts to stay at the bedside for more than just a few words on rounds. It's tougher medicine, but we cannot fool ourselves into thinking we have a simple replacement in the form of non-reproducible, RCT-guided care using the capricious, failed syndrome science model.

However, the next step is not to give up on RCTs in critical care because n is low, but rather to figure out ways to do them multicenter or by some other means not yet invented. Only when a group of scientists accepts that their present methodology has failed (and we are not there yet) will the quest for a new and innovative methodology emerge in earnest.



Here is an amazingly clairvoyant article from Dr. Fackler, written in 2017, titled "The Syndrome Has Been a Good Friend; Now Say Goodbye—Quickly".

Thanks to @davidcnorrismd for the above link.

Here are two other relevant articles from Dr. Fackler et al. The editorial below is also quite relevant. Note the discussion of the “Learning Healthcare System” (LHCS), proposed in 2006, which did not develop and was instead replaced by 15+ years of NIH-supported, repetitive, non-reproducible RCTs of syndromes (using Syndrome Science and 20th-century guessed thresholds as the measurements defining the treatment population).

Finally, this is a discussion of the near futility of processing massive numbers of data fragments to render actionable outputs.

@ESMD suggested in a previous post that it should be possible to compare SOFA (Sepsis-3) with the diagnosis of sepsis. This is difficult because, in the narrow thinking of “Syndrome Science”, SOFA in the form of Sepsis-3 simply is sepsis syndrome, just as an AHI of 5 or greater is Sleep Apnea Syndrome in that field. Recall I suggested this would be very difficult because of the cyclic logic and would require expert determination of the occurrence of sepsis in each case.

Here a group from Germany does exactly that. Not surprisingly, they found that the guessed SIRS and SOFA thresholds had poor sensitivity and delayed detection.
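Their approach, scoring a guessed threshold rule against expert case-by-case adjudication, can be illustrated in a few lines. The toy data below are invented purely for the sketch (they are not from the German study); the point is only the mechanics of computing sensitivity and specificity against an expert reference standard rather than against the criteria themselves.

```python
# Hypothetical illustration: score a guessed threshold rule against
# expert case-by-case adjudication. All patient rows are invented.

patients = [
    # (meets_threshold_rule, expert_says_sepsis)
    (True,  True),
    (False, True),   # missed by the threshold rule
    (True,  False),  # flagged, but not sepsis per the experts
    (False, False),
    (False, True),   # another miss
    (True,  True),
]

tp = sum(1 for flag, truth in patients if flag and truth)
fn = sum(1 for flag, truth in patients if not flag and truth)
fp = sum(1 for flag, truth in patients if flag and not truth)
tn = sum(1 for flag, truth in patients if not flag and not truth)

sensitivity = tp / (tp + fn)   # fraction of expert-confirmed cases caught
specificity = tn / (tn + fp)   # fraction of non-cases correctly cleared
print(sensitivity, specificity)
```

The essential move is that `expert_says_sepsis` is determined independently of the threshold rule; without that independent reference, “validating” the criteria against a population the criteria themselves defined is the cyclic logic described above.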

This paper is remarkable in its scope and in the effort applied to deal with the cyclical logic of syndrome science. The chasing of these guessed metrics, the endless discussion of their correlation with mortality (which is never surprising), and the comparison of one guess with another are ubiquitous in the journals…

Every nascent scientist budding forth in Syndrome Science is confronted with these guessed measurements, but they do not know the measurements were guessed in the 1990s without data. Confused by the differences among them, they choose to study and compare them. But since all are criteria for the same thing, “Sepsis Syndrome”, they can only try to validate them by showing they correlate with mortality, which of course almost every elevated lab value in the ICU does.

I have always wondered who funds this decadal dance of the young, endlessly chasing 1990s guesses around the journals, and who reads the resulting articles on these trivial pursuits of pathological science.

Finally, this “dance of the pathological science” is summarily terminated by this German group, who decided in 2022 to compare the results of the 1990s guesses, SOFA and SIRS, with expert assessment.

Hats off. Good, courageous work. Our families may one day contract one of the diseases within the scope of “sepsis syndrome”. Let’s stop the naïve dance and get going with real science.

It is fitting that this ROC set was published in 2022, the 30th anniversary of the beginning of Syndrome Science for sepsis.

Now even the most unimpressed have to appreciate these ROCs. If you have been reading my analysis and review of the history and failings of fake measurements and Syndrome Science, you will not be surprised.

For decades this guessed “SIRS” threshold set served as both the standard and a fake measurement defining the “disease” or “syndrome” under test in RCT. Of course, no positive RCT of this kind was reproducible.

“Ignorance is not a simple lack of knowledge but an active aversion to knowledge, the refusal to know, issuing from cowardice, pride, or laziness of the mind.” (Karl Popper)

In the 21st-century shadow of consensus, it is cowardice, pride, and the naivete of the indoctrinated young.


I recently read this:

and then contemplated my quote above from Popper. I wish I had read this a long time ago, as I am guilty of the transgressions cited (e.g., in the section “The Virtuous Bully”)…

In my defense, I have been trying to expose “Syndrome Science” as a pathological science of Langmuir ever since I discovered, in 1998, that the gold-standard measurement for the sleep apnea syndrome (the AHI) was a guessed set of thresholds. Nothing I have done has worked, and for this reason I came here to seek help from statisticians, who understand better than trialists the need to ensure mathematically valid and reproducible measurements for use in RCT.

My goal, however, is change, not the promulgation of ineffective “virtuous” antidogmatic thought.

I should have learned the point of the article. Arguments which shame or otherwise denigrate thought leaders are counterproductive. It is better to remain gracious.


In the linked preprint below you can see progress: significant cracks are developing in 40 years of ARDS Syndrome Science dogma, as evidenced by the nascent species designations.

"We present a first comprehensive molecular characterization of differences between two ARDS etiologies – COVID-19 and bacterial sepsis."

Note the separation of the Acute Respiratory Distress Syndrome (ARDS) into separate diseases (sepsis ARDS vs. COVID ARDS). This is real progress, since in the past all causes of “ARDS” were combined (if they met the threshold-set criteria) into a single “disease” or “syndrome” for RCT…

Here you see the problem. For nearly four decades, RCT have been applied to “syndromes” (sleep apnea, sepsis, ARDS). Each “syndrome” is actually a set of different diseases which “look” similar to the others in the set, and they are therefore studied together, just as they were in the 20th century when these syndromes were guessed by well-meaning persons of the past.

This problem is not solved by randomization. The pandemic has exposed this, and I am trying to facilitate the end of wasteful, failed 20th-century Syndrome Science as the standard RCT-based research method in critical care.

The important point is that a common, simple error made decades ago in the research methodology explains the corresponding decades of failed RCT reproducibility. Again, the common error was to combine different diseases which “look” similar into a set, and to study that set as if it were one disease in an RCT.
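The consequence of that pooling can be sketched numerically. Below is a minimal, hypothetical simulation (all mortality rates and case mixes are invented for illustration, not taken from any real trial): a treatment halves mortality in one disease within the “syndrome” set and does nothing in another, so the apparent effect in an RCT depends entirely on the case mix each trial happens to enroll.

```python
import random

random.seed(1)

def run_trial(n, p_disease_a):
    """One simulated RCT on a mixed 'syndrome' cohort.

    Hypothetical rates: the treatment cuts mortality in disease A from
    30% to 15%, while disease B (which meets the same 'syndrome'
    criteria) sits at 30% mortality regardless of treatment. Returns
    the observed absolute risk reduction (control minus treated).
    """
    deaths = {"treated": 0, "control": 0}
    sizes = {"treated": 0, "control": 0}
    for _ in range(n):
        arm = random.choice(["treated", "control"])  # randomization
        sizes[arm] += 1
        if random.random() < p_disease_a:            # disease A: responsive
            p_death = 0.15 if arm == "treated" else 0.30
        else:                                        # disease B: unresponsive
            p_death = 0.30
        if random.random() < p_death:
            deaths[arm] += 1
    return (deaths["control"] / sizes["control"]
            - deaths["treated"] / sizes["treated"])

# Two trials of the same 'syndrome', differing only in case mix:
arr_high = run_trial(20000, p_disease_a=0.8)  # cohort mostly disease A
arr_low = run_trial(20000, p_disease_a=0.2)   # cohort mostly disease B
print(round(arr_high, 3), round(arr_low, 3))  # effect shrinks with the mix
```

Both trials enroll “the syndrome” by identical criteria, yet one looks strongly positive and the other near null. Randomization balances the arms within each trial but cannot repair the heterogeneity of the set itself, which is the reproducibility failure described above.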

The fact that each of these syndromes has the same simple flaw is quite amazing, and the discovery of this common flaw, and the promulgation of its nature, is pivotal for the future of critical care science.

This is a form of the classic pathological science of Langmuir.

Here, however, the mistake (causing the common flaw) was a cognitive error made by the expert collective of the time. For this reason the flaw has not been perceived or detected by the thought leaders in the field, who have repeatedly performed RCT using consensus measurements (themselves defined as a function of the cognitive error) to define the disease population they called a “syndrome”.

Finding this error is very exciting, because without that discovery there is no explanation for the reproducibility problem associated with the study of these “syndromes”. RCT have been reproducible in many other areas of medicine, which is why “Syndrome Science” stands out as an outlier dominated by RCT failure.

I am planning to write an article reviewing the history of Syndrome Science and explaining the common error, how the common error was propagated and how the pandemic exposed the common error. If anyone would like to participate in the writing please DM me.