Datamethods Colleagues -
I am an anesthesiologist, and our largest professional society recently recommended that healthcare workers trained to place breathing tubes consider wearing a specialized respirator mask (the “N95”), which is in increasingly short supply, during the procedure.
This recommendation was made in the context of limited access to testing.
The hospital where I work has done an amazing job implementing rapid access to testing, such that almost all patients having scheduled (non-elective, but necessary) surgeries have a negative test prior to coming to the OR. All patients are also screened for symptoms before coming to the OR, so the population in question is asymptomatic patients with a negative test (symptoms override tests, in other words).
Still, concerns about the potential for false negatives have motivated some providers to advocate for universal airborne precautions (an N95 or better) during intubation, because intubation is known to generate aerosols.
The sensitivity of our test is unknown at present, but the specificity is very high. The population prevalence is also unknown, but in our state it is estimated at < 1% (for now).
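For reference, the quantity at issue here (the probability that a patient who tests negative is truly uninfected, i.e. the negative predictive value, NPV) depends on these three quantities via the classical formula:

$$\text{NPV} = \frac{\text{specificity} \times (1 - \text{prevalence})}{\text{specificity} \times (1 - \text{prevalence}) + (1 - \text{sensitivity}) \times \text{prevalence}}$$

With prevalence below 1%, the NPV remains high even when sensitivity is modest.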
I created the following visualization.
The simulation begins with a classical calculator for the negative predictive value of the test, but I’m curious about your opinions of how I attempted to model uncertainty in the “Full Uncertainty” section. There, I model each input probability as a beta distribution whose parameters the user can set (and visualize), and then draw from those distributions to calculate a range of potential values for the parameter of interest (NPV).
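To make that concrete, here is a rough Python sketch of the logic behind the “Full Uncertainty” calculation. The beta parameters below are placeholders chosen for illustration, not the defaults used in the actual tool:

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10_000

# Placeholder beta parameters -- in the tool these are user-adjustable,
# not the specific values shown here.
sens = rng.beta(7, 3, n_draws)    # sensitivity: very uncertain, centered ~0.70
spec = rng.beta(99, 1, n_draws)   # specificity: assumed very high, centered ~0.99
prev = rng.beta(1, 99, n_draws)   # prevalence: centered ~1%

# Apply the classical NPV formula to each joint draw
npv = (spec * (1 - prev)) / (spec * (1 - prev) + (1 - sens) * prev)

# Summarize the induced uncertainty in NPV
print("median NPV:", np.median(npv))
print("95% interval:", np.quantile(npv, [0.025, 0.975]))
```

Each simulated NPV combines one draw from each distribution, so the spread of the resulting histogram (or interval) reflects the combined uncertainty in sensitivity, specificity, and prevalence.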
Do you think this is a reasonable way of displaying uncertainty in the calculation to the user? Do you have other thoughts about ways to improve this tool?