# Multiple observations per patient, multiple binary variables, and a binary outcome

Hi!

First of all, great new platform and I’m looking forward to seeing it take off!

I have a common situation with Computed Tomography (CT) images in surgical patients, where different observers (radiologists) make different binary interpretations of features of the images. As an example, with data formatted for R below, we have patients with appendicitis where postoperative pathology shows tumors of the appendix, and matched controls with appendicitis but no tumor in the pathology report. Assume for the sake of the argument that the matching was reasonable, and that we can remove highly collinear variables intelligently based on their medical meaning (withheld here). For each patient we let several radiologists (two in the example, but there could be more) interpret 12 dichotomous features of the images. I’d like to bracket problems with dichotomizing, as most of these features, including the pathology report of malignant findings, are difficult to capture in other forms.

My goal is to build a regression model to find variables that reliably predict tumors on CT (to look further into these variables), and ultimately to predict tumors with the model. I haven’t found a principled way to build this model while taking multiple observers of the same images into account. Our staff statistician recommends building separate logistic regression models per observer and averaging afterwards, but this seems entirely ad hoc to me, and it discards important information about where radiologists agree or disagree.

The problem of having no gold-standard interpretation seems common, but is rarely addressed directly in the surgical literature. Any helpful ideas?

``````r
data.frame(
patid = c(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L,
15L, 16L, 17L, 18L, 19L, 20L, 21L, 22L, 23L, 24L, 25L, 26L,
27L, 28L, 29L, 30L, 31L, 32L, 33L, 34L, 35L, 36L, 37L, 38L, 39L,
40L, 41L, 42L, 43L, 44L, 45L, 46L, 47L, 48L, 49L, 50L, 51L, 52L,
53L, 54L, 55L, 56L, 57L, 58L, 59L, 60L, 61L, 62L, 63L, 64L, 65L,
66L, 67L, 68L, 69L, 70L, 71L, 72L, 73L, 74L, 75L, 76L, 77L,
78L, 79L, 80L, 81L, 82L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L,
11L, 12L, 13L, 14L, 15L, 16L, 17L, 18L, 19L, 20L, 21L, 22L, 23L,
24L, 25L, 26L, 27L, 28L, 29L, 30L, 31L, 32L, 33L, 34L, 35L, 36L,
37L, 38L, 39L, 40L, 41L, 42L, 43L, 44L, 45L, 46L, 47L, 48L,
49L, 50L, 51L, 52L, 53L, 54L, 55L, 56L, 57L, 58L, 59L, 60L, 61L,
62L, 63L, 64L, 65L, 66L, 67L, 68L, 69L, 70L, 71L, 72L, 73L, 74L,
75L, 76L, 77L, 78L, 79L, 80L, 81L, 82L),
var1 = c(0L, 1L, 0L, 1L, 1L, 1L, 1L, 0L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 0L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 0L, 1L,
1L, 1L, 1L, 1L, 0L, 1L, 0L, 1L, 1L, 1L, 1L, 0L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 0L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
0L, 1L, 1L, 1L, 1L, 1L),
var2 = c(0L, 0L, 1L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 1L, 1L, 1L, 0L, 0L, 1L, 1L, 1L, 1L, 1L, 0L, 0L, 0L,
0L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 1L, 0L,
0L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 1L, 1L, 1L, 0L, 0L,
1L, 0L, 0L, 0L, 0L, 1L, 1L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 1L, 1L, 1L, 0L, 0L, 1L, 1L, 1L, 1L, 1L, 0L,
0L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L,
1L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 1L, 1L, 1L,
0L, 0L, 1L, 0L, 0L, 0L, 0L, 1L, 1L, 0L, 0L, 0L, 1L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L),
var3 = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
0L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 0L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 0L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 0L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L),
var4 = c(1L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 1L, 1L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 1L, 1L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 1L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 1L, 0L,
0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 1L, 1L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 1L, 1L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 1L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L,
1L, 0L, 0L, 0L, 1L, 0L),
var5 = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L,
1L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L,
0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L,
0L, 0L, 0L, 0L, 0L, 0L),
var6 = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L),
var7 = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L,
1L, 1L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L,
0L, 0L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L,
0L, 0L, 0L, 0L, 0L, 0L),
var8 = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 1L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 1L,
0L, 0L, 0L, 0L, 0L, 0L),
var9 = c(0L, 0L, 1L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L,
1L, 0L, 0L, 0L, 0L, 0L, 1L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L,
1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L,
0L, 0L, 1L, 0L, 0L, 0L),
var10 = c(1L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L),
var11 = c(1L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 1L, 0L,
1L, 0L, 1L, 1L, 0L, 1L, 0L, 1L, 0L, 0L, 1L, 1L, 1L, 0L, 0L,
0L, 0L, 1L, 1L, 0L, 0L, 0L, 1L, 0L, 1L, 1L, 0L, 1L, 0L, 1L, 0L,
0L, 0L, 0L, 1L, 0L, 0L, 1L, 0L, 0L, 1L, 1L, 0L, 0L, 0L, 0L, 0L,
0L, 1L, 0L, 0L, 0L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 1L,
0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 0L,
0L, 1L, 0L, 1L, 0L, 1L, 1L, 0L, 1L, 0L, 1L, 0L, 0L, 1L, 1L, 1L,
0L, 0L, 0L, 0L, 1L, 1L, 0L, 0L, 0L, 1L, 0L, 1L, 1L, 0L, 1L, 0L,
1L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 1L, 0L, 0L, 1L, 1L, 0L, 0L, 0L,
0L, 0L, 0L, 1L, 0L, 0L, 0L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 1L, 0L,
1L, 1L, 0L, 0L, 1L, 0L),
var12 = c(1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 1L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 1L,
1L, 1L, 1L, 0L, 1L, 1L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 1L, 1L, 0L,
1L, 1L, 0L, 1L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L,
0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 1L, 0L, 1L, 0L, 0L, 0L,
0L, 1L, 1L, 1L, 1L, 0L, 1L, 1L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 1L,
1L, 0L, 1L, 1L, 0L, 1L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 0L),
outcome = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 1L, 1L, 0L,
1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 1L, 0L, 1L, 1L, 0L, 1L, 0L, 0L, 1L, 1L, 0L, 1L,
1L, 0L, 1L, 0L, 1L, 0L, 0L, 1L, 0L, 0L, 1L, 1L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 1L, 1L, 1L,
0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L,
1L, 1L, 0L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 1L, 0L, 1L, 0L, 0L, 1L, 1L,
0L, 1L, 1L, 0L, 1L, 0L, 1L, 0L, 0L, 1L, 0L, 0L, 1L, 1L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 1L,
1L, 1L, 0L, 0L, 1L, 0L),
examiner = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L)
)
``````

Why not use the “number of observers predicting a tumour” and model this count as a binomial response?

I suggest a different structure for the data.frame. Much of the information seems to be (unnecessarily) replicated for each observer/examiner. I would use a single row per patient and one column per observer/examiner.

```r
# Changing the given data.frame (in "df") so that there is one row per patient
# and one column per examiner
L <- split(df, df$patid)
L <- lapply(L, function(v) {
id <- v[1,"patid"]
vars <- unlist(v[1, grep("var",names(v))])
outcomes <- setNames(v[,"outcome"],paste0("examiner",v[,"examiner"]))
c(patid = id, vars, outcomes)
})
df <- as.data.frame(t(as.data.frame(L)))

# this is the format in which the data.frame should be given:
# patid var1 var2 var3 var4 var5 var6 var7 var8 var9 var10 var11 var12 examiner1 examiner0
# X1     1    0    0    1    1    0    0    0    0    0     1     1     1         0         0
# X2     2    1    0    1    0    0    0    0    0    0     0     0     0         0         0
# X3     3    0    1    1    0    0    0    0    0    1     0     0     0         0         0
# X4     4    1    0    1    0    0    0    0    0    1     0     0     0         0         0
# X5     5    1    0    1    0    0    0    0    0    1     0     0     0         0         0
# X6     6    1    0    1    1    0    0    0    0    1     1     1     0         0         0

# Now the analysis is simple:
# get the columns with the outcomes data (select by column names)
outcomes <- df[ , grepl("examiner[0-9]+",names(df))]
# get the number of examiners and calculate the number of positive outcomes predicted by the examiners
nExaminers <- ncol(outcomes)
nPositives <- rowSums(outcomes)
# fit a binomial model
model <- glm(cbind(nPositives, nExaminers-nPositives) ~ ., data=df[ , grep("var",names(df))], family="binomial")
summary(model)
#
# Call:
#   glm(formula = cbind(nPositives, nExaminers - nPositives) ~ .,
#       family = "binomial", data = df[, grep("var", names(df))])
#
# Deviance Residuals:
#   Min       1Q   Median       3Q      Max
# -2.0297  -0.8785  -0.7257   0.0003   4.2478
#
# Coefficients:
#   Estimate Std. Error z value Pr(>|z|)
# (Intercept)    0.94153    1.58432   0.594   0.5523
# var1          -0.95964    0.97573  -0.984   0.3254
# var2          -1.30574    0.57631  -2.266   0.0235 *
# var3          -1.94284    1.29046  -1.506   0.1322
# var4           1.20718    0.76963   1.569   0.1168
# var5          16.41538 1427.14898   0.012   0.9908
# var6          19.44978 3309.73890   0.006   0.9953
# var7           2.40053    1.13785   2.110   0.0349 *
# var8           2.51441    1.01503   2.477   0.0132 *
# var9           0.01301    0.71893   0.018   0.9856
# var10         -2.44033    1.00774  -2.422   0.0155 *
# var11          1.34195    0.52990   2.532   0.0113 *
# var12          0.41356    0.47086   0.878   0.3798
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# (Dispersion parameter for binomial family taken to be 1)
#
# Null deviance: 198.29  on 81  degrees of freedom
# Residual deviance: 133.69  on 69  degrees of freedom
# AIC: 159.69
#
# Number of Fisher Scoring iterations: 16
```

Hi Jochen!
While I greatly appreciate your comment and the effort to reshape the dataset (and it may well be that my layout is not optimal), I believe there is some nuance I might not have explained. The outcome (tumor/no tumor) is not a prediction made by the observers, but rather the gold-standard pathology result. What each observer does is provide a yes/no answer to each variable (var1–var12; note that your ‘var1’ is no longer the original var1, for instance, since it is not 0/1 after the reshape). What I want to model is the connection between their answers to the respective variables and the outcome (tumor/no tumor).

Each row corresponds to the response from one radiologist looking at one patient. There are two radiologists (hence two rows per patient).

I hope this clarifies my problem. I am also interested in the generalization of this problem to more observations per patient (more radiologists).

As a further emphasis on generalizability, I’d like to avoid solutions where variables are transformed into ordinals, e.g., a derived agreement variable coded as: agreement on 0 = 0, disagreement = 1, agreement on 1 = 2. This would lose information on the observers.
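To make the disagreement information concrete: it can be read directly off the long format without any recoding. A minimal sketch, using a hypothetical 3-patient toy set standing in for the real data.frame:

```r
# Toy illustration (3 patients, 2 readers, 2 variables); with the real
# data, replace `toy` by the posted long-format data.frame.
toy <- data.frame(
  patid    = c(1, 2, 3, 1, 2, 3),
  var1     = c(0, 1, 1, 0, 1, 0),
  var2     = c(1, 1, 0, 1, 0, 0),
  examiner = c(1, 1, 1, 0, 0, 0)
)
d1 <- toy[toy$examiner == 1, ]
d0 <- toy[toy$examiner == 0, ]
d0 <- d0[match(d1$patid, d0$patid), ]           # align patients across readers
vars <- grep("^var", names(toy), value = TRUE)
disagreement <- colMeans(d1[vars] != d0[vars])  # per-variable disagreement rate
disagreement
```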

I see. So for each variable (var1–var12) and each examiner you could determine the number of correct classifications. Knowing the number of patients, you can fit a binomial mixed model with only an intercept and examiner as a random effect. It should also be possible to stack the counts across variables and include variable as a fixed factor in the model.

This is my second attempt:

```r
# data is assumed in the data.frame df

# getting column indices in df that refer to the variables and to the outcome
col_variables <- grep("var", names(df))
col_outcome <- grep("outcome", names(df))
# substituting the classification by TRUE or FALSE (correct or wrong classification)
df[,col_variables] <- df[,col_variables] == df[,col_outcome]

# split the boolean results by examiner and get the numbers of correct classification per variable
L <- split(df[, col_variables], df$examiner)
N <- sapply(L, colSums)
N # rows are variables, cols are examiners

# We need the total number of patients for the binomial model
numPatients <- nrow(L[[1]]) # same number of patients for each examiner
numPatients # 82

## analysis for each variable separately:

variable <- "var1" # choose any of the 12 variables
response <- cbind(correct=N[variable, ], wrong=numPatients-N[variable, ])
response # rows are examiners, cols are num correct and num wrong
# note: both examiners classified 23 patients correctly by var1

m <- glm(response ~ 1, family = "binomial") # intercept-only model
predict(m, type = "response") # both examiners classified 28% correctly given var1
1-1/(1+exp(coef(m))) # inverse logit of the coefficients gives the same proportions
summary(m) # intercept is significantly different from 0

## analysis for all variables together:

response <- cbind(correct=c(N), wrong=numPatients-c(N))
variables <- rep(names(df)[col_variables], times = ncol(N))
variables <- factor(variables, levels = names(df)[col_variables])
m <- glm(response~0+variables, family="binomial")
1-1/(1+exp(coef(m))) # var1 and var3 are worst, var5 and var7 are best

# Statistics should consider correlation within examiner
# I would use examiner as random factor:
examiner <- factor(rep(names(L), each = nrow(N)), levels=names(L))
lme4::glmer(response~0+variables + (1|examiner), family="binomial")
# --> does not converge; with only two examiners there is essentially no
# between-examiner variance to estimate
# with just 2 examiners, one can model a fixed effect as well:
m <- glm(response~0+variables+examiner, family="binomial")
summary(m) # no difference between the examiners; estimates for the variables as before, but with slightly higher p-values
```

I don’t understand how you get correct classifications internally from the data, as there is no gold standard to determine which variable (var1–var12) was true/false for any given patient. To clarify, say var1 is the dichotomous answer to “is there free fluid in the abdomen”. Variability in the answer is akin to measurement error or misclassification, and is unavoidable.

I suppose the Bayesian way to do this would be to assign a prior to each variable and patient, and have the data update the distributions. Prior information on sensitivity and specificity for each variable is available within some margin of error, and differs considerably depending on what is measured.

Again, really appreciating the feedback on this class of problems!

I’m a little confused by the question, so let me restate it to see if I’ve got it right. You have a gold-standard outcome, and repeated observations of a number of binary variables per individual, with a different radiologist having performed each observation. You wish to determine which of the 12 variables are the best predictors of the outcome, allowing for 1) repeated measures and 2) different observer effects?

If that is the case, perhaps what you want is a cross-classified hierarchical model? A quick explanation is in the answer here: https://stats.stackexchange.com/questions/228800/crossed-vs-nested-random-effects-how-do-they-differ-and-how-are-they-specified

Example using your toy data stuffed into a data.frame called df1:

``````r
library(lme4)

variables <- c("var1", "var2", "var3", "var4", "var5", "var6", "var7", "var8", "var9", "var10", "var11", "var12")
form1 <- paste("outcome ~ ", paste(variables, collapse=" + "), " + (1|patid) + (1|examiner)")

m1 <- glmer(form1, data= df1, family = "binomial")
summary(m1)

> summary(m1)
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
Family: binomial  ( logit )
Formula: outcome ~ var1 + var2 + var3 + var4 + var5 + var6 + var7 + var8 +
var9 + var10 + var11 + var12 + (1 | patid) + (1 | examiner)
Data: df1

AIC      BIC   logLik deviance df.resid
95.9    142.4    -33.0     65.9      149

Scaled residuals:
Min       1Q   Median       3Q      Max
-0.26999 -0.03331 -0.01953  0.00000  0.41487

Random effects:
Groups   Name        Variance  Std.Dev.
patid    (Intercept) 56.778646 7.53516
examiner (Intercept)  0.001723 0.04151
Number of obs: 164, groups:  patid, 82; examiner, 2

Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept)  5.994e+00  1.208e+01   0.496    0.620
var1        -4.071e+00  9.111e+00  -0.447    0.655
var2        -5.173e+00  4.832e+00  -1.070    0.284
var3        -9.338e+00  8.711e+00  -1.072    0.284
var4         6.822e+00  5.482e+00   1.245    0.213
var5         8.838e+01  2.373e+07   0.000    1.000
var6         1.599e+01  7.507e+01   0.213    0.831
var7         7.843e+00  8.457e+00   0.927    0.354
var8         1.251e+01  7.515e+00   1.665    0.096 .
var9         1.799e-01  6.240e+00   0.029    0.977
var10       -9.152e+00  7.202e+00  -1.271    0.204
var11        2.878e+00  3.874e+00   0.743    0.458
var12        7.353e-01  3.762e+00   0.196    0.845
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Correlation matrix not shown by default, as p = 13 > 12.
Use print(x, correlation=TRUE)  or
vcov(x)	 if you need it

convergence code: 0
unable to evaluate scaled gradient
Model failed to converge: degenerate  Hessian with 1 negative eigenvalues

Warning messages:
1: In vcov.merMod(object, use.hessian = use.hessian) :
variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
2: In vcov.merMod(object, correlation = correlation, sigm = sig) :
variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
``````

Again, as Jochen observed with his models, this model won’t converge on your toy dataset because there are only 2 examiners (or, put differently, there is not enough data to support a model of this complexity), but it might converge on your real dataset. Bayesian approaches might also be well suited to this problem, as you suggest.
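As an aside of mine (not tested on your data): before giving up on glmer entirely, it can be worth switching the optimizer and raising the iteration cap, which sometimes rescues borderline fits:

```r
library(lme4)

# A control object switching glmer to the bobyqa optimizer with a raised
# iteration cap; pass it to the glmer() call above (form1/df1 as defined
# there). With only 2 examiners the examiner variance may still collapse
# to zero, so this is no guarantee.
ctrl <- glmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 2e5))
# m1b <- glmer(form1, data = df1, family = binomial, control = ctrl)
```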

Caveat: my practical experience with such models is limited, but a quick Google search finds some didactic papers that might be helpful:


Yes, that is a fair statement of the problem! I will look at the references and try it out. Thank you!

Pointers on how to code this model as a Bayesian regression are welcome as well!
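To seed that discussion, my own tentative sketch uses the brms package (the prior here is a placeholder weakly informative default of mine, not a recommendation; per-variable sensitivity/specificity knowledge would go into per-coefficient priors, or a latent-variable formulation):

```r
# Sketch: Bayesian version of the cross-classified logistic model,
# assuming the long-format data.frame `df1` from the glmer example.
form <- outcome ~ var1 + var2 + var3 + var4 + var5 + var6 + var7 +
  var8 + var9 + var10 + var11 + var12 + (1 | patid) + (1 | examiner)

# Uncomment to fit (compiles and samples a Stan model via brms):
# library(brms)
# mb <- brm(form, data = df1, family = bernoulli(),
#           prior = set_prior("normal(0, 2.5)", class = "b"),
#           chains = 4, cores = 4)
# summary(mb)
```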

I thought “outcome” was the given gold standard. A “correct” classification, based on a variable, means the classification matches the value given for the gold standard.