Hi, I have a dataset that I have some basic intuition about how to analyze, but I could use some further guidance.
I am looking at the likelihood that a patient presenting to the emergency department will be admitted to the hospital based on time of day of presentation. My data looks like this:
Time   Admission
01:14  TRUE
03:25  FALSE
14:25  FALSE
23:22  TRUE
…
I have thought about several different ways to approach this data, but none seem perfect yet:

Group by hour (24 groups), calculate OR_hour = SUM_hour(FALSE)/SUM_hour(TRUE) with standard errors SE(log(OR_hour)) = SQRT(1/SUM_hour(TRUE) + 1/SUM_hour(FALSE)), then look for a relationship log(OR) ~ Time. This seems like it would be highly dependent on how I group, and I'm not sure how to incorporate the SE into a linear model like that.
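In case a concrete sketch helps, here is one pure-Python version of that first approach; the `records` data are entirely made up for illustration, and inverse-variance weighting (weight = 1/SE²) is one common way to fold the per-bin SEs into a linear fit:

```python
import math

# Made-up illustrative data: (time of day in hours, admitted?).
records = [(1.23, True), (3.42, False), (14.42, False), (23.37, True),
           (2.10, True), (2.50, False), (15.75, True), (15.20, False),
           (8.00, True), (8.40, False), (8.90, True), (20.10, False)]

# Group presentations into 24 hourly bins: hour -> [n_true, n_false].
counts = {h: [0, 0] for h in range(24)}
for t, admitted in records:
    counts[int(t) % 24][0 if admitted else 1] += 1

# Per-hour log odds (FALSE/TRUE, as defined above) and its standard error.
# Hours with a zero cell are skipped here; a 0.5 continuity correction
# is a common alternative.
points = []
for h, (n_true, n_false) in sorted(counts.items()):
    if n_true > 0 and n_false > 0:
        log_or = math.log(n_false / n_true)
        se = math.sqrt(1 / n_true + 1 / n_false)
        points.append((h, log_or, se))

# Weighted least squares of log odds on hour, weighting each bin by
# 1/SE^2 so that noisier bins count for less.
w = [1 / se ** 2 for _, _, se in points]
sw = sum(w)
xbar = sum(wi * h for wi, (h, _, _) in zip(w, points)) / sw
ybar = sum(wi * y for wi, (_, y, _) in zip(w, points)) / sw
slope = (sum(wi * (h - xbar) * (y - ybar) for wi, (h, y, _) in zip(w, points))
         / sum(wi * (h - xbar) ** 2 for wi, (h, _, _) in zip(w, points)))
intercept = ybar - slope * xbar
print(slope, intercept)
```

One caveat with a straight line in hour: time of day wraps around at midnight, so sine/cosine terms of hour (or a cyclic spline) are often used in place of a linear term.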

Take a cumulative sum of admission events (TRUE) and non-admission events (FALSE) across time from 00:00 to 23:59. This gives me two lines. If I normalize to the total numbers of admissions and non-admissions, both curves span [0, 1], but then I lose information about the totals, which seems like it would affect the uncertainty in my calculations. I feel like this is analogous to survival curves, and a Cox proportional hazards model might be appropriate, but the starting conditions feel different from those behind Kaplan-Meier survival curves.
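And a minimal sketch of the cumulative-curve idea; the `records` list is again invented, and the maximum-gap summary at the end is just one possible way to compare the two normalized curves:

```python
# Made-up illustrative data: (time of day in hours, admitted?).
records = [(1.23, True), (3.42, False), (14.42, False), (23.37, True),
           (2.10, True), (15.75, True), (8.00, False), (20.10, False)]

admitted = sorted(t for t, a in records if a)
not_admitted = sorted(t for t, a in records if not a)

def cumulative_fraction(times, t):
    """Fraction of the group's events occurring at or before clock time t."""
    return sum(1 for x in times if x <= t) / len(times)

# Evaluate both normalized cumulative curves on an hourly grid from 00:00
# to 24:00; each curve rises from 0 to 1 over the day.
grid = list(range(25))
curve_adm = [cumulative_fraction(admitted, t) for t in grid]
curve_non = [cumulative_fraction(not_admitted, t) for t in grid]

# The maximum vertical gap between the two curves is the two-sample
# Kolmogorov-Smirnov statistic, one standard way to compare two
# distributions without binning.
ks = max(abs(a - b) for a, b in zip(curve_adm, curve_non))
print(ks)
```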
I would appreciate pointers to any papers, resources, or other threads (that I failed to identify) that might put me on the right path here.
Best,
Alan