Whatever validation is being done, it apparently isn't rigorous enough, because this isn't the first prediction model to fall short in practice. From another thread:
https://labblog.uofmhealth.org/lab-report/popular-sepsis-prediction-tool-less-accurate-than-claimed
My concern is that your description of “AI” as “something I don’t want to explain” is largely correct. The danger is that this opacity grants financial decision makers and the developers of these algorithms too much discretionary authority over clinical decision making, while the party ultimately held responsible for negative outcomes is the medical team implementing the tool.
These are actuarial models, whether or not they are called that. Given the trusted role insurance companies play in the economy, their decision procedures are subject to outside scrutiny, and these models deserve the same. Something more than internal validation, such as independent external validation on data from the institutions actually deploying them, needs to happen before widespread implementation.