Continuity of your work: Do you have a protégé?

We all have the luxury of looking at mortality data and studies, and presumably of creating methods and techniques for their analysis. But deep down we all know that at some point we shall have to contribute to that statistic. There is no solution to this yet, and it unfortunately means that all our (important) work will have to stop, especially if you're the main or sole driver of that work.

However, whether our mortality means that our work (projects, books, whatever we do) has to stop is something we can control, conditional on being alive and actively working towards seeing to it.

This topic is inspired, unfortunately, by the demise of one of the young shining-star PIs at my institution, who was running great projects but became a victim of Covid-19. Lots of discussions are going on about how to continue her amazing projects, but since the projects had high staff turnover, few people know how the projects were structured and the logic behind them, and so no one can run them as she did. Essentially, no protégés were left behind.

Now, for people like me who are yet to make any mark in the world of science, my loss means nothing. But many of you, and you know who you are, have made and are still making great contributions to science: method development, books, packages and functions, research on which many other researchers depend.

But have you given any thought to what happens when (not if) you leave? Do you have a protégé you are mentoring to keep the work going and make it sustainable after you leave, or is it all conditional on your presence?

As scientists and researchers who care about advancing human knowledge, this should be among the questions we ask ourselves and actively address, just as we write our last wills for the sake of our children. Do you have a protégé to take over your work?


It's a good question, and one obsessed over in industry (e.g., only so many members of a project team are allowed on the same flight to the conference venue, so that if it crashes the work can continue with the remaining staff). My experience in academia is mostly the reverse: the PI knows very little about the data and code, and has made no effort to understand where the data reside or how to access them; when their people leave, they retain the ideas but have no one left to implement them. And the people seem to leave like clockwork.


I’m so glad to see such an expansive question asked in this forum. I’m sorry to hear of your loss of a young PI at your institution, Nelly. I think about your question quite a lot in relation to my own work on dose individualization, for which oncology trialists have so far expressed little effective demand.

Reproducibility is the cornerstone of my own ‘continuity plan’. My DTAT papers have always been accompanied by code published permanently on the Open Science Framework (OSF). But I am now doing all my most active DTAT work ‘in the open’, in a public GitHub repository. I’ve had to relinquish some prudishness about ‘showing how the sausage gets made’, but the change has been liberating. Every time I do a git push I know I can rest easy.


Although oncology might not (yet) have expressed interest in your work, some excellent theoretical computer scientists in Europe have studied it and used it as an example of the power of logic programming in practical applications.

Markus Triska, for example, mentions your dose escalation trial design in this tutorial on Prolog meta-interpreters, at about the 58:50 mark.
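For readers who haven't encountered the technique, the classic "vanilla" meta-interpreter that such tutorials build on is remarkably compact. Here is a textbook sketch (the predicate name mi/1 is conventional, not taken from the video; depending on the Prolog system, interpreted predicates may need to be declared dynamic for clause/2 to inspect them):

```prolog
% A textbook 'vanilla' meta-interpreter: mi/1 proves a goal by
% explicitly walking the clauses of the program being interpreted.
mi(true).
mi((A, B)) :-
    mi(A),
    mi(B).
mi(Goal) :-
    Goal \= true,
    Goal \= (_, _),
    clause(Goal, Body),   % fetch a matching clause Head :- Body
    mi(Body).             % and prove its body recursively
```

The appeal for applications like trial design is that, once execution is reified this way, one can instrument it, e.g. to trace proofs, bound search depth, or enumerate all admissible dose-escalation paths.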

This intersection between computer science, logic programming, and statistics fascinates me to no end. I thought you might appreciate it.


Our work converting time-series matrices into images generates many examples of neutropenic death associated with chemotherapy dose/patient mismatch. The images are as stunning as the outcomes are tragic.

Stay healthy and get a protégé, David.


I’m so glad to know Markus’s work is attracting such wide interest, including from the statistical community! In fact, I am to some extent his protégé in this aspect of my work; it is entirely thanks to his efforts (as outlined in his Preparing Prolog video) and to his ongoing collaboration that I have been able to make effective use of pure Prolog in the ‘precautionary’ package.

Indeed, very much in line with Nelly’s original question, there is a profound sense of a long-term, multi-generational quest in the development of Prolog. The community involved in this effort, which now includes the cutting-edge Scryer Prolog system implemented in Rust, definitely has a sense of wishing to produce a legacy.


I can’t exactly remember where I read this, but the saying goes: “If you want to know what the ‘hot’ technology will be in the near future, look at what was being studied in the computer science literature 30 years ago.”

Prolog has been out of fashion for a long time, but its concepts are so fundamental that it is destined to come back in some form.

In terms of statistics, there have been attempts to extend Prolog with probability theory. That paradigm is known as “statistical relational learning.”
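For a concrete taste of that paradigm, ProbLog is one well-known system in this tradition: ordinary Prolog clauses are annotated with probabilities, and queries return probabilities rather than just yes/no answers. A minimal sketch of its coin-flipping tutorial example (assuming a ProbLog implementation is available):

```prolog
% ProbLog sketch: each coin independently lands heads with prob. 0.6.
0.6::heads(C) :- coin(C).

coin(c1).
coin(c2).

% At least one of the coins lands heads.
someHeads :- heads(_).

% Ask for the probability of someHeads:
% P(someHeads) = 1 - 0.4 * 0.4 = 0.84
query(someHeads).
```

The design choice that makes this work is that logical structure (the rules) and uncertainty (the probability labels) are kept orthogonal, so one can reuse ordinary relational modeling while the inference engine handles the probability calculus.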