To make better decisions, it isn't enough to produce accurate predictions for decision-makers, and it isn't enough to use proper scoring rules; it's also important that our decision-makers have good probabilistic thinking to pair with their unique domain knowledge.
What would be a good way to improve probabilistic thinking for decision-makers? Maybe all clinicians should spend some time playing poker.
Gerd Gigerenzer addressed these questions in his book Calculated Risks (and may have updated these ideas in his later books). For example, probabilities should be presented as natural frequencies rather than percentages or fractions.
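To illustrate the natural-frequency idea, here is a minimal sketch in the spirit of Gigerenzer's diagnostic-testing examples. The numbers (1% prevalence, 90% sensitivity, roughly 9% false-positive rate) are hypothetical, chosen only to show how restating percentages as counts of people makes the Bayesian answer transparent:

```python
# Natural-frequency restatement of a Bayes problem (hypothetical numbers).
# Out of 1,000 people: 10 have the disease (1% prevalence);
# 9 of those 10 test positive (90% sensitivity);
# of the 990 healthy, about 89 also test positive (~9% false-positive rate).
population = 1000
sick = 10
true_pos = 9
false_pos = 89

# "9 of the 98 positives are actually sick" is far easier to grasp than
# applying Bayes' rule directly to percentages.
ppv = true_pos / (true_pos + false_pos)  # positive predictive value, about 0.09
```

The same computation in percentage form (P(D|+) = P(+|D)P(D) / P(+)) gives an identical answer, but the counts-of-people framing is the one Gigerenzer found both clinicians and patients could actually reason with.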
I once thought that probabilistic thinking would help all of us improve our reasoning skills. I no longer believe that. In many cases probabilities are presented with incredible (false) precision, arrived at by simply ignoring major sources of error and uncertainty that cannot be easily quantified. See, for instance, Kay and King's 2020 book Radical Uncertainty: Decision-Making Beyond the Numbers.
I’m not sure what you mean by “probabilistic thinking,” but distinguishing thinking within a proposed mathematical model from thinking rigorously about the model itself is the essential feature of “probabilistic thinking,” which is an application of tools from mathematical logic and computability theory.
Thanks, my comment does require clarification. What Kay and King argue, compellingly in my view, is that a probabilistic framework is not always a productive one for thinking about risk and decision making. One of the reasons for this is the inability to even reasonably estimate, with any usable level of precision, the probabilities that would be involved.
What Kay and King argue, compellingly in my view, is that a probabilistic framework is not always a productive one for thinking about risk and decision making.
What alternative framework do these authors propose, in which uncertainty, particularly among different parties, can be discussed in what Robert Abelson describes as “principled argument”?
Ricardo Rebonato, in his book Plight of the Fortune Tellers, makes a similar argument that discussions of risk and probability take mathematical models too literally. Yet he comes to the opposite conclusion: intuition derived from experience can be helpful if models are treated as inputs into a broader Bayesian decision-theoretic process:
Fortunately, there are views of probability that are better suited to the needs of [financial] risk management. They are collectively subsumed under the term “Bayesian probability.” (p. 19)
Too often I see formal methods being blamed, when in reality a caricature of the actual theory is being described…
The real world is too complex to shoehorn into a single framework, probabilistic or otherwise. Kay and King offer other approaches such as narrative economics (as Nobel laureate Robert Shiller has also argued) and the enlightened use of heuristics (as Gigerenzer has also argued). Kay and King point out that it is easy to dismiss cognitive biases as ‘irrational’ when they are studied in the artificial environments used by behavioral scientists, but they may have value as heuristics in real environments immersed in both context and uncertainty. There may be an evolutionary basis for why such biases are ‘baked in’. The authors argue more eloquently than I can here, but it is not just their argument - they acknowledge their debt to arguments made by Frank Knight and J. M. Keynes a century ago. (A special issue of the Cambridge Journal of Economics celebrating the centenary of Knight’s and Keynes’ works on uncertainty was published last year.)
Formal methods when applied often have no choice but to be caricatures when operating with observational data (as opposed to designed samples or designed experiments). As Freedman argued at great length, the “opportunity for error” is so large and “the number of successful applications [are] so limited”.
Persi Diaconis was even more blunt: “Unfortunately, much of the current work I see is neither interesting mathematics nor useful in practice. Worse, in its complexity, modern model building seems to drive away from the truth into a fantasy land beyond objective reality…I feel that the believability of statistical analyses is at an all time low. If we do not stand up and say something the field will vanish.”
I certainly agree that the arguments I’m advancing from Freedman, Kay & King, and others, are diametrically opposed to those of Rebonato. I would like to see, for example, how Rebonato would have us deal with what happened to nickel trading at the London Metal Exchange in March of this year. I see that he was once interviewed on the EconTalk podcast - I will listen to it at my next opportunity!
Again, I don’t understand what is meant by “probabilistic thinking” that does not take into account missing information.
If we take the betting interpretation of probability seriously, an agent has two quantities with which to express a forecast: the odds on the event and the proportion of wealth he or she is willing to risk on the bet.
An agent who has total confidence in the model (and acts to maximize the long-run growth of wealth) will bet the fraction computed by the Kelly criterion.
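As a sketch of what that fraction looks like, here is the standard full-Kelly formula for a simple binary bet (this is the textbook formula, not tied to any particular commenter's setup; the 55%/even-odds numbers below are illustrative):

```python
def kelly_fraction(p: float, b: float) -> float:
    """Full-Kelly fraction of wealth for a bet that wins with
    probability p and pays b-to-1 on a win.

    A negative result means the bet has negative edge: don't bet."""
    return p - (1.0 - p) / b

# Illustration: a believed 55% win probability at even odds (b = 1)
# recommends staking 10% of wealth.
f = kelly_fraction(0.55, 1.0)
```

Note how sensitive the stake is to the model's p: this is exactly why total confidence in the model is doing so much work in the statement above, and why the next comment's caution about betting the full Kelly amount applies.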
No one sensible really bets the full recommended Kelly amount for the reasons you discuss.
The first thing you learn after studying the Black-Scholes option-pricing model (which assumes normally distributed log price changes) is that real options trade at different implied volatilities across strikes — the familiar volatility-smile chart. This deviation from the model is (1) informative and (2) does not undermine the validity of the model at all. It provides an opportunity to ask questions, e.g., what is the market price discounting that I may not have considered?
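A minimal sketch of what "implied volatility" means operationally: price a European call under Black-Scholes, then invert the formula numerically to recover the volatility a given market price implies. (This is the standard textbook model; the bisection solver and the specific parameter values below are illustrative choices, not anyone's production code.)

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes price of a European call (lognormal price model)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8) -> float:
    """Invert bs_call by bisection (the call price is increasing in sigma):
    the sigma that makes the model match the observed market price."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Repeating `implied_vol` across strikes K for the same expiry traces out the smile: under the model's own assumption of a single sigma the curve would be flat, so its curvature is exactly the informative deviation the comment describes.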
A prerequisite to probabilistic thinking is a setting in which it is, at least hypothetically, possible to make a list of all possible events and assign a real number to each. When that real number obeys certain properties, it is a probability, but first you have to have the prerequisite. The London Metal Exchange was notorious for lacking the guardrails of other markets, such as automated rules for halting trading. Until this year, I don’t believe trading in nickel had ever been halted on the LME. In March, they halted trading of nickel for about 2 weeks – and canceled the previous 8 hours of legitimate transactions! There are now lawsuits, of course. However, how would a trader have known, a priori, that this possibility should have even been part of the sample space? This is not simply a matter of missing information.
A prerequisite to probabilistic thinking is a setting in which it is, at least hypothetically, possible to make a list of all possible events
One does not need an exhaustive list of events to use probability theory to make qualitative judgements about whether two related bets are over- or undervalued relative to one another.
I do not know much about the structure of the LME, but market closures have occurred during military conflicts or events like 9/11. I do know Taleb made some astounding conditional predictions, described in his text on managing options portfolios, involving crude oil trading at negative prices — which actually happened in 2020. These were made strictly from a priori logical and economic considerations.
I see the mathematics of probability more as constraints to be respected than as deductive algorithms to be applied.
I think a point has been missed: halting trading and then cancelling the previous 8 hours of legitimate transactions. That was not on anyone’s radar, and arguably shouldn’t have been, but it did happen.
Theoretical physicists are trained to learn many ways of modeling the world. Probability is just one of the tools we use. Consider optics - we still teach 3 models of light: the ray model, the wave model, and the photon model. Only the latter is probabilistic – and its probabilistic features are unimportant in many problems (e.g., the photoelectric effect). Each model has strengths and weaknesses, and physicists select the one to use in a given problem based on fitness for purpose.