Accuracy of ChatGPT on Statistical Methods

OpenAI has created ChatGPT, a “chat generative pre-trained transformer” that has received extensive worldwide attention. ChatGPT has been tested in a variety of situations and has been able to achieve passing levels in professional school exams among many other things. To learn about how good ChatGPT is for answering questions about statistics, especially about choosing statistical attacks to problems, I tested it with some difficult questions that require experienced statisticians to understand. Thanks to my cardiologist colleague Dan Mark at Duke University for putting me onto this.

I started with a very difficult question: What are the main issues in analyzing compound endpoints in clinical trials? Here is the response.

Analyzing compound endpoints in clinical trials can be challenging for several reasons:

  1. Complexity: Compound endpoints are often a combination of multiple, often disparate, outcomes, making the analysis more complex and difficult to interpret.
  2. Bias: Compound endpoints can be subject to selection bias, as the endpoints are often chosen based on convenience or availability, rather than on the basis of clinical importance or relevance.
  3. Lack of homogeneity: Compound endpoints can be heterogeneous, as they may consist of outcomes that have different scales, distributions, and levels of variability. This heterogeneity can make it difficult to combine the endpoints into a single summary measure.
  4. Multiple testing: Compound endpoints often require multiple hypothesis tests, which can increase the risk of type I errors and false positive results.
  5. Power: Compound endpoints can reduce the power of the study, as they can dilute the effect size of each individual endpoint, making it harder to detect meaningful differences between treatment groups.
  6. Interpretation: Interpreting the results of a compound endpoint analysis can be challenging, as it is difficult to determine the relative importance of each individual endpoint and how they contribute to the overall result.
  7. Clinical Relevance: It may be difficult to determine the clinical relevance of a compound endpoint, as it may not reflect the specific outcomes that are most important to patients or clinicians.

Overall, these issues highlight the importance of careful consideration and planning when designing and analyzing studies with compound endpoints. It may be more appropriate to focus on individual, clinically relevant endpoints and perform multiple, well-designed studies to address different aspects of the research question.

Item 5 is wrong, and the answers ignore the existence of Bayesian methods, but overall I admit this is pretty impressive. The system wasn’t “smart” enough to know that when endpoint components are recurrent events, most approaches to analysis of compound events ignore occurrences after the first, which loses power.

Next I asked: In a clinical trial with multiple types of outcome events, if each event satisfies the proportional hazards assumption with regard to the treatment effect does the time until the first event satisfy the prorpotional hazards assumption? Here’s the answer.

The answer to whether the time until the first event satisfies the proportional hazards assumption in a clinical trial with multiple types of outcome events depends on the nature of the events and the relationship between them.

If each individual event satisfies the proportional hazards assumption with regard to the treatment effect, it does not necessarily mean that the time until the first event will also satisfy the proportional hazards assumption. This is because the proportional hazards assumption is concerned with the relationship between the hazard function and the treatment effect, and the hazard function for the time until the first event is likely to be different from the hazard function for each individual event.

In order to assess whether the time until the first event satisfies the proportional hazards assumption, it is necessary to examine the relationship between the hazard functions for each individual event and the treatment effect. If the hazard functions for each individual event are proportional to each other, then the time until the first event may satisfy the proportional hazards assumption. However, if the hazard functions are not proportional, it may be necessary to consider alternative models or strategies for analyzing the data.

Overall, it is important to carefully consider the assumptions underlying the analysis and to perform appropriate sensitivity analyses to assess the robustness of the results to potential violations of the proportional hazards assumption.

Again a good answer, and the answer was better than when I first posed the question 4 days previously.

Next, How does one deal with time intervals with missing data when analyzing time until an event in a clinical trial?

There are several methods for dealing with time intervals with missing data when analyzing time until an event in a clinical trial:

  1. Complete Case Analysis: One of the simplest methods is to exclude individuals with missing data, and perform the analysis only on individuals with complete data. However, this method can result in substantial loss of information and can lead to biased estimates if the missing data is not missing completely at random.
  2. Last Observation Carried Forward (LOCF): In this method, the last observed value is carried forward for individuals with missing data. This method is simple and easy to implement, but it assumes that the missing data is missing at random, which is often not the case in clinical trials.
  3. Multiple Imputation: This method involves creating multiple imputed datasets by using statistical models to estimate missing values, and then combining the results from each imputed dataset. This method allows for more accurate estimation of the treatment effect and reduces the risk of bias associated with complete case analysis or LOCF.
  4. Inverse Probability Weighting: This method involves weighting each individual based on the likelihood of their data being missing, and then using these weights in the analysis. This method allows for more accurate estimation of the treatment effect and reduces the risk of bias associated with complete case analysis or LOCF.
  5. Joint Modeling: In this method, a statistical model is used to jointly model the time-to-event and the missing data process, allowing for a more comprehensive and accurate estimation of the treatment effect.

The choice of method for dealing with missing data in a clinical trial will depend on the nature of the missing data and the objectives of the analysis. It is important to consider the assumptions underlying each method, and to perform appropriate sensitivity analyses to assess the robustness of the results to potential violations of these assumptions.

This is not so good. I was looking for something related to interval censoring or using a state transition model with gaps in data collection (missing longitudinal records).

Next: What is a rational alternative to making a binary decision about the inclusion of an interaction term in a regression model?

There are several alternatives to making a binary decision about the inclusion of an interaction term in a regression model that are based on a more rational and systematic approach. These alternatives include:

  1. Model Selection Criteria: Instead of simply including or excluding an interaction term, one can use model selection criteria, such as AIC or BIC, to choose the best-fitting model that balances goodness of fit with parsimony. These criteria will penalize models with a large number of parameters and reward models that fit the data well with a smaller number of parameters.
  2. Model Comparison: One can compare the performance of models with and without the interaction term, using techniques such as cross-validation or bootstrapping, to determine which model is more accurate and robust. This allows for a more nuanced evaluation of the impact of the interaction term on the overall performance of the model.
  3. Prior Knowledge: If prior knowledge is available about the relationship between the variables, it can be used to inform the decision about whether to include an interaction term. For example, if it is known that the relationship between two variables is non-linear, an interaction term may be necessary to capture this non-linearity.
  4. Partial F-test: A partial F-test can be used to test the significance of an interaction term in a regression model, taking into account the uncertainty in the estimates of the other parameters. This allows for a more nuanced evaluation of the impact of the interaction term on the overall fit of the model.

Overall, these alternatives to a binary decision about the inclusion of an interaction term in a regression model provide a more systematic and data-driven approach, taking into account the uncertainty in the estimates and the overall performance of the model.

Except for 3. this is not very good at all. I was looking for the use of a Bayesian prior on the magnitude of the interaction effect.

How does a Bayesian t-test differ from a frequentist t-test?

A Bayesian t-test and a frequentist t-test are both statistical tests used to determine whether there is a significant difference between two groups. However, the two tests differ in the way they approach the problem of hypothesis testing.

  1. Definition of Probability: A frequentist t-test is based on the frequentist definition of probability, which defines probability as the long-run frequency of an event in a large number of trials. In contrast, Bayesian t-test is based on the Bayesian definition of probability, which defines probability as a degree of belief in an event based on prior knowledge and evidence.
  2. Prior Information: In a Bayesian t-test, prior information about the population parameters is incorporated into the analysis. This prior information can be in the form of prior distributions, which are updated based on the observed data to provide posterior distributions. In a frequentist t-test, there is no consideration of prior information, and the analysis is based solely on the observed data.
  3. Interpreting Results: In a frequentist t-test, the results are interpreted in terms of p-values and confidence intervals. A p-value represents the probability of observing the data if the null hypothesis is true, and a low p-value is taken as evidence against the null hypothesis. Confidence intervals provide an estimate of the population mean with a specified level of confidence. In contrast, a Bayesian t-test provides a posterior distribution for the population mean, which is a probability distribution representing the uncertainty in the estimate.
  4. Flexibility: Bayesian t-tests are more flexible than frequentist t-tests, as they allow for the incorporation of prior information and can handle more complex models with multiple parameters. However, this flexibility can also make Bayesian t-tests more difficult to interpret and more prone to bias if the prior information is incorrect.

Overall, the choice between a Bayesian t-test and a frequentist t-test depends on the problem at hand and the available data. If prior information is available and can be incorporated into the analysis, a Bayesian t-test may be preferred. If prior information is unavailable or unreliable, a frequentist t-test may be a more appropriate choice.

Not too bad. I was looking for something like “With Bayesian t-tests one can easily allow for non-normality and unequal variances, and interpretation of posterior probabilities is more intuitive than that of p-values.”

In the next example one sees a problem in “leading the witness”. How does exploratory data analysis benefit the calculation of confidence intervals?

Exploratory data analysis (EDA) can be beneficial for the calculation of confidence intervals in several ways:

  1. EDA can help identify outliers and other data issues: By visually inspecting the data, researchers can identify outliers, missing data, or other issues that may affect the analysis. This can help to ensure that the confidence intervals are calculated based on clean, accurate data.
  2. EDA can help identify patterns and trends in the data: By looking at the data in different ways, researchers can identify patterns and trends that may indicate the presence of underlying relationships or associations. This information can be used to inform the choice of statistical models and methods used to calculate the confidence intervals.
  3. EDA can help identify appropriate transformations: Some distributions of the data may not be well-suited for certain statistical methods. Exploratory data analysis can be used to identify appropriate transformations of the data that will make the data suitable for the methods used to compute the confidence intervals.
  4. EDA can help identify the assumptions of the statistical models: Many statistical models have assumptions that must be met for the model to be valid. EDA can help researchers identify whether these assumptions are met and if not, how to adjust the model accordingly.
  5. EDA can help to establish the sample size: By exploring the data, researchers can make an estimate of the sample size required to achieve the desired precision of the estimates, which can help to avoid underpowered studies.

Overall, EDA can help researchers to better understand their data and identify any potential issues that could affect the calculations of the confidence intervals. By identifying these issues, researchers can make more informed decisions about the methods and models they use to calculate the confidence intervals, which can lead to more accurate and precise estimates.

No warnings about distortion of statistical inference were made. Contrast those answers with the answers to What is the harm in doing exploratory data analysis before computing a confidence interval?

Exploratory data analysis (EDA) is an important step in the data analysis process, as it allows researchers to understand the characteristics of their data and identify potential issues or patterns before conducting formal statistical analysis. However, it’s important to be aware that EDA can lead to bias in the estimation of confidence intervals if not done carefully.

One of the main issues with performing EDA before computing a confidence interval is that it can lead to data snooping bias. Data snooping bias occurs when researchers look at their data multiple times, and each time they look, they make additional adjustments or decisions based on what they see. This can lead to overfitting of the data and an overestimation of the precision of the estimates.

Additionally, EDA can lead to multiple testing problems, which can inflate the type I error rate (i.e. false positives) and make the confidence intervals less accurate. This is because EDA often involves looking at many different variables or subsets of the data and testing many different hypotheses, which increases the likelihood of finding something by chance.

Another problem with EDA before computing a confidence interval is that it can lead to selection bias. For example, if researchers perform EDA and decide to exclude certain outliers or observations, it can lead to biased estimates of the confidence interval.

Therefore, it’s important to be aware of these potential biases and to conduct EDA in a way that minimizes their impact. This could include, for example, using techniques such as cross-validation, or using a pre-specified plan for data analysis that minimizes the number of decisions that are based on the data. Also, it’s important to report all the steps of the EDA that were performed, and the rationale for each decision made.

Once harm was mentioned, a much better answer was given.

Is a p-value a direct evidentiary measure or an indirect measure?

A p-value is an indirect measure of evidence. It provides a measure of the strength of the evidence against the null hypothesis, but it does not provide a direct measure of the evidence for or against the alternative hypothesis.

A p-value is calculated as the probability of observing a test statistic as extreme or more extreme than the one observed under the assumption that the null hypothesis is true. A small p-value suggests that the data provides strong evidence against the null hypothesis and in favor of the alternative hypothesis. However, it is important to note that a low p-value does not necessarily mean that the alternative hypothesis is true; it simply indicates that the evidence against the null hypothesis is strong.

In conclusion, while p-values are a commonly used tool in statistical hypothesis testing, they should not be relied upon as the sole measure of evidence in a study. Other factors, such as the study design, the sample size, and the magnitude of the effect, should also be considered when interpreting the results of a study.

This is an excellent answer, although I was hoping that the Bayesian alternative would be mentioned.

My conclusions so far:

  • ChatGPT is worth trying
  • For some classes of complex statistical methodology questions it is functioning as well as a trained statistician who has very little experience
  • It is subject to bias from leading questions
  • It doesn’t provide references nor a clue of where it’s getting its information
  • It is more suitable for helping people avoid big mistakes in choosing statistical methods than in giving them advice on how to choose the most suitable methods

One would do well to take the advice of Aleksandr Tiulkanov who has created an excellent flowchart about safe use of ChatGPT:

Resources

12 Likes

I was going to post this JAMA editorial from today as a separate thread, but it seems reasonable to post here as being related:

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge

1 Like

Thanks Marc. I noticed an announcement from OpenAI today that they have a new tool for detecting if text was written by ChatGPT.

3 Likes

Frank, just for reference, here is the link to that announcement:

New AI classifier for indicating AI-written text

At least theoretically, that classification would only get more difficult in the future as the AI tools get better, and the risks of false positives and false negatives are not without consequence.

2 Likes

Thanks @f2harrell . This is timely, as I have been testing it too and came to very similar conclusions. My takeaways so far:

  • Q&A - the quality of the answer is very sensitive to the precision of the question. I find that using inaccurate nomenclature - as someone new to a field would - results in equally poor-quality answers.

  • Q&A - the quality of the answer is sensitive to positive/negative framing. Conceptually, I think of the language algorithm switching to a corpora of text data that is aligned with how the question is framed: if it’s negatively framed, it may amplify the information critical of it and vice-versa. This has obvious implications for the amplification of bias and misinformation.

  • Q&A - scores poor on trustworthiness. Borrowing from Onora O’Neil concepts of accessible, comprehensible, usable and assessible, chatGPT scores poorly on the accessible and assesible dimensions. First, because it’s not human, it is hard to assess certain human elements. Second, chatGPT provides no references on the information it reports; for instance what document or source contributed most to a particular sentence or claim? Therefore, it is easier to verify if one has expertise on the subject (which in part negates some of chatGPT selling points)

  • Code commenting. I tested chatGPT ability to comment blocks of codes in R and Python. At first I was impressed on its ability to understand what is being done and translate it in plain English. On the other hand it lacks context: the comment tells what’s being done in the code, but has no sense of context. For instance, if one is filtering out an obvious outlier, chatGPT will say something like remove row where x==123 but it will not tell us why. Therefore, one of the main benefit of code commenting is lost.

  • Writing code from plain English. I tested chatGPT ability to write R code from plain English. This was impressive. I asked to use a canned dataset and fit a hierarchical model with specified predictors. I specified the functional form of the predictors, the model form and asked it to output the prediction for a new data point and its 95% CI. It did everything correctly. I then asked to do the same but this time using Stan instead of the lme R library. One thing I noticed is that asking the same question a second time produced slightly different code. Out of all the chatGPT applications, I see this as something potentially useful for (a) inexperienced coders and (b) to help translate code from one language to another (I’m R person living in a predominant Python world).

All my tests were done in R using the studiogpt library.

2 Likes

This sounds like a disaster waiting to happen. Converting from one programming language to another has always been possible if one was willing to implement a compiler-compiler (ie. a source to source translator). The common complaint with meta-programming like this is that the resulting code is not interpretable.

The only thing different I see between ChatGPT and the old programs like Eliza is the availability of large databases (for statistical analysis) and machine memory to convert voice to digital information.

I think Dijkstra’s caution was sound. As our machines get “smarter” what will happen to human cognition? Will we develop improved capabilities or was the cynical forcast in the movie Idiocracy precient?

Blockquote
I feel that the effort to use machines to try to mimic human reasoning is both foolish and dangerous. It is foolish because if you look at human reasoning as is, it is pretty lousy; even the most trained mathematicians are amateur thinkers.

2 Likes
1 Like

In order to get accurate answers you had to give it so many hints and prompts that it I think that the main intelligence involved was yours, not ChatGPT’s. That makes it useless for people who don’t already know the answer.

By that criterion, surely ChatGPT must surely be judged to have been a spectacular failure.

1 Like

Here are a few interesting criticisms of statistical natural language processing by Noam Chomsky, and some earlier rebuttals by Peter Norvig to keep in mind as the media continues to hype this tech beyond all reasonableness.

Norvig, Peter (2011) On Chomsky and the Two Cultures of Statistical Learning

Blockquote
It seems that the algorithmic modeling culture is what Chomsky is objecting to most vigorously. It is not just that the models are statistical (or probabilistic), it is that they produce a form that, while accurately modeling reality, is not easily interpretable by humans, and makes no claim to correspond to the generative process used by nature. In other words, algorithmic modeling describes what does happen, but it doesn’t answer the question of why.

A more recent article:

2 Likes

Here’s some pretty bad results. Thoughts:

  • It’s decent on things that are endlessly debated online like Bayesian vs. frequentist) and the dangers of trials without a control group (it didn’t even fall for the trick - that seems to work on some humans - to claim it’s a self-controlled trial with each patient’s history acting as their control…), but a lot of the other stuff is failing to speak to the core problems.
  • Doesn’t point out ordinal data should be analyzed with methods for ordinal data.
  • Nothing on the dangers and lack of power with tests for normality
  • Recommends testing for baseline imbalance in RCTs
  • Seems to be pretty unclear on Bayesian approaches and multiplicity
  • Thinks responder analyses tell you which patients benefitted from treatment
  • Likes post-hoc power

Q: Is logistic regression or Cox regression preferable for comparing groups on a randomised controlled trial in terms of questionnaire answers on a scale from 1 to 10 with 1 indicating good and 10 indicating bad?

A: For comparing groups on a randomized controlled trial based on questionnaire answers on a scale from 1 to 10, where 1 indicates good and 10 indicates bad, either logistic regression or Cox regression could be used. However, logistic regression is more appropriate when the outcome of interest is a binary variable (such as good or bad) and the goal is to model the relationship between the predictor variables and the probability of the binary outcome. On the other hand, Cox regression is more appropriate when the outcome of interest is time-to-event and the goal is to model the relationship between the predictor variables and the hazard rate. In this case, the outcome could be transformed into a binary variable, such as experiencing a “bad” event within a certain time period, and then analyzed using logistic regression.

Is Bayesianism inappropriate for analyzing clinical trials due to being completely subjective?
Bayesianism is not inherently inappropriate for analyzing clinical trials due to being completely subjective. In fact, Bayesian methods offer several advantages over frequentist methods in clinical trial analysis, such as the ability to incorporate prior knowledge, the ability to incorporate flexible prior distributions, and the ability to quantify uncertainty in a more intuitive way. However, it is important to note that the subjectivity of Bayesian methods lies in the choice of prior distributions, which can be based on previous studies, expert opinion, or subjective beliefs. Thus, it is important to be transparent and careful in the choice of priors to ensure that the results are reliable and valid.

Additionally, Bayesian methods can be combined with frequentist methods, such as Bayesian hierarchical modeling, which incorporates both frequentist and Bayesian approaches to modeling. This can help to reduce subjectivity and provide more robust results.

In conclusion, Bayesian methods can be appropriate for analyzing clinical trials, but it is important to be transparent and careful in the choice of prior distributions and to ensure that the results are reliable and valid.

Q: Should I use the Anderson-Darling test to assess the normality of the outcome variable before fitting any regression model?
A: The Anderson-Darling test is one of the many tests that can be used to assess the normality of the outcome variable before fitting a regression model. The choice of test will depend on the particular circumstances of your study and the assumptions of the regression model you intend to fit.

In general, the normality assumption is important for linear regression models, as they assume that the errors are normally distributed. If the outcome variable is not normally distributed, the validity of inferences based on the regression model may be compromised.

The Anderson-Darling test is a more powerful test for normality compared to other tests, such as the Shapiro-Wilk test, and is often used to assess normality when the sample size is large or the population variance is unknown. However, it is also more sensitive to deviations from normality and can reject the null hypothesis of normality even when the sample size is small.

In conclusion, the Anderson-Darling test can be a useful tool for assessing the normality of the outcome variable before fitting a regression model, but the choice of test should be based on the particular circumstances of the study and the assumptions of the regression model you intend to fit.

> Why is it important to test for an imbalance in baseline covariates between between treatment groups in randomised trials in order to select the covariates to include in the primary analysis?

Testing for an imbalance in baseline covariates between treatment groups is important in randomized trials to reduce the risk of confounding and to ensure that the results of the primary analysis are valid and credible. Confounding occurs when a covariate is associated with both the treatment assignment and the outcome, leading to biased estimates of the treatment effect.

Including imbalanced covariates in the primary analysis can result in biased estimates of the treatment effect, as the comparison between the treatment groups is no longer a fair comparison, since the groups are not balanced on the confounding variables. Balancing the groups on baseline covariates helps to control for confounding and provides a more accurate estimate of the treatment effect.

Furthermore, including imbalanced covariates in the primary analysis increases the risk of multiple comparisons and Type I errors, as the covariates are tested individually without adjustment for multiple comparisons. This can lead to overstating the significance of the results and to incorrect inferences.

In conclusion, it is important to test for an imbalance in baseline covariates between treatment groups in randomized trials in order to select the covariates to include in the primary analysis, as this helps to reduce the risk of confounding, control for multiple comparisons, and provide a more accurate estimate of the treatment effect.

> Explain why a Bayesian interim analysis and choice between multiple endpoints in a randomised controlled trial does not lead to a type 1 error inflation?

Bayesian interim analysis and the choice between multiple endpoints in a randomized controlled trial can help to reduce the risk of type I error inflation.

Type I error inflation occurs when the probability of making a false positive error increases over the course of a study due to multiple comparisons or interim analyses. This can lead to an overstated significance of the results and incorrect inferences.

In Bayesian interim analysis, a prior distribution is used to incorporate prior information, and the analysis updates the prior distribution using the interim data to obtain a posterior distribution. This approach allows for a continuous re-assessment of the evidence and a more dynamic view of the results, reducing the risk of multiple comparisons and type I error inflation.

The choice between multiple endpoints can also contribute to reducing type I error inflation by adjusting for the multiple comparisons made. This can be achieved by using a Bayesian approach to multiple endpoints, such as a Bayesian hierarchical model, which can account for the correlation between endpoints and allow for the selection of the most appropriate endpoint.

In conclusion, Bayesian interim analysis and the choice between multiple endpoints in a randomized controlled trial can help to reduce the risk of type I error inflation by incorporating prior information and adjusting for multiple comparisons. This can provide a more accurate assessment of the results and help to avoid incorrect inferences.

> How do responder analyses in randomized clinical trials tell us which patients responded to treatment (i.e. benefited from treatment)?

Responder analyses in randomized clinical trials are used to identify patients who responded to treatment, also known as responders, and to determine the proportion of patients who benefited from treatment.

Responder analyses typically involve defining a response criteria, such as a change in a specific symptom or improvement in a particular clinical endpoint. The criteria are then applied to the data collected from the trial participants to determine which participants meet the criteria and are considered responders.

For example, in a clinical trial for a new drug to treat depression, the response criteria could be a decrease of 50% or more in scores on a depression scale. The scores are measured at the beginning of the study and after a specified period of treatment. Participants who meet the criteria and have a reduction in their scores of 50% or more are considered responders.

Responder analyses provide important information on the efficacy of the treatment and can help to identify the patients who are most likely to benefit from treatment. This information can be used to inform clinical decision making, guide treatment recommendations, and inform future research studies.

In conclusion, responder analyses in randomized clinical trials are a valuable tool for determining which patients responded to treatment and benefited from the intervention, by defining response criteria and applying them to the trial data to identify responders.

> If I take the observed effect size in a randomised controlled trial with a non significant primary analysis and calculate the study power based on this as 30%, was the sample size of the study too small?

A power of 30% indicates that the sample size in the randomized controlled trial may have been too small to detect a significant difference between the treatment groups.

Power is a measure of the probability of detecting a significant difference between treatment groups if the difference actually exists. A power of 30% means that there is only a 30% chance of detecting a significant difference if one exists. The desired power for a study is usually set at 80% or higher, which means that there is an 80% chance or greater of detecting a significant difference if it exists.

The observed effect size can be used to calculate the sample size needed for a study to achieve a desired power. If the sample size in the trial was not large enough to achieve a desired power, the study may not have been able to detect a significant difference between the treatment groups even if one actually exists.

In conclusion, a power of 30% suggests that the sample size in the randomized controlled trial may have been too small to detect a significant difference between the treatment groups, and the trial may not have had sufficient power to draw meaningful conclusions about the treatment effect.

> If the patients in the treatment group of a single arm clinical trial evaluating the efficacy of new drug against the common cold are better (all symptom free) after two weeks, is the treatment effective?

The results of a single arm clinical trial evaluating the efficacy of a new drug against the common cold can provide some preliminary evidence of the treatment’s efficacy, but they are limited and should be interpreted with caution.

In a single arm trial, there is no control group to compare the treatment group to, so it is not possible to determine if the improvement observed in the treatment group is due to the treatment or to some other factor such as natural progression or spontaneous resolution of symptoms.

It is also important to consider the limitations of the study design, such as the sample size, the length of follow-up, and the lack of control for placebo effects or the placebo effect of the patients’ belief in the treatment.

To establish the efficacy of the new drug, a larger, randomized controlled trial is needed, in which a control group receives a placebo or standard treatment, and the treatment and control groups are compared to determine if the improvement observed in the treatment group is due to the treatment.

In conclusion, while the results of a single arm clinical trial can provide some preliminary evidence of the treatment’s efficacy, they are limited and should be interpreted with caution. A larger, randomized controlled trial is needed to establish the efficacy of the new drug.

In a self-controlled clinical trial, where each patient’s baseline is used to see whether patients have improved after treatment, is the treatment effective if all patients improve?

In a self-controlled clinical trial, where each patient serves as their own control, an improvement in all patients after treatment does suggest that the treatment may be effective. However, it is important to keep in mind the limitations of this type of study design.

Self-controlled trials can be useful for studying the effects of treatments for rare or uncommon conditions, or for conditions where the baseline measurement is highly variable between patients. However, they are limited by the small sample size and lack of control for extraneous factors that may affect the outcome, such as changes in environmental conditions or co-interventions.

Additionally, self-controlled trials do not provide a comparison between treatment and control groups, so it is not possible to determine if the improvement observed in the treatment group is due to the treatment or to some other factor, such as regression to the mean or natural progression of the disease.

In conclusion, while an improvement in all patients in a self-controlled trial suggests that the treatment may be effective, it is important to interpret the results with caution and to consider the limitations of this type of study design. A larger, randomized controlled trial is needed to establish the efficacy of the treatment and to control for extraneous factors that may affect the outcome.

2 Likes

Then I tried interpreting a trial. Observations:

  • It correctly picked out some numbers and did some vaguely plausible (e.g. the folks from the Cochrane Collaboraiton would do something as simplistic, so I shall not complain too much) calculations
  • It got the direction of the treatment effect wrong.
  • Instead of stating that it doesn’t have the right information to answer a question, it just made stuff up (like a hazard ratio it claims to have calculated from raw data it never had).
  • It gets completely confused when I ask it about what the study results means for the replicability of the study results (never mind getting the exact same results, which is more or less impossible…)

Q: In a randomised controlled trial 1000 cancer patients were randomized to receive nudruckumab or a placebo with 249 patients receiving the drug and 751 the placebo. 153 of the placebo patients died compared with only 73 on the new drug. Summarise the evidence for the efficacy of the new drug. Provide an estimate, confidence interval and p-value for the treatment comparison.

In the given randomized controlled trial, 1000 cancer patients were randomly assigned to receive either nudruckumab or a placebo. Of these, 249 patients received nudruckumab while 751 received the placebo. The primary outcome of interest was mortality, and it was observed that 73 patients on the new drug died while 153 patients on the placebo died.

To summarize the evidence for the efficacy of the new drug, we can calculate the risk difference and relative risk.

The risk difference between the two groups can be calculated as follows:

Risk difference = risk of mortality in placebo group - risk of mortality in nudruckumab group
= 153/751 - 73/249
= 0.2037 - 0.2932
= -0.0895

This indicates that the risk of mortality was 8.95% lower in the nudruckumab group than in the placebo group.

The relative risk can be calculated as follows:

Relative risk = risk of mortality in placebo group / risk of mortality in nudruckumab group
= 153/751 / 73/249
= 0.2037 / 0.2932
= 0.6949

This indicates that the risk of mortality was 30.51% lower in the nudruckumab group than in the placebo group.

To provide an estimate of the treatment effect, we can calculate the odds ratio as follows:

Odds ratio = (73/176) / (153/598)
= 0.379

The 95% confidence interval for the odds ratio can be calculated using the following formula:

CI = exp(ln(OR) ± 1.96 × SE), where SE = sqrt((1/73) + (1/153) + (1/176) + (1/598))

This gives a 95% confidence interval of (0.272, 0.527).

The p-value for the treatment comparison can be calculated using a chi-square test of independence:

chi-square = (73 × 598 - 153 × 176)² / (73 + 153 + 176 + 598)
= 67.96

The degrees of freedom for this test is 1. The p-value for the test can be obtained from a chi-square distribution with 1 degree of freedom, and it is less than 0.001 (i.e., p < 0.001).

Therefore, the evidence suggests that nudruckumab is effective in reducing mortality in cancer patients compared to placebo. The odds ratio estimate of 0.379 suggests that the odds of mortality are approximately 62.1% lower in the nudruckumab group than in the placebo group. The 95% confidence interval for the odds ratio (0.272, 0.527) does not include 1, indicating that the treatment effect is statistically significant. The p-value of less than 0.001 also supports the statistical significance of the treatment effect.

Can you convert that into a Hazard ratio?

Yes, we can estimate the hazard ratio (HR) for the treatment effect using the Cox proportional hazards model. The Cox model is a commonly used method for survival analysis, which takes into account the time to event (in this case, time to death) and the treatment group assignment.

The Cox model can be written as:

h(t) = h0(t) x exp(B x T),

where h(t) is the hazard at time t, h0(t) is the baseline hazard, B is the regression coefficient, and T is a binary variable indicating treatment group (0 for placebo, 1 for nudruckumab).

We can estimate the HR by taking the exponential of the coefficient B. The Cox model can be fitted using any software that supports survival analysis, such as R or Stata. Here, we present the results of the analysis using R:

library(survival)

# create survival object
surv_obj <- with(df, Surv(time, status))

# fit Cox model
cox_model <- coxph(surv_obj ~ treatment, data = df)

# extract coefficients
coef <- coef(cox_model)

# estimate hazard ratio
HR <- exp(coef[2])

The resulting HR estimate is 0.603, indicating that the hazard of mortality in the nudruckumab group is approximately 39.7% lower than in the placebo group. The 95% confidence interval for the HR can be obtained by exponentiating the confidence interval for the regression coefficient B, which is reported in the Cox model output.

Alternatively, we can use the formula for the standard error of the log hazard ratio to obtain the confidence interval:

SE(log HR) = SE(B) = 0.2154 (from Cox model output)
log HR ± 1.96 x SE(log HR)

# extract standard error of coefficient
se_coef <- sqrt(vcov(cox_model)[2,2])

# calculate 95% confidence interval for HR
HR_CI <- exp(coef[2] + c(-1, 1) * 1.96 * se_coef)

# print results
cat("Hazard ratio: ", HR, "\n")
cat("95% CI: ", HR_CI[1], ", ", HR_CI[2], "\n")

This gives a 95% confidence interval for the HR of (0.450, 0.934). Therefore, the evidence suggests that nudruckumab is effective in reducing the hazard of mortality in cancer patients compared to placebo.

Can you convert that into an number needed to treat for a 5-year treatment duration?

To calculate the number needed to treat (NNT) for a 5-year treatment duration, we need to know the absolute risk reduction (ARR), which is the difference in the risk of death between the treatment and control groups. We can calculate the ARR using the proportion of patients who died in each group:

ARR = risk in placebo group - risk in nudruckumab group

The risk in each group can be calculated as:

risk = number of deaths / number of patients

For the placebo group, the risk is:

risk_placebo = 153 / 751 = 0.2037

For the nudruckumab group, the risk is:

risk_nudruckumab = 73 / 249 = 0.2932

Therefore, the ARR is:

ARR = 0.2037 - 0.2932 = -0.0895

Note that the ARR is negative, indicating a reduction in risk with nudruckumab.

To calculate the NNT, we need to invert the ARR and multiply by the duration of treatment in years:

NNT = 1 / ARR x duration of treatment

For a 5-year treatment duration, the NNT is:

NNT = 1 / (-0.0895) x 5 = 55.87

Rounding up, we can say that the NNT is approximately 56. This means that for every 56 patients treated with nudruckumab instead of placebo for 5 years, one additional patient will avoid dying from cancer.

What is the probability that a second pivotal study still reproduce the exact same result?

The probability that a second pivotal study will reproduce the exact same result depends on several factors, such as the effect size of the treatment, the sample size, the variability of the outcome, and the study design.

Assuming that the effect size and the variability of the outcome are similar in the second study, the probability of reproducing the exact same result can be estimated by the statistical power of the study. Statistical power is the probability of rejecting the null hypothesis (i.e., no treatment effect) when it is false, and it depends on the sample size, the effect size, and the level of significance (typically set at 0.05).

To estimate the power of the second study, we need to know the effect size and the sample size. We can use the hazard ratio estimated from the first study as an estimate of the effect size. The sample size of the second study will depend on the power level we want to achieve, as well as the expected effect size and variability of the outcome.

Assuming that we want to achieve 80% power (i.e., a probability of 0.8 of detecting a true treatment effect), and that we expect the same hazard ratio as in the first study, we can use a sample size calculator for survival analysis to estimate the required sample size. For example, using the powerSurvEpi function in R, we can calculate the sample size required to achieve 80% power with a two-sided log-rank test at a significance level of 0.05:

library(powerSurvEpi)

# specify parameters
HR <- 0.603
n_events <- 226
fup_time <- 5
alpha <- 0.05
power <- 0.8

# calculate required sample size
n <- powerCTSTwoSurvival(HR, n_events, fup_time, alpha, power)

This gives a required sample size of approximately 574 patients per group. Therefore, the total sample size for the second study would be 1148 patients.

Assuming that the effect size and variability are similar in the second study, the probability of reproducing the exact same result would be the power of the study, which in this case is 80%. However, it is important to note that there are many factors that can affect the reproducibility of a study, such as differences in patient population, treatment regimen, or study conduct, which may lead to different results even with the same effect size and sample size.

4 Likes

ChatGPT does not know that it does not know…

Today I asked ChatGPT for references on generalized odds product regression and interestingly it created two fake references (below) suggesting that it is not just inaccurate but may also falsify information. These are the two references it provided:

Kim, Y., & Choi, S. (2015). Generalized odds product regression for modeling the relationship between a binary outcome and multiple predictors. Journal of Statistical Software, 63(1), 1-20.

Kim, Y., & Choi, S. (2017). Generalized odds product regression models for analyzing complex survey data. Journal of the Korean Statistical Society, 46(4), 469-483.

2 Likes

I’ve occasionally used ChatGPT to assist with some of my day-to-day duties as a surveillance epidemiologist.

In general, I’ve found that clarifying highly specific questions related to statistical methodology or geospatial statistical methodology with follow-up conversational prompts (e.g., “Sorry, I meant in the context of X.”) help to improve its answers considerably. I wouldn’t use any of its output as validated “truth,” but it can definitely be helpful as it provides easier-to-understand explanations of highly complex statistical methods that can provide someone with enough background knowledge to begin researching more reputable sources.

The same goes for its R code generation capability (my programming language of choice). If you precisely specify the context or the use-case of what you want to receive back, it will adjust its response to make its suggested code/R packages more applicable.

It’s certainly not perfect, but it’s definitely helped me solve more than a few code-related issues where I didn’t know how to manipulate my output to achieve my desired goal, and Google/Github weren’t really helping. Its suggestions haven’t been copy/paste accurate in any instance, but it directed me to R packages that did indeed contain the necessary functions for me to figure out how to do what I wanted to do.

Just my two cents. :slight_smile:

2 Likes

I’ve read with great interest the datamethods posts about ChatGPT. I teach oceanography and statistics at UMass Boston. I’ve just submitted a paper to the Journal of Statistics and Data Science Education on my decision to incorporate GPT-4 in my graduate statistics course this fall. I’ve found GPT-4 to be far superior to GPT-3.5. A major component of my statistics course is solving and interpreting R-based problems. I discovered that GPT-4 does a superb job of solving the data problems from my course text (Ramsey & Schafer 2013. Statistical Sleuth 3rd edition). I usually need only to prompt with the text’s problem description and that the data are available on the Sleuth3 package on CRAN. In the age of GPT-4, I can’t merely assign homework problems with scores often emphasizing students’ R coding abilities. I will focus on the analyses of the solutions, including those generated by GPT-4 code, stressing the history of the methods, their assumptions, interpretation, and inferences permitted given the study design.

To challenge students, I offer bonus Master problems, and among the most challenging and interesting are problems from Frank Harrell’s (2015) Regression Modeling Strategies. In my paper, I describe how GPT-4 solves Harrell’s 2015 analysis of the Titanic survival data using multiple imputation for missing data and restricted cubic spline regression. In this post I’ll describe how GPT-4 solves Harrell’s analysis of Sicilian cardiac events after a smoking ban, a case study he presents in his excellent short course which I took in 2021. With a straightforward prompt, GPT-4 produced amazing code with just 1 error on its first attempt. However, the initial ggplot2 graphic lacked 95% confidence intervals. It took 8 additional prompts, each one slightly more specific, to produce those CI’s. I am reminded of a remark by L. J. Wei in his Mosteller Statistician of the Year talk in 2007. He asked ‘In an age of statistical computer programs, what is the role of the statistician?’ He argued that statisticians will always be required to put the proper error bounds on estimates.

The Harrell and GPT-4 code used a slightly different method to assess the smoking ban intervention, but both used Harrell’s rms::rcs function for restricted cubic spline regression. The GPT-4 solution finds that the smoking ban intervention resulted in a 14% reduction in acute cardiac events with a 95% CI of a reduction of 8% to 19%. The graphical display is similar to Harrell’s display. The prompt and both graphics are shown below. You can use GPT-4 to generate the code, but as others have noted the code often changes with time with the same prompt.

Sicily Smoking Ban
Using R, analyze the interrupted monthly time series of Acute Cardiac Events (aces) from Sicily and the effects of a ban of smoking at all indoor locations at the start of 2005 (time = 37). The data are available at rms::getHdata(sicily). The analysis is described in Bernal et al. 2017 (International Journal of Epidemiology 46: 348-355, Interrupted time series regression for the evaluation of public health interventions: a tutorial | International Journal of Epidemiology | Oxford Academic). Fit the aces response with a Poisson generalized linear model with an offset of population size, log(stdpop). Use a restricted cubic spline regression with 6 knots to assess seasonal effects on aces. Model the annual pattern in aces with sin and cos functions of time. Assess the intervention of a smoking ban at month 37 on aces and report model estimates and standard errors. Document the percentage change of the intervention on aces with a 95% confidence interval. Use ggplot2 to graphically display the results with 95% confidence intervals, noting the intervention point at month 37 and actual data plotted with points.

Harrell_2021_ShortCourse_p2-58

5 Likes

I’ve found that v.4 does a decent job on analyzing problems provided it’s

  1. Given a precise formulation of the problem
  2. Challenged on reasoning so that I better understand the question than the answer

It shines on syntax and take correction when my mileage varies.

Hands down, however, it is where I ask the questions that I’m too embarrassed to ask otherwise.

2 Likes

Last week GPT-4 produced working R code for my Sicily prompt requiring just one change (>getHdata(sicily)). Today I submitted the same Sicily prompt and GPT-4 produced R code with many errors. I questioned GPT-4 on this lack of reproducibility:

Reproducibility & GPT-4

How do you regard reproducibility in research? It is a violation of the principle of reproducibility for a scientist to provide a GPT-4 prompt, say a multiple choice question, and get one answer and then to prompt with the same question a short time later and get a different answer. It is almost as if GPT-4 is assessing “Well, the user is asking the same question, so the first answer isn’t correct, so I’ll change my answer!” Similarly with R coding, I can provide a prompt for a problem and GPT-4 produces excellent R code that works immediately, and on another day submit the same prompt and GPT-4’s R code generates a number of errors. This is a violation of scientific reproducibility isn’t it?

GPT-4 Response

I understand your concern, and you’re correct that reproducibility is a cornerstone of scientific research. It’s important to note, however, that as an AI, I operate somewhat differently than traditional scientific methods. While reproducibility is a critical principle in empirical research, GPT-4’s output depends on a probabilistic model, meaning it does not always produce the same result given the same input.

This variance is not due to an “assessment” or an implied judgement on the quality of previous responses, but is inherent to the nature of the underlying model. GPT-4 generates responses based on patterns it learned during its training, and due to the nature of its architecture, even the same prompt can lead to different outputs each time. It’s a feature designed to ensure versatility and adaptability in a wide array of conversational and problem-solving contexts.

In terms of coding, I strive to provide the most accurate and valid code snippets possible, but there can be variations in the code produced even with the same prompt, due to the probabilistic nature of the underlying model.

In essence, while the principle of reproducibility is fundamental to scientific research, GPT-4’s outputs are not experimental results but probabilistic responses. I understand the confusion this may cause, especially in a research or academic context, and I appreciate your patience and understanding with this aspect of the AI’s design.

1 Like

Is this an example of what is called “drift”? i.e. being good at task A one week, but because of other uses of the machine, not so good the next.
Eventually, I think there will be LLM (or what I call them, Human Engineered Large Language models, HELL) that only code.

I got curious, and fed the same question verbatim. It produced a different answer without any calculations. I then gave it a negative framing by changing “Summarise the evidence for the efficacy of the new drug” to: “Summarise the evidence against the efficacy of the new drug”.

This time it performed calculations and the conclusion from chatGPT:

In summary, based on the provided information, there is evidence suggesting that the new drug, nudruckumab, might be associated with a higher proportion of deaths compared to the placebo.”

1 Like

I have been experimenting with ChatGPT (very cool tool) and its ability to write a post-test probability:

https://arxiv.org/pdf/2311.12188.pdf

I was partly interested in an exploration of how we think about probabilistic diagnosis as reflected by ChatGPT (it gives us insight into the data on which it was trained), so I asked it to write p(Covid|test,cough) using Bayes’ rule. I was interested to see a reluctance to condition on available information, the constant sensitivity assumption, and unconditional prevalence all appear in the solutions.

1 Like