Basics of Bayesian statistics – Series II

Prior distribution

In Bayesian models the prior distribution is an essential component and is usually based on data from previous trials. Priors can be informative or uninformative, reflecting the strength of prior beliefs about the parameters. When the prior is uninformative, the posterior is determined largely by the data (data-driven); when the prior is informative, the posterior is a blend of the prior and the data. The prior should be selected deliberately and carefully, since the choice of prior distribution can be highly subjective and can substantively affect the final results.
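As a concrete illustration, here is a minimal Python sketch (with made-up trial numbers) contrasting a flat Beta(1, 1) prior with an informative Beta(30, 70) prior in a simple Beta-Binomial model; the informative prior pulls the posterior mean toward the prior belief.

```python
# Minimal sketch: how the prior's informativeness shifts the posterior
# in a Beta-Binomial model. All numbers below are hypothetical.
from scipy import stats

successes, n = 14, 20  # hypothetical trial: 14 responders out of 20

# Uninformative prior: Beta(1, 1) is flat, so the posterior is data-driven.
flat_post = stats.beta(1 + successes, 1 + n - successes)

# Informative prior: Beta(30, 70) encodes a strong prior belief that the
# response rate is near 0.30 (e.g., based on previous trials).
info_post = stats.beta(30 + successes, 70 + n - successes)

print(f"Posterior mean, flat prior:        {flat_post.mean():.3f}")  # ~0.68
print(f"Posterior mean, informative prior: {info_post.mean():.3f}")  # ~0.37
```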

Likelihood principle

In Bayesian inference there are two types of quantities: observed and unobserved. Observed quantities are the data; unobserved quantities include parameters, missing data, future observations, and so on, which may have occurred in the past or are yet to occur. The likelihood function plays a central role in statistical inference. Suppose the parameter in a clinical trial is represented by the Greek letter θ (“theta”). The likelihood principle then states that all the information in the observed data relevant for making inferences about θ is contained in the likelihood function, not in other unobserved quantities. The likelihood is a mathematical representation of the relationship between the observed outcomes and the parameter θ, expressed as f(data | θ).
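The following sketch (again with assumed data of 14 responders out of 20) shows f(data | θ) as a function of θ for a binomial outcome: the data are held fixed, and the likelihood scores each candidate parameter value.

```python
# Sketch: the likelihood f(data | θ) is a function of θ for fixed data.
from scipy import stats

successes, n = 14, 20  # hypothetical observed data

def likelihood(theta):
    """Binomial likelihood of the observed data at a given θ."""
    return stats.binom.pmf(successes, n, theta)

for theta in (0.3, 0.5, 0.7):
    print(f"f(data | θ={theta}) = {likelihood(theta):.4f}")
# θ = 0.7 is closest to 14/20, so it yields the highest likelihood.
```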

Posterior probabilities

The main goal of Bayesian analysis is to obtain the posterior distribution of the model parameters. The posterior probability is the probability of an event after all the substantive information or evidence has been taken into account, and it is closely related to the prior. The posterior probability is obtained by updating the prior probability with the observed data using Bayes’ theorem.
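A small sketch of this update, using a discrete grid of θ values and the same hypothetical data as above: the posterior is proportional to the prior times the likelihood, then normalized.

```python
# Sketch of Bayes' theorem on a grid: posterior ∝ prior × likelihood.
import numpy as np
from scipy import stats

successes, n = 14, 20                     # hypothetical data as above
theta = np.linspace(0.01, 0.99, 99)       # grid of candidate parameter values
prior = np.ones_like(theta) / len(theta)  # uniform (uninformative) prior
like = stats.binom.pmf(successes, n, theta)

posterior = prior * like
posterior /= posterior.sum()              # normalize so probabilities sum to 1

print(f"Posterior mode: θ ≈ {theta[posterior.argmax()]:.2f}")  # near 14/20 = 0.70
```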

Predictive probability

One of the benefits of the Bayesian approach is that predictive inference is a straightforward computation once the posterior distribution has been obtained. Bayesian methods allow the derivation of the probability of unobserved outcomes given what has already been observed; this probability is called the predictive probability. The predictive distribution is the distribution of all possible unobserved values conditional on the observed values. In clinical trials the predictive distribution is used to decide when to stop a trial (by predicting outcomes for patients not yet observed), to predict a clinical outcome from an earlier or more easily measured outcome for the same patient, for model checking, and so on.
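The sketch below illustrates one such computation under an assumed Beta-Binomial model: given hypothetical interim data, it evaluates the predictive probability of the number of responders among patients not yet observed.

```python
# Sketch of a posterior predictive computation, assuming a Beta-Binomial
# model and hypothetical interim numbers.
from scipy import stats

successes, n = 14, 20                    # hypothetical interim data
a, b = 1 + successes, 1 + n - successes  # Beta posterior parameters (flat prior)

m = 10  # hypothetical number of patients not yet observed
# Predictive distribution of future successes: Beta-Binomial(m, a, b)
pred = stats.betabinom(m, a, b)

# e.g., predictive probability of at least 6 responders among the next 10
print(f"P(≥ 6 future successes | data) = {1 - pred.cdf(5):.3f}")
```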

Exchangeability of trials

The concept of exchangeability allows more flexible modeling of most experimental setups. Quantities are exchangeable when their joint distribution is unchanged by reordering them, so any two can be swapped without affecting the results. When the previous trials considered provide good prior information, the Bayesian clinical trial can assume another level of exchangeability: the current trial is assumed to be exchangeable with previous trials, which enables it to “borrow strength” from them while acknowledging that the trials are not identical in all respects. Exchangeable trials can be thought of as a representative sample from some super-population of clinical trials. Bayesian hierarchical models are used to implement exchangeability of trials and exchangeability of patients within trials.
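As a simplified, non-authoritative illustration of borrowing strength, the sketch below shrinks hypothetical per-trial effect estimates toward a pooled mean under a normal hierarchical model with an assumed between-trial variance; a full analysis would estimate that variance from the data.

```python
# Simplified sketch of "borrowing strength" across exchangeable trials,
# using a normal hierarchical model with hypothetical effect estimates.
import numpy as np

est = np.array([0.40, 0.55, 0.30, 0.65])  # per-trial effect estimates (assumed)
se2 = np.array([0.02, 0.03, 0.02, 0.04])  # their squared standard errors

tau2 = 0.01                               # assumed between-trial variance
mu = np.average(est, weights=1 / (se2 + tau2))  # pooled (super-population) mean

# Shrink each trial toward the pooled mean; noisier trials shrink more.
w = tau2 / (tau2 + se2)
shrunk = w * est + (1 - w) * mu
print("Pooled mean:", round(mu, 3))
print("Shrunken trial estimates:", np.round(shrunk, 3))
```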

Decision rules

A decision rule specifies what action is to be taken based on the observed data. One common type of decision rule is traditional hypothesis testing. In the Bayesian framework, a hypothesis is considered demonstrated (with reasonable assurance) if its posterior probability is large enough [1].
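The sketch below illustrates such a rule with assumed numbers: success is declared if the posterior probability that the response rate θ exceeds a hypothetical target crosses an assumed evidence threshold.

```python
# Sketch of a Bayesian decision rule with assumed numbers: declare success
# if the posterior probability that θ exceeds a target is large enough.
from scipy import stats

successes, n = 14, 20
posterior = stats.beta(1 + successes, 1 + n - successes)  # flat prior

target = 0.50      # hypothetical clinically relevant response rate
threshold = 0.975  # assumed evidence threshold for declaring success

p = 1 - posterior.cdf(target)  # P(θ > 0.50 | data)
print(f"P(θ > {target} | data) = {p:.3f}")
print("Declare success" if p > threshold else "Insufficient evidence")
```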

Reference
  1. U.S. Food and Drug Administration. “Guidance for the Use of Bayesian Statistics in Medical Device Clinical Trials.” 2010.
