COMPONENTS OF BAYESIAN APPROACH
Author: Mr. Akhil Vijayan – Bio-statistician at Genpro Research.
Bayes' theorem, formulated by Thomas Bayes, is the fundamental theorem of Bayesian statistics. Its modern mathematical form and scientific application were developed by Pierre-Simon Laplace, who stated the principle in words rather than as an equation: the probability of a cause (given an event) is proportional to the probability of the event (given its cause). This principle is widely used in medical statistics to solve problems. Want to know more about how Bayes' theorem solves problems in medical statistics? Read our previous blog, BASICS OF BAYESIAN STATISTICS – SERIES I.
The Bayesian approach has six components: 1. the Prior Distribution, 2. the Likelihood Principle, 3. Posterior Probabilities, 4. Predictive Probability, 5. Exchangeability of Trials, and 6. Decision Rules.
1. Prior Distribution
Bayesian statistics begins with a prior belief, expressed as a prior distribution. The prior distribution is an essential part of the analysis and is typically based on data from previous trials. The key considerations when setting the prior are the information that goes into the prior distribution and the properties of the resulting posterior distribution. When the study has a large sample size and the parameters are well identified, the effect of the prior distribution on posterior inference is minor. Priors can be informative or uninformative, reflecting the strength of belief about the parameters.
Informative Priors: To develop an informative prior distribution, the statistician uses knowledge of the substantive problem, perhaps based on other data, together with expert opinion where available. Informative priors provide specific, definite information about a parameter and are usually based on previous studies. In Bayesian analysis, informative priors can provide meaningful results even for studies with small sample sizes.
Uninformative Priors: Non-informative priors are widely used when no prior information is available. When the prior is uninformative, the posterior is determined by the data (data-driven); when the prior is informative, the posterior is a mixture of the prior and the data.
The selection of the prior should be deliberate and careful, since the choice of prior distribution can be highly subjective and can substantively affect the results.
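The influence of the prior described above can be sketched with a conjugate Beta-Binomial update. The numbers below are purely illustrative (not from any real trial): a flat Beta(1, 1) prior stands in for an uninformative prior, and a Beta(20, 20) prior centred at 0.5 stands in for an informative one.

```python
# Sketch: effect of informative vs. uninformative priors in a
# Beta-Binomial model. A Beta(a, b) prior combined with s successes
# in n Bernoulli trials gives a Beta(a + s, b + n - s) posterior.

def posterior_mean(a, b, successes, n):
    """Posterior mean of a Beta(a, b) prior updated with binomial data."""
    return (a + successes) / (a + b + n)

s, n = 7, 10  # hypothetical trial: 7 responders out of 10 patients

flat = posterior_mean(1, 1, s, n)        # uninformative Beta(1, 1) prior
informed = posterior_mean(20, 20, s, n)  # informative Beta(20, 20) prior

print(f"uninformative prior -> posterior mean {flat:.3f}")      # 0.667
print(f"informative prior   -> posterior mean {informed:.3f}")  # 0.540
```

With only 10 patients, the informative prior pulls the estimate noticeably toward 0.5; with a much larger sample, the two posterior means would nearly coincide, matching the point made above about large, well-identified studies.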
2. Likelihood principle
In Bayesian analysis, quantities are of two types: observed and unobserved. Observed quantities are the data; unobserved quantities include parameters, missing data, future observations, and so on, which may have occurred in the past or may yet occur in the future. The likelihood function plays a central role in statistical inference. Suppose the parameter of interest in a clinical trial is represented by the Greek letter θ ("theta"). The likelihood principle then states that all the information relevant to making inferences about θ is contained in the observed data, not in other unobserved quantities. The likelihood is a mathematical representation of the relationship between the observed outcomes and the parameter θ, expressed as f(data | θ). Although the frequentist approach also uses the likelihood function, frequentist analyses do not generally adhere to the likelihood principle.
There are multiple ways in which a trial can be modified without altering the likelihood function. Adhering to the likelihood principle makes Bayesian clinical trials highly flexible with respect to the aspects below:
- Sample size modifications,
- Adaptive designs,
- Interim looks for the purpose of possibly stopping the trial early,
- Multiplicity.
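As a concrete sketch of the function f(data | θ) discussed above, consider a binomial likelihood for the hypothetical 7-responders-out-of-10 data (illustrative numbers, not from any real trial):

```python
import math

def binomial_likelihood(theta, successes, n):
    """Likelihood f(data | theta) for s successes in n Bernoulli trials."""
    return math.comb(n, successes) * theta**successes * (1 - theta)**(n - successes)

# The likelihood carries all the information the data provide about theta;
# it peaks near theta = 7/10, the observed response rate.
for theta in (0.3, 0.5, 0.7):
    print(f"theta={theta}: L={binomial_likelihood(theta, 7, 10):.4f}")
```

Any two datasets that produce proportional likelihood functions support the same inferences about θ, which is why the design modifications listed above leave Bayesian conclusions unchanged.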
3. Posterior probabilities
In Bayesian analysis, the main focus is obtaining the posterior distribution of the model parameters. The posterior probability is the probability of an event after all the relevant information or evidence has been taken into account, and it is closely related to the prior. The posterior probability is obtained by updating the prior probability using Bayes' theorem.
The prior probability represents the belief held before the new evidence is introduced, and the posterior probability takes this new information into account. In statistical terms, the posterior probability is the 'probability of event A occurring given that event B has occurred':

P(A|B) = P(B|A) × P(A) / P(B)

where
- A and B are two events,
- P(A) and P(B) are the known prior probabilities,
- P(B|A) is the known conditional probability.
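A minimal worked example of the update described above, using hypothetical diagnostic-test numbers (the prevalence, sensitivity, and false-positive rate below are assumptions chosen for illustration):

```python
# Bayes' theorem with hypothetical numbers:
# A = "patient has the disease", B = "test is positive".

p_a = 0.01              # prior P(A): disease prevalence (assumed)
p_b_given_a = 0.95      # P(B|A): test sensitivity (assumed)
p_b_given_not_a = 0.05  # P(B|not A): false-positive rate (assumed)

# Law of total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
posterior = p_b_given_a * p_a / p_b
print(f"P(disease | positive test) = {posterior:.3f}")  # 0.161
```

Even with a sensitive test, the posterior probability stays modest here because the prior (the 1% prevalence) is low, which illustrates how the posterior blends prior and evidence.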
4. Predictive probability
One of the benefits of the Bayesian approach is that once the posterior distribution has been obtained, predictive inference is a straightforward computation. Bayesian methods allow the derivation of the probability of unobserved outcomes given what has already been observed. This probability is called the predictive probability. The predictive distribution is the distribution of all possible unobserved values conditional on the observed values.
In clinical trials, the posterior distribution is used to decide when to stop a trial (based on predicted outcomes for patients not yet observed), to predict a clinical outcome from an earlier or more easily measured outcome for the same patient, for model checking, and so on.
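Continuing the illustrative Beta-Binomial setting, the posterior predictive distribution has a closed form (the Beta-Binomial pmf). The sketch below, with assumed numbers, computes the predictive probabilities for five hypothetical future patients given a Beta(8, 4) posterior:

```python
import math

def beta_binom_pred(a, b, m, k):
    """Posterior predictive P(k successes in m future patients)
    under a Beta(a, b) posterior (the Beta-Binomial pmf)."""
    def log_beta(x, y):
        # log of the Beta function, via math.lgamma for numerical stability
        return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    return math.comb(m, k) * math.exp(log_beta(k + a, m - k + b) - log_beta(a, b))

# Beta(8, 4) posterior, e.g. from 7/10 responders with a flat Beta(1, 1) prior;
# predictive distribution over 0..5 responders among 5 future patients:
probs = [beta_binom_pred(8, 4, 5, k) for k in range(6)]
print([round(p, 3) for p in probs])
print("sum =", round(sum(probs), 6))  # the probabilities sum to 1
```

Summing the tail of such a predictive distribution is exactly the kind of computation used for the early-stopping decisions mentioned above.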
5. Exchangeability of trials
The concept of exchangeability allows more flexible modeling of most experimental setups. Exchangeability means that two units can be swapped without affecting the results. Bayesian clinical trials can assume a further level of exchangeability when sufficient prior information is available from previous trials: the current trial is assumed to be exchangeable with the previous trials, which enables it to "borrow strength" from them while acknowledging that the trials are not identical in all respects. Exchangeable trials can be viewed as a representative sample from a super-population of clinical trials. Exchangeability of trials, and of patients within trials, is achieved through the use of Bayesian hierarchical models.
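The "borrowing strength" idea can be sketched with a simplified shrinkage calculation. All numbers below are hypothetical, and the variance components are assumed known for brevity; a full Bayesian hierarchical model would place priors on them and estimate them from the data.

```python
# Minimal sketch of borrowing strength across exchangeable trials:
# each trial's estimate is pulled toward the overall mean, with the
# amount of shrinkage set by the ratio of between-trial heterogeneity
# to total variance (both assumed known here for simplicity).

trial_means = [0.62, 0.55, 0.71]  # hypothetical response rates from 3 trials
within_var = 0.01                 # sampling variance of each trial mean (assumed)
between_var = 0.005               # between-trial heterogeneity (assumed)

overall = sum(trial_means) / len(trial_means)
weight = between_var / (between_var + within_var)  # shrinkage factor in [0, 1]

shrunk = [overall + weight * (m - overall) for m in trial_means]
print("overall mean:", round(overall, 3))
print("shrunk estimates:", [round(s, 3) for s in shrunk])
```

The extreme trials move toward the common mean while still differing from one another, which mirrors how hierarchical models let a current trial borrow from earlier exchangeable trials without treating them as identical.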
6. Decision rules
A decision rule specifies what action is to be taken based on the observed input. One common type of decision rule is traditional hypothesis testing. For Bayesian trials, a common decision rule considers a hypothesis to be demonstrated (with reasonable assurance) if its posterior probability is large enough.
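A sketch of such a posterior-probability threshold rule, again in the illustrative Beta-Binomial setting with assumed numbers (the 0.95 threshold and Beta(8, 4) posterior are hypothetical choices, not a recommended standard):

```python
import math

def beta_cdf(x, a, b, steps=100_000):
    """Numerical CDF of Beta(a, b) via a simple Riemann sum (stdlib only)."""
    const = math.exp(math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))
    h = x / steps
    total = 0.0
    for i in range(1, steps):
        t = i * h
        total += t**(a - 1) * (1 - t)**(b - 1)
    # the t = 0 endpoint contributes 0 for a > 1; truncation error is O(h)
    return const * h * total

# Posterior Beta(8, 4); hypothesis: response rate theta > 0.5
post_prob = 1 - beta_cdf(0.5, 8, 4)
decision = "success" if post_prob > 0.95 else "continue / fail"
print(f"P(theta > 0.5 | data) = {post_prob:.3f} -> {decision}")  # 0.887 -> continue / fail
```

Here the posterior probability that θ exceeds 0.5 is about 0.887, which falls short of the 0.95 threshold, so this hypothetical rule would not yet declare success.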
This post covers only the components of Bayesian analysis; the advantages of Bayesian methods will be covered later in the series.
Wish to know more about the components of Bayesian Approach? Feel free to write to us at email@example.com.