In Bayesian probability theory, if the posterior distribution p(θ | x) is in the same probability distribution family as the prior probability distribution p(θ), the prior and posterior are then called conjugate distributions, and the prior is called a conjugate prior for the likelihood function p(x | θ). For example, the Gaussian family is conjugate to itself (or self-conjugate) with respect to a Gaussian likelihood function: if the likelihood function is Gaussian, choosing a Gaussian prior over the mean will ensure that the posterior distribution is also Gaussian. This means that the Gaussian distribution is a conjugate prior for a Gaussian likelihood. The concept, as well as the term "conjugate prior", was introduced by Howard Raiffa and Robert Schlaifer in their work on Bayesian decision theory.^{[1]} A similar concept had been discovered independently by George Alfred Barnard.^{[2]}
Consider the general problem of inferring a (continuous) distribution for a parameter θ given some datum or data x. From Bayes' theorem, the posterior distribution is equal to the product of the likelihood function $\theta \mapsto p(x\mid \theta )$ and prior $p(\theta )$, normalized (divided) by the probability of the data $p(x)$:

$p(\theta \mid x)=\frac{p(x\mid \theta )\,p(\theta )}{p(x)}=\frac{p(x\mid \theta )\,p(\theta )}{\int p(x\mid \theta ')\,p(\theta ')\,d\theta '}.$
Let the likelihood function be considered fixed; the likelihood function is usually well-determined from a statement of the data-generating process. Different choices of the prior distribution p(θ) may make the integral more or less difficult to calculate, and the product p(x | θ) × p(θ) may take one algebraic form or another. For certain choices of the prior, the posterior has the same algebraic form as the prior (generally with different parameter values). Such a choice is a conjugate prior.
A conjugate prior is an algebraic convenience, giving a closed-form expression for the posterior; otherwise, numerical integration may be necessary. Further, conjugate priors may give intuition by more transparently showing how a likelihood function updates a prior distribution.
All members of the exponential family have conjugate priors.^{[3]}
The form of the conjugate prior can generally be determined by inspection of the probability density or probability mass function of a distribution. For example, consider a random variable which consists of the number of successes $s$ in $n$ Bernoulli trials with unknown probability of success $q$ in [0,1]. This random variable will follow the binomial distribution, with a probability mass function of the form

$p(s)=\binom{n}{s}q^{s}(1-q)^{n-s}.$
The usual conjugate prior is the beta distribution with parameters ($\alpha $, $\beta $):

$p(q)=\frac{q^{\alpha -1}(1-q)^{\beta -1}}{\mathrm{B}(\alpha ,\beta )},$
where $\alpha $ and $\beta $ are chosen to reflect any existing belief or information ($\alpha $ = 1 and $\beta $ = 1 would give a uniform distribution) and Β($\alpha $, $\beta $) is the Beta function acting as a normalising constant.
In this context, $\alpha $ and $\beta $ are called hyperparameters (parameters of the prior), to distinguish them from parameters of the underlying model (here q). It is a typical characteristic of conjugate priors that the dimensionality of the hyperparameters is one greater than that of the parameters of the original distribution. If all parameters are scalar values, then this means that there will be one more hyperparameter than parameter; but this also applies to vector-valued and matrix-valued parameters. (See the general article on the exponential family, and consider also the Wishart distribution, conjugate prior of the covariance matrix of a multivariate normal distribution, for an example where a large dimensionality is involved.)
If we then sample this random variable and get s successes and f = n − s failures, we have

$p(q\mid s,f)\propto q^{s}(1-q)^{f}\times q^{\alpha -1}(1-q)^{\beta -1}=q^{\alpha +s-1}(1-q)^{\beta +f-1},$
which is another Beta distribution with parameters ($\alpha $ + s, $\beta $ + f). This posterior distribution could then be used as the prior for more samples, with the hyperparameters simply adding each extra piece of information as it comes.
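As a concrete illustration, here is a minimal Python sketch of this Beta–Binomial update (the hyperparameter and data values are invented for the example and are not from the text). It checks numerically that the closed-form posterior Beta(α + s, β + f) is proportional to likelihood × prior.

```python
import numpy as np
from scipy import stats

# Illustrative prior hyperparameters and data (not from the article)
alpha, beta = 2.0, 3.0   # Beta prior
s, f = 7, 4              # observed successes and failures

# Conjugate update: the posterior is Beta(alpha + s, beta + f)
posterior = stats.beta(alpha + s, beta + f)

# Sanity check: the posterior density should be proportional to
# binomial likelihood times Beta prior density at every q.
q = np.linspace(0.05, 0.95, 7)
unnormalized = stats.binom.pmf(s, s + f, q) * stats.beta.pdf(q, alpha, beta)
ratio = posterior.pdf(q) / unnormalized
print(np.allclose(ratio, ratio[0]))      # True: same shape up to a constant
print(posterior.mean())                  # (alpha + s) / (alpha + beta + s + f)
```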
It is often useful to think of the hyperparameters of a conjugate prior distribution as corresponding to having observed a certain number of pseudo-observations with properties specified by the parameters. For example, the values $\alpha $ and $\beta $ of a beta distribution can be thought of as corresponding to $\alpha - 1$ successes and $\beta - 1$ failures if the posterior mode is used to choose an optimal parameter setting, or to $\alpha $ successes and $\beta $ failures if the posterior mean is used to choose an optimal parameter setting. In general, for nearly all conjugate prior distributions, the hyperparameters can be interpreted in terms of pseudo-observations. This can help both in providing an intuition behind the often messy update equations and in choosing reasonable hyperparameters for a prior.
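This correspondence follows from the standard formulas for the mode and mean of the beta distribution:

$\operatorname{mode}[q]=\frac{\alpha -1}{\alpha +\beta -2}=\frac{(\alpha -1)}{(\alpha -1)+(\beta -1)},\qquad \operatorname{E}[q]=\frac{\alpha}{\alpha +\beta},$

so the mode looks like an empirical success frequency computed from $\alpha -1$ successes and $\beta -1$ failures, while the mean looks like one computed from $\alpha $ successes and $\beta $ failures.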
Conjugate priors are analogous to eigenfunctions in operator theory: if the process of changing from the prior to the posterior is thought of as an operator (the "conditioning operator"), then conjugate priors are distributions on which that operator acts in a well-understood way.
In both eigenfunctions and conjugate priors, there is a finite-dimensional space which is preserved by the operator: the output is of the same form (in the same space) as the input. This greatly simplifies the analysis, which would otherwise involve an infinite-dimensional space (the space of all functions, or of all distributions).
However, the processes are only analogous, not identical: conditioning is not linear, as the space of distributions is not closed under linear combination, only convex combination, and the posterior is only of the same form as the prior, not a scalar multiple.
Just as one can easily analyze how a linear combination of eigenfunctions evolves under application of an operator (because, with respect to these functions, the operator is diagonalized), one can easily analyze how a convex combination of conjugate priors evolves under conditioning; this is called using a hyperprior, and corresponds to using a mixture density of conjugate priors, rather than a single conjugate prior.
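To make the mixture case concrete, here is a minimal Python sketch (component weights and hyperparameters are invented for illustration) of conditioning a two-component mixture of beta priors on binomial data: each component is updated conjugately, and the mixture weights are re-weighted by each component's marginal likelihood of the data.

```python
import numpy as np
from scipy.special import betaln

# Mixture of conjugate (beta) priors; values are illustrative only.
weights = np.array([0.7, 0.3])
alphas = np.array([2.0, 20.0])
betas = np.array([2.0, 10.0])

s, f = 7, 3  # observed successes and failures

# Each component updates conjugately to Beta(alpha + s, beta + f).
alphas_post, betas_post = alphas + s, betas + f

# New weights are proportional to old weight times the component's marginal
# likelihood B(alpha+s, beta+f) / B(alpha, beta); the binomial coefficient
# is common to all components and cancels in the normalization.
log_w = np.log(weights) + betaln(alphas_post, betas_post) - betaln(alphas, betas)
weights_post = np.exp(log_w - log_w.max())
weights_post /= weights_post.sum()

print(weights_post, alphas_post, betas_post)
```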
One can think of conditioning on conjugate priors as defining a kind of (discrete time) dynamical system: from a given set of hyperparameters, incoming data updates these hyperparameters, so one can see the change in hyperparameters as a kind of "time evolution" of the system, corresponding to "learning". Starting at different points yields different flows over time. This is again analogous with the dynamical system defined by a linear operator, but note that since different samples lead to different inference, this is not simply dependent on time, but rather on data over time. For related approaches, see Recursive Bayesian estimation and Data assimilation.
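A minimal sketch of this "time evolution" view in Python, streaming simulated Bernoulli data through a Beta–Bernoulli update from two different starting priors (all numbers here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.binomial(1, 0.3, size=20)          # simulated observations

# Two different starting points (priors) fed the same data stream.
starts = {"uniform prior": (1.0, 1.0), "optimistic prior": (8.0, 2.0)}

for name, (a, b) in starts.items():
    trajectory = [(a, b)]
    for x in data:                            # each datum advances the "dynamics"
        a, b = a + x, b + (1 - x)             # Beta-Bernoulli conjugate update
        trajectory.append((a, b))
    print(name, "->", trajectory[-1])
```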
Suppose a rental car service operates in your city. Drivers can drop off and pick up cars anywhere inside the city limits. You can find and rent cars using an app.
Suppose you wish to find the probability that you can find a rental car within a short distance of your home address at any given time of day.
Over three days you look at the app at random times of the day and find the following number of cars within a short distance of your home address: $x=[3,4,1]$
If we assume the data comes from a Poisson distribution, we can compute the maximum likelihood estimate of the parameter of the model, which is $\lambda =\frac{3+4+1}{3}\approx 2.67$. Using this maximum likelihood estimate we can compute the probability that there will be at least one car available: $p(x>0)=1-p(x=0)=1-\frac{2.67^{0}e^{-2.67}}{0!}\approx 0.93$
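A quick numerical check of this calculation (plain NumPy; nothing here beyond the numbers already given in the text):

```python
import numpy as np

x = np.array([3, 4, 1])                # observed car counts
lam_mle = x.mean()                     # Poisson MLE is the sample mean, 8/3

# Probability of at least one car under the fitted Poisson model:
# p(x = 0) = exp(-lambda), so p(x > 0) = 1 - exp(-lambda).
print(lam_mle, 1 - np.exp(-lam_mle))   # ≈ 2.67, ≈ 0.93
```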
This is the Poisson distribution that is the most likely to have generated the observed data $x$. But the data could also have come from another Poisson distribution, e.g. one with $\lambda =3$, or $\lambda =2$, etc. In fact, there are infinitely many Poisson distributions that could have generated the observed data, and with relatively few data points we should be quite uncertain about which exact Poisson distribution generated them. Intuitively we should instead take a weighted average of the probability $p(x>0)$ under each of those Poisson distributions, weighted by how likely each is given the data we have observed, $x$.
Generally, this quantity is known as the posterior predictive distribution $p(\tilde{x}\mid \mathbf{x})={\int}_{\theta}p(\tilde{x}\mid \theta )\,p(\theta \mid \mathbf{x})\,d\theta$, where $\tilde{x}$ is a new data point, $\mathbf{x}$ is the observed data and $\theta $ are the parameters of the model. Using Bayes' theorem we can expand $p(\theta \mid \mathbf{x})=\frac{p(\mathbf{x}\mid \theta )\,p(\theta )}{p(\mathbf{x})}$, so that $p(\tilde{x}\mid \mathbf{x})={\int}_{\theta}p(\tilde{x}\mid \theta )\,\frac{p(\mathbf{x}\mid \theta )\,p(\theta )}{p(\mathbf{x})}\,d\theta$. Generally, this integral is hard to compute. However, if we choose a conjugate prior distribution $p(\theta )$, a closed-form expression can be derived. This is the posterior predictive column in the tables below.
Returning to our example, if we pick the Gamma distribution as our prior distribution over the rate of the Poisson distributions, then the posterior predictive is the negative binomial distribution, as can be seen from the last column in the table below. The Gamma distribution is parameterized by two hyperparameters $\alpha ,\beta $, which we have to choose. By looking at plots of the Gamma distribution, we pick $\alpha =\beta =2$, which seems to be a reasonable prior for the average number of cars. The choice of prior hyperparameters is inherently subjective and based on prior knowledge.
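For this conjugate pair, the predictive integral can be carried out in closed form. Writing the posterior as a Gamma distribution with shape $\alpha '$ and rate $\beta '$,

$p(\tilde{x}\mid \mathbf{x})=\int _{0}^{\infty}\frac{\lambda ^{\tilde{x}}e^{-\lambda}}{\tilde{x}!}\cdot \frac{{\beta '}^{\alpha '}\lambda ^{\alpha '-1}e^{-\beta '\lambda}}{\Gamma (\alpha ')}\,d\lambda =\frac{\Gamma (\tilde{x}+\alpha ')}{\tilde{x}!\,\Gamma (\alpha ')}\left(\frac{\beta '}{\beta '+1}\right)^{\alpha '}\left(\frac{1}{\beta '+1}\right)^{\tilde{x}},$

which is exactly the negative binomial distribution $\operatorname{NB}\left(\tilde{x}\mid \alpha ',\frac{1}{1+\beta '}\right)$ listed in the Poisson row of the table.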
Given the prior hyperparameters $\alpha $ and $\beta $ we can compute the posterior hyperparameters ${\alpha}^{\prime}=\alpha +\sum _{i}{x}_{i}=2+3+4+1=10$ and ${\beta}^{\prime}=\beta +n=2+3=5$
Given the posterior hyperparameters, we can finally compute the posterior predictive probability of finding a car: $p(x>0\mid \mathbf{x})=1-p(x=0\mid \mathbf{x})=1-\operatorname{NB}\left(0\mid 10,\frac{1}{1+5}\right)\approx 0.84$
This much more conservative estimate reflects the uncertainty in the model parameters, which the posterior predictive takes into account.
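The same number can be reproduced with SciPy's negative binomial. Note that scipy.stats.nbinom uses a (number of successes, success probability) parameterization, so the table's $\operatorname{NB}(\tilde{x}\mid \alpha ',\frac{1}{1+\beta '})$ corresponds to nbinom(α′, β′/(1 + β′)); this mapping is the only assumption beyond the numbers in the text.

```python
from scipy import stats

x = [3, 4, 1]
alpha, beta = 2.0, 2.0                    # Gamma prior hyperparameters
alpha_post = alpha + sum(x)               # 10
beta_post = beta + len(x)                 # 5

# Posterior predictive is negative binomial (see the Poisson row below).
p_zero = stats.nbinom.pmf(0, alpha_post, beta_post / (1 + beta_post))
print(1 - p_zero)                         # ≈ 0.84
```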
Let n denote the number of observations. In all cases below, the data is assumed to consist of n points ${x}_{1},\dots ,{x}_{n}$ (which will be random vectors in the multivariate cases).
If the likelihood function belongs to the exponential family, then a conjugate prior exists, often also in the exponential family; see Exponential family: Conjugate distributions.
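In natural-parameter form this can be stated explicitly: if the likelihood is $p(x\mid \theta )=h(x)\exp \left(\eta (\theta )^{\mathsf{T}}T(x)-A(\theta )\right)$, then a prior of the form

$p(\theta \mid \chi ,\nu )\propto \exp \left(\eta (\theta )^{\mathsf{T}}\chi -\nu A(\theta )\right)$

is conjugate, and after observing $x_{1},\dots ,x_{n}$ the posterior has the same form with hyperparameters $\chi '=\chi +\sum _{i=1}^{n}T(x_{i})$ and $\nu '=\nu +n$.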

| Likelihood | Model parameters | Conjugate prior distribution | Prior hyperparameters | Posterior hyperparameters^{[note 1]} | Interpretation of hyperparameters | Posterior predictive^{[note 2]} |
|---|---|---|---|---|---|---|
| Bernoulli | p (probability) | Beta | $\alpha ,\beta $ | $\alpha +\sum _{i=1}^{n}x_{i},\ \beta +n-\sum _{i=1}^{n}x_{i}$ | $\alpha $ successes, $\beta $ failures^{[note 3]} | $p(\tilde{x}=1)=\frac{\alpha '}{\alpha '+\beta '}$ |
| Binomial | p (probability) | Beta | $\alpha ,\beta $ | $\alpha +\sum _{i=1}^{n}x_{i},\ \beta +\sum _{i=1}^{n}N_{i}-\sum _{i=1}^{n}x_{i}$ | $\alpha $ successes, $\beta $ failures^{[note 3]} | $\operatorname{BetaBin}(\tilde{x}\mid \alpha ',\beta ')$ (beta-binomial) |
| Negative binomial with known failure number, r | p (probability) | Beta | $\alpha ,\beta $ | $\alpha +\sum _{i=1}^{n}x_{i},\ \beta +rn$ | $\alpha $ total successes, $\beta $ failures^{[note 3]} (i.e., $\frac{\beta}{r}$ experiments, assuming $r$ stays fixed) | $\operatorname{BetaNegBin}(\tilde{x}\mid \alpha ',\beta ')$ |
| Poisson | λ (rate) | Gamma | $k,\theta $ | $k+\sum _{i=1}^{n}x_{i},\ \frac{\theta}{n\theta +1}$ | $k$ total occurrences in $\frac{1}{\theta}$ intervals | $\operatorname{NB}(\tilde{x}\mid k',\theta ')$ (negative binomial) |
| Poisson | λ (rate) | Gamma | $\alpha ,\beta $^{[note 4]} | $\alpha +\sum _{i=1}^{n}x_{i},\ \beta +n$ | $\alpha $ total occurrences in $\beta $ intervals | $\operatorname{NB}\left(\tilde{x}\mid \alpha ',\frac{1}{1+\beta '}\right)$ (negative binomial) |
| Categorical | p (probability vector), k (number of categories; i.e., size of p) | Dirichlet | $\alpha $ | $\alpha +(c_{1},\dots ,c_{k}),$ where $c_{i}$ is the number of observations in category i | $\alpha _{i}$ occurrences of category $i$^{[note 3]} | $p(\tilde{x}=i)=\frac{\alpha _{i}'}{\sum _{i}\alpha _{i}'}=\frac{\alpha _{i}+c_{i}}{\sum _{i}\alpha _{i}+n}$ |
| Multinomial | p (probability vector), k (number of categories; i.e., size of p) | Dirichlet | $\alpha $ | $\alpha +\sum _{i=1}^{n}x_{i}$ | $\alpha _{i}$ occurrences of category $i$^{[note 3]} | $\operatorname{DirMult}(\tilde{x}\mid \alpha ')$ (Dirichlet-multinomial) |
| Hypergeometric with known total population size, N | M (number of target members) | Beta-binomial^{[4]} | $n=N,\alpha ,\beta $ | $\alpha +\sum _{i=1}^{n}x_{i},\ \beta +\sum _{i=1}^{n}N_{i}-\sum _{i=1}^{n}x_{i}$ | $\alpha $ successes, $\beta $ failures^{[note 3]} | |
| Geometric | p_0 (probability) | Beta | $\alpha ,\beta $ | $\alpha +n,\ \beta +\sum _{i=1}^{n}x_{i}$ | $\alpha $ experiments, $\beta $ total failures^{[note 3]} | |

| Likelihood | Model parameters | Conjugate prior distribution | Prior hyperparameters | Posterior hyperparameters^{[note 1]} | Interpretation of hyperparameters | Posterior predictive^{[note 5]} |
|---|---|---|---|---|---|---|
| Normal with known variance σ² | μ (mean) | Normal | $\mu _{0},\sigma _{0}^{2}$ | $\frac{1}{\frac{1}{\sigma _{0}^{2}}+\frac{n}{\sigma ^{2}}}\left(\frac{\mu _{0}}{\sigma _{0}^{2}}+\frac{\sum _{i=1}^{n}x_{i}}{\sigma ^{2}}\right),\ \left(\frac{1}{\sigma _{0}^{2}}+\frac{n}{\sigma ^{2}}\right)^{-1}$ | mean was estimated from observations with total precision (sum of all individual precisions) $1/\sigma _{0}^{2}$ and with sample mean $\mu _{0}$ | $N(\tilde{x}\mid \mu _{0}',{\sigma _{0}^{2}}'+\sigma ^{2})$^{[5]} |
| Normal with known precision τ | μ (mean) | Normal | $\mu _{0},\tau _{0}$ | $\frac{\tau _{0}\mu _{0}+\tau \sum _{i=1}^{n}x_{i}}{\tau _{0}+n\tau},\ \tau _{0}+n\tau $ | mean was estimated from observations with total precision (sum of all individual precisions) $\tau _{0}$ and with sample mean $\mu _{0}$ | $N\left(\tilde{x}\mid \mu _{0}',\frac{1}{\tau _{0}'}+\frac{1}{\tau}\right)$^{[5]} |
| Normal with known mean μ | σ² (variance) | Inverse gamma | $\alpha ,\beta $^{[note 6]} | $\alpha +\frac{n}{2},\ \beta +\frac{\sum _{i=1}^{n}(x_{i}-\mu )^{2}}{2}$ | variance was estimated from $2\alpha $ observations with sample variance $\beta /\alpha $ (i.e. with sum of squared deviations $2\beta $, where deviations are from known mean $\mu $) | $t_{2\alpha '}(\tilde{x}\mid \mu ,\sigma ^{2}=\beta '/\alpha ')$^{[5]} |
| Normal with known mean μ | σ² (variance) | Scaled inverse chi-squared | $\nu ,\sigma _{0}^{2}$ | $\nu +n,\ \frac{\nu \sigma _{0}^{2}+\sum _{i=1}^{n}(x_{i}-\mu )^{2}}{\nu +n}$ | variance was estimated from $\nu $ observations with sample variance $\sigma _{0}^{2}$ | $t_{\nu '}(\tilde{x}\mid \mu ,{\sigma _{0}^{2}}')$^{[5]} |
| Normal with known mean μ | τ (precision) | Gamma | $\alpha ,\beta $^{[note 4]} | $\alpha +\frac{n}{2},\ \beta +\frac{\sum _{i=1}^{n}(x_{i}-\mu )^{2}}{2}$ | precision was estimated from $2\alpha $ observations with sample variance $\beta /\alpha $ (i.e. with sum of squared deviations $2\beta $, where deviations are from known mean $\mu $) | $t_{2\alpha '}(\tilde{x}\mid \mu ,\sigma ^{2}=\beta '/\alpha ')$^{[5]} |
| Normal^{[note 7]} | μ and σ² (assuming exchangeability) | Normal-inverse gamma | $\mu _{0},\nu ,\alpha ,\beta $ | $\frac{\nu \mu _{0}+n\bar{x}}{\nu +n},\ \nu +n,\ \alpha +\frac{n}{2},\ \beta +\frac{1}{2}\sum _{i=1}^{n}(x_{i}-\bar{x})^{2}+\frac{n\nu}{\nu +n}\frac{(\bar{x}-\mu _{0})^{2}}{2}$ | mean was estimated from $\nu $ observations with sample mean $\mu _{0}$; variance was estimated from $2\alpha $ observations with sample mean $\mu _{0}$ and sum of squared deviations $2\beta $ | $t_{2\alpha '}\left(\tilde{x}\mid \mu ',\frac{\beta '(\nu '+1)}{\nu '\alpha '}\right)$^{[5]} |
| Normal | μ and τ (assuming exchangeability) | Normal-gamma | $\mu _{0},\nu ,\alpha ,\beta $ | $\frac{\nu \mu _{0}+n\bar{x}}{\nu +n},\ \nu +n,\ \alpha +\frac{n}{2},\ \beta +\frac{1}{2}\sum _{i=1}^{n}(x_{i}-\bar{x})^{2}+\frac{n\nu}{\nu +n}\frac{(\bar{x}-\mu _{0})^{2}}{2}$ | mean was estimated from $\nu $ observations with sample mean $\mu _{0}$, and precision was estimated from $2\alpha $ observations with sample mean $\mu _{0}$ and sum of squared deviations $2\beta $ | $t_{2\alpha '}\left(\tilde{x}\mid \mu ',\frac{\beta '(\nu '+1)}{\alpha '\nu '}\right)$^{[5]} |
| Multivariate normal with known covariance matrix Σ | μ (mean vector) | Multivariate normal | $\mu _{0},\Sigma _{0}$ | $\left(\Sigma _{0}^{-1}+n\Sigma ^{-1}\right)^{-1}\left(\Sigma _{0}^{-1}\mu _{0}+n\Sigma ^{-1}\bar{x}\right),\ \left(\Sigma _{0}^{-1}+n\Sigma ^{-1}\right)^{-1}$ | mean was estimated from observations with total precision (sum of all individual precisions) $\Sigma _{0}^{-1}$ and with sample mean $\mu _{0}$ | $N(\tilde{x}\mid \mu _{0}',\Sigma _{0}'+\Sigma )$^{[5]} |
| Multivariate normal with known precision matrix Λ | μ (mean vector) | Multivariate normal | $\mu _{0},\Lambda _{0}$ | $\left(\Lambda _{0}+n\Lambda \right)^{-1}\left(\Lambda _{0}\mu _{0}+n\Lambda \bar{x}\right),\ \left(\Lambda _{0}+n\Lambda \right)$ | mean was estimated from observations with total precision (sum of all individual precisions) $\Lambda _{0}$ and with sample mean $\mu _{0}$ | $N\left(\tilde{x}\mid \mu _{0}',({\Lambda _{0}'}^{-1}+\Lambda ^{-1})^{-1}\right)$^{[5]} |
| Multivariate normal with known mean μ | Σ (covariance matrix) | Inverse-Wishart | $\nu ,\Psi $ | $n+\nu ,\ \Psi +\sum _{i=1}^{n}(x_{i}-\mu )(x_{i}-\mu )^{T}$ | covariance matrix was estimated from $\nu $ observations with sum of pairwise deviation products $\Psi $ | $t_{\nu '-p+1}\left(\tilde{x}\mid \mu ,\frac{1}{\nu '-p+1}\Psi '\right)$^{[5]} |
| Multivariate normal with known mean μ | Λ (precision matrix) | Wishart | $\nu ,V$ | $n+\nu ,\ \left(V^{-1}+\sum _{i=1}^{n}(x_{i}-\mu )(x_{i}-\mu )^{T}\right)^{-1}$ | covariance matrix was estimated from $\nu $ observations with sum of pairwise deviation products $V^{-1}$ | $t_{\nu '-p+1}\left(\tilde{x}\mid \mu ,\frac{1}{\nu '-p+1}{V'}^{-1}\right)$^{[5]} |
| Multivariate normal | μ (mean vector) and Σ (covariance matrix) | normal-inverse-Wishart | $\mu _{0},\kappa _{0},\nu _{0},\Psi $ | $\frac{\kappa _{0}\mu _{0}+n\bar{x}}{\kappa _{0}+n},\ \kappa _{0}+n,\ \nu _{0}+n,\ \Psi +C+\frac{\kappa _{0}n}{\kappa _{0}+n}(\bar{x}-\mu _{0})(\bar{x}-\mu _{0})^{T}$, where $C=\sum _{i=1}^{n}(x_{i}-\bar{x})(x_{i}-\bar{x})^{T}$ | mean was estimated from $\kappa _{0}$ observations with sample mean $\mu _{0}$; covariance matrix was estimated from $\nu _{0}$ observations with sample mean $\mu _{0}$ and with sum of pairwise deviation products $\Psi =\nu _{0}\Sigma _{0}$ | $t_{\nu _{0}'-p+1}\left(\tilde{x}\mid \mu _{0}',\frac{\kappa _{0}'+1}{\kappa _{0}'(\nu _{0}'-p+1)}\Psi '\right)$^{[5]} |
| Multivariate normal | μ (mean vector) and Λ (precision matrix) | normal-Wishart | $\mu _{0},\kappa _{0},\nu _{0},V$ | $\frac{\kappa _{0}\mu _{0}+n\bar{x}}{\kappa _{0}+n},\ \kappa _{0}+n,\ \nu _{0}+n,\ \left(V^{-1}+C+\frac{\kappa _{0}n}{\kappa _{0}+n}(\bar{x}-\mu _{0})(\bar{x}-\mu _{0})^{T}\right)^{-1}$ | mean was estimated from $\kappa _{0}$ observations with sample mean $\mu _{0}$; covariance matrix was estimated from $\nu _{0}$ observations with sample mean $\mu _{0}$ and with sum of pairwise deviation products $V^{-1}$ | $t_{\nu _{0}'-p+1}\left(\tilde{x}\mid \mu _{0}',\frac{\kappa _{0}'+1}{\kappa _{0}'(\nu _{0}'-p+1)}{V'}^{-1}\right)$^{[5]} |
| Uniform, $U(0,\theta )$ | $\theta $ | Pareto | $x_{m},k$ | $\max \{x_{1},\dots ,x_{n},x_{m}\},\ k+n$ | $k$ observations with maximum value $x_{m}$ | |
| Pareto with known minimum x_m | k (shape) | Gamma | $\alpha ,\beta $ | $\alpha +n,\ \beta +\sum _{i=1}^{n}\ln \frac{x_{i}}{x_{m}}$ | $\alpha $ observations with sum $\beta $ of the order of magnitude of each observation (i.e. the logarithm of the ratio of each observation to the minimum $x_{m}$) | |
| Weibull with known shape β | θ (scale) | Inverse gamma^{[4]} | $a,b$ | $a+n,\ b+\sum _{i=1}^{n}x_{i}^{\beta}$ | $a$ observations with sum $b$ of the β'th power of each observation | |
| Log-normal | | Same as for the normal distribution after taking the natural logarithm of the data | | | | |
| Exponential | λ (rate) | Gamma | $\alpha ,\beta $^{[note 4]} | $\alpha +n,\ \beta +\sum _{i=1}^{n}x_{i}$ | $\alpha -1$ observations that sum to $\beta $^{[6]} | $\operatorname{Lomax}(\tilde{x}\mid \beta ',\alpha ')$ (Lomax distribution) |
| Gamma with known shape α | β (rate) | Gamma | $\alpha _{0},\beta _{0}$ | $\alpha _{0}+n\alpha ,\ \beta _{0}+\sum _{i=1}^{n}x_{i}$ | $\alpha _{0}/\alpha $ observations with sum $\beta _{0}$ | $\operatorname{CG}(\tilde{x}\mid \alpha ,\alpha _{0}',\beta _{0}')=\beta '(\tilde{x}\mid \alpha ,\alpha _{0}',1,\beta _{0}')$^{[note 8]} |
| Inverse Gamma with known shape α | β (inverse scale) | Gamma | $\alpha _{0},\beta _{0}$ | $\alpha _{0}+n\alpha ,\ \beta _{0}+\sum _{i=1}^{n}\frac{1}{x_{i}}$ | $\alpha _{0}/\alpha $ observations with sum $\beta _{0}$ | |
| Gamma with known rate β | α (shape) | $\propto \frac{a^{\alpha -1}\beta ^{\alpha c}}{\Gamma (\alpha )^{b}}$ | $a,b,c$ | $a\prod _{i=1}^{n}x_{i},\ b+n,\ c+n$ | $b$ or $c$ observations ($b$ for estimating $\alpha $, $c$ for estimating $\beta $) with product $a$ | |
| Gamma^{[4]} | α (shape), β (inverse scale) | $\propto \frac{p^{\alpha -1}e^{-\beta q}}{\Gamma (\alpha )^{r}\beta ^{-\alpha s}}$ | $p,q,r,s$ | $p\prod _{i=1}^{n}x_{i},\ q+\sum _{i=1}^{n}x_{i},\ r+n,\ s+n$ | $\alpha $ was estimated from $r$ observations with product $p$; $\beta $ was estimated from $s$ observations with sum $q$ | |
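As an illustration of how the rows above are read, here is a minimal Python sketch of the "Normal with known variance" row, with simulated data and invented hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0                          # known observation standard deviation
mu0, sigma0_sq = 0.0, 10.0           # prior hyperparameters (illustrative)
x = rng.normal(3.0, sigma, size=50)  # simulated data

# Posterior hyperparameters from the "Normal with known variance" row.
posterior_precision = 1.0 / sigma0_sq + len(x) / sigma**2
mu_post = (mu0 / sigma0_sq + x.sum() / sigma**2) / posterior_precision
var_post = 1.0 / posterior_precision

# Posterior predictive for a new observation: N(mu_post, var_post + sigma^2).
print(mu_post, var_post, var_post + sigma**2)
```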