Here is a sample X = (X_1, ..., X_n) and there is an unknown parameter θ.
However, unlike the classical approach, the parameter θ is regarded as a random
variable: nature selects θ from some "prior" population with distribution
π(θ), then it selects the sample X according to the density f(x | θ). The goal
is to recover the posterior distribution π(θ | x).
Such recovery is accomplished by the repeated application of the Bayes
formula:

    π(θ | x) = f(x | θ) π(θ) / m(x)        (Bayesian technique)

Here f(x | θ) is the likelihood function, π(θ) is called the "prior"
distribution, and m(x) is a normalization constant:

    m(x) = ∫ f(x | θ) π(θ) dθ

The prior π(θ) may hold some prior knowledge about the parameter θ.
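As a concrete (hypothetical) illustration of the normalization constant, take a
binomial likelihood with k successes in n trials and a uniform prior on θ; the
model, the numbers, and the grid size below are illustrative choices, not from
the text, and the closed form used for the check is the standard Beta-function
identity.

```python
from math import comb

# Illustrative model: likelihood f(x | theta) = theta^k * (1 - theta)^(n - k)
# with a uniform prior pi(theta) = 1 on [0, 1].
n, k = 10, 3

# m(x) = integral of f(x | theta) * pi(theta) d(theta) over [0, 1],
# approximated by a midpoint Riemann sum.
steps = 10_000
m = sum(
    ((i + 0.5) / steps) ** k * (1 - (i + 0.5) / steps) ** (n - k)
    for i in range(steps)
) / steps

# Closed form: the Beta function B(k + 1, n - k + 1) = 1 / ((n + 1) * C(n, k)).
m_exact = 1 / ((n + 1) * comb(n, k))
```

The numerical and closed-form values agree, which is a quick sanity check that
m(x) really is just a constant determined by the data and the prior.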
There are at least three general strategies to choose the prior distribution:
non-informative (diffuse) prior, invariant prior (Jeffreys' principle) and
hierarchical modelling. For analytical convenience one should try to choose the
prior so that the posterior distribution π(θ | x), the likelihood f(x | θ) and
the prior π(θ) all belong to the same class of functions (normal, binomial,
exponential and so forth). Such a prior distribution is called "conjugate".
The principal technical tool is to drop normalization constants from all
calculations and track only the essential part. The normalization constant is
then recovered from the final solution according to

    ∫ π(θ | x) dθ = 1.

In particular, we write (Bayesian technique) as

    π(θ | x) ∝ f(x | θ) π(θ)
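This "drop the constant, recover it at the end" workflow can be sketched
numerically (the Beta-binomial model, numbers, and grid size are illustrative
choices, not from the text): track only the unnormalized product
f(x | θ) π(θ) on a grid, then rescale so the result integrates to one.

```python
n, k = 10, 7        # data: k successes in n binomial trials
a, b = 2.0, 2.0     # Beta(a, b) prior on the success probability

grid = [(i + 0.5) / 1000 for i in range(1000)]
# Unnormalized posterior: likelihood * prior with all constants dropped,
# proportional to theta^(k + a - 1) * (1 - theta)^(n - k + b - 1).
unnorm = [t ** (k + a - 1) * (1 - t) ** (n - k + b - 1) for t in grid]

# Recover the normalization constant at the end from sum(post) * dt = 1.
z = sum(unnorm) / 1000
post = [u / z for u in unnorm]

mean = sum(t * p for t, p in zip(grid, post)) / 1000
# Exact posterior is Beta(a + k, b + n - k), with mean (a + k)/(a + b + n).
```

The grid-based mean matches the exact conjugate answer 9/14, even though no
normalization constant was carried through the intermediate steps.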