
Distributions Provided by mniw

Martin Lysy

2024-09-20

Wishart Distribution

The Wishart distribution on a random positive-definite matrix \({\boldsymbol{X}}_{q\times q}\) is denoted \({\boldsymbol{X}}\sim \operatorname{Wish}({\boldsymbol{\Psi}}, \nu)\), and defined as \({\boldsymbol{X}}= ({\boldsymbol{L}}{\boldsymbol{Z}})({\boldsymbol{L}}{\boldsymbol{Z}})'\), where \({\boldsymbol{L}}\) is the lower Cholesky factor of the scale matrix \({\boldsymbol{\Psi}}= {\boldsymbol{L}}{\boldsymbol{L}}'\), and \({\boldsymbol{Z}}_{q\times q}\) is a random lower-triangular matrix with independent elements \(Z_{ii} \sim \chi_{(\nu - i + 1)}\) and \(Z_{ij} \sim \operatorname{Normal}(0,1)\) for \(i > j\).
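
As a concrete illustration, this construction can be coded directly in base R. The following is a minimal sketch with a hypothetical helper name rwish1; the mniw package provides its own, more efficient samplers.

```r
# One draw X ~ Wish(Psi, nu) via X = (L Z)(L Z)'.
rwish1 <- function(Psi, nu) {
  q <- nrow(Psi)
  L <- t(chol(Psi))                                # lower Cholesky factor, Psi = L L'
  Z <- matrix(0, q, q)
  Z[lower.tri(Z)] <- rnorm(q * (q - 1) / 2)        # off-diagonal elements ~ N(0,1)
  diag(Z) <- sqrt(rchisq(q, df = nu - (1:q) + 1))  # Z_ii ~ chi_(nu-i+1)
  LZ <- L %*% Z
  LZ %*% t(LZ)
}
```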

The log-density of the Wishart distribution is

\[ \log p({\boldsymbol{X}}\mid {\boldsymbol{\Psi}}, \nu) = -\textstyle{\frac{1}{2}} \left[\mathrm{tr}({\boldsymbol{\Psi}}^{-1} {\boldsymbol{X}}) + (q+1-\nu)\log |{\boldsymbol{X}}| + \nu \log |{\boldsymbol{\Psi}}| + \nu q \log(2) + 2 \log \Gamma_q(\textstyle{\frac{\nu }{2}})\right], \]

where \(\Gamma_n(x)\) is the multivariate Gamma function defined as

\[ \Gamma_n(x) = \pi^{n(n-1)/4} \prod_{j=1}^n \Gamma\big(x + \textstyle{\frac{1}{2}} (1-j)\big). \]
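
For reference, these two formulas transcribe directly into base R. The helper names lmvgamma and lwish are hypothetical, written only for this illustration:

```r
# log of the multivariate gamma function, log Gamma_n(x)
lmvgamma <- function(x, n) {
  n * (n - 1) / 4 * log(pi) + sum(lgamma(x + (1 - (1:n)) / 2))
}

# Wishart log-density, transcribing the formula above
lwish <- function(X, Psi, nu) {
  q <- nrow(X)
  ldetX <- as.numeric(determinant(X)$modulus)      # log|X|
  ldetPsi <- as.numeric(determinant(Psi)$modulus)  # log|Psi|
  -0.5 * (sum(diag(solve(Psi, X))) + (q + 1 - nu) * ldetX +
            nu * ldetPsi + nu * q * log(2) + 2 * lmvgamma(nu / 2, q))
}
```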

Inverse-Wishart Distribution

The Inverse-Wishart distribution \({\boldsymbol{X}}\sim \operatorname{InvWish}({\boldsymbol{\Psi}}, \nu)\) is defined as \({\boldsymbol{X}}^{-1} \sim \operatorname{Wish}({\boldsymbol{\Psi}}^{-1}, \nu)\). Its log-density is given by

\[ \log p({\boldsymbol{X}}\mid {\boldsymbol{\Psi}}, \nu) = -\textstyle{\frac{1}{2}} \left[\mathrm{tr}({\boldsymbol{\Psi}}{\boldsymbol{X}}^{-1}) + (\nu+q+1) \log |{\boldsymbol{X}}| - \nu \log |{\boldsymbol{\Psi}}| + \nu q \log(2) + 2 \log \Gamma_q(\textstyle{\frac{\nu }{2}})\right]. \]
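
Under this definition, a draw from the Inverse-Wishart can be obtained by inverting a Wishart draw with the inverted scale matrix. A one-line sketch reusing the hypothetical rwish1 helper above:

```r
# One draw V ~ InvWish(Psi, nu), via V^{-1} ~ Wish(Psi^{-1}, nu).
riwish1 <- function(Psi, nu) solve(rwish1(solve(Psi), nu))
```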

Properties

If \({\boldsymbol{X}}_{q\times q} \sim \operatorname{Wish}({\boldsymbol{\Psi}},\nu)\), then for a nonzero vector \({\boldsymbol{a}}\in \mathbb R^q\) we have

\[ \frac{{\boldsymbol{a}}'{\boldsymbol{X}}{\boldsymbol{a}}}{{\boldsymbol{a}}'{\boldsymbol{\Psi}}{\boldsymbol{a}}} \sim \chi^2_{(\nu)}. \]
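
This property lends itself to a quick Monte Carlo sanity check (a sketch reusing the hypothetical rwish1 helper above; the values of Psi, nu, and a are arbitrary):

```r
set.seed(1)
Psi <- crossprod(matrix(rnorm(9), 3, 3)) + diag(3)  # arbitrary positive-definite scale
nu <- 7
a <- rnorm(3)
x <- replicate(1e4, {
  X <- rwish1(Psi, nu)
  drop(crossprod(a, X %*% a) / crossprod(a, Psi %*% a))
})
c(mean(x), var(x))  # should be close to nu = 7 and 2 * nu = 14
```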

Matrix-Normal Distribution

The Matrix-Normal distribution on a random matrix \({\boldsymbol{X}}_{p \times q}\) is denoted \({\boldsymbol{X}}\sim \operatorname{MatNorm}({\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}_R, {\boldsymbol{\Sigma}}_C)\), and defined as \({\boldsymbol{X}}= {\boldsymbol{L}}{\boldsymbol{Z}}{\boldsymbol{U}}+ {\boldsymbol{\Lambda}}\), where \({\boldsymbol{Z}}_{p \times q}\) is a random matrix of iid \(\operatorname{Normal}(0,1)\) entries, \({\boldsymbol{L}}\) is the lower Cholesky factor of the row-variance matrix \({\boldsymbol{\Sigma}}_R = {\boldsymbol{L}}{\boldsymbol{L}}'\), and \({\boldsymbol{U}}\) is the upper Cholesky factor of the column-variance matrix \({\boldsymbol{\Sigma}}_C = {\boldsymbol{U}}'{\boldsymbol{U}}\).
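
A base-R sketch of this construction (hypothetical helper rmnorm1, again bypassing the package's own samplers):

```r
# One draw X ~ MatNorm(Lambda, SigmaR, SigmaC) via X = L Z U + Lambda.
rmnorm1 <- function(Lambda, SigmaR, SigmaC) {
  p <- nrow(Lambda); q <- ncol(Lambda)
  L <- t(chol(SigmaR))             # lower Cholesky factor of the row variance
  U <- chol(SigmaC)                # upper Cholesky factor of the column variance
  Z <- matrix(rnorm(p * q), p, q)  # iid N(0,1) entries
  L %*% Z %*% U + Lambda
}
```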

The log-density of the Matrix-Normal distribution is

\[ \log p({\boldsymbol{X}}\mid {\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}_R, {\boldsymbol{\Sigma}}_C) = -\textstyle{\frac{1}{2}} \left[\mathrm{tr}\big({\boldsymbol{\Sigma}}_C^{-1}({\boldsymbol{X}}-{\boldsymbol{\Lambda}})'{\boldsymbol{\Sigma}}_R^{-1}({\boldsymbol{X}}-{\boldsymbol{\Lambda}})\big) + pq \log(2\pi) + p \log |{\boldsymbol{\Sigma}}_C| + q \log |{\boldsymbol{\Sigma}}_R|\right]. \]
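
A direct transcription of this density (hypothetical helper lmnorm):

```r
# Matrix-Normal log-density, transcribing the formula above.
lmnorm <- function(X, Lambda, SigmaR, SigmaC) {
  p <- nrow(X); q <- ncol(X)
  E <- X - Lambda
  -0.5 * (sum(diag(solve(SigmaC, t(E) %*% solve(SigmaR, E)))) +
            p * q * log(2 * pi) +
            p * as.numeric(determinant(SigmaC)$modulus) +
            q * as.numeric(determinant(SigmaR)$modulus))
}
```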

Properties

If \({\boldsymbol{X}}_{p \times q} \sim \operatorname{MatNorm}({\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}_R, {\boldsymbol{\Sigma}}_C)\), then for nonzero vectors \({\boldsymbol{a}}\in \mathbb R^p\) and \({\boldsymbol{b}}\in \mathbb R^q\) we have

\[ {\boldsymbol{a}}' {\boldsymbol{X}}{\boldsymbol{b}}\sim \operatorname{Normal}({\boldsymbol{a}}' {\boldsymbol{\Lambda}}{\boldsymbol{b}}, {\boldsymbol{a}}'{\boldsymbol{\Sigma}}_R{\boldsymbol{a}}\cdot {\boldsymbol{b}}'{\boldsymbol{\Sigma}}_C{\boldsymbol{b}}). \]

Matrix-Normal Inverse-Wishart Distribution

The Matrix-Normal Inverse-Wishart Distribution on a random matrix \({\boldsymbol{X}}_{p \times q}\) and random positive-definite matrix \({\boldsymbol{V}}_{q\times q}\) is denoted \(({\boldsymbol{X}},{\boldsymbol{V}}) \sim \operatorname{MNIW}({\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}, {\boldsymbol{\Psi}}, \nu)\), and defined as

\[ \begin{aligned} {\boldsymbol{X}}\mid {\boldsymbol{V}}& \sim \operatorname{MatNorm}({\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}, {\boldsymbol{V}}) \\ {\boldsymbol{V}}& \sim \operatorname{InvWish}({\boldsymbol{\Psi}}, \nu). \end{aligned} \]
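
Sampling from the MNIW follows directly from this hierarchy by composition. A sketch reusing the hypothetical riwish1 and rmnorm1 helpers above:

```r
# One draw (X, V) ~ MNIW(Lambda, Sigma, Psi, nu).
rmniw1 <- function(Lambda, Sigma, Psi, nu) {
  V <- riwish1(Psi, nu)           # V ~ InvWish(Psi, nu)
  X <- rmnorm1(Lambda, Sigma, V)  # X | V ~ MatNorm(Lambda, Sigma, V)
  list(X = X, V = V)
}
```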

Properties

The MNIW distribution is the conjugate prior for the multivariate-response regression model

\[ {\boldsymbol{Y}}_{n \times q} \sim \operatorname{MatNorm}({\boldsymbol{X}}_{n\times p} {\boldsymbol{\beta}}_{p \times q}, {\boldsymbol{V}}, {\boldsymbol{\Sigma}}). \]

That is, if \(({\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}) \sim \operatorname{MNIW}({\boldsymbol{\Lambda}}, {\boldsymbol{\Omega}}^{-1}, {\boldsymbol{\Psi}}, \nu)\), then

\[ {\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}\mid {\boldsymbol{Y}}\sim \operatorname{MNIW}(\hat {\boldsymbol{\Lambda}}, \hat {\boldsymbol{\Omega}}^{-1}, \hat {\boldsymbol{\Psi}}, \hat \nu), \]

where

\[ \begin{aligned} \hat {\boldsymbol{\Omega}}& = {\boldsymbol{X}}'{\boldsymbol{V}}^{-1}{\boldsymbol{X}}+ {\boldsymbol{\Omega}} & \hat {\boldsymbol{\Psi}}& = {\boldsymbol{\Psi}}+ {\boldsymbol{Y}}'{\boldsymbol{V}}^{-1}{\boldsymbol{Y}}+ {\boldsymbol{\Lambda}}'{\boldsymbol{\Omega}}{\boldsymbol{\Lambda}}- \hat {\boldsymbol{\Lambda}}'\hat {\boldsymbol{\Omega}}\hat {\boldsymbol{\Lambda}} \\ \hat {\boldsymbol{\Lambda}}& = \hat {\boldsymbol{\Omega}}^{-1}({\boldsymbol{X}}'{\boldsymbol{V}}^{-1}{\boldsymbol{Y}}+ {\boldsymbol{\Omega}}{\boldsymbol{\Lambda}}) & \hat \nu & = \nu + n. \end{aligned} \]
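
These updates transcribe directly into code. Below is a sketch with a hypothetical helper mniw_post; the data Y and X, the row-variance matrix V, and the prior hyperparameters are assumed given:

```r
# MNIW conjugate posterior hyperparameters for the regression model above.
mniw_post <- function(Y, X, V, Lambda, Omega, Psi, nu) {
  Vi_X <- solve(V, X)                      # V^{-1} X
  Vi_Y <- solve(V, Y)                      # V^{-1} Y
  Omega_hat <- crossprod(X, Vi_X) + Omega  # X'V^{-1}X + Omega
  Lambda_hat <- solve(Omega_hat, crossprod(X, Vi_Y) + Omega %*% Lambda)
  Psi_hat <- Psi + crossprod(Y, Vi_Y) + t(Lambda) %*% Omega %*% Lambda -
    t(Lambda_hat) %*% Omega_hat %*% Lambda_hat
  list(Lambda = Lambda_hat, Omega = Omega_hat, Psi = Psi_hat, nu = nu + nrow(Y))
}
```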

Matrix-t Distribution

The Matrix-\(t\) distribution on a random matrix \({\boldsymbol{X}}_{p \times q}\) is denoted \({\boldsymbol{X}}\sim \operatorname{MatT}({\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}_R, {\boldsymbol{\Sigma}}_C, \nu)\), and defined as the marginal distribution of \({\boldsymbol{X}}\) for \(({\boldsymbol{X}}, {\boldsymbol{V}}) \sim \operatorname{MNIW}({\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}_R, {\boldsymbol{\Sigma}}_C, \nu)\). Its log-density is given by

\[ \begin{aligned} \log p({\boldsymbol{X}}\mid {\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}_R, {\boldsymbol{\Sigma}}_C, \nu) & = -\textstyle{\frac{1}{2}} \Big[(\nu+p)\log | I + {\boldsymbol{\Sigma}}_R^{-1}({\boldsymbol{X}}-{\boldsymbol{\Lambda}}){\boldsymbol{\Sigma}}_C^{-1}({\boldsymbol{X}}-{\boldsymbol{\Lambda}})'| \\ & \phantom{= -\textstyle{\frac{1}{2}} \Big[} + q \log |{\boldsymbol{\Sigma}}_R| + p \log |{\boldsymbol{\Sigma}}_C| \\ & \phantom{= -\textstyle{\frac{1}{2}} \Big[} + pq \log(\pi) - 2 \log \Gamma_q(\textstyle{\frac{\nu+p}{2}}) + 2 \log \Gamma_q(\textstyle{\frac{\nu}{2}})\Big]. \end{aligned} \]

Properties

If \({\boldsymbol{X}}_{p\times q} \sim \operatorname{MatT}({\boldsymbol{\Lambda}}, {\boldsymbol{\Sigma}}_R, {\boldsymbol{\Sigma}}_C, \nu)\), then for nonzero vectors \({\boldsymbol{a}}\in \mathbb R^p\) and \({\boldsymbol{b}}\in \mathbb R^q\) we have

\[ \frac{{\boldsymbol{a}}'{\boldsymbol{X}}{\boldsymbol{b}}- \mu}{\sigma} \sim t_{(\nu -q + 1)}, \]

where \[ \mu = {\boldsymbol{a}}'{\boldsymbol{\Lambda}}{\boldsymbol{b}}, \qquad \sigma^2 = \frac{{\boldsymbol{a}}'{\boldsymbol{\Sigma}}_R{\boldsymbol{a}}\cdot {\boldsymbol{b}}'{\boldsymbol{\Sigma}}_C{\boldsymbol{b}}}{\nu - q + 1}. \]
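
As with the Wishart, this property can be checked by Monte Carlo, sampling X by composition with the hypothetical rmnorm1 and riwish1 helpers above:

```r
set.seed(2)
p <- 2; q <- 3; nu <- 8
Lambda <- matrix(0, p, q)
SigmaR <- diag(p); SigmaC <- diag(q)
a <- rnorm(p); b <- rnorm(q)
mu <- drop(crossprod(a, Lambda %*% b))
sig <- sqrt(drop(crossprod(a, SigmaR %*% a)) * drop(crossprod(b, SigmaC %*% b)) / (nu - q + 1))
tt <- replicate(1e4, {
  X <- rmnorm1(Lambda, SigmaR, riwish1(SigmaC, nu))  # a MatT(Lambda, SigmaR, SigmaC, nu) draw
  (drop(crossprod(a, X %*% b)) - mu) / sig
})
var(tt)  # should be close to (nu - q + 1) / (nu - q - 1) = 1.5, the variance of a t_(6) variable
```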

Random-Effects Normal Distribution

Consider the multivariate normal distribution on \(q\)-dimensional vectors \({\boldsymbol{x}}\) and \({\boldsymbol{\mu}}\) given by

\[ \begin{aligned} {\boldsymbol{x}}\mid {\boldsymbol{\mu}}& \sim \operatorname{Normal}({\boldsymbol{\mu}}, {\boldsymbol{V}}) \\ {\boldsymbol{\mu}}& \sim \operatorname{Normal}({\boldsymbol{\lambda}}, {\boldsymbol{\Sigma}}). \end{aligned} \]

The random-effects normal distribution is defined as the posterior distribution \({\boldsymbol{\mu}}\sim p({\boldsymbol{\mu}}\mid {\boldsymbol{x}})\), which is given by

\[ {\boldsymbol{\mu}}\mid {\boldsymbol{x}}\sim \operatorname{Normal}\big({\boldsymbol{G}}({\boldsymbol{x}}-{\boldsymbol{\lambda}}) + {\boldsymbol{\lambda}}, {\boldsymbol{G}}{\boldsymbol{V}}\big), \qquad {\boldsymbol{G}}= {\boldsymbol{\Sigma}}({\boldsymbol{V}}+ {\boldsymbol{\Sigma}})^{-1}. \]

The notation for this distribution is \({\boldsymbol{\mu}}\sim \operatorname{RxNorm}({\boldsymbol{x}}, {\boldsymbol{V}}, {\boldsymbol{\lambda}}, {\boldsymbol{\Sigma}})\).
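
The sampler transcribes directly into base R (hypothetical helper rrxnorm1):

```r
# One draw mu ~ RxNorm(x, V, lambda, Sigma), transcribing the formulas above.
rrxnorm1 <- function(x, V, lambda, Sigma) {
  G <- Sigma %*% solve(V + Sigma)               # G = Sigma (V + Sigma)^{-1}
  mu_post <- drop(G %*% (x - lambda)) + lambda  # conditional mean G(x - lambda) + lambda
  V_post <- G %*% V                             # conditional variance G V
  V_post <- (V_post + t(V_post)) / 2            # symmetrize against round-off
  mu_post + drop(t(chol(V_post)) %*% rnorm(length(x)))
}
```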

Hierarchical Normal-Normal Model

The hierarchical Normal-Normal model is defined as

\[ \begin{aligned} {\boldsymbol{y}}_i \mid {\boldsymbol{\mu}}_i, {\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}& \overset{\:\textrm{ind}\:}{\sim}\operatorname{Normal}({\boldsymbol{\mu}}_i, {\boldsymbol{V}}_i) \\ {\boldsymbol{\mu}}_i \mid {\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}& \overset{\;\textrm{iid}\;}{\sim}\operatorname{Normal}({\boldsymbol{x}}_i'{\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}) \\ ({\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}) & \sim \operatorname{MNIW}({\boldsymbol{\Lambda}}, {\boldsymbol{\Omega}}^{-1}, {\boldsymbol{\Psi}}, \nu), \end{aligned} \]

where \({\boldsymbol{y}}_i \in \mathbb R^q\) is the response for observation \(i\), \({\boldsymbol{x}}_i \in \mathbb R^p\) is its covariate vector, \({\boldsymbol{V}}_i\) is a known \(q \times q\) error variance matrix, and \({\boldsymbol{\mu}}_i \in \mathbb R^q\) is the corresponding random effect.

Let \({\boldsymbol{Y}}_{n\times q} = ({\boldsymbol{y}}_{1},\ldots,{\boldsymbol{y}}_{n})\), \({\boldsymbol{X}}_{n\times p} = ({\boldsymbol{x}}_{1},\ldots,{\boldsymbol{x}}_{n})\), and \({\boldsymbol{\Theta}}_{n \times q} = ({\boldsymbol{\mu}}_{1},\ldots,{\boldsymbol{\mu}}_{n})\). If interest lies in the posterior distribution \(p({\boldsymbol{\Theta}}, {\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}\mid {\boldsymbol{Y}}, {\boldsymbol{X}})\), then a Gibbs sampler can be used to cycle through the following conditional distributions:

\[ \begin{aligned} {\boldsymbol{\mu}}_i \mid {\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}, {\boldsymbol{Y}}, {\boldsymbol{X}}& \overset{\:\textrm{ind}\:}{\sim}\operatorname{RxNorm}({\boldsymbol{y}}_i, {\boldsymbol{V}}_i, {\boldsymbol{x}}_i'{\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}) \\ {\boldsymbol{\beta}}, {\boldsymbol{\Sigma}}\mid {\boldsymbol{\Theta}}, {\boldsymbol{Y}}, {\boldsymbol{X}}& \sim \operatorname{MNIW}(\hat {\boldsymbol{\Lambda}}, \hat {\boldsymbol{\Omega}}^{-1}, \hat {\boldsymbol{\Psi}}, \hat \nu), \end{aligned} \]

where \(\hat {\boldsymbol{\Lambda}}\), \(\hat {\boldsymbol{\Omega}}\), \(\hat {\boldsymbol{\Psi}}\), and \(\hat \nu\) are obtained from the MNIW conjugate posterior formula above with \({\boldsymbol{Y}}\gets {\boldsymbol{\Theta}}\) and \({\boldsymbol{V}}\gets {\boldsymbol{I}}_n\).
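
Schematically, one iteration of this Gibbs sampler can be written as follows. This is a sketch reusing the hypothetical rrxnorm1, mniw_post, and rmniw1 helpers above, with the error variances supplied as a list V of q-by-q matrices; the mniw package itself provides efficient samplers for these distributions.

```r
# One Gibbs iteration for the hierarchical Normal-Normal model.
gibbs_step <- function(Theta, Beta, Sigma, Y, X, V, Lambda, Omega, Psi, nu) {
  n <- nrow(Y)
  # update each random effect mu_i | beta, Sigma, y_i
  for (i in 1:n) {
    Theta[i, ] <- rrxnorm1(Y[i, ], V[[i]], drop(crossprod(X[i, ], Beta)), Sigma)
  }
  # update (beta, Sigma) | Theta from the MNIW conjugate posterior,
  # with Y <- Theta and V <- I_n
  hyp <- mniw_post(Theta, X, diag(n), Lambda, Omega, Psi, nu)
  draw <- rmniw1(hyp$Lambda, solve(hyp$Omega), hyp$Psi, hyp$nu)
  list(Theta = Theta, Beta = draw$X, Sigma = draw$V)
}
```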
