Wishart distributions for decomposable graphs
When considering a graphical Gaussian model $N_G$ Markov with respect to a decomposable graph $G$, the parameter space of interest for the precision parameter is the cone $P_G$ of positive definite matrices with fixed zeros corresponding to the missing edges of $G$. The parameter space for the scale parameter of $N_G$ is the cone $Q_G$, dual to $P_G$, of incomplete matrices with submatrices corresponding to the cliques of $G$ being positive definite. In this paper we construct on the cones $Q_G$ and $P_G$ two families of Wishart distributions, namely the Type I and Type II Wisharts. They can be viewed as generalizations of the hyper Wishart and the inverse of the hyper inverse Wishart as defined by Dawid and Lauritzen [Ann. Statist. 21 (1993) 1272--1317]. We show that the Type I and II Wisharts have properties similar to those of the hyper and hyper inverse Wishart. Indeed, the inverse of the Type II Wishart forms a conjugate family of priors for the covariance parameter of the graphical Gaussian model and is strong directed hyper Markov for every direction given to the graph by a perfect order of its cliques, while the Type I Wishart is weak hyper Markov. Moreover, the inverse Type II Wishart as a conjugate family presents the advantage of having a multidimensional shape parameter, thus offering flexibility for the choice of a prior.
Comment: Published at http://dx.doi.org/10.1214/009053606000001235 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
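To make the cone $Q_G$ concrete, here is a minimal Python sketch (not from the paper; the graph, matrix, and function name are invented for illustration) that checks, for the path graph on three vertices with cliques {0,1} and {1,2}, whether an incomplete symmetric matrix has all clique submatrices positive definite:

```python
import numpy as np

# Illustrative decomposable graph: the path 0 - 1 - 2, with cliques
# {0, 1} and {1, 2}. An incomplete matrix specifies only the diagonal
# and the entries on the edges; it lies in Q_G when every clique
# submatrix is positive definite.
cliques = [[0, 1], [1, 2]]

def in_Q_G(entries, cliques):
    """Check positive definiteness of each clique submatrix.

    `entries` is read only at positions inside some clique; the entry
    (0, 2) is never consulted, mirroring the incompleteness of matrices
    in Q_G for this graph.
    """
    for c in cliques:
        sub = entries[np.ix_(c, c)]
        if np.any(np.linalg.eigvalsh(sub) <= 0):
            return False
    return True

x = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
print(in_Q_G(x, cliques))  # True: both 2x2 clique blocks are positive definite
```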
Gaussian approximation of Gaussian scale mixtures
For a given positive random variable $V$ and a given $Z \sim N(0,1)$ independent of $V$, we compute the scalar $t_0$ such that the distance between $Z\sqrt{V}$ and $Z\sqrt{t_0}$, in the $L^2$ sense, is minimal. We also consider the same problem in several dimensions when $V$ is a random positive definite matrix.
Comment: 13 pages
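As a numerical companion (an invented toy example, not from the paper), the sketch below takes $V$ uniform on {1, 4} and finds the scalar $t_0$ minimizing the squared $L^2$ distance between the density of $Z\sqrt{V}$ and that of $Z\sqrt{t}$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# Hypothetical example: V takes the values 1 and 4 with equal probability,
# so Z*sqrt(V) is a two-component Gaussian scale mixture.
vals, probs = np.array([1.0, 4.0]), np.array([0.5, 0.5])

def norm_pdf(x, t):
    # Density of Z*sqrt(t), i.e. N(0, t), at the point x.
    return np.exp(-x**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

def mixture_pdf(x):
    # Density of Z*sqrt(V) for the discrete V above.
    return float(np.sum(probs * norm_pdf(x, vals)))

def sq_l2_dist(t):
    # Squared L^2 distance between the mixture density and the N(0, t) density.
    return quad(lambda x: (mixture_pdf(x) - norm_pdf(x, t)) ** 2, -np.inf, np.inf)[0]

res = minimize_scalar(sq_l2_dist, bounds=(0.1, 10.0), method="bounded")
print(res.x)  # numerically optimal t0 for this particular mixture
```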
Moments of minors of Wishart matrices
For a random matrix following a Wishart distribution, we derive formulas for the expectation and the covariance matrix of compound matrices. The compound matrix of order $m$ is populated by all $m \times m$ minors of the Wishart matrix. Our results yield first and second moments of the minors of the sample covariance matrix for multivariate normal observations. This work is motivated by the fact that such minors arise in the expression of constraints on the covariance matrix in many classical multivariate problems.
Comment: Published at http://dx.doi.org/10.1214/07-AOS522 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
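A minimal Monte Carlo sketch of the first-moment formula in the simplest case, assuming the classical identity $E[C_2(W)] = n(n-1)\,C_2(\Sigma)$ for $W \sim W_p(n, \Sigma)$ specialized to $\Sigma = I$; the code and names below are illustrative, not the paper's:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def compound2(a):
    # Second compound matrix: every 2x2 minor of `a`, with rows and
    # columns indexed by pairs of indices in lexicographic order.
    idx = list(combinations(range(a.shape[0]), 2))
    return np.array([[np.linalg.det(a[np.ix_(r, c)]) for c in idx] for r in idx])

# Wishart(n, I) samples generated as X @ X.T with n standard normal columns.
p, n, reps = 3, 10, 20000
avg = np.zeros((3, 3))
for _ in range(reps):
    x = rng.standard_normal((p, n))
    avg += compound2(x @ x.T)
avg /= reps

print(np.round(avg, 1))  # approximately n*(n-1) * I, i.e. 90 on the diagonal
print(n * (n - 1))
```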
Flexible covariance estimation in graphical Gaussian models
In this paper, we propose a class of Bayes estimators for the covariance matrix of graphical Gaussian models Markov with respect to a decomposable graph $G$. Working with the $W_{P_G}$ family defined by Letac and Massam [Ann. Statist. 35 (2007) 1278--1323] we derive closed-form expressions for Bayes estimators under the entropy and squared-error losses. The $W_{P_G}$ family includes the classical inverse of the hyper inverse Wishart but has many more shape parameters, thus allowing for flexibility in differentially shrinking various parts of the covariance matrix. Moreover, using this family avoids recourse to MCMC, often infeasible in high-dimensional problems. We illustrate the performance of our estimators through a collection of numerical examples where we explore frequentist risk properties and the efficacy of graphs in the estimation of high-dimensional covariance structures.
Comment: Published at http://dx.doi.org/10.1214/08-AOS619 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
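To illustrate why conjugacy yields closed-form estimators without MCMC, here is a baseline sketch using the classical single-shape-parameter inverse-Wishart prior, which the $W_{P_G}$ family generalizes; the data, hyperparameters, and variable names are invented, and this is not the paper's graph-aware estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Conjugate baseline: for zero-mean Gaussian data with covariance Sigma and
# prior Sigma ~ InverseWishart(nu, Psi), the posterior is
# InverseWishart(nu + n, Psi + S), so the Bayes estimator under
# squared-error loss (the posterior mean) is available in closed form.
p, n = 4, 50
sigma = np.diag([1.0, 2.0, 3.0, 4.0])        # true covariance (illustrative)
x = rng.multivariate_normal(np.zeros(p), sigma, size=n)
s = x.T @ x                                   # scatter matrix

nu, psi = p + 3.0, np.eye(p)                  # hypothetical prior hyperparameters
post_mean = (psi + s) / (nu + n - p - 1)      # closed-form Bayes estimator
print(np.round(post_mean, 2))
```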
Bayes factors and the geometry of discrete hierarchical loglinear models
A standard tool for model selection in a Bayesian framework is the Bayes factor, which compares the marginal likelihood of the data under two given different models. In this paper, we consider the class of hierarchical loglinear models for discrete data given under the form of a contingency table with multinomial sampling. We assume that the Diaconis-Ylvisaker conjugate prior is the prior distribution on the loglinear parameters and the uniform is the prior distribution on the space of models. Under these conditions, the Bayes factor between two models is a function of their prior and posterior normalizing constants. These constants are functions of the hyperparameters $(m, \alpha)$, which can be interpreted respectively as marginal counts and the total count of a fictive contingency table.
We study the behaviour of the Bayes factor when $\alpha$ tends to zero. In this study two mathematical objects play a most important role. They are, first, the interior $C$ of the convex hull $\bar{C}$ of the support of the multinomial distribution for a given hierarchical loglinear model together with its faces and, second, the characteristic function $\mathbb{J}_C$ of this convex set $C$.
We show that, when $\alpha$ tends to 0, if the data lies on a face $F_i$ of $\bar{C}_i$, $i = 1, 2$, of dimension $k_i$, the Bayes factor behaves like $\alpha^{k_1 - k_2}$. This implies in particular that when the data is in $C_1$ and in $C_2$, i.e. when $k_i$ equals the dimension of model $J_i$, the sparser model is favored, thus confirming the idea of Bayesian regularization.
Comment: 37 pages
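The geometric mechanism can be seen in a much simpler toy setting (invented here, and cruder than the paper's Diaconis-Ylvisaker setup): under a symmetric Dirichlet prior with total weight $\alpha$ on a $k$-cell multinomial, the marginal likelihood of counts with $p$ positive cells scales like $\alpha^{p-1}$ as $\alpha \to 0$, so the exponent is governed by the face of the simplex on which the data lies:

```python
import numpy as np
from scipy.special import gammaln

def log_marginal(counts, alpha):
    # Log marginal likelihood of multinomial counts under a symmetric
    # Dirichlet(alpha/k, ..., alpha/k) prior (multinomial coefficient omitted;
    # it is the same for both models in a Bayes factor).
    k = len(counts)
    a = alpha / k
    return (gammaln(alpha) - gammaln(alpha + counts.sum())
            + np.sum(gammaln(a + counts) - gammaln(a)))

counts = np.array([10, 15, 0, 0])  # data on a face: 2 of the 4 cells are positive
for alpha in [1e-1, 1e-2, 1e-3, 1e-4]:
    # Slope of log-marginal against log-alpha; it approaches p - 1 = 1,
    # the dimension of the face carrying the data.
    slope = (log_marginal(counts, alpha)
             - log_marginal(counts, alpha * 10)) / np.log(0.1)
    print(alpha, slope)
```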
