
    The CONEstrip algorithm

    Uncertainty models such as sets of desirable gambles and (conditional) lower previsions can be represented as convex cones. Checking the consistency of and drawing inferences from such models requires solving feasibility and optimization problems. We consider such models that are finitely generated. For closed cones, we can use linear programming; for conditional lower prevision-based cones, there is an efficient algorithm using an iteration of linear programs. We present an efficient algorithm for general cones that also uses an iteration of linear programs.
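    The closed-cone case mentioned in the abstract can be illustrated with a single linear program: a gamble f lies in the closed convex cone generated by gambles g_1, ..., g_n exactly when some nonnegative combination of the generators reproduces f. Below is a minimal sketch of that feasibility check using scipy; the generators and test gambles are made-up illustrative data, and this is not the CONEstrip algorithm itself, which iterates such programs.

```python
import numpy as np
from scipy.optimize import linprog

def in_closed_cone(generators, f):
    """Check whether gamble f is a nonnegative combination of the generators.

    generators: (n, k) array, one generating gamble per row.
    f: (k,) array, the gamble to test.
    Feasibility of  G^T lambda = f,  lambda >= 0  is checked with an LP.
    """
    G = np.asarray(generators, dtype=float)
    f = np.asarray(f, dtype=float)
    n = G.shape[0]
    res = linprog(c=np.zeros(n),            # any objective: only feasibility matters
                  A_eq=G.T, b_eq=f,
                  bounds=[(0, None)] * n,
                  method="highs")
    return res.status == 0                  # 0 = solved, i.e. a feasible lambda exists

# Made-up generators on a three-element possibility space.
gens = np.array([[1.0, -1.0, 0.0],
                 [0.0,  1.0, -1.0]])
print(in_closed_cone(gens, np.array([1.0, 0.0, -1.0])))   # True: sum of the two rows
print(in_closed_cone(gens, np.array([-1.0, 0.0, 0.0])))   # False
```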

    Sets of Priors Reflecting Prior-Data Conflict and Agreement

    In Bayesian statistics, the choice of prior distribution is often debatable, especially if prior knowledge is limited or data are scarce. In imprecise probability, sets of priors are used to accurately model and reflect prior knowledge. This has the advantage that prior-data conflict sensitivity can be modelled: ranges of posterior inferences should be larger when prior and data are in conflict. We propose a new method for generating prior sets which, in addition to prior-data conflict sensitivity, allows us to reflect strong prior-data agreement by decreased posterior imprecision.
    Comment: 12 pages, 6 figures. In: Joao Paulo Carvalho et al. (eds.), IPMU 2016: Proceedings of the 16th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, Eindhoven, The Netherlands
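    A standard building block behind such prior sets is the imprecise Beta-Bernoulli model, in which a single prior is replaced by the set of Beta(s*y, s*(1-y)) priors with (s, y) ranging over a rectangle; posterior imprecision then grows when the observed frequency conflicts with the prior range for y. The sketch below shows only this generic rectangular construction with made-up parameter values, not the authors' new method for additionally rewarding prior-data agreement.

```python
from itertools import product

def posterior_mean_bounds(k, n, s_range, y_range):
    """Lower/upper posterior expectation of the success chance theta under the
    set of Beta(s*y, s*(1-y)) priors with (s, y) in a rectangle, after observing
    k successes in n trials.  The extrema are attained at the corners because
    the posterior mean (s*y + k) / (s + n) is monotone in each parameter
    separately."""
    corners = [(s * y + k) / (s + n) for s, y in product(s_range, y_range)]
    return min(corners), max(corners)

# Made-up example: the prior set expects a success chance roughly in [0.6, 0.8].
s_range, y_range = (1.0, 5.0), (0.6, 0.8)
print(posterior_mean_bounds(12, 16, s_range, y_range))  # data agree with the prior
print(posterior_mean_bounds(2, 16, s_range, y_range))   # prior-data conflict: wider interval
```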

    Factorisation properties of the strong product

    We investigate a number of factorisation conditions in the framework of sets of probability measures, or coherent lower previsions, with finite referential spaces. We show that the so-called strong product constitutes one way to combine a number of marginal coherent lower previsions into an independent joint lower prevision, and we prove that under some conditions it is the only independent product that satisfies the factorisation conditions.
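    When the marginal coherent lower previsions are given by the finitely many extreme points of their credal sets, the strong product's lower prevision of a joint gamble can be computed by minimising the product expectation over pairs of extreme points (the expectation is bilinear in the two marginals, so the minimum is attained at a vertex pair). A small sketch with made-up credal sets:

```python
import numpy as np
from itertools import product

def strong_product_lower_prevision(ext1, ext2, gamble):
    """Lower prevision of `gamble` under the strong product of two marginal
    credal sets, each given by the extreme points of its set of probability
    mass functions.

    ext1: (m1, k1) array of pmfs on the first space.
    ext2: (m2, k2) array of pmfs on the second space.
    gamble: (k1, k2) array, the joint gamble f(x, y).
    """
    ext1, ext2, gamble = map(np.asarray, (ext1, ext2, gamble))
    return min(p1 @ gamble @ p2 for p1, p2 in product(ext1, ext2))

# Made-up binary marginals: probability of "heads" in [0.4, 0.6] on both spaces.
ext = np.array([[0.4, 0.6], [0.6, 0.4]])
f = np.array([[1.0, 0.0],
              [0.0, 1.0]])   # gamble paying 1 when the two outcomes match
print(strong_product_lower_prevision(ext, ext, f))   # 0.48 = 0.4*0.6 + 0.6*0.4
```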

    Robust Inference of Trees

    This paper is concerned with the reliable inference of optimal tree-approximations to the dependency structure of an unknown distribution generating data. The traditional approach to the problem measures the dependency strength between random variables by the index called mutual information. In this paper, reliability is achieved by Walley's imprecise Dirichlet model, which generalizes Bayesian learning with Dirichlet priors. Adopting the imprecise Dirichlet model results in a posterior interval expectation for mutual information, and in a set of plausible trees consistent with the data. Reliable inference about the actual tree is achieved by focusing on the substructure common to all the plausible trees. We develop an exact algorithm that infers the substructure in time O(m^4), m being the number of random variables. The new algorithm is applied to a set of data sampled from a known distribution. The method is shown to reliably infer edges of the actual tree even when the data are very scarce, unlike the traditional approach. Finally, we provide lower and upper credibility limits for mutual information under the imprecise Dirichlet model. These enable the previous developments to be extended to a full inferential method for trees.
    Comment: 26 pages, 7 figures
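    The "substructure common to all plausible trees" idea can be illustrated with a per-edge check borrowed from robust spanning-tree problems: give the candidate edge its lower mutual-information bound, give every other edge its upper bound, and keep the edge only if it still enters the (unique) maximum spanning tree under this adversarial weighting. The sketch below, using networkx with the interval weights assumed as given inputs and ties ignored, is only a simple sufficient-condition check, not the paper's exact O(m^4) algorithm.

```python
import networkx as nx

def robust_edges(mi_lower, mi_upper):
    """Edges that stay in the maximum spanning tree even when weighted
    adversarially: the candidate edge at its lower mutual-information bound,
    all other edges at their upper bounds.

    mi_lower, mi_upper: dicts mapping node pairs (u, v) to interval endpoints.
    """
    robust = []
    for edge in mi_lower:
        g = nx.Graph()
        for (u, v), w in mi_upper.items():
            g.add_edge(u, v, weight=w)
        u, v = edge
        g[u][v]["weight"] = mi_lower[edge]      # adversarial weight for the candidate
        tree = nx.maximum_spanning_tree(g, weight="weight")
        if tree.has_edge(u, v):
            robust.append(edge)
    return robust

# Made-up intervals for the mutual information between three variables.
lower = {("A", "B"): 0.50, ("B", "C"): 0.30, ("A", "C"): 0.05}
upper = {("A", "B"): 0.60, ("B", "C"): 0.40, ("A", "C"): 0.20}
print(robust_edges(lower, upper))   # [('A', 'B'), ('B', 'C')]
```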

    Maximin and maximal solutions for linear programming problems with possibilistic uncertainty

    We consider linear programming problems with uncertain constraint coefficients described by intervals or, more generally, possibility distributions. The uncertainty is given a behavioral interpretation using coherent lower previsions from the theory of imprecise probabilities. We give a meaning to the linear programming problems by reformulating them as decision problems under such imprecise-probabilistic uncertainty. We provide expressions for and illustrations of the maximin and maximal solutions of these decision problems and present computational approaches for dealing with them.
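    For the simplest case the abstract covers, interval constraint coefficients with nonnegative decision variables and less-than-or-equal constraints, the maximin (worst-case) solution reduces to an ordinary linear program in which every uncertain coefficient is set to its upper bound, since that realisation tightens each constraint the most; the lower bounds then play no role. A minimal sketch with scipy and made-up data (the possibilistic setting in the paper is richer than this interval special case):

```python
import numpy as np
from scipy.optimize import linprog

def maximin_interval_lp(c, A_upper, b):
    """Maximise c @ x over x >= 0 subject to A x <= b for every A in the
    interval box; with x >= 0 the binding realisation of each constraint is
    A_upper, so a single LP with the upper-bound coefficients suffices."""
    res = linprog(c=-np.asarray(c),              # linprog minimises, so negate
                  A_ub=np.asarray(A_upper), b_ub=np.asarray(b),
                  bounds=[(0, None)] * len(c),
                  method="highs")
    return res.x, -res.fun

# Made-up production-style example with uncertain resource consumption.
c = [3.0, 2.0]
A_upper = [[1.5, 1.0],
           [1.0, 1.5]]
b = [6.0, 6.0]
x, value = maximin_interval_lp(c, A_upper, b)
print(x, value)
```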

    Computable randomness is about more than probabilities

    We introduce a notion of computable randomness for infinite sequences that generalises the classical version in two important ways. First, our definition of computable randomness is associated with imprecise probability models, in the sense that we consider lower expectations (or sets of probabilities) instead of classical 'precise' probabilities. Secondly, instead of binary sequences, we consider sequences whose elements take values in some finite sample space. Interestingly, we find that every sequence is computably random with respect to at least one lower expectation, and that lower expectations that are more informative have fewer computably random sequences. This leads to the intriguing question of whether every sequence is computably random with respect to a unique most informative lower expectation. We study this question in some detail and provide a partial answer.

    A new method for learning imprecise hidden Markov models

    We present a method for learning imprecise local uncertainty models in stationary hidden Markov models. If there is enough data to justify precise local uncertainty models, then existing learning algorithms, such as the Baum–Welch algorithm, can be used. When there is not enough evidence to justify precise models, the method we suggest here has a number of interesting features.
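    One generic way to obtain imprecise local models from limited data, in the spirit of this abstract, is to turn observed transition counts into probability intervals with the imprecise Dirichlet model: with hyperparameter s, each transition probability gets the interval [n_ij/(n_i + s), (n_ij + s)/(n_i + s)], which widens when the row count n_i is small. The sketch below shows only this standard IDM construction on made-up counts; it is not necessarily the learning method proposed in the paper.

```python
import numpy as np

def idm_transition_intervals(counts, s=2.0):
    """Lower and upper transition probability matrices from transition counts
    via the imprecise Dirichlet model with hyperparameter s.

    counts: (k, k) array, counts[i, j] = number of observed i -> j transitions.
    Returns (lower, upper) with lower[i, j] = n_ij / (n_i + s) and
    upper[i, j] = (n_ij + s) / (n_i + s); rows with few observations get wide
    intervals, reflecting the lack of evidence."""
    counts = np.asarray(counts, dtype=float)
    row_totals = counts.sum(axis=1, keepdims=True)
    lower = counts / (row_totals + s)
    upper = (counts + s) / (row_totals + s)
    return lower, upper

# Made-up counts for a two-state chain: state 0 is well observed, state 1 is not.
counts = np.array([[40, 10],
                   [ 1,  2]])
low, up = idm_transition_intervals(counts)
print(np.round(low, 3))   # narrow intervals for state 0, wide ones for state 1
print(np.round(up, 3))
```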

    Coherent frequentism

    By representing the range of fair betting odds according to a pair of confidence set estimators, dual probability measures on parameter space called frequentist posteriors secure the coherence of subjective inference without any prior distribution. The closure of the set of expected losses corresponding to the dual frequentist posteriors constrains decisions without arbitrarily forcing optimization under all circumstances. This decision theory reduces to those that maximize expected utility when the pair of frequentist posteriors is induced by an exact or approximate confidence set estimator or when an automatic reduction rule is applied to the pair. In such cases, the resulting frequentist posterior is coherent in the sense that, as a probability distribution of the parameter of interest, it satisfies the axioms of the decision-theoretic and logic-theoretic systems typically cited in support of the Bayesian posterior. Unlike the p-value, the confidence level of an interval hypothesis derived from such a measure is suitable as an estimator of the indicator of hypothesis truth, since it converges in sample-space probability to 1 if the hypothesis is true or to 0 otherwise under general conditions.
    Comment: The confidence-measure theory of inference and decision is explicitly extended to vector parameters of interest. The derivation of upper and lower confidence levels from valid and nonconservative set estimators is formalized.
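    The "confidence level of an interval hypothesis" described here can be illustrated in the textbook normal-mean setting, where the confidence posterior for the mean is N(sample mean, sigma^2/n) and the confidence assigned to the hypothesis a <= theta <= b is simply the mass this distribution places on [a, b]. A minimal sketch under that assumption (the paper's framework is far more general):

```python
import numpy as np
from scipy.stats import norm

def interval_confidence(data, sigma, a, b):
    """Confidence level of the hypothesis a <= mean <= b under the normal
    confidence posterior N(sample mean, sigma^2 / n), assuming the data are
    i.i.d. normal with known standard deviation sigma."""
    data = np.asarray(data, dtype=float)
    xbar, se = data.mean(), sigma / np.sqrt(len(data))
    return norm.cdf(b, loc=xbar, scale=se) - norm.cdf(a, loc=xbar, scale=se)

# Made-up data; as the sample grows, this level tends to 1 if the hypothesis
# is true and to 0 otherwise, matching the consistency property in the abstract.
rng = np.random.default_rng(0)
sample = rng.normal(loc=0.3, scale=1.0, size=50)
print(interval_confidence(sample, sigma=1.0, a=0.0, b=1.0))
```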

    An almost sure limit theorem for super-Brownian motion

    We establish an almost sure scaling limit theorem for super-Brownian motion on $\mathbb{R}^d$ associated with the semi-linear equation $u_t = \frac{1}{2}\Delta u + \beta u - \alpha u^2$, where $\alpha$ and $\beta$ are positive constants. In this case, the spectral-theoretic assumptions required in Chen et al. (2008) are not satisfied. An example is given to show that the main results also hold for some sub-domains of $\mathbb{R}^d$.
    Comment: 14 pages