
    An adaptive Ridge procedure for L0 regularization

    Penalized selection criteria like AIC or BIC are among the most popular methods for variable selection. Their theoretical properties have been studied intensively and are well understood, but using them with high-dimensional data is difficult because of the non-convex optimization problem induced by L0 penalties. An elegant solution to this problem is provided by the multi-step adaptive lasso, in which iteratively weighted lasso problems are solved, with weights updated in such a way that the procedure converges towards selection with L0 penalties. In this paper we introduce an adaptive ridge procedure (AR) which mimics the adaptive lasso but is based on weighted ridge problems. After introducing AR, its theoretical properties are studied in the particular case of orthogonal linear regression. For the non-orthogonal case, extensive simulations are performed to assess the performance of AR. For Poisson regression and logistic regression it is illustrated how the iterative procedure of AR can be combined with iterative maximization procedures. The paper ends with an efficient implementation of AR in the context of least-squares segmentation.
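    As a rough sketch of the iteratively reweighted scheme described above (illustrative only; the weight update w_j = 1/(beta_j^2 + delta^2), the tuning constants, and the selection threshold below are common choices rather than the paper's exact specification), a weighted ridge problem can be re-solved in a loop until the coefficients stabilize:

        import numpy as np

        def adaptive_ridge(X, y, lam=1.0, delta=1e-5, n_iter=50):
            """Iteratively reweighted ridge regression (illustrative sketch).

            Repeatedly solves a weighted ridge problem whose penalty weights are
            updated so that the weighted L2 penalty mimics an L0 penalty.
            """
            n, p = X.shape
            w = np.ones(p)                      # initial penalty weights
            beta = np.zeros(p)
            for _ in range(n_iter):
                # weighted ridge: minimize ||y - X b||^2 + lam * sum_j w_j * b_j^2
                A = X.T @ X + lam * np.diag(w)
                beta_new = np.linalg.solve(A, X.T @ y)
                if np.max(np.abs(beta_new - beta)) < 1e-10:
                    beta = beta_new
                    break
                beta = beta_new
                w = 1.0 / (beta**2 + delta**2)  # small coefficients get large weights and shrink further
            selected = np.abs(beta) > np.sqrt(delta)  # heuristic selection threshold (assumption)
            return beta, selected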

    Analyzing genome-wide association studies with an FDR controlling modification of the Bayesian information criterion

    The prevailing method of analyzing GWAS data is still to test each marker individually, although from a statistical point of view it is quite obvious that for complex traits such single-marker tests are not ideal. Recently several model selection approaches for GWAS have been suggested, most of them based on LASSO-type procedures. Here we discuss an alternative model selection approach based on a modification of the Bayesian Information Criterion (mBIC2), which was previously shown to have certain asymptotic optimality properties in terms of minimizing the misclassification error. Heuristic search strategies are introduced which attempt to find the model that minimizes mBIC2 and which are efficient enough to allow the analysis of GWAS data. Our approach is implemented in a software package called MOSGWA. Its performance in case-control GWAS is compared with the two algorithms HLASSO and GWASelect, as well as with single-marker tests, in a simulation study based on real SNP data from the POPRES sample. Our results show that MOSGWA performs slightly better than HLASSO, whereas according to our simulations GWASelect does not control the type I error when used to automatically determine the number of important SNPs. We also reanalyze the GWAS data from the Wellcome Trust Case-Control Consortium (WTCCC) and compare the findings of the different procedures.
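    To make the search idea concrete, here is a minimal greedy forward search that adds one marker at a time as long as a model selection criterion decreases. Plain BIC is used as a stand-in for mBIC2 (which adds an extra multiple-testing penalty), and MOSGWA's actual heuristics are considerably more elaborate:

        import numpy as np

        def bic(y, X, support):
            """BIC of an ordinary least-squares fit on the given support
            (a stand-in for mBIC2, which penalizes model size more heavily)."""
            n = len(y)
            if support:
                Xs = X[:, sorted(support)]
                coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
                rss = np.sum((y - Xs @ coef) ** 2)
            else:
                rss = np.sum((y - y.mean()) ** 2)
            return n * np.log(rss / n) + len(support) * np.log(n)

        def greedy_forward_search(y, X, criterion=bic):
            """Toy forward selection: add the predictor giving the largest
            criterion improvement, stop when no addition helps."""
            p = X.shape[1]
            support, best = set(), criterion(y, X, set())
            improved = True
            while improved:
                improved, best_j = False, None
                for j in set(range(p)) - support:
                    score = criterion(y, X, support | {j})
                    if score < best:
                        best, best_j, improved = score, j, True
                if improved:
                    support.add(best_j)
            return sorted(support), best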

    Asymptotic Bayes-optimality under sparsity of some multiple testing procedures

    Within a Bayesian decision theoretic framework we investigate some asymptotic optimality properties of a large class of multiple testing rules. A parametric setup is considered, in which observations come from a normal scale mixture model and the total loss is assumed to be the sum of losses for individual tests. Our model can be used for testing point null hypotheses, as well as to distinguish large signals from a multitude of very small effects. A rule is defined to be asymptotically Bayes optimal under sparsity (ABOS) if, within our chosen asymptotic framework, the ratio of its Bayes risk and that of the Bayes oracle (a rule which minimizes the Bayes risk) converges to one. Our main interest is in the asymptotic scheme where the proportion p of "true" alternatives converges to zero. We fully characterize the class of fixed threshold multiple testing rules which are ABOS, and hence derive conditions for the asymptotic optimality of rules controlling the Bayesian False Discovery Rate (BFDR). We finally provide conditions under which the popular Benjamini-Hochberg (BH) and Bonferroni procedures are ABOS and show that for a wide class of sparsity levels, the threshold of the former can be approximated by a nonrandom threshold. Comment: Published at http://dx.doi.org/10.1214/10-AOS869 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
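    For reference, the Benjamini-Hochberg step-up rule discussed above can be stated in a few lines; this is the standard textbook form and is not tied to the paper's asymptotic framework:

        import numpy as np

        def benjamini_hochberg(pvals, q=0.05):
            """Benjamini-Hochberg step-up procedure at nominal FDR level q:
            reject the k smallest p-values, where k is the largest index with
            p_(k) <= k * q / m."""
            p = np.asarray(pvals, dtype=float)
            m = p.size
            order = np.argsort(p)
            below = p[order] <= q * np.arange(1, m + 1) / m
            reject = np.zeros(m, dtype=bool)
            if below.any():
                k = np.max(np.nonzero(below)[0])   # largest index meeting the bound
                reject[order[: k + 1]] = True
            return reject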

    Similarity transformations for Nonlinear Schrödinger Equations with time varying coefficients: Exact results

    In this paper we use a similarity transformation connecting some families of nonlinear Schrödinger equations with time-varying coefficients to the autonomous cubic nonlinear Schrödinger equation. This transformation allows one to apply all known results for that equation to the non-autonomous case, with the additional dynamics introduced by the transformation itself. In particular, using stationary solutions of the autonomous nonlinear Schrödinger equation we can construct exact breathing solutions of multidimensional non-autonomous nonlinear Schrödinger equations. An application is given in which we explicitly construct time-dependent coefficients leading to solutions displaying weak collapse in three-dimensional scenarios. Our results can find physical applicability in mean-field models of Bose-Einstein condensates and in the field of dispersion-managed optical systems.
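    For orientation, the target equation and the generic shape of such a similarity ansatz can be written as follows (schematic; the specific non-autonomous family and the constraints on rho, phi, xi, tau treated in the paper are not reproduced here):

        % autonomous cubic nonlinear Schrödinger equation (the "seed" equation)
        i\,\partial_\tau \psi = -\tfrac{1}{2}\,\Delta_\xi \psi + g\,|\psi|^2 \psi
        % similarity ansatz mapping its solutions to a variable-coefficient equation
        u(x,t) = \rho(t)\, e^{i\varphi(x,t)}\, \psi\!\big(\xi(x,t), \tau(t)\big)

    with \rho, \varphi, \xi, \tau chosen so that u solves the non-autonomous equation whenever \psi solves the autonomous one.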

    Quantum dynamical semigroups for diffusion models with Hartree interaction

    We consider a class of evolution equations in Lindblad form, which model the dynamics of dissipative quantum mechanical systems with mean-field interaction. In particular, this class includes the so-called Quantum Fokker-Planck-Poisson model. The existence and uniqueness of global-in-time, mass-preserving solutions is proved, thus establishing the existence of a nonlinear conservative quantum dynamical semigroup. The mathematical difficulties stem from combining an unbounded Lindblad generator with the Hartree nonlinearity. Comment: 30 pages; introduction changed, title changed, easier and shorter proofs due to new energy norm; to appear in Comm. Math. Phys.
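    For context, a master equation in Lindblad form with a Hartree mean-field term has the generic structure below (schematic; the concrete Hamiltonian, Lindblad operators, and the Quantum Fokker-Planck-Poisson specialization studied in the paper are not spelled out here):

        \frac{d\rho}{dt} = -\frac{i}{\hbar}\,\big[H_0 + V_H[\rho],\, \rho\big]
          + \sum_k \Big( L_k \rho L_k^\dagger - \tfrac{1}{2}\,\big\{ L_k^\dagger L_k,\, \rho \big\} \Big),
        \qquad
        V_H[\rho](x) = \int V(x-y)\, n_\rho(y)\, dy

    where n_\rho denotes the position density of \rho; the Hartree nonlinearity enters through the dependence of V_H on \rho itself.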