
    A generalized Fellner-Schall method for smoothing parameter estimation with application to Tweedie location, scale and shape models

    We consider the estimation of smoothing parameters and variance components in models with a regular log-likelihood subject to quadratic penalization of the model coefficients, via a generalization of the method of Fellner (1986) and Schall (1991). In particular: (i) we generalize the original method to the case of penalties that are linear in several smoothing parameters, thereby covering the important cases of tensor product and adaptive smoothers; (ii) we show why the method's steps increase the restricted marginal likelihood of the model, show that it tends to converge faster than the EM algorithm or obvious accelerations of it, and investigate its relation to Newton optimization; (iii) we generalize the method to any Fisher-regular likelihood. The method represents a considerable simplification over existing methods of estimating smoothing parameters in the context of regular likelihoods, without sacrificing generality: for example, it requires only the same first and second derivatives of the log-likelihood needed for coefficient estimation, not the third- and fourth-order derivatives required by alternative approaches. Examples are provided that would have been impossible or impractical with pre-existing Fellner-Schall methods, along with an example of a Tweedie location, scale and shape model that would be a challenge for alternative methods.
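The paper's generalized update handles several smoothing parameters at once; the flavor of the underlying Fellner/Schall idea can be seen in the classical single-penalty case, sketched below for ridge regression. Everything here (the function name, the simulated data) is invented for illustration and is not the authors' generalized algorithm.

```python
import numpy as np

def schall_ridge(X, y, lam=1.0, n_iter=50):
    """Classical Schall (1991)-style iteration for a single ridge penalty:
    alternate penalized coefficient estimation with variance-component
    updates until lam = sigma^2 / sigma_b^2 stabilizes.  A toy sketch,
    not the paper's generalized multi-penalty method."""
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    for _ in range(n_iter):
        A = np.linalg.inv(XtX + lam * np.eye(p))
        b = A @ Xty                      # penalized coefficient estimate
        edf = np.trace(A @ XtX)          # effective degrees of freedom
        sigma2 = np.sum((y - X @ b) ** 2) / (n - edf)  # residual variance
        sigma_b2 = (b @ b) / edf         # random-effect variance
        lam = sigma2 / sigma_b2          # updated smoothing parameter
    return lam, b

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
beta = 0.5 * rng.standard_normal(10)
y = X @ beta + rng.standard_normal(200)
lam, b = schall_ridge(X, y)
```

Only the first and second derivatives of the (Gaussian) log-likelihood appear, which is the simplification the abstract emphasizes.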

    Some Aspects of Measurement Error in Linear Regression of Astronomical Data

    I describe a Bayesian method to account for measurement errors in linear regression of astronomical data. The method allows for heteroscedastic and possibly correlated measurement errors, and intrinsic scatter in the regression relationship. The method is based on deriving a likelihood function for the measured data, and I focus on the case when the intrinsic distribution of the independent variables can be approximated using a mixture of Gaussians. I generalize the method to incorporate multiple independent variables, non-detections, and selection effects (e.g., Malmquist bias). A Gibbs sampler is described for simulating random draws from the probability distribution of the parameters, given the observed data. I use simulation to compare the method with other common estimators. The simulations illustrate that the Gaussian mixture model outperforms other common estimators and can effectively give constraints on the regression parameters, even when the measurement errors dominate the observed scatter, the source detection fraction is low, or the intrinsic distribution of the independent variables is not a mixture of Gaussians. I conclude by using this method to fit the X-ray spectral slope as a function of Eddington ratio using a sample of 39 z < 0.8 radio-quiet quasars. I confirm the correlation seen by other authors between the radio-quiet quasar X-ray spectral slope and the Eddington ratio, where the X-ray spectral slope softens as the Eddington ratio increases. (39 pages, 11 figures, 1 table; accepted by ApJ. IDL routines (linmix_err.pro) for performing the Markov Chain Monte Carlo are available at the IDL astronomy user's library, http://idlastro.gsfc.nasa.gov/homepage.htm.)
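The damage done by ignoring measurement error, and the kind of correction this line of work formalizes, can be seen in the textbook errors-in-variables attenuation effect. The simulation below is a deliberately simple toy (synthetic data, homoscedastic errors, a method-of-moments correction), not the paper's Bayesian Gaussian-mixture Gibbs sampler.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x_true = rng.normal(0.0, 1.0, n)              # latent covariate
y = 2.0 * x_true + rng.normal(0.0, 0.5, n)    # true slope = 2
x_obs = x_true + rng.normal(0.0, 1.0, n)      # covariate observed with unit error

# Naive OLS on the noisy covariate: slope is attenuated toward zero
slope_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs)

# Classical correction: divide by the reliability ratio
# var(x_true) / (var(x_true) + var(error)); here the ratio is about 0.5
reliability = np.var(x_true) / (np.var(x_true) + 1.0)
slope_corrected = slope_naive / reliability
```

With these settings the naive slope comes out near 1 rather than the true 2, which is exactly the regime where the abstract's point about measurement errors dominating the observed scatter matters.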

    2-(1,4-Dioxo-1,4-dihydro-2-naphthyl)-2-methylpropanoic acid

    The sterically crowded title compound, C₁₄H₁₂O₄, crystallizes as centrosymmetric hydrogen-bonded dimers involving the carboxyl groups. The naphthoquinone ring system is folded by 11.5 (1)° about a vector joining the 1,4-C atoms, and the quinone O atoms are displaced from the ring plane, presumably because of steric interactions with the bulky substituent.

    Designing a Belief Function-Based Accessibility Indicator to Improve Web Browsing for Disabled People

    The purpose of this study is to provide an accessibility measure of web pages, in order to draw disabled users to the pages that have been designed to be accessible to them. Our approach is based on the theory of belief functions, using data supplied by reports produced by automatic web content assessors that test the validity of criteria defined by the WCAG 2.0 guidelines proposed by the World Wide Web Consortium (W3C). These tools detect errors with gradual degrees of certainty, and their results do not always converge. For these reasons, to fuse information coming from the reports, we choose an information fusion framework which can take into account the uncertainty and imprecision of information, as well as divergences between sources. Our accessibility indicator covers four categories of deficiencies. To validate the theoretical approach in this context, we propose an evaluation conducted on a corpus of the 100 most visited French news websites, using 2 evaluation tools. The results obtained demonstrate the value of our accessibility indicator.
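The fusion step the authors describe rests on combination rules from belief-function theory. A minimal sketch of Dempster's rule of combination, applied to two invented assessor reports on a single criterion, looks like this (the frame, mass values, and labels are all hypothetical, and the paper may well use a different combination rule):

```python
from itertools import product

def dempster(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule, renormalizing away the conflict mass."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                     # mass on empty intersection
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two assessors rating one criterion: 'ok' vs 'fail', with mass on the
# whole frame encoding ignorance (all numbers invented for illustration).
ACC, FAIL = frozenset({'ok'}), frozenset({'fail'})
THETA = ACC | FAIL
m1 = {ACC: 0.6, THETA: 0.4}
m2 = {ACC: 0.5, FAIL: 0.2, THETA: 0.3}
fused = dempster(m1, m2)
```

The renormalization by the conflict mass is what lets the framework absorb the divergences between assessors that the abstract mentions.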

    Modelling the U.S. Federal Spending Process: Overview and Implications

    The purpose of this paper is to show how inflation is endemic to the budgetary process of the United States Federal Government. We relate models of government expenditure to models of the economy, thus joining in theory what has in practice always been together. The description given -- although presented in summary rather than detail -- is based on hard statistical and econometric evidence amassed over more than a decade. We attempt to show that, while they are complex, the relevant processes can be modeled reasonably simply. We conclude that the forces influencing U.S. Federal expenditures -- bureaucratic, political and economic -- are too entrenched and powerful to be easily deflected from their current course. Although expenditures decline during restrictive periods, they do not decline by nearly as much as they previously increased; thus each cycle of spending begins from a higher base. After brief descriptions of the process by which fiscal and budgetary policy are formed in the name of the President and of the evolution of the broad pattern of Federal expenditure post World War II, we present simple, empirically supported models of the formation and coordination of budget requests, Congressional appropriations and the timing of Federal expenditures. Next we outline, by means of the comparative static analysis of a simple macroeconomic model with an endogenous government sector, the short and medium term economic implications of a government reacting -- through its wage bill, "mandatory" transfer payments and attempted fiscal policy -- to output, the price level and unemployment. When government involves a sizable proportion of economic activity, its budget deficit -- rather than private consumer and investment credit alone -- represents a major intertemporal credit demand, fueling both growth and inflation. 
In these circumstances a tight fiscal and monetary policy, which reduces this credit in response to inflation, can have precisely the opposite effect to that desired, namely, simultaneous stagnation and accelerating inflation. Finally, we speculate on the long term effects of the resulting growth of the public sector necessitated by short term political and economic forces in light of the slowly adapting nature of bureaucratic processes captured in our models.

    Diagonal and Low-Rank Matrix Decompositions, Correlation Matrices, and Ellipsoid Fitting

    In this paper we establish links between, and new results for, three problems that are not usually considered together. The first is a matrix decomposition problem that arises in areas such as statistical modeling and signal processing: given a matrix X formed as the sum of an unknown diagonal matrix and an unknown low-rank positive semidefinite matrix, decompose X into these constituents. The second problem we consider is to determine the facial structure of the set of correlation matrices, a convex set also known as the elliptope. This convex body, and particularly its facial structure, plays a role in applications from combinatorial optimization to mathematical finance. The third problem is a basic geometric question: given points v_1, v_2, ..., v_n in R^k (where n > k), determine whether there is a centered ellipsoid passing exactly through all of the points. We show that in a precise sense these three problems are equivalent. Furthermore, we establish a simple sufficient condition on a subspace U that ensures any positive semidefinite matrix L with column space U can be recovered from D + L for any diagonal matrix D using a convex optimization-based heuristic known as minimum trace factor analysis. This result leads to a new understanding of the structure of rank-deficient correlation matrices and a simple condition on a set of points that ensures there is a centered ellipsoid passing through them. (20 pages.)
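The third problem has a direct computational restatement: a centered quadric {v : vᵀAv = 1} through the points exists iff the linear system v_iᵀ A v_i = 1 in the entries of the symmetric matrix A is consistent, and the quadric is an ellipsoid exactly when A is positive definite. A naive least-squares feasibility check along those lines (a sketch only, not the paper's semidefinite analysis) might look like:

```python
import numpy as np

def centered_quadric_through(points, tol=1e-8):
    """Least-squares solve v_i^T A v_i = 1 for a symmetric matrix A.
    Returns A if some centered quadric interpolates the points exactly
    (residual below tol), else None.  A centered *ellipsoid* additionally
    requires A to be positive definite, which the caller should check."""
    points = np.asarray(points, float)
    n, k = points.shape
    idx = [(i, j) for i in range(k) for j in range(i, k)]
    # Each point gives one linear equation in the upper-triangular entries of A
    M = np.array([[(1.0 if i == j else 2.0) * v[i] * v[j] for i, j in idx]
                  for v in points])
    coef, *_ = np.linalg.lstsq(M, np.ones(n), rcond=None)
    A = np.zeros((k, k))
    for c, (i, j) in zip(coef, idx):
        A[i, j] = A[j, i] = c
    resid = np.max(np.abs(np.einsum('ni,ij,nj->n', points, A, points) - 1.0))
    return A if resid < tol else None

# Four points on the unit circle in R^2 (so n > k): A = I fits exactly.
pts = [(1, 0), (0, 1), (-1, 0), (2 ** -0.5, 2 ** -0.5)]
A = centered_quadric_through(pts)
```

Deciding when such an A exists for generic point configurations is precisely the question the paper ties to the elliptope's facial structure.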

    Condition monitoring of an advanced gas-cooled nuclear reactor core

    A critical component of an advanced gas-cooled reactor station is the graphite core. As a station ages, the graphite bricks that comprise the core can distort and may eventually crack. Since the core cannot be replaced, the core integrity ultimately determines the station life. Monitoring these distortions is usually restricted to the routine outages, which occur every few years, as this is the only time that the reactor core can be accessed by external sensing equipment. This paper presents a monitoring module based on model-based techniques using measurements obtained during the refuelling process. A fault detection and isolation filter based on unknown input observer techniques is developed. The role of this filter is to estimate the friction force produced by the interaction between the wall of the fuel channel and the fuel assembly supporting brushes. This allows an estimate to be made of the shape of the graphite bricks that comprise the core and, therefore, allows any distortion in them to be monitored.
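The unknown-input estimation idea can be illustrated in miniature: augment the state with the unknown input and run a Luenberger-type observer on the augmented system. The scalar toy plant below is entirely invented (it bears no relation to the actual reactor or friction model), and the deadbeat observer gain was placed by hand for this particular A and C.

```python
import numpy as np

# Toy plant: x[k+1] = 0.9 x[k] + d, with d an unknown constant disturbance
# (a stand-in for an unknown friction force); we measure y = x.
a, d_true = 0.9, 0.7

# Augmented state z = [x, d]:  z[k+1] = A z[k],  y = C z
A = np.array([[a, 1.0],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])

# Deadbeat gain: eigenvalues of (A - L C) placed at 0, so the
# estimate of d converges in two steps (hand-computed for this A, C).
L = np.array([[1.9], [1.0]])

x, z_hat = 0.0, np.zeros((2, 1))
for _ in range(10):
    y = np.array([[x]])
    z_hat = A @ z_hat + L @ (y - C @ z_hat)  # observer update from output error
    x = a * x + d_true                        # plant update
d_est = float(z_hat[1, 0])                    # recovered unknown input
```

The same augmentation trick underlies unknown-input observers generally, although the actual filter in the paper handles a far richer model and decouples the unknown input rather than assuming it constant.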

    The Weibull-Geometric distribution

    In this paper we introduce the Weibull-geometric distribution, which generalizes the exponential-geometric distribution proposed by Adamidis and Loukas (1998). The hazard function of the latter distribution is monotone decreasing, but the hazard function of the new distribution can take more general forms. Unlike the Weibull distribution, the proposed distribution is useful for modeling unimodal failure rates. We derive the cumulative distribution and hazard functions and the density of the order statistics, and calculate expressions for its moments and for the moments of the order statistics. We give expressions for the Rényi and Shannon entropies. The maximum likelihood estimation procedure is discussed, and an EM algorithm (Dempster et al., 1977; McLachlan and Krishnan, 1997) is provided for estimating the parameters. We obtain the information matrix and discuss inference. Applications to real data sets are given to show the flexibility and potential of the proposed distribution.
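One standard way to realize this kind of geometric compounding is as the minimum of a geometric number of i.i.d. Weibull variables, which also gives a quick Monte Carlo check of the resulting closed-form CDF F(x) = (1 − e^{−(βx)^α}) / (1 − p e^{−(βx)^α}) for 0 < p < 1. The parameter values below are arbitrary, and this is a sketch of the compounding construction rather than the paper's estimation procedure.

```python
import numpy as np

def rweibull_geometric(alpha, beta, p, size, rng):
    """Sample X = min(W_1, ..., W_N) with W_i ~ Weibull(shape alpha,
    rate beta) and N ~ Geometric(1 - p) on {1, 2, ...}: a standard
    compounding construction of the Weibull-geometric distribution."""
    n = rng.geometric(1.0 - p, size=size)           # geometric counts
    return np.array([rng.weibull(alpha, k).min() / beta for k in n])

def cdf_wg(x, alpha, beta, p):
    """Closed-form Weibull-geometric CDF."""
    s = np.exp(-(beta * x) ** alpha)
    return (1.0 - s) / (1.0 - p * s)

rng = np.random.default_rng(2)
alpha, beta, p = 2.0, 1.5, 0.6
xs = rweibull_geometric(alpha, beta, p, 50000, rng)
emp = np.mean(xs <= 0.5)          # empirical CDF at x = 0.5
theo = cdf_wg(0.5, alpha, beta, p)
```

The empirical and closed-form CDF values agree to Monte Carlo accuracy, consistent with P(X > x) = Σₙ (1−p)pⁿ⁻¹ e^{−n(βx)^α} summing to a geometric series.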