
    Section on the special year for mathematics of planet earth (MPE 2013)

    Dozens of research centers, foundations, international organizations and scientific societies, including the Institute of Mathematical Statistics, have joined forces to celebrate 2013 as a special year for the Mathematics of Planet Earth. In its five-year history, the Annals of Applied Statistics has been publishing cutting-edge research in this area, including geophysical, biological and socio-economic aspects of planet Earth, with the special section on statistics in the atmospheric sciences edited by Fuentes, Guttorp and Stein (2008) and the discussion paper by McShane and Wyner (2011) on paleoclimate reconstructions [Stein (2011)] having been highlights. Published at http://dx.doi.org/10.1214/12-AOAS606 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Local proper scoring rules of order two

    Scoring rules assess the quality of probabilistic forecasts by assigning a numerical score based on the predictive distribution and on the event or value that materializes. A scoring rule is proper if it encourages truthful reporting. It is local of order k if the score depends on the predictive density only through its value and the values of its derivatives of order up to k at the realizing event. Complementing fundamental recent work by Parry, Dawid and Lauritzen, we characterize the local proper scoring rules of order 2 relative to a broad class of Lebesgue densities on the real line, using a different approach. In a data example, we use local and nonlocal proper scoring rules to assess statistically postprocessed ensemble weather forecasts. Published at http://dx.doi.org/10.1214/12-AOS973 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
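    For intuition on propriety (a hypothetical illustration of ours, not an example from the paper): the logarithmic score is a local proper scoring rule of order 0, since it uses only the density value at the outcome, and truthful reporting minimizes its expected value. A minimal Monte Carlo sketch, assuming standard-normal data and a mean-shifted misreport:

    ```python
    import math
    import random

    def log_score(pdf, y):
        """Logarithmic score: negative log predictive density at the outcome y.
        Local of order 0: it depends on the density only through its value at y."""
        return -math.log(pdf(y))

    def normal_pdf(mu, sigma):
        return lambda y: math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    random.seed(42)
    n = 100_000
    outcomes = [random.gauss(0.0, 1.0) for _ in range(n)]  # true law: N(0, 1)

    truthful = sum(log_score(normal_pdf(0.0, 1.0), y) for y in outcomes) / n
    misreport = sum(log_score(normal_pdf(1.0, 1.0), y) for y in outcomes) / n

    # Propriety: the truthful forecast attains the lower (better) mean score.
    # The expected gap equals the Kullback-Leibler divergence KL(N(0,1) || N(1,1)) = 0.5.
    print(truthful < misreport)  # prints True
    ```

    Higher-order local rules, the paper's subject, additionally use density derivatives at the outcome; the sketch above covers only the order-0 case.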

    Copula Calibration

    We propose notions of calibration for probabilistic forecasts of general multivariate quantities. Probabilistic copula calibration is a natural analogue of probabilistic calibration in the univariate setting. It can be assessed empirically by checking for the uniformity of the copula probability integral transform (CopPIT), which is invariant under coordinate permutations and coordinatewise strictly monotone transformations of the predictive distribution and the outcome. The CopPIT histogram can be interpreted as a generalization and variant of the multivariate rank histogram, which has been used to check the calibration of ensemble forecasts. Climatological copula calibration is an analogue of marginal calibration in the univariate setting. Methods and tools are illustrated in a simulation study and applied to compare raw numerical model and statistically postprocessed ensemble forecasts of bivariate wind vectors.
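    The univariate probability integral transform (PIT) that CopPIT generalizes is easy to check empirically: if the outcome Y is drawn from the predictive distribution F, then F(Y) is uniform on [0, 1]. A rough sketch of that univariate check (our own illustration, assuming a standard-normal forecast that matches the data-generating law):

    ```python
    import math
    import random

    def normal_cdf(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    random.seed(0)
    n = 50_000
    outcomes = [random.gauss(0.0, 1.0) for _ in range(n)]

    # PIT values F(Y): uniform on [0, 1] when the forecast F is calibrated,
    # so a histogram of these values should be flat.
    pit = [normal_cdf(y) for y in outcomes]

    mean_pit = sum(pit) / n
    var_pit = sum((u - mean_pit) ** 2 for u in pit) / n

    # A uniform [0, 1] variable has mean 1/2 and variance 1/12 ≈ 0.0833.
    print(round(mean_pit, 2), round(var_pit, 2))
    ```

    The CopPIT of the paper extends this idea to multivariate outcomes via the copula of the predictive distribution, which is what makes it invariant under coordinatewise monotone transformations.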

    Predicting Inflation: Professional Experts Versus No-Change Forecasts

    We compare forecasts of United States inflation from the Survey of Professional Forecasters (SPF) to predictions made by simple statistical techniques. In nowcasting, economic expertise is persuasive. When projecting beyond the current quarter, novel yet simplistic probabilistic no-change forecasts are equally competitive. We further interpret surveys as ensembles of forecasts, and show that they can be used similarly to the ways in which ensemble prediction systems have transformed weather forecasting. Then we borrow another idea from weather forecasting, in that we apply statistical techniques to postprocess the SPF forecast, based on experience from the recent past. The foregoing conclusions remain unchanged after survey postprocessing.
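    One simple reading of a probabilistic no-change forecast (a sketch under our own assumptions, not necessarily the construction used in the paper): center the empirical distribution of past h-step changes at the most recent observation, yielding a predictive sample rather than a point forecast. The series below is made up for illustration:

    ```python
    def no_change_forecast(series, h=1):
        """Probabilistic no-change forecast (hypothetical sketch): the last
        observed value plus the empirical distribution of past h-step changes."""
        changes = [series[t + h] - series[t] for t in range(len(series) - h)]
        last = series[-1]
        return sorted(last + c for c in changes)  # predictive sample

    def quantile(sample, q):
        """Simple nearest-rank empirical quantile (sample must be sorted)."""
        idx = min(len(sample) - 1, int(q * len(sample)))
        return sample[idx]

    inflation = [2.1, 2.3, 2.0, 2.4, 2.6, 2.2, 2.5]  # made-up quarterly rates
    pred = no_change_forecast(inflation, h=1)
    print(quantile(pred, 0.5))  # median of the predictive distribution
    ```

    Being a full predictive distribution, such a forecast can be evaluated with proper scoring rules and compared head-to-head with survey-based probabilistic forecasts.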

    Using proper divergence functions to evaluate climate models

    It has been argued persuasively that, in order to evaluate climate models, the probability distributions of model output need to be compared to the corresponding empirical distributions of observed data. Distance measures between probability distributions, also called divergence functions, can be used for this purpose. We contend that divergence functions ought to be proper, in the sense that acting on modelers' true beliefs is an optimal strategy. Score divergences that derive from proper scoring rules are proper, with the integrated quadratic distance and the Kullback-Leibler divergence being particularly attractive choices. Other commonly used divergences fail to be proper. In an illustration, we evaluate and rank simulations from fifteen climate models for temperature extremes in a comparison to re-analysis data.
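    For intuition (our own sketch, not code from the paper), the integrated quadratic distance between two samples can be approximated from their empirical CDFs on a grid: IQD(F, G) is the integral of (F(x) - G(x))^2 over x, the score divergence associated with the continuous ranked probability score. The grid bounds and samples below are illustrative assumptions:

    ```python
    def ecdf(sample):
        """Empirical CDF of a sample: fraction of points <= x."""
        s, n = sorted(sample), len(sample)
        return lambda x: sum(1 for v in s if v <= x) / n

    def integrated_quadratic_distance(a, b, grid_lo, grid_hi, steps=10_000):
        """IQD(F, G): midpoint-rule approximation of the integral of
        (F(x) - G(x))**2 dx, where F and G are the empirical CDFs of a and b."""
        F, G = ecdf(a), ecdf(b)
        dx = (grid_hi - grid_lo) / steps
        return sum((F(grid_lo + (i + 0.5) * dx) - G(grid_lo + (i + 0.5) * dx)) ** 2 * dx
                   for i in range(steps))

    model_output = [0.1, 0.4, 0.9, 1.5, 2.0]  # made-up model sample
    observations = [0.2, 0.5, 1.0, 1.4, 2.1]  # made-up observed sample
    print(integrated_quadratic_distance(model_output, observations, -1.0, 3.0))
    print(integrated_quadratic_distance(model_output, model_output, -1.0, 3.0))  # prints 0.0
    ```

    Note that the IQD compares whole distributions rather than paired values, which is what makes it suitable for climate-model evaluation, where model output and observations are not synchronized in time.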