915 research outputs found

    Probabilistic temperature forecasting: a summary of our recent research results

    We summarise the main results from a number of our recent articles on the subject of probabilistic temperature forecasting.

    Probabilistic forecasts of temperature: measuring the utility of the ensemble spread

    The spread of ensemble weather forecasts contains information about the spread of possible future weather scenarios. But how much information does it contain, and how useful is that information in predicting the probabilities of future temperatures? One traditional answer to this question is to calculate the spread-skill correlation. We discuss the spread-skill correlation and how it interacts with some simple calibration schemes. We then point out why it is not, in fact, a useful measure of the amount of information in the ensemble spread, and discuss a number of other measures that are more useful.
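
    As a rough illustration of the quantity discussed above, the spread-skill correlation can be computed on synthetic data by correlating the ensemble standard deviation with the absolute error of the ensemble mean (all data below are simulated; this is a sketch, not the authors' code):

```python
import numpy as np

# Simulate days with varying true forecast uncertainty, then compute the
# correlation between ensemble spread and the absolute error of the mean.
rng = np.random.default_rng(0)
n = 1000
true_sd = rng.uniform(0.5, 2.0, n)                  # day-to-day uncertainty
obs = rng.normal(0.0, true_sd)                      # verifying observations
ens = rng.normal(0.0, true_sd[:, None], (n, 20))    # 20-member ensemble

spread = ens.std(axis=1, ddof=1)                    # ensemble spread
abs_error = np.abs(obs - ens.mean(axis=1))          # "skill" proxy

spread_skill_corr = np.corrcoef(spread, abs_error)[0, 1]
```

    Even here, where the spread tracks the true uncertainty by construction, the correlation comes out well below one, which hints at why a low spread-skill correlation need not mean the spread is uninformative.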

    The problem with the Brier score

    The Brier score is frequently used by meteorologists to measure the skill of binary probabilistic forecasts. We show, however, that in simple idealised cases it gives counterintuitive results. We advocate the use of an alternative measure that has a more compelling intuitive justification.
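
    For reference, the Brier score critiqued above is simply the mean squared difference between forecast probabilities and the corresponding binary outcomes (a minimal sketch):

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return float(np.mean((probs - outcomes) ** 2))
```

    A perfect forecast scores 0, and an uninformative constant forecast of 0.5 scores 0.25.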

    Do medium range ensemble forecasts give useful predictions of temporal correlations?

    Medium range ensemble forecasts are typically used to derive predictions of the conditional marginal distributions of future events on individual days. We assess whether they can also be used to predict the conditional correlations between different days.
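
    One natural way to extract such a prediction, sketched here on synthetic data, is to correlate ensemble members' values across two lead days (all names and numbers below are illustrative):

```python
import numpy as np

# Build a synthetic 50-member ensemble with correlated day-1 and day-2 values,
# then estimate the temporal correlation across members.
rng = np.random.default_rng(5)
members = 50
day1 = rng.normal(0.0, 1.0, members)
day2 = 0.7 * day1 + rng.normal(0.0, 0.5, members)   # correlated by construction
ens = np.column_stack([day1, day2])                  # shape (members, lead days)

temporal_corr = np.corrcoef(ens, rowvar=False)[0, 1]
```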

    Moment based methods for ensemble assessment and calibration

    We describe various moment-based ensemble interpretation models for the construction of probabilistic temperature forecasts from ensembles. We apply the methods to one year of medium range ensemble forecasts and perform in-sample and out-of-sample testing. Our main conclusion is that probabilistic forecasts derived from the ensemble mean using regression are just as good as those based on the ensemble mean and the ensemble spread using a more complex calibration algorithm. The explanation for this seems to be that the predictable component of the variability of the forecast uncertainty is only a small fraction of the total forecast uncertainty. Users of ensemble temperature forecasts are advised, until further evidence becomes available, to ignore the ensemble spread and build probabilistic forecasts based on the ensemble mean alone.
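
    The regression-on-the-ensemble-mean approach favoured above can be sketched as ordinary least squares with a constant residual spread (synthetic data; a simplified reading of the method, not the paper's exact algorithm):

```python
import numpy as np

# Regress observations on the ensemble mean; the fitted line gives the
# predictive mean and the residual standard deviation gives a constant spread.
rng = np.random.default_rng(1)
n = 500
ens_mean = rng.normal(15.0, 5.0, n)                   # ensemble-mean forecasts
obs = 1.0 + 0.9 * ens_mean + rng.normal(0.0, 2.0, n)  # verifying temperatures

A = np.column_stack([np.ones(n), ens_mean])
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)        # intercept, slope
resid = obs - A @ coef
sigma = resid.std(ddof=2)                             # constant predictive spread
```

    The probabilistic forecast for a new day is then a normal distribution with mean coef[0] + coef[1] * ens_mean and standard deviation sigma.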

    Improving on the empirical covariance matrix using truncated PCA with white noise residuals

    The empirical covariance matrix is not necessarily the best estimator for the population covariance matrix: we describe a simple method which gives better estimates in two examples. The method models the covariance matrix using truncated PCA with white noise residuals. Jack-knife cross-validation is used to find the truncation that maximises the out-of-sample likelihood score.
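
    One concrete form of such an estimator, in the spirit of the abstract (the exact construction in the article may differ), keeps the leading eigenvectors and replaces the discarded directions with isotropic noise:

```python
import numpy as np

def truncated_pca_cov(X, k):
    """Covariance estimate: top-k principal components plus white noise.

    The discarded eigenvalues are replaced by their average, so the total
    variance (trace) of the empirical covariance is preserved.
    """
    S = np.cov(X, rowvar=False)                  # empirical covariance
    vals, vecs = np.linalg.eigh(S)               # eigenvalues ascending
    vals, vecs = vals[::-1], vecs[:, ::-1]       # sort descending
    p = S.shape[0]
    noise = vals[k:].mean() if k < p else 0.0    # average discarded variance
    Vk = vecs[:, :k]
    return Vk @ np.diag(vals[:k] - noise) @ Vk.T + noise * np.eye(p)
```

    The truncation level k would then be chosen by the jack-knife cross-validation the abstract describes.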

    Comparing classical and Bayesian methods for predicting hurricane landfall rates

    We compare classical and Bayesian methods for fitting the Poisson distribution to the number of hurricanes making landfall on sections of the US coastline.
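
    The two fits being compared can be sketched as follows: the classical estimate of the rate is the sample mean, while a Bayesian treatment with the conjugate Gamma prior gives a closed-form posterior mean (the prior parameters here are illustrative, not taken from the paper):

```python
import numpy as np

def poisson_mle(counts):
    """Classical maximum-likelihood estimate of the Poisson rate."""
    return float(np.mean(counts))

def poisson_posterior_mean(counts, a=1.0, b=1.0):
    """Posterior mean of the rate under a Gamma(a, b) prior (shape a, rate b).

    Conjugacy gives a Gamma(a + sum(counts), b + n) posterior for the rate.
    """
    counts = np.asarray(counts)
    return float((a + counts.sum()) / (b + len(counts)))
```

    With few years of data the prior pulls the estimate noticeably; with many years the two estimates converge.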

    Year-ahead prediction of US landfalling hurricane numbers

    We present a simple method for the year-ahead prediction of the number of hurricanes making landfall in the US. The method is based on averages of historical annual hurricane numbers, and we perform a backtesting study to find the length of the averaging window that would have given the best predictions in the past.
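
    The backtesting idea can be sketched as follows (synthetic counts; an illustration of the windowed-average approach, not the paper's code):

```python
import numpy as np

def best_window(counts, windows):
    """Score each averaging window by out-of-sample mean squared error.

    For each window length w, predict each year's count as the mean of the
    previous w years; return the window with the lowest error and all scores.
    """
    counts = np.asarray(counts, dtype=float)
    errors = {}
    for w in windows:
        preds = np.array([counts[t - w:t].mean() for t in range(w, len(counts))])
        errors[w] = float(np.mean((counts[w:] - preds) ** 2))
    return min(errors, key=errors.get), errors
```

    Note that each window is scored on a slightly different set of years; a careful backtest would align the evaluation periods across windows.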

    Five guidelines for the evaluation of site-specific medium range probabilistic temperature forecasts

    Probabilistic temperature forecasts are potentially useful to the energy and weather derivatives industries. However, at present, they are little used. There are a number of reasons for this, but we believe it is in part due to inadequacies in the methodologies that have been used to evaluate such forecasts, leading to uncertainty as to whether the forecasts are really useful and making it hard to work out which forecasts are best. To remedy this situation we describe a set of guidelines that we recommend should be followed when evaluating the skill of site-specific probabilistic medium range temperature forecasts. If these guidelines are followed then the results of validation can be used directly by forecast users to make decisions about which forecasts to use. If they are not followed then the results of validation may be interesting, but will not be practically useful for users. We find that none of the published studies that evaluate such forecasts fall within our guidelines, and that, as a result, none convey the information that users need to make appropriate decisions about which forecasts are best.

    Statistical modelling of tropical cyclone genesis: a non-parametric model for the annual distribution

    As part of a project to develop more accurate estimates of the risks due to tropical cyclones, we describe a non-parametric method for the statistical simulation of the location of tropical cyclone genesis. The method avoids the use of arbitrary grid boxes, and the spatial smoothing of the historical data is constructed optimally according to a clearly defined merit function.
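
    A minimal version of the idea, assuming a Gaussian kernel density over genesis locations with the smoothing bandwidth chosen by leave-one-out log-likelihood (one plausible merit function; the article's exact choices may differ):

```python
import numpy as np

def loo_log_likelihood(points, h):
    """Leave-one-out log-likelihood of a 2D Gaussian kernel density."""
    n = len(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * h * h)) / (2 * np.pi * h * h)
    np.fill_diagonal(K, 0.0)        # exclude each point from its own density
    dens = K.sum(axis=1) / (n - 1)
    return float(np.sum(np.log(dens)))

def best_bandwidth(points, candidates):
    """Pick the bandwidth that maximises the leave-one-out log-likelihood."""
    return max(candidates, key=lambda h: loo_log_likelihood(points, h))
```

    Too small a bandwidth overfits individual historical storms and too large a bandwidth oversmooths, so the leave-one-out score selects an intermediate value.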