
    A Trade-by-Trade Surprise Measure and Its Relation to Observed Spreads on the NYSE

    We analyze the relationship between spreads and an indicator for information-based transactions on trade-by-trade data. Classifying trades on the NYSE into six categories with respect to their volume relative to the quoted depth, we employ an ordered probit model to predict the category of a trade given the current market conditions. This approach allows us to test certain market microstructure hypotheses on the determinants of the buy-sell pressure. The difference between the predicted and the actual trade category (the surprise) is found to have explanatory power for the observed spreads beyond raw volume, volume relative to the quoted depth, and previous trading volume. The positive effect of the previous surprise on the observed spreads confirms the hypothesis that market makers react to the increased probability of having traded with an informed trader by widening the spread.
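
    As a rough illustration of the surprise construction, the sketch below fits an ordered probit with statsmodels on simulated stand-in data; the regressor names, the six-category coding, and the data are hypothetical, not the authors' specification.

        # Hypothetical sketch of the trade-category "surprise" measure.
        import numpy as np
        import pandas as pd
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        rng = np.random.default_rng(0)
        n = 1000
        # stand-in market-condition regressors (names are assumptions)
        X = pd.DataFrame({"rel_volume": rng.random(n),    # volume / quoted depth
                          "prev_volume": rng.random(n)})
        y = rng.integers(0, 6, n)                         # six trade categories

        # ordered probit: predict the trade category from market conditions
        res = OrderedModel(y, X, distr="probit").fit(method="bfgs", disp=False)

        # surprise = actual category minus the model-implied expected category
        probs = np.asarray(res.predict(X))                # (n, 6) probabilities
        surprise = y - probs @ np.arange(probs.shape[1])

    In the paper, this surprise, lagged by one trade, is what carries explanatory power for spreads beyond the raw volume measures.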

    Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling frequency chosen after Bandi & Russell (2005a) and Bandi & Russell (2005b). The power of our methodology stems from the fact that instead of trying to correct the realized quantities for the noise, we identify both the true underlying integrated moments and the moments of the noise, which are also estimated within our framework. Apart from being simple to implement, an important property of our estimators is that they are quite robust to misspecifications of the noise process.
    Keywords: High frequency data, Realized volatility and covariance, Market microstructure
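
    The core identification idea can be illustrated in a few lines: under i.i.d. noise, the realized variance computed from every q-th observation is approximately the integrated variance plus twice the noise variance times the number of sampled returns, so an OLS regression across sampling frequencies recovers both. This is a simplified sketch of that logic, not the paper's exact estimator.

        import numpy as np

        def realized_variance(prices, step):
            logp = np.log(prices[::step])          # sample every `step`-th price
            return np.sum(np.diff(logp) ** 2)

        # simulate one day of noisy one-second prices (assumed i.i.d. noise)
        rng = np.random.default_rng(1)
        n, true_iv, noise_sd = 23_400, 1e-4, 5e-4
        efficient = np.cumsum(rng.normal(0, np.sqrt(true_iv / n), n))
        prices = np.exp(efficient + rng.normal(0, noise_sd, n))

        steps = np.arange(1, 61)                   # 1- to 60-second sampling
        n_q = np.array([len(prices[::s]) - 1 for s in steps])
        rv_q = np.array([realized_variance(prices, s) for s in steps])

        # RV_q ~ IV + 2 * E[eps^2] * n_q: intercept and slope identify both
        slope, intercept = np.polyfit(n_q, rv_q, 1)
        print("integrated variance:", intercept, "noise variance:", slope / 2)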

    Panel Intensity Models with Latent Factors: An Application to the Trading Dynamics on the Foreign Exchange Market

    We develop a panel intensity model with a time-varying latent factor that captures the influence of unobserved time effects and allows for correlation across individuals. The model is designed to analyze individual trading behavior on the basis of trading activity datasets, which are characterized by four dimensions: an irregularly spaced time scale, trading activity types, trading instruments, and investors. Our approach extends the stochastic conditional intensity model of Bauwens & Hautsch (2006) to panel duration data. We show how to estimate the model parameters by a simulated maximum likelihood technique adopting the efficient importance sampling approach of Richard & Zhang (2005). In an application to a trading activity dataset from an internet trading platform in the foreign exchange market, we find support for the presence of behavioral biases and discuss implications for portfolio theory.
    Keywords: Trading Activity Datasets, Panel Intensity Models, Latent Factors, Efficient Importance Sampling, Behavioral Finance
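
    A stylized version of the estimation step, with plain Monte Carlo integration standing in for the efficient importance sampler of Richard & Zhang (2005): an exponential duration model whose log-intensity contains a latent AR(1) factor, estimated by simulated maximum likelihood on stand-in data.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import logsumexp

        rng = np.random.default_rng(2)
        T, n_draws = 200, 300
        durations = rng.exponential(1.0, T)        # stand-in trading durations
        eta = rng.normal(size=(n_draws, T))        # common random numbers

        def neg_sim_loglik(params):
            mu, rho_raw, sig_raw = params
            rho, sigma = np.tanh(rho_raw), np.exp(sig_raw)
            lam = np.zeros((n_draws, T))           # simulated latent factor paths
            for t in range(1, T):
                lam[:, t] = rho * lam[:, t - 1] + sigma * eta[:, t]
            intensity = np.exp(mu + lam)           # conditional intensity
            logdens = np.log(intensity) - intensity * durations
            # integrate the latent paths out by averaging the joint density
            return -(logsumexp(logdens.sum(axis=1)) - np.log(n_draws))

        fit = minimize(neg_sim_loglik, x0=[0.0, 0.0, -2.0], method="Nelder-Mead")
        print(fit.x)                               # mu, atanh(rho), log(sigma)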

    Dynamic modeling of large dimensional covariance matrices

    Modelling and forecasting the covariance of financial return series has always been a challenge due to the so-called curse of dimensionality. This paper proposes a methodology that is applicable in large dimensional cases and is based on a time series of realized covariance matrices. Some solutions are also presented to the problem of non-positive definite forecasts. This methodology is then compared to some traditional models on the basis of its forecasting performance, employing Diebold-Mariano tests. We show that our approach is better suited to capturing the dynamic features of volatilities and covolatilities than sample-covariance-based models.
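
    The abstract mentions fixes for non-positive definite forecasts; one standard repair (not necessarily the one used in the paper) is to clip the eigenvalues of the symmetrized forecast, as in this sketch.

        import numpy as np

        def make_positive_definite(forecast, eps=1e-8):
            """Symmetrize, then lift eigenvalues below eps."""
            sym = (forecast + forecast.T) / 2
            vals, vecs = np.linalg.eigh(sym)
            return (vecs * np.clip(vals, eps, None)) @ vecs.T

        # an impossible "correlation" forecast with a negative eigenvalue
        bad = np.array([[1.0, 0.9, 0.9],
                        [0.9, 1.0, -0.9],
                        [0.9, -0.9, 1.0]])
        print(np.linalg.eigvalsh(make_positive_definite(bad)))  # all >= eps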

    Impact of the tick-size on financial returns and correlations

    We demonstrate that the lowest possible price change (tick-size) has a large impact on the structure of financial return distributions. It induces a microstructure and can alter the tail behavior. The tick-size can also distort the calculation of correlations; this especially occurs on small return intervals and thus contributes to the decay of the correlation coefficient towards smaller return intervals (Epps effect). We study this behavior within a model and identify the effect in market data. Furthermore, we present a method to compensate for this purely statistical error.
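
    The rounding mechanism is easy to reproduce under simple assumptions (correlated Gaussian random-walk prices, a fixed tick): rounding prices to the tick grid attenuates measured return correlations, and the attenuation is strongest at the shortest return intervals.

        import numpy as np

        rng = np.random.default_rng(3)
        n, rho, tick = 100_000, 0.7, 0.01
        shocks = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n) * 0.002
        prices = 100.0 + np.cumsum(shocks, axis=0)
        rounded = np.round(prices / tick) * tick   # prices forced onto the grid

        for k in (1, 10, 100):                     # return interval in steps
            r = np.diff(rounded[::k], axis=0)
            print(k, np.corrcoef(r[:, 0], r[:, 1])[0, 1])  # recovers rho slowly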

    Compensating asynchrony effects in the calculation of financial correlations

    We present a method to compensate for statistical errors in the calculation of correlations on asynchronous time series. The method is based on the assumption of an underlying time series. We set up a model and apply it to financial data to examine the decrease of calculated correlations towards smaller return intervals (Epps effect). We show that this statistical effect is a major cause of the Epps effect. Hence, we are able to quantify and to compensate for it using only trading prices and trading times.
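
    The asynchrony mechanism itself can be reproduced in a few lines: two series that follow one latent path perfectly, observed at independent random times and synchronized by previous-tick interpolation, show correlations that rise toward one only as the return interval grows. A toy illustration, not the paper's compensation method:

        import numpy as np

        rng = np.random.default_rng(4)
        T = 100_000
        latent = np.cumsum(rng.normal(size=T))     # shared latent log-price
        obs_a = np.sort(rng.choice(T, 5_000, replace=False))
        obs_b = np.sort(rng.choice(T, 5_000, replace=False))

        def previous_tick(times, values, grid):
            """Last observed value at or before each grid point."""
            idx = np.searchsorted(times, grid, side="right") - 1
            return values[np.clip(idx, 0, None)]

        for dt in (10, 100, 1000):                 # return interval
            grid = np.arange(0, T, dt)
            ra = np.diff(previous_tick(obs_a, latent[obs_a], grid))
            rb = np.diff(previous_tick(obs_b, latent[obs_b], grid))
            print(dt, np.corrcoef(ra, rb)[0, 1])   # Epps-like decay at small dt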

    Modeling tick-by-tick realized correlations

    A tree-structured heterogeneous autoregressive (tree-HAR) process is proposed as a simple and parsimonious model for the estimation and prediction of tick-by-tick realized correlations. The model can account for regime shifts in the conditional mean dynamics of the realized correlation series that depend on time and other relevant predictors. Testing the model on S&P 500 Futures and 30-year Treasury Bond Futures realized correlations, we provide empirical evidence that the tree-HAR model reaches a good compromise between simplicity and flexibility. The model yields accurate single- and multi-step out-of-sample forecasts. Such forecasts are also better than those obtained from other standard approaches, in particular when the final goal is multi-period forecasting.
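
    For reference, the plain HAR regression that the tree-HAR generalizes (the tree part adds regime-dependent parameters on top) looks like this; the series is a stand-in and the column names are assumptions.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        rc = pd.Series(rng.uniform(0.0, 0.8, 500), name="realized_corr")

        # daily, weekly, monthly averages of past realized correlations
        X = pd.DataFrame({"daily": rc.shift(1),
                          "weekly": rc.shift(1).rolling(5).mean(),
                          "monthly": rc.shift(1).rolling(22).mean()})
        data = pd.concat([rc, X], axis=1).dropna()

        har = sm.OLS(data["realized_corr"], sm.add_constant(data[X.columns])).fit()
        print(har.params)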

    Forecasting Multivariate Volatility using the VARFIMA Model on Realized Covariance Cholesky Factors

    This paper analyzes the forecast accuracy of the multivariate realized volatility model introduced by Chiriac and Voev (2010), subject to different degrees of model parametrization and economic evaluation criteria. By modelling the Cholesky factors of the covariance matrices, the model generates positive definite, but biased, covariance forecasts. In this paper, we provide empirical evidence that parsimonious versions of the model generate the best covariance forecasts in the absence of bias correction. Moreover, we show by means of stochastic dominance tests that any risk-averse investor, regardless of the type of utility function or return distribution, would be better off using this model than some standard approaches.
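
    The Cholesky device itself is simple to sketch: model the stacked Cholesky entries as a vector time series and rebuild forecasts as L L', which is positive semidefinite by construction. Here a VAR(1) stands in for the paper's VARFIMA, on simulated stand-in matrices.

        import numpy as np
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(6)
        T, k = 300, 3
        covs = []
        for _ in range(T):                         # stand-in realized covariances
            a = rng.normal(size=(50, k)) * 0.01
            covs.append(a.T @ a / 50)

        # stack the lower-triangular Cholesky entries into a time series
        tril = np.tril_indices(k)
        X = np.array([np.linalg.cholesky(c)[tril] for c in covs])

        var = VAR(X).fit(1)                        # VAR(1) proxy for VARFIMA
        x_fc = var.forecast(X[-var.k_ar:], steps=1)[0]

        L = np.zeros((k, k))
        L[tril] = x_fc
        print(L @ L.T)                             # forecast, PSD by construction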

    Accurate estimator of correlations between asynchronous signals

    The estimation of the correlation between time series is often hampered by the asynchronicity of the signals. Cumulating data within a time window suppresses this source of noise but weakens the statistics. We present a method to estimate correlations without applying long time windows. We decompose the correlations of data cumulated over a long window using the decay of lagged correlations calculated from short-window data. This increases the accuracy of the estimated correlation significantly and reduces the computational effort in both real and computer experiments.
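
    The decomposition rests on a standard identity: for stationary series, the covariance of returns aggregated over a window of length m equals the sum over lags l of (m - |l|) times the lagged cross-covariance of the short-window returns. A toy check under simple assumptions (one series leading the other by one step):

        import numpy as np

        rng = np.random.default_rng(7)
        n, m = 100_000, 20                         # m = long aggregation window
        eps = rng.normal(size=(n, 2))
        a = eps[:, 0]
        b = 0.4 * np.roll(a, 1) + eps[:, 1]        # b follows a with a lag
        b[0] = eps[0, 1]

        def cross_cov(x, y, lag):
            """E[x_t * y_(t+lag)] for zero-mean series."""
            if lag >= 0:
                return np.mean(x[:n - lag] * y[lag:])
            return np.mean(x[-lag:] * y[:n + lag])

        direct = np.cov(a.reshape(-1, m).sum(1), b.reshape(-1, m).sum(1))[0, 1]
        rebuilt = sum((m - abs(l)) * cross_cov(a, b, l) for l in range(1 - m, m))
        print(direct, rebuilt)                     # both close to (m - 1) * 0.4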