Model Selection Criteria for the Leads-and-Lags Cointegrating Regression
In this paper, Mallows' (1973) Cp criterion, Akaike's (1973) AIC, Hurvich and Tsai's (1989) corrected AIC, and the BIC of Akaike (1978) and Schwarz (1978) are derived for the leads-and-lags cointegrating regression. Deriving model selection criteria for the leads-and-lags regression is a nontrivial task, since the true model is of infinite dimension. This paper justifies using the conventional formulas of these model selection criteria for the leads-and-lags cointegrating regression, so that the numbers of leads and lags can be chosen in a data-dependent way rather than fixed in advance. Simulation results on the bias and mean squared error of the long-run coefficient estimates are reported. The model selection criteria are successful in reducing bias and mean squared error relative to the conventional, fixed selection rules. Among them, the BIC appears to be the most successful in reducing MSE, and Cp in reducing bias. We also observe that, in most cases, selection rules without the restriction that the numbers of leads and lags be equal have an advantage over those with it.
Keywords: Cointegration, leads-and-lags regression, AIC, corrected AIC, BIC, Cp
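For concreteness, here is a minimal sketch of how a criterion-based choice of leads and lags could be scored in practice (Python with numpy; the dols_design helper, the bivariate setup, and the exact AICc and Cp forms are illustrative assumptions rather than the paper's own code):

```python
import numpy as np

def dols_design(y, x, p, q):
    """Design for the leads-and-lags (DOLS) regression:
    y_t on a constant, x_t, and Delta x_{t+j} for j = -p, ..., q."""
    dx = np.diff(x)                         # dx[t-1] = x_t - x_{t-1}
    T = len(y)
    t = np.arange(p + 1, T - q)             # sample where p lags and q leads exist
    cols = [np.ones(len(t)), x[t]]
    for j in range(-p, q + 1):
        cols.append(dx[t + j - 1])          # Delta x_{t+j}
    return y[t], np.column_stack(cols)

def selection_criteria(y, x, p, q, sigma2_full=None):
    """Conventional model selection criteria for a given (p, q) choice."""
    yy, X = dols_design(y, x, p, q)
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
    ssr = float(np.sum((yy - X @ beta) ** 2))
    aic = n * np.log(ssr / n) + 2 * k
    out = {
        "AIC": aic,
        "AICc": aic + 2 * k * (k + 1) / (n - k - 1),   # Hurvich-Tsai correction
        "BIC": n * np.log(ssr / n) + k * np.log(n),
    }
    if sigma2_full is not None:                        # Mallows' Cp needs an error
        out["Cp"] = ssr / sigma2_full - n + 2 * k      # variance from a large model
    return out
```

Scoring a grid of (p, q) pairs with these functions and picking the minimizer, without forcing p = q, mirrors the unrestricted selection rules that the abstract finds advantageous.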
Subsampling-Based Tests of Stock-Return Predictability
We develop subsampling-based tests of stock-return predictability and apply them to U.S. data. These tests allow for multiple predictor variables with local-to-unit roots; by contrast, previous methods that model the predictor variables as nearly integrated apply only to univariate predictive regressions. Simulation results demonstrate that our subsampling-based tests have desirable size and power properties. Using stock-market valuation ratios and the risk-free rate as predictors, our univariate tests show that the evidence of predictability is concentrated mainly in the 1926-1994 subperiod. In bivariate tests, we find support for predictability in the full 1926-2004 sample and in the 1952-2004 subperiod as well. For the 1952-2004 subperiod, we also consider a number of consumption-based variables as predictors of stock returns and find that they tend to perform better than the dividend-price ratio. Among the variables we consider, the predictive power of the consumption-wealth ratio proposed by Lettau and Ludvigson (2001a, 2001b) appears to be the most robust. Among variables based on habit persistence, Campbell and Cochrane's (1999) nonlinear specification tends to outperform a more traditional, linear specification.
Keywords: Subsampling, local-to-unit roots, predictive regression, stock-return predictability, consumption-based models
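As a rough, univariate illustration of the subsampling idea (not the paper's test, which accommodates multiple local-to-unit-root predictors), one could recompute the predictive-regression t-statistic on all overlapping blocks of length b and read off an empirical critical value; the block length, the two-sided form, and the function names below are assumptions:

```python
import numpy as np

def predictive_tstat(r, x):
    """OLS t-statistic on beta in r_{t+1} = alpha + beta * x_t + e_{t+1}."""
    y = r[1:]
    X = np.column_stack([np.ones(len(y)), x[:-1]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    s2 = resid @ resid / (n - k)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

def subsampling_critical_value(r, x, b, alpha=0.05):
    """Empirical (1 - alpha) quantile of |t| over all overlapping
    subsamples of length b, used in place of asymptotic critical values."""
    T = len(r)
    stats = [predictive_tstat(r[s:s + b], x[s:s + b]) for s in range(T - b + 1)]
    return np.quantile(np.abs(stats), 1 - alpha)

# Reject no-predictability of returns r by predictor x at level alpha when
# |predictive_tstat(r, x)| exceeds subsampling_critical_value(r, x, b).
```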
Smartphone dependence classification using tensor factorization
Excessive smartphone use causes personal and social problems. To address this issue, we sought to derive usage patterns directly correlated with smartphone dependence from usage data, and to classify smartphone dependence using a data-driven prediction algorithm. We developed a mobile application to collect smartphone usage data. A total of 41,683 logs from 48 smartphone users were collected from March 8, 2015, to January 8, 2016. The participants were classified into a control group (SUC) or an addiction group (SUD) using the Korean Smartphone Addiction Proneness Scale for Adults (S-Scale) and a face-to-face offline interview by a psychiatrist and a clinical psychologist (SUC = 23, SUD = 25). We derived usage patterns using tensor factorization and found the following six optimal usage patterns: 1) social networking services (SNS) during daytime, 2) web surfing, 3) SNS at night, 4) mobile shopping, 5) entertainment, and 6) gaming at night. The membership vectors of the six patterns yielded significantly better prediction performance than the raw data. For all patterns, the usage times of the SUD group were much longer than those of the SUC group. From these findings, we conclude that usage patterns and membership vectors are effective tools for assessing and predicting smartphone dependence and could provide an intervention guideline for treating smartphone dependence based on usage data.
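A minimal sketch of the pattern-extraction step is given below, assuming a user x app-category x hour-of-day usage tensor, the tensorly library's non-negative CP (PARAFAC) decomposition, and a logistic-regression classifier on the resulting membership vectors; the tensor shape, the rank-6 choice, and the random placeholder data are illustrative, since the study's own pipeline is not reproduced here:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac
from sklearn.linear_model import LogisticRegression

# Hypothetical usage tensor: 48 users x 10 app categories x 24 hours of day,
# entries = aggregated usage time (real data would come from the app logs).
usage = np.random.rand(48, 10, 24)
labels = np.random.randint(0, 2, size=48)   # placeholder SUC (0) / SUD (1) labels

# Rank-6 non-negative CP decomposition: one component per usage pattern.
weights, factors = non_negative_parafac(tl.tensor(usage), rank=6, init="random")
user_membership = factors[0]                # 48 x 6 membership vectors

# Classify dependence from the membership vectors rather than the raw counts.
clf = LogisticRegression(max_iter=1000).fit(user_membership, labels)
print(clf.score(user_membership, labels))   # in-sample fit on toy data
```

In this setup, each column of the user-mode factor plays the role of a membership vector for one usage pattern, which is what the abstract reports feeding into the dependence classifier.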
