Issues Concerning the Approximation Underlying the Spectral Representation Theorem
In many important textbooks the formal statement of the Spectral Representation Theorem is followed by a process version, usually informal, stating that any stationary stochastic process g is the limit in quadratic mean of a sequence of processes, each consisting of a finite sum of harmonic oscillations with stochastic weights. The natural issues, whether the approximation error is stationary, or whether at least it converges to zero uniformly in t, have not been explicitly addressed in the literature. The paper shows that in all relevant cases, for T unbounded, the process convergence is not uniform in t. Equivalently, when T is unbounded the number of harmonic oscillations necessary to approximate a stationary stochastic process with a preassigned accuracy depends on t. The conclusion is that the process version of the Spectral Representation Theorem should explicitly mention that in general the approximation of a stationary stochastic process by a finite sum of harmonic oscillations, given the accuracy, is valid for t belonging to a bounded subset of the real axis (of the set of integers in the discrete-parameter case).
Keywords: stochastic processes, stationarity, spectral analysis.
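The finite sum of harmonic oscillations with stochastic weights described in this abstract can be illustrated with a short numerical sketch. All choices below (the frequency grid, Gaussian weights, the normalization) are arbitrary illustrations, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def harmonic_sum(t, n_terms, rng):
    """Finite sum of harmonic oscillations with random weights:
    X(t) = sum_k [A_k cos(lam_k t) + B_k sin(lam_k t)],
    a standard finite approximation to the spectral representation
    of a stationary process (toy version, illustrative only)."""
    lam = np.linspace(0.1, np.pi, n_terms)             # hypothetical frequency grid
    A = rng.normal(0, 1 / np.sqrt(n_terms), n_terms)   # stochastic weights
    B = rng.normal(0, 1 / np.sqrt(n_terms), n_terms)
    t = np.asarray(t, dtype=float)
    # outer(t, lam) has shape (len(t), n_terms); sum over frequencies
    return (A * np.cos(np.outer(t, lam)) + B * np.sin(np.outer(t, lam))).sum(axis=1)

t = np.arange(50)
x = harmonic_sum(t, 200, rng)
print(x.shape)  # (50,)
```

With this normalization each X(t) has unit variance by construction; the paper's point is about how many terms are needed for a given accuracy as t ranges over an unbounded set, which a fixed finite sum like this cannot deliver uniformly.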
A Dynamic Factor Analysis of the Response of U.S. Interest Rates to News
This paper uses a dynamic factor model recently studied by Forni, Hallin, Lippi and Reichlin (2000) to analyze the response of 21 U.S. interest rates to news. Using daily data, we find that the news that affects interest rates daily can be summarized by two common factors. This finding is robust to both the sample period and time aggregation. Each rate has an important idiosyncratic component; however, the relative importance of the idiosyncratic component declines as the frequency of the observations is reduced, and nearly vanishes when rates are observed at the monthly frequency. Using an identification scheme that allows for the fact that when policy actions are unknown to the market the funds rate should respond first to policy actions, we are unable to identify a unique effect of monetary policy in the funds rate at the daily frequency.
Factor models in high-dimensional time series
High-dimensional time series may well be the most common type of dataset in the so-called "big data" revolution, and have entered current practice in many areas, including meteorology, genomics, chemometrics, connectomics, complex physics simulations, biological and environmental research, finance and econometrics. The analysis of such datasets poses significant challenges, both from a statistical and from a numerical point of view. The most successful procedures so far have been based on dimension reduction techniques and, more particularly, on high-dimensional factor models. Those models have been developed, essentially, within time series econometrics, and deserve to be better known in other areas. In this paper, we provide an original time-domain presentation of the methodological foundations of those models (dynamic factor models usually are described via a spectral approach), contrasting such concepts as commonality and idiosyncrasy, factors and common shocks, dynamic and static principal components. That time-domain approach emphasizes the fact that, contrary to the static factor models favored by practitioners, the so-called general dynamic factor model essentially does not impose any constraints on the data-generating process, but follows from a general representation result.
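The static principal-components approach contrasted in this abstract can be sketched on a simulated panel. The panel sizes, loadings, and noise level below are arbitrary choices for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, r = 100, 500, 2                 # cross-section size, sample length, number of factors

F = rng.normal(size=(T, r))           # common factors
L = rng.normal(size=(n, r))           # factor loadings
E = 0.5 * rng.normal(size=(T, n))     # idiosyncratic components
X = F @ L.T + E                       # observed panel: common + idiosyncratic

# Static principal components: eigenvectors of the sample covariance matrix
S = np.cov(X, rowvar=False)           # n x n covariance across series
eigval, eigvec = np.linalg.eigh(S)    # eigenvalues in ascending order
Fhat = X @ eigvec[:, -r:]             # top-r principal components span the factor space

# Share of total variance captured by the r largest eigenvalues
print(eigval[-r:].sum() / eigval.sum())
```

In a factor structure like this, the r largest eigenvalues dominate as n grows, which is what makes principal components a natural estimator of the common component; the general dynamic factor model of the abstract replaces these static covariances with dynamic (lagged) information.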
New Eurocoin: Tracking Economic Growth in Real Time
This paper presents the ideas and methods underlying the construction of an indicator that tracks euro-area GDP growth but, unlike GDP growth, (i) is updated monthly and almost in real time; (ii) is free from short-run dynamics. Removal of short-run dynamics from a time series, to isolate the medium-long-run component, can be obtained by a band-pass filter. However, it is well known that band-pass filters, being two-sided, perform very poorly at the end of the sample. New Eurocoin is an estimator of the medium-long-run component of GDP growth that only uses contemporaneous values of a large panel of macroeconomic time series, so that no end-of-sample deterioration occurs. Moreover, as our dataset is monthly, New Eurocoin can be updated each month and with a very short delay. Our method is based on generalized principal components that are designed to use leading variables in the dataset as proxies for future values of GDP growth. As the medium-long-run component of GDP is observable, although with delay, the performance of New Eurocoin at the end of the sample can be measured.
Keywords: coincident indicator, band-pass filter, large-dataset factor models, generalized principal components.
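The end-of-sample problem of two-sided filters mentioned in this abstract can be seen with a toy symmetric moving average, a crude stand-in for a genuine band-pass filter and not the Eurocoin methodology:

```python
import numpy as np

def two_sided_ma(x, k):
    """Symmetric (two-sided) moving average with window 2k+1.
    It is undefined at the first and last k observations: the
    end-of-sample problem that one-sided estimators avoid."""
    x = np.asarray(x, dtype=float)
    out = np.full(x.shape, np.nan)
    for t in range(k, len(x) - k):
        out[t] = x[t - k: t + k + 1].mean()
    return out

x = np.arange(20.0)
y = two_sided_ma(x, 3)
print(np.isnan(y[:3]).all(), np.isnan(y[-3:]).all())  # True True
```

The last k points are exactly where a real-time indicator is needed most, which is why New Eurocoin is built from contemporaneous values only.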
Opening the black box: structural factor models with large cross-sections
This paper shows how large-dimensional dynamic factor models are suitable for structural analysis. We establish sufficient conditions for identification of the structural shocks and the associated impulse response functions. In particular, we argue that, if the data follow an approximate factor structure, the "problem of fundamentalness", which is intractable in structural VARs, can be solved provided that the impulse responses are sufficiently heterogeneous. Finally, we propose a consistent method (with n, T rates of convergence) to estimate the impulse response functions, as well as a bootstrapping procedure for statistical inference.
JEL Classification: E0, C1. Keywords: dynamic factor models, fundamentalness, identification, structural VARs.
The Generalized Dynamic Factor Model. One-Sided Estimation and Forecasting
This paper proposes a new forecasting method that exploits information from a large panel of time series. The method is based on the generalized dynamic factor model proposed in Forni, Hallin, Lippi, and Reichlin (2000), and takes advantage of the information on the dynamic covariance structure of the whole panel. We first use our previous method to obtain an estimate of the covariance matrices of the common and idiosyncratic components. The generalized eigenvectors of this pair of matrices are then used to derive a consistent estimate of the optimal forecast. This two-step approach solves the end-of-sample problems caused by two-sided filtering (as in our previous work), while retaining the advantages of an estimator based on dynamic information. The relative merits of our method and the one proposed by Stock and Watson (2002) are discussed.
Keywords: dynamic factor models, principal components, time series, large cross-sections, panel data, forecasting.
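The second step described in this abstract rests on the generalized eigenvectors of a pair of covariance matrices. A minimal sketch on toy matrices follows; the matrices below are arbitrary stand-ins, not estimates produced by the paper's first step:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n = 10

# Toy stand-ins for estimated common and idiosyncratic covariance matrices
B = rng.normal(size=(n, 3))
common = B @ B.T + 1e-6 * np.eye(n)        # low-rank "common" covariance (regularized)
idio = np.diag(rng.uniform(0.5, 1.5, n))   # diagonal "idiosyncratic" covariance

# Generalized eigenproblem: common @ v = mu * idio @ v
# scipy.linalg.eigh(a, b) returns eigenvalues in ascending order
mu, V = eigh(common, idio)
print(mu[-1] > mu[0])  # True
```

The eigenvectors associated with the largest generalized eigenvalues weight the series by their common-to-idiosyncratic variance ratio, which is the sense in which this generalizes ordinary principal components.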
