Research

Working Papers

Abstract: This paper evaluates the predictive performance of various factor estimation methods in a big-data setting. Extensive forecasting experiments are conducted, using seven factor estimation methods combined with 13 decision rules that determine the number of factors. The out-of-sample forecasting results show, first, that the first Partial Least Squares factor (1-PLS) tends to be the best-performing method among all the alternatives considered. This finding holds across many target variables, forecasting horizons, and forecasting models. The improvement can be attributed to the PLS estimation strategy, which exploits the covariance between the predictors and the target variable. Second, using a consistently estimated number of factors does not necessarily improve forecasting performance: the greatest predictive gains often come from decision rules that do not consistently estimate the true number of factors.

Revise and resubmit (R&R) requested by the International Journal of Forecasting

Presented at: NBER-NSF Time Series Conference at Rice University, 2021; European Winter Meeting of the Econometric Society at the Barcelona School of Economics, 2021
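
For readers who want a concrete picture of the forecasting exercise above, the sketch below builds an h-step-ahead factor-augmented forecast from either principal-component factors or the first PLS factor (1-PLS) and estimates the forecasting equation by OLS. It is a minimal illustration with assumed details (standardized predictor panel, simple OLS forecasting equation, simulated data); the function and variable names are ours, not the paper's.

```python
# Minimal sketch (not the paper's exact design) of a factor-augmented forecast:
# y_{t+h} = alpha + beta' F_t + e_{t+h}, with factors F_t taken either from PCA
# on the predictor panel X (T x N) or as the first PLS factor (1-PLS).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression

def factor_augmented_forecast(X, y, h=1, method="1-PLS", n_factors=1):
    """Forecast y_{T+h} from factors extracted from the predictor panel X."""
    X = (X - X.mean(0)) / X.std(0)              # standardize predictors
    X_in, y_in = X[:-h], y[h:]                   # align y_{t+h} with X_t
    if method == "1-PLS":
        pls = PLSRegression(n_components=1).fit(X_in, y_in)
        F_in, F_last = pls.transform(X_in), pls.transform(X[-1:])
    else:                                        # principal-component factors
        pca = PCA(n_components=n_factors).fit(X_in)
        F_in, F_last = pca.transform(X_in), pca.transform(X[-1:])
    ols = LinearRegression().fit(F_in, y_in)     # forecasting equation by OLS
    return ols.predict(F_last)[0]

# Illustrative use with simulated data: 2 latent factors, 100 predictors.
rng = np.random.default_rng(0)
T, N = 200, 100
F = rng.standard_normal((T, 2))
X = F @ rng.standard_normal((2, N)) + rng.standard_normal((T, N))
y = F[:, 0] + 0.5 * rng.standard_normal(T)
print(factor_augmented_forecast(X, y, h=1, method="1-PLS"))
print(factor_augmented_forecast(X, y, h=1, method="PCA", n_factors=2))
```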

with Seung C. Ahn

Abstract: We consider Partial Least Squares (PLS) estimation of a time-series forecasting model in which the data contain a large number (T) of time-series observations on each of a large number (N) of predictor variables. In the model, a subset or the whole set of the latent common factors in the predictors determines a single target variable to be forecasted. The factors relevant for forecasting the target variable, which we refer to as PLS factors, can be generated sequentially by the Nonlinear Iterative Partial Least Squares (NIPALS) algorithm. Two main findings from our asymptotic analysis are as follows. First, the optimal number of PLS factors for forecasting can be much smaller than the number of common factors in the original predictor variables that are relevant for the target variable. Second, as more than the optimal number of PLS factors are used, the out-of-sample explanatory power of the factors for the target variable can decrease even while their in-sample power increases. Our Monte Carlo simulation results confirm these asymptotic results. In addition, the simulations indicate that unless very large samples are used, the out-of-sample forecasting power of the PLS factors is often higher when fewer than the asymptotically optimal number of factors are used. We find that the out-of-sample forecasting power of the PLS factors often decreases as the second, third, and subsequent factors are added, even when the asymptotically optimal number of factors is greater than one.

Presented at: Econometrics Seminar at Georgetown University, 2022; Econometrics Seminar at the University of York, 2022; Forecasting Seminar at George Washington University, 2022; North American Summer Meeting of the Econometric Society at the University of Miami, 2022; European Summer Meeting of the Econometric Society at Bocconi University, 2022; Asian Summer Meeting of the Econometric Society at Keio University and the University of Tokyo, 2022; NBER-NSF Time Series Conference at Boston University, 2022; SNDE Symposium for Young Researchers, 2022
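
The PLS factors discussed above are constructed sequentially; for a single target variable the NIPALS-style recursion reduces to the textbook PLS1 steps sketched below (form weights from the covariance with the target, compute the score, deflate the data, repeat). This is a generic illustration under those assumptions, not the paper's implementation.

```python
# Sketch of sequential PLS factor extraction for a univariate target,
# in the spirit of the NIPALS algorithm: with a single target variable
# the weight step needs no inner iteration (standard PLS1 recursion).
import numpy as np

def pls_factors(X, y, n_factors):
    """Return a T x n_factors matrix of PLS factors (scores) for target y."""
    X = X - X.mean(0)                       # center predictors
    y = y - y.mean()                        # center target
    scores = []
    for _ in range(n_factors):
        w = X.T @ y                         # weight: covariance with target
        w = w / np.linalg.norm(w)
        t = X @ w                           # PLS factor (score)
        p = X.T @ t / (t @ t)               # predictor loadings
        q = (y @ t) / (t @ t)               # target loading
        X = X - np.outer(t, p)              # deflate predictors
        y = y - q * t                       # deflate target
        scores.append(t)
    return np.column_stack(scores)
```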

with Seung C. Ahn

Abstract: In this paper, we develop a novel supervised factor estimation method called Single Component Analysis (SCA). We consider both a contemporaneous model and a forecasting model, each with a single dependent or target variable of interest. As the name implies, SCA produces a one-dimensional factor that asymptotically captures all of the factors governing the dependent or target variable. The SCA method incorporates the supervised aspect of Partial Least Squares (PLS), which maximizes the covariance with the dependent or target variable. However, unlike many supervised methods, the SCA factor does not suffer from overfitting. Consistency of the SCA factor under the two models is established as both sample sizes N and T increase to infinity. Simulation evidence demonstrates that SCA outperforms alternative methods and shows robust forecasting performance without overfitting. An empirical application to forecasting major macroeconomic and financial variables with big data also confirms the promising predictive performance of SCA.

Work in Progress


Measuring Macroeconomic Uncertainty with Various Factor-Augmented Forecasting Methods

Abstract: This paper investigates measuring macroeconomic uncertainty using factor-augmented forecasting. Uncertainty is measured as the conditional volatility of the component of various macroeconomic series that is not predicted by factor-augmented forecasting. Various factor-augmented forecasting methods are used to construct the uncertainty measure, in order to remove as much predictable variation as possible from the economic series; failing to do so overstates economic uncertainty, because forecastable variation is wrongly counted as part of uncertainty. Target-specific factor estimation methods, which incorporate information about the target variable when the factors are estimated, generate smaller forecasting errors and hence produce more accurate uncertainty measures. However, all of the uncertainty measures constructed from factor-augmented forecasting display similar properties: compared with other uncertainty proxies, they show more persistent uncertainty episodes that are more strongly correlated with real activity.
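
A stylized sketch of the construction described above: forecast each series one step ahead with a factor-augmented regression, keep the unpredicted component, and measure its conditional volatility. The EWMA volatility recursion and the simple cross-series average used here are illustrative stand-ins for whatever volatility model and aggregation the paper adopts.

```python
# Stylized sketch (assumed details, not the paper's exact procedure):
# series-level uncertainty = conditional volatility of the one-step-ahead
# factor-augmented forecast error; aggregate uncertainty averages over series.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def series_uncertainty(X, y, n_factors=3, lam=0.94):
    """EWMA conditional volatility of the unforecastable part of y."""
    F = PCA(n_components=n_factors).fit_transform((X - X.mean(0)) / X.std(0))
    ols = LinearRegression().fit(F[:-1], y[1:])       # regress y_{t+1} on F_t
    e = y[1:] - ols.predict(F[:-1])                   # unpredicted component
    var = np.empty_like(e)
    var[0] = e.var()
    for t in range(1, len(e)):                        # EWMA volatility recursion
        var[t] = lam * var[t - 1] + (1 - lam) * e[t - 1] ** 2
    return np.sqrt(var)

def macro_uncertainty(X, Y, n_factors=3):
    """Average the series-level volatilities across all columns of Y."""
    vols = [series_uncertainty(X, Y[:, j], n_factors) for j in range(Y.shape[1])]
    return np.mean(np.column_stack(vols), axis=1)
```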