UC Riverside Electronic Theses and Dissertations

Estimation and Forecasting in Time Series Models

Abstract

This dissertation covers several topics in estimation and forecasting in time series models. Chapter one studies estimation and the properties of feasible conditional forecasts in predictive regressions. It extends previous results on OLS estimation bias in the predictive regression model by allowing a possibly zero intercept and by letting the regressor follow either a stationary AR(1) process or a unit root process. The main thrust of the chapter is to develop an analytical bias-reduced estimator and to study its mean squared error (MSE) efficiency. We then investigate whether the estimation bias can lead to biased feasible forecasts conditional on the available sample observations, and we derive an expression for the mean squared forecast error (MSFE). The results shed light on the bias-reduced estimator for predictive regressions and its MSE properties in finite samples, as well as the efficiency of the optimal forecasts. We apply the analytical results to both simulated and financial data, predicting financial returns with variables such as the dividend yield and the short rate. The results show that the bias reduction works well in estimation even when the data are skewed and fat-tailed, and that the bias-reduced estimator improves out-of-sample forecasts. All of these results highlight the importance of bias reduction in estimation and forecasting.
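
As a hedged illustration of the mechanics described above, the following Python sketch simulates a predictive regression with a persistent AR(1) regressor and applies a first-order bias correction to the OLS slope, using Kendall's approximation $E[\hat\rho - \rho] \approx -(1+3\rho)/T$ for the autoregressive coefficient. The DGP settings, variable names, and this specific correction are illustrative assumptions, not the chapter's estimator.

    # Hedged sketch: Monte Carlo check of a first-order bias correction for the
    # predictive-regression slope, in the spirit of Stambaugh (1999). The DGP
    # settings and the specific correction are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    T, beta, rho = 100, 0.05, 0.95          # sample size, slope, AR(1) persistence
    cov = [[1.0, -0.9], [-0.9, 1.0]]        # corr(u, v) < 0, as with dividend yield

    def simulate():
        u, v = rng.multivariate_normal([0.0, 0.0], cov, size=T + 1).T
        x = np.zeros(T + 1)
        for t in range(1, T + 1):           # stationary AR(1) regressor
            x[t] = rho * x[t - 1] + v[t]
        y = beta * x[:-1] + u[1:]           # predictive regression, zero intercept
        return y, x

    def ols_and_corrected(y, x):
        X = np.column_stack([np.ones(T), x[:-1]])
        b = np.linalg.lstsq(X, y, rcond=None)[0]      # predictive regression
        r = np.linalg.lstsq(X, x[1:], rcond=None)[0]  # AR(1) for the regressor
        u_hat, v_hat = y - X @ b, x[1:] - X @ r
        phi = (u_hat @ v_hat) / (v_hat @ v_hat)       # sigma_uv / sigma_v^2
        # Kendall's E[rho_hat - rho] ~ -(1 + 3 rho)/T drives the slope bias
        return b[1], b[1] + phi * (1 + 3 * r[1]) / T

    est = np.array([ols_and_corrected(*simulate()) for _ in range(2000)])
    print("mean bias, OLS vs corrected:", est.mean(axis=0) - beta)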

Chapter two explores the finite-sample bias of the estimators of the first-order autoregressive moving average (ARMA(1,1)) model under a general error distribution. Since the quasi maximum likelihood estimator (QMLE) of the parameters of the ARMA(1,1) model can be biased in finite samples, this chapter derives the bias of the QMLE up to order $O(T^{-1})$ by applying a stochastic expansion and the associated bias formula, and it sheds light on bias correction for parameter estimation in applied work. The analytical bias expression suggests that the bias is robust to nonnormality, and the simulation results show that the bias-corrected QML estimator performs better even when the sample size increases to a moderate size.
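
To make the finite-sample bias concrete, the following small Monte Carlo sketch simulates an ARMA(1,1) with skewed (recentred chi-squared) errors and records the QMLE bias using statsmodels. The parameter values, error law, and replication count are illustrative assumptions, and the exercise checks the bias numerically rather than through the chapter's analytical $O(T^{-1})$ expression.

    # Hedged sketch: Monte Carlo measurement of the finite-sample QMLE bias in
    # an ARMA(1,1) with skewed (recentred chi-squared) errors. Parameter
    # values, error law, and replication count are illustrative assumptions.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(1)
    phi, theta, T, R = 0.5, 0.3, 50, 200    # AR, MA, sample size, replications

    def simulate_arma(burn=200):
        e = rng.chisquare(3, T + burn) - 3.0  # skewed, mean-zero errors
        y = np.zeros(T + burn)
        for t in range(1, T + burn):
            y[t] = phi * y[t - 1] + e[t] + theta * e[t - 1]
        return y[burn:]

    est = []
    for _ in range(R):
        res = ARIMA(simulate_arma(), order=(1, 0, 1), trend="n").fit()
        est.append(res.params[:2])          # [ar.L1, ma.L1]
    print("QMLE bias in (phi, theta):",
          np.mean(est, axis=0) - np.array([phi, theta]))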

Chapter three (joint with Yong Bao) examines estimation bias and feasible conditional forecasts from the first-order moving average model. We develop the second-order analytical bias of the QMLE and investigate whether this estimation bias can lead to biased feasible optimal forecasts conditional on the available sample observations. We find that the feasible multiple-step-ahead forecasts are unbiased under any nonnormal distribution, and that the one-step-ahead forecast is unbiased under symmetric distributions.
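
The feasible forecasts in question can be written down directly. The sketch below, a minimal illustration under assumed parameter values, estimates an MA(1) by Gaussian QML with statsmodels, recovers the innovations recursively (conditioning on a zero pre-sample error), and forms the feasible one-step forecast $\hat\theta \hat\varepsilon_T$; for horizons $h \ge 2$ the forecast is simply the unconditional mean, zero here.

    # Hedged sketch: feasible forecasts from an estimated MA(1),
    # y_t = e_t + theta * e_{t-1}, under assumed parameter values. The
    # innovations are recovered recursively, conditioning on e_0 = 0.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(2)
    theta, T = 0.4, 200
    e = rng.standard_normal(T + 1)
    y = e[1:] + theta * e[:-1]              # zero-mean MA(1) sample

    theta_hat = ARIMA(y, order=(0, 0, 1), trend="n").fit().params[0]

    e_hat = np.zeros(T)                     # recursive residuals given e_0 = 0
    e_hat[0] = y[0]
    for t in range(1, T):
        e_hat[t] = y[t] - theta_hat * e_hat[t - 1]

    print("feasible one-step forecast:", theta_hat * e_hat[-1])
    print("feasible forecasts for h >= 2: 0 (the unconditional mean)")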

Chapter four (joint with Tae-Hwy Lee and Zhou Xi) discusses using extreme learning machines for out-of-sample prediction. In this chapter, we apply the artificial neural network (ANN) model to out-of-sample prediction of financial returns using a set of covariates. The main challenge in estimating the ANN model is the multicollinearity among the large number of randomly generated hidden-layer regressors. We explore several methods for dealing with the high-dimensional regressors, such as the generalized inverse, ridge regression, pretest estimation, and principal components; these are also known as extreme learning machines (ELM). We find that although the ELM methods sometimes fit the in-sample data perfectly, they can have very poor out-of-sample forecasting ability. We then introduce some modifications to the ELM method in a two-step algorithm: the first step uses modified ELM methods to obtain a set of forecasts, and the second step combines the forecasts using a principal component weighting scheme. Empirical results show that our method gives the best forecast of the annually aggregated equity premium among all the alternatives.
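
The first-step estimators are easy to sketch. The Python fragment below builds a minimal ELM, with random, untrained hidden-layer weights and a ridge second stage to handle the collinear hidden regressors; the hidden width, activation, penalty, and simulated data are all illustrative assumptions, and the chapter's second-step principal component combination of forecasts is not reproduced here.

    # Hedged sketch: a minimal extreme learning machine with a ridge second
    # stage. Hidden width, activation, penalty, and the simulated data are
    # illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(3)
    n, p, hidden, lam = 120, 8, 200, 1.0

    X = rng.standard_normal((n, p))
    y = X[:, 0] - 0.5 * X[:, 1] + 0.3 * rng.standard_normal(n)

    W = rng.standard_normal((p, hidden))    # random input weights, never trained
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)                  # hidden > n: H'H is singular, so plain
                                            # least squares fits perfectly in sample
    beta_hat = np.linalg.solve(H.T @ H + lam * np.eye(hidden), H.T @ y)

    X_new = rng.standard_normal((10, p))    # out-of-sample forecasts
    y_hat = np.tanh(X_new @ W + b) @ beta_hat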

Chapter five (joint with Tae-Hwy Lee) considers Mallows model averaging in the presence of multicollinearity. A challenge with large-dimensional data in regression is the collinearity among the covariates. A common solution is to apply principal component analysis (PCA), yet one then needs to select the number of principal components. Many studies have focused on finding the optimal number of principal components under the assumption that the linear factor model is correctly specified. In this chapter, we do not assume that the data generating process (DGP) is a linear factor model, so there is no true number of factors. In this setting, we combine several principal component regressions with different numbers of principal components through the Mallows criterion. Under certain conditions, the model averaging estimator is minimax, so that its estimation risk is smaller. We show that the Mallows model averaging estimator can improve estimation efficiency.
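
A hedged sketch of the averaging step, in the spirit of Hansen's (2007) Mallows model averaging: fit nested principal component regressions with $1,\dots,K$ components, then choose simplex weights minimizing the Mallows criterion $C(w)=\lVert y-\hat Y w\rVert^2 + 2\hat\sigma^2 k'w$, where the columns of $\hat Y$ are the candidate fitted values and $k$ counts parameters. The DGP, the value of $K$, and the use of SLSQP for the constrained quadratic program are illustrative choices.

    # Hedged sketch: Mallows model averaging over nested principal-component
    # regressions (in the spirit of Hansen, 2007). The DGP, K, and the SLSQP
    # solver for the simplex-constrained problem are illustrative choices.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    n, p, K = 100, 20, 8
    X = rng.standard_normal((n, p)) @ rng.standard_normal((p, p))  # collinear
    y = X @ (0.3 / (1.0 + np.arange(p))) + rng.standard_normal(n)

    Xc = X - X.mean(axis=0)                 # principal components of X
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    F = Xc @ Vt.T

    def pc_fit(k):                          # fitted values, intercept + k PCs
        Z = np.column_stack([np.ones(n), F[:, :k]])
        return Z @ np.linalg.lstsq(Z, y, rcond=None)[0]

    fits = np.column_stack([pc_fit(k) for k in range(1, K + 1)])
    ks = np.arange(1, K + 1) + 1.0          # parameters per candidate model
    sigma2 = np.sum((y - fits[:, -1]) ** 2) / (n - ks[-1])  # largest model

    def mallows(w):                         # C(w) = ||y - fits @ w||^2 + 2 s2 k'w
        r = y - fits @ w
        return r @ r + 2.0 * sigma2 * ks @ w

    res = minimize(mallows, np.full(K, 1.0 / K), method="SLSQP",
                   bounds=[(0.0, 1.0)] * K,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
    print("Mallows weights:", np.round(res.x, 3))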
