News

Parameter Estimation with Out-of-Sample Objective

Tuesday | 2012-06-19
B103

Elena-Ivona DUMITRESCU – Peter Reinhard HANSEN

We discuss parameter estimation in a situation where the objective is good out-of-sample performance. A discrepancy between the out-of-sample objective and the criterion used for in-sample estimation can seriously degrade performance. Using the same criterion for estimation and evaluation typically ensures that the estimator is consistent for the ideal parameter value; however, this approach need not be optimal. In this paper, we show that the optimal out-of-sample performance is achieved through maximum likelihood estimation (MLE), and that MLE can be vastly better than criterion-based estimation (CBE). This theoretical result is analogous to the well-known Cramér-Rao bound for in-sample estimation. A drawback of MLE is that it suffers from misspecification in two ways. First, the MLE (now a quasi-MLE) is inefficient under misspecification. Second, the MLE approach involves a transformation of likelihood parameters to criterion parameters that depends on the truth, so misspecification can result in inconsistent estimation, causing MLE to be inferior to CBE. We illustrate the theoretical result in a context with an asymmetric (linex) loss function, where CBE performs on par with MLE when the loss is close to symmetric, while MLE clearly dominates CBE when the loss is asymmetric. We also illustrate the theoretical result in an application to long-horizon forecasting.
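
To make the linex comparison concrete, below is a minimal Monte Carlo sketch in Python. It is not taken from the paper: the Gaussian design, sample sizes, and asymmetry parameter a are illustrative assumptions. It contrasts the MLE plug-in forecast, obtained by mapping the Gaussian likelihood parameters (mu, sigma^2) to the linex-optimal forecast mu + a*sigma^2/2, with the criterion-based estimator that minimizes in-sample linex loss directly; under correct specification the MLE plug-in should attain a lower expected out-of-sample loss, consistent with the theoretical result above.

# Monte Carlo sketch: MLE plug-in vs. criterion-based estimation (CBE)
# under linex loss, assuming a correctly specified Gaussian model.
# The design below is illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
a = 1.5              # linex asymmetry parameter (a -> 0 approaches squared loss)
mu, sigma = 0.0, 1.0  # true Gaussian parameters
n, n_mc = 50, 5000   # in-sample size and Monte Carlo replications

def linex(e, a):
    # Linex loss L(e) = exp(a*e) - a*e - 1 on the forecast error e = y - f.
    return np.exp(a * e) - a * e - 1.0

# Under Y ~ N(mu, sigma^2), minimizing E[linex(Y - f)] over f gives the
# linex-optimal point forecast f* = mu + a * sigma^2 / 2.
f_star = mu + a * sigma**2 / 2

loss_mle, loss_cbe = [], []
for _ in range(n_mc):
    y = rng.normal(mu, sigma, n)
    # MLE plug-in: estimate (mu, sigma^2) by Gaussian MLE, then map the
    # likelihood parameters to the criterion parameter (the forecast).
    f_mle = y.mean() + a * y.var() / 2
    # CBE: the forecast minimizing in-sample average linex loss; the
    # first-order condition gives the closed form below.
    f_cbe = np.log(np.mean(np.exp(a * y))) / a
    # Out-of-sample objective: expected linex loss at the estimated
    # forecast, evaluated analytically under the Gaussian truth.
    for f, store in ((f_mle, loss_mle), (f_cbe, loss_cbe)):
        e_mean = mu - f
        store.append(np.exp(a * e_mean + a**2 * sigma**2 / 2) - a * e_mean - 1.0)

print(f"optimal forecast f* = {f_star:.3f}")
print(f"mean OOS linex loss, MLE plug-in: {np.mean(loss_mle):.4f}")
print(f"mean OOS linex loss, CBE:         {np.mean(loss_cbe):.4f}")

As a design note, the gap between the two average losses shrinks as a approaches zero (the loss becomes nearly symmetric and both estimators approach the sample mean), mirroring the abstract's observation that CBE is on par with MLE near symmetry but dominated when the loss is strongly asymmetric.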