The Granger Centre for Time Series Econometrics
GC 17/03: Forecast evaluation tests and negative long-run variance estimates in small samples


Summary

Given the critical role of forecasting in economic research and policy-making, the evaluation of competing forecasts of the same economic variable has become a prominent field in the econometric and empirical economics literatures. Within this field, the most common forecast evaluation exercise is a comparison of the accuracy of two or more sets of forecasts on the basis of some measure of loss associated with the forecast errors, such as the mean squared forecast error. In a key contribution to the literature, Diebold and Mariano (1995) [DM] proposed an approach for testing equal forecast accuracy, using a test statistic that is essentially the sample mean of the difference between the two forecasts' loss series, standardised by an estimate of its long-run variance. Previous studies of the finite sample size and power properties of the DM test have considered forecast horizons of only one or two steps ahead, using relatively large numbers of out-of-sample forecast observations. In practical applications, however, often owing to the limitations of economic data, it is not uncommon for forecast evaluation to be conducted using smaller sample sizes and longer forecast horizons than have previously been studied.
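The DM statistic described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: the function name is hypothetical, squared-error loss is assumed, and the long-run variance is estimated with the standard rectangular-kernel (uniform-weight) truncation lag of h − 1 commonly used for h-step-ahead forecasts.

```python
import numpy as np

def dm_statistic(e1, e2, h):
    """Diebold-Mariano statistic for equal forecast accuracy under
    squared-error loss, for h-step-ahead forecasts.

    e1, e2 : forecast error series from the two competing forecasts.
    Uses the rectangular-kernel long-run variance estimate with
    truncation lag h - 1 (uniform weights).
    """
    d = e1**2 - e2**2          # loss differential series
    n = len(d)
    dbar = d.mean()
    dc = d - dbar
    # Long-run variance: gamma_0 + 2 * sum_{k=1}^{h-1} gamma_k
    lrv = dc @ dc / n
    for k in range(1, h):
        gamma_k = dc[k:] @ dc[:-k] / n
        lrv += 2.0 * gamma_k
    # For h > 1 this estimate can be negative in small samples,
    # in which case the statistic is undefined.
    return dbar / np.sqrt(lrv / n) if lrv > 0 else np.nan

rng = np.random.default_rng(0)
e1 = rng.standard_normal(40)
e2 = 1.5 * rng.standard_normal(40)
print(dm_statistic(e1, e2, 1))  # one-step case: variance is nonnegative
```

For h = 1 the estimate reduces to the sample variance of the loss differential and cannot be negative; the problem studied in the paper arises from the higher-order autocovariance terms that enter when h > 1.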

In this Granger Centre discussion paper, David Harvey, Stephen Leybourne and Emily Whitehouse show that when computing standard DM-type tests for equal forecast accuracy, the long-run variance estimate can frequently be negative when dealing with multi-step-ahead predictions in small, but empirically relevant, sample sizes. They then consider a number of alternative approaches to dealing with this problem, including direct inference in the problem cases and the use of long-run variance estimators that guarantee positivity. The finite sample size and power of the different approaches are evaluated using extensive Monte Carlo simulation exercises. Overall, for multi-step-ahead forecasts, they find that the recently proposed Coroneo and Iacone (2016) test, which is based on a weighted periodogram long-run variance estimator, offers the best finite sample size and power performance.
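To see why a periodogram-based estimator guarantees positivity, consider the following sketch. It is my own illustration, not the paper's exact procedure: Coroneo and Iacone (2016) may use different weights or bandwidth choices, and the function name and the equal-weight averaging over the first m Fourier frequencies are assumptions here. Because each periodogram ordinate is a squared modulus, any nonnegative weighted average of them is nonnegative by construction.

```python
import numpy as np

def periodogram_lrv(d, m):
    """Long-run variance of the series d estimated by averaging the
    first m periodogram ordinates at the Fourier frequencies
    (a Daniell-type weighted periodogram estimate).

    Each ordinate is |.|^2 / (2*pi*n) >= 0, so the estimate is
    nonnegative by construction, unlike truncated-kernel estimates.
    """
    n = len(d)
    dc = d - d.mean()
    t = np.arange(1, n + 1)
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    # Periodogram ordinate I(lambda_j) = |sum_t dc_t exp(-i lambda_j t)|^2 / (2 pi n)
    ordinates = np.array(
        [np.abs(np.sum(dc * np.exp(-1j * lam * t)))**2 / (2 * np.pi * n)
         for lam in freqs]
    )
    # Long-run variance = 2 pi f(0); estimate f(0) by the ordinate average
    return 2 * np.pi * ordinates.mean()
```

Applied to a loss differential series d, this estimator can replace the truncated-kernel estimate in the DM-type statistic, sidestepping the negativity problem entirely.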

Download the paper in PDF format

Authors

David Harvey, Stephen Leybourne and Emily Whitehouse

 


 

Posted on Thursday 15th June 2017


School of Economics
University of Nottingham
University Park
Nottingham, NG7 2RD


+44 (0)115 951 5481
dave.harvey@nottingham.ac.uk