Optimizing Policymakers’ Loss Functions in Crisis Prediction: Before, Within or After?
Early-warning models most commonly optimize signaling thresholds on crisis probabilities. This ex-post threshold optimization is based on a loss function accounting for preferences between forecast errors, but it comes with two crucial drawbacks: unstable thresholds in recursive estimations and an in-sample overfit at the expense of out-of-sample performance. We propose two alternatives for threshold setting: (i) including preferences in the estimation itself and (ii) setting thresholds ex-ante according to preferences only. Given probabilistic model output, it is intuitive that a decision rule should be independent of the data or model specification, as a threshold on probabilities represents the willingness to issue a false alarm vis-à-vis missing a crisis. We provide simulated and real-world evidence that this simplification yields stable thresholds and improves out-of-sample performance. Our solution is not restricted to binary-choice models but transfers directly to the signaling approach and all probabilistic early-warning models.
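The preference-based decision rule described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: assuming a preference parameter `mu` that weights missed crises against false alarms, warning is optimal whenever `mu * p` exceeds `(1 - mu) * (1 - p)`, which fixes the ex-ante threshold at `1 - mu` independently of the data or model.

```python
import numpy as np

def policymaker_loss(y_true, y_prob, tau, mu=0.5):
    """Loss weighting missed crises (weight mu) against false alarms
    (weight 1 - mu), given a signaling threshold tau on probabilities."""
    warn = y_prob >= tau
    crises = y_true == 1
    t1 = np.mean(~warn[crises]) if crises.any() else 0.0    # share of missed crises
    t2 = np.mean(warn[~crises]) if (~crises).any() else 0.0  # share of false alarms
    return mu * t1 + (1 - mu) * t2

def ex_ante_threshold(mu):
    """Preference-only threshold: warn when mu*p > (1-mu)*(1-p), i.e. p >= 1 - mu."""
    return 1.0 - mu
```

With balanced preferences (`mu = 0.5`) the ex-ante rule reduces to the familiar 50% cutoff; a policymaker more averse to missing crises (`mu > 0.5`) warns at a lower probability.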
Should Forecasters Use Real‐time Data to Evaluate Leading Indicator Models for GDP Prediction? German Evidence
German Economic Review
In this paper, we investigate whether differences exist among forecasts using real‐time or latest‐available data to predict gross domestic product (GDP). We employ mixed‐frequency models and real‐time data to reassess the role of surveys and financial data relative to industrial production and orders in Germany. Although we find evidence that forecast characteristics based on real‐time and final data releases differ, we also observe minimal impacts on the relative forecasting performance of indicator models. However, when obtaining the optimal combination of soft and hard data, the use of final release data may understate the role of survey information.
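The real-time versus final-release distinction in forecast evaluation can be illustrated with a minimal sketch (hypothetical data and function names, not the paper's mixed-frequency setup): the same forecast series is scored once against first-release GDP figures and once against the latest revised figures.

```python
import numpy as np

def rmse(forecast, actual):
    """Root mean squared forecast error."""
    return float(np.sqrt(np.mean((np.asarray(forecast) - np.asarray(actual)) ** 2)))

def evaluate_against_vintages(forecast, first_release, latest_release):
    """Score the same forecasts against real-time (first-release) and
    final (latest-available) outcome data; revisions can flip the ranking
    of competing indicator models."""
    return {"real_time": rmse(forecast, first_release),
            "final": rmse(forecast, latest_release)}
```

If revisions are small, both scores largely agree, which is consistent with the paper's finding of minimal impact on relative forecasting performance.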
An Evaluation of Early Warning Models for Systemic Banking Crises: Does Machine Learning Improve Predictions?
IWH Discussion Papers
This paper compares the out-of-sample predictive performance of different early warning models for systemic banking crises using a sample of advanced economies covering the past 45 years. We compare a benchmark logit approach to several machine learning approaches recently proposed in the literature. We find that while machine learning methods often attain a very high in-sample fit, they are outperformed by the logit approach in recursive out-of-sample evaluations. This result is robust to the choice of performance measure, crisis definition, preference parameter, and sample length, as well as to using different sets of variables and data transformations. Thus, our paper suggests that further enhancements to machine learning early warning models are needed before they can offer substantial value-added for predicting systemic banking crises. Conventional logit models appear to use the available information already fairly efficiently, and would, for instance, have been able to predict the 2007/2008 financial crisis out-of-sample for many countries. In line with economic intuition, these models identify credit expansions, asset price booms and external imbalances as key predictors of systemic banking crises.
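The recursive out-of-sample evaluation underlying the comparison can be illustrated with a minimal expanding-window sketch. All names are hypothetical, and a plain gradient-ascent logit stands in for the paper's maximum-likelihood estimator: at each period, the model is refit on all data up to that point and used to predict the next observation.

```python
import numpy as np

def fit_logit(X, y, lr=0.5, iters=500):
    """Logistic regression by gradient ascent on the log-likelihood
    (illustrative substitute for a maximum-likelihood routine)."""
    X1 = np.column_stack([np.ones(len(X)), X])  # add intercept column
    b = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ b))
        b += lr * X1.T @ (y - p) / len(y)       # average gradient step
    return b

def recursive_oos_probs(X, y, start):
    """Expanding window: refit on data up to t, predict observation t."""
    probs = []
    for t in range(start, len(y)):
        b = fit_logit(X[:t], y[:t])
        x1 = np.concatenate([[1.0], X[t]])
        probs.append(1.0 / (1.0 + np.exp(-x1 @ b)))
    return np.array(probs)
```

Because every prediction uses only information available at the time, this design exposes the in-sample overfit of flexible methods that a single full-sample fit would hide.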