Optimizing Policymakers' Loss Functions in Crisis Prediction: Before, Within or After?
Peter Sarlin, Gregor von Schweinitz
Abstract
Early-warning models most commonly optimize signaling thresholds on crisis probabilities. The ex-post threshold optimization is based on a loss function accounting for preferences between forecast errors, but comes with two crucial drawbacks: unstable thresholds in recursive estimations and in-sample overfitting at the expense of out-of-sample performance. We propose two alternatives for threshold setting: (i) including preferences in the estimation itself and (ii) setting thresholds ex-ante according to preferences only. Given probabilistic model output, it is intuitive that a decision rule is independent of the data or model specification, as thresholds on probabilities represent a willingness to issue a false alarm vis-à-vis missing a crisis. We provide simulated and real-world evidence that this simplification results in stable thresholds and improves out-of-sample performance. Our solution is not restricted to binary-choice models, but directly transferable to the signaling approach and all probabilistic early-warning models.
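The intuition behind the ex-ante rule can be made precise with a stylized per-observation loss; the preference parameter μ and the loss below are illustrative and need not coincide with the paper's exact specification.

```latex
% Stylized per-observation loss: missing a crisis costs \mu, a false alarm costs 1-\mu.
% Given a model-implied crisis probability p, the expected losses of the two actions are
\[
  \mathbb{E}[L \mid \text{alarm}] = (1-\mu)(1-p),
  \qquad
  \mathbb{E}[L \mid \text{no alarm}] = \mu\, p .
\]
% An alarm is optimal whenever (1-\mu)(1-p) < \mu p, i.e. whenever
\[
  p > 1-\mu \equiv \lambda^{*},
\]
% so the ex-ante threshold \lambda^{*} depends on the preference parameter only,
% not on the data or the model specification.
```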
Read article
Should Forecasters Use Real-time Data to Evaluate Leading Indicator Models for GDP Prediction? German Evidence
Katja Heinisch, Rolf Scheufele
Abstract
In this paper we investigate whether forecasts of gross domestic product (GDP) differ when they are based on real-time rather than latest-available data. We employ mixed-frequency models and real-time data to reassess the role of survey data relative to industrial production and orders in Germany. Although we find evidence that forecast characteristics based on real-time and final data releases differ, the impact on the relative forecasting performance of indicator models is minimal. However, when determining the optimal combination of soft and hard data, the use of final-release data may understate the role of survey information.
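The evaluation issue can be made concrete with a small sketch: the same forecasts are scored once against real-time first releases and once against the latest-available vintage (the arrays below are hypothetical, not the paper's data).

```python
import numpy as np

# Hypothetical inputs (not the paper's data):
# forecasts       - GDP growth forecasts from an indicator model
# first_release   - real-time first-release outturns
# latest_vintage  - latest-available (revised) outturns
forecasts = np.array([0.4, 0.6, -0.2, 0.5, 0.3])
first_release = np.array([0.3, 0.8, -0.5, 0.4, 0.2])
latest_vintage = np.array([0.5, 0.7, -0.7, 0.6, 0.1])

def rmse(errors):
    """Root mean squared forecast error."""
    return np.sqrt(np.mean(errors ** 2))

# The same forecasts can look better or worse depending on which
# data vintage serves as the "truth" in the evaluation.
print("RMSE vs. first release: ", rmse(forecasts - first_release))
print("RMSE vs. latest vintage:", rmse(forecasts - latest_vintage))
```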
Read article
Does the Technological Content of Government Demand Matter for Private R&D? Evidence from US States
Viktor Slavtchev, S. Wiederhold
American Economic Journal: Macroeconomics,
No. 2,
2016
Abstract
Governments purchase everything from airplanes to zucchini. This paper investigates the role of the technological content of government procurement in innovation. In a theoretical model, we first show that a shift in the composition of public purchases toward high-tech products translates into higher economy-wide returns to innovation, leading to an increase in the aggregate level of private R&D. Using unique data on federal procurement in US states and performing panel fixed-effects estimations, we find support for the model's prediction of a positive R&D effect of the technological content of government procurement. Instrumental-variable estimations suggest a causal interpretation of our findings.
Read article
Predicting Financial Crises: The (Statistical) Significance of the Signals Approach
Makram El-Shagi, Tobias Knedlik, Gregor von Schweinitz
Journal of International Money and Finance,
No. 35,
2013
Abstract
The signals approach as an early-warning system has been fairly successful in detecting crises, but it has so far failed to gain popularity in the scientific community because it cannot distinguish between randomly achieved in-sample fit and true predictive power. To overcome this obstacle, we test the null hypothesis of no correlation between indicators and crisis probability in three applications of the signals approach to different crisis types. To that end, we propose bootstraps specifically tailored to the characteristics of the respective datasets. We find (1) that previous applications of the signals approach yield economically meaningful results; (2) that composite indicators aggregating information contained in individual indicators add value to the signals approach; and (3) that indicators which are found to be significant in-sample usually perform similarly well out-of-sample.
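The testing idea can be illustrated with a simplified sketch: an indicator's observed signaling performance is compared with its distribution under the null of no association, generated by resampling the crisis series. The paper's bootstraps are tailored to the dependence structure of the data; the plain permutation and the noise-to-signal statistic below are only stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a single indicator and a binary crisis series.
n = 200
indicator = rng.normal(size=n)
crisis = (rng.random(n) < 0.1).astype(int)

def noise_to_signal(indicator, crisis, quantile=0.8):
    """False-alarm rate relative to the hit rate when the indicator
    signals above a percentile threshold (lower is better)."""
    signal = indicator > np.quantile(indicator, quantile)
    hit_rate = signal[crisis == 1].mean() if crisis.sum() > 0 else np.nan
    false_rate = signal[crisis == 0].mean()
    return false_rate / hit_rate if hit_rate > 0 else np.inf

observed = noise_to_signal(indicator, crisis)

# Null distribution: shuffle the crisis series so that any fit is random.
# (A block bootstrap would preserve serial dependence; a permutation
#  is used here purely for illustration.)
draws = np.array([noise_to_signal(indicator, rng.permutation(crisis))
                  for _ in range(1000)])

# One-sided p-value: how often does random data perform at least as well,
# i.e. achieve an equally low noise-to-signal ratio?
p_value = np.mean(draws <= observed)
print(f"observed NtS = {observed:.2f}, bootstrap p-value = {p_value:.3f}")
```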
Read article
Technological Intensity of Government Demand and Innovation
Viktor Slavtchev, S. Wiederhold
Abstract
Governments purchase everything from airplanes to zucchini. This paper investigates whether the technological intensity of government demand affects corporate R&D activities. In a quality-ladder model of endogenous growth, we show that an increase in the share of government purchases in high-tech industries increases the rewards for innovation, and stimulates private-sector R&D at the aggregate level. We test this prediction using administrative data on federal procurement performed in US states. Both panel fixed effects and instrumental variable estimations provide results in line with the model. Our findings bring public procurement within the realm of the innovation policy debate.
Read article
Does Central Bank Staff Beat Private Forecasters?
Makram El-Shagi, Sebastian Giesen, A. Jung
IWH Discussion Papers,
No. 5,
2012
Abstract
In the tradition of Romer and Romer (2000), this paper compares staff forecasts of the Federal Reserve (Fed) and the European Central Bank (ECB) for inflation and output with corresponding private forecasts. Standard tests show that the Fed, and to a lesser extent the ECB, has a considerable information advantage regarding inflation and output. Using novel tests for conditional predictive ability and forecast stability for the US, we identify the driving forces behind the narrowing of the Greenbook forecasts' information advantage, which coincides with the Great Moderation.
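A Romer-and-Romer-style comparison can be sketched as an encompassing regression: realized inflation is regressed on both the staff forecast and the private forecast, and a significant staff coefficient, conditional on the private forecast, points to an information advantage. The series below are simulated placeholders, not Greenbook or survey data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical series standing in for staff (Greenbook) and private forecasts.
n = 120
staff_forecast = rng.normal(2.0, 1.0, n)
private_forecast = staff_forecast + rng.normal(0.0, 0.5, n)
inflation = staff_forecast + rng.normal(0.0, 0.3, n)  # staff holds extra information

# Encompassing regression: realized inflation on both forecasts,
# with HAC standard errors to account for serial correlation.
X = sm.add_constant(np.column_stack([staff_forecast, private_forecast]))
result = sm.OLS(inflation, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})

print(result.summary(xname=["const", "staff", "private"]))
```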
Read article
Why Does the East Export So Little? An Empirical Analysis of the Export Activities of the German Federal States
Götz Zeddies
AStA - Wirtschafts- und Sozialstatistisches Archiv,
No. 4,
2009
Abstract
In the aftermath of re-unification, East German exports declined by around 70% due to the breakdown of COMECON trade. Although export growth rates of the New Federal States have exceeded those of their West German counterparts since the mid-1990s, the export performance of the East German States, measured by the share of exports in GDP, is still comparatively poor. Whereas the low export performance of East German producers was long ascribed to competitive disadvantages, structural deficits at the micro and/or macro level are now often considered the main reason. Using bilateral trade data of the German Federal States, the present paper shows on the basis of a standard gravity model of trade that East German exports are markedly lower than predicted by the model. However, if the gravity model is augmented with additional variables representing structural differences between Federal States, these variables explain the lower export performance of Eastern Germany almost entirely. In particular, smaller firm sizes and lower shares of manufacturing industries in gross value added are identified as important explanatory factors for the comparatively weak export performance of the New German States.
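For reference, a gravity equation of the kind described, augmented with structural controls, can be written as follows; the variable names are illustrative rather than the paper's exact specification.

```latex
% Augmented gravity equation for exports X_{ij} of Federal State i to trading partner j
\[
  \log X_{ij} = \beta_0 + \beta_1 \log \mathit{GDP}_i + \beta_2 \log \mathit{GDP}_j
              + \beta_3 \log \mathit{dist}_{ij} + \gamma' Z_i + \varepsilon_{ij},
\]
% where Z_i collects structural characteristics of the exporting state,
% e.g. average firm size and the manufacturing share in gross value added.
```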
Read article
Forecasting Currency Crises: Which Methods Signaled the South African Crisis of June 2006?
Tobias Knedlik, Rolf Scheufele
South African Journal of Economics,
2008
Abstract
In this paper we test the ability of three of the most popular methods to forecast South African currency crises, with a special emphasis on their out-of-sample performance. We choose the latest crisis of June 2006 to conduct an out-of-sample experiment. The results show that the signals approach was not able to forecast the out-of-sample crisis correctly; the probit approach was able to predict the crisis, but only with models that were based on raw data. The Markov-regime-switching approach predicts the out-of-sample crisis well. However, the results are not straightforward. In-sample, the probit models performed remarkably well and were also able to detect, at least to some extent, out-of-sample currency crises before their occurrence. The recommendation is not to restrict forecasting to only one approach.
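Of the three methods, the probit approach is the most compact to sketch: a binary crisis indicator is regressed on lagged early-warning indicators, and an alarm is issued when the predicted probability exceeds a threshold. The data and the threshold below are hypothetical placeholders, not the paper's specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Hypothetical monthly data: a crisis dummy and two lagged indicators
# (e.g. reserve growth and a real exchange rate deviation).
n = 150
indicators = rng.normal(size=(n, 2))
latent = 0.8 * indicators[:, 0] - 0.6 * indicators[:, 1] + rng.normal(size=n)
crisis = (latent > 1.5).astype(int)

# Probit early-warning model estimated on an in-sample window ...
X = sm.add_constant(indicators)
model = sm.Probit(crisis[:120], X[:120]).fit(disp=0)

# ... and evaluated out-of-sample: signal a crisis when the predicted
# probability exceeds a chosen threshold.
prob_out = model.predict(X[120:])
alarm = prob_out > 0.3
print("out-of-sample alarms:", alarm.sum(), "of", len(alarm))
```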
Read article