German China Plan Likely To Fail
Oliver Holtemöller
MNI, December 17, 2025
In this paper we investigate whether differences exist among forecasts using real-time or latest-available data to predict gross domestic product (GDP). We employ mixed-frequency models and real-time data to reassess the role of survey data relative to industrial production and orders in Germany. Although we find evidence that forecast characteristics based on real-time and final data releases differ, we also observe minimal impacts on the relative forecasting performance of indicator models. However, when obtaining the optimal combination of soft and hard data, the use of final release data may understate the role of survey information.
In this paper we analyze the small-sample properties of full-information and limited-information estimators in a potentially misspecified DSGE model. To this end, we conduct a simulation study based on a standard New Keynesian model with price and wage rigidities. We then study the effects of omitted-variable problems on the structural parameter estimates of the model. We find that FIML performs best when the model is correctly specified. When some model characteristics are omitted, however, the performance of FIML is highly unreliable, whereas GMM estimates remain approximately unbiased and significance tests are mostly reliable.
This paper presents a method for computing early estimates of GDP growth in Germany. We employ MIDAS regressions to circumvent the mixed-frequency problem and use pooling techniques to efficiently summarize the information content of the various indicators. More specifically, we investigate whether it is better to disaggregate GDP (either via the total value added of each sector or by the expenditure side) or whether a direct approach is more appropriate when it comes to forecasting GDP growth. Our approach combines a large set of monthly and quarterly coincident and leading indicators and takes the respective publication delays into account. In a simulated out-of-sample experiment we evaluate the different modelling strategies conditional on the given state of information and depending on the model averaging technique. The proposed approach is computationally simple and can easily be implemented as a nowcasting tool. Finally, the method also allows the driving forces of the forecast to be retraced and hence makes the forecast outcome interpretable.
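To make the MIDAS idea concrete, the sketch below regresses simulated quarterly growth on an exponential-Almon-weighted sum of monthly indicator lags. All data, parameter values, and names are invented for illustration; the paper's actual specification (indicator set, lag length, estimation of the weight parameters) may differ.

```python
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    """Exponential Almon lag polynomial; weights are positive and sum to one."""
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k**2)
    return w / w.sum()

rng = np.random.default_rng(0)
n_q, n_lags = 80, 6                          # quarters; monthly lags per regression
monthly = rng.normal(size=3 * n_q + n_lags)  # monthly indicator, first n_lags pre-sample

# Build the low-frequency regressor: weighted sum of the most recent
# monthly observations available at the end of each quarter.
w = exp_almon_weights(-0.1, -0.05, n_lags)
X = np.empty(n_q)
for t in range(n_q):
    newest = n_lags + 3 * t + 2              # index of the last month of quarter t
    X[t] = w @ monthly[newest - n_lags + 1: newest + 1][::-1]  # newest lag first

# Simulated quarterly GDP growth driven by the weighted monthly indicator
y = 0.5 + 0.8 * X + rng.normal(scale=0.3, size=n_q)

# Estimate intercept and slope by least squares (the theta parameters are
# fixed here for brevity; in practice they are estimated jointly, e.g. by
# nonlinear least squares)
A = np.column_stack([np.ones(n_q), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
```

The Almon polynomial keeps the number of free parameters small no matter how many monthly lags enter, which is what makes mixed-frequency regressions with many indicators feasible.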
This paper analyses the 2008/2009 recession in Germany, which differs markedly from previous recessions in both its cause and its magnitude. We show to what extent expert forecasters and forecasts based on leading indicators failed to detect the timing and the magnitude of the recession. Both types of forecasts produced large errors during this recession, which implies that it was very difficult to forecast. However, some leading indicators (survey data, risk spreads, stock prices) did signal an economic downturn and hence beat univariate time series models. Although combining individual forecasts improves on the benchmark model, the combined forecasts are worse than several individual models. A comparison of expert forecasts with the best indicator-based forecasts shows only minor deviations. Overall, the scope for improving expert forecasts relative to indicator forecasts during the crisis was relatively small.
The paper analyzes leading indicators for GDP and industrial production in Germany. We focus on the performance of single and pooled leading indicators during the pre-crisis and crisis periods using various weighting schemes. Pairwise and joint significance tests are used to evaluate single-indicator models as well as forecast combination methods. In addition, we use an end-of-sample instability test to investigate the stability of forecasting models during the recent financial crisis. We find that, in general, only a small number of single-indicator models performed well before the crisis. Pooling can substantially increase the reliability of leading-indicator forecasts. During the crisis the relative performance of many leading-indicator models increased. At short horizons, survey indicators perform best, while at longer horizons financial indicators, such as term spreads and risk spreads, improve relative to the benchmark.
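As a minimal illustration of forecast pooling, the sketch below compares equal weights with inverse-MSE weights, one common weighting scheme, on invented forecasts of invented data; the paper's actual indicator set, evaluation design, and weighting schemes are richer.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 100
actual = rng.normal(size=T)
# Three hypothetical single-indicator forecasts with different noise levels
forecasts = np.stack([actual + rng.normal(scale=s, size=T) for s in (0.5, 1.0, 2.0)])

# Estimate weights on a training window, combine on the evaluation window
train, test = slice(0, 50), slice(50, T)
mse = ((forecasts[:, train] - actual[train]) ** 2).mean(axis=1)
w_inv = (1.0 / mse) / (1.0 / mse).sum()   # inverse-MSE weights (sum to one)
w_eq = np.full(3, 1.0 / 3.0)              # equal weights

def rmse(f):
    """Root mean squared error on the evaluation window."""
    return np.sqrt(((f - actual[test]) ** 2).mean())

rmse_inv = rmse(w_inv @ forecasts[:, test])
rmse_eq = rmse(w_eq @ forecasts[:, test])
rmse_worst = rmse(forecasts[2, test])
```

Even the simple equal-weight pool insures against the worst single model, which is the usual argument for pooling when the best indicator is not known in advance.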
In this paper we develop a small open economy model explaining the joint determination of output, inflation, interest rates, unemployment and the exchange rate in a multi-country framework. Our model – the Halle Economic Projection Model (HEPM) – is closely related to the global projection model recently published by the International Monetary Fund. Our main contribution is that we model the Euro area countries separately. In this version we consider Germany and France, which together represent about 50 percent of Euro area GDP. The model allows for country-specific heterogeneity in the sense that we capture different adjustment patterns to economic shocks. The model is estimated using Bayesian techniques. Out-of-sample and pseudo out-of-sample forecasts are presented.
This paper assesses whether the economy of East Germany is catching up with West Germany in terms of welfare. While the primary measure of convergence and catching up is per capita output, we also look at other macroeconomic indicators such as unemployment rates, wage rates, and production levels in the manufacturing sector. In contrast to existing studies of convergence between the regions of reunified Germany, our approach is based purely on the time series dimension and is thus directly focused on the catching-up process in East Germany as a region. Our testing setup includes standard ADF unit root tests as well as unit root tests that endogenously allow for a break in the deterministic component of the process. We find evidence of catching up for East Germany for most of the indicators. However, convergence is slow, and it can therefore be expected that closing the regional gap will take several more decades.
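For readers unfamiliar with the testing setup, the sketch below runs a bare-bones Dickey-Fuller regression (constant only, no lagged differences, so simpler than the augmented tests used in the paper) on a simulated, mean-reverting East-West gap series; data and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 400
# Simulated log East-West gap: AR(1) reverting toward zero, i.e. catching up
gap = np.empty(T)
gap[0] = -0.6
for t in range(1, T):
    gap[t] = 0.8 * gap[t - 1] + rng.normal(scale=0.02)

def df_tstat(y):
    """t-statistic on rho in  dy_t = c + rho * y_{t-1} + e_t.
    Values well below the ~-2.87 critical value (constant case, 5 percent)
    reject the unit-root null, i.e. suggest the gap is closing."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

t_stat = df_tstat(gap)
```

Failing to reject the unit root would mean the gap has no tendency to close, which is why unit root tests map naturally onto the catching-up question.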
This paper evaluates the New Keynesian Phillips Curve (NKPC) and its hybrid variant within a limited-information framework for Germany. The main interest rests on the average frequency with which firms re-optimize prices. We use the labor income share as the driving variable and allow for a source of real rigidity in the form of a fixed firm-specific capital stock. We employ a GMM estimation strategy as well as an identification-robust method based on the Anderson-Rubin statistic. We find that the German Phillips Curve is purely forward looking. Moreover, our point estimates are consistent with the view that firms re-optimize prices every two to three quarters. While these estimates seem plausible from an economic point of view, the uncertainty surrounding them is very large and also consistent with perfect nominal price rigidity, where firms never re-optimize prices. The analysis also offers some explanations for why previous GMM-based results for the German NKPC differ considerably. First, standard GMM results are very sensitive to the way the orthogonality conditions are formulated. In addition, model misspecification may go undetected by conventional J tests. Taken together, the analysis points to the need for identification-robust methods to obtain reliable estimates of the NKPC.
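As background (a standard textbook formulation, not necessarily the paper's exact specification), the hybrid NKPC and the Calvo mapping behind the "two to three quarters" interpretation can be written as:

```latex
% Hybrid NKPC: inflation depends on lagged and expected inflation and on
% real marginal cost, here proxied by the (log-deviation of the) labor
% income share \hat{s}_t:
\pi_t = \gamma_b \, \pi_{t-1} + \gamma_f \, E_t \pi_{t+1} + \lambda \, \hat{s}_t
% The purely forward-looking case corresponds to \gamma_b = 0.
% Under Calvo pricing with discount factor \beta and probability \theta of
% NOT re-optimizing in a given quarter (constant-capital benchmark;
% firm-specific capital scales \lambda down by a further factor):
\lambda = \frac{(1-\theta)(1-\beta\theta)}{\theta}
% The expected time between re-optimizations is 1/(1-\theta) quarters, so
% re-optimization every two to three quarters corresponds to \theta
% between roughly 0.5 and 0.67.
```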
This paper describes the IWH macroeconometric model: a structural model of the German economy based on quarterly data. The contribution focuses on the specification and estimation of the supply side of the model. This approach ensures a theoretically founded long-run model equilibrium. The model thus combines desirable short-run forecasting properties with long-run theoretical requirements. For selected macroeconomic aggregates, the short- to long-run effects of supply and demand shocks are presented. In addition, model simulations illustrate the effects of external shocks on the model as a whole.
In this paper we test the ability of three of the most popular methods to forecast the South African currency crisis of June 2006. In particular, we are interested in the out-of-sample performance of these methods, and we therefore choose the latest crisis for an out-of-sample experiment. In sum, the signals approach was not able to forecast the out-of-sample crisis correctly; the probit approach was able to predict the crisis, but only with models based on raw data. A Markov-regime-switching approach also predicts the out-of-sample crisis. The answer to the question of which method performed best in forecasting the June 2006 currency crisis is: the Markov-switching approach, since it called most of the pre-crisis periods correctly. However, the victory is not clear-cut. In-sample, the probit models perform remarkably well, and they are also able to detect, at least to some extent, out-of-sample currency crises before their occurrence. It can therefore not be recommended to focus on one approach only when evaluating the risk of currency crises.
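To illustrate the regime-switching idea, the sketch below runs a two-regime Hamilton filter with hand-picked (not estimated) parameters on an invented crisis-pressure series; the models in the paper estimate these parameters from the data and use richer specifications.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical exchange-market-pressure series: a calm regime around 0,
# then a crisis-like level shift in the last 20 periods
T = 120
y = rng.normal(0.0, 1.0, T)
y[100:] += 4.0

mu = np.array([0.0, 4.0])            # assumed regime means (calm, crisis)
P = np.array([[0.95, 0.05],          # row i: transition probs from regime i
              [0.10, 0.90]])
prob = np.array([0.5, 0.5])          # initial regime probabilities

filtered = np.empty((T, 2))
for t in range(T):
    pred = P.T @ prob                                  # one-step-ahead regime probs
    lik = np.exp(-0.5 * (y[t] - mu) ** 2) / np.sqrt(2 * np.pi)  # N(mu, 1) densities
    prob = pred * lik / (pred * lik).sum()             # Bayes update
    filtered[t] = prob

crisis_prob = filtered[:, 1]         # filtered probability of the crisis regime
```

The filtered crisis probability is the object used to "call" pre-crisis periods: it stays near zero in calm times and jumps toward one once observations become far more likely under the crisis regime.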