Has the Euro Increased International Price Elasticities?
Oliver Holtemöller, Götz Zeddies
IWH Discussion Papers,
No. 18,
2010
Published in: Empirica
Testing for Structural Breaks at Unknown Time: A Steeplechase
Makram El-Shagi, Sebastian Giesen
Abstract
This paper analyzes the role of common data problems when identifying structural breaks in small samples. Most notably, we survey the small sample properties of the most commonly applied endogenous break tests, developed by Brown, Durbin, and Evans (1975) and Zeileis (2004), Nyblom (1989) and Hansen (1992), and Andrews, Lee, and Ploberger (1996). Power and size properties are derived using Monte Carlo simulations. The results show that it is mostly the CUSUM-type tests that are affected by the presence of heteroscedasticity, whereas the individual-parameter Nyblom test and the AvgLM test prove highly robust. However, every test is significantly affected by leptokurtosis. Contrary to other tests, where skewness is far more problematic than kurtosis, skewness has no additional effect for any of the endogenous break tests we analyze. In terms of overall robustness the Nyblom test performs best, while remaining almost on par with more recently developed tests in terms of power.
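As a rough illustration of the Monte Carlo approach described in the abstract (not the authors' actual design), the following sketch estimates the empirical size and power of a simple OLS-based CUSUM test for a mean shift. The sample size, break magnitude, and the 5% critical value 1.36 (sup of a Brownian bridge) are illustrative assumptions:

```python
import numpy as np

def ols_cusum_reject(y, X, crit=1.36):
    """OLS-based CUSUM test: compare the sup of the scaled cumulative sum
    of OLS residuals with the 5% Brownian-bridge critical value."""
    T = len(y)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma = resid.std(ddof=X.shape[1])
    W = np.cumsum(resid) / (sigma * np.sqrt(T))
    return np.abs(W).max() > crit

rng = np.random.default_rng(0)
T, reps = 100, 500
X = np.ones((T, 1))                  # mean-shift (intercept-only) model
size = power = 0
for _ in range(reps):
    e = rng.standard_normal(T)
    size += ols_cusum_reject(e, X)                      # no break: counts false rejections
    shift = np.where(np.arange(T) >= T // 2, 1.5, 0.0)  # break at mid-sample
    power += ols_cusum_reject(e + shift, X)             # break present: counts detections
print(f"empirical size: {size/reps:.3f}, power: {power/reps:.3f}")
```

The same loop, rerun with heteroscedastic or leptokurtic innovations, is the kind of exercise by which size distortions of such tests are documented.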
Should We Trust in Leading Indicators? Evidence from the Recent Recession
Katja Drechsel, Rolf Scheufele
Abstract
The paper analyzes leading indicators for GDP and industrial production in Germany. We focus on the performance of single and pooled leading indicators during the pre-crisis and crisis periods using various weighting schemes. Pairwise and joint significance tests are used to evaluate single indicators as well as forecast combination methods. In addition, we use an end-of-sample instability test to investigate the stability of forecasting models during the recent financial crisis. We find that, in general, only a small number of single-indicator models performed well before the crisis. Pooling can substantially increase the reliability of leading-indicator forecasts. During the crisis the relative performance of many leading-indicator models increased. At short horizons, survey indicators perform best, while at longer horizons financial indicators, such as term spreads and risk spreads, improve relative to the benchmark.
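A minimal sketch of forecast pooling with two weighting schemes, in the spirit of (but not identical to) the combinations evaluated in the paper; the three "indicator" forecasts and their noise levels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
target = rng.standard_normal(T)  # stand-in for the growth series being forecast

# three hypothetical indicator-based forecasts with different noise levels
forecasts = np.stack([target + s * rng.standard_normal(T) for s in (0.5, 1.0, 2.0)])
mse = ((forecasts - target) ** 2).mean(axis=1)

# scheme 1: equal weights; scheme 2: weights proportional to inverse MSE
equal = forecasts.mean(axis=0)
w = (1 / mse) / (1 / mse).sum()
weighted = w @ forecasts

mse_equal = ((equal - target) ** 2).mean()
mse_weighted = ((weighted - target) ** 2).mean()
print(mse, mse_equal, mse_weighted)
```

With unequal indicator quality, inverse-MSE weights downweight the noisy indicators and typically beat the equal-weight pool, which mirrors why the choice of weighting scheme matters.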
A First Look on the New Halle Economic Projection Model
Sebastian Giesen, Oliver Holtemöller, Juliane Scharff, Rolf Scheufele
Abstract
In this paper we develop a small open economy model explaining the joint determination of output, inflation, interest rates, unemployment, and the exchange rate in a multi-country framework. Our model – the Halle Economic Projection Model (HEPM) – is closely related to studies recently published by the International Monetary Fund (global projection model). Our main contribution is that we model the Euro area countries separately. In this version we consider Germany and France, which together represent about 50 percent of Euro area GDP. The model allows for country-specific heterogeneity in the sense that we capture different adjustment patterns to economic shocks. The model is estimated using Bayesian techniques. Out-of-sample and pseudo out-of-sample forecasts are presented.
Is there a Superior Distance Function for Matching in Small Samples?
Eva Dettmann, Claudia Becker, Christian Schmeißer
Abstract
The study contributes to the development of 'standards' for the application of matching algorithms in empirical evaluation studies. The focus is on the first step of the matching procedure, the choice of an appropriate distance function. Supplementary to most former studies, the simulation is strongly based on empirical evaluation situations. This reality orientation induces the focus on small samples. Furthermore, variables with different scale levels must be considered explicitly in the matching process. The choice of the analysed distance functions is determined by the results of former theoretical studies and recommendations in the empirical literature. Thus, in the simulation, two balancing scores (the propensity score and the index score) and the Mahalanobis distance are considered. Additionally, aggregated statistical distance functions not yet used for empirical evaluation are included. The matching outcomes are compared using non-parametric scale-specific tests for identical distributions of the characteristics in the treatment and control groups. The simulation results show that, in small samples, aggregated statistical distance functions are the better choice for summarising similarities in differently scaled variables compared to the commonly used measures.
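A minimal sketch of Mahalanobis-distance nearest-neighbour matching in a small sample, one of the distance functions compared in the study; the covariate distributions and sample sizes are invented for illustration:

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(2)
# small-sample setting: 15 treated and 30 control units, two correlated covariates
cov = np.array([[1.0, 0.6], [0.6, 2.0]])
treated = rng.multivariate_normal([0.3, 0.3], cov, size=15)
control = rng.multivariate_normal([0.0, 0.0], cov, size=30)

# the Mahalanobis distance rescales by the inverse covariance of the pooled sample,
# so covariates measured on different scales are comparable
VI = np.linalg.inv(np.cov(np.vstack([treated, control]).T))
D = cdist(treated, control, metric="mahalanobis", VI=VI)

# nearest-neighbour matching with replacement: each treated unit gets the
# control unit with the smallest distance
matches = D.argmin(axis=1)
print(matches)
```

Categorical covariates would need a different treatment, which is exactly where the aggregated statistical distance functions examined in the study come in.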
Equity and Bond Market Signals as Leading Indicators of Bank Fragility
Reint E. Gropp, Jukka M. Vesala, Giuseppe Vulpes
Journal of Money, Credit and Banking,
No. 2,
2006
Abstract
We analyse the ability of the distance to default and subordinated bond spreads to signal bank fragility in a sample of EU banks. We find leading properties for both indicators. The distance to default exhibits lead times of 6-18 months. Spreads have signal value only close to the emergence of problems. We also find that implicit safety nets weaken the predictive power of spreads. Further, the results suggest complementarity between the two indicators. We also examine the interaction of the indicators with other information and find that their additional information content may be small but is not insignificant. The results suggest that market indicators reduce type II errors relative to predictions based on accounting information only.
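For orientation, a Merton-style distance to default can be sketched as below; this assumes the asset value V, default point D, drift mu, and asset volatility sigma are already known, whereas in practice (and in the paper's setting) they have to be backed out from equity market data. All numbers are illustrative:

```python
import math

def distance_to_default(V, D, mu, sigma, T=1.0):
    """Merton-style distance to default: the number of asset-volatility
    standard deviations between expected log asset value and the default point D."""
    return (math.log(V / D) + (mu - 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))

# a bank with assets 20% above its default point, moderate asset volatility
dd = distance_to_default(V=120.0, D=100.0, mu=0.05, sigma=0.15, T=1.0)
print(round(dd, 2))  # → 1.47
```

A shrinking distance to default, i.e. fewer standard deviations of cushion, is the kind of market-based fragility signal whose lead time the paper evaluates.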
Quality of Service, Efficiency, and Scale in Network Industries: An Analysis of European Electricity Distribution
Christian Growitsch, Tooraj Jamasb, Michael Pollitt
IWH Discussion Papers,
No. 3,
2005
Abstract
Quality of service is of major economic significance in natural monopoly infrastructure industries and is increasingly addressed in regulatory schemes. However, this important aspect is generally not reflected in efficiency analyses of these industries. In this paper we present an efficiency analysis of electricity distribution networks using a sample of about 500 electricity distribution utilities from seven European countries. We apply the stochastic frontier analysis (SFA) method to multi-output translog input distance function models to estimate cost and scale efficiency with and without incorporating quality of service. We show that introducing the quality dimension into the analysis significantly affects estimated efficiency. In contrast to previous research, smaller utilities seem to exhibit lower technical efficiency once quality is incorporated. We also show that incorporating quality of service does not alter scale economy measures. Our results emphasise that quality of service should be an integrated part of efficiency analysis and incentive regulation regimes, as well as of the economic review of market concentration in regulated natural monopolies.
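A bare-bones sketch of stochastic frontier estimation with a normal/half-normal error structure, far simpler than the multi-output translog input distance functions used in the paper; the data-generating values and starting points are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 500
x = rng.uniform(0, 2, n)
v = 0.2 * rng.standard_normal(n)          # symmetric noise
u = np.abs(0.4 * rng.standard_normal(n))  # half-normal inefficiency (one-sided)
y = 1.0 + 0.8 * x + v - u                 # production frontier: y = b0 + b1*x + v - u

def negll(theta):
    """Negative log-likelihood of the normal/half-normal SFA model."""
    b0, b1, ls_v, ls_u = theta
    sv, su = np.exp(ls_v), np.exp(ls_u)   # log-parameterised to keep stds positive
    eps = y - b0 - b1 * x
    s = np.sqrt(sv**2 + su**2)
    lam = su / sv
    ll = (np.log(2) - np.log(s) + norm.logpdf(eps / s)
          + norm.logcdf(-eps * lam / s))
    return -ll.sum()

res = minimize(negll, x0=[0.5, 0.5, np.log(0.3), np.log(0.3)], method="BFGS")
b0, b1 = res.x[:2]
print(round(b1, 2))
```

Adding a quality-of-service variable to the frontier and comparing the resulting efficiency scores with and without it is, in stylised form, the comparison the paper carries out.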