
4. Method and data

4.2 Methods

4.2.3 Estimation techniques

Pooled OLS with panel corrected standard errors

I have tested the OLS assumptions, and since the tests showed contemporaneous correlation of the errors, a basic OLS regression may fall short. When analysing time-series cross-sectional (TSCS) data, Beck and Katz (1995) suggest using OLS estimates with panel corrected standard errors, which are well-suited for TSCS models plagued by contemporaneously correlated errors (Beck 2008). I follow this recommendation by using OLS with panel corrected standard errors as the main approach.
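The Beck and Katz correction can be sketched in a few lines. The numpy implementation below is purely illustrative (it is not the software actually used here): it assumes a balanced panel with rows sorted by unit and then time, and the simulated data are not the thesis data.

```python
import numpy as np

def ols_pcse(X, y, n_units, n_periods):
    """OLS point estimates with Beck-Katz panel-corrected standard errors.

    Assumes a balanced panel whose rows are sorted by unit, then time.
    Sigma estimates the contemporaneous covariance of the errors across
    units; the sandwich formula replaces the usual OLS variance.
    """
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ beta
    E = resid.reshape(n_units, n_periods)      # unit-by-period residuals
    Sigma = (E @ E.T) / n_periods              # N x N contemporaneous covariance
    Omega = np.kron(Sigma, np.eye(n_periods))  # block (i, j) = Sigma_ij * I_T
    bread = np.linalg.inv(X.T @ X)
    vcov = bread @ X.T @ Omega @ X @ bread
    return beta, np.sqrt(np.diag(vcov))

# Illustrative balanced panel: 5 countries observed over 20 periods.
rng = np.random.default_rng(1)
N, T = 5, 20
x = rng.normal(size=N * T)
X = np.column_stack([np.ones(N * T), x])
y = 1.0 + 2.0 * x + 0.1 * rng.normal(size=N * T)
beta, se = ols_pcse(X, y, N, T)
```

The point estimates are ordinary OLS; only the standard errors change, which is why PCSE is attractive when the coefficients themselves are believed to be consistently estimated.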

However, to check whether alternative and possibly better estimations yield different results, I also consider other estimation techniques for robustness checks.


Random or fixed effects

Alternative ways of estimating responsiveness are random effects (RE) and fixed effects (FE) models. The core difference between the two approaches lies in how unit-specific effects are treated.

In fixed effects models, unobserved unit effects are built into the model by including a dummy variable for each unit. Such models examine individual differences in intercepts, assuming the same slopes and constant variance across units. All individual-specific effects are removed, so that over-time variation is isolated, time-constant variables drop out, and the unit intercepts are swept away. The fixed effect is thus allowed to be correlated with the included independent variables, and the OLS assumption of no omitted variables is not violated (Park 2011, 8). The random effects model, on the other hand, assumes that the random effect is uncorrelated with the explanatory variables included in the analysis. This assumption is strong, and if it is unfulfilled, random effects models yield biased estimates. Although FE models have other downsides, discussed below, they are more often employed than RE models in the political science literature because of the potential bias associated with RE estimates.
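The within transformation behind the fixed effects estimator can be illustrated on simulated data (the variables below are invented for the example). It also shows why time-constant regressors drop out: after demeaning by unit, such a column is identically zero, so its coefficient is not identified.

```python
import numpy as np

def within_estimator(X, y, units):
    """Fixed effects via the within transformation: demean X and y by
    unit, then run OLS on the demeaned data. Unit intercepts are swept
    out, and any time-constant regressor becomes a column of zeros."""
    Xd = X.astype(float).copy()
    yd = y.astype(float).copy()
    for u in np.unique(units):
        m = units == u
        Xd[m] -= Xd[m].mean(axis=0)
        yd[m] -= yd[m].mean()
    beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    return beta, Xd

# Two units with very different levels; the true slope on x is 2.
rng = np.random.default_rng(2)
units = np.repeat([0, 1], 50)
x = rng.normal(size=100)
const = np.where(units == 0, 5.0, -3.0)   # a time-constant regressor
y = const + 2.0 * x + 0.1 * rng.normal(size=100)
beta, Xd = within_estimator(np.column_stack([x, const]), y, units)
# The demeaned time-constant column is all zeros, so its effect cannot
# be estimated, while the slope on x is still recovered.
```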

According to Dougherty (2011, 527), fixed effects should be used when the observations cannot be described as a random sample from a given population. In this analysis, the population is advanced democracies, while the sample contains a range of OECD members included in the ISSP Role of Government survey. Although it could be argued that this sample is random in a sense, some groups of countries are likely oversampled. If, however, I conclude that the sample is random, Dougherty (2011, 527) argues that I should check whether the coefficients in the fixed effects model and the random effects model (or pooled OLS model) are significantly different. This is tested with a Hausman test. On the view adopted by Dougherty, differing estimates in the two models imply use of the fixed effects model: if the p-value is below 0.05, the null hypothesis of no significant differences is rejected, and FE is recommended.
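The Hausman statistic compares the two sets of estimates directly. A minimal sketch, with entirely hypothetical numbers rather than results from this analysis:

```python
import numpy as np
from scipy import stats

def hausman(b_fe, b_re, V_fe, V_re):
    """Hausman statistic H = d' (V_FE - V_RE)^{-1} d with d = b_FE - b_RE,
    chi-squared with k = len(d) degrees of freedom under the null that
    the random effect is uncorrelated with the regressors."""
    d = np.asarray(b_fe) - np.asarray(b_re)
    H = float(d @ np.linalg.inv(np.asarray(V_fe) - np.asarray(V_re)) @ d)
    p = stats.chi2.sf(H, df=len(d))
    return H, p

# Hypothetical estimates: a large FE/RE gap yields a small p-value,
# which on the conventional reading favours fixed effects.
H, p = hausman([1.5], [1.0], [[0.02]], [[0.01]])
```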

However, the only question considered in the fixed effects model is whether temporal variation in X is associated with temporal variation in Y. Cross-sectional effects are eliminated, making it nearly impossible to estimate the impact of variables that stay almost constant over time (Beck 2008). This is a serious problem when investigating the impact of institutions, because they rarely change over time, and this is also the case for institutions of direct democracy. Fixed effects models are therefore ill-suited to estimate the effect of direct democracy on government responsiveness, and I must rely on alternative estimators.

Furthermore, even though fixed effects models are more often used, some argue that random effects models are preferable because they retain valuable information that fixed effects models discard (Bell and Jones 2015). While RE models do have an endogeneity problem, FE models remove endogeneity without identifying its source, even though the source is interesting in itself. Moreover, Middleton et al. (2016) show that FE may increase rather than reduce bias in some cases, while Plümper and Troeger (2019, 39) present results that invalidate the common interpretation of the Hausman test: although significant differences between the estimates of RE/pooled OLS models and FE models should imply the use of FE, the FE models were found to give more biased estimates than the two alternatives. They therefore conclude that rejection of the null hypothesis in the Hausman test should not necessarily lead to a recommendation of FE. I consequently employ random effects models as a supplement to the main pooled OLS regression models.

Interaction models

According to Brambor, Clark, and Golder (2006, 64), analysts should include interaction terms whenever they want to test conditional hypotheses. Studies investigating effects that are expected to vary across institutional contexts often make use of multiplicative interaction models (e.g. Gerber 1996; Bernauer, Giger, and Rosset 2015). Since I examine how institutional set-up, more specifically direct democracy, affects government responsiveness towards citizens' preferences, I make use of interaction terms. For instance, I expect bottom-up mechanisms of direct democracy to enhance responsiveness towards public preferences. The effect of public preferences (X) on government spending (Y) is thus expected to be at least partly conditional on the presence of bottom-up mechanisms (Z).
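A minimal simulated sketch of such a multiplicative interaction model (all coefficient values are illustrative, not estimates from this thesis). In Y = b0 + b1 X + b2 Z + b3 XZ, the marginal effect of preferences is b1 + b3 Z, so responsiveness differs depending on whether the bottom-up mechanism is present.

```python
import numpy as np

# Simulate Y = 0.2 + 0.4*X + 0.1*Z + 0.3*(X*Z) + e and fit by OLS.
rng = np.random.default_rng(3)
n = 500
X = rng.normal(size=n)            # public spending preferences
Z = rng.integers(0, 2, size=n)    # bottom-up mechanism present (0/1)
y = 0.2 + 0.4 * X + 0.1 * Z + 0.3 * X * Z + 0.1 * rng.normal(size=n)

design = np.column_stack([np.ones(n), X, Z, X * Z])
b = np.linalg.lstsq(design, y, rcond=None)[0]

effect_without = b[1]         # marginal effect of preferences when Z = 0
effect_with = b[1] + b[3]     # marginal effect of preferences when Z = 1
```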

Random slope models

In the main analysis I rely on data that are pooled across issues, but I also want to exploit the variety of issues to see whether responsiveness differs from issue to issue. To do this, I rely on multilevel modelling and the specification of random slope terms. The data are thus treated as multilevel, with country-year at the first level and issue at the second level. First, basic random intercept models are specified. These are compared with random slope models, and Akaike's Information Criterion (AIC) is used to indicate whether model fit improves when I let the effect of spending preferences on spending change vary by issue.
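The AIC comparison follows the usual formula AIC = 2k - 2 ln L, penalising the extra parameters that the random slope specification adds (a slope variance and a slope-intercept covariance). The log-likelihood values below are hypothetical, purely to illustrate the decision rule:

```python
def aic(log_likelihood, n_params):
    """Akaike's Information Criterion: AIC = 2k - 2 ln L; lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical log-likelihoods for the two nested specifications.
aic_intercept = aic(-412.6, 4)   # random intercept only
aic_slope = aic(-401.3, 6)       # random intercept + random slope
# Prefer the random-slope model if aic_slope < aic_intercept.
```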

Moreover, I present forest plots to indicate whether issue-specific responsiveness is significantly stronger or weaker than the overall (fixed) effect. To control for period effects that may be present in the TSCS data structure, I also include survey wave dummies in the models.