
6 Analyses and Results

6.1 Descriptive Statistics

6.1.2 Variance and Symmetry

The standard deviations tell us about the variation in the different variables. As we see from table 6.1, the variation in variable CEO compensation is much higher than in fixed salary. The reason for this is that there is a greater spread in the payment of variable CEO compensation across firms. We also see this in the differences between the minimum and maximum values of the variables. Further, change in P/B, change in P/E, change in Jensen's alpha and change in EVA™ also have larger variances than the other variables. The large variances in change in P/B and change in P/E can be a result of different share prices from year to year across firms, while the variances in change in Jensen's alpha and change in EVA™ can be a result of different beta values, representing firm risk, across firms from year to year. There are also large variances in the variables that represent firm size, market value and revenue, which can be explained by the sample containing both small and large firms with different share prices, numbers of outstanding shares and income. Hence, we see that all of our variables have nonzero variance.

It is especially important that the independent variables have nonzero variance: if an independent variable has a variance equal to zero, its beta coefficient cannot be estimated, which means that no relationship between the independent and the dependent variable can be established. We would then not be able to test the independent variables' effect on the dependent variables in our study, as we also discuss under regression assumption 2 in subchapter 6.5.2.
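To illustrate why this is the case (this is a standard bivariate OLS result, added purely for illustration and not derived from our data), the estimated slope coefficient can be written as

\[
\hat{\beta}_1 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^{2}},
\]

where the denominator is proportional to the sample variance of the independent variable. If every observation of the independent variable is identical, the denominator is zero and no slope can be estimated.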

Further, by looking at the mean and median of the different variables, we can assess the variables' symmetry. Most of the variables have small differences between the mean and the median, but the variables with the largest differences are variable CEO compensation, change in P/B, change in P/E, change in Jensen's alpha and change in EVA™. This indicates that some firms have higher or lower values of these variables that pull the mean values up or down. These values differ widely from the majority of the other observation values, which indicates that these variables do not meet the requirement of symmetry. We consider these values as extreme values, that is, observations that deviate from the main trend of the observations in the different variables. The extreme values of our variables can be observed in the scatterplots presented in subchapter 6.5.8 and in appendices N and O under the discussion of regression assumption 8.
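To make the mean-median comparison concrete, the following minimal sketch uses invented figures (not taken from table 6.1) to show how a single extreme observation pulls the mean away from the median:

```python
import pandas as pd

# Hypothetical figures, only to illustrate the mean-median comparison.
comp = pd.Series([0.8, 1.0, 1.1, 1.2, 14.0], name="variable_comp")  # one extreme value

print(comp.mean())    # 3.62 -> dragged upwards by the extreme observation
print(comp.median())  # 1.1  -> barely affected
# A large gap between mean and median therefore signals asymmetry in the variable.
```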

Extreme values in regression analyses can result in biased and inaccurately estimated regression coefficients, which gives less valid results, as the error term may then not meet the requirement of normal distribution. This violates regression assumption 8, presented in subchapter 6.5.8. A closer examination of our data shows that the extreme values of our variables occur randomly and are not the result of measurement errors, and should therefore, in principle, not be removed from our study. These values reflect reality, and by removing the extreme values we may obtain results and relationships that differ from reality, and we may also lose valuable information. This would be critical for the validity of our results. However, if the error term does not meet the requirement of normal distribution, we may get biased regression coefficients and regression lines, and conclusions may be drawn from wrongly estimated regression coefficients, which would also decrease the validity of our results.

In order to test whether the error term is normally distributed, we base our assessment on the skewness and kurtosis of the observations. As we will discuss in subchapter 6.5.8, skewness indicates how skewed the data are, and the closer the skewness is to zero, the closer the error term is to being normally distributed. High values of kurtosis, on the other hand, indicate abnormal peakedness or flatness. Ideally, the values of skewness and kurtosis should lie within ±2, and in no case exceed 5. It is very important that the requirement for skewness is met, as the data will otherwise be skewed (Berry, 1993; Sandvik, 2013b).
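A small sketch of how such a skewness and kurtosis check could be scripted, assuming the observations sit in a pandas DataFrame with hypothetical column names (this only illustrates the ±2 guideline; it is not the procedure or data used in the thesis):

```python
import pandas as pd

# Hypothetical data; in practice df would hold one column per variable in table 6.1.
df = pd.DataFrame({
    "change_pb": [0.05, -0.10, 0.02, 0.08, 3.50],   # contains one extreme value
    "ln_mv":     [6.1, 6.4, 6.2, 6.5, 6.3],
})

checks = pd.DataFrame({
    "skewness": df.skew(),
    "kurtosis": df.kurt(),   # pandas reports excess kurtosis (0 for a normal distribution)
})
# Flag variables whose skewness or kurtosis lies outside the +/-2 guideline.
checks["within_guideline"] = checks.abs().le(2).all(axis=1)
print(checks)
```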


By examining the values of skewness and kurtosis in table 6.1, we see that we have high values of both skewness and kurtosis for our observations. This indicates that the requirement of normal distribution of the error term is not fulfilled, and that we should consider removing the extreme values. By examining the scatterplots between the independent and dependent variables in subchapter 6.5.8 and in appendix P, with and without extreme values, we see that some observations differ from the main trend and pull the regression line in the wrong direction, so that we get biased and inaccurate regression coefficients. We therefore choose to remove these extreme values rather than the whole observation (listwise deletion) from our data before we test our hypotheses, as we have different dependent and independent variables and do not want to lose many observations. We will consequently get missing values that are non-random, as we have removed these on purpose, and we will get a lower number of observations for our variables. In appendix A.1 we present the descriptive statistics for the variables without extreme values. We consider removing the extreme values as the most accurate choice, since we want to achieve unbiased results.
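The difference between removing only the extreme values and listwise deletion can be sketched as follows, assuming the data are held in a pandas DataFrame with hypothetical column names and an illustrative standard-deviation cutoff (in the thesis the extreme values were identified from the scatterplots, not from a fixed rule):

```python
import pandas as pd

# Hypothetical data: one column per variable, one row per firm-year observation.
df = pd.DataFrame({
    "change_pb":    [0.05, -0.10, 0.02, 0.08, -0.04, 3.50],
    "change_alpha": [0.01,  0.02, 0.00, -0.01,  0.01, 0.00],
})

def mask_extremes(s: pd.Series, z: float = 2.0) -> pd.Series:
    """Set values more than z standard deviations from the column mean to missing."""
    return s.where((s - s.mean()).abs() <= z * s.std())

cellwise = df.apply(mask_extremes)                 # only the extreme cell becomes missing
listwise = df[cellwise.notna().all(axis=1)]        # for comparison: the whole row is dropped
print(cellwise, listwise, sep="\n\n")
```

In the cell-wise approach, the observation with the extreme change in P/B keeps its value for change in Jensen's alpha, which is exactly why fewer observations are lost than under listwise deletion.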

In order to meet the requirement of normal distribution of the error term in a more satisfying way, it is also possible to transform the variables by using the natural logarithm, but only for variables that are strictly positive. Since our first research model contains change values that are both negative and zero, we cannot use the natural logarithm there and have to keep those variables as they are. Our second research model, on the other hand, includes absolute values of the dependent and independent variables and can hence be transformed. We therefore choose to transform these variables in order to meet the requirement of normal distribution of the error term. Table 6.2 shows the descriptive statistics of the transformed variables for our second research model with extreme values, so that we can examine whether we need to remove extreme values even after transforming our variables.
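A minimal sketch of this rule, with hypothetical column names and figures, applying the natural logarithm only to variables whose observations are all strictly positive and leaving the change variables untouched:

```python
import numpy as np
import pandas as pd

# Hypothetical data: variable CEO compensation is strictly positive and can be
# log-transformed; the change variable contains zero and negative values and cannot.
df = pd.DataFrame({
    "variable_comp": [120.0, 450.0, 80.0],
    "change_pb":     [-0.10, 0.00, 0.25],
})

transformed = df.copy()
for col in df.columns:
    if (df[col] > 0).all():                       # natural log is defined only for positive values
        transformed["LN_" + col] = np.log(df[col])
print(transformed.columns.tolist())               # ['variable_comp', 'change_pb', 'LN_variable_comp']
```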


Table 6.2 – Descriptive Statistics for the Second Research Model

From table 6.2 we see that by transforming our variables we meet the requirement of symmetry, and hence the normal distribution of the error term, except for beta. Most of the transformed variables have small differences between the mean and the median, and the requirements for skewness and kurtosis are met. Beta, however, has large differences between the mean and the median, and high values of skewness and kurtosis, indicating that this variable has some extreme values and that the assumption of normal distribution of the error term is not fulfilled. By examining the scatterplot for this variable, and the scatterplots between beta and the dependent variables in subchapter 6.5.8 and in appendix Q, we see that there are some extreme values which do not follow the general trend and which we therefore consider important to remove in order to achieve unbiased regression coefficients. The descriptive statistics for our second research model without extreme values are attached in appendix A.2.

Note to table 6.2: LN_Var is the natural logarithm of variable CEO compensation, LN_Fix is the natural logarithm of CEO fixed salary, LN_MV is the natural logarithm of market value, LN_Rev is the natural logarithm of revenue, LN_Beta is the natural logarithm of beta, LN_OS is the natural logarithm of the CEO's direct percentage ownership in the firm, LN_Age is the natural logarithm of the CEO's age, LN_Ten is the natural logarithm of the CEO's tenure measured in years, LN_BS is the natural logarithm of board size, which is the total number of directors on the board, and LN_BG is the natural logarithm of the percentage of female directors on the board. The observations in this table cover all four years, 2010-2013.

We will now conduct correlation analyses for both of our research models and their variables before we start to test our hypotheses.
