
4.3 Collection of empirical data

4.3.4 Analysis of data

The collected data were exported directly from QuestBack to the Statistical Package for the Social Sciences (SPSS) for the purpose of statistical analysis. I have limited the analysis

to a few statistics due to time constraints and due to the limited frame of a master thesis.

Furthermore, the response rate in this study is low, and more sophisticated statistical analyses should probably be reserved for larger data sets. First, the data are analysed descriptively, looking at single variables and their frequencies, percentages, means and standard deviations.

Some variables were recoded into the same variable or into new variables. I also computed a new variable by summing the answers to the questions on links to industry, creating a variable distinguishing weak and strong links to industry. For the 21 variables related to skills, a factor analysis was done. Factor analysis attempts to identify underlying variables that explain the pattern of correlation within the observed variables. Usually a few factors account for most of the variation, and these factors can be used to replace the original variables (Hellevik, 2002:320-321). The 21 variables on skills were reduced to five underlying dimensions in this analysis, which will be discussed in chapter 5. I used Cronbach's alpha as a measure of the reliability of these dimensions.
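The reliability measure mentioned above can be sketched in a few lines of code. The thesis ran this in SPSS; the Python function and the item scores below are purely illustrative assumptions, not the actual survey data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each single item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical answers from five respondents to three Likert-scale skill items
scores = np.array([
    [4, 5, 4],
    [3, 3, 2],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
])
print(round(cronbach_alpha(scores), 2))  # → 0.93
```

By convention, alpha values above roughly .7 are taken to indicate acceptable internal consistency of a dimension.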

Second, the relationships between different variables are explored. I have used cross-tabulations, on which I ran Pearson's chi-square tests. The chi-square test tests the null hypothesis that there is no relationship between the two given variables. Significance values below .05 indicate that the rows and columns of the contingency table are dependent (Hellevik, 2002:402-406). I ran t-tests of the variables connected to industry links against four dimensions on skills. The same test was also run with variables on work experience and research discipline. An independent-samples t-test compares the mean scores of two groups and assumes that the two groups are independent of one another, that the dependent variable is normally distributed and that the two groups have approximately equal variance on the dependent variable. Levene's test for equality of variances examines whether the variances of the two groups are equal; significance values above .05 indicate that the variances are equal. The independent-samples t-test also sets out a null hypothesis claiming that the means of the two groups are not significantly different, while the alternative hypothesis says that they are (Hellevik, 2004:408-409).
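The tests described above were run in SPSS; a minimal equivalent in Python, with hypothetical data standing in for the survey variables, might look like this:

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 contingency table: industry links (weak/strong) by discipline
table = np.array([[30, 10],
                  [15, 25]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square p = {p:.4f}")  # p < .05 -> rows and columns are dependent

# Hypothetical mean skill scores for two independent groups of respondents
weak_links = [3.1, 2.8, 3.4, 2.9, 3.0, 3.2]
strong_links = [3.8, 4.1, 3.6, 4.0, 3.9, 3.7]

# Levene's test: significance above .05 means equal variances can be assumed
lev_stat, lev_p = stats.levene(weak_links, strong_links)

# Independent-samples t-test; equal_var is set according to Levene's result
t_stat, t_p = stats.ttest_ind(weak_links, strong_links, equal_var=lev_p > .05)
print(f"t-test p = {t_p:.4f}")  # p < .05 -> reject equal group means
```

Note how the decision rule mirrors the text: Levene's test first checks the equal-variance assumption, and the t-test is then run with or without that assumption.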

4.4 Reliability and validity

Reliability and validity are two technical criteria that say something about the qualities built into a measuring instrument. Reliability refers to consistency, both over time and internally. Consistency over time, or stability of measurement, means that we should get the same results when repeating the measurement at a different time but under the same

conditions. The instrument would be unreliable if we got different results. Reliability can be assessed through two administrations of the same instrument at two points in time, with so-called test-retest reliability. Internal consistency relates to the concept-indicator idea and assesses to what extent the different indicators are consistent with each other. Indicators working in different directions are not consistent. Internal consistency can be estimated with split-half techniques, such as coefficient alpha (Punch, 2005:95). Measures with high reliability produce scores that are closer to true scores and control for errors. A good measuring instrument with high reliability also picks up variance in the scores produced by respondents. The questionnaire used as a measuring instrument here has not been tested for its reliability in this way. It would thus have been an advantage to use existing survey tools, but since such tools do not seem to be available, a questionnaire was developed especially for this purpose.
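The split-half idea mentioned above can be sketched as follows. This is an illustrative Python sketch with made-up item scores; the thesis itself did not run this test. Each respondent's items are split into two halves, the halves are correlated, and the Spearman-Brown formula corrects for the halved test length:

```python
import numpy as np

def split_half_reliability(items: np.ndarray) -> float:
    """Split-half reliability with the Spearman-Brown correction."""
    items = np.asarray(items, dtype=float)
    odd_half = items[:, 0::2].sum(axis=1)       # sum of every other item
    even_half = items[:, 1::2].sum(axis=1)      # sum of the remaining items
    r = np.corrcoef(odd_half, even_half)[0, 1]  # correlation between halves
    return 2 * r / (1 + r)                      # step up to full-test length

# Hypothetical answers from five respondents to four questionnaire items
scores = np.array([
    [4, 4, 5, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 5, 4, 4],
])
print(round(split_half_reliability(scores), 2))
```

A value close to 1 indicates that the two halves measure the same underlying construct, i.e. that the indicators work in the same direction.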

The indicators chosen in the questionnaire are, however, well founded in the literature as well as in several skills frameworks. Another challenge is that self-assessments should be run in combination with performance tests, or at two different points in time with the same respondents, in order to get more reliable data on learning outcomes. On the other hand, the questionnaire included a question about what point of departure the students had taken when answering it. 94 % said they answered from what they think they have actually learnt during their Ph.D., 3 % said they answered from what they think Ph.D. students normally learn, and another 3 % said they answered from the explicit learning goals stated in curricula and courses. The high share who indicated that they answered from what they have actually learnt strengthens the reliability of their answers on skills and learning outcomes. The questionnaire also asked what stage of their doctoral degree the students were at. Less than 20 % reported being at the beginning of their doctoral education, which means that more than 80 % were either halfway through or about to finish their degree. The students are thus likely to give reliable answers about their learning outcomes, as they can tell from actual experience, and that further strengthens reliability.

Validity refers to the extent to which a measurement is well founded and corresponds accurately to the real world. Do we measure what we intend to measure? This relates to the extent to which an indicator empirically represents the concept to be measured and includes content validity, criterion-related validity and construct validity. Researchers should be concerned with both external and internal validity. External validity refers to the extent to which the results of a study are generalisable to a larger population. Internal validity refers to the rigour with which the study was conducted, such as the study's design, and to the extent to which

the study takes into account alternative explanations for any causal relationships (Campbell and Stanley, 1963).

According to Pascarella (2001), cross-sectional data lack internal validity since they do not reveal causality between variables. Studies with a cross-sectional design do not provide insight into the value added by education, since the students' knowledge when they were recruited to the education is unknown and it is thus impossible to say what effect the socialisation process through education has had (Pascarella, 2001, cited by Karlsen, 2011:67). There are, however, ways to overcome this challenge when measuring learning outcomes. Longitudinal studies are, as mentioned, one of them. Using a control group that has not entered higher education is another possibility, which would indicate how knowledge and skills have evolved in the group that entered the study programme versus the group that did not. Yet another strategy is conducting interviews with employers, asking which skills and competences they would like recently graduated employees to possess, and then afterwards checking with the employees whether they consider these skills and competences to be crucial in their new jobs and what role their last education has played in developing them (Rochester et al., 2005, cited by Karlsen, 2011:70).

As mentioned, the empirical part of this study does not take advantage of triangulation of methods as described here, due to lack of time and resources within the frame of a master thesis. The questionnaire thus has weak internal validity, which means that the results must be interpreted carefully, especially when it comes to causality and generalisations to a larger population. The results say first and foremost something about the respondents and their perception of their own skills acquisition. I come back to this question in chapter 6. On the other hand, the study includes a literature review, which makes the findings from this study more solid. Considerations about the methodological aspects are described in section 4.2.1.