
“Although we do not create data, we create theory out of data. If we do it correctly, then we are not speaking for our participants but rather are enabling them to speak in voices that are clearly understood and representative” (Strauss & Corbin, 1998).

The data analysis for this research took place at different points in time while data collection was still ongoing, as new information was obtained through electronic questionnaires and from colleagues at Akros. Continuous analysis of already collected data can give the researcher new insights into probable themes, categories and patterns, which may further enhance future data collection (Johnson & Christensen, 2008).

While conducting the interviews I wanted to listen closely to what people were saying and how they were saying it, making notes afterwards, as mentioned earlier in this chapter. I was attempting to understand how the interviewees were interpreting certain events. Taking their interpretations into account prevented me from jumping precipitously to my own theoretical conclusions; it kept me from imposing my first interpretations on the data and forced me to examine alternative explanations (Strauss & Corbin, 1998).

I commenced the process of analysis by transcribing the audio recordings of the interviews verbatim every day during the DHIO and facility trainings, and read through the notes I had taken to assess their potential contribution to the study. The second step was a detailed line-by-line analysis (figure 4.4), which was necessary at the beginning of the study to generate initial categories and to suggest relationships among them (figure 4.5); a combination of open and axial coding. Chunks of text were thus assigned codes according to their representation of a single theme or an issue associated with the research questions (Strauss & Corbin, 1998).

Figure 4.4 Transcript analysis and open coding

Figure 4.5 Mapping of categories and relationships between them
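
To make the open-coding step concrete, the sketch below shows in Python the essence of what was done by hand and later in Atlas.ti: chunks of text are paired with codes, then gathered per code so that categories and their relationships can be mapped. The second chunk and the exact code names are hypothetical illustrations, not study data.

```python
from collections import defaultdict

# Each transcript chunk is paired with the code(s) assigned to it.
# The second chunk and all code names are hypothetical illustrations.
coded_chunks = [
    ("If I have a woman in labor, then I assess and I have to decide "
     "whether she is able to deliver at the facility or she should be "
     "referred to the hospital.",
     ["Decisions/Clinical"]),
    ("We check the register before ordering drugs for next month.",
     ["Decisions/Administrative", "Data use"]),
]

# Invert the mapping: gather every chunk filed under each code. This is
# the structure used afterwards to map categories and relationships.
chunks_by_code = defaultdict(list)
for chunk, codes in coded_chunks:
    for code in codes:
        chunks_by_code[code].append(chunk)

for code, chunks in sorted(chunks_by_code.items()):
    print(f"{code}: {len(chunks)} chunk(s)")
```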

The main idea in content analysis is that the many words of the text are classified into far fewer content categories (Weber, 1985, cited in Tesch, 2013). Basic content analysis takes a systematic, deductive approach, bringing a clear a priori theoretical sense of analytic problems to an analysis of the substantive content of the text (Shaw, 1999). I followed a key procedure in content analysis, which is to design categories that are relevant to the research purpose and to sort all occurrences of relevant words or other recording units into these categories. The frequency of occurrences in each category was then counted (figure 4.6; the blue bars to the right of the codes indicate frequencies) and certain conclusions were drawn from it (Tesch, 2013). I also applied a method of exploring word usage, where I wanted to discover the range of meaning that a word can express in normal use. The target words were extracted together with a specified amount of text immediately preceding and following them. I then grouped together the words whose meaning was similar, established how narrowly or broadly a certain term was construed by the author of the text, and compared word uses among groups of authors (Tesch, 2013).
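
The two procedures just described, counting occurrences per category and extracting target words with a fixed amount of surrounding text (a keyword-in-context listing), can be sketched as follows. The category-to-keyword mapping is a hypothetical placeholder, not the actual coding scheme; the sample quote is taken from the transcripts.

```python
import re
from collections import Counter

# Hypothetical category -> keyword mapping; the real scheme followed the
# decentralized evidence-based decision-making framework from chapter 2.
categories = {
    "Decisions": ["decide", "decision", "refer"],
    "Data use": ["register", "report", "record"],
}

def category_frequencies(text):
    """Count how many recording units (words) fall into each category."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for word in words:
        for category, keywords in categories.items():
            if any(word.startswith(k) for k in keywords):
                counts[category] += 1
    return counts

def kwic(text, target, window=5):
    """List each occurrence of a target word together with a specified
    amount of text immediately preceding and following it."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = []
    for i, word in enumerate(words):
        if word.startswith(target):
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            hits.append(f"... {left} [{word}] {right} ...")
    return hits

quote = ("If I have a woman in labor, then I assess and I have to decide "
         "whether she is able to deliver at the facility or she should be "
         "referred to the hospital.")
print(category_frequencies(quote))   # Counter({'Decisions': 2})
for line in kwic(quote, "decide", window=4):
    print(line)
```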

Concerning the analysis of the qualitative answers to the test described in the sub-chapter above, I went through all 297 responses and filtered out the irrelevant ones (a process also described in that sub-chapter); once only the essential data were selected, I followed the same steps of analysis that I applied to field notes and transcripts. To analyze the quantitative data from the tests I used Microsoft Excel and basic formulas to calculate the average percentage, prepare graphs, etc.
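
For completeness, the sketch below reproduces in Python the kind of average-percentage calculation that was performed in Excel (e.g. with =AVERAGE()); the scores shown are hypothetical.

```python
# Hypothetical per-respondent test scores, in percent.
test_scores = [70, 85, 60, 90]

# Equivalent of the Excel formula =AVERAGE(range).
average_percentage = sum(test_scores) / len(test_scores)
print(f"Average score: {average_percentage:.1f}%")  # 76.2%
```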

I used the software Atlas.ti to carry out the qualitative analysis described above. The software made it easier to revise and interpret data as well as to find specific quotes. Field notes and transcripts were revised and coded by topics that emerged from the notes according to the decentralized evidence-based decision-making framework described in chapter 2 (table 2.1). Thus, while analyzing the empirical data I kept returning to the literature to look for matching categories and to examine relationships, similarities and differences. An example from the interview transcripts is: “If I have a woman in labor, then I assess and I have to decide whether she is able to deliver at the facility or she should be referred to the hospital. I decide whether or not to call an ambulance”. A decision concerning immediate help and treatment for the patient relates to the clinical decisions explained in chapter 2 and was grouped under the category “Decisions” → “Clinical”, along with other labels like “Managerial”, “Administrative”, “Managerial/Clinical”, etc. (figure 4.6). Knowing the contents of the categories, it was possible to map out all the categories and their labels on post-its and then visualize the relationships and dependencies between them on a single sheet of paper (Saldaña, 2015). Figure 4.8 depicts the same analysis applied to data from the questionnaire. After analyzing the questionnaire data, I analyzed the transcripts and field notes again, this time looking only for the codes I had attached to the questionnaire data; this was done in order to see the link between decisions that were already being made and data that facility staff identified as useful for decision-making but had not used before the trainings (figure 4.7).
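
Conceptually, this second pass amounts to intersecting two code sets, as in the sketch below; all code names here are hypothetical examples.

```python
# Second analysis pass: keep only the codes that appear in BOTH the
# transcript/field-note coding and the questionnaire coding, to link
# decisions already being made with data identified as useful.
transcript_codes = {"Decisions/Clinical", "Decisions/Managerial",
                    "Data use/Registers", "Barriers/Workload"}
questionnaire_codes = {"Decisions/Clinical", "Data use/Registers",
                       "Data use/Reports"}

shared_codes = transcript_codes & questionnaire_codes
print(sorted(shared_codes))  # ['Data use/Registers', 'Decisions/Clinical']
```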

Figure 4.6 Codes and number of repetitions. Field notes and transcripts

Figure 4.7 Codes and number of repetitions. Field notes and transcripts (second analysis)

Figure 4.8 Codes and number of repetitions. Questionnaire