
Phase 6: producing the report

The last phase of the analysis was dedicated to the written presentation of the findings.

When transcribed and printed out, the data set for this research extended over a couple of hundred pages. Only a small portion of the data set is presented as findings in this dissertation. Therefore, great care was required in selecting appropriate extracts for the presentation.

I tried to extract the material so that it could represent the findings in a clear and readable way, and at the same time illustrate that there were ‘enough data extracts to demonstrate the prevalence of the theme’ (Braun & Clarke, 2006, p. 93). In the end I had to trust my own understanding and the construction of the empirical material, and present extracts that I believed were adequate.

Braun and Clarke (2006) stress that the researcher ought not merely to report plain data; rather, the material should be used analytically:

Extracts need to be embedded within an analytical narrative that compellingly illustrates the story that you are telling about your data, and your analytic narrative needs to go beyond description of the data, and make an argument in relation to your research question. (Braun & Clarke, 2006, p. 93)

I argue that analysing the interviews thematically is already an analytical approach; as long as plain data are presented underneath a headline, the accounts are already used to say something that was not necessarily intended by the interviewee.

Because music therapy within compulsory mental healthcare is not a widely explored area, I personally think that a purely descriptive presentation of the user experiences could also provide useful information for the field of music therapy. A pure description of the participants’ statements could also do justice to the actual statements. However, I agree with Braun and Clarke (2006) that the researcher’s description of the themes and narratives is an important part of the research, both for clarifying the content of the themes and for using the themes adequately as part of the research process.

In the presentation of the themes I have tried to find a golden mean: I wanted to give an honest and authentic portrayal of the actual conversation, and at the same time to make a structured presentation comprehensible to the reader (Kvale & Brinkmann, 2010), based on themes that benefit the research. Sometimes quotations stand alone, but at other times it felt necessary to include the context of a given statement and the preceding questions in order to promote trustworthiness. As illustrated earlier in table 2, I have compressed the interview extracts to make them readable when presented in the text. The idea has been to present each statement clearly, the way I think the research participants intended. Hence, filler words and disrupted sentences are sometimes removed if this clarifies the statement without changing its content.

Parts are also removed where there are long digressions, or where the topic changes during the statement. Sometimes the researcher’s questions are also condensed slightly to make the presentation more readable, without changing the content or the way the questions were posed (see table 2).

5.4 Methodological discussion

5.4.1 Trustworthiness and research quality

In order to assess the quality of a research process, two terms have traditionally been used: validity and reliability. If there is a high degree of validity, there is a good chance that the chosen research method, and the operationalisation of variables, really measure what they are meant to (Heale & Twycross, 2015). For example, if we want to know the temperature outside, the best way is probably to look at a thermometer. If we use standardized thermometers, check several different thermometers, and let other researchers verify the data, there is a good chance that we come up with a reliable result. It is easy to see that these two terms remind us of important aspects of research quality, especially within the quantitative domain.

It is not at all difficult to review the quality of my own research superficially using the concepts of validity and reliability. I want to investigate user experiences; hence I ask users about their experiences. And I want to present reliable results about user experiences; consequently I include quotations so that everyone can see for themselves how user experiences are found in the field. In a way we could already end the discussion without much further ado. However, the concepts of validity and reliability come with certain limitations when estimating the quality of qualitative research.

In qualitative research it is not always natural to speak about the reliability of the results, nor about whether the chosen research method provides adequate answers to the research question (Stige, Malterud, & Midtgarden, 2009). On the contrary, in qualitative research the researcher might not even know in advance what questions to pose. And as outlined earlier in the chapter, there are no clear answers for how to investigate human experiences. Speaking from the perspective of social constructivism, I take it for granted that I cannot find or portray any neutral results through the research. The overarching goal is still to present findings that are valid and reliable, but we need additional criteria for judging the trustworthiness of a qualitative research process.

Stige et al. (2009) suggest a broadminded approach to evaluating qualitative research. They note that specific criteria or rules are not necessarily the best way to understand the quality, depth or relevance of the research: ‘The practice of rule-based evaluation is only defensible when the study to be evaluated is based on a corresponding epistemological foundation. But this premise is often not present in qualitative research’ (Stige et al., 2009, p. 1505). Through the acronym EPICURE, the authors instead introduce seven items that could be included in the agenda for evaluating a study:

We suggest that these two dimensions of an evaluation agenda could be communicated through use of two acronyms: EPIC and CURE. The first cluster, EPIC, refers to the challenge of producing substantive stories based on engagement with a phenomenon or situation, processing of empirical material, interpretation of the evolving descriptions, and critique in relation to research processes and products. The second cluster, CURE, refers to the challenge of dealing with preconditions and consequences of research, with critique, usefulness, relevance, and ethics related to social situations and communities. Our arguments above indicate that neither of these two dimensions can be seen in isolation. Therefore, we have chosen to integrate them in the compound acronym EPICURE. (Stige et al., 2009, p. 1507)

The first part, EPIC, refers to the research as an active process in which the researcher is always an engaged party in the construction of data; presumptions, expectations, and interests are part of the interpretation processes, and we need to be aware of these elements in the construction of knowledge. The other half, CURE, points to the world outside of the research, and might answer questions of ‘how’ we research and ‘for whom’ the research is relevant.

Trust may be earned through transparency. I have, for example, outlined my postmodern-informed critical worldview on knowledge, and tried to describe in detail the whole research process, including a step-by-step analysis of the empirical investigation.

However, there will always be parts and procedures that are concealed from everyone but the researcher. There is perhaps no easy solution to this challenge in qualitative research, but I have tried to be honest about both fortunate and unfortunate choices that I have made throughout the research process. In this way the reader may evaluate the research process, and gain a better basis for judging the overall quality of the research.

If the study is to be considered trustworthy, the researcher also needs to clarify the intentions behind the study: how the results may be used or misused, and how people might be treated or mistreated as a result of this research. It is the researcher who is responsible for the published report (Trondalen, 2007). Even though the researcher may change opinions throughout an academic career, once published, the work will always remain available for misuse in the future. Since I cannot decide who will use or misuse this research, I can only try to clarify my intentions for the research, and present my arguments in an organized manner. Then perhaps it will be more difficult to misunderstand or to deliberately misuse the given statements. At one point during the empirical investigation, the information offered by a research participant was so private that I asked whether there was any of it that the participant did not want published.

5.4.2 Methodological challenges and limitations

Throughout the chapter I have tried to clarify some weaknesses and limitations of the chosen research method, and the shortcomings of a qualitative study such as this one.

Performing research on human experiences from a postmodernist point of view raises a few ontological and epistemological challenges. Below I will highlight some of the practical implications and methodological difficulties that I encountered during the research process, as well as some critiques that seem legitimate with the benefit of hindsight.

Interviews and social research as craftsmanship

One might think of the research interview method as a tool, and of research performance overall as craftsmanship. As with other professions, it takes time and practice to master the art of the research interview (Kvale & Brinkmann, 2010). There is no shame in admitting that I am not an experienced researcher, neither within music therapy nor within mental healthcare. Hence, I did not administer the interview process flawlessly. Especially as regards follow-up questions, the study might have benefitted from a more experienced interviewer. And even though a music therapist was present in every interview, I have noticed retrospectively from the interview recordings that interesting topics occasionally may have slipped away in the absence of adequate follow-up questions:

R: Do you have any thoughts about what it means for the whole week, to have music therapy on the schedule?

SARAH: Sometimes I look forward to Thursday a little, because then it’s music therapy you know.

R: Yes.

SARAH: So it means something.

R: Yes. But you also… You said that you participate in hikes?

SARAH: Yes, that’s not as fun [as music therapy], hehe. (Interview with Sarah)

In the example above it could, for instance, have been interesting to learn more about the importance of meaningful appointments on the weekly schedule, such as music therapy: ‘How often does she eagerly look forward to music therapy?’; ‘How does this affect the other days?’; or ‘When does it mean the most to have something to look forward to?’

The potential questions are many, and maybe I would have learned a lot just by saying: ‘That’s interesting. Could you say anything more about this?’ Hopefully I caught some of the participant’s thoughts on the matter at a later point in time, but it is likely that the data set would have looked different if a more skilled researcher had performed the interviews.

Especially in the first interview, I realized afterwards that I had not been completely neutral in the way I posed the questions. Even though the interview guide was designed to be quite neutral in its formulations, when translated into spoken form these questions were not always as unbiased, as we can see from the transcription below:

Do you have any thoughts about what has been the most important – you’ve said something about this already, but if you have something more to apply – in that you’re allowed to have music here? (Researcher, in the interview with Lee)

Even though I transcribed, evaluated, and learned from my mistakes prior to conducting the next interviews, I might occasionally have posed the questions in value-laden ways.

Seemingly, I sometimes failed at the task of ‘gently nudging without bias’, as the scholar Tim Rapley puts it (2007, p. 20).

Another potential source of bias was my own feedback as a researcher within the interviews. I have noticed comments and responses within the data set that appear value-laden. In trying to tune in on the participants and respond enthusiastically, the directedness of my feedback may have triggered similar responses, or even coloured the general sense of what made up the right answers. It seemed, for instance, that I, as the researcher, had a tendency to follow up the participants’ statements with the word ‘cool’:

L: It’s a stress releaser for me, to write tunes.

R: Yes, I see… to relax?

L: Yes, relax.

R: Cool. Is that something you do now, or something you did a long time ago, or something you do occasionally?

L: I do it rarely.

R: Yeah… Stress releaser.

L: Yes, stress releaser.

R: Cool, that’s fun to hear. (Interview with Lee)

In this example, which is also an extract from the first interview, my comments might seem value-laden. And even though the comments were only meant as positive and human responses, they might have been interpreted as judgements about the quality of the statements.

Limitations of the interview as the only research method in this study

I agree with the French philosopher Sartre that human beings can have no access to other minds (as described by Onof, 2020). And previously in this chapter I have pointed to an inevitable gap between what is thought and experienced by others and what I can comprehend as a researcher outside of others’ minds. In the following I will offer a few critiques of the interview as the only research method in this study, and discuss briefly how additional research methods could potentially have benefitted this study.61

I previously described the interview setting as a construction site for knowledge (Kvale & Brinkmann, 2010). And in several ways the research interview is a method that provides little ecological validity to the study. The interview conversation is constructed by the researcher, and is intended for a specific purpose: ‘In terms of the level of engagement, most interviews represent something related to an experimental practice within interpretivist research, because the researcher has to set them up; in this way, they are artificial’ (Keith, 2016, p. 234).

Early on in this research process I wondered whether I could obtain user perspectives in other ways than through mere interviews that would perhaps provide a more ‘natural’

61  The Ph.D. Adjudication Committee, in their preliminary report on this thesis, recommended including more critiques of the interview as the only research method.