
Several data sources were used to document the process, both for the purpose of this study and for YSI in general. This strategic choice enabled triangulation between sources, creating a more comprehensive understanding of the context at hand. Some sources were collected automatically throughout the process and stored in YSI's electronic storage systems. Other data were captured through interviews conducted both during the process and retrospectively. Each source is regarded as primary data for the study and is described below. Relevant parts of the available data are used to create a holistic picture of the effects laid out by the research questions.

Participant interviews. Six of the eight teams were originally selected for the study. The other two were excluded because they took part in a pilot of a corporate program, in which they collaborated with a larger business on specific domains from the beginning. This changed the background of their team processes so significantly that a comparison across teams was not meaningful. One further team was excluded due to the ethical considerations described in chapter 3.6. Two respondents were randomly chosen from each selected team, with the intention of gaining more than one participant perspective on the development of the team and its competencies at the different time points. Note that the final respondent in Team E was replaced once, due to availability.

Semi-structured in-depth interviews of 20 to 60 minutes were conducted, audio-recorded and later transcribed, capturing viewpoints on important aspects of the teams' processes. The focus was split between team development, competencies and mentoring functions. Some participants were more comprehensive in their answers than others, and answers could often cover several questions at once. Care was taken to avoid leading questions that might give the interviewee an impression of a "correct answer".

The interviews were done at three points during the program, as visualized in Figure 3. They started with an introduction to the thesis topic and the intention of the interview. The participants were then asked several questions in a semi-structured manner. Some questions first asked participants to rank certain aspects and then explain in depth how these played out in their context; others were more general questions that also required extensive answers. The rankings themselves were not of relevance and were used only to provide context for the examples and the subsequent discussion of how the period had been for the participants.

Figure 3. Timing of interviews and modules of the program

Final interviews with all participants. YSI conducted interviews of their own, aimed mainly at capturing the participants' holistic progress and their perspective on the program itself and their journey through it. These interviews include aspects outside the limited scope of the thesis, such as the application process, the innovation program content and the logistical experience of the Oslo Weeks. The questions relevant to the thesis are nonetheless useful for providing context.

Summary interviews with each mentor group. Three unstructured interviews of approximately two hours each were conducted with each mentor group to describe the processes of the two teams they followed. The point of these retrospective interviews was to create a focused collection of the other primary mentor-based data sources. Based on this, a timeline of the team processes was drafted on A3 paper that could later be compared with the developmental phases of the teams. Topics were based on the research questions, namely the development of competencies and team dynamics, examined with a focus on the interventions made and the mentoring style. The focus on critical events and processes led to interesting discussions and reflections. These interviews drew on the four data sources described below to minimize recall bias and thereby increase validity.

Mentor journals. Each mentor kept a journal describing their perspectives on various aspects of the teams. These were written as free reflections on events in the team and how the mentors felt they had handled them. A template provided examples of focal points such as body language, strong reactions, general vibes, conflicts or friction, the mentor's own performance, and more.

Mentor meeting notes. Meetings were held every one to three weeks, depending on organizational workload and the presence of team members, to discuss how the teams were doing, learn from each other as mentors, reflect and adjust the process. Notes were taken to capture the most relevant aspects of how well the teams were doing and what could be done to help them develop further.

Outcome report feedback. After each outcome report of the teams, feedback was written down by the mentors in collaboration with the program manager and sent to the participants.

Electronic support systems. Slack was used as a communication tool to facilitate interaction between mentors, participants and everyone else in the community. It provides analytics such as message counts, time stamps, names and interactions. Data from Slack were used to provide context and to reconstruct the team process in the summary interviews with the mentors. Google Calendar was used for the same purpose of improving the validity of the answers provided in the summary interviews.

Competency development survey. To guide the interpretation of the data and make it easier to find the interesting developments of teams and individuals, an overview was created through a survey that mapped changes in participant competencies from program start to program end. Both mentors ranked the participants from high to low on the competencies at four time points, placed at the start, the end and at the points of the interviews, as Figure 3 shows. An average of their rankings was then used to create graphs of development in the teams across three periods. However, Mentor 2 declined to rank the competencies of Team A, citing a lack of involvement with this specific team and arguing that the rankings would not be valid.
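The averaging step can be illustrated with a small sketch. Note that the competency names, the numeric scale and all values below are hypothetical illustrations, not the study's actual survey data; the study ranked participants from high to low, which is represented here by simple numeric scores.

```python
# Hypothetical sketch of the averaging step: two mentors score each
# competency at four time points, the scores are averaged per time point,
# and development per period is the difference between consecutive averages.
# All names, scales and values are illustrative assumptions.

time_points = ["start", "interview_1", "interview_2", "end"]

# rankings[mentor][competency] -> one score per time point (assumed 1-5 scale)
rankings = {
    "mentor_1": {"creativity": [2, 3, 3, 4], "communication": [3, 3, 4, 4]},
    "mentor_2": {"creativity": [2, 2, 3, 5], "communication": [2, 3, 3, 4]},
}

def average_rankings(rankings):
    """Average the mentors' scores per competency and time point."""
    competencies = rankings["mentor_1"].keys()
    return {
        c: [
            sum(r[c][i] for r in rankings.values()) / len(rankings)
            for i in range(len(time_points))
        ]
        for c in competencies
    }

averages = average_rankings(rankings)

# Development per period = change between consecutive averaged scores,
# giving three values for the three periods between the four time points.
development = {
    c: [scores[i + 1] - scores[i] for i in range(len(scores) - 1)]
    for c, scores in averages.items()
}
```

The three resulting values per competency correspond to the three periods between the four measurement points and could be plotted as the development graphs described above.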

The survey was based on a contextualized version of the KSM framework for entrepreneurial competencies, which was in turn based on the KSA framework (Lackéus, 2013). The knowledge category was changed to focus on knowledge and understanding of entrepreneurial tools and methods.

Communication ability, creativity, decisiveness and receptivity to feedback were added to the skill category. Finally, self-awareness and three commitment factors were added to the motivational factor category. A further part of the survey, added after these categories, covered team cohesion, alignment, practical collaboration and some other contextualizing factors, such as the handling of cooperation problems. Mentors were provided with detailed terminology definitions, were walked through them, and could ask the author for clarification when needed.
