

7 Discussion

7.1 Insights and outcomes summarized

7.1.4 How mHealth can be used as a resource for traditional health evaluation methods and measures

RQ4. How can mHealth approaches and resources supplement traditional methods and measures in a protocol describing how to measure impacts of an mHealth intervention on patients and HCPs?

The design and testing of the protocol answered the first part of RQ4, concerning how mHealth approaches and resources can be used to supplement and complement traditional methods and measures. We combined standardized pre-post questionnaires – measuring psychological and physical wellbeing as well as patient-provider relationships – with usage-logs, focus group feedback, and usability questionnaires from both patients and HCPs. We believed that this combination of measures would allow us to answer not only what had changed during the intervention for both patients and HCPs, but also how and why these changes occurred during an intervention, using a practical case: diabetes self-management interventions (Figure 16). However, these results only demonstrate, not prove, the potential of mHealth as a resource for health intervention research. While we have the opportunity to broaden research’s perception of what is relevant and what impacts the medical community, we must take these results with a grain of salt and, before our next studies, take into account that both mHealth and traditional research have their strengths and weaknesses.
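As a minimal sketch of the “what changed” side of this design, the example below computes pre-post change scores from questionnaire totals and flags participants whose scores declined – the cases where usage-logs and focus-group feedback would then be consulted for the how and why. All participant IDs and values are invented for illustration; the actual instruments and scoring are described in the Methods.

```python
# Hypothetical pre/post questionnaire totals (e.g., an HCCQ-style scale);
# participant IDs and scores are invented for illustration.
baseline = {"p01": 72, "p02": 65, "p03": 80}
post     = {"p01": 78, "p02": 58, "p03": 81}

# "What changed": simple change scores per participant.
change = {pid: post[pid] - baseline[pid] for pid in baseline}

# "How and why": flag participants whose scores dropped, so usage-logs and
# focus-group feedback can be consulted for an explanation.
needs_follow_up = [pid for pid, delta in change.items() if delta < 0]

print(change)           # {'p01': 6, 'p02': -7, 'p03': 1}
print(needs_follow_up)  # ['p02']
```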

Traditional methods and measures are specific and validated, and they make it possible to replicate studies and build upon the same evidence as others in the field. However, they attempt to measure cumulative change reported at a single moment, as with the Healthcare Climate Questionnaire (HCCQ). Responses may fall victim to memory error or the patient’s mood that day, which may not accurately reflect their experience over the whole intervention. This may have occurred in the second case (T1D) analysed as part of the feasibility study, in which the participant responded that he felt his DN had less confidence in him, even though they both reported positive effects of using the system together. Situations during an intervention, especially interpersonal ones, are far more complex than can be captured by one method applied only a few times during the study, typically 2-3 times. While our focus group meetings helped to elaborate on the T1D patient’s side, it was not possible to clarify any miscommunications from the DN’s side, as she was unable to attend the HCP focus group meeting.

mHealth technologies reinforce and facilitate the concept of patient-centred and patient-driven research. mHealth for self-management puts individuals in the driver’s seat of their own health decisions. When we acknowledge patients’ decisions in our research activities, e.g., usage-log analysis, we are also forced to acknowledge that individuals use apps differently – the heterogeneous data of the RENEWING HEALTH study alone demonstrated that one size does not fit all. This forces us to rethink our questions, our interpretations of the data, and our assumptions about patient needs, priorities, self-management practices, and barriers to self-management. With the help of mHealth technologies and empowered patients, we can more effectively expand the impact of research by expanding the conversation and focus of healthcare practice to include that of the patient.

As demonstrated by the mixed-method feasibility study, usage-logs can also complement traditional measures. These logs record each interaction that a user has with an app or other mHealth device [264]. Researchers can use “usage patterns” to describe a patient’s journey through a study, e.g., their engagement with the intervention, their participation in the study, and when they were most and least engaged [265]. By comparing these data to other data collected during a study, we could theoretically begin to explain why changes in usage patterns occurred. We can also begin to ask more questions than we have been able to measure before, e.g., when did that user change which type of data they collected and used in their self-management routines? These could be followed by more qualitative questions that elaborate on or explain the responses. Analysis of usage logs and usage patterns is also a very new concept in medical and health research, with most cited studies published in the last 2-3 years. As such, the same questions apply to this form of data as to other patient-generated health data, e.g., how to structure the data and how trustworthy and reliable they are.
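To illustrate the kind of usage-pattern analysis described above, the following minimal sketch counts one user’s app interactions per ISO week and flags the most and least engaged weeks. The event tuples, field layout, and user IDs are invented for illustration and do not reproduce the FullFlow system’s actual log format.

```python
from collections import Counter
from datetime import datetime

# Hypothetical usage-log rows: (user ID, ISO timestamp, event type).
events = [
    ("p01", "2020-01-06T08:15:00", "glucose_entry"),
    ("p01", "2020-01-08T21:40:00", "view_trends"),
    ("p01", "2020-01-20T07:55:00", "glucose_entry"),
    ("p02", "2020-01-07T12:30:00", "share_data"),
]

def weekly_usage(events, user_id):
    """Count one user's app interactions per ISO week."""
    weeks = Counter()
    for uid, ts, _event in events:
        if uid == user_id:
            year, week, _ = datetime.fromisoformat(ts).isocalendar()
            weeks[(year, week)] += 1
    return weeks

usage = weekly_usage(events, "p01")
# Weeks with the most and fewest interactions suggest when the user was
# most and least engaged -- starting points for qualitative follow-up.
most_engaged = max(usage, key=usage.get)
least_engaged = min(usage, key=usage.get)
print(dict(usage))                  # {(2020, 2): 2, (2020, 4): 1}
print(most_engaged, least_engaged)  # (2020, 2) (2020, 4)
```

Comparing the flagged weeks against questionnaire dates or focus-group feedback is what would let a researcher begin to ask why a drop in engagement occurred.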

The results of the feasibility study demonstrated that the mHealth-focused resources and measures helped to explain why and how experiences, relationships, and wellbeing changed during the intervention period. For example, focus-group input explained some patient participants’ motivations behind not only collecting but also sharing data. Patient participants seemed to give honest and direct feedback about technical errors they had with the system, as well as frustrations with the healthcare system and their HCPs, which correlated with, e.g., the missing data for the T2D user and the T1D participant’s decreased satisfaction with his patient-provider relationship.

mHealth could provide an approach and the resources to conduct studies that lead to more in-depth and foundational questions about not only what but also how and why patients, and even HCPs, engage with an intervention in the manner that they do. Perhaps by using mHealth approaches and resources, future publications from our research could conclude with concrete suggestions for what information needs to be explored next, as opposed to the very popular phrase “more research is needed to…”.


However, patient-generated data, such as usage logs, are prone to technological and human error. The analysis of these data has not yet been standardized and is therefore difficult to replicate and validate. Case in point: while the structure of the usage-logs for analysis was based on interdisciplinary theories, this process has only been performed a few times; it has not been performed or validated outside of the present research team, i.e., beyond the Tailoring and RENEWING HEALTH projects, and therefore requires more validation and, of course, data.

Reflections on the pragmatism paradigm in the study design and administration

Loudon et al. provide a means to assess the level of pragmatism that a study has built into its design and performance [266]. Each of nine domains of a study’s design and performance is scored from 1 (explanatory, positivist, and evidence-driven) to 5 (pragmatic and knowledge-driven). I performed this analysis on the FullFlow feasibility study; the resulting scores are shown in Figure 20.

Figure 20 PRECIS-2 Score for the FullFlow Feasibility Study

In Table 10 below, I provide a detailed explanation of how I arrived at these scores for each PRECIS-2 domain, based on specific decisions made in the protocol and administration of the FullFlow feasibility study.

Table 10 Explanation of the FullFlow study's scores for the nine PRECIS-2 domains

Eligibility – Score: 3

 Individuals who were 18+ years old, had T1D or T2D, lived within the Troms/Finnmark areas, and were interested in and willing to try the intervention were considered eligible

 Providers were identified through our own research network and colleagues’ contacts within the Troms/Finnmark areas

 Because this excluded those who were not interested in mHealth technologies, on both the patient and provider sides, and those who lived outside of the Troms area, scores of 4 or 5 were not justified

 However, no limitations were placed on HOW the intervention would be used; we only described the various ways in which it could be used, so use-related decisions were meant to be based on participating users’ level of technology interest and ability, which justified a score of 3

Recruitment – Score: 4

 All patients with T1D or T2D were recruited through their providers, who had already agreed to enter the trial

 Recruitment materials were given to those who attended an appointment at the clinic or mailed to those whom the provider perceived as potentially interested

 A score of 4 is justified because, beyond eligibility screening, the recruitment process used existing workflows and protocols, which did not require much more work than a provider would normally perform to contact patients

Setting – Score: 4

 Part of the intervention occurred in the typical setting of a diabetes care consultation between the patient and their usual provider, in the provider’s office

 While the research team encouraged patients to schedule these appointments in order to try the intervention, it was unknown which were regular appointments and which were scheduled because of the study, which is why a score of 5 was not justified

Organization – Score: 3

 Patients were not trained but were provided with access to online resources to assist them in deciding how they would like to use the system

 The system was made available online so that anyone, anywhere, could access the patient’s data (including the patients themselves) as long as the patient provided the access key and initiated data transmission. It could therefore be used during normal clinical consultations as long as the secondary user had internet access and the patient’s consent (the ideal and hoped-for setting of our study)

 Participating providers were trained on the system for 1 hour, which would, in the real world, require outside support and technical assistance

 The need for additional assistance is a rather large barrier to real-world use, which is why the score is only 3 and not 4 or 5

Flexibility (delivery) – Score: 5

 Participating patients were encouraged, via a message sent to their smartphone app and by e-mail, to schedule an appointment with their provider to discuss their data after a 6-month period

 Patients decided when and whether they would actually do this, and follow-up reminders were sent at most twice if they did not (data limitations were expected to result)

 After each consultation, providers were instructed to click the link in the intervention system, which automatically led them to a questionnaire page associated with that consultation, so that they could provide research feedback while the experience was fresh in their minds, reducing the need to remember it later

 Patients were sent the link to a 6-month questionnaire about their consultation via a message to the intervention’s app on their smartphones and by email

 Patients and providers drove the conversations together and questionnaires only asked what they talked about instead of dictating what they should talk about or how they should use the system

 While the research team did encourage use of the system, the freedom given to patients and providers justifies the score of 5 for this criterion

Flexibility (adherence) – Score: 3

 Researchers sent follow-up messages each month to encourage and inform participants of the different functionalities that were available on the app

 However, these messages did not direct the patient in how to perform self-management or what information to record; they only encouraged them to explore the possibilities of the technology that they believed would be useful for them (tailorability)

 Further administrative follow-up messages included instructions for how to participate, i.e., up to two reminders for the start-up registration of their device with the research system (so that we could remotely collect their data), the baseline questionnaire, 6-month consultation scheduling, and the post-consultation questionnaire

 These would not be available in real life, which is why this is not a score of 4 or 5.

Follow-up – Score: 1

 As mentioned under “Flexibility: adherence”, we made significant follow-up efforts after each consultation throughout the duration of the study and provided email support whenever participants needed it.

 The score for “Follow-up” is low because such support would not be available in a real-world setting. We did not expect healthcare providers to be able to provide it given their already overwhelming schedules, and if they wished to include this in their practice, they would almost certainly have to hire additional, technologically trained personnel, especially to help those experiencing technical issues.

Primary outcome – Score: 5

 Outcomes were largely based on the previous studies (described in the Methods and Results sections), provided by both healthcare providers and patients

 To ease reporting by patients, outcomes were gathered via a link, sent in a message from us to their smartphone app/email, to a set of questionnaires. Usage-log data were captured remotely (which did require some effort from each participant, who had to enter a code in order to allow our system access to their logs)

 Patients and providers were also invited to study-end focus group meetings, through which they could express their experiences, frustrations, and overall perceptions of the intervention’s impact.

 Because outcomes were based on end-user input, and the flexibility with which they could be reported was based on users’ decisions, the score of 5 for this criterion is justified

Primary analysis – Score: 5

 We did not exclude or limit the data available if a participant stopped using the device, did not complete questionnaires, or did not schedule with their providers; we simply viewed these as results reflective of real-world situations

 Missing data were seen as valid results of either the participant’s experience in the trial or their use of the intervention (although we did not know which), which is why the score of 5 is justified for this criterion
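To make the overall profile in Table 10 easier to inspect, the short sketch below collects the nine domain scores and computes an informal summary. Note that PRECIS-2 itself prescribes no aggregate score; the mean here is only a rough indicator of how pragmatic the design is overall.

```python
# The nine PRECIS-2 domain scores from Table 10 (1 = very explanatory,
# 5 = very pragmatic). PRECIS-2 defines no official summary score; the
# mean below is only an informal indicator.
precis2_scores = {
    "Eligibility": 3,
    "Recruitment": 4,
    "Setting": 4,
    "Organization": 3,
    "Flexibility (delivery)": 5,
    "Flexibility (adherence)": 3,
    "Follow-up": 1,
    "Primary outcome": 5,
    "Primary analysis": 5,
}

mean_score = sum(precis2_scores.values()) / len(precis2_scores)
least_pragmatic = min(precis2_scores, key=precis2_scores.get)

print(f"mean domain score: {mean_score:.2f}")        # 3.67 -> leans pragmatic
print(f"least pragmatic domain: {least_pragmatic}")  # Follow-up
```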

Limitations of the feasibility study outcomes

The main limitations of the feasibility study as a whole included geographical location, recruitment, and technological challenges. Because this study focused on collaboration between patients and HCPs, we needed HCPs to be engaged, and we therefore needed to recruit prospective patient-participants through them. The Troms and Finnmark regions occupy a large geographical area, with a small potential participant pool at each clinic. As a result, we were only able to recruit eight patients. This made it impossible to comment on statistical changes in, or comparisons of, the usage-logs or questionnaire responses within and between patients during the study. There were also limitations on the amount of quantitative data we could gather through the app, due to connectivity challenges: not all users were online 24/7. As such, our ability to comment on the usage data of app interactions is also limited, because we did not have continuous interaction and measurement data to provide insight into behaviour change during the study. However, all patient-registered measures were recorded and will be included in later analysis.

Due to the risk of participant burn-out, known as respondent or participant fatigue [267], we were limited in the number of standardized questionnaires we could include. While the patient focus group meeting did provide more understanding of their situation, we might have understood the responses more thoroughly had we reflected participants’ questionnaire responses in the discussion questions, in addition to basing questions on established theories.

The patient study-end focus group was, fortunately, held before city closures were enacted due to the COVID-19 crisis [268]. Unfortunately, only one HCP – a DN who did not participate with a patient in the study and who used a system called Diasend [269] instead of the tested data-sharing system – was willing and able to attend a virtual interview. The rise of the COVID-19 pandemic only highlighted an existing research challenge: recruiting HCPs. While we did pay GPs for their time, these HCPs do not have the same funding and flexibility as hospital staff to participate in research. Therefore, the only insight into how the HCPs who met with patient participants felt about the use of the system came from questionnaires.

7.2 Limitations of the Pragmatism Paradigm

In addition to the practical, study-specific limitations that affect the generalizability of this thesis, the findings are also affected by the tangible and intangible factors that surround individuals. In one of our own articles (Appendix P) [260], we noted that while clinicians’ willingness to support their patients’ use of mHealth and to receive patient-gathered data had increased between 2013 and 2017, healthcare authorities had not provided sufficient day-to-day recommendations for how to accomplish this successfully and safely.

Today, the need for healthcare services despite the pandemic’s social distancing measures has forced healthcare authorities such as the CDC to offer official guidance and recommendations about how healthcare providers and facilities should react [270], and even to provide information regarding insurance coverage, something that mHealth technologies have not yet achieved [271]. These changes happened over a matter of months in response to the rapid spread of COVID-19. There are also intangible influences, such as the stress of being isolated and defeated during the pandemic, i.e., COVID or pandemic fatigue, for everyone, and/or the exhaustion and anxiety of being a healthcare provider. These symptoms of mental stress can manifest in many different ways, from frustration with simple tasks and short tempers to depression and loss of occupational productivity [272, 273]. While these may not seem to affect the use or perception of mHealth, think of how frustrated we can get when our phones or internet are not working – an overly simple example, yet one we can all relate to. Now add the stress of needing to connect online, perhaps with your doctor about some worrying symptoms of your diabetes, and not being able to. If we had had the capacity to interview the HCPs and individuals with diabetes who participated during the FullFlow Project, it is within reason to expect that their perceptions of mHealth and/or of the need to share patient-gathered data in person would have changed. The functionalities of the system we developed during the presented project would then also need to change, to support social and “medical distancing” where possible.

Self-selection bias

The generalizability of our findings is also limited by a common bias: self-selection bias. This is exactly what it sounds like; while recruitment information may have been made public via pamphlets, social media, etc., individuals who choose to enter a study inherently skew the data that are recorded, because they are willing and able to participate. As such, they may not represent the larger population.

The negative impact of self-selection bias can be larger or smaller depending on the study design and purpose.

In the FullFlow Project, we aimed to build upon existing, in-use mHealth technologies for diabetes; our study therefore not only expected but relied on self-selection. We aimed to recruit and analyse the specific needs of those who had, or were willing to have, experience with diabetes apps and data-sharing. Our recruitment activities reflected this need: we posted recruitment messages on our research group’s Facebook page (Diabetesdagboka), via our research app, and through the healthcare providers themselves, whose patient populations were very specific to the geographical region. However, the danger of selection bias in this project lies in the interpretation and application of these findings to other circumstances. Factors such as the prevalence of smartphone use and internet coverage, and the accessibility of health information and support prior to study-start, could have
