
2 Background

2.6 mHealth intervention research

In this section, I describe further aspects of mHealth to which research could respond. The focus is on research practices with the potential to evaluate the technologies that have the fewest specific guidelines to follow and, therefore, the greatest potential and need for research innovation, i.e., those considered under the FDA’s enforcement discretion. The overall aim of mHealth research can be to bridge the gaps between the commercial and medical systems and between HCPs and patients. This can be achieved if we appropriately and equally address the unique aspects of mHealth and the needs of stakeholders involved in chronic condition care. I aim to describe how methods and approaches of mHealth evaluation build upon, rather than replace, the traditional health research toolbox (Appendix A).

2.6.1 The existing “Black Box” of intervention research

There are limitations to the depth and breadth of evidence that mHealth research can provide and, subsequently, what changes could be enacted in real-world settings. Appendix A provides an overview of the traditional methods, data, and data analysis approaches available for researchers to choose from when designing an mHealth study. Here I describe the limitations of the traditional pre-post measure study design, which introduces the challenge of the “Black Box” of unknown actions and experiences during an mHealth research intervention (Figure 6).

Figure 6 Illustration of the “Black Box” concept of pre-post research study designs

The concept of the “Black Box” is most familiar in the research fields of sociology, technology, and engineering, referring to a process in which one can only control the input and observe the output, but not what occurs between the two [163]. In the tradition of pre-post health studies and randomized controlled trials (RCTs), research has largely been confined to the “Black Box” approach in its focus on merely finding an input that, ideally, improves the output of an intervention. In 1996, Vickers noted that such studies

only took measurements before and after. They could only answer what, and how much, a measurement changed after the intervention study concluded.
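The limits of this design can be illustrated with a minimal sketch (hypothetical data, Python standard library only): a pre-post comparison yields a single change score per participant and says nothing about the process in between.

```python
# Sketch of a pre-post ("Black Box") analysis with hypothetical
# measurements; uses only the Python standard library.
import statistics

# Hypothetical clinical measurements, one pair per participant.
before = [8.1, 7.9, 8.4, 8.0, 7.7, 8.3]
after = [7.6, 7.8, 7.9, 7.5, 7.4, 7.7]

# The design can only answer "what" changed and "how much":
diffs = [b - a for b, a in zip(before, after)]
mean_change = statistics.mean(diffs)
print(f"mean pre-post change: {mean_change:.2f}")  # prints 0.42
# Everything between the two measurements -- how and why the change
# occurred -- remains inside the "Black Box".
```

Nothing in this computation touches participants' motivations, use patterns, or experiences during the intervention; those never enter the data.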

While this is an adequate level of knowledge for the efficacy of new medications or, more recently, medical device functionalities, whose impacts are supported by established biological truths, mHealth for patients’ self-management brings to light more human-driven factors that determine the efficacy and success of these new health interventions.

“To see the world as a self-contained mechanical realm compounded of sets of point-to-point linear cause-effect sequences is out of date in theoretical physics (Capra, 1982), and it is odd that it should still have any claim in the scientific basis of therapeutics… What we need is a research method which devises models of and looks at patterns of interaction among variables, and thinks in terms of the interactive effects of mental expectation and physical treatment” – John Heron [165]

However, mHealth not only relies upon more factors than standard prescription or adherence but also produces more unpredictable and diverse outcomes, for which there are few, and unspecific, standard methods of measurement and interpretation. A criticism by Heron succinctly describes why intervention research needs to adapt and to look at more factors than a pre-post design can capture: the controlled and stringent structures of medical trials tend to produce a “limited and misleading view of the multidimensional reality within which practitioners and their patients live, work, and participate in therapy” [165].

Pre-post, longitudinal, and randomized controlled trial designs are gold standards for producing reliable evidence thanks to their strict and unbiased designs. But is “what” has changed the only important factor for patients, clinicians, and researchers? Because mHealth technologies require evidence to be produced quickly and processes to be flexible enough to adapt to the ever-changing nature of these devices and to the different ways patients use them, RCTs may no longer be the most feasible or appropriate option [166].

Some RCT characteristics do not always fit the needs of mHealth: i) control groups, as patients who enter an mHealth trial can be disappointed if they do not receive the intervention and may make both healthy and unhealthy choices as a result; ii) heterogeneity, because the same mHealth app, for example, may be used in so many different ways that those assigned to the intervention group are too diverse in their use for accurate analysis; and iii) the study designs, exclusive recruitment criteria, and intervention designs, which are often so specific and controlled that they do not mirror real-world situations for end-users [167]. We might need to combine qualitative and quantitative methods of evaluation, without abandoning traditional standards or “reinventing the wheel”, and identify more continuous, iterative, and patient-involved measures to truly understand not only “what” mHealth technologies impact but also “how” and “why”.

“How” and “why” changes occur is more relevant for the HCPs, patients, and researchers who are designing interventions. These two questions refer to a patient’s motivations, capacities, and the internal and external facilitators of their health self-management. While these factors do affect patients’ use of any health technology, there are no prescribed standards or ideal ways to use them [168]. The patient’s choice is the determining factor in how an mHealth technology will be used for health self-management and how effective it will be for that individual patient. These “how” and “why” factors have always been present; we have simply lacked the ability to access them in interventions – in the “Black Box” of pre-post study designs.

2.6.2 Growing pains: adjusting research practice to mHealth

While public organizations and entities attempted to provide consumers with direct answers [105, 164], health researchers have worked, and continue to work, in the background. These consumer-directed solutions are in effect an initial defence, or Band-Aid, for evaluation challenges. Researchers, on the other hand, are taking the longer approach of developing evidence-based evaluation methods, working to bridge the gap between the consumer-facing world and the medical one. In 2011, the American National Institutes of Health (NIH) organized the mHealth Evidence Workshop, calling for ways in which research can adapt and develop new approaches to evaluate mHealth [169].

mHealth evaluation in health research is a tall order. Challenges now include questions about i) how we, as researchers, can produce a broader set of information for a more diverse set of stakeholders, and ii) how we can produce this evidence efficiently enough to keep pace with the rapid development of mHealth technologies in the market. Many initial attempts chose to focus on one health area or one aspect of evaluation to avoid drowning in the diversity of uses and designs of mHealth technologies. Literature reviews published within the previous 10 years have focused on the following: barriers and limitations of two-way mHealth communication for those with diabetes, calling for more patient-driven research [170]; functionalities and features, calling for more guidance and structure for evaluating reliability [171]; and evidence of mHealth interventions’ impacts on diabetes health, with a call for greater quality in the production of evidence [172]. Approaches preceding these focused on balancing the generalizability of mHealth evaluation with the specific nature of mHealth technologies.

The Mobile App Rating Scale (MARS), published in 2015, had the aim of providing evidence of an app’s functionality, visual appeal, information quality, user engagement, and subjective quality [173].

Use of this measure has shown promise because of its quick and easy procedure [174]. A strength was that it could be used during app development to ensure quality before an app was released to the public. There was also a user version for individuals (uMARS) [175]. However, there was some question regarding the diversity of apps to which it could be applied, as it has only been used to evaluate specific types of apps [176, 177]. Some studies had to supplement the scale with additional measures of mHealth impacts, including behavior change. While thorough and comprehensive in its approach, this initiative still lacked two key needs of mHealth evaluation. The first is speed: by the time these evaluations are completed, the technology is often outdated or has been changed significantly based on consumer demands. The second is patient involvement and measures of patients’ use, perspectives, and needs. As described above, the questions of “how” and “why” patients use these technologies, in particular, are just as important as, and even determine, clinical health change, yet they are often not addressed.

With the aim of providing even more accurately tailored evaluation approaches, alternative study designs have been proposed to meet the flexibility needs of mHealth evaluation. As opposed to providing one single, static intervention and study model, micro-randomized trials (micro-RCTs) divide the study into a series of stages in which one major impact of the intervention is addressed. The outcomes of each stage provide information about mHealth impacts that aids future development of the mHealth intervention [180].
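As an illustration only (all names, parameters, and probabilities here are hypothetical), the core mechanic of a micro-randomized design – re-randomizing each participant at every decision point rather than once at enrolment – can be sketched as:

```python
# Minimal sketch of micro-randomized assignment; hypothetical setup,
# Python standard library only.
import random

def run_micro_rct(n_participants=5, n_decision_points=4, p_treat=0.5, seed=42):
    """Randomize each participant anew at every decision point,
    e.g. each moment an mHealth app could send a prompt."""
    rng = random.Random(seed)
    log = []  # (participant_id, decision_point, treated)
    for t in range(n_decision_points):
        for pid in range(n_participants):
            treated = rng.random() < p_treat  # fresh coin flip each time
            log.append((pid, t, treated))
    return log

assignments = run_micro_rct()
print(len(assignments))  # 5 participants x 4 decision points = 20 records
# Unlike a classic RCT, the same participant can be in the "treatment"
# condition at one decision point and the "control" condition at the next.
```

Because every decision point yields its own randomized comparison, analysis can address when and under what conditions an intervention component works, not only whether it worked overall.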

These adaptive study and intervention designs provide an environment that more accurately reflects real-world situations – with constant changes in patient needs and technology developments. However, these efforts have resulted in only partial answers regarding effectiveness and relevant information. Given the traditional measures, e.g., static, objective measures of physical and mental health, researchers struggle to measure continuous behavior change and to understand how an app is used and why.

We need to take a step back and, given what we know about how most mHealth is developed, consumed, governed and evaluated, ask, “Why are we not getting the answers that we aim for?”

Perhaps what we aim for does not fit the mHealth dynamic. Therefore, if what we have been looking at are the effects, perhaps we should look more toward the root of our understanding. Perhaps we should be asking, “What should we aim to understand? How should we go about it? What resources should be used to gain knowledge about this field?” These questions point to the purpose of scientific