The diagnosis and treatment of mental disorders is promoted by the World Health Organization (2013) as an activity that requires a highly trained workforce regulated by governments with a prioritized mental health agenda and action plan. The use of mobile mental health applications (apps) is a contentious issue in this context because a large proportion of such apps has been developed by individuals who are not mental health experts (Alyami, Giri, Alyami, & Sundram, 2017; Marshall, Dunstan, & Bartik, in press; Shen et al., 2015). Following an increase in the development of mental health apps (more than 10,000 are available worldwide, mostly outside of any regulation; Torous et al., 2018), there is broad recognition by multiple stakeholders such as governments, research institutions, clinicians, and consumers that regulatory intervention is needed. This is particularly true for apps that offer a comprehensive therapeutic treatment and/or diagnosis for a mental disorder, as opposed to apps that offer single or novelty interventions that may or may not help users manage their mental health but that nonetheless fall within a broad definition of what constitutes a “mental health” app. Previous reviews by Firth, Torous, Nicholas, Carney, Pratap, et al. (2017); Firth, Torous, Nicholas, Carney, Rosenbaum, et al. (2017); and Lui, Marcus, and Barry (2017) found only limited examples of mental health apps with research evidence; it is therefore important that this research gap be addressed using suitable methodologies.
This report builds on previously published material in Professional Psychology: Research and Practice, specifically articles by Clough and Casey (2015a, 2015b), Jones and Moffitt (2016), and Lui et al. (2017) and, more recently, Armstrong, Ciulla, Edwards-Stewart, Hoyt, and Bush (2018); Broussard and Teng (2019); and Miller et al. (2019). For a summary, see Table 1.
Together, these articles represent different strands of an ongoing scientific conversation about how this emerging area can best help consumers obtain the most effective and efficacious treatment from an app. Following a brief overview of the benefits and shortcomings of apps, this article critically reviews the issues of safety, regulation, efficacy, and effectiveness of mental health apps. It then offers a novel proposal for increasing the evidence base for effectiveness and certifying mental health apps with the involvement of practicing clinicians and the major app stores.
Benefits and Shortcomings of Mental Health Apps
Mental health apps offer potential value in a number of different ways. They are compact and easily transportable, provide instant access to assistance, and can offer anonymity for people who do not wish to visit a mental health professional in person. Apps may also allow interactive homework activities and be set up to send information digitally to a therapist. There are apps that set reminders (e.g., take medication, do meditation) and allow individuals to participate in self-help tasks or psychoeducation while on a waiting list for face-to-face services. People in rural areas can potentially use apps to access mental health services previously unavailable to them, and people from lower socioeconomic backgrounds may be able to access affordable treatment via an app when that treatment would be cost-prohibitive in private face-to-face settings. Access may also increase for other groups, such as teenagers, through more effective early intervention (Wang, Varma, & Prosperi, 2018), and people with less severe mental illness could use an effective app that places less demand on primary care services.
The shortcomings of mental health apps include suboptimal use of the available technological features of smartphones (Frank, Pong, Asher, & Soares, 2018; Hendrikoff et al., 2019; Shah, Kraemer, Won, Black, & Hasenbein, 2018) but also a lack of technology literacy on the part of some users, particularly the elderly, who may struggle to operate apps and smartphones generally (Mohadis & Ali, 2014). Furthermore, the development of mental health apps has often occurred without expert input, government funding, or the involvement of academic institutions (Alyami et al., 2017; Marshall et al., in press; Shen et al., 2015), without adequate training for consumers and clinicians (Armstrong et al., 2018), and without ethical guidelines (Broussard & Teng, 2019). There are also concerns about privacy and confidentiality (Hendrikoff et al., 2019; Neary & Schueller, 2018; Stawarz, Preist, Tallon, Wiles, & Coyle, 2018; Terry & Gunter, 2018), especially when individuals’ personal information is used without their permission or knowledge, which may pose a threat to anonymity. Approximately 70% of health apps do not have a privacy policy that is available to users from within the app (Sunyaev, Dehling, Taylor, & Mandl, 2015).
The management of suicide risk is an important and sensitive issue. Data sharing is of particular concern in such situations because someone experiencing suicidal ideation may receive inappropriate, unsolicited text messages or phone calls. If individuals are suicidal and are targeted with inappropriate online advertising because app developers sold their personal information to marketing companies, they may be in a heightened state of vulnerability that makes them more susceptible to content they perceive as negative. Some app users with suicidal ideation may find it difficult to talk about their feelings with a mental health clinician face-to-face. Using an app may be the last chance for some individuals to receive helpful information before they act on suicidal thoughts. This is one reason why Bakker, Kazantzis, Rickwood, and Rickard (2016), in their review and recommendations for mental health app developers, consider it important for mental health apps to include advice about suicidality, personal safety, and emergency contact information.
Regulation of Mental Health Apps: Assessing the Risk of Harm
The availability of mental health apps in the various app stores has, up until now, occurred with little government regulation. This remains a challenge for authorities and app developers, given the vast number of health apps that are available (Jones & Moffitt, 2016; Wang et al., 2018). Complicating the issue is that many apps may fall under a broad definition of a “mental health” app but may actually be best described as “self-help” or as offering a single or novelty intervention. A valid question arises at this point about what type of app needs to be regulated and what degree of harm is currently being done by mental health apps that are not regulated. There is currently little legal evidence to suggest that apps falling under the banner of mental health are doing harm, but the growth in use of these tools suggests that it is important to monitor this space (Armontrout, Torous, Cohen, McNiel, & Binder, 2018). From a clinician’s perspective, there are specific situations in which regulation of apps would be more salient. For example, when recommending that clients use an app between sessions or after treatment has finished, the clinician has to be confident that the app is not going to cause harm and, ideally, that it is going to benefit the client. Clinicians would have more confidence about recommending an app if they knew it had “certified” evidence for its effectiveness and was considered “safe.” Furthermore, there may be legal and ethical ramifications in the event that a client comes to harm as a result of a clinician recommending an app that proved to offer non-evidence-based information (Armontrout, Torous, Fisher, Drogin, & Gutheil, 2016).
In the United States, the Food and Drug Administration (FDA) intends to regulate “mobile apps that are medical devices and whose functionality could pose a risk to a patient’s safety” (as quoted in Armontrout et al., 2018, p. 207). That is, the FDA will regulate on the basis of individual safety rather than efficacy (https://www.fda.gov/medical-devices/digital-health/mobile-medical-applications). Harm and its opposite construct, “safety,” have been defined in many different ways and within many different contexts, and it is difficult to find a definition that is both succinct and encompassing (Emanuel et al., 2008). For the purposes of this article, harm refers to both physical and psychological injury that may occur to an individual (https://www.merriam-webster.com/dictionary/harm) as a result of using an app that makes claims about improving mental health. However, in regulating mental health apps, authorities face questions about what constitutes harm when deciding whether or not to ban a mental health app and when determining its status as a medical device. If an app is not considered a medical device, then the issue of whether or not it could cause harm is not considered by the FDA. Therefore, the most important definitional consideration initially is the definition of a mental health app as a medical device.
New efforts are also being made to regulate apps and other e-mental health tools by government bodies in other parts of the world, such as Australia (https://www.safetyandquality.gov.au/our-work/e-health-safety/national-safety-and-quality-standards-digital-mental-health-services), New Zealand (https://www.health.govt.nz/our-work/ehealth), Canada (https://www.canada.ca/en/health-canada/services/drugs-health-products/medical-devices/activities/announcements/notice-digital-health-technologies.html), and the United Kingdom (https://www.gov.uk/government/publications/health-app-assessment-criteria/criteria-for-health-app-assessment). Thus far, all are focused on the need to eradicate potentially unsafe health apps that may place an individual at risk of harm, and all are dealing with the intricacies of what constitutes harm and what factors define a mental health app as a “medical device.” Equally important, however, is that apps have demonstrated efficacy.
Measuring the Efficacy and Effectiveness of a Mental Health App: Is There an App for That?
In clinical psychology, efficacy studies occur under scientific, controlled conditions in which participants are screened for suitability to improve the homogeneity of the experimental group, whereas effectiveness studies are designed to measure interventions in “real-world” clinical settings (Kazdin, 2017). Effectiveness studies frequently take the form of an independent repetition of a previously completed experiment (Buckley, Speece, & McLaughlin, 2014). Given the growing popularity of mental health apps among consumers, their evaluation in both domains is important but has lagged behind app development. If the effectiveness of mental health apps can be proven, eventually by independent replication, this may have an impact on theory by providing widely accepted evidence that specific theoretical frameworks are transferable to mobile devices such as smartphones and tablets, in much the same way it is now widely accepted that such frameworks are transferable to other digital formats, including videoconference psychotherapy, virtual reality for anxiety, and web-based desktop and laptop computer programs (Mohr, Burns, Schueller, Clarke, & Klinkman, 2013).
Currently, there is no standardized method for assessing the efficacy or effectiveness of mental health apps. This issue has been raised previously by researchers (Clough & Casey, 2015b; Lui et al., 2017), but there is little evidence in peer-reviewed journals that much has changed. For example, a recent report considered a total of 17 assessment frameworks, best practice principles, and quality assurance guidelines relating to the development of health apps in different countries and indicated that authorities worldwide are grappling with this issue (Nielsen & Rimpilainen, 2018). In research settings, questionnaires such as the Mobile Application Rating Scale (MARS; Stoyanov et al., 2015) have been developed as a means to rate the various features of apps, but the MARS does not focus on efficacy. It asks 29 questions about various aspects of an app, and only 1 of these asks the rater to consider whether the app has any published evidence for its efficacy or effectiveness. Therefore, it does not provide a measure of efficacy or effectiveness.
The most widely accessible app evaluations are consumer ratings and reviews in the app stores. This information is what the majority of consumers use to help them choose a mental health app (Huang & Bashir, 2017). However, these ratings and reviews are poorly monitored by the app stores, and there have been reports of fake reviews (Xie & Zhu, 2015). Ratings are therefore likely to have questionable validity and reliability as an assessment of effectiveness and efficacy.
In response to this, efforts have been made by mental health experts to address the need for guidance in assessing apps. A collaboration between Canadian health research facilities, app developers, clinicians, and consumers resulted in a set of guiding principles for a mental health app assessment framework (Zelmer et al., 2018). This framework uniquely identifies gender responsiveness, cultural appropriateness, and user inclusion at all levels as crucial elements. Although efficacy was considered the number one criterion, there was no advice about how to study and assess it. The American Psychiatric Association also recently introduced an app rating framework designed to guide clinicians in their recommendations to patients and clients (Torous et al., 2018). This involves clinicians considering each app in the areas of potential risk of harm, data security, evidence of effectiveness, usability, and level of clinician interaction. Again, though, there is no guidance about a suitable methodology for examining efficacy or effectiveness.
There are now many reputable websites worldwide that provide advice about mental health apps and often have “expert” and “consumer” reviews, with information on published evidence if it exists. These include PsyberGuide (https://psyberguide.org/), Head to Health (https://headtohealth.gov.au/), reachout.com (https://au.reachout.com/tools-and-apps), beacon (https://beacon.anu.edu.au/), and Health Navigator (https://www.healthnavigator.org.nz/apps/m/mental-health-and-wellbeing-apps/). While these websites provide valuable information not contained in app store reviews, very few rate an app’s efficacy, and none recommend specific methods of assessing an app’s efficacy.
Difficulties With Ongoing Research
As mentioned above, many apps are categorized as “mental health” apps but may only offer single or novelty interventions that do not qualify as comprehensive therapeutic treatments and/or diagnostic instruments. For example, Breathe2Relax is an app with a simple function: to help users control their breathing by offering visual and auditory cues. It is possible that this app would be categorized as a mental health app, but it does not necessarily require research to determine a level of efficacy or effectiveness, because it does not claim to offer a diagnosis or comprehensive therapeutic treatment for any mental disorder. These are important distinctions because of the large number of apps in a similar category that offer simple “interventions” for a narrow purpose. We would argue that research needs to focus on apps that claim to offer a comprehensive therapeutic treatment and/or diagnosis for a mental disorder.
To date, while the limited research on mental health apps has produced some positive results relating to efficacy, a high degree of heterogeneity exists across studies. Outcomes have been measured with varying instruments (Lai & Jury, 2018), and research designs and methodologies have differed: some studies have had placebo (Flett, Hayne, Riordan, Thompson, & Conner, 2019) or waitlist (Lee & Jung, 2018) control groups, while others have had no control group (Paul & Fleming, 2019). Intervention periods have varied considerably, from as little as 10 days (Howells, Ivtzan, & Eiroa-Orosa, 2016) to as many as 12 weeks (Boisseau, Schwartzman, Lawton, & Mancebo, 2017). There has been a lack of prescribed “dose” from researchers and app developers: instructions on how to use an app have ranged from minimal training (Flett et al., 2019) to detailed, even daily, instructions (Roy et al., 2017). Often, the instruction is simply “use it however you like” when you feel depressed/anxious/stressed (Kuhn et al., 2017, p. 269). It has previously been shown that individuals who receive basic coaching in how to use a mental health app tend to remain more adherent to it (Mohr, Cuijpers, & Lehman, 2011). For this reason, user training would seem necessary for an app to be most effective. Similarly, if clinicians received training in how to incorporate apps into psychotherapy, more would do so (Armstrong et al., 2018; Miller et al., 2019). Many published studies lack follow-up data to examine whether changes have been maintained over time, an important factor when considering the long-term effects of using mental health apps, especially in comparison to treatment-as-usual therapies. Finally, although a number of reviews have been published (e.g., Donker et al., 2013; Firth, Torous, Nicholas, Carney, Pratap, et al., 2017; Firth, Torous, Nicholas, Carney, Rosenbaum, et al., 2017; Mehrotra & Tripathi, 2018; Menon, Rajan, & Sarkar, 2017), with combined findings that equate to an overall small to moderate effect size for common mental health conditions such as anxiety and depression (Lai & Jury, 2018), the research remains sparse overall.
Two systematic reviews on apps for treating anxiety and depression (Firth, Torous, Nicholas, Carney, Pratap, et al., 2017; Firth, Torous, Nicholas, Carney, Rosenbaum, et al., 2017) located a total of 20 apps with published research demonstrating significant efficacy, but none of these studies represented independent research. That is, all of the located research was carried out by individuals who were either involved in the development of the app being studied, stood to gain financially from the app, or otherwise had another association with the app. While much of the limited previous research has been to a satisfactory scientific standard, if claims about favorable treatment outcomes are to be accepted, the level of independent research must increase.
A particular challenge for mental health app research is that apps are regularly updated, and it cannot be assumed that research results on one version of an app will hold for updated versions (Torous, Levin, Ahern, & Oser, 2017; Wang et al., 2018). Therefore, there is a need to develop methods for performing similar studies across different versions of an app. One solution to this problem has been proposed by Mohr, Cheung, Schueller, Hendricks Brown, and Duan (2013) in the method known as Continuous Evaluation of Evolving Behavioral Intervention Technologies (CEEBIT), which allows updated versions of an app to be evaluated separately from older versions. CEEBIT also allows newer apps to be tested against older apps by comparing information related to positive behavior change in response to using the app. CEEBIT is based on a process of continued testing of similar apps that does not necessarily have to end as newer apps or updated versions become available. The focus of this system is the elimination of apps on the basis of inferiority, rather than the identification of apps that have reached a certain level of effectiveness, as sketched below.
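To make the elimination logic concrete, the following is a minimal sketch of one evaluation round in the spirit of CEEBIT, assuming each app version has accumulated per-user improvement scores. The app labels, score values, and choice of significance test here are illustrative assumptions, not the statistical machinery specified in the published method.

```python
from itertools import combinations
from statistics import mean

from scipy import stats


def ceebit_round(outcomes, alpha=0.05):
    """One evaluation round in the spirit of CEEBIT (Mohr et al., 2013):
    eliminate any app version whose outcomes are statistically inferior
    to a better-performing competitor. `outcomes` maps an app-version
    label to per-user improvement scores (higher = better). Illustrative
    sketch only; the published method defines its own statistics."""
    eliminated = set()
    for a, b in combinations(outcomes, 2):
        _, p = stats.ttest_ind(outcomes[a], outcomes[b], equal_var=False)
        if p < alpha:  # significant difference: mark the worse performer
            eliminated.add(a if mean(outcomes[a]) < mean(outcomes[b]) else b)
    return {app: scores for app, scores in outcomes.items()
            if app not in eliminated}


# Hypothetical data: an updated version clearly outperforms its predecessor.
survivors = ceebit_round({
    "moodapp_v1.0": [2.1, 1.8, 2.5, 1.2, 2.0, 1.5, 1.9, 2.2],
    "moodapp_v2.1": [4.0, 3.6, 4.4, 3.1, 3.9, 4.2, 3.5, 3.8],
})
print(sorted(survivors))  # ['moodapp_v2.1']
```

Because a version is dropped only when it is demonstrably inferior, a new release with little accumulated data survives until enough evidence exists to judge it, which reflects CEEBIT’s emphasis on elimination rather than certification of effectiveness.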
It is difficult to see any strong pattern emerging in the literature for a research methodology on the efficacy of mental health apps. More randomized controlled trials (RCTs) have been widely called for, but a central obstacle is the time it takes to conduct such trials, often years. The popularization of mental health apps has occurred relatively recently, so it is understandable that few RCTs have been completed. However, the extent to which apps, including mental health apps, have been embraced has led to the realization that ongoing research is needed to confirm their potential to assist in managing mental health (Lui et al., 2017). In the world of app development, things happen quickly, because improvements in technology mean that new features become available that supersede the abilities of existing apps. Therefore, future research needs alternative methodologies to meet this fluid, technology-driven market’s demand for evidence of effectiveness and efficacy in a timely manner (Clough & Casey, 2015b).
Future Research: A New Approach Toward Single-Case Designs and Clinician Researchers
An alternative to RCTs for testing the efficacy of mental health apps is the single-case design (Clough & Casey, 2015a; Mehrotra & Tripathi, 2018). Barlow, Nock, and Hersen (2009) noted that “a series of single-case designs in similar clients in which the original experiment is directly replicated three or four times can produce robust results that may equal or surpass those produced by the experimental group/no-treatment control group design” (p. 53). Other researchers in this area have arrived at similar conclusions (Horner et al., 2005; Kazdin, 2017) and identified single-case designs as being complementary to larger group designs rather than in opposition to them (Buckley et al., 2014; Sheridan, 2014).
Single-case designs have advantages over RCTs. More information about individual participants can be captured in a single-case design than in an RCT, where only group means are reported; it is therefore possible to make more informed hypotheses about how peripheral issues may have influenced results (Barlow et al., 2009). Also, data can be collected at more time intervals in single-case designs, more precisely identifying when outcomes change with respect to changes in treatment (Machalicek & Horner, 2018). These designs also offer the opportunity for real-time monitoring, and therefore for tailoring to the responses of individuals (Bentley, Kleiman, Elliott, Huffman, & Nock, 2019). Finally, data can be collated and analyzed faster in single-case designs than in larger RCTs (Kazdin, 2017), a crucial point in the world of mobile apps, where development and listing on app stores happen at rapid speed.
The methodology of single-case designs allows practice-based research whereby “real-world” data can be gathered by practicing clinicians. One new way to involve clinicians would be to establish an online register, similar to the way RCTs and systematic reviews/meta-analyses are currently registered, that would allow clinicians to add information about an app’s effectiveness based on a client’s response to that app using a standardized methodology. In addition to clinicians, this centralized registry could be accessed by consumers, researchers, students, and ethics review committees in their efforts to find the most appropriate mental health apps for their purpose. Such a registry would also provide continually updated and evolving practice-based evidence for mental health apps. For this scheme to be most successful, clinicians would likely have to follow a standardized protocol; a sketch of what one registry entry might contain follows.
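As a purely hypothetical illustration (no such registry exists, and every field name here is an assumption), a minimal sketch of the information one registry entry might capture:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional


@dataclass
class RegistryEntry:
    """Hypothetical record a clinician researcher might submit to the
    proposed centralized registry. All field names are illustrative."""
    app_name: str
    app_version: str                 # findings are version-specific
    design: str                      # e.g., "multiple baseline across subjects"
    n_participants: int
    presenting_problem: str          # e.g., "generalized anxiety"
    prescribed_dose: str             # e.g., "10 min/day, 5 days/week, 10 weeks"
    outcome_measure: str             # e.g., "daily SUDS rating (0-10)"
    clinically_significant: bool     # per Jacobson & Truax (1991) criteria
    followup_maintained: Optional[bool] = None  # unknown until 3-month follow-up
    notes: str = ""
    date_submitted: date = field(default_factory=date.today)
```

Standardized fields like these would let consumers, researchers, and ethics committees filter and aggregate entries, and recording the app version directly addresses the problem, noted above, that results for one version of an app cannot be assumed to hold for the next.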
As a practical exercise for clinicians (and researchers), the following is a simplified demonstration of how such a protocol might look and how research on a mental health app might proceed using the methodology of a multiple baseline across-subjects design (Barlow et al., 2009) in a practice setting. Five individuals are asked to use a specific app. A baseline period is established to confirm a series of stable readings across time (this allows each individual to act as his or her own control). The app is then introduced to the clients, with instructions on how to use it, for example, “Use the app for at least ten minutes per day, five days per week, for ten weeks” (based on the idea of an equivalent 50-min face-to-face session with a therapist once per week). There would then be a postintervention period equivalent in length to the initial baseline period, with a follow-up period of 3 months. Individuals could provide regular data on their emotional state via a Subjective Units of Distress (SUDS) scale that is automatically texted to their smartphone each day, asking them to reply with a number out of 10 that corresponds to their emotional state at that time, a method known as ecological momentary assessment (Shiffman, Stone, & Hufford, 2008). Several online survey and statistical platforms can be programmed so that text messages and questionnaires are sent automatically and returning data are automatically assigned to individual participants. In this way, a clinician is able to track an individual’s progress and eventually make a judgment on the effectiveness of the app based on its demonstrated clinical significance (Jacobson, Roberts, Berns, & McGlinchey, 1999; Jacobson & Truax, 1991), according to accepted interpretation standards in single-case design (Barlow et al., 2009). More detailed self-report inventories could also be sent digitally at specific points throughout the various phases for a more in-depth analysis of specific symptomatology. See Figure 1 for a visual representation of how such a real-world experiment might take place. The clinician researcher could then add comments/results to the centralized registry as described above.
Figure 1. An overview of a multiple baseline across-subjects design. SUDS = Subjective Units of Distress.
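To show how the resulting data might be judged, here is a minimal sketch of the clinical-significance step, under the simplifying assumption that phase means of daily SUDS ratings can be treated as pre/post scores for a Jacobson and Truax (1991) reliable change index. The SUDS values, normative standard deviation, and reliability figure are all placeholders.

```python
import math


def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson & Truax (1991) reliable change index: change is considered
    reliable (beyond measurement error) when |RCI| > 1.96."""
    se = sd_pre * math.sqrt(1 - reliability)  # standard error of measurement
    s_diff = math.sqrt(2 * se ** 2)           # SE of the difference score
    return (post - pre) / s_diff


def evaluate_participant(baseline, postintervention, sd_norm, reliability):
    """Compare mean daily SUDS in the baseline phase with the equal-length
    postintervention phase for one leg of the multiple-baseline design.
    sd_norm and reliability are placeholder normative values."""
    pre = sum(baseline) / len(baseline)
    post = sum(postintervention) / len(postintervention)
    rci = reliable_change_index(pre, post, sd_norm, reliability)
    return {"pre_mean": round(pre, 2), "post_mean": round(post, 2),
            "rci": round(rci, 2),
            "reliable_improvement": rci < -1.96}  # lower SUDS = less distress


# One hypothetical participant: 14 daily SUDS ratings per phase (0-10 scale).
print(evaluate_participant(
    baseline=[7, 8, 7, 7, 6, 8, 7, 7, 8, 7, 6, 7, 8, 7],
    postintervention=[6, 5, 5, 4, 5, 4, 4, 3, 4, 3, 3, 4, 3, 3],
    sd_norm=1.5, reliability=0.85))
# -> {'pre_mean': 7.14, 'post_mean': 4.0, 'rci': -3.83, 'reliable_improvement': True}
```

In practice, a reliable change index is usually computed on a standardized instrument with published norms rather than on raw SUDS ratings, so this arithmetic illustrates the logic rather than a prescribed analysis; visual inspection of the staggered baselines (Figure 1) would remain the primary interpretive standard in single-case design.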
While the advantages of practice-based single-case research for the evaluation of mental health apps lie in its potential to increase the real-world evidence base for the effectiveness of these apps, there are also some potential disadvantages to this approach (Barlow et al., 2009). First, clinicians in practice settings are traditionally time-poor (Hatfield & Ogles, 2004) and will have only limited time to add results and comments to any centralized registry. They will likewise have only limited time to monitor the response data of clients who participate in such research, so it is reasonable to assume that most participating clinicians would be able to assess only a small number of individuals simultaneously at best. However, this research approach has the potential to be highly automated, allowing clinician researchers to take part without spending large amounts of time on the administrative duties associated with the research. Second, results from a single-case study do not have the broad generalizability across large populations that larger RCTs may offer. Third, it can be difficult to find homogeneous groups of individuals when attempting to replicate a single-case study, particularly if an individual has a complex presentation, and there will always be debate about whether one individual’s presentation is adequately similar to another’s. Finally, single-case research is more prone to variability affecting results. For example, if an individual who is the subject of a single-case design experiences a significant negative event (such as the death of a family member) during the intervention phase, it will potentially influence the results that inform the judgment of the intervention’s effectiveness. (It is also acknowledged that an intervention can be altered in response to such events in single-case research, which is an advantage over RCTs.) By contrast, if that individual were in a group research project involving hundreds of participants, the negative event would not necessarily have a substantial impact on the results when the study examines group mean differences.
The application of this type of research design to the evaluation of mental health apps could be further enhanced with the cooperation of the major app stores.
Future Research: A New Certification Framework
The current representation of mental health apps in the various app stores is inadequate on a number of levels, particularly with regard to risk of harm and efficacy. While governments have started to act on the risk of harm, clinician-led research could provide one solution for increasing the evidence for the effectiveness of mental health apps. If clinicians report back to a centralized registry, that registry will have to be maintained and administered by some authority. In recent years, similar centralized repositories have been successfully run by academic institutions, financed either by government grants or by the institutions themselves, which in return receive the kudos and respect for doing so. It is difficult to see where else the money to finance this type of scheme might come from. One goal of such testing is to create libraries of mental health apps that have independent evidence for their effectiveness, a pursuit called for in the literature (e.g., Mehrotra & Tripathi, 2018).
The two largest app marketplaces, the Apple App Store and Google Play, would need to be willing to recategorize mental health apps in ways that allow consumers to clearly distinguish between apps that have acceptable scientific research and those that do not. Apple and Google may be more willing to do this if there is a financial benefit. If a mental health app became certified by an independent clinician researcher, that certification would give the app a marketing credential over and above similar apps without it. The cost of the testing process would be the responsibility of the app developer and would have to be marketed to developers as potentially offering greater financial returns for having undergone certification.
By reclassifying mental health apps as certified in this way, Apple and Google would also be playing a role in educating the public, because consumers may not be aware of the importance of establishing evidence to back up claims of effectiveness. With the involvement of the app stores, consumers could conceivably learn how to critically assess the evidence base of an app and its source (a mental health expert, academic institution, government authority, or a nonexpert individual with questionable motives), become more familiar with the contents of privacy policies, and appreciate the importance of confirming that a privacy policy exists. Under this model, consumers could also become more aware of the limits of mental health apps and be directed toward other help if necessary (e.g., to a general practitioner or mental health professional). Apple and Google could also display contact details for emergency suicide support services on pages listing mental health apps.
Conclusion
Currently, searching the app stores for a mental health app that is efficacious and safe is problematic. Few mental health apps have research evidence to begin with, and those that do are difficult to differentiate from the many more that do not. Therefore, it may be time for the two largest app marketplaces, the Apple App Store and Google Play, to categorize mental health apps in ways that allow consumers and clinicians to clearly distinguish those that have acceptable scientific research from those that do not.
The literature is clear: There is a need for users of mental health apps to be able to identify those that are safe and have proven effectiveness (Firth et al., 2018). The implication for practicing psychologists is that they need to be certain that the apps they incorporate into therapy, by way of recommendations to clients and patients, will do no harm and that there is evidence justifying their use in the first place. To this end, research into the effectiveness of mental health apps needs to become a standard and visible form of credentialing. This, however, involves a financial consideration. Evaluation costs money, and it is not known whether consumers would pay a nominal fee to download an app that had official certification of its effectiveness over an app that was free but uncertified. What is more certain is that research support would increase the legitimacy of apps as tools to effectively treat mental health issues.
Corresponding Author
Jamie M. Marshall received his Master of Clinical Psychology from the University of New England, where he is also currently a PhD candidate. He operates a rural private practice and has special interests in autism, anxiety, and integrating technology with traditional forms of therapy, particularly cognitive-behavioral therapy and positive psychology. He also holds a position as a Clinical Reference Lead with the Australian Digital Health Agency.
Debra A. Dunstan received her PhD from Charles Sturt University. She currently holds the position of Professor and Head, School of Psychology, Faculty of Medicine and Health, University of New England. She has previous clinical experience in private practice and government settings and has a special research interest in modes and models of service delivery, as well as work disability prevention and management.
Warren Bartik received his PhD from the University of New England, where he is also currently the Psychology Clinic Director. He has had previous experience working in private practice settings and has research interests in the areas of suicide bereavement, youth suicide, early psychosis, rural mental health, and modes and models of service delivery.
Correspondence concerning this article should be addressed to Jamie M. Marshall, School of Psychology, Faculty of Medicine and Health, University of New England, Armidale, New South Wales 2351, Australia.
Email: [email protected]
Publication History
Received August 2, 2019
Revision received October 6, 2019
Accepted October 7, 2019
Alyami, M., Giri, B., Alyami, H., & Sundram, F. (2017). Social anxiety apps: A systematic review and assessment of app descriptors across mobile store platforms. Evidence-Based Mental Health, 20, 65–70. https://doi.org/10.1136/eb-2017-102664
Armontrout, J. A., Torous, J., Cohen, M., McNiel, D. E., & Binder, R. (2018). Current regulation of mobile mental health applications. Journal of the American Academy of Psychiatry and the Law, 46, 204–211.
Armontrout, J., Torous, J., Fisher, M., Drogin, E., & Gutheil, T. (2016). Mobile mental health: Navigating new rules and regulations for digital tools. Current Psychiatry Reports, 18, Article 91. https://doi.org/10.1007/s11920-016-0726-x
Armstrong, C. M., Ciulla, R. P., Edwards-Stewart, A., Hoyt, T., & Bush, N. (2018). Best practices of mobile health in clinical care: The development and evaluation of a competency-based provider training program. Professional Psychology: Research and Practice, 49, 355–363. https://doi.org/10.1037/pro0000194
Bakker, D., Kazantzis, N., Rickwood, D., & Rickard, N. (2016). Mental health smartphone apps: Review and evidence-based recommendations for future developments. JMIR Mental Health, 3, Article e7. https://doi.org/10.2196/mental.4984
Barlow, D. H., Nock, M. K., & Hersen, M. (2009). Single case experimental designs: Strategies for studying behavior change (3rd ed.). Boston, MA: Pearson.
Bentley, K. H., Kleiman, E. M., Elliott, G., Huffman, J. C., & Nock, M. K. (2019). Real-time monitoring technology in single-case experimental design research: Opportunities and challenges. Behaviour Research and Therapy, 117, 87–96. https://doi.org/10.1016/j.brat.2018.11.017
Boisseau, C. L., Schwartzman, C. M., Lawton, J., & Mancebo, M. C. (2017). App-guided exposure and response prevention for obsessive compulsive disorder: An open pilot trial. Cognitive Behaviour Therapy, 46, 447–458. https://doi.org/10.1080/16506073.2017.1321683
Broussard, J. D., & Teng, E. J. (2019). Models for enhancing the development of experiential learning approaches within mobile health technologies. Professional Psychology: Research and Practice, 50, 195–203. https://doi.org/10.1037/pro0000234
Buckley, J. A., Speece, D. L., & McLaughlin, J. E. (2014). The role of single-case designs in supporting rigorous intervention development and evaluation at the Institute of Education Sciences. In T. R. Kratochwill & J. R. Levin (Eds.), Single-case intervention research: Methodological and statistical advances (pp. 283–296). Washington, DC: American Psychological Association. https://doi.org/10.1037/14376-010
Clough, B. A., & Casey, L. M. (2015a). Smart designs for smart technologies: Research challenges and emerging solutions for scientist-practitioners within e-mental health. Professional Psychology: Research and Practice, 46, 429–436. https://doi.org/10.1037/pro0000053
Clough, B. A., & Casey, L. M. (2015b). The smart therapist: A look to the future of smartphones and mHealth technologies in psychotherapy. Professional Psychology: Research and Practice, 46, 147–153. https://doi.org/10.1037/pro0000011
Donker, T., Petrie, K., Proudfoot, J., Clarke, J., Birch, M.-R., & Christensen, H. (2013). Smartphones for smarter delivery of mental health programs: A systematic review. Journal of Medical Internet Research, 15, Article e247. https://doi.org/10.2196/jmir.2791
Emanuel, L., Berwick, D., Conway, J., Combes, J., Hatlie, M., Leape, L., . . . Walton, M. (2008). What exactly is patient safety? Advances in Patient Safety, 1, 1–18.
Firth, J., Torous, J., Carney, R., Newby, J., Cosco, T. D., Christensen, H., & Sarris, J. (2018). Digital technologies in the treatment of anxiety: Recent innovations and future directions. Current Psychiatry Reports, 20, Article 44. https://doi.org/10.1007/s11920-018-0910-2
Firth, J., Torous, J., Nicholas, J., Carney, R., Pratap, A., Rosenbaum, S., & Sarris, J. (2017). The efficacy of smartphone-based mental health interventions for depressive symptoms: A meta-analysis of randomized controlled trials. World Psychiatry, 16, 287–298. https://doi.org/10.1002/wps.20472
Firth, J., Torous, J., Nicholas, J., Carney, R., Rosenbaum, S., & Sarris, J. (2017). Can smartphone mental health interventions reduce symptoms of anxiety? A meta-analysis of randomized controlled trials. Journal of Affective Disorders, 218, 15–22. https://doi.org/10.1016/j.jad.2017.04.046
Flett, J. A. M., Hayne, H., Riordan, B. C., Thompson, L. M., & Conner, T. S. (2019). Mobile mindfulness meditation: A randomised controlled trial of the effect of two popular apps on mental health. Mindfulness, 10, 863–876. https://doi.org/10.1007/s12671-018-1050-9
Frank, E., Pong, J., Asher, Y., & Soares, C. N. (2018). Smart phone technologies and ecological momentary data: Is this the way forward on depression management and research? Current Opinion in Psychiatry, 31, 3–6. https://doi.org/10.1097/YCO.0000000000000382
Hatfield, D. R., & Ogles, B. M. (2004). The use of outcome measures by psychologists in clinical practice. Professional Psychology: Research and Practice, 35, 485–491. https://doi.org/10.1037/0735-7028.35.5.485
Hendrikoff, L., Kambeitz-Ilankovic, L., Pryss, R., Senner, F., Falkai, P., Pogarell, O., . . . Peters, H. (2019). Prospective acceptance of distinct mobile mental health features in psychiatric patients and mental health professionals. Journal of Psychiatric Research, 109, 126–132. https://doi.org/10.1016/j.jpsychires.2018.11.025
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidenced-based practice in special education. Exceptional Children, 71, 165–179. https://doi.org/10.1177/001440290507100203
Howells, A., Ivtzan, I., & Eiroa-Orosa, F. J. (2016). Putting the “app” in happiness: A randomised controlled trial of a smartphone-based mindfulness intervention to enhance wellbeing. Journal of Happiness Studies: An Interdisciplinary Forum on Subjective Well-Being, 17, 163–185. https://doi.org/10.1007/s10902-014-9589-1
Huang, H.-Y., & Bashir, M. (2017). Users’ adoption of mental health apps: Examining the impact of information cues. JMIR mHealth and uHealth, 5, Article e83. https://doi.org/10.2196/mhealth.6827
Jacobson, N. S., Roberts, L. J., Berns, S. B., & McGlinchey, J. B. (1999). Methods for defining and determining the clinical significance of treatment effects: Description, application, and alternatives. Journal of Consulting and Clinical Psychology, 67, 300–307. https://doi.org/10.1037/0022-006X.67.3.300
Jacobson, N. S., & Truax, P. (1991). Clinical significance: A statistical approach to defining meaningful change in psychotherapy research. Journal of Consulting and Clinical Psychology, 59, 12–19. https://doi.org/10.1037/0022-006X.59.1.12
Jones, N., & Moffitt, M. (2016). Ethical guidelines for mobile app development within health and mental health fields. Professional Psychology: Research and Practice, 47, 155–162. https://doi.org/10.1037/pro0000069
Kazdin, A. E. (2017). Research design in clinical psychology (5th ed.). Boston, MA: Pearson.
Kuhn, E., Kanuri, N., Hoffman, J. E., Garvert, D. W., Ruzek, J. I., & Taylor, C. B. (2017). A randomized controlled trial of a smartphone app for posttraumatic stress disorder symptoms. Journal of Consulting and Clinical Psychology, 85, 267–273. https://doi.org/10.1037/ccp0000163
Lai, J., & Jury, A. (2018). Effectiveness of e-mental health approaches: Rapid review. Retrieved from https://www.tepou.co.nz/uploads/files/resource-assets/E-therapy%20report%20FINAL%20July%202018.pdf
Lee, R. A., & Jung, M. E. (2018). Evaluation of an mHealth app (Destressify) on university students’ mental health: Pilot trial. JMIR Mental Health, 5, Article e2. https://doi.org/10.2196/mental.8324
Lui, J. H. L., Marcus, D. K., & Barry, C. T. (2017). Evidence-based apps? A review of mental health mobile applications in a psychotherapy context. Professional Psychology: Research and Practice, 48, 199–210. https://doi.org/10.1037/pro0000122
Machalicek, W., & Horner, R. H. (2018). Special issue on advances in single-case research design and analysis. Developmental Neurorehabilitation, 21, 209–211. https://doi.org/10.1080/17518423.2018.1468600
Marshall, J. M., Dunstan, D. A., & Bartik, W. (in press). The digital psychiatrist: In search of evidence-based apps for anxiety and depression. Frontiers in Psychiatry. https://doi.org/10.3389/fpsyt.2019.00831
Mehrotra, S., & Tripathi, R. (2018). Recent developments in the use of smartphone interventions for mental health. Current Opinion in Psychiatry, 31, 379–388. https://doi.org/10.1097/YCO.0000000000000439
Menon, V., Rajan, T. M., & Sarkar, S. (2017). Psychotherapeutic applications of mobile phone-based technologies: A systematic review of current research and trends. Indian Journal of Psychological Medicine, 39, 4–11. https://doi.org/10.4103/0253-7176.198956
Miller, K. E., Kuhn, E., Yu, J., Owen, J. E., Jaworski, B. K., Taylor, K., . . . Possemato, K. (2019). Use and perceptions of mobile apps for patients among VA primary care mental and behavioral health providers. Professional Psychology: Research and Practice, 50, 204–209. https://doi.org/10.1037/pro0000229
Mohadis, H. M., & Ali, N. M. (2014, September). A study of smartphone usage and barriers among the elderly. Paper presented at the 3rd International Conference on User Science and Engineering, Shah Alam, Malaysia.
Mohr, D. C., Burns, M. N., Schueller, S. M., Clarke, G., & Klinkman, M. (2013). Behavioral intervention technologies: Evidence review and recommendations for future research in mental health. General Hospital Psychiatry, 35, 332–338. https://doi.org/10.1016/j.genhosppsych.2013.03.008
Mohr, D. C., Cheung, K., Schueller, S. M., Hendricks Brown, C., & Duan, N. (2013). Continuous evaluation of evolving behavioral intervention technologies. American Journal of Preventive Medicine, 45, 517–523. https://doi.org/10.1016/j.amepre.2013.06.006
Mohr, D. C., Cuijpers, P., & Lehman, K. (2011). Supportive accountability: A model for providing human support to enhance adherence to eHealth interventions. Journal of Medical Internet Research, 13, e30–e41. https://doi.org/10.2196/jmir.1602
Neary, M., & Schueller, S. M. (2018). State of the field of mental health apps. Cognitive and Behavioral Practice, 25, 531–537. https://doi.org/10.1016/j.cbpra.2018.01.002
Nielsen, S. L., & Rimpilainen, S. (2018). Report on international practice on digital apps. Retrieved from https://strathprints.strath.ac.uk/66139
Paul, A. M., & Fleming, C. J. E. (2019). Anxiety management on campus: An evaluation of a mobile health intervention. Journal of Technology in Behavioral Science, 4, 58–61. https://doi.org/10.1007/s41347-018-0074-2
Roy, M. J., Costanzo, M. E., Highland, K. B., Olsen, C., Clayborne, D., & Law, W. (2017). An app a day keeps the doctor away: Guided education and training via smartphones in subthreshold post traumatic stress disorder. Cyberpsychology, Behavior, and Social Networking, 20, 470–478. https://doi.org/10.1089/cyber.2017.0221
Shah, A., Kraemer, K. R., Won, C. R., Black, S., & Hasenbein, W. (2018). Developing digital intervention games for mental disorders: A review. Games for Health, 7, 213–224. https://doi.org/10.1089/g4h.2017.0150
Shen, N., Levitan, M.-J., Johnson, A., Bender, J. L., Hamilton-Page, M., Jadad, A. A., & Wiljer, D. (2015). Finding a depression app: A review and content analysis of the depression app marketplace. JMIR mHealth and uHealth, 3, Article e16. https://doi.org/10.2196/mhealth.3713
Sheridan, S. M. (2014). Single-case designs and large-N studies: The best of both worlds. In T. R. Kratochwill & J. R. Levin (Eds.), Single-case intervention research: Methodological and statistical advances (pp. 299–308). Washington, DC: American Psychological Association. https://doi.org/10.1037/14376-011
Shiffman, S., Stone, A. A., & Hufford, M. R. (2008). Ecological momentary assessment. Annual Review of Clinical Psychology, 4, 1–32. https://doi.org/10.1146/annurev.clinpsy.3.022806.091415
Stawarz, K., Preist, C., Tallon, D., Wiles, N., & Coyle, D. (2018). User experience of cognitive behavioral therapy apps for depression: An analysis of app functionality and user reviews. Journal of Medical Internet Research, 20, Article e10120. https://doi.org/10.2196/10120
Stoyanov, S. R., Hides, L., Kavanagh, D. J., Zelenko, O., Tjondronegoro, D., & Mani, M. (2015). Mobile app rating scale: A new tool for assessing the quality of health mobile apps. JMIR mHealth and uHealth, 3, Article e27. https://doi.org/10.2196/mhealth.3422
Sunyaev, A., Dehling, T., Taylor, P. L., & Mandl, K. D. (2015). Availability and quality of mobile health app privacy policies. Journal of the American Medical Informatics Association, 22, e28–e33. https://doi.org/10.1136/amiajnl-2013-002605
Terry, N. P., & Gunter, T. D. (2018). Regulating mobile mental health apps. Behavioral Sciences & the Law, 36, 136–144. https://doi.org/10.1002/bsl.2339
Torous, J., Firth, J., Huckvale, K., Larsen, M. E., Cosco, T. D., Carney, R., . . . Christensen, H. (2018). The emerging imperative for a consensus approach toward the rating and clinical recommendation of mental health apps. Journal of Nervous and Mental Disease, 206, 662–666. https://doi.org/10.1097/NMD.0000000000000864
Torous, J., Levin, M. E., Ahern, D. K., & Oser, M. L. (2017). Cognitive behavioral mobile applications: Clinical studies, marketplace overview, and research agenda. Cognitive and Behavioral Practice, 24, 215–225. https://doi.org/10.1016/j.cbpra.2016.05.007
Wang, K., Varma, D. S., & Prosperi, M. (2018). A systematic review of the effectiveness of mobile apps for monitoring and management of mental health symptoms or disorders. Journal of Psychiatric Research, 107, 73–78. https://doi.org/10.1016/j.jpsychires.2018.10.006
World Health Organization. (2013). Mental health action plan 2013–2020. Retrieved from https://www.who.int/mental_health/action_plan_2013/bw_version.pdf?ua=1
Xie, Z., & Zhu, S. (2015). AppWatcher: Unveiling the underground market of trading mobile app reviews. Retrieved from https://dl.acm.org/citation.cfm?id=2766510
Zelmer, J., van Hoof, K., Notarianni, M., van Mierlo, T., Schellenberg, M., & Tannenbaum, C. (2018). An assessment framework for e-Mental health apps in Canada: Results of a modified Delphi process. JMIR mHealth and uHealth, 6, Article e10016. https://doi.org/10.2196/10016
Abstract
Practicing psychologists are being faced with the reality that mobile mental health apps for smartphones and tablet devices are increasing in popularity. This growth area within e-mental health has been well documented in Professional Psychology: Research and Practice. This article provides an update on the issues of safety and efficacy in mental health app development, two of the biggest concerns that practicing psychologists have about these new digital tools. Governments and medical authorities are wrestling with how to regulate the health app market to avoid harm to users. At the same time, a lack of research into the efficacy and effectiveness of most mental health apps in the various app stores leaves clinicians and consumers with uncertainty. The vast majority of the limited research to date has been completed by those involved in an app’s development. Further independent research and replication are required to demonstrate legitimacy and increase the acceptance of mental health apps as valid sources of therapy. Complicating this issue is a lack of consensus on an acceptable methodology for examining the effectiveness of a mental health app. This article proposes a new approach that incorporates multiple baseline single-case designs to increase the amount of evidence and to guide larger-scale randomized controlled trials, something that could and should include practicing psychologists. This novel approach also proposes that mental health apps undergo a new “certification” process with the participation of app store marketplaces.