Abstract
Background: The concept of health equity by design encompasses a multifaceted approach that integrates actions aimed at eliminating biased, unjust, and correctable differences among groups of people as a fundamental element in the design of algorithms. As algorithmic tools are increasingly integrated into clinical practice at multiple levels, nurses are uniquely positioned to address challenges posed by the historical marginalization of minority groups and its intersections with the use of "big data" in healthcare settings; however, a coherent framework is needed to ensure that nurses receive appropriate training in these domains and are equipped to act effectively.
Purpose: We introduce the Bias Elimination for Fair AI in Healthcare (BE FAIR) framework, a comprehensive strategic approach that incorporates principles of health equity by design, for nurses to employ when seeking to mitigate bias and prevent discriminatory practices arising from the use of clinical algorithms in healthcare. By using examples from a "real-world" AI governance framework, we aim to initiate a wider discourse on equipping nurses with the skills needed to champion the BE FAIR initiative.
Methods: Drawing on principles recently articulated by the Office of the National Coordinator for Health Information Technology, we conducted a critical examination of the concept of health equity by design. We also reviewed recent literature describing the risks of artificial intelligence (AI) technologies in healthcare as well as their potential for advancing health equity. Building on this context, we describe the BE FAIR framework, which has the potential to enable nurses to take a leadership role within health systems by implementing a governance structure to oversee the fairness and quality of clinical algorithms. We then examine leading frameworks for promoting health equity to inform the operationalization of BE FAIR within a local AI governance framework.
Results: The application of the BE FAIR framework within the context of a working governance system for clinical AI technologies demonstrates how nurses can leverage their expertise to support the development and deployment of clinical algorithms, mitigating risks such as bias and promoting ethical, high-quality care powered by big data and AI technologies.
Conclusion and Relevance: As health systems learn how well-intentioned clinical algorithms can potentially perpetuate health disparities, we have an opportunity and an obligation to do better. New efforts empowering nurses to advocate for BE FAIR, involving them in AI governance, data collection methods, and the evaluation of tools intended to reduce bias, mark important steps in achieving equitable healthcare for all.
KEYWORDS
artificial intelligence, ethics, health equity, nursing, social determinants of health
INTRODUCTION
The ascent of artificial intelligence (AI) in healthcare has been accompanied by hopes that algorithmic technologies will usher in a golden age of personalized medicine, optimized diagnoses, and streamlined care (Cutler, 2023). Nurses use AI tools to assist in various aspects of patient care, such as care planning and patient monitoring (Clancy, 2020). These tools help nurses analyze vast amounts of patient data, identify patterns, and provide real-time recommendations for personalized care (Cary Jr. et al., 2021; Koleck et al., 2021; Santos et al., 2023). This technology is poised to play a transformative role in healthcare (Haug & Drazen, 2023). Due to their proximity to patients and their standing as the most trusted profession (Brenan & Jones, 2024), nurses are uniquely positioned to address challenges posed by the historical marginalization of minority groups and the constraints of big data, thereby playing a crucial role in advancing healthcare equity.
Nurses are increasingly called upon to utilize AI tools in practice, act upon their outputs, and navigate the ethical complexities they present. However, traditional health professions education largely lacks the kinds of training needed for these AI-related competencies, leaving nurses with significant skill gaps in data analysis, algorithmic understanding, and ethical reasoning (Lomis et al., 2021). When coupled with underrepresentation of minorities in the nursing workforce, this creates a double jeopardy: nurses lack expertise to address historical biases in healthcare data, and these biases perpetuate themselves within algorithms, potentially leading to unfair and discriminatory care for marginalized communities. Diversifying the nursing workforce across all levels is paramount, not just for advancing equity throughout AI development, but also to ensure that nurses themselves are well-equipped to advocate for their patients and the healthcare system.
Overreliance on AI presents limitations (Lyell & Coiera, 2017) that nurses are often the first to witness. When algorithms are trained on datasets that under-represent minorities, they risk perpetuating existing health disparities. This can lead to inaccurate, ineffective treatment plans and further marginalization of vulnerable groups. Nurses, uniquely positioned at the patient bedside, are aware of these limitations and their impacts on patients. However, a simplistic "data-driven" approach fails to capture the full complexity of human experience and diverse patient needs. This necessitates critical engagement with the limitations of big data, alongside fostering diverse perspectives and lived experiences within AI development and implementation. By equipping nurses with the necessary skills and amplifying their voices, we can harness the full potential of AI for a more equitable healthcare system.
BACKGROUND
The promise of AI technologies in healthcare
The term "artificial intelligence" was introduced in 1956 by Dartmouth College and Stanford University computer scientist John McCarthy. McCarthy, who would go on to found one of the first artificial intelligence laboratories at Stanford University, used the term to describe a new scientific field based on the proposition that "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it" (McCarthy et al., 2006). Artificial intelligence is not a single kind of technology, but rather a broad term that encompasses a collection of methods and technologies centered on computers that are capable of learning to perform tasks without being specifically programmed to do so. Decades of repeating cycles of enthusiasm and ready funding for AI research and development ("AI summers") succeeded by periods of disappointment and retrenchment ("AI winters") have more recently given way to steady, rapid uptake of technologies and applications founded on algorithms that rely on access to big data and require intensive use of computational resources (Smith & Smith, 2023).
Applications of AI in healthcare
In the domain of healthcare, advances in areas such as drug discovery, robot-assisted surgeries, wearable sensors, and machine learning (ML) algorithms to predict health outcomes represent just a few of the applications deployed in clinical and administrative settings. AI technologies are also helping nurses identify patients who are at risk for poor outcomes and could benefit from prompt identification and intervention. One example of such clinical tools is Duke Health's Sepsis Watch (Sendak et al., 2020), a deep learning application that analyzes over 32 million data points to assess a patient's risk for developing sepsis and guide the hospital's rapid response team through the first 3 hours of care administration. Other researchers have achieved 90 percent sensitivity in predicting when critical care patients will need resuscitation (Si et al., 2021).
In addition to applications focused on clinical care, health systems are using AI to adjust staffing to reduce wait times in emergency departments (Araz et al., 2014; Leung et al., 2022). By analyzing historical administrative data, nurse managers can anticipate staffing requirements during major events such as extreme weather (Gafni-Pappas & Khan, 2023) or seasonal events such as winter or flu season (Yang et al., 2015). Administrators can monitor and sift massive amounts of data on the frequency of use of words and phrases from social media platforms to predict outbreaks of communicable diseases and proactively allocate resources. In other cases, familiar consumer technologies have been integrated into patient care: for instance, nurses using an Apple Watch with an always-on Siri function can summon assistance from a patient care technician or launch clinical apps (e.g., Medscape, Epocrates, UpToDate) hands-free.
PURPOSE
In this paper, we explore recent advancements in AI-powered technologies and their applications within healthcare settings. We introduce and describe the development of procedural processes integrated within a larger AI governance structure: the Bias Elimination for Fair AI in Healthcare (BE FAIR) framework. By critically examining the BE FAIR framework, we intend to pave the way for broader discussions on empowering nurses with the skills needed to reduce the risk of algorithmic discrimination and advance health equity across healthcare systems.
NEW AI TECHNOLOGIES, NEW AI RISKS
Twenty years ago, the National Academy of Medicine (formerly known as the Institute of Medicine) report titled Unequal Treatment: Confronting Racial and Ethnic Disparities in Healthcare noted that many actors may contribute, wittingly or unwittingly and in ways both large and small, to creating patterns of inequitable care (Institute of Medicine Committee on Understanding and Eliminating Racial and Ethnic Disparities in Health Care, 2003). Despite the report's influence and the many corroborating studies and initiatives that followed, inequities in health by race and ethnicity have persisted. In a recent example, the devastating burden borne by racial and ethnic minorities throughout the COVID-19 pandemic has focused attention on the critical need to ensure fairness in forecast results (Luck et al., 2022).
Despite substantial potential for benefit to patients, practitioners, and health systems from the use of AI technologies, the presence of bias in AI algorithms underscores the risks of perpetuating inequitable care. Because models are built from data, missing or inaccurate data affect recommendations generated by the model. If missing data and measurement errors are more prevalent in some patient groups than in others (e.g., patients with low health literacy, or who change insurance, or use providers across many institutions), the model may not adequately represent all patients. In a recent evidence review (Jain et al., 2023), the authors described 18 clinical algorithms currently in use that have potential for bias. In addition, an algorithm trained on data from a predominantly white patient population is not expected to have the same accuracy when applied to other ethnicities (Hermansson & Kahan, 2018), while hidden bias may exist for patients who receive certain types of procedures or receive care in under-resourced settings, placing vulnerable populations at risk for inadequate or inappropriate care (Vela et al., 2022).
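The point above about differential missingness can be made concrete with a simple audit: before a model is trained, compare how often a field is missing in each patient group. The sketch below is purely illustrative; the groups, field name, and data are hypothetical, not drawn from any system described in this paper.

```python
# Illustrative sketch: per-group missing-data audit. Missingness concentrated
# in one group (e.g., labs never ordered for uninsured patients) can bias any
# model trained on the data. All data below are invented.

def missing_rate_by_group(rows, field):
    """rows: list of dicts with a "group" key; returns the share of rows
    per group where `field` is absent or None."""
    totals, missing = {}, {}
    for row in rows:
        g = row["group"]
        totals[g] = totals.get(g, 0) + 1
        if row.get(field) is None:
            missing[g] = missing.get(g, 0) + 1
    return {g: missing.get(g, 0) / totals[g] for g in totals}

rows = [
    {"group": "insured", "a1c": 6.1},
    {"group": "insured", "a1c": 7.4},
    {"group": "uninsured", "a1c": None},  # lab never ordered
    {"group": "uninsured", "a1c": 8.0},
]
print(missing_rate_by_group(rows, "a1c"))  # {'insured': 0.0, 'uninsured': 0.5}
```

A gap like the one shown (0% vs. 50% missing) would signal that the training data under-represent one group's clinical picture before any modeling begins.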
ADVANCING HEALTH EQUITY IN AI USE
Recent years have seen a flurry of activity by federal agencies and professional associations aimed at detecting, mitigating, and preventing algorithmic bias and fostering health equity in AI and ML technologies applied in healthcare settings. This includes efforts undertaken through various divisions of the U.S. Department of Health and Human Services, including a Notice of Proposed Rulemaking (issued as a Final Rule on April 26, 2024) aimed at prohibiting discrimination related to the use of AI in healthcare, numerous contributions to developing data standards, and meetings and workshops focused on health equity issues. Professional nursing organizations have also been active in creating toolkits, educational programs, and advocacy efforts in this arena (Table 1).
Recommendations for mitigating algorithmic bias
Against this backdrop of government and professional efforts, Cary and colleagues (Cary Jr. et al., 2023) have offered seven key recommendations representing the broadest approach to mitigating bias in clinical algorithms: (1) policymakers should advocate for governance structures and robust evaluation plans; (2) health systems should foster diversity within well-trained teams from the clinical, social, and technical sciences; (3) auditable clinical algorithms should provide clear and accurate information on intended uses and associated risks; (4) biases should be acknowledged and outcomes of mitigation strategies reported; (5) the Health Equity By Design framework (Argentieri et al., 2022) should be infused into algorithm development; (6) research efforts should be funded to expand the empirical evidence base; and (7) developers must actively engage diverse patients and local communities in the design and preprocessing stages of clinical algorithms.
FRAMEWORKS TO PROMOTE HEALTH EQUITY IN AI
Three prominent frameworks for addressing algorithmic bias in the development and deployment of AI systems in healthcare have recently emerged (Table 2), each of which offers valuable insights and guidance on how to understand, identify, and mitigate bias in AI algorithms. These include (1) the Algorithmic Bias Playbook (Obermeyer et al., 2021), which provides a comprehensive approach to bias mitigation; (2) the AI Bias Aware Framework (Agarwal et al., 2023), which is focused on health equity; and (3) the JustEFAB ethical framework (McCradden et al., 2023), which emphasizes fairness and social justice in clinical ML integration. Collectively, these frameworks underscore the significance of equitable and ethical AI deployment and offer recommendations for health systems, AI developers, and end users seeking to navigate the intricate landscape of algorithmic bias at different stages of the development and deployment process.
However, two notable gaps affect these frameworks. The first is a lack of consensus on how to define and measure algorithmic bias, which makes it difficult to assess the relative effectiveness of different mitigation strategies. The second is a lack of specific guidance on translating their recommendations into practical, effective actions, and on implementing, operationalizing, and monitoring fairness and mitigation strategies in complex, dynamic real-world settings. Despite these gaps, the frameworks provide a valuable starting point for addressing algorithmic bias in AI.
ARTICULATING THE BE FAIR FRAMEWORK
Eliminating bias for fair and responsible AI
Building on our previous work and that of others, we present Bias Elimination for Fair and Responsible AI in Healthcare (BE FAIR), a multi-faceted framework for mitigating bias across the algorithmic lifecycle from design to deployment (Figure 1). BE FAIR was informed by six common types of bias identified in a scoping review: societal bias, label bias, aggregation bias, representation bias, evaluation bias, and human use bias; it offers nurses guidance on how to implement and monitor AI strategies in real-world clinical settings (Table 3). The bias management strategies described in Table 3 not only educate development teams about different types of biases but also serve as a guide for managing bias-related risks that may be introduced by AI technologies used in healthcare. The BE FAIR framework thus serves as a roadmap for identifying and mitigating bias throughout the entire lifecycle of AI-powered healthcare technologies, from conception and development to deployment and use. This structured approach empowers nurses, who play a critical role as both end users and patient advocates, to actively participate in building and utilizing fair and equitable AI tools.
Operationalizing BE FAIR through AI governance: A nursing exemplar
Health systems require an AI governance framework for oversight and deployment of safe, high-quality predictive algorithms (Bedoya et al., 2022). However, there is little understanding of how nurses should evaluate complex algorithms for bias and discrimination. Relatively few frameworks exist that promote health equity across the development lifecycle (from design through evaluation and deployment); of these, none provide nurses with practical strategies for mitigating bias in clinical algorithms. By following BE FAIR as part of the lifecycle management of clinical algorithms, AI developers and end users such as nurses can take concrete steps to reduce the risk of bias and discrimination and create a more equitable healthcare system.
Algorithmic governance frameworks (Economou-Zavlanos et al., 2023) can serve as a vehicle to operationalize BE FAIR (Table 4). Through algorithmic governance and oversight, organizations can drive accountability for the stakeholders involved in the development and use of algorithmic technologies intended for health so that health equity considerations are integrated throughout the lifecycle of these technologies. Similar to Quality by Design concepts (Tenaerts et al., 2018), health equity and algorithmic bias assessments should be performed early in the lifecycle during the design and development phases. By forming committees and evaluation processes that focus on bias mitigation, algorithmic design, data analysis, and clinical workflows, nurses play a key role in ensuring fairness and equity as well as safe and effective AI deployments. BE FAIR can help nurses and other end users to be aware of and better understand bias and discrimination. Additionally, it may empower nurses to lead efforts aimed at integrating fairness and equity principles within healthcare operations and decision-making.
As a precursor to developing AI models for healthcare technologies, a team with health disparities and equity expertise spanning four areas (Clinical and Health Services Research, Biological and Behavioral Sciences, Data Science, and Community Health and Population Sciences) should be established to define and document clinical and model performance requirements and to design the model. Data infrastructure is built iteratively for retrospective evaluation. Interoperability and rapid maturation of standards for data elements and their incorporation into clinical workflows are essential for addressing health equity issues.
In the design phase of AI-enabled healthcare technologies, the BE FAIR team's approach extends beyond simply gathering the correct data. It involves a comprehensive strategy to achieve equitable patient outcomes while weighing broader system considerations such as efficiency and cost savings. An example of addressing biases in the design phase of AI development, with a focus on health equity, can be seen in the development and application of "no-show" predictive models within healthcare settings (AlMuhaideb et al., 2019). These models predict the likelihood of patients missing their scheduled appointments, a substantial operational and care delivery challenge for providers. The design of such models highlights the importance of weighing health equity and the potential biases in AI-driven interventions.
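A design-phase audit of the kind described above could, for example, check whether a no-show model's high-risk flags fall disproportionately on one patient group, since flagged patients may then receive different scheduling treatment. The sketch below uses invented groups and data and is not the model from AlMuhaideb et al.; it only illustrates the shape of such a check.

```python
# Illustrative sketch: share of each patient group flagged "high no-show risk"
# by a hypothetical model. A large gap between groups would prompt the BE FAIR
# team to examine the model's training data and features before deployment.

def flag_rate_by_group(patients):
    """patients: list of (group, flagged) pairs; returns the fraction of each
    group that the model flagged as high risk."""
    totals, flagged = {}, {}
    for g, f in patients:
        totals[g] = totals.get(g, 0) + 1
        if f:
            flagged[g] = flagged.get(g, 0) + 1
    return {g: flagged.get(g, 0) / totals[g] for g in totals}

patients = [("urban", True), ("urban", False), ("urban", False),
            ("rural", True), ("rural", True), ("rural", False)]
rates = flag_rate_by_group(patients)
print(rates)  # rural patients flagged at twice the urban rate (2/3 vs. 1/3)
```

In a real review, such a disparity would not by itself prove bias, but it would flag the model for the deeper equity assessment BE FAIR calls for.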
Applying BE FAIR can inform the team and the healthcare organization about risks that need to be managed during the use of AI technologies. During evaluation, the BE FAIR framework plays a dual role in managing bias. First, it raises awareness among AI development teams regarding different types of biases that may be introduced across the AI lifecycle by providing examples of how these biases can be identified, assessed, and mitigated. Second, it serves as a self-assessment tool for nurses and other team members to reflect on biases that may be introduced when a specific AI tool is integrated into the workflow, as well as providing insight into how those biases can be assessed and mitigated. Here, the team considers model performance metrics, resource allocation, and patient outcomes and ensures that sociodemographic data within the training set are transparent to anyone deploying the model. In this context, nurses' role involves actively participating in the evaluation of performance metrics related to health outcomes among marginalized groups. Nurses are key in identifying disparities in these outcomes, thereby contributing to a comprehensive understanding of health inequities. Their direct patient-care experience and trust within communities position them to effectively communicate the needs and challenges faced by marginalized groups, making efforts to improve health equity more targeted and impactful. As the algorithm transitions to deployment, the team continues to prioritize health equity, incorporating feedback from a diverse group of end users (e.g., patients, nurses, providers, health system administrators) and advising on how to deploy models within clinical environments.
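One concrete way the evaluation step above can examine performance among marginalized groups is to compute a metric such as sensitivity (recall) separately for each group rather than in aggregate. The sketch below is a minimal, hypothetical example; groups "A" and "B" and the records are placeholders, not data from any model discussed here.

```python
# Illustrative sketch: sensitivity (true positives / all actual positives)
# computed per demographic group. Aggregate metrics can hide exactly the
# disparities this disaggregation surfaces. All records are invented.

def sensitivity_by_group(records):
    """records: list of (group, y_true, y_pred) with binary labels."""
    tp, fn = {}, {}
    for g, y, p in records:
        if y == 1:  # only actual positives contribute to sensitivity
            if p == 1:
                tp[g] = tp.get(g, 0) + 1
            else:
                fn[g] = fn.get(g, 0) + 1
    return {g: tp.get(g, 0) / (tp.get(g, 0) + fn.get(g, 0))
            for g in set(tp) | set(fn)}

records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 0, 1),
]
result = sensitivity_by_group(records)
print(result)  # group A: 2/3, group B: 1/3 — the model misses far more
               # true cases in group B
```

A gap of this size between groups would be exactly the kind of disparity nurses on the evaluation team are positioned to surface and contextualize.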
At this stage, when the team closely monitors clinicians' use of the model, nurses are crucial in observing alterations to clinical workflows and discerning how clinician responses to algorithm outputs, which may reflect human bias, might affect service utilization and patient outcomes. Equally important is the assessment of model drift or decay, that is, reductions in the model's performance over time. These assessments are essential for maintaining the fairness of AI applications in healthcare.
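In its simplest form, the drift monitoring described above reduces to comparing a recent window of predictions against a baseline performance level and alerting when the gap exceeds a tolerance. The sketch below is illustrative only; the baseline value and threshold are arbitrary assumptions, and real programs would track multiple metrics, including per-group metrics, over calendar windows.

```python
# Illustrative sketch: flagging model drift/decay by comparing a recent
# monitoring window's accuracy to a baseline. Baseline and threshold are
# arbitrary placeholders for a real governance team's chosen values.

def window_accuracy(pairs):
    """pairs: list of (y_true, y_pred); fraction of correct predictions."""
    return sum(1 for y, p in pairs if y == p) / len(pairs)

def detect_drift(baseline_acc, window_pairs, drop_threshold=0.10):
    """True if accuracy has fallen more than drop_threshold below baseline."""
    return (baseline_acc - window_accuracy(window_pairs)) > drop_threshold

baseline = 0.90  # accuracy established at deployment (assumed)
recent = [(1, 1), (0, 0), (1, 0), (0, 1), (1, 1)]  # window accuracy = 0.6
print(detect_drift(baseline, recent))  # True: performance has decayed
```

Running the same check separately for each patient subgroup would let the team see whether decay is concentrated among marginalized groups, tying drift monitoring back to the equity focus of BE FAIR.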
LIMITATIONS
Several limitations and challenges could affect the application of BE FAIR. Specific challenges include limited availability of diverse, representative data with which to train AI models appropriately (Gopal et al., 2021). In cases where data for specific populations and combinations of subgroups (e.g., Black women, Asian men, rural and/or economically disadvantaged persons, sexual and gender minorities) are limited or nonexistent, it becomes especially difficult to address bias and promote equity for these groups. Further, there is a risk that algorithms trained on biased or unrepresentative data may perpetuate or exacerbate existing disparities when applied to underrepresented groups.
The BE FAIR framework may not adequately address the intersectionality of multiple identity factors (e.g., race, gender, sexual orientation) when considering bias and health equity. Bias may operate differently when considering individuals who belong to multiple marginalized groups. Also, the qualitative nature of work under BE FAIR to identify bias introduces inherent limitations in the objectivity of the information collected. Emphasizing clear and transparent communication about algorithms, including their limitations, with healthcare professionals and patients is essential for setting realistic expectations. BE FAIR is currently formulated as a quality improvement effort to ensure that biases and inequities are identified, assessed, and mitigated during the lifecycle of AI technologies. However, the BE FAIR framework has not yet been rigorously evaluated in real-world settings and has yet to undergo empirical testing through controlled studies or broad implementation. Consequently, although the strategies proposed appear viable based on nurses' roles and capabilities in mitigating health inequities, their effectiveness and scalability across diverse healthcare settings have yet to be demonstrated.
FUTURE DIRECTIONS
Health system governance and oversight are pivotal in implementing frameworks that apply an equity lens, especially when evaluating complex algorithms across their lifecycles for bias and discrimination. This necessitates expansion of the collection, reporting, and analysis of social determinants of health data, as advocated by the Centers for Medicare and Medicaid Services 2022-2032 Framework for Health Equity. Without these crucial data, the capacity to provide culturally competent care remains constrained. Protocols for assessing and correcting potential bias and discrimination are essential. However, the effectiveness of various strategies within health systems remains an area of uncertainty and exploration.
CONCLUSIONS
The health professions, and particularly nursing, are at a juncture where they can leverage AI tools to optimize patient and population care. Recent developments suggest the emergence of new competencies and roles encompassing basic AI knowledge, awareness of social and ethical implications, evidence-based evaluation, and clinical workflow analysis. As health systems confront the reality that well-intentioned clinical algorithms may inadvertently perpetuate health disparities, there is a compelling opportunity and moral imperative to improve. Nurses, whose jobs place them in direct and prolonged contact with patients, are poised to lead such efforts as evaluators, interpreters, and communicators of AI applications in healthcare and to promote and operationalize BE FAIR through governance and clinical practice, thereby ensuring fair and equitable use of health AI. The BE FAIR framework underscores the urgent need for nurses to step forward and become AI influencers, leveraging their expertise to guide an increasingly AI-driven healthcare system. Through the use of enabling frameworks such as BE FAIR, nursing professionals can play a key role in ensuring that AI becomes a catalyst for progress and equity, rather than a tool that perpetuates existing inequalities.
ACKNOWLEDGMENTS
Research reported in this publication was supported in part by Duke Clinical and Translational Science, National Center for Advancing Translational Sciences, and National Institutes of Health (Award No. UL1TR002553). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors have no conflicts of interest to declare. The authors wish to acknowledge the leadership and content expertise provided by members of the Duke Algorithm-Based Clinical Decision Support Oversight Committee, including: Amanda Parrish, PhD; Armando Bedoya, MD, MMCi; Benjamin Goldstein, PhD, MPH; Cara O'Brien, MD; Eric Jelovsek, MD; Eric Poon, MD, MPH; Michael Lipkin, MD; Scott Elengold, JD; Sharon Ellison, PharmD; and Suresh Balu, MS, MBA. The authors would also like to acknowledge editorial support from Judith C. Hays, PhD, RN.
DATA AVAILABILITY STATEMENT
This project does not involve the collection, generation, or utilization of specific data sets. Therefore, there are no data sets associated with this paper, and no data are available for sharing or access.
CLINICAL RESOURCES
Website Title: Addressing the Limitations of Medical Data in AI.
URL: https://www.fda.gov/medical-devices/medical-device-regulatory-science-research-programs-conducted-osel/addressing-limitations-medical-data-ai.
Website Title: Identifying and Measuring Artificial Intelligence (AI) Bias for Enhancing Health Equity.
URL: https://www.fda.gov/medical-devices/medical-device-regulatory-science-research-programs-conducted-osel/identifying-and-measuring-artificial-intelligence-ai-bias-enhancing-health-equity.
Website Title: Evaluation Methods for Artificial Intelligence (AI)-Enabled Medical Devices: Performance Assessment and Uncertainty Quantification.
URL: https://www.fda.gov/medical-devices/medical-device-regulatory-science-research-programs-conducted-osel/evaluation-methods-artificial-intelligence-ai-enabled-medical-devices-performance-assessment-and.
Website Title: Performance Evaluation Methods for Evolving Artificial Intelligence (AI)-Enabled Medical Devices.
URL: https://www.fda.gov/medical-devices/medical-device-regulatory-science-research-programs-conducted-osel/performance-evaluation-methods-evolving-artificial-intelligence-ai-enabled-medical-devices.
Website Title: Nondiscrimination in Health Programs and Activities, A Rule by the Centers for Medicare & Medicaid Services on 05/06/2024.
URL: https://www.govinfo.gov/content/pkg/FR-2024-05-06/pdf/2024-08711.pdf.
ORCID
Michael P. Cary Jr. https://orcid.org/0000-0002-7966-7515
Kay Lytle https://orcid.org/0000-0001-9845-1501
REFERENCES
Agarwal, R., Bjarnadottir, M., Rhue, L., Dugas, M., Crowley, K., Clark, J., & Gao, G. (2023). Addressing algorithmic bias and the perpetuation of health inequities: An AI bias aware framework. Health Policy and Technology, 12(1), 100702. https://doi.org/10.1016/j.hlpt.2022.100702
AlMuhaideb, S., Alswailem, O., Alsubaie, N., Ferwana, I., & Alnajem, A. (2019). Prediction of hospital no-show appointments through artificial intelligence algorithms. Annals of Saudi Medicine, 39(6), 373-381. https://doi.org/10.5144/0256-4947.2019.373
Araz, O. M., Bentley, D., & Muelleman, R. L. (2014). Using Google flu trends data in forecasting influenza-like-illness related ED visits in Omaha, Nebraska. American Journal of Emergency Medicine, 32(9), 1016-1023. https://doi.org/10.1016/j.ajem.2014.05.052
Argentieri, R. M., Mason, T. A., Hefcart, J., & Henry, J. (2022). Embracing Health Equity by Design. Retrieved from https://www.healthit.gov/buzz-blog/health-it/embracing-health-equity-by-design
Bedoya, A. D., Economou-Zavlanos, N. J., Goldstein, B. A., Young, A., Jelovsek, J. E., O'Brien, C., Parrish, A. B., Elengold, S., Lytle, K., Balu, S., Huang, E., Poon, E. G., & Pencina, M. J. (2022). A framework for the oversight and local deployment of safe and high-quality prediction models. Journal of the American Medical Informatics Association, 29(9), 1631-1636. https://doi.org/10.1093/jamia/ocac078
Brenan, M., & Jones, J. M. (2024). Ethics ratings of nearly all professions down in the U.S. Gallup. https://news.gallup.com/poll/608903/ethics-ratings-nearly-professions-down.aspx
Cary, M. P., Jr., Zhuang, F., Draelos, R. L., Pan, W., Amarasekara, S., Douthit, B. J., & Colón-Emeric, C. S. (2021). Machine learning algorithms to predict mortality and allocate palliative care for older patients with hip fracture. Journal of the American Medical Directors Association, 22(2), 291-296. https://doi.org/10.1016/j.jamda.2020.09.025
Cary, M. P., Jr., Zink, A., Wei, S., Olson, A., Yan, M., Senior, R., & Pencina, M. J. (2023). Mitigating racial and ethnic bias and advancing health equity in clinical algorithms: A scoping review. Health Affairs, 42(10), 1359-1368. https://doi.org/10.1377/hlthaff.2023.00553
Clancy, T. R. (2020). Artificial intelligence and nursing: The future is now. Journal of Nursing Administration, 50(3), 125-127. https://doi.org/10.1097/nna.0000000000000855
Cutler, D. M. (2023). What artificial intelligence means for health care. JAMA Health Forum, 4(7), e232652. https://doi.org/10.1001/jamahealthforum.2023.2652
Economou-Zavlanos, N. J., Bessias, S., Cary, M. P., Jr., Bedoya, A. D., Goldstein, B. A., Jelovsek, J. E., O'Brien, C. L., Walden, N., Elmore, M., Parrish, A. B., Elengold, S., Lytle, K. S., Balu, S., Lipkin, M. E., Shariff, A. I., Gao, M., Leverenz, D., Henao, R., Ming, D. Y., ... Poon, E. G. (2023). Translating ethical and quality principles for the effective, safe and fair development, deployment and use of artificial intelligence technologies in healthcare. Journal of the American Medical Informatics Association, 31, 705-713. https://doi.org/10.1093/jamia/ocad221
Gafni-Pappas, G., & Khan, M. (2023). Predicting daily emergency department visits using machine learning could increase accuracy. The American Journal of Emergency Medicine, 65, 5-11. https://doi.org/10.1016/j.ajem.2022.12.019
Gopal, D. P., Chetty, U., O'Donnell, P., Gajria, C., & Blackadder-Weinstein, J. (2021). Implicit bias in healthcare: Clinical practice, research and decision making. Future Healthcare Journal, 8(1), 40-48. https://doi.org/10.7861/fhj.2020-0233
Haug, C. J., & Drazen, J. M. (2023). Artificial intelligence and machine learning in clinical medicine, 2023. New England Journal of Medicine, 388(13), 1201-1208. https://doi.org/10.1056/NEJMra2302038
Hermansson, J., & Kahan, T. (2018). Systematic review of validity assessments of Framingham risk score results in health economic modelling of lipid-modifying therapies in Europe. PharmacoEconomics, 36(2), 205-213. https://doi.org/10.1007/s40273-017-0578-1
Institute of Medicine, Committee on Understanding and Eliminating Racial and Ethnic Disparities in Health Care. (2003). In B. D. Smedley, A. Y. Stith, & A. R. Nelson (Eds.), Unequal treatment: Confronting racial and ethnic disparities in health care. National Academies Press.
Jain, A., Brooks, J. R., Alford, C. C., Chang, C. S., Mueller, N. M., Umscheid, C. A., & Bierman, A. S. (2023). Awareness of racial and ethnic bias and potential solutions to address bias with use of health care algorithms. JAMA Health Forum, 4(6), e231197. https://doi.org/10.1001/jamahealthforum.2023.1197
Koleck, T. A., Topaz, M., Tatonetti, N. P., George, M., Miaskowski, C., Smaldone, A., & Bakken, S. (2021). Characterizing shared and distinct symptom clusters in common chronic conditions through natural language processing of nursing notes. Research in Nursing and Health, 44(6), 906-919. https://doi.org/10.1002/nur.22190
Leung, F., Lau, Y. C., Law, M., & Djeng, S. K. (2022). Artificial intelligence and end user tools to develop a nurse duty roster scheduling system. International Journal of Nursing Sciences, 9(3), 373-377. https://doi.org/10.1016/j.ijnss.2022.06.013
Lomis, K., Jeffries, P., Palatta, A., Sage, M., Sheikh, J., Sheperis, C., & Whelan, A. (2021). Artificial intelligence for health professions educators. National Academy of Medicine Perspectives, 1-14.
Luck, A. N., Preston, S. H., Elo, I. T., & Stokes, A. C. (2022). The unequal burden of the Covid-19 pandemic: Capturing racial/ethnic disparities in US cause-specific mortality. SSM - Population Health, 17, 101012. https://doi.org/10.1016/j.ssmph.2021.101012
Lyell, D., & Coiera, E. (2017). Automation bias and verification complexity: A systematic review. Journal of the American Medical Informatics Association, 24(2), 423-431. https://doi.org/10.1093/jamia/ocw105
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine, 27(4), 12. https://doi.org/10.1609/aimag.v27i4.1904
McCradden, M., Odusi, O., Joshi, S., Akrout, I., Ndlovu, K., Glocker, B., Maicas, G., Liu, X., Mazwi, M., Garnett, T., Oakden-Rayner, L., Alfred, M., Sihlahla, I., Shafei, O., & Goldenberg, A. (2023). What's fair is... fair? Presenting JustEFAB, an ethical framework for operationalizing medical ethics and social justice in the integration of clinical machine learning. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, Chicago, IL, USA. https://doi.org/10.1145/3593013.3594096
Obermeyer, Z., Nissan, R., Stern, M., Eaneff, S., & Mullainathan, S. (2021). Algorithmic bias playbook. Center for Applied AI at Chicago Booth.
Santos, F. C. D., Snigurska, U. A., Keenan, G. M., Lucero, R. J., & Modave, F. (2023). Clinical decision support systems for palliative care management: A scoping review. Journal of Pain and Symptom Management, 66(2), e205-e218. https://doi.org/10.1016/j.jpainsymman.2023.03.006
Sendak, M. P., Ratliff, W., Sarro, D., Alderton, E., Futoma, J., Gao, M., & O'Brien, C. (2020). Real-world integration of a sepsis deep learning technology into routine clinical care: Implementation study. JMIR Medical Informatics, 8(7), e15182. https://doi.org/10.2196/15182
Si, Y., Du, J., Li, Z., Jiang, X., Miller, T., Wang, F., Jim Zheng, W., & Roberts, K. (2021). Deep representation learning of patient data from electronic health records (EHR): A systematic review. Journal of Biomedical Informatics, 115, 103671. https://doi.org/10.1016/j.jbi.2020.103671
Smith, P., & Smith, L. (2023). This season's artificial intelligence (AI): Is today's AI really that different from the AI of the past? Some reflections and thoughts. AI and Ethics. https://doi.org/10.1007/s43681-023-00388-0
Tenaerts, P., Madre, L., & Landray, M. (2018). A decade of the clinical trials transformation initiative: What have we accomplished? What have we learned? Clinical Trials, 15(1_suppl), 5-12. https://doi.org/10.1177/1740774518755053
Vela, M. B., Erondu, A. I., Smith, M. A., Peek, M. E., Woodruff, J. N., & Chin, M. H. (2022). Eliminating explicit and implicit biases in health care: Evidence and research needs. Annual Review of Public Health, 43, 477-501. https://doi.org/10.1146/annurev-publhealth-052620-103528
Yang, S., Santillana, M., & Kou, S. C. (2015). Accurate estimation of influenza epidemics using Google search data via ARGO. Proceedings of the National Academy of Sciences of the United States of America, 112(47), 14473-14478. https://doi.org/10.1073/pnas.1515373112
Copyright Blackwell Publishing Ltd. Jan 2025