Aim
This study aimed to describe the perspectives of doctors, nurses and residents toward healthcare artificial intelligence (AI) and its integration in healthcare settings in Kazakhstan.
Background
The potential of AI integration in healthcare is becoming increasingly recognized. The efficient use of AI in healthcare directly depends on the trust and knowledge of healthcare professionals. Therefore, it is crucial to understand how doctors, nurses and residents perceive AI and its integration into the healthcare system.
Design
A descriptive qualitative study.
Methods
Fifteen doctors, nurses and residents working at the University Medical Center in Astana were purposively selected for this study. Data were collected through semi-structured interviews from February 3 to March 31, 2024. The thematic approach was used to analyze the data.
Results
The study identified five major themes: "Use of AI in Healthcare," "Impact on Healthcare Workflow," "Challenges and Limitations of AI Use in Healthcare," "Ethical and Societal Implications of AI Use in Healthcare," and "Future Prospects and Trends of AI Use in Healthcare." The findings highlight the importance of AI in education, patient care and everyday work practice.
Conclusions
The findings highlighted the need to prepare doctors, nurses and residents for the effective and ethical use of AI in healthcare. This preparation requires updating health professions curricula to integrate AI courses into their theoretical and clinical components. Healthcare facilities should provide their doctors, nurses and residents with educational opportunities on AI. Hence, the ethical use of AI in education should be explored and advocated.
1 Introduction
Artificial intelligence (AI) is the capability of machines to perform tasks requiring human intelligence, such as reasoning, learning and problem-solving through machine learning algorithms (Bhattamisra et al., 2023). In healthcare, AI has evolved from simple data analysis to complex clinical decision-making and predictive patient care (Högberg et al., 2023; Van der Zander et al., 2022). It enhances medical services, particularly in image interpretation, data processing, workflow improvement and reducing medical errors (Balay-Odao et al., 2024).
AI aids general practitioners (GPs) with patient care, diagnosis and therapy planning. Efforts to incorporate AI into the clinical workflow must focus on user-friendliness, data safeguarding and appropriate training (Salwei et al., 2021). However, challenges such as practitioner mistrust and ethical considerations hinder its adoption (Blease et al., 2019). A UK study of 720 GPs revealed that while they recognized the benefits of AI, such as reduced administrative workloads and faster diagnoses, they also stressed the critical role of clinical judgment. They further raised concerns about the social and ethical issues related to AI use in their practice (Blease et al., 2019).
Perceptions of AI integration in healthcare vary. For instance, breast radiologists in Sweden were optimistic about incorporating AI into the screening process (Högberg et al., 2023), while UK radiologists were less optimistic, partly due to concerns over their ability to confidently explain AI-generated results to patients and colleagues (Rainey et al., 2022). Russian doctors held positive opinions and saw AI as a valuable tool to support their work rather than a replacement (Orlova et al., 2023). In nursing, AI plays a transformative role in care delivery and education by streamlining tasks and aiding decision-making (Rony et al., 2024b). AI also helps nurses analyze patient data, predict outcomes, optimize treatment plans and support personalized patient care (Lu et al., 2021).
In a previous study, nurses viewed AI as an opportunity to improve and expand care delivery. They perceived a dynamic combination of human expertise and AI that is redefining nursing and healthcare (Rony et al., 2024a). In addition, AI-enabled robotics and telehealth solutions expand the reach of nursing and medical care, improving the accessibility of healthcare services and the remote monitoring of patients' health conditions.
Despite the potential positive outcomes of AI, challenges such as cybersecurity, ethical considerations and a lack of professionals competent in medical big data persist (Lu et al., 2021). AI technology could affect patient autonomy and raise privacy and security issues because policies protecting patients' rights in relation to AI use remain inadequate (Lee and Yoon, 2021). Other issues, such as bias and discrimination, arise from the use of algorithms in diagnosis, treatment and automated decision-making systems (Rodrigues, 2020). AI-generated resources are also expensive to some extent, making them inaccessible to low-income health institutions (Shah, 2024). Inadequate infrastructure, insufficient training in digital competencies, challenges with data interoperability, regulatory gaps and resource limitations are further constraints that hinder the adoption of AI among healthcare professionals (Mollura et al., 2020).
AI integration in healthcare is an emerging discourse in Kazakhstan (Cruz et al., 2024). Some AI technologies, such as PneumoNet and Cerebra, have already been integrated into Kazakhstan's healthcare system (World Bank Group, 2022). Despite these achievements, more research should explicitly focus on the opinions of the doctors, nurses and residents using these AI systems. Understanding their experiences, difficulties and the overall effect on their clinical practice is vital to addressing the challenges and ensuring an effective integration of AI into healthcare.
2 Aim
This study aimed to describe the perspectives of doctors, nurses and residents on healthcare AI and its integration into healthcare settings in Kazakhstan.
3 Method
The study used a descriptive qualitative design to gain insights about healthcare AI from doctors, nurses and residents. The “consolidated criteria for reporting qualitative research” (COREQ) checklist was used in the study (Supplementary File 1).
The research took place at the University Medical Center (UMC) in Astana, Kazakhstan. UMC has a capacity of 856 in-patient beds and handles 500 outpatient ambulatory examinations per shift. A purposive sampling strategy was used to select the participants. The inclusion criteria were doctors, nurses and residents who were aware of AI use in healthcare, were aged 18–65 and could speak and understand Kazakh and Russian. The exclusion criteria were doctors, nurses and residents who were unaware of AI integration in healthcare, could not speak and understand Kazakh and Russian, or fell outside the 18–65 age range.
A total of 15 doctors, nurses and residents were interviewed: eight males and seven females, aged 27 to 59 years, with 5 to 34 years of work experience. The participants worked in the cardiology, neonatology, obstetrics, radiology, surgery and pediatrics units and provided direct care to patients. All participants were familiar with AI and its use in healthcare settings (Table 1).
4 Ethical considerations
This manuscript was part of a research project reviewed and approved by the Institutional Review Ethics Committee of Nazarbayev University School of Medicine (reference number: 2023Sep#01), the National Research Center for Cardiac Surgery Council (18/09/2023) and the UMC Research Ethics Committee (protocol no 6/2023–19/10/2023). The participants signed a written informed consent form to signify their understanding of the study and voluntary participation. The researchers provided the participants with information about the study, such as its aims and procedures and the participants' rights, benefits and potential risks. Participants were informed that they were free to withdraw at any point with no penalty. Any information acquired during this study was kept private, and all data were stored on the principal investigator's password-protected computer. The researchers ensured the confidentiality and anonymity of the respondents by using pseudonyms on the transcript files and data reports. The names of hospitals or persons mentioned by the participants were replaced with Hospital X or Person X, and no identifiers were included in the transcript files or data reports.
5 Data collection
The researchers approached participants in the healthcare facility, targeting moments when they appeared free or less engaged in their duties. The researchers explained the key information about the study and assessed potential participants' familiarity with AI by asking: "Are you aware of AI and how it is used in healthcare settings?" Those who confirmed their awareness of AI and its use in healthcare settings were invited to participate and sign the informed consent form. The participants determined their interview schedule based on their availability.
Data were collected from February 3 to March 31, 2024, through face-to-face semi-structured interviews that lasted 30–60 min and were conducted in Russian or Kazakh. Each interview started with the general question, "Can you describe your perception of using AI in health care?" The guide questions are shown in Supplementary File 2. Follow-up questions were asked based on the participants' responses. Each interview was audio-recorded with the participant's consent. Data saturation was reached with the 14th participant; however, another participant was interviewed to ensure that no new information emerged.
6 Data analysis
Semantic thematic analysis was used to analyze the data (Braun and Clarke, 2019). The first step was familiarization. The researchers manually transcribed the participants' audio-recorded interviews, and professional translators then translated all the transcriptions into English. The researchers read the transcripts several times to ensure they understood the participants' accounts. The second step was coding. Four researchers manually coded the data; keywords or phrases representing the participants' ideas about the topic were coded during translation and transcription. To keep track of the consolidated data, the researchers created memos. Conflicts or disagreements about the codes were resolved by rereading the participants' verbatim information, after which the researchers agreed on the appropriate codes. The third step was generating themes by combining similar codes. The fourth step was reviewing the themes, and the fifth step was defining and naming them. The final step was writing up the analysis by relating the extracted themes, subthemes and codes to the existing scientific literature.
7 Trustworthiness and rigor
The study's rigor was established through consistency and multiple rounds of analysis. Consistent data collection procedures and guide questions were employed. The collected data were analyzed several times; initial findings were revised after deeper analysis, and the researchers ensured that all nuances were captured. To avoid personal biases, the two researchers who conducted the interviews wrote a reflective journal before every interview, which helped them recognize their personal beliefs, values or biases regarding AI in healthcare. To ensure credibility, the researchers interacted regularly with the participants to confirm that the shared experiences accurately reflected the findings. Credibility was further supported by a detailed description of the study's methodology, which allows external researchers to trace the steps of the research process and judge the study's consistency and reliability. The study was also reported in enough detail to allow readers to consider the applicability of its findings to other contexts.
8 Results
AI is increasingly being developed for medical purposes, particularly in diagnosis and treatment decision-making. While AI has undeniably made a positive impact on health, it also gives rise to numerous unanswered questions. The study identified 17 sub-themes, which led to five major themes, namely "Use of AI in Healthcare," "Impact on Healthcare Workflow," "Challenges and Limitations of AI Use in Healthcare," "Ethical and Societal Implications of AI Use in Healthcare," and "Future Prospects and Trends of AI Use in Healthcare" (Table 2).
9 Theme 1: Use of AI in healthcare
Healthcare practitioners believed that AI would streamline their tasks by removing the need for human involvement and decreasing the amount of work. AI enhances the training and education of healthcare practitioners and can provide precise and effective diagnostic and treatment results, improving efficiency in healthcare settings. AI can enhance the precision and efficiency of medical diagnosis, treatments, learning and personalized patient care.
9.1 Enhance education and training
AI is a beneficial tool in medical education in the clinical setting since it helps health professionals to understand and stay updated with intricate medical concepts. Participants mentioned that integrating AI in health practice improves learning outcomes, strengthens information retention and prepares future health practitioners to face the profession's problems. AI-driven training tools and simulations allow healthcare staff to gain knowledge and improve their skills:
"Recently, I saw an education conference at this medical university and the main topic was how AI is used in medical education." Participant 2
9.2 Diagnostic and treatment support
AI assists doctors, nurses and residents in various diagnostic tasks, such as detecting abnormalities in medical imaging or analyzing medical data to identify potential diseases. In hospitals, AI-driven systems assist medical professionals in making diagnostic judgments by assessing patient data, such as laboratory findings, symptoms and medical history, ultimately improving health outcomes:
"…it gives specific results so as not to miss features in clinics now in Kazakhstan oncology projects various AI low-level CT scan of lung cancer AI diagnostics…" Participant 1
AI improves patient care quality through diagnosis, personalized medicine suggestions and treatment plans. It supports treatment as it can suggest the best courses of action by considering each patient's unique characteristics and preferences. It was noted that AI enables doctors, nurses and residents to provide more effective, efficient and personalized care by analyzing complex medical data and generating relevant insights:
"…for example, heart disease, you click on it and AI gives you the result, for example, what tests need to be done, what medications you prescribe…" Participant 4
9.3 Improve patient care management
Healthcare systems increasingly incorporate AI to improve patient care and expedite operations. The healthcare practitioners mentioned that AI addresses practical healthcare issues, such as assisting clinicians and patients with prescription schedule monitoring and planning. The growing incorporation of AI into healthcare systems has the potential to significantly advance patient management and outcomes, accuracy and efficacy across the whole healthcare continuum:
"On the positive side, what we all do in general is to treat the patient… Helps people to use the doctor's time more efficiently." Participant 9
10 Theme 2: Impact on healthcare workflow
This theme encompasses changes in medical practices, resources and staffing patterns resulting from the integration of AI. The doctors, nurses and residents mentioned that AI-driven procedures lead to more efficient resource use and cost savings, enabling healthcare providers to treat more patients while maintaining high standards of care. AI in healthcare improves the efficient and cost-effective distribution of staff, resources and infrastructure to fulfill patients' requirements. Healthcare settings may quickly adapt their staff levels in reaction to changes in demand by using AI-driven predictive analytics to forecast patient volume, severity and staffing needs.
10.1 Work improvement
AI is viewed as a tool for enhancing the work of doctors, nurses and residents, enabling them to work more efficiently and effectively. AI is a beneficial tool in the healthcare sector, assisting healthcare practitioners in delivering better care, improving patient outcomes and enhancing the entire patient experience:
"…AI gives only positive aspects at work, in education and in science." Participant 3
10.2 Workflow efficiency
The integration of AI streamlines healthcare workflows and improves patient outcomes. Telemedicine and remote monitoring are more effective when health data supplied by patients is analyzed in real-time using AI. Healthcare providers can remotely monitor and evaluate patients and implement preventive measures when needed:
"It makes it convenient and helps a person even in remote areas. So, you don’t have to spend so much time, but on the contrary, he can write to you briefly and you just read…" Participant 5
10.3 Reduction of medical errors
AI minimizes medical errors since these technologies can alert health practitioners about mistakes in patient care, provide suggestions for suitable tests or treatments and give feedback on potential diagnoses. Incorporating AI in healthcare (i.e., in diagnostic procedures) improves diagnostic accuracy and treatment effectiveness, reduces errors and improves patient satisfaction:
"The presence of AI allows doctors to check their mistakes and their knowledge. For example, AI gives its answer, and you do it and check in this regard…" Participant 2
11 Theme 3: Challenges and limitations of AI use in healthcare
The conversation also addressed the challenges and limitations associated with AI in healthcare. Participants recognized the significant potential of AI in the healthcare industry, but noted that AI can only partially replace human expertise and may sometimes require human oversight and interpretation because it lacks emotion. Questions of responsibility and liability when mistakes happen due to AI use in clinical practice were also acknowledged. Using AI may also affect healthcare's creativity and credibility and lead to overreliance and misunderstandings, negatively affecting patient care. These concerns could contribute to the reluctance of healthcare systems to adopt AI.
11.1 Lack of empathy and sympathy
One of the limitations of AI use in healthcare practice is the lack of human qualities such as empathy and sympathy, which are essential in patient care. Understanding and sharing the experiences and thoughts of another person is crucial for building rapport and trust with patients, something current AI cannot do. Even though AI can simulate empathy through sentiment analysis and natural language processing, it cannot truly replicate the emotional and intuitive bonds clinicians form with their patients:
"AI cannot replace human beings because, in AI, there is no such thing as empathy and sympathy…" Participant 3
11.2 Responsibility and liability
Another limitation was the issue of responsibility and liability regarding therapeutic recommendations and diagnoses produced by AI. Healthcare practitioners acknowledged that AI could aid in clinical decision-making, but the ultimate duty of treating patients lies with healthcare professionals. Medical personnel may face ethical and legal consequences if AI systems provide inaccurate suggestions that endanger patients:
"There is a huge legal implication in using AI. We might be liable if the AI algorithm is incorrect and results in poor outcomes. We primarily need clear protocols about how AI is used in the decision-making process and who is responsible for any failures." Participant 3
11.3 Overreliance and misunderstandings
Health professionals and patients need to understand AI's capabilities, since poor understanding may lead to overreliance and distrust. Healthcare personnel worry about the possibility of overestimating and misunderstanding AI's capabilities and ignoring its limitations, which may lead to errors in diagnosis, treatment and care when using AI:
"Not every smartphone can handle this program; the technologically advanced category of patients I operated on do not understand…" Participant 7
11.4 Reluctance and readiness of healthcare systems
The need to change infrastructure and policy in healthcare systems to integrate AI into practice made participants reluctant to use it. In addition, healthcare practitioners' distrust of the benefits of AI for patients, administrators and practitioners made them reluctant and unprepared to use AI as a supportive tool in caring for their patients:
"But now, AI is technogenic espionage. For example, if you take any software or some kind of chat, they say we've implemented it so much right now, such a system…" Participant 3
11.5 Impact on healthcare's creativity and credibility
Another downside of AI usage is its impact on the creativity and credibility of healthcare practitioners. Healthcare practitioners mentioned that the widespread use of AI-generated data could diminish their professional identity and autonomy by overshadowing their ideas, viewpoints and experiences. They are also concerned that content produced by AI may not match the required standards of accuracy or contextual appropriateness:
"… the creativity of healthcare practitioners and their credibility will suffer because of AI use, which is alarming." Participant 6
12 Theme 4: Ethical and societal implications of AI use in healthcare
Ethical considerations emerged as a significant theme in the discussion. Healthcare professionals reflected on the importance of patient consent and preferences in using AI. They prioritized patient health and well-being over personal preferences and acknowledged potential concerns about job replacement. Concerns about cybersecurity and patient data protection were also briefly mentioned, indicating the need for ethical guidelines and safeguards in AI implementation. Participants may prioritize ethical issues while using AI to ensure that these tools protect patients' autonomy, privacy and safety. Advocating for transparent and accountable AI in healthcare decision-making can promote justice, equity and accountability.
12.1 Privacy and security concerns
AI in healthcare raises concerns about patient privacy and data security. Patients may experience privacy apprehensions when using communication tools and virtual assistants driven by AI. They may hesitate to share sensitive information or discuss personal health issues if they believe that AI systems are tracking or saving their data:
"The side of that information is that social networks are very developed nowadays. AI seems to be allowed if information is strongly protected in medicine.” Participant 9
Healthcare practitioners were concerned about the risks of technogenic espionage associated with AI systems. Revealing technological secrets jeopardizes patient privacy, confidentiality and data security. Healthcare facilities risk being targeted by cybercriminals, state-sponsored hackers, or competitors seeking to steal patient data, research findings, or proprietary algorithms:
"Our databases are not properly secured and all the medical norms and information systems in the country can be easily accessed online since they are scattered, so there is a need to have a single secured database throughout the country…" Participant 7
12.2 Misinformation and misuse
Misinformation and the potential misuse of AI technology in healthcare settings have ethical and social implications. AI-driven platforms can be misused to deliver information about public health, diagnoses, treatments, therapies and medical disorders. This can erode healthcare practitioner–patient interaction as patients turn to online consultations. Misinformation about health management and treatment is also problematic because AI-driven platforms base their recommendations on algorithms. Thus, AI can harm patients and undermine public trust because of ethical and legal concerns:
"…people who work in private clinics think that they can install one AI instead of doctors and it will solve all issues…" Participant 5
12.3 Need for education and awareness
Education and awareness efforts are crucial to ensure that healthcare professionals and patients understand the capabilities and limitations of AI in healthcare. Healthcare institutions should provide thorough training and awareness programs to educate their patients on the use of AI and the importance of protecting sensitive medical information. Training programs for healthcare professionals should cover incident response tactics, data management protocols and cybersecurity best practices to ensure that staff members and patients remain vigilant and proactive in protecting patient information:
"…Education and awareness efforts are crucial to ensure that doctors, nurses and residents and patients understand the capabilities and limitations of AI in healthcare." Participant 2
13 Theme 5: Future prospects and trends of AI use in healthcare
Healthcare professionals expressed interest in the ongoing advancements in AI and emphasized the need to choose practical applications while avoiding setbacks. AI in healthcare showed potential for improving patient outcomes, research, medical innovation and productivity.
13.1 Normalization and future role of AI in daily workflow
AI is expected to become a standard part of healthcare practice, seamlessly integrated into daily workflows. Healthcare professionals can improve their productivity and efficiency by using AI to automate repetitive operations, speed up administrative duties and offer decision-making assistance. Healthcare facilities may enhance resource usage, address administrative difficulties and focus on delivering the best patient care by smoothly incorporating AI into daily operations:
"…over time, AI will be an integral part of our life. We will live in an AI world." Participant 5
AI was envisioned to play a significant role in everyday healthcare tasks, from diagnosis to treatment planning. Clinical decision support systems powered by AI provide real-time guidance, evidence-based suggestions and predictive analytics to aid healthcare practitioners in treatment planning and patient management. AI algorithms assist clinicians in generating personalized treatment recommendations based on patient data, medical literature and best practices, ultimately improving patient safety and outcomes:
"… In the future, with the use of AI, healthcare practitioners can anticipate when a fatal complication may develop; that is, so much advanced use of AI." Participant 7
13.2 Future development and collaboration opportunities
AI-driven healthcare solutions foster innovation and improvement by offering opportunities for collaboration and development. Integrating AI into healthcare operations in the future can boost innovation, evidence-based practice and patient outcomes:
"…in the future, I would like to collaborate with the Koreans specifically on the thyroid gland of the mammary gland using ultrasound…" Participant 6
13.3 Personalized medicine vs. standard protocols
The future of healthcare may involve a shift towards personalized medicine, enabled by AI, instead of standardized treatment protocols. AI can use patient data such as genetics, medical history and lifestyle factors to create personalized treatment recommendations. Healthcare providers may integrate precision medicine and AI-driven predictive analytics tools into their clinical workflows to customize interventions, improve treatment outcomes and increase patient satisfaction:
"…I think it would be advantageous to have personalized medicine rather than protocols in the future. According to this protocol, you cannot move if something happens; it checks you." Participant 5
14 Discussion
The integration of AI in healthcare has witnessed significant advancements in recent years. In this context, exploring the perspectives of doctors, nurses and residents on AI adoption becomes crucial. The analysis of these perspectives has led to the identification of five key themes: the use of AI in healthcare, its impact on healthcare workflow, the challenges and limitations of AI use in healthcare, the ethical and societal implications of AI use in healthcare and the prospects and trends of AI use in healthcare. These themes form the backbone of our research, shedding light on the multifaceted aspects of AI integration in healthcare.
The study showed the participants' understanding of the use of AI in healthcare: (1) enhanced education and training, (2) diagnostic and treatment support and (3) improved patient care management. A study shows that AI can diagnose faster and more precisely by detecting subtle differences between infected and non-infected images. Such automated diagnosis could help with the early detection of diseases, thus enabling earlier and more effective patient treatment (Pinto-Coelho, 2023). This aspect of AI has already shown real-life results; for example, it has been shown to predict the progression of COVID-19 cases (Bekbolatova et al., 2024). Our participants' responses likewise emphasized the benefits of AI in healthcare.
Moreover, many responses mentioned how AI could improve education and training. Becoming a healthcare worker requires knowing a vast amount of information and being able to use it when necessary (Sheikh et al., 2021). However, medicine never stands still; it constantly evolves, and new information appears daily, making it difficult for the human brain to keep up (Laï et al., 2020). Research suggests that AI in healthcare can analyze, store and retrieve medical data effectively, which aligns with the answers of our participants (Bekbolatova et al., 2024).
One of the findings is the change in the workflow of doctors, nurses and residents due to AI integration. AI assistance is viewed as a tool to improve workflow and make it more efficient. According to a study conducted in China, ophthalmologists believed that AI applications would ease doctors' burden, allowing greater patient flow (Zheng et al., 2021). It is also worth mentioning that AI could assist with documentation work (Ng et al., 2022). Our participants responded, "Telemedicine and remote monitoring would be the best way to evaluate patient's symptoms, vital signs and condition after admission." This is an optimal approach to assessing patients after discharge, improving workflow efficiency and revolutionizing patient care (Tursynbek et al., 2024).
Furthermore, a previous study identified that AI could provide remote patient support while helping doctors save time (Fischer et al., 2023). Another significant finding was that most participants mentioned that AI technologies could improve diagnosis, specifically in the early stages of lung and thyroid gland cancer. AI can support such diagnoses by evaluating several scan types converted into digital images, such as CT scans for lung cancer and MRI for brain tumors (Ahmad et al., 2021). During the interviews, participants clarified that AI is mainly used as a supportive tool in diagnostics, which aligns with Bekbolatova et al.'s (2024) statement that "AI is an accessory tool for clinicians that cannot replace the judgment and decision-making." Despite these positive aspects of workflow, the conversations also included challenges and limitations in using AI. Algorithms cannot replace human qualities such as empathy and sympathy, through which patient–doctor relationships are typically built. Moreover, AI-driven machines collect patient data without informed consent, raising further concerns about patient safety (Gerke et al., 2020). Cybersecurity attacks and the theft of patient information can harm patients' well-being. Another issue concerns liability and legal problems: in the case of a wrong diagnosis, it is unknown who will be responsible (Bekbolatova et al., 2024). This concern was also raised during the interviews with our participants.
There is also a growing concern about the ethics of AI use. Our study showed that healthcare professionals are concerned about patient privacy and the misuse of AI, which aligns with previous studies on this topic. Research focusing on ethics in pathology states that the risks of AI are sometimes underestimated and identifies which aspects of ethics should be addressed when using AI (Chauhan and Gullapalli, 2021). Another study highlights the importance of ethical safeguards, concluding that ethical regulations ensure the safe use of AI's transformative opportunities (Elendu et al., 2023). While our research included healthcare professionals ranging from nurses to residents to surgeons, most published papers focus on pathology and radiology due to their structured and image-intensive nature, which makes them highly attractive to AI researchers. Moreover, pathology and radiology contribute to the patient's diagnosis, prognosis and management, aligning with AI's decision-making abilities (Chauhan and Gullapalli, 2021). Future studies could therefore focus on these fields to provide more specific insights.
The discussions also revealed that doctors, nurses and residents expect AI to develop further and enhance patient outcomes, research and medical innovation. One stated intention was to collaborate with professionals outside the country to integrate AI technologies into healthcare settings. The conversations highlighted that AI would be a great way to shift protocol-based treatments toward personalized ones. AI is already used in drug consultations, providing recommendations tailored to the patient's medical history, socioeconomic status and medical condition (Alowais et al., 2023). Our participants showed high interest in and desire for personalized medicine rather than protocols.
15 Limitations
The study only included doctors, nurses and residents, who do not represent the entire population of healthcare workers. The perceptions of healthcare workers at the UMC, a modern healthcare setting, may not represent those in rural areas or smaller hospitals. A descriptive qualitative design does not produce easily generalizable data, and there might be cases of researcher bias. It is recommended that mixed-method research be conducted to enrich these qualitative results.
16 Conclusions
This study explored the perspectives of doctors, nurses and residents on AI integration in Kazakhstan's healthcare setting. While healthcare professionals perceive the integration of AI into their practices positively, education on the proper and ethical use of AI in healthcare is warranted. AI can help improve care provision and contribute to the early diagnosis of diseases. AI can also be beneficial in training and educating healthcare professionals on sophisticated medical concepts, and healthcare professionals recognize AI's critical role in advancing the health professions' education and training. AI-driven tools can advance the skills and improve the knowledge of the healthcare team. Hence, the ethical use of AI in education should be explored and advocated. However, there are limitations and challenges in AI integration, such as technological unpreparedness and legal, ethical and cybersecurity issues, that should be managed to maximize its use in patient care.
The results are essential in creating new health policies that ensure the ethical use of AI in healthcare. The findings highlight the need to prepare healthcare professionals for the effective and ethical use of AI in healthcare. This preparation requires updating health professions curricula to integrate AI courses into their theoretical and clinical components. Moreover, continuing education on AI and other innovations is needed to prepare healthcare professionals for the future of healthcare. Healthcare facilities should provide educational opportunities on AI, such as in-house training, professional development programs and funding for educational endeavors on AI. Education authorities in nursing, medicine and other health professions should also develop graduate programs focused on AI in healthcare to develop experts in this field.
Ethical Approval
The study protocol was approved by the Nazarbayev University School of Medicine Institutional Research Ethics Committee (NUSOM-IREC; reference number 2023Sep#01), the National Research Center for Cardiac Surgery Council (protocol №6/2023) and the UMC Research Ethics Committee (protocol №6/2023).
Funding
The study did not receive any form of funding.
CRediT authorship contribution statement
Jonas Preposi Cruz: Supervision, Validation, Resources, Formal analysis, Project administration, Writing – original draft, Conceptualization, Data curation, Software, Writing – review & editing, Investigation, Visualization, Methodology. Gulnur Nadirbekova: Software, Resources, Investigation, Writing – review & editing, Conceptualization, Data curation, Methodology, Formal analysis, Validation. Alma Tursynbek: Writing – review & editing, Validation, Writing – original draft, Investigation, Methodology, Project administration, Conceptualization, Data curation, Resources, Visualization, Software. Dilnaz Zhaksylykova: Writing – review & editing, Validation, Investigation, Methodology, Conceptualization, Software, Formal analysis, Writing – original draft, Resources, Project administration, Data curation, Visualization. Balay-odao Ejercito Mangawa: Software, Writing – review & editing, Methodology, Conceptualization, Formal analysis, Resources, Visualization, Data curation, Writing – original draft, Supervision, Investigation, Validation, Project administration.
Declaration of Generative AI and AI-assisted technologies in the writing process
During the preparation of this work the authors used Grammarly in order to check the grammar and English use. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgement
The authors used Grammarly in proofreading the manuscript.
Appendix A Supporting information
Supplementary data associated with this article can be found in the online version at
Table 1. Characteristics of the participants.
| Participants | Age | Gender | Years of experience | Profession | Area of assignment |
| Participant 1 | 36 | Male | 8 | Doctor | Radiology |
| Participant 2 | 27 | Male | 5 | Resident | Cardiology |
| Participant 3 | 59 | Male | 34 | Doctor | Radiology |
| Participant 4 | 52 | Female | 30 | Nurse | Interventional radiology |
| Participant 5 | 46 | Female | 23 | Nurse | Cardiology |
| Participant 6 | 45 | Female | 25 | Nurse | Obstetrics |
| Participant 7 | 46 | Male | 24 | Doctor | Cardiac surgery |
| Participant 8 | 28 | Female | 5 | Nurse | Cardiology |
| Participant 9 | 28 | Male | 9 | Nurse | ICU |
| Participant 10 | 48 | Female | 29 | Nurse | Interventional radiology |
| Participant 11 | 37 | Male | 13 | Doctor | Neonatology |
| Participant 12 | 51 | Female | 32 | Nurse | Surgery |
| Participant 13 | 27 | Male | 6 | Nurse | Radiology |
| Participant 14 | 29 | Female | 5 | Resident | Surgery |
| Participant 15 | 51 | Male | 29 | Doctor | Pediatrics |
Table 2. Sub-themes and major themes.
| Sub-themes | Major Themes |
| Enhance education and training | Use of AI in Healthcare |
| Diagnostic and Treatment Support | |
| Improve patient care management | |
| Work Improvement | Impact on Healthcare Workflow |
| Workflow Efficiency | |
| Reduction of Medical Errors | |
| Lack of Empathy and Sympathy | Challenges and Limitations of AI Use in Healthcare |
| Responsibility and Liability | |
| Overreliance and Misunderstandings | |
| Reluctance and Readiness of Healthcare Systems | |
| Impact on Healthcare's Creativity and Credibility | |
| Privacy and Security Concerns | Ethical and Societal Implications of AI Use in Healthcare |
| Misinformation and Misuse | |
| Need for Education and Awareness | |
| Normalization and Future Role of AI in Daily Workflow | Future Prospects and Trends of AI Use in Healthcare |
| Future Development and Collaboration Opportunities | |
| Personalized Medicine vs. Standard Protocols |