Content area
Aim
Internship programs are important components of teaching and learning that provide medical students with opportunities for real-life learning. This study aimed to evaluate the internship program for students of xxx using the CIPP (Context, Input, Process, and Product) model.
Materials and methods
A cross-sectional descriptive study was performed on 305 students and 15 faculty members of xxx. Data were collected using a researcher-developed questionnaire based on the CIPP model. The internship program was evaluated in four domains (context, input, process, and product) from the perspectives of students and faculty members. The scores obtained for each domain were analyzed using SPSS v.21.
Results
Context evaluation showed a significant discrepancy between stakeholders (p < 0.001): 78% of students reported an inadequate environmental needs assessment versus 35% of faculty, and 72% of students identified misalignment between goals and clinical realities versus 28% of faculty. Input evaluation showed no significant difference (p = 0.32), with comparable ratings for resource adequacy (students: 4.1/6; faculty: 4.3/6) and similar perceptions of curriculum design quality. Process evaluation revealed major implementation gaps (p < 0.001): supervision quality was rated 2.8/6 by students versus 4.7/6 by faculty, and 65% of students were dissatisfied with feedback mechanisms. Product evaluation showed strong correlations among CIPP domains: context–input r = 0.769, context–process r = 0.733, and context–product r = 0.724 (all p < 0.001).
Conclusions
The design and implementation of evaluation programs based on the CIPP model may help improve internship programs and achieve students’ professional competencies. The positive and negative findings in this study should be considered by decision makers and healthcare officials when designing and implementing internship programs. Further longitudinal studies may be required to confirm these findings.
Clinical trial number
Not applicable.
Introduction
Primary health care (PHC) is the basis of the largest health care provision system [1]. Once graduated, medical students will be part of the healthcare system and will be responsible for providing, maintaining and promoting community health. In order to acquire efficiency in their prospective profession, students must attain the necessary knowledge and skills during their studies/training in order to perform their duties and responsibilities in the healthcare system and affiliated organizations [2].
Many educational institutions, alongside their traditional education, offer students practical education programs such as field internships and experiential learning activities in the classroom. The internship is an important component of teaching and learning that provides medical students with opportunities for practical learning [3] and for working in real-life environments to acquire practical skills before entering a future profession [4]. During the internship period, students, together with faculty members and in interaction with the environment, apply the concepts they have learned in practice [5]. Education is influenced by four factors: the instructor, the learner, the lesson content, and the temporal and spatial conditions of the teaching setting [6].
While field internships, as one of the main methods of experiential education, play a key role in achieving students’ professional competencies, findings from various domestic and foreign studies show that numerous problems in internships prevent students from achieving professional empowerment and efficiency in their job responsibilities. These include a lack of coherent planning [7]; the absence of specific task descriptions for students and internship faculty members; a mismatch and lack of coordination between the material presented and its application in the clinical environment [8]; inefficiency in promoting technical and communication skills; and students’ insufficient access to the instructor [9]. Other reported issues include students’ dissatisfaction arising from poor management of the internship, confusion of students in the ward, the assignment of duplicate and trivial tasks, and the lack of a proper evaluation system [9]. A study by Abdolkhani and colleagues (2014) added that a lack of educational and welfare facilities, together with poor cooperation and inadequate communication with the personnel of the internship wards, may significantly hinder efforts to achieve internship objectives [10].
Learning professional characteristics in the internship environment is undoubtedly of great importance. Professional skills include two basic parts: specialized skills and promotion skills. Many people have sufficient knowledge, yet there is a large gap between their knowledge and their performance; in practice, they do not meaningfully apply what they have learned [11].
Given the obstacles to the effectiveness of field internship programs, and the considerable funds universities spend to implement them, it seems necessary to evaluate the factors threatening the effectiveness of the field internship program. Evaluation, as a tool for quality improvement, makes it possible to identify the strengths and weaknesses of such programs. By strengthening the positive aspects and eliminating the shortcomings, appropriate steps can be taken to create change and reform in the educational system. Evaluation is the extent to which the educational goals of a health intervention program are achieved [12]. Program evaluation is among the most important strategies for receiving feedback and may move education from stagnation to dynamism. Unfortunately, despite the importance of evaluating curricula, only a small portion of resources is devoted to this issue [13].
In 1983, Stufflebeam showed that, among evaluation models, the CIPP model is a very useful approach for educational evaluation, as it provides a systematic and structured approach for many educational programs [14]. According to the CIPP model, the most important goal of evaluation is to improve training courses. In addition, rather than focusing only on individual development, the CIPP model provides information that decision makers in educational institutions can use to evaluate programs, by providing organized feedback on current work. The model helps administrators prioritize basic needs and allocate available resources to effective activities. CIPP is an acronym for Context, Input, Process, and Product [15, 16] (Fig. 1).
[IMAGE OMITTED: SEE PDF]
The purpose of context evaluation is to provide a rational basis for setting educational goals. It is also aimed at making analytical efforts to identify relevant elements in the learning environment and to identify problems, needs, and opportunities in an educational context or situation. The purpose of input evaluation is to facilitate the implementation of the program designed in the field phase. In addition, input relates to human and financial resources, policies, educational changes, educational strategies, barriers and limitations of the educational system. The philosophy behind performing process evaluation is to identify or forecast implementation problems during training activities and the desirability of the implementation process of these activities. In fact, process evaluation is some sort of quality control during program implementation. Product evaluation is performed to judge the desirability of the effectiveness of educational activities where the results of the program are compared with the goals of the program, and the relationship between expectations and actual results is determined [17].
The CIPP evaluation model provides stakeholders with constructive information for improving training programs and making informed decisions [16]. Its unique features can be used to convince stakeholders that major changes are needed in the training programs of undergraduate (and graduate) medical courses [18]. Given the role of the CIPP evaluation model in improving training courses, the absence of comprehensive, model-based internship evaluation programs is notable. The present study was therefore designed to evaluate the field internship program for students of xxx using the CIPP model.
Methods
In this cross-sectional study, all students of xxx who were attending internships, together with the faculty members responsible for offering the internship course, were recruited. In total, 305 students and 15 faculty members participated.
Program type
A mandatory clinical clerkship for final-year medical sciences students.
Duration
12 months (divided into 4-week rotations across 6 specialties).
Disciplines
Internal Medicine (30%), Surgery (25%), Pediatrics (20%), OB/GYN (15%), Public Health (5%), Electives (5%).
Settings
Tertiary teaching hospitals (70%), community clinics (20%), research labs (10%).
A researcher-developed and validated questionnaire was used to collect information from internship students and faculty members. The questionnaire was prepared using the CIPP evaluation model checklist, an internationally accepted and validated checklist [15]. Questionnaires were designed in two parts to accommodate both students and faculty members. The first part covered personal information, while the second part covered internship evaluation based on the CIPP model in four domains: context (questions 1 to 18), input (questions 19 to 27), process (questions 28 to 41), and product (questions 42 to 55). The instrument therefore comprised 55 questions in total, scored on a 6-point Likert scale (strongly agree = 6, agree = 5, slightly agree = 4, slightly disagree = 3, disagree = 2, strongly disagree = 1). The minimum and maximum scores achievable with this instrument were 55 and 330, respectively.
Instrument scoring protocol: individual items were rated on a 6-point Likert scale (1 = strongly disagree to 6 = strongly agree). Domain scores were the sums of the items in each domain: context, 18 items (range 18–108); input, 9 items (range 9–54); process, 14 items (range 14–84); product, 14 items (range 14–84). The total possible score (sum of all domains) was 55–330.
Interpretation Guidelines:
| % of max score | Interpretation |
|----------------|----------------|
| <50% | Unsatisfactory |
| 50–75% | Needs improvement |
| >75% | Satisfactory |
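As an illustrative sketch (not the authors' actual analysis code), the domain scoring and interpretation bands described above can be expressed as follows; the function names are hypothetical, and the item counts per domain follow the instrument's scoring protocol:

```python
# Illustrative sketch of the CIPP domain scoring described in the text.
# Item counts per domain follow the instrument: Context 18, Input 9,
# Process 14, Product 14; each item is scored 1-6.

DOMAIN_ITEMS = {"context": 18, "input": 9, "process": 14, "product": 14}

def domain_score(domain: str, item_responses: list) -> int:
    """Sum Likert responses (1-6) for one domain, validating the item count."""
    expected = DOMAIN_ITEMS[domain]
    if len(item_responses) != expected:
        raise ValueError(f"{domain} expects {expected} items")
    if any(not 1 <= r <= 6 for r in item_responses):
        raise ValueError("each response must be between 1 and 6")
    return sum(item_responses)

def interpret(score: int, n_items: int) -> str:
    """Map a domain score to the interpretation bands (% of maximum score)."""
    pct = 100 * score / (6 * n_items)
    if pct < 50:
        return "Unsatisfactory"
    elif pct <= 75:
        return "Needs Improvement"
    return "Satisfactory"
```

For example, a student context score of 76.39 out of a maximum of 108 is about 71% of the maximum, which falls in the "Needs Improvement" band.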
The validity of the questionnaire was confirmed in the Mazloomy and Moradi [19] study by reviewing different texts and experts’ viewpoints. Reliability was determined by calculating Cronbach’s alpha coefficient. For students, the total reliability coefficient of the questionnaire was 0.97, and the reliability coefficients of the four domains ranged from 0.87 to 0.97. For faculty members, the total reliability coefficient was 0.92, and the domain coefficients ranged from 0.68 to 0.93.
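For readers unfamiliar with the reliability statistic used here, the following minimal sketch shows how Cronbach's alpha is computed from respondent-by-item data. The data and function name are illustrative only; the study's coefficients were computed from the actual questionnaire responses:

```python
# Minimal sketch of Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item
# variances / variance of total scores), where k is the number of items.
from statistics import pvariance

def cronbach_alpha(rows):
    """rows: one list of item responses per respondent (all same length)."""
    k = len(rows[0])                       # number of items
    items = list(zip(*rows))               # transpose: one tuple per item
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var / total_var)
```

Note that using population variance throughout is a conventional choice; the (n-1)/n correction factors cancel in the ratio, so sample variances would give the same alpha.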
Following the distribution of the questionnaires among students and faculty members, the collected data were analyzed using SPSS v.21, applying descriptive statistics (measures of central tendency and dispersion) and inferential statistics (independent-samples t-tests and correlation analysis). For each questionnaire, the total score obtained in each domain was calculated separately. The mean and standard deviation of the responses to the questions in each domain were reported in separate tables for the student and faculty questionnaires. Independent-samples t-tests were used to examine differences between the perspectives of students and faculty members.
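The between-group comparison and the domain correlations could be reproduced outside SPSS along the following lines. This is a hedged sketch assuming SciPy is available; the score lists below are invented for illustration and are not the study's data:

```python
# Hedged sketch of the analyses described above: an independent-samples
# t-test comparing student and faculty domain scores, and a Pearson
# correlation between two CIPP domains. All scores here are hypothetical.
from scipy import stats

student_context = [70, 75, 78, 74, 80, 76, 73, 77]   # hypothetical scores
faculty_context = [88, 91, 85, 92, 89, 90]           # hypothetical scores

t_stat, p_value = stats.ttest_ind(student_context, faculty_context)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Pearson correlation between two domains, as in the correlation matrix
context = [70, 75, 78, 74, 80, 76, 73, 77]           # hypothetical
product = [55, 58, 62, 57, 65, 60, 56, 61]           # hypothetical
r, p = stats.pearsonr(context, product)
print(f"r = {r:.3f}, p = {p:.4f}")
```

`stats.ttest_ind` defaults to the equal-variance (Student's) t-test, matching the usual SPSS independent-samples output when equal variances are assumed.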
Results
The results obtained from the 305 students were as follows: the mean (standard deviation) of students’ age was 22.11 (± 3.70) years. Among the students, 69.5% (212) were female and 29.2% (89) were male. The mean (standard deviation) of students’ grade point average was 17.02 (± 1.33).
The results obtained from the 15 faculty members were as follows: most were from the School of Nursing and Midwifery (40%), 26.7% (4) were from the School of Public Health, and 26.7% (4) were from the School of Paramedical Sciences. Nine (60%) held a master’s degree and 26.7% (4) held a PhD; 46.7% (7) had a permanent employment contract. Sixty percent (9) were male and 40% (6) were female.
In evaluating the internship program in terms of context, the overall mean score from the students’ perspective was 76.39 ± 1.56, and from the faculty members’ perspective it was 89.66 ± 9.28 (see Table 1).
[IMAGE OMITTED: SEE PDF]
Given the results illustrated in Table 2, the total mean score for the input domain was 44.69 ± 11.00 from the students’ perspective and 41.85 ± 5.84 from the faculty members’ perspective.
[IMAGE OMITTED: SEE PDF]
According to the results shown in Table 3, the overall mean score for the process domain was 55.48 ± 11.39 from the students’ perspective and 64.50 ± 11.02 from the faculty members’ perspective.
[IMAGE OMITTED: SEE PDF]
According to the results shown in Table 4, the total mean score for the product domain was 56.61 ± 1.46 from the students’ perspective and 68.41 ± 11.13 from the faculty members’ perspective.
[IMAGE OMITTED: SEE PDF]
According to Table 5, the correlation matrices showed positive and significant correlations among the four domains (context, input, process, and product) from both the students’ and the faculty members’ perspectives.
[IMAGE OMITTED: SEE PDF]
Discussion
CIPP model evaluation revealed divergent perspectives between students and faculty across all program domains. While both groups identified strengths and weaknesses in the internship program, their areas of satisfaction and concern differed significantly. Students demonstrated particular appreciation for educational content quality but expressed consistent concerns about practical implementation aspects.
Students showed moderate satisfaction (76.39) with educational content but expressed concern about curriculum–goal misalignment. This contrasts with the faculty focus (89.66) on pedagogical tools, mirroring Nabilou et al.’s findings regarding communication gaps in clinical training [20]. The 13-point satisfaction gap suggests fundamental differences in program expectations: faculty prioritize structural elements while students value practical relevance. In the study of Rouzbahani et al., students rated the performance of trainee coaches as very good (61%). Paying attention to factors such as following up on trainees’ problems, explaining trainees’ duties before the course begins, and monitoring their learning of practical skills may improve the quality of education [21].
Regarding program inputs, students reported the highest satisfaction with preparatory components (logbook access: 85%; orientation sessions: 78%) but significant dissatisfaction with infrastructure limitations (physical space adequacy: 42%; equipment availability: 38%). This resource-related discontent contrasts with Farzianpour et al.’s (2015) null findings regarding physical environment impacts [22], potentially reflecting our participants’ skills-heavy curriculum demands. The emphasis on preparatory elements aligns with Rezaei et al.’s demonstration that structured program foundations, particularly those bridging theory–practice gaps, enhance trainee satisfaction [23]. Our results suggest that while educational scaffolding is valued, inadequate physical resources may undermine these preparatory benefits in clinical skill acquisition.
Regarding process evaluation, students reported high satisfaction with program adherence to timelines (82%) and faculty communication (78%), but significant dissatisfaction with infrequent logbook reviews (45%). This aligns with Nabilou et al.’s findings [20], where logbooks were deemed critical for guiding clinical activities. Effective logbook use, when paired with structured feedback, enhances training clarity and purpose—a point further supported by Mehraban’s work [24], which identified gaps in routine skill exposure and supervision consistency. These implementation challenges (e.g., irregular reviews, variable supervision) directly impact competency development, suggesting a need for standardized logbook protocols and faculty training to ensure consistent trainee oversight.
Product evaluation revealed students’ highest satisfaction with skill acquisition through teaching aids (78%), aligning with Abedini et al.’s findings that infrastructure limitations significantly affect training quality [25]. This domain highlights the critical relationship between program outcomes and expectations, where accessible simulation resources (models, moulage) directly enhance clinical preparedness. Our results demonstrate that pre-clinical skill mastery in simulated environments serves two key functions: (1) building technical competence and (2) reducing the stress of transitioning to real-world settings. However, inconsistent access to these resources, as noted in comparable programs [19], creates variability in baseline competency, ultimately compromising the internship’s effectiveness. These findings underscore the need for standardized simulation training protocols prior to clinical placements.
In terms of context, the faculty members attached great importance to the use of educational equipment for teaching practical skills and to receiving feedback from students during the implementation process to improve the quality of the internship program. Students, however, were less satisfied with the assessment methods used to measure actual performance. In the study conducted by Salimi et al. [26], the faculty members were relatively satisfied with the internship program. Moreover, Makarem et al. (2013) [27] and AliMohammadi et al. (2010) [28] reported satisfaction from the faculty members’ viewpoint with training experience, educational goals, and goal achievement in medical school evaluation and in the status of education in oral health and social dentistry. Heidari et al.’s study evaluating the quality of the public health internship program based on the CIPP model at Golestan University of Medical Sciences showed no significant difference between the views of continuous and non-continuous students, or between the views of students and instructors, in evaluating the quality of the internship program across the phases of the study [29].
In terms of input, faculty members reported the highest satisfaction with access to the logbook and the least satisfaction with the inadequate number of faculty members per student. In the studies conducted by Mazloumi and Moradi (2018) [19], Ali Mohammadi et al. (2010) [28], and Toulabi et al. (2008) [30], no adequate physical space was available in the faculty for teaching practical skills. In those studies, from the students’ perspective, the use of appropriate space was one of the elements affecting the quality of educational programs. Based on the available evidence, providing educational facilities in the schools, clinical training facilities, appropriate space, and scientific resources are highly effective elements in the quality of training programs.
In terms of process, faculty members were most satisfied with the implementation of the internship program within the set time and with the proper communication between students and faculty members, and they were least satisfied with the daily review and confirmation of the logbook. The logbook is a useful tool for training and learning and is also recommended as a ‘study guide’ to document the learning process; it serves as a student evaluation tool, makes evaluation more objective, and increases student satisfaction. Studies suggest that structured logbooks can support experiential learning [31]. By recording their explanations in logbooks, students can develop new skills, new attitudes, and new ways of thinking [32]. It therefore seems that faculty members should place more emphasis on students completing the minimum required training and recording practical activities in the logbook daily, under their direct supervision.
Khodabandeh et al.’s study evaluating the educational program of the Kerman Medical School based on the CIPP model showed that, from the participants’ perspective, the context, input, and process domains, and the medical school as a whole, were quite favorable, and the school’s output was relatively favorable. The results also showed a significant difference between the views of students and professors regarding the context, process, and output of the medical school: students evaluated the context more favorably, while professors evaluated the input, process, output, and the faculty as a whole more favorably [33].
In terms of product, the faculty members expressed the most satisfaction with better learning of practical skills through educational aids and equipment, and the least satisfaction with the programs offered within the internship. In Mazloomy and Moradi’s study [19], the findings indicated dissatisfaction with the attitude of officials and staff in various environments; no evidence from similar studies on faculty members’ views was available. From the students’ perspective, however, the studies of Ahanchian et al. (2017) [9] and Tabrizi et al. (2015) [34] raised factors such as a lack of awareness and cooperation among health center staff, poor cooperation by health teams, and a lack of coordination between the university and health centers as important problems in clinical training [9, 34,35,36]. In contrast, in the study conducted by Pournamdar et al. [37], the conduct of the personnel and staff of the training environment was reported as desirable [10]. Heidari et al.’s study evaluating the quality of the public health internship program based on the CIPP model at Golestan University of Medical Sciences showed that, in each phase, evaluation using the CIPP model identified the weaknesses and strengths of the internship from the perspectives of students and instructors. The CIPP evaluation model has the potential to guide policymakers and stakeholders in making informed decisions about implementing reforms in an educational program [29].
Mazloomy et al.’s study evaluating the public health internship program of Yazd University of Medical Sciences using the CIPP model showed that the design and implementation of evaluation programs based on the CIPP model can help develop the knowledge required for the internship program and play an effective role in helping students achieve professional capabilities [19].
The main advantage of evaluating an educational program with the CIPP model is that the context, input, process, and product of the program are observed and evaluated in a systematic way. This helps educational officials make appropriate decisions, based on the strengths and weaknesses of the program, about continuing, stopping, or revising it. The differences between the views of students and instructors regarding each stage of the model, in various studies and in the present study, reflect differences in the type of educational program and the available facilities, and will inform future educational planning.
Limitations of this study
One limitation of this study is that the internship program was evaluated at a single university and in a single course of study; the findings may therefore not generalize to other universities or settings. Given the emergence of COVID-19 and the rise of e-learning, new studies will be required to assess new internship strategies under stricter guidelines. In addition, this was a cross-sectional evaluation of students’ and faculty members’ satisfaction with internship programs; a more robust design, such as a longitudinal study with a larger sample size, may provide more convincing evidence.
Conclusion
The CIPP evaluation model was effective in determining the strengths and weaknesses of the internship program with respect to context, input, process, and product. Although both students and faculty members expressed satisfaction with some aspects of the internship program, especially the context domain, other aspects, such as the input domain, were clearly associated with weaknesses. These strengths and weaknesses should be considered by decision makers and officials responsible for the design and implementation of such internship programs. Further longitudinal studies may be required to confirm these findings.
Student satisfaction with educational content
1. Overall satisfaction

Mean satisfaction score: 4.16 (± 1.26), on a scale from 1 = very dissatisfied to 6 = very satisfied. Interpretation: moderately positive (69.3% of the maximum score).

2. Most satisfactory aspects (highest-rated content components)

| Aspect | Mean score | Key finding |
|--------|------------|-------------|
| Practical skill training | 4.53 ± 1.32 | Hands-on learning was most valued |
| Alignment with MOHME guidelines | 4.85 ± 3.20 | Strong compliance with national standards |
| Use of educational equipment | 4.27 ± 1.38 | Effective resource utilization |

3. Least satisfactory aspects (lowest-rated content components)

| Aspect | Mean score | Key issue |
|--------|------------|-----------|
| Theoretical lecture quality | 3.59 ± 1.61 | Didactic methods need improvement |
| Daily logbook implementation | 3.44 ± 1.53 | Perceived as bureaucratic burden |
| Goal–curriculum alignment | 1.27 ± 0.92 | Critical gap in foundational planning |
Data availability
The datasets used and/or analyzed during the current study are available by reasonable request from the author via [email protected].
Abbreviations
CIPP:
Context, Input, Process, and Product
PHC:
Primary health care
Sellera PEG, Pedebos LA, Harzheim E, Medeiros OLd, Ramos LG, Martins C, et al. Monitoring and evaluation of primary health care attributes at the National level: new challenges. Ciênc Saúde Coletiva. 2020;25:1401–12.
Yarber L, Brownson CA, Jacob RR, Baker EA, Jones E, Baumann C, et al. Evaluating a train-the-trainer approach for improving capacity for evidence-based decision making in public health. BMC Health Serv Res. 2015;15(1):1–10.
Kolb DA. Experiential learning: experience as the source of learning and development. FT Press; 2014.
Kaas CW, Batt C, Bauman D, Schaffzin D. Delivering Effective Education in Externship Programs (from Building on Best Practices). Stetson University College of Law Research Paper. 2015(2015-6).
Mahmoody Z, Mahmoody F, Mobaraki A, MardanParvar H. Status of internship clinical from viewpoint of Yasoj senior operation room and anesthesia students. J Educ Ethics Nurs. 2014;3(3):9–13.
Hadizadeh F, Firuzi M. Clinical education problems from nursing students’ point of view in Gonabad medical faculty. Iran J Med Educ. 2005;5.
Parvizrad P, Rezaei S. Clerkship of Public Health from the Students and the Faculty Perspective: A Qualitative Research. 2014.
Cook DA, Levinson AJ, Garside S. Method and reporting quality in health professions education research: a systematic review. Med Educ. 2011;45(3):227–38.
Ahanchian M, Sharafi S, Vafaee M, Hajiabadi F. Evaluate the effectiveness of internship program in nursing student using Kirkpatrick’s model. Res Med Educ. 2017;9(1):17–9.
Abdolkhani R, Azizi S, Sarikhani L. A survey on the viewpoint of graduates of medical records bachelor degree about strengths and weaknesses of the internships program in Ahvaz jundishapour university of medical sciences. Educational Dev Judishapur. 2014;5(1):12–20.
Tabrizi J, Mardani L, Kalantari H, Hamzehei Z. Clerkship from the perspective of students of health services management and family health in Tabriz university of medical sciences. Iran J Med Educ. 2011;10(4).
Van Melle E, Frank JR, Holmboe ES, Dagnone D, Stockley D, Sherbino J, et al. A core components framework for evaluating implementation of competency-based medical education programs. Acad Med. 2019;94(7):1002–9.
Tremblay K, Lalancette D, Roseveare D. Assessment of higher education learning outcomes: feasibility study report, volume 1–Design and implementation. Paris, France: Organisation for Economic Co-operation and Development; 2012.
Nouraey P, Al-Badi A, Riasati MJ, Maata RL. Educational program and curriculum evaluation models: a mini systematic review of the recent trends. Univers J Educ Res. 2020;8(9):4048–55.
Stufflebeam DL. CIPP Evaluation Model Checklist. 2016. Available from: http://wmich.edu/evalctr/checklists
Frye AW, Hemmer PA. Program evaluation models and related theories: AMEE guide 67. Med Teach. 2012;34(5):e288–99.
Saif AA. Educational Measurement, Assessment and Evaluation. 7th ed. Tehran: Dowran Publisher; 2017.
Mirzazadeh A, Gandomkar R, Hejri SM, Hassanzadeh G, Koochak HE, Golestani A, et al. Undergraduate medical education programme renewal: a longitudinal context, input, process and product evaluation study. Perspect Med Educ. 2016;5(1):15–23.
Mazloomy Mahmoudabad SS, Moradi L. Evaluation of externship curriculum for public health course in Yazd university of medical sciences using CIPP model. Bimon Educ Strategies Med Sci. 2018;11(3):28–36.
Nabilou B, Amirzadeh J, Mirzapour S, Salem Safi P, Yusefzadeh H. Evaluation of clerkship quality of public health students in Urmia Medical Sciences University in 2017. Nurs Midwifery J. 2018;16(4):218–24.
Rouzbahani F, Sheykhtaheri A, Farzandipour M, Rangraz Jeddi F, Mobarak Ghamsari Z. Evaluation of training educators performance from points of views of medical record students in Kashan university of medical sciences, Iran. Health Inform Manage. 2011;8(2).
Farzianpour F, Eshraghian MR, Emami AH, Hosseini S. Assessment of training and internship programs in hospitals based on A survey on Tehran university of medical sciences students. Payavard Salamat. 2015;8(5):427–36.
Rezaei P, Damanabi S, Ghaderi Nansa LGN. Designing and assessment of apprenticeship comprehensive program for health information technology students. Iran J Med Educ. 2017;17(0):323–34.
Adel Mehraban M, Moladoust A. Evaluation of nursing management internship: A mixed methods study. Iran J Med Educ. 2015;14(11):972–87.
Abedini S, Aghamolaei T, Jomehzadeh A, Kamjoo A. Clinical education problems: the viewpoints of nursing and midwifery students in hormozgan university of medical sciences. Hormozgan Med J. 2009;12(4):249–53.
Salimi T, Khodayarian M, Rajabioun H, Alimandegari Z, Anticchi M, Javadi S, et al. A survey on viewpoints of nursing and midwifery students and their clinical instructors at faculty of nursing and midwifery of Shahid Sadoughi university of medical sciences towards clinical education during 2009–2011. J Med Educ Dev. 2012;7(3):67–78.
Makarem A, Movahed T, Sarabadani J, Shakeri MT, Asadian Lalimi T, Eslami N. Evaluation of educational status of oral health and community dentistry department at Mashhad dental school using CIPP evaluation model in 2013. J Mashhad Dent School. 2014;38(4):347–62.
Alimohammadi T, Rezaeian M, Bakhshi H, VaziriNejad R. The evaluation of the medical school faculty of Rafsanjan university of medical sciences based on the CIPP model in 2010. J Rafsanjan Univ Med Sci. 2013;12(3):205–18.
Heidari A, Khademi J, Khatirnamani Z, Rafiei N, Mirkarimi S-K, Charkazi A, et al. Evaluating the quality of the public health internship program based on the CIPP model at Golestan university of medical sciences. Horizons Med Educ Dev. 2021;12(2):20–6.
Toulabi T, Janani F, Mohammadi EQ. The Appropriateness of Educational Programs’ Objectives for Professional Needs: The Viewpoints of Khorramabad School of Nursing and Midwifery Graduates. Iran J Med Educ. 2008;8(2).
Torabi K, Bazrafkan L, Sepehri S, Hashemi M. The effect of logbook as a study guide in dentistry training. J Adv Med Educ Professionalism. 2013;1(3):81–4.
Attar A, Bazrafkan L, Naghshzan A, Baharluei MK, Dehghan A, Tavangar M et al. A Survey on Medical Students’ Viewpoint on Logbook as a Tool for Recording New Ideas and Reflection. Iran J Med Educ. 2011;11(1).
Khodabandeh S, Rostambeig P, Sabzevari S, Nouhi E. An investigation of medical school curriculum in Kerman university of medical sciences Iran based on the CIPP model. Strides Dev Med Educ. 2016;12(4):663–70.
Tabrizi J, Azami-Aghdash S. Perspective of health service management master students about methods of holding clerkship and internship courses: a qualitative study. Res Med Educ. 2015;7(3):1–10.
Vahabi S, Ebadi A, Rahmani R, Tavallaei A, Khatouni A, Tadrisi S, et al. Comparison of the status of clinical education in the views of nursing educators and students. Educ Strategies Med Sci. 2011;3(4):179–82.
Javadi M, Raeisi AR, Golkar M. Comparison between health care management students instructors point of view about internship lessons in this major Isfahan university of medical sciences 2006. Strides Dev Med Educ. 2008;4(2):84–91.
Pournamdar Z, Salehiniya H, Shahrakipoor M. Nurse and midwifery students’ satisfaction of clinical education in hospitals of Zahedan. Res Med Educ. 2015;7(2):45–51.
Stufflebeam DL, Coryn CL. Evaluation theory, models, and applications. Wiley; 2014.
© 2025. This work is licensed under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.