Background
The evaluation of faculty members plays a vital role in the successful implementation of educational programs and in improving the quality of university performance. Evaluation of faculty members by students is the most common type of evaluation, but it is not a complete mechanism for evaluating the role of faculty members. Since faculty evaluation at Neyshabur University of Medical Sciences relied solely on student ratings, the researchers decided to conduct this study to design, implement, and evaluate a faculty evaluation process using an electronic 360-degree method.
Methods
This study was a developmental research project conducted in 2022 and included three stages of design, implementation, and evaluation. Evaluation checklists for faculty members were developed through expert panel methods. A descriptive-analytical study with a cross-sectional approach was employed, utilizing researcher-made questionnaires to gauge satisfaction levels with the 360-degree evaluation method. Validity and reliability of the questionnaires were confirmed by experts. Data from 373 students and 48 faculty members were analyzed using SPSS software and statistical tests.
Results
Results indicated that evaluation sources and the weight of each source included students (40%), vice-chancellor (20%), head of department (30%), and peers (10%). Both faculty members and students expressed high satisfaction with the electronic 360-degree evaluation system. There was a significant difference in faculty members' satisfaction based on gender and age.
Conclusions
Given this satisfaction, university administrators are encouraged to adopt the 360-degree evaluation method to gain a comprehensive understanding of faculty performance.
Background
Faculty members are pivotal assets in higher education systems, tasked with cultivating competent human resources crucial for societal growth and advancement, along with providing scientific and research services [1]. The efficacy and effectiveness of faculty members directly impact the progress, growth, and development of higher education institutions; inadequate performance can lead to institutional crises and diminish their competitive edge [2]. Consequently, evaluating faculty performance is a primary responsibility of university administrators, pivotal for effective planning, successful program implementation, and enhancing overall performance quality [3].
Recent years have witnessed a heightened awareness of the pivotal role and performance of faculty members, coinciding with paradigm shifts within universities. Key indicators of this shift include dwindling organizational resource acquisition capabilities and heightened societal scrutiny regarding returns on investment in higher education institutions [4]. Hence, designing and implementing robust evaluation processes to ensure faculty performance quality is essential to prevent wastage of human and material resources and maintain competitiveness in the global arena [5].
Evaluation in the higher education system is an ongoing, systematic process aimed at describing and ensuring the quality of faculty activities [6]. Numerous objectives have been outlined for faculty evaluation, including facilitating faculty development and refining educational methods, as well as aiding managerial decisions regarding recruitment, promotion, and other faculty-related matters [7]. While various evaluation methods exist, student evaluation constitutes a predominant approach, utilized in over 85% of universities and educational institutions [8]. While student evaluations are valued for enhancing educational quality [9, 10], concerns regarding subjectivity and validity persist, prompting calls for comprehensive evaluation approaches [11, 12].
Studies recommend limiting student evaluations to 30–50% of total faculty evaluations, underscoring the need for supplementary methods [9, 13]. The 360-degree evaluation method, integrating diverse assessment approaches, is endorsed for comprehensive performance evaluation [14, 15]. This method entails gathering feedback from individuals directly and indirectly associated with the evaluated individual, including superiors, peers, subordinates, and clients, to provide a holistic assessment [16, 17]. Rigorous attention to data collection accuracy, tool validity, and reliability ensures the integrity of the evaluation process, facilitating informed academic decisions such as hiring, contract extensions, promotions, and remuneration [18, 19].
According to Miles’ paper [20], which identified seven types of research gaps, there appears to be a methodological gap in faculty evaluation. Student evaluation is currently the dominant approach, implemented in more than 85% of universities and higher education institutions [20]. Although student evaluation of faculty members can lead to improvements in faculty teaching activities, reinforcing strengths and correcting weaknesses [21], students often lack the necessary skills, such as critical thinking and scientific mastery [22], so implementing such evaluations alone can be misleading.
Student evaluation of professors, currently the dominant method in medical universities, has been criticized on several grounds: students' inattention and lack of accuracy in completing evaluation forms; pressure on professors to meet students' desired criteria, such as inflating students' scores without a real increase in learning quality [21]; the poor content validity of evaluation questionnaires [23]; and the involvement of categories unrelated to teaching quality, such as the professor's appearance, personality traits, reputation, and the professor-student relationship [24]. It seems that this evaluation method needs to be improved.
Given the significance of faculty evaluation and the reliance on student evaluations at Neyshabur University of Medical Sciences, this study was undertaken to design, implement, and assess a 360-degree faculty evaluation process.
Methods
This study was a developmental research project conducted in 2022 to design, implement, and evaluate a 360-degree evaluation process for faculty members at Neyshabur University of Medical Sciences. A developmental approach was purposefully selected due to its alignment with the multifaceted nature of faculty evaluation, which encompasses educational performance, professional ethics, administrative competencies, and peer interactions. Unlike traditional approaches that focus solely on measurement or judgment, the developmental model prioritizes continuous improvement and capacity building. Given that the objective of the current study was not only to assess performance but also to design a robust and sustainable evaluation framework that supports long-term professional growth, the developmental research method provided the flexibility and depth necessary to integrate expert insights, stakeholder feedback, and iterative tool refinement. Furthermore, the 360-degree evaluation process inherently targets multidimensional competencies, making a developmental paradigm particularly suitable as it allows for both formative and summative assessment strategies, while fostering a culture of reflection and self-improvement among faculty members.
Following this rationale, a developmental research project was carried out, which is a method that provides researchers with usable data while focusing on the design, development, and evaluation of educational products and processes. A developmental research project is a set of studies, actions, and implementations that lead to the development and improvement of the quality of education, such as changes in models, strategies, and methods of teaching and learning, development or modification of educational programs, development or modification of evaluation programs, and theorizing about the foundations and tools of advanced education or the implementation of new and innovative methods in various fields of medical education. Generally, it provides theoretical or practical solutions to existing educational problems.
1. Design stage
At this stage, an evaluation committee (experts) was established consisting of five expert faculty members specializing in education and evaluation. These individuals had at least 5 years of experience in executive and managerial activities in the field of the university's Vice-Chancellor for Education. They had Ph.D. degrees in medical education, health information technology, and nursing. They also held the positions of Vice-Chancellor for Education, Director of the Medical Education Development Center, Director of the Faculty Evaluation Unit, and Director of Educational Affairs.
The problem statement and its needs assessment were discussed by the committee in the first session. All committee members agreed that the evaluation of faculty members solely by students is not a comprehensive evaluation mechanism and cannot provide a complete and accurate picture of faculty members' educational performance. Two members of the committee were tasked with identifying a more comprehensive model for evaluating faculty members by searching articles, evaluation forms from other universities, and the faculty promotion bylaw, and presenting the results to the committee. This search led to the identification of the 360-degree evaluation method, the areas of evaluation, the evaluator sources, and the weight of each source. The evaluative sources, areas, and the weight of each source were determined during five expert panel meetings and group discussions of the presented results. Initial evaluation forms for faculty members were also developed. This committee was also responsible for designing a questionnaire to assess the satisfaction of faculty members and students with the new evaluation process and forms (third-stage tool). In addition, all evaluation forms were sent to the managers of the educational groups (10 groups) for comments. Eight groups approved the forms, and two groups provided comments that were reviewed by the expert panel, some of which were approved and applied to the forms. Finally, faculty evaluation sources and their respective weights were determined as follows: students (40%), vice-chancellor (20%), head of department (30%), and peers (10%).
2. Implementation stage
In this phase, the evaluation committee tried to choose the appropriate electronic platform for uploading the evaluation forms, and finally, the Hamava system was chosen. This system is a web-based sub-system of the University’s Vice-Chancellor for Educational Affairs, which allows students and faculty members to manage their educational activities through it. For students, these activities include tasks such as course registration, faculty evaluations, and viewing exam scores. For faculty members, the system provides access to activities like peer evaluations, student attendance tracking, and entering grades.
The university had previously purchased it as an education management system and used it to carry out educational matters such as registration, course selection, etc. Two weeks before the end-of-semester exams, the head of the faculty evaluation unit uploaded and activated nine evaluation forms in the Hamava system. Both students and faculty members were informed about the opportunity to participate in faculty evaluation. In accordance with the university's existing regulation on student-centered evaluation, all 800 students were required to participate in the evaluation of faculty members. After two weeks, access to the evaluation system was closed to students and faculty members.
3. Evaluation stage
At this stage, a descriptive cross-sectional study was conducted. The Kirkpatrick evaluation model was used to evaluate the implemented program. Although primarily used to evaluate training interventions [25, 26], the Kirkpatrick model is sometimes used to evaluate non-training interventions [27,28,29]. This model has four levels: the first level is reaction or satisfaction, the second is learning, the third is behavior, and the fourth is results [30]. In our study, evaluation at the first level, satisfaction, was conducted using a questionnaire to examine the satisfaction of learners and faculty members with the 360-degree evaluation. To determine the student sample size, a pilot satisfaction survey was first conducted on 30 randomly selected students (a number chosen with reference to the central limit theorem). Based on the Cochran formula, the sample size was calculated as 423 students after accounting for a 10% dropout rate. Additionally, the sample size for faculty members was set at 60 individuals. For sampling, we used the census method for faculty members and stratified proportional random sampling for students.
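The reported figure of 423 is consistent with Cochran's base formula under the conventional assumptions (z = 1.96, p = q = 0.5, e = 0.05) plus a 10% dropout allowance. Since the pilot-derived parameters are not reported in the text, the following Python sketch simply uses those assumed values:

```python
import math

def cochran_sample_size(z=1.96, p=0.5, e=0.05, dropout=0.10):
    """Cochran's sample-size formula, inflated by an expected dropout rate.

    Assumed defaults: 95% confidence (z = 1.96), maximum variability
    (p = 0.5), 5% margin of error, 10% dropout allowance.
    """
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)  # base sample size (about 384.16)
    return math.ceil(n0 * (1 + dropout))    # add the dropout allowance

print(cochran_sample_size())  # 423
```

With these assumptions the calculation reproduces the study's target of 423 students, but the exact inputs used by the authors may have differed.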
The questionnaires were sent to ten experts to determine face and content validity, and their feedback was incorporated. In the faculty member questionnaire, one item was removed due to its similarity to another item. At the end of both questionnaires, a question titled "How much is your overall satisfaction with the quality of the evaluation system of faculty members? (choose between 1 and 10)" was added. The final faculty satisfaction questionnaire included demographic characteristics (department, place of service, academic rank, employment status, degree, work experience, gender, age, executive position, and role) and 18 items. The final student satisfaction questionnaire included demographic characteristics (gender, age, degree, discipline, academic semester, and faculty) and 13 items. Both questionnaires covered four areas: question design, infrastructure and technology environment, informing, and the process and method of evaluation. The satisfaction questions were developed on a 5-point Likert scale: strongly agree, agree, no opinion, disagree, and strongly disagree.
The reliability of the questionnaires was confirmed using Cronbach's alpha test, yielding coefficients of 0.85 for faculty members and 0.921 for students (Questionnaires are attached). Three days after the end of the faculty evaluation, an electronic link to the questionnaire was sent to the virtual groups of faculty members and students. The approximate time to complete each questionnaire was between 10 and 15 min.
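As a minimal illustration of how such reliability coefficients are computed, the sketch below implements Cronbach's alpha on a small matrix of hypothetical Likert responses (the study's actual questionnaire data are not reproduced here):

```python
import numpy as np

def cronbach_alpha(scores) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# toy data: 5 respondents x 3 items on a 5-point Likert scale
demo = [[5, 4, 5], [4, 4, 4], [3, 3, 4], [2, 3, 2], [4, 5, 4]]
print(round(cronbach_alpha(demo), 3))  # 0.877
```

Values above roughly 0.7 are conventionally taken as acceptable internal consistency, so the study's coefficients (0.85 and 0.921) indicate reliable instruments.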
Data analysis
Data were analyzed using SPSS version 22 statistical software. Descriptive statistics were used to summarize quantitative data (mean, standard deviation) and qualitative data (frequency distribution tables). Independent t-tests and one-way ANOVA were used to examine relationships between satisfaction scores and demographic variables, while Pearson's correlation coefficient assessed the relationship between satisfaction scores and participants' age.
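Although the study used SPSS, the core statistics can be sketched in a few lines. The code below implements the independent-samples t statistic (in Welch's unequal-variance form, one common variant) and Pearson's r, applied to hypothetical scores rather than the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return (xd * yd).sum() / np.sqrt((xd ** 2).sum() * (yd ** 2).sum())

def welch_t(a, b):
    """Independent-samples t statistic (Welch's form, unequal variances)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return (a.mean() - b.mean()) / se

# hypothetical satisfaction scores (0-100) and ages for illustration only
women = [82, 79, 88, 75, 90, 84]
men   = [70, 74, 68, 77, 72, 71]
age   = [28, 33, 38, 42, 47, 52]
print(round(welch_t(women, men), 2), round(pearson_r(age, women), 2))
```

The resulting t statistic would then be compared against the t distribution to obtain a P value, which SPSS reports directly.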
Ethical considerations
Informed consent was obtained from all participants, and the ethical guidelines of the Declaration of Helsinki were strictly adhered to. Approval for this study was obtained from the Ethics Committee of the National Agency for Strategic Research in Medical Education (NASR) under code 4000466 (IR.NASRME.REC.1400.291).
Results
The aim of this study was to design, implement, and evaluate a 360-degree faculty evaluation process at Neyshabur University of Medical Sciences. In this section, the results of the study are presented in three parts, corresponding to the different phases of the study.
Design phase: A total of nine evaluation forms were developed for faculty members (Table 1). As shown in Table 1, the highest number of items is found in the "Department manager's evaluation forms for faculty members' quality of educational performance", with 30 items, and the lowest number is found in the "Peers' evaluation forms for basic science faculty members' quality of education", with 15 items.
[TABLE 1 OMITTED: SEE PDF]
All evaluation forms encompassed five areas: education and counseling, professional and social ethics, discipline, management, and execution.
Implementation Phase: At this stage, the link to the Hamava system was activated. The link to access the system is as follows:
https://educationsys.nums.ac.ir/CAS/Account/Login?ReturnUrl=/Education
Notifications were disseminated through the university's administrative automation system, education department, email, and various messengers. Within two weeks, faculty members and students completed the questionnaires.
Evaluation Phase: A total of 373 students and 48 faculty members completed the questionnaires, with a participation rate of 88% for students and 80% for faculty members. Participating faculty members had an average age of 38.39 ± 6.068 years (range 27 to 54), while participating students had an average age of 21.91 ± 4.337 years (range 18 to 45). Table 2 illustrates the gender distribution of participants, student grades, and faculty members' academic ranks.
[TABLE 2 OMITTED: SEE PDF]
Table 3 presents the mean and standard deviation of faculty members' opinions for each questionnaire field. The data indicate high satisfaction levels among faculty members across all subscales. However, in the subscale measuring overall satisfaction with evaluation quality, satisfaction was at a moderate level.
[TABLE 3 OMITTED: SEE PDF]
Table 4 illustrates the relationship between faculty members' overall satisfaction scores with the 360-degree evaluation system and the research variables. The analysis revealed a statistically significant difference in overall satisfaction based on gender (P = 0.041), with women exhibiting significantly higher satisfaction than men. However, no statistically significant differences were observed between overall satisfaction and the other variables (educational group, place of service, academic rank, employment status, degree, work experience, position, role) (P > 0.05).
[TABLE 4 OMITTED: SEE PDF]
Pearson's correlation coefficient analysis revealed a significant positive correlation between faculty members' overall satisfaction scores and their age (r = 0.360, P = 0.012), indicating that older faculty members reported higher satisfaction with the evaluation system.
Table 5 displays the mean and standard deviation of students' opinions for each evaluation area. The data indicate high levels of satisfaction among students across all subscales.
[TABLE 5 OMITTED: SEE PDF]
Table 6 examines the relationship between students' average total satisfaction score and the research variables. The analysis indicates no statistically significant difference between overall satisfaction with the evaluation and any of the variables examined, including gender, degree, discipline, academic semester, and faculty (P > 0.05).
[TABLE 6 OMITTED: SEE PDF]
Based on Pearson's correlation coefficient analysis, no significant correlation was found between students' average overall satisfaction score and their age (r = 0.028, P = 0.595), indicating a negligible association between student age and satisfaction with the evaluation system.
Discussion
The main objectives of the present study were to design a 360-degree evaluation process for faculty members, implement this new process, and evaluate it. The primary motivation for developing the 360-degree evaluation tool was to establish a more comprehensive and multidimensional assessment approach. This method aims to capture a broader and more accurate perspective on faculty performance, surpassing the limitations of traditional, student-only evaluations. In the present study, according to the article search sessions and expert panel, the faculty members' evaluation sources and their weights were determined: students with a weight of 40%, the vice-chancellor for education with a weight of 20%, the head of the department with a weight of 30%, and peers with a weight of 10%. Nine evaluation forms were developed by a panel of experts, along with a survey of faculty members. The faculty evaluation questionnaires covered five areas: education and counseling, professional and social ethics, discipline, management, and execution. The results also showed that faculty members and students were highly satisfied with the electronic 360-degree faculty evaluation system. There was a statistically significant difference between faculty members' average satisfaction scores and gender, with women's satisfaction significantly higher than men's. There was also a significant correlation between faculty members' average satisfaction scores and their age: the older the faculty member, the higher the satisfaction score with the 360-degree evaluation.
The study by Haghighi et al. [31] shows that in Tehran and Shahid Beheshti Universities of Medical Sciences, the sources of evaluation of faculty members include students, the vice-chancellor for education, the head of the department, and peers, which is consistent with the results of our study. The results of this study confirm that the intended broader perspective was effectively realized, as reflected in the integration of multiple evaluator sources and the high satisfaction levels reported by both faculty members and students. The only difference is the weight assigned to each evaluation source. At Tehran University of Medical Sciences, the weights include students (50%), the vice-chancellor for education and heads of departments (25%), and peers (25%). At Shahid Beheshti University, the weights include students (40%), the vice-chancellor for education and heads of departments (40%), and peers (20%) [31]. These two universities weighted the vice-chancellor for education and heads of departments jointly. In contrast, the present study assigned separate weights to these two sources: the vice-chancellor for education (20%) and the head of department (30%). The higher weight of the head of the department in our study reflects their closer interaction with faculty members and direct supervision of faculty members' activities. In fact, evaluation sources should provide accurate, first-hand information about the educational performance of faculty members, and appropriate weight should be assigned to each source [13].
The evaluator resources at Tarbiat Modares University and Tehran Medical Sciences University include students, the vice-chancellor for education, and the head of departments. Peers are not used to evaluate faculty members in these universities. However, in our study, evaluators were more complete, and peer evaluation also was done. Peers are valuable resources that can evaluate individual performance. Peers have a more comprehensive insight into each other's job performance and provide opportunities for better feedback for self-improvement [32]. In this way, peers in each department evaluate each other, which can lead to objective and reliable measures of faculty performance [33].
At Tarbiat Modares University and Shahid Beheshti and Isfahan Universities of Medical Sciences, students are given a weight of 40% in the evaluation of faculty members, which is consistent with the current study. At Tehran and Shiraz Universities of Medical Sciences, a 50% weight is given to students, which is higher than in our study. These different weights may reflect differences in the opinions of evaluation experts at each university. Arreola, an expert in the field of faculty evaluation, also believes that student evaluations should comprise no more than 30–50% of a faculty member's evaluation [34].
In Isfahan University of Medical Sciences, one of the sources of evaluation of faculty members is self-evaluation. However, in the current study, based on experts'opinions, this method was not used to evaluate faculty members. Studies show that there is a significant difference between the scores obtained from self-evaluation and other evaluation methods, and individuals give themselves higher scores in self-evaluation, which can overshadow the overall results of the evaluation [35,36,37].
The results of Haghighi et al.'s research show that the most important sources of gathering information about faculty members include students, colleagues, the head of the department, and the vice dean of educational affairs. Compliance with discipline, academic ability, teaching quality, interaction with students and colleagues, and adherence to rules and regulations are among the common axes of evaluation [31]. In our research, we considered additional evaluation areas, including professional and social ethics as well as managerial and executive activities, which can better capture the performance of faculty members. As universities are scientific organizations responsible for fostering ethics in society, observing professional ethics within them is crucial. Therefore, evaluating how closely university faculty members comply with professional ethics in teaching and learning is an important step in improving their professional competencies.
In the study by Tahani et al., the review of evaluation forms of the Faculty of Dentistry in Isfahan was conducted using the Delphi method. The researchers stated that despite the different nature of courses in various disciplines at Isfahan University of Medical Sciences, the same forms were used for evaluation. After review, theoretical course evaluation forms were designed in four areas: planning, content, resources, and lesson presentation. Additionally, clinical and practical course forms were classified into four areas: planning, learning and teaching methods, facilities, and evaluation and consequences. The clinical training evaluation form and practical training evaluation form were designed separately [38]. Compared to Tahani et al.'s research, the variety of faculty members'evaluation forms is greater in the current research due to the nature of the courses. It is important to note that different courses do not have the same nature, and especially the teaching performance of faculty members in different courses should not be evaluated with the same form. Therefore, this point should be considered when designing the evaluation system for faculty members.
The results of the present study showed that faculty members and students were highly satisfied with the electronic 360-degree faculty evaluation system. Savitha's qualitative study of the views of 40 faculty members on 360-degree faculty evaluation in the state of Karnataka showed that faculty members were satisfied with the 360-degree feedback mechanism, in line with our results; however, they were not aware of the parameters and weighting and could not reason about them [39]. The results of Jamshidi et al.'s study, which examined the opinions of faculty members at Hamadan University of Medical Sciences about faculty evaluation, are not consistent with ours: 66% of faculty members were not satisfied with the evaluation process, and only 34.3% found the evaluation result acceptable [40]. In another study aimed at designing and validating performance evaluation forms for faculty members, conducted with 344 students and 66 faculty members, the results showed that faculty members and students were not satisfied with the evaluation method, which is also not in line with our results [41]. The higher level of satisfaction in our study, and the likely reason for the inconsistency with these two studies, is that in our research we solicited the opinions of departments and faculty members when formulating the questions.
Comparing our findings with previous studies reveals some differences in evaluator weighting and satisfaction outcomes. These differences might stem from contextual variations such as the size of the institution, the extent of faculty involvement in designing the evaluation tool, and cultural factors influencing perceptions of fairness and feedback in evaluations. Such contextual considerations are important for interpreting evaluation results and tailoring the implementation to specific settings. It is important to note that the effectiveness and acceptance of evaluation tools like the 360-degree method are inherently influenced by local contexts and stakeholder engagement. Therefore, while our study demonstrates promising satisfaction and applicability, differences with other studies underscore the necessity for careful adaptation and ongoing refinement to meet specific institutional and cultural needs. This nuanced understanding strengthens the validity of our findings and supports the tailored implementation of such comprehensive evaluation systems.
In our study, the opinions of department managers and faculty members were solicited in formulating the questions, and these opinions were applied to the evaluation forms after being reviewed by experts. Additionally, the weight of each evaluation index was determined and communicated to the faculty members. Another important point is that in each faculty member's overall evaluation form, the raw points obtained in each area were recorded, the points of each area were multiplied by the weight of that area, and the final score was calculated out of 100 and sent to the department manager and the faculty member. It seems that faculty members' participation in the evaluation process can enhance their satisfaction. Moreover, the transparency of the evaluation process, the clarity of each area's weight, and access to both raw and overall scores can increase faculty members' trust in the evaluation and the likelihood of a positive reaction to the evaluation process. Therefore, university administrators should consider these points when designing and implementing the 360-degree evaluation process for faculty members.
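Using the source weights reported in this study, the final-score computation described above can be sketched as follows; the raw area scores in the example are hypothetical:

```python
def final_score(area_scores: dict) -> float:
    """Combine raw scores (each out of 100) using the study's source weights:
    students 40%, head of department 30%, vice-chancellor 20%, peers 10%."""
    weights = {"students": 0.40, "head_of_department": 0.30,
               "vice_chancellor": 0.20, "peers": 0.10}
    return sum(weights[src] * area_scores[src] for src in weights)

# hypothetical raw scores for one faculty member
score = final_score({"students": 85, "head_of_department": 90,
                     "vice_chancellor": 80, "peers": 95})
print(round(score, 2))  # 86.5
```

The weighted score out of 100 is what, per the text, was reported back to the department manager and the faculty member alongside the raw per-area points.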
In the present study, there was a statistically significant difference between faculty members'satisfaction with evaluation and gender, and women's satisfaction was significantly higher than men's. This could be due to receiving better scores. Gender can influence how students evaluate faculty members. Female students give lower scores to male professors than male students and female students evaluate female professors more positively than male professors. Since 65% of students at Neyshabur University of Medical Sciences are female, it may be justified [42,43,44]. Additionally, research indicates a significant relationship between the evaluation scores of faculty members and their communication skills [45,46,47]. Yilmaz et al.'s study showed that women have stronger communication skills than men, and gender is an important factor in communication skills [48]. Therefore, women's higher satisfaction with the evaluation system may be due to their superior communication skills and possibly achieving better scores through more interaction with students and peers.
In the current study, there was also a significant correlation between faculty members' satisfaction scores and their age: the older the faculty member, the higher the satisfaction score with the 360-degree evaluation. This may also be due to older faculty members obtaining higher evaluation scores. Some studies show that increasing age and experience can improve communication skills, and better communication skills can increase a faculty member's evaluation score [49,50,51].
Conclusions
The present study designed nine models of faculty member evaluation forms and determined the evaluation resources and their respective weights: students (40%), vice-chancellor (20%), head of department (30%), and peers (10%). It emphasized the need for tailored evaluation forms due to the diverse nature of courses. Utilizing appropriate weighting, a comprehensive evaluation was achieved through 360-degree evaluation, combining multiple methods and triangulating data. Results indicated high satisfaction levels among faculty members and students with the electronic 360-degree evaluation method, suggesting faculty participation positively influenced satisfaction. Therefore, policymakers and educational managers are encouraged to adopt 360-degree evaluation for faculty performance assessment, recognizing its complexity and effectiveness. Identifying weaknesses in university evaluation systems and developing valid tools can enhance evaluation quality and educational outcomes.
Limitations and suggestions for future research
One of the limitations of the current study was the small size of the university, the limited number of faculty members and students, and the narrow range of disciplines. To address this limitation, it is suggested that future studies be conducted in larger universities with more faculty members and a greater variety of disciplines.
Another limitation was the lack of student participation in the process of designing evaluation questionnaires, which was not feasible due to the university's conditions and its social and cultural context. It is recommended that in future research endeavors, students be included as stakeholders in the questionnaire design process, with their perspectives carefully considered.
In the current study, a questionnaire was used to assess the satisfaction of faculty members and students. It is suggested that qualitative studies and interviews with stakeholders be used to increase the richness of the results in future research. It is also recommended to examine the long-term impact of 360-degree evaluations on the performance of faculty members.
In this study, the limited number of participants and restricted access to them prevented the determination of the Content Validity Ratio (CVR) and Content Validity Index (CVI) to confirm the validity of the evaluation forms. It is suggested that these measures be assessed in future studies and methods such as the Delphi technique be employed to strengthen the validity and reliability of the forms.
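For readers unfamiliar with the two indices, both have simple standard formulas from the content-validity literature (Lawshe's CVR; the item-level CVI based on a 4-point relevance scale). The sketch below uses made-up panel sizes and is not drawn from this study.

```python
# Hedged sketch of the two indices the authors could not compute.
# Formulas are the standard ones from the content-validity literature,
# not taken from this study; the example numbers are invented.

def cvr(n_essential, n_experts):
    """Lawshe's Content Validity Ratio: (n_e - N/2) / (N/2), in [-1, 1]."""
    half = n_experts / 2
    return (n_essential - half) / half

def i_cvi(n_relevant, n_experts):
    """Item-level Content Validity Index: proportion of experts rating
    the item 3 or 4 on a 4-point relevance scale."""
    return n_relevant / n_experts

# Example: 9 of 10 panelists call an item essential, 8 of 10 call it relevant.
print(cvr(9, 10))    # 0.8
print(i_cvi(8, 10))  # 0.8
```

The acceptable CVR threshold depends on panel size (Lawshe's table), which is one reason the small accessible panel in this study made the computation impractical.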
At our small university, we used the Hamava system platform because of limited financial resources; the system had previously been purchased for educational purposes. We dedicated considerable time to preparing the platform for 360-degree evaluation and resolving various technical issues. For universities with fewer financial constraints, we suggest using custom-built systems that have fewer bugs and better technical support. To implement faculty evaluation using the electronic 360-degree method, we also recommend forming a faculty evaluation committee whose members are experts in evaluation, ensuring sufficient and specialized human resources, building a culture of understanding and acceptance among faculty members and students, and involving all stakeholders from the beginning of the design of the new evaluation method. We hope that other researchers will benefit from the lessons we learned in this study.
Data availability
The data are available from the corresponding author of the manuscript upon reasonable request sent to the email address: [email protected]. Data were imported into an SPSS file and are available in that format.
Abbreviations
ANOVA: Analysis of Variance
CVI: Content Validity Index
CVR: Content Validity Ratio
K-S test: Kolmogorov-Smirnov test
NASR: National Agency for Strategic Research in Medical Education
References
Analysis of the management of universities and institutions of higher education in recent years [http://www.imna.ir/vdcayon0.49nuo15kk4.html]
Ahmady S, Tatari F, Yazdani S, Hosseini SA. A comprehensive approach in recruitment and employment policies for faculty members: A critical review. International Journal of Medical Research & Health Sciences. 2016;5(12):356–64.
Chambers D, Boyarsky HBP. Development of a mission-focused faculty evaluation system. Dental Education. 2003;67(1):10–22.
Mosaviamiri T. Innovation and University; Reflection on the Formation and Development of Innovative University. Journal of Industry and University. 2020;11(39):81–99.
Bameni Moghadam M, Rafiey SR. Evaluation of teaching quality of faculty members in higher education institutions: A case study of faculty of economics at Allameh Tabataba’i University. Quarterly of Educational Measurement. 2017;8(29):1–22.
Beran T, Rokosh J. Instructors’ Perspectives on the Utility of Student Ratings of Instruction. Instructional Science: An International Journal of the Learning Sciences. 2009;37(2):171–84.
Taheri A. Evaluation of Professors in Some Prestigious Universities of the World: A Comparative Study. Education. 2023;14(1):15–32.
Shakurnia A. Critical Analysis of Teacher Assessment System by the Students (Systematic Review). 2019.
Tatari F. A survey on validity and effective factors of faculty evaluation by students. In: The First National Congress of the Challenges and Strategies for Developing Student Participation in Educational System; 2014; Mashhad, Iran.
Dargahi H, Mohammadzadeh N. Faculty Members’ Evaluation by Students: Valid or Invalid. Iranian Journal of Medical Education. 2013;13(1):39–48.
Torkzadeh J, Marzoghi R, Mohamadi M, Mohtaram M. Influencing Factors on Students’ Evaluation of Faculties at Shiraz University. Educational Measurement and Evaluation Studies. 2014;4(7):139–64.
Payandeh A, Ghazanfarpour M, Khoshkholgh R, Malakoti N, Afiat M, Shakeri F. Views of Students and Faculty Members on Faculty Evaluation by Students: A Systematic Review. Medical Education Bulletin. 2023;4(1):661–73.
Arreola RA. Developing a comprehensive faculty evaluation system. 3rd ed. Anker; 2007.
Bastani P, Amini M, Tahernezhad A, Roohollahi N. The Tehran University of Medical Sciences faculty members' viewpoints about the teachers' evaluation system: a qualitative study. 2014.
Eslami K, Boostani H, Zahiri M, Jahani S, Fouladi Dehaghi B, Salahshori A, Arjmand R, Babaei Heydarabadi A. Elaborating on the strengths and weaknesses of the evaluation status and quality assurance of e-learning in universities of medical sciences. Iranian Journal of Medical Education. 2021;21:208–22.
Predescu SV. Learning in industrial organizations-a multisource feedback study. Procedia Soc Behav Sci. 2010;2(2):3334–8.
Hooshi-al-Sadat SA, Ebrahimi A, Molaei H. Creation and validation of the questionnaire on the quality assessment of faculty members of the Farhangian University: A 360-degree approach. Educational and Scholastic studies. 2017;6(1):89–103.
Ghafourian Boroujerdnia M, Shakurnia AH, Elhampour H. The opinions of academic members of ahvaz University of Medical Sciences about the effective factors on their evaluation score variations. Strides in development of medical education. 2006;3(1):19–25.
Keshmiri F. Recording and Evaluating Faculty Academic Performance: an Experience of Yazd Shahid Sadoughi University of Medical Sciences. Journal of Medical Education and Development. 2021.
Miles DA. A taxonomy of research gaps: identifying and defining the seven research gaps. In: Doctoral Student Workshop: Finding Research Gaps - Research Methods and Strategies; 2017; Dallas, Texas. p. 1–15.
Keykha A, Shakurnia A. Critical Analysis of Teacher Assessment System by the Students (Systematic Review). Educational Development of Judishapur. 2019;10(2):82–96.
BM Z, SA R, SS H, GhF R, SM N. A Survey on Students' Attitude Toward Teachers' Educational Characteristics in Birjand University of Medical Sciences in 2012. Journal of Medical Education and Development. 2014;9(2):41–48.
Emdadi S, Amani F, Soltanian AR, Imani B, Maghsoud A, Shojaeei S, Zargaran M, Fathi Y, Fallahi G, Khatibian M, et al. A Study of Reliability and Validity of the Teacher Evaluation Form and Factors Affecting Students' Evaluation of Teachers. 2014.
J F, GH A, H S, H F, A S. Comparison of the Assessment of Professors by Students Based on Two Different Protocols, Asadabad Medical Sciences Faculty, Hamadan University of Medical Sciences. Education Strategies in Medical Sciences. 2015;8(4):209–214.
Paull M, Whitsed C, Girardi A. Applying the Kirkpatrick model: Evaluating an 'interaction for learning framework' curriculum intervention. Issues in Educational Research. 2016;26(3):490–507.
Lipuma J, Leon C. Analyzing the Use of the Kirkpatrick Model in Higher Education: Insights from an NSF-Funded Chemistry Curriculum Project. 2024.
Udeshika P. Training That Sticks: Rethinking Kirkpatrick for Hotels. Journal of Human Resource Management Perspectives. 2024;9(2).
Joshi MP. An Analysis of Appropriate Training Effective Evaluation System Followed in Hotels.
Kirkpatrick J. An introduction to the new world Kirkpatrick model. Kirkpatrick Partners. 2015;10:9781580468619.
Bates R. A critical analysis of evaluation practice: the Kirkpatrick model and the principle of beneficence. Eval Program Plann. 2004;27(3):341–7.
Haghighi M, Cherabin M, Karimi M. A comparative study of the educational evaluation system of faculty members in Iranian universities of medical sciences. Iranian Journal of Medical Education. 2021;21:237–46.
Pourheidari S. The Impact of Peer Evaluation on the Quality of nursing report.
Rosenbaum ME, Ferguson KJ, Kreiter CD, Johnson CA. Using a peer evaluation system to assess faculty performance and competence. Fam Med. 2005;37(6):429–33.
Arreola RA. Developing a comprehensive faculty evaluation system. 3rd ed. Anker; 2007.
Almohaimede AA. Comparison between students’ self-evaluation and faculty members’ evaluation in a clinical endodontic course at King Saud University. Eur J Dent Educ. 2022;26(3):569–76.
Aamer S, Anwar FS, Abbas B, Zara B, Farhan F, Zafar S. Comparison of Self-Assessment and Students' Perspective Regarding Teaching Effectiveness of Medical Teachers. In: Med Forum; 2021. p. 146.
Shaikh G, Gul S, Tahir M. Comparison of self-evaluated and students-reported teaching effectiveness of medical teachers: a cross sectional study. Journal of University Medical & Dental College. 2020;11(3):17–24.
Tahani B, Omid A, Malek Ahmadi P, Movahedian B. Revising the theoretical, practical, and workshop evaluation checklists of the dental school at Isfahan University of Medical Sciences. Iranian Journal of Medical Education. 2021;21:426–38.
Savitha G. 360 Degree feedback: faculty perspective. 2020;60(1):6.
Jamshidi S, Baghaei F, Abdolsamadi H, Faradmal J, Soltanian A, Ahmadiani E. Evaluation of academic Staffs’ viewpoint about their assessment by students in hamadan university of medical sciences (2011–2012). Research in Medical Education. 2013;5(2):39–45.
Hasani R. Design and validation of performance evaluation forms for faculty members. Journal of Educational Measurement & Evaluation Studies. 2018;8(22):29–50.
Bendig AW. The relation of level of course achievement to students’ instructor and course ratings in introductory psychology. Educ Psychol Measur. 1953;13(3):437–48.
Bachen CM, McLoughlin MM, Garcia SS. Assessing the role of gender in college students’ evaluations of faculty. Commun Educ. 1999;48(3):193–210.
Keçeci A, Arslan S. Nurse faculty members’ communication skills: From student perspective. Journal of Human Sciences. 2012;9(1):34–45.
Amini M, Najafipour S, Torkan N, Ebrahimi Nejad F. Correlation between educational performance and communication skills of Jahrom medical teachers. Journal of Babol University of Medical Sciences. 2010;12(5):100–5.
Adhami A, Reihani H, Fattahi Z, Nakhaie N, Fasihi Harandi T. Comparison of student assessment of educational performance of the faculty with the teachers self assessment in Kerman University of Medical Sciences. Strides in Development of Medical Education. 2005;2(1):25–32.
Razavinia FS, Sharifimoghadam S. Assessment of Communication Skills Level among Students at Qom University of Medical Sciences in 2017. Education Strategies in Medical Sciences. 2019;12(4):19–25.
Yilmaz M, Kumcagiz H, Balci-Celik S, Eren Z. Investigating communication skill of university students with respect to early maladaptive schemas. Procedia Soc Behav Sci. 2011;30:968–72.
Attarha M, Shamsi M, Torkestani NA. Faculty Members' Communication Skills in Educational Process in Arak University of Medical Sciences. Iranian Journal of Medical Education. 2012;12(9).
Pour Asghar M, Najafi K, Tirgari A, Yazdani J, Falaki M, Salehi F. Investigating Employees' and Health Care Practitioners' Communication Skills. Iranian Journal of Psychiatry and Clinical Psychology. 2017;23(2):208–17.
Safavi M, Fesharaki M, Esmaeilpour Bandboni M. Communication skills and its related factors in Guilan's teaching hospitals' nurses 94. Avicenna Journal of Nursing and Midwifery Care. 2016;24(1):50–7.