Introduction
According to the UN’s latest policy brief, Education during COVID-19 and beyond, the COVID-19 pandemic has caused the most severe disruption to education systems in history, affecting nearly 1.6 billion students in more than 190 countries on all continents. School and learning-place closures have affected 94% of students worldwide, and 99% of those in low-income and lower-middle-income countries. The crisis has stimulated innovation in the education sector, with digitized education emerging as a critical breakthrough in educational development. Governments and other stakeholders around the world, including the Global Education Coalition initiated by UNESCO, have responded swiftly by developing new modes of teaching and learning in a digital context. These digitized education solutions have successfully addressed the challenges posed by the pandemic-driven suspension of schooling, teaching, and classes, propelling global education rapidly toward digitization and informatization. Digital technologies offer numerous advantages, such as easier access to teaching data, more diverse data types, deeper processing, and timelier application, which can effectively transform learning, meet individual needs, and reduce educational inequities. However, the hasty implementation of digitized education without in-depth investigation or careful consideration can exacerbate digital inequities, thereby perpetuating educational disparities. This presents an excellent opportunity to re-examine the critical issue of digital inequities (Miller and Liu, 2023).
According to a survey conducted by Times Higher Education, higher engineering education is increasingly adopting an Outcome-Based Education (OBE) approach. OBE takes students’ learning outcomes as the teaching goal and designs teaching evaluation around both students’ core abilities and their curriculum learning results, thereby ensuring better teaching outcomes (Bhat et al., 2020). Throughout the teaching process, analysis of teaching effectiveness is likewise conducted mainly around the evaluation of teachers, students, the learning process, and learning results. However, in the digitized education of higher engineering disciplines, multiple factors are at play: differences in digital facilities, equipment, and technologies; regional differences in the level of digital technology utilization; the complexity of communication and interaction between teachers and students; and differences in students’ acceptance of digital education (Hebebci et al., 2020). Together, these factors greatly aggravate educational inequity. This not only seriously affects the quality of students’ learning outcomes but also tests teachers’ digital teaching skills and management capabilities (Albrahim, 2020). Therefore, in the digital transformation of higher engineering education, making digital course planning and the assessment of student learning outcomes more equitable has become a key issue that urgently needs to be solved (Rajab et al., 2020).
Digital teaching course planning
Digital teaching refers to teaching activities conducted by teachers and students in a digital environment, following modern educational theories, using digital resources, and promoting knowledge acquisition through digital teaching modes. Curriculum planning for digital teaching requires teachers to systematically analyze and plan across multiple dimensions, such as teaching materials, teaching time, and the various digital resources and technologies available. Higher engineering encompasses many disciplines: design (e.g., environmental art design, urban planning and design); mechanical engineering and automation (e.g., mechanical engineering, industrial design, intelligent manufacturing engineering); computer science and technology (e.g., software engineering, information security, artificial intelligence); medical engineering (e.g., biology and medicine, pharmacy); information science and engineering (e.g., communication engineering, integrated circuit and system design, electronic information engineering); electrical engineering (e.g., electrical equipment, power systems); civil engineering (e.g., civil engineering, water supply and drainage science and engineering); and chemical engineering (e.g., biotechnology, chemical engineering and technology). Undergraduate education across these disciplines shares a strong commonality: it takes the cultivation of students’ professional abilities as its core and is result-oriented, emphasizing practical engineering skills and attending to both professional and soft skills. Team communication, creativity, leadership, and respect for others and partners are essential for all students of higher engineering disciplines (Lantada, 2020).
Therefore, the engineering practice courses offered by the various disciplines play a leading role in the engineering education system. Such courses emphasize learning by doing, guiding students to independently construct cognitive frameworks, integrate knowledge, shape their mindset, and stimulate creativity, ultimately strengthening students’ practical ability. When implementing these courses online, teachers usually employ a mixed mode combining theoretical teaching with practical guidance, taking project or product design and development as the medium. They need to create, within the digital teaching system, a realistic learning environment and platform for engineering practice in the professional context, so as to realize the collaborative integration of educational elements. This not only requires teachers to pay attention to students’ classroom behavior in real time but also emphasizes assisted teaching based on digital resources. Therefore, to promote digital teaching in higher engineering disciplines, it is necessary to continuously enrich digital teaching scenarios around the syllabus, integrate digital technology with traditional education, and innovate educational concepts, methods, and forms. This aims to empower education with digital resources and technology, create a more equal educational environment, and better serve the essence of education. According to education industry standard JY/T 0644-2022, issued by the Ministry of Education of the People’s Republic of China in November 2022, the interaction types of digitized educational resources include presentation, conversational, interoperable, and blended. The resource types are even more abundant, including learning websites, online courses, educational tools and software, virtual simulation systems, exercises, assessment systems, online assignments or tests, digital library resources, etc.
In addition, such courses require good online communication between teachers and students at all times, so that students receive as much guidance from teachers as possible. They also require continuous communication, discussion, and collaboration among students, which not only promotes effective communication and collaboration but also enriches the cognitive process by eliciting emotional resonance among students, ultimately realizing the design and development of a project or product. Throughout the course, students’ design process and design results should be presented through the teaching system, which requires a clear, concise, and convenient visual interface and a variety of interactive forms (Rajab et al., 2020).
Although many digital assistance systems developed by businesses and research institutes have successfully supported online teaching in various subjects, these particularities make them unsuitable for this type of course. This is confirmed by a study of 76 randomly selected digital teaching courses, which shows that most digital courses have high-quality teaching materials and presentations but fall short in the quality of lesson planning, undermining student learning outcomes (Margaryan et al., 2015). Another study notes that lower student completion and engagement rates in digital courses are strongly correlated with poor-quality curriculum planning, for example a lack of problem-centered practice, of peer coordination and collaboration for knowledge co-building, and of the support students need (Shukor and Abdullah, 2019). Student engagement is directly related to the effectiveness of learning outcomes, and the biggest challenge in digital teaching of these subjects is facilitating teacher-student communication during the learning process. Some teachers try to use multiple assistance systems to establish effective communication and facilitate student teamwork, but this approach often requires teachers and students to learn a wider range of digital technologies, which greatly increases their learning burden and results in a less-than-ideal user experience (Angelova, 2020). Compared with traditional teaching, the complexity of digital teaching is self-evident. Teachers need to design a feasible teaching method that stimulates student engagement based on the curriculum content, considers the diversity of students’ available devices, and chooses user-friendly digital tools to minimize the impact on student learning outcomes (Cullen et al., 2013).
Therefore, maintaining educational equity and teaching quality in digital education imposes a greater workload and difficulty on curriculum planning.
Assessment of student learning outcomes in digitized education
Assessment of learning outcomes is a common quality assurance practice in international higher education. Result orientation, student-centeredness, and continuous improvement are the basic philosophies of higher engineering education, and they are also the mainstream concepts advocated by developed countries and international organizations. Establishing a scientific orientation for education evaluation at the national level is very important for ensuring and improving the quality of talent training in engineering education. Compared with courses that have standard answers, assessment in the practice courses of higher engineering disciplines is mostly based on design products, which cannot be scored by simple true-or-false judgments. Teaching evaluation is a highly subjective cognitive activity: multidimensional, non-quantitative, and ambiguous (Li et al., 2020). This is mainly reflected in the diverse and vague assessment criteria of such design courses. Traditionally, teachers have balanced their subjective judgment with an understanding of students’ design processes gained in the live classroom. In digital teaching, however, these insights are hard to obtain in a timely manner, which makes it challenging to define scoring boundaries accurately and to keep assessment results equitable. Alternatively, when multiple teachers evaluate students’ performance in related courses, group decision-making based on multiple clear course evaluation criteria can help teachers objectively evaluate students’ online learning processes and outcomes, including aspects such as effective information organization, design language, and practical design solutions (Lin et al., 2021). The essence of this process is that one or more evaluators (teachers) assess the targets (course evaluation criteria, student achievements) to determine the optimal or acceptable ones.
The uncertainty and subjectivity of the evaluator’s assessment target information under certain standards will aggravate the inequity of the assessment results. In addition, the process of assessing learning outcomes in digital courses faces problems such as computational complexity and unreasonable weighting (Chiao et al., 2018), and data mining and learning analytics techniques are needed to predict student performance (Namoun and Alshanqiti, 2021). It can be seen that in the digital teaching of higher engineering disciplines, how to objectively and fairly evaluate the course grading criteria and student learning outcomes is the biggest difficulty.
In a nutshell, existing digital teaching assistant systems do not focus on educational equity, and the digital technologies developed fail to fully consider the teaching characteristics of higher engineering courses. Their poor performance in supporting teaching plans, course interactions, teaching resource connections, and course achievement evaluation often results in low acceptance among teachers and students, undermines teachers’ course delivery, and leads to poor teaching quality in higher engineering disciplines. This is the biggest threat to the future development of online higher engineering education. At the same time, the lack of a good teaching system seriously affects students’ enthusiasm for learning, resulting in poor academic performance and, further, a lack of confidence in their professional abilities. An increasing number of students lose enthusiasm for their majors, graduates increasingly turn to non-engineering technical positions, and in the long run the loss of large numbers of engineering talents will seriously affect the sustainable development of the engineering industries and hinder the development of human civilization.
In order to analyze in depth how well current digital teaching assistant systems fit the teaching of higher engineering disciplines from the perspective of educational equity, we surveyed the digital teaching assistant systems in use at many higher engineering schools. Scalable digital systems, with their many advantages, have become an inevitable part of the development of colleges and universities. Many institutions of higher learning have tailored digital teaching assistant systems to their specific needs to support equity in teaching and learning, and have set up dedicated departments responsible for developing and operating these systems and guiding teachers and students in their use. For example, edX, co-founded by Harvard University and MIT, was the first to offer Massive Open Online Courses (MOOCs) and ancillary services for higher education across disciplines (Aithal and Aithal, 2016). In September 2015, Malaysian universities launched a MOOC platform (Azizi, 2017) to give students access to more learning resources. Universitas Pendidikan Ganesha in Indonesia provides a learning management system with multi-dimensional learning resources and interactive learning functions to reduce inequities in learning resources in higher education (Dewanti et al., 2022). Cheng Kung University in Taiwan offers a digital learning platform with features such as Webex synchronous video teaching, Moodle course teaching, and integrated performance assessment to assist teaching.
A review of data from 69 highly relevant research articles from the past five years shows that the impact of digital teaching technology on students’ academic achievement is the most prominent theme (Matthew et al., 2021); accordingly, research seeking technological breakthroughs along the two dimensions of digital technology and educational equity accounts for the largest proportion, making digital teaching an important part of global higher education today. For example, Li and García-Díaz (2022) developed an intelligent online education assistance system whose collaborative filtering algorithm recommends teaching resources according to students’ interests, helping students expand their knowledge, compensating for discontinuities in teachers’ guidance, and improving the effectiveness of and satisfaction with online education. Luo (2019) built a SPOC-based teaching platform for public-curriculum information using a three-layer B/S architecture linked to a database, with modules for curriculum information, textbooks, teaching activities, assessment, and statistics, on which courses can be operated and taught to promote educational equity; however, because of the large amount of data to be processed, the system’s working speed needs improvement. Yang and Lin (2022) developed a web-based intelligent education support system using the B/S architecture, whose biggest innovation is a Bayesian digital cloud-computing integration model for educational resources that effectively classifies and optimizes their allocation. Villegas-Ch et al. 
(2020) combined technologies such as artificial intelligence and data analytics with learning management systems to improve learning; their artificial neural network analyzes input and output data, learns the underlying patterns between them, evaluates and measures new data, and estimates the desired output. Sun et al. (2021) developed a deep-learning-assisted online intelligent teaching system based on artificial intelligence modules combined with knowledge recommendation, using decision tree algorithms and neural networks to generate a teaching evaluation model that extracts valuable data from massive information, summarizes rules and data, and helps teachers improve teaching and student performance. Elmasry and Ibrahim (2021) adopted hybrid cloud topologies through cloud-hosted learning management systems as a solution for achieving educational equity among higher education institutions in technologically disadvantaged developing countries. Other researchers have used educational function models, focus groups, and e-learning tool assessment scales to design online course assessment models to understand teacher and student satisfaction (Ibrahim et al., 2021). In addition, a number of studies hold that it is important to explore the problems of the online teaching interaction process from the users’ perspective (both teachers and students), so research focusing on the two dimensions of user communication and educational equity in search of better forms of interaction is also extensive. For example, Danjou (2020) proposed a method combining synchronous and asynchronous teaching to achieve educational equity. 
In the asynchronous part, students learn at their own pace by watching instructional videos the teacher posts on Facebook, while the synchronous part keeps teacher-student communication and the social connection between students smooth through the Discord platform. Mahmood et al. (2021) created an Interprofessional Simulation Education (IPSE) module and best-practice simulations based on the TeamSTEPPS® framework to improve undergraduate medical and nursing students’ teamwork and communication skills. Sajja et al. (2023) developed an AI-enhanced intelligent teaching assistant system based on a powerful language model (GPT-3) that gives students access to answers on a variety of course-specific issues, from curriculum to logistics and course policies, helping to increase student engagement and reduce learning barriers while also reducing teacher workload. To coordinate the various technological tools used by teachers and students in online teaching, the University of Urbino developed a digital toolkit that includes the learning management system Moodle, the web conferencing tool Blackboard Collaborate, the electronic proctoring tool Smowl for computer monitoring, and the web conferencing tool Google Meet on students’ smartphones for environmental monitoring (Bernardo and Bontà, 2023).
All of these studies show that teaching assistant systems built on digital information technology play a vital role in supporting educational equity, contribute to better dissemination and understanding of knowledge, and make a significant contribution to the field of learning. However, the above studies each address only one or two dimensions of educational equity, whereas equity is multifaceted and must also be considered along further dimensions such as digital technology, user interaction, the educational process, and the environment (Marrinan et al., 2015). This view is supported by Fadhil (2022), who holds that the human factor relates to students’ attitudes and skills in the learning process: while self-initiative, independence, and improving technical skills are positive user factors, lack of discipline, difficulty understanding course topics, and lack of communication skills are the most common negative factors in digital teaching. Environmental factors determine students’ comfort and familiarity with learning through digital systems; teachers’ systematic encouragement of self-paced learning is experienced by students as a positive online learning atmosphere, but distraction, lack of face-to-face interaction, and the antisocial feelings brought on by isolated self-study are negative environmental factors. This also confirms that existing digital teaching systems inadequately support teachers’ teaching planning and the interaction between teachers and users.
The technical element involves the stability of the system. While the ability to access the system anytime and anywhere, quick access to and upload of various resources, the ability to reach a wider audience, and ease of learning are the most frequently mentioned positives, poor internet connections, technical issues, the lack of a user-friendly website and interface, and the lack of hardware are the most prominent barriers to educational equity, and these problems, prevalent in digital teaching assistant systems, have still not been well addressed (Sarker et al., 2019). So far, however, research that integrates multiple dimensions of educational equity to embed digital technology into teaching and thereby enhance learning, teaching, assessment, and curriculum skills remains limited. Most web-based teaching assistant systems are still heterogeneous: the coupling among digital tools, and between digital tools and subject courses, is insufficient; large numbers of educational resource systems cannot interconnect, share resources, or reuse software; and information resources are not updated in a timely manner. It is therefore difficult to ensure orderly, intensive, and optimized resource integration services, which ultimately makes it hard for teachers to improve their teaching plans and filter educational resources through the system, and harder still to establish teaching practices grounded in educational equity. Students, for their part, also find it difficult to obtain personalized learning content and better modes of collaborative communication through the system, which lowers their motivation to learn and affects their academic performance (Garcia-Martinez and Hamou-Lhadj, 2013).
In summary, to achieve equal and high-quality education, it is challenging to bring the common goals, shared values, and sustainable development methods of the higher engineering disciplines into the rapid development of self-organizing digital teaching systems based on digital information technology, information and communication technology (ICT), and digital learning (Farias-Gaytan et al., 2023). Developing a multi-dimensional digital teaching system architecture is a promising research opportunity for effectively reducing educational inequity in the teaching and learning process (Da Silva et al., 2023). It would also enable more seamless communication, interaction, resource sharing, and information processing among education authorities, teachers, and students. There is an urgent need for an intelligent online teaching assistant system that supports curriculum planning and teaching evaluation decisions, enabling more convenient teacher-student interaction and high-quality online education (Hodges et al., 2020). This requires not only untangling the relationships among online teaching courses, assessments, and participant interactions (Ní Shé et al., 2019), but also using digital technologies to collect diverse learning outcomes in a timely manner; carrying out accompanying evaluation integrated into the whole teaching process; providing clear and transparent evaluation criteria; and using a variety of evaluation methods, such as regular performance observation, learning process analysis, and learning outcome analysis. This all-round, multi-layered approach aims to fully grasp students’ learning and to help teachers make teaching plans and assessments visual and intelligent, thereby enhancing education’s data mining and analysis capabilities and promoting personalized teaching. 
In addition, it is necessary to give students timely feedback on their learning outcome evaluations against the curriculum standards, to help them understand their own learning status, while also respecting students’ privacy, so that the physical and psychological burden of an unfair evaluation process is avoided and students’ well-being is protected. Therefore, based on multi-dimensional criteria of practical ability, this study seeks a fairer method of curriculum planning and student learning outcome evaluation for the current digital teaching of higher engineering, so as to address the difficulties of interactive teaching and learning planning in online higher engineering courses, the difficulty of course arrangement, and the lack of consistency in performance evaluation.
Establishment of evaluation model and construction of intelligent teaching assistance system
To better aid the realization of digitized education equity in higher engineering disciplines, this study proposes a multi-criteria group decision-making (MCGDM) model that combines Quality Function Deployment (QFD) with the t-test method, based on literature exploration and analysis. An accompanying intelligent online teaching assistant system has been developed. This system utilizes digital technology to expand teaching time and space, provide learning resources and tools, enhance student experience and interaction, and support data collection and application. It assists in facilitating teacher-student interactions and collaborative team design processes, promoting deep student involvement in the teaching process, and helping teachers in digital teaching course planning, analysis of student learning processes, and evaluation of learning outcomes.
Introduction of evaluation methods
Addressing the challenges of teaching evaluation in higher engineering online courses necessitates the introduction of suitable multi-criteria group decision-making methods. Based on the results of literature exploration, this study adopts the QFD and t-test methods for teacher evaluation of student learning outcomes, as detailed below:
Quality Function Deployment (QFD)
QFD is an effective modern quality control method, serving as a user-driven quality improvement tool that translates user needs into service design elements. It effectively aids quality management in enhancing user satisfaction and is highly esteemed in the field of intelligent products and services (Singh and Kumar, 2021). It is also believed to improve teaching quality in higher entrepreneurship education (Jiang and Cao, 2021).
Research on the construction of curriculum systems in higher engineering disciplines reveals that the QFD correlation matrix demonstrates the relationship between the core competencies of the discipline and the construction of curriculum system modules, effectively transforming core competencies into improvement needs for curriculum modules. The QFD steps used in this study are as follows:
First, a number of teachers rate the importance of each core training ability Ki of the curriculum on five levels: Level 1 (needs irrelevant to the realization of functions), Level 2 (needs not affecting the realization of main functions), Level 3 (needs of moderate importance), Level 4 (important needs), and Level 5 (needs of significant importance). The weighted average of the teachers’ ratings then gives the importance of each core training ability Ki. Next, the teachers brainstorm the course grading standard items corresponding to each core training ability, and use the three relation degrees 9, 3, and 1 to evaluate the correlation between the core training ability items and the grading standard items. The correlation level is represented by symbols: ▲ weak = 1; ○ medium = 3; ◎ strong = 9; blank = 0 (no correlation). The correlation between the two is given by Eq. (1):
$$H_j=\sum_{i=1}^{m}K_iR_{ij}\qquad(1)$$
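As a concrete illustration of Eq. (1), the sketch below computes the absolute weighted grades Hj from an importance vector K and a correlation matrix R on the 9/3/1 scale; all numbers are hypothetical and chosen only for illustration:

```python
# Hypothetical importance weights of m = 3 core training abilities
# (each a weighted average of several teachers' 1-5 ratings).
K = [4.2, 3.6, 4.8]

# Hypothetical correlation matrix R (m abilities x n grading-standard
# items) on the QFD scale: 9 = strong, 3 = medium, 1 = weak, 0 = none.
R = [
    [9, 3, 0],
    [1, 9, 3],
    [3, 0, 9],
]

def absolute_weights(K, R):
    """Eq. (1): H_j = sum over i of K_i * R_ij for each standard item j."""
    n = len(R[0])
    return [sum(K[i] * R[i][j] for i in range(len(K))) for j in range(n)]

H = absolute_weights(K, R)
print(H)  # a larger H_j marks a more important grading-standard item
```

A teacher focus group can then rank the grading-standard items by Hj and keep the highest-weighted ones as the course evaluation standards.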
Hj is the absolute weighted grade of curricular grading standard item j (j = 1, 2, …, n); Ki is the weighted degree of curricular core training ability item i (i = 1, 2, …, m); and Rij (i = 1, 2, …, m; j = 1, 2, …, n) is the correlation level, indicating the strength of the correlation between core training ability item i and grading standard item j. If the j-th grading standard item is closely associated with a number of core training ability items, and those items are of greater importance (larger Ki), then Hj is larger; that is, the grading standard item is more important.

T-test
The t-test is a statistical method widely used in education research and has shown high reliability in intelligent teaching evaluation research (Hooshyar et al., 2016). For example, in research on solution-based intelligent teaching assistant systems, Zhang and Jia (2017) developed the Lexue 100 Intelligent Teaching Assistant System.
Higher engineering courses often rely on teamwork to accomplish learning outcomes, and the number of groups normally does not exceed thirty, which fits the t-test’s small-sample condition (e.g., n < 30) and its parametric conditions of a normal distribution with unknown population standard deviation σ. The t-test uses t-distribution theory to deduce the probability of a difference, so as to determine whether the difference between two means is significant. It includes the independent-sample t-test, the dependent-sample t-test, and the single-sample t-test. The purpose of this teaching practice case is to determine whether the two design methods differ in students’ learning, and how effectively each group of students learns under different evaluation standards, so the dependent-sample t-test is the appropriate method. The dependent-sample t-test applies to repeated measurements of the same participants observed under two experimental conditions. Each pairing consists of two measurements with identical features, which can be regarded as one participant undergoing two experiments (in pairing), and the test verifies the significance of the difference between the two means. It can be used only under the hypothesis of normality, i.e., when the samples come from a normal population. The dependent-sample t-test is calculated as in Eq. (2), with degrees of freedom df = N − 1.
$$t = \frac{\bar{d}}{S_d / \sqrt{N}} = \frac{\bar{X}_1 - \bar{X}_2}{S_d / \sqrt{N}} \qquad (2)$$
In which N is the number of samples (the number of paired groups), $\bar{d}$ is the sample mean of the differences between the first and the second test, $\bar{X}_1$ and $\bar{X}_2$ are the sample means of the first and second test results respectively, and $S_d$ is the standard deviation of these differences, as shown in Eq. (3):
$$S_d = \sqrt{\frac{\sum d^2 - \frac{(\sum d)^2}{N}}{N - 1}} \qquad (3)$$
The test hypothesizes that, after repeated sampling, an infinite number of $\bar{d}$ values would be obtained; the standard deviation of the frequency distribution of these $\bar{d}$ values (the standard error of the mean difference, $S_{\bar{d}}$) is shown in Eq. (4):

$$S_{\bar{d}} = \frac{S_d}{\sqrt{N}} \qquad (4)$$
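As a concrete illustration, the computation in Eqs. (2)–(4) can be sketched in a few lines of Python; the two score lists below are hypothetical examples, not data from this study.

```python
import math

def paired_t_test(x1, x2):
    """Dependent-sample t-test following Eqs. (2)-(4)."""
    n = len(x1)
    d = [a - b for a, b in zip(x1, x2)]          # pairwise differences
    d_bar = sum(d) / n                           # mean difference
    # Eq. (3): standard deviation of the differences
    s_d = math.sqrt((sum(di ** 2 for di in d) - sum(d) ** 2 / n) / (n - 1))
    s_d_bar = s_d / math.sqrt(n)                 # Eq. (4): standard error
    return d_bar / s_d_bar, n - 1                # Eq. (2): t, with df = N - 1

# Hypothetical paired scores of five groups under two conditions
first = [8, 7, 9, 6, 8]
second = [7, 6, 8, 7, 7]
t, df = paired_t_test(first, second)
print(round(t, 3), df)  # → 1.5 4
```

The resulting t is then compared against the critical value at df = N − 1 to judge significance.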
Construction of evaluation model
To address the challenges teachers face in course teaching planning, in the scientific evaluation of student learning outcomes, and in student evaluation of teaching, this study proposes a multi-criteria group decision-making model based on the QFD and t-test methods. The assessment process is divided into two stages. The first stage focuses on the teacher's assessment of student learning performance: the teacher focus group first uses QFD to determine the optimal course evaluation standards; the teachers then assess student course outcomes against these standards and use the t-test to analyze students' performance under each grading criterion. The second stage involves obtaining the assessment results. Teachers gain a comprehensive understanding of students' learning through the performance assessment results, while students learn about the quality of their course learning and how it differs from their classmates' from the assessment results provided by the teachers. A diagram of the evaluation model is shown in Fig. 1.
Fig. 1 [Images not available. See PDF.]
Evaluation model diagram.
Construction of the intelligent teaching assistance system
Based on the aforementioned multi-criteria group decision-making model, this study has developed an intelligent online teaching assistance system (hereinafter referred to as the new system). The system offers both visualization display and intelligent assessment analysis functionalities. The former is realized through a system visualization interface that includes the presentation of student learning processes, outcomes, and teacher-student assessment analysis data. The latter is based on a backend programming system aligned with the assessment model, facilitating intelligent statistics and analysis of teacher-student assessment data. Teachers and students do not need to learn the calculation and analysis of the assessment methods; they simply need to fill out the assessment content according to the operational prompts provided by the new system’s interface. The results and analysis are then automatically calculated by the system’s backend and displayed on the interface.
In developing this new system, we opted for a B/S (Browser/Server) architecture within an open-source framework comprising Apache (HTTP server), MySQL (relational database management system), and PHP (programming language), also known as the AMP stack, running on the Windows platform. The system is shared by teachers and students, allowing multiple users to operate on the system interface online simultaneously. The settings for login and logout, account management, message feedback, and the help function are the same for both teachers and students.

Based on the decision model and the differing service requirements of teachers and students, the system has been designed and developed differently for each. Under the My Course submenu, the teacher interface provides three interfaces for teaching material management, course grading standard assessment, and student evaluation results, while the student interface provides course material downloads and course grade queries. Under the Assignments Management submenu, the teacher interface has interfaces for learning process inspection and learning outcome evaluation, while the student interface only has interfaces for assignment modification and submission. The specific system processes for teachers and students are shown in Fig. 2. Figures 3 and 4 respectively display parts of the teacher and student interfaces of the intelligent online teaching assistance system.
Fig. 2 [Images not available. See PDF.]
New system flow chart.
Fig. 3 [Images not available. See PDF.]
The teacher terminal interface display of the intelligent teaching assistant system (partial).
Fig. 4 [Images not available. See PDF.]
The student terminal interface display of the intelligent teaching assistant system (partial).
Instructional practice and result analysis
This case study is based on an online course (Design Management and Strategy) offered at the College of Mechanical Engineering and Automation at a university in China. There were 48 participants, all senior students majoring in Industrial Design. The course required teamwork to complete learning outcomes, with students divided into 16 groups of three each. The entire course lasted for 7 weeks, totaling 21 class hours, with one session per week and each session lasting for 3 class hours, where each class hour was 50 min long. Among these, 9 class hours were dedicated to the course instructor introducing the teaching content and operational methods of the related online systems, while the remaining 12 class hours involved each group of students conducting online course training based on course topics designated by the teacher. The course utilized Tencent Meeting and the new system developed in this study for online teaching practice. Tencent Meeting was used for teaching course content, assigning tasks, and conducting voice communication with students, while the new system was used to display student learning performance results and teaching evaluation analysis outcomes. This case involves both teachers’ evaluation of program assessment criteria and evaluation of student learning outcomes. The following sections will discuss the acquisition, description, and intelligent computation analysis of the assessment data from both teachers and students based on the new system.
Assessment of course grading standards
The course is certified under the OBE concept of higher engineering education at a Chinese university. Majors seeking certification are required to define the grading standards of course learning outcomes based on the core training abilities specified in the certification norms and to evaluate the attainment of students' learning outcomes according to these grading standards. This is also the key to a more objective and fair assessment of students' learning outcomes by teachers. Therefore, the three members of the course teaching focus group (one associate professor and two lecturers) took the four core training abilities specified for the course (expression, insight, analysis, and cooperation) as preliminary guidelines and, through QFD, translated them into fourteen course grading criteria (see Table 1 for the relevant codes). They then entered all of these into the weight table on the Course Grading Criteria Evaluation interface of the new system's teacher side; after obtaining the teachers' weights for the importance of the core training ability items and for the strength of the relationships between the ability items and the grading standard items, the system automatically generated the weights of all grading criteria (see Table 2).
Table 1. Related codes for evaluation standards.
Ability | Code | Criterion | Code |
---|---|---|---|
Expression | AE | Elaboration | EL |
| | Clarity | CL |
| | Fluency | FL |
| | Neatly | NE |
Insights | AI | Delightful | DE |
| | Originality | OR |
| | Insightful | IN |
| | Quantification | QU |
Analyze | AA | Logicality | LO |
| | Reasonable | RE |
| | Systematic | SY |
| | Integration | IT |
Collaboration | AC | Collaboration | CO |
| | Flexibility | FY |
Table 2. Weight Data for Grading Standards.
Relative weight | Weight | Ability | EL | CL | FL | NE | DE | OR | IN | QU | LO | RE | SY | IT | CO | FY |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
26.7 | 4.0 | AE | ◎ | ◎ | ◎ | ◎ | ○ | ○ | ▲ | ○ | ○ | ◎ | ○ | ▲ | ○ | ▲ |
33.3 | 5.0 | AI | ◎ | ▲ | ▲ | ○ | ◎ | ◎ | ◎ | ◎ | ◎ | ▲ | ○ | ○ | ||
20.0 | 3.0 | AA | ○ | ○ | ○ | ▲ | ◎ | ◎ | ◎ | ◎ | ○ | ◎ | ▲ | |||
20.0 | 3.0 | AC | ○ | ▲ | ○ | ○ | ○ | ▲ | ▲ | ○ | ◎ | ◎ | | | |
Weight | 660.0 | 353.3 | 360.0 | 293.3 | 200.0 | 380.0 | 566.7 | 380.0 | 620.0 | 740.0 | 313.3 | 246.7 | 540.0 | 226.7 | ||
Relative weight | 11.2 | 6.0 | 6.1 | 5.0 | 3.4 | 6.5 | 9.6 | 6.5 | 10.5 | 12.6 | 5.3 | 4.2 | 9.2 | 3.9 |
▲= 1, ○= 3, ◎= 9.
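As a sanity check on the arithmetic behind Table 2, the absolute weight of a grading standard item is the symbol-weighted sum H_j = Σ K_i·R_ij. The sketch below recomputes the Elaboration column from the relative ability weights (taking the AC weight as 20.0, so that the four relative weights sum to 100); the helper name is illustrative.

```python
# Symbol values from the legend of Table 2
SYMBOL = {"▲": 1, "○": 3, "◎": 9}

def absolute_weight(ability_weights, relations):
    """H_j = sum_i K_i * R_ij; a missing relation (None) contributes 0."""
    return sum(k * SYMBOL.get(r, 0) for k, r in zip(ability_weights, relations))

# Elaboration column of Table 2: AE=◎, AI=◎, AA=○, AC=○,
# with relative ability weights K = (26.7, 33.3, 20.0, 20.0)
h_el = absolute_weight([26.7, 33.3, 20.0, 20.0], ["◎", "◎", "○", "○"])
print(round(h_el, 1))  # → 660.0
```

The result matches the 660.0 printed in Table 2's weight row for Elaboration.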
The results in Table 2 show that the core training ability with the highest weight is Insight, followed by Expression; the other two abilities share the lowest weight. Ranking the grading standard items by absolute weight, the five most important are Reasonable, Elaboration, Logicality, Insightful, and Collaboration. The course teaching team therefore adopted these five grading criteria as the final evaluation criteria for the course and used them to evaluate student achievement in this case study. Finally, the results are presented on the student side of the system, making the evaluation criteria open and transparent and guiding students' efforts in their course work.
Teacher assessment of student learning performance
To minimize the influence of external factors on learning practice and to capture students' true performance in learning the Brainstorming and Crazy 8 methods, the course teacher designed standardized formats for the two methods and uploaded them to the teaching material management interface on the teacher's side of the new system. Students can use them in the Assignment Management interface on the student side to present each group's learning process outcomes and final outcomes (see Fig. 3). Furthermore, the course implemented the two methods on different tasks, with each method following the same execution process and duration (6 class hours each).
The topic for the Brainstorming task was the development of product design requirements for users with disabilities, and for Crazy 8 it was the development of COVID-19 epidemic prevention products. Both methods required each student group to conduct an ice-breaking survey task (2 class hours) and then present the results of two rounds of task ideation and convergence (1 class hour) on the student interface of the new system. This was followed by sharing and discussion with all course participants (1 class hour). Finally, group discussions determined the final user demand results, and the learning outcomes of the entire implementation process were optimized and submitted (2 class hours).
To obtain more objective and fair evaluation results, the evaluation of learning outcomes was carried out by a course focus group composed of three teachers with experience in teaching the course. After students submitted their learning outcomes on the new system, the three teachers evaluated the outcomes of the 16 student groups against the five course grading criteria using the homework management interface on the teacher's side of the new system. The system then intelligently generated statistical results using weighted averages. The course teachers next used the system's dependent-sample t-test to obtain statistical results on each group's learning performance under the five grading criteria of the two methods (see Tables 4 and 5).
The data in Table 3 show that among the 16 groups, the group with the highest academic score is Group 5 with 90.85 (excellent); Group 6 is also excellent with 90.56; and the group with the lowest score is Group 16, with a failing score of 55.65. Judging from the overall results across the grading standards, the whole class performed better in Brainstorming than in Crazy 8. However, among all the standards, the class's best performance is in Crazy 8's Elaboration, while its worst is in Crazy 8's Logicality.
Table 3. Data related to student outcome assessment.
Group | Mean | Total | |||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
Brainstorming | Crazy 8 | ||||||||||
RE | EL | LO | IN | CO | RE | EL | LO | IN | CO | ||
B1 | 8.750 | 8.150 | 8.350 | 8.250 | 8.500 | 7.450 | 8.850 | 8.150 | 7.300 | 7.900 | 81.650 |
B2 | 8.350 | 8.250 | 7.100 | 7.900 | 8.600 | 8.500 | 8.650 | 7.250 | 8.250 | 7.800 | 80.650 |
B3 | 8.750 | 7.650 | 7.650 | 8.450 | 8.800 | 8.150 | 8.500 | 7.250 | 8.150 | 8.000 | 81.350 |
B4 | 8.250 | 7.400 | 8.200 | 8.350 | 8.030 | 8.900 | 8.350 | 7.850 | 8.900 | 7.900 | 82.130 |
B5 | 9.250 | 9.050 | 9.450 | 9.050 | 9.250 | 8.900 | 9.200 | 8.950 | 9.000 | 8.750 | 90.850 |
B6 | 9.400 | 9.000 | 9.300 | 8.750 | 9.100 | 9.300 | 9.300 | 9.000 | 9.200 | 8.210 | 90.560 |
B7 | 8.150 | 8.160 | 8.500 | 7.900 | 8.250 | 8.250 | 8.250 | 7.500 | 8.250 | 6.900 | 80.110 |
B8 | 8.500 | 7.250 | 8.160 | 8.750 | 7.900 | 7.750 | 7.850 | 7.050 | 8.000 | 8.300 | 79.510 |
B9 | 8.500 | 8.100 | 7.750 | 7.800 | 8.300 | 8.120 | 9.250 | 8.150 | 8.000 | 7.800 | 81.770 |
B10 | 6.500 | 7.100 | 6.500 | 6.550 | 6.450 | 6.400 | 8.350 | 6.500 | 6.800 | 6.550 | 67.700 |
B11 | 7.450 | 7.250 | 7.550 | 7.550 | 7.000 | 8.250 | 9.100 | 8.650 | 8.300 | 7.400 | 78.500 |
B12 | 7.250 | 8.000 | 7.250 | 7.400 | 8.650 | 7.300 | 8.000 | 6.350 | 7.100 | 7.000 | 74.300 |
B13 | 7.520 | 6.500 | 6.500 | 7.900 | 8.300 | 6.000 | 7.650 | 6.350 | 6.030 | 7.500 | 70.250 |
B14 | 9.100 | 8.350 | 7.900 | 8.500 | 8.500 | 7.400 | 7.800 | 6.900 | 7.400 | 8.000 | 79.850 |
B15 | 7.300 | 6.500 | 7.450 | 7.250 | 7.550 | 7.100 | 7.300 | 6.150 | 6.700 | 6.800 | 70.100 |
B16 | 7.050 | 5.450 | 5.550 | 5.750 | 5.500 | 5.650 | 5.500 | 4.750 | 5.650 | 4.800 | 55.650 |
Total | 8.129 | 7.635 | 7.698 | 7.881 | 8.043 | 7.714 | 8.244 | 7.300 | 7.689 | 7.476
In addition, Table 3 presents the evaluation results of each standard for each group. For example, Group 5, whose total is excellent, performed excellently in both design methods; its Brainstorming results were better than its Crazy 8 results, and its single best score was in the Brainstorming standard Logicality. Taking Group 1 as another example, its learning effect in Brainstorming was relatively better than in Crazy 8, but among all its scores the highest is Crazy 8's Elaboration; within the five Brainstorming criteria its best is Reasonable, while its Brainstorming Elaboration is relatively low, as is its Crazy 8 Insightful. These weaker standards can serve as learning goals for future efforts. Group 16, ranked lowest, performed poorly in both design methods, although its Brainstorming results were relatively better than its Crazy 8 results; within the five Brainstorming criteria its best is Reasonable. All standards other than these require intensive training in the future. The situation of the other groups can be read similarly.
Based on the results in Tables 4 and 5 and the t-test critical value table prompted on the new system interface (critical value 1.753 at df = 15, one-tailed α = 0.05), the total-score comparison of the 16 groups between the Brainstorming and Crazy 8 methods gives t = 2.491 > 1.753. This demonstrates a significant difference in learning outcomes between the two methods, with Brainstorming showing a notably better overall learning effect than Crazy 8. Additionally, the absolute t-values for all five creativity grading standards exceed 1.753, implying significant differences between the two methods on each standard; Brainstorming outperformed Crazy 8 on four of them, the exception being Elaboration, where Crazy 8 outperformed Brainstorming. The largest difference in learning outcomes between the two methods is in Elaboration, reflecting superior performance in Crazy 8's Elaboration learning. Collaboration also shows a marked difference, with Brainstorming performing much better than Crazy 8. The smallest difference between the two methods is in Insightful, suggesting similar performance under the two learning methods.
Table 4. Descriptive statistics related data.
Criterion | M | N | Std.Deviation | Std.Error |
---|---|---|---|---|
Brainstorming | ||||
Reasonable | 8.129 | 16 | 0.739 | 0.185 |
Elaboration | 7.635 | 16 | 0.913 | 0.228 |
Logicality | 7.698 | 16 | 0.844 | 0.211 |
Insightful | 7.881 | 16 | 1.235 | 0.309 |
Collaboration | 8.043 | 16 | 0.824 | 0.206 |
Total | 7.989 | 16 | 0.759 | 0.190 |
Crazy 8 | ||||
Reasonable | 7.714 | 16 | 1.033 | 0.258 |
Elaboration | 8.244 | 16 | 1.008 | 0.252 |
Logicality | 7.300 | 16 | 1.970 | 0.255 |
Insightful | 7.689 | 16 | 1.021 | 0.320 |
Collaboration | 7.476 | 16 | 1.143 | 0.286 |
Total | 7.674 | 16 | 1.022 | 0.256 |
Table 5. T-test related statistical data.
Brainstorming-Crazy 8 | Mean | Std.Deviation | Std. Error | 95% Confidence Interval for Mean | ||||
---|---|---|---|---|---|---|---|---|
Lower | Upper | t | df | Sig | ||||
Reasonable | 0.415 | 0.752 | 0.188 | 0.014 | 0.816 | 2.208 | 15 | 0.043 |
Elaboration | −0.589 | 0.786 | 0.196 | −1.007 | −0.170 | −2.998 | 15 | 0.009
Logicality | 0.389 | 0.678 | 0.170 | 0.028 | 0.751 | 2.296 | 15 | 0.037 |
Insightful | 0.193 | 0.218 | 0.054 | 0.076 | 0.309 | 3.534 | 15 | 0.003 |
Collaboration | 0.568 | 0.784 | 0.196 | 0.150 | 0.985 | 2.896 | 15 | 0.011 |
Total | 0.316 | 0.507 | 0.127 | 0.046 | 0.586 | 2.491 | 15 | 0.025 |
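The t column of Table 5 can be reproduced from its Mean and Std.Deviation columns via t = mean/(SD/√N) with N = 16; the snippet below is a sketch of that check (small rounding differences against the printed values are expected, since the table's inputs are themselves rounded).

```python
import math

def t_from_summary(mean_diff, sd_diff, n):
    """Dependent-sample t recomputed from summary statistics:
    t = mean difference / (SD of differences / sqrt(N))."""
    return mean_diff / (sd_diff / math.sqrt(n))

# (mean difference, SD of differences) rows copied from Table 5
for name, m, sd in [("Reasonable", 0.415, 0.752),
                    ("Collaboration", 0.568, 0.784),
                    ("Total", 0.316, 0.507)]:
    print(name, round(t_from_summary(m, sd, 16), 3))
```

For the Total row this gives ≈ 2.49, in line with the t = 2.491 reported in Table 5.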
Teachers can use these results to gain a comprehensive understanding of student performance and tailor their teaching accordingly. Meanwhile, students can access these results through the new system’s course grade query function, understanding their group’s performance status among all groups, and focus on improving their weaker grading standards to enhance their academic performance.
Results and discussions
The main purpose of this section is to explore the impact of the new system on students' academic performance and on teachers' and students' attitudes and satisfaction, compared with another currently used system. The specific implementation is as follows.
To further test whether the new system leads to better learning performance than other systems, this study ran an experiment group and a control group. The engineering practice course remained Design Management and Strategy; since the course is fixed in the first semester of the senior year, the experimental test lasted two years. The background conditions were the same for both groups: the same teacher, and the same teaching content, coursework content, and teaching schedule. The academic performance of the two groups of students before admission was broadly similar. To make the verification results more comparable, the most representative product was selected for the control group: the teaching assistant system currently most frequently used in China's higher engineering schools.

To protect the rights and interests of that system, its name is omitted here and it is referred to as System A. Although System A differs from the new system developed in this study (referred to as System B) in digital technology and interface, it offers the same teaching- and learning-related functions for teachers and students, such as teaching management, classroom teaching, and learning analysis on the teacher's side, and course information resource downloads, learning result submission, and grade viewing on the student side. However, it also needs the voice and video functions of other social platforms to assist teaching; here, like the experiment group, it used Tencent Meeting. The two groups of test subjects were two consecutive classes of students from the same school and major.
The experiment group (Class B) consisted of 48 students who participated in the teaching practice of the new system; their results are shown in Table 3. The control group (Class A) consisted of 54 students, divided into eighteen groups of three to complete the course learning tasks in teams, including regular learning tasks (the process tasks of Brainstorming and Crazy 8) and assessment learning tasks (the comprehensive learning tasks of Brainstorming and Crazy 8). Before the course, the control group participants had not used System B but were familiar with the operation of System A, so they needed no training in system operation. After the students submitted their regular learning process results and assessment results on the system, the teacher (a single teacher) evaluated all the learning outcomes of each group on the teacher's interface. System A adopts the traditional course assessment method, consisting of procedural (regular) assessment results and exam assessment results. The final course score is out of 100 points, of which the regular score accounts for 30% and the exam score for 70%. The score data obtained by each group of Class A are shown in Table 6, in which the regular assignment codes for Brainstorming are B-1 and B-2 and its exam assignment code is B-R, while the regular assignment codes for Crazy 8 are C-1 and C-2 and its exam assignment code is C-R.
Table 6. Class A score data.
Group | Regular | Exam | Total | ||||
---|---|---|---|---|---|---|---|
B-1 | B-2 | C-1 | C-2 | B-R | C-R | |
A1 | 70 | 70 | 78 | 78 | 70 | 74 | 72.60 |
A2 | 90 | 90 | 90 | 88 | 90 | 85 | 88.10 |
A3 | 70 | 70 | 70 | 70 | 70 | 76 | 72.10 |
A4 | 60 | 61 | 45 | 44 | 60 | 47 | 53.20 |
A5 | 78 | 79 | 70 | 71 | 74 | 70 | 72.75 |
A6 | 90 | 90 | 91 | 91 | 90 | 92 | 90.85 |
A7 | 73 | 74 | 63 | 66 | 72 | 65 | 68.65 |
A8 | 88 | 86 | 76 | 76 | 83 | 77 | 80.45 |
A9 | 40 | 46 | 62 | 64 | 42 | 60 | 51.60 |
A10 | 70 | 64 | 70 | 66 | 63 | 70 | 66.80 |
A11 | 70 | 76 | 70 | 70 | 71 | 70 | 70.80 |
A12 | 75 | 78 | 76 | 75 | 76 | 75 | 75.65 |
A13 | 61 | 60 | 34 | 31 | 56 | 30 | 44.05 |
A14 | 80 | 81 | 88 | 89 | 81 | 87 | 84.15 |
A15 | 74 | 72 | 68 | 68 | 70 | 60 | 66.65 |
A16 | 60 | 64 | 57 | 55 | 65 | 58 | 60.75 |
A17 | 69 | 68 | 70 | 71 | 70 | 71 | 70.20 |
A18 | 80 | 80 | 70 | 72 | 78 | 71 | 74.80 |
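The 30/70 weighting described above can be checked against Table 6: for group A1, the regular scores average 74 and the exam scores 72, giving 74 × 0.3 + 72 × 0.7 = 72.60, the Total shown in the table. A minimal sketch:

```python
def final_score(regular, exam):
    """Traditional course assessment: regular work 30%, exam 70%."""
    return 0.3 * sum(regular) / len(regular) + 0.7 * sum(exam) / len(exam)

# Group A1's row from Table 6: regular B-1, B-2, C-1, C-2; exam B-R, C-R
a1 = final_score([70, 70, 78, 78], [70, 74])
print(round(a1, 2))  # → 72.6
```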
The independent-samples t-test is an analytical method for testing the difference between the means of two groups. Based on the students' actual course learning results in the two classes under the different systems, it was used to compare the overall learning effectiveness of the two classes; the relevant data are shown in Table 7. Under the assumption of equal variances, F = 1.052 with significance probability Sig. = 0.313 > 0.05, indicating that the variances of the learning outcomes of the two classes are equal. The two-tailed significance probability of the t-distribution, Sig. = 0.048 < 0.05, indicates a significant difference in learning effectiveness between the two classes. The mean scores of the two classes show that Class B students achieved better learning outcomes than Class A, indicating that the new system is conducive to improving students' learning outcomes.
Table 7. Independent samples T-test data.
Class A | Class B | |
---|---|---|
N | 18 | 16 |
Mean | 70.2306 | 77.8081 |
SD | 12.24319 | 8.70748 |
SE | 2.88575 | 2.17687 |
Equal variances assumed | Equal variances not assumed | |
F | 1.052 | |
Sig. | 0.313 | |
t | −2.055 | −2.096 |
df | 32 | 30.617 |
Sig.(2-tailed) | 0.048 | 0.044 |
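Table 7's t-value can likewise be recomputed from its summary rows using the pooled-variance formula (equal variances assumed); the sketch below uses only the statistics printed in the table.

```python
import math

def independent_t(m1, s1, n1, m2, s2, n2):
    """Pooled-variance independent-samples t-test from summary statistics."""
    # Pooled variance across the two groups
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))      # standard error of the difference
    return (m1 - m2) / se, n1 + n2 - 2           # t and df

# Class A vs. Class B summary rows from Table 7
t, df = independent_t(70.2306, 12.24319, 18, 77.8081, 8.70748, 16)
print(round(t, 3), df)  # → -2.055 32
```

This reproduces the t = −2.055 and df = 32 reported under "Equal variances assumed".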
Then, a questionnaire survey on the digital teaching assistant systems was administered to teachers and students to obtain their attitudes and satisfaction. Because teachers and students use the systems for different purposes and view their functions differently, separate questionnaires were designed for each, using a five-point Likert scale (5 = strongly agree, 4 = agree, 3 = neutral, 2 = disagree, 1 = strongly disagree); participants were informed of the purpose of the study through online consent. A total of 33 questions were prepared for students: 5 on basic personal information, 20 on satisfaction with system functions, and 8 on the experience of using the systems. A total of 34 questions were prepared for teachers: 5 on basic personal information, 19 on satisfaction with system functions, and 10 on system user experience. In the last two parts of each questionnaire, every question was asked once for each system. The relevant codes are shown in Tables 8 and 9.
Table 8. Satisfaction evaluation related codes.
Content | Code |
---|---|
Convenient operation | E1 |
Smooth operation | E2 |
Clear operation structure and navigation instructions | E3 |
Teaching and learning functions are in line with the characteristics of higher engineering disciplines | E4 |
Truly creates a learning environment that supports the characteristics of higher engineering disciplines | E5 |
Diversity of system teaching models | E6 |
Diversity of system resource types | E7 |
Readability of logical relationships between system resources | E8 |
Close relevance between recommended resources and content of advanced higher engineering disciplines | E9 |
Quick to obtain required resources | E10 |
Convenient for uploading and downloading resources | E11 |
Student attendance punch-in record function | E12 |
Teacher-student interaction method that fits the characteristics of higher engineering disciplines teaching | E13 |
Student interaction and collaboration method that fits the characteristics of higher engineering disciplines teaching | E14 |
Message reminder function for teachers and students to ask and answer questions | E15 |
Function of submitting students’ learning process results and assessment results | E16 |
Interface for recording students’ learning process and results | E17 |
Interface for publishing course learning tasks and learning grading standards | E18 |
Timely reminding of evaluation results of students’ learning process | E19 |
Multi-dimensional learning evaluation analysis reports | E20 |
Convenience of uploading and downloading resources | F1 |
Student attendance punch-in record function | F2 |
Interface for comprehensively observing the results of students’ learning process and assessment results | F3 |
Interface for recording students’ learning process and results | F4 |
Fair learning standard evaluation function | F5 |
Performance evaluation that considers the learning process | F6 |
Table 9. Use attitude evaluation related codes.
Content | Code |
---|---|
Improving students’ learning self-confidence | G1 |
Improving students’ learning enthusiasm | G2 |
Improving students’ learning motivation | G3 |
Improving students learning attention | G4 |
Improving students learning efficiency | G5 |
Improve students’ ability to independently construct a cognitive framework for professional courses | G6 |
Improve students’ creative thinking and obtain more design solutions | G7 |
Improving students’ learning outcomes | G8 |
Improving teaching self-confidence | G9 |
Improving teaching enthusiasm | G10 |
Improving teaching efficiency | G11 |
Providing effective assistance with improving teaching plans | G12 |
The questionnaires were collected from teachers and students who had used both systems. System A is the school's official teaching system and had been used by all surveyed teachers and students. System B has so far been opened to the engineering disciplines of two universities; teachers of the engineering practice courses have applied it in their teaching, and the students in the control group used System B in other professional practice courses after this course ended. The student questionnaires were therefore mainly collected from the two classes in this case study; one abnormal questionnaire showing no variation across options was eliminated, leaving 101 valid questionnaires. The teacher questionnaires were mainly collected from teachers who had used both systems in the two higher engineering schools; after eliminating abnormal questionnaires, 57 valid questionnaires remained. Since the questionnaire consists of two different types of questions, the satisfaction and usage attitude results were statistically analyzed with SPSS 19; the reliability and validity of the teacher and student questionnaires and the related statistics are shown in Table 10.
Table 10. Relevant statistical data of the questionnaire.
Student’s satisfaction with system function | Teacher’s satisfaction with system function | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
Cronbach’s α | N of Items | KMO | df | Sig. | Cronbach’s α | N of Items | KMO | df | Sig. | ||
0.941 | 40 | 0.858 | 780 | 0.000 | 0.929 | 38 | 0.705 | 703 | 0.000 | ||
Code | A system | B system | N | Code | A system | B system | N | ||||
Mean | SD | Mean | SD | Mean | SD | Mean | SD | ||||
E1 | 3.96 | 0.882 | 4.08 | 0.821 | 101 | E1 | 4.21 | 0.818 | 3.67 | 0.715 | 57 |
E2 | 3.93 | 0.816 | 4.01 | 0.877 | 101 | E2 | 4.16 | 0.751 | 3.67 | 0.852 | 57 |
E3 | 3.96 | 0.720 | 4.20 | 0.928 | 101 | E3 | 3.07 | 0.923 | 3.98 | 0.790 | 57 |
E4 | 3.84 | 0.903 | 3.95 | 0.792 | 101 | E4 | 3.14 | 0.972 | 4.04 | 0.844 | 57 |
E5 | 3.77 | 0.847 | 3.99 | 0.889 | 101 | E5 | 2.88 | 1.103 | 3.56 | 0.708 | 57 |
E6 | 3.95 | 0.767 | 3.89 | 0.847 | 101 | E6 | 3.26 | 0.791 | 3.79 | 0.940 | 57 |
E7 | 3.84 | 0.821 | 3.94 | 0.798 | 101 | E7 | 4.21 | 0.725 | 3.81 | 0.811 | 57 |
E8 | 3.83 | 0.861 | 3.96 | 0.927 | 101 | E8 | 2.98 | 0.896 | 3.74 | 0.897 | 57 |
E9 | 3.64 | 0.955 | 3.94 | 0.835 | 101 | E9 | 3.02 | 0.991 | 4.14 | 0.667 | 57 |
E10 | 3.78 | 0.832 | 3.96 | 0.836 | 101 | E10 | 3.46 | 0.888 | 3.68 | 0.909 | 57 |
E11 | 3.78 | 0.808 | 4.07 | 0.816 | 101 | F1 | 3.40 | 0.799 | 3.96 | 0.844 | 57 |
E12 | 3.86 | 0.775 | 3.81 | 0.913 | 101 | F2 | 3.67 | 0.764 | 3.40 | 0.799 | 57 |
E13 | 3.76 | 0.896 | 4.17 | 0.991 | 101 | E13 | 3.09 | 1.005 | 4.19 | 0.789 | 57 |
E14 | 3.76 | 0.950 | 3.93 | 0.828 | 101 | E14 | 3.05 | 0.990 | 4.25 | 0.808 | 57 |
E15 | 3.67 | 0.907 | 3.88 | 0.898 | 101 | E15 | 3.05 | 0.990 | 3.88 | 0.867 | 57 |
E16 | 3.87 | 0.902 | 3.83 | 0.906 | 101 | F3 | 3.05 | 0.915 | 4.25 | 0.786 | 57 |
E17 | 3.86 | 0.837 | 3.90 | 0.911 | 101 | F4 | 3.05 | 0.789 | 4.04 | 0.731 | 57 |
E18 | 3.73 | 1.038 | 4.08 | 0.808 | 101 | F5 | 4.04 | 0.731 | 4.09 | 0.739 | 57 |
E19 | 3.66 | 0.886 | 4.00 | 0.825 | 101 | F6 | 2.95 | 0.934 | 4.21 | 0.773 | 57 |
E20 | 3.79 | 0.864 | 4.19 | 0.902 | 101 |
Student’s and teacher’s usage attitude:

| Questionnaire | Cronbach’s α | N of Items | KMO | df | Sig. |
|---|---|---|---|---|---|
| Student | 0.906 | 16 | 0.890 | 120 | 0.000 |
| Teacher | 0.908 | 20 | 0.770 | 190 | 0.000 |

Student questionnaire (N = 101):

| Code | System A Mean | System A SD | System B Mean | System B SD |
|---|---|---|---|---|
| G1 | 3.79 | 0.816 | 3.94 | 0.785 |
| G2 | 3.82 | 0.899 | 4.01 | 0.794 |
| G3 | 3.89 | 0.835 | 3.99 | 0.831 |
| G4 | 3.72 | 0.850 | 3.81 | 0.967 |
| G5 | 3.79 | 0.898 | 4.09 | 0.918 |
| G6 | 3.81 | 0.796 | 3.98 | 0.860 |
| G7 | 3.86 | 0.917 | 4.04 | 0.937 |
| G8 | 3.75 | 0.754 | 4.05 | 0.865 |

Teacher questionnaire (N = 57):

| Code | System A Mean | System A SD | System B Mean | System B SD |
|---|---|---|---|---|
| G9 | 3.16 | 0.882 | 3.81 | 0.789 |
| G10 | 2.98 | 0.896 | 3.75 | 0.892 |
| G11 | 3.23 | 0.964 | 4.16 | 0.727 |
| G12 | 3.09 | 0.739 | 4.25 | 0.576 |
| G3 | 3.00 | 0.945 | 3.67 | 1.041 |
| G4 | 3.23 | 0.846 | 3.95 | 0.742 |
| G5 | 3.16 | 0.862 | 3.96 | 0.755 |
| G6 | 2.89 | 0.994 | 3.75 | 0.969 |
| G7 | 3.23 | 0.846 | 4.02 | 0.744 |
| G8 | 3.25 | 0.931 | 4.05 | 0.666 |
The results in Table 10 show that the reliability coefficients of both the teacher and student questionnaires are greater than 0.9, indicating excellent reliability and internal consistency. The KMO of the student questionnaire is greater than 0.8, and that of the teacher questionnaire is greater than 0.7, so the validity of both questionnaires meets the requirements for principal component analysis in factor analysis. The Sig. value for both questionnaires is 0.000 (< 0.01), confirming that the item correlations are strong enough for factor analysis; the questionnaires also reveal significant differences between the two systems as rated by teachers and students.
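The internal-consistency coefficient reported here is Cronbach's α, which can be reproduced with a short script. The sketch below is a minimal implementation of the standard α formula; the item scores are invented toy data for illustration, not the survey responses.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a questionnaire.

    items: list of k columns, each holding the n respondents'
    scores for one item (e.g. 1-5 Likert ratings).
    """
    k = len(items)
    n = len(items[0])

    def variance(xs):
        # Sample variance (n - 1 denominator), as statistics packages report.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Sum of the individual item variances.
    item_var_sum = sum(variance(col) for col in items)
    # Variance of each respondent's total score across all items.
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Toy data: 3 items, 5 respondents (hypothetical scores).
scores = [
    [4, 3, 5, 4, 2],
    [4, 3, 4, 5, 2],
    [5, 3, 4, 4, 1],
]
alpha = cronbach_alpha(scores)  # about 0.926 for this toy data
```

Values above 0.9, as in both questionnaires here, are conventionally read as excellent internal consistency.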
The statistical results show that teachers and students are satisfied with the performance of most functions of System A, indicating that the system meets the basic functional requirements of online teaching courses: it is convenient and smooth to operate, has a clear operating structure and navigation instructions, offers diverse resource types, and provides rapid access to required resources. In addition, most students reported that System A also outperforms System B in diversifying teaching forms and in submitting learning process results and assessment results, which helps greatly improve their learning initiative, stimulates creative ideas, yields more design solutions, and raises their enthusiasm for learning.
The vast majority of engineering teachers and students who participated in this survey believe that a digital teaching assistant system for higher engineering disciplines should provide teaching and learning functions that fit the curriculum characteristics of those disciplines, multi-dimensional learning evaluation analysis reports, teacher-student interaction methods suited to higher engineering teaching, and clear operating structures and navigation instructions; these are extremely important for helping teachers and students establish a fair teaching environment. In addition, more than 40% of the teachers expressed a strong wish for a learning environment that supports the characteristics of the higher engineering curriculum, for recommended resources closely correlated with course content, and for an interface that comprehensively displays students' learning process results and assessment results. More than 40% of the students likewise emphasized that interfaces for publishing course learning tasks and grading standards, and for recording the learning process and outcome evaluations, are extremely important for exchanging their learning information.
Measured against these needs of teachers and students, the disadvantages of System A are also particularly obvious. The vast majority of teachers clearly stated that System A performs very poorly in creating a learning environment that supports the characteristics of higher engineering courses, in the readability of logical connections between resources, and in multi-dimensional learning evaluation analysis reports; this degrades the overall user experience and greatly dampens their enthusiasm for teaching. Most students were neutral about System A's functions, and relatively few were satisfied or very satisfied, indicating that the student experience of System A is mediocre and that the system still leaves considerable room for improvement.
We found that teachers and students favor System B far more than System A: eight of its functions reach the satisfaction level for both groups. Both teachers and students are most satisfied with the multi-dimensional learning evaluation analysis reports and the teacher-student interaction methods that fit the teaching characteristics of higher engineering disciplines, and the satisfaction weights of these two items are far higher than those of System A, making them System B's biggest advantages. Teachers also rate System B higher than System A on the interface for comprehensively observing students' learning process results and assessment results, on student interaction and collaboration methods that fit higher engineering teaching, and on the close relevance of recommended resources to course content. In addition, the system's fair learning-standard evaluation function and its observation and evaluation records of students' learning processes and outcomes are outstanding, and its multi-dimensional learning evaluation analysis reports improve teaching efficiency and provide effective help for improving teaching plans.
The students believe that System B performs much better than System A in publishing course learning tasks and grading standards and in promptly reporting evaluations of their learning process. It is not only more in line with the characteristics of their disciplines and majors, but also demonstrates educational equity through smooth learning communication, transparent learning tasks and assessment standards, and multi-dimensional learning performance reports; the students believe System B improves their learning efficiency more than System A does. The ability to independently construct a cognitive framework for professional courses, together with creative ideas that yield more design solutions, makes them more enthusiastic about learning, and their learning effectiveness improves accordingly. These student self-evaluations were confirmed by the results of the teacher questionnaire. The students also noted little difference between the two systems in the interface and smoothness of the evaluation records for learning processes and assessment results; nevertheless, this does not affect their recognition of System B, which they generally rate as better than System A at improving learning efficiency and effectiveness.
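The per-item comparison of the two systems' mean scores rests on a t-test, as the evaluation model combines QFD with the t-test. A minimal sketch of a paired-samples t statistic, where the same respondents rate both systems on one item, is shown below; the ratings are hypothetical and are not taken from the questionnaire data.

```python
import math

def paired_t(a, b):
    """Paired-samples t statistic for the same respondents rating
    System A (a) and System B (b) on one questionnaire item.
    Positive t means System B is rated higher on average."""
    assert len(a) == len(b)
    d = [y - x for x, y in zip(a, b)]          # per-respondent differences B - A
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

# Hypothetical 5-point ratings from six respondents.
sys_a = [3, 4, 3, 2, 4, 3]
sys_b = [4, 4, 5, 3, 5, 4]
t_stat = paired_t(sys_a, sys_b)  # positive: System B rated higher
```

In practice the statistic would be compared against the t distribution with n − 1 degrees of freedom (e.g. via `scipy.stats.ttest_rel`) to obtain the significance level.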
Conclusion
The analysis results demonstrate the successful development of the proposed digital teaching assistant system based on the multi-criteria group decision-making model, and the case study confirms the feasibility of the whole experiment. The system's advantage lies in teaching and learning functions that suit the teaching characteristics of higher engineering disciplines and create a fair educational environment. Compared with the existing teaching system, the system developed in this study is more effective in resource recommendation, teacher-student interaction, and student-student interaction and collaboration, providing functional support better matched to teachers' diverse needs in curriculum planning and to students' needs in learning and collaboration. It also demonstrates better educational equity in the learning process and in the assessment of learning outcomes. The system supports the publication of learning tasks and learning outcome grading standards, as well as the submission, observation, evaluation, and feedback of learning processes and outcomes, together with the multi-dimensional learning evaluation analysis report produced by the multi-criteria group decision-making (MCGDM) model based on QFD combined with the t-test. This helps provide a fair learning environment, overcomes the ambiguity of learning assessment in the design discipline, supports the transformation from summative to formative evaluation, and avoids unfair evaluation results.
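The QFD-based MCGDM aggregation can be illustrated in miniature. The sketch below only shows the general pattern, under the assumption that several decision makers rate each criterion and the ratings are then weighted by QFD-derived criterion importance; the weights and ratings are invented for illustration and do not come from this study.

```python
def mcgdm_score(expert_ratings, criterion_weights):
    """Aggregate several decision makers' ratings into one weighted
    score for an alternative: average each criterion over the experts,
    then take a weighted sum with normalised criterion weights."""
    n_experts = len(expert_ratings)
    n_criteria = len(criterion_weights)
    # Average each criterion's rating over all experts.
    avg = [sum(r[j] for r in expert_ratings) / n_experts
           for j in range(n_criteria)]
    # Normalise the QFD-derived importance weights so they sum to 1.
    total_w = sum(criterion_weights)
    weights = [w / total_w for w in criterion_weights]
    return sum(w * a for w, a in zip(weights, avg))

# Hypothetical QFD importance weights for three criteria, and
# 1-5 ratings from three experts (rows) over those criteria (columns).
weights = [5, 3, 2]
ratings = [[4, 3, 5], [5, 3, 4], [4, 4, 4]]
score = mcgdm_score(ratings, weights)
```

Alternatives (e.g. candidate system functions) would be ranked by this score; a t-test on the rating samples can then check whether two alternatives' scores differ significantly.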
At the same time, the system allows teachers and students to better understand the overall learning situation of the course: it gives teachers all-round guidance for capturing the characteristics and individual differences of student groups and improving teaching planning, and gives students targeted teaching content, diversified learning resources, differentiated learning paths, and personalized learning guidance. This is also why the system's teacher-student interaction and multi-dimensional learning evaluation analysis reports gained wide satisfaction among teachers and students. We therefore have reason to believe that the system creates good conditions for establishing teaching equity and can effectively solve problems in curriculum teaching planning, student learning performance evaluation, and teaching quality in the digitized education of higher engineering disciplines. This study also provides a new perspective on the teaching and evaluation of similar online courses, gives teachers of related disciplines additional options to better reflect educational equity, develop core competencies, and deepen the learning experience, and contributes to how higher engineering disciplines can understand and enhance digitized learning at the university and national levels.
In the future, more run-in experiments with related disciplines are needed to make the system's functions better fit the particularity of each course. By refining specific implementation paths and strategies, carrying out point-to-point practice, and developing replicable and generalizable tools, we will promote the reform of curriculum teaching. In addition, the next step is to address the shortcomings of the new system identified in the user questionnaire to create a better user experience.
Author contributions
Study conception and design: JL and SFL. Data collection: JL and HQZ. Analysis and interpretation of results: JL and ZL. Original draft preparation: JL and MC. Supervision: SFL. Revision: JL. All authors reviewed the results and accepted the published version of the manuscript.
Data availability
The data sets generated and/or analyzed during the present study include participants' personal information and are therefore not deposited in any repository; an anonymized version is available from the corresponding author upon reasonable request.
Competing interests
The authors declare no competing interests.
Ethical approval
This study examines the implementation of an online teaching assistant system designed to enhance teaching, assessment, and address educational inequity. Our institution deemed ethical approval unnecessary under the Declaration of Helsinki, as the study was not classified as medical research or human experimentation. Conducted in accordance with guidelines for research involving human participants, all participants were over 18 years old and provided informed consent. They were assured that their information would be kept confidential and anonymous, and used solely for educational research purposes.
Informed consent
Participants were informed about the overall objectives and aim of the study, the validation procedures of the study requirements, confidentiality of information, voluntary participation, and ability to opt out of the study if needed. All experts gave their agreement to participate in the study and consented to processing of their data.
Supplementary information
The online version contains supplementary material available at https://doi.org/10.1057/s41599-024-03616-y.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
Aithal, PS; Aithal, S. Impact of on-line education on higher education system. Int J Eng Res Mod Educ; 2016; 1,
Albrahim, FA. Online teaching skills and competencies. Turkish Online J Educ Technol; 2020; 19,
Angelova, M. Students’ attitudes to the online university course of management in the context of COVID-19. Int J Technol Educ Sci; 2020; 4,
Azizi NA (2017) Malaysia at the forefront of e-Learning. New Straits Times. Retrieved from https://www.nst.com.my/news/nation/2017/09/284259/malaysia-forefront-e-learning
Bernardo, M; Bontà, E. Teaching and learning centers and coordinated technologies for an effective transition at COVID-19 pandemic time to massive distance learning and online exams. J e-Learn Knowl Soc; 2023; 19,
Bhat, S; D’Souza, R; Bhat, S; Raju, R; Kumara, BP. Effective deployment of outcome based education: strategies based on motivational models. J Eng Educ Transform; 2020; 33, pp. 164-169.
Chiao, HM; Chen, YL; Huang, WH. Examining the usability of an online virtual tour-guiding platform for cultural tourism education. J Hospitality Leis Sport Tour Educ; 2018; 23, pp. 29-38. [DOI: https://dx.doi.org/10.1016/j.jhlste.2018.05.002]
Cullen, R; Kullman, J; Wild, C. Online collaborative learning on an ESL teacher education programme. ELT J; 2013; 67,
Da Silva, FL; Slodkowski, BK; da Silva, KKA; Cazella, SC. A systematic literature review on educational recommender systems for teaching and learning: research trends, limitations and opportunities. Educ Inf Technol; 2023; 28,
Danjou, PE. Distance teaching of organic chemistry tutorials during the COVID-19 pandemic: focus on the use of videos and social media. J Chem Educ; 2020; 97,
Dewanti, P; Candiasa, IM; Tegeh, IM; Sudatha, IGW. The SMILE, A cyber pedagogy based learning management system models. Int J Adv Comput Sci Appl; 2022; 13,
Elmasry, MA; Ibrahim, MH. Cloud computing for e-learning: a proposed model for higher education institutions in developing countries. Artic Int J Sci Technol Res; 2021; 10,
Fadhil, NFM. Using rich picture to understand the issues and challenges in e-learning environment: a case study of students in higher education institution. World J Engl Lang; 2022; 12,
Farias-Gaytan, S; Aguaded, I; Ramirez-Montoya, MS. Digital transformation and digital literacy in the context of complexity within higher education institutions: a systematic literature review. Humanit soc sci comm; 2023; 10,
Garcia-Martinez S, Hamou-Lhadj A (2013) Educational recommender systems: a pedagogical-focused perspective. Multimedia Serv Intell Environ Recommend Serv 113–124
Hebebci, MT; Bertiz, Y; Alan, S. Investigation of views of students and teachers on distance education practices during the coronavirus (COVID-19) pandemic. Int J Technol Educ Sci; 2020; 4,
Hodges CB, Moore S, Lockee BB, Trust T, Bond MA (2020) The difference between emergency remote teaching and online learning. https://er.educause.edu/articles/2020/3/the-difference-between-emergency-remote-teaching-and-online-learning
Hooshyar, D; Ahmad, RB; Yousefi, M; Fathi, M; Horng, SJ; Lim, H. Applying an online game-based formative assessment in a flowchart-based intelligent tutoring system for improving problem-solving skills. Comput Educ; 2016; 94, pp. 18-36. [DOI: https://dx.doi.org/10.1016/j.compedu.2015.10.013]
Ibrahim, AF; Attia, AS; Asma’M, B; Ali, HH. Evaluation of the online teaching of architectural design and basic design courses case study: college of architecture at JUST, Jordan. Ain Shams Eng J; 2021; 12,
Jiang H, Cao Y (2021) An empirical study of entrepreneurship education and teaching in Colleges and Universities under the concept of sustainable development. EDP Sciences 251:02084
Lantada, AD. Engineering education 5.0: Continuously evolving engineering education. Int J Eng Educ; 2020; 36,
Li, J; Li, Z; Liu, SF; Cheng, M. Applying a fuzzy, multi-criteria decision-making method to the performance evaluation scores of industrial design courses. Interact Learn Environ; 2020; 28,
Li, Y; García-Díaz, V. Design of distance assistance system for intelligent education based on WEB. Mob Netw Appl; 2022; 27,
Lin S, Dong Y, Lan X, Luyun Z (2021) Online Teaching and Reform of the “Intelligent Furniture Design” Course during the Covid-19 Epidemic. In 2021 2nd International Conference on Education, Knowledge and Information Management (ICEKIM) (pp. 150-153). IEEE
Luo, X. Research on the construction and operation of information teaching platform of university public courses based on SPOC. Inf Sci; 2019; 37,
Margaryan, A; Bianco, M; Littlejohn, A. Instructional quality of massive open online courses (MOOCs). Comput Educ; 2015; 80, pp. 77-83. [DOI: https://dx.doi.org/10.1016/j.compedu.2014.08.005]
Marrinan, H; Firth, S; Hipgrave, D; Jimenez-Soto, E. Let’s take it to the clouds: the potential of educational innovations, including blended learning, for capacity building in developing countries. Int J Health Policy Manag; 2015; 4,
Matthew, UO; Kazaure, JS; Okafor, NU. Contemporary development in E-Learning education, cloud computing technology & internet of things. EAI Endorsed Trans Cloud Syst; 2021; 7,
Mahmood, LS; Mohammed, CA; Gilbert, JH. Interprofessional simulation education to enhance teamwork and communication skills among medical and nursing undergraduates using the TeamSTEPPS® framework. Med J Armed Forces India; 2021; 77, pp. S42-S48. [DOI: https://dx.doi.org/10.1016/j.mjafi.2020.10.026] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33612931][PubMedCentral: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7873709]
Miller, R; Liu, K. After the virus: disaster capitalism, digital inequity, and transformative education for the future of schooling. Educ Urban Soc; 2023; 55,
Namoun, A; Alshanqiti, A. Predicting student performance using data mining and learning analytics techniques: a systematic literature review. Appl Sci; 2021; 11,
Ní Shé C, Farrell O, Brunton J, Costello E, Donlon E, Trevaskis S, Eccles S (2019) Teaching online is different: critical perspectives from the literature. https://doras.dcu.ie/23890/1/13758_Text_V3.pdf
Rajab, MH; Gazal, AM; Alkattan, K. Challenges to online medical education during the COVID-19 pandemic. Cureus; 2020; 12,
Sajja, R; Sermet, Y; Cwiertny, D; Demir, I. Platform-independent and curriculum-oriented intelligent assistant for higher education. Int J Educ Technol High Educ; 2023; 20,
Sarker, FH; Al Mahmud, R; Islam, MS; Islam, K. Use of e-learning at higher educational institutions in Bangladesh: opportunities and challenges. J Appl Res High Educ; 2019; 11,
Shukor, NA; Abdullah, Z. Using learning analytics to improve MOOC instructional design. Int J Emerg Technol Learn; 2019; 14,
Singh, A; Kumar, S. Picture fuzzy set and quality function deployment approach based novel framework for multi-criteria group decision making method. Eng Appl Artif Intell; 2021; 104, 104395. [DOI: https://dx.doi.org/10.1016/j.engappai.2021.104395]
Sun, Z; Anbarasan, M; Praveen Kumar, D. Design of online intelligent English teaching platform based on artificial intelligence techniques. Comput Intell; 2021; 37,
Villegas-Ch, W; Román-Cañizares, M; Palacios-Pacheco, X. Improvement of an online education model with the integration of machine learning and data analysis in an LMS. Appl Sci; 2020; 10,
Yang, C; Lin, JCW. Design of distance assistance system for intelligent education by web-based applications. Mob Netw Appl; 2022; 27,
Zhang B, Jia J (2017). Evaluating an intelligent tutoring system for personalized math teaching. In 2017 international symposium on educational technology (ISET) (pp. 126-130). IEEE
© The Author(s) 2024. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Abstract
In response to the COVID-19 pandemic, the digital strategy of education has become a key pedagogical approach for higher engineering disciplines. In this complex environment, outcome-oriented higher engineering disciplines face the challenge of educational equity in the curriculum design and learning outcome assessment of their online engineering practical courses. To solve this problem, this study develops a digital teaching assistant system to facilitate the reform of educational equity in higher engineering disciplines. The system's teaching and learning functions support teacher-student interaction during the teaching process and the visual display of learning outcomes, and its intelligent evaluation and analysis function applies a multi-criteria group decision-making model based on Quality Function Deployment (QFD) combined with the t-test to evaluate course learning outcomes. The results show that the academic performance achieved with this system is significantly better than with other similar systems. The statistical results of the teacher-student satisfaction survey of the two systems show that the teacher-student interaction and multi-dimensional learning evaluation analysis report functions provided by the system are the most highly recognized, and that the system also delivers better educational equity in resource recommendation, interaction and collaboration, learning achievement display, and learning achievement evaluation. In addition, the system is more in line with the teaching characteristics of higher engineering disciplines and can effectively help teachers of higher engineering disciplines improve teaching plans and cultivate students' disciplinary practicality, consequently creating favorable conditions for the establishment of teaching equity.
1 National Cheng Kung University, Department of Industrial Design, Tainan, Taiwan, ROC (GRID:grid.64523.36) (ISNI:0000 0004 0532 3255)
2 National Cheng Kung University, Department of Industrial Design, Tainan, Taiwan, ROC (GRID:grid.64523.36) (ISNI:0000 0004 0532 3255); Huaqiao University, College of Mechanical Engineering and Automation, Xiamen, China (GRID:grid.411404.4) (ISNI:0000 0000 8895 903X)
3 Quanzhou Normal University, Fine Art and Design College, Quanzhou, China (GRID:grid.449406.b) (ISNI:0000 0004 1757 7252)
4 Osaka University, Graduate School of Human Sciences, Osaka, Japan (GRID:grid.136593.b) (ISNI:0000 0004 0373 3971)
5 NingboTech University, Ningbo, China (GRID:grid.513221.6)