1. Introduction
The first attempt to create the European Higher Education Area (EHEA) was made in 1999, when 29 education ministers signed the “Bologna Declaration”. The EHEA currently comprises 48 countries, and its implementation has involved a major change in most universities: a move from a traditional, teacher-focused approach to a more student-centred one [1]. This has led to more dynamic classes, usually built around interactive tasks based on information and communication technologies (ICTs) [2]. In fact, some authors consider ICTs an essential element of 21st-century education [3,4]. Accordingly, universities usually operate virtual platforms, called learning management systems, which host all the necessary elements of each subject. In Spain, the most common one is based on Moodle [5]. According to [6], Moodle offers a series of functionalities grouped into two classes: resources (which include teaching materials: web pages, documents, presentations, etc.) and modules (which provide interaction between students and teachers: databases, assignments, forums, questionnaires, wikis, activities based on the HTML5 package (H5P), etc.) [7,8,9]. These modules, in turn, are related to different types of activities: creation, organization, delivery, communication, collaboration, and assessment [10]; in [11], the different Moodle modules are grouped into these activity classes.
Alongside the variety of Moodle modules, numerous virtual tools have also appeared in recent years (such as Kahoot! [12], Socrative [13], Quizizz [14], or Genially [15]) that allow classes to be gamified [16,17]. Gamification is defined as a methodology that uses typical game techniques to increase motivation, competitiveness, and effort [18,19]. It can be conducted individually (each student competes against the rest of their classmates) or cooperatively (in groups); in the latter case, beyond the fun and motivating dynamics that gamification creates, participation and interpersonal relationships are also encouraged, producing a more suitable environment for learning [20].
This change in educational philosophy implies a transformation not only of the teaching methodology, but also of the way learning is evaluated. The transition towards student-centred teaching involves a more active and participatory approach, in which students are seen as active agents in their own learning process. This shift requires assessments that go beyond simple memorization tests and truly reflect the deep understanding, practical application, and critical skills that students acquire throughout their education [21]. Furthermore, learner-centred education demands that students be actively involved in evaluating their own learning progress: they should have opportunities throughout a course to assess their acquisition of knowledge and skills and to provide peer feedback. Involving students directly in these assessment processes, whether through self-evaluations, peer reviews, or other methods, can promote higher-order thinking, improved metacognition, and collaborative learning [22]. Some of the alternatives analysed in the methodology proposed in this article involve students in self-assessment and in the evaluation of their peers.
For the lecturer, this abundance of alternatives often makes it difficult to decide which are the most appropriate, turning the choice into a decision problem. It should be remembered that any activity carried out in class must make sense for the subject, that is, it should encompass competencies and learning outcomes and encourage collaborative learning or the use of new technologies [23]. In addition, from the lecturer’s point of view, other aspects are also important, such as the complexity of preparing and/or grading the activity. Multicriteria decision-making (MCDM) methodologies are a branch of operational research that deals with finding the optimal solution in complex scenarios (those involving conflicting objectives and factors), making it possible to assess the different alternatives objectively and order them according to the criteria analysed [24]. These methodologies have become popular in recent years and have been used in very diverse fields, such as the selection of materials for optimal design [25], the parametric optimization of machining processes [26], the selection of green technologies for the rehabilitation of existing buildings [27], the optimal selection of wind power plant locations [28], and many other applications [29].
However, as far as the authors of this work know, MCDM has not been used in educational contexts. This paper therefore proposes SMART (Selection Model for Assessment Resources and Techniques), a methodology that aims to determine the best activities to perform in class by combining two MCDM methods: the analytic hierarchy process (AHP), to weight the evaluation criteria, and the technique for order of preference by similarity to ideal solution (TOPSIS), to rank the activity alternatives against those criteria. The rest of the document is organized as follows: Section 2 explains the methodology of the proposed model; the model evaluation is presented in Section 3; the results are analysed in Section 4 and Section 5; finally, Section 6 summarizes the main conclusions of the study.
2. Methodology
SMART aims to answer the following question: which assessment activities should be implemented in a class? At first glance the question seems simple, but it is actually a complex decision. To solve it, an optimized methodology is proposed that obtains the most appropriate assessment activities according to a set of defined criteria. Figure 1 shows an overview of SMART, comprising three phases: “Data”, “Analysis”, and “Results”. While the “Data” phase is the basis of the model, the “Analysis” and “Results” phases make up its optimization process. They are described in the following sections.
2.1. Data Phase
This phase structures the problem to be solved: all the information is collected and prepared for the optimization process. In general, it is necessary to know the education level, the modality, whether a virtual platform is used as the main medium for classes or only as support, etc. Based on these data, the criteria and alternatives are defined.
- The criteria are the most important indicators considered when evaluating the alternatives. They can be quantitative or qualitative and can be organized into main criteria (categories), subcriteria, etc. For example, the problem analysed in this paper involves several categories: students, teachers, the subject, and the activity in question. Each category deploys a series of indicators to be evaluated.
- The alternatives are the different options involved in the decision. In this model, they correspond to the different assessment activities under analysis.
To determine and define the criteria and alternatives effectively, the authors recommend consulting different people related to the educational context (e.g., teachers, counsellors, etc.) with extensive experience in the field. In this way, a more comprehensive set of criteria and alternatives can be considered.
2.2. Analysis Phase
First, the weights of the criteria are determined with the AHP method, and then the decision matrix is created. The authors recommend using AHP whenever the group of experts has extensive experience in the topic addressed (in this case, online teaching).
2.2.1. AHP
In 1980, Professor Saaty formulated the AHP [30], a method for structuring decision-making problems. Its hierarchy characterizes the method: the upper level represents the goal to be achieved, the intermediate levels incorporate the criteria and subcriteria linked to the model, and at the base are the alternatives to be evaluated. The fundamentals of the method are mathematical and psychological. Although it can be used to evaluate alternatives directly, in this work it is used to calculate the weights of the criteria, as detailed below:
1. Design the hierarchical model of the problem. The problem is modelled with a three-level hierarchical structure encompassing the goal or objective, the criteria, and the alternatives; see Figure 2.
2. Assignment and assessment of priorities. The objective of this step is to obtain the weights of the criteria from their evaluation. This can be performed by rating each criterion directly on a scale, or indirectly through pairwise comparisons, in which the relative priorities are collected in a matrix R. Each entry is a positive numeric value expressing the relative priority of the row criterion over the column criterion; see Table 1.
The complete mathematical procedure is as follows. The aim is to determine a priority vector, as shown in Equation (1):

$w = (w_1, w_2, \ldots, w_n)^T$ (1)

For this, Equation (2) is proposed, from which the matrix $W$ is obtained by assigning the weights ($w_i$) associated with the comparison of criteria ($a_{ij} = w_i / w_j$). The elements of the matrix are positive numbers:

$W = \begin{pmatrix} w_1/w_1 & w_1/w_2 & \cdots & w_1/w_n \\ w_2/w_1 & w_2/w_2 & \cdots & w_2/w_n \\ \vdots & \vdots & \ddots & \vdots \\ w_n/w_1 & w_n/w_2 & \cdots & w_n/w_n \end{pmatrix}$ (2)
A simplified way to state the above equation is according to Equation (3):

$W \cdot w = n \cdot w$ (3)
where, for row $i$, the sum of the elements is $\sum_{j=1}^{n} w_i/w_j$ and, for column $j$, the sum of the elements is $\sum_{i=1}^{n} w_i/w_j$. Once the matrix is normalized by columns (each element divided by its column sum), averaging across each row yields the vector $w$. This method exhibits the mathematical properties of reciprocity, homogeneity, and consistency [24]. Specifically, consistency is assessed using the consistency ratio ($CR$) indicator according to Equation (4), where $RI$ is the random consistency index (obtained from a simulation of 100,000 random reciprocal matrices [31]) and $CI = (\lambda_{\max} - n)/(n - 1)$ is the consistency index:

$CR = \dfrac{CI}{RI}$ (4)
Given the dimensions of the matrix considered here, the weights of the criteria are taken as valid if $CR \leq 0.10$. This threshold varies if the matrix has other dimensions.
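To make the procedure above concrete, the following minimal sketch (in Python with NumPy; the function names are ours, not from the paper) computes the weights by column normalization and row averaging and checks the consistency ratio of Equation (4). The RI values below are the commonly tabulated ones; the paper cites simulated values [31], which may differ slightly.

```python
import numpy as np

# Random consistency index (RI) for matrix sizes 1..10; commonly used
# tabulated values (the paper cites simulated values [31]).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_weights(R: np.ndarray) -> np.ndarray:
    """Approximate the priority vector w: normalize each column of the
    pairwise comparison matrix R, then average across rows (Eqs. (1)-(3))."""
    normalized = R / R.sum(axis=0)      # each column now sums to 1
    return normalized.mean(axis=1)      # row averages -> weights

def consistency_ratio(R: np.ndarray, w: np.ndarray) -> float:
    """CR = CI / RI with CI = (lambda_max - n) / (n - 1), Eq. (4)."""
    n = R.shape[0]
    lambda_max = ((R @ w) / w).mean()   # estimate of the principal eigenvalue
    ci = (lambda_max - n) / (n - 1)
    return ci / RI[n]
```

A pairwise comparison matrix is then accepted when `consistency_ratio(R, w) <= 0.10`, matching the threshold stated above.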
2.3. Decision Matrix
The decision matrix, the input database for the TOPSIS method, is the set of evaluations of each alternative with respect to the criteria; see Table 2 (a minimal array sketch follows this notation list), where:
- $A_i$: alternatives, $i = 1, 2, \ldots, m$;
- $C_j$: criteria, $j = 1, 2, \ldots, n$;
- $v_{ij}$: evaluation of alternative $A_i$ with respect to criterion $C_j$;
- $W$: vector of weights associated with the criteria, obtained according to Section 2.2.1.
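As a minimal illustration of this structure (not part of the original paper, and with illustrative sizes only), the decision matrix and weight vector can be held as plain NumPy arrays, with rows as alternatives and columns as criteria:

```python
import numpy as np

m, n = 10, 10                                        # m alternatives, n criteria
alternatives = [f"A{i}" for i in range(1, m + 1)]    # A1 ... A10
criteria = [f"C{j}" for j in range(1, n + 1)]        # C1 ... C10

# v[i, j] holds the evaluation of alternative i+1 under criterion j+1;
# it is filled in during the "Data" phase (Table 11 in the case study).
v = np.zeros((m, n))

# W holds the criteria weights produced by AHP (Section 2.2.1);
# placeholder values shown here.
W = np.full(n, 1.0 / n)
assert v.shape == (m, n) and W.shape == (n,)
```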
2.4. Results Phase: TOPSIS
Finally, the TOPSIS method is applied to obtain a ranking of the alternatives. With the classification of alternatives provided by SMART, the most appropriate assessment activities to perform in class are determined. The TOPSIS method, introduced in [32], is based on the definition of ideal and anti-ideal solutions for the selection of alternatives: the selected alternative should minimize the distance to the positive ideal solution and maximize the distance to the negative ideal solution.
Figure 3 shows a graphical representation of the method with five alternatives (A1–A5), two criteria (C1 and C2), and the plot of the ideal and anti-ideal points; it highlights the alternative closest to the ideal and the alternatives farthest from the anti-ideal. TOPSIS solves the problem by calculating the weighted distances to the ideal and anti-ideal for each alternative through a multivariate data analysis [33].
The TOPSIS method algorithm is as follows [24] (a sketch of steps 2–6 follows the list):
1. Construction of the decision matrix.
2. Normalization of the decision matrix.
3. Construction of the normalized weighted matrix.
4. Determination of the positive and negative ideal solutions.
5. Calculation of the distance of each alternative to the positive and negative ideal solutions.
6. Calculation of the relative proximity of each alternative to the positive ideal solution.
7. Ordering of the alternatives according to their relative proximity.
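The following compact sketch implements steps 2–6 under common assumptions (vector normalization and Euclidean distances); the function name and the `benefit` flags, which mark criteria to maximize versus minimize (the objective functions of Table 9), are our illustrative additions rather than the paper's notation.

```python
import numpy as np

def topsis(v: np.ndarray, W: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Return the relative proximity of each alternative to the positive
    ideal solution. v: (m, n) decision matrix; W: (n,) criteria weights;
    benefit: (n,) bool, True where the criterion is maximized."""
    # Step 2: vector normalization of each column
    norm = v / np.linalg.norm(v, axis=0)
    # Step 3: weighted normalized matrix
    weighted = norm * W
    # Step 4: positive ideal (best per criterion) and negative ideal (worst);
    # best/worst swap for cost (minimized) criteria
    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
    # Step 5: Euclidean distances to the ideal and anti-ideal
    d_pos = np.linalg.norm(weighted - ideal, axis=1)
    d_neg = np.linalg.norm(weighted - anti, axis=1)
    # Step 6: relative proximity in [0, 1]; 1 = coincides with the ideal
    return d_neg / (d_pos + d_neg)
```

Step 7 then reduces to sorting the returned proximities in descending order, e.g., `np.argsort(-proximity)`.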
3. Model Evaluation
As stated in Section 2, SMART consists of three phases: “Data”, “Analysis”, and “Results”. Each of these phases is evaluated below through a case study.
3.1. Data
The education level is university, specifically a technical subject in an official master’s degree designed under Royal Decree 861/2010 of 2 July [34], which determines the basic competencies that an official degree must develop according to its level (bachelor’s, master’s, or doctorate).
The educational modality is virtual, using version 4.2 of the Moodle learning platform, distributed as free software under a General Public Licence.
3.1.1. Criteria
Table 3 shows the selected criteria organized by category.
With respect to the subject category, criteria C1–C3 are linked to the classification used by the Ministry of Education in the University Register [35]. Criterion C1 (general and transversal competencies) refers to personal and interpersonal competencies; specific competencies (C2) are related to competencies of a training nature and the achievement of knowledge related to the master’s degree; the learning outcomes criterion (C3) encompasses a set of indicators that students are expected to understand, know, and be able to perform at the end of the subject [36]. Therefore, it is very positive if the number of learning outcomes and competencies (specific, general, and transversal) encompassed by an activity is maximized.
In the lecturer category, the criteria correspond to the complexity of designing the activity from a technological point of view (criterion C4) and the complexity of grading it (C5). This does not mean that activities with simple designs are better; quite the opposite. The aim is for teachers to be prepared to develop robust and interactive activities and to have the necessary tools for grading them, whether directly from the Moodle platform itself or by implementing new applications [37] capable of extracting and analysing the results. The objective is always to promote student learning.
The activity category includes criteria related to the use of innovative new technologies (C6), as is the case with H5P activities [38]; encouraging collaborative learning (C7) through group activities in which students not only share knowledge but also participate in the assessment process [39]; and the integration of the activity within the platform where the course is developed (C8), Moodle in this case.
Finally, the student category includes the degree of difficulty of performing the activity from a technological point of view (C9) and the feedback students receive from the teaching team (C10). It is very important to highlight the value of feedback for students, since mistakes are a valuable source of learning [40].
3.1.2. Alternatives
Ten assessment activities are designed, making up the proposed alternatives (A1–A10); see Table 4.
The different case study alternatives are described below:
- A1: Moodle assignment in which students, following the instructions specified in the assignment itself (activity content, format, structure, etc.), submit a report prepared individually. The activity has associated submission dates. To carry it out, students need advanced use of different programs: a word processor, a spreadsheet for statistical analysis and graph insertion, and the technical programs stipulated for the subject. The activity is graded with a points rubric, with individual feedback by section and general feedback for the entire activity.
- A2: Same type as the previous one, but carried out in groups. The lecturer forms the work groups beforehand, the activity content is much more extensive, and grading is performed per group, also with a rubric and with partial and overall feedback.
- A3: Moodle questionnaire. It consists of blocks of questions of different types (multiple choice, true/false, single choice, match options, small calculations, fill-in texts, etc.). Once it is completed, students can view their grade with brief feedback, as well as review their responses. The teaching staff prepares the questionnaire by randomly selecting questions from a question bank.
- A4: Moodle lesson. A group of pages with different types of information and associated questions of different modalities (essays, multiple choice, true/false, single choice, etc.). Movement between pages can follow different itineraries, depending on student responses. At the end of the lesson, the student sees brief feedback and their grade.
- A5: Moodle workshop. As in alternative A1, students submit a report according to the professor’s specifications. Grading is performed by the students themselves, using a rubric designed by the teacher: self-assessment weighs 20% and peer assessment 80%.
- A6: H5P activity. The H5P plugin is installed on the Moodle platform; therefore, both the activity and its grading are contained in the virtual classroom. It is composed of a multicolumn object made up of different types of activities: fill-in texts, dragging images or texts, selecting within an image, etc.; see Figure 4. Students receive their grade immediately after finishing the activity, and the questions can be reviewed and edited before ending it. Designing the activity required the teacher to learn the different objects used beforehand.
- A7: Moodle forum. A discussion on a subject topic: students can start a thread, and the rest intervene, constructively or otherwise, in the different threads. At the end of the activity, the teacher grades both types of intervention. The grade is reflected in the grade book using a rubric.
- A8: Moodle glossary. In a collaborative way, students prepare a list of definitions on a topic specified by the teacher. The teacher reflects the grade in the grade book.
- A9: Moodle database. Students create a technical data sheet for a certain technology, with the fields defined by the teacher. The teacher reflects the grade in the grade book.
- A10: Activity with Genially. The activity is not integrated into the Moodle platform but runs on an external website. It is an interactive Escape Room-type activity, carried out in groups and consisting of different tests, each of which provides a number; see Figure 5. When the groups have the full sequence of numbers, the Escape Room ends. The teacher reflects the grade in the grade book for the different groups, according to the time each group takes to solve the activity.
4. Analysis
Continuing with the methodology, the “Analysis” phase follows, in which the weights of the previously defined criteria are determined and the decision matrix is created.
4.1. Criteria Weights
Using the described sequence of the AHP method, the weights of the criteria are determined. A group of three experts (E1, E2, and E3) participates in the decision process. They independently carry out the pairwise assessment of each criterion. The expert matrices are shown in Table 5, Table 6 and Table 7. As an example, the assessment by expert 1 (E1) of the specific competencies criterion (C2) is detailed here; see Figure 6. It is moderately more important than learning outcomes (C3) and the degree of difficulty to perform the activity (C9); strongly more important than general and transversal competencies (C1); very strongly more important than the use of new technologies (C6) and encouraging collaborative learning (C7); and extremely more important than the complexity in preparing the activity (C4), grading the activity (C5), integration with the platform (C8), and feedback from the teaching team (C10).
The $CR$ indicator is calculated for each expert matrix, resulting in values that satisfy $CR \leq 0.10$ in all three cases. Therefore, the matrices are valid and the weights of the criteria are obtained; see Table 8.
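As an illustrative check, expert 1’s matrix (transcribed from Table 5) can be validated by reusing the `ahp_weights` and `consistency_ratio` sketches from Section 2.2.1. Note that the paper does not state how the three expert matrices are combined into the single weight vector of Table 8; the element-wise geometric mean shown in the final comment is a common convention and only an assumption here.

```python
import numpy as np

# Expert 1 (E1) pairwise comparison matrix, transcribed from Table 5.
E1 = np.array([
    [1,   1/5, 1/3, 3,   3,   3,   1/3, 3,   3,   1/5],
    [5,   1,   3,   9,   9,   7,   7,   9,   9,   3  ],
    [3,   1/3, 1,   9,   9,   5,   5,   5,   7,   3  ],
    [1/3, 1/9, 1/9, 1,   3,   1/5, 1/3, 1/3, 1/3, 1/7],
    [1/3, 1/9, 1/9, 1/3, 1,   1/3, 1/5, 1/5, 1/5, 1/7],
    [1/3, 1/7, 1/5, 5,   3,   1,   1/3, 3,   3,   1/3],
    [3,   1/7, 1/5, 3,   5,   3,   1,   3,   3,   1/3],
    [1/3, 1/9, 1/5, 3,   5,   1/3, 1/3, 1,   3,   1/3],
    [1/3, 1/9, 1/7, 3,   5,   1/3, 1/3, 1/3, 1,   1/5],
    [5,   1/3, 1/3, 7,   7,   3,   3,   3,   5,   1  ],
])

w1 = ahp_weights(E1)                 # E1's individual priority vector
cr1 = consistency_ratio(E1, w1)
print(f"CR(E1) = {cr1:.3f}")         # the paper reports CR <= 0.10 for all experts

# Assumed aggregation (not specified in the paper): element-wise geometric
# mean of the three expert matrices, then weights from the aggregated matrix.
# aggregated = (E1 * E2 * E3) ** (1 / 3)
# weights = ahp_weights(aggregated)
```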
The criteria with the largest individual weights are specific competencies (C2) and learning outcomes (C3), representing 33% and 24% of the total, respectively; see Figure 7.
4.2. Decision Matrix
The decision matrix, which cross-references the alternatives with the attributes of the criteria, is created according to the objective function and unit of each criterion, as shown in Table 9. Criteria C1–C3 are expressed as percentages of the total competencies (C1–C2) or learning outcomes (C3) covered by an activity. The remaining criteria are evaluated on a Likert scale [41], as shown in Table 10. Table 11 shows the resulting decision matrix, which will be used in the following phase (“Results”).
5. Results
In the “Results” phase, the alternatives are evaluated according to the TOPSIS method described in the Methodology section (Section 2.4). Based on the criteria weights (Table 8) and the decision matrix (Table 11) from the previous phase (“Analysis”; see Section 4), the matrix is normalized (Table 12) and the normalized weighted matrix is obtained. The relative proximity of each alternative to the positive ideal solution (R) is then calculated, as shown in Table 13.
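For reproducibility, the published inputs can be fed to the `topsis` sketch from Section 2.4: the weights of Table 8, the decision matrix of Table 11, and the objective directions of Table 9 (C4, C5, and C9 minimized). Since the published intermediate matrices are rounded, the recomputed proximities may differ slightly from Table 13, and closely spaced mid-ranking alternatives could swap positions.

```python
import numpy as np

# Criteria weights (Table 8) and objective directions (Table 9):
W = np.array([0.0747, 0.3127, 0.2295, 0.0204, 0.0164,
              0.0622, 0.0764, 0.0405, 0.0509, 0.0722])
benefit = np.array([True, True, True, False, False,
                    True, True, True, False, True])  # C4, C5, C9 minimized

# Decision matrix (Table 11); rows A1..A10, columns C1..C10:
v = np.array([
    [45, 70, 80, 4, 5, 2, 1, 5, 2, 5],
    [48, 80, 90, 5, 5, 2, 5, 5, 3, 5],
    [35, 50, 95, 5, 1, 3, 1, 5, 2, 4],
    [30, 45, 60, 5, 1, 4, 1, 5, 3, 4],
    [48, 55, 70, 5, 4, 3, 5, 5, 4, 5],
    [40, 60, 65, 5, 1, 5, 1, 5, 4, 4],
    [20, 25, 35, 2, 3, 2, 5, 5, 2, 3],
    [18, 20, 20, 2, 1, 2, 1, 5, 2, 2],
    [15, 20, 18, 2, 1, 2, 1, 5, 2, 2],
    [25, 45, 45, 5, 4, 5, 3, 1, 4, 4],
], dtype=float)

proximity = topsis(v, W, benefit)
ranking = np.argsort(-proximity)          # alternative indices, best first
print([f"A{i + 1}" for i in ranking])     # expected to lead with A2, per Figure 8
```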
The top five alternatives in the ranking obtained by SMART are the following: Assignment: Report (Group), A2 (R = 0.8628); Assignment: Report (Individual), A1 (0.7248); Workshop, A5 (0.6282); Complex H5P Activity, A6 (0.6127); and Questionnaire, A3 (0.6086), as shown in Figure 8. These five alternatives represent 72% of the ideal solution; therefore, it can be confirmed that they are the best assessment activities for fulfilling the objective functions of each criterion within a technical subject of an official master’s degree.
5.1. Discussion
The SMART methodology is useful for both educators and institutions.
The key implications of this methodology for educators are as follows:
- Understanding which activities are optimal for assessment based on the specific criteria and educational context. In this particular case study, report assignments (group and individual), workshops, complex H5P activities, and questionnaires were prioritized. However, depending on the criteria and educational context, traditional methods could also be suitable.
- Understanding the tradeoffs between different activity features based on the criteria evaluations.
- Potentially saving preparation and grading time by replacing less optimal activities identified by the model.
The key implications of this methodology for institutions are the following:
- Using ranked activity data to provide teachers/lecturers with standardized recommendations or resources for assessments.
- Allocating educational technology budgets based on the activity ranking given by the model (e.g., tools for creating simulations).
- Establishing faculty training priorities around highly ranked activities, if skill gaps exist.
5.2. Limitations of the Study and Future Works
The focus of this study is to support teachers/lecturers in choosing which assessment activity to prepare. However, some points could also be considered in the future:
- Even though all the activities under analysis were carried out throughout the described course, it would be interesting to confirm whether implementing the top five ranked alternatives improves students’ grades.
- Only instructor and activity factors were evaluated. Students’ preferences and perspectives could be incorporated into the SMART methodology by including student-related criteria in the evaluation set, or by conducting a student survey to identify highly rated or preferred activities and using these as the alternatives for the SMART method.
- Analysing the results using other combinations of MCDM techniques. For instance, the criteria could be weighted with entropy (an objective method), the analytic network process (ANP), or the best worst method (BWM). Moreover, VIseKriterijumska Optimizacija I Kompromisno Resenje (multicriteria optimization and compromise solution, VIKOR) and similar methods could rank the alternatives, and the results could be compared with ÉLimination Et Choix Traduisant la REalité (elimination and choice translating reality, ELECTRE) to eliminate the least favourable options.
6. Conclusions
The European Higher Education Area has promoted major changes in university teaching so that classes follow a more student-centred approach. Thus, teaching staff have begun to generate interactive content, taking advantage of new technologies, either through the modules included in virtual platforms (like Moodle, in the Spanish case) or through external tools. The difficulty for the lecturer then lies in determining which activity is the most appropriate or preferable to use. SMART aims to solve this problem by analysing different university-level activities according to various criteria (related to the subject, lecturer, activity, and student) using the AHP and TOPSIS multicriteria decision-making techniques. According to SMART, and with the activities and criteria considered, the five best activities are submission tasks such as reports (group and individual), workshops, complex H5P activities, and questionnaires. While SMART was originally designed for a technical master’s subject, its applicability extends to other modalities and educational levels. However, it is essential to emphasize that, while the methodology offers valuable insights into activity selection, the specific findings may not be universally applicable: the ranking process should be conducted independently for each educational scenario to ensure optimized, context-specific results, considering the unique criteria and needs of each case.
Author Contributions: Conceptualization, I.C.G.-G. and A.F.-G.; methodology, I.C.G.-G.; software, I.C.G.-G.; validation, A.F.-G.; investigation, I.C.G.-G.; resources, I.C.G.-G. and A.F.-G.; writing—original draft preparation, I.C.G.-G.; writing—review and editing, A.F.-G.; visualization, I.C.G.-G.; supervision, A.F.-G. All authors have read and agreed to the published version of the manuscript.
Not applicable.
Not applicable.
Data Availability Statement: Data are contained within the article.
Conflicts of Interest: The authors declare no conflicts of interest.
The following abbreviations are used in this manuscript:
AHP | Analytic Hierarchy Process |
ANP | Analytic Network Process |
BWM | Best Worst Method |
EHEA | European Higher Education Area |
ELECTRE | ÉLimination Et Choix Traduisant la REalité |
ICTs | Information and Communication Technologies |
MCDM | Multicriteria Decision Making |
SMART | Selection Model for Assessment Resources and Techniques |
TOPSIS | Technique for Order of Preference by Similarity to Ideal Solution |
VIKOR | VIseKriterijumska Optimizacija I Kompromisno Resenje |
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 3. Ideal and anti-ideal alternatives. Source: own elaboration based on [33].
Figure 4. Alternative A6. Activity: H5P. Source: own elaboration.
Figure 5. Alternative A10. Activity: Escape Room. Source: own elaboration.
Figure 6. Expert 1 (E1) evaluation of the specific competencies criterion (C2) with respect to the rest of the criteria. Source: own elaboration.
Fundamental paired comparison scale proposed by Saaty. Source: own elaboration based on [30].
Scale | Verbal Scale | Explanation |
---|---|---|
1 | Equal importance | Two criteria contribute equally to the objective |
3 | Moderate importance | Experience and judgement favour one criterion over another |
5 | Strong importance | One criterion is strongly favoured |
7 | Very strong importance | One criterion is very dominant |
9 | Extreme importance | One criterion is favoured by at least one order of magnitude of difference |
Structure of the decision matrix. Source: own elaboration based on [24].
| | $C_1$ | ⋯ | $C_j$ | ⋯ | $C_n$ |
---|---|---|---|---|---|
$W$ | $w_1$ | ⋯ | $w_j$ | ⋯ | $w_n$ |
$A_1$ | $v_{11}$ | ⋯ | $v_{1j}$ | ⋯ | $v_{1n}$ |
$A_2$ | $v_{21}$ | ⋯ | $v_{2j}$ | ⋯ | $v_{2n}$ |
⋮ | ⋮ | ⋱ | ⋮ | ⋱ | ⋮ |
$A_i$ | $v_{i1}$ | ⋯ | $v_{ij}$ | ⋯ | $v_{in}$ |
⋮ | ⋮ | ⋱ | ⋮ | ⋱ | ⋮ |
$A_m$ | $v_{m1}$ | ⋯ | $v_{mj}$ | ⋯ | $v_{mn}$ |
Criteria organized by category. Case study. Source: own elaboration.
Category | | Criterion |
---|---|---|
Subject | C1 | General and transversal competencies |
 | C2 | Specific competencies |
 | C3 | Learning outcomes |
Lecturer | C4 | Complexity in preparing the activity |
 | C5 | Complexity in grading the activity |
Activity | C6 | Use of new technologies |
 | C7 | Encourages collaborative learning |
 | C8 | Integration with the platform |
Student | C9 | Degree of difficulty to perform the activity |
 | C10 | Feedback from the teaching team |
Alternatives. Case study. Source: own elaboration.
 | Activities |
---|---|
A1 | Assignment: Report (Individual) |
A2 | Assignment: Report (Group) |
A3 | Questionnaire |
A4 | Lesson |
A5 | Workshop |
A6 | Complex H5P Activity |
A7 | Forums |
A8 | Glossary |
A9 | Databases |
A10 | Genially: Escape Room case |
Expert 1 (E1) pairwise comparison matrix. Source: own elaboration.
 | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 |
---|---|---|---|---|---|---|---|---|---|---|
C1 | 1 | 1/5 | 1/3 | 3 | 3 | 3 | 1/3 | 3 | 3 | 1/5 |
C2 | 5 | 1 | 3 | 9 | 9 | 7 | 7 | 9 | 9 | 3 |
C3 | 3 | 1/3 | 1 | 9 | 9 | 5 | 5 | 5 | 7 | 3 |
C4 | 1/3 | 1/9 | 1/9 | 1 | 3 | 1/5 | 1/3 | 1/3 | 1/3 | 1/7 |
C5 | 1/3 | 1/9 | 1/9 | 1/3 | 1 | 1/3 | 1/5 | 1/5 | 1/5 | 1/7 |
C6 | 1/3 | 1/7 | 1/5 | 5 | 3 | 1 | 1/3 | 3 | 3 | 1/3 |
C7 | 3 | 1/7 | 1/5 | 3 | 5 | 3 | 1 | 3 | 3 | 1/3 |
C8 | 1/3 | 1/9 | 1/5 | 3 | 5 | 1/3 | 1/3 | 1 | 3 | 1/3 |
C9 | 1/3 | 1/9 | 1/7 | 3 | 5 | 1/3 | 1/3 | 1/3 | 1 | 1/5 |
C10 | 5 | 1/3 | 1/3 | 7 | 7 | 3 | 3 | 3 | 5 | 1 |
Expert 2 (E2) pairwise comparison matrix. Source: own elaboration.
 | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 |
---|---|---|---|---|---|---|---|---|---|---|
C1 | 1 | 1/7 | 1/7 | 3 | 3 | 1/3 | 1/3 | 3 | 1/3 | 1/3 |
C2 | 7 | 1 | 3 | 9 | 9 | 5 | 5 | 7 | 3 | 3 |
C3 | 7 | 1/3 | 1 | 9 | 9 | 5 | 5 | 7 | 3 | 3 |
C4 | 1/3 | 1/9 | 1/9 | 1 | 3 | 1/5 | 1/5 | 1/3 | 1/5 | 1/5 |
C5 | 1/3 | 1/9 | 1/9 | 1/3 | 1 | 1/5 | 1/3 | 1/3 | 1/5 | 1/3 |
C6 | 3 | 1/5 | 1/5 | 5 | 5 | 1 | 1/3 | 3 | 1/3 | 1/5 |
C7 | 3 | 1/5 | 1/5 | 5 | 3 | 3 | 1 | 3 | 1/3 | 3 |
C8 | 1/3 | 1/7 | 1/7 | 3 | 3 | 1/3 | 1/3 | 1 | 1/3 | 1/3 |
C9 | 3 | 1/3 | 1/3 | 5 | 5 | 3 | 3 | 3 | 1 | 3 |
C10 | 3 | 1/3 | 1/3 | 5 | 3 | 5 | 1/3 | 3 | 1/3 | 1 |
Expert 3 (E3) pairwise comparison matrix. Source: own elaboration.
 | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 |
---|---|---|---|---|---|---|---|---|---|---|
C1 | 1 | 1/3 | 1/3 | 9 | 9 | 5 | 3 | 3 | 5 | 5 |
C2 | 3 | 1 | 3 | 9 | 9 | 9 | 7 | 7 | 9 | 3 |
C3 | 3 | 1/3 | 1 | 9 | 9 | 7 | 5 | 5 | 7 | 5 |
C4 | 1/9 | 1/9 | 1/9 | 1 | 3 | 1/5 | 1/3 | 1/3 | 1/3 | 1/3 |
C5 | 1/9 | 1/9 | 1/9 | 1/3 | 1 | 1/5 | 1/3 | 1/3 | 1/3 | 1/3 |
C6 | 1/5 | 1/9 | 1/7 | 5 | 5 | 1 | 3 | 3 | 5 | 3 |
C7 | 1/3 | 1/7 | 1/5 | 3 | 3 | 1/3 | 1 | 3 | 3 | 3 |
C8 | 1/3 | 1/7 | 1/5 | 3 | 3 | 1/3 | 1/3 | 1 | 3 | 3 |
C9 | 1/5 | 1/9 | 1/7 | 3 | 3 | 1/5 | 1/3 | 1/3 | 1 | 3 |
C10 | 1/5 | 1/5 | 1/5 | 3 | 3 | 1/3 | 1/3 | 1/3 | 1/3 | 1 |
Criteria weights. Source: own elaboration.
Criterion | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 |
---|---|---|---|---|---|---|---|---|---|---|
Weight | 0.0747 | 0.3127 | 0.2295 | 0.0204 | 0.0164 | 0.0622 | 0.0764 | 0.0405 | 0.0509 | 0.0722 |
Units and objective function of decision matrix attributes. Maximize (↑). Minimize (↓). Source: own elaboration.
Criterion | Unit | Objective Function |
---|---|---|
C1 | % | Maximize (↑) |
C2 | % | Maximize (↑) |
C3 | % | Maximize (↑) |
C4 | Likert | Minimize (↓) |
C5 | Likert | Minimize (↓) |
C6 | Likert | Maximize (↑) |
C7 | Likert | Maximize (↑) |
C8 | Likert | Maximize (↑) |
C9 | Likert | Minimize (↓) |
C10 | Likert | Maximize (↑) |
Likert scale. Source: own elaboration based on [41].
Value | Description |
---|---|
1 | Very little |
2 | Little |
3 | Moderately sufficient |
4 | Sufficient |
5 | A lot |
Decision matrix. Source: own elaboration.
 | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 |
---|---|---|---|---|---|---|---|---|---|---|
A1 | 45 | 70 | 80 | 4 | 5 | 2 | 1 | 5 | 2 | 5 |
A2 | 48 | 80 | 90 | 5 | 5 | 2 | 5 | 5 | 3 | 5 |
A3 | 35 | 50 | 95 | 5 | 1 | 3 | 1 | 5 | 2 | 4 |
A4 | 30 | 45 | 60 | 5 | 1 | 4 | 1 | 5 | 3 | 4 |
A5 | 48 | 55 | 70 | 5 | 4 | 3 | 5 | 5 | 4 | 5 |
A6 | 40 | 60 | 65 | 5 | 1 | 5 | 1 | 5 | 4 | 4 |
A7 | 20 | 25 | 35 | 2 | 3 | 2 | 5 | 5 | 2 | 3 |
A8 | 18 | 20 | 20 | 2 | 1 | 2 | 1 | 5 | 2 | 2 |
A9 | 15 | 20 | 18 | 2 | 1 | 2 | 1 | 5 | 2 | 2 |
A10 | 25 | 45 | 45 | 5 | 4 | 5 | 3 | 1 | 4 | 4 |
Normalized matrix. Source: own elaboration.
 | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 |
---|---|---|---|---|---|---|---|---|---|---|
A1 | 0.427 | 0.457 | 0.411 | 0.328 | 0.562 | 0.231 | 0.112 | 0.353 | 0.246 | 0.429 |
A2 | 0.456 | 0.522 | 0.462 | 0.410 | 0.562 | 0.231 | 0.559 | 0.353 | 0.369 | 0.429 |
A3 | 0.332 | 0.326 | 0.488 | 0.410 | 0.113 | 0.346 | 0.112 | 0.353 | 0.246 | 0.343 |
A4 | 0.285 | 0.294 | 0.308 | 0.410 | 0.113 | 0.462 | 0.112 | 0.353 | 0.369 | 0.343 |
A5 | 0.456 | 0.359 | 0.359 | 0.410 | 0.450 | 0.346 | 0.559 | 0.353 | 0.492 | 0.429 |
A6 | 0.380 | 0.392 | 0.334 | 0.410 | 0.113 | 0.577 | 0.112 | 0.353 | 0.492 | 0.343 |
A7 | 0.190 | 0.164 | 0.180 | 0.164 | 0.337 | 0.231 | 0.559 | 0.353 | 0.246 | 0.257 |
A8 | 0.171 | 0.131 | 0.103 | 0.164 | 0.113 | 0.231 | 0.112 | 0.353 | 0.246 | 0.171 |
A9 | 0.142 | 0.131 | 0.093 | 0.164 | 0.113 | 0.231 | 0.112 | 0.353 | 0.246 | 0.171 |
A10 | 0.237 | 0.294 | 0.231 | 0.410 | 0.450 | 0.577 | 0.335 | 0.070 | 0.492 | 0.343 |
Normalized weighted matrix, distances to the ideal (d+) and anti-ideal (d−) solutions, and relative proximity (R). Source: own elaboration.
 | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 | d+ | d− | R |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
A1 | 0.032 | 0.143 | 0.094 | 0.007 | 0.009 | 0.014 | 0.009 | 0.014 | 0.013 | 0.031 | 0.0493 | 0.1298 | 0.7248 |
A2 | 0.034 | 0.163 | 0.106 | 0.008 | 0.009 | 0.014 | 0.043 | 0.014 | 0.019 | 0.031 | 0.0249 | 0.1563 | 0.8628 |
A3 | 0.025 | 0.102 | 0.112 | 0.008 | 0.002 | 0.022 | 0.008 | 0.014 | 0.013 | 0.025 | 0.0726 | 0.1129 | 0.6086 |
A4 | 0.021 | 0.092 | 0.071 | 0.008 | 0.002 | 0.029 | 0.008 | 0.014 | 0.019 | 0.025 | 0.0911 | 0.0759 | 0.4545 |
A5 | 0.034 | 0.112 | 0.083 | 0.008 | 0.007 | 0.022 | 0.043 | 0.014 | 0.025 | 0.031 | 0.0624 | 0.1054 | 0.6282 |
A6 | 0.028 | 0.123 | 0.077 | 0.008 | 0.002 | 0.036 | 0.008 | 0.014 | 0.025 | 0.025 | 0.0659 | 0.1042 | 0.6127 |
A7 | 0.014 | 0.051 | 0.041 | 0.003 | 0.006 | 0.014 | 0.043 | 0.014 | 0.013 | 0.019 | 0.1365 | 0.0453 | 0.2491 |
A8 | 0.013 | 0.041 | 0.024 | 0.003 | 0.002 | 0.014 | 0.008 | 0.014 | 0.013 | 0.013 | 0.1589 | 0.0194 | 0.1090 |
A9 | 0.011 | 0.041 | 0.021 | 0.003 | 0.002 | 0.014 | 0.008 | 0.014 | 0.013 | 0.013 | 0.1605 | 0.0192 | 0.1067 |
A10 | 0.018 | 0.092 | 0.053 | 0.008 | 0.007 | 0.036 | 0.026 | 0.003 | 0.025 | 0.023 | 0.0976 | 0.0677 | 0.4096 |
References
1. Fernández-Guillamón, A.; Molina-García, Á. Comparativa de herramientas de gamificación de acceso libre: Aplicación en asignatura de Grados en Ingenierías industriales. Innovación Docente e Investigación en Ciencias, Ingeniería y Arquitectura; Dykinson: Madrid, Spain, 2019; pp. 783-800.
2. Abdel-Aziz, A.A.; Abdel-Salam, H.; El-Sayad, Z. The role of ICTs in creating the new social public place of the digital era. Alex. Eng. J.; 2016; 55, pp. 487-493. [DOI: https://dx.doi.org/10.1016/j.aej.2015.12.019]
3. Chiappe, A. Trends in Digital Educational Content in Latin America; Universidad de La Sabana: Chía, Colombia, 2016.
4. López-Gorozabel, O.; Cedeño-Palma, E.; Pinargote-Ortega, J.; Zambrano-Romero, W.; Pazmiño-Campuzano, M. Bootstrap as a tool for web development and graphic optimization on mobile devices. XV Multidisciplinary International Congress on Science and Technology; Springer: Cham, Switzerland, 2020; pp. 290-302.
5. Moodle: A Free Open Source Learning Platform or Course Management System. 2002; Available online: https://moodle.org/ (accessed on 13 October 2023).
6. Blin, F.; Munro, M. Why hasn’t technology disrupted academics’ teaching practices? Understanding resistance to change through the lens of activity theory. Comput. Educ.; 2008; 50, pp. 475-490. [DOI: https://dx.doi.org/10.1016/j.compedu.2007.09.017]
7. Ashrafi, A.; Zareravasan, A.; Rabiee Savoji, S.; Amani, M. Exploring factors influencing students’ continuance intention to use the learning management system (LMS): A multi-perspective framework. Interact. Learn. Environ.; 2022; 30, pp. 1475-1497. [DOI: https://dx.doi.org/10.1080/10494820.2020.1734028]
8. Chichernea, V. Campus information systems for enhancing quality and performance in a smart city high education environment. Conference Proceedings of «eLearning and Software for Education» (eLSE); Carol I National Defence University Publishing House: Bucharest, Romania, 2016; Volume 12, pp. 50-56.
9. Fuentes Pardo, J.M.; Ramírez Gómez, Á.; García García, A.I.; Ayuga Téllez, F. Web-based education in Spanish Universities. A Comparison of Open Source E-Learning Platforms. J. Syst. Cybern. Inform.; 2012; 10, pp. 47-53.
10. Piotrowski, M. What is an e-learning platform?. Learning Management System Technologies and Software Solutions for Online Teaching: Tools and Applications; IGI Global: Hershey, PA, USA, 2010; pp. 20-36.
11. Costa, C.; Alvelos, H.; Teixeira, L. The Use of Moodle e-learning Platform: A Study in a Portuguese University. Proceedings of the 4th Conference of ENTERprise Information Systems—Aligning Technology, Organizations and People (CENTERIS 2012); Algarve, Portugal, 3–5 October 2012; Volume 5, pp. 334-343. [DOI: https://dx.doi.org/10.1016/j.protcy.2012.09.037]
12. Versvik, M.; Brand, J.; Brooker, J. Kahoot. 2012; Available online: https://kahoot.com/ (accessed on 10 October 2023).
13. Socrative—Showbie Inc. 2010; Available online: https://www.socrative.com/ (accessed on 10 October 2023).
14. Quizizz Inc. 2015; Available online: https://quizizz.com/ (accessed on 10 October 2023).
15. Genially Inc. 2015; Available online: https://app.genial.ly/ (accessed on 10 October 2023).
16. Fernández-Guillamón, A.; Molina-García, Á. Simulation of variable speed wind turbines based on open-source solutions: Application to bachelor and master degrees. Int. J. Electr. Eng. Educ.; 2021; [DOI: https://dx.doi.org/10.1177/0020720920980974]
17. Valda Sanchez, F.; Arteaga Rivero, C. Diseño e implementación de una estrategia de gamificacion en una plataforma virtual de educación. Fides Ratio-Rev. Difus. Cult. Cient. Univ. Salle Boliv.; 2015; 9, pp. 65-80.
18. Hamari, J.; Koivisto, J.; Sarsa, H. Does Gamification Work?—A Literature Review of Empirical Studies on Gamification. Proceedings of the 2014 47th Hawaii International Conference on System Sciences; Waikoloa, HI, USA, 6–9 January 2014; pp. 3025-3034. [DOI: https://dx.doi.org/10.1109/HICSS.2014.377]
19. Oliva, H.A. La gamificación como estrategia metodológica en el contexto educativo universitario. Real. Reflex.; 2016; 44, pp. 108-118. [DOI: https://dx.doi.org/10.5377/ryr.v44i0.3563]
20. Mohamad, J.R.J.; Farray, D.; Limiñana, C.M.; Ramírez, A.S.; Suárez, F.; Ponce, E.R.; Bonnet, A.S.; Iruzubieta, C.J.C. Comparación de dos herramientas de gamificación para el aprendizaje en la docencia universitaria. V Jornadas Iberoamericanas de Innovación Educativa en el ámbito de las TIC y las TAC: InnoEducaTIC 2018, Las Palmas de Gran Canaria, 15 y 16 de noviembre de 2018; Universidad de Las Palmas de Gran Canaria: Las Palmas, Spain, 2018; pp. 199-203.
21. Krahenbuhl, K.S. Student-centered education and constructivism: Challenges, concerns, and clarity for teachers. Clear. House J. Educ. Strateg. Issues Ideas; 2016; 89, pp. 97-105. [DOI: https://dx.doi.org/10.1080/00098655.2016.1191311]
22. Deneen, C.C.; Hoo, H.T. Connecting teacher and student assessment literacy with self-evaluation and peer feedback. Assess. Eval. High. Educ.; 2023; 48, pp. 214-226. [DOI: https://dx.doi.org/10.1080/02602938.2021.1967284]
23. Vinent, M.E.S. Del proceso de enseñanza aprendizaje tradicional, al proceso de enseñanza aprendizaje para la formación de competencias, en los estudiantes de la enseñanza básica, media superior y superior. Cuad. Educ. Desarro.; 2009.
24. García Cascales, M.S. Métodos Para la Comparación de Alternativas Mediante un Sistema de Ayuda a la Decisión SAD y “Soft Computing”. Ph.D. Thesis; Universidad Politécnica de Cartagena: Cartagena, Spain, 2009.
25. Emovon, I.; Oghenenyerovwho, O.S. Application of MCDM method in material selection for optimal design: A review. Results Mater.; 2020; 7, 100115. [DOI: https://dx.doi.org/10.1016/j.rinma.2020.100115]
26. Chakraborty, S.; Chakraborty, S. A scoping review on the applications of MCDM techniques for parametric optimization of machining processes. Arch. Comput. Methods Eng.; 2022; 29, pp. 4165-4186. [DOI: https://dx.doi.org/10.1007/s11831-022-09731-w]
27. Si, J.; Marjanovic-Halburd, L.; Nasiri, F.; Bell, S. Assessment of building-integrated green technologies: A review and case study on applications of Multi-Criteria Decision Making (MCDM) method. Sustain. Cities Soc.; 2016; 27, pp. 106-115. [DOI: https://dx.doi.org/10.1016/j.scs.2016.06.013]
28. Gil-García, I.C.; Ramos-Escudero, A.; García-Cascales, M.S.; Dagher, H.; Molina-García, A. Fuzzy GIS-based MCDM solution for the optimal offshore wind site selection: The Gulf of Maine case. Renew. Energy; 2022; 183, pp. 130-147. [DOI: https://dx.doi.org/10.1016/j.renene.2021.10.058]
29. Toloie-Eshlaghy, A.; Homayonfar, M. MCDM methodologies and applications: A literature review from 1999 to 2009. Res. J. Int. Stud.; 2011; 21, pp. 86-137.
30. Saaty, T.L. What Is the Analytic Hierarchy Process?; Springer: Berlin/Heidelberg, Germany, 1988.
31. Aguarón, J.; Moreno-Jiménez, J.M. The geometric consistency index: Approximated thresholds. Eur. J. Oper. Res.; 2003; 147, pp. 137-145. [DOI: https://dx.doi.org/10.1016/S0377-2217(02)00255-2]
32. Hwang, C.L.; Yoon, K. Methods for multiple attribute decision making. Multiple Attribute Decision Making; Lecture Notes in Economics and Mathematical Systems; Springer: Berlin/Heidelberg, Germany, 1981; pp. 58-191.
33. Dasarathy, B. SMART: Similarity Measure Anchored Ranking Technique for the Analysis of Multidimensional Data Arrays. IEEE Trans. Syst. Man Cybern.; 1976; SMC-6, pp. 708-711. [DOI: https://dx.doi.org/10.1109/TSMC.1976.4309424]
34. Real Decreto 861/2010, de 2 de julio, por el que se modifica el Real Decreto 1393/2007, de 29 de octubre, por el que se establece la ordenación de las enseñanzas universitarias oficiales. Boletín Oficial del Estado: Madrid, Spain, 2010.
35. Ministry of Education. Registro de Universidades, Centros y Títulos (RUCT) [University Register]; Gobierno de España: Madrid, Spain.
36. Mohammad-Davoudi, A.H.; Parpouchi, A. Relation between team motivation, enjoyment, and cooperation and learning results in learning area based on team-based learning among students of Tehran University of medical science. Procedia-Soc. Behav. Sci.; 2016; 230, pp. 184-189. [DOI: https://dx.doi.org/10.1016/j.sbspro.2016.09.023]
37. Marticorena-Sánchez, R.; López-Nozal, C.; Ji, Y.P.; Pardo-Aguilar, C.; Arnaiz-González, Á. UBUMonitor: An open-source desktop application for visual E-learning analysis with Moodle. Electronics; 2022; 11, 954. [DOI: https://dx.doi.org/10.3390/electronics11060954]
38. Gil-García, I.C.; Fernández-Guillamón, A.; García-Cascales, M.S.; Molina-García, Á. Virtual campus environments: A comparison between interactive H5P and traditional online activities in master teaching. Comput. Appl. Eng. Educ.; 2023; 31, pp. 1648-1661. [DOI: https://dx.doi.org/10.1002/cae.22665]
39. Sánchez-González, A. Peer assessment between students of energy in buildings to enhance learning and collaboration. Adv. Build. Educ.; 2021; 5, pp. 23-38. [DOI: https://dx.doi.org/10.20868/abe.2021.1.4567]
40. de la Vega, I.N. Una aproximación al concepto de evaluación para el aprendizaje y al feedback con función reguladora a partir de los diarios docentes. J. Neuroeduc.; 2022; 3, pp. 69-89. [DOI: https://dx.doi.org/10.1344/joned.v3i1.39642]
41. Luna, S.M.M. Manual práctico para el diseño de la Escala Likert. Xihmai; 2007; 2, [DOI: https://dx.doi.org/10.37646/xihmai.v2i4.101]
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
The European Higher Education Area has ushered in a significant shift in university teaching, aiming to engage students more actively in classes. Professors have leveraged virtual platforms and external tools to introduce interactive tasks. With the proliferation of technology, educators face a challenge in choosing the most suitable approach. This paper presents SMART (Selection Model for Assessment Resources and Techniques), a methodology that determines the optimal assessment activities for university-level education. The methodology employs multicriteria decision-making techniques, specifically AHP and TOPSIS methods, to optimize activities based on various subject-, lecturer-, activity-, and student-related criteria. According to SMART, the top five assessment tasks are group and individual report submissions, workshops, complex H5P activities, and questionnaires. Therefore, it is advisable to prioritize these activities based on the methodology’s results, emphasizing their importance over other assessment methods.
1 Faculty of Engineering, Distance University of Madrid (UDIMA), C/Coruña, km 38500, Collado Villalba, 28400 Madrid, Spain;
2 Department of Applied Mechanics and Projects Engineering & Renewable Energy Research Institute, Universidad de Castilla–La Mancha, 02071 Albacete, Spain