1. Introduction
Over the past three decades, the field of educational technology has expanded significantly and continually promised to transform education [1]. However, the “wow factor” associated with new technologies (e.g., radio, CD-ROMs, interactive whiteboards, virtual/augmented reality) often overshadows the actual needs of learners and leads to their uncritical acceptance or “normalization” [2]. Some digital technologies have even become as ubiquitous as traditional tools such as pens and paper [2]. Scholars have also begun to consider how the “normalization” of technology impacts both students and teachers.
Artificial intelligence (AI), which has existed for many years, has recently experienced a resurgence in popularity and interest due to the emergence of generative AI tools, such as ChatGPT. It has the potential to reshape teaching and learning once again [3,4] by optimizing face-to-face, blended, and online learning [5,6]. AI can retrieve large amounts of data from various sources, identify patterns, and cluster/predict these patterns; this constitutes its “intelligence”. Furthermore, software engineers deploy these patterns to perform human-like actions, which makes it “artificial”. AI-powered tools can assist educators in identifying and utilizing effective pedagogies based on learning data, generating teaching materials and assessments, and issuing grades and feedback automatically [7].
In education, the term “learning analytics” (LA) is typically used to describe the use of data to inform teaching. LA can be defined as the “measurement, collection, analysis, and reporting of data about learners and their contexts, for the purpose of understanding and optimizing learning and the environment in which it occurs” [8]. Using data to produce actionable insights has become a key goal of utilizing AI in education.
Nevertheless, the integration of AI and LA programs in education raises significant concerns related to the use of educational data. It prompts questions about the “normalization” of technology in education and its impact on culture and values [9]. Previous studies have explored privacy concerns related to learning data, specifically considering students’ perspectives. Ifenthaler and Schumacher [10], for example, found that students are not willing to share their personal information or the records of their behaviour online. Other studies have investigated how to respect privacy while deploying educational technologies and LA [11]. While this research offers important insights, studies on educational technology have yet to catch up with general AI research and the theory of “data colonialism”, which highlights the problematic nature of the massive retrieval and capture of data.
The concept of “data colonialism” was introduced by Nick Couldry and Ulises Mejias, two scholars in Communications and Media Studies, and expounded in various publications, such as [12,13]. Working with other scholars (e.g., [14]), they identified similarities between colonialism and the data extraction practices of recent years. Under colonialism, natural resources were considered “free” to take and appropriate, which was supposed to bring about a new social order and a better world. Similarly, “data colonialism” treats user data as a natural resource, justifying the process by introducing new social relations and ideologies.
In a macro sense, Couldry and Mejias [12] draw on examples from major technology companies, such as Facebook and Amazon, which retrieve and privatize transaction data to connect user behaviours with personal attributes (i.e., social relations) and promote a more “personalized” purchasing experience (i.e., “a better world”). This leads to significant financial gains for these corporations and can convince customers to offer up more of their data (i.e., “free” resources). Even though critics such as Mumford [15] see this as a matter of data ownership, which could be addressed through regulations, the concept of data colonialism explains how companies are using seemingly “free” data for economic benefits.
Unfortunately, educational entities are not immune to such practices. Zembylas [16] has attempted to further contextualize how AI and LA can introduce data colonialism into higher education. In the context of educational technology, learning data are commonly used to generate value (though not always profit) for institutions and promote personalized learning experiences. Moreover, users are not always aware that their data have been appropriated. Thus, the concept of data colonialism in this context deserves further exploration.
Even though data colonialism is an important notion that has raised concerns in the academic community, to date, only a few conceptual discussions (such as [12,14,17,18]) have emerged. Little research has put data colonialism in context or examined how data are being appropriated. This study aims to provide a preliminary review of the realization of data colonialism in the field of educational technology. It is not intended to provide a comprehensive and in-depth synthesis but a general overview.
In addition, this preliminary review will not immediately provide solutions or identify how to “decolonize” educational technology. However, in response to Zembylas [16], it represents the first step of this process. By providing context and evidence, it can initiate a conversation about adopting “decolonized” practices in educational technology.
2. Methodology
This study reviewed the existing body of literature on data colonialism by examining articles from impactful journals on educational technology. After selecting the target journals, we conducted an initial search to identify articles that related directly to our discussion. We then examined how the following four key features of data colonialism are being realized: (1) appropriation of resources; (2) establishment of social relations; (3) concentration of wealth; and (4) promotion of ideologies. This allowed us to provide an overview of the topic.
2.1. Search Strategy
We chose “predictive analytics” as our search keyword because there are many studies on LA using educational technology, and this is one of the core research areas [19]. Using LA to predict student success—with the help of educational technology products and other solutions offered by vendors—is commonplace in higher education [20,21]. This keyword allowed us to identify many articles about LA.
We narrowed our focus to impactful journals by identifying the top five journals on educational technology on Google Scholar and Scopus, as well as all educational technology journals indexed in the Web of Science Social Science Citation Index (SSCI). When we examined these three lists, the following journals appeared on more than one list: Education Technology & Society, International Journal of Educational Technology in Higher Education, British Journal of Educational Technology, Educational Technology Research and Development, and Australasian Journal of Educational Technology. Therefore, we focused on articles published in these journals. Figure 1 presents details regarding how studies were included and/or excluded throughout this process.
2.2. Article Identification
After conducting our initial search, we identified a total of 83 studies with no duplicates. As we had targeted specific journals, all of the papers were peer-reviewed and written in English. We then applied the following inclusion criteria: (1) empirical studies; (2) data retrieved from an educational technology system (i.e., LA); (3) published after the year 2000; and (4) more than five citations.
Some of these criteria deserve brief explanation. The second criterion allowed us to exclude studies with traditional data collection strategies, such as questionnaires or interviews, which participants consent to complete. Because such participants provided their data willingly, these studies did not fit our aim. The fourth criterion ensured that we only included studies that had already received some attention in the field. While we believe that all of the studies in these journals are of high quality, studies with more than five citations have gained recognition from the scholarly community, making them our priority.
2.3. Data Analysis
To understand how data colonialism is being realized in educational technology research, we examined four of its key features (see [12] for a detailed account of the concept). We developed key questions to correspond to each feature, as presented in Table 1.
3. Results
3.1. Overview of Studies
The final dataset included 22 studies published between 2013 and 2023. The number of citations of the studies (as of 1 October 2023) ranged from 6 [23] to 146 [24]. Among these studies, two were from the Australasian Journal of Educational Technology, four were from Educational Technology Research and Development, and the remaining sixteen came from the British Journal of Educational Technology and the International Journal of Educational Technology in Higher Education. No studies from Education Technology & Society remained after the inclusion and exclusion criteria were applied. A general summary of the sources identified can be found in Appendix A and Appendix B.

As the studies came from educational technology journals, most focused on learning behaviours or the effectiveness of particular platforms. They included studies on learning argumentation [25,26], facilitating academic advising sessions [27], and coding for kids [28]. Their samples ranged from fewer than 50 [25,29,30,31,32] to more than 100,000 students [24,28]. Many were based on introducing a new educational technology program in either an undergraduate [25,33,34,35,36,37,38] or postgraduate course [28,38]. Other contexts included elementary/high school [26,38,39], professional development for teachers [23,40] or university academic advisors [27], and online programs [28,30,31,35]. Seven studies were from the United States [25,26,30,31,35,39], and four were from Australia [34,38,41,42]. Other studies were performed in Asia [37,43], the United Kingdom [23,35], and Ecuador [27]. One was conducted online and did not specify the location or demographics [28]. Five [29,32,33,36,38] did not explicitly disclose the location despite being empirical studies.
3.2. Features of Data Colonialism
The following section describes the features of data colonialism identified in the studies based on the guiding questions presented in the previous section. After each feature is introduced, it is discussed with reference to the literature.
3.2.1. Appropriation of Resources
Among the studies reviewed, most retrieved behavioural data that had been generated by users of an educational technology system, including game logs [40], page views [24,39], and usage of an e-book tool [33] or learning management system [31,34]. Some studies were interested in user interaction data, such as forum posts [30,31] or chatroom chat logs [43]. Others were interested in spatial data and adopted tracking devices to capture and exploit the movements of learners [38,44]. A few retrieved assignments [25,26,29]. Importantly, all of these data were generated for other purposes (e.g., using a learning tool), not specifically for the research. They were then repurposed to promote the ideologies of the researchers. While many researchers captured log-based data, they also captured other data for linking purposes (e.g., questionnaire data or student outcome data). These data are described in the following section.
3.2.2. Social Relations
In these studies, log-based data were most often linked with questionnaire data. Researchers retrieved individuals’ log-based data (as described in the previous section) and connected them to their answers on a questionnaire. The data included students’ and teachers’ strategies [42], affective outcomes [34], and experiences [39]. Log-based data were also linked with learning achievements, such as final grades [20,32,33,34,39,42], language test results [43], tests of concepts [25], and teachers’ assessments [40,44]. Finally, log-based data were linked with teacher and student demographic data [23,33,36].
3.2.3. Concentration of Wealth
When data are considered a form of wealth, it is necessary to consider who has the power to distribute this wealth. In all of the studies, data produced by users for other purposes were appropriated for LA. While teachers (who may double as researchers) and IT departments can always access such data, we investigated the procedures by which the researchers obtained the authority to access this “wealth”. Several studies did not disclose how they obtained approval to retrieve the data [26,34,37]. Unsurprisingly, most stated that an institutional review board or ethical clearance committee approved this access through a data request [23,30,31,32,33,35,39,40,41,42]. Some studies, however, indicated that approval was “not required” [40], with one claiming that approval was “not applicable” because of “the nature of a study conducted on already available/existing data” [29]. This reflects the notion that wealth is “just there” to be capitalized on by others. It is encouraging that a few studies gave the power back to users and obtained their informed consent to use the data they produced [27,44].
3.2.4. Promotion of Ideologies
The ideologies promoted in the reviewed studies were consistent. Most were concerned with engagement [30,31,41,43], outcomes [23,29,35,39,43], or experiences [27,39,41]. Some were more specific, considering how to adopt educational technologies effectively [33]. While these ideologies are noble, other researchers with access to the same data may not share these aims. Although such ideologies may also exist in other research disciplines, their use as a justification for exploiting data matches the notion of data colonialism.
4. Discussion
4.1. The Presence of Data Colonialism and Related Concerns
The results of this study suggest that data colonialism exists in educational technology research. In general, data were produced using private tools [28,35], higher education learning management systems [31,34], and location tracking tools [38,44]. They were then captured and repurposed by researchers, including teachers [25,26] and members of the general public [28]. While researchers may have had admirable intentions, such as improving engagement [30,31,41,43], outcomes [23,29,35,39,43], or experiences [27,39,41], users were not always given a chance to agree to the use of their data. In practice, some users were only informed that their data were being used [43], and many were not even aware of this because approval was granted by ethics committees [30,31,33,35,39,40,41,42]. This practice echoes the idea that data simply exist and that anyone can take advantage of them [12]. Furthermore, under capitalism, no one can control whether such data will be exploited by others with different, less noble intentions. Some data from these studies are publicly available [28], so future researchers or private contractors will be able to capitalize on them without being bound by any constraints.
Our results highlight three major concerns related to data colonialism. First, data colonialism can further marginalize particular communities of learners. When researchers use existing data to establish new relationships with demographic variables [23,33,36] or final grades [34,39,40,42], they also establish relationships between students’ demographics (e.g., race and gender) and behaviours. We found that these patterns may change more often in education than in other fields. For example, Williams et al. [36] examined students’ use of a lecture-capturing podcast and concluded that Asian students and women were the heaviest users at the US university studied. The authors then singled out Asian women for further discussion and found that their heavy usage did not correlate with exam performance; the results of students from other groups were not discussed, and no comparison with other groups was made. While the study argues that its findings relate to other literature, this can be considered a first step towards marginalizing Asian women. If these marginalized communities are targeted, their learning experience may be affected in the future; for example, some teachers might dismiss heavy usage as an indicator of diligence based on the results of this study, making students feel that their time was wasted.
Second, the power dynamics between teachers, educational technology researchers, and learners make educational data especially vulnerable to data colonialism. For example, in relation to marketing analytics or social media analytics, users can choose not to use certain platforms to prevent their data from being colonized (as suggested by [13]). However, in educational institutions, it is hard for users to refuse. In practice, students generate data through courses they have to take for credit [25,32,33,34,35,36,37,42,43]. This may involve an educational technology tool they are required to use to pass the course or complete a mandatory assignment. The data students generate can then be retrieved for research purposes, a practice that can be seen as a form of colonial aggression.
In this context, teachers and/or educational technology researchers can also leverage their roles to require students to generate data/wealth, which can then be retrieved and capitalized upon. Significantly, this process also contributes to the advancement of researchers who benefit from the extraction of this “data-wealth”. After obtaining approval from educational institutions [32,33,35,42], the data can be repurposed and exploited, often without giving students a chance to refuse or informing them that their data have been retrieved. This scenario can occur only because educational administrators or teachers hold power over their students, creating an unbalanced relationship that closely resembles colonialism. Therefore, these users are especially vulnerable to data colonialism and the exploitation of their data to benefit others.
Third, the data retrieval and approval processes themselves are also concerning. We identified six levels of data sovereignty, ranging from studies that provided no information on how they obtained approval for data retrieval to those that gave users a choice of whether to participate. At the lowest level, some studies did not disclose how the data retrieval was approved [26,34,37]. At the second level, studies claimed that prior approval was not required or necessary [29,35] but at least disclosed this practice. At the third level, one study used a secondary dataset available online [28]. At the fourth level, many studies followed a conventional approach and gained access to data after ethics clearance from their institutions [30,31,33,35,39,40,41]; at this level, students may still not know that their data are being retrieved or used for research purposes. At the fifth level, one study informed students that their data were being used [43], which we consider a better practice. At the highest level, many studies asked students for explicit consent [23,25,27,32,36,38,42,44], giving them a chance to agree or disagree with the use of their data. These six levels of data retrieval practices provide a contextual overview of how data colonialism takes place in the field of educational technology. Below, we offer recommendations to decolonize such data retrieval practices.
4.2. Limitations
While we position the current study as an exploratory overview of the current literature, several limitations deserve readers’ attention. First, it is somewhat ironic that a review examining colonialism draws only on studies from the most impactful journals, that is, studies that embraced the “English language and Euro-Western worldviews” [45], as is apparent from the notions of “better” and “more effective” in the reviewed studies. Unfortunately, this is a common issue in systematic reviews (see de Almeida and de Goulart [46] for further discussion). We believe that this review is only a starting point for understanding the so-called “mainstream” literature; more can be done afterwards.
Second, only one search term (i.e., “predictive analytics”) was used to represent the field of learning analytics. The original intention was to gather studies on data-driven analytics broadly (see inclusion criteria), as predictive analytics is an important stream of research within the field. While this allowed us to include a range of data-driven studies, it may have excluded other important learning analytics studies (e.g., those that profiled students through clustering). In other words, the studies identified are not representative of all learning analytics studies.
4.3. Implications and Recommendations
Having found evidence of data colonialism in educational technology research, it is difficult to decide what to do next. User data generated by educational technology are available, their use is endorsed by institutions, and researchers take advantage of them to promote their ideologies. While we can offer some suggestions to empower the “colonized” users of educational technology, we are reluctant to argue that researchers must stop retrieving or mining data as a form of “decolonization”. LA, AI, and educational data mining have established positions in the world of knowledge.
However, it may be possible to perceive data colonialism through a traditional postcolonialist lens. Postcolonialism generally refers to the study of formerly colonized cultures [46]; it often invokes hybridity, as suggested by Bhabha [47,48], acknowledging the value both of the identities and knowledge produced through the process of colonization and of those that pre-dated it. This notion of “hybridity” has started to emerge in the technological literature (e.g., [49]). Such an approach may help us move forward from arguing that data colonialism exists to embracing a postcolonialist world. In practice, we propose the following steps to decolonize data practices:
Respecting data sovereignty: Institutional ethics committees need to ensure that researchers have made a reasonable attempt to decolonize their data practices by obtaining consent from users before using their data. While this is not always possible, especially with large institutional datasets, this review shows that it is sometimes feasible. In our review, Yan et al. [38] and Broadbent and Fuller-Tyszkiewicz [42] asked for consent from users despite retrieving their data directly from university computer systems. This shows a significant effort to respect users’ “right to be forgotten” [50].
Sensible data relations building: Institutional ethics committees should decolonize their review of data retrieval requests and consider how researchers are building relationships between variables. Only theoretically or empirically meaningful relationships should be examined. In our review, we were pleased to find that behavioural data were seldom linked to demographic data, as such linkage is one of students’ major concerns (see [10]). If there are too many linkages or data points, ethics committees should be cautious about how this could affect the personal lives of users, especially those from marginalized communities.
Avoiding manipulation of user behaviours: We do not dispute the ideologies promoted by the reviewed studies, such as promoting engagement [30,31,41,43] or analyzing the effectiveness of programs [30,31]. To embrace a postcolonialist perspective, however, knowledge derived from data analytics alone should be deployed with caution. First, educational technology practitioners should further their understanding of user behaviour based on self-reported measures [30,31,33,34,39,42] or qualitative approaches [27]. Second, measures that aim to promote engagement or improve outcomes should not manipulate users’ behaviour.
Decolonizing the ethical clearance process: While ethics clearance committees do not usually include students due to the committees’ technical and academic nature, institutions should consider engaging students, staff members, and other users in approving data retrieval requests. We believe that the best practice is to ask for consent directly. If that is impossible or inappropriate due to the ecology of ethics approval at an institution, an appropriate first step towards decolonization would be to include student members in the data retrieval committee that approves and rejects requests from researchers. Having all data users represented can support the “sensible data relations building” and “avoiding manipulation of user behaviours” practices described above.
Decolonizing system design: While we do not claim the technical knowledge necessary, we suggest decolonizing educational technology systems from the top down (i.e., at the system design level). Modern university systems are linked together, and user attributes are shared among databases. For example, student numbers and preferred names are entered into the registrar’s system and shared with the learning management system. In recent decades, educational institutions have adopted the inclusive practice of allowing users to enter their preferred pronouns in various systems (see [51] for a detailed discussion). We argue that institutions could similarly permit users to choose whether their data are shared across systems. With this attribute, IT personnel could retrieve data after filtering out those who have exercised their “right to be forgotten”. Instead of retrieving all user data and de-identifying them manually, omitting data from certain users may be a more decolonized practice.
Informing students about data use: As part of the data consent process, students should be informed at the point of registration that the data they generate by interacting with the institution’s systems may be utilized for various purposes. This can include not only the improvement of courses and programmes but also research purposes. This transparency could empower students to make informed decisions about their data and contribute to the decolonization of data practices.
5. Conclusions
Colonization has never been alien to the educational community, and this study shows that it is now manifesting in the use of data for research as well. This review examined 22 articles using a predictive analytics approach and educational technology data. We found that data colonialism is common in the field of educational technology. With vulnerable data users and administrators in an “ivory tower”, educational technology produces a broad range of data that are “just there” to be exploited. Promising better learning outcomes, researchers retrieve, repurpose, and link these data. While some users were fortunate enough to have control over their data, others’ data were used based only on the approval of institutional ethics committees.
We are concerned that this sort of data colonialism could lead to the further marginalization of some learners. However, we are not advocating that researchers stop using data completely in order to achieve the “decolonization” of educational technology. Instead, we have proposed a range of measures to decolonize data practices so that users can regain data sovereignty and limit their chances of being manipulated by algorithms. These practices may not fully decolonize educational technology, but they can at least raise awareness of data colonization.
Conceptualization, L.K. and D.F.; methodology, D.F.; formal analysis, D.F. and L.K.; writing—original draft preparation & review and editing, D.F. and L.K. All authors have read and agreed to the published version of the manuscript.
Not applicable.
The authors declare no conflicts of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1. PRISMA 2020 flow diagram on article identification (adapted from Page et al. [22]).
Table 1. Guiding questions for data extraction.
Feature of Data Colonialism | Guiding Question |
---|---|
Appropriation of Resources | What data are being retrieved? |
Social Relations | Other than the data being retrieved, what other information about users is involved? |
Concentration of Wealth | Who has the privilege to approve the use of data? |
Promotion of Ideologies | What “better” outcome is being presented as the result of using the data? |
Appendix A
General summary of 22 studies being reviewed.
Citation Entry (#) | Article Title | Year | Authors | Links (All Accessed on 26 September 2023) |
---|---|---|---|---|
[ | Analysis of patterns in time for evaluating effectiveness of first principles of instruction | 2022 | Frick et al. | |
[ | A large-scale implementation of predictive learning analytics in higher education: the teachers’ role and perspective | 2019 | Herodotou et al. | |
[ | The effects of successful versus failure-based cases on argumentation while solving decision-making problems | 2013 | Tawfik and Jonassen | |
[ | Identifying patterns in students’ scientific argumentation: content analysis through text mining using Latent Dirichlet Allocation | 2020 | Xing et al. | |
[ | Adoption and impact of a learning analytics dashboard supporting the advisor—Student dialogue in a higher education institute in Latin America | 2020 | De Laet et al. | |
[ | Understanding the relationship between computational thinking and computational participation: a case study from Scratch online community | 2021 | Jiang et al. | |
[ | To design or to integrate? Instructional design versus technology integration in developing learning interventions | 2020 | Kale et al. | |
[ | Priming, enabling and assessment of curiosity | 2019 | Sher et al. | |
[ | Exploring indicators of engagement in online learning as applied to adolescent health prevention: a pilot study of REAL media | 2020 | Ray et al. | |
[ | Gamification during COVID-19: Promoting active learning and motivation in higher education | 2021 | Rincon-Flores and Santos-Guevara | |
[ | The adoption of mark-up tools in an interactive e-textbook reader | 2016 | Van Horne et al. | |
[ | Academic success is about self-efficacy rather than frequency of use of the learning management system | 2016 | Broadbent | |
[ | Empowering online teachers through predictive learning analytics | 2019 | Herodotou et al. | |
[ | Lecture capture podcasts: differential student use and performance in a large introductory course | 2015 | Williams et al. | |
[ | Learning Analytics at Low Cost: At-risk Student Prediction with Clicker Data and Systematic Proactive Interventions | 2018 | Choi et al. | |
[ | The role of indoor positioning analytics in assessment of simulation-based learning | 2022 | Yan et al. | |
[ | Predict or describe? How learning analytics dashboard design influences motivation and statistics anxiety in an online statistics course | 2021 | Valle et al. | |
[ | Do social regulation strategies predict learning engagement and learning outcomes? A study of English language learners in wiki-supported literature circles activities | 2021 | Li et al. | |
[ | Does slow and steady win the race?: Clustering patterns of students’ behaviors in an interactive online mathematics game | 2022 | Lee et al. | |
[ | Mapping from proximity traces to socio-spatial behaviours and student progression at the school | 2022 | Yan et al. | |
[ | Profiles in self-regulated learning and their correlates for online and blended learning students | 2018 | Broadbent and Fuller-Tyszkiewicz | |
[ | Identifying engagement patterns with video annotation activities: A case study in professional development | 2018 | Mirriahi et al. | |
Appendix B
General summary of the 22 studies reviewed.
| Authors | No. of Citations | Context | Sample Size | Location | Data Retrieved from Educational Technology Systems | Other Data Collected/Retrieved |
|---|---|---|---|---|---|---|
| Frick et al. | 6 | University teachers | 59 | UK | Login data of a dashboard | |
| Herodotou et al. | 146 | MOOC | 172,417 | US | Usage data on webpages (pageviews, clicks, scrolling) | nil |
| Tawfik and Jonassen | 85 | Undergraduate | 36 | US | Arguments produced by users | Pretest and post-test of concepts |
| Xing et al. | 26 | Middle/high school | 2472 | US | Student-produced arguments | Teacher assessment of students’ learning |
| De Laet et al. | 34 | University academic advisors | 172 | Ecuador | Student study plan before and after intervention | Simulated advising sessions (qualitative data) |
| Jiang et al. | 10 | Online learning tool | 105,720 | Online | Online learning journey (likes/loves), remixing projects | Computation scores assigned by another researcher |
| Kale et al. | 17 | Postgraduate | 22 | Not mentioned | Final projects completed for courses | nil |
| Sher et al. | 11 | Online program for youth club | 38 | US | Participant interactions | Questionnaire data on audience engagement |
| Ray et al. | 13 | Online substance use prevention program | 38 | US | User interactions on the LMS | Questionnaire data on program usability |
| Rincon-Flores and Santos-Guevara | 54 | Undergraduate | 40 | Not mentioned | Student final grades and course achievement | Student grade |
| Van Horne et al. | 71 | Undergraduate | 274 | “Midwest” | Student usage of mark-up tool (for a reading tool) | Questionnaire on reading behaviour |
| Broadbent | 100 | Undergraduate | 310 | Australia | Student LMS usage data | Questionnaire data on self-efficacy, locus of control, motivation |
| Herodotou et al. | 79 | Undergraduate | 559 | UK | Usage of dashboard system | Discipline of teachers/student performance |
| Williams et al. | 46 | Undergraduate | 835 | Not mentioned | Login data from video viewing site | In-class clickers, student demographics |
| Choi et al. | 113 | Undergraduate | 1075 | Hong Kong | In-class clicker data | Demographic information |
| Yan et al. | 12 | Undergraduate | 3604 | Australia | Position tracking in a simulated room | Teacher assessment of students’ learning |
| Valle et al. | 20 | Postgraduate | 179 | US | Number of views | Questionnaire data on prior content knowledge, experience |
| Li et al. | 20 | English language course | 95 | China | QQ chatroom chat logs | Language test at the end of activities |
| Lee et al. | 9 | Middle school | 227 | US | Student game logs | |
| Yan et al. | 8 | Elementary | 98 | Not mentioned | Position tracker/wearable device position data | Student progression |
| Broadbent and Fuller-Tyszkiewicz | 122 | Undergraduate | 606 | Australia | Final grade | |
| Mirriahi et al. | 37 | Teachers | 163 | Australia | Behavioural data on video annotation tool | nil |
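Several of the reviewed studies (e.g., Choi et al.’s clicker-based at-risk prediction) derive predictions from simple counts of student interaction data. As a minimal, purely illustrative sketch of this kind of predictive learning analytics, the snippet below flags students whose participation counts fall well below the class mean; the function name, student IDs, counts, and cutoff are all hypothetical and are not drawn from any of the reviewed studies.

```python
# Illustrative sketch only: flagging potentially "at-risk" students from
# clicker-participation counts, in the spirit of low-cost predictive
# learning analytics. All identifiers and thresholds are assumptions.
from statistics import mean, pstdev

def flag_at_risk(participation, z_cutoff=-1.0):
    """Return student IDs whose participation count lies at or below
    z_cutoff standard deviations from the class mean."""
    counts = list(participation.values())
    mu, sigma = mean(counts), pstdev(counts)
    if sigma == 0:  # no variation: nothing stands out
        return []
    return sorted(sid for sid, c in participation.items()
                  if (c - mu) / sigma <= z_cutoff)

clicks = {"s01": 42, "s02": 39, "s03": 8, "s04": 45, "s05": 37}
print(flag_at_risk(clicks))  # → ['s03']
```

Even a sketch this small makes the data-colonialism concern concrete: the student is reduced to an interaction count, and the institution decides what counts as “at risk”.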
References
1. Laurillard, D. Supporting Teachers in Optimizing Technologies for Open Learning. Global Challenges and Perspectives in Blended and Distance Learning; IGI Global: Hershey, PA, USA, 2013; pp. 160-173. ISBN 978-1-4666-3978-2
2. Bax, S. CALL—Past, Present and Future. System; 2003; 31, pp. 13-28. [DOI: https://dx.doi.org/10.1016/S0346-251X(02)00071-4]
3. Kohnke, L.; Moorhouse, B.L.; Zou, D. ChatGPT for Language Teaching and Learning. RELC J.; 2023; 54, pp. 537-550. [DOI: https://dx.doi.org/10.1177/00336882231162868]
4. Kohnke, L.; Moorhouse, B.L.; Zou, D. Exploring generative artificial intelligence preparedness among university language instructors. Comput. Educ. Artif. Intell.; 2023; 5, 100156. [DOI: https://dx.doi.org/10.1016/j.caeai.2023.100156]
5. Chiu, T.K.F.; Xia, Q.; Zhou, X.; Chai, C.S.; Cheng, M. Systematic Literature Review on Opportunities, Challenges, and Future Research Recommendations of Artificial Intelligence in Education. Comput. Educ. Artif. Intell.; 2023; 4, 100118. [DOI: https://dx.doi.org/10.1016/j.caeai.2022.100118]
6. Kexin, L.; Yi, Q.; Xiaoou, S.; Yan, L. Future Education Trend Learned From the COVID-19 Pandemic: Take ≪Artificial Intelligence≫ Online Course As an Example. Proceedings of the 2020 International Conference on Artificial Intelligence and Education (ICAIE); Online, 26–28 June 2020; pp. 108-111.
7. Chaudhry, M.A.; Kazim, E. Artificial Intelligence in Education (AIEd): A High-Level Academic and Industry Note 2021. AI Ethics; 2022; 2, pp. 157-165. [DOI: https://dx.doi.org/10.1007/s43681-021-00074-z] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34790953]
8. Society for Learning Analytics Research. What Is Learning Analytics?. Available online: https://www.solaresearch.org/about/what-is-learning-analytics/ (accessed on 22 October 2023).
9. Mhlambi, S. Decolonizing AI. Available online: https://www.youtube.com/watch?v=UqVwfuIuU2k&t=30s (accessed on 1 October 2023).
10. Ifenthaler, D.; Schumacher, C. Student Perceptions of Privacy Principles for Learning Analytics. Educ. Technol. Res. Dev.; 2016; 64, pp. 923-938. [DOI: https://dx.doi.org/10.1007/s11423-016-9477-y]
11. Scholes, V. The Ethics of Using Learning Analytics to Categorize Students on Risk. Educ. Technol. Res. Dev.; 2016; 64, pp. 939-955. [DOI: https://dx.doi.org/10.1007/s11423-016-9458-1]
12. Couldry, N.; Mejias, U.A. The Decolonial Turn in Data and Technology Research: What Is at Stake and Where Is It Heading?. Inf. Commun. Soc.; 2023; 26, pp. 786-802. [DOI: https://dx.doi.org/10.1080/1369118X.2021.1986102]
13. Couldry, N.; Mejias, U.A. Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject. Telev. New Media; 2019; 20, pp. 336-349. [DOI: https://dx.doi.org/10.1177/1527476418796632]
14. Thompson, T.L.; Prinsloo, P. Returning the Data Gaze in Higher Education. Learn. Media Technol.; 2023; 48, pp. 153-165. [DOI: https://dx.doi.org/10.1080/17439884.2022.2092130]
15. Mumford, D. Data Colonialism: Compelling and Useful, but Whither Epistemes?. Inf. Commun. Soc.; 2022; 25, pp. 1511-1516. [DOI: https://dx.doi.org/10.1080/1369118X.2021.1986103]
16. Zembylas, M. A Decolonial Approach to AI in Higher Education Teaching and Learning: Strategies for Undoing the Ethics of Digital Neocolonialism. Learn. Media Technol.; 2023; 48, pp. 25-37. [DOI: https://dx.doi.org/10.1080/17439884.2021.2010094]
17. Thatcher, J.; O’Sullivan, D.; Mahmoudi, D. Data Colonialism through Accumulation by Dispossession: New Metaphors for Daily Data. Env. Plan. D; 2016; 34, pp. 990-1006. [DOI: https://dx.doi.org/10.1177/0263775816633195]
18. Prinsloo, P. Data Frontiers and Frontiers of Power in (Higher) Education: A View of/from the Global South. Teach. High. Educ.; 2020; 25, pp. 366-383. [DOI: https://dx.doi.org/10.1080/13562517.2020.1723537]
19. Sghir, N.; Adadi, A.; Lahmer, M. Recent Advances in Predictive Learning Analytics: A Decade Systematic Review (2012–2022). Educ. Inf. Technol.; 2023; 28, pp. 8299-8333. [DOI: https://dx.doi.org/10.1007/s10639-022-11536-0] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36571084]
20. Smithers, L. Predictive Analytics and the Creation of the Permanent Present. Learn. Media Technol.; 2023; 48, pp. 109-121. [DOI: https://dx.doi.org/10.1080/17439884.2022.2036757]
21. Williamson, B. Policy Networks, Performance Metrics and Platform Markets: Charting the Expanding Data Infrastructure of Higher Education. Br. J. Educ. Technol.; 2019; 50, pp. 2794-2809. [DOI: https://dx.doi.org/10.1111/bjet.12849]
22. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E. et al. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ; 2021; 372, n71. [DOI: https://dx.doi.org/10.1136/bmj.n71]
23. Frick, T.W.; Myers, R.D.; Dagli, C. Analysis of Patterns in Time for Evaluating Effectiveness of First Principles of Instruction. Educ. Technol. Res. Dev.; 2022; 70, pp. 1-29. [DOI: https://dx.doi.org/10.1007/s11423-021-10077-6]
24. Herodotou, C.; Rienties, B.; Boroowa, A.; Zdrahal, Z.; Hlosta, M. A Large-Scale Implementation of Predictive Learning Analytics in Higher Education: The Teachers’ Role and Perspective. Educ. Technol. Res. Dev.; 2019; 67, pp. 1273-1306. [DOI: https://dx.doi.org/10.1007/s11423-019-09685-0]
25. Tawfik, A.; Jonassen, D. The Effects of Successful versus Failure-Based Cases on Argumentation While Solving Decision-Making Problems. Educ. Technol. Res. Dev.; 2013; 61, pp. 385-406. [DOI: https://dx.doi.org/10.1007/s11423-013-9294-5]
26. Xing, W.; Lee, H.-S.; Shibani, A. Identifying Patterns in Students’ Scientific Argumentation: Content Analysis through Text Mining Using Latent Dirichlet Allocation. Educ. Technol. Res. Dev.; 2020; 68, pp. 2185-2214. [DOI: https://dx.doi.org/10.1007/s11423-020-09761-w]
27. De Laet, T.; Millecamp, M.; Ortiz-Rojas, M.; Jimenez, A.; Maya, R.; Verbert, K. Adoption and Impact of a Learning Analytics Dashboard Supporting the Advisor—Student Dialogue in a Higher Education Institute in Latin America. Br. J. Educ. Technol.; 2020; 51, pp. 1002-1018. [DOI: https://dx.doi.org/10.1111/bjet.12962]
28. Jiang, B.; Zhao, W.; Gu, X.; Yin, C. Understanding the Relationship between Computational Thinking and Computational Participation: A Case Study from Scratch Online Community. Educ. Technol. Res. Dev.; 2021; 69, pp. 2399-2421. [DOI: https://dx.doi.org/10.1007/s11423-021-10021-8]
29. Kale, U.; Roy, A.; Yuan, J. To Design or to Integrate? Instructional Design versus Technology Integration in Developing Learning Interventions. Educ. Technol. Res. Dev.; 2020; 68, pp. 2473-2504. [DOI: https://dx.doi.org/10.1007/s11423-020-09771-8]
30. Sher, K.B.-T.; Levi-Keren, M.; Gordon, G. Priming, Enabling and Assessment of Curiosity. Educ. Technol. Res. Dev.; 2019; 67, pp. 931-952. [DOI: https://dx.doi.org/10.1007/s11423-019-09665-4]
31. Ray, A.E.; Greene, K.; Pristavec, T.; Hecht, M.L.; Miller-Day, M.; Banerjee, S.C. Exploring Indicators of Engagement in Online Learning as Applied to Adolescent Health Prevention: A Pilot Study of REAL Media. Educ. Technol. Res. Dev.; 2020; 68, pp. 3143-3163. [DOI: https://dx.doi.org/10.1007/s11423-020-09813-1]
32. Rincon-Flores, E.G.; Santos-Guevara, B.N. Gamification during COVID-19: Promoting Active Learning and Motivation in Higher Education. Australas. J. Educ. Technol.; 2021; 37, pp. 43-60. [DOI: https://dx.doi.org/10.14742/ajet.7157]
33. Van Horne, S.; Russell, J.; Schuh, K.L. The Adoption of Mark-up Tools in an Interactive e-Textbook Reader. Educ. Technol. Res. Dev.; 2016; 64, pp. 407-433. [DOI: https://dx.doi.org/10.1007/s11423-016-9425-x]
34. Broadbent, J. Academic Success Is about Self-Efficacy Rather than Frequency of Use of the Learning Management System. Australas. J. Educ. Technol.; 2016; 32, 2634. [DOI: https://dx.doi.org/10.14742/ajet.2634]
35. Herodotou, C.; Hlosta, M.; Boroowa, A.; Rienties, B.; Zdrahal, Z.; Mangafa, C. Empowering Online Teachers through Predictive Learning Analytics. Br. J. Educ. Technol.; 2019; 50, pp. 3064-3079. [DOI: https://dx.doi.org/10.1111/bjet.12853]
36. Williams, A.E.; Aguilar-Roca, N.M.; O’Dowd, D.K. Lecture Capture Podcasts: Differential Student Use and Performance in a Large Introductory Course. Educ. Technol. Res. Dev.; 2016; 64, pp. 1-12. [DOI: https://dx.doi.org/10.1007/s11423-015-9406-5]
37. Choi, S.P.M.; Lam, S.S.; Li, K.C.; Wong, B.T.M. Learning Analytics at Low Cost: At-Risk Student Prediction with Clicker Data and Systematic Proactive Interventions. J. Educ. Technol. Soc.; 2018; 21, pp. 273-290.
38. Yan, L.; Martinez-Maldonado, R.; Zhao, L.; Dix, S.; Jaggard, H.; Wotherspoon, R.; Li, X.; Gašević, D. The Role of Indoor Positioning Analytics in Assessment of Simulation-Based Learning. Br. J. Educ. Technol.; 2023; 54, pp. 267-292. [DOI: https://dx.doi.org/10.1111/bjet.13262]
39. Valle, N.; Antonenko, P.; Valle, D.; Sommer, M.; Huggins-Manley, A.C.; Dawson, K.; Kim, D.; Baiser, B. Predict or Describe? How Learning Analytics Dashboard Design Influences Motivation and Statistics Anxiety in an Online Statistics Course. Educ. Technol. Res. Dev.; 2021; 69, pp. 1405-1431. [DOI: https://dx.doi.org/10.1007/s11423-021-09998-z] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34075283]
40. Lee, J.-E.; Chan, J.Y.-C.; Botelho, A.; Ottmar, E. Does Slow and Steady Win the Race?: Clustering Patterns of Students’ Behaviors in an Interactive Online Mathematics Game. Educ. Technol. Res. Dev.; 2022; 70, pp. 1575-1599. [DOI: https://dx.doi.org/10.1007/s11423-022-10138-4]
41. Mirriahi, N.; Jovanovic, J.; Dawson, S.; Gašević, D.; Pardo, A. Identifying Engagement Patterns with Video Annotation Activities: A Case Study in Professional Development. Australas. J. Educ. Technol.; 2018; 34, 3207. [DOI: https://dx.doi.org/10.14742/ajet.3207]
42. Broadbent, J.; Fuller-Tyszkiewicz, M. Profiles in Self-Regulated Learning and Their Correlates for Online and Blended Learning Students. Educ. Technol. Res. Dev.; 2018; 66, pp. 1435-1455. [DOI: https://dx.doi.org/10.1007/s11423-018-9595-9]
43. Li, Y.; Chen, K.; Su, Y.; Yue, X. Do Social Regulation Strategies Predict Learning Engagement and Learning Outcomes? A Study of English Language Learners in Wiki-Supported Literature Circles Activities. Educ. Technol. Res. Dev.; 2021; 69, pp. 917-943. [DOI: https://dx.doi.org/10.1007/s11423-020-09934-7]
44. Yan, L.; Martinez-Maldonado, R.; Gallo Cordoba, B.; Deppeler, J.; Corrigan, D.; Gašević, D. Mapping from Proximity Traces to Socio-Spatial Behaviours and Student Progression at the School. Br. J. Educ. Technol.; 2022; 53, pp. 1645-1664. [DOI: https://dx.doi.org/10.1111/bjet.13203]
45. Chambers, L.A.; Jackson, R.; Worthington, C.; Wilson, C.L.; Tharao, W.; Greenspan, N.R.; Masching, R.; Pierre-Pierre, V.; Mbulaheni, T.; Amirault, M. et al. Decolonizing Scoping Review Methodologies for Literature With, for, and by Indigenous Peoples and the African Diaspora: Dialoguing With the Tensions. Qual. Health Res.; 2018; 28, pp. 175-188. [DOI: https://dx.doi.org/10.1177/1049732317743237] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29182046]
46. De Almeida, C.P.B.; De Goulart, B.N.G. How to Avoid Bias in Systematic Reviews of Observational Studies. Rev. CEFAC; 2017; 19, pp. 551-555. [DOI: https://dx.doi.org/10.1590/1982-021620171941117]
47. Wang, Y. The Cultural Factors in Postcolonial Theories and Applications. JLTR; 2018; 9, 650. [DOI: https://dx.doi.org/10.17507/jltr.0903.26]
48. Bhabha, H. The Location of Culture; Routledge: London, UK, 1994.
49. Peralta, L.M.M. Resisting Techno-Orientalism and Mimicry Stereotypes in and Through Data Science Education. TechTrends; 2023; 67, pp. 426-434. [DOI: https://dx.doi.org/10.1007/s11528-023-00842-0]
50. Drachsler, H.; Greller, W. Privacy and Analytics—It’s a DELICATE Issue. A Checklist for Trusted Learning Analytics. Proceedings of the 6th Learning Analytics and Knowledge Conference 2016; Edinburgh, UK, 25–29 April 2016.
51. Chan, B.; Stewart, J.J. Listening to Nonbinary Chemistry Students: Nonacademic Roadblocks to Success. J. Chem. Educ.; 2022; 99, pp. 409-416. [DOI: https://dx.doi.org/10.1021/acs.jchemed.1c00498]
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
As learning analytics and educational data mining have become the “new normal” in the field, scholars have observed the emergence of data colonialism. Generally, data colonialism can be understood as the process by which data are considered “free” to take and appropriate. Building on this theoretical understanding, this study aims to contextualize data colonialism in educational technology by identifying and reviewing learning analytics studies that adopted a predictive analytics approach. We examined 22 studies from major educational technology journals and noted how they (1) see data as a resource to appropriate, (2) establish new social relations, (3) show the concentration of wealth, and (4) promote ideologies. We found evidence of data colonialism in the field of educational technology. While these studies may promote “better” ideologies, it is concerning that they justify authorities capitalizing on “free” data. After providing a contextualized view of data colonialism in educational technology, we propose several measures to decolonize data practices, adopting a postcolonial approach. We see data colonialism not only as a privacy issue but also as a culture that must be challenged.
Details
1 Department of English Language Education, The Education University of Hong Kong, Tai Po, Hong Kong
2 School of Journalism, Writing, and Media, University of British Columbia, Vancouver, BC V6T 1Z2, Canada