Abstract: Emotion-aware technologies are increasingly shaping the future of digital education. This study explores the potential of affective artificial intelligence (AI) to recognize and respond to learners' emotional states in online learning environments. While such systems promise more inclusive, supportive, and responsive digital classrooms, their design also raises important ethical and psychosocial concerns. Drawing from affective computing, digital empathy, and inclusive pedagogy, this conceptual study examines how AI can be used not only to monitor engagement but also to promote emotional wellbeing and learner autonomy, especially for students at risk of emotional distress, disconnection, or exclusion. Through analysis of existing technologies and case-informed reflections, the paper identifies both the opportunities and the limitations of affective systems in e-learning. A preliminary framework for ethically aligned emotional AI is proposed, emphasizing transparency, user agency, and safeguards against bias and manipulation. These insights aim to inform educators, designers, and policymakers working toward more humane, equitable, and emotionally intelligent uses of AI in lifelong learning.
1. Introduction
Artificial intelligence (AI) is changing the way we design, access, and experience education. One of the most fascinating, and most controversial, developments in this space is affective computing: technology that can sense and respond to human emotions. In digital learning environments, this has given rise to emotion-aware systems that aim to make learning more personalized, engaging, and emotionally supportive (European Commission, 2017; Cummings, 2018; Picard, 1997; D'Mello and Graesser, 2015).
But while the possibilities are exciting, they also raise important questions. Can a system truly understand how a learner feels? What happens when students are emotionally profiled, tracked, or nudged based on their data? And how can we make sure that emotional information is used in ways that support, rather than harm, those who may already be struggling (Zuboff, 2019; Floridi and Cowls, 2019; Cheong, 2024; Koene et al., 2021)?
This study takes a closer look at these issues by focusing on affective AI in e-learning. The goal is to understand how these technologies might help or hurt learners, especially those who feel emotionally disconnected or isolated. Special attention is given to students who experience stress, disengagement, or who are neurodivergent and may express emotions differently.
By offering a conceptual framework and reflecting on real-world cases, this study contributes to the growing conversation about how AI can be used in education with care and responsibility. It suggests that emotion-aware technologies should not just make learning smarter; they should also make it kinder and more human.
2. Theoretical Background
The integration of affective computing into digital education is shaped by ideas and approaches from several fields, including artificial intelligence, psychology, pedagogy, and ethics. This section outlines the core concepts that support the development of emotion-aware learning environments, with a particular focus on digital empathy, learner diversity, and bioethical concerns.
2.1 Affective Computing and Emotion-Aware AI
Existing work on affect-aware learning technologies is increasingly complemented by studies that explore multimodal sensing and adaptive feedback in real-world learning contexts. Recent research shows how emotion recognition can be integrated into robotic tutoring or multi-agent environments to enhance engagement and personalization (Fung et al., 2025). At the same time, emerging literature on accountability in AI stresses that affective systems need to be evaluated not only for their technical accuracy but also for their transparency and responsibility toward learners (Novelli, Taddeo and Floridi, 2023). In line with this, Seremeti (2023) emphasizes that affective educational technologies must be interpreted through socio-cultural lenses, cautioning against the adoption of universal models of emotional expression that ignore contextual diversity. These developments highlight that affective computing in education is no longer limited to theoretical exploration but is gradually entering applied pedagogical settings, accompanied by pressing ethical debates.
2.2 Digital Empathy in Online Learning
In addition to traditional concerns around the authenticity of empathy in human-computer interaction, new perspectives have linked digital empathy to issues of children's rights, psychological safety, and the socioemotional climate of online classrooms. For example, Risser and Bottoms (2020) argue that the use of emotion-recognition AI in contexts involving young or vulnerable learners requires careful safeguards to avoid exploitation or undue influence. More recent dialogic approaches suggest that empathy-driven systems can play a constructive role when designed to enhance mutual understanding rather than to simulate care superficially (Slade, Prinsloo and Khalil, 2019). Complementing these perspectives, Anastasiadou et al. (2022) stress that educational applications of affective AI must be aligned with learner autonomy, ensuring that emotional nudges do not compromise the integrity of pedagogical relationships.
2.3 Inclusion and Neurodiversity in E-Learning
International frameworks have increasingly emphasized the importance of inclusion when deploying AI in education. The UNESCO report on AI and inclusion (2022) stresses that emotion-aware systems must be aligned with principles of equity and non-discrimination, ensuring that neurodiverse learners or those from underrepresented cultural backgrounds are not misclassified. Related research has highlighted similar risks: Souravlas et al. (2021) show how algorithmic personalization may unintentionally reinforce bias in digital platforms, while Robertson and Ne'eman (2008) warn that rigid affective models risk pathologizing neurodiverse emotional expressions. Together, these insights underscore the need for inclusive design principles that recognize diversity rather than impose uniformity.
2.4 Ethical Frameworks and Bio-Psycho-Social Implications
A bioethical lens reminds us that emotions are not merely data points to be captured, but are deeply personal and connected to identity, privacy, and trust. As Anastasiadou et al. (2024) argue, it is crucial to balance technological innovation with care and respect for learners' emotional integrity. At the same time, the ethical discourse around affective AI has intensified in recent years with the publication of the European Union's AI Act and parallel critiques of its limitations. Wachter (2024) identifies significant loopholes in the legislation that directly affect how emotional data may be collected, processed, and governed. Similarly, Robles and Mallinson (2023) highlight the lack of cohesion in global AI governance frameworks, underlining the risk that affective systems may advance faster than regulatory mechanisms. Building on these debates, Kontis et al. (2025) introduce the concept of legal entropy to describe the uncertainty and fluidity of AI regulation, an idea with particular resonance for emotion-aware systems where ethical and legal boundaries remain unsettled.
2.5 Cultural and Socio-Psychological Perspectives on Affective AI
Cultural background, social context, and lived experience all shape how people express and interpret emotions, and these dimensions strongly influence the design and application of affective AI in education. Seremeti (2023) highlights the risks of adopting universal emotion models that disregard cultural diversity, calling instead for culturally sensitive approaches to affective modelling. This concern is echoed in work by Anastasiadou et al. (2022), who underline the ethical risks of using AI to "read" emotions without safeguarding learner autonomy. Building on these perspectives, Souravlas et al. (2021) show how algorithmic personalization can unintentionally reinforce bias in digital platforms, further illustrating the dangers of misclassifying emotional behaviour. More recent European research situates affective AI within wider socio-cultural and workforce transformations: Kalogera et al. (2025) link emotional dimensions of technology to the emerging notion of a "digital DNA" shaping modern labour markets, while Anastasiadou (2026) applies advanced statistical techniques to explore ethical perceptions of AI in educational contexts. Complementing these analyses, Kontis et al. (2025) introduce the concept of legal entropy to capture the uncertainty and fluidity of the regulatory environment in which affective educational technologies operate. Taken together, these contributions underscore that emotion-aware systems cannot be reduced to technical artefacts; they must be understood as phenomena embedded in cultural, social, and ethical contexts that shape both their potential and their risks.
3. Methodology
The study is based on a structured review of the literature, combined with a thematic analysis of the material identified. The purpose of this approach was to develop a comprehensive and conceptually robust account of how emotion-aware artificial intelligence is being explored in educational contexts, while at the same time ensuring that the process of selecting and interpreting studies was systematic and transparent.
The search drew on major academic databases, including Scopus, Web of Science, IEEE Xplore, ERIC, and Google Scholar, and covered publications from 2010 to 2025. A set of targeted keyword combinations guided the process, with terms relating to affective computing, learning analytics, digital empathy, inclusion, and neurodiversity. To capture influential contributions that might not emerge directly from database searches, additional sources were located through citation tracking of seminal works in the field.
The studies retained for analysis were those that connected artificial intelligence with affective or emotional aspects of education, either through empirical investigations, conceptual models, or ethical and policy discussions. Purely technical articles without educational relevance and non-English texts were not included. After initial screening, the material was gradually refined to a core body of work that could support a deeper analysis. A flow diagram is provided to illustrate the selection path from the initial pool of results to the final dataset.
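To make the screening step concrete, the following minimal sketch expresses the stated inclusion and exclusion criteria as an executable filter. The record fields and sample entries are hypothetical simplifications for illustration; the actual screening was performed by the researcher, not by a script.

```python
# A minimal, illustrative sketch of the screening step described above.
# The record structure and criteria fields are hypothetical simplifications.

from dataclasses import dataclass

@dataclass
class Record:
    title: str
    year: int
    language: str
    educational_focus: bool   # connects AI with affective/emotional aspects of education
    purely_technical: bool    # technical work with no educational relevance

def passes_screening(r: Record) -> bool:
    """Apply the stated inclusion/exclusion criteria to one record."""
    return (
        2010 <= r.year <= 2025          # publication window used in the search
        and r.language == "en"          # non-English texts were excluded
        and r.educational_focus         # must link AI to emotional aspects of education
        and not r.purely_technical      # purely technical articles were excluded
    )

# Example: filter an initial pool of results down to the core corpus.
pool = [
    Record("Affect-aware tutoring", 2021, "en", True, False),
    Record("Low-level sensor fusion", 2019, "en", False, True),
]
corpus = [r for r in pool if passes_screening(r)]
print(len(corpus))  # -> 1
```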
The corpus was examined through thematic analysis following established qualitative research practice. In line with Braun and Clarke's well-recognized framework (2006, 2019), this process involved a gradual progression from familiarization with the material to the generation of initial codes, the search and refinement of candidate themes, and the definition and naming of thematic categories. The analysis converged around three broad dimensions: the technological capacities of emotion-aware systems, their pedagogical and psychosocial implications, and the ethical or inclusion-related challenges they pose. These dimensions were subsequently aligned with the guiding research objectives of the paper, forming the basis for the synthesis and framework presented in the following sections.
4. Findings and Discussion
Since this paper does not include original empirical data, the findings are based on a combination of literature analysis, conceptual synthesis, and reflection on existing use cases. Three main insights emerge from this process: (1) the emotional gap in online learning, (2) the dual-use dilemma of affective AI, and (3) the need for ethically grounded and inclusive design.
4.1 The Emotional Gap in Digital Learning
Even though e-learning technologies have grown rapidly in recent years, most of them remain emotionally neutral: they are not designed to recognize or respond to how learners feel. This lack of emotional awareness can make learning feel distant or disconnected. It may also lead to frustration, mental fatigue, and even isolation (D'Mello and Graesser, 2015; Wegerif, 2020).
Emotion-aware systems are seen as a potential solution to this problem. By using tools like facial expression analysis, tone of voice, or physiological sensors, these systems try to "read" the learner's emotional state and respond accordingly. But in practice, such tools are still rare in mainstream education, mainly because of technical limitations and concerns about privacy, ethics, and pedagogy.
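To illustrate the kind of architecture such systems often rely on, the sketch below shows a late-fusion pattern in which each sensing channel (face, voice, physiology) produces its own affect estimate and the estimates are combined into one score. The channel weights and scores are hypothetical placeholders, not values from any real system.

```python
# A minimal sketch of late fusion for multimodal affect estimation.
# All channel outputs and weights are hypothetical placeholders.

CHANNEL_WEIGHTS = {"face": 0.4, "voice": 0.3, "physiology": 0.3}

def fuse_affect(channel_scores: dict[str, float]) -> float:
    """Weighted late fusion of per-channel affect estimates in [0, 1].

    Missing channels (e.g., camera off) are skipped and the remaining
    weights are renormalized, a common robustness choice.
    """
    available = {c: s for c, s in channel_scores.items() if c in CHANNEL_WEIGHTS}
    if not available:
        raise ValueError("no affect channels available")
    total_w = sum(CHANNEL_WEIGHTS[c] for c in available)
    return sum(CHANNEL_WEIGHTS[c] * s for c, s in available.items()) / total_w

# Example: the voice channel is unavailable, so face and physiology decide.
print(fuse_affect({"face": 0.7, "physiology": 0.5}))  # -> ~0.614
```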
4.2 The Dual-Use Dilemma of Affective AI
Affective AI holds great promise, but it also comes with risks. On one hand, it can help educators better understand students, support their wellbeing, and make learning more empathetic. On the other hand, it can easily be misused. Systems that collect emotional data might be used to monitor, categorize, or even manipulate students (Zuboff, 2019; Cowie et al., 2013).
There is also the risk of bias. Many affective systems are trained on emotional expressions that reflect dominant cultural norms. This means they might misinterpret or mislabel emotions that don't "fit the model", especially in the case of neurodiverse learners (Robertson and Ne'eman, 2008). In an educational setting, this can have serious consequences: students may be nudged in certain directions, or misunderstood, based on faulty assumptions about what they feel.
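One practical safeguard against this risk is to audit recognition accuracy per learner group rather than in aggregate. The sketch below illustrates such an audit on hypothetical toy data; the group names and emotion labels are placeholders, not real measurements.

```python
# A minimal sketch of a per-group accuracy audit that can surface
# the misclassification bias discussed above. Toy data only.

from collections import defaultdict

def accuracy_by_group(records: list[tuple[str, str, str]]) -> dict[str, float]:
    """records: (group, true_emotion, predicted_emotion) triples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

# Toy example: the model looks accurate overall but fails one group.
data = [
    ("group_a", "engaged", "engaged"),
    ("group_a", "frustrated", "frustrated"),
    ("group_b", "engaged", "frustrated"),
    ("group_b", "frustrated", "frustrated"),
]
print(accuracy_by_group(data))  # -> {'group_a': 1.0, 'group_b': 0.5}
```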
4.3 Towards Inclusive and Ethically Aligned Design
Designing affective AI for education means shifting the focus from simply predicting emotions to genuinely caring about how learners feel and respond. This requires systems that recognize emotional diversity, respect learner autonomy, and include safeguards grounded in bioethics (Anastasiadou et al., 2024; Seremeti, 2023).
Emotion-sensitive technologies must be transparent, respectful, and flexible, especially when used with students who are neurodivergent, emotionally vulnerable, or experiencing digital overload. They should support learners, not control them.
Ongoing policy discussions, such as the EU AI Act and UNESCO's AI ethics framework, stress the importance of emotionally intelligent systems that enhance human decision-making rather than replace it. These systems should include clear mechanisms for consent, transparency, and emotional data governance (Souravlas et al., 2021).
To synthesize the thematic findings presented above and link them explicitly to the research objectives of the study, Table 1 provides an overview of how objectives, identified themes, key insights, and indicative references are aligned.
4.4 Towards a Human-Centered Framework
Building on the discussion above, this study proposes a preliminary framework for the responsible use of affective AI in education. The framework is structured around four guiding pillars (Floridi and Cowls, 2019; Robertson and Ne'eman, 2008; Anastasiadou, 2026); an illustrative sketch of how these pillars could operate in practice follows the list:
* Emotional Transparency
Learners should always be clearly informed about what emotional data is collected and how it is being used.
* Learner Autonomy
Emotional nudging or manipulation must be avoided; learners' freedom of choice should remain intact.
* Cultural and Neurodiversity Sensitivity
Systems should respect diversity in emotional expression and avoid enforcing a single emotional model as a standard.
* Ethical Oversight
Institutions bear the responsibility to ensure affective data is collected, interpreted, and applied with accountability and fairness.
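The sketch below illustrates, under hypothetical field names, how the four pillars might be operationalized as runtime checks that gate any use of emotional data. It is an illustration of the framework's logic, not a prescribed implementation.

```python
# A minimal sketch translating the four pillars into runtime checks
# applied before any emotional data is used. Field names are hypothetical.

from dataclasses import dataclass

@dataclass
class AffectiveDataRequest:
    disclosed_to_learner: bool      # Emotional Transparency
    consent_given: bool             # Learner Autonomy
    used_for_nudging: bool          # Learner Autonomy: no manipulation
    culturally_validated: bool      # Cultural & Neurodiversity Sensitivity
    oversight_body_approved: bool   # Ethical Oversight

def is_permitted(req: AffectiveDataRequest) -> tuple[bool, list[str]]:
    """Return whether the request passes, plus the pillars it violates."""
    violations = []
    if not req.disclosed_to_learner:
        violations.append("Emotional Transparency")
    if not req.consent_given or req.used_for_nudging:
        violations.append("Learner Autonomy")
    if not req.culturally_validated:
        violations.append("Cultural and Neurodiversity Sensitivity")
    if not req.oversight_body_approved:
        violations.append("Ethical Oversight")
    return (not violations, violations)

# Example: a request that lacks institutional oversight is rejected.
ok, why = is_permitted(AffectiveDataRequest(True, True, False, True, False))
print(ok, why)  # -> False ['Ethical Oversight']
```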
This framework provides a foundation for future pilot studies, particularly in lifelong learning programs designed to promote emotional wellbeing, such as initiatives in maternal digital care and professional upskilling.
5. Ethical Declaration
This research did not involve direct experimentation with human participants. It is based solely on secondary sources, published case studies, and conceptual analysis. No personal or biometric data were collected, stored, or processed in the preparation of this paper.
All ethical considerations were guided by European and institutional frameworks on AI in education and digital wellbeing. Any future empirical applications of the proposed framework will be subject to formal ethics review and informed consent procedures.
6. Conflict of Interest Statement
The author declares no conflict of interest regarding the content, funding, or institutional affiliations of this paper. The research was conducted independently as part of an ongoing PhD project and does not reflect the views of any commercial or political entity.
7. Conclusion
Integrating affective artificial intelligence into digital education opens promising opportunities, but it also raises critical concerns. Emotion-aware systems have the potential to enhance student engagement, foster inclusion, and support emotional wellbeing. This is especially valuable for learners who may feel isolated or overwhelmed in online environments where human presence is limited (Robertson and Ne'eman, 2008; Risser and Bottoms, 2020; Whittlestone et al., 2019; Kontis et al., 2025).
At the same time, the use of emotional data in education must be approached with caution. Risks such as emotional profiling, biased interpretations, or loss of learner agency cannot be overlooked. As this study has shown, there is a clear need for frameworks that place ethics, empathy, and inclusion at the center of design.
The reflections presented here are part of an ongoing PhD project. The next step will be to pilot the proposed framework in real-world educational settings, particularly in professional training and digital care. The ultimate goal is not only to design smarter systems, but to build thoughtful systems that support, rather than replace, the emotional connection between people and learning.
AI Declaration
No generative artificial intelligence (AI) or automated tools were used in the creation of the content of this paper. Any assistance with spelling, grammar, or formatting was conducted using standard software tools, and the intellectual content is entirely the responsibility of the author.