Content area
Purpose
The development and presentation of a framework that integrates modern methods for detecting, assessing and mitigating mental health issues in the context of dynamic and adverse changes in social networks.
Design/methodology/approach
This viewpoint is based on a literature review of current advancements in the field. The use of causal discovery and causal inference methods forms the foundation for applying all the techniques included in the framework (machine learning, deep learning, explainable AI as well as large language models and generative AI). Additionally, an analysis of network effects and their influence on users’ emotional states is conducted.
Findings
The synergy of all methods used in the framework, combined with causal analysis, opens new horizons in predicting and diagnosing mental health disorders. The proposed framework demonstrates its applicability in providing additional analytics for the studied subjects (individual traits and factors that worsen mental health). It also proves its ability to identify hidden factors and processes.
Originality/value
The proposed framework offers a novel perspective on addressing mental health issues in the context of rapidly evolving digital platforms. Its flexibility allows for the adaptation of tools and methods to various scenarios and user groups. Its application can contribute to the development of more accurate algorithms that account for the impact of negative (including hidden) external factors affecting users. Furthermore, it can assist in the diagnostic process.
Introduction
In recent years, the deterioration of mental health (MH) has become a global concern. According to statistics, approximately 450 million people worldwide suffer from mental illness, with such conditions accounting for 13% of the global disease burden (Abd Rahman et al., 2020). More recent data from the United States indicates even higher rates: one in five U.S. adults experience mental illness annually, and one in six U.S. youth aged 6–17 experience a mental health disorder each year (NAMI Helpline, 2024). The World Health Organization (WHO) provides a broad definition of mental health: “Mental health is a state of mental well-being that enables people to cope with the stresses of life, realize their abilities, learn well and work well, and contribute to their community” (World Health Organisation, 2022).
Anxiety, stress, and depression represent different stages of mental illness that negatively affect a person’s health, emotions, and social interactions; detecting mental illness at an earlier phase therefore helps people achieve better health. Anxiety disorders, for example, are prevalent psychiatric conditions characterized by excessive and prolonged anxiety in response to various stimuli. The average lifetime prevalence of any anxiety disorder is approximately 16%, with a 12-month prevalence of around 11%. A global survey conducted by the WHO revealed that mental health conditions such as depression, schizophrenia, and personality disorders are over-represented in inpatient treatment settings because certain features of these disorders require hospitalization. In contrast, patients with anxiety disorders are generally underrepresented in inpatient care, as their condition rarely necessitates hospitalization. These findings suggest a significant underestimation and inadequate treatment of anxiety disorders, indicating a need for increased awareness and appropriate interventions. Meanwhile, therapists employed in mental health centers are burdened with significant administrative obligations, and necessary medical procedures can be time-consuming and expensive (Das and Gavade, 2024).
These circumstances pose challenges to the successful implementation of evidence-based practices. In light of the complexities surrounding the proficiency of adopting particular psychotherapeutic modalities, patients may experience varying levels of engagement, with some demonstrating resistance to treatment. In such scenarios, therapists must embrace flexible strategies to deliver interventions effectively. Considering the challenges with anxiety disorder management, the integration of innovative technologies such as artificial intelligence (AI) environments and platforms emerges as imperative (Das and Gavade, 2024).
In response to these needs, recent years have seen the active development of solutions based on modern technological tools. However, it is important to acknowledge that both society and the expert community occasionally raise questions about the effectiveness of such proposals. The primary issue here is the lack of robust evidence supporting the efficacy of new digital solutions (Jain and Bajaj, 2023).
This viewpoint paper aims to address this situation by presenting our understanding of the principles and mechanisms that should underpin a framework designed to enhance the reliability and effectiveness of technological methods for detecting, assessing, and mitigating mental health issues. While the framework aligns with the concepts outlined in the extensive body of existing research literature, it also introduces certain unique features, which we will highlight at the conclusion of this viewpoint. The article is structured to gradually unfold the details of the proposed framework.
Methodology
This viewpoint is based on a restricted review of recent publications (monographs, primary research, systematic reviews, and conference papers) in psychology, psychiatry, and artificial intelligence (AI), published in the last five years (2019–2024).
Framework: the first deployment stage
Online Social Networks (OSN)—digital platforms for user interaction, content sharing, and the formation of social connections in the online space (e.g. Facebook, Instagram)—are often considered a primary source of mental health (MH) issues. These platforms are a universal factor affecting nearly all internet users, but their impact is particularly pronounced on modern adolescents, for whom interaction through OSN is an integral part of daily life. When discussing the mental health of today’s adolescents, it is important to consider that they are Digital Natives who actively or passively use media daily (Prensky, 2001). Immersed in the digital environment from childhood, they experience deeper consequences of negative interactions on their mental health compared to previous generations, for whom the digital environment was not as significant.
The surge in internet use for expressing personal thoughts and beliefs opens new opportunities for the research community. It allows not only the identification but also the verification of relationships between OSN activity and users’ mental health. Cross-sectional and longitudinal studies of social network data highlight the importance of real-time models for analyzing mental health.
Ideally, our framework should be built on the foundation of OSN data. When analyzing the mental health of OSN users, the only viable option is the direct extraction of data from OSN. Such data is often referred to as Digital Trace Data (DTD). DTD contains diverse information about users and their online activities, enabling analysis that includes understanding public opinion and decision-making in various fields—politics, healthcare, economics, and more. OSN also provide a unique opportunity to detect depression through user-generated posts. Some details of this functionality will be explained later.
As we proceed, we will gradually unfold the components of the framework, explaining its core principles and mechanisms. It should also be noted that the proposed framework is not an example of an IT application, although, if necessary, a high-level architecture describing the functionality of individual components could be proposed. Given the format of this Viewpoint, we will instead refer to examples that, to some extent, align with our conceptual vision of the framework’s implementation.
The core of the proposed framework, where OSN data is processed, consists of Machine Learning (ML) algorithms. For a better understanding of ML, we recommend the books (Ghosh, 2022; Salganik, 2018), which are tailored for specialists in psychology and sociology—fields closely aligned with the subject matter of this discussion. For psychiatrists, we also recommend a collection of conceptual articles related to the topic of this viewpoint, which we will revisit later.
In brief, ML solves optimization problems to predict and classify new values based on existing data. For example, an ML model is trained on OSN user data where individuals have already been diagnosed with specific mental health issues. Training concludes when the error on the training data is minimized. Afterward, the model can predict the likelihood of mental health issues in new OSN users whose data was not analyzed during training. It is crucial that the error on new data is as low as possible. In such cases, the ML algorithm is said to generalize well, which is essential. Otherwise, if the error on new data is high, it indicates that the data source used for training the algorithm is too specific, and the diagnosis may be tied to this specificity. Such situations, leading to bias, should be avoided. It is also worth noting that we will separately mention ML and Deep Learning (DL) algorithms in the future. The former refers to so-called “classical algorithms,” while the latter is based on neural networks. The differences in how ML and DL operate are significant for this article.
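The training-and-generalization logic described above can be sketched in a few lines. The following toy example is a sketch under stated assumptions: it invents a single hypothetical feature (the rate of negative words in a user’s posts), trains a simple threshold classifier, and shows how accuracy holds on new data from the same distribution but drops on data from a shifted (biased) source.

```python
import random

random.seed(0)

def make_dataset(n, shift=0.0):
    """Synthetic 'users': the feature is a hypothetical rate of negative
    words in posts. Label 1 (at-risk) users have a higher rate on average.
    `shift` models a distribution change between platforms/cohorts."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        rate = random.gauss(0.2 + 0.3 * label + shift, 0.1)
        data.append((rate, label))
    return data

def train_threshold(train):
    """Training: pick the midpoint between class means as the decision
    threshold (a minimal stand-in for error minimization)."""
    pos = [x for x, y in train if y == 1]
    neg = [x for x, y in train if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(th, data):
    return sum((x > th) == (y == 1) for x, y in data) / len(data)

train = make_dataset(2000)
test_same = make_dataset(2000)            # same distribution as training
test_shifted = make_dataset(2000, 0.25)   # data from a too-specific source

th = train_threshold(train)
acc_train = accuracy(th, train)
acc_same = accuracy(th, test_same)
acc_shift = accuracy(th, test_shifted)
print(acc_train, acc_same, acc_shift)
```

The drop on the shifted set is exactly the failure to generalize described above: the learned threshold is tied to the specifics of the training source.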
Let us turn to examples of implementation and, most importantly, the conclusions. In complementary review articles, methods of natural language processing (NLP) based on various ML and DL algorithms are used (Chancellor and De Choudhury, 2020; Zhang et al., 2022). NLP methods allow for the proper preparation of OSN data for subsequent predictive algorithm work using ML and DL.
This approach is particularly well-demonstrated in the second review, where a Sankey diagram visualizes the connections between NLP methods, illnesses, languages, and applications. From the first review, we highlight examples of text patterns and attributes that help predict specific mental health issues. For instance, the “language features” group includes structural parameters such as post length and linguistic style; the “behavior” group includes activity and interactions with others; and the “emotion and cognition” group includes mood, sentiment, intensity of emotion, and categories of emotional speech.
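The three feature groups above can be illustrated with a minimal extraction sketch. The mini-lexicons and feature names below are hypothetical stand-ins; real systems rely on validated resources (e.g., LIWC-style dictionaries) and far richer feature sets.

```python
# Hypothetical mini-lexicons for illustration only; a production system
# would use validated sentiment/emotion resources.
NEGATIVE = {"sad", "alone", "tired", "hopeless", "worthless"}
POSITIVE = {"happy", "great", "fun", "love", "excited"}

def extract_features(posts, interactions):
    """posts: list of post strings; interactions: replies/likes per post."""
    tokens = [w.strip(".,!?").lower() for p in posts for w in p.split()]
    n_tokens = len(tokens) or 1
    neg = sum(t in NEGATIVE for t in tokens)
    pos = sum(t in POSITIVE for t in tokens)
    return {
        # "language features": structural parameters
        "avg_post_length": n_tokens / len(posts),
        # "behavior": activity and interaction with others
        "n_posts": len(posts),
        "avg_interactions": sum(interactions) / len(interactions),
        # "emotion and cognition": lexicon-based sentiment proxies
        "neg_rate": neg / n_tokens,
        "sentiment": (pos - neg) / n_tokens,
    }

feats = extract_features(
    ["I feel so alone and tired.", "Nothing matters, hopeless again."],
    [0, 1],
)
print(feats)
```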
However, the first review reveals that there are issues in evaluating construct validity to determine and predict mental health status, which permeate the research process. This, the authors argue, will inhibit reproducibility and extension of this work into practical and clinical domains (Chancellor and De Choudhury, 2020). One reason is the presence of data bias, which can impair model generalization, as mentioned earlier. This negative effect, along with others noted in the first review, can significantly reduce measurement reliability and result reproducibility. Recommendations for mitigating these effects were provided in the review.
In the study (Cao et al., 2024) the issue of data bias in detecting mental health issues in OSN was specifically analyzed. It was found that data bias is a significant concern, stemming from the overrepresentation of certain demographic groups or linguistic communities while underrepresenting others. Other types of biases affecting model reliability and generalization were linked to platform dependency (primarily Twitter) and the predominance of English-language content.
At this point, we conclude the first stage of the framework’s deployment and draw attention to one of the key issues: data quality. The article (Sáez et al., 2024) proposes the concept of Resilient AI (RAI) as a fundamental solution to this issue. The authors state that the uncertainty, variability, and biases in real-world data environments still pose significant challenges to the development of health AI, its routine clinical use, and its regulatory frameworks. Health AI should be resilient against real-world environments throughout its lifecycle, including the training and prediction phases and maintenance during production, and health AI regulations should evolve accordingly. Data quality issues, variability over time or across sites, information uncertainty, human-computer interaction, and fundamental rights assurance are among the most relevant challenges. Ultimately, the authors propose a closed-loop RAI architecture: an AI that can automatically or semi-automatically adapt and react to unprecedented, uncertain, or undesired situations in real-world environments, both during model training and in use.
To clarify the term “AI”, (Das, 2025) provides the following definition: “Artificial intelligence represents a branch of computer science that aims to create machines capable of performing tasks that typically require human intelligence”.
Framework: the second deployment stage
In the second stage of the framework’s deployment, functionality is introduced to address specific predictive tasks, which are typically performed using DL models. For example, in the study (Kannan et al., 2024), tasks include not only diagnosing mental health conditions such as schizophrenia, bipolar disorder, and depression but also predictive modeling for disease progression, with an emphasis on risk prediction models. These models estimate the likelihood of developing mental health disorders.
Additional examples of DL and advanced ML models for complex tasks could be provided. However, as noted in the previously mentioned review (Zhang et al., 2022), the evaluation of a successful model relies not only on its performance but also on its interpretability, which is significant for guiding clinicians to understand not only what has been extracted from text but also the reasoning underlying a prediction. DL-based methods achieve good performance by utilizing feature extraction and complex neural network structures for illness detection. Nevertheless, they are still treated as black boxes and fail to explain their predictions. Therefore, the explainability of deep learning models will become an important direction for future research.
While the appeal of highly predictive, complex models is undeniable, there are significant domains, such as healthcare and finance, where interpretability is a key requirement. In these fields, decision-makers often need to justify their choices based on the model’s predictions. Consequently, the tension between interpretability and predictive power remains an active area of research. In summary, while the choice of model often depends on the specific task and the desired balance between interpretability and predictive performance, it is crucial to consider the potential consequences of deploying a black-box model, especially in sensitive and regulated domains. The field of Explainable AI (XAI) aims to bridge this gap by developing techniques that make even the most complex models more understandable and trustworthy (Hsieh et al., 2024).
The concept of XAI is increasingly seen as one of the most promising approaches to overcoming a significant barrier: the uncertainty (and thus unreliability) associated with critical applications. As noted earlier, healthcare is one such domain. A specialized article states: “the paper navigates through traditional diagnostic methods, state-of-the-art data- and AI-driven research studies, and the emergence of explainable AI (XAI) models for mental healthcare. We review state-of-the-art machine learning methods, particularly those based on modern deep learning, while emphasizing the need for explainability in healthcare AI models” (Ibrahimov et al., 2024).
As the framework unfolds, we reach a critical juncture. Despite the importance of key XAI issues related to the use of terms like Interpretability and Explainability, there is still no consensus on their goals and applications (Saeed and Omlin, 2023). Moreover, it is increasingly evident that the XAI concept will likely intersect with the concept of causality. Causality is arguably among the most desired properties when constructing a model from data. In this regard, uncovering causal connections learned through a model via explanations is a fundamental hope associated with XAI (Longo et al., 2024).
In this context, we recommend the in-depth article (Boge and Mosig, 2024), which discusses these issues in detail. Overall, the article declares: “The goal is to bring in line two aspects of explanation and causation: On the one hand, explanation and causation have taken important roles as warrants of progress in the history of biomedicine, while on the other hand, they now take important, albeit not yet well understood, roles in the context of artificial intelligence”.
Causality – the connecting component of the framework
Let us now provide some clarification on causality, as we consider it the most important and connecting component of the proposed framework. As we know, ML methods aim to minimize error on new data, while causality methods identify the direction of influence between variables (Causal Discovery – CD) and estimate the magnitude of change in the outcome variable caused by the influencing variable (Causal Inference – CI). In healthcare, the difference can be illustrated by predicting the number of hospital beds needed using ML versus determining the causes of hospitalization through the combined use of CD/CI methods. An end-to-end process involving CD/CI is well described in (Geffner et al., 2024), with another detailed explanation provided by (Saxe et al., 2022). The authors of the latter state: “A fundamental premise of our analysis is that outcomes for mental disorders—like for all medical disorders—cannot improve without discovery of causal knowledge governing these outcomes. As will be shown, there is a structurally driven dearth of scientific causal knowledge on mental disorders, because the research methods conventionally employed to study etiology cannot infer causation at sufficient scale and speed. Moreover, the encoding of etiological knowledge for diagnosis (that invariably guides treatments) is precluded by the form of diagnostic nosology conventionally practiced by the field for mental disorders. These central components to the field’s guiding scientific paradigm serve to impose great limitations to progress for improving outcomes for mental disorders”. Furthermore, the authors of the above-mentioned publication propose a methodology for transitioning to causal research methods for mental health issues: they recommend three steps for the field to take, for launching processes that can lead to the establishment of a causal diagnostic nosology for mental disorders.
A recent article on psychiatry expresses similar ideas regarding the application of causal methods (Newson et al., 2024). Psychiatrists may also find value in a collection of articles where experts discuss the challenges of causality (Kalis et al., 2017).
We must also highlight the article on precision psychiatry by Chen et al. (2022), which presents methods closely aligned with our proposed framework. The following statement succinctly captures the essence of integrating causal methods: “Overlapping symptoms can be found in many mental disorders, making the diagnosis less precise or more error prone. For example, changes in sleep and energy level, often found in depression and generally measured using the PHQ-9 questionnaire, are very common across many other disorders. One goal in precision psychiatry is to fully dissect the mechanisms and causally reveal the many-to-one relationship. This can be catalyzed by rigorous measurements and quantification of neural and behavioral data relevant to mental health”. A similar idea is expressed in (Newson et al., 2024), which advocates moving away from the current approach of “the many-to-many mappings between symptoms and causes.”
ML and CI methods complement each other, and their synergy is crucial for modern research. In recent years, the practice of applying ML to causal analysis has been actively developing. Statistical methods used in econometric analysis help validate causal conclusions, while ML algorithms built on statistical methods offer advantages such as high levels of automation, the ability to handle large datasets (Digital Trace Data – DTD), a variety of algorithms and metrics, and feature selection methods (Athey and Imbens, 2019).
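The synergy can be made concrete with a classic confounding example. The sketch below assumes a single confounder Z that drives both exposure T and outcome Y: a naive regression of Y on T overestimates the causal effect, while adjusting for the confounder recovers it. This is a minimal stand-in for the ML-driven causal estimators discussed above, not a specific method from the cited works.

```python
import random

random.seed(1)

def ols(X, y):
    """Least squares via normal equations and Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[a] * r[b] for r in X) for b in range(k)] for a in range(k)]
    b = [sum(r[a] * v for r, v in zip(X, y)) for a in range(k)]
    for c in range(k):                      # forward elimination with pivoting
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            A[r] = [A[r][j] - f * A[c][j] for j in range(k)]
            b[r] -= f * b[c]
    coef = [0.0] * k
    for c in range(k - 1, -1, -1):          # back substitution
        coef[c] = (b[c] - sum(A[c][j] * coef[j] for j in range(c + 1, k))) / A[c][c]
    return coef

n = 5000
Z = [random.gauss(0, 1) for _ in range(n)]                      # confounder
T = [z + random.gauss(0, 1) for z in Z]                         # exposure
Y = [2 * t + 3 * z + random.gauss(0, 1) for t, z in zip(T, Z)]  # true effect: 2

naive = ols([[t] for t in T], Y)[0]                   # ignores Z: biased upward
adjusted = ols([[t, z] for t, z in zip(T, Z)], Y)[0]  # controls Z: close to 2
print(naive, adjusted)
```

The naive estimate absorbs the confounder’s influence; the adjusted one isolates the causal effect, which is exactly the distinction between prediction and causal estimation drawn above.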
There are differences in how specialists approach ML and CI. Sociologists often focus on interpretable causal mechanisms while neglecting predictive accuracy, and should therefore pay more attention to ML methods (Hofman et al., 2017). The article (Brand et al., 2023) demonstrates examples of applying ML to causal tasks, including the estimation of spillover effects. For more details about spillover effects, see Supplement 1. (Grosz et al., 2020) notes that psychologists tend to avoid causal conclusions in non-experimental studies, which weakens research design and limits the significance of their work. (Eronen, 2020) urges psychologists to actively adopt CD methods, as identifying causes is a key goal of their research and is essential for understanding causal relationships when analyzing the causes of mental health deterioration. (Saxe et al., 2022) likewise shows that clinical psychologists, unfortunately, are slow to adopt causal approaches.
To conclude, we provide examples of causal research using modern ML-based technologies. These examples were chosen to demonstrate the specific value of incorporating CD/CI methods into the framework. In brief, we believe the framework should reliably and effectively assist in identifying external threats to mental health, both from individual factors and processes targeting OSN users. By “processes,” we mean both deliberate actions such as insults and threats, as well as other negative factors emerging within OSN.
Example 1: problematic OSN use and mental health
The first example, from (Mojtabai, 2024), examines the relationship between frequent and problematic OSN use by adolescents and mental health deterioration. The study employs the LiNGAM approach, which allows for the identification of a broader range of causal relationships and the precise determination of their direction. The results confirm that frequent OSN use worsens mental health, a trend that has intensified in recent years.
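The intuition behind LiNGAM-style direction finding can be sketched as follows: in a linear model with non-Gaussian noise, the regression residual is independent of the regressor only in the true causal direction. The dependence proxy below (correlation of absolute values) is a deliberately crude stand-in for the independence measures used in real implementations such as the `lingam` package.

```python
import random

random.seed(2)

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def residual(x, y):
    """OLS residual of y regressed on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)
    return [b - my - beta * (a - mx) for a, b in zip(x, y)]

def dependence(x, r):
    """Crude proxy for an independence test between regressor and residual."""
    return abs(corr([abs(v) for v in x], [abs(v) for v in r]))

n = 20000
X = [random.uniform(-1, 1) for _ in range(n)]   # non-Gaussian cause
E = [random.uniform(-1, 1) for _ in range(n)]   # non-Gaussian noise
Y = [x + e for x, e in zip(X, E)]               # ground truth: X -> Y

fwd = dependence(X, residual(X, Y))  # residual is E: independent of X
bwd = dependence(Y, residual(Y, X))  # residual still entangled with Y
direction = "X->Y" if fwd < bwd else "Y->X"
print(fwd, bwd, direction)
```

In the correct direction the residual is simply the original noise, so the dependence score is near zero; in the reverse direction the residual remains entangled with the regressor, which is what lets LiNGAM identify direction where Gaussian models cannot.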
Example 2: cyberbullying detection
The second example involves the detection of cyberbullying (CB). Here, we make a brief clarification: we urge caution in using the term “cyberbullying” due to its ambiguous definition at this stage of OSN development. However, the authors of the study use this term, and we retain it, assuming it refers to deliberate, harmful, long-term, and periodic targeting of an individual. Preventive measures against CB include timely detection within OSN.
The most common text-based approach has limitations. The problem is complex and far from resolved due to its multifactorial nature: platform diversity, language variations, forms of expression, and evolving communication styles. Word meanings change, and new ones emerge. Philosophical questions also arise, such as the fine line between undesirable content and freedom of speech, as well as individual differences in perceiving aggressive content.
We highlight two approaches that incorporate causal methods, since the invariance of causal relationships across different conditions gives CI-based detection an advantage over purely ML-based CB detection.
First Approach: (Cheng et al., 2019) proposes a solution for CB detection through the identification and control of confounders. This is the first and original work combining ML and causal relationships to create a reliable classifier. The authors applied ML to study causal relationships between psychological predictors and CB detection in OSN data, using Twitter and Formspring, and eliminated confounders to identify the causal influence of psychological covariates in the CB identification process.
Second Approach: (Sheth et al., 2024a “Cross-Platform …”; Sheth et al., 2024b “Causality Guided …”) describes a cross-platform model for detecting hate speech, which trains on data from one platform and generalizes to new, unseen platforms. Since CB detection also involves text analysis, this approach is applicable. To achieve cross-platform generalization, stable causal relationships are identified that together form a causal representation. The method is based on a Variational AutoEncoder (VAE) built on neural networks.
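The core idea, that causally stable features generalize across platforms while platform-specific (spurious) ones do not, can be illustrated without a VAE. In the hypothetical sketch below, `toxic` is an invariant signal and `style` a platform artifact whose association with the label flips on an unseen platform; neither feature nor the data generator comes from the cited works.

```python
import random

random.seed(3)

def platform(n, spurious_sign):
    """Toy posts: `toxic` causally reflects harmful content on every
    platform; `style` is a platform-specific artifact whose association
    with the label flips across platforms."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        toxic = random.gauss(label, 0.5)                  # invariant signal
        style = random.gauss(spurious_sign * label, 0.5)  # platform-dependent
        data.append((toxic, style, label))
    return data

def acc(data, idx):
    """Threshold classifier on a single feature (threshold fit on platform A)."""
    return sum((row[idx] > 0.5) == (row[2] == 1) for row in data) / len(data)

train_A = platform(4000, +1)   # training platform
test_B = platform(4000, -1)    # unseen platform

acc_inv_A, acc_inv_B = acc(train_A, 0), acc(test_B, 0)
acc_sp_A, acc_sp_B = acc(train_A, 1), acc(test_B, 1)
print(acc_inv_A, acc_inv_B)    # invariant feature: holds up on B
print(acc_sp_A, acc_sp_B)      # spurious feature: collapses on B
```

A model that latches onto the stable causal signal transfers to the new platform; one that exploits the platform artifact performs worse than chance there, which is the failure mode the causal representation is designed to avoid.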
Example 3: network effects and peer influence
The third example, included in the Supplement (see Supplement 1), discusses the specifics of OSN in assessing the impact of network effects (Adhikari and Zheleva, 2024). The study was conducted under heterogeneous conditions, which is important for evaluating peer effects (defining behavioral traits at the levels of influence and susceptibility) and homophily effects as confounders. However, spillover effects were not considered in this study, despite the presence of interference.
Example 4: spillover effects and advanced ML methods
The fourth and most complex example comes from (Ma and Tresp, 2021), which evaluates spillover effects using advanced ML methods, including deep learning and graph neural networks (GNN). GNN-based approaches expand the capabilities of causal analysis for network data and the assessment of heterogeneous impacts, accounting for spillover effects.
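A minimal sketch of spillover estimation, assuming a simple ring network and a linear outcome model rather than a GNN, regresses each user’s outcome on their own treatment and the fraction of treated neighbors. The network, coefficients, and estimator are illustrative assumptions, not the method of Ma and Tresp.

```python
import random

random.seed(4)

n = 2000
T = [random.randint(0, 1) for _ in range(n)]            # own treatment
# Ring network: the neighbors of node i are i-1 and i+1.
expo = [(T[i - 1] + T[(i + 1) % n]) / 2 for i in range(n)]  # treated-neighbor share
# Outcome: direct effect 2.0 plus spillover effect 1.5 (ground truth).
Y = [1 + 2 * T[i] + 1.5 * expo[i] + random.gauss(0, 1) for i in range(n)]

def centered(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

# Two-regressor OLS (with intercept via centering) in closed form.
t, e, y = centered(T), centered(expo), centered(Y)
Stt = sum(a * a for a in t); See = sum(a * a for a in e)
Ste = sum(a * b for a, b in zip(t, e))
Sty = sum(a * b for a, b in zip(t, y)); Sey = sum(a * b for a, b in zip(e, y))
det = Stt * See - Ste ** 2
direct = (See * Sty - Ste * Sey) / det   # estimate of the direct effect
spill = (Stt * Sey - Ste * Sty) / det    # estimate of the spillover effect
print(direct, spill)
```

Even this linear toy separates a user’s own effect from interference by neighbors; GNN-based estimators generalize the same decomposition to heterogeneous effects and arbitrary graph structure.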
In the Supplement (see Supplement 1) we explain how causal methods can assess the degree of negative impact on individuals' mental health under conditions of significant nonlinear amplification of this impact. Additional complexity arises from the nature of such nonlinearity, which is associated with so-called “network effects,” where the transmission of influence occurs avalanche-like due to interactions among all participants—a characteristic feature of OSN. In our view, we have been living under such conditions for the past 10–12 years. However, validated methodologies that can assess such impacts have emerged only recently.
At present, with the prospect of further intensification of the situation, all OSN users may be exposed to artificial actors created through the development of technologies such as Large Language Models (LLMs) and Generative AI (GenAI).
Using the framework in the context of large language models & generative AI
In recent years, the terms Generative AI (GenAI) and Large Language Models (LLMs) have become widely recognized, even among individuals outside the IT field. This trend began with the phenomenon of ChatGPT and quickly spread to other products. The frequent releases by OpenAI and other companies have impressed users with their ease of use and utility, leading to the rapid adoption of GenAI in business and sparking concerns about the future of various professions. Discussions on these aspects have become widespread. Additionally, the potential of LLMs as assistants in the treatment of patients with mental health (MH) issues is being actively explored (Nguyen et al., 2024). While skepticism persists, research in this area continues, driven by the evident shortage of human resources.
It is useful to clarify the terms being used. In the book (Das, 2025) the following definitions are provided: “LLMs are a category of foundation models trained on immense amounts of data making them capable of understanding and generating natural language and other types of content to perform a wide range of tasks. GenAI enables users to quickly generate new content based on a variety of inputs. Inputs and outputs to these models can include text, images, sounds, animation, 3D models, or other types of data”.
Earlier, we discussed the example of detecting cyberbullying (CB) and hate speech using ML and CD/CI methods. Now, let us consider the approach described in (Das, 2025). The author writes:
When it comes to the creation of an Anti-Cyberbullying tool, LLMs will play a huge role, as it is the direct language of the perpetrator and the victim which will need to be carefully analyzed.
This approach is similar to the one presented in (Dmonte et al., 2024), which is based on the direct analysis of text using LLMs. Since LLMs are trained on vast amounts of data and understand conversational context, they have a high likelihood of identifying offensive statements.
However, (Dmonte et al., 2024) notes that achieving full generalization—where a trained model performs equally well on new data obtained under different conditions—remains elusive. We previously discussed the assumption that causal relationships are invariant, which could help improve the generalization of ML models. A model based on causal principles can identify platform-dependent features in the context of hate speech detection.
In comparing the two approaches, each has its advantages. The LLM-based approach (Das, 2025; Schoeler et al., 2018) is more straightforward, assuming that over time, the necessary level of generalization for recognizing CB and offensive messages will be achieved, accounting for changes in language and meaning. On the other hand, the approach discussed in (Cheng et al., 2019; Sheth et al., 2024a, b) seeks a deeper understanding of the causal structure of messages. Researchers look for indirect indicators of this structure, relying on the invariance of causal relationships across different environments.
In recent years, the ability of LLMs to determine causal relationships has been actively studied, resulting in numerous articles with opposing conclusions—ranging from confidence in finding a universal tool for building causal models to skepticism about the causal capabilities of LLMs. We align with the cautious optimism expressed in one of the most cited works, which is based on extensive research and comparisons of various LLM versions (Kıcıman et al., 2024). The authors argue that LLMs provide a new approach to using statistical algorithms for constructing causal relationship graphs (Directed Acyclic Graphs – DAGs, representing unidirectional causal structures). This approach emphasizes metadata associated with variables rather than their values.
Typically, experts in relevant fields construct DAGs based on their knowledge and the context of the problem, a process that is complex and limits the use of such methods. However, research has shown that LLMs can fill gaps in domain knowledge that previously could only be addressed by humans. Unlike CD algorithms, which rely on variable values, LLMs can infer DAGs by reasoning based on metadata, such as variable names and context expressed in natural language. Thus, LLMs generating DAGs based on knowledge surpass modern standard CD algorithms, particularly in observational studies where obtaining a reliable causal structure is challenging.
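The metadata-driven workflow can be sketched with a stubbed oracle standing in for the LLM. The variable names and hard-coded answers below are purely illustrative assumptions; a real system would issue actual LLM prompts over the variable metadata and validate the result, for example by checking acyclicity as done here.

```python
def oracle(a, b):
    """Stub standing in for an LLM prompt such as:
    'Does <a> cause <b>, does <b> cause <a>, or neither?'
    The answers below are hard-coded assumptions for illustration."""
    knowledge = {
        ("cyberbullying exposure", "anxiety symptoms"): "a->b",
        ("anxiety symptoms", "sleep quality"): "a->b",
        ("cyberbullying exposure", "sleep quality"): "a->b",
    }
    if (a, b) in knowledge:
        return knowledge[(a, b)]
    if (b, a) in knowledge:
        return "b->a" if knowledge[(b, a)] == "a->b" else "none"
    return "none"

def build_dag(variables):
    """Query every unordered pair once and collect directed edges."""
    edges = set()
    for i, a in enumerate(variables):
        for b in variables[i + 1:]:
            ans = oracle(a, b)
            if ans == "a->b":
                edges.add((a, b))
            elif ans == "b->a":
                edges.add((b, a))
    return edges

def is_acyclic(nodes, edges):
    """Kahn's algorithm: a valid DAG admits a full topological order."""
    indeg = {v: 0 for v in nodes}
    for _, dst in edges:
        indeg[dst] += 1
    queue = [v for v in nodes if indeg[v] == 0]
    seen = 0
    while queue:
        v = queue.pop()
        seen += 1
        for s, d in edges:
            if s == v:
                indeg[d] -= 1
                if indeg[d] == 0:
                    queue.append(d)
    return seen == len(nodes)

vars_ = ["cyberbullying exposure", "anxiety symptoms", "sleep quality"]
dag = build_dag(vars_)
print(dag, is_acyclic(vars_, dag))
```

Note that no variable values are used anywhere: the graph is assembled purely from reasoning over names, which is the key departure from classical CD algorithms.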
Interestingly, this approach is already being applied in practice. At BayesiaLab, the GenAI-based assistant Hellixia, powered by LLMs (such as GPT-4), can generate DAGs (referred to as “causal networks” by BayesiaLab) (Jouffe, 2024).
In conclusion, we express cautious optimism about combining the two approaches mentioned above, as LLMs can perform both the detection of harmful text and the identification of causal relationships. Research is emerging that explores transformer architectures for solving causal tasks, as LLMs are based on transformers. In this regard, the new transformer structure proposed by (Liu et al., 2024) is noteworthy. It allows the integration of a DAG describing the causal structure of a problem as an attention mask, forcing the model to consider the causal flow of information in the analyzed system.
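The idea of constraining attention with a causal graph can be sketched as follows; this is a simplified illustration of the general mechanism, not Liu et al.’s actual architecture. Position i is allowed to attend to position j only if j is i itself or a causal ancestor of i, so information flows along the DAG.

```python
def attention_mask(n, edges):
    """Boolean mask M[i][j] = True if position i may attend to position j:
    allowed when j is i itself or a causal ancestor of i in the DAG."""
    parents = {i: [] for i in range(n)}
    for src, dst in edges:
        parents[dst].append(src)

    def ancestors(i, seen=None):
        if seen is None:
            seen = set()
        for p in parents[i]:
            if p not in seen:
                seen.add(p)
                ancestors(p, seen)   # transitive closure via DFS
        return seen

    return [[j == i or j in ancestors(i) for j in range(n)] for i in range(n)]

# Hypothetical DAG over 4 variables: 0 -> 1 -> 3 and 2 -> 3.
mask = attention_mask(4, [(0, 1), (1, 3), (2, 3)])
for row in mask:
    print(["X" if m else "." for m in row])
```

Applied as an attention mask, this zeroes out attention weights that would violate the causal flow, forcing the model to respect the graph structure of the analyzed system.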
LLMs within the GenAI framework can serve as modern and flexible tools. The high autonomy of LLMs simplifies their use by specialists in various fields, reducing the need for domain programming knowledge and multiple libraries for solving ML and CI tasks. GenAI also opens new opportunities in scientific, technical, and public domains, including OSN. Unlike LLMs, which are limited to text, GenAI can generate images, audio, video, and animations, demonstrating significant progress with important implications for humanity. For example, “digital twin” technologies are already being actively used to improve cancer treatment, enabling the creation of digital replicas of patients to test various drugs and predict their effectiveness (McClure, 2024). Digital twins are seen as precursors to a new reality with numerous applications. However, potential pitfalls must be considered.
Problems with GenAI began to emerge immediately after the launch of the first version of ChatGPT, which showcased the capabilities of LLMs. In May 2023, ChatGPT became the subject of U.S. Congressional hearings (Altman, 2023), and shortly thereafter an article appeared on the influence of LLMs on voters (Chu et al., 2023). That study examined the relationship between the “media diets” of various social groups and their political preferences: LLMs can identify linguistic subtleties that elicit positive or negative responses in specific cultural contexts, and these subtleties, hidden in “algorithmic patterns,” allow language models to shape individuals' worldviews and political preferences. Earlier, we noted the potential of LLMs to analyze language and detect negative content in order to prevent CB, a task that likewise requires a deep understanding of context and linguistic nuance. (Das, 2025) notes: “GenAI has two sides: positive and negative. In terms of the former, it can be used to help filter rogue contents in any kind or type of conversation that is used in Cyberbullying. From here, warnings can also be created and sent to the parents and school educators if the child is being Cyberbullied. But in terms of the latter, it can also be used to create and launch Cyberbullying attacks”.
Two documents addressing the negative aspects of GenAI are worth mentioning: (Marchal et al., 2024; Ferrara, 2024). The first, a study conducted by Google DeepMind from January 2023 to March 2024, examines 200 cases of GenAI misuse, focusing on public opinion manipulation. The study highlights that most malicious actors exploited GenAI capabilities without needing to hack the system itself. The second article provides a detailed review and classification of all possible negative applications of GenAI and their consequences.
Currently, there are no methods capable of protecting OSN users from the numerous harmful impacts described in these documents, nor even of identifying such influences. Thus, we once again turn to the potential of the proposed framework to identify these new negative factors and processes through an integrated ML & CD/CI approach. We believe this issue warrants further study in the future.
Conclusion
Against the backdrop of rapid development and adoption of new technologies, we have demonstrated examples of deploying the framework, an integrated approach to enhancing the reliability and effectiveness of technological methods for detecting, assessing, and mitigating mental health issues amid dynamic negative changes in social networks.
The active use of methods for detecting and assessing the magnitude of causal impacts serves as the connecting link in the framework’s architecture. It should be noted that, given the limited scope of this Viewpoint article, we have primarily focused on the technological aspects of the framework, leaving aside important and relevant issues such as ethics and fairness. However, we are confident that causality, as a key to human intelligence, plays a vital role in achieving socially responsible ML, DL, AI, LLM, and GenAI algorithms.
We believe that the framework, designed to address the complex and critical tasks outlined above, should also possess extended capabilities, which we have demonstrated through examples of successful implementations from available open sources.
The cumulative novelty of the proposed solution can be summarized as follows:
- The presented framework is a cohesive and timely solution built on the broad application of causal methods, which enables synergy among all the methods and mechanisms it implements.
- The framework demonstrates the ability to flexibly respond to significant dynamics in OSN changes, as exemplified by the emergence of network effects (see Supplement 1).
- The framework can reliably and effectively assist in identifying external threats to MH posed by individual negative factors and processes that emerge in OSN and target their users, including threats acting through external covert influence.
- The following cases are of potential independent value:
  - Additional analytics on individual traits, as demonstrated by the example of calculating the negative impact of network effects on OSN users (see Supplement 1).
  - The potential to establish evidence-based relationships “between symptoms and causes” in MH diagnostics through the successful integration of causal methods.
We believe that the framework should continue to evolve in the future, taking into account new challenges in an ever-changing reality.
This Viewpoint was translated into English using ChatGPT4 (OpenAI). Both authors have reviewed, edited and approved the translation.
© Emerald Publishing Limited.
