1. Introduction
In recent years, the adoption of online social networks (OSNs) has increased significantly (e.g., Facebook alone has 1.23 billion monthly active users), and OSNs have become one of the most popular platforms for social interaction. People use OSNs to interact as well as to share personal experiences and information with their friends. Many companies use social media platforms to engage with their customers and to advertise their products/events. Due to the continuously growing popularity of OSNs, a large amount of personal big data is generated on a daily basis (for example, Twitter generates about 500 million tweets each day and around 200 billion tweets per year). These data can help improve people’s quality of life as well as benefit various companies (e.g., advertisers, application developers, recommendation companies, content creators and sellers, policy makers, and so on). However, these data encompass sensitive information about people’s social interactions, spatio-temporal activities, demographics, finances, diseases, mobility, religious/political views, etc., which needs privacy preservation to protect it from prying eyes. In recent years, privacy preservation has become more challenging due to rapid advancements in data mining and artificial intelligence tools and the availability of personal data (e.g., user profiles on OSN sites). These tools are good at finding sensitive information in large-scale data as well as at predicting sensitive information using pre-trained models. Hence, privacy preservation of user data has become one of the most urgent research problems in OSNs.
OSNs are structures that depict a set of entities (i.e., users) and the ties/relations between them [1]. OSNs are usually represented with an undirected graph, G = (V, E), where V denotes the set of users (i.e., V = {v1, v2, …, vn}) and E denotes the set of edges (i.e., E ⊆ V × V). In simple words, each v ∈ V is a real-world user of the SN, and each e ∈ E denotes a link between two users. The link between any two users, vi and vj, can be correspondence, friendship, collaboration, affiliation to a group/party, etc. In addition to the set E, each node v in G usually encompasses a set of attributes A, where A = {a1, a2, …, am}. The labels for these attributes can be age, gender, race, and zipcode; for instance, A = {age, gender, race, zipcode}. The domain of values for each attribute can be different; for example, if ai = gender, then dom(ai) = {male, female}. In addition to the non-identifying attributes, in some cases, the attribute set can contain one or two types of sensitive information (SI), denoted by S. Hence, the overall attribute set can be represented as A ∪ S, where A and S denote the basic attributes, known as quasi-identifiers (QIDs), and the SI, respectively. The graph whose nodes contain attribute information as well can be denoted as G = (V, E, A ∪ S). A conceptual overview of G is shown in Figure 1.
In Figure 1, there are nine users, labeled v1 to v9, and the number of edges is distinct for each user (e.g., one user has three edges whereas another has only one). The number of edges of any node v is also called the degree of that node, denoted as deg(v). Each node has QIDs as well as SI; for the sake of simplicity, we mark the SI with bold fonts in Figure 1. The G can be directed, undirected, weighted, labeled, unlabeled, etc., depending upon the scenario [2].
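To make the notation above concrete, the following minimal sketch models a node-attributed graph with plain Python dicts. The node labels, attribute names, and attribute values are illustrative and are not taken from Figure 1.

```python
# V: users; each node carries quasi-identifiers (QIDs) and sensitive info (SI).
# The labels and values below are made up for illustration only.
nodes = {
    "v1": {"age": 25, "gender": "M", "zipcode": "47906", "disease": "flu"},   # "disease" is SI
    "v2": {"age": 31, "gender": "F", "zipcode": "47907", "disease": "HIV"},
    "v3": {"age": 29, "gender": "F", "zipcode": "47905", "disease": "none"},
}

# E: undirected ties, stored as a set of sorted node pairs
edges = {("v1", "v2"), ("v2", "v3")}

def degree(v, edge_set):
    """Degree of node v = number of edges incident to v."""
    return sum(1 for e in edge_set if v in e)
```

For this toy graph, `degree("v2", edges)` is 2, matching the intuition that a node's degree is simply the count of its incident edges.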
In recent years, sharing G with researchers/data miners has become a routine matter for finding insights about people from G [3]. The sharing and analysis of G have a wide range of benefits for people, for example, better service/product recommendations by community-based clustering [4], information diffusion to targeted users [5], appropriate friend recommendations [6], point-of-interest recommendations [7], traffic incident analysis [8], influence spreading [9], and route recommendations [10], to name a few. The usage of an SN offers users many other benefits, such as increasing their reputation, influencing others, receiving brand offers, receiving support, and connecting with a huge community [11]. However, the flip side of OSN analysis and mining is the loss of privacy and its inherent consequences. Therefore, SN service providers and users are still struggling to find a proper balance between the social benefits and the potential privacy risks [12]. With rapid digitization, both the scale and scope of OSN privacy breaches are expanding, impacting millions of users with the loss of data or dignity. OSN service providers are constantly integrating privacy-protection tools and upgrading existing privacy settings to combat privacy issues. In this paper, we focus on privacy violations in G, and therefore, we discuss most concepts concerning G analysis, mining, and sharing.
There are two well-known, state-of-the-art approaches for privacy preservation in G publishing, namely naïve and structural anonymization [13]. In the former category, only the structure of G is published, after removing all attributes of nodes and edges (see Figure 2a). In contrast, the latter category modifies the structure of G to preserve privacy (see Figure 2b). In Figure 2b, five new edges have been introduced to anonymize G.
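The naïve approach above can be sketched in a few lines: strip all node attributes and replace user identifiers with meaningless pseudonyms, releasing only the bare structure. The toy names and graph are illustrative.

```python
def naive_anonymize(nodes, edges):
    """Naïve anonymization sketch: drop all attributes and replace user ids
    with meaningless pseudonyms; only the bare structure is released."""
    mapping = {v: f"u{i}" for i, v in enumerate(sorted(nodes))}
    anon_edges = {tuple(sorted((mapping[a], mapping[b]))) for a, b in edges}
    return set(mapping.values()), anon_edges

# Illustrative toy graph
nodes = {"alice", "bob", "carol"}
edges = {("alice", "bob"), ("bob", "carol")}
anon_nodes, anon_edges = naive_anonymize(nodes, edges)
# The released graph keeps the topology (same node/edge counts)
# but carries no identifying labels.
```

As Section 3 discusses, keeping the topology intact is exactly why this approach is vulnerable: an adversary who knows a target's degree or neighborhood can often re-identify the pseudonymized node.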
Researchers have noticed that naïve anonymization may not be sufficient to provide strong resilience against privacy breaches [14]. In contrast, structural anonymization provides a relatively stronger defense against privacy breaches by modifying the graph structure. Recently, many solutions have been proposed to preserve the privacy of SN users in G publishing [15,16,17,18,19,20,21]. These solutions have been used to preserve either nodes’ or edges’ privacy in the release of G. Recently, differential privacy-based solutions have also been proposed that alter G’s structure for privacy preservation [22]. Despite the success of these solutions, privacy issues can arise in multiple forms, and robust solutions are needed to overcome all of them.
The existing surveys related to G publishing cover many important aspects such as graph anonymization/de-anonymization techniques [23], graph anonymization operations [24], brief taxonomies of privacy models [25], anonymity frameworks for graph data [26], privacy/utility evaluation metrics employed by the anonymization mechanisms [27,28], random G modeling [29], and data mining from G [30]. Although we fully affirm the contributions of these surveys, these studies have the following five research gaps: (i) most surveys provided very limited knowledge about most aspects concerning OSN privacy, especially privacy-preserving G publishing and the critical information that needs privacy preservation in G; (ii) the critical and experimental details of most studies have not been reported thoroughly; (iii) the discussion/analysis of privacy preservation in application-specific scenarios of OSNs has not been investigated in detail; (iv) a high-level taxonomy of the privacy-preserving approaches (i.e., common + AI) used in publishing G for data mining and analytical purposes has not been provided; (v) the significance of artificial intelligence (AI) techniques in the context of graph publishing, as well as in jeopardizing users’ privacy, has not been comprehensively highlighted. Table 1 presents a detailed comparison of this review paper with existing SOTA surveys and review papers. Many current surveys either present a single type of anonymization or lack the basic examples/knowledge that an early-stage researcher needs to enter or understand this domain. Furthermore, the role of AI in the privacy of OSNs has not been thoroughly explained. This review article resolves the aforementioned limitations of the existing reviews and provides sufficient knowledge of privacy (or privacy disclosures) in OSNs from a broader perspective.
The major contributions of this article to OSNs’ privacy are summarized as follows.
- It presents a comprehensive analysis and the findings of the state-of-the-art (SOTA) solutions that have been proposed to address privacy issues in OSNs;
- It provides a high-level taxonomy of common as well as AI-based privacy-preserving approaches that have been proposed to combat the privacy issues in PPGP, along with recent studies in each category;
- It discusses many practical solutions that have been proposed for privacy preservation in application-specific scenarios (e.g., information diffusion, community clustering, influence analysis, friend recommendation, etc.) of OSNs that remained unexplored in the recent literature;
- It discusses various generic and AI-based de-anonymization techniques that have been developed to infer SI from the anonymized graph (a.k.a. the de-anonymization of G);
- It discusses in detail the technical challenges of preserving privacy in OSNs in recent times and promising opportunities for future research;
- The novelty of our work is to provide a systematic analysis of SOTA methods focusing on OSNs from two aspects (i.e., defense → anonymity and attack → de-anonymity), identify novel application scenarios of OSNs and corresponding privacy-preserving approaches, analyze the role of AI in the privacy preservation (or privacy breaches) of OSNs, identify the major user-privacy challenges that OSN service providers are facing or will likely face in the coming years, and list potential research avenues for researchers. Lastly, this work makes a timely contribution towards responsible data science (https://redasci.org/, accessed on 2 May 2022) amid the rapid technical advancements in OSN services/sites in recent years.
The rest of this article is organized as follows. Section 2 presents detailed background of privacy concepts in OSNs. Section 3 discusses the taxonomy of privacy-preserving graph publishing (PPGP) approaches and SOTA developments in each category. Section 4 presents major developments regarding privacy preservation in application-specific scenarios of SNs. Section 5 highlights the major developments in de-anonymization of G. Section 6 discusses the challenges to the privacy protection of G. Section 7 lists various research directions that are vital to combat privacy issues in OSNs. Section 8 discusses the limitations of this review. Finally, we conclude this paper in Section 9. Figure 3 presents the high-level structure of this review article.
2. Background
In this section, we provide a detailed analysis of the threats to the validity of this review article and a comprehensive background concerning OSN privacy. In the next subsection, we state the search strings and databases that were explored to find the related work for this review article.
2.1. Threats to Validity
For this review article, we included the SOTA studies that (1) deal with the privacy preservation of OSN data, (2) target privacy preservation in application-specific scenarios of OSNs, (3) discuss the significance of AI techniques in privacy protection/disclosure in OSNs, (4) deal with jeopardizing OSNs’ users’ privacy by either linking, statistical matching, or background knowledge attacks, and (5) discuss the performance evaluation in terms of privacy, utility, or computational complexity. We have used multiple phrases and combinations of strings such as ‘privacy preservation in OSNs’ and ‘social graph publishing and anonymization’ to extract the peer-reviewed articles from journals, renowned conference proceedings, recently published book chapters, and technical reports. We have mainly targeted eight databases, namely, IEEE Xplore, ScienceDirect, SpringerLink, Scopus, ACM Digital Library, MDPI, Hindawi, and Web of Science. We took advantage of the Google Scholar search engine for forward and backward searches. We have focused on papers that have been highly cited by recent studies and are highly technical with improved results. In total, 3500 documents were retrieved, and 1700 duplicated studies were removed. The titles and the abstracts’ contents were carefully screened to identify potential papers. The full texts of the 1780 studies were assessed to find the highly relevant papers to be included in this review. We have excluded the articles that discussed (1) a defense solution other than anonymization, privacy preservation of stored OSNs data, and content-based privacy attacks in OSNs, (2) cyber attacks (e.g., denial of service) on OSN data, (3) threats to OSNs’ security and privacy breaches in interactions. With a backward and forward search, 16 more closely related studies were retrieved. In total, 291 studies were finally selected for data extraction purposes. 
Figure 4 depicts the process of the SOTA article selection for this paper that was adopted from the previous SOTA reviews [31,32]. The findings of previous surveys and review articles were also used in addition to these included papers to provide distinctive features and a comprehensive performance evaluation.
2.2. Classification in the Scope of Privacy
The nature of privacy is highly subjective, meaning its perception varies from person to person. In simple words, privacy is all about hiding SI from the prying eyes [33]. The scope of privacy mainly falls into four categories [34], as demonstrated in Figure 5. This work belongs to the first category, which is about handling (i.e., aggregation, storage, analysis, anonymization, distribution, etc.) person-specific data.
Person-specific data can be modeled in a variety of styles such as tables, graphs, matrices, traces, logs, images, multimedia, and hybrids [35]. However, in our work, we consider personal data represented in graph form, G = (V, E).
2.3. Operation Utility of Social Graphs
The publishing of G is vital for many analytical and data-mining purposes. The operational utility, U, offered by G can be one of the three cases listed in Equation (1):
U ∈ {structure of G only, attribute information of V only, both structure and attributes}    (1)
In the first case/level, the SN service provider releases only the structure of G, and all profile information is usually hidden. In the second case, the structure of G is hidden, but profile information is shared with the researcher. In the last level, both the structure of G and the attribute information of V are shared for analytical purposes [36].
2.4. Key Privacy Issues That Can Occur in Publishing G with Analysts
As stated earlier, publishing G is vital for many purposes, but it can introduce multiple privacy issues. We summarize the five key privacy issues occurring in G publishing in Figure 6.
2.5. Anonymization Operation That Can Be Applied to G
Many techniques, such as anonymization, masking, encryption, obfuscation, watermarking, zero knowledge proofs, and pseudonymization, are employed to preserve users’ privacy in G. Due to conceptual simplicity, anonymization techniques have been widely used to preserve users’ privacy [37,38]. Various anonymity operations are performed in order to provide sufficient resilience against contemporary privacy issues. Table 2 presents the concise description of anonymization operations that are applied to G for privacy protection. The strength/weakness and complexity of each operation vary depending upon the size and nature of G.
2.6. Important Aspects of Privacy Preservation in OSN Data
OSNs contain a treasure trove of information, and sufficient care is needed to preserve the privacy of most parts of these data [39]. In Figure 7, we present the different types of data collected/processed in SNs and the pieces of information that require privacy preservation when publishing G. As shown in Figure 7a, there are three main types of data in SNs: identity, social, and content. All these types need privacy preservation from prying eyes. For example, if a user of an SN has just one friend, and that friend is known to be an HIV doctor, the SI of the SN user can be inferred (e.g., he/she might have contracted HIV) [40]. Similarly, profession information can lead to income disclosures. In G, anonymization methods need to preserve most parts of the SI shown in Figure 7b. SN data need extra care regarding privacy preservation because most of the data can be available to adversaries as background knowledge (BK) [41,42].
In Figure 8, we summarize important BK that can be within the adversaries’ access, and which can lead to privacy breaches. Apart from the BK and other auxiliary types of data, a new risk known as interdependent privacy risk (IPR) has become one of the major privacy threats in SNs in recent times [43,44,45]. Furthermore, inference attacks [46], ML-based attacks [47], privacy leakage in health SNs [48], profile cloning [49], profile matching [50], community-based threats [51], cross-SN user matching [52], and privacy concerns in different OSN services (e.g., recommendation systems [53], query evaluations [54], and sentiment analyses [55]) have made privacy preservation in OSNs an active area of research.
2.7. Role of Artificial Intelligence in the Domain of OSNs
Recently, AI has been extensively used in the information privacy domain for multiple purposes. It has been used to safeguard personal data from prying eyes as well as to de-anonymize large graphs encompassing the data of a substantial number of users [56,57,58]. We summarize the role of AI from three different perspectives as follows:
AI as a protection tool: AI can be used to preserve the privacy of SN data;
AI as an attack tool: AI can be used to compromise a user’s privacy from SN data;
AI as a protection target: privacy concepts can be used to secure AI systems.
Recently, many graph-type–specific, attack-specific, domain-specific, application-specific, and AI-powered anonymization techniques have been developed. In the rest of this paper, we summarize the major developments concerning G privacy.
3. Overview of Privacy-Preserving Graph Publishing (PPGP) and Taxonomy of PPGP Approaches Used for Online Social Networks
In this section, we discuss the life cycle of PPGP, the basic concepts of G anonymization, and the taxonomy of PPGP approaches. We arrange and discuss the concepts in three different subsections. In the next subsection, we discuss the life cycle of PPGP.
3.1. Overview of the Life Cycle of PPGP
The typical PPGP process contains six steps. A concise description of all the steps is given below. In Step A, appropriate data are collected from relevant users. Examples of data collection are account-opening procedures on an SN website, or a check-up at a diagnostic center. In both of these scenarios, some basic information (i.e., QIDs) as well as SI is obtained. In this research, we assume that G has already been collected by the SN service providers (a.k.a. data owners). In Step B, the collected G data are stored in safe repositories/databases for further analysis. Storage can be in graph form (e.g., SN data) or tabular form (e.g., hospital/bank data) depending upon the nature of the data. Due to recent advancements in technology, storage capacity has become significantly large, and fine-grained data can now be stored for utilization in multiple contexts. In Step C, pre-processing is applied to the collected G. During this step, the G is cleaned (outliers, isolated nodes, and redundant nodes/edges are removed). In Step D, the cleaned G from Step C is anonymized. During data anonymization, the structure of the original G is modified to preserve privacy, yielding the anonymized G, which is useful for analysis. The anonymization can be performed in multiple ways (e.g., DP methods, constrained methods, etc.). In Step E, the anonymized G is published for SN analysis and data-mining purposes. In the final step, analytics techniques are applied to the published G in order to extract useful information from it. The extracted information can be used for hypothesis generation/verification or for policy making.
A conceptual overview of the anonymization techniques applied to raw data given in graph form for PPGP is demonstrated in Figure 9. As shown in Figure 9, both privacy risks and graph utility are higher at the beginning. Anonymization is applied to G to strike a balance between utility and privacy [59]. The anonymization approaches usually modify the structure of the G in such a way that both privacy and utility are preserved. In the next subsection, we present a conceptual overview of anonymizing G along with an example.
3.2. The Basic Concepts of G Anonymization
Basically, the anonymization approaches change the structure of G into a new graph, G′. The size (number of nodes and edges) of G′ may or may not be the same as that of G. Hereafter, we refer to G as the original graph and to G′ as the anonymized graph. In Figure 10, we demonstrate an overview of G anonymization with examples. The anonymized graph given in Figure 10 satisfies k-degree anonymity with k = 2, i.e., for every node, there is at least one other node with the same degree. Thus far, many graph anonymization approaches have been developed for sharing graphs with researchers/analysts [60,61,62].
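The k-degree anonymity property mentioned above (every degree value shared by at least k nodes) can be checked mechanically. The following sketch uses an illustrative toy adjacency list, not the graph of Figure 10.

```python
from collections import Counter

def is_k_degree_anonymous(adj, k):
    """A graph is k-degree anonymous if every degree value that occurs
    is shared by at least k nodes, so degree alone cannot single a user out."""
    degree_counts = Counter(len(neigh) for neigh in adj.values())
    return all(count >= k for count in degree_counts.values())

# Toy undirected graph given as adjacency sets (illustrative)
adj = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c", "e", "f"},
    "e": {"d", "f"},
    "f": {"d", "e"},
}
# degrees: a=2, b=2, c=3, d=3, e=2, f=2 → degree 2 occurs 4 times, degree 3 twice
```

Here `is_k_degree_anonymous(adj, 2)` holds, but the same graph fails for k = 3 because only two nodes share degree 3.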
In the next subsection, we present a high-level taxonomy of anonymization approaches that have been proposed to foster graph data publishing, and discuss the SOTA approaches in each category.
3.3. High-Level Taxonomy of PPGP Approaches
There exist plenty of graph anonymization approaches in the literature. In Figure 11, we present a detailed taxonomy of PPGP approaches. The taxonomy presented in this paper is more detailed and complete than existing surveys. The rest of this subsection summarizes the major developments in terms of the SOTA approaches in each category.
3.3.1. Graph Modification Methods
The graph modification methods modify G’s structure by deleting/adding nodes or edges to protect users’ privacy (see Figure 10). In addition to the add/delete operations, in some cases, the positions of vertices or edges are switched or re-organized to preserve users’ privacy. The graph modification methods are classified into two types: unconstrained and constrained. In the former type, the structure of G is modified without strict criteria, whereas the latter type stops the anonymization once some condition/criterion is met (e.g., all nodes have achieved the same degree). Both types have been extensively studied in the literature for preserving SN users’ privacy. We demonstrate an example of unconstrained (a.k.a. random) anonymization adopted from [63] in Figure 12. In Figure 12b, two edges have been removed while two new edges have been added to produce the anonymized graph. In contrast, the graph shown in Figure 12c was obtained by the random edge switch method, in which two edges were switched. As shown in Figure 12, there is no specific constraint/condition to be met while transforming G.
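A random (unconstrained) edge perturbation of the kind shown in Figure 12 can be sketched as follows; the deletion/addition counts and the toy graph are illustrative, and the routine assumes enough non-edges exist to insert the requested additions.

```python
import random

def random_perturb(edges, num_remove, num_add, nodes, seed=0):
    """Unconstrained (random) anonymization sketch: delete num_remove existing
    edges and insert num_add brand-new random edges, with no stopping
    condition other than the requested counts."""
    rng = random.Random(seed)
    removed = rng.sample(sorted(edges), num_remove)
    current = set(edges) - set(removed)
    node_list = sorted(nodes)
    target_size = len(edges) - num_remove + num_add
    while len(current) < target_size:
        a, b = rng.sample(node_list, 2)
        e = tuple(sorted((a, b)))
        if e not in current and e not in edges:  # only genuinely new edges
            current.add(e)
    return current

# Illustrative toy graph
nodes = {"a", "b", "c", "d", "e"}
edges = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")}
perturbed = random_perturb(edges, num_remove=1, num_add=1, nodes=nodes)
```

After the call, the perturbed graph keeps three of the four original edges and contains exactly one edge that never existed in G, mirroring the Figure 12b operation.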
The constrained anonymization methods usually follow the same strategy as the unconstrained (a.k.a. random) ones, but the modification of nodes/edges is bounded by some constraints (e.g., degree, node counts, the number of edges to be switched/modified, etc.). We present an example of constrained anonymization adopted from [63] in Figure 13. The graph shown in Figure 13b is two-degree anonymous (k = 2); its degree sequence was changed by the edge modification/switching method. In this particular example, the constraint was related to the number of edges in the network. The graph shown in Figure 13c is also two-degree anonymous, and it was obtained by adding two new edges and one new vertex to G. In Figure 13c, the modification of G was bounded by the number of both edges and vertices. In constrained anonymization, the anonymization process stops upon the satisfaction of constraints related to closeness, degree, and/or the clustering coefficient, etc.
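As a hedged sketch of the constrained flavor, the following greedy routine keeps adding edges between the lowest-degree nodes until a simple degree constraint is met (every node has degree at least d, assuming d < |nodes|); real constrained methods, such as those in [63], use more careful criteria and utility-aware edge choices.

```python
def add_edges_until_min_degree(nodes, edges, d):
    """Constrained anonymization sketch: add edges between the most
    degree-deficient nodes until every node has degree >= d (the constraint).
    Contrast with the random method, which has no stopping criterion."""
    current = {tuple(sorted(e)) for e in edges}

    def deg(v):
        return sum(1 for e in current if v in e)

    while True:
        # deficient nodes, most deficient first (deterministic ordering)
        low = sorted((v for v in sorted(nodes) if deg(v) < d), key=deg)
        if not low:
            return current  # constraint satisfied: stop
        added = False
        for i in range(len(low)):
            for j in range(i + 1, len(low)):
                e = tuple(sorted((low[i], low[j])))
                if e not in current:
                    current.add(e)
                    added = True
                    break
            if added:
                break
        if not added:
            # only one deficient node left: link it to any non-neighbor
            v = low[0]
            partner = next(u for u in sorted(nodes)
                           if u != v and tuple(sorted((u, v))) not in current)
            current.add(tuple(sorted((v, partner))))

# Illustrative toy graph: four users, one existing tie
nodes = {"a", "b", "c", "d"}
result = add_edges_until_min_degree(nodes, {("a", "b")}, d=2)
```

Unlike the random perturbation above, the loop terminates exactly when the stated constraint holds, which is the defining feature of constrained methods.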
There are six main modification techniques that can be applied to anonymize SN data stored in graph form, as shown in Figure 14. The selection of a modification technique usually depends on the graph type (e.g., simple, bipartite, labeled, and uncertain graphs) and the objective of the PPGP.
We summarize and compare the SOTA G modification-based anonymity techniques in Table 3. In Table 3, we compare the existing approaches in terms of assertion(s), study nature, the type of datasets on which experiments were performed, and anonymity type. The reason for evaluating these aspects is to provide basic as well as experimental analyses that can help researchers grasp the research status conveniently. Furthermore, these analyses can help researchers make rational decisions when conducting high-level research. For example, performance evaluation on a real dataset and writing technical papers are handy takeaways from the analysis below, as most previous studies have been evaluated on real datasets and are technical in nature.
The G modification techniques have been extensively studied in the recent literature. In addition to the analysis presented in Table 3, we refer interested readers to previous surveys for more detailed analyses of the vertex/edge modification techniques [87,88,89,90].
3.3.2. Graph Generalization/Clustering Methods
The generalization/clustering-based G anonymization methods perturb the graph structure by partitioning it into different clusters/groups, to which anonymity is applied subsequently [91]. The core anonymization concepts of these methods closely resemble the syntactic methods (i.e., k-anonymity, ℓ-diversity, and t-closeness) for tabular data in terms of class/cluster formation and the generalization of nodes/edges. However, the size of the clusters and the degree of generalization are chosen in such a way that maximal information is preserved in the anonymized graph for legitimate information consumers. In Figure 15, we present an example of G anonymization using graph generalization/clustering methods. In this example, a network G with seven nodes is given as input (see Figure 15a), where each node contains the gender and age information of a user. The users are arranged into three clusters based on the similarity of their gender and age information, and all three clusters are then generalized to super-nodes, as shown in Figure 15b. The two numbers in each super-node denote the number of nodes and intra-cluster edges, respectively. The largest cluster contains three nodes and two intra-cluster edges. Due to their superior results in both utility and privacy, generalization/clustering-based G anonymization methods have been extensively investigated in the recent literature.
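The super-node construction of Figure 15 can be sketched as follows: group nodes by a generalized attribute key and release only a (node count, intra-cluster edge count) pair per cluster. The grouping rule (gender plus a coarse age band) and the toy data are illustrative assumptions, not the exact clusters of Figure 15.

```python
from collections import defaultdict

def generalize(nodes, edges, key):
    """Clustering-based generalization sketch: partition nodes by an attribute
    key, then release only super-nodes annotated with
    (number of member nodes, number of intra-cluster edges)."""
    clusters = defaultdict(set)
    for v, attrs in nodes.items():
        clusters[key(attrs)].add(v)
    summary = {}
    for label, members in clusters.items():
        intra = sum(1 for a, b in edges if a in members and b in members)
        summary[label] = (len(members), intra)
    return summary

# Illustrative toy graph with per-node QIDs
nodes = {
    "v1": {"gender": "F", "age": 24},
    "v2": {"gender": "F", "age": 27},
    "v3": {"gender": "M", "age": 51},
    "v4": {"gender": "M", "age": 55},
}
edges = {("v1", "v2"), ("v2", "v3"), ("v3", "v4")}
# group by gender and a decade-wide age band
summary = generalize(nodes, edges, key=lambda a: (a["gender"], a["age"] // 10))
```

The released `summary` reveals only aggregate cluster sizes and intra-cluster edge counts; the cross-cluster edge (v2, v3) is not attributable to any individual pair.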
We summarize and compare the famous generalization/clustering-based G anonymization techniques in Table 4.
In addition to the analysis given in Table 4, further information about clustering-based anonymization can be gained from previous surveys centering solely on these techniques [111,112,113]. Recently, clustering-based anonymization methods have gained popularity from multiple perspectives [114].
3.3.3. Privacy-Aware Graph Computing Methods
Privacy-aware graph computing methods do not perturb the structure of G; instead, they compute interesting characteristics of it that can be helpful from multiple perspectives. These methods share the results of the computation rather than the whole G. The privacy-aware graph computing methods extract useful statistics from G in such a way that the privacy of users is preserved while the computed statistics remain applicable for SN analysis and mining purposes [115]. The useful analyses provided by privacy-aware graph computing methods include: G’s density, edge counts, relationship degrees, degree distributions, the size of G, closeness, centralities, the average similarity/distance between users, the number of subgraphs, the top k users with the most connections, clustering coefficients, path lengths, the number of communities, hypergraphs, the number of users with degree d (where d can be any non-negative integer), the tie strength among people, the trust/influence of people in G, communication/interactions, etc. We highlight an example of degree computation from G in Figure 16. The distribution of degrees can be determined using the following formula: P(d) = |{v ∈ V : deg(v) = d}| / |V|, i.e., the fraction of nodes having degree d. For instance, the fraction of nodes with degree 1 is the number of degree-1 nodes divided by the total number of nodes, as shown in Figure 16. Only graph statistics (i.e., degrees) are published, thereby preserving users’ privacy. The degree information conveys important information concerning G’s structure.
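The degree-distribution release described above can be sketched in a few lines: only P(d) is published, never the graph itself. The toy adjacency list is illustrative, not the graph of Figure 16.

```python
from collections import Counter

def degree_distribution(adj):
    """Release only the degree distribution P(d) = |{v : deg(v) = d}| / |V|,
    not the graph structure itself."""
    n = len(adj)
    counts = Counter(len(neigh) for neigh in adj.values())
    return {d: c / n for d, c in counts.items()}

# Illustrative toy star-like graph
adj = {
    "a": {"b"},
    "b": {"a", "c", "d"},
    "c": {"b"},
    "d": {"b"},
}
dist = degree_distribution(adj)
# three nodes of degree 1 and one node of degree 3, so P(1) = 3/4 and P(3) = 1/4
```

An analyst receiving `dist` can study the shape of the network (e.g., hub-dominated vs. regular) without learning who is connected to whom.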
Instead of degree distributions, many other statistics (e.g., subgraph counts [116]) can be computed from G. In Figure 17, we present an example of triangle computation from G. Aside from triangles, stars can also be computed to find influential people in SNs. These statistics can be used for information diffusion/contagion purposes, marketing, collaborative filtering, opinion/preference mining and analysis, and epidemiological investigations.
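Triangle counting of the kind shown in Figure 17 can be sketched as a brute-force check over node triples; this is fine for illustration, though production systems such as the parallel algorithm of [117] are far faster on large graphs.

```python
from itertools import combinations

def count_triangles(adj):
    """Count triangles (triples of mutually connected users); only this
    aggregate is shared, not the edge list itself."""
    count = 0
    for a, b, c in combinations(sorted(adj), 3):
        if b in adj[a] and c in adj[a] and c in adj[b]:
            count += 1
    return count

# Illustrative toy graph containing exactly two triangles:
# (a, b, c) and (b, c, d)
adj = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"b", "c"},
}
```

For this toy graph, `count_triangles(adj)` returns 2.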
The key findings of the latest SOTA privacy-aware graph computing methods are summarized as follows. Shun et al. [117] developed a simple, fast, and in-memory parallel triangles computing algorithm for large-scale SN data. The proposed method requires fewer parameter tunings and is scalable. Yang et al. [118] developed a linear-algebra-based platform for computing multiple statistics from G on GPU platforms. Mazlumi et al. [119] explored the possibility of using SN analysis concepts in the IoT domain regarding path length optimization, critical nodes identification, and advancing IoT applications. Specifically, the authors used SN concepts in the IoT domain for improving multiple aspects of the IoT domain. Behera et al. [120] developed a large clique finding (and missing cliques finding) method from SN graphs. The proposed method has a number of applications such as community detection, pattern recognition, and clustering in SN analysis. Sahraoui et al. [121] applied the SN concepts in the early prevention of the COVID-19 pandemic by detecting contacts in real time. The proposed method detects the communities of people that have likely been exposed to COVID-19 in an analogous way to community detection in SN via analyzing online relationships. Rezvani et al. [122] devised a new and very fast method for detecting communities in SN by using the k-triangle computing method. Laeuchli et al. [123] developed a centrality measurement method in large-scale G. The proposed method has abilities to compute three types of centralities, such as Laplacian, eigenvector, and closeness centralities, from G. A new and low-cost subgraph counting method based on fuzzy set theory for SN data was developed by Hou et al. [124]. Nunez et al. [125] developed a privacy-aware frequent sequential patterns mining method from large-scale G with applications to recommender systems. 
Further information about privacy-aware graph computing methods can be obtained from the book chapters and reviews in [126,127]. Recently, this category of G privacy has been rigorously investigated due to the rapid developments in AI methods and tools.
3.3.4. Differential Privacy-Based Graph Anonymization Methods
Differential privacy (DP) has become a central part of the privacy domain, and it has been extensively investigated in the graph data publishing field [128,129]. DP, in the SN data privacy context, can be defined in simple words as follows. Let us say a query function f is to be evaluated on a graph G. We want a privacy-preserving algorithm A that runs on G and returns A(G) as an output/answer, where A(G) should match f(G) with only a minimal amount of noise added. Hence, the goal of DP is to make A(G) ≈ f(G) in order to preserve data utility and, at the same time, protect the privacy of all entities in G. The DP concept has been widely used in SNs for multiple purposes, such as computing statistics from G, answering analysts’ queries by perturbing the output (see Figure 18), and privacy preservation in application-specific scenarios (i.e., recommendations, community clustering, etc.).
As shown in Figure 18, DP can be achieved by injecting an appropriate amount of noise into the query answer, that is, A(G) = f(G) + Z, where Z is the noise. Adding too much Z may damage data utility, while adding too little Z may yield an insufficient privacy guarantee. Therefore, deciding the appropriate value of Z that can strike the balance well between privacy and utility is a very challenging task. Sensitivity, which denotes the largest change to the query answers caused by deleting/adding any node/edge in the G, is a key parameter to find the magnitude of the added Z.
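The sensitivity-calibrated noise described above is commonly instantiated with the Laplace mechanism. Below is a minimal sketch, assuming a numeric query f with a known global sensitivity (for an edge count, adding/removing one edge changes the answer by at most 1); the function names are ours, not from the surveyed works.

```python
import math
import random

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    """Return f(G) + Z with Z ~ Laplace(0, sensitivity / epsilon).
    A larger epsilon means a smaller noise scale: better utility,
    weaker privacy."""
    rng = rng or random.Random(7)
    scale = sensitivity / epsilon
    u = rng.random() - 0.5                      # uniform on (-0.5, 0.5)
    # Inverse-CDF sampling of the Laplace distribution
    z = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_answer + z

# Edge-count query: edge-level global sensitivity is 1.
noisy_count = laplace_mechanism(true_answer=5000, sensitivity=1.0, epsilon=0.5)
```

Note how the noise scale grows with sensitivity and shrinks with ε, directly reflecting the privacy-utility tension discussed above.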
In the DP model, an anonymization algorithm, ℑ, satisfies the ε-DP property if, for all pairs of neighboring graphs G and G′ (e.g., G′ differs from G by just one node) and for all possible output sets O ⊆ Range(ℑ):

Pr[ℑ(G) ∈ O] ≤ e^ε · Pr[ℑ(G′) ∈ O],(2)
where ε represents the privacy loss budget; its value is usually greater than 0 (i.e., ε > 0). If ε → 0, full protection can be guaranteed at the expense of utility. Determining an appropriate value for ε is very challenging. DP has been extensively used in different settings for fulfilling the expectations of data owners. Furthermore, it has been extended in multiple ways. Its new variants, such as (ε, δ)-DP, offer a better trade-off between utility and privacy. DP can be used to compute important statistics from G that can be handy in performing analytics (see Figure 19).
There are two types of DP models: local and global [130]. In the former type, noise is added to the personal data before sharing it with the curator, and the server is assumed to be untrustworthy. In the latter type, the original G is curated at some central place (i.e., the server is assumed to be trustworthy), and noise is added at the time of G's release to the analysts/parties. In Figure 20, we depict both variants of the DP model in real-world settings.
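The local model can be illustrated with randomized response, the classic local-DP primitive: each user perturbs a single bit (e.g., "do you belong to group X?") before sending it to the untrusted curator, who then corrects for the noise in aggregate. A minimal sketch under these assumptions (names are ours):

```python
import math
import random

def randomized_response(bit, epsilon, rng):
    """Local DP: each user perturbs their own 0/1 answer before sharing,
    keeping the true value with probability p = e^eps / (1 + e^eps)."""
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if rng.random() < p_keep else 1 - bit

def estimate_frequency(reports, epsilon):
    """Curator side: unbiased estimate of the true fraction of 1-answers,
    since E[observed] = f*p + (1 - f)*(1 - p)."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

rng = random.Random(0)
true_bits = [1] * 300 + [0] * 700            # true frequency = 0.30
reports = [randomized_response(b, epsilon=2.0, rng=rng) for b in true_bits]
estimate = estimate_frequency(reports, epsilon=2.0)
```

The curator never sees a true bit, yet the aggregate frequency remains recoverable, which is exactly the trade-off that distinguishes the local model from the global one.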
We summarize and compare the famous DP-based G anonymization techniques in Table 5.
Due to the rigorous privacy guarantees, DP has been widely used with diverse data formats (i.e., tables, graphs, images, texts, matrices, etc.). Detailed information about the DP model in the context of SNs can be learned from DP-specific surveys [163,164,165]. Recently, DP has been extensively used with emerging technologies, such as federated learning, to preserve privacy [166]. Furthermore, DP has been used to preserve the privacy of patients’ COVID-19 data [167]. In the coming years, the synergy and applications of DP are expected to increase drastically.
3.3.5. Artificial Intelligence-Based Graph Anonymization Methods
AI has revolutionized almost every discipline with automated decision-making abilities. In the privacy field, AI-based techniques have been widely used to either safeguard or compromise privacy. Recently, AI has been increasingly used in graph data anonymization [110]. Liu et al. [168] presented the link between machine learning and OSN privacy. The authors highlighted the significance of ML in the privacy domain, and vice versa. In Figure 21, we demonstrate the role of AI in OSNs’ privacy.
As shown in Figure 21, AI and privacy can complement each other in three different ways. The first category, which is out of the scope of this paper, concerns securing AI systems themselves by using either anonymization or DP-based techniques. This area of research (i.e., securing AI systems) is rapidly capturing researchers' interest [169]. The second category is about employing AI methods to safeguard users' privacy in publishing [68]. As shown in Figure 21, AI techniques (i.e., machine and deep learning) can assist in preserving OSN users' privacy in multiple ways. The last category is about the dark side of AI techniques in the information privacy domain. In this category, the adversary takes advantage of AI techniques in order to predict/infer the private information of individuals from G′ [170,171,172]. In recent years, the synergy between AI and OSN privacy has been extensively investigated in the literature [173]. We summarize and compare the famous AI-based G anonymization techniques used for PPGP in Table 6.
AI-based approaches have improved various critical aspects of OSN data anonymization. In the coming years, AI will be a central element in privacy-preservation solutions for most data styles because AI-based techniques are more robust than traditional anonymization solutions. Further details about AI's role in privacy domains can be learned from the previous surveys in [209,210,211]. AI-based methods are improving traditional G anonymity methods from multiple perspectives. Although AI has brought a huge revolution in the privacy domain as a defense tool, the computing complexity (CC) of some models can be very high. Shaukat et al. [212] described the CC of many famous machine learning algorithms. In general, the time complexity of any AI model depends on the nature of the data, the input size (e.g., n), the number of iterations/steps (e.g., k), and the parameters (e.g., N). For example, the complexity of a simple decision tree is O(m·n·log n) for tabular data, where n denotes the number of tuples and m denotes the number of columns. In contrast, the time complexity of a deep belief network (DBN) is O(n·k·N), where n is the number of records, k denotes the iterations, and N is the number of parameters.
3.3.6. Hybrid Graph Anonymization Methods
Hybrid G anonymization methods employ more than one anonymity operation/method while converting G into G′. For example, graph modification and clustering methods can be jointly applied to anonymize OSN data enclosed in a G form. Many SOTA hybrid G anonymization methods have been proposed to anonymize OSN data with a better balance of privacy and utility. Liu et al. [213] presented a hybrid anonymization algorithm (combining k-anonymity and randomization) for OSN data. The proposed algorithm employs the k-anonymity concept to hide the SI in natural groups/classes of OSN data and uses a randomization approach to process the residual data. The proposed algorithm is more stable and changes the G less than the k-degree anonymity and randomization algorithms. Later, k-anonymity and randomization were jointly used to lower the structural changes in the anonymization of G [214]. Mortazavi et al. [215] used both k-anonymity and ℓ-diversity concepts to anonymize OSN data. The proposed method optimizes the privacy–utility trade-off in PPGP and is more computationally efficient than previous algorithms. Liao et al. [216] used both k-degree anonymity and a genetic algorithm in order to anonymize OSN data enclosed in a G form for recommendation purposes. A hybrid algorithm based on fuzzing SI and converting users' associations into an uncertain form was given by Wang et al. [217]. Specifically, the authors define a new attack model in a G and propose an algorithm and safety parameter to safeguard against such attacks.
Qu et al. [218] proposed a hybrid method for location as well as identity privacy preservation by using a game-based Markov decision process. A new framework that optimizes the utility of G′ by employing multiple anonymization techniques was given by Wang et al. [219]. A generic and hybrid anonymization method that guarantees users' privacy and utility in OSN data was proposed by Mortazavi et al. [220]. Similarly, a low-cost G anonymization method based on k-degree anonymity and contractions (i.e., inverse operation, vertex cloning, connectivity, etc.) was proposed by Talmon et al. [221]. A contact G-based approach to anonymize OSN data was proposed by An et al. [222]. The proposed method uses a k-anonymity-based method and contact graphs with location patterns to anonymize G. Although hybrid methods yield better performance in most cases, their complexity is relatively higher than that of the individual methods. Furthermore, applying a hybrid anonymization method can severely degrade either privacy or utility in some cases. Hence, more efforts are needed to improve the technical aspects of hybrid anonymity methods as well as to determine the correct application scenarios for them.
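The two-stage pattern of these hybrid methods (a constrained modification step followed by a randomization step) can be sketched in miniature. The following is an illustrative toy, not the algorithm of [213] or [214]; all function names and heuristics are our own assumptions.

```python
import random
from collections import Counter

def hybrid_anonymize(adj, k=2, removal_fraction=0.1, seed=1):
    """Toy hybrid anonymization sketch: (1) greedily add edges until every
    degree value is shared by at least k nodes (a crude k-degree heuristic),
    then (2) randomly delete a fraction of edges (randomization step).
    Step (2) may partially undo step (1); real hybrid methods coordinate
    the two operations, which is exactly their design challenge."""
    rng = random.Random(seed)
    adj = {u: set(nbrs) for u, nbrs in adj.items()}
    # Step 1: edge additions toward k-anonymous degrees.
    changed = True
    while changed:
        changed = False
        degs = Counter(len(adj[u]) for u in adj)
        rare = [u for u in adj if degs[len(adj[u])] < k]
        for u in rare:
            candidates = [v for v in adj if v != u and v not in adj[u]]
            if candidates:
                v = rng.choice(candidates)
                adj[u].add(v)
                adj[v].add(u)
                changed = True
                break            # recompute the degree histogram
    # Step 2: randomization via edge deletions.
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    for (u, v) in rng.sample(edges, int(removal_fraction * len(edges))):
        adj[u].discard(v)
        adj[v].discard(u)
    return adj
```

Even this toy shows why hybrid methods are costly: the modification step re-scans the degree histogram after every change, and the randomization step can invalidate the guarantee the first step established.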
In summary, all anonymization methods developed for G have their own merits and demerits. For example, graph modification methods expose the G's structure, which can be helpful for analyzing the G for recommendation and marketing purposes. In contrast, clustering methods provide better privacy but suppress the G's structure, which may hinder knowledge discovery in G′ from all perspectives. Privacy-aware G computing methods ensure the strong privacy of users without degrading the utility. The DP-based methods ensure better privacy even if most parts of G are already exposed to the adversaries. However, utility is the main concern of DP-based anonymization methods. AI-based methods are good at striking the balance between utility and privacy. However, premature convergence and deciding optimal values for hyperparameters are the main challenges in AI-based anonymity methods. Hybrid anonymization methods are computationally expensive and may lead to the redundant usage of some techniques. In the current literature, clustering, DP, and AI-based methods are more popular than others. In the coming years, most studies and enhancements are expected in clustering, DP, and AI-based methods. Furthermore, some recent studies have hinted that hybrid anonymization methods are more useful in safeguarding users' privacy in dynamic settings (e.g., federated learning, collaborative learning, etc.). We compare the methods based on various factors in Table 7. Furthermore, we rate the approaches based on their protection level and future research potential. This analysis can pave the way for choosing the right privacy solutions as well as for exploring the research possibilities of these methods.
4. Major Developments in Privacy Preservation in Application-Specific Scenarios of OSNs
With the passage of time, the services of OSNs are expanding in both scale and scope. For example, OSNs enable the formation of communities of like-minded people where people can interact and share their activities/events [223]. OSNs enable information sharing at a much faster pace than any other medium by identifying and delivering information to influential people [224]. They enable friend recommendations by analyzing the demographic, spatial, and interest similarities among users in a seamless manner [225]. Moreover, OSNs enable topic modeling and event detection (i.e., earthquakes, pandemics, floods, etc.) [226]. In the coming years, OSNs are likely to play a key role in assisting mankind in multiple ways. We refer to these services (i.e., community detection, information spread/control, friend recommendations, topic modeling, events detection, etc.) of OSNs as application-specific scenarios of OSNs. We demonstrate an overview of the community detection from G in Figure 22 before presenting privacy-preserving solutions in different application-specific scenarios of OSNs.
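The community detection service depicted in Figure 22 can be illustrated with a minimal label-propagation sketch, one of several standard community detection heuristics; this is our own illustrative example, not a method from the cited works.

```python
import random
from collections import Counter

def label_propagation(adj, rounds=20, seed=3):
    """Minimal label-propagation community detection: every node starts in
    its own community and repeatedly adopts the most frequent label among
    its neighbors, so densely connected groups converge to shared labels."""
    rng = random.Random(seed)
    labels = {u: u for u in adj}
    nodes = list(adj)
    for _ in range(rounds):
        rng.shuffle(nodes)
        for u in nodes:
            if adj[u]:
                counts = Counter(labels[v] for v in adj[u])
                labels[u] = counts.most_common(1)[0][0]
    return labels

# Two triangles joined by a single bridge edge (2-3): two natural communities.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
     3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
communities = label_propagation(g)
```

Such community assignments are exactly the kind of output that the privacy-preserving techniques in Table 8 aim to compute without exposing individual users.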
In Table 8, we summarize and compare the SOTA anonymization techniques proposed for privacy preservation in application-specific scenarios of OSNs.
Apart from the famous application-specific scenarios of OSNs listed in Table 8, OSN privacy preservation has been improved by many of the latest techniques, such as federated learning [253]. Therefore, application-specific scenarios of OSNs will expand further in the coming years. Furthermore, in some cases, application scenarios of OSNs have been used to protect the privacy of OSN users. For example, Rajabzadeh et al. [254] used the community detection concept in a k-degree-based anonymization method in order to preserve the privacy of OSN users (as shown in Figure 23). The proposed method can safeguard users' privacy without degrading the utility of G′. Bourahla et al. [255] discussed privacy preservation in dynamic scenarios (i.e., sequential publishing) of OSNs. Further information concerning OSN privacy in application-specific scenarios can be learned from previous studies [256,257]. Lastly, privacy preservation in application-specific scenarios of OSNs is expected to become an emerging avenue of research in the coming years.
5. Major Developments in De-Anonymization of OSNs
Research in OSN privacy proceeds along two tracks: defense and attack. The former is concerned with privacy protection from adversaries (a.k.a. anonymization), and the latter with breaching privacy (a.k.a. de-anonymization). Recently, a substantial number of de-anonymization approaches have been proposed to compromise the privacy of OSN users. The basic goal of de-anonymization approaches is to uniquely re-identify people from G′ even when strong anonymization has been performed. De-anonymization is usually performed by exploiting the weaknesses of the anonymity methods, by linking G′ with auxiliary graphs, and/or by using background knowledge available to the adversary. In Figure 24, we demonstrate an example of how de-anonymization is performed on G′. As shown in Figure 24d, adversaries can exploit the structural correspondence between the two graphs and can infer the identity/SI of OSN users.
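The structural-matching idea behind Figure 24 can be demonstrated with a toy attack: match nodes of an "anonymized" release to an auxiliary (public) graph by a degree-based fingerprint. Real attacks such as those summarized below are far more sophisticated; this sketch and all its names are our own illustration.

```python
def degree_signature(adj, u):
    """Structural fingerprint: a node's degree plus the multiset of its
    neighbors' degrees."""
    return (len(adj[u]), tuple(sorted(len(adj[v]) for v in adj[u])))

def match_by_signature(anon_adj, aux_adj):
    """Map an anonymized pseudonym to an auxiliary identity whenever its
    structural signature is unique in the auxiliary graph (toy
    re-identification attack)."""
    aux_sigs = {}
    for u in aux_adj:
        aux_sigs.setdefault(degree_signature(aux_adj, u), []).append(u)
    mapping = {}
    for u in anon_adj:
        candidates = aux_sigs.get(degree_signature(anon_adj, u), [])
        if len(candidates) == 1:            # unique match -> re-identified
            mapping[u] = candidates[0]
    return mapping

# Auxiliary (public) graph with named users, and the "anonymized" release
# with the same structure under numeric pseudonyms.
aux = {"alice": {"bob", "carol", "dave"}, "bob": {"alice"},
       "carol": {"alice"}, "dave": {"alice", "eve"}, "eve": {"dave"}}
anon = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0, 4}, 4: {3}}
mapping = match_by_signature(anon, aux)
```

Note that the structurally symmetric users (bob and carol) cannot be distinguished and stay unmatched, while the three structurally unique users are re-identified purely from topology.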
Recently, many de-anonymization methods have been proposed, and some achieve accuracies of over 80% in correctly identifying nodes from G′ [259]. Due to rapid developments in digitization, the availability of personal information on various OSNs is rising rapidly, leading to a variety of privacy problems [260,261,262,263,264,265]. These developments indicate the ever-increasing interest of researchers in de-anonymization rather than anonymization. In Table 9, we summarize the findings of various SOTA de-anonymization approaches proposed for OSNs.
This topic (i.e., graph de-anonymization) has become a mainstream research area in OSN privacy in recent times. Many approaches have been proposed to infer identity, SI, membership, and degree information by linking G′ with graph data available at external sources. Recently, the use of AI techniques has advanced the de-anonymization area, and many approaches have been proposed for cross-OSN user matching, content-based identity linkage, link prediction, and social connection information disclosure, to name a few. We refer interested readers to previous surveys focusing solely on privacy attacks in OSNs to learn more about de-anonymizability [288,289,290]. In the coming years, more developments are expected in the graph de-anonymization area amid the rapid rise in auxiliary information as well as the maturity of AI tools.
6. Challenges of Preserving Privacy in Online Social Networks
The privacy preservation of OSNs is relatively more challenging than that of tabular data due to the richer information present in G data [291]. As stated above, OSN privacy can be compromised in various ways, and therefore, privacy preservation in OSN data is highly challenging. In Figure 25, we present a high-level overview of the challenges in OSN privacy preservation.
In Figure 25, we classify the challenges into four categories (i.e., flexible anonymity methods, privacy preservation from AI-powered attacks, incorporating privacy preferences into the design of anonymity methods, and accurately quantifying the privacy and utility levels in G′). Apart from these challenges, devising evaluation metrics for G data, evading the power of data mining tools, and resisting the linking of multiple Gs are also very challenging. These challenges can be addressed by devising innovative technologies in the future.
7. Promising Future Research Directions
Researchers are constantly devising new privacy-enhancing techniques for OSNs because the scale and scope of the privacy threats are expanding due to the higher adoption of OSNs across the globe. Privacy preservation of OSNs is more challenging compared to hospitals/banks because a lot of personal data (e.g., user profiles) has already been exposed to adversaries. There are a variety of research tracks in OSNs, for example, privacy preservation in publishing G, de-anonymization of G′, metrics for measuring privacy and utility in PPGP, privacy preservation in mining and crawling users' data from OSN sites, and privacy preservation using AI tools/methods, to name a few. In the coming years, more practical and robust techniques will be developed in each track cited above. In Figure 26, we list promising avenues for future research based on the extensive analysis of the published literature, the developed anonymization tools, the challenges in OSN privacy, and dedicated surveys. We believe that the list of research opportunities listed in Figure 26 offers a starting point for early-career researchers in the OSN privacy area. Furthermore, these research gaps require further investigation/research from the research community amid the rapid rise in OSN privacy breaches.
The development of privacy-preserving approaches that can incorporate the preferences (e.g., users can decide which item among their attributes is most sensitive and thereby needs stronger privacy, or users can specify how their data should be processed in OSN environments) of users is an important avenue for future research. The development of anonymization methods that can be tuned easily based on the type of graph is an active area of research. Devising privacy-preserving solutions that can ensure defense against well-known and executable privacy attacks (e.g., background, linking, minimality attacks, etc.) is a vibrant area of research. Recently, to optimize privacy guarantees, many AI-based techniques have been integrated with traditional anonymization methods. Therefore, exploring the opportunities of AI techniques in terms of privacy preservation in OSNs is an active area of research. The development of anonymization methods that require fewer parameters and that can be used in resource-constrained environments (e.g., cell phones, gadgets, etc.) is another important research direction. The development of hybrid privacy-preserving solutions (e.g., combining different techniques) that can overcome each other’s weaknesses is a vibrant area of research. Developing new metrics that can technically measure the privacy strength from multiple perspectives (i.e., active and passive adversary, across domains, etc.) is an important and active area of research. Developing privacy solutions that can be used in multiple scenarios for privacy protection in OSNs is another potential research direction. Devising methods that can quantify the privacy loss while mining/crawling OSNs data is a prominent area of research. In addition, devising robust de-anonymization methods is a handy direction for the future as it can accelerate development from defense perspectives. 
Optimizing the privacy–utility trade-off is a longstanding research problem in the privacy domain and requires technical solutions from the research community. Lastly, developing strong privacy-enhancing techniques that provide resilience against AI-powered attacks/tools is a very hot research area in recent times.
Apart from the research opportunities cited above, exploring the role of the federated learning paradigm in the OSN privacy area is also expected to be a vibrant area of research in the coming years [292]. Recently, a relatively new risk to individuals' privacy, named interdependent privacy (i.e., co-location and location information), has emerged [293,294]. Therefore, advanced privacy-preserving methods are imperative to address this emerging risk [295]. Recently, synthetic data-generation methods have also begun posing a threat to OSN users' privacy by creating data similar to real data [296,297]. Therefore, many privacy-preserving approaches are needed to provide resilience against these threats. Additionally, some anonymization methods that were proposed for other data styles (e.g., tabular, traces, sets, matrices, etc.) can be adopted to preserve the privacy of OSN users in PPGP. Finally, devising practical methods that can restrict user identity linkage across OSNs is also one of the hot research topics for future endeavours.
8. Limitations of This Review Article
Although this review is more systematic, comprehensive, and insightful than previous reviews, certain limitations exist concerning the number of studies and the coverage domain. For example, we could not find many studies that ensure the anonymity of multimedia data (i.e., images or text written over images) in OSNs, which is one of the hot research topics in recent times. In addition, we could not present any analysis or categorization based on the types of social graphs because the anonymization methods proposed for one type of graph cannot be directly applied to another type (e.g., PPGP approaches proposed for directed graphs cannot be straightforwardly used for undirected graphs, and vice versa). In addition, this paper does not include studies that have adopted the OSN concept for other services. For example, OSN data modeling concepts have been widely used in the COVID-19 arena for infection-spread modeling and analysis. In addition, many OSN concepts have been used for clinical data processing, modeling, and knowledge derivation. Furthermore, some of the studies we found can be simultaneously applied to multiple service scenarios (i.e., friend recommendations and friend discovery, information spread and contagion, etc.) in OSNs. This article did not highlight the AI methods in detail (e.g., workflow, parameters, time and space complexity, convergence rates, etc.) but rather focused on AI use in OSN privacy preservation (or breaches). Lastly, we mainly focused on recent studies, and we did not restrict the search to a fixed time span (e.g., the last 5 years or the last decade). However, these limitations do not significantly undermine the quality of this review and can be investigated in future reviews.
9. Conclusions and Future Work
In this paper, we have presented a systematic review of SOTA and recent anonymization techniques that combat privacy issues in OSNs. Specifically, we have classified the privacy dilemma into two categories: privacy preservation in publishing G and privacy preservation in application-specific scenarios of OSNs. We have presented an extended (i.e., common approaches + AI approaches) taxonomy of anonymization approaches concerning graph data publishing. Moreover, we have presented various representative techniques that are being developed to address privacy issues in the application-specific scenarios (i.e., community clustering, topic modeling, information diffusion, friend recommendations, etc.) of OSNs. We also described various methods that are used to infer identity or private information from published G. Lastly, we discussed various challenges related to OSN privacy and suggested promising opportunities for future research. Through an extensive analysis of the literature, we found that the privacy preservation of OSNs is a trendier topic than that of other data styles (e.g., tabular, set, logs, etc.). Many developments are stemming from both anonymization and de-anonymization perspectives. In the coming years, privacy preservation in OSNs will be more challenging, as OSNs are being adopted by an increasingly large number of people across the world. Furthermore, our reliance on OSNs is also increasing over time, leading to the exposure of more fine-grained data on OSN sites. In this article, we highlighted the latest SOTA developments concerning the privacy of OSN users. To the best of our knowledge, this is the first work that discusses OSN privacy from a broader perspective, including AI approaches, in the OSN domain. The detailed analysis presented in this article can pave the way for grasping the status of the latest research as well as for developing secure privacy-preserving methods to safeguard OSN users' privacy from prying eyes.
Most importantly, our work aligns with the recent trends toward responsible data science (i.e., preventing misuse of personal data). In the future, we intend to explore the role of the latest technologies, such as federated learning, in preserving users' privacy in OSNs. We intend to explore privacy and utility metrics that can be used to quantify the level of privacy and utility offered by anonymization methods in PPGP. Lastly, we intend to explore the role of AI in the privacy domain in heterogeneous data formats (e.g., tables, graphs, matrices, logs, traces, sets, etc.), and multiple computing paradigms such as OSNs, cloud computing, location-based systems, Internet of Things, recommender systems, telemedicine, and AI-based services.
Conceptualization, A.M. and S.O.H.; methodology, A.M. and S.K.; software, A.M. and S.K.; validation, A.M., S.K. and S.O.H.; formal analysis, A.M.; investigation, A.M.; resources, A.M. and S.O.H.; data curation, A.M.; writing—original draft preparation, A.M.; writing—review and editing, A.M., S.K. and S.O.H.; visualization, A.M. and S.K.; supervision, S.O.H.; project administration, S.O.H.; funding acquisition, A.M. and S.O.H. All authors have read and agreed to the published version of the manuscript.
Not applicable.
Data and studies that were used to support the findings of this research are included within this article.
The authors declare no conflict of interest.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figure 2. Overview of SN data anonymization by using original G given in Figure 1.
Figure 4. Flow diagram demonstrating the SOTA article selection process for this systematic review.
Figure 6. Five famous and practical privacy threats that can occur in publishing G with researchers.
Figure 7. Different types of SN data and pieces of information concerning privacy in social graphs.
Figure 8. Types of BK employed by the adversaries to jeopardize users’ privacy in a released G.
Figure 10. Overview of G anonymization by adding edges, and by adding nodes and edges, for PPGP.
Figure 11. High-level taxonomy of the anonymization techniques applied to G for PPGP.
Figure 12. Example of G anonymization by the unconstrained (a.k.a. random) anonymization method.
Figure 16. Overview of degree computation from G by privacy-aware graph computing methods.
Figure 17. Overview of triangle computations from G by privacy-aware graph computing methods.
Figure 22. Overview of community detection, (a) G with nine nodes, and (b) G with three communities.
Figure 24. Overview of G de-anonymization using auxiliary graph (adapted from [258]).
Figure 26. List of promising opportunities for future research in the area of OSN privacy preservation.
Overview and comparisons of existing surveys with our review paper.
Ref. | MM | CM | PAGCM | DPM | AIM | HM | De-Anonymization (Common) | De-Anonymization (AI) | Multiple Scenarios (Common) | Multiple Scenarios (AI) | Experimental Details
---|---|---|---|---|---|---|---|---|---|---|---
Du et al. [ ] | × | × | × | √ | × | × | × | × | × | × | ∘
Alemany et al. [ ] | √ | × | × | × | √ | × | × | × | ∘ | × | √
Shejy et al. [ ] | ∘ | ∘ | ∘ | ∘ | ∘ | ∘ | √ | √ | √ | × | ×
Majeed et al. [ ] | √ | √ | ∘ | ∘ | × | × | √ | × | ∘ | × | ×
Avinash et al. [ ] | ∘ | ∘ | × | × | × | × | √ | × | × | × | √
Ji et al. [ ] | √ | × | × | ∘ | × | √ | √ | × | √ | × | ∘
Casas et al. [ ] | √ | √ | × | × | × | × | √ | × | ∘ | × | ∘
Zhou et al. [ ] | √ | √ | × | × | × | × | √ | × | ∘ | × | ×
Wu et al. [ ] | √ | √ | × | × | × | × | × | × | × | × | ×
Praveena et al. [ ] | √ | × | × | √ | × | × | √ | × | × | × | ∘
Joshi et al. [ ] | √ | √ | ∘ | × | × | × | √ | × | √ | × | ∘
Droby et al. [ ] | √ | √ | √ | ∘ | √ | ∘ | √ | ∘ | √ | × | ×
Injadat et al. [ ] | √ | √ | × | × | √ | × | √ | ∘ | × | ∘ | ×
This review paper | √ | √ | √ | √ | √ | √ | √ | √ | √ | √ | √
Abbreviations: MM (modification methods), CM (clustering methods), PAGCM (privacy-aware graph computing methods), DPM (differential privacy-based methods), AIM (artificial intelligence-based methods), HM (hybrid methods). Key: √ ⇒ available/reported, × ⇒ not available/not reported, ∘ ⇒ partially covered.
Summary of anonymization operations that can be applied to G for privacy preservation.
Anonymity Operation | Brief Description | Examples |
---|---|---|
G modification | This operation modifies the structure of G by adding/removing vertices or edges. | k-degree anonymity |
G generalization | This operation clusters the nodes and edges of G into super nodes/edges. | Node grouping |
G obfuscation | This operation adds noise in the form of fake nodes/edges to preserve privacy. | Node-level DP
G computation | This operation computes properties from G, and releases them to analysts. | Degree, size |
G hybrid operation | More than one anonymity operation is jointly used to perturb G. | k-degree clustered G
Summary and comparison of state-of-the-art graph modification-based techniques.
Ref. | Nature of Study | Key Assertion(s) | Experimental Analysis | G Anonymity Type | Datasets Used
---|---|---|---|---|---
Wang et al. [ ] | Technical | Defense against identity disclosure | √ | Constrained | R
Casas et al. [ ] | Technical | Better utility of large-scale G | √ | Constrained | R
Ma et al. [ ] | Technical | Defense against identity disclosure | √ | Constrained | R
Roma et al. [ ] | Technical | Defense against identity and link disclosures | √ | Constrained | R
Hamideh et al. [ ] | Technical | Fewer modifications while converting G→G′ | √ | Constrained | R,S
Mauw et al. [ ] | Technical | Strong defense against identity disclosure | √ | Constrained | R,S
Yuan et al. [ ] | Conceptual | Maintains stability of G's structure | √ (limited) | Constrained | S
Majeed et al. [ ] | Technical | Protection of sensitive labels of users | √ | Constrained | R
Gangarde et al. [ ] | Technical | Protection of nodes, edges, and attributes | √ | Constrained | R
Srivatsan et al. [ ] | Technical | Lower information loss while changing G→G′ | √ | Constrained | R
Nettleton et al. [ ] | Technical | Strong privacy protection in G′ | √ | Constrained | R
Ying et al. [ ] | Theoretical | Discussion of various privacy attacks in G′ | × | Unconstrained | -
Kiabod et al. [ ] | Technical | Improved utility of G′ | √ | Unconstrained | R
Masoumzadeh et al. [ ] | Technical | Control of distortion while changing G→G′ | √ | Unconstrained | R
Ren et al. [ ] | Technical | Protection against three privacy attacks in G′ | √ | Unconstrained | R
Ninggal et al. [ ] | Technical | Significantly improves the utility of anonymized graph | √ | Unconstrained | R
Zhang et al. [ ] | Technical | Controls re-identification of users from G′ | √ | Unconstrained | R,S
Xiang et al. [ ] | Technical | Controls privacy issues in dynamic scenarios of G analysis | √ | Unconstrained | R
Zhang et al. [ ] | Theoretical | Heuristic analysis-based privacy protection in G′ | √ | Unconstrained | -
Kavianpour et al. [ ] | Technical | Privacy protection in interactions between users and third parties | √ | Unconstrained | R
Lan et al. [ ] | Technical | Effective resolution of the privacy–utility trade-off in G anonymization | √ | Unconstrained | R
Hamzehzadeh et al. [ ] | Technical | Fewer changes in structure of G during anonymization | √ | Unconstrained | R
Key: √ ⇒ available/reported, × ⇒ not available/not reported, R ⇒ real, S ⇒ synthetic, - ⇒ not used.
Summary and comparison of state-of-the-art graph clustering-based techniques.
Ref. | Nature of Study | Key Assertion (s) | Experimental Analysis | G Anonymity Type | Datasets Used |
---|---|---|---|---|---|
Siddula et al. [ | Technical | Privacy protection of nodes, edges, and attributes | √ | Clustering | R
Li et al. [ | Technical | Prevention of inference attacks in SN data | √ | Clustering | R
Gangarde et al. [ | Technical | Strong defense against revelation of users’ identities in OSNs | √ | Clustering | R
Karimi et al. [ | Technical | Privacy preservation of multiple SAs in G publishing | √ | Clustering | R
Jethava et al. [ | Technical | Strong defense against Sybil attacks in SN data | √ | Clustering | R
Li et al. [ | Technical | Strong defense against the social identity linkage problem across SNs | √ | Clustering | R, S
Kiranmayi et al. [ | Technical | Strong defense against attribute couplet attacks via factor analysis | √ | Clustering | R
Kaveri et al. [ | Conceptual | Privacy and utility preservation in SN data anonymization | × | Clustering | R
Langari et al. [ | Technical | Defense against identity, attribute, link, and similarity attacks | √ | Clustering | R
Guo et al. [ | Technical | Privacy and utility preservation in stream data handling | √ | Clustering | R
Sarah et al. [ | Conceptual | Better utility preservation in | √ | Clustering | R, S
Shakeel et al. [ | Technical | Protection against identity disclosure in SN data publishing | √ | Clustering | R
Poulin et al. [ | Technical | Protection of privacy and information loss in anonymizing G | √ | Clustering | S
Ghate et al. [ | Conceptual | Protection of privacy by restricting excessive changes in G | √ (limited) | Clustering | S
Sihag et al. [ | Conceptual | Controls heavier changes in the structure of G during anonymization | √ (limited) | Clustering | R
Yu et al. [ | Technical | Strong defense against identity disclosure by injecting false targets | √ | Clustering | R
Ros et al. [ | Technical | Strong defense against identity disclosure in large-scale graphs | √ | Clustering | R
Yazdanjue et al. [ | Technical | Improves runtime of G anonymization via greedy approaches | √ | Clustering | R
Tian et al. [ | Technical | Ensures strong privacy in crawling and mining SN graph data | √ | Clustering | R
Key: √⇒ available/reported and ×⇒ not available/not reported, R⇒ real and S⇒ synthetic.
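The clustering-based techniques above share a common skeleton: partition V into clusters of at least k nodes, then publish each cluster as a super-node together with aggregated intra- and inter-cluster edge counts, so that nodes inside a cluster become indistinguishable. A toy sketch follows; the naive consecutive-block partition and the function names are our own simplifications — the cited methods choose clusters so as to minimize information loss.

```python
def cluster_anonymize(adj, k):
    """Partition nodes into clusters of size >= k and publish only
    cluster sizes plus intra-/inter-cluster edge counts."""
    nodes = sorted(adj)
    # naive partition: consecutive blocks of k nodes; a short tail
    # is merged into the previous cluster to keep sizes >= k
    clusters = [nodes[i:i + k] for i in range(0, len(nodes), k)]
    if len(clusters) > 1 and len(clusters[-1]) < k:
        clusters[-2].extend(clusters.pop())
    label = {v: c for c, members in enumerate(clusters) for v in members}
    intra = {c: 0 for c in range(len(clusters))}
    inter = {}
    for u in adj:
        for w in adj[u]:
            if u < w:                       # count each undirected edge once
                cu, cw = label[u], label[w]
                if cu == cw:
                    intra[cu] += 1
                else:
                    pair = (min(cu, cw), max(cu, cw))
                    inter[pair] = inter.get(pair, 0) + 1
    return [len(c) for c in clusters], intra, inter
```

For a four-node path with k = 2, the published view is two super-nodes of size two, one edge inside each, and one edge between them — the per-node link structure is no longer visible.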
Summary and comparison of state-of-the-art DP-based G anonymity techniques.
Ref. | Nature of Study | Key Assertion (s) | Experimental Analysis | DP Anonymity Type | Datasets Used |
---|---|---|---|---|---|
Gao et al. [ | Technical | Better utility in | √ | Node-level | R
Gao et al. [ | Technical | Reduction in noise scale while anonymizing G | √ | Node-level | R,S
Gao et al. [ | Technical | Protection of important structures of G | √ (limited) | Node-level | R
Gao et al. [ | Technical | Preserves G’s structure using dK-1, dK-2, and dK-3 | √ | Node-level | R
Zhang et al. [ | Technical | Privacy preservation of degree sequence in | √ | Node-level | R
Zheng et al. [ | Technical | Privacy preservation at G collection time in IoTs | √ | Node-level | R
Fang et al. [ | Technical | Construction of a synthetic | √ | Node-level | S
Yin et al. [ | Technical | Graph data publishing with controlled utility loss | √ | Node-level | R
Huang et al. [ | Technical | Solves the privacy–utility trade-off in converting G → | √ | Node-level | R
Macwan et al. [ | Technical | Protection of degree distributions in answering queries | √ | Node-level | R
Zhu et al. [ | Technical | Strong privacy in | √ | Node-level | R
Huang et al. [ | Conceptual | Privacy preservation by generating a synthetic G | √ | Node-level | S
Macwan et al. [ | Theoretical | Privacy guarantees of G with anonymity and node DP | × | Node-level | -
Macwan et al. [ | Technical | Preserving higher utility in | √ | Node-level | R
Liu et al. [ | Technical | Preservation of G’s structural properties without privacy loss | √ | Node-level | R
Iftikhar et al. [ | Technical | Reduction in noise to achieve | √ | Node-level | R
Li et al. [ | Technical | Privacy preservation of edge weights in | √ | Edge-level | R
Guan et al. [ | Technical | Protection against link disclosure in | √ | Edge-level | R
Wang et al. [ | Technical | Privacy preservation of links’ attributes in | √ | Edge-level | R
Yang et al. [ | Technical | Privacy preservation of degrees of links in | √ | Edge-level | R
Wang et al. [ | Technical | Privacy protection in | √ | Edge-level | R
Wang et al. [ | Technical | Preserving the topological structure of | √ | Edge-level | R
Lv et al. [ | Technical | Preserving users’ privacy by modifying the edge structure in | √ | Edge-level | R
Wang et al. [ | Technical | Preserving users’ privacy by dividing G into subgraphs | √ | Edge-level | S
Lei et al. [ | Technical | Preserving the privacy of sensitive edge weights using the DP model | √ | Edge-level | R
Reuben et al. [ | Conceptual | Stresses the need for edge privacy preservation in | √ | Edge-level | -
Yan et al. [ | Technical | Better utility and privacy preservation in SN data | √ | Hybrid | R
Yan et al. [ | Technical | Reduces information loss in G anonymization without sacrificing privacy | √ | Hybrid | R
Qian et al. [ | Technical | Privacy preservation of social links between users via | √ | Hybrid | R
Qiuyang et al. [ | Technical | Privacy preservation based on subgraph reconstruction and the local DP model | √ | Hybrid | R
Qu et al. [ | Technical | Privacy preservation in dynamically evolving G data with better utility | √ | Hybrid | R
Iftikhar et al. [ | Technical | Privacy protection in DP-based computations for releasing G’s distributions | √ | Hybrid | R
Key: √⇒ available/reported and ×⇒ not available/not reported, R⇒ real, S⇒ synthetic, and -⇒ not used.
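Across the node- and edge-level DP techniques above, the common primitive is the Laplace mechanism: a graph statistic is released after adding noise calibrated to the query’s sensitivity under the chosen neighboring relation. The following minimal sketch (function names are ours) releases the edge count with edge-level DP; since neighboring graphs differ in exactly one edge, the sensitivity is 1 and Laplace noise of scale 1/ε suffices.

```python
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale): the difference of two i.i.d.
    exponential variates with rate 1/scale is Laplace-distributed."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_edge_count(adj, epsilon):
    """Release |E| with edge-level differential privacy: neighboring
    graphs differ in one edge, so the sensitivity of the edge-count
    query is 1 and Laplace(1/epsilon) noise suffices."""
    true_count = sum(len(nbrs) for nbrs in adj.values()) // 2
    return true_count + laplace_noise(1.0 / epsilon)
```

Node-level DP uses the same mechanism but with a much larger sensitivity (removing one node can remove up to its full degree in edges), which is exactly why the node-level entries above focus on reducing the required noise scale.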
Summary and comparison of state-of-the-art AI-based G anonymity techniques.
Ref. | Nature of Study | Key Assertion (s) | Experimental Analysis | AI Technique Used | Datasets Used |
---|---|---|---|---|---|
Bilogrevic et al. [ | Conceptual | Predicts the level of detail for each sharing decision in OSNs | √ | Logistic regression | S
Caliskan et al. [ | Technical | Predicts the SI in a G using ML and suggests how to safeguard it | √ | NB, SVM, RF | S
Yin et al. [ | Technical | Strikes a balance between privacy and utility in distributing | √ | k-means algorithm | R
Wang et al. [ | Technical | Privacy preservation of degree information in releasing | √ | k-means algorithm | R
Ju et al. [ | Technical | Strong privacy of V in | √ | k-means algorithm | R
Zheng et al. [ | Technical | Strong privacy of V in | √ | GNN algorithm | R
Paul et al. [ | Technical | Preserves the structural properties of G in the anonymization process | √ | k-means algorithm | R
Hoang et al. [ | Technical | Preserves the privacy of SN users modelled via knowledge of G | √ | k-ad algorithm | R
Hoang et al. [ | Technical | Preserves the privacy of SN users when G is subject to multiple releases | √ | CTKGA algorithm | R
Chen et al. [ | Technical | Privacy preservation of SN users when G contains outliers and categorical attributes | √ | DBSCAN clustering | R
Narula et al. [ | Technical | Privacy preservation of identity and emotion-related information in OSN data | √ | CNN algorithm | R
Zitouni et al. [ | Technical | Privacy preservation by concealing identities in image data | √ | CNN and LSTM | R
Ahmed et al. [ | Technical | Privacy preservation by concealing identities and other SI in images | √ | Neural network | R
Matheswaran et al. [ | Technical | Privacy preservation of image data in retrieval and storage in clouds | √ | Watermarking | R
Li et al. [ | Technical | Both anonymity- and utility-preserving solutions for OSN data | √ | GAN algorithm | R
Lu et al. [ | Technical | Privacy preservation by reducing the prediction accuracy of sensitive links in G | √ | VGAE and ARVGA | R
Li et al. [ | Technical | Privacy preservation using profile, graph structure, and behavioral information | √ | GCNN algorithm | R
Wanda et al. [ | Technical | Privacy preservation of vulnerable nodes in G using dynamic deep learning | √ | CNN architecture | R
Li et al. [ | Technical | Privacy preservation of users when a user’s job/education place changes over time | √ | Supervised ML | R
Bioglio et al. [ | Technical | Privacy preservation of contents on OSN platforms based on sensitivity analysis | √ | Deep NN | R
Hermansson et al. [ | Technical | Preserves better accuracy for data-mining and analytical tasks from | √ | SVM algorithm | R
Kalunge et al. [ | Technical | Preserves better utility (path length and IL) for data-mining-related tasks from | √ | SVM algorithm | R,S
Zhang et al. [ | Technical | Strong privacy preservation of users against text-based user-linkage attacks | √ | SVM algorithm | R
Halimi et al. [ | Technical | Strong privacy preservation by identifying vulnerable user profiles from G | √ | PCA algorithm | R
Kumar et al. [ | Technical | Strong privacy preservation of graph structure without degrading the utility of G | √ | PageRank algorithm | R
Kumar et al. [ | Technical | Strong privacy preservation while showing better utility in three data-mining tasks | √ | PPRA algorithm | R
Li et al. [ | Technical | Strong privacy preservation of communities in G with better usability of | √ | Pregel model | R
Chavhan et al. [ | Technical | Strong privacy preservation in | √ | DST algorithm | R
Wang et al. [ | Technical | Strong privacy preservation in | √ | Kruskal & Prim | S
Kansara et al. [ | Theoretical | Strong privacy preservation in | × | Multiple algorithms | -
Ma et al. [ | Technical | Strong privacy preservation of a user’s location while executing queries on G | √ | KNN algorithm | R
Zhang et al. [ | Technical | Minimization of privacy disclosures in | √ | Bayesian network | R
Mauw et al. [ | Technical | Strong privacy preservation in | √ | K-MATCH algorithm | R
Maag et al. [ | Technical | Strong privacy preservation against multiple attacks in publishing | √ | EDA algorithm | R
Gao et al. [ | Technical | Solves a multi-objective optimization problem in anonymizing | √ | GAN model | R
Key: √⇒ available/reported and ×⇒ not available/not reported, R⇒ real, S⇒ synthetic, and -⇒ not used.
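Several of the k-means-based entries above follow a microaggregation pattern: cluster users on their numeric attribute vectors, then replace every record with its cluster centroid so that records within a cluster become indistinguishable. A self-contained sketch follows, with a deterministic toy k-means; the initialization, names, and the use of k as the number of clusters (rather than a minimum cluster size) are our own simplifications.

```python
def kmeans(points, k, iters=20):
    """Tiny deterministic k-means: the first k points seed the centroids."""
    cents = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, cents[c])))
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                cents[c] = [sum(col) / len(members) for col in zip(*members)]
    return cents, assign

def microaggregate(points, k):
    """Replace each attribute vector with its cluster centroid."""
    cents, assign = kmeans(points, k)
    return [tuple(cents[c]) for c in assign]
```

After microaggregation, all users in a cluster share one published attribute vector, which is the basic trade the AI-based clustering methods above tune: coarser clusters give stronger anonymity but lower analytical utility.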
Comparison of anonymization methods (a.k.a. privacy-preserving solutions) used in OSNs.
Methods | Privacy | Utility | Future Research Potentials | Rating in Discipline (OSNs) |
---|---|---|---|---|
Modification methods | Acceptable | High | Medium | 3
Clustering methods | High | Acceptable | High | 3
PAGC methods | High | Low | High | 4
DP-based methods | High | Low | Very High | 4
AI-based methods | High | Acceptable | Very High | 4
Hybrid methods | High | High | Very High | 5
Abbreviations: PAGC (privacy-aware graph computing); Rating criteria: 5: very high and 1: very low.
SOTA techniques proposed for privacy preservation in OSNs’ application-specific scenarios.
Ref. | Nature of Study | Key Assertion (s) | Experimental Analysis | Application Scenario | Datasets Used |
---|---|---|---|---|---|
Zheng et al. [ | Technical | Privacy protection of sensitive link information in OSNs | √ | Community detection | R
Wang et al. [ | Technical | Controls privacy leakage to the application server using ZKPs | √ | Friend recommendations | R
Li et al. [ | Technical | A secure plugin for privacy preservation of bystanders in OSNs | √ | Content sharing | R
Yi et al. [ | Technical | Privacy protection of profiles in OSNs using multiple servers and encryption | √ | Profile matching | R
Wei et al. [ | Technical | Privacy protection of social content using | √ | Topic recommendations | R
Valliyammai et al. [ | Technical | Privacy protection of sensitive topics by detecting sensitive content | √ | Diffusion of sensitive topics | R
Casas et al. [ | Technical | Privacy protection and utility enhancement of users’ data in OSNs | √ | Analytics and mining of G | R
Gao et al. [ | Technical | Privacy protection in partitioning and mining G for analytical purposes | √ | Subgraph mining from G | R
Li et al. [ | Technical | Privacy protection of online communities in sensitive content sharing | √ | Content recommendation | R
Mazeh et al. [ | Technical | Privacy protection of online activity data and purchase histories | √ | Recommender systems | R
Yargic et al. [ | Technical | Privacy protection of users’ sensitive preferences in OSN environments | √ | Collaborative filtering | R
Bahri et al. [ | Theoretical | Privacy protection of users when OSN data reside in multiple locations | × | Decentralized services | -
Dong et al. [ | Technical | Social proximity analysis with privacy guarantees for identifying potential friends in OSNs | √ | Friend discovery | S
Liu et al. [ | Technical | Analyzes the risks to community privacy and suggests ways to hide communities in OSNs | √ | Hiding community structure | S
Guo et al. [ | Technical | Quantifies the influence of users based on attributes with privacy preservation in OSNs | √ | Influence estimation | R
Yin et al. [ | Technical | Privacy preservation in OSNs by analyzing the relationship between pairs of users | √ | Social relationships | R,S
Kukkala et al. [ | Technical | Designs a privacy-preservation protocol based on secure multi-party computation for OSNs | √ | Influential spreaders | S
Yuan et al. [ | Technical | Designs a privacy-preservation method for OSNs with restricted changes to the structure of G | √ | Node relationships | R
Gao et al. [ | Technical | Privacy preservation in OSN data by minimally removing edges/nodes from the original G | √ | Data publishing | R
Zheng et al. [ | Technical | Privacy preservation in OSNs by controlling excessive distortion of G through the DP method | √ | Mining and analytics | R
Ferrari et al. [ | Technical | Privacy preservation in OSN data by clustering and anonymizing people in G | √ | Pattern extraction | R
Aljably et al. [ | Technical | Privacy preservation of user information from OSNs utilizing restricted LDP | √ | Anomaly detection | R
Liang et al. [ | Technical | Privacy preservation of user actions in OSNs via a suboptimal estimator | √ | User action privacy | R
Shan et al. [ | Technical | Privacy preservation based on users’ privacy preferences in OSN environments | √ | Personalized privacy | R
Stokes et al. [ | Technical | Privacy preservation of OSN data based on incidence geometries and clique complexes | √ | Statistical analysis | R
Wen et al. [ | Technical | Privacy preservation of OSN data by identifying and hiding the vulnerable nodes in G | √ | Recommendation systems | R
Key: √⇒ available/reported and ×⇒ not available/not reported, R⇒ real, S⇒ synthetic, and -⇒ not used.
SOTA de-anonymization approaches proposed for breaching users’ privacy in OSNs.
Ref. | Nature of Study | Key Assertion (s) | Privacy Attack | Items Exploited | Datasets Used |
---|---|---|---|---|---|
Ji et al. [ | Technical | Privacy disclosure by exploiting attribute and | Identity disclosure | User’s attributes | R
Li et al. [ | Technical | A DNN is adopted to learn features for node matching from | Identity disclosure | Structure of | R
Jiang et al. [ | Technical | Privacy disclosure through structure and attribute similarity | Identity disclosure | Node properties | R
Sun et al. [ | Technical | Privacy disclosure through a spectrum-partitioning method | Identity disclosure | Subgraphs of | S
Qu et al. [ | Technical | FBI-based method to extract identities of real-world users | Identity disclosure | Profile, | R
Qu et al. [ | Technical | RCM-based user matching across OSNs using similarities of concepts | Identity disclosure | Salient features | R
Desai et al. [ | Technical | Semantic-knowledge-based disclosure of private user information | SI disclosure | Background knowledge | R
Hirschprung et al. [ | Technical | Identification of people through music preference data | Identity disclosure | Music interests | R
Mao et al. [ | Technical | Identification of people by thoroughly analyzing the structure of | SI disclosure | Structure of | R
Qian et al. [ | Technical | Identification of sensitive data by linking | SI disclosure | Structure of | R
Li et al. [ | Technical | NHDS-based method for revealing sensitive data of OSN users | SI disclosure | | R
Feng et al. [ | Technical | Link privacy breaches in OSNs using three types of similarity metrics | Link prediction | Structure of | R
Gulyás et al. [ | Technical | Correct re-identification of a large number of nodes using a similarity function | Re-identifying nodes | Auxiliary graphs | R
Horawala et al. [ | Technical | Correct re-identification of a large number of nodes using ML techniques | Re-identifying nodes | Node attributes | R
Wu et al. [ | Technical | Matching a large number of users via overlapping-community concepts | Re-identifying nodes | Overlapping communities | R
Zhou et al. [ | Technical | Identifies multiple accounts of the same person in different OSNs | Identity linkage | Social interactions | R
Chen et al. [ | Technical | Identifies a user by analyzing social content (i.e., text and images) | Linking users’ identities | Social contents | R
Halimi et al. [ | Technical | Identifies a user’s profiles with high probability using ML | User’s profiles | Auxiliary data | R
Tang et al. [ | Technical | Matching users to extract SI in different G using embedding vectors | Link prediction | Neighbors’ information | R
Zhou et al. [ | Technical | Correctly linking the same users across OSNs using a graph neural network | Identity linkage | Node distribution | R
Chen et al. [ | Technical | Correctly linking a user’s identity using a semi-supervised method | Identity linkage | Semantic features | R
Wang et al. [ | Technical | Correctly links user profiles across multiple OSN platforms | Profile linkage | Duplicate profiles | R
Key: R⇒ real, S⇒ synthetic, and -⇒ not used.
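Most structure-based de-anonymization attacks in the table above reduce to the same core step: compute a structural fingerprint for every node in the anonymized graph and in an auxiliary (background-knowledge) graph, then match nodes whose fingerprints are unique. The sketch below uses (own degree, sorted neighbor degrees) as the fingerprint; the attacks cited above use far richer features, so this is only a minimal illustration under our own naming.

```python
def degree_fingerprint_match(g_anon, g_aux):
    """Match nodes across two graphs by comparing the fingerprint
    (own degree, sorted multiset of neighbor degrees)."""
    def fingerprints(adj):
        return {v: (len(adj[v]),
                    tuple(sorted(len(adj[w]) for w in adj[v])))
                for v in adj}
    fa, fb = fingerprints(g_anon), fingerprints(g_aux)
    matches = {}
    for v, sig in fa.items():
        candidates = [u for u, s in fb.items() if s == sig]
        if len(candidates) == 1:   # unique fingerprint => re-identified
            matches[v] = candidates[0]
    return matches
```

Even this crude fingerprint re-identifies the center of a three-node path while leaving the symmetric endpoints ambiguous, which illustrates why anonymization methods aim to make structural fingerprints non-unique.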
References
1. Tassa, T.; Dror, J.C. Anonymization of centralized and distributed social network by sequential clustering. IEEE Trans. Knowl. Data Eng.; 2011; 25, pp. 311-324. [DOI: https://dx.doi.org/10.1109/TKDE.2011.232]
2. Peng, S.; Zhou, Y.; Cao, L.; Yu, S.; Niu, J.; Weijia, J. Influence analysis in social network: A survey. J. Netw. Comput. Appl.; 2018; 106, pp. 17-32. [DOI: https://dx.doi.org/10.1016/j.jnca.2018.01.005]
3. Safi, S.M.; Movaghar, A.; Ghorbani, M. Privacy Protection Scheme for Mobile Social Network. J. King Saud Univ. Comput. Inf. Sci.; 2022; in press
4. Nedunchezhian, P.; Mahalingam, M. The Improved Depression Recovery Motivation Recommendation System (I-DRMRS) in Online social network. Comput. Sci.; 2002; 3, pp. 1-17.
5. Dong, Y.; Tang, J.; Wu, S.; Tian, J.; Chawla, N.V.; Rao, J.; Cao, H. Link prediction and recommendation across heterogeneous social networks. Proceedings of the 2012 IEEE 12th International Conference on Data Mining; Brussels, Belgium, 10–13 December 2012; pp. 181-190.
6. Liu, H.; Zheng, C.; Li, D.; Zhang, Z.; Lin, K.; Shen, X.; Xiong, N.N.; Wang, J. Multi-perspective social recommendation method with graph representation learning. Neurocomputing; 2022; 468, pp. 469-481. [DOI: https://dx.doi.org/10.1016/j.neucom.2021.10.050]
7. Wang, X.; Liu, Y.; Zhou, X.; Wang, X.; Leng, Z. A Point-of-Interest Recommendation Method Exploiting Sequential, Category and Geographical Influence. ISPRS Int. J. Geo-Inf.; 2022; 11, 80. [DOI: https://dx.doi.org/10.3390/ijgi11020080]
8. Suat-Rojas, N.; Gutierrez-Osorio, C.; Pedraza, C. Extraction and Analysis of social network Data to Detect Traffic Accidents. Information; 2022; 13, 26. [DOI: https://dx.doi.org/10.3390/info13010026]
9. Kuikka, V.; Monsivais, D.; Kaski, K.K. Influence spreading model in analysing ego-centric social network. Phys. Stat. Mech. Its Appl.; 2022; 588, 126524. [DOI: https://dx.doi.org/10.1016/j.physa.2021.126524]
10. Liang, F.; Chen, H.; Lin, K.; Li, J.; Li, Z.; Xue, H.; Shakhov, V.; Liaqat, H.B. Route recommendation based on temporal–spatial metric. Comput. Electr. Eng.; 2022; 97, 107549. [DOI: https://dx.doi.org/10.1016/j.compeleceng.2021.107549]
11. Alemany, J.; Del Val, E.; García-Fornes, A. A Review of Privacy Decision-making Mechanisms in Online social network. ACM Comput. Surv.; 2022; 55, pp. 1-32. [DOI: https://dx.doi.org/10.1145/3494067]
12. Shejy, G. Data Privacy and Security in social network. Principles of Social Networking; Springer: Singapore, 2021; pp. 387-411.
13. Majeed, A.; Lee, S. Anonymization Techniques for Privacy Preserving Data Publishing: A Comprehensive Survey. IEEE Access; 2020; 9, pp. 8512-8545. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3045700]
14. Backstrom, L.; Dwork, C.; Kleinberg, J. Wherefore art thou R3579X? Anonymized social network, hidden patterns, and structural steganography. Proceedings of the 16th international conference on World Wide Web; Banff, AB, Canada, 8–12 May 2007; pp. 181-190.
15. Zheleva, E.; Getoor, L. Privacy in social networks: A survey. Social Network Data Analytics; Springer: Boston, MA, USA, 2011; pp. 277-306.
16. Almogbel, R.S.; Alkhalifah, A.A. User Behavior in Social Networks Toward Privacy and Trust: Literature Review. Int. J. Interact. Mob. Technol.; 2022; 16, pp. 38-51. [DOI: https://dx.doi.org/10.3991/ijim.v16i01.27763]
17. Avinash, M.; Harini, N. Privacy Preservation Using Anonymity in social network. Proceedings of the Second International Conference on Sustainable Expert Systems; Lalitpur, Nepal, 17–18 September 2021; Springer: Singapore, 2022; pp. 623-631.
18. Gao, Y.; Yi, L.; Yunchuan, S.; Cai, Z.; Ma, L.; Pustišek, M.; Hu, S. IEEE Access Special Section: Privacy Preservation for Large-Scale User Data in social network. IEEE Access; 2022; 10, pp. 4374-4379. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3036101]
19. Tahir, H.; Brézillon, P. Contextualization of Personal Data Discovery and Anonymization Tools. Intelligent Sustainable Systems; Springer: Singapore, 2022; pp. 277-285.
20. Ferreira, G.; Alves, A.; Veloso, M.; Bento, C. Identification and Classification of Routine Locations Using Anonymized Mobile Communication Data. ISPRS Int. J. Geo-Inform.; 2022; 11, 228. [DOI: https://dx.doi.org/10.3390/ijgi11040228]
21. Krishnakumar, S.K.; Maheswari, K.M.U. A Comprehensive Review on Data Anonymization Techniques for social network. Webology; 2022; 19.
22. Li, Y.; Tao, X.; Zhang, X.; Wang, M.; Wang, S. Break the Data Barriers While Keeping Privacy: A Graph Differential Privacy Method. IEEE Internet Things J.; 2022; Early Access [DOI: https://dx.doi.org/10.1109/JIOT.2022.3151348]
23. Ji, S.; Li, W.; Mittal, P.; Hu, X.; Beyah, R. SecGraph: A Uniform and Open-source Evaluation System for Graph Data Anonymization and De-anonymization. Proceedings of the 24th USENIX Security Symposium (USENIX Security 15); Washington, DC, USA, 12–14 August 2015; pp. 303-318.
24. Ni, C.; Li, S.C.; Gope, P.; Min, G. Data Anonymization Evaluation for Big Data and IoT Environment. Inf. Sci.; 2022; 605, pp. 381-392. [DOI: https://dx.doi.org/10.1016/j.ins.2022.05.040]
25. Zhou, B.; Pei, J.; Luk, W. A brief survey on anonymization techniques for privacy preserving publishing of social network data. ACM Sigkdd Explor. Newsl.; 2008; 10, pp. 12-22. [DOI: https://dx.doi.org/10.1145/1540276.1540279]
26. Wu, X.; Ying, X.; Liu, K.; Chen, L. A Survey of Privacy-Preservation of Graphs and social network. Managing and Mining Graph Data; Springer: Boston, MA, USA, 2010; pp. 421-453. [DOI: https://dx.doi.org/10.1007/978-1-4419-6045-014]
27. Praveena, A.; Smys, S. Anonymization in Social Networks: A Survey on the issues of Data Privacy in Social Network Sites. Int. J. Eng. Comput. Sci.; 2016; 5, pp. 15912-15918. [DOI: https://dx.doi.org/10.18535/ijecs/v5i3.07]
28. Joshi, P.; Kuo, C.-J. Security and privacy in online social network: A survey. Proceedings of the 2011 IEEE International Conference on Multimedia and Expo; Barcelona, Spain, 11–15 July 2011; pp. 1-6.
29. Drobyshevskiy, M.; Turdakov, D. Random graph modeling: A survey of the concepts. ACM Comput. Surv.; 2019; 52, pp. 1-36. [DOI: https://dx.doi.org/10.1145/3369782]
30. Injadat, M.; Salo, F.; Nassif, A.B. Data mining techniques in social media: A survey. Neurocomputing; 2016; 214, pp. 654-670. [DOI: https://dx.doi.org/10.1016/j.neucom.2016.06.045]
31. Shaukat, K.; Luo, S.; Varadharajan, V.; Hameed, I.A.; Xu, M. A Survey on Machine Learning Techniques for Cyber Security in the Last Decade. IEEE Access; 2020; 8, pp. 222310-222354. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3041951]
32. Shaukat, K.; Luo, S.; Varadharajan, V.; Hameed, I.A.; Chen, S.; Liu, D.; Li, J. Performance Comparison and Current Challenges of Using Machine Learning Techniques in Cybersecurity. Energies; 2020; 13, 2509. [DOI: https://dx.doi.org/10.3390/en13102509]
33. Gurses, S.; Diaz, C. Two tales of privacy in online social network. IEEE Secur. Priv.; 2013; 11, pp. 29-37. [DOI: https://dx.doi.org/10.1109/MSP.2013.47]
34. Mendes, R.; Vilela, J.P. Privacy-preserving data mining: Methods, metrics, and applications. IEEE Access; 2017; 5, pp. 10562-10582. [DOI: https://dx.doi.org/10.1109/ACCESS.2017.2706947]
35. Cunha, M.; Mendes, R.; Vilela, J.P. A survey of privacy-preserving mechanisms for heterogeneous data types. Comput. Sci. Rev.; 2021; 41, 100403. [DOI: https://dx.doi.org/10.1016/j.cosrev.2021.100403]
36. Watanabe, C.; Amagasa, T.; Liu, L. Privacy Risks and Countermeasures in Publishing and Mining Social Network Data. Proceedings of the 7th International Conference on Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom); Orlando, FL, USA, 15–18 October 2011; pp. 55-66.
37. Perikos, I.; Michael, L. A Survey on Tie Strength Estimation Methods in Online Social Networks. ICAART; 2022; 3, pp. 484-491.
38. Tian, Y.; Zhang, Z.; Xiong, J.; Chen, L.; Ma, J.; Peng, C. Achieving Graph Clustering Privacy Preservation Based on Structure Entropy in Social IoT. IEEE Internet Things J.; 2021; 9, pp. 2761-2777. [DOI: https://dx.doi.org/10.1109/JIOT.2021.3092185]
39. Pham, V.V.H.; Yu, S.; Sood, K.; Cui, L. Privacy issues in social network and analysis: A comprehensive survey. IET Netw.; 2018; 7, pp. 74-84. [DOI: https://dx.doi.org/10.1049/iet-net.2017.0137]
40. Peng, W.; Li, F.; Zou, X.; Wu, J. A Two-Stage Deanonymization Attack against Anonymized social network. IEEE Trans. Comput.; 2012; 63, pp. 290-303. [DOI: https://dx.doi.org/10.1109/TC.2012.202]
41. Chetioui, K.; Bah, B.; Alami, A.O.; Bahnasse, A. Overview of Social Engineering Attacks on social network. Procedia Comput. Sci.; 2022; 198, pp. 656-661. [DOI: https://dx.doi.org/10.1016/j.procs.2021.12.302]
42. Villalón-Huerta, A.; Ripoll-Ripoll, I.; Marco-Gisbert, H. A Taxonomy for Threat Actors’ Delivery Techniques. Appl. Sci.; 2022; 12, 3929. [DOI: https://dx.doi.org/10.3390/app12083929]
43. Olteanu, A.-M.; Huguenin, K.; Shokri, R.; Humbert, M.; Hubaux, J.-P. Quantifying Interdependent Privacy Risks with Location Data. IEEE Trans. Mob. Comput.; 2016; 16, pp. 829-842. [DOI: https://dx.doi.org/10.1109/TMC.2016.2561281]
44. Biczók, G.; Chia, P.H. Interdependent privacy: Let me share your data. Proceedings of the International Conference on Financial Cryptography and Data Security; Roseau, The Commonwealth of Dominica, 28 February–3 March 2013; pp. 338-353.
45. Alsarkal, Y.; Zhang, N.; Xu, H. Your privacy is your friend’s privacy: Examining interdependent information disclosure on online social network. Proceedings of the 51st Hawaii International Conference on System Sciences; Hilton Waikoloa Village, HI, USA, 3–6 January 2018.
46. Piao, Y.; Ye, K.; Cui, X. Privacy Inference Attack Against Users in Online Social Networks: A Literature Review. IEEE Access; 2021; 9, pp. 40417-40431. [DOI: https://dx.doi.org/10.1109/ACCESS.2021.3064208]
47. Sharad, K.; Danezis, G. An automated social graph de-anonymization technique. Proceedings of the 13th Workshop on Privacy in the Electronic Society; Scottsdale, AZ, USA, 3 November 2014; pp. 47-58.
48. Al Faresi, A.; Alazzawe, A.; Alazzawe, A. Privacy leakage in health social network. Comput. Intell.; 2013; 30, pp. 514-534. [DOI: https://dx.doi.org/10.1111/coin.12005]
49. Kharaji, Y.M.; Rizi, F.S.; Khayyambashi, M.R. A new approach for finding cloned profiles in online social network. arXiv; 2014; arXiv: 1406.7377
50. Halimi, A.; Ayday, E. Efficient Quantification of Profile Matching Risk in social network. arXiv; 2020; arXiv: 2009.03698
51. Tai, C.-H.; Yu, P.S.; Yang, D.-N.; Chen, M.-S. Structural Diversity for Resisting Community Identification in Published social network. IEEE Trans. Knowl. Data Eng.; 2013; 26, pp. 235-252. [DOI: https://dx.doi.org/10.1109/TKDE.2013.40]
52. Nurgaliev, I.; Qu, Q.; Bamakan, S.M.H.; Muzammal, M. Matching user identities across social network with limited profile data. Front. Comput. Sci.; 2020; 14, pp. 1-14. [DOI: https://dx.doi.org/10.1007/s11704-019-8235-9]
53. Jave, U.; Shaukat, K.; Hameed, I.A.; Iqbal, F.; Alam, T.M.; Luo, S. A review of content-based and context-based recommendation systems. Int. J. Emerg. Technol. Learn.; 2021; 16, pp. 274-306. [DOI: https://dx.doi.org/10.3991/ijet.v16i03.18851]
54. Shaukat, K.; Shaukat, U. Comment extraction using declarative crowdsourcing (CoEx Deco). Proceedings of the 2016 International Conference on Computing, Electronic and Electrical Engineering (ICE Cube); Quetta, Pakistan, 11–12 April 2016; pp. 74-78.
55. Shaukat, K.; Hameed, I.A.; Luo, S.; Javed, I.; Iqbal, F.; Faisal, A.; Masood, R. Domain Specific Lexicon Generation through Sentiment Analysis. Int. J. Emerg. Technol. Learn.; 2020; 15, pp. 190-204. [DOI: https://dx.doi.org/10.3991/ijet.v15i09.13109]
56. Sattikar, A.A.; Kulkarni, R.V. A role of artificial intelligence techniques in security and privacy issues of social networking. Int. J. Comput. Sci. Eng. Technol.; 2012; 2, pp. 792-806.
57. Chung, K.-C.; Chen, C.-H.; Tsai, H.-H.; Chuang, Y.-H. Social media privacy management strategies: A SEM analysis of user privacy behaviors. Comput. Commun.; 2021; 174, pp. 122-130. [DOI: https://dx.doi.org/10.1016/j.comcom.2021.04.012]
58. Seshadhri, C.; Pinar, A.; Kolda, T.G. Wedge sampling for computing clustering coefficients and triangle counts on large graphs. Stat. Anal. Data Mining: Asa Data Sci. J.; 2014; 7, pp. 294-307. [DOI: https://dx.doi.org/10.1002/sam.11224]
59. Skarkala, M.; Gritzalis, S.; Mitrou, L.; Toivonen, H.; Moen, P. Privacy preservation by k-anonymization of weighted social network. Proceedings of the 2012 IEEE/ACM International Conference on Advances in social network Analysis and Mining; Istanbul, Turkey, 26–29 August 2012; pp. 423-428.
60. Ding, X.; Wang, C.; Choo, K.-K.R.; Jin, H. A Novel Privacy Preserving Framework for Large Scale Graph Data Publishing. IEEE Trans. Knowl. Data Eng.; 2019; 33, pp. 331-343. [DOI: https://dx.doi.org/10.1109/TKDE.2019.2931903]
61. Zhang, H.; Li, X.; Xu, J.; Xu, L. Graph Matching Based Privacy-Preserving Scheme in social network. Proceedings of the International Symposium on Security and Privacy in Social Network and Big Data; Fuzhou, China, 19–21 November 2021; Springer: Singapore, 2021; pp. 110-118.
62. Salas, J.; Domingo-Ferrer, J. Some basics on privacy techniques, anonymization and their big data challenges. Math. Comput. Sci.; 2018; 12, pp. 263-274. [DOI: https://dx.doi.org/10.1007/s11786-018-0344-6]
63. Casas-Roma, J.; Herrera-Joancomartí, J.; Torra, V. A survey of graph-modification techniques for privacy-preserving on networks. Artif. Intell. Rev.; 2016; 47, pp. 341-366. [DOI: https://dx.doi.org/10.1007/s10462-016-9484-8]
64. Casas-Roma, J. An evaluation of vertex and edge modification techniques for privacy-preserving on graphs. J. Ambient Intell. Humaniz. Comput.; 2019; 15, pp. 1-17. [DOI: https://dx.doi.org/10.1007/s12652-019-01363-6]
65. Wang, Y.; Zheng, B. Preserving privacy in social network against connection fingerprint attacks. Proceedings of the 31st International Conference on Data Engineering; Seoul, Korea, 25–26 November 2015; pp. 54-65.
66. Casas-Roma, J.; Herrera-Joancomartí, J.; Torra, V. k-Degree anonymity and edge selection: Improving data utility in large networks. Knowl. Inf. Syst.; 2016; 50, pp. 447-474. [DOI: https://dx.doi.org/10.1007/s10115-016-0947-7]
67. Ma, T.; Zhang, Y.; Cao, J.; Shen, J.; Tang, M.; Tian, Y.; Al-Dhelaan, A.; Al-Rodhaan, M. KDVEM: A k-degree anonymity with vertex and edge modification algorithm. Computing; 2015; 97, pp. 1165-1184.
68. Casas-Roma, J.; Salas, J.; Malliaros, F.D.; Vazirgiannis, M. k-Degree anonymity on directed networks. Knowl. Inf. Syst.; 2018; 61, pp. 1743-1768. [DOI: https://dx.doi.org/10.1007/s10115-018-1251-5]
69. Erfani, S.H.; Mortazavi, R. A Novel Graph-modification Technique for User Privacy-preserving on Social Networks. J. Telecommun. Inf. Technol.; 2019; [DOI: https://dx.doi.org/10.26636/jtit.2019.134319]
70. Mauw, S.; Ramírez-Cruz, Y.; Trujillo-Rasua, R. Conditional adjacency anonymity in social graphs under active attacks. Knowl. Inf. Syst.; 2018; 61, pp. 485-511. [DOI: https://dx.doi.org/10.1007/s10115-018-1283-x]
71. Yuan, J.; Ou, Y.; Gu, G. An improved privacy protection method based on k-degree anonymity in social network. Proceedings of the 2019 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA); Dalian, China, 29–31 March 2019; pp. 416-420.
72. Majeed, A.; Lee, S. Attribute susceptibility and entropy based data anonymization to improve users community privacy and utility in publishing data. Appl. Intell.; 2020; 50, pp. 2555-2574. [DOI: https://dx.doi.org/10.1007/s10489-020-01656-w]
73. Gangarde, R.; Sharma, A.; Pawar, A.; Joshi, R.; Gonge, S. Privacy Preservation in Online Social Networks Using Multiple-Graph-Properties-Based Clustering to Ensure k-Anonymity, l-Diversity, and t-Closeness. Electronics; 2021; 10, 2877. [DOI: https://dx.doi.org/10.3390/electronics10222877]
74. Srivatsan, S.; Maheswari, N. Privacy Preservation in Social Network Data using Evolutionary Model. Mater. Today Proc.; 2022; in press [DOI: https://dx.doi.org/10.1016/j.matpr.2022.03.251]
75. Nettleton, D.F.; Salas, J. A data driven anonymization system for information rich online social network graphs. Expert Syst. Appl.; 2016; 55, pp. 87-105. [DOI: https://dx.doi.org/10.1016/j.eswa.2016.02.004]
76. Ying, X.; Pan, K.; Wu, X.; Guo, L. Comparisons of randomization and k-degree anonymization schemes for privacy preserving social network publishing. Proceedings of the 3rd Workshop on Social Network Mining and Analysis; Paris, France, 28 June 2009; pp. 1-10.
77. Kiabod, M.; Dehkordi, M.N.; Barekatain, B. A Fast Graph Modification Method for Social Network Anonymization. Expert Syst. Appl.; 2021; 180, 115148. [DOI: https://dx.doi.org/10.1016/j.eswa.2021.115148]
78. Masoumzadeh, A.; Joshi, J. Preserving Structural Properties in Edge-Perturbing Anonymization Techniques for Social Networks. IEEE Trans. Dependable Secur. Comput.; 2012; 9, pp. 877-889. [DOI: https://dx.doi.org/10.1109/TDSC.2012.65]
79. Ren, X.; Jiang, D. A Personalized-Anonymity Model of Social Network for Protecting Privacy. Wirel. Commun. Mob. Comput.; 2022; 2022, pp. 1-11. [DOI: https://dx.doi.org/10.1155/2022/7187528]
80. Ninggal, M.I.H.; Abawajy, J.H. Utility-aware social network graph anonymization. J. Netw. Comput. Appl.; 2015; 56, pp. 137-148. [DOI: https://dx.doi.org/10.1016/j.jnca.2015.05.013]
81. Zhang, H.; Lin, L.; Xu, L.; Wang, X. Graph partition based privacy-preserving scheme in social network. J. Netw. Comput. Appl.; 2021; 195, 103214. [DOI: https://dx.doi.org/10.1016/j.jnca.2021.103214]
82. Dong, X.; Gao, A.; Liang, Y.; Bi, X. Method of Privacy Preserving in Dynamic Social Network Data Publication. J. Front. Comput. Sci. Technol.; 2019; 13, 1441.
83. Zhang, H.; Zhang, X.; Liu, L.; Zhang, J. On Study of Privacy Preserving in Large-Scale Social Networks Based on Heuristic Analysis. J. Phys. Conf. Ser.; 2018; 1087, 062002. [DOI: https://dx.doi.org/10.1088/1742-6596/1087/6/062002]
84. Kavianpour, S.; Tamimi, A.; Shanmugam, B. A privacy-preserving model to control social interaction behaviors in social network sites. J. Inf. Secur. Appl.; 2019; 49, 102402. [DOI: https://dx.doi.org/10.1016/j.jisa.2019.102402]
85. Lan, L.; Tian, L. Preserving social network privacy using edge vector perturbation. Proceedings of the International Conference on Information Science and Cloud Computing Companion; Guangzhou, China, 7–8 December 2013; pp. 188-193.
86. Hamzehzadeh, S.; Mazinani, S.M. ANNM: A New Method for Adding Noise Nodes Which are Used Recently in Anonymization Methods in Social Networks. Wirel. Pers. Commun.; 2019; 107, pp. 1995-2017. [DOI: https://dx.doi.org/10.1007/s11277-019-06370-6]
87. Li, Y.; Purcell, M.; Rakotoarivelo, T.; Smith, D.; Ranbaduge, T.; Ng, S.T. Private Graph Data Release: A Survey. arXiv; 2021; arXiv: 2107.04245
88. Cai, Z.; He, Z.; Guan, X.; Li, Y. Collective data-sanitization for preventing sensitive information inference attacks in social networks. IEEE Trans. Dependable Secur. Comput.; 2016; 15, pp. 577-590. [DOI: https://dx.doi.org/10.1109/TDSC.2016.2613521]
89. Siddula, M.; Li, L.; Li, Y. An Empirical Study on the Privacy Preservation of Online Social Networks. IEEE Access; 2018; 6, pp. 19912-19922. [DOI: https://dx.doi.org/10.1109/ACCESS.2018.2822693]
90. Nguyen, L.B.; Zelinka, I.; Snasel, V.; Nguyen, L.T.; Vo, B. Subgraph mining in a large graph: A review. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery; Wiley: Hoboken, NJ, USA, 2022; 1454.
91. Mohapatra, D.; Patra, M.R. Anonymization of attributed social graph using anatomy based clustering. Multimed. Tools Appl.; 2019; 78, pp. 25455-25486. [DOI: https://dx.doi.org/10.1007/s11042-019-07745-4]
92. Siddula, M.; Li, Y.; Cheng, X.; Tian, Z.; Cai, Z. Anonymization in Online Social Networks Based on Enhanced Equi-Cardinal Clustering. IEEE Trans. Comput. Soc. Syst.; 2019; 6, pp. 809-820. [DOI: https://dx.doi.org/10.1109/TCSS.2019.2928324]
93. Li, K.; Luo, G.; Ye, Y.; Li, W.; Ji, S.; Cai, Z. Adversarial Privacy-Preserving Graph Embedding Against Inference Attack. IEEE Internet Things J.; 2020; 8, pp. 6904-6915. [DOI: https://dx.doi.org/10.1109/JIOT.2020.3036583]
94. Gangarde, R.; Sharma, A.; Pawar, A. Clustering Approach to Anonymize Online Social Network Data. Proceedings of the International Conference on Sustainable Computing and Data Communication Systems (ICSCDS); Erode, India, 7–9 April 2022; pp. 1070-1076.
95. Rizi, A.K.; Dehkordi, M.N.; Bakhsh, N.N. SNI: Supervised Anonymization Technique to Publish Social Networks Having Multiple Sensitive Labels. Secur. Commun. Netw.; 2019; 2019, pp. 1-23. [DOI: https://dx.doi.org/10.1155/2019/8171263]
96. Jethava, G.; Rao, U.P. A novel trust prediction approach for online social networks based on multifaceted feature similarity. Clust. Comput.; 2022; pp. 1-15. [DOI: https://dx.doi.org/10.1007/s10586-022-03617-z]
97. Li, X.; Yang, Y.; Chen, Y.; Niu, X. A Privacy Measurement Framework for Multiple Online Social Networks against Social Identity Linkage. Appl. Sci.; 2018; 8, 1790. [DOI: https://dx.doi.org/10.3390/app8101790]
98. Kiranmayi, M.; Maheswari, N. Reducing Attribute Couplet Attack in Social Networks using Factor Analysis. Proceedings of the International Conference on Recent Trends in Advance Computing (ICRTAC); Chennai, India, 10–11 September 2018; pp. 212-217.
99. Kaveri, V.V.; Maheswari, V. Cluster based anonymization for privacy preservation in social network data community. J. Theor. Appl. Inf. Technol.; 2015; 73, pp. 269-274.
100. Langari, R.K.; Sardar, S.; Mousavi, S.A.A.; Radfar, R. Combined fuzzy clustering and firefly algorithm for privacy preserving in social networks. Expert Syst. Appl.; 2019; 141, 112968. [DOI: https://dx.doi.org/10.1016/j.eswa.2019.112968]
101. Guo, K.; Zhang, Q. Fast clustering-based anonymization approaches with time constraints for data streams. Knowl.-Based Syst.; 2013; 46, pp. 95-108. [DOI: https://dx.doi.org/10.1016/j.knosys.2013.03.007]
102. Sarah, L.-K.A.; Tian, Y.; Al-Rodhaan, M. A Novel (K, X)-isomorphism Method for Protecting Privacy in Weighted Social Networks. Proceedings of the 21st Saudi Computer Society National Computer Conference (NCC); Riyadh, Saudi Arabia, 25–26 April 2018; pp. 1-6.
103. Shakeel, S.; Anjum, A.; Asheralieva, A.; Alam, M. k-NDDP: An Efficient Anonymization Model for Social Network Data Release. Electronics; 2021; 10, 2440. [DOI: https://dx.doi.org/10.3390/electronics10192440]
104. Poulin, J.; Mathina, K. Preserving the privacy on social network by clustering based anonymization. Int. J. Adv. Res. Comput. Sci. Technol.; 2014; 2, pp. 11-14.
105. Ghate, R.B.; Rasika, I. Clustering based Anonymization for privacy preservation. Proceedings of the International Conference on Pervasive Computing (ICPC); Maharashtra, India, 8–10 January 2015; pp. 1-3.
106. Sihag, V.K. A clustering approach for structural k-anonymity in social networks using genetic algorithm. Proceedings of the CUBE International Information Technology Conference; Pune, India, 3–5 September 2012; pp. 701-706.
107. Yu, F.; Chen, M.; Yu, B.; Li, W.; Ma, L.; Gao, H. Privacy preservation based on clustering perturbation algorithm for social network. Multimed. Tools Appl.; 2017; 77, pp. 11241-11258. [DOI: https://dx.doi.org/10.1007/s11042-017-5502-3]
108. Ros-Martín, M.; Salas, J.; Casas-Roma, J. Scalable non-deterministic clustering-based k-anonymization for rich networks. Int. J. Inf. Secur.; 2018; 18, pp. 219-238. [DOI: https://dx.doi.org/10.1007/s10207-018-0409-1]
109. Yazdanjue, N.; Fathian, M.; Amiri, B. Evolutionary Algorithms for k-Anonymity in Social Networks Based on Clustering Approach. Comput. J.; 2019; 63, pp. 1039-1062. [DOI: https://dx.doi.org/10.1093/comjnl/bxz069]
110. Tian, H.; Zheng, X.; Zhang, X.; Zeng, D.D. ϵ-k anonymization and adversarial training of graph neural networks for privacy preservation in social networks. Electron. Commer. Res. Appl.; 2021; 50, 101105. [DOI: https://dx.doi.org/10.1016/j.elerap.2021.101105]
111. Kausar, F.; Al Beladi, S.O. A Comparative Analysis of Privacy Preserving Techniques in Online Social Networks. Trans. Netw. Commun.; 2015; 3, 59. [DOI: https://dx.doi.org/10.14738/tnc.32.854]
112. Budiardjo, E.K.; Wibowo, W.C. Privacy preserving data publishing with multiple sensitive attributes based on overlapped slicing. Information; 2019; 10, 362. [DOI: https://dx.doi.org/10.3390/info10120362]
113. Du, J.; Pi, Y. Research on Privacy Protection Technology of Mobile Social Network Based on Data Mining under Big Data. Secur. Commun. Netw.; 2022; 2022, pp. 1-9. [DOI: https://dx.doi.org/10.1155/2022/3826126]
114. Majeed, A.; Khan, S.; Hwang, S.O. Toward Privacy Preservation Using Clustering Based Anonymization: Recent Advances and Future Research Outlook. IEEE Access; 2022; 10, pp. 53066-53097. [DOI: https://dx.doi.org/10.1109/ACCESS.2022.3175219]
115. Cuzzocrea, A.; Leung, C.K.; Olawoyin, A.M.; Fadda, E. Supporting privacy-preserving big data analytics on temporal open big data. Procedia Comput. Sci.; 2022; 198, pp. 112-121. [DOI: https://dx.doi.org/10.1016/j.procs.2021.12.217]
116. Chen, X.; Lui, J.C.S. Mining graphlet counts in online social networks. ACM Trans. Knowl. Discov. Data; 2018; 12, pp. 1-38. [DOI: https://dx.doi.org/10.1145/3182392]
117. Shun, J.; Tangwongsan, K. Multicore triangle computations without tuning. Proceedings of the IEEE 31st International Conference on Data Engineering; Seoul, Korea, 13–17 April 2015; pp. 149-160.
118. Yang, C.; Buluç, A.; Owens, J.D. GraphBLAST: A High-Performance Linear Algebra-based Graph Framework on the GPU. ACM Trans. Math. Softw.; 2022; 48, pp. 1-51. [DOI: https://dx.doi.org/10.1145/3466795]
119. Mazlumi, S.H.H.; Kermani, M.A.M. Investigating the structure of the Internet of Things (IoT) patent network using social network analysis. IEEE Internet Things J.; 2022; Early Access [DOI: https://dx.doi.org/10.1109/JIOT.2022.3142191]
120. Behera, B.; Husic, E.; Jain, S.; Roughgarden, T.; Seshadhri, C. FPT algorithms for finding near-cliques in c-closed graphs. Proceedings of the 13th Innovations in Theoretical Computer Science Conference (ITCS 2022); Schloss Dagstuhl–Leibniz-Zentrum für Informatik, 2022; Available online: https://drops.dagstuhl.de/opus/volltexte/2022/15613/ (accessed on 5 May 2022).
121. Sahraoui, Y.; Lucia, L.D.; Vegni, A.M.; Kerrache, C.A.; Amadeo, M.; Korichi, A. TraceMe: Real-Time Contact Tracing and Early Prevention of COVID-19 based on Online Social Networks. Proceedings of the 2022 IEEE 19th Annual Consumer Communications & Networking Conference (CCNC); Las Vegas, NV, USA, 8–11 January 2022; pp. 893-896.
122. Rezvani, M.; Rezvani, M. Truss decomposition using triangle graphs. Soft Comput.; 2021; 26, pp. 55-68. [DOI: https://dx.doi.org/10.1007/s00500-021-06468-9]
123. Laeuchli, J.; Ramírez-Cruz, Y.; Trujillo-Rasua, R. Analysis of centrality measures under differential privacy models. Appl. Math. Comput.; 2021; 412, 126546. [DOI: https://dx.doi.org/10.1016/j.amc.2021.126546]
124. Hou, Y.; Xia, X.; Li, H.; Cui, J.; Mardani, A. Fuzzy Differential Privacy Theory and its Applications in Subgraph Counting. IEEE Trans. Fuzzy Syst.; 2022; [DOI: https://dx.doi.org/10.1109/TFUZZ.2022.3157385]
125. Nunez-del-Prado, M.; Maehara-Aliaga, Y.; Salas, J.; Alatrista-Salas, H.; Megías, D. A Graph-Based Differentially Private Algorithm for Mining Frequent Sequential Patterns. Appl. Sci.; 2022; 12, 2131. [DOI: https://dx.doi.org/10.3390/app12042131]
126. Risselada, H.; Ochtend, J. Social Network Analysis. Handbook of Market Research; Springer: Berlin/Heidelberg, Germany, 2022; 693.
127. Khanam, K.Z.; Srivastava, G.; Mago, V. The homophily principle in social network analysis: A survey. Multimed. Tools Appl.; 2022; 932, pp. 1-44. [DOI: https://dx.doi.org/10.1007/s11042-021-11857-1]
128. Odeyomi, O.T. Differential Privacy in Social Networks Using Multi-Armed Bandit. IEEE Access; 2022; 10, pp. 11817-11829. [DOI: https://dx.doi.org/10.1109/ACCESS.2022.3144084]
129. Task, C.; Clifton, C. What Should We Protect? Defining Differential Privacy for Social Network Analysis. State of the Art Applications of Social Network Analysis; Springer: Cham, Switzerland, 2014; pp. 139-161.
130. Liu, H.; Peng, C.; Tian, Y.; Long, S.; Tian, F.; Wu, Z. GDP vs. LDP: A Survey from the Perspective of Information-Theoretic Channel. Entropy; 2022; 24, 430. [DOI: https://dx.doi.org/10.3390/e24030430]
131. Gao, T.; Li, F.; Chen, Y.; Zou, X. Preserving local differential privacy in online social networks. International Conference on Wireless Algorithms, Systems, and Applications; Springer: Cham, Switzerland, 2017; pp. 393-405.
132. Gao, T.; Li, F.; Chen, Y.; Zou, X. Local differential privately anonymizing online social networks under HRG-based model. IEEE Trans. Comput. Soc. Syst.; 2018; 5, pp. 1009-1020. [DOI: https://dx.doi.org/10.1109/TCSS.2018.2877045]
133. Gao, T.; Li, F. PHDP: Preserving persistent homology in differentially private graph publications. Proceedings of the IEEE INFOCOM 2019-IEEE Conference on Computer Communications; Paris, France, 29 April–2 May 2019; pp. 2242-2250.
134. Gao, T.; Li, F. Protecting Social Network with Differential Privacy Under Novel Graph Model. IEEE Access; 2020; 8, pp. 185276-185289. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3026008]
135. Zhang, S.; Ni, W.; Fu, N. Differentially private graph publishing with degree distribution preservation. Comput. Secur.; 2021; 106, 102285. [DOI: https://dx.doi.org/10.1016/j.cose.2021.102285]
136. Zheng, X.; Tian, L.; Hui, B.; Liu, X. Distributed and Privacy Preserving Graph Data Collection in Internet of Thing Systems. IEEE Internet Things J.; 2021; 9, pp. 9301-9309. [DOI: https://dx.doi.org/10.1109/JIOT.2021.3112186]
137. Fang, J.; Li, A.; Jiang, Q. GDAGAN: An anonymization method for graph data publishing using generative adversarial network. Proceedings of the 2019 6th International Conference on Information Science and Control Engineering (ICISCE); Penang, Malaysia, 29 November–1 December 2019; pp. 309-313.
138. Yin, Y.; Liao, Q.; Liu, Y.; Xu, R. Structural-Based Graph Publishing under Differential Privacy. Proceedings of the International Conference on Cognitive Computing; Milan, Italy, 8–13 July 2019; Springer: Cham, Switzerland, 2019; pp. 67-78.
139. Huang, H.; Zhang, D.; Xiao, F.; Wang, K.; Gu, J.; Wang, R. Privacy-preserving approach PBCN in social networks with differential privacy. IEEE Trans. Netw. Serv. Manag.; 2020; 17, pp. 931-945. [DOI: https://dx.doi.org/10.1109/TNSM.2020.2982555]
140. Macwan, K.R.; Patel, S.J. Node Differential Privacy in Social Graph Degree Publishing. Procedia Comput. Sci.; 2018; 143, pp. 786-793. [DOI: https://dx.doi.org/10.1016/j.procs.2018.10.388]
141. Zhu, H.; Zuo, X.; Xie, M. DP-FT: A Differential Privacy Graph Generation with Field Theory for Social Network Data Release. IEEE Access; 2019; 7, pp. 164304-164319. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2952452]
142. Huang, H.; Yang, Y.; Li, Y. PSG: Local Privacy Preserving Synthetic Social Graph Generation. Proceedings of the International Conference on Collaborative Computing: Networking, Applications and Worksharing; Virtual, 16–18 October 2021; Springer: Cham, Switzerland, 2021; pp. 389-404.
143. Macwan, K.; Patel, S. Privacy Preserving Approaches for Online Social Network Data Publishing. Handbook of Research on Digital Transformation and Challenges to Data Security and Privacy; IGI Global: Hershey, PA, USA, 2021; pp. 119-132.
144. Macwan, K.; Patel, S. Privacy Preservation Approaches for Social Network Data Publishing. Artificial Intelligence for Cyber Security: Methods, Issues and Possible Horizons or Opportunities; Springer: Cham, Switzerland, 2021; pp. 213-233.
145. Liu, P.; Xu, Y.; Jiang, Q.; Tang, Y.; Guo, Y.; Wang, L.; Li, X. Local differential privacy for social network publishing. Neurocomputing; 2020; 391, pp. 273-279. [DOI: https://dx.doi.org/10.1016/j.neucom.2018.11.104]
146. Iftikhar, M.; Wang, Q.; Lin, Y. dk-microaggregation: Anonymizing graphs with differential privacy guarantees. Pacific-Asia Conference on Knowledge Discovery and Data Mining; Springer: Cham, Switzerland, 2020; pp. 191-203.
147. Li, X.; Yang, J.; Sun, Z.; Zhang, J. Differential Privacy for Edge Weights in Social Networks. Secur. Commun. Netw.; 2017; 2017, 972. [DOI: https://dx.doi.org/10.1155/2017/4267921]
148. Guan, Y.; Lu, R.; Zheng, Y.; Zhang, S.; Shao, J.; Wei, G. Achieving Efficient and Privacy-Preserving (α,β)-Core Query over Bipartite Graphs in Cloud. IEEE Trans. Dependable Secur. Comput.; 2022; [DOI: https://dx.doi.org/10.1109/TDSC.2022.3169386]
149. Wang, J.; Li, Z.; Lui, J.C.S.; Sun, M. Topology-theoretic approach to address attribute linkage attacks in differential privacy. Comput. Secur.; 2022; 113, 102552. [DOI: https://dx.doi.org/10.1016/j.cose.2021.102552]
150. Yang, J.; Ma, X.; Bai, X.; Cui, L. Graph publishing with local differential privacy for hierarchical social networks. Proceedings of the 2020 IEEE 10th International Conference on Electronics Information and Emergency Communication (ICEIEC); Beijing, China, 17–19 July 2020; pp. 123-126.
151. Wang, Y.; Yang, J.; Zhang, J. Differential Privacy for Weighted Network Based on Probability Model. IEEE Access; 2020; 8, pp. 80792-80800. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.2991062]
152. Wang, Y.; Yang, J.; Zhan, J. Differentially Private Attributed Network Releasing Based on Early Fusion. Secur. Commun. Netw.; 2021; 2021, 983. [DOI: https://dx.doi.org/10.1155/2021/9981752]
153. Lv, T.; Li, H.; Tang, Z.; Fu, F.; Cao, J.; Zhang, J. Publishing Triangle Counting Histogram in Social Networks Based on Differential Privacy. Secur. Commun. Netw.; 2021; 2021, [DOI: https://dx.doi.org/10.1155/2021/7206179]
154. Wang, D.; Long, S. Boosting the accuracy of differentially private in weighted social network. Multimed. Tools Appl.; 2019; 78, pp. 34801-34817. [DOI: https://dx.doi.org/10.1007/s11042-019-08092-0]
155. Lei, H.; Li, S.; Wang, H. A weighted social network publishing method based on diffusion wavelets transform and differential privacy. Multimed. Tools Appl.; 2022; 81, pp. 1-18. [DOI: https://dx.doi.org/10.1007/s11042-022-12726-1]
156. Reuben, J. Towards a differential privacy theory for edge-labeled directed graphs. Sicherheit; 2018; pp. 273-278.
157. Yan, J.; Tian, Y.; Liu, H.; Wu, Z. Uncertain graph generating approach based on differential privacy for preserving link relationship of social networks. Int. J. Secur. Netw.; 2022; 17, pp. 28-38. [DOI: https://dx.doi.org/10.1504/IJSN.2022.122545]
158. Yan, J.; Liu, H.; Wu, Z. An Efficient Differential Privacy Method with Wavelet Transform for Edge Weights of Social Networks. Int. J. Netw. Secur.; 2022; 24, pp. 181-192.
159. Qian, Q.; Li, Z.; Zhao, P.; Chen, W.; Yin, H.; Zhao, L. Publishing graph node strength histogram with edge differential privacy. Proceedings of the International Conference on Database Systems for Advanced Applications; Taipei, Taiwan, 11–14 April 2021; Springer: Cham, Switzerland, 2021; pp. 75-91.
160. Gu, Q.; Ni, Q.; Meng, X.; Yang, Z. Dynamic social privacy protection based on graph mode partition in complex social network. Pers. Ubiquitous Comput.; 2019; 23, pp. 511-519. [DOI: https://dx.doi.org/10.1007/s00779-019-01249-6]
161. Qu, Y.; Gao, L.; Yu, S.; Xiang, Y. Personalized Privacy. Privacy Preservation in IoT: Machine Learning Approaches; Springer: Singapore, 2022; pp. 49-76.
162. Iftikhar, M.; Wang, Q.; Li, Y. dK-Personalization: Publishing Network Statistics with Personalized Differential Privacy. Proceedings of the Advances in Knowledge Discovery and Data Mining: 26th Pacific-Asia Conference, PAKDD 2022; Chengdu, China, 16–19 May 2022; pp. 194-207.
163. Jiang, H.; Pei, J.; Yu, D.; Yu, J.; Gong, B.; Cheng, X. Applications of differential privacy in social network analysis: A survey. IEEE Trans. Knowl. Data Eng.; 2021; Early Access [DOI: https://dx.doi.org/10.1109/TKDE.2021.3073062]
164. Kiranmayi, M.; Maheswari, N. A Review on Privacy Preservation of Social Networks Using Graphs. J. Appl. Secur. Res.; 2020; 16, pp. 190-223. [DOI: https://dx.doi.org/10.1080/19361610.2020.1751558]
165. Hua, J.; Tang, A.; Fang, Y.; Shen, Z.; Zhong, S. Privacy-preserving utility verification of the data published by non-interactive differentially private mechanisms. IEEE Trans. Inf. Forensics Secur.; 2016; 11, pp. 2298-2311. [DOI: https://dx.doi.org/10.1109/TIFS.2016.2532839]
166. Tran, K.-D.T.; Huang, Y. FedSGDCOVID: Federated SGD COVID-19 Detection under Local Differential Privacy Using Chest X-ray Images and Symptom Information. Sensors; 2022; 22, 3728. [DOI: https://dx.doi.org/10.3390/s22103728]
167. Jiang, H.; Sarwar, S.M.; Yu, H.; Islam, S.A. Differentially private data publication with multi-level data utility. High-Confid. Comput.; 2022; 2, 100049. [DOI: https://dx.doi.org/10.1016/j.hcc.2022.100049]
168. Liu, B.; Ding, M.; Shaham, S.; Rahayu, W.; Farokhi, F.; Lin, Z. When machine learning meets privacy: A survey and outlook. ACM Comput. Surv.; 2021; 54, pp. 1-36. [DOI: https://dx.doi.org/10.1145/3436755]
169. Cristofaro, D.E. A critical overview of privacy in machine learning. IEEE Secur. Priv.; 2021; 19, pp. 19-27. [DOI: https://dx.doi.org/10.1109/MSEC.2021.3076443]
170. Aljably, R.; Tian, Y.; Al-Rodhaan, M. Preserving privacy in multimedia social networks using machine learning anomaly detection. Secur. Commun. Netw.; 2020; 2020, 5874935. [DOI: https://dx.doi.org/10.1155/2020/5874935]
171. Narayanan, A.; Shi, E.; Rubinstein, B.I. Link prediction by de-anonymization: How we won the kaggle social network challenge. Proceedings of the 2011 International Joint Conference on Neural Networks; San Jose, CA, USA, 31 July–5 August 2011; pp. 1825-1834.
172. Qian, J.; Li, X.; Zhang, C.; Chen, L.; Jung, T.; Han, J. Social network de-anonymization and privacy inference with knowledge graph model. IEEE Trans. Dependable Secur. Comput.; 2017; 16, pp. 679-692. [DOI: https://dx.doi.org/10.1109/TDSC.2017.2697854]
173. Tanuwidjaja, C.H.; Choi, R.; Baek, S.; Kim, K. Privacy-preserving deep learning on machine learning as a service—A comprehensive survey. IEEE Access; 2020; 8, pp. 167425-167447. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3023084]
174. Bilogrevic, I.; Huguenin, K.; Agir, B.; Jadliwala, M.; Hubaux, J. Adaptive information-sharing for privacy-aware mobile social network. Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing; Zurich, Switzerland, 8–12 September 2013; pp. 657-666.
175. Caliskan Islam, A.; Walsh, J.; Greenstadt, R. Privacy detective: Detecting private information and collective privacy behavior in a large social network. Proceedings of the 13th Workshop on Privacy in the Electronic Society; Scottsdale, AZ, USA, 3 November 2014; pp. 35-46.
176. Yin, S.; Liu, J. A K-means Approach for Map-Reduce Model and Social Network Privacy Protection. J. Inf. Hiding Multim. Signal Process.; 2016; 7, pp. 1215-1221.
177. Wang, S.L.; Shih, C.; Ting, I.; Hong, T. Degree anonymization for k-shortest-path privacy. Proceedings of the 2013 IEEE International Conference on Systems, Man, and Cybernetics; Manchester, UK, 13–16 October 2013; pp. 1093-1097.
178. Ju, X.; Zhang, X.; Cheung, W.K. Generating synthetic graphs for large sensitive and correlated social networks. Proceedings of the 2019 IEEE 35th International Conference on Data Engineering Workshops (ICDEW); Macao, China, 8–12 April 2019; pp. 286-293.
179. Zheng, Y.; Wu, J.; Zhang, X.; Chu, X. Graph-DPP: Sampling Diverse Neighboring Nodes via Determinantal Point Process. Proceedings of the 2020 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT); Virtual Conference, 14–17 December 2020; pp. 540-545.
180. Paul, A.; Suppakitpaisarn, V.; Bafna, M.; Rangan, C.P. Improving accuracy of differentially private kronecker social network via graph clustering. Proceedings of the 2020 International Symposium on Network, Computers and Communications (ISNCC); Montreal, QC, Canada, 20–22 October 2020.
181. Hoang, A.H.; Carminati, B.; Ferrari, E. Cluster-based anonymization of knowledge graphs. Proceedings of the International Conference on Applied Cryptography and Network Security; Rome, Italy, 20–23 June 2020; Springer: Cham, Switzerland, 2020; pp. 104-123.
182. Hoang, A.H.; Carminati, B.; Ferrari, E. Privacy-Preserving Sequential Publishing of Knowledge Graphs. Proceedings of the 2021 IEEE 37th International Conference on Data Engineering (ICDE); Athens, Greece, 19–22 April 2021.
183. Chen, Z.G.; Kang, H.; Yin, S.; Kim, S. An efficient privacy protection in mobility social network services with novel clustering-based anonymization. Eurasip J. Wirel. Commun. Netw.; 2016; 2016, pp. 1-9. [DOI: https://dx.doi.org/10.1186/s13638-016-0767-1]
184. Narula, V.; Feng, K.; Chaspari, T. Preserving privacy in image-based emotion recognition through user anonymization. Proceedings of the 2020 International Conference on Multimodal Interaction; Virtual Event, The Netherlands, 25–29 October 2020; pp. 452-460.
185. Zitouni, S.M.; Lee, P.; Lee, U.; Hadjileontiadis, L.; Khandoker, A. Privacy Aware Affective State Recognition from Visual Data. IEEE Access; 2022; 10, pp. 40620-40628. [DOI: https://dx.doi.org/10.1109/ACCESS.2022.3165622]
186. Ahmed, W.K.; Hasan, M.Z.; Mohammed, N. Image-centric social discovery using neural network under anonymity constraint. 2017 IEEE International Conference on Cloud Engineering (IC2E); Vancouver, BC, Canada, 4–7 April 2017; pp. 238-244.
187. Matheswaran, P.; Navaneethan, C.; Meenatchi, S.; Ananthi, S.; Janaki, K.; Manjunathan, A. Image Privacy in Social Network Using Invisible Watermarking Techniques. Ann. Rom. Soc. Cell Biol.; 2021; 25, pp. 319-327.
188. Li, A.; Fang, J.; Jiang, Q.; Zhou, B.; Jia, Y. A graph data privacy-preserving method based on generative adversarial networks. Proceedings of the International Conference on Web Information Systems Engineering; Melbourne, VIC, Australia, 26–29 October 2020; Springer: Cham, Switzerland, 2020; pp. 227-239.
189. Lu, Y.; Deng, Z.; Gao, Q.; Jing, T. Graph Embedding-Based Sensitive Link Protection in IoT Systems. Wirel. Commun. Mob. Comput.; 2022; 2022, [DOI: https://dx.doi.org/10.1155/2022/2432351]
190. Li, X.; Xin, Y.; Zhao, C.; Yang, Y.; Chen, Y. Graph convolutional networks for privacy metrics in online social networks. Appl. Sci.; 2020; 10, 1327. [DOI: https://dx.doi.org/10.3390/app10041327]
191. Wanda, P.; Jie, H.J. DeepFriend: Finding abnormal nodes in online social networks using dynamic deep learning. Soc. Netw. Anal. Min.; 2021; 11, pp. 1-12. [DOI: https://dx.doi.org/10.1007/s13278-021-00742-2]
192. Li, X.; Xin, Y.; Zhao, C.; Yang, Y.; Luo, S.; Chen, Y. Using user behavior to measure privacy on online social networks. IEEE Access; 2020; 8, pp. 108387-108401. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3000780]
193. Bioglio, L.; Pensa, R.G. Analysis and classification of privacy-sensitive content in social media posts. EPJ Data Sci.; 2022; 11, 12. [DOI: https://dx.doi.org/10.1140/epjds/s13688-022-00324-y]
194. Hermansson, L.; Kerola, T.; Johansson, F.; Jethava, V.; Dubhashi, D. Entity disambiguation in anonymized graphs using graph kernels. Proceedings of the 22nd ACM International Conference on Information & Knowledge Management; San Francisco, CA, USA, 27 October–1 November 2013; pp. 1037-1046.
195. Kalunge, V.; Deepika, S. Data Mining Techniques for Privacy Preservation in Social Network Sites Using SVM. Techno-Societal; Springer: Cham, Switzerland, 2021; pp. 733-743.
196. Zhang, J.; Sun, J.; Zhang, R.; Zhang, Y.; Hu, X. Privacy-preserving social media data outsourcing. Proceedings of the IEEE INFOCOM 2018-IEEE Conference on Computer Communications; Honolulu, HI, USA, 15–19 April 2018; pp. 1106-1114.
197. Halimi, A.; Ayday, E. Real-time privacy risk quantification in online social networks. Proceedings of the 2021 IEEE/ACM International Conference on Advances in Social Network Analysis and Mining; Virtual Event, The Netherlands, 8–11 November 2021; pp. 74-81.
198. Kumar, S.; Kumar, P. Upper approximation based privacy preserving in online social networks. Expert Syst. Appl.; 2017; 88, pp. 276-289. [DOI: https://dx.doi.org/10.1016/j.eswa.2017.07.010]
199. Kumar, S.; Kumar, P. Privacy Preserving in Online Social Networks Using Fuzzy Rewiring. IEEE Trans. Eng. Manag.; 2021; Early Access [DOI: https://dx.doi.org/10.1109/TEM.2021.3072812]
200. Li, J.; Zhang, X.; Liu, J.; Gao, L.; Zhang, H.; Feng, Y. Large-Scale Social Network Privacy Protection Method for Protecting K-Core. Int. J. Netw. Secur.; 2021; 23, pp. 612-622.
201. Chavhan, K.; Challagidad, P.S. Anonymization Technique for Privacy Preservation in Social Networks. Proceedings of the 2021 5th International Conference on Electrical, Electronics, Communication, Computer Technologies and Optimization Techniques (ICEECCOT); Mysuru, India, 10–11 December 2021; pp. 131-136.
202. Wang, J.; Wan, Z.; Song, J.; Huang, Y.; Lin, Y.; Lin, L. Anonymizing Global Edge Weighted Social Network Graphs. Proceedings of the International Symposium on Security and Privacy in Social Networks and Big Data; Fuzhou, China, 19–21 November 2021; Springer: Singapore, 2021; pp. 119-130.
203. Kansara, K.; Kadhiwala, B. Non-cryptographic Approaches for Collaborative Social Network Data Publishing-A Survey. Proceedings of the 2020 Fourth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC); Palladam, India, 7–9 October 2020; pp. 348-351.
204. Ma, T.; Jia, J.; Xue, Y.; Tian, Y.; Al-Dhelaan, A.; Al-Rodhaan, M. Protection of location privacy for moving kNN queries in social networks. Appl. Soft Comput.; 2018; 16, pp. 525-532. [DOI: https://dx.doi.org/10.1016/j.asoc.2017.08.027]
205. Zhang, J.; Shi, S.; Weng, C.; Xu, L. Individual Attribute and Cascade Influence Capability-Based Privacy Protection Method in Social Networks. Secur. Commun. Netw.; 2022; 2022, [DOI: https://dx.doi.org/10.1155/2022/6338123]
206. Mauw, S.; Ramírez-Cruz, Y.; Trujillo-Rasua, R. Preventing active re-identification attacks on social graphs via sybil subgraph obfuscation. Knowl. Inf. Syst.; 2022; 64, pp. 1077-1100. [DOI: https://dx.doi.org/10.1007/s10115-022-01662-z]
207. Maag, L.M.; Denoyer, L.; Gallinari, P. Graph anonymization using machine learning. Proceedings of the 2014 IEEE 28th International Conference on Advanced Information Networking and Applications; Victoria, BC, Canada, 13–16 May 2014; pp. 1111-1118.
208. Gao, T.; Li, F. Machine Learning-based Online Social Network Privacy Preservation. Proceedings of the 2022 ACM on Asia Conference on Computer and Communications Security; Nagasaki, Japan, 30 May–3 June 2022; pp. 467-478.
209. Huynh-The, T.; Pham, Q.V.; Pham, X.Q.; Nguyen, T.T.; Han, Z.; Kim, D.S. Artificial Intelligence for the Metaverse: A Survey. arXiv; 2022; arXiv: 2202.10336
210. Mutlu, E.C.; Oghaz, T.; Rajabi, A.; Garibay, I. Review on Learning and Extracting Graph Features for Link Prediction. Mach. Learn. Knowl. Extr.; 2020; 2, pp. 672-704. [DOI: https://dx.doi.org/10.3390/make2040036]
211. Nemec Zlatolas, L.; Hrgarek, L.; Welzer, T.; Hölbl, M. Models of Privacy and Disclosure on Social Networking Sites: A Systematic Literature Review. Mathematics; 2022; 10, 146. [DOI: https://dx.doi.org/10.3390/math10010146]
212. Shaukat, K.; Luo, S.; Chen, S.; Liu, D. Cyber threat detection using machine learning techniques: A performance evaluation perspective. Proceedings of the 2020 International Conference on Cyber Warfare and Security (ICCWS); Norfolk, VA, USA, 12–13 March 2020; pp. 1-6.
213. Li, P.; Cui, L.; Li, X. A hybrid algorithm for privacy preserving social network publication. Proceedings of the International Conference on Advanced Data Mining and Applications; Brisbane, Australia, 28–30 November 2014; Springer: Cham, Switzerland, 2014; pp. 267-278.
214. Liu, P.; Bai, Y.; Wang, L.; Li, X. Partial k-anonymity for privacy-preserving social network data publishing. Int. J. Softw. Eng. Knowl. Eng.; 2017; 27, pp. 71-90. [DOI: https://dx.doi.org/10.1142/S0218194017500048]
215. Mortazavi, R.; Erfani, S.H. GRAM: An efficient (k, l) graph anonymization method. Expert Syst. Appl.; 2020; 153, 113454. [DOI: https://dx.doi.org/10.1016/j.eswa.2020.113454]
216. Liao, S.H.; Yang, C.A. Big data analytics of social network marketing and personalized recommendations. Soc. Netw. Anal. Min.; 2021; 11, pp. 1-19. [DOI: https://dx.doi.org/10.1007/s13278-021-00729-z]
217. Wang, L.E.; Li, X. A graph-based multifold model for anonymizing data with attributes of multiple types. Comput. Secur.; 2018; 72, pp. 122-135. [DOI: https://dx.doi.org/10.1016/j.cose.2017.09.003]
218. Qu, Y.; Yu, S.; Gao, L.; Zhou, W.; Peng, S. A hybrid privacy protection scheme in cyber-physical social networks. IEEE Trans. Comput. Soc. Syst.; 2018; 5, pp. 773-784. [DOI: https://dx.doi.org/10.1109/TCSS.2018.2861775]
219. Wang, Y.; Xie, L.; Zheng, B.; Lee, K.C. High utility k-anonymization for social network publishing. Knowl. Inf. Syst.; 2014; 41, pp. 697-725. [DOI: https://dx.doi.org/10.1007/s10115-013-0674-2]
220. Mortazavi, R.; Erfani, S.H. An effective method for utility preserving social network graph anonymization based on mathematical modeling. Int. J. Eng.; 2018; 31, pp. 1624-1632.
221. Talmon, N.; Hartung, S. The complexity of degree anonymization by graph contractions. Inf. Comput.; 2017; 256, pp. 212-225. [DOI: https://dx.doi.org/10.1016/j.ic.2017.07.007]
222. An, S.; Li, Y.; Wang, T.; Jin, Y. Contact Graph Based Anonymization for Geosocial Network Datasets. Proceedings of the 2018 5th International Conference on Behavioral, Economic, and Socio-Cultural Computing (BESC); Taiwan, China, 12–14 November 2018; pp. 132-137.
223. Naik, D.; Ramesh, D.; Gandomi, A.H.; Gorojanam, N.B. Parallel and distributed paradigms for community detection in social networks: A methodological review. Expert Syst. Appl.; 2022; 187, 115956. [DOI: https://dx.doi.org/10.1016/j.eswa.2021.115956]
224. Mithagari, A.; Shankarmani, R. Mining Active Influential Nodes for Finding Information Diffusion in Social Networks. IoT and Cloud Computing for Societal Good; Springer: Cham, Switzerland, 2022; pp. 245-255.
225. Huang, M.; Jiang, Q.; Qu, Q.; Chen, L.; Chen, H. Information fusion oriented heterogeneous social network for friend recommendation via community detection. Appl. Soft Comput.; 2022; 114, 108103. [DOI: https://dx.doi.org/10.1016/j.asoc.2021.108103]
226. Karimi, S.; Shakery, A.; Verma, R.M. Enhancement of Twitter event detection using news streams. Nat. Lang. Eng.; 2022; pp. 1-20. [DOI: https://dx.doi.org/10.1017/S1351324921000462]
227. Zheng, X.; Cai, Z.; Luo, G.; Tian, L.; Bai, X. Privacy-preserved community discovery in online social networks. Future Gener. Comput. Syst.; 2019; 93, pp. 1002-1009. [DOI: https://dx.doi.org/10.1016/j.future.2018.04.020]
228. Wang, W.; Wang, S.; Huang, J. Privacy Preservation for Friend-Recommendation Applications. Secur. Commun. Netw.; 2018; 2018, 1265352. [DOI: https://dx.doi.org/10.1155/2018/1265352]
229. Li, F.; Sun, Z.; Li, A.; Niu, B.; Li, H.; Cao, G. Hideme: Privacy-preserving photo sharing on social networks. Proceedings of the IEEE INFOCOM 2019-IEEE Conference on Computer Communications; Paris, France, 29 April–2 May 2019; pp. 154-162.
230. Yi, X.; Bertino, E.; Rao, F.; Bouguettaya, A. Practical privacy-preserving user profile matching in social networks. Proceedings of the 2016 32nd International Conference on Data Engineering (ICDE); Helsinki, Finland, 16–20 May 2016; pp. 373-384.
231. Wei, J.; Li, J.; Lin, Y.; Zhang, J. LDP-Based Social Content Protection for Trending Topic Recommendation. IEEE Internet Things J.; 2020; 8, pp. 4353-4372. [DOI: https://dx.doi.org/10.1109/JIOT.2020.3026366]
232. Valliyammai, C.; Bhuvaneswari, A. Semantics-based sensitive topic diffusion detection framework towards privacy aware online social networks. Clust. Comput.; 2019; 22, pp. 407-422. [DOI: https://dx.doi.org/10.1007/s10586-018-2142-y]
233. Casas-Roma, J. DUEF-GA: Data utility and privacy evaluation framework for graph anonymization. Int. J. Inf. Secur.; 2019; 19, pp. 465-478. [DOI: https://dx.doi.org/10.1007/s10207-019-00469-4]
234. Gao, J.R.; Chen, W.; Xu, J.; Liu, A.; Li, Z.; Yin, H.; Zhao, L. An efficient framework for multiple subgraph pattern matching models. J. Comput. Sci. Technol.; 2019; 34, pp. 1185-1202. [DOI: https://dx.doi.org/10.1007/s11390-019-1969-x]
235. Li, D.; Lv, Q.; Shang, L.; Gu, N. Efficient privacy-preserving content recommendation for online social communities. Neurocomputing; 2017; 219, pp. 440-454. [DOI: https://dx.doi.org/10.1016/j.neucom.2016.09.059]
236. Mazeh, I.; Shmueli, E. A personal data store approach for recommender systems: Enhancing privacy without sacrificing accuracy. Expert Syst. Appl.; 2019; 139, 112858. [DOI: https://dx.doi.org/10.1016/j.eswa.2019.112858]
237. Yargic, A.; Bilge, A. Privacy-preserving multi-criteria collaborative filtering. Inf. Process. Manag.; 2019; 56, pp. 994-1009. [DOI: https://dx.doi.org/10.1016/j.ipm.2019.02.009]
238. Bahri, L.; Carminati, B.; Ferrari, E. Decentralized privacy preserving services for online social networks. Online Soc. Netw. Media; 2018; 6, pp. 18-25. [DOI: https://dx.doi.org/10.1016/j.osnem.2018.02.001]
239. Dong, W.; Dave, V.; Qiu, L.; Zhang, Y. Secure Friend Discovery in Mobile Social Networks. Proceedings of the INFOCOM; Shanghai, China, 10–15 April 2011.
240. Liu, Y.; Liu, J.; Zhang, Z.; Zhu, L.; Li, A. REM: From structural entropy to community structure deception. Adv. Neural Inf. Process. Syst.; 2019; 32, pp. 1-11.
241. Guo, L.; Zhang, C.; Fang, Y.; Lin, P. A Privacy-Preserving Attribute-Based Reputation System in Online Social Networks. J. Comput. Sci. Technol.; 2015; 30, pp. 578-597. [DOI: https://dx.doi.org/10.1007/s11390-015-1547-9]
242. Yin, D.; Shen, Y.; Liu, C. Attribute Couplet Attacks and Privacy Preservation in Social Networks. IEEE Access; 2017; 5, pp. 25295-25305. [DOI: https://dx.doi.org/10.1109/ACCESS.2017.2769090]
243. Kukkala, V.B.; Iyengar, S. Identifying Influential Spreaders in a Social Network (While Preserving Privacy). Proc. Priv. Enhancing Technol.; 2020; 2020, pp. 537-557. [DOI: https://dx.doi.org/10.2478/popets-2020-0040]
244. Yuan, M.; Chen, L.; Yu, P.S.; Yu, T. Protecting Sensitive Labels in Social Network Data Anonymization. IEEE Trans. Knowl. Data Eng.; 2011; 25, pp. 633-647. [DOI: https://dx.doi.org/10.1109/TKDE.2011.259]
245. Gao, T.; Li, F. Privacy-preserving sketching for online social network data publication. Proceedings of the 2019 16th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON); Boston, MA, USA, 10–13 June 2019; pp. 1-9.
246. Zheng, X.; Zhang, L.; Li, K.; Zeng, X. Efficient publication of distributed and overlapping graph data under differential privacy. Tsinghua Sci. Technol.; 2021; 27, pp. 235-243. [DOI: https://dx.doi.org/10.26599/TST.2021.9010018]
247. Ferrari, L.; Rosi, A.; Mamei, M.; Zambonelli, F. Extracting urban patterns from location-based social networks. Proceedings of the 3rd ACM SIGSPATIAL International Workshop on Location-Based Social Networks; Chicago, IL, USA, 1 November 2011; pp. 9-16.
248. Aljably, R.; Tian, Y.; Al-Rodhaan, M.; Al-Dhelaan, A. Anomaly detection over differential preserved privacy in online social network. PLoS ONE; 2019; 14, e0215856. [DOI: https://dx.doi.org/10.1371/journal.pone.0215856] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31022238]
249. Liang, S.; Lam, J.; Lin, H. Secure Estimation with Privacy Protection. IEEE Trans. Cybern.; 2022; pp. 1-15. [DOI: https://dx.doi.org/10.1109/TCYB.2022.3151234]
250. Shan, F.; Ji, P.; Li, F.; Liu, W. A Smart Access Control Mechanism Based on User Preference in Online Social Networks. Proceedings of the International Conference on Mobile Multimedia Communications; Virtual Event, 23–25 July 2021; Springer: Cham, Switzerland, 2021; pp. 577-590.
251. Stokes, K. Cover-up: A probabilistic privacy-preserving graph database model. J. Ambient Intell. Humaniz. Comput.; 2019; pp. 1-8. [DOI: https://dx.doi.org/10.1007/s12652-019-01515-8]
252. Wen, G.; Liu, H.; Yan, J.; Wu, Z. A privacy analysis method to anonymous graph based on Bayes rule in social networks. Proceedings of the 2018 14th International Conference on Computational Intelligence and Security (CIS); Hangzhou, China, 16–19 November 2018; pp. 469-472.
253. Yin, L.; Feng, J.; Xun, H.; Sun, Z.; Cheng, X. A privacy-preserving federated learning for multiparty data sharing in social IoTs. IEEE Trans. Netw. Sci. Eng.; 2021; 8, pp. 2706-2718. [DOI: https://dx.doi.org/10.1109/TNSE.2021.3074185]
254. Rajabzadeh, S.; Shahsafi, P.; Khoramnejadi, M. A graph modification approach for k-anonymity in social network using the genetic algorithm. Soc. Netw. Anal. Min.; 2020; 10, pp. 1-17. [DOI: https://dx.doi.org/10.1007/s13278-020-00655-6]
255. Bourahla, S.; Laurent, M.; Challal, Y. Privacy preservation for social networks sequential publishing. Comput. Netw.; 2020; 170, 107106. [DOI: https://dx.doi.org/10.1016/j.comnet.2020.107106]
256. Aiello, L.M.; Ruffo, G. LotusNet: Tunable privacy for distributed online social network services. Comput. Commun.; 2012; 35, pp. 75-88. [DOI: https://dx.doi.org/10.1016/j.comcom.2010.12.006]
257. Kushwah, V.R.S.; Verma, K. Security and Privacy Challenges for Big Data on Social Media. Big Data Analytics in Cognitive Social Media and Literary Texts; Springer: Singapore, 2021; pp. 267-285.
258. Shao, Y.; Liu, J.; Shi, S.; Zhang, Y.; Cui, B. Fast de-anonymization of social networks with structural information. Data Sci. Eng.; 2019; 4, pp. 76-92. [DOI: https://dx.doi.org/10.1007/s41019-019-0086-8]
259. Zhang, C.; Jiang, H.; Wang, Y.; Hu, Q.; Yu, J.; Cheng, X. User identity de-anonymization based on attributes. Proceedings of the International Conference on Wireless Algorithms, Systems, and Applications; Harbin, China, 23–25 June 2019; pp. 458-469.
260. Fu, L.; Zhang, J.; Wang, S.; Wu, X.; Wang, X.; Chen, G. De-anonymizing social networks with overlapping community structure. IEEE/ACM Trans. Netw.; 2020; 28, pp. 360-375. [DOI: https://dx.doi.org/10.1109/TNET.2019.2962731]
261. Jiang, H.; Yu, J.; Cheng, X.; Zhang, C.; Gong, B.; Yu, H. Structure-Attribute-Based Social Network Deanonymization with Spectral Graph Partitioning. IEEE Trans. Comput. Soc. Syst.; 2021; 9, pp. 902-913. [DOI: https://dx.doi.org/10.1109/TCSS.2021.3082901]
262. Zhang, J.; Qu, S.; Li, Q.; Kang, H.; Fu, L.; Zhang, H.; Wang, X.; Chen, G. On Social Network De-anonymization with Communities: A Maximum A Posteriori Perspective. IEEE Trans. Knowl. Data Eng.; 2021; Early Access [DOI: https://dx.doi.org/10.1109/TKDE.2021.3124559]
263. Zhang, J.; Fu, L.; Long, H.; Meng, G.; Tang, F.; Wang, X.; Chen, G. Collective De-anonymization of Social Networks with Optional Seeds. IEEE Trans. Mob. Comput.; 2021; Early Access [DOI: https://dx.doi.org/10.1109/TMC.2021.3077520]
264. Miao, B.; Wang, S.; Fu, L.; Lin, X. De-anonymizability of social networks: Through the lens of symmetry. Proceedings of the Twenty-First International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing; Virtual Event, USA, 11–14 October 2020; pp. 71-80.
265. Creţu, A.M.; Monti, F.; Marrone, S.; Dong, X.; Bronstein, M.; de Montjoye, Y. Interaction data are identifiable even across long periods of time. Nat. Commun.; 2022; 13, pp. 1-11. [DOI: https://dx.doi.org/10.1038/s41467-021-27714-6]
266. Ji, S.; Wang, T.; Chen, J.; Li, W.; Mittal, P.; Beyah, R. De-sag: On the de-anonymization of structure-attribute graph data. IEEE Trans. Dependable Secur. Comput.; 2017; 16, pp. 594-607. [DOI: https://dx.doi.org/10.1109/TDSC.2017.2712150]
267. Li, K.; Lu, G.; Luo, G.; Cai, Z. Seed-free graph de-anonymization with adversarial learning. Proceedings of the 29th ACM International Conference on Information and Knowledge Management; Virtual Event, Ireland, 19–23 October 2020; pp. 745-754.
268. Jiang, H.; Yu, J.; Hu, C.; Zhang, C.; Cheng, X. SA framework based de-anonymization of social networks. Procedia Comput. Sci.; 2018; 129, pp. 358-363. [DOI: https://dx.doi.org/10.1016/j.procs.2018.03.089]
269. Sun, Q.; Yu, J.; Jiang, H.; Chen, Y.; Cheng, X. De-anonymizing Scale-Free Social Networks by Using Spectrum Partitioning Method. Procedia Comput. Sci.; 2019; 147, pp. 441-445. [DOI: https://dx.doi.org/10.1016/j.procs.2019.01.262]
270. Qu, Y.; Yu, S.; Zhou, W.; Niu, J. FBI: Friendship learning-based user identification in multiple social networks. Proceedings of the 2018 IEEE Global Communications Conference (GLOBECOM); Abu Dhabi, United Arab Emirates, 10–12 December 2018; pp. 1-6.
271. Qu, Y.; Ma, H.; Wu, H.; Zhang, K.; Deng, K. A Multiple Salient Features-Based User Identification across Social Media. Entropy; 2022; 24, 495. [DOI: https://dx.doi.org/10.3390/e24040495] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35455158]
272. Desai, N.; Das, M.L. DeSAN: De-anonymization against Background Knowledge in Social Networks. Proceedings of the 2021 12th International Conference on Information and Communication Systems (ICICS); Valencia, Spain, 24–26 May 2021; pp. 99-105.
273. Hirschprung, R.S.; Leshman, O. Privacy disclosure by de-anonymization using music preferences and selections. Telemat. Informatics; 2021; 59, 101564. [DOI: https://dx.doi.org/10.1016/j.tele.2021.101564]
274. Mao, J.; Tian, W.; Jiang, J.; He, Z.; Zhou, Z.; Liu, J. Understanding structure-based social network de-anonymization techniques via empirical analysis. EURASIP J. Wirel. Commun. Netw.; 2018; 2018, pp. 1-16. [DOI: https://dx.doi.org/10.1186/s13638-018-1291-2]
275. Qian, J.; Li, X.; Zhang, C.; Chen, L. De-anonymizing social networks and inferring private attributes using knowledge graphs. Proceedings of the IEEE INFOCOM 2016-The 35th Annual IEEE International Conference on Computer Communications; San Francisco, CA, USA, 10–14 April 2016.
276. Li, H.; Chen, Q.; Zhu, H.; Ma, D.; Wen, H.; Shen, X.S. Privacy leakage via de-anonymization and aggregation in heterogeneous social networks. IEEE Trans. Dependable Secur. Comput.; 2017; 17, pp. 350-362. [DOI: https://dx.doi.org/10.1109/TDSC.2017.2754249]
277. Feng, S.; Shen, D.; Nie, T.; Kou, Y.; He, J.; Yu, G. Inferring anchor links based on social network structure. IEEE Access; 2018; 6, pp. 17340-17353. [DOI: https://dx.doi.org/10.1109/ACCESS.2018.2814000]
278. Gulyás, G.; Simon, B.; Imre, S. An efficient and robust social network de-anonymization attack. Proceedings of the 2016 ACM on Workshop on Privacy in the Electronic Society; Vienna, Austria, 24–28 October 2016; pp. 1-11.
279. Horawalavithana, S.; Flores, J.A.; Skvoretz, J.; Iamnitchi, A. The risk of node re-identification in labeled social graphs. Appl. Netw. Sci.; 2019; 4, pp. 1-20. [DOI: https://dx.doi.org/10.1007/s41109-019-0148-x]
280. Wu, X.; Hu, Z.; Fu, X.; Fu, L.; Wang, X.; Lu, S. Social network de-anonymization with overlapping communities: Analysis, algorithm and experiments. Proceedings of the IEEE INFOCOM 2018-IEEE Conference on Computer Communications; Honolulu, HI, USA, 15–19 April 2018; pp. 1151-1159.
281. Zhou, J.; Fan, J. TransLink: User Identity Linkage across Heterogeneous Social Networks via Translating Embeddings. Proceedings of the IEEE INFOCOM 2019-IEEE Conference on Computer Communications; Paris, France, 29 April–2 May 2019.
282. Chen, X.; Song, X.; Cui, S.; Gan, T.; Cheng, Z.; Nie, L. User identity linkage across social media via attentive time-aware user modeling. IEEE Trans. Multimed.; 2020; 23, pp. 3957-3967. [DOI: https://dx.doi.org/10.1109/TMM.2020.3034540]
283. Halimi, A.; Ayday, E. Profile matching across online social networks. Proceedings of the International Conference on Information and Communications Security; Chongqing, China, 19–21 November 2021; Springer: Cham, Switzerland, 2021; pp. 54-70.
284. Tang, R.; Miao, Z.; Jiang, S.; Chen, X.; Wang, H.; Wang, W. Interlayer Link Prediction in Multiplex Social Networks Based on Multiple Types of Consistency between Embedding Vectors. IEEE Trans. Cybern.; 2021; Early Access [DOI: https://dx.doi.org/10.1109/TCYB.2021.3120134]
285. Zhou, F.; Wen, Z.; Zhong, T.; Trajcevski, G.; Xu, X.; Liu, L. Unsupervised User Identity Linkage via Graph Neural Networks. Proceedings of the GLOBECOM 2020-2020 IEEE Global Communications Conference; Taipei, Taiwan, 7–11 December 2020; pp. 1-6.
286. Chen, B.; Chen, X. MAUIL: Multilevel attribute embedding for semisupervised user identity linkage. Inf. Sci.; 2022; 593, pp. 527-545. [DOI: https://dx.doi.org/10.1016/j.ins.2022.02.023]
287. Wang, M.; Wang, W.; Chen, W.; Zhao, L. EEUPL: Towards effective and efficient user profile linkage across multiple social platforms. World Wide Web; 2021; 24, pp. 1731-1748. [DOI: https://dx.doi.org/10.1007/s11280-021-00882-7]
288. Jain, A.K.; Sahoo, S.R.; Kaubiyal, J. Online social networks security and privacy: Comprehensive review and analysis. Complex Intell. Syst.; 2021; 7, pp. 2157-2177. [DOI: https://dx.doi.org/10.1007/s40747-021-00409-7]
289. Waterval, R. How Information Sharing on Online Social Networks May Allow for Personalized Cyberattacks. Bachelor’s Thesis; University of Twente: Enschede, The Netherlands, 2022.
290. Safhi, A.; Adel, A.Z.; Alhibbi, M. Major Security Issue That Facing Social Networks with Its Main Defense Strategies. Tehnički Glasnik; 2022; 16, pp. 205-212. [DOI: https://dx.doi.org/10.31803/tg-20220124140610]
291. Tran, H.-Y.; Hu, J. Privacy-preserving big data analytics: A comprehensive survey. J. Parallel Distrib. Comput.; 2019; 134, pp. 207-218. [DOI: https://dx.doi.org/10.1016/j.jpdc.2019.08.007]
292. Shen, Y.; Gou, F.; Wu, J. Node Screening Method Based on Federated Learning with IoT in Opportunistic Social Networks. Mathematics; 2022; 10, 1669. [DOI: https://dx.doi.org/10.3390/math10101669]
293. Tawnie, T.C.; Kisalay, B.O. Interdependent privacy. Orbit J.; 2017; 1, pp. 1-14. [DOI: https://dx.doi.org/10.29297/orbit.v1i2.38]
294. Humbert, M.; Trubert, B.; Huguenin, K. A survey on interdependent privacy. Acm Comput. Surv.; 2019; 52, pp. 1-40. [DOI: https://dx.doi.org/10.1145/3360498]
295. Krishna, T.; Siva Rama, L.; Venkateswara, K.; Siva Prasad, P. Privacy control on location and co-location in interdependent data. Proceedings of the 2019 International Conference on Vision Towards Emerging Trends in Communication and Networking (ViTECoN); Vellore, India, 30–31 March 2019.
296. Gao, N.; Xue, H.; Shao, W.; Zhao, S.; Qin, K.K.; Prabowo, A.; Rahaman, M.S.; Salim, F.D. Generative adversarial networks for spatio-temporal data: A survey. ACM Trans. Intell. Syst. Technol.; 2022; 13, pp. 1-25.
297. Sosa, J.; Betancourt, B. A latent space model for multilayer network data. Comput. Stat. Data Anal.; 2022; 162, 107432. [DOI: https://dx.doi.org/10.1016/j.csda.2022.107432]
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Owing to the massive growth in internet connectivity, smartphone technology, and digital tools, the use of various online social networks (OSNs) has significantly increased. On the one hand, the use of OSNs enables people to share their experiences and information. On the other hand, this ever-growing use of OSNs enables adversaries to launch various privacy attacks to compromise users’ accounts as well as to steal other sensitive information via statistical matching. In general, a privacy attack is carried out by linking personal data available on the OSN site with social graphs (or statistics) published by the OSN service providers. The problem of securing user personal information to mitigate privacy attacks in OSN environments is a challenging research problem. Recently, many privacy-preserving solutions have been proposed to secure users’ data available over OSNs from prying eyes. However, a systematic overview of the research dynamics of OSN privacy, and the findings of the latest privacy-preserving approaches from a broader perspective, remain unexplored in the current literature. Furthermore, the significance of artificial intelligence (AI) techniques in the OSN privacy area has not been highlighted by previous research. To cover this gap, we present a comprehensive analysis of the state-of-the-art solutions that have been proposed to address privacy issues in OSNs. Specifically, we classify the existing privacy-preserving solutions into two main categories: privacy-preserving graph publishing (PPGP) and privacy preservation in application-specific scenarios of the OSNs. Then, we introduce a high-level taxonomy that encompasses both common and AI-based privacy-preserving approaches that have been proposed to combat the privacy issues in PPGP.
In line with these works, we discuss many state-of-the-art privacy-preserving solutions that have been proposed for application-specific scenarios (e.g., information diffusion, community clustering, influence analysis, and friend recommendation) of OSNs. In addition, we discuss the various latest de-anonymization methods (common and AI-based) that have been developed to infer either the identity or the sensitive information of OSN users from the published graph. Finally, some challenges of preserving the privacy of OSNs (i.e., social graph data) from malevolent adversaries are presented, and promising avenues for future research are suggested.
Details


1 Department of Computer Engineering, Gachon University, Seongnam 13120, Korea
2 Department of IT Convergence Engineering, Gachon University, Seongnam 13120, Korea