1. Introduction
The number of IoT and edge devices has increased significantly, resulting in extraordinary growth of generated data [1]. It has been predicted that global data will reach 180 trillion gigabytes and that around 80 billion nodes will be connected to the Internet by 2025 [1]. Nevertheless, much of this data is privacy-sensitive; storing it in data centers carries a risk of privacy breaches and is expensive in terms of communication [2].
To preserve the privacy of edge data and to decrease the communication cost, a different category of machine learning (ML) approaches is essential: one that moves processing onto the edge nodes so that clients' data never leave them. This is made possible by a now-prevalent approach called federated learning (FL), which is not merely a specific algorithm but a design framework for edge computing.
Federated learning is an ML method that trains a model on local data samples distributed over multiple edge devices or servers without any exchange of data. The term was first introduced in 2016 by McMahan et al. [3].
Federated learning distributes deep learning by eliminating the necessity of pooling the data in a single place [4], as shown in Figure 1. In FL, the model is trained at different sites over numerous iterations [5]. This stands in contrast to conventional ML techniques, where datasets are transferred to a single server, and to classical decentralized techniques that assume local datasets are identically distributed.
FL allows multiple nodes to form a joint learning model without the need to exchange their data samples [3], and it addresses critical problems such as access rights, access to heterogeneous data, privacy, and security. Applications of this distributed learning approach span several industries, including traffic prediction and monitoring [6], healthcare [7], telecom, IoT [8], transport and autonomous vehicles [9], pharmaceutics, and medical AI.
FL poses new challenges to existing privacy-preserving ML algorithms [10]. Beyond providing rigorous privacy guarantees, it is essential to develop computationally economical and communication-efficient techniques that tolerate dropped devices without losing accuracy (as represented in Figure 1).
1.1. Related Work
Federated learning (FL) is an evolving approach for solving privacy problems in distributed data. Many studies have designed new frameworks to improve this new paradigm of ML, but few surveys and literature reviews have evaluated this body of research. Those surveys are reviewed in this section.
Xu et al. performed a study focusing on the advancement of federated learning in healthcare informatics [7]. They summarized the general statistical challenges and their solutions, system challenges, and privacy issues in this regard. With this survey, they hope to provide useful resources for computational research on ML techniques that manage extensively scattered data without compromising privacy in health informatics. However, the survey lacks a discussion of the datasets used for health informatics systems.
Yang et al. proposed frameworks for secure federated learning [11]. They introduced a secure federated learning framework covering both vertical and horizontal federated learning as well as federated transfer learning, and they described the architecture and applications of federated learning. They also provided a detailed survey of existing research in this area. Besides, based on federated mechanisms, they proposed building data networks among organizations to share data without compromising user privacy. However, they did not provide a detailed taxonomy of the domains in which this technique can be applied.
Yang et al. surveyed the current difficulties of implementing federated learning as well as their solutions [12]. The authors also discussed mobile edge optimizations and concluded with the most vital challenges and problems for future FL research. However, they did not discuss the datasets used for implementing federated learning in edge networks.
Recently, the authors of [5] provided a broad narrative of the attributes and challenges of federated learning gathered from diverse published articles, although they mostly focus on cross-device FL, where the nodes are a very large number of IoT and mobile devices.
To the best of our knowledge, no systematic literature review discussing FL datasets and implementation techniques has been published to date. All the surveys in this area are summarized in Table 1, and a detailed comparison is given in Table 2.
1.2. Contribution and Significance
The key emphasis of this paper is to perform a systematic literature review (SLR) of present research studies that clearly describes the adoption of federated learning in multiple application areas.
- The main contribution of this research study is that it analyzes and investigates the state-of-the-art research on how federated learning is used to preserve client privacy.
- Furthermore, a taxonomy of FL algorithms is proposed to give data scientists an overview of this technique.
- Moreover, a complete analysis of the industrial applications that can benefit from FL implementation is presented.
- In addition, the research gaps and challenges are identified and explained for future research.
- Lastly, an overview of the available distributed datasets that can be used for this approach is given.
1.3. Organization of the Article
This research article is partitioned into seven main sections. Section 2 presents the background of the areas related to this research study and gives the reader basic knowledge. Section 3 discusses the protocol and methodology for conducting the SLR by defining the research questions (RQs), search scheme, search procedure, inclusion and exclusion criteria, and results. Section 4 presents the execution of the systematic review for our problem. Section 5 discusses the findings and outcomes: Section 5.1 addresses the applications of FL, Section 5.2 explains the algorithms of FL and their advantages, Section 5.3 describes the datasets used in FL, and the last subsection explains the challenges of deploying FL at a large scale. The article is concluded in Section 6, and Section 7 outlines future directions.
2. Background
Data play an important role in machine learning-based systems, since they drive effective model performance. The data produced hourly by a huge number of IoT devices give rise to a major resource-consumption challenge for the data science industry when pooling the data. Moreover, the privacy of IoT data can be at risk. To mitigate these issues, the FL technique provides an adaptable platform for data scientists.
Federated learning sets novel challenges for current privacy-preserving methods and algorithms [10]. Beyond providing strong privacy assurances, it is essential to develop computationally economical and communication-efficient methods that can tolerate dropped devices without compromising accuracy.
2.1. Iterative Learning
To attain performance comparable to centralized machine learning, FL employs an iterative method comprising multiple client-server exchanges, each known as a federated learning round [21]. Each round starts by diffusing the current global model state to the contributing nodes (participants), then training local models on those nodes to yield potential model updates, and finally processing and aggregating those local updates into a global update with which the central model is updated (see Figure 1).
For this methodology, a server (the FL server) performs the processing and aggregation of local updates into global updates. Local training is performed by the local nodes under the commands of the FL server. The iterative learning of the model proceeds in three major steps, i.e., initiation, iterative training, and termination, as shown in Figure 2 (a toy sketch of the full loop follows the steps below). The details of these steps are as follows:
Initiation: A model is selected for training and initialized. The nodes are activated and wait for commands from the central FL server.
Iterative training: These steps are executed for numerous iterations of learning rounds [12]:
Selection: A subset of edge devices is chosen to train on their own data samples, each receiving the same current statistical model from the FL server [22], whereas passive devices wait for the next iteration.
Configuration: The FL server asks the clients to train the current model on their local data in a stated manner [23].
Reporting: Every node returns its learned model to the FL server. The server aggregates and processes all results and stores the new model [21]. It also handles failures (such as a lost node connection). The process then returns to the selection step.
Termination: Upon reaching a stated termination criterion (such as the local accuracy of the nodes exceeding a target, or a maximal number of rounds being completed), the central server orders the termination of the iterative training. The FL server then treats the globally trained model as a robust model because it was trained on multiple heterogeneous sources.
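To make the three phases concrete, the toy Python sketch below walks through initiation, iterative training (selection, configuration, reporting, aggregation), and termination. It is a minimal illustration with scalar "models" and invented helper logic, not the API of any FL framework; the random dropout stands in for lost node connections.

```python
import random

def local_train(data, model, lr=0.1, epochs=3):
    """Configuration step: a node nudges the model toward its local data mean."""
    for _ in range(epochs):
        model -= lr * (model - sum(data) / len(data))
    return model

def federated_rounds(clients, target=0.05, max_rounds=50, fraction=0.7):
    model = 0.0                                             # initiation
    # Toy termination target only; a real server would use held-out accuracy,
    # since it never sees the clients' raw data.
    goal = sum(x for d in clients for x in d) / sum(len(d) for d in clients)
    for _ in range(max_rounds):                             # iterative training
        chosen = random.sample(clients, max(1, int(fraction * len(clients))))
        updates = []
        for data in chosen:
            if random.random() < 0.1:                       # simulate a dropped node
                continue
            updates.append(local_train(data, model))        # reporting
        if updates:
            model = sum(updates) / len(updates)             # aggregation
        if abs(model - goal) < target:                      # termination criterion
            break
    return model

clients = [[1.0, 2.0], [3.0, 2.5], [2.0, 1.5]]              # per-node local data
print(federated_rounds(clients))
```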
2.2. How FL Works
FL implementations build on the federated averaging method, "FedAvg", Google's first vanilla federated learning algorithm for tackling federated learning challenges. Since then, numerous variants of FedAvg have been created to handle many federated learning challenges, including "FedProx", "FedOpt", "FedMA", and "Scaffold" (outlined in Section 5.2).
The following is a high-level explanation of how the FedAvg algorithm works.
The goal of each round of FedAvg is to minimize the global model's objective f(w), which is the weighted average of the local devices' losses.
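In the standard notation of [3], with K clients where client k holds the index set P_k of n_k out of n total samples, this objective can be written as:

```latex
\min_{w \in \mathbb{R}^d} f(w), \qquad
f(w) = \sum_{k=1}^{K} \frac{n_k}{n} F_k(w), \qquad
F_k(w) = \frac{1}{n_k} \sum_{i \in \mathcal{P}_k} \ell_i(w),
```

where l_i(w) is the loss of the model w on example i.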
A random selection of clients/devices is taken. Each client receives the server's global model. The clients run SGD (stochastic gradient descent) on their loss functions in parallel and send the learned models to the FL server for aggregation. The server then updates its global model with the average of these local models. The procedure is then repeated for n more rounds of communication.
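The sketch below implements this loop for a toy linear-regression task in numpy. It follows the FedAvg structure (local SGD, then a sample-size-weighted average at the server), but the data, model, client count, and hyperparameters are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic federated data: each client holds a private shard (illustrative).
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(10):
    X = rng.normal(size=(40, 2))
    y = X @ true_w + 0.1 * rng.normal(size=40)
    clients.append((X, y))

def client_update(w, X, y, lr=0.05, epochs=5):
    """Local gradient descent on the client's squared-error loss F_k(w)."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for round_num in range(30):
    chosen = rng.choice(len(clients), size=5, replace=False)   # client selection
    updates, sizes = [], []
    for k in chosen:
        X, y = clients[k]
        updates.append(client_update(w_global.copy(), X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    # Server: weighted average of local models (the n_k / n weights).
    w_global = np.average(updates, axis=0, weights=sizes / sizes.sum())

print(w_global)  # approaches [2.0, -1.0]
```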
3. Systematic Review Protocol
A literature review is typically carried out to identify critical gaps or overlooked extents of a research field that necessitate more investigation or analysis. A systematic literature review (SLR), on the other hand, can be used to make relevant judgments or compile findings in a certain field. An SLR aids in identifying future research avenues and focusing on research gaps. Because an SLR evaluates all of the scholarly work on a subject to date, it necessitates a lot of labor and time. A consistent study approach, however, can demonstrate the completeness of an SLR.
The first step in this research project was to perform a literature review on the subject. Several fragments of research linked to the topic were identified during the initial search, and the problem was then refined in order to conduct the SLR. Analyzing the literature on federated learning showed that no SLR on the topic had yet been published, possibly because FL is still a new paradigm. An SLR can therefore be used to create a framework for federated learning. This SLR is conducted using the reference manual adapted from Kitchenham (2007) [24].
3.1. Research Objectives (RO)
The research objectives are as follows:
RO1. To explore the areas that can potentially obtain advantages from using FL techniques.
RO2. To evaluate the practicality and feasibility of federated learning in comparison with centralized learning in terms of privacy, performance, availability, reliability, and accuracy.
RO3. To explain the datasets used in different studies of federated learning and to highlight their experimental potential.
RO4. To explore the research trends in applying federated learning.
RO5. To highlight the challenges that can be encountered when employing federated learning on edge devices.
Later, a search string including primary, secondary, tertiary, and additional keywords was selected to capture all the potential work for the SLR.
(FL) or (federated (deep or machine) learning) or (federated (application or framework or implementation)) or (federated (algorithm or method or approach)) or (federated learning in (edge computing or IoT or smart cities or NLP or healthcare or autonomous industry))
Figure 3 depicts the crucial keywords. The core keywords for this research study are the basic phrases used for federated machine learning, while the secondary keywords cover its application in edge computing and other fields. The secondary and additional keywords are used to locate studies on alternative applications in various industries, as well as concerns or challenges discovered during the process. Figure 4 depicts the procedure for conducting the review.
3.2. Research Questions
This SLR aims to summarize and build a comprehension of federated learning with respect to its usage, applications, and challenges so as to fulfill the research objectives. To this end, the research objectives are transformed into the research questions (RQs) shown in Table 3:
3.3. Search Strategy
In this section, the search strategy for obtaining literature to analyze and answer the aforementioned RQs is presented.
3.3.1. Database
The most popular and reliable literature sources are used for this SLR. The search process for this paper is based upon the digital libraries as shown in Table 4.
3.3.2. Study Selection
The study concentrates on high-quality scholarly research in the area of federated learning. After retrieval of the initial results, irrelevant papers were filtered out by applying a set of inclusion/exclusion criteria. These criteria reflect the most relevant and appropriate literature. Table 5 provides an overview of the criteria by which articles were selected or excluded for this research.
3.3.3. Data Extraction
At this phase, an Excel spreadsheet was constructed as instructed in the guidelines by Kitchenham [24]. The main rationale behind the spreadsheet is to catalog the publications and the data obtained in critical examinations and to track the information needed to answer the RQs.
The gathered data include each paper's title, keywords, abstract, full text, year of publication, and type of research.
Papers marked Yes in all of the filter columns were then moved to a new sheet to extract the information needed to address the RQs:
Applications: One or more applications of FL that are presented by a paper.
Implementation: One or more implementation approaches for FL or algorithms that are explained, described, compared, or discussed by a paper.
Dataset: The paper provides or extends a distributed dataset for FL.
Challenges: An array of issues pertinent to federated learning that need to be addressed by researchers in this area.
Recent research papers (ranging from 2016 to 2021) related to these factors were then considered in this systematic literature review.
4. Systematic Review Execution
The search process started in April 2020 and proceeded until May 2021. Initially, a total of 7546 papers published between 2016 and 2020 were found through the digital libraries, of which 478 were shortlisted after the title and keyword filter. After reviewing the abstracts, 185 publications remained for the review. Afterward, with 80 papers excluded following a detailed analysis of their substance, 105 papers were chosen as primary studies. The selection process is depicted in Figure 5, and Table 6 summarizes the detailed numbers for each phase of the filtration process.
Studies Related to the Research Question
The selected papers were organized into clusters with respect to the research questions. The studies related to each question form one cluster, as shown in Table 7.
Table 7. Research studies related to the research questions.
Sr. No | RQs | Category | Related Studies |
---|---|---|---|
01 | RQ1 | Applications | [3,7,9,11,13,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44] |
02 | RQ2 | Algorithms and models | [3,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60] |
03 | RQ3 | Data sets | [61,62,63,64,65,66] |
04 | RQ4 | Challenges | [3,5,12,13,19,25,26,30,45,46,49,50,67,68,69,70,71,72,73] |
5. Discussions
This study examined 105 research studies to find the applications, algorithms, datasets, and challenges that can be encountered while employing FL. After careful filtration and examination of the shortlisted studies, a detailed analysis of the approaches and outcomes of some potential studies was achieved; it is presented in Table A1 of Appendix A. The number of published research articles and technical reports in the area of FL is increasing exponentially, as depicted in Figure 6, making FL an emerging ML technique. Of the 105 research articles, eight were from the year 2016, followed by seven articles published in 2017, 10 from the year 2018, and 65 published in the year 2019. Up to the first half of 2020, there were 35 that were shortlisted for this study.
The discussion is organized according to the answers to the research questions. The RQ1-related studies are highlighted in Section 5.1, Section 5.2 discusses the studies focusing on RQ2, Section 5.3 summarizes the datasets relevant to RQ3, and the challenges summarized in Section 5.4 address RQ4.
5.1. Applications of Federated Learning
FL is a promising distributed ML approach with the advantage of privacy preservation. It allows multiple nodes to build a joint learning model without exchanging their data. That is how it addresses critical problems such as data access rights, privacy, security, and access to heterogeneous data types.
Its applicability is claimed in a variety of fields such as autonomous vehicles, traffic prediction and monitoring, healthcare, telecom, IoT, pharmaceutics, industrial management [74], industrial IoT [75], and healthcare and medical AI [76]. The proportional trend of FL use in different fields is depicted in Figure 7. Its first application was in Google's GBoard, where it showed tremendous results, some of which are summarized in Table 8. Its applicability in other areas such as finance [77] and quantum computing [78] is still being explored.
Federated learning offers an extensive variety of possible applications in many areas such as NLP, IoT, etc. (as shown in Figure 7). FL is adopted particularly in circumstances where privacy concerns collide with the need for data to develop algorithms. This diverse set of FL applications is organized into a taxonomy, as shown in Figure 8. The most well-known federated learning initiatives are currently carried out on smartphones (as shown in Table 8); however, the same approaches can be used on PCs, IoT, and other edge devices such as autonomous vehicles.
Some of the most prominent current and potential FL applications are described below.
5.1.1. Google Gboard
Google first opted for a federated learning strategy in GBoard to provide better word recommendations while maintaining client privacy. This happened to be the first real-world application of FL: the model is trained on the words typed by the user, and the trained model is sent to the server. The aggregated model is then used to enhance Google's predictive text feature [28]. This gives users persistently better keyboard suggestions with no need to share their data.
5.1.2. Healthcare
Modern healthcare systems entail a collaboration among hospitals, research labs and institutes, and federal agencies for the betterment of healthcare nation-wide [79]. Furthermore, collaborative research among nations is significant when worldwide health emergencies, such as COVID-19, are being encountered [80].
Table 8. Applications of federated learning in different domains and industries (RQ1).
Domain | Applications | Related Studies |
---|---|---|
Edge computing | FL is implemented in edge systems using the MEC (mobile edge computing) and DRL (deep reinforcement learning) frameworks for anomaly and intrusion detection. | [8,22,81,82,83,84,85] |
Recommender systems | To learn the matrix, federated collaborative filter methods are built utilizing a stochastic gradient approach and secured matrix factorization using federated SGD. | [86,87,88,89,90,91,92] |
NLP | FL is applied in next-word prediction in mobile keyboards by adopting the FedAvg algorithm to learn CIFG [93]. | [28,94,95,96,97] |
IoT | FL could be one way to handle data privacy concerns while still providing a reliable learning model. | [12,98,99,100] |
Mobile service | The predicting services are based on the training data coming from edge devices of the users, such as mobile devices. | [28,94] |
Biomedical | The volume of biomedical data is continually increasing. However, due to privacy and regulatory considerations, the capacity to evaluate these data is limited. By collectively building a global model for the prediction of brain age, the FL paradigm in the neuroimaging domain works effectively. | [101,102,103,104,105,106,107,108] |
Healthcare | Owkin [31] and Intel [32] are researching how FL could be leveraged to protect patients’ data privacy while also using the data for better diagnosis. | [7,79,109,110,111,112,113] |
Autonomous industry | Another important reason to use FL is that it can potentially minimize latency. Federated learning may enable autonomous vehicles to behave more quickly and correctly, minimizing accidents and increasing safety. Furthermore, it can be used to predict traffic flow. | [9,34,114,115,116,117] |
Banking and finance | The FL is applied in open banking and in finance for anti-financial crime processes, loan risk prediction, and the detection of financial crimes. | [77,118,119,120,121,122] |
In the healthcare industry, data privacy and security are extremely difficult to manage [123]. Many organizations hold large volumes of sensitive and valuable patient data, which hackers are eager to get their hands on, and nobody wants an unpleasant diagnosis to be made public [7]. The abundance of data in these repositories is also quite valuable for fraud, such as identity theft and insurance fraud. Because of the vast amounts of data and the significant threats the healthcare industry faces, most nations have enacted strong laws governing how healthcare data ought to be handled, such as the HIPAA standards in the United States. These restrictions are fairly strict, and an organization that breaks them faces severe penalties, which benefits patients who are concerned about their personal information being misused. These laws, on the other hand, make it harder to use certain sorts of data in studies that could lead to new medical advances. Because of this complicated legal situation, companies such as Owkin [31] and Intel [32] are looking at how FL may be used to preserve patients' privacy while still putting their data to good use. Owkin is developing a platform that employs FL to secure patient data in experiments that identify drug toxicity, forecast disease progression, and estimate survival rates for rare cancers. As a proof of concept, Intel teamed up with the University of Pennsylvania's Center for Biomedical Image Computing and Analytics in 2018 to show how federated learning may be used in medical imaging. According to the collaboration, a DL model trained with their FL methodology can reach 99 percent of the accuracy of a model trained using conventional approaches.
5.1.3. Autonomous Vehicles
Federated learning has two primary applications for self-driving automobiles. The first is that it may safeguard user data privacy: many people are uncomfortable with the thought of their journey logs and other travel data being shared and evaluated on a central server. By updating the algorithms with only the learned information rather than whole user records, federated learning could improve user privacy.
Another important reason to use federated learning is its potential to reduce latency. When there are many self-driving vehicles on the roads in the future, they will need to respond quickly to safety incidents.
Because conventional cloud-based learning entails huge data transfers and a slower learning rate, federated learning has the potential to let autonomous vehicles respond more quickly and correctly, minimizing accidents and increasing safety.
Much research demonstrates that FL has the potential to revolutionize autonomous vehicles and the Internet of Things [124]. In [114], the authors claimed that, due to the large number of self-driving cars on the road, it is necessary to respond quickly to real-world circumstances. However, various security concerns arise with the standard cloud-based method. They argued that federated learning may be utilized to tackle this problem and eliminate such threats by limiting the data transfer volume and speeding up the learning process of autonomous vehicles.
5.2. Algorithms and Models
The algorithms and models used for the implementation of federated learning and its better performance are summarized in Table 9. In this table, the most adopted algorithms are mentioned with respect to model and privacy mechanism, applied areas, and related studies of these algorithms.
Konečný et al. (2016) proposed algorithms through which each client independently computes an update to the current model based on its local data and sends the update to a central server [50]. A new global model is computed at the central server by combining the updates from the clients. Mobile phones are the system's key clients, and their communication efficiency is critical. The researchers offered two approaches to reduce the cost of uplink transmission: structured updates and sketched updates.
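As a flavor of the sketched-update idea, the toy functions below transmit only a random subset of update coordinates plus their indices instead of the dense vector; the keep fraction and helper names are illustrative and do not reproduce the exact schemes of [50].

```python
import numpy as np

def sketch_update(update, keep_fraction=0.1, rng=np.random.default_rng(0)):
    """Client side: subsample a random mask of coordinates before upload."""
    k = max(1, int(keep_fraction * update.size))
    idx = rng.choice(update.size, size=k, replace=False)
    return idx, update[idx]                 # ~10x fewer floats on the uplink

def unsketch(idx, values, size):
    """Server side: scatter the received coordinates into a dense update."""
    dense = np.zeros(size)
    dense[idx] = values
    return dense

delta = np.random.default_rng(1).normal(size=1000)   # a local model delta
idx, vals = sketch_update(delta)
recovered = unsketch(idx, vals, delta.size)
```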
Chen et al. described an end-to-end tree boosting system named XGBoost [51], which is widely used by data scientists to obtain state-of-the-art results on several ML (machine learning) tasks. For tree learning, they proposed a weighted quantile sketch, and for sparse data, they proposed a novel sparsity-aware algorithm. The paper also provides insights on data compression and sharding to make XGBoost scalable. Conclusively, XGBoost uses far fewer resources than other systems and scales to billions of examples [4].
Nilsson et al. benchmarked three FL algorithms: federated averaging (FedAvg), CO-OP, and federated stochastic variance reduced gradient [125]. The algorithms were evaluated on the MNIST dataset using both i.i.d. and non-i.i.d. partitionings of the data. The study found FedAvg to be the highest-accuracy algorithm among the three.
Chawla et al. proposed an over-sampling technique named SMOTE (synthetic minority over-sampling technique), which generates minority-class records to rebalance the data sample [126]. Han et al. enhanced SMOTE by considering the data distribution of borderline classes [127], but it also requires a larger dataset. However, this kind of method is not directly appropriate for federated learning because the clients' data are distributed. Some other approaches, such as XGBoost [51] and AdaBoost, can reduce bias as they learn from misclassification; however, these algorithms are sensitive to outliers and noise.
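For reference, the core interpolation step of SMOTE [126] can be sketched in a few lines; this minimal version uses brute-force nearest neighbors and omits the class-balancing bookkeeping of the full method.

```python
import numpy as np

def smote_samples(minority, n_new, k=5, rng=np.random.default_rng(0)):
    """Generate synthetic minority samples by interpolating toward neighbors."""
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        x = minority[i]
        dists = np.linalg.norm(minority - x, axis=1)   # brute-force k-NN
        neighbors = np.argsort(dists)[1:k + 1]         # skip the point itself
        x_nn = minority[rng.choice(neighbors)]
        gap = rng.random()                             # random point on the segment
        synthetic.append(x + gap * (x_nn - x))
    return np.array(synthetic)

minority = np.random.default_rng(1).normal(size=(20, 2))   # toy minority class
print(smote_samples(minority, n_new=5).shape)              # (5, 2)
```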
Table 9. Algorithms of federated learning proposed in different studies (RQ2).
Ref | Algorithms | Model Implemented | Area | Privacy Mechanism | Remarks | Applied Areas | Related Studies |
---|---|---|---|---|---|---|---|
[23] | Secure aggregation FL | Practicality enhancement | CM | Privacy guarantees | State-of-the-art algorithm for aggregation of data. Applicability across the field. | [128,129,130] | |
[66] | LEAF | \ | Benchmark | \ | Benchmark | Language modeling, sentiment analysis, and image classification. | [131] |
[132] | FedML | \ | Benchmark | \ | Benchmark | Language modeling, sentiment analysis, and image classification. | [128,133,134,135,136,137,138] |
[3] | FedAvg | Neural network | Effective Algorithm | \ | SGD based | Model averaging and convergence of algorithms. | [27,71,135,139,140] |
[141] | FedSVRG | Linear model | Efficient communication and convergence. | [142] | |||
[143] | FedBCD | NN | Reduction of communication cost. | [144,145] | |||
[69] | FedProx | \ | Algorithm accuracy, convergence rate, and autonomous vehicles. | [146] | |||
[147] | Agonistic FL | NN, LM | Optimization problems and reduction of communication cost. | [148,149] | |||
[150] | FedMA | NN | NN specialized | Communication efficiency, convergence rate, NLP, and image classification. | [151] | ||
[152] | PFNM | NN | Language modeling and image classification. | [150] | |||
[153] | Tree-based FL | DT | DP | DT-specialized | Privacy preservation. | [154] | |
[155] | SimFL | hashing | |||||
[10] | FedXGB | Algorithm accuracy and convergence rate. | [155] | ||||
[156] | FedForest | Privacy preservation and accuracy of algorithms. | [157] | ||||
[56] | SecureBoost | DT | CM | DT-specialized | Privacy preservation, scalability, and credit risk analysis. | [158] | |
[159] | Ridge Regression FL | LM | CM | LM-specialized | Reduction in model complexity. | [160,161] | |
[162] | PPRR | [163] | |||||
[164] | Linear regression FL | Global regression and goodness-of-fit diagnosis. | [165,166] | ||||
[167] | Logistic regression FL | Biomedical and image classification. | [168] | ||||
[169] | Federated MTL | \ | Multi-task learning | Simulation on human activity recognition and vehicle sensor. | [170] | ||
[171] | Federated meta-learning | NN | Meta-learning | Efficient communication, convergence, and recommender system. | [89,171,172,173,174,175] | ||
[64] | Personalized FedAvg | Efficient communication, convergence, anomaly detection, and IoT. | [84,143,171,176] | ||||
[177] | LFRL | Reinforcement learning | Cloud robotic systems and autonomous vehicle navigation. | [178] |
Certain protocols and frameworks must be deployed on edge networks to successfully implement the federated learning approach. Some of these are discussed in this section.
Wang et al. proposed integrating deep reinforcement learning (DRL) with federated learning to improve edge systems [179]. The integration was used to optimize mobile edge computing, communication, and caching. To exploit edge nodes and collaboration among devices, they designed the "In-Edge AI" framework. While this framework was demonstrated to reach near-optimal performance, the training overhead remained relatively low. Finally, they discussed challenges and opportunities that reveal a promising future for "In-Edge AI", such as AI acceleration in edge computing [6,81].
Xu et al. performed a survey focusing on the progress of federated learning in healthcare informatics [7]. They summarized the general statistical challenges and their solutions, system challenges, as well as privacy issues in this regard. With the results of this survey, they provide useful resources for computational research on machine learning techniques to manage extensive scattered data without ignoring its privacy and health informatics.
Yang et al. proposed frameworks for secure federated learning [11]. They introduced a comprehensive secure FL framework, including horizontal and vertical FL as well as federated transfer learning, and provided definitions, architecture, and applications for FL. They also provided a detailed survey of existing works in this area. Besides, based on federated mechanisms, they proposed building data networks among organizations to share data without compromising user privacy [9].
Basnayake, V. developed a method to improve sensor measurement reliability in a mobile robot network [180], considering the cost of repairing faulty sensors as well as inter-robot communication. They built a system for anticipating sensor failures using sensor features. The wireless communication and network-wide sensor replacement costs were then minimized subject to the sensor measurement reliability constraint. For this task, they used convex optimization to construct an algorithm that gives the optimal wireless communication strategy and sensor selection. To detect sensor failures and estimate sensor properties in a distributed manner, they used federated learning. Finally, they ran extensive simulations and compared the proposed mechanism to existing state-of-the-art procedures to demonstrate its effectiveness.
Sattler et al. (2019) presented clustered federated learning (CFL) to address the suboptimal results of FL when the local clients' data distributions diverge [169]. CFL is a federated multi-task learning (FMTL) framework that exploits geometric properties of the federated learning loss surface to group the client population into clusters with jointly trainable data distributions. No modifications to the FL communication protocol are required for CFL. It applies to deep neural networks and gives strong mathematical guarantees on the clustering quality. CFL handles client populations that vary over time and is also flexible enough to preserve privacy. CFL is considered a post-processing mechanism that achieves goals similar to or greater than those of FL.
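The clustering signal in CFL comes from the pairwise cosine similarity between the clients' weight updates. The sketch below computes such a similarity matrix and performs a naive threshold split; it illustrates the idea only and is not the actual bipartitioning rule of [169].

```python
import numpy as np

def cosine_similarity_matrix(updates):
    """Pairwise cosine similarities between flattened client updates."""
    U = np.stack([u / np.linalg.norm(u) for u in updates])
    return U @ U.T

def naive_bipartition(updates, threshold=0.0):
    """Clients similar to client 0 form one cluster, the rest the other
    (a toy stand-in for CFL's optimal bipartitioning)."""
    sim = cosine_similarity_matrix(updates)
    a = [i for i in range(len(updates)) if sim[0, i] > threshold]
    b = [i for i in range(len(updates)) if sim[0, i] <= threshold]
    return a, b

rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(6)]   # fake client weight updates
print(naive_bipartition(updates))
```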
Mohri et al. proposed agnostic federated learning, a framework in which the centralized model is optimized for any target distribution formed by a mixture of the client distributions [147]. They suggested that the framework yields a natural notion of fairness. To solve the corresponding optimization problems, they also proposed a fast stochastic optimization algorithm, proving convergence bounds under a convex hypothesis set and a convex loss function. They demonstrated the advantages of their approach on different datasets. Their framework is also of interest for other learning scenarios such as drifting, cloud computing, and domain adaptation [12].
Han et al. discussed the significance of imbalanced datasets and their broad application areas in data mining [127]. They then summarized the evaluation metrics and the existing methods for solving imbalance problems. To address this issue, the synthetic minority over-sampling technique (SMOTE) is one of the over-sampling techniques used; Han et al. proposed two new minority over-sampling methods, borderline-SMOTE1 and borderline-SMOTE2.
In [181], the authors introduced DropConnect, a generalization of Dropout for regularizing large, fully connected layers in neural networks. In contrast to Dropout, which sets a randomly selected subset of activations to zero in each layer, DropConnect sets a randomly selected subset of weights to zero. They compared DropConnect to Dropout on a range of datasets, aggregated multiple DropConnect-trained models, and showed state-of-the-art results on different image recognition benchmarks.
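The distinction is easy to state in code: Dropout masks activations, while DropConnect masks individual weights. The shapes, drop probability, and inverted scaling below are illustrative choices, not the exact training recipe of [181].

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))      # a batch of activations
W = rng.normal(size=(8, 16))     # fully connected layer weights
p = 0.5                          # drop probability

# Dropout: zero a random subset of *activations*, then apply the layer.
dropout_out = (x * (rng.random(x.shape) > p)) @ W / (1 - p)

# DropConnect: zero a random subset of *weights* for this forward pass.
dropconnect_out = x @ (W * (rng.random(W.shape) > p)) / (1 - p)
```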
5.3. Datasets for Federated Learning
Numerous datasets are accessible for implementing federated learning. Some are available to the public while others are not; this section provides an overview of the publicly available ones.
Federated datasets are partitioned across different client devices, and several conventional datasets have been turned into federated datasets for experimentation. The LEAF benchmark [66] provides some public federated datasets and an evaluation framework; these and other datasets and models used in publications at top-tier machine learning conferences over the past two years are summarized in Table 10 below.
Table 10. Datasets used for the research in FL (RQ3).
Dataset | No. of Data Items | No. of Clients | Details |
---|---|---|---|
Street 5 [61] | 956 | 26 | |
Street 20 [61] | 956 | 20 | |
Shakespeare [3] | 16,068 | 715 | Dataset constructed from “The Complete Works of William Shakespeare”. |
CIFAR-100 | 60,000 | 500 | Federated CIFAR-100 produced by Google [182] by randomly distributing the data among 500 nodes, with each node holding 100 records [59]. |
StackOverflow [62] | 135,818,730 | 342,477 | Google TFF [65] team maintained this federated dataset comprised of data from StackOverflow. |
Federated EMNIST | 671,585 and 77,483 | 3400 | A federated partition version of EMNIST [66] that exhibits natural heterogeneity stemming from writing style. |
5.4. Challenges and Research Scope
Several other issues in this field arise as challenging problems to be addressed, such as resource allocation [183], data imbalance, and statistical heterogeneity, as depicted in Figure 9. All of the stated challenges are described in this section and summarized in Table 11.
Table 11. Challenges in implementing federated machine learning.
Ref | Year | Research Type | Problem Area | Contribution | Related Researches |
---|---|---|---|---|---|
[25] | 2018 | Experimental | Statistical heterogeneity | “They demonstrated a mechanism to improve learning on non-IID data by creating a small subset of data which is globally shared over the edge nodes.” | [3,26,30,67,184,185] |
[3] | 2017 | Experimental | Statistical and communication cost | “They experimented a method for the FL of deep networks, relying on iterative model averaging, and conducted a detailed empirical evaluation.” | [3,25,26,67,186,187] |
[67] | 2020 | Experimental | Convergence analysis and resource allocation | “They presented a novel algorithm for FL in wireless networks to resolve resource allocation optimization that captures the trade-off between the FL convergence.“ | [3,25,26,27] |
[49] | 2019 | Experimental | Communication cost | “They proposed a technique named CMFL, which provides client nodes with the feedback regarding the tendency of global model updations.” | [45,46] |
[45] | 2018 | Framework | Communication cost | “ A framework is presented for atomic sparsification of SGD that can lead to distributed training in a short time.” | [3,46,49,50] |
[26] | 2019 | Experimental | Statistical heterogeneity | “They demonstrated that the accuracy degradation in FL is usually because of imbalanced distributed training data and proposed a new approach for balancing data using [188].” | [3,25,30,67,68,69,70] |
[50] | 2017 | Experimental | Communication efficiency | “They proposed structured updates, parametrized by using less number of variables, which can minimize the communication cost by two orders of magnitude.” | [45,46,49] |
[46] | 2018 | Numerical Experiment | Communication cost | “They performed analysis of SGD with k-sparsification or compression and showed that this approach converges at the same rate as vanilla SGD (equipped with error compensation).” | [3,46,49,50] |
[189] | 2020 | Experimental | Bias of data | “They demonstrated that generative models can be used to resolve several data-related issues even when ensuring the data’s privacy. They also explored these models by applying it to images using a novel algorithm for differentially private federated GANs and to text with differentially private federated RNNs.” | [47,48] |
5.4.1. Imbalanced Data
Using their local data, each edge node in FL trains a shared model. As a result, the distribution of data across edge devices depends on their usage; for example, cameras in a park capture more photographs of humans than cameras located in the wild. To make these FL imbalances easier to distinguish, we divide them into three categories:
Size imbalance: when the size of each edge node’s data sample is uneven.
Local imbalance: the nodes do not share the same data distribution; this is also known as a non-i.i.d. (not independent and identically distributed) setting.
Global imbalance: denotes a collection of data that is class imbalanced across all nodes.
To explain the effect of imbalanced data on the training process, one can use the federated learning approach to train a CNN (convolutional neural network) on an imbalanced distributed dataset; a sketch of constructing such a partition follows.
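A common way to simulate such local and global imbalance in experiments is Dirichlet-based label partitioning; the sketch below (with invented sizes and concentration parameter `alpha`) assigns each client a skewed class mixture, where a smaller `alpha` means stronger imbalance.

```python
import numpy as np

def dirichlet_partition(labels, n_clients=10, alpha=0.5,
                        rng=np.random.default_rng(0)):
    """Split sample indices across clients with Dirichlet-skewed class mixtures."""
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        proportions = rng.dirichlet(alpha * np.ones(n_clients))  # class c split
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for i, part in enumerate(np.split(idx, cuts)):
            client_indices[i].extend(part.tolist())
    return client_indices

labels = np.random.default_rng(1).integers(0, 10, size=5000)  # toy label vector
parts = dirichlet_partition(labels)
print([len(p) for p in parts])   # uneven shard sizes, i.e., size imbalance
```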
5.4.2. Expensive Communication
FL networks potentially encompass a gigantic number of devices [17] (such as millions of notebooks and hand-held devices), and communication in the network may be slower than local computation by orders of magnitude. Communication in such networks is costlier than in traditional data center environments. To train a model on the data held by devices in an edge-based network, communication-efficient methods must be developed that iteratively send short messages or model updates as part of the training process, rather than transferring the complete dataset over the network.
5.4.3. Systems Heterogeneity
Federated networks are natively heterogeneous due to differences in network connectivity (Wi-Fi, 3G, 4G, 5G), hardware (CPU, RAM), power (battery level), communication, storage, and the computing capacities of nodes. Furthermore, due to device- and network-size-related limits, only a small fraction of end nodes is active at any given time. The devices may be unreliable, and an active device may drop out at any iteration. These system-level properties make fault tolerance essential.
5.4.4. Statistical Heterogeneity
Edge nodes frequently collect and share data in a non-i.i.d. manner across the network [12,25,139,184]; for next-word prediction, for instance, mobile phone users employ a wide range of vocabularies. Furthermore, the amount of data on different edges may differ, and there may be an underlying structure reflecting the interaction between devices and their associated distributions. This data generation paradigm challenges the widely held i.i.d. assumptions of distributed optimization, raises the likelihood of stragglers, and may increase the complexity of analysis, modeling, and assessment.
5.4.5. Privacy Concerns
Privacy is often a major concern in FL applications, in comparison to learning on centralized data in data centers [12]. By sharing only model updates (such as gradient information) rather than the raw data, FL takes a step toward securing user data. However, transmitting local model updates throughout the training process may still divulge sensitive information to the central server or a third party. While current efforts try to increase the privacy of federated learning through mechanisms such as differential privacy [190] and secure multiparty computation [11], these approaches often provide privacy at the cost of reduced system efficiency or model accuracy. Recognizing and assessing these trade-offs, theoretically and empirically, is a significant task in building private federated learning systems.
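As an illustration of this privacy/accuracy trade-off, a differentially private variant of federated training typically clips each client's update and adds calibrated Gaussian noise before aggregation. The sketch below shows only this clip-and-noise step; the clip norm and noise multiplier are arbitrary, and a real deployment would pair this with a vetted privacy accountant.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1,
                     rng=np.random.default_rng(0)):
    """Clip the update's L2 norm, then add Gaussian noise (DP-style)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

delta = np.random.default_rng(1).normal(size=100)   # a local model delta
private_delta = privatize_update(delta)             # safer to transmit
```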
All of the challenges addressed in the state-of-the-art literature are summarized in Table 11, together with the corresponding methodologies and contributions.
6. Conclusions
Federated learning enables the collaborative training of machine learning and deep learning models and the optimization of mobile edge networks. FL allows multiple nodes to form a joint learning model and thereby addresses critical problems such as access rights, access to heterogeneous data, privacy, and security. Applications of this distributed learning approach span several industries, including traffic prediction and monitoring, healthcare, telecom, IoT, transport and autonomous vehicles, pharmaceutics, and medical AI. This paper summarized, through a detailed review of the literature, how federated learning is used to preserve client privacy. The search procedure was performed from April 2020 to May 2021, starting from an initial total of 7546 papers published between 2016 and 2020. After careful screening and filtering, 105 papers were selected that adequately address the research questions of this study. The paper provides a systematic literature review of FL applications across domains, of the algorithms, models, and frameworks of federated learning, and of its scope of application in different domains. Moreover, this study discusses the current challenges of implementing FL, and a taxonomy of FL implementations over a variety of domains is proposed. The survey reveals that healthcare and IoT have vast implementation opportunities for FL models, as 30% and 25% of the selected studies used FL in healthcare and edge applications, respectively. A growing, real-world trend of FL research is seen in NLP, with more than 10% of the total literature. The domains of recommender systems, FinTech, and the autonomous industry can also adopt FL, but the challenges these domains may encounter are statistical heterogeneity, system heterogeneity, data imbalance, resource allocation, and privacy.
7. Future Directions
To mitigate data privacy concerns, along with providing a transfer learning paradigm, FL has emerged as an innovative learning platform that enables edge devices to train a model with their own data. With the growing storage and computation capacity of edge nodes, such as autonomous vehicles, smartphones, and tablets, and with fast communication such as 5G, FL has revolutionized machine learning in the modern era; its applications are thus cross-domain. However, certain areas require further development of FL. For example, the convergence of its baseline aggregation algorithm, FedAvg, is application-dependent, and more refined aggregation methods are worth exploring. Similarly, given the heavy computation required for FL, resource management can play an important part, so the optimization of communication, computation, and storage costs for edge devices during model training needs to be refined and matured. In addition, most studies cover areas such as IoT and healthcare, but more application areas can benefit from this learning paradigm, such as food delivery systems, VR applications, finance, public safety, hazard detection, and traffic control and monitoring.
Conceptualization, M.S. and T.U.; methodology, M.S.F.; software, T.U.; validation, M.S.F., M.S., T.U. and B.-S.K.; formal analysis, M.S.; investigation, M.S.; resources, M.S. and M.S.F.; data curation, T.U. and B.-S.K.; writing—original draft preparation, M.S.; writing—review and editing, M.S., B.-S.K. and T.U.; visualization, M.S.; supervision, M.S.F. and T.U.; project administration, B.-S.K.; funding acquisition, B.-S.K. All authors have read and agreed to the published version of the manuscript.
This research was funded by the National Research Foundation (NRF), Korea, under project BK21 FOUR (F21YY8102068).
The authors declare no conflict of interest.
AST-GNN | Attention-based Spatial-Temporal Graph Neural Networks |
ATV | Aggregated Trust Value |
CMFL | Communication Mitigated Overhead for Federated Learning |
CNN | Convolutional Neural Networks |
DFNAS | Direct Federated NAS |
DL | Deep Learning |
DNN | Deep Neural Networks |
DP | Differential Privacy |
DRL | Deep Reinforcement Learning |
DT | Decision Trees |
FASTGNN | Federated Attention-based Spatial-Temporal Graph Neural Networks |
FedAvg | Federated Averaging |
FedBCD | Federated Stochastic Block Coordinate Descent |
FederatedMTL | Federated Multi-Task Learning framework |
FedMA | Federated Learning with Matched Averaging |
FL | Federated Learning |
GAN | Generative Adversarial Networks |
HAR | Human Activity Recognition |
IID | Independent and Identically Distributed |
IoT | Internet-of-Things |
IoV | Internet of Vehicles |
KLD | Kullback–Leibler Divergence |
LFRL | Lifelong Federated Reinforcement Learning |
LM | Linear Model |
LSTM | Long Short-Term Memory |
MAPE | Mean Absolute Percentage Error |
MEC | Mobile Edge Computing |
ML | Machine Learning |
MLP | Multilayer Perceptron |
MSL | Multi-weight Subjective Logic |
NAS | Neural Architecture Search |
NLP | Natural Language Processing |
oVML | On-Vehicle Machine Learning |
RNN | Recurrent Neural Network |
SGD | Stochastic Gradient Descent |
SMC | Secure Multiparty Computation |
SMOTE | Synthetic Minority Over-sampling Technique |
SVM | Support Vector Machine |
TFF | TensorFlow Federated |
TSL | Traditional Subjective Logic |
TTS | Text-to-Speech |
URLLC | Ultra-Reliable Low Latency Communication |
Figure 3. Representation of search string used to identify research publications to be included in this SLR.
Figure 4. Graphical representation of the process of selection of recent research for SLR.
Figure 6. The trend of the research studies conducted primarily on federated learning over four years, from 2016 to 2019.
Figure 8. Taxonomy for applications of federated learning across different domains and sub-domains.
Summary of related work.
Ref | Year | Journal/Conference | Problem Area | Contribution | Related Research |
---|---|---|---|---|---|
[ ] | 2020 | arXiv:1909.11875v2 | Applications and challenges of FL implementation in edge networks | “They highlighted the challenges of FL and reviewed prevailing solutions. Discusses the applications of FL for the optimization of MEC.” | [ ]
[ ] | 2019 | “ACM Transactions on Intelligent Systems and Technology” | Survey on FL | “They presented an initial tutorial on classification of different FL settings, e.g., horizontal FL, vertical FL, and Federated Transfer Learning.” | [ ]
[ ] | 2019 | arXiv preprint arXiv:1908.06847 | Survey on implementation of FL in wireless networks | “They provided a survey on FL in optimizing resource allocation while preserving data privacy in wireless networks.” | [ ]
[ ] | 2019 | arXiv preprint arXiv:1908.07873 | Survey on FL approaches and its challenges | “This paper provides a detailed tutorial on Federated Learning and discusses execution issues of FL.” | [ ]
Comparison to other studies.
 | [ ] | [ ] | [ ] | [ ] | [ ] | This Paper |
---|---|---|---|---|---|---|
Problem Discussed | Literature Survey | Systematic Survey | Literature Survey | Literature Survey | Systematic Literature Review | Systematic Literature Review |
Discussion of novel algorithms | × | × | × | √ | √ | √ |
Discussion of the applications of FL in field of data science | √ | × | √ | √ | √ | √ |
Discussion of the datasets of FL | × | × | × | × | × | √ |
FL implementation in edge networks | √ | √ | × | × | √ | √ |
Taxonomy for FL approach | √ | × | × | × | √ | √ |
Challenges and gaps on implementing FL | √ | × | × | × | √ | √ |
Year | 2019 | 2019 | 2018 | 2019 | 2021 | 2021 |
Research questions.
RQ | Research Question | Motivation |
---|---|---|
RQ1 | Which types of mobile edge applications and sub-fields can obtain advantage from FL? | The industries can obtain many benefits by deploying the FL, and these areas of interest need to be determined. |
RQ2 | Which algorithms, tools, and techniques have been implemented in edge-based applications using federated learning? | This would help to find the implementation and advantages of deploying FL in mobile edges. |
RQ3 | Which datasets have been used for the implementation of federated learning? | To know the details of the datasets available for experimentation in the field of FL. |
RQ4 | What are the possible challenges and gaps of implementing federated learning in mobile edge networks? | The implementation of FL in different fields may face some issues and challenges, which need to be discussed. |
Digital libraries used for conduction of the literature review.
Sr. No | Digital Library | Link |
---|---|---|
01 | ACM Digital Library | |
02 | IEEE Xplore | |
03 | ScienceDirect | |
04 | Springer Link | |
05 | Wiley Online Library |
Inclusion and exclusion criteria.
Criteria | Description | |
---|---|---|
Inclusion Criteria | IC1 | Papers that unambiguously examine federated learning and are accessible. |
IC2 | Papers that mention and investigate the implementation approaches and applications of federated learning. | |
IC3 | Papers that are focused on presenting research trend opportunities and the challenges of adopting federated learning. | |
IC4 | Papers that are published as technical reports and book chapters. | |
IC5 | Papers that are focused on presenting research trend opportunities and challenges of adopting federated learning. | |
Exclusion Criteria | EC1 | Papers that have duplicate or identical titles. |
EC2 | Papers that do not entail federated learning as a primary study. | |
EC3 | Papers that are not accessible. | |
EC4 | Papers in which the methodology is unclear. |
Studies screened during the filtration process.
Library | Initial Results without Filtering | Title and Keyword Selected | Abstract Selected | Full Text Selected |
---|---|---|---|---|
ACM | 1194 | 158 | 43 | 22 |
IEEE | 119 | 70 | 32 | 17 |
ScienceDirect | 879 | 115 | 46 | 29 |
Springer | 644 | 55 | 28 | 16 |
Wiley | 4710 | 80 | 36 | 21 |
Results | 7546 | 478 | 185 | 105 |
Appendix A
The detailed analysis of the latest research with respect to approach, tools, strengths, shortcomings, and future scope.
Ref. | Year | Domain | Sub-Domain | Dataset | Approach | Tools/Techniques | Contribution | Shortcoming and Future Scope |
---|---|---|---|---|---|---|---|---|
[191] | 2019 | Erlang programming language | FL | NA | Functional implementation of an ANN in Erlang. | Nifty, C language, Erlang language | Created ffl-erl, a framework for federated learning in the Erlang language. | Erlang incurs a performance penalty, and the framework needs to be explored in different FL scenarios. |
[192] | 2019 | Image-based geolocation | FL | MNIST, CIFAR-10 | An asynchronous aggregation scheme is proposed to overcome the performance degradation caused by imposing hard data constraints on edge devices. | PyTorch, Apache Drill-based tool | The convergence rate improved over the synchronous approach. | The system's performance under the data constraints needs to be explored. |
[193] | 2019 | Algorithms | FL and gossip learning | Spambase, HAR, and Pendigits datasets | To reduce communication cost, subsampling was applied to both the FL and gossip learning (GL) approaches. | Simulations, Stunner, secure aggregation protocol | Gossip learning outperformed FL in all the given scenarios when the training data was distributed uniformly over the nodes. | Gossip learning relies on message passing and uses no cloud resources. |
[194] | 2020 | Poison defense in FL | Generative adversarial network | MNIST and Fashion-MNIST | To defend against poisoning attacks, a GAN is deployed at the server side to regenerate users' edge training data and verify the accuracy of each model trained on those data. A client node with accuracy lower than the threshold is recognized as an attacker. | PyTorch | PDGAN employs partial-class data to reconstruct samples of each node's local data. The method increased accuracy by 3.4%, to 89.4%. | The performance of PDGAN under class-level heterogeneity needs to be investigated. |
[195] | 2019 | Data distribution in FL | IoT | FEMNIST, MNIST | In the proposed mechanism, the multi-criteria role of each end-node device is processed in a prioritized fashion by leveraging a priority-aware aggregation operator. | CNN, LEAF | The mechanism achieves online adjustment of the parameters by employing a local search strategy with backtracking. | Extensive experiments on diverse datasets need to be performed to examine the benefits of this mechanism. |
[26] | 2019 | Data distribution in FL | Edge systems | Imbalanced EMNIST, FEMNIST, and imbalanced CINIC-10 | A self-balancing FL framework called Astraea is proposed to overcome global data imbalance through data augmentation and client rescheduling. | CNN, TensorFlow Federated | Astraea averages out the local data imbalance and uses a mediator to reschedule the training of participant nodes based on the KLD of their data distributions, yielding accuracy improvements of +5.59% and +5.89% on the two datasets. | Global data augmentation may be a shortcoming for FL, as data is shared globally under this model. |
[81] | 2019 | Mobile edge computing | FL | NA | DRL techniques are integrated with the FL framework to optimize MEC communication and caching. | Simulator, AI chipset hardware, TensorFlow | "In-Edge AI" is claimed to reach near-optimal performance with low training overhead. | The applicability of this framework needs to be explored in real-world scenarios to determine its practical efficacy. |
[196] | 2019 | IoT | Edge systems and FL | MNIST, CIFAR-10 | Communication-efficient FedAvg (CE-FedAvg) is introduced: FedAvg is integrated with Adam optimization to reduce the number of rounds needed for convergence. | Raspberry Pi, TensorFlow, ReLU, CNN | CE-FedAvg can reach a target accuracy in up to 6× fewer communication rounds than FedAvg in non-IID settings. It is cost-effective, robust to aggressive compression of transferred data, and converged in up to 3× fewer rounds. | The model is effective and cost-efficient; however, AI task management could be added to further reduce computing cost. |
[197] | 2018 | Human activity recognition | FL | The Heterogeneity Human Activity Recognition dataset (2015) | A softmax regression and a DNN model are developed separately for HAR, to show that accuracy similar to that of centralized models can be achieved using FL. | TensorFlow, Apache Spark, and Dataproc | In the experiments, FL achieved 89% accuracy, while centralized training achieved 93% for the DNN. | The models can be enhanced by integrating optimization techniques. |
[41] | 2021 | Internet traffic classification | FL | ISCXVPN2016 dataset | An FL Internet traffic classification protocol (FLIC) is introduced to achieve accuracy comparable to a centralized DNN for Internet application identification. | TensorFlow | FLIC can classify new applications on the fly with an accuracy of 88% under non-IID traffic across clients. | The protocol needs to be explored in real-world scenarios with heterogeneous systems. |
[125] | 2018 | Algorithm evaluation | FL | MNIST | The FedAvg, CO-OP, and Federated Stochastic Variance Reduced Gradient algorithms are executed on MNIST, using both IID and non-IID partitionings of the data. | Ubuntu 16.04 LTS via VirtualBox 5.2, Python, JSON | With an MLP model, FedAvg achieves the highest accuracy (98%) among these algorithms, regardless of how the data is partitioned. | The evaluation could be expanded by comparing the algorithms against newly introduced benchmarks. |
[6] | 2020 | Mobile networks | FL | MNIST | A worker selection mechanism for FL tasks is formed and evaluated against a reputation threshold; a worker with a reputation below the threshold is treated as unreliable. | TensorFlow | The ATV scheme provides higher accuracy than the MSL scheme under lower reputation thresholds; MSL performs the same as ATV when the reputation threshold exceeds 0.35. | Validation schemes for non-IID datasets could be developed to assess performance under poisoning attacks. |
[96] | 2021 | NLP | FL | LJSpeech dataset | A dynamic transformer (FedDT-TTS) is proposed, in which the encoder and decoder add layers dynamically to provide faster convergence with lower communication cost in FL for the TTS task; FedT-TTS and FedDT-TTS are then compared on an unbalanced dataset. | Simulations, Python | The model greatly improved transformer performance in federated learning, reducing total training time by 40%. | The generalization ability of this model can be examined over diverse kinds of datasets. |
[37] | 2021 | Traffic speed forecasting | FL | PeMS and METR-LA datasets | The FASTGNN framework for traffic speed forecasting with FL is proposed, integrating AST-GNN for local training to protect topological information. | Simulations | FASTGNN provides performance similar to the three baseline algorithms; its MAPE is only 0.31% higher than that of STGCN. | The generalization ability of this model can be examined over diverse kinds of datasets. |
[146] | 2021 | Autonomous vehicles | FL | MNIST, FEMNIST | FedProx is used and analyzed for computer vision, based on its capability to learn an underrepresented class while maintaining system accuracy. | Python, TensorFlow, CNN, rectified linear unit (ReLU) | The FedProx local optimizer allows better accuracy in DNN implementations. | There is a tradeoff between training intensity and the resource allocation efforts of FedProx. |
[7] | 2020 | Healthcare | FL | NA | Surveys recent FL approaches for healthcare. | NA | Summarizes the FL challenges of statistical heterogeneity, system heterogeneity, and privacy, along with recent solutions. | The solutions could be described more technically for health informatics. |
[9] | 2018 | Vehicular networks | FL | NA | To minimize the network-wide power consumption of vehicular users while ensuring reliability in terms of probabilistic queuing delays, a joint transmit resource allocation and power method for enabling URLLC in vehicular networks is proposed. | Manhattan mobility model | The method reduces VUEs' large queue lengths by approximately 60%, without additional power consumption, compared to an average queue-based baseline. | The solutions could be described more technically. |
[61] | 2019 | Image detection | FL | Street-5, Street-20 | A non-IID image dataset containing 900+ images taken by 26 street cameras, covering seven types of objects, is presented. | Object detection algorithms (YOLO and Faster R-CNN) | These datasets capture the realistic non-IID distribution problem in federated learning. | The data heterogeneity and imbalance in these public datasets should be addressed for object identification using novel FL techniques. |
[198] | 2021 | Smart cities and IoT | FL | NA | Reviews the latest research on the applicability of FL in smart city domains. | Survey, literature review | Presents the current progress of FL in the IoT, transportation, communications, finance, medical, and other fields. | Detailed use-case scenarios could be described more technically for smart cities. |
[114] | 2020 | Autonomous vehicles | FL | Data collected by the oVML | A blockchain-based FL (BFL) scheme is designed for privacy-aware IoV, where local oVML model updates are exchanged and verified in a distributed fashion. | Simulations | BFL is efficient for the communication of autonomous vehicles. | BFL needs to be explored in real-world IoV scenarios, where its performance can be characterized more technically. |
[135] | 2020 | Neural networks | FL | CIFAR-10, CINIC-10 | An approach for computationally lightweight direct federated NAS, and a single-step method to search for ready-to-deploy neural network models. | FedML | The inefficiency of applying a predefined neural network architecture to FL is identified and addressed using DFNAS, which consumed less computation and communication bandwidth while reaching 92% test accuracy. | The applicability of DFNAS needs to be explored in real-world scenarios, such as text recommendation. |
[101] | 2021 | Medical imaging | FL | NA | Reviews the latest FL research to assess its applicability in medical imaging. | Survey | Explains how patient privacy is maintained across sites using FL. | The technical presentation of medical imaging could be illustrated with a case study. |
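Several of the approaches summarized above (e.g., FedAvg, CE-FedAvg, and the asynchronous and priority-aware aggregation schemes) are variations on the same core server-side step: combining client model updates into a global model, typically weighting each client by its local dataset size. The following is a minimal, illustrative sketch of FedAvg-style weighted aggregation in Python; the function and variable names are our own, model weights are represented as plain NumPy arrays, and the snippet is not drawn from any of the surveyed implementations.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Combine per-client weight lists into a global model,
    weighting each client by the size of its local dataset."""
    total = sum(client_sizes)
    aggregated = []
    for layer in range(len(client_weights[0])):
        # Weighted sum of this layer's parameters across all clients.
        layer_avg = sum(
            (size / total) * weights[layer]
            for weights, size in zip(client_weights, client_sizes)
        )
        aggregated.append(layer_avg)
    return aggregated

# Toy round: three clients, each holding a two-layer model (matrix + bias).
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(3)]
sizes = [100, 300, 600]  # clients with more local data get more weight
global_model = fedavg_aggregate(clients, sizes)
print(global_model[1])  # aggregated bias vector of the global model
```

Non-IID data, stragglers, and update compression, the issues targeted by several of the rows above, enter as modifications to, or replacements of, exactly this aggregation step.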
References
1. Reinsel, D.; Gantz, J.; Rydning, J. Data Age 2025: The Digitization of the World From Edge to Core. Int. Data Corp.; 2018; 16, 28.
2. Khan, L.U.; Saad, W.; Han, Z.; Hong, C.S. Dispersed Federated Learning: Vision, Taxonomy, and Future Directions. IEEE Wireless Commun.; 2021; 28, pp. 192-198. [DOI: https://dx.doi.org/10.1109/MWC.011.2100003]
3. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; Arcas, B.A. Communication-Efficient Learning of Deep Networks from Decentralized Data. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics; Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273-1282.
4. Ji, S.; Saravirta, T.; Pan, S.; Long, G.; Walid, A. Emerging Trends in Federated Learning: From Model Fusion to Federated X Learning. arXiv; 2021; arXiv: 2102.12920
5. Kairouz, P.; McMahan, H.B.; Avent, B.; Bellet, A.; Bennis, M.; Bhagoji, A.N.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings, R. et al. Advances and Open Problems in Federated Learning. arXiv; 2021; arXiv: 1912.04977
6. Kang, J.; Xiong, Z.; Niyato, D.; Zou, Y.; Zhang, Y.; Guizani, M. Reliable Federated Learning for Mobile Networks. IEEE Wirel. Commun.; 2020; 27, pp. 72-80. [DOI: https://dx.doi.org/10.1109/MWC.001.1900119]
7. Xu, J.; Wang, F.; Glicksberg, B.S.; Su, C.; Walker, P.; Bian, J. Federated Learning for Healthcare Informatics. J. Healthc. Inform. Res.; 2020; 5, pp. 1-19. [DOI: https://dx.doi.org/10.1007/s41666-020-00082-4]
8. Zhao, Y.; Zhao, J.; Jiang, L.; Tan, R.; Niyato, D. Mobile Edge Computing, Blockchain and Reputation-based Crowdsourcing IoT Federated Learning: A Secure, Decentralized and Privacy-preserving System. arXiv; 2020; arXiv: 1906.10893
9. Samarakoon, S.; Bennis, M.; Saad, W.; Debbah, M. Federated Learning for Ultra-Reliable Low-Latency V2V Communications. Proceedings of the 2018 IEEE Global Communications Conference (GLOBECOM); Abu Dhabi, United Arab Emirates, 9–13 December 2018; pp. 1-7. [DOI: https://dx.doi.org/10.1109/glocom.2018.8647927]
10. Liu, Y.; Ma, Z.; Liu, X.; Ma, S.; Nepal, S.; Deng, R. Boosting Privately: Privacy-Preserving Federated Extreme Boosting for Mobile Crowdsensing. arXiv; 2019; arXiv: 1907.10218
11. Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated Machine Learning: Concept and Applications. ACM Trans. Intell. Syst. Technol.; 2019; 10, pp. 1-19. [DOI: https://dx.doi.org/10.1145/3298981]
12. Lim, W.Y.B.; Luong, N.C.; Hoang, D.T.; Jiao, Y.; Liang, Y.-C.; Yang, Q.; Niyato, D.; Miao, C. Federated Learning in Mobile Edge Networks: A Comprehensive Survey. IEEE Commun. Surv. Tutor.; 2020; 22, pp. 2031-2063. [DOI: https://dx.doi.org/10.1109/COMST.2020.2986024]
13. Niknam, S.; Dhillon, H.S.; Reed, J.H. Federated Learning for Wireless Communications: Motivation, Opportunities and Challenges. IEEE Commun. Mag.; 2019; 58, pp. 46-51. [DOI: https://dx.doi.org/10.1109/MCOM.001.1900461]
14. Zhang, C.; Xie, Y.; Bai, H.; Yu, B.; Li, W.; Gao, Y. A survey on federated learning. Knowl.-Based Syst.; 2021; 216, 106775. [DOI: https://dx.doi.org/10.1016/j.knosys.2021.106775]
15. Mothukuri, V.; Parizi, R.M.; Pouriyeh, S.; Huang, Y.; Dehghantanha, A.; Srivastava, G. A survey on security and privacy of federated learning. Futur. Gener. Comput. Syst.; 2020; 115, pp. 619-640. [DOI: https://dx.doi.org/10.1016/j.future.2020.10.007]
16. Hou, D.; Zhang, J.; Man, K.L.; Ma, J.; Peng, Z. A Systematic Literature Review of Blockchain-based Federated Learning: Architectures, Applications and Issues. Proceedings of the 2021 2nd Information Communication Technologies Conference (ICTC); Nanjing, China, 7–9 May 2021; pp. 302-307.
17. Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Process. Mag.; 2020; 37, pp. 50-60. [DOI: https://dx.doi.org/10.1109/MSP.2020.2975749]
18. Bouzinis, P.S.; Diamantoulakis, P.D.; Karagiannidis, G.K. Wireless Federated Learning (WFL) for 6G Networks—Part I: Research Challenges and Future Trends. IEEE Commun. Lett.; 2021; 26, pp. 3-7. [DOI: https://dx.doi.org/10.1109/LCOMM.2021.3121071]
19. Soto, J.C.; Kyt, W.; Jahn, M.; Pullmann, J.; Bonino, D.; Pastrone, C.; Spirito, M. Towards a Federation of Smart City Services. Proceedings of the International Conference on Recent Advances in Computer Systems (RACS 2015); Hail, Saudi Arabia, 30 November–1 December 2015; pp. 163-168.
20. Liu, Y.; Zhang, L.; Ge, N.; Li, G. A Systematic Literature Review on Federated Learning: From A Model Quality Perspective. arXiv; 2020; arXiv: 2012.01973
21. Bonawitz, K.; Eichner, H.; Grieskamp, W.; Huba, D.; Ingerman, A.; Ivanov, V.; Roselander, J.; Kiddon, C.; Mazzocchi, S.; McMahan, B. et al. Towards Federated Learning at Scale: System Design. Proc. Mach. Learn. Syst.; 2019; 1, pp. 374-388.
22. Nishio, T.; Yonetani, R. Client Selection for Federated Learning with Heterogeneous Resources in Mobile Edge. Proceedings of the ICC 2019—IEEE International Conference on Communications (ICC); Shanghai, China, 20–24 May 2019; pp. 1-7.
23. Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H.B.; Patel, S.; Ramage, D.; Segal, A.; Seth, K. Practical secure aggregation for privacy-preserving machine learning. Proceedings of the ACM Conference on Computer and Communications Security; Dallas, TX, USA, 30 October–3 November 2017; pp. 1175-1191.
24. Quiñones, D.; Rusu, C. How to develop usability heuristics: A systematic literature review. Comput. Stand. Interfaces; 2017; 53, pp. 89-122. [DOI: https://dx.doi.org/10.1016/j.csi.2017.03.009]
25. Zhao, Y.; Li, M.; Lai, L.; Suda, N.; Civin, D.; Chandra, V. Federated Learning with Non-IID Data. arXiv; 2018; arXiv: 1806.00582 [DOI: https://dx.doi.org/10.1016/j.neucom.2021.07.098]
26. Duan, M.; Liu, D.; Chen, X.; Tan, Y.; Ren, J.; Qiao, L.; Liang, L. Astraea: Self-Balancing Federated Learning for Improving Classification Accuracy of Mobile Deep Learning Applications. Proceedings of the 2019 IEEE 37th International Conference on Computer Design (ICCD); Abu Dhabi, United Arab Emirates, 17–20 November 2019; pp. 246-254.
27. Li, X.; Huang, K.; Yang, W.; Wang, S.; Zhang, Z. On the Convergence of FedAvg on Non-IID Data. arXiv; 2019; arXiv: 1907.02189
28. Ramaswamy, S.; Mathews, R.; Rao, K.; Beaufays, F. Federated Learning for Emoji Prediction in a Mobile Keyboard. arXiv; 2019; arXiv: 1906.04329
29. Brisimi, T.S.; Chen, R.; Mela, T.; Olshevsky, A.; Paschalidis, I.C.; Shi, W. Federated learning of predictive models from federated Electronic Health Records. Int. J. Med. Inform.; 2018; 112, pp. 59-67. [DOI: https://dx.doi.org/10.1016/j.ijmedinf.2018.01.007] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29500022]
30. Huang, L.; Yin, Y.; Fu, Z.; Zhang, S.; Deng, H.; Liu, D. LoAdaBoost: Loss-Based AdaBoost Federated Machine Learning on medical Data. PLoS ONE; 2018; 15, e0230706.
31. Federated Learning—OWKIN. Available online: https://owkin.com/federated-learning/ (accessed on 13 June 2020).
32. Intel Works with University of Pennsylvania in Using Privacy-Preserving AI to Identify Brain Tumors|Intel Newsroom. Available online: https://newsroom.intel.com/news/intel-works-university-pennsylvania-using-privacy-preserving-ai-identify-brain-tumors/#gs.7wuma4 (accessed on 13 June 2020).
33. Xiao, Z.; Xu, X.; Xing, H.; Song, F.; Wang, X.; Zhao, B. A federated learning system with enhanced feature extraction for human activity recognition. Knowl.-Based Syst.; 2021; 229, 107338. [DOI: https://dx.doi.org/10.1016/j.knosys.2021.107338]
34. Qi, Y.; Hossain, M.S.; Nie, J.; Li, X. Privacy-preserving blockchain-based federated learning for traffic flow prediction. Futur. Gener. Comput. Syst.; 2021; 117, pp. 328-337. [DOI: https://dx.doi.org/10.1016/j.future.2020.12.003]
35. Demertzis, K. Blockchained Federated Learning for Threat Defense. arXiv; 2021; arXiv: 2102.12746
36. Zhang, W.; Li, X.; Ma, H.; Luo, Z.; Li, X. Federated learning for machinery fault diagnosis with dynamic validation and self-supervision. Knowl. Based Syst.; 2021; 213, 106679. [DOI: https://dx.doi.org/10.1016/j.knosys.2020.106679]
37. Zhang, C.; Zhang, S.; James, J.Q.; Yu, S. FASTGNN: A Topological Information Protected Federated Learning Approach for Traffic Speed Forecasting. IEEE Trans. Ind. Inform.; 2021; 17, pp. 8464-8474. [DOI: https://dx.doi.org/10.1109/TII.2021.3055283]
38. Zellinger, W.; Wieser, V.; Kumar, M.; Brunner, D.; Shepeleva, N.; Gálvez, R.; Langer, J.; Fischer, L.; Moser, B. Beyond federated learning: On confidentiality-critical machine learning applications in industry. Procedia Comput. Sci.; 2019; 180, pp. 734-743. [DOI: https://dx.doi.org/10.1016/j.procs.2021.01.296]
39. Huang, Z.; Liu, F.; Zou, Y. Federated Learning for Spoken Language Understanding. Proceedings of the 28th International Conference on Computational Linguistics; Barcelona, Spain, 8–13 December 2020; pp. 3467-3478. [DOI: https://dx.doi.org/10.18653/v1/2020.coling-main.310]
40. Yang, Z.; Chen, M.; Wong, K.K.; Poor, H.V.; Cui, S. Federated Learning for 6G: Applications, Challenges, and Opportunities. Engineering; 2021; in press
41. Mun, H.; Lee, Y. Internet Traffic Classification with Federated Learning. Electronics; 2020; 10, 27. [DOI: https://dx.doi.org/10.3390/electronics10010027]
42. Mahmood, Z.; Jusas, V. Implementation Framework for a Blockchain-Based Federated Learning Model for Classification Problems. Symmetry; 2021; 13, 1116. [DOI: https://dx.doi.org/10.3390/sym13071116]
43. Moubayed, A.; Sharif, M.; Luccini, M.; Primak, S.; Shami, A. Water Leak Detection Survey: Challenges & Research Opportunities Using Data Fusion & Federated Learning. IEEE Access; 2021; 9, pp. 40595-40611. [DOI: https://dx.doi.org/10.1109/access.2021.3064445]
44. Tzinis, E.; Casebeer, J.; Wang, Z.; Smaragdis, P. Separate But Together: Unsupervised Federated Learning for Speech Enhancement from Non-IID Data. Proceedings of the 2021 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA); New Paltz, NY, USA, 17–20 October 2021; pp. 46-50.
45. Wang, H.; Sievert, S.; Liu, S.; Charles, Z.; Papailiopoulos, D.; Wright, S. ATOMO: Communication-efficient learning via atomic sparsification. Adv. Neural Inf. Process. Syst.; 2018; 31, pp. 9850-9861.
46. Stich, S.U.; Cordonnier, J.B.; Jaggi, M. Sparsified SGD with Memory. Adv. Neural Info. Process. Syst.; 2018; 31, pp. 1-12.
47. Nikolenko, S.I. Synthetic Data for Deep Learning; Springer: Cham, Switzerland, 2021.
48. Dwork, C.; McSherry, F.; Nissim, K.; Smith, A. Calibrating Noise to Sensitivity in Private Data Analysis. J. Priv. Confid.; 2017; 7, pp. 17-51. [DOI: https://dx.doi.org/10.29012/jpc.v7i3.405]
49. Wang, L.; Wang, W.; Li, B. CMFL: Mitigating Communication Overhead for Federated Learning. Proceedings of the 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS); Dallas, TX, USA, 7–10 July 2019; pp. 954-964.
50. Konečný, J.; McMahan, H.B.; Yu, F.X.; Richtárik, P.; Suresh, A.T.; Bacon, D. Federated learning: Strategies for improving communication efficiency. arXiv; 2016; arXiv: 1610.05492
51. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; San Francisco, CA, USA, 13–17 August 2016.
52. Huang, A.; Liu, Y.; Chen, T.; Zhou, Y.; Sun, Q.; Chai, H.; Yang, Q. StarFL: Hybrid Federated Learning Architecture for Smart Urban Computing. ACM Trans. Intell. Syst. Technol.; 2021; 12, pp. 1-23. [DOI: https://dx.doi.org/10.1145/3467956]
53. Deng, Y.; Lyu, F.; Ren, J.; Chen, Y.-C.; Yang, P.; Zhou, Y.; Zhang, Y. FAIR: Quality-Aware Federated Learning with Precise User Incentive and Model Aggregation. Proceedings of the IEEE INFOCOM 2021—IEEE Conference on Computer Communications; Vancouver, BC, Canada, 10–13 May 2021; pp. 1-10.
54. Li, X.; Jiang, M.; Zhang, X.; Kamp, M.; Dou, Q. Fedbn: Federated learning on non-iid features via local batch normalization. arXiv; 2021; arXiv: 2102.07623
55. Wang, S.; Lee, M.; Hosseinalipour, S.; Morabito, R.; Chiang, M.; Brinton, C.G. Device Sampling for Heterogeneous Federated Learning: Theory, Algorithms, and Implementation. Proceedings of the IEEE INFOCOM 2021—IEEE Conference on Computer Communications; Vancouver, BC, Canada, 10–13 May 2021.
56. Cheng, K.; Fan, T.; Jin, Y.; Liu, Y.; Chen, T.; Papadopoulos, D.; Yang, Q. SecureBoost: A Lossless Federated Learning Framework. IEEE Intell. Syst.; 2021; 36, pp. 87-98. [DOI: https://dx.doi.org/10.1109/MIS.2021.3082561]
57. Zeng, D.; Liang, S.; Hu, X.; Xu, Z. FedLab: A Flexible Federated Learning Framework. arXiv; 2021; arXiv: 2107.11621
58. Ye, C.; Cui, Y. Sample-based Federated Learning via Mini-batch SSCA. Proceedings of the ICC 2021—IEEE International Conference on Communications; Montreal, QC, Canada, 14–23 June 2021; pp. 1-6.
59. Budrionis, A.; Miara, M.; Miara, P.; Wilk, S.; Bellika, J.G. Benchmarking PySyft Federated Learning Framework on MIMIC-III Dataset. IEEE Access; 2021; 9, pp. 116869-116878. [DOI: https://dx.doi.org/10.1109/ACCESS.2021.3105929]
60. Tan, A.Z.; Yu, H.; Cui, L.; Yang, Q. Towards Personalized Federated Learning. arXiv; 2021; arXiv: 2103.00710
61. Luo, J.; Wu, X.; Luo, Y.; Huang, A.; Huang, Y.; Liu, Y.; Yang, Q. Real-World Image Datasets for Federated Learning. arXiv; 2019; arXiv: 1910.11089
62. TensorFlow Federated. Available online: https://www.tensorflow.org/federated/api_docs/python/tff/simulation/datasets/stackoverflow/load_data (accessed on 17 August 2020).
63. Reddi, S.; Charles, Z.; Zaheer, M.; Garrett, Z.; Rush, K.; Konečný, J.; Kumar, S.; McMahan, H.B. Adaptive Federated Optimization. arXiv; 2020; arXiv: 2003.00295
64. Jiang, Y.; Konečný, J.; Rush, K.; Kannan, S. Improving Federated Learning Personalization via Model Agnostic Meta Learning. arXiv; 2019; arXiv: 1909.12488
65. TensorFlow Federated. Available online: https://www.tensorflow.org/federated (accessed on 17 August 2020).
66. Caldas, S.; Duddu, S.M.K.; Wu, P.; Li, T.; Konečný, J.; McMahan, H.B.; Smith, V.; Talwalkar, A. LEAF: A Benchmark for Federated Settings. arXiv; 2018; arXiv: 1812.01097
67. Dinh, C.T.; Tran, N.H.; Nguyen, M.N.H.; Hong, C.S.; Bao, W.; Zomaya, A.Y.; Gramoli, V. Federated Learning Over Wireless Networks: Convergence Analysis and Resource Allocation. IEEE/ACM Trans. Netw.; 2020; 29, pp. 398-409. [DOI: https://dx.doi.org/10.1109/TNET.2020.3035770]
68. Verma, D.C.; White, G.; Julier, S.; Pasteris, S.; Chakraborty, S.; Cirincione, G. Approaches to address the data skew problem in federated learning. Proceedings of the Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications; Baltimore, MD, USA, 14–18 April 2019; 11006.
69. Li, T.; Sahu, A.K.; Zaheer, M.; Sanjabi, M.; Talwalkar, A.; Smith, V. Federated Optimization in Heterogeneous Networks. Proc. Mach. Learn. Syst.; 2020; 2, pp. 429-450.
70. He, H.; Garcia, E.A. Learning from Imbalanced Data. IEEE Trans. Knowl. Data Eng.; 2009; 21, pp. 1263-1284.
71. Yang, H.; Fang, M.; Liu, J. Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning. arXiv; 2021; arXiv: 2101.11203
72. Xia, Q.; Ye, W.; Tao, Z.; Wu, J.; Li, Q. A survey of federated learning for edge computing: Research problems and solutions. High-Confidence Comput.; 2021; 1, 100008. [DOI: https://dx.doi.org/10.1016/j.hcc.2021.100008]
73. Qu, L.; Zhou, Y.; Liang, P.P.; Xia, Y.; Wang, F.; Fei-Fei, L.; Adeli, E.; Rubin, D. Rethinking Architecture Design for Tackling Data Heterogeneity in Federated Learning. arXiv; 2021; arXiv: 2106.06047
74. Pham, Q.V.; Dev, K.; Maddikunta, P.K.R.; Gadekallu, T.R.; Huynh-The, T. Fusion of Federated Learning and Industrial Internet of Things: A Survey. arXiv; 2021; arXiv: 2101.00798
75. Zhang, X.; Hou, H.; Fang, Z.; Wang, Z. Industrial Internet Federated Learning Driven by IoT Equipment ID and Blockchain. Wirel. Commun. Mob. Comput.; 2021; 2021, pp. 1-9. [DOI: https://dx.doi.org/10.1155/2021/7705843]
76. Federated Learning for Privacy Preservation of Healthcare Data in Internet of Medical Things–EMBS. Available online: https://www.embs.org/federated-learning-for-privacy-preservation-of-healthcare-data-in-internet-of-medical-things/ (accessed on 28 November 2021).
77. Vatsalan, D.; Sehili, Z.; Christen, P.; Rahm, E. Privacy-Preserving Record Linkage for Big Data: Current Approaches and Research Challenges. Handbook of Big Data Technologies; Springer International Publishing: Cham, Switzerland, 2017; pp. 851-895. [DOI: https://dx.doi.org/10.1007/978-3-319-49340-4_25]
78. Chehimi, M.; Saad, W. Quantum Federated Learning with Quantum Data. arXiv; 2021; arXiv: 2106.00005
79. Friedman, C.P.; Wong, A.K.; Blumenthal, D. Policy: Achieving a nationwide learning health system. Sci. Transl. Med.; 2010; 2, 57cm29. [DOI: https://dx.doi.org/10.1126/scitranslmed.3001456] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/21068440]
80. Coronavirus (COVID-19) Events as They Happen. Available online: https://www.who.int/emergencies/diseases/novel-coronavirus-2019/events-as-they-happen (accessed on 14 June 2020).
81. Wang, X.; Han, Y.; Wang, C.; Zhao, Q.; Chen, X.; Chen, M. In-Edge AI: Intelligentizing Mobile Edge Computing, Caching and Communication by Federated Learning. IEEE Netw.; 2019; 33, pp. 156-165. [DOI: https://dx.doi.org/10.1109/MNET.2019.1800286]
82. Amiria, M.M.; Gündüzb, D.; Kulkarni, S.R.; Poor, H.V. Convergence of Update Aware Device Scheduling for Federated Learning at the Wireless Edge. IEEE Trans. Wirel. Commun.; 2021; 20, pp. 3643-3658. [DOI: https://dx.doi.org/10.1109/TWC.2021.3052681]
83. Du, Z.; Wu, C.; Yoshinaga, T.; Yau, K.-L.A.; Ji, Y.; Li, J. Federated Learning for Vehicular Internet of Things: Recent Advances and Open Issues. IEEE Open J. Comput. Soc.; 2020; 1, pp. 45-61. [DOI: https://dx.doi.org/10.1109/OJCS.2020.2992630] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32386144]
84. Wu, Q.; He, K.; Chen, X. Personalized Federated Learning for Intelligent IoT Applications: A Cloud-Edge Based Framework. IEEE Open J. Comput. Soc.; 2020; 1, pp. 35-44. [DOI: https://dx.doi.org/10.1109/OJCS.2020.2993259]
85. Lin, B.Y.; He, C.; Zeng, Z.; Wang, H.; Huang, Y.; Soltanolkotabi, M.; Xiang, R.; Avestimehr, S. FedNLP: A Research Platform for Federated Learning in Natural Language Processing. arXiv; 2021; arXiv: 2104.08815
86. Ammad-Ud-Din, M.; Ivannikova, E.; Khan, S.A.; Oyomno, W.; Fu, Q.; Tan, K.E.; Flanagan, A. Federated collaborative filtering for privacy-preserving personalized recommendation system. arXiv; 2019; arXiv: 1901.09888
87. Zhao, S.; Bharati, R.; Borcea, C.; Chen, Y. Privacy-Aware Federated Learning for Page Recommendation. Proceedings of the 2020 IEEE International Conference on Big Data (Big Data); Atlanta, GA, USA, 10–13 December 2020; pp. 1071-1080.
88. Chai, D.; Wang, L.; Chen, K.; Yang, Q. Secure Federated Matrix Factorization. IEEE Intell. Syst.; 2020; 36, pp. 11-20. [DOI: https://dx.doi.org/10.1109/MIS.2020.3014880]
89. Li, Z.; Zhou, F.; Chen, F.; Li, H. Meta-SGD: Learning to Learn Quickly for Few-Shot Learning. arXiv; 2017; arXiv: 1707.09835
90. Wang, Q.; Yin, H.; Chen, T.; Yu, J.; Zhou, A.; Zhang, X. Fast-adapting and privacy-preserving federated recommender system. Very Large Data Bases J.; 2021; [DOI: https://dx.doi.org/10.1007/s00778-021-00700-6]
91. Yang, L.; Tan, B.; Zheng, V.W.; Chen, K.; Yang, Q. Federated Recommendation Systems. Federated Learning; Springer: Cham, Switzerland, 2020; Volume 12500, pp. 225-239.
92. Jalalirad, A.; Scavuzzo, M.; Capota, C.; Sprague, M. A Simple and Efficient Federated Recommender System. Proceedings of the 6th IEEE/ACM International Conference on Big Data Computing, Applications and Technologies; Auckland, New Zealand, 2–5 December 2019; pp. 53-58.
93. Greff, K.; Srivastava, R.K.; Koutník, J.; Steunebrink, B.R.; Schmidhuber, J. LSTM: A Search Space Odyssey. IEEE Trans. Neural Netw. Learn. Syst.; 2016; 28, pp. 2222-2232. [DOI: https://dx.doi.org/10.1109/TNNLS.2016.2582924]
94. Yang, T.; Andrew, G.; Eichner, H.; Sun, H.; Li, W.; Kong, N.; Beaufays, F. Applied federated learning: Improving google keyboard query suggestions. arXiv; 2018; arXiv: 1812.02903
95. Basu, P.; Roy, T.S.; Naidu, R.; Muftuoglu, Z.; Singh, S.; Mireshghallah, F. Benchmarking Differential Privacy and Federated Learning for BERT Models. arXiv; 2021; arXiv: 2106.13973
96. Hong, Z.; Wang, J.; Qu, X.; Liu, J.; Zhao, C.; Xiao, J. Federated Learning with Dynamic Transformer for Text to Speech. arXiv; 2021; arXiv: 2107.08795
97. Liu, M.; Ho, S.; Wang, M.; Gao, L.; Jin, Y.; Zhang, H. Federated Learning Meets Natural Language Processing: A Survey. arXiv; 2021; arXiv: 2107.08795
98. Nguyen, T.D.; Marchal, S.; Miettinen, M.; Fereidooni, H.; Asokan, N.; Sadeghi, A.R. DIOT: A Federated Self-learning Anomaly Detection System for IoT. Proceedings of the 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS); Dallas, TX, USA, 7–10 July 2019; pp. 756-767.
99. Nguyen, D.C.; Ding, M.; Pathirana, P.N.; Seneviratne, A.; Li, J.; Poor, H.V. Federated Learning for Internet of Things: A Comprehensive Survey. IEEE Commun. Surv. Tutor.; 2021; 23, pp. 1622-1658. [DOI: https://dx.doi.org/10.1109/COMST.2021.3075439]
100. Federated Learning for Internet of Things and Big Data|Hindawi. Available online: https://www.hindawi.com/journals/wcmc/si/891247/ (accessed on 28 November 2021).
101. Ng, D.; Lan, X.; Yao, M.M.-S.; Chan, W.P.; Feng, M. Federated learning: A collaborative effort to achieve better medical imaging models for individual sites that have small labelled datasets. Quant. Imaging Med. Surg.; 2021; 11, pp. 852-857. [DOI: https://dx.doi.org/10.21037/qims-20-595] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33532283]
102. Stripelis, D.; Ambite, J.L.; Lam, P.; Thompson, P. Scaling Neuroscience Research Using Federated Learning. Proceedings of the 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI); Nice, France, 13–16 April 2021; pp. 1191-1195.
103. Yates, T.S.; Ellis, C.T.; Turk-Browne, N.B. The promise of awake behaving infant fMRI as a deep measure of cognition. Curr. Opin. Behav. Sci.; 2020; 40, pp. 5-11. [DOI: https://dx.doi.org/10.1016/j.cobeha.2020.11.007]
104. Połap, D.; Srivastava, G.; Yu, K. Agent architecture of an intelligent medical system based on federated learning and blockchain technology. J. Inf. Secur. Appl.; 2021; 58, 102748. [DOI: https://dx.doi.org/10.1016/j.jisa.2021.102748]
105. Marek, S.; Greene, D.J. Precision functional mapping of the subcortex and cerebellum. Curr. Opin. Behav. Sci.; 2021; 40, pp. 12-18. [DOI: https://dx.doi.org/10.1016/j.cobeha.2020.12.011]
106. Smith, D.M.; Perez, D.C.; Porter, A.; Dworetsky, A.; Gratton, C. Light through the fog: Using precision fMRI data to disentangle the neural substrates of cognitive control. Curr. Opin. Behav. Sci.; 2021; 40, pp. 19-26. [DOI: https://dx.doi.org/10.1016/j.cobeha.2020.12.004]
107. Li, X.; Gu, Y.; Dvornek, N.; Staib, L.H.; Ventola, P.; Duncan, J.S. Multi-site fMRI analysis using privacy-preserving federated learning and domain adaptation: ABIDE results. Med. Image Anal.; 2020; 65, 101765. [DOI: https://dx.doi.org/10.1016/j.media.2020.101765]
108. Can, Y.S.; Ersoy, C. Privacy-preserving Federated Deep Learning for Wearable IoT-based Biomedical Monitoring. ACM Trans. Internet Technol.; 2021; 21, pp. 1-17. [DOI: https://dx.doi.org/10.1145/3428152]
109. Pharma Companies Join Forces to Train AI for Drug Discovery Collectively|BioPharmaTrend. Available online: https://www.biopharmatrend.com/post/97-pharma-companies-join-forces-to-train-ai-for-drug-discovery-collectively/ (accessed on 10 July 2020).
110. Yoo, J.H.; Son, H.M.; Jeong, H.; Jang, E.-H.; Kim, A.Y.; Yu, H.Y.; Jeon, H.J.; Chung, T.-M. Personalized Federated Learning with Clustering: Non-IID Heart Rate Variability Data Application. Proceedings of the 2021 International Conference on Information and Communication Technology Convergence (ICTC); Jeju Island, Korea, 20–22 October 2021; pp. 1046-1051.
111. Aich, S.; Sinai, N.K.; Kumar, S.; Ali, M.; Choi, Y.R.; Joo, M.-I.; Kim, H.-C. Protecting Personal Healthcare Record Using Blockchain & Federated Learning Technologies. Proceedings of the 2021 23rd International Conference on Advanced Communication Technology (ICACT); PyeongChang, Korea, 7–10 February 2021; pp. 109-112.
112. Sarma, K.V.; Harmon, S.; Sanford, T.; Roth, H.R.; Xu, Z.; Tetreault, J.; Xu, D.; Flores, M.G.; Raman, A.G.; Kulkarni, R. et al. Federated learning improves site performance in multicenter deep learning without data sharing. J. Am. Med. Inform. Assoc.; 2021; 28, pp. 1259-1264. [DOI: https://dx.doi.org/10.1093/jamia/ocaa341]
113. Pfitzner, B.; Steckhan, N.; Arnrich, B. Federated Learning in a Medical Context: A Systematic Literature Review. ACM Trans. Internet Technol.; 2021; 21, pp. 1-31. [DOI: https://dx.doi.org/10.1145/3412357]
114. Pokhrel, S.R.; Choi, J. Federated Learning With Blockchain for Autonomous Vehicles: Analysis and Design Challenges. IEEE Trans. Commun.; 2020; 68, pp. 4734-4746. [DOI: https://dx.doi.org/10.1109/TCOMM.2020.2990686]
115. Saputra, Y.M.; Hoang, D.T.; Nguyen, D.N.; Dutkiewicz, E.; Mueck, M.D.; Srikanteswara, S. Energy Demand Prediction with Federated Learning for Electric Vehicle Networks. Proceedings of the 2019 IEEE Global Communications Conference (GLOBECOM); Waikoloa, HI, USA, 9–13 December 2019; pp. 1-6.
116. Savazzi, S.; Nicoli, M.; Bennis, M.; Kianoush, S.; Barbieri, L. Opportunities of Federated Learning in Connected, Cooperative, and Automated Industrial Systems. IEEE Commun. Mag.; 2021; 59, pp. 16-21. [DOI: https://dx.doi.org/10.1109/MCOM.001.2000200]
117. Xianjia, Y.; Queralta, J.P.; Heikkonen, J.; Westerlund, T. An Overview of Federated Learning at the Edge and Distributed Ledger Technologies for Robotic and Autonomous Systems. Proc. Comput. Sci.; 2021; 191, pp. 135-142. [DOI: https://dx.doi.org/10.1016/j.procs.2021.07.041]
118. Federated Machine Learning in Anti-Financial Crime Processes Frequently Asked Questions. Available online: https://finreglab.org/wp-content/uploads/2020/12/FAQ-Federated-Machine-Learning-in-Anti-Financial-Crime-Processes.pdf (accessed on 24 October 2021).
119. Federated Learning: The New Thing in AI/ML for Detecting Financial Crimes and Managing Risk—Morning Consult. Available online: https://morningconsult.com/opinions/federated-learning-the-new-thing-in-ai-ml-for-detecting-financial-crimes-and-managing-risk/ (accessed on 28 November 2021).
120. Long, G.; Tan, Y.; Jiang, J.; Zhang, C. Federated Learning for Open Banking. Federated Learning; Springer: Cham, Switzerland, 2020; pp. 240-254.
121. Federated Machine Learning for Finance or Fintech|Techwasti. Available online: https://medium.com/techwasti/federated-machine-learning-for-fintech-b875b918c5fe (accessed on 28 November 2021).
122. Federated Machine Learning for Loan Risk Prediction. Available online: https://www.infoq.com/articles/federated-machine-learning/ (accessed on 28 November 2021).
123. Rieke, N.; Hancox, J.; Li, W.; Milletarì, F.; Roth, H.R.; Albarqouni, S.; Bakas, S.; Galtier, M.N.; Landman, B.A.; Maier-Hein, K. et al. The future of digital health with federated learning. NPJ Digit. Med.; 2020; 3, pp. 1-7. [DOI: https://dx.doi.org/10.1038/s41746-020-00323-1]
124. Elbir, A.M.; Soner, B.; Coleri, S. Federated Learning for Vehicular Networks. arXiv; 2020; arXiv: 2006.01412
125. Nilsson, A.; Smith, S.; Ulm, G.; Gustavsson, E.; Jirstrand, M. A Performance Evaluation of Federated Learning Algorithms. Proceedings of the Second Workshop on Distributed Infrastructures for Deep Learning; Rennes, France, 10–11 December 2018; pp. 1-8.
126. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic Minority Over-sampling Technique. J. Artif. Intell. Res.; 2002; 16, pp. 321-357. [DOI: https://dx.doi.org/10.1613/jair.953]
127. Han, H.; Wang, W.Y.; Mao, B.H. Borderline-SMOTE: A New Over-Sampling Method in Imbalanced Data Sets Learning. Advances in Intelligent Computing; Springer: Cham, Switzerland, 2005; Volume 3644, pp. 878-887.
128. Fereidooni, H.; Marchal, S.; Miettinen, M.; Mirhoseini, A.; Mollering, H.; Nguyen, T.D.; Rieger, P.; Sadeghi, A.-R.; Schneider, T.; Yalame, H. et al. SAFELearn: Secure Aggregation for private FEderated Learning. Proceedings of the 2021 IEEE Security and Privacy Workshops (SPW); San Francisco, CA, USA, 27 May 2021; pp. 56-62. [DOI: https://dx.doi.org/10.1109/spw53761.2021.00017]
129. Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H.B.; Patel, S.; Ramage, D.; Segal, A.; Seth, K. Practical Secure Aggregation for Federated Learning on User-Held Data. arXiv; 2016; arXiv: 1611.04482
130. Yang, C.S.; So, J.; He, C.; Li, S.; Yu, Q.; Avestimehr, S. LightSecAgg: Rethinking Secure Aggregation in Federated Learning. arXiv; 2021; arXiv: 2109.14236
131. LEAF. Available online: https://leaf.cmu.edu/ (accessed on 25 January 2022).
132. He, C.; Li, S.; So, J.; Zeng, X.; Zhang, M.; Wang, H.; Shen, L.; Yang, Y.; Yang, Q.; Avestimehr, S. et al. FedML: A Research Library and Benchmark for Federated Machine Learning. arXiv; 2020; arXiv: 2007.13518
133. Razmi, N.; Matthiesen, B.; Dekorsy, A.; Popovski, P. Ground-Assisted Federated Learning in LEO Satellite Constellations. IEEE Wirel. Commun. Lett.; 2022; [DOI: https://dx.doi.org/10.1109/LWC.2022.3141120]
134. Huang, B.; Li, X.; Song, Z.; Yang, X. FL-NTK: A Neural Tangent Kernel-based Framework for Federated Learning Convergence Analysis. arXiv; 2021; arXiv: 2105.05001
135. Garg, A.; Saha, A.K.; Dutta, D. Direct Federated Neural Architecture Search. arXiv; 2020; arXiv: 2010.06223
136. Zhu, W.; White, A.; Luo, J. Federated Learning of Molecular Properties with Graph Neural Networks in a Heterogeneous Setting. Cell Press; 2022; under review
137. Lee, J.W.; Oh, J.; Lim, S.; Yun, S.Y.; Lee, J.G. TornadoAggregate: Accurate and Scalable Federated Learning via the Ring-Based Architecture. arXiv; 2020; arXiv: 2012.03214
138. Cheng, G.; Chadha, K.; Duchi, J. Fine-tuning in Federated Learning: A simple but tough-to-beat baseline. arXiv; 2021; arXiv: 2108.07313
139. Sahu, A.K.; Li, T.; Sanjabi, M.; Zaheer, M.; Talwalkar, A.; Smith, V. On the convergence of federated optimization in heterogeneous networks. arXiv; 2018; arXiv: 1812.06127
140. Federated Learning: A Simple Implementation of FedAvg (Federated Averaging) with PyTorch | by Ece Işık Polat | Towards Data Science. Available online: https://towardsdatascience.com/federated-learning-a-simple-implementation-of-fedavg-federated-averaging-with-pytorch-90187c9c9577 (accessed on 25 January 2022).
141. Konečný, J.; McMahan, H.B.; Ramage, D.; Richtárik, P. Federated Optimization: Distributed Machine Learning for On-Device Intelligence. arXiv; 2016; arXiv: 1610.02527
142. FedSVRG Based Communication Efficient Scheme for Federated Learning in MEC Networks | Request PDF. Available online: https://www.researchgate.net/publication/352418092_FedSVRG_Based_Communication_Efficient_Scheme_for_Federated_Learning_in_MEC_Networks (accessed on 19 January 2022).
143. Liu, Y.; Kang, Y.; Zhang, X.; Li, L.; Cheng, Y.; Chen, T.; Hong, M.; Yang, Q. A Communication Efficient Collaborative Learning Framework for Distributed Features. arXiv; 2019; arXiv: 1912.11187
144. Wu, R.; Scaglione, A.; Wai, H.T.; Karakoc, N.; Hreinsson, K.; Ma, W.K. Federated Block Coordinate Descent Scheme for Learning Global and Personalized Models. arXiv; 2020; arXiv: 2012.13900
145. GitHub—REIYANG/FedBCD: Federated Block Coordinate Descent (FedBCD) code for ‘Federated Block Coordinate Descent Scheme for Learning Global and Personalized Models’, Accepted by AAAI Conference on Artificial Intelligence 2021. Available online: https://github.com/REIYANG/FedBCD (accessed on 25 January 2022).
146. Donevski, I.; Nielsen, J.J.; Popovski, P. On Addressing Heterogeneity in Federated Learning for Autonomous Vehicles Connected to a Drone Orchestrator. Front. Commun. Netw.; 2021; 2, 28. [DOI: https://dx.doi.org/10.3389/frcmn.2021.709946]
147. Mohri, M.; Sivek, G.; Suresh, A.T. Agnostic Federated Learning. Proceedings of the 36th International Conference on Machine Learning; Long Beach, CA, USA, 9–15 June 2019.
148. Ro, J.; Chen, M.; Mathews, R.; Mohri, M.; Suresh, A.T. Communication-Efficient Agnostic Federated Averaging. arXiv; 2021; arXiv: 2104.02748
149. Afonin, A.; Karimireddy, S.P. Towards Model Agnostic Federated Learning Using Knowledge Distillation. arXiv; 2021; arXiv: 2110.15210
150. Wang, H.; Yurochkin, M.; Sun, Y.; Papailiopoulos, D.; Khazaeni, Y. Federated Learning with Matched Averaging. arXiv; 2020; arXiv: 2002.06440
151. Layer-Wise Federated Learning with FedMA—MIT-IBM Watson AI Lab. Available online: https://mitibmwatsonailab.mit.edu/research/blog/fedma-layer-wise-federated-learning-with-the-potential-to-fight-ai-bias/ (accessed on 25 January 2022).
152. Golam Kibria, B.M.; Banik, S. Some Ridge Regression Estimators and Their Performances. J. Mod. Appl. Stat. Methods; 2016; 15, 12.
153. Zhao, L.; Ni, L.; Hu, S.; Chen, Y.; Zhou, P.; Xiao, F.; Wu, L. InPrivate Digging: Enabling Tree-based Distributed Data Mining with Differential Privacy. Proceedings of the IEEE INFOCOM 2018—IEEE Conference on Computer Communications; Honolulu, HI, USA, 16–19 April 2018; pp. 2087-2095. [DOI: https://dx.doi.org/10.1109/infocom.2018.8486352]
154. Wu, Y.; Cai, S.; Xiao, X.; Chen, G.; Ooi, B.C. Privacy preserving vertical federated learning for tree-based models. Proc. VLDB Endow.; 2020; 13, pp. 2090-2103. [DOI: https://dx.doi.org/10.14778/3407790.3407811]
155. Li, Q.; Wen, Z.; He, B. Practical Federated Gradient Boosting Decision Trees. Proc. Conf. AAAI Artif. Intell.; 2020; 34, pp. 4642-4649. [DOI: https://dx.doi.org/10.1609/aaai.v34i04.5895]
156. Liu, Y.; Liu, Y.; Liu, Z.; Liang, Y.; Meng, C.; Zhang, J.; Zheng, Y. Federated Forest. IEEE Trans. Big Data; 2020; [DOI: https://dx.doi.org/10.1109/TBDATA.2020.2992755]
157. Dong, T.; Li, S.; Qiu, H.; Lu, J. An Interpretable Federated Learning-based Network Intrusion Detection Framework. arXiv; 2022; arXiv: 2201.03134
158. Chen, Y.; Qin, X.; Wang, J.; Yu, C.; Gao, W. FedHealth: A Federated Transfer Learning Framework for Wearable Healthcare. IEEE Intell. Syst.; 2020; 35, pp. 83-93. [DOI: https://dx.doi.org/10.1109/MIS.2020.2988604]
159. Nikolaenko, V.; Weinsberg, U.; Ioannidis, S.; Joye, M.; Boneh, D.; Taft, N. Privacy-Preserving Ridge Regression on Hundreds of Millions of Records. Proceedings of the 2013 IEEE Symposium on Security and Privacy; Berkeley, CA, USA, 19–22 May 2013; pp. 334-348. [DOI: https://dx.doi.org/10.1109/sp.2013.30]
160. Yurochkin, M.; Agarwal, M.; Ghosh, S.; Greenewald, K.; Hoang, N.; Khazaeni, Y. Bayesian Nonparametric Federated Learning of Neural Networks. Proceedings of the 36th International Conference on Machine Learning; Long Beach, CA, USA, 9-15 June 2019; pp. 12583-12597.
161. Kibria, B.M.G.; Lukman, A.F. A New Ridge-Type Estimator for the Linear Regression Model: Simulations and Applications. Scientifica; 2020; 2020, pp. 1-16. [DOI: https://dx.doi.org/10.1155/2020/9758378]
162. Chen, Y.-R.; Rezapour, A.; Tzeng, W.-G. Privacy-preserving ridge regression on distributed data. Inf. Sci.; 2018; 451-452, pp. 34-49. [DOI: https://dx.doi.org/10.1016/j.ins.2018.03.061]
163. Awan, S.; Li, F.; Luo, B.; Liu, M. Poster: A reliable and accountable privacy-preserving federated learning framework using the blockchain. Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security; London, UK, 11–15 November 2019; pp. 2561-2563.
164. Sanil, A.P.; Karr, A.F.; Lin, X.; Reiter, J.P. Privacy preserving regression modelling via distributed computation. Proceedings of the KDD-2004—Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; Seattle, WA, USA, 22–25 August 2004; pp. 677-682.
165. An Example of Implementing FL for Linear Regression.|Download Scientific Diagram. Available online: https://www.researchgate.net/figure/An-example-of-implementing-FL-for-linear-regression_fig3_346038614 (accessed on 25 January 2022).
166. Anand, A.; Dhakal, S.; Akdeniz, M.; Edwards, B.; Himayat, N. Differentially Private Coded Federated Linear Regression. Proceedings of the 2021 IEEE Data Science and Learning Workshop (DSLW); Toronto, ON, Canada, 5–6 June 2021; pp. 1-6. [DOI: https://dx.doi.org/10.1109/dslw51110.2021.9523408]
167. Hardy, S.; Henecka, W.; Ivey-Law, H.; Nock, R.; Patrini, G.; Smith, G.; Thorne, B. Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption. arXiv; 2017; arXiv: 1711.10677
168. Mandal, K.; Gong, G. PrivFL: Practical privacy-preserving federated regressions on high-dimensional data over mobile networks. Proceedings of the 2019 ACM SIGSAC Conference on Cloud Computing Security Workshop; London, UK, 11 November 2019; pp. 57-68.
169. Sattler, F.; Müller, K.R.; Samek, W. Clustered Federated Learning: Model-Agnostic Distributed Multi-Task Optimization under Privacy Constraints. IEEE Trans. Neural Netw. Learn. Syst.; 2020; 32, pp. 3710-3722. [DOI: https://dx.doi.org/10.1109/TNNLS.2020.3015958] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32833654]
170. Marfoq, O.; Neglia, G.; Bellet, A.; Kameni, L.; Vidal, R. Federated Multi-Task Learning under a Mixture of Distributions. Adv. Neural Inf. Process. Syst.; 2021; 34.
171. Chen, F.; Luo, M.; Dong, Z.; Li, Z.; He, X. Federated Meta-Learning with Fast Convergence and Efficient Communication. arXiv; 2018; arXiv: 1802.07876
172. Zhou, F.; Wu, B.; Li, Z. Deep Meta-Learning: Learning to Learn in the Concept Space. arXiv; 2018; arXiv: 1802.03596
173. Lin, S.; Yang, G.; Zhang, J. A Collaborative Learning Framework via Federated Meta-Learning. Proceedings of the 2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS); Singapore, 29 November–1 December 2020; pp. 289-299. [DOI: https://dx.doi.org/10.1109/icdcs47774.2020.00032]
174. Federated Meta-Learning for Recommendation—arXiv Vanity. Available online: https://www.arxiv-vanity.com/papers/1802.07876/ (accessed on 25 January 2022).
175. Yue, S.; Ren, J.; Xin, J.; Zhang, D.; Zhang, Y.; Zhuang, W. Efficient Federated Meta-Learning over Multi-Access Wireless Networks. IEEE J. Sel. Areas Commun.; 2022; [DOI: https://dx.doi.org/10.1109/JSAC.2022.3143259]
176. Pye, S.K.; Yu, H. Personalized Federated Learning: A Combinational Approach. arXiv; 2021; arXiv: 2108.09618
177. Liu, B.; Wang, L.; Liu, M. Lifelong Federated Reinforcement Learning: A Learning Architecture for Navigation in Cloud Robotic Systems. IEEE Robot. Autom. Lett.; 2019; 4, pp. 4555-4562. [DOI: https://dx.doi.org/10.1109/LRA.2019.2931179]
178. Liang, X.; Liu, Y.; Chen, T.; Liu, M.; Yang, Q. Federated Transfer Reinforcement Learning for Autonomous Driving. arXiv; 2019; arXiv: 1910.06001
179. Wang, H.; Wu, Z.; Xing, E.P. Removing Confounding Factors Associated Weights in Deep Neural Networks Improves the Prediction Accuracy for Healthcare Applications. Biocomputing; 2018; 24, pp. 54-65. [DOI: https://dx.doi.org/10.1142/9789813279827_0006]
180. Basnayake, V. Federated Learning for Enhanced Sensor Reliability of Automated Wireless Networks. Master's Thesis; University of Oulu: Oulu, Finland, 2019.
181. Zeiler, M.; Fergus, R.; Wan, L.; Zhang, S.; Le Cun, Y. Regularization of Neural Networks using DropConnect. Int. Conf. Mach. Learn.; 2012; 28, pp. 1058-1066.
182. Krizhevsky, A.; Hinton, G. Learning Multiple Layers of Features from Tiny Images; University of Toronto: Toronto, ON, Canada, 2009.
183. Ren, J.; Ni, W.; Nie, G.; Tian, H. Research on Resource Allocation for Efficient Federated Learning. arXiv; 2021; arXiv: 2104.09177
184. Luo, M.; Chen, F.; Hu, D.; Zhang, Y.; Liang, J.; Feng, J. No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data. Adv. Neural Inf. Process. Syst.; 2021; 34.
185. Chen, M.; Mao, B.; Ma, T. FedSA: A staleness-aware asynchronous Federated Learning algorithm with non-IID data. Futur. Gener. Comput. Syst.; 2021; 120, pp. 1-12. [DOI: https://dx.doi.org/10.1016/j.future.2021.02.012]
186. Asad, M.; Moustafa, A.; Ito, T.; Aslam, M. Evaluating the Communication Efficiency in Federated Learning Algorithms. Proceedings of the 2021 IEEE 24th International Conference on Computer Supported Cooperative Work in Design (CSCWD); Dalian, China, 5–7 May 2021; pp. 552-557. [DOI: https://dx.doi.org/10.1109/cscwd49262.2021.9437738]
187. Shahid, O.; Pouriyeh, S.; Parizi, R.M.; Sheng, Q.Z.; Srivastava, G.; Zhao, L. Communication Efficiency in Federated Learning: Achievements and Challenges. arXiv; 2021; arXiv: 2107.10996
188. Kullback-Leibler Divergence—An Overview. ScienceDirect Topics. Available online: https://www.sciencedirect.com/topics/engineering/kullback-leibler-divergence (accessed on 24 March 2020).
189. Augenstein, S.; McMahan, H.B.; Ramage, D.; Ramaswamy, S.; Kairouz, P.; Chen, M.; Mathews, R. Generative Models for Effective ML on Private, Decentralized Datasets. arXiv; 2019; arXiv: 1911.06679
190. Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H.B.; Mironov, I.; Talwar, K.; Zhang, L. Deep Learning with Differential Privacy. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security; Vienna, Austria, 24–28 October 2016; pp. 308-318. [DOI: https://dx.doi.org/10.1145/2976749.2978318]
191. Ulm, G.; Gustavsson, E.; Jirstrand, M. Functional Federated Learning in Erlang (ffl-erl). International Workshop on Functional and Constraint Logic Programming; Springer: Cham, Switzerland, 2019; pp. 162-178. [DOI: https://dx.doi.org/10.1007/978-3-030-16202-3_10]
192. Sprague, M.R.; Jalalirad, A.; Scavuzzo, M.; Capota, C.; Neun, M.; Do, L.; Kopp, M. Asynchronous Federated Learning for Geospatial Applications. Communications in Computer and Information Science; Springer: Cham, Switzerland, 2019; Volume 967, pp. 21-28. [DOI: https://dx.doi.org/10.1007/978-3-030-14880-5_2]
193. Hegedűs, I.; Danner, G.; Jelasity, M. Gossip Learning as a Decentralized Alternative to Federated Learning. IFIP International Conference on Distributed Applications and Interoperable Systems; Springer: Cham, Switzerland, 2019; pp. 74-90. [DOI: https://dx.doi.org/10.1007/978-3-030-22496-7_5]
194. Zhao, Y.; Chen, J.; Zhang, J.; Wu, D.; Teng, J.; Yu, S. PDGAN: A Novel Poisoning Defense Method in Federated Learning Using Generative Adversarial Network. Algorithms and Architectures for Parallel Processing; Springer: Cham, Switzerland, 2020; pp. 595-609. [DOI: https://dx.doi.org/10.1007/978-3-030-38991-8_39]
195. Anelli, V.W.; Deldjoo, Y.; Di Noia, T.; Ferrara, A. Towards Effective Device-Aware Federated Learning. International Conference of the Italian Association for Artificial Intelligence; Springer: Cham, Switzerland, 2019; pp. 477-491. [DOI: https://dx.doi.org/10.1007/978-3-030-35166-3_34]
196. Mills, J.; Hu, J.; Min, G. Communication-Efficient Federated Learning for Wireless Edge Intelligence in IoT. IEEE Internet Things J.; 2019; 7, pp. 5986-5994. [DOI: https://dx.doi.org/10.1109/JIOT.2019.2956615]
197. Sozinov, K.; Vlassov, V.; Girdzijauskas, S. Human Activity Recognition Using Federated Learning. Proceedings of the 2018 IEEE International Conference on Parallel & Distributed Processing with Applications, Ubiquitous Computing & Communications, Big Data & Cloud Computing, Social Computing & Networking, Sustainable Computing & Communications (ISPA/IUCC/BDCloud/SocialCom/SustainCom); Melbourne, VIC, Australia, 11–13 December 2018; pp. 1103-1111. [DOI: https://dx.doi.org/10.1109/bdcloud.2018.00164]
198. Zheng, Z.; Zhou, Y.; Sun, Y.; Wang, Z.; Liu, B.; Li, K. Federated Learning in Smart Cities: A Comprehensive Survey. arXiv; 2021; arXiv: 2102.01375
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
The federated learning (FL) technique supports the collaborative training of machine learning and deep learning models for edge network optimization. However, a complex edge network with heterogeneous devices under different constraints can degrade FL performance, which poses a problem in this area. Therefore, a body of research has emerged that designs new frameworks and approaches to improve federated learning processes. The purpose of this study is to provide an overview of the FL technique and its applicability in different domains. The key focus of the paper is a systematic literature review of recent research studies that clearly describes the adoption of FL in edge networks. The search procedure was performed from April 2020 to May 2021, with an initial total of 7546 papers published between 2016 and 2020. The systematic literature review synthesizes and compares the algorithms, models, and frameworks of federated learning. Additionally, we present the scope of FL applications in different industries and domains. Careful investigation of the studies revealed that 25% of the studies used FL in IoT and edge-based applications, 30% implemented the FL concept in the health industry, 10% in NLP, 10% in autonomous vehicles, 10% in mobile services, 10% in recommender systems, and 5% in FinTech. A taxonomy is also proposed for implementing FL for edge networks in different domains. Moreover, another novelty of this paper is that the datasets used for implementing FL are discussed in detail, to give researchers an overview of the distributed datasets that can be used when employing FL techniques. Lastly, this study discusses the current challenges of implementing the FL technique. We found that the areas of medical AI, IoT, edge systems, and the autonomous industry can adopt FL in many of their sub-domains; however, the challenges these domains can encounter are statistical heterogeneity, system heterogeneity, data imbalance, resource allocation, and privacy.
1 Department of Computer Science, SST, University of Management and Technology, Lahore 54000, Pakistan;
2 Department of Computer Science, SST, University of Management and Technology, Lahore 54000, Pakistan;
3 Department of Computer Science, COMSATS University Islamabad Lahore Campus, Lahore 54000, Pakistan;
4 Department of Software and Communication Engineering, Hongik University, Sejong 30016, Korea