1. Introduction
In the era of digital healthcare transformation, the Internet of Medical Things (IoMT) has emerged as a form of critical infrastructure, enabling healthcare providers to collect, analyze, and utilize vast amounts of patient data efficiently [1]. The proliferation of IoMT devices has facilitated unprecedented levels of patient monitoring and care, enabling rapid innovations in personalized medicine and remote healthcare delivery [2]. The goal of collaborative healthcare management in geographically distributed IoMT networks is to efficiently allocate and manage resources across various edge devices to meet healthcare demands effectively. However, as IoMT adoption grows, so do the challenges associated with ensuring security, privacy, and compliance, particularly in distributed healthcare environments [3]. A recent survey highlights the landscape of IoMT adoption, revealing that wearable devices are the leading IoMT category, utilized by 69.6% of surveyed healthcare organizations [2]. Remote patient monitoring systems follow with a 62.3% adoption rate, and smart medical equipment is used by 60.2% of organizations. This diverse usage emphasizes the necessity for robust strategies to manage multi-device IoMT environments effectively [4,5].
The geographically distributed IoMT networks, where medical devices and edge computing resources are deployed across multiple locations, offer key advantages such as reduced latency in critical care scenarios, improved resilience, and enhanced compliance with regional health data regulations. These architectures are vital for meeting the demands of global healthcare delivery and ensuring uninterrupted service during localized disruptions or emergencies [6]. However, they also raise significant security concerns. The need to protect sensitive health information across many locations, defend against a broader spectrum of cyber threats, and ensure compliance with distinct healthcare regulatory standards adds layers of complexity [7]. Although several advanced security measures have been designed for IoMT, significant barriers remain in meeting the security requirements of geographically dispersed healthcare networks. These problems include substantial communication overhead that causes latency in time-sensitive medical applications, challenges with data integrity across heterogeneous medical devices, and scalability issues arising from the inability of current security mechanisms to manage growing volumes of health data [8]. Additionally, medical device combinations are commonly non-standardized, which leads to hazards and discrepancies. Inadequate visibility in auditing and monitoring further contributes to the inability to identify and address security issues in healthcare settings [9]. Furthermore, providing comprehensive security demands considerable processing and storage capacity from medical devices with limited resources, which affects their overall effectiveness and cost. These difficulties make an effective solution for IoMT contexts urgently needed. The proposed architecture, EdgeGuard, combines federated learning’s privacy with blockchain’s decentralized security [9,10].
It is specifically designed for the IoMT. The framework, on the one hand, enhances health data security and optimizes resource management in collaborative healthcare environments and, on the other, permits safe and effective cooperation between dispersed medical devices and edge nodes.
EdgeGuard is an innovative framework that seeks to redefine the contours of secure and efficient data management in IoMT networks. It leverages blockchain technology, adaptive federated learning, and edge computing capabilities to address significant limitations of current approaches in healthcare data security and privacy [3,11]. EdgeGuard implements a novel decentralized architecture that optimizes resource utilization across diverse IoMT devices while ensuring robust data privacy and integrity [12]. Our framework introduces several key innovations: a lightweight blockchain consensus mechanism specifically designed for IoMT networks, an adaptive aggregation function for privacy-preserving federated learning, and intelligent resource allocation through reinforcement learning [13]. As illustrated in Figure 1, EdgeGuard introduces a secure blockchain layer that enables safe collaborative learning while preventing information leakage of patient data. The main contributions of this work are as follows:
Privacy-preserving federated learning architecture: A novel adaptive aggregation mechanism that enables secure model training across distributed healthcare institutions [14]. This architecture incorporates differential privacy techniques and secure aggregation protocols, ensuring patient privacy while optimizing model performance through quality-aware aggregation.
IoMT-optimized blockchain consensus: A lightweight consensus mechanism specifically engineered for resource-constrained medical devices, providing robust security guarantees while maintaining efficiency. This mechanism ensures data integrity and creates immutable audit trails for regulatory compliance.
Intelligent resource management: Advanced optimization techniques for heterogeneous IoMT environments include the following:
– A dynamic model complexity adaptation based on available computational resources.
– Adaptive learning rate scheduling, considering resource constraints.
– Quality-aware device selection for optimal federated learning rounds.
– Efficient model update compression for bandwidth-constrained scenarios.
Comprehensive performance framework: A multi-dimensional evaluation framework that assesses not only diagnostic accuracy but also computational efficiency, communication effectiveness, energy consumption, and fairness across diverse IoMT devices, ensuring practical deployability in real-world healthcare settings.
This paper is structured as follows: Section 2 discusses the related work. Section 3 presents EdgeGuard’s problem formulation, including the system model, assumptions, threat model, problem statement, and design goals. Section 4 describes the proposed EdgeGuard framework in detail, together with its operational design and algorithm analysis. Section 5 discusses the experimental setup, results, and analysis, and Section 6 concludes the paper.
2. Related Work
The integration of blockchain technology with federated learning in IoMT networks has emerged as a significant research area in recent years, driven by concerns over data privacy and security vulnerabilities in healthcare systems. In this section, we will analyze some of the most recent advances in this area, focusing on three main aspects: blockchain mechanisms, federated learning in healthcare, and integrated decentralized blockchain-FL architectures. In the context of IoMT networks, ref. [8] presented a security framework that provides security when medical data are being transferred based on a combination of encryption techniques, pattern recognition modules, and adaptive learning mechanisms. Although this approach has made advances in both the detection of anomalies as well as attack resistance, it neither considers the quality of data in a distributed setting nor does it take into account the computational capabilities of IoMT products. In addition, the framework lacks mechanisms for enabling decentralized yet secure collaborative machine learning across healthcare institutions—a must-have component to enhance diagnostic models while guaranteeing the privacy of records. This shortcoming is something EdgeGuard addresses with its blockchain-secured federated learning architecture.
Mohammed et al. [15] proposed a federated learning paradigm for collaborative use of health information that preserves privacy through secure multi-party computation and differential privacy. Although it provides significant privacy preservation, no proper consideration is given to securing the model updates, the participating devices, or the data quality. A recent work by Biken Singh et al. [16] introduces a blockchain-supported federated learning system for WBANs, focusing on energy efficiency and privacy through QNNs, differential privacy, and homomorphic encryption. However, their approach primarily targets energy optimization and basic privacy preservation, with no consideration of data quality assessment or device reliability in medical contexts. The authors also do not provide explicit security requirements for IoMT in healthcare environments. EdgeGuard bridges this gap through its adaptive aggregation and lightweight consensus mechanisms, which are specifically tailored for IoMT.
Prior work by Yu et al. [17] proposed an I-UDEC framework combining blockchain, AI, and federated learning to optimize computation offloading and resource allocation in ultra-dense edge computing. The work obtained great improvements in task execution time but was mainly focused on general IoT scenarios and did not consider specific medical data sensitivity and healthcare regulatory compliance. Furthermore, their blockchain implementation was not designed for resource-constrained medical devices—something EdgeGuard addresses in its healthcare-specific design and lightweight consensus mechanism. The paper by Ali Kashif et al. [18] explored the integration of federated learning in healthcare Metaverse applications, highlighting potential benefits and challenges in medical diagnosis, patient monitoring, and drug discovery. While comprehensive in scope, the paper primarily focused on theoretical aspects and future possibilities, lacking practical implementation details or specific solutions for current IoMT security and privacy challenges that EdgeGuard addresses through its concrete blockchain-secured federated learning architecture. Previous research [19] proposed combining DFT with differential privacy in federated learning for healthcare, achieving better accuracy and reduced communication costs, but their approach lacked security mechanisms for model updates and did not address device reliability or data quality validation in medical networks—gaps that EdgeGuard specifically addresses.
Although the existing works make substantial contributions to healthcare security through federated learning and blockchain integration, they largely focus on individual aspects such as privacy preservation or energy efficiency and do not achieve a holistic solution for IoMT environments. Most approaches miss critical points related to data quality assessment and device reliability and, most importantly, lack lightweight security mechanisms tailored for resource-constrained medical devices.
EdgeGuard addresses this by providing an integrated framework that combines blockchain-secured federated learning with IoMT-specific optimizations. Our solution uniquely integrates adaptive aggregation based on data quality and device reliability, a lightweight consensus mechanism designed for medical devices, and comprehensive security measures that maintain HIPAA compliance while enabling efficient collaborative learning. The comparative analysis in Table 1 demonstrates that current works address particular aspects of privacy and security for health networks, but not in totality. As such, EdgeGuard distinguishes itself by combining privacy preservation, lightweight blockchain security, resource optimization, data quality assessment, and device reliability monitoring within a single healthcare-specific framework.
3. Problem Formulation
3.1. System Model
We consider a vast network of IoMT healthcare data resources formed by more than a thousand devices. EdgeGuard adopts an adaptive federated learning model with dynamic aggregation to analyze medical data while ensuring privacy and security in these rapidly developing digital health systems. Edge devices, local datasets, central servers, edge servers, local and global models, adaptive aggregation functions, blockchain security layers, and more are the main components used in this proposed model. The following are the main parts of our system:
Edge devices (): A set of N IoMT edge devices that differ in their computing and sensing capabilities. Ranging from simple fitness trackers to cutting-edge medical equipment, these edge devices are important components of our distributed learning setup.
Local datasets (): Every edge device maintains a local dataset, thus reflecting a different portion of the global health data. These datasets are characterized by their size and a quality metric.
Edge servers (): A collection of M edge servers arranged to enable intermediate aggregation and computational offloading. These servers improve system scalability and responsiveness by acting as a link between the central server and resource-limited edge devices, which is particularly valuable when the network contains many devices with limited individual capability.
Central server (): A high-performance central server that orchestrates the federated learning process, aggregates model updates, and maintains the global model. It is responsible for initiating learning rounds and disseminating the updated global model to edge devices.
Global and local models (): A shared neural network with d parameters; the global model represents the collective knowledge extracted from various medical data sources across the network. A collection of local model instances accompanies it, where each instance holds the model parameters trained on the local dataset of its edge device. These local models feed into the global model and are updated regularly, enabling a distributed learning process that protects data privacy and makes use of the IoMT network’s collective insights.
Communication links (): The group of communication lines that link the central server, edge servers, and edge devices. The bandwidth and latency of each connection are its key characteristics.
Adaptive aggregation function (): A completely novel function that dynamically weighs each edge device’s contribution according to measures for data quality and reliability. It is defined as follows:
w^{t+1} = \sum_{i=1}^{N} \frac{Q_i R_i}{\sum_{j=1}^{N} Q_j R_j}\, w_i^{t+1} \qquad (1)
where R_i denotes device i’s reliability and Q_i denotes the data quality measure.
Blockchain security layer (BSL): To ensure the security, integrity, and traceability of the federated learning process [20], our system incorporates a BSL. This decentralized ledger system consists of a chain of blocks, each containing a set of verified transactions. A mapping function records which entities produced the transactions in each block, while a cryptographic hash function ensures the immutability of the blockchain. A validation function V verifies the legitimacy of transactions and blocks. Each model update and aggregation operation is recorded as a transaction in the blockchain, ensuring the integrity and traceability of the learning process:
H_k = \text{hash}(H_{k-1} \,\|\, B_k) \qquad (2)
where the symbol \| denotes concatenation. This blockchain layer gives our system a higher level of security by providing an immutable and tamper-proof record of all learning activities inside the IoMT network.
Quality assessment module (): A module that grades the quality of each local dataset based on various criteria, such as data distribution, label accuracy, and task relevance. A quality score is derived for every local dataset.
Reliability evaluation function (): A function that scores the reliability of the edge devices over time, accounting for hardware capability, uptime, and consistency of contributions. For each device at time t, it returns a reliability score.
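The hash-chained structure of the blockchain security layer described above can be sketched in a few lines of Python; this is a minimal illustration under our own naming (SimpleLedger and its fields are not part of EdgeGuard's specification):

```python
import hashlib
import json

class SimpleLedger:
    """Minimal hash-chained ledger: each block commits to its predecessor,
    so tampering with any recorded transaction invalidates all later links."""

    def __init__(self):
        self.blocks = []  # each block: {"prev_hash", "transactions", "hash"}

    def append(self, transactions):
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = json.dumps({"prev_hash": prev_hash, "transactions": transactions},
                             sort_keys=True)
        block = {"prev_hash": prev_hash,
                 "transactions": transactions,
                 "hash": hashlib.sha256(payload.encode()).hexdigest()}
        self.blocks.append(block)
        return block

    def verify(self):
        """Validation function V: recompute every hash and check the chain links."""
        prev_hash = "0" * 64
        for block in self.blocks:
            if block["prev_hash"] != prev_hash:
                return False
            payload = json.dumps({"prev_hash": block["prev_hash"],
                                  "transactions": block["transactions"]},
                                 sort_keys=True)
            if hashlib.sha256(payload.encode()).hexdigest() != block["hash"]:
                return False
            prev_hash = block["hash"]
        return True

ledger = SimpleLedger()
ledger.append(["device_3: model update, round 1"])
ledger.append(["server: aggregation, round 1"])
assert ledger.verify()
ledger.blocks[0]["transactions"][0] = "device_3: poisoned update"  # tamper
assert not ledger.verify()
```

Because each hash covers the previous block's hash, altering any recorded model update breaks verification of the entire suffix of the chain, which is the property the BSL relies on for its audit trail.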
In this complex ecosystem, the adaptive federated learning scheme proceeds in a series of communication rounds. For each round t, we choose a subset of edge devices. Based on their own datasets, these edge devices compute local updates:
w_i^{t+1} = w_i^t - \eta\, \nabla F_i(w_i^t) \qquad (3)
where F_i denotes the loss function and \eta represents the learning rate. Subsequently, using our adaptive aggregation function, the central server aggregates these updates:
(4)
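Equations (3) and (4) can be illustrated with a toy round in which each device takes a gradient step on a simple quadratic loss; the loss, the data values, and the choice of weighting local models by the product of quality and reliability scores are illustrative assumptions, not EdgeGuard's exact rules:

```python
# Toy federated round: each device takes one SGD step on a quadratic loss
# F_i(w) = 0.5 * (w - c_i)^2, then the server aggregates the local models
# weighted by data quality Q_i and reliability R_i (illustrative weighting).

def local_update(w_global, c_i, lr=0.5):
    grad = w_global - c_i            # dF_i/dw for the quadratic loss
    return w_global - lr * grad      # one local SGD step, per Equation (3)

def adaptive_aggregate(local_models, quality, reliability):
    weights = [q * r for q, r in zip(quality, reliability)]
    total = sum(weights)
    return sum(a * w for a, w in zip(weights, local_models)) / total

w = 0.0                               # initial global model
targets = [1.0, 2.0, 3.0]             # each device's local optimum c_i
quality = [1.0, 0.5, 1.0]
reliability = [1.0, 1.0, 0.2]
for _ in range(30):                   # repeated communication rounds
    locals_ = [local_update(w, c) for c in targets]
    w = adaptive_aggregate(locals_, quality, reliability)

# The global model converges to the Q*R-weighted mean of the local optima.
expected = sum(q * r * c for q, r, c in zip(quality, reliability, targets)) / \
           sum(q * r for q, r in zip(quality, reliability))
assert abs(w - expected) < 1e-6
```

The fixed point of this iteration is the quality-and-reliability-weighted average of the device optima, showing how unreliable or low-quality devices pull the global model less strongly.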
In this dynamic aggregation technique, IoMT-wide learning depends not just on the volume of data but on the quality of the data along with the credibility of its origin. This, in turn, enables a more adaptive and robust learning process.
3.2. Assumptions and Threat Model
This section presents the main assumptions and threats that shaped our system architecture and security measures while designing EdgeGuard for safe and effective federated learning in IoMT healthcare networks.
3.2.1. Main Assumptions
-
Data privacy and locality: The local dataset for every edge device remains on the device. According to healthcare data standards and patient privacy, only model updates are shared. Formally:
(5)
-
Device heterogeneity and intermittent connectivity: The edge devices, , are heterogeneous in terms of processing capability and network reliability. We represent this heterogeneity by defining a time-varying subset of devices that are active, in each round t:
(6)
-
Semi-honest participants: Participants follow the protocol steps correctly but may attempt to infer additional information from the data they receive. We assume the reliability assessment function represents this behavior over time:
(7)
3.2.2. Primary Threats
-
Data poisoning attacks: Malicious entities may manipulate the local dataset or introduce fake data into the global model updates. We represent this threat as a perturbation of local updates:
\tilde{w}_i^t = w_i^t + \delta_i, \quad \delta_i \sim \mathcal{A} \qquad (8)
where \mathcal{A} is an adversarial distribution.
-
Privacy breaches: Adversaries could try to reconstruct private information from model updates. Differential privacy bounds this risk:
\Pr[f(D) \in S] \le e^{\epsilon}\, \Pr[f(D') \in S] + \delta \qquad (9)
where datasets D and D' differ by one record; here, f is our learning algorithm, \epsilon is the privacy budget, and \delta is the failure probability.
-
Integrity and authentication attacks: Attackers might impersonate devices or interfere with model updates. To address this, our blockchain security layer ensures the following:
V(B_k) = \text{true}, \quad \forall B_k \in \mathcal{B} \qquad (10)
where B_k is a block in the blockchain, and V is the validation function.
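To illustrate the differential privacy guarantee of Equation (9), the sketch below clips each model update and adds Gaussian noise calibrated by the classic Gaussian mechanism; the clipping norm and the privacy parameters here are illustrative choices, not values prescribed by EdgeGuard:

```python
import math
import random

def privatize_update(update, clip_norm=1.0, epsilon=1.0, delta=1e-5, rng=random):
    """Clip an update vector to bound its L2 sensitivity, then add Gaussian
    noise calibrated for (epsilon, delta)-DP via the classic Gaussian mechanism."""
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [x * scale for x in update]
    # Standard deviation for the Gaussian mechanism with sensitivity = clip_norm
    sigma = clip_norm * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    return [x + rng.gauss(0.0, sigma) for x in clipped]

rng = random.Random(42)
noisy = privatize_update([3.0, 4.0], clip_norm=1.0, epsilon=1.0, delta=1e-5, rng=rng)
# The clipped update has L2 norm 1.0; individual coordinates are masked
# by noise whose scale grows as the privacy budget epsilon shrinks.
assert len(noisy) == 2
```

Clipping bounds how much any single patient record can shift the shared update, and the added noise then makes updates computed on neighboring datasets statistically hard to distinguish.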
With its blockchain-based integrity verification, adaptive aggregation function f, and privacy-preserving methods, EdgeGuard addresses these fundamental assumptions and threats. The reliability evaluation function R and the quality assessment module are designed to counteract data poisoning attacks, and the blockchain layer offers an immutable audit trail to guarantee the authenticity and integrity of model updates. By keeping raw data localized and introducing noise to model updates, the federated learning method, when paired with differential privacy measures, naturally contributes to protection against privacy breaches.
3.3. Problem Statement and Design Goals
In this study, a federated learning system, called EdgeGuard, is developed using blockchain technology to enable safe and effective cooperation in Internet of Medical Things (IoMT) networks with N distributed edge devices. The challenge involves effectively carrying out federated learning tasks across many IoMT devices while reducing the threats identified above (data poisoning, privacy breaches, and integrity attacks), so as to enhance the learning process’s overall efficiency and provide strong security in a cooperative healthcare setting. During the federated learning process, the objective is to maximize data utility and model accuracy while minimizing communication overhead, potential threats, and model divergence in the presence of unknown malicious clients, such that we have the following:
(11)
where x_{i,j} = 1 if edge device i participates in learning round j, and 0 otherwise. Equation (12) formulates the threat model by incorporating the considered potential threats (data poisoning, privacy breaches, and integrity attacks), aiming to minimize these risks and enhance the overall security of the collaborative environment:
(12)
Equation (13) is formulated in a way that guarantees the effective distribution of edge devices to learning rounds across geographically dispersed data centers:
(13)
The critical constraints that must be satisfied during the federated learning process within the IoMT network are stated in Equations (14)–(18):
(14)
(15)
(16)
(17)
(18)
The constraint C1 specifies that each edge device must participate in exactly one learning round; the constraints {C2–C4} state that the available resource capacity (CPU, RAM, bandwidth) of the edge network must be greater than or equal to the total requested resource capacity of participating edge devices; constraint C5 specifies the geographical constraints, indicating the availability of edge devices within the IoMT network. The design goals of EdgeGuard are, thus, formulated as a multi-objective optimization problem:
(19)
subject to constraints {C1–C5}, where w denotes the global model parameters. To achieve these goals, EdgeGuard employs the following:
An adaptive aggregation function f to balance data utility and privacy:
(20)
A blockchain security layer to ensure integrity and traceability.
A quality assessment module to evaluate data quality.
A reliability evaluation function to assess device trustworthiness.
These components work in concert to create a secure, efficient, and privacy-preserving federated learning system for IoMT healthcare networks.
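As a concrete illustration of how such a multi-objective goal can be scalarized into a single score for comparing candidate configurations, consider the sketch below; the penalty weights and candidate values are illustrative assumptions, not parameters from EdgeGuard:

```python
# Scalarization of the multi-objective design goal: reward data utility and
# model accuracy while penalizing communication overhead and threat exposure.
# The lambda weights are illustrative choices.

def objective(utility, accuracy, comm_cost, threat_risk, lam_c=0.3, lam_t=0.5):
    return utility + accuracy - lam_c * comm_cost - lam_t * threat_risk

def select_plan(candidates):
    """Pick the candidate device-allocation plan with the best scalarized score."""
    return max(candidates, key=lambda c: objective(**c))

plans = [
    {"utility": 0.8, "accuracy": 0.90, "comm_cost": 0.4, "threat_risk": 0.1},
    {"utility": 0.9, "accuracy": 0.92, "comm_cost": 1.5, "threat_risk": 0.1},
    {"utility": 0.7, "accuracy": 0.95, "comm_cost": 0.3, "threat_risk": 0.6},
]
best = select_plan(plans)
assert best["comm_cost"] == 0.4  # balanced plan beats costlier or riskier ones
```

In practice the feasible candidates would be those satisfying constraints {C1–C5}; the scalarized score then ranks only admissible allocations.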
4. Proposed Framework
The EdgeGuard framework in Figure 1 enables secure federated learning across distributed IoMT devices: the edge devices perform local model training over sensitive health data, and the encrypted model updates are transmitted over the blockchain layer, safeguarding data integrity and privacy. Guided by continuous security analysis and resource management, the central server then aggregates these updates into a global model via adaptive aggregation. This technique enables collaborative learning while raw patient data remain localized, balancing usefulness and privacy against system efficiency in challenging medical situations. EdgeGuard, therefore, addresses particular challenges in processing remote medical data and helps provide deeper healthcare insights by combining various modules without compromising individual patient privacy or system security.
4.1. EdgeGuard Framework
The EdgeGuard architecture consists of six main steps: local model training, local model upload, cross-verification, block generation and propagation, adaptive aggregation, and global model update. These procedures provide safe and efficient federated learning in IoMT healthcare networks.
4.1.1. Local Model Training
The local training process is independently conducted using locally stored data on each IoMT edge device. We consider N edge devices. Let there be M medical sensors at each edge device sending their health data to the IoMT environment for processing and analysis. Each sensor generates a set of health measurements, along with particular parameters such as the timestamp, sensor type, and measurement value. Each edge device has computational resources characterized by CPU, memory, and bandwidth capacities. A convolutional neural network (CNN) is utilized at the edge devices to process and analyze the health data. The neural network comprises p, q, and r neurons at the input, hidden, and output layers, respectively. These layers are interconnected through NN weights, with the size of the NN as S, such that we have the following:
(21)
The NN weights and biases are initialized randomly. The CNN collects the historical health data and normalizes the data to provide p input values into the input layer. The prediction process consists of three main steps: training, testing, and prediction. Data validation is conducted to improve the model’s performance, using the mean absolute percentage error (MAPE) as the error function to assess the model’s accuracy. The pre-processing of data extracts health measurements from different sensors and aggregates them over a fixed time interval. To normalize the input data within the range of [0, 1], min–max scaling is applied, as shown in Equation (22):
\hat{x}_i = \frac{x_i - x_{\min}}{x_{\max} - x_{\min}} \qquad (22)
In the dataset, x_{\max} represents the highest value obtained, while x_{\min} corresponds to the lowest value. The normalized data comprise the collection of all normalized values \hat{x}_i. These normalized one-dimensional values are utilized as input to the input layer of the CNN. This model analyzes the previous p health data values to predict the health status at the next time instance. The ReLU activation function, as depicted in Equation (23), is used in the hidden layers:
\text{ReLU}(x) = \max(0, x) \qquad (23)
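The min–max normalization of Equation (22) and the ReLU activation of Equation (23) can be written directly in plain Python for illustration:

```python
def min_max_normalize(values):
    """Scale raw sensor readings into [0, 1], as in Equation (22)."""
    lo, hi = min(values), max(values)
    span = hi - lo
    return [(v - lo) / span for v in values] if span else [0.0] * len(values)

def relu(x):
    """ReLU activation used in the hidden layers, as in Equation (23)."""
    return max(0.0, x)

heart_rates = [62, 75, 118, 90]          # illustrative raw measurements
normalized = min_max_normalize(heart_rates)
assert normalized[0] == 0.0 and normalized[2] == 1.0
assert relu(-0.3) == 0.0 and relu(0.7) == 0.7
```

Normalizing per time window keeps heterogeneous sensor scales (heart rate, SpO2, temperature) comparable before they enter the shared input layer.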
The evaluation of the accuracy and performance of the local model training process is carried out using the MAPE score, as given in Equation (24):
\text{MAPE} = \frac{100\%}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right| \qquad (24)
where n represents the total number of data samples, while y_i and \hat{y}_i correspond to the actual and predicted health status, respectively. Stochastic gradient descent with momentum (SGD-M) is employed to achieve dynamic and adaptive optimization of the network weights. In this context, the velocity v_t represents the gradient change needed to reach the global minimum, and the weight update is expressed in Equation (25):
w_{t+1} = w_t - v_t \qquad (25)
The updated weight vector is represented as w_{t+1}, while the current weight vector is denoted as w_t. The calculation of v_t can be performed using Equation (26):
v_t = \gamma\, v_{t-1} + \eta\, \nabla L(w_t) \qquad (26)
Here, the momentum is represented by the term \gamma\, v_{t-1}. The constant \gamma has a value between 0 and 1, the learning rate is denoted as \eta, and \nabla L(w_t) corresponds to the gradient of the loss function for the weight. v_{t-1} represents the velocity at the previous step. Then, the local model is uploaded to the blockchain to complete federated learning aggregation.
4.1.2. Local Model Upload
In EdgeGuard, we collect model updates from our medical devices, represented as . These updates are added to our blockchain, forming a series of connected blocks . Each block has two parts: a body and a header. The body, shown in Equation (27), contains the model updates and calculation times, as follows:
(27)
Here, w_k^l denotes the update from device k for training round l, and the associated timestamp denotes how long device k takes to compute it. The header, given by Equation (28), carries the block metadata, as follows:
(28)
The pointer to the previous block links the chain together, the block generation rate indicates how quickly blocks are produced, and the PoW solution proves that the block is legitimate. We also track the block size using Equation (29):
S_B = h + s_u \cdot N_D \qquad (29)
This depends on the header size h, the size of each update s_u, and the number of devices N_D. This setup keeps our medical AI updates organized and secure, balancing technical precision with practical application in our IoMT network [21].
4.1.3. Cross-Verification
Miners broadcast and verify model updates, accumulate verified updates in a candidate block B, and finalize the block if it follows Equation (30):
(30)
4.1.4. Block Generation and Propagation
EdgeGuard employs a proof of work (PoW) mechanism for secure block generation and propagation. This process unfolds in three key steps: hash generation, block generation rate determination, and block propagation with ledger update. In the hash generation step, a miner m in the network computes a hash value H by iteratively modifying a nonce value N. The goal is to find a hash that satisfies the condition expressed in Equation (31):
H(B \,\|\, N) \le T \qquad (31)
Here, H represents the hash function, N is the nonce, and T denotes the target value that defines the PoW difficulty. The block generation rate, denoted as \lambda, is inversely proportional to the PoW difficulty, which is reflected in the target hash value T. This relationship is captured in Equation (32):
\lambda \propto T \qquad (32)
This inverse relationship implies that a more stringent target value T results in a lower \lambda, thereby reducing the frequency of block generation. As stated in Equation (33), upon generation of a valid hash by a miner, the new block B is verified, approved, and disseminated to all miners. At this point, the miners cease their proof of work calculations and append B to their local ledgers.
(33)
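The hash generation step can be sketched as follows; for simplicity the target condition H ≤ T is expressed as requiring a number of leading zero hex digits, and the difficulty is kept small so the search finishes quickly:

```python
import hashlib

def mine(block_payload: str, difficulty: int = 3):
    """Iterate the nonce N until H(block || N) meets the target, here
    expressed as `difficulty` leading zero hex digits of the SHA-256 digest."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_payload}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

def verify(block_payload: str, nonce: int, difficulty: int = 3) -> bool:
    """Verification costs a single hash, far cheaper than the mining search."""
    digest = hashlib.sha256(f"{block_payload}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce, digest = mine("round-7 model updates", difficulty=3)
assert verify("round-7 model updates", nonce, difficulty=3)
# Re-using this nonce with a tampered payload would almost surely fail
# verification, since the digest changes completely.
```

Each additional zero digit multiplies the expected number of nonce trials by 16, which is the knob behind the block generation rate relationship in Equation (32).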
The proof-of-work method, thus, ensures integrity and security within the EdgeGuard architecture and provides a solid foundation for the decentralized storage and validation of model changes within the IoMT network.
4.1.5. Adaptive Aggregation and Global Model Update
Another innovation is the adaptive aggregation function, which weighs contributions from each edge device as a function of data quality and device reliability. Such a feature is critical for ensuring that the federated learning process remains robust and effective in the presence of data quality issues and security threats. Let the gradient updates be acquired from the blockchain layer. The central server then applies the adaptive aggregation function f to obtain the new global gradient at time step t + 1. Mathematically, we have the following:
g^{t+1} = f\big(\{g_i^t\}_{i \in S_t}, \{Q_i\}, \{R_i\}\big) \qquad (34)
where S_t is the set of participating devices in round t, Q_i is the quality score of device i’s data, and R_i is the reliability score of device i. The adaptive aggregation function f is defined as follows:
f\big(\{g_i^t\}\big) = \sum_{i \in S_t} \alpha_i\, g_i^t \qquad (35)
where \alpha_i is the weight assigned to device i’s update and is calculated as follows:
\alpha_i = \frac{Q_i R_i \exp\big(-\beta \lVert g_i^t - \bar{g}^t \rVert\big)}{\sum_{j \in S_t} Q_j R_j \exp\big(-\beta \lVert g_j^t - \bar{g}^t \rVert\big)} \qquad (36)
Here, \bar{g}^t is the average of all updates, and \beta is a hyperparameter controlling the influence of update similarity. This formulation ensures the following:
Higher-quality data (higher Q_i) have more influence on the global model.
More reliable devices (higher R_i) contribute more significantly.
Updates that are closer to the average (potentially more trustworthy) are given higher weight.
The quality score Q_i is computed by the quality assessment module, as follows:
Q_i = \theta_1 C_i + \theta_2 V_i - \theta_3 O_i \qquad (37)
where C_i denotes the completeness of the data, V_i denotes the validity, O_i denotes the outlier ratio, and \theta_1, \theta_2, \theta_3 denote learnable parameters. The reliability score is calculated by the reliability evaluation function (REF), as follows:
R_i^t = \mu\, R_i^{t-1} + (1 - \mu)\big(\omega_1 U_i + \omega_2 A_i - \omega_3 E_i\big) \qquad (38)
where R_i^{t-1} is the previous reliability score, U_i is the uptime ratio, A_i is the contribution accuracy, E_i is the error rate, \mu is a smoothing factor, and \omega_1, \omega_2, \omega_3 are learnable parameters. This adaptive aggregation mechanism allows EdgeGuard to conduct the following:
Mitigate the impact of low-quality or malicious updates.
Adapt to changing device behaviors and data characteristics.
Improve the overall robustness and accuracy of the global model.
Provide an implicit defense against various attacks, including data poisoning and free-riding.
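The weighting scheme of Equations (34)–(36) can be sketched as follows; the exponential similarity kernel and the sample update values are illustrative, and quality and reliability scores are taken as given rather than computed via Equations (37) and (38):

```python
import math

def aggregate(updates, quality, reliability, beta=1.0):
    """Weight each device's update by data quality, reliability, and its
    similarity to the mean update, then return the weighted average."""
    n = len(updates)
    mean = [sum(col) / n for col in zip(*updates)]

    def dist(u):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, mean)))

    raw = [q * r * math.exp(-beta * dist(u))
           for u, q, r in zip(updates, quality, reliability)]
    total = sum(raw)
    alphas = [w / total for w in raw]
    agg = [sum(a * u[k] for a, u in zip(alphas, updates))
           for k in range(len(mean))]
    return agg, alphas

updates = [[0.1, 0.2], [0.12, 0.18], [5.0, -4.0]]   # third looks like an outlier
quality = [1.0, 1.0, 1.0]
reliability = [1.0, 1.0, 1.0]
agg, alphas = aggregate(updates, quality, reliability)
assert alphas[2] < alphas[0]   # the outlier update is down-weighted
```

Even with equal quality and reliability scores, the similarity term shrinks the weight of an update far from the consensus, which is the implicit poisoning defense described above.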
The aggregated gradient g^{t+1} is then used to update the global model. The global model has the same architecture as the local models. The update process is expressed in Equation (39):
w^{t+1} = w^t - \eta\, g^{t+1} \qquad (39)
4.1.6. Local Model Update
The global gradient is broadcast to all local clients using the blockchain layer, ensuring that the privacy of each client is maintained. Then each local model for the ith edge device is updated using the broadcasted global gradient as expressed in Equation (40):
w_i^{t+1} = w_i^t - \eta\, g^{t+1} \qquad (40)
where w_i^t represents the local model parameters at epoch t, and w_i^{t+1} represents the updated local model parameters for the next epoch. Local training for the next epoch begins with the updated parameters of the global model, ensuring consistency across all local models while maintaining the privacy and security of individual health data. This iterative process of local training, secure aggregation, and model distribution allows EdgeGuard to facilitate collaborative learning across IoMT devices, enabling improved healthcare insights while maintaining the stringent privacy and security standards essential to medical applications.
4.2. Smart Contract Implementation for Access Control and Model Updates
The IoMT network, running on the EdgeGuard framework, relies on safe and verifiable smart contracts. Our implementation includes three leading types: device access control, model update verification, and secure aggregation protocols. The smart contract layer is important because it bridges the federated learning process and the blockchain security layer.
The proposed smart contract manages device registration, validates model updates, and ensures secure aggregation according to the privacy and security requirements of healthcare data. Specifically, this study targets three main aspects:
Access control: Ensures only authorized IoMT devices participate in the federated learning process.
Model update verification: Validates and records model updates in an immutable manner.
Secure aggregation: Implements privacy-preserving model aggregation using multi-party computation.
Algorithm 1 presents the comprehensive smart contract protocol that governs these interactions within EdgeGuard.
This protocol completes our description of the blockchain security layer B in Section 3.1, ensuring that the federated learning process remains robust against attacks by admitting only legitimate devices to model training. It makes use of the adaptive aggregation function in Equations (34)–(36), in which quality scores and reliability scores are used to weight each edge device.
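A minimal sketch of quality- and reliability-weighted aggregation in the spirit of Equations (34)–(36) follows. The multiplicative combination of the two scores and the toy score values are assumptions for illustration; the paper's exact functional form is defined in those equations.

```python
import numpy as np

def adaptive_aggregate(grads, quality, reliability):
    """Weight each device's gradient by quality x reliability, then normalize.

    grads       : list of per-device gradient vectors
    quality     : per-device quality scores
    reliability : per-device reliability scores
    """
    w = np.asarray(quality) * np.asarray(reliability)
    w = w / w.sum()  # normalized aggregation weights, summing to 1
    return np.sum([wi * g for wi, g in zip(w, np.asarray(grads))], axis=0)

# A low-quality, low-reliability (e.g., poisoned) update is strongly
# down-weighted relative to the two honest devices.
grads = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([50.0, -50.0])]
agg = adaptive_aggregate(grads,
                         quality=[0.9, 0.95, 0.05],
                         reliability=[0.9, 0.9, 0.1])
```

Even though the third device submits a gradient two orders of magnitude larger than the honest ones, its near-zero weight keeps the aggregate close to the honest consensus.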
The implementation guarantees the following three important properties:
Security: Through robust access control and validation mechanisms.
Privacy: Via secure multi-party computation during aggregation.
Verifiability: Through immutable blockchain records of all operations.
This smart contract architecture provides the foundation for secure and verifiable federated learning in the IoMT environment and aligns with EdgeGuard’s decentralized approach to medical resource management.
Algorithm 1 EdgeGuard smart contract protocol.
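The on-chain protocol is written in Solidity, but its register-and-authorize flow can be mocked in a few lines of Python for illustration. The class and method names below are hypothetical; only the hash-based credential check mirrors the access-control idea described above.

```python
import hashlib

class AccessControlContract:
    """In-memory mock of the device access-control contract logic.

    The real contract runs on a private Ethereum network in Solidity v0.8.0;
    this sketch only mirrors the register/authorize flow for illustration.
    """

    def __init__(self):
        self.registry = {}  # device_id -> SHA-256 hash of its credential

    def register_device(self, device_id, credential):
        # Store only a hash, so the raw credential never sits in the registry.
        self.registry[device_id] = hashlib.sha256(credential.encode()).hexdigest()

    def is_authorized(self, device_id, credential):
        # A device may join a training round only if its hashed credential
        # matches the registered one.
        h = hashlib.sha256(credential.encode()).hexdigest()
        return self.registry.get(device_id) == h

contract = AccessControlContract()
contract.register_device("iomt-wearable-01", "secret-key")
```

In the deployed system this check would gate each model-update submission, so unregistered or mis-credentialed devices are excluded before aggregation.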
4.3. Operational Design and Complexity Analysis
Algorithm 2 presents the functional design of the proposed EdgeGuard framework. The four main parts of the EdgeGuard algorithm—model update distribution, adaptive aggregation, blockchain operations, and edge device computations—determine its running time. Edge device computations, including local model training and data preparation, have time complexity O(n·I), where n is the number of data points in the local dataset and I is the number of training iterations. The blockchain layer generates and verifies secure blocks using the proof-of-work (PoW) consensus technique with time complexity O(M·2^d), where M is the number of blocks and d is the PoW difficulty level. With D denoting the number of edge devices and N denoting the overall number of model parameters, the adaptive aggregation process at the central server introduces a complexity of O(D·N).
This process aggregates updates from every participating edge device. Moreover, the quality assessment and reliability evaluation methods add a complexity of O(D·K), where K is the number of evaluation criteria. As such, the entire time complexity of the EdgeGuard approach can be estimated as follows:
O(n·I + M·2^d + D·N + D·K) (41)
This complexity analysis reflects EdgeGuard’s distributed design, balancing secure blockchain operations, local computation, and flexible global aggregation in the IoMT setting.
Algorithm 2 EdgeGuard: secure federated learning for IoMT.
5. Performance Analysis
5.1. Experimental Setup
The EdgeGuard framework was evaluated using a comprehensive simulation environment built with PyTorch v1.9.0 for federated learning implementation, integrated with Ethereum Ganache v2.5.4 for blockchain simulation. The experiments were conducted on a server equipped with 2 Intel® Xeon® Silver 4114 CPUs (Santa Clara, CA, USA) (40 cores, 2.20 GHz), 128 GB RAM, running Ubuntu 20.04 LTS, and NVIDIA Tesla V100 GPU with 32 GB memory. The federated learning environment was implemented using PyTorch DistributedDataParallel, incorporating our custom adaptive aggregation mechanism with differential privacy support through PyTorch DP. The blockchain component utilized a private Ethereum network with smart contracts written in Solidity v0.8.0, specifically modified for IoMT requirements. Web3.py v5.28.0 facilitated smart contract interactions, while NumPy v1.21.0 and Pandas v1.3.0 were used for efficient data manipulation and numerical computations. To simulate the IoMT environment, we configured three types of edge devices with varying computational capabilities, as shown in Table 2. The network environment was configured to reflect real-world conditions, with bandwidth variations from 1–10 Mbps, latency ranges of 10–100 ms, and packet loss rates of 0.1–1% in a star topology with a central aggregator.
Dataset: We utilized the MIMIC-III dataset, performing comprehensive preprocessing including temporal alignment of vital signs, missing value imputation using forward fill, feature normalization, and time series segmentation into 24-h windows. The dataset was split into training (80%), validation (10%), and test (10%) sets. Performance monitoring was conducted using Linux perf-tools v5.15.0 for resource utilization, iperf3 v3.12 for network statistics, and Intel RAPL (Running Average Power Limit) through powercap-utils v0.6.0 for energy consumption measurements.
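The preprocessing pipeline (forward-fill imputation, normalization, and fixed-length windowing) can be sketched as follows. The hourly toy series and the window length of 24 samples are illustrative stand-ins; the actual experiments segment MIMIC-III vital signs into 24-h windows at a 5-min sampling rate.

```python
import numpy as np

def forward_fill(x):
    """Replace NaNs with the most recent observed value.

    Assumes the first element is observed, matching the forward-fill
    imputation described for the MIMIC-III preprocessing.
    """
    out = x.copy()
    for i in range(1, len(out)):
        if np.isnan(out[i]):
            out[i] = out[i - 1]
    return out

def preprocess(vitals, window=24):
    """Impute, z-normalize, and segment a vital-sign series into windows."""
    filled = forward_fill(vitals)
    normed = (filled - filled.mean()) / (filled.std() + 1e-8)
    n_windows = len(normed) // window
    # Drop any trailing partial window, then reshape to (windows, length).
    return normed[: n_windows * window].reshape(n_windows, window)

# 48 hourly heart-rate samples with gaps (toy data, not MIMIC-III values).
hr = np.array([72.0, np.nan, 75.0, np.nan] * 12)
windows = preprocess(hr, window=24)
```

After this step each window is a fixed-size, gap-free, normalized segment, which is the form the local training loops consume.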
5.2. Baseline Implementation
For comparative analysis, we implemented FedAvg as our baseline following McMahan et al.’s seminal work [22]. The FedAvg implementation performs standard federated averaging without the security enhancements of EdgeGuard. In each communication round, the server selects a fraction C of available clients and broadcasts the current global model. Each selected client trains the model on their local data for E epochs and returns the model updates. The global model is then updated using the following:
w^(t+1) = Σ_k (n_k / n) · w_k^(t+1) (42)
where n_k represents the size of the local dataset at client k, n denotes the total dataset size, and w_k^t represents the local model parameters of client k at round t. This vanilla implementation differs from EdgeGuard in several key aspects:
No quality-based weighting of client updates.
No reliability assessment of participating devices.
No blockchain-based security mechanisms.
No differential privacy protections.
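For reference, the FedAvg aggregation step of Equation (42) reduces to a data-size-weighted average of client parameters, as in this minimal sketch (the toy parameter vectors and dataset sizes are illustrative):

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """FedAvg: weight each client's parameters by its share of the data."""
    n = sum(client_sizes)  # total dataset size across selected clients
    return sum((nk / n) * np.asarray(wk)
               for nk, wk in zip(client_sizes, client_params))

# Client 2 holds 3x the data of client 1, so it dominates the average.
params = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
global_w = fedavg(params, client_sizes=[100, 300])  # weights 0.25 and 0.75
```

Note the contrast with EdgeGuard: the weights here depend only on dataset size, so a large malicious client is weighted heavily regardless of update quality.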
Both EdgeGuard and the FedAvg baseline were implemented using the same optimization framework to ensure a fair comparison. The optimization configuration incorporates SGD with momentum as the base optimizer and includes additional enhancements such as cosine annealing for learning rate scheduling and gradient clipping to improve training stability. Table 3 provides a comprehensive list of all parameters used in our experiments, including the detailed optimization configuration that was previously omitted.
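A self-contained sketch of this shared optimization configuration (SGD with momentum, cosine-annealed learning rate, gradient clipping) on a toy quadratic loss follows. The hyperparameters mirror Table 3 where stated (momentum 0.9, base learning rate 0.01, clipping threshold 1.0); the toy loss and step count are assumptions for illustration.

```python
import math
import numpy as np

def cosine_lr(step, total_steps, base_lr=0.01):
    """Cosine-annealed learning rate, decaying from base_lr toward 0."""
    return 0.5 * base_lr * (1 + math.cos(math.pi * step / total_steps))

def clip_grad(g, max_norm=1.0):
    """Scale the gradient down if its L2 norm exceeds max_norm."""
    norm = np.linalg.norm(g)
    return g * (max_norm / norm) if norm > max_norm else g

# SGD with momentum on the toy quadratic loss f(w) = ||w||^2 / 2,
# whose gradient is simply w.
w = np.array([5.0, -3.0])
v = np.zeros(2)
momentum, steps = 0.9, 200
for t in range(steps):
    grad = clip_grad(w)                  # df/dw = w, clipped to norm 1.0
    v = momentum * v + grad              # momentum buffer
    w = w - cosine_lr(t, steps) * v      # annealed update
```

Clipping bounds the influence of any single large gradient (useful under noisy medical data), while the cosine schedule shrinks the step size so late rounds fine-tune rather than overshoot.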
5.3. Evaluation and Simulation Results
We tested EdgeGuard on the MIMIC-III dataset in a simulated IoMT environment. Our evaluation focused on model accuracy, communication efficiency, security robustness, and resource utilization in four key aspects.
The experimentation environment maintains consistent conditions for both implementations, utilizing the MIMIC-III dataset with identical preprocessing steps and evaluation metrics. This setup ensures a fair comparison while highlighting EdgeGuard’s enhanced security and efficiency features.
5.3.1. Model Accuracy
Figure 2 presents how model accuracy converges over communication rounds for EdgeGuard compared to standard federated learning (FedAvg) and a centralized approach.
EdgeGuard achieved an average test accuracy of 94.3%, higher than FedAvg’s 91.7% and close to the 95.5% accuracy achieved by the centralized approach. The adaptive aggregation mechanism contributed to faster convergence toward this higher final accuracy in the federated setting.
5.3.2. Communication Efficiency
Figure 3 shows the total amount of data transferred during the training process for different numbers of edge devices.
EdgeGuard outperformed FedAvg in terms of communication efficiency, cutting the overall amount of data sent by as much as 30%, especially as the number of edge devices rose.
5.3.3. Security Robustness
Figure 4 illustrates EdgeGuard’s performance under varying percentages of malicious nodes for different types of attacks.
EdgeGuard maintained an accuracy of over 90% even with up to 40% of the nodes being malicious, demonstrating strong resilience against data poisoning, model poisoning, and Sybil attacks.
5.3.4. Resource Utilization
Table 4 presents the average resource utilization per edge device type during training.
EdgeGuard showed effective resource management across a variety of device types. Compared to standard federated learning, the integration of blockchain activities resulted in an average 15% increase in energy usage; however, this was offset by the reduced communication overhead. In summary, EdgeGuard outperformed conventional federated learning in model accuracy, communication efficiency, and security robustness while exhibiting moderate increases in resource usage. The system’s adaptive methods proved effective at preserving performance in adversarial IoMT scenarios while optimizing resource utilization across heterogeneous edge devices.
6. Conclusions
In this paper, we present EdgeGuard, a unique architecture that improves federated learning in IoMT contexts in terms of security, efficiency, and speed. Through EdgeGuard’s integration of blockchain technology and adaptive federated learning, data privacy and integrity are guaranteed across distributed edge devices. Our analysis reveals that EdgeGuard resists up to 40.05% of malicious nodes while achieving a model accuracy of 94.34%, surpassing typical techniques by 2.68%. It also optimizes resource utilization in IoMT scenarios by reducing communication overhead by 30.67%. Coupled with proof of work consensus and differential privacy approaches, the framework’s adaptive aggregation mechanism provides a robust defense against various threats and ensures the protection of patient data. Thus, EdgeGuard offers a complete solution for machine learning in decentralized healthcare ecosystems that is safe, effective, and privacy-preserving, greatly advancing the development of reliable AI-driven healthcare applications.
S.P.: Conceptualization, methodology, formal analysis, validation, writing original draft; J.L.: Formal analysis, validation, visualization, review and editing. All authors have read and agreed to the published version of the manuscript.
The data presented in this study are available on request from the corresponding author due to privacy and ethical restrictions.
The authors declare no conflicts of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Comparison of related works in blockchain-secured federated learning for healthcare.
Work | Privacy | Security | Resource | Data Quality | Device | Healthcare |
---|---|---|---|---|---|---|
[ | DP | Encryption | No | No | No | Yes |
[ | DP + MPC | No | No | No | No | Yes |
[ | DP + HE | Blockchain | Energy-aware | No | No | Yes |
[ | FL | Blockchain | 2Ts-DRL | No | No | No |
[ | DP + DFT | No | Communication | No | No | Yes |
EdgeGuard (Ours) | DP + MPC | Lightweight Blockchain | Resource-aware | Yes | Yes | Yes |
DP: differential privacy, MPC: multi-party computation, HE: homomorphic encryption, DFT: discrete Fourier transform, 2Ts-DRL: two-timescale deep reinforcement learning, FL: Federated Learning.
Edge device configurations.
Device | CPU Cores | MIPS | RAM (GB) | Storage (GB) | Power (W) |
---|---|---|---|---|---|
E1 | 2 | 2660 | 4 | 32 | 5 |
E2 | 4 | 3067 | 8 | 64 | 8 |
E3 | 8 | 3467 | 16 | 128 | 12 |
Simulation parameters.
Parameter | Value |
---|---|
System Configuration | |
Number of Edge Devices | 50–500 |
Number of IoMT Sensors | 100–2000 |
CPU Cores per Server | 40 |
RAM per Server | 128 GB |
GPU Memory | 32 GB |
Federated Learning Parameters | |
Train/Test Split | 80:20 |
Local Epochs | 10–50 |
Batch Size | 64 |
Communication Rounds | 1–300 |
Client Selection Rate | 0.8 |
Malicious Devices | {10%, 20%,..., 50%} |
Optimization Parameters | |
Base Learning Rate | 0.01 |
Optimizer | SGD with Momentum ( |
Learning Rate Scheduler | Cosine Annealing |
Weight Decay | |
Gradient Clipping | 1.0 |
Early Stopping Patience | 10 epochs |
Momentum | 0.9 |
Blockchain Parameters | |
Block Generation Rate ( | {0.1, 0.3, 0.5, 0.7} |
Consensus Algorithm | Proof of Work |
Gas Limit | 6,721,975 |
Block Time | 15 s |
Smart Contract Version | Solidity v0.8.0 |
Network Parameters | |
Bandwidth Range | 1–10 Mbps |
Latency Range | 10–100 ms |
Packet Loss Rate | 0.1–1% |
Network Topology | Star |
Dataset Configuration | |
Training Set | 80% |
Validation Set | 10% |
Test Set | 10% |
Time Window | 24 h |
Sampling Rate | 5 min |
Security Parameters | |
Differential Privacy | 0.1–1.0 |
Privacy Budget | |
Encryption Method | AES-256 |
Key Length | 2048 bits |
Resource utilization.
Device | CPU (%) | RAM (GB) | Network (MB/s) | Energy (Wh) |
---|---|---|---|---|
E1 | 78.5 | 3.2 | 0.8 | 12.6 |
E2 | 65.3 | 6.1 | 1.2 | 20.1 |
E3 | 52.1 | 11.8 | 1.5 | 28.5 |
References
1. Al-Turjman, F.; Nawaz, M.H.; Ulusar, U.D. Intelligence in the Internet of Medical Things era: A systematic review of current and future trends. Comput. Commun.; 2020; 150, pp. 644-660. [DOI: https://dx.doi.org/10.1016/j.comcom.2019.12.030]
2. Huang, C.; Wang, J.; Wang, S.; Zhang, Y. Internet of medical things: A systematic review. Neurocomputing; 2023; 557, 126719. [DOI: https://dx.doi.org/10.1016/j.neucom.2023.126719]
3. Chang, Y.; Fang, C.; Sun, W. A Blockchain-Based Federated Learning Method for Smart Healthcare. Comput. Intell. Neurosci.; 2021; 2021, 4376418. [DOI: https://dx.doi.org/10.1155/2021/4376418] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34868289]
4. Raul, S.; Das, S.; Murty, C.S.; Devi, K. A Review on Intelligent Health Care System Using Learning Methods. Recent Developments in Electronics and Communication Systems; IOS Press: Amsterdam, The Netherlands, 2023; pp. 154-159.
5. Mohammed, M.A.; Lakhan, A.; Abdulkareem, K.H.; Zebari, D.A.; Nedoma, J.; Martinek, R.; Kadry, S.; Garcia-Zapirain, B. Energy-efficient distributed federated learning offloading and scheduling healthcare system in blockchain based networks. Internet Things; 2023; 22, 100815. [DOI: https://dx.doi.org/10.1016/j.iot.2023.100815]
6. Duan, Q.; Huang, J.; Hu, S.; Deng, R.; Lu, Z.; Yu, S. Combining Federated Learning and Edge Computing Toward Ubiquitous Intelligence in 6G Network: Challenges, Recent Advances, and Future Directions. IEEE Commun. Surv. Tutor.; 2023; 25, pp. 2892-2950. [DOI: https://dx.doi.org/10.1109/COMST.2023.3316615]
7. Myrzashova, R.; Alsamhi, S.H.; Shvetsov, A.V.; Hawbani, A.; Wei, X. Blockchain meets federated learning in healthcare: A systematic review with challenges and opportunities. IEEE Internet Things J.; 2023; 10, pp. 14418-14437. [DOI: https://dx.doi.org/10.1109/JIOT.2023.3263598]
8. Khan, M.F.; AbaOud, M. Blockchain-Integrated Security for real-time patient monitoring in the Internet of Medical Things using Federated Learning. IEEE Access; 2023; 11, pp. 117826-117850. [DOI: https://dx.doi.org/10.1109/ACCESS.2023.3326155]
9. Sai, S.; Chamola, V.; Choo, K.K.R.; Sikdar, B.; Rodrigues, J.J.P.C. Confluence of Blockchain and Artificial Intelligence Technologies for Secure and Scalable Healthcare Solutions: A Review. IEEE Internet Things J.; 2023; 10, pp. 5873-5897. [DOI: https://dx.doi.org/10.1109/JIOT.2022.3232793]
10. Khubrani, M.M. Artificial Rabbits Optimizer with Deep Learning Model for Blockchain-Assisted Secure Smart Healthcare System. Int. J. Adv. Comput. Sci. Appl.; 2023; 14, [DOI: https://dx.doi.org/10.14569/IJACSA.2023.0140998]
11. Manzoor, H.U.; Shabbir, A.; Chen, A.; Flynn, D.; Zoha, A. A Survey of Security Strategies in Federated Learning: Defending Models, Data, and Privacy. Future Internet; 2024; 16, 374. [DOI: https://dx.doi.org/10.3390/fi16100374]
12. Patni, S.; Lee, J. Explainable AI Empowered Resource Management for Enhanced Communication Efficiency in Hierarchical Federated Learning. Comput. Electr. Eng.; 2024; 117, 109260. [DOI: https://dx.doi.org/10.1016/j.compeleceng.2024.109260]
13. Otoum, Y.; Hu, C.; Said, E.H.; Nayak, A. Enhancing Heart Disease Prediction with Federated Learning and Blockchain Integration. Future Internet; 2024; 16, 372. [DOI: https://dx.doi.org/10.3390/fi16100372]
14. Solat, F.; Patni, S.; Lim, S.; Lee, J. Heterogeneous Privacy Level-Based Client Selection for Hybrid Federated and Centralized Learning in Mobile Edge Computing. IEEE Access; 2024; 12, pp. 108556-108572. [DOI: https://dx.doi.org/10.1109/ACCESS.2024.3436009]
15. Abaoud, M.; Almuqrin, M.A.; Khan, M.F. Advancing Federated Learning Through Novel Mechanism for Privacy Preservation in Healthcare Applications. IEEE Access; 2023; 11, pp. 83562-83579. [DOI: https://dx.doi.org/10.1109/ACCESS.2023.3301162]
16. Singh, M.B.; Singh, H.; Pratap, A. Energy-Efficient and Privacy-Preserving Blockchain Based Federated Learning for Smart Healthcare System. IEEE Trans. Serv. Comput.; 2024; 17, pp. 2392-2403. [DOI: https://dx.doi.org/10.1109/TSC.2023.3332955]
17. Yu, S.; Chen, X.; Zhou, Z.; Gong, X.; Wu, D. When Deep Reinforcement Learning Meets Federated Learning: Intelligent Multitimescale Resource Management for Multiaccess Edge Computing in 5G Ultradense Network. IEEE Internet Things J.; 2021; 8, pp. 2238-2251. [DOI: https://dx.doi.org/10.1109/JIOT.2020.3026589]
18. Bashir, A.K.; Victor, N.; Bhattacharya, S.; Huynh-The, T.; Chengoden, R.; Yenduri, G.; Maddikunta, P.K.R.; Pham, Q.V.; Gadekallu, T.R.; Liyanage, M. Federated Learning for the Healthcare Metaverse: Concepts, Applications, Challenges, and Future Directions. IEEE Internet Things J.; 2023; 10, pp. 21873-21891. [DOI: https://dx.doi.org/10.1109/JIOT.2023.3304790]
19. Hidayat, M.A.; Nakamura, Y.; Arakawa, Y. Enhancing Efficiency in Privacy-Preserving Federated Learning for Healthcare: Adaptive Gaussian Clipping with DFT Aggregator. IEEE Access; 2024; 12, pp. 88445-88457. [DOI: https://dx.doi.org/10.1109/ACCESS.2024.3418016]
20. Rajagopal, S.M.; Supriya, M.; Buyya, R. Blockchain Integrated Federated Learning in Edge/Fog/Cloud Systems for IoT-Based Healthcare Applications: A Survey. Federated Learning; Taylor & Francis Group: Oxfordshire, UK, 2024; pp. 237-269. [DOI: https://dx.doi.org/10.1201/9781003497196]
21. Rahman, M.A.; Hossain, M.S.; Islam, M.S.; Alrajeh, N.A.; Muhammad, G. Secure and provenance enhanced internet of health things framework: A blockchain managed federated learning approach. IEEE Access; 2020; 8, pp. 205071-205087. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3037474] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34192116]
22. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. Proceedings of the Artificial Intelligence and Statistics, PMLR; Ft. Lauderdale, FL, USA, 20–22 April 2017; pp. 1273-1282.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
The development of medical data and resources has become essential for enhancing patient outcomes and operational efficiency in an age when digital innovation in healthcare is becoming more important. The rapid growth of the Internet of Medical Things (IoMT) is changing healthcare data management, but it also brings serious issues like data privacy, malicious attacks, and service quality. In this study, we present EdgeGuard, a novel decentralized architecture that combines blockchain technology, federated learning, and edge computing to address those challenges and coordinate medical resources across IoMT networks. EdgeGuard uses a privacy-preserving federated learning approach to keep sensitive medical data local and to promote collaborative model training, solving essential issues. To prevent data modification and unauthorized access, it uses a blockchain-based access control and integrity verification system. EdgeGuard uses edge computing to improve system scalability and efficiency by offloading computational tasks from IoMT devices with limited resources. We have made several technological advances, including a lightweight blockchain consensus mechanism designed for IoMT networks, an adaptive edge resource allocation method based on reinforcement learning, and a federated learning algorithm optimized for medical data with differential privacy. We also create an access control system based on smart contracts and a secure multi-party computing protocol for model updates. EdgeGuard outperforms existing solutions in terms of computational performance, data value, and privacy protection across a wide range of real-world medical datasets. This work enhances safe, effective, and privacy-preserving medical data management in IoMT ecosystems while maintaining outstanding standards for data security and resource efficiency, enabling large-scale collaborative learning in healthcare.