ABSTRACT
Smart advanced metering infrastructure and edge devices offer promising solutions for digitalising distributed energy systems. Energy disaggregation of household load consumption provides a better understanding of consumers' appliance-level usage patterns. Machine learning approaches enhance the power system's efficiency, but this is contingent upon sufficient training samples for efficient and accurate prediction tasks. In a centralised setup, transferring such a high volume of information to the cloud server creates a communication bottleneck. Although high-computing edge devices seek to address this problem, data scarcity and heterogeneity among clients remain challenges to be addressed. Federated learning (FL) offers a compelling solution in such a scenario by training the ML model at edge devices and aggregating the clients' updates at a cloud server. However, FL still faces significant security issues, including potential eavesdropping by a malicious actor intent on stealing clients' information while they communicate with an honest-but-curious server. This study aims to secure the sensitive information of energy users participating in a nonintrusive load monitoring (NILM) program by integrating differential privacy with a personalised federated learning approach. The Fisher information method is adapted to build the global model from common features, while personalised updates for client-specific features are never shared with the server. Similarly, the authors apply adaptive differential privacy only to the shared local updates (DP-PFL) communicated to the server. Experimental results on the Pecan Street and REFIT datasets show that DP-PFL achieves more favourable performance on both the energy prediction and status classification tasks compared with other state-of-the-art DP approaches in federated NILM.
Introduction
Energy management of the residential sector plays a significant role in reducing the global carbon emission rate, as the sector accounts for 30%–40% of total end-use consumption [1]. Robust control, monitoring and prediction programs can enhance the energy efficiency of the residential sector, with the capacity to reduce overall power usage by 5%–20% [2]. Smart grid applications with advanced metering infrastructure and edge devices offer promising solutions for the digitalisation of energy systems. In home energy systems, ensuring user comfort is a significant challenge and requires installing numerous electronic devices at the household level. Nevertheless, monitoring the energy consumption of each household appliance with dedicated digitalised smart meters is a costly, invasive and challenging process. To empower a robust energy system through effective demand-side management, nonintrusive load monitoring (NILM), also known as energy disaggregation, enables more accurate load prediction programs by disaggregating the net demand into appliance-level information [3], as shown in Figure 1. Deep learning approaches are recent methods for disaggregating household energy consumption into appliance-level energy patterns [4]. However, such neural NILM approaches require cloud servers to train the model on many variable energy data samples, which incurs high computational costs. Model pruning and compression techniques alleviate the training and testing burden and facilitate the deployment of DNN prediction models at the edge. Such edge computing approaches leverage local training and keep consumer data on the device, but they forgo insights from other users, and data scarcity makes the locally trained model less accurate. In such a scenario, collaborative machine learning offers a better solution: training the ML model at the edge client and sharing only the updates with a cloud server until the global model converges.
[IMAGE OMITTED. SEE PDF]
In a centralised energy network, a NILM application needs to transmit smart meter data, along with the energy profiles of home appliances, to a remote server. The remote server uses the data to train a machine learning model, which raises privacy concerns. An adversary could extract detailed information about household energy usage habits, daily activities at home, temperature preferences, appliance usage patterns and the number of occupants at different times [5]. The lifestyle of a household can vary depending on various factors, including financial, social, environmental and even psychological influences. These factors also affect energy usage patterns and the choice of appliances in the household. In this context, federated learning addresses privacy concerns and performs better than purely local model approaches by aggregating peer clients' training weights at a central server.
Despite addressing this essential privacy concern, FL is still prone to privacy leakage by a passive adversary during the parameter exchange between the clients and the server. Several techniques, such as differential privacy, homomorphic encryption and secure multiparty computation, have been adopted to mitigate the privacy leakage issue [6–8]. Similarly, malicious clients in the federation can extract private information or mount poisoning, model inversion, membership inference and backdoor attacks by sending manipulated updates to the central server during training [9]. To secure client private data in Google's Gboard application, McMahan et al. introduced a differential privacy mechanism into federated training [10]. Similarly, to protect client data from inference attacks, Shi et al. employed a sharpness-aware minimisation approach in the SGD optimisation of federated learning [11]. However, the existing mitigation and defence mechanisms designed to address the privacy and security concerns of FL must be examined in the context of NILM. Existing federated NILM approaches have not demonstrated the effectiveness of differential privacy in a heterogeneous home energy network comprising diverse household appliances and a heterogeneous feature distribution. Personalised federated learning (PFL) appears effective in preserving privacy, and combining PFL with differential privacy may further enhance client privacy in federated NILM.
To address these issues, using federated learning with personalisation may provide a more effective solution. This personalised approach to federated learning can identify common features across all clients and client-specific features based on individual appliance usage preferences. This study aims to gain a deeper understanding of the potential applications of federated learning in energy disaggregation, with a particular focus on the aspects of personalisation and privacy leakage. The objectives of the current study are as follows:
- To preserve privacy via personalised federated learning, the Fisher information matrix approach is employed to determine the clients' shared and personalised features.
- To provide a privacy budget and secure federated training, a differential privacy mechanism is employed in PFL that can prevent potential privacy breaches in federated energy disaggregation.
- The efficacy of adaptive differential privacy and of different deep learning models is compared in a personalised federated setup by clipping and adding noise to the shared updates during the training process.
- A fair comparison is carried out with other privacy-preserving federated learning algorithms, namely adaptive differential privacy without personalisation (DP-FedAvg) and gradient perturbation with a sharpness-aware minimisation optimiser (DP-FedSAM).
Literature Review
Secure cyber-physical systems, such as smart grids, aim to protect system information and data, which is of paramount importance for maintaining confidentiality, integrity and availability. In such a cyber-physical system, smart meters collect private data from consumers and share it with trusted utilities, thereby facilitating the smooth operation of the grid. If the utility is untrustworthy, the information may be shared with third-party exploiters, such as advisors or insurance companies. The dissemination of client energy data to adversaries could reveal private information about the user, including lifestyle, appliance preferences, the condition of electric devices, incentive mechanisms, the number of occupants and more. Energy disaggregation techniques such as NILM decompose smart meter data into appliance-level signatures, thereby enabling high-level inferences from the data. Edge computing devices facilitate the prediction of household energy consumption, yet utilities require information on net energy usage for generation planning. Likewise, the data of a single energy user are insufficient to capture the consumption patterns of a household's net energy demand and of the appliances used. Privacy-preserving distributed learning enables collaborative learning of a global model at client edges, addressing both client privacy concerns and model accuracy. Recently, deep learning applications in NILM for appliance-level energy disaggregation have opened avenues for researchers to address energy consumers' privacy concerns through federated learning.
Li et al. proposed DFNILM to analyse the application of transfer learning in distributed NILM, transferring a trained CNN-LSTM model across various dataset domains [12]. A comparative analysis of data-driven deep learning models was conducted across three distinct datasets, recognising that datasets from geographically disparate countries exhibit notable differences. The study concluded that the transferability of trained models to other domain samples could be enhanced by acquiring higher quality training data. Shi et al. investigated user data leakage concerns in centralised NILM machine learning and proposed federated learning as a potential solution [13]. A convolutional neural network (CNN) model was adopted to address the sequence-to-point disaggregation problem and classify events in appliance signatures on the UK-DALE dataset, thereby addressing the model vulnerability of the centralised approach. Wang et al. proposed a multi-task learning approach to cluster appliances with similar features for energy decomposition, highlighting the potential privacy leakage risks of deep neural networks [14]. They utilised the U-Net architecture to perform multi-target quantile regression and multi-label classification on the UK-DALE dataset, measuring performance on the respective loss metrics. Wang et al. proposed the Fed-NILM framework to evaluate the generalisation capability of the federated approach against both local and centralised NILM configurations on two measured residential and industrial datasets [15]. Fed-NILM implemented a seq2point learning model, demonstrating improved performance relative to the local model with a fast convergence rate and scalability across heterogeneous networks.
Kaspour et al. proposed an attention-based aggregation mechanism for federated NILM (FedAtt) to extract appliance consumption patterns based on consumers' behavioural differences [16]. FedAtt examines short sequence-to-point (S2P) and variational auto-encoder (VAE) models over the UK-DALE and REFIT datasets and suggests differential privacy to add noise during the federated training process to address inversion attacks at the server. The study evaluated FedAvg, centralised learning and FedAtt on precision, recall, F1 score and MAE metrics. FedAtt with the VAE model outperformed the others with an F1 score of 0.89, but the study did not provide any insights into the DP adaptation mechanism, noise parameter selection or relevant results. Zhou et al. proposed a federated load forecasting approach for households, decomposing the net energy demand into appliance-level predictions and aggregating the estimated individual consumptions to forecast the home's total energy usage [17]. They used CNN-LSTM and BiLSTM models on nine kinds of appliances from the UK-DALE dataset to forecast 250 min of load data for five households. The CNN-LSTM model was employed to decompose the energy to appliance level, and the aggregated energy was subsequently forecast with the hybrid BiLSTM model. The study evaluated different hybrid deep learning models against the BiLSTM model on prediction metrics, yielding 0.08141 MAE and 0.16739 RMSE, respectively.
The home energy management system (HEMS) of a smart home is responsible for data collection and sample labelling of high-frequency input signals from the respective intelligent devices. Incorrect appliance labelling of any input signal can disturb the monitoring and visualisation of home energy systems. Lin et al. investigated the data cleaning process to improve the quality of the data fed into a vertical federated NILM system [18]. They set up an experimental testbed to analyse data pollution with different hardware devices, communication protocols and electrical appliances. The SplitNN model was evaluated as a global model on the classification task of three experimental HEMS clients, resulting in an F1 score of 0.94 for training and 0.92 for testing. Giuseppi et al. introduced a decentralised network for federated NILM load decomposition to address concerns about the trust and security of the central server [19]. Their Decentralised Federated Learning (DecFedAvg) algorithm exhibits a pattern of mean absolute error (MAE) loss comparable to FedAvg for washing machine energy prediction. A comparison between FedAvg and DecFedAvg over five appliances from the REFIT dataset on the seq2point learning model gives average MAE errors of 0.057 and 0.054, respectively. The authors also suggested a mutual consensus-based approach for future research to tailor more sophisticated decentralised networks.
Zhang et al. introduced a novel approach, termed 'FedNILM', for training personalised models for similar clients in a federated multi-task learning framework [20]. They employed sequence-to-point (Seq2Point) modelling techniques to facilitate multi-tasking and decomposed the model into shared and task-specific layers. The convolutional layers are common to all clients and extract transferable load features, including the status of the appliances, whereas the fully connected layers learn detailed features of the specific appliances. They analysed the efficacy of personalised federated learning through transfer learning on five appliances from the REFIT and REDD datasets, using MAE, SAE and F1 score as performance metrics. Results showed that transfer learning improves the personalisation task for all devices except the washing machine and dishwasher, which exhibit a 27.27% reduction in F1 score and a 143.52% degradation in SAE, respectively. Chang et al. proposed a gradient boosting machine method, Fed-GBM, to decompose smart meter data and predict appliance signatures in a collaborative learning setup [21]. GBM was implemented across the client federation based on a two-stage voting and node-level parallelism approach to address the issue of co-modelling in NILM. A comparative analysis with CNN and GAN on three open-source datasets (UK-DALE, REDD and REFIT) demonstrated that Fed-GBM achieved accuracy comparable to the two baseline models on the four appliances with the least communication and computation cost.
Federated learning utilises the differential privacy mechanism to ensure robust privacy protection for clients in the event of malicious attacks on the updates. One of the earliest approaches in this field is DP-FedAvg, which employs the Gaussian mechanism to safeguard client data while training a language model for next-word prediction [10]. Shi et al. proposed DP-FedSAM to minimise the impact of random noise and clipping by using a gradient perturbation optimiser on the updates, thereby improving performance [11]. Sharpness-aware minimisation (SAM) is a min-max SGD optimisation approach that addresses the issue of overly parameterised models and enhances their generalisation ability [22]. DP-FedSAM utilises a SAM optimiser to counter the impact of differential privacy (DP) noise, which can result in over-parameterisation of client models in federated learning. The objective is to reduce the clipping norm to address inconsistency among client updates and to achieve a flatter loss landscape, thereby facilitating better generalisation of the global model. Similarly, Ali et al. utilised an adaptive DP-FedAvg, which employs a noise multiplier to adaptively perturb the noise and clipping during federated training for energy prediction [23]. Chen et al. assessed the significance of client data features by computing the diagonal of the Fisher information matrix as an approximation [24]. This study applies a similar approach with dynamic personalisation and differential privacy to protect household clients from gradient leakage. The DP-FedAvg method applies differential privacy to all the local models aggregated at the server. Similarly, DP-FedSAM introduces an optimiser that also perturbs all the gradient updates with differential privacy. To mitigate the effect of DP on the aggregated global model, we apply DP only to the shared parameters of the client updates, instead of all the weights. To assess the efficacy of personalised training in federated differential privacy, we compare the Fisher information-based DP mechanism with DP-FedAvg and DP-FedSAM in the NILM energy disaggregation application.
Similarly, several studies have been carried out to preserve the privacy of NILM clients using federated learning with various deep learning models [2, 13, 15, 20]. We investigate the performance of such federated NILM deep learning models, namely GRU, LSTM, CNN-based sequence-to-point and hybrid CNN-LSTM models, under differential privacy and client personalisation using the Fisher information matrix (FIM). Previous studies in federated NILM only compared local, centralised and federated setups; no study has analysed the impact of differential privacy and client personalisation to guarantee more secure learning. The list of symbols used in this study is given in Table 1.
TABLE 1 List of symbols used in the study.
| Symbol | Description |
| $P(t)$ | Power consumption measured by the smart meter |
| $p_i(t)$ | Power consumption of appliance $i$ |
| $\varepsilon(t)$ | Measurement error at the smart meter |
| $\mathcal{M}$ | Deep learning model under Gaussian DP |
| $w_g$ | Global model weights |
| $K$ | Number of clients in federated learning |
| $D_k$ | Data samples of client $k$ participating in training |
| $\sigma$ | Noise multiplier to control privacy |
| $w_s^k, w_p^k$ | Shared and personalised model weights of client $k$ |
| $\ell_k(\theta)$ | Fisher information log-likelihood function for client $k$ |
| $F_k$ | Fisher information matrix for client $k$ |
System Design
Nonintrusive Load Monitoring
NILM is a technique for identifying the energy consumption of individual appliances from aggregated data, such as the energy consumption data from a smart meter. The approach requires a single point of measurement, such as a digitalised meter, and no additional measuring instruments need to be installed in the home, as indicated by its name, 'nonintrusive'. The following equation defines the aggregated power consumption of a household at time $t$.
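The equation body is not reproduced in this version; a standard form of the NILM aggregation, consistent with the surrounding text and the symbols in Table 1, expresses the smart meter reading as the sum of the individual appliance loads plus a measurement error:

$$P(t) = \sum_{i=1}^{N} p_i(t) + \varepsilon(t),$$

where $P(t)$ is the aggregate reading, $p_i(t)$ is the consumption of appliance $i$ out of $N$ appliances, and $\varepsilon(t)$ is the measurement error at time $t$.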
Federated Learning
Instead of bringing all the data together in one place (a cloud server), federated learning (FL) enables the training of machine learning models on individual devices. This approach leverages the clients' computing resources to train a global model, achieving higher accuracy than a purely local edge model. Federated averaging (FedAvg) is a common method for combining the updates from all the devices participating in the training process [25]. In FedAvg, a central server averages the clients' updates into the global model, improving generalisation; it then sends the global model back to the devices so they can continue training on their own data until it converges. For a given loss function of the respective task, the objective of FL can be defined as follows:
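The objective itself is omitted here; a standard FedAvg formulation consistent with the description above, where $F_k$ is the local loss of client $k$ over its dataset $D_k$, $\ell$ is the task loss and $n = \sum_{k=1}^{K} |D_k|$, would be:

$$\min_{w_g} F(w_g) = \sum_{k=1}^{K} \frac{|D_k|}{n} F_k(w_g), \qquad F_k(w_g) = \frac{1}{|D_k|} \sum_{(x, y) \in D_k} \ell(w_g; x, y).$$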
Federated Differential Privacy
The objective behind a man-in-the-middle or poisoning attack on a federated NILM-based system is to gain financial advantages or avoid regulatory compliance. An attacker can manipulate the model to give users false information about their appliance health, leading them to purchase new appliances prematurely. Similarly, an attacker may manipulate data regarding the appliance or net consumption to encourage clients to consume more energy during peak hours and miss out on incentives. A promising defence against such attacks in federated learning (FL) is differential privacy (DP). In the Gaussian noise mechanism for DP in deep learning [26], the noise magnitude is calibrated to the sensitivity, that is, the norm of the difference between adjacent inputs $D$ and $D'$, given as follows:
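The mechanism equation is not reproduced here; the standard Gaussian mechanism of [26], for a function $f$ (for example, a gradient) with $\ell_2$-sensitivity $S_f = \max_{D, D'} \lVert f(D) - f(D') \rVert_2$ over adjacent inputs $D$ and $D'$, is:

$$\mathcal{M}(D) = f(D) + \mathcal{N}\!\left(0, \sigma^2 S_f^2 I\right),$$

which satisfies $(\epsilon, \delta)$-DP when the noise multiplier obeys $\sigma \geq \sqrt{2 \ln(1.25/\delta)} / \epsilon$.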
Personalised Federated Learning
To address the heterogeneity across clients' datasets in federated learning, the single federated model can be disintegrated into two parts. In PFL, the parameter vector of the model is divided into a client-specific local part $w_p^k$ and a shared global part $w_s$. Consider $K$ clients, each with a private dataset $D_k$, where $|D_k|$ represents the size of the dataset of client $k$. The parameter vector is now split into two parts for each client, denoted as $(w_s^k, w_p^k)$, using personalisation techniques. The above optimisation problem is then transformed into the PFL objective function.
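A hedged reconstruction of the omitted PFL objective, in which the shared part $w_s$ is learnt jointly across clients while each client keeps its personalised part $w_p^k$ local, is:

$$\min_{w_s,\; \{w_p^k\}_{k=1}^{K}} \; \sum_{k=1}^{K} \frac{|D_k|}{n} F_k\!\left(w_s, w_p^k\right).$$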
- Local Update: In each round, client $k$ in the federation collects the latest global shared weights $w_s$ while retaining its local parameters $w_p^k$ from the earlier round to initialise local training with the model parameters $(w_s, w_p^k)$. After locally training for the local epochs on the respective client data, it obtains new model parameters $(w_s^k, w_p^k)$. It then sends its local update, obtained as the difference between the new shared parameters $w_s^k$ and the received global parameters $w_s$. A server-side sketch of the subsequent aggregation is given after this list.
- Global Update: All the clients participating in the federated training send their local updates to the central server for collaborative learning. Once the server receives all the local updates, it performs the FedAvg averaging algorithm to update the shared parameters with the dataset-size-weighted mean of the clients' updates. The server then sends the updated global model back to the clients for the next round, and the process continues until convergence.
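As a minimal sketch of the global update step described above, assuming PyTorch tensors and hypothetical helper names (not the paper's exact implementation), the server-side aggregation of the shared parameters might look like the following; the personalised parameters never leave the clients.

```python
import torch

def fedavg_shared(global_shared, client_updates, client_sizes):
    """Weighted FedAvg over the *shared* parameter deltas only.

    global_shared:  dict name -> tensor, current global shared weights
    client_updates: list of dicts name -> tensor, per-client shared deltas
    client_sizes:   list of ints, |D_k| for each participating client
    """
    total = float(sum(client_sizes))
    new_shared = {name: p.clone() for name, p in global_shared.items()}
    for update, n_k in zip(client_updates, client_sizes):
        for name, delta in update.items():
            # Each client contributes in proportion to its dataset size.
            new_shared[name] += (n_k / total) * delta
    return new_shared
```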
Methodology
DP for NILM Clients
In PFL, it is essential to develop a differential privacy mechanism to prevent the potential risk of privacy leakage due to malicious attacks. User-level differential privacy (DP) provides a privacy budget under which individual NILM clients are indistinguishable, thereby preventing the extraction of their features. To safeguard the shared updates from the risks of information leakage, user-level DP is achieved by applying clipping and noise addition. The clipping operation controls the maximum contribution of a single NILM client to the energy disaggregation model update, ensuring that privacy is preserved even for outliers. Similarly, adding noise drawn from a Gaussian distribution makes it challenging for an adversary to infer specific information about the NILM appliance signatures. The combined effect of clipping and noise addition is given by the following:
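The combined operation (Equation 6) is not reproduced in this version; a standard user-level clip-and-noise step, with clipping bound $C$ and noise multiplier $\sigma$ applied to a client update $\Delta w_k$, takes the form:

$$\widetilde{\Delta w}_k = \frac{\Delta w_k}{\max\!\left(1, \lVert \Delta w_k \rVert_2 / C\right)} + \mathcal{N}\!\left(0, \sigma^2 C^2 I\right).$$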
PFL via Fisher Information
In traditional PFL based on image classification tasks, the distinction between the shared and specific parts is already made explicit within the model. For example, in a ResNet model, the first few CNN layers are typically regarded as shared layers, given that they extract common image features. Conversely, the last few fully connected linear layers handle client-specific tasks, as they extract features relevant to client-specific attributes, which may differ from those of other clients. However, our approach to personalised federated NILM employs the Fisher information matrix to dynamically determine the shared and personalised parameters of energy users in each global round. The Fisher information measures the sensitivity of the log-likelihood function to changes in each parameter $\theta_j$. Let $\theta$ denote the parameters of the last round and $D_k$ the client dataset; the Fisher value for each weight indexed by $j$ in $\theta$ is given as follows:
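The expression (Equation 7) is omitted here; the usual empirical diagonal approximation of the Fisher information, evaluated at the last-round parameters $\theta$ on the client dataset $D_k$, is:

$$F_k[j] = \frac{1}{|D_k|} \sum_{(x, y) \in D_k} \left( \frac{\partial \log p(y \mid x; \theta)}{\partial \theta_j} \right)^{2}.$$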
The overall personalised federated learning mechanism with differential privacy is illustrated in Figure 2. The clients participating in federated learning download the global model from the server. In each global round, the clients compute the shared and personalised parameters dynamically, based on the Fisher information as detailed in Equation (7) and the associated Algorithm 2. Subsequently, after decomposing the clients' model parameters into the two components, clipping and noise are applied to the parameters according to the differential privacy mechanism given in Equation (6). This is applied solely to the shared parameters, which are uploaded to the server for aggregation. The server sends back the aggregated global model for the next round, and the iterative process continues until the global model converges. The overall personalised federated mechanism under differential privacy (DP) is presented in Algorithm 1, with a detailed account of how client model decomposition occurs in each global round given in Algorithm 2.
[IMAGE OMITTED. SEE PDF]
Algorithm 1
Proposed personalised federated NILM under DP.
Algorithm 2
Client model decomposition based on Fisher Information.
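The algorithm listings are not reproduced in this version. The sketch below illustrates, under stated assumptions, the client-side steps they describe: computing the empirical Fisher diagonal, splitting the weights into shared and personalised parts, and clipping and noising only the shared delta. The function names, the per-client normalisation of Fisher values and the direction of the threshold split (low-Fisher weights shared, high-Fisher weights kept personalised) are our assumptions, not the paper's exact procedure.

```python
import torch

def fisher_diagonal(model, loader, loss_fn, device="cpu"):
    """Empirical diagonal of the Fisher information matrix,
    approximated by averaging squared gradients of the loss."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(loader), 1) for n, f in fisher.items()}

def shared_mask(fisher, threshold=0.35):
    """True where a weight is treated as shared; Fisher values are
    normalised per client before thresholding (an assumption here)."""
    max_val = max(f.max() for f in fisher.values()) + 1e-12
    return {n: (f / max_val) <= threshold for n, f in fisher.items()}

def dp_shared_update(delta, masks, clip_bound=0.2, noise_multiplier=0.5):
    """Clip the shared part of a local update and add Gaussian noise;
    personalised weights are zeroed out and stay on the client."""
    shared = {n: d * masks[n] for n, d in delta.items()}
    norm = torch.sqrt(sum((s ** 2).sum() for s in shared.values()))
    scale = min(1.0, clip_bound / (float(norm) + 1e-12))
    return {n: s * scale + noise_multiplier * clip_bound * torch.randn_like(s)
            for n, s in shared.items()}
```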
Experiments
Datasets
In our research, we utilised two distinct datasets: the Pecan Street and REFIT datasets. The Pecan Street dataset comprises 15-min interval aggregated energy and household appliance data obtained from energy-measuring devices. It contains the energy consumption of four appliances (an air conditioner, an electric vehicle, a kitchen appliance and a refrigerator) estimated from the energy meter. This publicly available dataset contains electrical measurements from a variety of households and their respective appliances across different cities in the United States [27]. Similarly, the REFIT dataset incorporates high-frequency load measurement data from 20 households in the United Kingdom [28]. The measurements were taken at 6-s intervals for household appliances; for the purposes of this study, we averaged the data over one-minute intervals. We focused on data from 10 households of the REFIT dataset, considering both the aggregated energy and the energy consumed by specific appliances, namely the refrigerator, washing machine, television and microwave. It is important to note that household energy consumption patterns vary based on several factors, including individual preferences, living standards and occupancy. Although the datasets do not provide information about the factors influencing user energy consumption signatures, it is evident that energy consumption differs among clients, thus making this a heterogeneous home energy network.
Model Architecture
We implemented federated learning using the PyTorch framework, which simplifies the training of deep learning models [29]. PyTorch facilitates the separation of the model architecture from the training process, rendering it an appropriate choice for personalised federated setups and for integrating differential privacy in our simulation environment. We utilised the Opacus library to implement the adaptive differential privacy mechanism in DP-FedAvg and DP-PFL, which requires a noise multiplier for the target epsilon and delta values [30]. Our findings suggest that a noise multiplier of 0.5 represents the optimal choice for the privacy-accuracy trade-off. The optimal privacy budget is defined by an epsilon of 1.0 and a delta of 0.1, as evidenced by detailed experiments with different hyperparameters. We used the same privacy budget for DP-FedSAM, which does not require a noise multiplier hyperparameter for adaptive privacy protection. The effect of noise in DP-FedSAM is mitigated by bounding the norm of the gradient updates to a target clipping bound. A noise term drawn from a normal distribution with zero mean and a variance determined by the clipping bound is added to the clipped weights, and this noise variance takes the place of the noise multiplier used in the adaptive approach for the targeted epsilon and delta.
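As an illustration of how the reported Opacus noise multiplier and clipping bound could be wired into a client's local training, a minimal sketch follows; the model, optimiser and data here are placeholders, not the paper's exact setup.

```python
import torch
from opacus import PrivacyEngine

# Placeholder local model, optimiser and loader for a single NILM client.
model = torch.nn.Linear(64, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(256, 64), torch.randn(256, 1)),
    batch_size=32,
)

privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=0.5,  # value reported as optimal in the text
    max_grad_norm=0.2,     # clipping bound from Table 2
)

criterion = torch.nn.MSELoss()
for x, y in loader:  # one local epoch with per-sample clipping and noise
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()

# Privacy spent so far, checked against the study's delta of 0.1.
epsilon = privacy_engine.get_epsilon(delta=0.1)
```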
We developed highly adaptable modules for the model architecture. The gated recurrent unit (GRU) model utilises recurrent neural network layers for time series data, with a ReLU activation function. This adaptability ensures that the model can accommodate the varying energy consumption profiles exhibited by different clients. Additionally, sequence-to-point (S2P)-based convolutional neural network (CNN), long short-term memory (LSTM) and hybrid CNN-LSTM models were also employed in this paper, as they have been used in previous federated NILM studies. The S2P model uses two CNN layers and two dense linear layers, whereas the other models adopt only a single CNN or LSTM layer with two hidden layers. To achieve personalisation, Fisher-based approaches are employed, whereby the diagonal of the Fisher information (the squared gradients of the log-likelihood) is computed in each round from the respective samples and labels associated with the weights in each layer. Such personalisation enables the shared and client-specific features to be derived dynamically, which then determines which updates are clipped and noised by the differential privacy mechanism, thus addressing security concerns. The optimal Fisher threshold for personalisation in this study is 0.35, which results in fast convergence. The optimal values of the different hyperparameters used in our simulation are given in Table 2.
TABLE 2 Optimum value of hyperparameters used in the simulation.
| Parameter | Value |
| Specific layer | GRU, CNN, LSTM |
| Hidden layer | 2 |
| Activation function | ReLU, Tanh |
| Noise multiplier | 0.5 |
| Epsilon | 1.0 |
| Delta | 0.1 |
| Clipping bound | 0.2 |
| Local epoch | 2 |
| Learning rate | 0.001 |
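A minimal sketch of the S2P-style CNN described above, with two convolutional layers followed by two dense layers, is shown below; the layer widths and window length are illustrative assumptions, not the paper's exact configuration.

```python
import torch.nn as nn

class Seq2PointCNN(nn.Module):
    """Maps a window of aggregate power readings to a single
    appliance-level estimate, as in sequence-to-point NILM."""
    def __init__(self, window_len=64, hidden=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * window_len, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):  # x shape: (batch, 1, window_len)
        return self.head(self.features(x))
```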
Results and Discussion
We conducted a study to evaluate the performance of personalisation (DP-PFL) in the federated DP context. To this end, we compared DP-PFL with two other approaches, DP-FedAvg and DP-FedSAM, in an energy disaggregation application for smart homes. To ensure a fair comparison of the three DP methods, comprehensive experiments were conducted to determine the optimal values of the hyperparameters. Once the optimal values had been identified, we carried out the federated training process with 300 global rounds and two local epochs. In contrast to classification tasks, time series prediction tasks are evaluated with accuracy metrics for possible malicious attack and security analysis [31, 32]. A fair comparison can be made using the RMSE metric to evaluate the loss during privacy-preserving federated training in smart grids [33, 34]. Furthermore, we also considered personalised federated learning without DP (PFL) to assess the accuracy of the differential privacy mechanism in securing and providing privacy to FL.
Figures 3a and 4a show the RMSE loss over the federated rounds for PFL, DP-PFL, DP-FedAvg and DP-FedSAM on the Pecan Street dataset for the CNN and hybrid CNN-LSTM models, respectively. Similarly, Figures 3b and 4b show the corresponding RMSE losses of the CNN and hybrid CNN-LSTM models for PFL, DP-PFL, DP-FedAvg and DP-FedSAM on REFIT. The figures show that the DP-PFL approach outperforms the DP-FedAvg and DP-FedSAM approaches, following the same trend as PFL without differential privacy with a minimal gap. We evaluated our approaches on four deep learning models (CNN, GRU, LSTM and CNN-LSTM) but only show the S2P-based CNN and hybrid CNN-LSTM models, as all of them exhibit similar trends. In DP-PFL, the DP noise and clipping are applied only to the parameters that the clients share during communication with the server. In addition, the adaptive noise and clipping in DP-FedAvg performs better than DP-FedSAM, while all approaches show a declining loss with an increasing number of training rounds. The smoother performance of DP-PFL indicates that it experiences minimal adjacent global loss variations, similar to PFL. Since we keep the noise hyperparameters and dense layers the same in all cases, hyperparameter optimisation for the GRU model may address such a problem.
[IMAGE OMITTED. SEE PDF]
[IMAGE OMITTED. SEE PDF]
The REFIT dataset comprises 6-s interval measurements of the appliances, averaged over one-minute intervals for the study. The DP-PFL performance on the REFIT dataset in Figures 3b and 4b shows a trend similar to that on Pecan Street. The RMSE loss of the CNN-LSTM model in DP-FedSAM is higher than that of the other approaches on the Pecan Street dataset, as shown in Figure 4a. The comparative performance of DP-PFL shows that it outperforms the other differential privacy approaches, with the lowest RMSE of 0.7741 for the S2P-based CNN model. The results of DP-FedAvg and DP-FedSAM on the REFIT dataset show a lower RMSE than on Pecan Street for all the respective deep learning models. Appliance-level prediction gives better insight into the performance of the global model. We present two appliances from each dataset to demonstrate the forecasting accuracy in the differentially private federated NILM setup, as shown in Figures 5 and 6. The CNN-LSTM-based model shows smoother forecasting trends, while the CNN, though it predicts peak values well, fluctuates strongly. Studies show that transformer-based models perform well in prediction and address the issue of catastrophic forgetting. Figures 5 and 6 show that the differential privacy mechanism in federated learning (in our case, DP-PFL) does not greatly affect the prediction performance, which could be further enhanced via transformer-based models, especially for appliances with nonrecurrent profiles.
[IMAGE OMITTED. SEE PDF]
[IMAGE OMITTED. SEE PDF]
The individual performance of each client in the DP-PFL setup for the two datasets is given in Table 3, with the best performance highlighted in bold. Similarly, for comparative analysis, the individual clients' RMSE losses for the other differentially private federated approaches, DP-FedAvg and DP-FedSAM, are given in Tables 4 and 5. The clients performing well on the deep learning models are shown in bold for all the cases discussed in Tables 3–5. Client-10 gives the smallest RMSE of 0.3836 for the CNN model on the REFIT dataset, while it gives a much larger RMSE of 0.7949 for the LSTM model under DP-FedSAM on the Pecan Street dataset. Both datasets are diverse and collected from two different geographical zones: Pecan Street from Austin, USA, and REFIT from households in Loughborough, UK. Moreover, the consumption interval also differs, making the study viable for both 1-min and 15-min analyses in future studies. The datasets are diverse in terms of appliance usage frequency; however, the study is limited in that the number and types of appliances are assumed to be the same across clients for each dataset.
TABLE 3 DP-PFL method: average training loss (RMSE) of individual clients on the two datasets.
| Client | Training loss on Pecan | Training loss on REFIT | ||||||
| GRU | CNN | LSTM | CNN-LSTM | GRU | CNN | LSTM | CNN-LSTM | |
| Client-01 | 0.7476 | 0.7293 | 0.7948 | 0.7619 | 0.6024 | 0.4234 | 0.5370 | 0.4767 |
| Client-02 | 0.7297 | 0.7716 | 0.7947 | 0.7676 | 0.6257 | 0.3904 | 0.5639 | 0.4769 |
| Client-03 | 0.7659 | 0.7575 | 0.7922 | 0.7568 | 0.6040 | 0.4616 | 0.5434 | 0.4443 |
| Client-04 | 0.7250 | 0.7764 | 0.7675 | 0.8099 | 0.6301 | 0.4570 | 0.5137 | 0.4652 |
| Client-05 | 0.7523 | 0.7867 | 0.8068 | 0.7625 | 0.6605 | 0.4316 | 0.5658 | 0.4526 |
| Client-06 | 0.7422 | 0.7592 | 0.8202 | 0.7661 | 0.5277 | 0.4140 | 0.5114 | 0.4429 |
| Client-07 | 0.7257 | 0.7832 | 0.8359 | 0.7614 | 0.6296 | 0.4531 | 0.5394 | 0.4680 |
| Client-08 | 0.7347 | 0.7964 | 0.8249 | 0.7708 | 0.6299 | 0.4168 | 0.5355 | 0.4234 |
| Client-09 | 0.7185 | 0.7953 | 0.8222 | 0.7935 | 0.6039 | 0.4165 | 0.5217 | 0.4415 |
| Client-10 | 0.7516 | 0.7784 | 0.8207 | 0.7729 | 0.6108 | 0.3836 | 0.5595 | 0.4209 |
TABLE 4 DP-FedAvg method: average training loss (RMSE) of individual clients on the two datasets.
| Client | Training loss on Pecan | Training loss on REFIT | ||||||
| GRU | CNN | LSTM | CNN-LSTM | GRU | CNN | LSTM | CNN-LSTM | |
| Client-01 | 0.7576 | 0.7773 | 0.8107 | 0.7581 | 0.6108 | 0.4188 | 0.5571 | 0.4834 |
| Client-02 | 0.7610 | 0.7777 | 0.8313 | 0.7741 | 0.6192 | 0.4088 | 0.5546 | 0.4734 |
| Client-03 | 0.7413 | 0.7655 | 0.7937 | 0.7693 | 0.6478 | 0.4199 | 0.5455 | 0.4423 |
| Client-04 | 0.7402 | 0.7556 | 0.7925 | 0.7871 | 0.6493 | 0.4417 | 0.5562 | 0.4043 |
| Client-05 | 0.7601 | 0.7999 | 0.7946 | 0.7878 | 0.6258 | 0.4431 | 0.5355 | 0.4860 |
| Client-06 | 0.7556 | 0.7747 | 0.8377 | 0.7993 | 0.6286 | 0.4050 | 0.5506 | 0.4757 |
| Client-07 | 0.7362 | 0.7869 | 0.7976 | 0.7512 | 0.6516 | 0.4406 | 0.5516 | 0.4815 |
| Client-08 | 0.7558 | 0.7707 | 0.8210 | 0.7698 | 0.6178 | 0.4441 | 0.5579 | 0.4839 |
| Client-09 | 0.7444 | 0.7627 | 0.8190 | 0.8105 | 0.6471 | 0.4378 | 0.5426 | 0.4092 |
| Client-10 | 0.7626 | 0.7836 | 0.8162 | 0.7528 | 0.6569 | 0.4083 | 0.5421 | 0.4453 |
TABLE 5 DP-FedSAM method: average training loss (RMSE) of individual clients on the two datasets.
| Client | Training loss on Pecan | Training loss on REFIT | ||||||
| GRU | CNN | LSTM | CNN-LSTM | GRU | CNN | LSTM | CNN-LSTM | |
| Client-01 | 0.7552 | 0.8010 | 0.8357 | 0.7891 | 0.6066 | 0.4185 | 0.5664 | 0.4834 |
| Client-02 | 0.7700 | 0.7977 | 0.8061 | 0.8104 | 0.6883 | 0.4443 | 0.6102 | 0.4201 |
| Client-03 | 0.7761 | 0.7603 | 0.7996 | 0.8253 | 0.6029 | 0.4532 | 0.5742 | 0.4977 |
| Client-04 | 0.7725 | 0.8088 | 0.7953 | 0.7759 | 0.6459 | 0.4227 | 0.6373 | 0.4658 |
| Client-05 | 0.7642 | 0.7847 | 0.8279 | 0.8169 | 0.6924 | 0.4304 | 0.6099 | 0.4949 |
| Client-06 | 0.7552 | 0.7674 | 0.8429 | 0.7785 | 0.6805 | 0.4747 | 0.5601 | 0.4276 |
| Client-07 | 0.7688 | 0.7653 | 0.8018 | 0.8135 | 0.6491 | 0.4251 | 0.5795 | 0.4610 |
| Client-08 | 0.7445 | 0.7430 | 0.8130 | 0.8141 | 0.6122 | 0.4792 | 0.6025 | 0.4830 |
| Client-09 | 0.7505 | 0.7742 | 0.8229 | 0.7839 | 0.6427 | 0.4735 | 0.6539 | 0.5142 |
| Client-10 | 0.7465 | 0.7813 | 0.7949 | 0.8062 | 0.6054 | 0.4156 | 0.6208 | 0.5084 |
The study evaluates model performance on the regression task of forecasting appliance energy demand. However, the status of the appliances has unique characteristics in most energy disaggregation studies. We therefore also analysed a classification scenario to predict the ON/OFF status of the appliances in the Pecan Street and REFIT datasets. The detailed test results for the respective appliances of both datasets are given in Appendix A. They detail the performance of the federated differential privacy mechanisms and DL models on the individual appliances with respect to classification metrics such as accuracy, precision, recall and F1 score. The CNN and CNN-LSTM models are effective in classifying the status of the appliances under all the DP mechanisms. For most appliances, the DP-PFL models achieve slightly better accuracy than DP-FedSAM and DP-FedAvg. Since federated learning involves ML training at the edge and parameter communication between the server and edge devices, DP-PFL increases privacy while maintaining the same performance level as DP-FedSAM and DP-FedAvg. DP-PFL communicates only the shared global model parameters, protecting the personalised weights of the edge clients from potential leakage.
The study provides insight into the impact of differential privacy on federated NILM tasks and shows the significance of personalisation in enhancing the performance of energy disaggregation under a better privacy budget. Still, the study has limitations: it primarily focused on privacy concerns related to protecting the sensitive information of smart home consumers from potential gradient leakage. It also assumed the same appliance types across clients, which in reality may differ, increasing client heterogeneity. In future studies, we will try to handle such highly heterogeneous setups and also aim to implement transformer-based models to enhance the performance of NILM tasks across various time-series datasets. Additionally, model pruning methods and client personalisation under diverse appliance types may yield improved personalisation and differential privacy results, as they involve fewer parameters than traditional federated learning (FL) training.
Conclusion
Energy disaggregation of consumer smart meter data into individual appliance signatures helps monitor the status of smart home appliances. The NILM task assists demand response programs, which can be helpful for efficient smart home operation. Our paper proposes a federated learning approach for nonintrusive load monitoring energy prediction, addressing the privacy concerns of a centralised setup. We employed differential privacy to address security concerns over malicious attacks on client updates. We introduce the personalisation of NILM clients using the Fisher information matrix to enhance federated learning performance within privacy constraints. We comparatively validate the results against other state-of-the-art DP mechanisms in a federated NILM setup. The results show that DP-PFL performs best, with the lowest RMSE loss compared to DP-FedAvg and DP-FedSAM. Similarly, the CNN provides better results on appliance-level status classification for both datasets.
Author Contributions
Mazhar Ali: conceptualization, data curation, formal analysis, investigation, methodology, resources, software, validation, visualization, writing – original draft, writing – review and editing. Ajit Kumar: formal analysis, supervision, visualization, writing – review and editing. Bong Jun Choi: conceptualization, formal analysis, funding acquisition, investigation, methodology, project administration, supervision, validation, visualization, writing – review and editing.
Acknowledgements
This research was supported by the MSIT Korea under the NRF Korea (RS-2025-00557379, 90%) and the Information Technology Research Center (ITRC) support program (IITP-2025-RS-2020-II201602, 10%) supervised by the IITP.
Conflicts of Interest
The authors declare no conflicts of interest.
Data Availability Statement
Datasets used in the study are openly available in a public repository that issues datasets with links; (Pecan), (REFIT).
Appendix A
A.1: Pecan Street
Details of the results are presented below.
See Tables A1–A4.
TABLE A1 Performance metrics of DP methods and DL models on air-conditioner.
| Methods | Model | Accuracy | Precision | Recall | F-1 |
| DP-FedSAM | GRU | 0.5392 | 0.5833 | 0.4580 | 0.5643 |
| LSTM | 0.5508 | 0.7080 | 0.6085 | 0.6960 | |
| CNN | 0.6768 | 0.7637 | 0.6885 | 0.7104 | |
| CNN-LSTM | 0.6756 | 0.7857 | 0.4167 | 0.5326 | |
| DP-FedAVG | GRU | 0.5207 | 0.5005 | 0.5012 | 0.5560 |
| LSTM | 0.5468 | 0.7123 | 0.5451 | 0.6929 | |
| CNN | 0.6544 | 0.7701 | 0.6645 | 0.6953 | |
| CNN-LSTM | 0.7266 | 0.7813 | 0.7662 | 0.7488 | |
| DP-PFL | GRU | 0.5964 | 0.5642 | 0.5380 | 0.5541 |
| LSTM | 0.6084 | 0.7718 | 0.6084 | 0.6593 | |
| CNN | 0.8324 | 0.7587 | 0.6324 | 0.6771 | |
| CNN-LSTM | 0.8029 | 0.7885 | 0.8112 | 0.7949 |
TABLE A2 Performance metrics of DP methods and DL models on electric vehicle.
| Methods | Model | Accuracy | Precision | Recall | F-1 |
| DP-FedSAM | GRU | 0.5165 | 0.3960 | 0.4965 | 0.4456 |
| LSTM | 0.5956 | 0.581 | 0.5946 | 0.5823 | |
| CNN | 0.6899 | 0.6845 | 0.6899 | 0.682 | |
| CNN-LSTM | 0.6098 | 0.6117 | 0.6098 | 0.6107 | |
| DP-FedAVG | GRU | 0.5336 | 0.3893 | 0.5146 | 0.4424 |
| LSTM | 0.5966 | 0.7593 | 0.4696 | 0.4458 | |
| CNN | 0.5783 | 0.5697 | 0.5316 | 0.5724 | |
| CNN-LSTM | 0.5998 | 0.5990 | 0.4598 | 0.4645 | |
| DP-PFL | GRU | 0.5424 | 0.5184 | 0.5041 | 0.5234 |
| LSTM | 0.6751 | 0.6706 | 0.5711 | 0.6715 | |
| CNN | 0.6042 | 0.5831 | 0.5028 | 0.5692 | |
| CNN-LSTM | 0.6985 | 0.7103 | 0.5915 | 0.6713 |
TABLE A3 Performance metrics of DP methods and DL models on oven.
| Methods | Model | Accuracy | Precision | Recall | F-1 |
| DP-FedSAM | GRU | 0.5596 | 0.3696 | 0.5091 | 0.4146 |
| LSTM | 0.5710 | 0.755 | 0.5031 | 0.4151 | |
| CNN | 0.5946 | 0.5846 | 0.5214 | 0.5533 | |
| CNN-LSTM | 0.5731 | 0.7236 | 0.5563 | 0.4202 | |
| DP-FedAVG | GRU | 0.5603 | 0.3206 | 0.4502 | 0.4138 |
| LSTM | 0.5986 | 0.5884 | 0.5026 | 0.5789 | |
| CNN | 0.6259 | 0.6231 | 0.6188 | 0.5993 | |
| CNN-LSTM | 0.5819 | 0.7172 | 0.5981 | 0.4415 | |
| DP-PFL | GRU | 0.5491 | 0.5074 | 0.5149 | 0.4801 |
| LSTM | 0.5644 | 0.3811 | 0.5154 | 0.4141 | |
| CNN | 0.6149 | 0.613 | 0.5601 | 0.5776 | |
| CNN-LSTM | 0.5712 | 0.6325 | 0.5217 | 0.4157 |
TABLE A4 Performance metrics of DP methods and DL models on refrigerator.
| Methods | Model | Accuracy | Precision | Recall | F-1 |
| DP-FedSAM | GRU | 0.4528 | 0.3707 | 0.3502 | 0.4190 |
| LSTM | 0.6767 | 0.6871 | 0.5576 | 0.6505 | |
| CNN | 0.6102 | 0.5994 | 0.6008 | 0.5791 | |
| CNN-LSTM | 0.5979 | 0.6502 | 0.5817 | 0.4804 | |
| DP-FedAVG | GRU | 0.4709 | 0.3553 | 0.3975 | 0.4255 |
| LSTM | 0.5553 | 0.6535 | 0.5465 | 0.4347 | |
| CNN | 0.6059 | 0.5945 | 0.5829 | 0.5674 | |
| CNN-LSTM | 0.5803 | 0.5528 | 0.5608 | 0.4320 | |
| DP-PFL | GRU | 0.5524 | 0.5004 | 0.4525 | 0.4831 |
| LSTM | 0.5801 | 0.7564 | 0.4581 | 0.4260 | |
| CNN | 0.6193 | 0.6102 | 0.5190 | 0.5940 | |
| CNN-LSTM | 0.5803 | 0.5606 | 0.4839 | 0.4278 |
A.2: REFIT
See Tables A5–A8.
TABLE A5 Performance metrics of DP methods and DL models on fridge.
| Methods | Model | Accuracy | Precision | Recall | F-1 |
| DP-FedSAM | GRU | 0.7804 | 0.7529 | 0.7065 | 0.7246 |
| LSTM | 0.8295 | 0.8422 | 0.8393 | 0.8401 | |
| CNN | 0.9047 | 0.9050 | 0.9037 | 0.9040 | |
| CNN-LSTM | 0.9035 | 0.9039 | 0.9035 | 0.9036 | |
| DP-FedAVG | GRU | 0.7954 | 0.7609 | 0.7104 | 0.7406 |
| LSTM | 0.8474 | 0.8565 | 0.8434 | 0.8488 | |
| CNN | 0.9065 | 0.9069 | 0.9051 | 0.9066 | |
| CNN-LSTM | 0.9051 | 0.9049 | 0.9021 | 0.9050 | |
| DP-PFL | GRU | 0.8035 | 0.8742 | 0.8014 | 0.8208 |
| LSTM | 0.8612 | 0.8643 | 0.8562 | 0.8620 | |
| CNN | 0.9091 | 0.9096 | 0.9094 | 0.9092 | |
| CNN-LSTM | 0.9102 | 0.9098 | 0.9108 | 0.9077 |
TABLE A6 Performance metrics of DP methods and DL models on washing machine.
| Methods | Model | Accuracy | Precision | Recall | F-1 |
| DP-FedSAM | GRU | 0.7987 | 0.7509 | 0.5954 | 0.8041 |
| LSTM | 0.8236 | 0.5497 | 0.8254 | 0.8255 | |
| CNN | 0.9099 | 0.9103 | 0.9059 | 0.9101 | |
| CNN-LSTM | 0.9104 | 0.9101 | 0.9104 | 0.9101 | |
| DP-FedAVG | GRU | 0.8054 | 0.7609 | 0.7945 | 0.8241 |
| LSTM | 0.8337 | 0.8450 | 0.8337 | 0.8353 | |
| CNN | 0.9106 | 0.9118 | 0.9103 | 0.9109 | |
| CNN-LSTM | 0.9056 | 0.9087 | 0.9015 | 0.9088 | |
| DP-PFL | GRU | 0.8684 | 0.8765 | 0.8548 | 0.8695 |
| LSTM | 0.8498 | 0.8579 | 0.8455 | 0.8511 | |
| CNN | 0.8935 | 0.8948 | 0.8921 | 0.8939 | |
| CNN-LSTM | 0.9125 | 0.9126 | 0.9005 | 0.9061 |
TABLE A7 Performance metrics of DP methods and DL models on television.
| Methods | Model | Accuracy | Precision | Recall | F-1 |
| DP-FedSAM | GRU | 0.4025 | 0.7609 | 0.5204 | 0.3625 |
| LSTM | 0.6844 | 0.6534 | 0.5444 | 0.5454 | |
| CNN | 0.9059 | 0.9068 | 0.9051 | 0.9062 | |
| CNN-LSTM | 0.9091 | 0.9088 | 0.9065 | 0.9088 | |
| DP-FedAVG | GRU | 0.5703 | 0.6677 | 0.5630 | 0.5561 |
| LSTM | 0.5858 | 0.7720 | 0.4588 | 0.5951 | |
| CNN | 0.9129 | 0.9134 | 0.9107 | 0.9130 | |
| CNN-LSTM | 0.9104 | 0.9126 | 0.9106 | 0.9106 | |
| DP-PFL | GRU | 0.5956 | 0.7103 | 0.4639 | 0.5722 |
| LSTM | 0.6413 | 0.6573 | 0.5634 | 0.5429 | |
| CNN | 0.9161 | 0.9164 | 0.9105 | 0.9162 | |
| CNN-LSTM | 0.9174 | 0.9179 | 0.9014 | 0.9176 |
TABLE A8 Performance metrics of DP methods and DL models on microwave.
| Methods | Model | Accuracy | Precision | Recall | F-1 |
| DP-FedSAM | GRU | 0.7971 | 0.7612 | 0.6907 | 0.7206 |
| LSTM | 0.8417 | 0.8540 | 0.8375 | 0.8433 | |
| CNN | 0.9062 | 0.9066 | 0.9012 | 0.9064 | |
| CNN-LSTM | 0.9110 | 0.9108 | 0.8910 | 0.9109 | |
| DP-FedAVG | GRU | 0.8539 | 0.7609 | 0.7946 | 0.8142 |
| LSTM | 0.8353 | 0.8532 | 0.8312 | 0.8371 | |
| CNN | 0.9077 | 0.9081 | 0.9047 | 0.9078 | |
| CNN-LSTM | 0.8962 | 0.8962 | 0.8902 | 0.8962 | |
| DP-PFL | GRU | 0.8208 | 0.8785 | 0.8106 | 0.8219 |
| LSTM | 0.8523 | 0.8644 | 0.8463 | 0.8538 | |
| CNN | 0.9081 | 0.9067 | 0.9015 | 0.9053 | |
| CNN-LSTM | 0.9065 | 0.9057 | 0.8905 | 0.9066 |
M. Ali, A. K. Singh, A. Kumar, S. S. Ali, and B. J. Choi, “Comparative Analysis of Data‐Driven Algorithms for Building Energy Planning via Federated Learning,” Energies 16, no. 18 (2023): 6517, https://doi.org/10.3390/en16186517.
Z. Pan, H. Wang, C. Li, H. Wang, and J. Zhao, “Perfednilm: A Practical Personalized Federated Learning‐Based Non‐Intrusive Load Monitoring,” Industrial Artificial Intelligence 2, no. 1 (2024): 4, https://doi.org/10.1007/s44244‐024‐00016‐8.
R. Gopinath, M. Kumar, C. P. C. Joshua, and K. Srinivas, “Energy Management Using Non‐Intrusive Load Monitoring Techniques–State‐of‐the‐Art and Future Research Directions,” Sustainable Cities and Society 62 (2020): 102411, https://doi.org/10.1016/j.scs.2020.102411.
H. Bousbiat, Y. Himeur, I. Varlamis, F. Bensaali, and A. Amira, “Neural Load Disaggregation: Meta‐Analysis, Federated Learning and Beyond,” Energies 16, no. 2 (2023): 991, https://doi.org/10.3390/en16020991.
I. Butun, A. Lekidis, and D. Ricardo dos Santos, “Security and Privacy in Smart Grids: Challenges, Current Solutions and Future Opportunities,” in Proceedings of the International Conference on Information Systems Security and Privacy (ICISSP), (2020), 733–741, https://doi.org/10.5220/0009187307330741.
M. Fang, X. Cao, J. Jia, and N. Gong, “Local Model Poisoning Attacks to Byzantine‐Robust Federated Learning,” in 29th USENIX Security Symposium (USENIX Security 20), (2020), 1605–1622.
M. Ul Hassan, M. H. Rehmani, and J. Chen, “Differential Privacy Techniques for Cyber Physical Systems: A Survey,” IEEE Communications Surveys & Tutorials 22, no. 1 (2019): 746–789, https://doi.org/10.1109/comst.2019.2944748.
M. S. Abdalzaher, M. M. Fouda, and I. I. Mohamed, “Data Privacy Preservation and Security in Smart Metering Systems,” Energies 15, no. 19 (2022): 7419, https://doi.org/10.3390/en15197419.
S. Zhao, S. Xu, S. Han, et al., “PPMM‐DA: Privacy‐Preserving Multi‐Dimensional and Multi‐Subset Data Aggregation With Differential Privacy for Fog‐Based Smart Grids,” IEEE Internet of Things Journal 11, no. 4 (2023): 6096–6110, https://doi.org/10.1109/jiot.2023.3309132.
H. B. McMahan, D. Ramage, K. Talwar, and Li Zhang, “Learning Differentially Private Recurrent Language Models,” arXiv preprint arXiv:1710.06963 (2017).
Y. Shi, Y. Liu, W. Kang, Li Shen, X. Wang, and D. Tao, “Make Landscape Flatter in Differentially Private Federated Learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2023), 24552–24562.
Q. Li, J. Ye, W. Song, and Z. Tse, “Energy Disaggregation With Federated and Transfer Learning,” in 2021 IEEE 7th World Forum on Internet of Things (WF‐IoT) (IEEE, 2021), 698–703.
Y. Shi, W. Li, X. Chang, and A. Y. Zomaya, “User Privacy Leakages From Federated Learning in NILM Applications,” in Proceedings of the 8th ACM International Conference on Systems for Energy‐Efficient Buildings, Cities, and Transportation, (2021), 212–213.
X. Wang and W. Li, “Mtfed‐nilm: Multi‐Task Federated Learning for Non‐Intrusive Load Monitoring,” in 2022 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech) (IEEE, 2022), 1–8.
H. Wang, C. Si, G. Liu, J. Zhao, F. Wen, and Y. Xue, “Fed‐NILM: A Federated Learning‐Based Non‐Intrusive Load Monitoring Method for Privacy‐Protection,” Energy Conversion and Economics 3, no. 2 (2022): 51–60, https://doi.org/10.1049/enc2.12055.
S. Kaspour and A. Yassine, “Federated Non‐Intrusive Load Monitoring for Smart Homes Utilizing Attention‐Based Aggregation,” in 2023 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT) (IEEE, 2023), 208–213.
X. Zhou, J. Feng, J. Wang, and J. Pan, “Privacy‐Preserving Household Load Forecasting Based on Non‐Intrusive Load Monitoring: A Federated Deep Learning Approach,” PeerJ Computer Science 8 (2022): e1049, https://doi.org/10.7717/peerj‐cs.1049.
Yu‐H. Lin and J.‐C. Ciou, “A Privacy‐Preserving Distributed Energy Management Framework Based on Vertical Federated Learning‐Based Smart Data Cleaning for Smart Home Electricity Data,” Internet of Things 26 (2024): 101222, https://doi.org/10.1016/j.iot.2024.101222.
A. Giuseppi, A. Giuseppi, S. Manfredi, D. Menegatti, A. Pietrabissa, and C. Poli, “Decentralized Federated Learning for Nonintrusive Load Monitoring in Smart Energy Communities,” in 2022 30th Mediterranean Conference on Control and Automation (MED) (IEEE, 2022), 312–317.
Y. Zhang, G. Tang, Q. Huang, et al., “Fednilm: Applying Federated Learning to Nilm Applications at the Edge,” in IEEE Transactions on Green Communications and Networking (IEEE, 2022).
X. Chang, W. Li, and A. Y. Zomaya, “Fed‐GBM: A Cost‐Effective Federated Gradient Boosting Tree for Non‐Intrusive Load Monitoring,” in Proceedings of the Thirteenth ACM International Conference on Future Energy Systems, (2022), 63–75.
P. Foret, A. Kleiner, H. Mobahi, and B. Neyshabur, “Sharpness‐Aware Minimization for Efficiently Improving Generalization,” arXiv preprint arXiv:2010.01412 (2020).
M. Ali, A. Kumar, and B. J. Choi, “Power Quality Forecasting of Microgrids Using Adaptive Privacy‐Preserving Machine Learning,” in International Conference on Applied Cryptography and Network Security (Springer, 2024), 235–245.
D. Chen, J. Hu, V. J. Tan, X. Wei, and E. Wu, “Elastic Aggregation for Federated Optimization,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2023), 12187–12197.
B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication‐Efficient Learning of Deep Networks From Decentralized Data,” in Artificial Intelligence and Statistics (PMLR, 2017), 1273–1282.
M. Abadi, A. Chu, I. Goodfellow, et al., “Deep Learning With Differential Privacy,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, (2016), 308–318.
O. Parson, G. Fisher, A. Hersey, et al., “Dataport and NILMTK: A Building Data Set Designed for Non‐Intrusive Load Monitoring,” in 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP) (IEEE, 2015), 210–214.
D. M. Murray, “NILM: Energy Monitoring, Modelling, and Disaggregation” (2023).
K. Burlachenko, S. Horváth, and P. Richtárik, “Fl_pytorch: Optimization Research Simulator for Federated Learning,” in Proceedings of the 2nd ACM International Workshop on Distributed Machine Learning, (2021), 1–7.
A. Yousefpour, I. Shilov, A. Sablayrolles, et al., “Opacus: User‐Friendly Differential Privacy Library in PyTorch,” arXiv preprint arXiv:2109.12298 (2021).
N. Ravi, “Enhancing Networked Systems: A Comprehensive Approach to Robust and Privacy‐Preserving Optimization Algorithms.” PhD diss. (Cornell University, 2024).
D. Syed, S. S. Refaat, and O. Bouhali, “Privacy Preservation of Data‐Driven Models in Smart Grids Using Homomorphic Encryption,” Information 11, no. 7 (2020): 357, https://doi.org/10.3390/info11070357.
A. Bhattacharjee, S. Badsha, and S. Sengupta, “Personalized Privacy Preservation for Smart Grid,” in 2021 IEEE International Smart Cities Conference (ISC2) (IEEE, 2021), 1–7.
H.‐Y. Tran, J. Hu, and H. R. Pota, “A Privacy‐Preserving State Estimation Scheme for Smart Grids,” IEEE Transactions on Dependable and Secure Computing 20, no. 5 (2022): 3940–3956, https://doi.org/10.1109/tdsc.2022.3210017.