The use of consumer electronics has grown drastically in recent times due to advancements in technology, and users now expect privacy and security for the data shared over these devices. Split learning has become a widespread technique for providing these assurances, but it needs to be extended further. This work therefore proposes a model in which the traditional approach is complemented by a more security-aware and privacy-aware alternative with better performance. The superior performance is explained by a more detailed implementation than the plain split learning approach: the model provides data-leak prevention for different use cases, including (but not limited to) the ability to protect data at rest, in motion, and in use, and it can therefore improve transaction speed and latency over split learning. The security- and privacy-aware view helps users by providing options to secure their data, while the leak-prevention mechanisms improve usability; it can also lead to greater flexibility in the data types handled by the network. The reported evaluation (an FDR of 0.05135 and a prevalence threshold, Pth, of 1) shows that the programmable security- and privacy-aware model far outstrips split learning in terms of performance, making it a good candidate for encrypting consumer-electronics-transmitted data. It provides a secure, responsive means for users to exchange information while ensuring that user privacy is well considered.
INTRODUCTION
The consumer electronics industry is on the brink of a new era with the cutting-edge concept of split learning, which radically improves the device experience. It allows users to benefit from advanced technology without having to purchase expensive or technologically sophisticated devices [1]. The split learning model divides the computing duty into parts that run on different components, such as personal devices and cloud devices. Different clients can then access and use the device as required in a given situation [2]. For example, apps like Netflix can rely on split learning for content streaming when we are away from home or at various places. Another significant advantage is that it protects users’ data [3]. Direct access to data is also eliminated by allowing users to divide their information across multiple cloud storage systems, which presents a smaller attack surface for criminals [4]. Moreover, cloud services usually have built-in security measures (e.g., two-factor authentication) to secure user data and prevent unauthorized access [5]. This also limits the impact of break-ins, since only parts of the computation are performed in any one place and no single compromise can take down the whole system. Split learning is particularly beneficial because, unlike a standard computing setup, it can be customized to the needs of each individual user, which significantly enhances the experience of using any device [6]. As one of the major consumer electronics innovations in this space, split learning stands to completely change how we interact with our devices. With wider adoption of this technology by different companies, user experience is bound to rise beyond what it is today [7]. Split learning is a novel machine learning methodology that is transforming the consumer electronics landscape. It insulates the data and securely moves computation to different locations, i.e., the data does not have to be shared between them during computing [8]. This technology has enabled manufacturers to produce consumer electronics with robust functionality without compromising security and privacy, and differential privacy further strengthens split learning [9]. Figure 1 shows the different levels of the security model.
Fig. 1. [Images not available. See PDF.]
Different levels of security model.
This way, individual data points cannot be traced back from the data in the process, eliminating room for a malicious organization or hacker trying to get into users’ records [10]. Consumer devices also employ privacy-preserving encryption for further enhanced identity protection, and split learning ensures that the data never leaves a particular enclave of trust. Split learning in consumer electronics has therefore helped manufacturers build more secure, private products [11]. It enables manufacturers to validate that data can be securely processed without ever requiring the transfer of private user information, and by enforcing this technology, users can trust their devices and the data in them [12]. Combined with split learning, it can also lower the price of consumer electronics: device manufacturers can streamline data processing for security, allowing the cost of a device to fall as the technology matures [13]. Split learning also has the potential to increase device capabilities, allowing for more robust and secure devices than ever. The consumer electronics [14] industry, in particular, has started benefiting from this innovation. Manufacturers can use encryption, differential privacy, and trusted enclaves to build devices that are performant and secure while still keeping user data private [15]. The solution provided by SplitGAN is a glimpse of the ultimate direction that models in this market are likely to evolve toward: mature enough for consumer electronics and making it easier than ever for manufacturers to satisfy their customers’ stringent security and privacy requirements [16]. One such emerging contribution is split learning, a deep learning (DL) approach restructured to enhance performance and user experience on consumer electronics. It could reduce the energy consumption of user electronics and make devices more responsive [17]. Split learning is a method in which a model for a task is split into two distinct parts. The first part is trained on a massive server (or cluster of servers) to model the features most relevant to that task. The second part trains a compact version of the model on an available device (a smartphone or tablet) for task-specific performance improvement [18]. Figure 2 shows the end-user privacy policy.
Fig. 2. [Images not available. See PDF.]
End-user privacy policy.
Split learning is a technique that divides one comprehensive computation into several smaller computations. Its advantages include reduced power consumption, but it requires the invention and implementation of specific software to divide the computation and allocate each part to the appropriate core [19]. The functional blocks of the security model are shown in Fig. 3.
Fig. 3. [Images not available. See PDF.]
Functions of security model.
This could potentially lead to problems in production as well as maintenance if the software is not properly written or maintained. As consumer demand for efficient devices continues to increase, split learning will be a vital part of any device’s performance [20]. The technique is already being used by major companies such as Apple and Samsung to reduce energy consumption and improve performance. The combination of cost savings and energy efficiency makes it a valuable asset in the development of power-efficient consumer electronics [21]. The main contributions of this research are the following:
Reduced transmission costs: split learning reduces the amount of data that needs to be transmitted, resulting in lower network and hardware costs.
Improved privacy: split learning enables sensitive data to remain completely on the edge device or in the cloud, greatly improving the privacy of personal data.
Low latency: split learning decreases latency by processing the training and inference on the same device, eliminating the need to send data back and forth over the network.
Increased scalability: by allowing the training process to take place on the edge device, or on a data center, split learning makes it easier to scale machine learning applications to large numbers of devices.
Reduced energy consumption: split learning reduces power consumption by keeping computations local to the device.
LITERATURE REVIEW
Wu, H., et al. [22] discussed privacy-aware task allocation and data aggregation in fog-assisted spatial crowdsourcing, a system for privacy-aware task assignment and information aggregation in spatial crowdsourcing, i.e., the allocation of tasks to workers in a specific geographical area. In this system, a fog device sits between workers and service providers, acting as an intermediary for task allocation and data aggregation. The fog encrypts and anonymizes task records and splits each task into smaller chunks that can be processed in a privacy-preserving manner; it then performs privacy-preserving aggregation of the collected data and sends the aggregated output back to the service provider and workers. Ni, Q., et al. [23] discussed privacy-aware role-based access control, a type of access control that provides security for large-scale distributed systems storing sensitive data. It introduces role-based access control and a privacy-respecting architecture to ensure that the personal information of system users is protected. A transformation layer preserves the data so that it can be used in aggregated form without allowing access to individual user records. Access control at this level helps reduce the risk of data breaches and ensures that personal data is adequately protected. Luc, N.Q., et al. [24] provided a method for creating the secure messaging service “CryptoMess,” which makes use of the Advanced Encryption Standard (AES) to safeguard communication content and the commutative supersingular isogeny Diffie–Hellman (CSIDH) algorithm for safe key exchange. The model also considers the dynamics of communication traffic load to maximize energy savings, and it uses variable-rate resource allocation methods that exploit the natural resources (spectrum and power) and services of third-party networks to achieve different QoS levels for various D2D applications over the network. In addition, it applies utility-based resource allocation, which evaluates different ways of allocating resources depending on network conditions and application requirements to choose the best one, and it includes an artificial-intelligence-based decision-making system: whenever information about user data is stored in the model, it can predict what will happen next, make decisions accordingly, and automatically change the resource allocation. Konno, T., et al. [25] discussed a privacy-aware user-watching system using 2D pose estimation. The system applies computer-vision-based tracking and recognition technology to capture key points of the user’s body, assigning a semantic interpretation to each position. This ensures that the user’s movements and behavior are tracked accurately while keeping the tracking anonymous and abiding by existing privacy regulations. It has been architected to deliver a secure, high-performance, scalable, and privacy-respecting system.
Zhang, K., et al. [26] discussed privacy-aware data-intensive computing on hybrid clouds (PDC-HC), a type of cloud computing that combines public and private clouds to increase data security and privacy. PDC-HC enables users to utilize various public and private cloud assets whenever necessary; with this technology, companies can keep information secured so that it is accessed only by those with permission. Køien, G. M., et al. [27] discussed security updates and incident handling for IoT devices from a privacy-aware perspective, a security approach that focuses on maintaining an up-to-date security status of the device, providing timely reporting and mitigation of incidents, and communicating any changes in security status at every level. On top of secure firmware updates, layers of authentication, encryption, and secure transmission protocols can be added, and a layered security architecture makes devices difficult to compromise. The privacy-aware approach is supported by incident response plans that ensure compliance with legal and industry guidelines while providing organizations with a tool for assessing, reacting to, and protecting themselves and their customers in the event of security incidents. This allows a secure environment while protecting consumer privacy and ensuring operational integrity. Eiza, M. H., et al. [28] discussed a secure and privacy-aware cloud-assisted video reporting service in 5G-enabled vehicular networks, a cloud-connected service that allows vehicles to send video footage of traffic-related incidents securely and privately. It offers end-to-end encryption, which ensures that only the data sender and receiver can decode the footage. The service can help facilitate quicker, more accurate decisions and event responses, enhancing driver safety, preventing injuries during on-site accident processing, and reducing traffic costs. Acs, G., et al. [29] studied privacy-aware caching in information-centric networking (ICN). It ensures that cached content can only be accessed by those authorized to receive it, while still being delivered efficiently. The caching system also supports a per-request privacy policy: anyone who wants access to cached content can only obtain it by making an appropriate query that meets the privacy criteria.
These targeted wire-routed protocols are vulnerable to data-driven eavesdropping that can reveal the location and identity of an individual. Table 1 shows the comprehensive analysis and Table 2 shows the SWOT analysis.
Table 1. Comprehensive Analysis
Authors | Network security | User privacy | Network scalability | Network usability | Threat detection rate |
|---|---|---|---|---|---|
Wu, H. et al. [22] | Very high | Moderate | Moderate | Low | Moderate |
Ni, Q. et al. [23] | Low | Very low | Very high | High | Very low |
Luc, N.Q. et al. [24] | High | High | High | Low | High |
Konno, T. et al. [25] | High | Low | Moderate | Very high | Moderate |
Zhang, K. et al. [26] | Low | Very low | Very high | High | Low |
Køien, G. M. et al. [27] | Moderate | High | Low | Very high | Very low |
Eiza, M. H. et al. [28] | Very high | Moderate | Very high | Low | High |
Acs, G. et al. [29] | High | Low | Moderate | High | Very high |
Table 2. SWOT analysis
Authors | Year | Model Used | Industry | Strength |
|---|---|---|---|---|
Wu, H. et al. [22] | 2019 | Privacy-aware task allocation and data aggregation | Fog-networks | It employs fog computing to enable distributed computing of data aggregation tasks |
Ni, Q. et al. [23] | 2009 | Privacy-aware role-based access control | Consumer electronics | The data access is authorized based on a user’s role |
Luc, N.Q. et al. [24] | 2024 | Falcon post-quantum digital signature technology | CryptoMess | This model presents a technique for developing a secure messaging service called “CryptoMess” |
Konno, T. et al. [25] | 2020 | 2D Pose estimation | Consumer electronics | It uses 2D Pose Estimation to detect and track a user’s movements without infringing on their privacy |
Zhang, K. et al. [26] | 2011 | Privacy-aware data intensive computing | Hybrid cloud networks | The hybrid system leverages the advantages of both clouds to provide a highly secure and efficient platform |
Køien, G. M. et al. [27] | 2016 | Privacy-aware approach | IoT devices | This approach ensures minimal privacy and security to protect user data and mitigate risk |
Eiza, M. H. et al. [28] | 2016 | Privacy-aware cloud-assisted video reporting | 5G vehicular networks | This service allows real-time video streaming from the vehicle to the cloud |
Acs, G. et al. [29] | 2017 | Privacy-aware caching | Information centric networks | It allows users to securely store their content in a distributed cache |
Research Gap
One current research gap for a security- and privacy-aware model that provides better performance than split learning in consumer electronics is the limited understanding of its efficiency and implementation details. Split learning is a distributed machine learning model involving one or more remote nodes, which implies that the data for training and inference is divided among devices, adding to the security and privacy features of the model. Yet its efficiency is still not well understood. Furthermore, its security and privacy limits cannot yet be estimated against other models when split learning is deployed on consumer electronics. In other words, it remains unclear whether the novel model is scalable across devices.
Scaling can mean a large number of devices trying to coordinate among themselves. One potential problem is that the more devices there are, the greater the management and synchronization complexity of the training process becomes. This can slow down the process and create choke points, making learning more challenging. To remedy this, split learning algorithms often utilize parallel processing and high-level data routing between devices. IoT data volume is another concern: managing many splits in split learning can be problematic due to the sheer size of IoT-generated datasets. The more data there is, the harder it becomes to handle and store safely. Classical split learning algorithms alleviate this problem by employing various compression and encryption techniques to reduce the amount of data transmitted back and forth while preserving its confidentiality. However, with many devices or datasets that exceed a certain size, potentially terabytes and beyond, split learning can face scalability issues. Nonetheless, these are tractable problems with organizational and technical solutions: thoughtful algorithm design and implementation can tackle the data-flow issue by scaling up with parallel processing, while keeping data secure and reducing privacy concerns for large amounts of measurement information.
Split learning is a new technique for consumer electronics, and its adoption is still at an early stage. Despite its promising prospects for boosting privacy and security in voice, image, and speech recognition, among other applications, widespread use in consumer electronics still seems some way off. One obstacle is the potential incompatibility that arises when trying to combine split learning with current systems or electronics. Since split learning inherently involves two distinct devices (one for data processing and one for model training), integrating it with the capabilities of existing systems within a single device can be difficult. However, as split learning becomes more refined and gains wider acknowledgment, efforts are underway to tackle these compatibility problems, opening the door for consumer electronics to deploy it.
The following issues were identified during the literature review:
Improper encryption: many consumer electronics have weak encryption routines, which may expose sensitive data to hackers and encourage cybercrime.
Malware and viruses: the more consumers depend on their electronic devices for personal information—whether conversations with friends, home addresses, photos of children at soccer practice, or bank account numbers from an e-mailed statement—the bigger a target these devices become for attacks on security and privacy.
Insecure data storage: users often store information in the cloud because their devices have little storage space; without the necessary security measures, this can result in a security breach.
Failures in user authentication: weak, easily guessed passwords, the absence of multi-factor authentication, and easily predicted security questions guarding access to personal information in consumer electronics.
Research Objectives
Focusing on better performance of the security- and privacy-aware model, the research objectives can be stated as follows: develop a more secure and private communication infrastructure for distributed deep learning tasks in consumer electronics. To tackle this issue, the model’s performance is to be improved by dividing computation tasks and data sets into pieces. This enables secure, private communication between consumer electronics such as smartphones and embedded devices connected to the cloud. A further objective is improved real-time performance and scalability of the model. Therefore, the research goal is to create a trust- and privacy-preserving communication model that can improve the performance of consumer electronics products.
MEASURING PARAMETERS
Consumer electronic devices and the internet are growing increasingly interconnected. This opens up various options that generate cost savings, including enhanced efficiency and convenience. However, it also increases risk, since these endpoints are not inherently secure, may be accessed by unauthorized users, and are subject to data theft. Security and privacy are what protect data from malicious activity in such risky situations. A new approach is split learning, a form of distributed deep learning that allows DL models to be trained at the edge without exposing raw user data.
Integrating these tools into consumer electronics may require additional resources from developers. In addition, split learning has hardware requirements, namely high computational power and memory capacity, that would impede running it on some devices. Given that split learning is still a relatively new concept, it could also create standardization and compatibility challenges with traditional hardware and software, another factor to weigh for any future implementation of such an approach in consumer electronics.
Data-at-rest: data stored in databases and files, as on any other storage device. To ensure security, SPAM treats the following data as confidential information: IP addresses, networks, and secret keys used for authentication (user/agency identifiers or APIs). It uses reasonable means of secure storage, including encryption and access controls, so that only assigned personnel have direct or derived access, and regular backups are made securely. Encryption converts data to code so that only authorized users can access it with the decryption key. Access controls limit the availability of the data set, protecting it so that unauthorized people cannot see or change it. Backups retain historical data that can be recovered when information loss or manipulation occurs.
Data-in-transit: similar to the previous category, but covering data while it is being transferred rather than only stored. In this scenario, SPAM implements SSL/TLS, a secure communication protocol that encrypts the data and ensures its authenticity to prevent spoofing or sniffing of information during transfer. The network is also continuously monitored using firewalls and intrusion detection systems so that any anomaly is caught here, too.
Data-in-use: data that is accessed or processed by users and applications. SPAM implements access controls and authentication to prevent unauthorized data access. This includes multifactor authentication, which requires users to provide at least two pieces of evidence to demonstrate who they are, providing an extra layer of security beyond simply entering a username and password. SPAM also provides granular access controls so that organizations can create and enforce particular user-based rules around data.
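To make the three protection states concrete, the following minimal Python sketch illustrates one way the at-rest protection described above could be realized with symmetric encryption. The key handling, file name, and use of the `cryptography` package are illustrative assumptions rather than part of SPAM itself.

```python
# Minimal sketch (assumption: the `cryptography` package is available); illustrates
# encrypting confidential data before it is written to storage.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                       # secret key, kept in a secure key store
cipher = Fernet(key)

record = b'{"user_id": 42, "ip": "10.0.0.7"}'     # confidential data treated by SPAM
token = cipher.encrypt(record)                    # data at rest: only ciphertext hits disk

with open("record.enc", "wb") as fh:
    fh.write(token)

# Data in use: decrypt only after access-control / MFA checks have passed.
plaintext = cipher.decrypt(token)
assert plaintext == record
```

Data in transit would additionally be wrapped in TLS rather than handled by application-level code alone.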
Methodology
Split learning is a new, privacy- and security-aware model for consumer electronics. This model enables consumer hardware to delegate parts of a specific task to third parties, such as cloud computing services. Data is shared with the cloud so that it can perform the computations necessary to complete the task without exposing sensitive information, which is why split learning is adopted. Split learning provides dynamic task placement, allowing a task to be shared with the cloud without moving the user’s data outside the user’s privacy and security boundary. Because split learning is a distributed machine learning technique, models are trained across many devices without leaking the raw data.
The method for implementing split learning consists of several steps. In order to ensure maximum security and privacy-awareness in split learning models, the following measures should be adopted.
Strict authentication and authorization. A dedicated authentication and authorization process should restrict access to the model parameters only to authorized users. This can be achieved using authentication tokens, biometric authentication, or classic cryptography techniques.
Model protection and obfuscation. The model should be protected and obfuscated using encryption, masking, and watermarking. Encryption protects the model from unauthorized access, while masking and watermarking guard against tampering and reverse-engineering.
Data retention and leakage control. Measures must be taken so that no data is lost or leaked while the learning task is shared across devices. Data must be transferred in encrypted form and stored in secure databases (a minimal sketch of protected parameter transfer follows this list), and individual access control measures must be implemented so that only authorized users can access the data.
Data anonymization. The data must be anonymized using confidentiality, randomness, and perturbation wherever possible. It will help minimize the data exposure for the models and better protect the user’s privacy.
Securing communication channels. The communication channels between the devices and the machines must be encrypted and secure so that no sensitive data can be stolen during the model-sharing process. Establishing secure communication channels allows the devices to trade parameters with each other without the requirement of mutual trust.
Proper model testing. Models trained through split learning should be adequately and rigorously tested to ensure they are secure and do not infringe on the user’s privacy. Testing must include additional security measures, such as user access control and authorization tokens.
Constant monitoring and auditing. The model and the process must be monitored and audited constantly to ensure they are safe and function correctly. Regular security audits must be performed to ensure that all security breaches or privacy violations are discovered and removed.
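As a concrete illustration of the encrypted-transfer and integrity measures listed above, the sketch below signs and encrypts a serialized set of model parameters before they leave the device. The key names and payload format are assumptions made for illustration, not part of any specific split learning implementation.

```python
# Sketch (assumed keys/payload): encrypt model parameters and attach an HMAC tag
# so the receiving node can verify integrity before loading them.
import hmac, hashlib, json
from cryptography.fernet import Fernet

enc_key = Fernet.generate_key()            # shared symmetric key (assumed pre-exchanged)
mac_key = b"assumed-shared-mac-key"

params = {"layer1.weight": [[0.1, -0.2], [0.3, 0.4]]}   # toy parameter set
payload = json.dumps(params).encode()

ciphertext = Fernet(enc_key).encrypt(payload)                     # confidentiality
tag = hmac.new(mac_key, ciphertext, hashlib.sha256).hexdigest()   # integrity / authenticity

# Receiver side: verify the tag before decrypting and applying the update.
assert hmac.compare_digest(tag, hmac.new(mac_key, ciphertext, hashlib.sha256).hexdigest())
restored = json.loads(Fernet(enc_key).decrypt(ciphertext))
```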
The user must choose which tasks to run in the cloud and which should be performed locally. Next, the user encrypts the data with an algorithm and uploads it to a cloud server; in this way, the data is divided into several parts and kept on the cloud separately. Once the data is stored securely, it must be decided which tasks can be offloaded onto the cloud. The user must then specify how the cloud should be configured: access control rules must be set up, privacy policies need configuration, and security practices must be assigned. Properly configured cloud settings help manage data privacy and security. Ultimately, the user needs to install and configure the software on both the client and cloud-server sides to ensure that data transfers and tasks are performed securely. Split learning is a valuable model for security- and privacy-aware consumer electronics. It protects user data and integrates it with a cloud computing system for faster task distribution, and it similarly allows users to configure the cloud servers to avoid theft and loss of their data. Split learning is thus an effective way to secure user data and information by following security and privacy norms, at some cost in accuracy.
Even with these measures, attackers remain a concern and may require the model to be further hardened against potential security issues. For instance, they may exploit software or system flaws (e.g., zero-day vulnerabilities) to access unauthorized information; other methods of attack include phishing, social engineering, and malware. Safeguards against these risks include encrypting data at rest and in transit, applying the latest patches to software systems for known vulnerabilities, and utilizing multi-factor authentication (MFA) so that users can only log into a system if they also have access to another of their devices. Additional measures for data protection include firewalls and intrusion detection systems to help identify intruders, regular security audits, and regular employee training on protecting corporate data so as to minimize the impact of human error. A multilayered security strategy with constant check-ups is necessary to address the risk to user data privacy.
Key Functions of Security and Privacy-Aware Model
With split learning, a neural network model is partitioned into two sections: the edge node and the server node. The edge node is the end-user device, which stores the basic model structure and performs most of the actual computation for the model, while the server node is maintained separately and performs minimal computation. That way, data privacy is preserved while the model is trained.
Create a secure communication channel: securely connect two or more disparate networks, the consumer (for example, a service buyer) and the producer (for example, a service provider). End-to-end encryption can be provided over these channels using TLS or IPSec.
Model-building with data masking: this part of the process may involve splitting the dataset into parts between contributors before they are combined, which ensures that the service provider never sees any sensitive data. Random masks and hash-based methods, among other algorithms, can be applied.
Data integrity check: The data should be checked and validated for its accuracy/completeness before sharing. One way to accomplish this is checksum (or algorithmic validation).
Implement access control: use role- or attribute-based access control mechanisms to share only the necessary data among parties. This ensures that the model and other data associated with it are accessed only by authorized users.
Deploy manageable data encryption: encrypt and store all files to protect them from unauthorized access. This can be done using different methods, such as AES, RSA, DSA, and ECC.
Secured updates: Updates to the model data and other associated resources should be installed securely. This is achievable through security protocol mechanisms, such as digitally signed software updates.
Privacy-aware model validation: this includes certifying that the model meets all privacy requirements, such as the removal of any personally identifiable data. This is achieved using techniques like k-anonymity and differential privacy.
The security and privacy benefits of split learning include the ability to train models securely across multiple nodes without transferring raw data over a shared network. By keeping the data on the edge nodes, user data remains secure and private. Because the proposed model is divided into two parts, there are fewer parameters to be managed on each side, which reduces the overall computational cost of training the model.
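The following minimal PyTorch sketch illustrates the edge/server partition described in this section. The layer sizes, optimizer settings, and in-process “transfer” of activations are illustrative assumptions; in a real deployment the cut-layer activations and their gradients would travel over the secure channel discussed above.

```python
import torch
import torch.nn as nn

# Edge-side and server-side halves of one model (layer sizes are assumptions).
client = nn.Sequential(nn.Linear(16, 32), nn.ReLU())   # runs on the end-user device
server = nn.Sequential(nn.Linear(32, 2))                # runs on the cloud/server node
opt_c = torch.optim.SGD(client.parameters(), lr=0.01)
opt_s = torch.optim.SGD(server.parameters(), lr=0.01)

x = torch.randn(8, 16)                 # raw data: never leaves the edge device
y = torch.randint(0, 2, (8,))
smashed = client(x)                    # cut-layer activations ("smashed" data)

server_in = smashed.detach().requires_grad_()            # only activations cross the boundary
loss = nn.functional.cross_entropy(server(server_in), y)
opt_s.zero_grad(); loss.backward(); opt_s.step()         # server-side update
opt_c.zero_grad(); smashed.backward(server_in.grad)      # cut-layer gradient returned to the edge
opt_c.step()
```

Only the cut-layer activations and their gradients cross the boundary; the raw input `x` never leaves the edge side.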
Harmonic Encryption and Decryption
Harmonic encryption is a form of symmetric encryption that uses a single key (or secret code) to encrypt and decrypt files and messages. It transforms data using mathematical calculations, such as replacing each character in a message with its corresponding numerical value. To encrypt a message with harmonic encryption, the sender converts the text to numerical values and then applies mathematical operations, such as modular arithmetic and multiplication, to map the numerical values to new numerical values. The harmonic encryption and decryption process is demonstrated in Fig. 4.
Fig. 4. [Images not available. See PDF.]
Harmonic encryption and decryption process.
Because the proposed model is split into two parts, it is possible to update and improve the model in a more granular fashion, making it possible to update parts of the model without disrupting the overall system. When the recipient receives the encrypted message, they can reverse the process by applying the same mathematical operations to the encrypted text, in the same order, to recover the original message. Decryption thus works by applying the inverse of the operations used in the encryption process: the receiver needs to know the secret key or code used to encrypt the message, and with this information they can reverse the operations and arrive at the original message. In the encryption process, all plain text is encrypted with the private key and converted into cipher text. The harmonic encryption is performed securely and the result is sent to the evaluation process, where the evaluation server evaluates the encrypted texts and provides a public key for them; finally, decryption is performed. Consider a harmonic encryption process with plain texts t1, t2, …, tn in any valid information range. The private key (kprivate) and public key (kpublic) are generated in a secure combination, and the evaluation server (ES) verifies the key values. The cipher text generation is then given by Eq. (1):
1
2
3
where, Ct is the cipher text, Enct is the encrypted text, ES is the evaluation server, Dect is the decrypted text and t is the plain text. Figure 5 shows the sequential key generation and authorization.
Fig. 5. [Images not available. See PDF.]
Sequential key generation (A) encryption; (B) authorization.
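To illustrate the character-to-number mapping and modular operations described above, here is a toy Python sketch of a harmonic-style symmetric transform. The prime modulus and key value are arbitrary assumptions, and this toy scheme is for illustration only, not a secure cipher.

```python
# Toy illustration (not secure): map each byte to a number, multiply by a secret key
# modulo a prime, and invert the operation with the modular inverse of the key.
P = 257                      # assumed prime modulus (covers all byte values 0..255)
K_PRIVATE = 91               # assumed secret key, must be nonzero mod P

def encrypt(plaintext: str) -> list[int]:
    return [(b * K_PRIVATE) % P for b in plaintext.encode("utf-8")]

def decrypt(cipher: list[int]) -> str:
    k_inv = pow(K_PRIVATE, -1, P)          # modular inverse reverses the multiplication
    return bytes((c * k_inv) % P for c in cipher).decode("utf-8")

cipher_text = encrypt("consumer electronics")
assert decrypt(cipher_text) == "consumer electronics"
```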
Consider a network in which the number of privacy features increases and each user has different attributed devices with prior knowledge. The privacy-feature changes in the database are denoted dab, i.e., the change of encrypted information in the bth privacy feature of the ath user profile. ua indicates the owner of the ath profile: if ua = 1, the user is trustworthy, and if ua = 0, the user is vulnerable. The threshold is then computed as:
4
5
The weight computation of the network is performed as in Eq. (6):
6
The user activity at a certain time interval t is computed with the help of a hidden Markov model (HMM). The probability conditions of the HMM are the following:
7
8
The HMM transition matrix is computed as follows:
9
The smart network environment has Qt encryptions at different time intervals, where t = 0, 1, 2, …, T − 1. The elements of the HMM transition matrix are therefore modified as follows:
10
where Rab denotes the number of transitions from Qt to Qt + 1. The privacy observation probability matrix is then constructed over ‘n’ hidden states with observations (J) in the network, as follows:
11
where Qh(Jg) is the probability of observing privacy symbol Jg from state Qh, as represented in Eq. (12):
12
From Eq. (12), the privacy issues are identified, and user activity and secure connection management are handled by the forward-backward (FB) HMM decoding algorithm. The forward recursion is shown in Eq. (13):
13
The backward recursion is shown in Eq. (14):
14
where t = 0, 1, 2, …, T − 1. The probability of secure user activity occurring, given the observed privacy states, can be computed as in Eq. (15):
15
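As a concrete illustration of the forward-backward decoding in Eqs. (13)–(15), the following numpy sketch runs the two recursions on a toy two-state (trusted/vulnerable) model. The transition, observation, and initial probabilities are made-up values for illustration only.

```python
import numpy as np

A = np.array([[0.9, 0.1],      # transition matrix between hidden states
              [0.2, 0.8]])     # state 0 = trusted user, state 1 = vulnerable user
B = np.array([[0.7, 0.3],      # observation probabilities Q_h(J_g)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])      # initial state distribution
obs = [0, 1, 1, 0]             # observed privacy symbols J_g over time

T, n = len(obs), len(pi)
alpha = np.zeros((T, n)); beta = np.ones((T, n))
alpha[0] = pi * B[:, obs[0]]
for t in range(1, T):                         # forward recursion, cf. Eq. (13)
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
for t in range(T - 2, -1, -1):                # backward recursion, cf. Eq. (14)
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

gamma = alpha * beta                          # posterior state probabilities, cf. Eq. (15)
gamma /= gamma.sum(axis=1, keepdims=True)
```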
Split learning can help improve the security and privacy of consumer electronics by allowing users to securely but quickly train deep learning models on their own devices.
PROPOSED MODEL
As consumer electronics proliferated, it became increasingly important to pay attention to security and, in particular, the protection of privacy. This is particularly the case in applications such as streaming services, which some companies can use to access crucial user data, sometimes without permission. On the consumer electronics side, split learning has become a key to robust and provable security and privacy for users. Split learning is a privacy-preserving deep learning method that allows two parties (for instance, the provider and the user) to train a single shared model, with each party training on local data while masking its samples. In our system, user data is pinned in the service provider network, and the end-user device network performs the computation. This way, the user information is kept in the service provider network, while the networking-related device handling is performed within another domain. The block diagram of the proposed model is shown in Fig. 6.
Fig. 6. [Images not available. See PDF.]
Proposed block diagram.
While the model is being developed, the two parts are kept separate so that no raw data is passed between networks, enabling privacy-preserving computation. Another advantage of this method is that user data is only exposed to organizations with appropriate read/write access security mechanisms. Thanks to these advantages over other techniques, split learning is seeing increasing uptake in consumer electronics. For consumer applications, user data and computation are separated into two networks, so companies cannot access users’ data. This also makes split learning an excellent option for consumer applications, such as streaming services, to leverage user data without handing it to malicious organizations. The system provides low latency and good scalability, since the user device can continue processing data while it resides on the service provider network. This lets a consumer electronics application run faster than it would on a single-network system. Split learning is also a security- and privacy-aware approach by design: its purpose is to serve consumer electronics where user data needs to be safely stored only on the device, without allowing any apps to monetize it. It is an excellent fit for streaming services or any other setting where user data must be kept safe from untrusted parties inside an organization. Split learning builds user confidence in applications and ensures that data is handled securely and with the proper care, enabling a conducive ecosystem for consumer electronics products, the indispensable foundation of industry success.
Preprocessing
The security- and privacy-aware model helps reduce the security and privacy risks of consumer electronics and protects user data at every stage. A federated, split neural network is a divide-and-conquer approach to deep learning that breaks a neural network into parts on two computing units: one in the cloud acting as a computation server and the other at a local terminal. During preprocessing, training occurs within the local terminal, while the neural network model is stored in the cloud. The security- and privacy-aware preprocessing for training the neural network model is fine-tuned to identify the tasks for which specific functions perform best. At a high level, it ensures that sensitive user data is not transmitted to the cloud server in the first place, averting any chance of the information ending up in malicious hands. Storing the data locally allows a fully decentralized architecture in which users own their data and decide what they share and with whom. This preprocessing step also means that model optimization can account for practical constraints, such as the size of the available device or the computing budget, to provide the user with the maximum possible accuracy and performance from their device without draining much battery. End users can trust this form of preprocessing because it ensures that their sensitive data does not need to be shared with the cloud where training happens, and it increases the transparency and accountability of model quality by granting visibility into everything the model was trained on.
15
where A indicates the raw sensor signal, B indicates the filtered signal, and x indicates the average sample in the current state. The preprocessing stage of the consumer electronics security- and privacy-aware model thus provides critical security and privacy measures while optimizing model performance. Through these measures, users can trust that their data is safe and secure, and they can be confident in the accuracy of the results.
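Since the body of Eq. (15) is not reproduced here, the following sketch simply assumes a moving-average smoothing filter consistent with the description of A, B, and x above; the window length is an arbitrary choice.

```python
import numpy as np

def preprocess(raw_signal: np.ndarray, window: int = 5) -> np.ndarray:
    """Hypothetical on-device filter: B is a moving average of the raw signal A."""
    kernel = np.ones(window) / window           # each output sample is the local average x
    return np.convolve(raw_signal, kernel, mode="same")

A = np.random.default_rng(0).normal(size=100)   # raw sensor signal (never leaves the device)
B = preprocess(A)                                # filtered signal passed to feature extraction
```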
Feature Extraction
Feature extraction is essential in applying the split learning technique to consumer electronics. It involves analyzing the data from the collected images, texts, and videos on the device and extracting the necessary features. This step is essential for securing the data transmitted from the device and for facilitating machine learning. Feature extraction requires extracting the data records that represent unique characteristics of the data at the device level. Multiple features are processed over different time series. The feature function is denoted f, and the initial feature is calculated as in Eq. (16):
16
The next feature is computed as in Eq. (17):
17
Hence, the final feature is generated as in Eq. (18):
18
It is essential to identify the relationships between the data records, and between the records and the class patterns predicted by the learning algorithm. This requires understanding the relationships between entire sets of features, which may lead to improved machine learning accuracy. With the help of a normalization process, the values for different users can be mapped onto a common linear interval, as shown in Eq. (19):
anorm = (a − amin) / (amax − amin)    (19)
where amin and amax are the minimum and maximum durations. In the feature extraction stage, it is also essential to consider the technique’s accuracy, speed, and memory requirements: data must be processed quickly and efficiently, and privacy and security remain paramount. In addition, one must identify the typical trends associated with instances and ingest points and represent their use effectively within the processing system. Applying the developed model to consumer electronics therefore demands careful feature extraction. The process involves extracting the top features from the data to secure the transmitted device data while increasing processing accuracy, and its effectiveness depends on accuracy and speed while keeping memory needs low for the safe transmission and processing of data.
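The sketch below pairs a simple, assumed feature function f (basic window statistics) with the min-max normalization onto a linear interval described around Eq. (19); the specific features are illustrative, not the paper’s exact definitions.

```python
import numpy as np

def extract_features(window: np.ndarray) -> np.ndarray:
    """Assumed feature function f: summary statistics of one time-series window."""
    return np.array([window.mean(), window.std(), window.min(), window.max()])

def min_max_normalize(a: np.ndarray, a_min: float, a_max: float) -> np.ndarray:
    """Map values onto the linear interval [0, 1] (cf. Eq. (19))."""
    return (a - a_min) / (a_max - a_min)

B = np.random.default_rng(1).normal(size=100)       # filtered signal from preprocessing
features = extract_features(B)
normalized = min_max_normalize(features, features.min(), features.max())
```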
Decision Making
Firstly, organizations should define the operational framework that conceptually manages data flows and data security within this system. This includes safeguarding the data distribution models, securing communication systems, and applying authentication, authorization, and accounting techniques. Organizations also need to develop a process for handling SL-backed user interactions, i.e., all authentication and authorization practices ensuring that only authentic users can access the system before ultimately using it.
C = √(Σᵢ (aᵢ − bᵢ)²)    (20)
where a and b are comparable sensor readings observed by the network and C is the Euclidean distance between them. This method provides businesses many benefits, such as better data security, higher scalability, and efficient workload performance. SL is a distributed computational model that partitions data segments across multiple nodes; it is deployed for sensitive applications, where data is only exposed to central processing on the device after all of this pre-processing. A hierarchical data model also provides greater privacy, as the information is saved and processed at several nodes, adding an additional level of security. Understanding how these decision-making operations work and how they relate to security and privacy in a split learning system is essential. Organizations should develop an operational policy with the required security and confidentiality to secure SL and ensure a supportive safety-awareness framework. This policy must also detail the governing principles around how SL is utilized, the security and privacy requirements that must be designed into the system, and guidance on data in use and in storage under these techniques.
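Returning to Eq. (20), the following short sketch compares two sensor readings by their Euclidean distance and makes a simple accept/flag decision; the threshold value is an assumed parameter, not one prescribed by the model.

```python
import numpy as np

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    # C = sqrt(sum_i (a_i - b_i)^2), cf. Eq. (20)
    return float(np.sqrt(np.sum((a - b) ** 2)))

THRESHOLD = 1.5                     # assumed decision threshold
a = np.array([0.2, 0.9, 0.4])       # current sensor reading
b = np.array([0.3, 1.0, 0.5])       # reference reading observed by the network
decision = "accept" if euclidean_distance(a, b) < THRESHOLD else "flag for review"
```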
Further, organizations have to identify the risks relevant to SL and define a plan for risk mitigation, which can be done in various ways, such as using encryption, digital signatures, or other safety procedures. Integrating a self-sovereign, privacy-preserving SL model into consumer electronics can improve the security and scalability of data processing routines. However, these benefits only materialize if an organization can implement and operate such a system competently while remaining aware of the security and privacy issues and taking appropriate steps to mitigate those risks; otherwise, the protections that have been configured are likely to be weakened or disabled in practice.
Proposed Algorithm
These algorithms are intelligent enough to discover potential abuses in applications and systems while implementing security and privacy awareness. Anomaly algorithms work by combining data-mining detection methods, including anomaly detection and behavioral analytics, with machine learning and rule-based algorithms to find the signal buried in noise that indicates a threat. Additionally, these algorithms can consider data security and privacy during their analysis; this way, they can determine which data is accessible and to what extent it should be encrypted remotely. They can also use data masking to protect potentially sensitive fields and create audit trails that track who is using the data and for what business purpose. Furthermore, the infrastructure can enforce policies like access control and authentication to ensure that only authorized consumers can view data. The measures mentioned above ensure that the data is secure and private and, at the same time, help flag possible threats.
Algorithm 1: Privacy-aware algorithm (threat detection)
Inlet Entry:
SM*N; // M is the number of samples and N is the number of features
GEN_fs = Nout; // number of fake samples
Nbin = ( ); // generation of bin numbers
Tbin = ( ); // top bin generation for fake samples
Errornew = ( ); // computation of new error rate
For each inlet do
  V = inlet entry;
  VSamples = GET Samples (V);
  attack samples = GET Samples (~V);
  C ( ) = CG (N, Nout, Tbin);
  For each fake samples ∈ attack samples do
    For each feature samples ∈ attack samples do
      [f, e] = Hgen( );
      [~, i] = Sbin (freq);
      i (Tbin+1:END) = ( );
      b = edges (i (Cbin (fi, fake Samples)));
      fake Samples = random (b, b+1);
    end for
  end for
  V samples = ADD samples (fake samples);
  D = Compute DM (Tsamples);
  Error_Rate = Compute ER (D);
end for
new_ER = mean (VER);
return new_ER
In this algorithm, the Inlet Entry routine creates pseudo-samples out of existing data in order to stress-test the machine learning system. SM*N denotes the input of M samples with N features, and the GEN_fs variable sets how many fake samples will be generated. The routine first defines a set of bin numbers, Nbin, describing where feature values will be inserted into counterfeit samples, followed by a list of top bin numbers, Tbin, generated only for the fake samples. It then initializes Errornew, which stores the latest error rate. The next step is to loop through each inlet (data source) and take the corresponding slice of samples. The dataset is divided into VSamples, which hold the genuine data, and attack samples sourced from adversary-collected background traffic. Inside the loop for each inlet, the routine calls Hgen to build a histogram of each feature and then calls Sbin to sort the bins and select the top bins for each feature of the counterfeit samples. Feature values for the fake samples are drawn randomly from within the edges of the selected bins. This is done for each attack sample, yielding a set of fake samples whose features have been randomly “attacked.” After that, the routine appends the fake samples to the genuine data and computes a distance matrix (DM) on the combined data, from which it computes and stores the error rate (ER). The loop is repeated for every inlet, producing several error rates.
Finally, the last line of the routine computes new_ER as the mean of the per-inlet error rates; this is the updated error rate after the pseudo-samples have been added, and it is returned to the caller. In summary, the Inlet Entry routine creates false samples, randomly perturbs their features, and stores them alongside the genuine data, which enables the machine learning system to be evaluated and tuned toward a lower error rate. The process loops through all inlets and can be repeated multiple times to improve the system’s error rate continuously.
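The following numpy sketch mirrors the bin-based fake-sample generation and error-rate estimate described above; the bin counts, the number of top bins, and the 1-NN definition of the error rate are assumptions made for illustration, since the pseudocode does not fix them.

```python
import numpy as np

def generate_fake_samples(X, n_fake=20, n_bins=10, top_bins=3, rng=None):
    """Bin-based fake-sample generator (illustrative assumptions throughout)."""
    rng = rng or np.random.default_rng(0)
    fakes = np.empty((n_fake, X.shape[1]))
    for j in range(X.shape[1]):                      # per-feature histogram (Hgen)
        freq, edges = np.histogram(X[:, j], bins=n_bins)
        top = np.argsort(freq)[::-1][:top_bins]      # keep only the most populated bins (Tbin)
        chosen = rng.choice(top, size=n_fake)        # pick one of the top bins per fake sample
        fakes[:, j] = rng.uniform(edges[chosen], edges[chosen + 1])  # random value inside the bin
    return fakes

rng = np.random.default_rng(1)
X_real = rng.normal(size=(100, 4))                   # genuine samples for one inlet
X_fake = generate_fake_samples(X_real, rng=rng)

# Illustrative error rate: 1-nearest-neighbour misclassification over the combined data.
pooled = np.vstack([X_real, X_fake])
labels = np.array([0] * len(X_real) + [1] * len(X_fake))
D = np.linalg.norm(pooled[:, None, :] - pooled[None, :, :], axis=2)  # distance matrix (DM)
np.fill_diagonal(D, np.inf)
new_ER = (labels != labels[D.argmin(axis=1)]).mean()
```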
A. Fake user detection
The security- and privacy-aware algorithm can remove dummy or fake user accounts by applying the above methods. This involves looking at the actual content a user is posting, how often they are providing that content, and with which accounts they seem to have a connection, as well as any other information the service provider has that suggests these people are operating online under false identities. At the same time, based on user information and data, it can find specific behavioral patterns, such as sending many spam messages or trying to take over the network through suspicious connections.
B. Authorized user detection
This security- and privacy-conscious algorithm authenticates users by verifying their identity using two-factor authentication, biometrics, or device authorization. It then checks whether the user has the right to access the resources they are requesting. After users authenticate, the algorithm encrypts their data with a secret key and uses access control lists or other security practices to ensure that only authorized users can access that data. Finally, it logs all of these activities to detect malicious activity and to notify administrators of unauthorized access.
C. Detection of network attacks
For networks, detection can be performed by security- and privacy-aware algorithms that monitor network traffic and analyze it for suspicious activities or known attack signatures; system log files can also be audited to determine whether an attempt has been made. The algorithm can additionally scan the system for malicious activity by running vulnerability tests against known vulnerabilities and common attack vectors. It can use machine learning to identify anomalous behaviors, possible malicious activities, and actors within the analyzed data, and it can continuously inspect system and network logs for malicious activity, promptly alerting the security team should anything significant arise.
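One possible realization of the machine-learning-based anomaly detection mentioned above is an isolation forest over simple traffic features; the feature choice and contamination rate below are assumptions, and scikit-learn is used purely for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Assumed traffic features per flow: [packets/s, mean packet size, distinct ports]
normal_traffic = rng.normal(loc=[50, 500, 3], scale=[10, 80, 1], size=(500, 3))
suspicious = np.array([[400, 60, 45]])            # e.g., a port-scan-like burst

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)
print(detector.predict(suspicious))               # -1 would indicate an anomalous flow
```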
The security- and privacy-aware model is highly secure and privacy-preserving because security concerns are addressed directly in its construction, and by design it considers both security and privacy concerns. Firstly, it includes various security measures, such as encryption, access controls, and firewalls, to keep unauthorized entities away from secret data. The model also adheres to a privacy-by-design philosophy, addressing privacy at each stage of development instead of treating it as an afterthought, which ensures that data privacy is not compromised at any point. It is also transparent and privacy-friendly: by design, it offers extensive data minimization features and gives users control over their annotated datasets. Finally, the model’s multi-dimensional take on security and privacy makes it highly resilient.
The security- and privacy-aware algorithm is, as its name suggests, a distributed algorithm that applies both security and privacy techniques to achieve the best achievable performance in the network. It does this by weighing the cost of security services against their effect on overall efficiency. The algorithm observes network usage, security threats, and user activity to assign resources to each user and every system accessing the network, and it then modifies the resource allocation per user or system according to the evaluated risk. The algorithm closely monitors network usage and guarantees that network performance is maximized by optimally allocating resources to each user or system, ensuring that resources are well utilized while complying with security and privacy requirements.
RESULTS AND DISCUSSION
The proposed security- and privacy-aware model (SPAM) is compared with existing methods such as privacy-aware fine-grained access control (PAFAC), privacy-aware information sharing (PAIS), the privacy-aware detection framework (PADF), and similar techniques such as privacy-aware federated learning (PAFL). A network simulator (NS-2) is used to simulate the scenario, and the results are explained as follows.
Computation of False Discovery Rate (FDR)
The privacy-aware algorithm can evaluate security- and privacy-aware models for machine-learning-based algorithmic optimization of network attack detection in heterogeneous industrial Internet of Things networks, with the goal of integrating these algorithms with security operations center subsystems [30]. Using a chosen cutoff point, the privacy-aware procedure rejects hypotheses based on the resulting p-values: the hypotheses are ordered by increasing p-value, and at each step a hypothesis’s p-value is divided by its rank in the ordered list and compared with the FDR level to decide whether the hypothesis is rejected. The FDR is the theoretical fraction of false positives among all hypotheses rejected. A consumer electronics company could thus precisely compute its FDR by using the privacy-aware algorithm for the security- and privacy-aware model over split learning. The FDR is computed with the help of Eq. (21):
FDR = FP / (FP + TP)    (21)
Table 3 shows the comparison of FDR between existing and proposed models.
Table 3. Comparison of FDR
No. of inputs | PAFAC (TR) | PAFAC (TS) | PAIS (TR) | PAIS (TS) | PADF (TR) | PADF (TS) | PAFL (TR) | PAFL (TS) | SPAM (TR) | SPAM (TS) |
|---|---|---|---|---|---|---|---|---|---|---|
100 | 0.380 | 0.546 | 0.493 | 0.619 | 0.642 | 0.534 | 0.701 | 0.730 | 0.822 | 0.946 |
200 | 0.394 | 0.526 | 0.497 | 0.639 | 0.629 | 0.540 | 0.713 | 0.723 | 0.836 | 0.936 |
300 | 0.407 | 0.506 | 0.502 | 0.658 | 0.616 | 0.546 | 0.725 | 0.715 | 0.851 | 0.925 |
400 | 0.420 | 0.486 | 0.506 | 0.678 | 0.602 | 0.552 | 0.738 | 0.707 | 0.865 | 0.915 |
500 | 0.434 | 0.465 | 0.511 | 0.697 | 0.589 | 0.557 | 0.750 | 0.699 | 0.879 | 0.904 |
600 | 0.447 | 0.445 | 0.515 | 0.717 | 0.576 | 0.563 | 0.762 | 0.691 | 0.895 | 0.895 |
700 | 0.460 | 0.425 | 0.520 | 0.737 | 0.563 | 0.569 | 0.775 | 0.684 | 0.909 | 0.884 |
800 | 0.474 | 0.405 | 0.524 | 0.756 | 0.549 | 0.575 | 0.787 | 0.676 | 0.923 | 0.875 |
900 | 0.487 | 0.385 | 0.529 | 0.776 | 0.536 | 0.580 | 0.799 | 0.668 | 0.938 | 0.865 |
1000 | 0.501 | 0.365 | 0.533 | 0.796 | 0.523 | 0.586 | 0.812 | 0.660 | 0.952 | 0.854 |
The false discovery rate (FDR) for a security and privacy-aware model over split learning in consumer electronics can be computed by first determining the number of false positives from the model. This could be done by running the model on independent training and validation datasets and recording any false positives. The FDR would then be calculated by taking the total number of false positives and dividing it by the total number of positives identified by the model (correct and false positives). The resulting FDR would then be the proportion of false positives amongst all the positives identified by the model.
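A minimal sketch of this computation follows, using assumed binary label and prediction arrays; it simply implements FP / (FP + TP) as described above.

```python
import numpy as np

def false_discovery_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """FDR = FP / (FP + TP): the share of false positives among all flagged positives."""
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tp = np.sum((y_pred == 1) & (y_true == 1))
    return float(fp) / (fp + tp) if (fp + tp) > 0 else 0.0

y_true = np.array([0, 1, 1, 0, 1, 0, 0, 1])   # assumed ground-truth labels (validation set)
y_pred = np.array([0, 1, 0, 1, 1, 0, 1, 1])   # assumed model predictions
print(false_discovery_rate(y_true, y_pred))    # 2 FP out of 5 flagged positives -> 0.4
```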
Figure 7 shows the comparison of the false discovery rate. At a critical authentication point with the training set, the existing PAFAC reached 0.447, PAIS reached 0.515, PADF obtained 0.576, and PAFL obtained 0.762 FDR; the proposed SPAM reached 0.895. In the testing set, the existing PAFAC reached 0.445, PAIS reached 0.717, PADF obtained 0.563, and PAFL obtained 0.691 FDR; the proposed SPAM again reached 0.895.
Fig. 7. [Images not available. See PDF.]
False discovery rate.
This shows that the SPAM approach offers improved FDR performance, as indicated by the graph. These performance improvements can be explained by the adaptive modulation technique in SPAM, which autonomously modifies parameters of the authentication system in real time according to the data. This improves the discrimination between true and false positives, leading to a lower FDR. Critical-authentication-point testing with the training and test sets yields results in which the SPAM method produces superior FDR versus all other techniques; adaptive modulation accounts for this and provides a more accurate authentication system. This work has significant implications for designing new and reliable authentication techniques.
False discovery rate (FDR) is a statistical concept often employed in data analysis to examine the validity of an introduced method, here spam detection techniques. The FDR is the fraction of false positives (legitimate emails incorrectly flagged as spam) among all positive results (emails flagged as spam). This allows a quantitative assessment of the performance of spam detection methods, since lower FDR values are achieved when more of the flagged emails are actual spam. The FDR can evaluate the effectiveness of spam detection techniques without putting user privacy at risk by using a distributed approach that does not share any sensitive data, similar to split learning.
Computation of Prevalence Threshold (Pth)
The prevalence threshold for the security and privacy-aware model over split learning in consumer electronics should be set according to typical risk factors and current, up-to-date information on the prevailing threat severity level. Deciding on the right prevalence threshold requires a methodological foundation that accounts for privacy laws and regulations, user opt-in preferences, the overall technical complexity of the system, and the risk tolerance of data owners and users. It is computed with Eq. (22):
$$P_{\mathrm{th}} = \frac{\sqrt{\mathrm{TPR} \cdot \mathrm{FPR}} - \mathrm{FPR}}{\mathrm{TPR} - \mathrm{FPR}}, \quad \mathrm{FPR} = 1 - \mathrm{TNR} \tag{22}$$
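A minimal sketch of Eq. (22), assuming the standard prevalence-threshold definition in terms of the true-positive rate (sensitivity) and the false-positive rate; the function name and the example sensitivity/specificity values are illustrative, not taken from the paper:

```python
import math

def prevalence_threshold(tpr, tnr):
    """Pth = (sqrt(TPR * FPR) - FPR) / (TPR - FPR), with FPR = 1 - TNR (Eq. (22))."""
    fpr = 1.0 - tnr
    return (math.sqrt(tpr * fpr) - fpr) / (tpr - fpr)

# Example: sensitivity 0.90, specificity 0.80 -> Pth ~ 0.32
print(round(prevalence_threshold(0.90, 0.80), 3))
```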
Table 4 shows the comparison of prevalence threshold between existing and proposed models.
Table 4. Comparison of prevalence threshold
No. of inputs | PAFAC (TR) | PAFAC (TS) | PAIS (TR) | PAIS (TS) | PADF (TR) | PADF (TS) | PAFL (TR) | PAFL (TS) | SPAM (TR) | SPAM (TS)
---|---|---|---|---|---|---|---|---|---|---
100 | 0.492 | 0.706 | 0.638 | 0.660 | 0.830 | 0.603 | 0.687 | 0.716 | 0.855 | 0.952 |
200 | 0.510 | 0.680 | 0.643 | 0.681 | 0.813 | 0.610 | 0.699 | 0.708 | 0.870 | 0.942 |
300 | 0.527 | 0.654 | 0.650 | 0.702 | 0.796 | 0.616 | 0.711 | 0.701 | 0.885 | 0.932 |
400 | 0.544 | 0.629 | 0.655 | 0.723 | 0.779 | 0.623 | 0.723 | 0.693 | 0.900 | 0.921 |
500 | 0.561 | 0.603 | 0.661 | 0.743 | 0.762 | 0.629 | 0.735 | 0.685 | 0.915 | 0.911 |
600 | 0.579 | 0.576 | 0.667 | 0.765 | 0.745 | 0.636 | 0.748 | 0.678 | 0.930 | 0.902 |
700 | 0.596 | 0.550 | 0.673 | 0.786 | 0.728 | 0.642 | 0.760 | 0.670 | 0.946 | 0.891 |
800 | 0.613 | 0.524 | 0.678 | 0.806 | 0.711 | 0.649 | 0.771 | 0.662 | 0.960 | 0.881 |
900 | 0.631 | 0.499 | 0.685 | 0.827 | 0.693 | 0.655 | 0.784 | 0.655 | 0.976 | 0.870 |
1000 | 0.647 | 0.472 | 0.690 | 0.849 | 0.676 | 0.662 | 0.796 | 0.647 | 0.991 | 0.860 |
For the techniques to be deployed, the system should respond quickly to threats, and data processing and storage must be designed so that unauthorized internal or external transactions are not allowed, for which information hiding plays a critical role. Secondly, the cost and scalability implications of the chosen prevalence threshold should be determined. In conclusion, the appropriate prevalence threshold for a detection model should be determined through a holistic assessment of security requirements and risk profiles, considering the impacts outlined in this work.
Figure 8 shows the comparison of the prevalence threshold. At the critical authentication point with the training set, the existing PAFAC reached 0.579, PAIS reached 0.667, PADF obtained 0.745, and PAFL obtained 0.748 prevalence threshold, while the proposed SPAM reached 0.930. In the testing set, the existing PAFAC reached 0.576, PAIS reached 0.765, PADF obtained 0.636, and PAFL obtained 0.678, while the proposed SPAM reached 0.902.
Fig. 8. [Images not available. See PDF.]
Prevalence threshold.
Prevalence threshold is a metric for evaluating the performance of security and privacy-preserving models. It is the minimum proportion of users that must use a given security or privacy functionality for it to be effective; in other words, it is the proportion of users required to comply with proper security and privacy protocols for the overall system protection constraint to hold. It considers the security and privacy of the entire system, taking into account interactions within the system (interconnection dependencies) and the potential damage a noncompliant entity may inflict on it. A higher prevalence threshold indicates a more robust, privacy-conscious security model.
Computation of Critical Success Index (CSI)
The critical success index (CSI) is a measure of the effectiveness of an organization’s security and privacy efforts. It is calculated by dividing the number of successful security and privacy measures by the total number of security and privacy measures taken. For split learning in consumer electronics, the CSI can be computed with Eq. (23):
$$\mathrm{CSI} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP} + \mathrm{FN}} \tag{23}$$
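A hedged sketch of Eq. (23), assuming the usual critical-success-index form TP/(TP + FP + FN); the counts below are illustrative, not taken from the experiments:

```python
def critical_success_index(tp, fp, fn):
    """CSI = TP / (TP + FP + FN): successful detections over all events
    that were detected or should have been detected (Eq. (23))."""
    denom = tp + fp + fn
    return tp / denom if denom > 0 else 0.0

# Example: 80 correct detections, 10 false alarms, 10 misses -> CSI = 0.8
print(critical_success_index(80, 10, 10))
```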
Table 5 demonstrates the comparison of critical success index between the proposed and existing models.
Table 5. Comparison of CSI
No. of inputs | PAFAC (TR) | PAFAC (TS) | PAIS (TR) | PAIS (TS) | PADF (TR) | PADF (TS) | PAFL (TR) | PAFL (TS) | SPAM (TR) | SPAM (TS)
---|---|---|---|---|---|---|---|---|---|---
100 | 0.556 | 0.798 | 0.582 | 0.538 | 0.818 | 0.519 | 0.711 | 0.563 | 0.982 | 0.867 |
200 | 0.576 | 0.768 | 0.587 | 0.555 | 0.802 | 0.524 | 0.697 | 0.569 | 0.963 | 0.875 |
300 | 0.596 | 0.739 | 0.593 | 0.573 | 0.785 | 0.530 | 0.682 | 0.575 | 0.942 | 0.885 |
400 | 0.614 | 0.710 | 0.598 | 0.590 | 0.768 | 0.536 | 0.667 | 0.581 | 0.922 | 0.895 |
500 | 0.634 | 0.680 | 0.603 | 0.606 | 0.751 | 0.541 | 0.652 | 0.588 | 0.901 | 0.904 |
600 | 0.654 | 0.651 | 0.609 | 0.624 | 0.735 | 0.546 | 0.638 | 0.593 | 0.882 | 0.913 |
700 | 0.672 | 0.622 | 0.614 | 0.641 | 0.718 | 0.552 | 0.623 | 0.599 | 0.861 | 0.922 |
800 | 0.692 | 0.592 | 0.619 | 0.658 | 0.701 | 0.558 | 0.609 | 0.605 | 0.841 | 0.932 |
900 | 0.712 | 0.563 | 0.625 | 0.674 | 0.684 | 0.564 | 0.594 | 0.612 | 0.820 | 0.941 |
1000 | 0.731 | 0.534 | 0.630 | 0.692 | 0.667 | 0.569 | 0.579 | 0.617 | 0.800 | 0.950 |
Figure 9 shows the comparison of the critical success index. At the critical authentication point with the training set, the existing PAFAC reached 0.654, PAIS reached 0.609, PADF obtained 0.735, and PAFL obtained 0.638 CSI, while the proposed SPAM reached 0.882. In the testing set, the existing PAFAC reached 0.651, PAIS reached 0.624, PADF obtained 0.546, and PAFL obtained 0.593 CSI, while the proposed SPAM reached 0.913.
Fig. 9. [Images not available. See PDF.]
Critical success index.
The CSI reflects multiple components of the model in a single number that represents how SPAM performs overall with regard to security and privacy. It is calculated by comparing the outcomes expected from SPAM with the outcomes actually produced by SPAM, giving an all-round view of how well the model balances privacy and functionality. Further, the CSI can help identify nascent vulnerabilities and weaknesses in SPAM, which is essential for a holistic evaluation of model efficacy.
Computation of Matthews Correlation Coefficient (MCC)
The Matthews correlation coefficient (MCC) measures the quality of binary classifications in statistics and can be used to evaluate the binary classification accuracy of the security and privacy-aware model over split learning on consumer hardware. The MCC also serves as a surrogate for how well a security and privacy-aware model maintains its integrity by correctly predicting classification labels on an unseen dataset. In this consumer electronics setting, the performance of a split learning model is measured with the MCC by counting the true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) for that particular model and then applying Eq. (24):
$$\mathrm{MCC} = \frac{\mathrm{TP} \cdot \mathrm{TN} - \mathrm{FP} \cdot \mathrm{FN}}{\sqrt{(\mathrm{TP}+\mathrm{FP})(\mathrm{TP}+\mathrm{FN})(\mathrm{TN}+\mathrm{FP})(\mathrm{TN}+\mathrm{FN})}} \tag{24}$$
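A minimal sketch of Eq. (24) from the four confusion-matrix counts named above; the values passed in are illustrative, not the paper's data:

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)) (Eq. (24))."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom > 0 else 0.0

# Example confusion-matrix counts for a fairly balanced classifier
print(round(matthews_corrcoef(tp=85, tn=90, fp=10, fn=15), 3))
```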
Table 6 demonstrates the comparison of Matthews’s correlation coefficient between the proposed and existing models.
Table 6. Comparison of MCC
No. of inputs | PAFAC (TR) | PAFAC (TS) | PAIS (TR) | PAIS (TS) | PADF (TR) | PADF (TS) | PAFL (TR) | PAFL (TS) | SPAM (TR) | SPAM (TS)
---|---|---|---|---|---|---|---|---|---|---
100 | 0.477 | 0.685 | 0.501 | 0.463 | 0.807 | 0.447 | 0.660 | 0.688 | 0.836 | 0.955 |
200 | 0.494 | 0.660 | 0.504 | 0.477 | 0.791 | 0.451 | 0.672 | 0.681 | 0.851 | 0.945 |
300 | 0.511 | 0.634 | 0.510 | 0.493 | 0.774 | 0.456 | 0.683 | 0.673 | 0.866 | 0.934 |
400 | 0.527 | 0.610 | 0.514 | 0.508 | 0.757 | 0.461 | 0.695 | 0.666 | 0.880 | 0.925 |
500 | 0.545 | 0.584 | 0.519 | 0.522 | 0.741 | 0.466 | 0.706 | 0.659 | 0.895 | 0.914 |
600 | 0.561 | 0.559 | 0.523 | 0.536 | 0.724 | 0.470 | 0.719 | 0.652 | 0.910 | 0.904 |
700 | 0.577 | 0.534 | 0.528 | 0.551 | 0.707 | 0.475 | 0.730 | 0.644 | 0.925 | 0.893 |
800 | 0.595 | 0.508 | 0.532 | 0.566 | 0.690 | 0.480 | 0.742 | 0.637 | 0.940 | 0.883 |
900 | 0.611 | 0.484 | 0.537 | 0.580 | 0.674 | 0.485 | 0.753 | 0.630 | 0.954 | 0.874 |
1000 | 0.627 | 0.458 | 0.542 | 0.595 | 0.657 | 0.489 | 0.765 | 0.622 | 0.969 | 0.863 |
Figure 10 shows the comparison of Matthews’s correlation coefficient. At the critical authentication point with the training set, the existing PAFAC reached 0.561, PAIS reached 0.523, PADF obtained 0.724, and PAFL obtained 0.719 MCC, while the proposed SPAM reached 0.910. In the testing set, the existing PAFAC reached 0.559, PAIS reached 0.536, PADF obtained 0.470, and PAFL obtained 0.652 MCC, while the proposed SPAM reached 0.904.
Fig. 10. [Images not available. See PDF.]
Matthews’s correlation coefficient.
The MCC is a statistical measure that can estimate the success rate of security and privacy-aware models. It takes true positives, true negatives, false positives, and false negatives into account to give a more balanced view of how the model performs. The MCC ranges from –1 to +1, where 0 indicates no correlation with the chosen class labels. By measuring this value, security and privacy-aware models can assess the accuracy and reliability of their predictions, adequately securing sensitive data while identifying potential security risks so they can be handled appropriately.
Computation of Fowlkes–Mallows Index (FMI)
The Fowlkes–Mallows index (FMI) is a metric used to compare clusterings and to assess the agreement between split learning in consumer electronics and the security- and privacy-aware model. It is computed from pairs of elements: pairs placed in the same cluster in both the actual and the estimated partitions count as true positives, while pairs grouped together in only one of the two partitions count as false positives or false negatives. The FMI is then the geometric mean of the pairwise precision and the pairwise recall, so a higher value indicates closer agreement between the actual classes and the estimated class assignments. It is computed with Eq. (25):
$$\mathrm{FMI} = \frac{\mathrm{TP}}{\sqrt{(\mathrm{TP}+\mathrm{FP})(\mathrm{TP}+\mathrm{FN})}} \tag{25}$$
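A hedged sketch of Eq. (25), assuming the standard Fowlkes–Mallows form as the geometric mean of pairwise precision and recall; the counts are illustrative:

```python
import math

def fowlkes_mallows_index(tp, fp, fn):
    """FMI = TP / sqrt((TP + FP) * (TP + FN)), i.e. sqrt(precision * recall) (Eq. (25))."""
    denom = math.sqrt((tp + fp) * (tp + fn))
    return tp / denom if denom > 0 else 0.0

# Example: precision = 0.9, recall = 0.8 -> FMI ~ 0.85
print(round(fowlkes_mallows_index(tp=72, fp=8, fn=18), 3))
```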
Table 7 demonstrates the comparison of Fowlkes–Mallows index between the proposed and existing models.
Table 7. Comparison of FMI
No. of inputs | PAFAC (TR) | PAFAC (TS) | PAIS (TR) | PAIS (TS) | PADF (TR) | PADF (TS) | PAFL (TR) | PAFL (TS) | SPAM (TR) | SPAM (TS)
---|---|---|---|---|---|---|---|---|---|---
100 | 0.410 | 0.589 | 0.431 | 0.398 | 0.795 | 0.384 | 0.647 | 0.675 | 0.820 | 0.937 |
200 | 0.424 | 0.567 | 0.434 | 0.411 | 0.779 | 0.388 | 0.659 | 0.668 | 0.834 | 0.927 |
300 | 0.439 | 0.544 | 0.438 | 0.424 | 0.762 | 0.392 | 0.670 | 0.660 | 0.849 | 0.916 |
400 | 0.453 | 0.523 | 0.442 | 0.436 | 0.746 | 0.396 | 0.681 | 0.653 | 0.863 | 0.906 |
500 | 0.468 | 0.501 | 0.446 | 0.449 | 0.730 | 0.401 | 0.692 | 0.645 | 0.877 | 0.895 |
600 | 0.482 | 0.480 | 0.450 | 0.461 | 0.714 | 0.405 | 0.705 | 0.639 | 0.893 | 0.887 |
700 | 0.496 | 0.458 | 0.455 | 0.474 | 0.697 | 0.409 | 0.716 | 0.631 | 0.907 | 0.876 |
800 | 0.511 | 0.437 | 0.458 | 0.487 | 0.681 | 0.413 | 0.727 | 0.624 | 0.921 | 0.866 |
900 | 0.525 | 0.416 | 0.462 | 0.498 | 0.665 | 0.417 | 0.738 | 0.617 | 0.935 | 0.856 |
1000 | 0.539 | 0.394 | 0.466 | 0.512 | 0.648 | 0.421 | 0.750 | 0.609 | 0.950 | 0.846 |
Figure 11 shows the comparison of the Fowlkes–Mallows index. At the critical authentication point with the training set, the existing PAFAC reached 0.482, PAIS reached 0.450, PADF obtained 0.714, and PAFL obtained 0.705 FMI, while the proposed SPAM reached 0.893. In the testing set, the existing PAFAC reached 0.480, PAIS reached 0.461, PADF obtained 0.405, and PAFL obtained 0.639 FMI, while the proposed SPAM reached 0.887.
Fig. 11. [Images not available. See PDF.]
Fowlkes–Mallows index.
FMI measures are often used in data mining and machine learning to assess the performance of a model, specifically how well the model predicts the classification of data points. For security and privacy-aware models, the FMI is calculated by binarizing sensitive data points according to their predicted classifications. The higher the FMI score, the stronger the correlation between predicted and actual classifications, making it a useful measure of model fidelity. This implies that the FMI is generally a helpful tool within security and privacy-aware models for evaluating whether such models can accurately protect sensitive information.
Computation of Informedness (IM)
The assessment of informedness for the security and privacy-aware model over split learning in consumer electronics concerns the application’s ability to keep the user’s data from being accessed without authorization, which covers the security aspect, as well as the degree to which sensitive information can be dissociated from the data, which covers fairness. The model is assessed against the levels of security and privacy that can prevent the user’s data from being breached or acquired through unauthorized access, compared with industry standards and practices. The accuracy and performance of the model, and the extent to which it remains compliant with the user’s privacy and data security requirements, also indicate whether the model has been trained with sufficient informedness. It is computed with Eq. (26):
$$\mathrm{IM} = \mathrm{TPR} + \mathrm{TNR} - 1 \tag{26}$$
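A minimal sketch of Eq. (26), assuming the usual definition of informedness as sensitivity plus specificity minus one (equivalently TPR − FPR); the function name and counts are illustrative:

```python
def informedness(tp, tn, fp, fn):
    """IM = TPR + TNR - 1: 0 means chance-level predictions, 1 means
    perfect separation of secure and insecure instances (Eq. (26))."""
    tpr = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    tnr = tn / (tn + fp) if (tn + fp) > 0 else 0.0
    return tpr + tnr - 1.0

# Example: TPR = 0.85, TNR = 0.90 -> IM = 0.75
print(round(informedness(tp=85, tn=90, fp=10, fn=15), 2))
```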
Table 8 demonstrates the comparison of informedness between the proposed and existing models.
Table 8. Comparison of IM
No. of inputs | PAFAC (TR) | PAFAC (TS) | PAIS (TR) | PAIS (TS) | PADF (TR) | PADF (TS) | PAFL (TR) | PAFL (TS) | SPAM (TR) | SPAM (TS)
---|---|---|---|---|---|---|---|---|---|---
100 | 0.314 | 0.470 | 0.344 | 0.380 | 0.633 | 0.461 | 0.704 | 0.716 | 0.844 | 0.994 |
200 | 0.325 | 0.452 | 0.346 | 0.392 | 0.620 | 0.466 | 0.716 | 0.708 | 0.859 | 0.983 |
300 | 0.336 | 0.436 | 0.350 | 0.404 | 0.607 | 0.471 | 0.729 | 0.701 | 0.873 | 0.972 |
400 | 0.347 | 0.418 | 0.353 | 0.416 | 0.594 | 0.476 | 0.741 | 0.694 | 0.888 | 0.963 |
500 | 0.359 | 0.401 | 0.356 | 0.428 | 0.580 | 0.481 | 0.753 | 0.686 | 0.903 | 0.952 |
600 | 0.370 | 0.383 | 0.359 | 0.440 | 0.567 | 0.485 | 0.766 | 0.678 | 0.918 | 0.941 |
700 | 0.380 | 0.366 | 0.363 | 0.452 | 0.555 | 0.490 | 0.779 | 0.670 | 0.933 | 0.930 |
800 | 0.391 | 0.349 | 0.365 | 0.464 | 0.542 | 0.495 | 0.791 | 0.662 | 0.948 | 0.919 |
900 | 0.402 | 0.332 | 0.368 | 0.476 | 0.528 | 0.500 | 0.803 | 0.655 | 0.963 | 0.909 |
1000 | 0.414 | 0.314 | 0.372 | 0.488 | 0.515 | 0.506 | 0.816 | 0.648 | 0.978 | 0.899 |
Figure 12 shows the comparison of informedness. At the critical authentication point with the training set, the existing PAFAC reached 0.370, PAIS reached 0.359, PADF obtained 0.567, and PAFL obtained 0.766 informedness, while the proposed SPAM reached 0.918. In the testing set, the existing PAFAC reached 0.383, PAIS reached 0.440, PADF obtained 0.485, and PAFL obtained 0.678 informedness, while the proposed SPAM reached 0.941.
Fig. 12. [Images not available. See PDF.]
Informedness.
The informedness measure determines how well a model can identify privacy and security breaches. It does so by comparing the model’s performance to a base rate, the success rate achieved by randomly guessing. An algorithm with high informedness can correctly identify instances as secure or insecure, while a lower value implies the algorithm is as helpful as a coin toss. In summary, informedness directly answers how well a privacy- and security-aware model can perform.
Convergence of Performance
Convergence of performance is the result of measuring performance over a period of time and seeing a trend in the data. It is the process of noticing a progressive improvement in performance or, conversely, a regression in performance, over successive readings. Table 9 shows the convergence of performance between the existing and proposed models.
Table 9. Convergence of performance
Parameters | PAFAC (TR) | PAFAC (TS) | PAIS (TR) | PAIS (TS) | PADF (TR) | PADF (TS) | PAFL (TR) | PAFL (TS) | SPAM (TR) | SPAM (TS)
---|---|---|---|---|---|---|---|---|---|---
FDR | 0.0440 | 0.0455 | 0.0513 | 0.0707 | 0.0583 | 0.0560 | 0.0756 | 0.0695 | 0.0887 | 0.0900 |
Pth | 0.0570 | 0.0589 | 0.0664 | 0.0754 | 0.0753 | 0.0633 | 0.0741 | 0.0682 | 0.0923 | 0.0906 |
CSI | 0.0644 | 0.0666 | 0.0606 | 0.0615 | 0.0743 | 0.0544 | 0.0645 | 0.0590 | 0.0891 | 0.0908 |
MCC | 0.0553 | 0.0572 | 0.0521 | 0.0529 | 0.0732 | 0.0468 | 0.0713 | 0.0655 | 0.0903 | 0.0909 |
FMI | 0.0475 | 0.0491 | 0.0448 | 0.0455 | 0.0722 | 0.0403 | 0.0699 | 0.0642 | 0.0885 | 0.0891 |
IM | 0.0364 | 0.0392 | 0.0358 | 0.0434 | 0.0574 | 0.0483 | 0.0760 | 0.0682 | 0.0911 | 0.0946 |
The convergence of performance of security and privacy-aware models is important because it ensures that models will be implemented in the most secure manner possible. This helps to prevent security breaches, data losses, and malicious attacks that could result in financial losses or operational disruption. By using secure and reliable models, organizations can keep their data safe and secure while maximizing the effectiveness of their security and privacy efforts. Additionally, convergence can help improve performance as the same policies are applied consistently regardless of the platform or application, allowing better integration of security and privacy initiatives.
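As a rough sketch of checking convergence over successive readings (hypothetical readings, tolerance, and window, not the authors' procedure), one can test whether the change between consecutive measurements has levelled off:

```python
def has_converged(readings, tol=0.01):
    """Treat a metric as converged when the absolute change between
    consecutive readings stays below `tol` over the final few steps."""
    if len(readings) < 2:
        return False
    deltas = [abs(b - a) for a, b in zip(readings, readings[1:])]
    return all(d < tol for d in deltas[-3:])  # inspect the last few steps

# Successive FDR readings that level off near 0.089
print(has_converged([0.071, 0.082, 0.087, 0.0885, 0.0887, 0.0887]))
```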
Figure 13 shows the comparison of the convergence of performance at the critical authentication point. For FDR, in the training set the existing PAFAC reached 0.0440, PAIS 0.0513, PADF 0.0583, and PAFL 0.0756, while the proposed SPAM reached 0.0887; in the testing set, PAFAC reached 0.0455, PAIS 0.0707, PADF 0.0560, and PAFL 0.0695, while SPAM reached 0.0900. For the prevalence threshold, in the training set PAFAC reached 0.0570, PAIS 0.0664, PADF 0.0753, and PAFL 0.0741, while SPAM reached 0.0923; in the testing set, PAFAC reached 0.0589, PAIS 0.0754, PADF 0.0633, and PAFL 0.0682, while SPAM reached 0.0906. For CSI, in the training set PAFAC reached 0.0644, PAIS 0.0606, PADF 0.0743, and PAFL 0.0645, while SPAM reached 0.0891; in the testing set, PAFAC reached 0.0666, PAIS 0.0615, PADF 0.0544, and PAFL 0.0590, while SPAM reached 0.0908. For MCC, in the training set PAFAC reached 0.0553, PAIS 0.0521, PADF 0.0732, and PAFL 0.0713, while SPAM reached 0.0903; in the testing set, PAFAC reached 0.0572, PAIS 0.0529, PADF 0.0468, and PAFL 0.0655, while SPAM reached 0.0909. For FMI, in the training set PAFAC reached 0.0475, PAIS 0.0448, PADF 0.0722, and PAFL 0.0699, while SPAM reached 0.0885; in the testing set, PAFAC reached 0.0491, PAIS 0.0455, PADF 0.0403, and PAFL 0.0642, while SPAM reached 0.0891. For informedness, in the training set PAFAC reached 0.0364, PAIS 0.0358, PADF 0.0574, and PAFL 0.0760, while SPAM reached 0.0911; in the testing set, PAFAC reached 0.0392, PAIS 0.0434, PADF 0.0483, and PAFL 0.0682, while SPAM reached 0.0946.
Fig. 13. [Images not available. See PDF.]
Convergence of performance.
Mean of Performance
The mean (or average) of performance is an indication of the overall level of performance by an individual or group. It is the arithmetic average of scores on a given measure or measures, calculated by adding all scores and dividing by the number of scores. It is also known as an aggregate measure because it arises from the aggregation of various measures. Standard measures include the mean, which gives an average performance and makes comparisons across individuals or groups. Table 10 shows the mean of performance between the existing and proposed models.
Table 10. Mean of performance
Parameters | PAFAC | PAIS | PADF | PAFL | SPAM
---|---|---|---|---|---
FDR | 0.0015 | 0.0194 | 0.00223 | 0.00609 | 0.05135 |
Pth | 0.0019 | 0.0090 | 0.01208 | 0.00599 | 0.06177 |
CSI | 0.0022 | 0.0009 | 0.01991 | 0.00553 | 0.05171 |
MCC | 0.0019 | 0.0068 | 0.02642 | 0.00573 | 0.05062 |
FMI | 0.0016 | 0.0007 | 0.03191 | 0.00564 | 0.09267 |
IM | 0.0028 | 0.0076 | 0.00912 | 0.00784 | 0.08363 |
The performance mean aggregates the scores of the security and privacy-aware model, where a higher value indicates stronger security and privacy. It can be used to evaluate how robust the model is against security threats and breaches of private information, covering parameters associated with the model’s accuracy, the authentication time needed for secure data processing, and the efficiency of the cryptography used. For the particular model, this aggregate indicates how well it avoids data leaks, since the individual strengths measured in this paper are combined into a single figure. Such an indicator can therefore be regarded as the model’s overall efficiency level.
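As a trivial sketch of the aggregation described above (hypothetical per-reading values, not the reported results):

```python
# Arithmetic mean of a metric across readings: sum the scores, divide by their count.
def mean_of_performance(scores):
    return sum(scores) / len(scores) if scores else 0.0

print(round(mean_of_performance([0.044, 0.051, 0.058, 0.076, 0.089]), 4))
```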
Figure 14 shows the mean of performance between the existing and proposed models at the critical authentication point. For FDR, the existing PAFAC reached 0.0015, PAIS 0.0194, PADF 0.00223, and PAFL 0.00609, while the proposed SPAM reached 0.05135. For the prevalence threshold, PAFAC reached 0.0019, PAIS 0.0090, PADF 0.01208, and PAFL 0.00599, while SPAM reached 0.06177. For CSI, PAFAC reached 0.0022, PAIS 0.0009, PADF 0.01991, and PAFL 0.00553, while SPAM reached 0.05171. For MCC, PAFAC reached 0.0019, PAIS 0.0068, PADF 0.02642, and PAFL 0.00573, while SPAM reached 0.05062. For FMI, PAFAC reached 0.0016, PAIS 0.0007, PADF 0.03191, and PAFL 0.00564, while SPAM reached 0.09267. For informedness, PAFAC reached 0.0028, PAIS 0.0076, PADF 0.00912, and PAFL 0.00784, while SPAM reached 0.08363.
Fig. 14. [Images not available. See PDF.]
Mean of performance.
Applications
The security and privacy-aware model (SPAM) is dedicated to integrating security and privacy into the policy-centric development process of software systems. It is helpful for a wide range of real-world scenarios, especially when the objective involves protecting sensitive data, for instance, safeguarding personal data and controlling access to connected devices (IoT with SPAM). In mobile apps, SPAM can protect user data and defend against cyber-attacks. It can also help ensure that data is kept secure as it moves through and rests in cloud services. SPAM can improve data security and user privacy in all of these cases, making it an essential means of protecting sensitive information in the digital world.
Split learning is a distributed learning technique in which the computational task of training a machine learning model is divided among multiple devices, i.e., servers and clients. In consumer electronics that use machine learning, the heavy step of training a model on hundreds or thousands of hours’ worth of data is handled by servers, while much lighter client devices (e.g., smartphones or smart objects) only need to carry out inference, as sketched below. This lowers consumer electronics prices, since powerful processors do not need to be in every device, reducing production costs. Split learning also saves energy on client devices, which extends battery life and lowers operating costs for consumers, leading to cheaper and more functional consumer electronics.
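A minimal sketch of the split described above, assuming a PyTorch-style model cut into a client part and a server part; the layer sizes and cut point are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

# The client runs the layers before the cut; only the cut-layer activations
# ("smashed data") leave the device, never the raw input.
client_part = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
server_part = nn.Sequential(nn.Linear(32, 8), nn.ReLU(), nn.Linear(8, 2))

x = torch.randn(4, 16)         # raw data stays on the consumer device
smashed = client_part(x)       # lightweight client-side computation
logits = server_part(smashed)  # heavier computation finishes on the server
print(logits.shape)            # torch.Size([4, 2])
```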
CONCLUSIONS
With split learning, the data remains intact and secure from malicious actors and potential privacy risks, ensuring the user’s data is kept out of harm’s way. Furthermore, with split learning, user-side devices can use their own computing resources for part of the model computation, which reduces the burden on the cloud. The proposed SPAM reached 0.05135 FDR, 0.06177 Pth, 0.05171 CSI, 0.05062 MCC, 0.09267 FMI, and 0.08363 informedness results. This results in tasks such as object detection being processed faster and with reduced latency. For these reasons, split learning is an excellent choice for security and privacy-aware consumer electronics, with its many advantages over centralized models.
FUTURE SCOPE
The future of split learning in consumer electronics will focus on enhancing the security and privacy of the model. This can include data encryption methods such as homomorphic encryption or various secure multiparty computation techniques that can help protect data and its usage. Additionally, authentication measures can be employed to ensure that data and models come from trusted sources, and authorization architectures can be put in place to carefully control execution and usage of the data. As privacy regulations such as GDPR and CCPA continue to evolve, techniques can be implemented to ensure that all regulatory requirements are met. Finally, methods to detect and mitigate model bias and to improve the explainability of the models need to be explored and implemented.
FUNDING
This work was supported by ongoing institutional funding. No additional grants to carry out or direct this particular research were obtained.
CONFLICT OF INTEREST
The authors of this work declare that they have no conflicts of interest.
CODE AVAILABILITY
Source data and their information are mentioned in the manuscript.
Publisher’s Note.
Pleiades Publishing remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
AI tools may have been used in the translation or editing of this article.
REFERENCES
1 Koutsos, V.; Papadopoulos, D.; Chatzopoulos, D.; Tarkoma, S.; Hui, P. Agora: A privacy-aware data marketplace. IEEE Trans. Dependable and Secure Comput.; 2021; 19, pp. 3728-3740. [DOI: https://dx.doi.org/10.1109/TDSC.2021.3105099]
2 Alromih, A., Clark, J.A., and Gope, P., Privacy-aware split learning based energy theft detection for smart grids, in Proc. Int. Conf. on Information and Communications Security, Cham: Springer International Publ., 2022, pp. 281–300.
3 Finster, S.; Baumgart, I. Privacy-aware smart metering: A survey. IEEE Commun. Surv. Tutorials; 2015; 17, pp. 1088-1101. [DOI: https://dx.doi.org/10.1109/COMST.2015.2425958]
4 Carvalho, M., Ennaffi, O., Chateau, S., and Bachir, S.A., On the design of privacy-aware cameras: A study on deep neural networks, in Proc. European Conf. on Computer Vision, Cham: Springer Nature Switzerland, 2022, pp. 223–237.
5 Zerr, S., Siersdorfer, S., Hare, J., and Demidova, E., Privacy-aware image classification and search, Proc. 35th Int. ACM SIGIR Conf. on Research and Development in Information Retrieval, Oregon, Aug. 2012, pp. 35–44.
6 Logeshwaran, J., Shanmugasundaram, N., and Lloret, J., L-RUBI: An efficient load-based resource utilization algorithm for bi-partite scatternet in wireless personal area networks, Int. J. Commun. Syst., 2023, vol. 36, no. 4, p. e5439.
7 Steil, J., Hagestedt, I., Huang, M.X., and Bulling, A., Privacy-aware eye tracking using differential privacy, Proc. 11th ACM Symp. on Eye Tracking Research & Applications, Denver, CO, June 2019, pp. 1–9.
8 Zhang, F.; Lee, V.E.; Jin, R.; Garg, S.; Choo, K.K.R.; Maasberg, M.; Cheng, C. Privacy-aware smart city: a case study in collaborative filtering recommender systems. J. Parallel Distrib. Comput.; 2019; 127, pp. 145-159. [DOI: https://dx.doi.org/10.1016/j.jpdc.2017.12.015]
9 Finster, S.; Baumgart, I. Privacy-aware smart metering: A survey. IEEE Commun. Surv. Tutorials; 2014; 16, pp. 1732-1745. [DOI: https://dx.doi.org/10.1109/SURV.2014.052914.00090]
10 Logeshwaran, J.; Kiruthiga, T.; Kannadasan, R.; Vijayaraja, L.; Alqahtani, A.; Alqahtani, N.; Alsulami, A.A. Smart load-based resource optimization model to enhance the performance of device-to-device communication in 5G-WPAN. Electronics; 2023; 12, 1821. [DOI: https://dx.doi.org/10.3390/electronics12081821]
11 Späth, J., Matschinske, J., Kamanu, F.K., Murphy, S.A., Zolotareva, O., Bakhtiari, M., and Baumbach, J., Privacy-aware multi-institutional time-to-event studies, PLOS Digital Health, 2022, vol. 1, no. 9, p. e0000101.
12 Sun, J.; Xiong, H.; Liu, X.; Zhang, Y.; Nie, X.; Deng, R.H. Lightweight and privacy-aware fine-grained access control for IoT-oriented smart health. IEEE Internet Things J.; 2020; 7, pp. 6566-6575. [DOI: https://dx.doi.org/10.1109/JIOT.2020.2974257]
13 Bilogrevic, I.; Huguenin, K.; Agir, B.; Jadliwala, M.; Gazaki, M.; Hubaux, J.P. A machine-learning based approach to privacy-aware information-sharing in mobile social networks. Pervasive Mobile Comput.; 2016; 25, pp. 125-142. [DOI: https://dx.doi.org/10.1016/j.pmcj.2015.01.006]
14 Sarfraz, U.; Alam, M.; Zeadally, S.; Khan, A. Privacy aware IOTA ledger: decentralized mixing and unlinkable IOTA transactions. Comput. Networks; 2019; 148, pp. 361-372. [DOI: https://dx.doi.org/10.1016/j.comnet.2018.11.019]
15 Kanwal, T., Jabbar, A.A., Anjum, A., Malik, S.U., Khan, A., Ahmad, N., and Balubaid, M.A., Privacy-aware relationship semantics-based XACML access control model for electronic health records in hybrid cloud, Int. J. Distrib. Sensor Networks, 2019, vol. 15, no. 6, p. 1550147719846050.
16 Bhardwaj, A., Al-Turjman, F., Sapra, V., Kumar, M., and Stephan, T., Privacy-aware detection framework to mitigate new-age phishing attacks, Comput. Electr. Eng., 2021, vol. 96, p. 107546.
17 Zhao, S., Bharati, R., Borcea, C., and Chen, Y., Privacy-aware federated learning for page recommendation, Proc. IEEE Int. Conf. on Big Data (Big Data), Virtual, Dec. 2020, pp. 1071–1080.
18 Mothukuri, V., Parizi, R.M., Pouriyeh, S., and Mashhadi, A., CloudFL: A zero-touch federated learning framework for privacy-aware sensor cloud, Proc. 17th Int. Conf. on Availability, Reliability and Security, Vienna, Aug. 2022, pp. 1–8.
19 Ahamed, F., Shahrestani, S., and Cheung, H., Privacy-aware IoT based fall detection with infrared sensors and deep learning, in Proc. Int. Conf. on Interactive Collaborative Robotics, Cham: Springer Nature Switzerland, 2023, pp. 392–401.
20 Stach, C., Secure candy castle—A prototype for privacy-aware mhealth apps, Proc. 17th IEEE Int. Conf. on Mobile Data Management (MDM), Porto, June 2016, vol. 1, pp. 361–364.
21 Hassan, K.N. and Haque, M.A., SS+CEDNet: A speech privacy aware cough detection pipeline by separating sources, Proc. 10th IEEE Region 10 Humanitarian Technology Conf. (R10-HTC), Hyderabad, Sept. 2022, pp. 32–37.
22 Wu, H.; Wang, L.; Xue, G. Privacy-aware task allocation and data aggregation in fog-assisted spatial crowdsourcing. IEEE Trans. Network Sci. Eng.; 2019; 7, pp. 589-602.
23 Ni, Q.; Bertino, E.; Lobo, J.; Calo, S.B. Privacy-aware role-based access control. IEEE Secur. Privacy; 2009; 7, pp. 35-43. [DOI: https://dx.doi.org/10.1109/MSP.2009.102]
24 Luc, N.Q.; Nguyen, T.T.; Vu, C.H. Secure messaging application development: Based on post-quantum algorithms CSIDH, Falcon, and AES symmetric key cryptosystem. Program. Comput. Software; 2024; 50, pp. 322-333.
25 Konno, T., Awai, S., and Chikano, M., Privacy-aware user watching system using 2D pose estimation, Proc. 9th IEEE Global Conf. on Consumer Electronics (GCCE), Kobe, Oct. 2020, pp. 314–315.
26 Zhang, K., Zhou, X., Chen, Y., Wang, X., and Ruan, Y., Sedic: Privacy-aware data intensive computing on hybrid clouds, Proc. 18th ACM Conf. on Computer and Communications Security, Chicago, Oct. 2011, pp. 515–526.
27 Køien, G.M., Security update and incident handling for IoT-devices: a privacy-aware approach, Proc. 10th Int. Conf. on Emerging Security Information, Systems and Technologies SECURWARE 2016, Nice, 2016, pp. 309–315.
28 Eiza, M.H.; Ni, Q.; Shi, Q. Secure and privacy-aware cloud-assisted video reporting service in 5G-enabled vehicular networks. IEEE Trans. Veh. Technol.; 2016; 65, pp. 7868-7881. [DOI: https://dx.doi.org/10.1109/TVT.2016.2541862]
29 Acs, G.; Conti, M.; Gasti, P.; Ghali, C.; Tsudik, G.; Wood, C.A. Privacy-aware caching in information-centric networking. IEEE Trans. Dependable Secure Comput.; 2017; 16, pp. 313-328. [DOI: https://dx.doi.org/10.1109/TDSC.2017.2679711]
30 Vulfin, A.M. Detection of network attacks in a heterogeneous industrial network based on machine learning. Program. Comput. Software; 2023; 49, pp. 333-345. [DOI: https://dx.doi.org/10.1134/S0361768823040126]
Copyright Springer Nature B.V. Dec 2024