Due to legal restrictions or companies' internal information policies, businesses often do not entrust sensitive information to public cloud providers. One mechanism for ensuring the security of sensitive data in clouds is homomorphic encryption. Privacy-preserving neural networks are used to design solutions that employ neural networks under these conditions: they exploit the homomorphic encryption mechanism and thus keep commercial information secure in the cloud. The main deterrent to the use of privacy-preserving neural networks is the large computational and spatial complexity of the scalar multiplication algorithm, which is the basic algorithm for computing the mathematical convolution. In this paper, we propose a scalar multiplication algorithm that reduces the spatial complexity from quadratic to linear and reduces the computation time of scalar multiplication by a factor of 1.38.
INTRODUCTION
Artificial Intelligence (AI) methods [1] have been gaining popularity in the last few years. Although in the 2000s AI was usually of interest only in research circles, in recent decades it has been gaining popularity in all areas of human activity. Analyzing scientific and technological achievements, we can notice the following: the rise in popularity of AI techniques has been driven largely by the development of decentralized computing architectures, including cloud computing, by hardware accelerators, and by the general trend of increasing computing power of devices. AI methods are now being applied both in manufacturing, to improve automation efficiency, and in medicine and financial institutions, to analyze big data. With the growing popularity of language models and the release of open-source GPT models, AI methods have entered even more areas of human activity [2].
However, as in the late twentieth century, when the Internet and the World Wide Web were formed, AI methods have raised many questions [3], both from the standpoint of law and lawmaking and from the standpoint of data security. AI needs to process large amounts of data to work properly, and language models need even larger volumes. Big Data [4] may contain information with restricted access: if AI works within a particular company, it processes user data; in a medical organization, personal data of patients; in municipal or state institutions, data of citizens and internal documents; in financial organizations, customer data, account information, stock exchange quotes, etc. All of the above data are often confidential and protected by law; for example, in the Russian Federation this is Federal Law no. 152-FZ "On Personal Data," which aims to strengthen control over the processing and dissemination of personal information of citizens [5]. If an AI method is used inside a closed network, security issues can be solved by standard methods, but building and maintaining a high-capacity closed network requires many resources and much funding. Therefore, it is often more efficient to turn to a cloud computing service provider, which is what most companies do. When computing power is leased, the network becomes public, which poses risks to sensitive data. Although cloud service providers guarantee the security of data stored in the cloud, it is currently virtually impossible to ensure the security of the computations themselves: data are stored in encrypted form but are not processed in encrypted form.
Thus, there is the problem of handling sensitive AI data in public networks. As a solution, we can consider the cryptographic primitive of fully homomorphic encryption (FHE) [6], which allows performing homomorphic addition and multiplication on encrypted data. This is sufficient, for example, for the operation of a neural network (NN) [7], when input data must be processed by already trained neurons. There are still unsolved problems in this area: although matrix operations can be realized on the basis of addition and multiplication (subtraction is realized as the addition of a negated value), the peculiarities of FHE schemes introduce large data redundancy, which leads to inefficient operation of privacy-preserving NNs.
This paper investigates matrix multiplication under homomorphic encryption with the aim of improving memory efficiency, derives a new algorithm, and studies it in a privacy-preserving NN.
The paper is organized as follows: Section 2 discusses privacy-preserving NNs and methods of their organization, Section 3 presents the research on confidential matrix multiplication, Section 4 analyzes the results of the experimental study, and Section 5 concludes the work and outlines future research.
PRIVACY-PRESERVING NEURAL NETWORKS
Artificial Neural Network
A neural network is a mathematical model presented as a set of interconnected neurons (Fig. 1).
Fig. 1. Model of a neural network. [Image not available; see PDF.]
The input layer receives the input data, the hidden layer performs the calculations, and the output layer is responsible for the output information. An artificial NN, like a biological one, works through neuron activation: when the value inside a neuron reaches the threshold of the activation function, its value is transferred to the next neuron. Each neuron has its own base value (weight). The state of a neuron can then be described as

S = ∑_{i=1}^{n} x_i w_i,

where x_i is the value of the ith neuron input, w_i is the weight of the ith synapse, and n is the number of neuron inputs. The axon (transfer of the value to the next neuron) produces Y = f(S), where f(S) is the activation function. For example, the sigmoidal activation function [8] is defined as

f(S) = 1/(1 + e^{–S}). (1)
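For illustration, here is a minimal Python sketch of this neuron model; the input and weight values are arbitrary examples.

```python
import math

def sigmoid(s: float) -> float:
    """Sigmoidal activation function (1): f(S) = 1 / (1 + e^(-S))."""
    return 1.0 / (1.0 + math.exp(-s))

def neuron_output(x: list, w: list) -> float:
    """State S = sum_i x_i * w_i, followed by the axon output Y = f(S)."""
    s = sum(xi * wi for xi, wi in zip(x, w))
    return sigmoid(s)

# A neuron with n = 3 inputs.
print(neuron_output([0.5, -1.0, 2.0], [0.1, 0.4, 0.3]))  # ≈ 0.562
```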
From a mathematical point of view, an NN is a multi-parameter nonlinear optimization problem, where the hidden-layer neurons represent parameters and the output-layer neurons represent constraints. For the neural network to work properly, it needs to be trained; training is performed by changing the internal values (weights) of the neurons. There are many types of learning, both supervised and unsupervised. The most popular is error-based learning: on this type of learning, backpropagation NNs are built, which are used for pattern finding, prediction, and qualitative analysis. Analyzing Fig. 1, we can see that introducing a matrix multiplication operation makes it possible to increase the efficiency of data processing, which is what is done in most cases. To make an NN privacy-preserving, it is necessary to introduce the notion of the FHE cryptographic primitive.
Fully Homomorphic Encryption and CKKS Scheme
Fully homomorphic encryption (FHE) is a cryptographic primitive that develops the ideas of homomorphic encryption (HE). HE allows performing homomorphic addition or homomorphic multiplication over ciphertext. Examples of HE are asymmetric ciphers such as RSA [9], ElGamal [10], and others. Cryptographers speculated as early as the 1980s that fully homomorphic encryption was possible. The first FHE scheme was presented by Gentry in his 2009 paper [6]. However, this scheme was not efficient: it processed binary bits using logical operations rather slowly (compared to modern schemes) and, in addition, imposed strict limits on the number of allowed operations (the number of operations after which a message can still be recovered). Over the following 15 years, Gentry himself [11–13] and his followers [14–17] have developed new FHE schemes that handle integers, run faster, and relax the constraints on operations.
The next step in the history of FHE is the CKKS scheme (originally HEAAN), which allows processing rational numbers [18]. CKKS is a homomorphic encryption system designed to efficiently perform approximate arithmetic operations on encrypted data. It is particularly suitable for computations involving vectors of real or complex numbers from ℂ^{N/2}. The plaintext space and the ciphertext space share the same domain, the polynomial ring R = ℤ[X]/(X^N + 1), where N is a power of two.
The CKKS batch encoding maps an array of complex numbers into a polynomial with the property decode(encode(m1) ∗ encode(m2)) ≈ m1 ⊙ m2, where ⊙ is the component-wise multiplication and ∗ is the non-cyclic convolution (polynomial multiplication of the coefficient vectors).
The CKKS scheme follows the standard [19], which contains recommended parameters for 128-bit security with ternary-form secret keys. Encoding in CKKS is accomplished by evaluating Lagrange interpolation polynomials over the field of complex numbers.
The scheme uses approximate arithmetic to construct ciphertexts. Consider this arithmetic. First, we fix the base p > 0 and the modulus q_0 and set q_l = p^l q_0 for 0 < l ≤ L. The integer p is used as the scaling base in the approximate calculations. For the security parameter λ, the dimension M = M(λ, q_L) of the polynomial ring is chosen. For levels 0 < l ≤ L, a ciphertext of level l is defined as a vector in R_{q_l}^k for a fixed integer k.
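As a minimal sketch of how these parameters map onto a software implementation, consider context creation in the TenSEAL library [29]; the concrete ring dimension and prime bit sizes below are illustrative assumptions, not the only valid choice.

```python
import tenseal as ts

# The ring dimension N (a power of two) gives N/2 plaintext slots; the chain
# of moduli q_0 < ... < q_L is set by the bit sizes of its prime factors, and
# the middle 40-bit primes play the role of the scaling base p = 2^40.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,              # N = 2^13
    coeff_mod_bit_sizes=[60, 40, 40, 60],  # modulus chain with L = 2 levels
)
context.global_scale = 2 ** 40             # scaling base p for rescaling
context.generate_galois_keys()             # keys for slot rotations (used later)
```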
1. Key generation. The encryption process begins with the generation of keys, which are the public and private keys. The private key is used to decrypt the data while the public key is used to encrypt it.
2. Encryption. To encrypt a plaintext vector x, the following steps are performed:
• Padding. The plaintext vector x is padded with zeros to the length N/2 determined by the fixed power of two N;
• Encoding. The padded vector x is encoded into a plaintext polynomial m(x), which is a polynomial representation of the message;
• Homomorphic encryption. The polynomial m(x) is encrypted using the public key pk to obtain the ciphertext polynomial c(x), while controlling the amount of ciphertext error e, which must satisfy ‖e‖ ≤ e_max, so that 〈c, sk〉 = m + e.
3. Decryption. To decrypt the polynomial c(x) of the ciphertext, the following steps are performed:
• Homomorphic decryption. The ciphertext polynomial c(x) is decrypted using the secret key to obtain the plaintext polynomial m(x) ← 〈c, sk〉 (mod q_l);
• Decoding. To obtain the original text vector, the plaintext polynomial m(x) is converted back from a polynomial into the message vector.
4. Homomorphic operations. CKKS supports several approximate arithmetic operations on encrypted data, including addition and multiplication. Homomorphic addition and multiplication can be performed in ciphertext space without having to decrypt the ciphertext:
• Homomorphic addition. Given two ciphertexts c1(x) and c2(x), representing the encrypted values of m1(x) and m2(x), respectively, homomorphic addition is performed by adding the corresponding coefficients modulo the current modulus q_l: c(x) = c1(x) + c2(x); the errors e1 and e2 are summed as well;
• Homomorphic multiplication. Given two ciphertexts c1(x) and c2(x), representing the encrypted values of m1(x) and m2(x), respectively, homomorphic multiplication is performed by multiplying the ciphertext polynomials modulo the current modulus q_l: c(x) = c1(x) × c2(x); multiplication is allocated its own error bound, determined by a given constant.
Both addition and multiplication increase the approximation error e; the CKKS scheme can still decrypt the data correctly as long as the error stays within certain limits. When using the CKKS scheme, it is therefore important to control error growth, which depends on the number of operations and their order. Given the peculiarities of the arithmetic, multiplication introduces the larger error. Different software implementations of the CKKS scheme offer different ways to control it.
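Below is a small sketch of these operations, again assuming TenSEAL [29]; the decrypted results are only approximately equal to the true values, which is inherent to CKKS, and the product carries the larger error.

```python
import tenseal as ts

context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40

c1 = ts.ckks_vector(context, [1.0, 2.0, 3.0])  # encrypts m1
c2 = ts.ckks_vector(context, [4.0, 5.0, 6.0])  # encrypts m2

# Homomorphic addition: the errors e1 and e2 are summed.
print((c1 + c2).decrypt())  # ≈ [5.0, 7.0, 9.0]

# Homomorphic multiplication: consumes one level, grows the error faster.
print((c1 * c2).decrypt())  # ≈ [4.0, 10.0, 18.0]
```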
Privacy-Preserving Neural Networks
Interest in privacy-preserving neural networks (PPNNs) emerged several years ago. In their review [20], the authors explore this concept from a theoretical perspective, examining the main challenges and problems faced by researchers in constructing FHE-based PPNNs. The foundation of that work is the concept of Machine Learning as a Service (MLaaS) [21], which is akin to cloud computing concepts [22–24]. The paper defines FHE operations, including the problematic ones; in addition to matrix multiplication, it also mentions bootstrapping [25], which is used to increase the number of allowed multiplication operations. It also describes tools for working with FHE [26–28], as well as with PPNNs [29]. The problem of training PPNNs under FHE is discussed separately in that paper. This topic is also popular among researchers; for example, [30] studies the acceleration of the matrix multiplication operation by modifying the Halevi method [31]. However, multiplication operations based on this method are still rather resource-intensive. In the next section, we consider the matrix multiplication operation in detail and ways to increase the speed of data processing.
MATRIX MULTIPLICATION OPERATION IN APPROXIMATE HOMOMORPHIC ENCRYPTION SCHEME
Matrix multiplication is a basic operation for many systems, including NNs. Let us consider its algorithm based on the multiplication of square matrices a and b of size n × n:

c_{i,j} = ∑_{k=1}^{n} a_{i,k} b_{k,j},

where c is the multiplication result. In the open form, this algorithm is quite simple. However, in FHE its direct execution is impossible because we cannot separately address an element of each inner vector. Consider the method of Halevi [31] in detail. To perform matrix multiplication in FHE, the matrices must be encoded in their diagonal representation; then several rotation operations with auxiliary matrices are needed to perform the multiplication. Consider an example.
Let the matrix A of size m × m be represented in encrypted form by the diagonals y_0, …, y_{m–1}, where y_i = (A_{0,i}, A_{1,i+1}, …, A_{m–1,i+m–1}) and the column indices are taken modulo m. Then the product w = A·v, where v = (v_0, …, v_{m–1}) is the input vector, can be calculated as

w = ∑_{i=0}^{m–1} y_i ⊙ rot(v, i),

where ⊙ is the component-wise product between vectors and rot(v, i) is the cyclic shift of v by i positions.
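To make the structure of this computation explicit, here is a plaintext Python simulation of the diagonal method; in the encrypted setting, every diagonal y_i, every rotation of v, and every component-wise product would be a ciphertext operation.

```python
import numpy as np

def diagonals(A: np.ndarray) -> list:
    """Diagonal encoding: y_i[j] = A[j, (j + i) mod m]."""
    m = A.shape[0]
    return [np.array([A[j, (j + i) % m] for j in range(m)]) for i in range(m)]

def matvec_diagonal(A: np.ndarray, v: np.ndarray) -> np.ndarray:
    """A @ v = sum_i y_i ⊙ rot(v, i): only component-wise products and
    cyclic rotations are used, i.e., the operations available under FHE."""
    w = np.zeros_like(v, dtype=float)
    for i, y in enumerate(diagonals(A)):
        w += y * np.roll(v, -i)  # np.roll(v, -i) realizes rot(v, i)
    return w

A = np.arange(16.0).reshape(4, 4)
v = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(matvec_diagonal(A, v), A @ v)  # matches the ordinary product
```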
However, there is another approach that is appropriate for PPNNs. To motivate it, consider what kind of confidentiality is actually needed.
Given that it is currently impossible to train an NN under FHE in acceptable time, the NN is trained in the open; it is therefore fair to observe that the values of the weights are open, can be publicly available, and may be compromised. It then makes no sense to encrypt them, and we can use them in the open. We can thus apply a modified algorithm based on the previous one, where the input matrix is encoded in its diagonal representation and the weights are represented as a plaintext vector. Such a multiplication is treated as matrix-by-scalar multiplication, which combines the operations of multiplication and rotation. Consider an example.
Let the matrix A of size n × n be represented in encrypted form by the diagonals a_0, …, a_{n–1}, where a_i = (A_{0,i}, A_{1,i+1}, …, A_{n–1,i+n–1}) and the column indices are taken modulo n. Then the product w = A·v, where v = (v_0, …, v_{n–1}) is the open input vector of weights, can be calculated as

w = ∑_{i=0}^{n–1} a_i ⊙ rot(v, i).
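A sketch of this setting with TenSEAL [29]: the sensitive input is encrypted, while the trained weights stay open, so the rotations and multiplications run against plaintext values. The CKKSVector.matmul call for the encrypted-vector-by-plain-matrix product is an assumption of this sketch based on the library's public API.

```python
import tenseal as ts

context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()  # rotation keys for the cyclic shifts

x = [0.5, -1.0, 2.0, 1.5]       # sensitive input: encrypted
W = [[0.1, 0.2, 0.3, 0.4],      # trained PPNN weights: kept open
     [0.5, 0.6, 0.7, 0.8],
     [0.9, 1.0, 1.1, 1.2],
     [1.3, 1.4, 1.5, 1.6]]

enc_x = ts.ckks_vector(context, x)
enc_y = enc_x.matmul(W)  # rotations + multiplications by open diagonals
print(enc_y.decrypt())   # ≈ x @ W, decrypted on the client side
```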
This method requires n rotations, multiplications, and additions. In the original method, moreover, the auxiliary matrices in encrypted form are memory-intensive, as are the intermediate encrypted matrices kept until the result is obtained. From the formulas we can see that by using open values we reduce not only the number of operations but also the memory consumption. To confirm these conclusions, we conducted an experimental study (Fig. 2).
Fig. 2. Study of memory consumption by the proposed method. [Image not available; see PDF.]
From the data presented in Fig. 2a, we can conclude that the proposed algorithm reduces the amount of memory by an average factor of 7.89. The trend line for the data in Fig. 2b is 0.0656n² – 0.4452n + 2.0912 with the confidence factor R² = 0.9925. Thus, the space complexity has decreased from O(n³) to O(n²) for the product of n × n square matrices. Considering that the product of n × n square matrices requires n scalar multiplications and cyclic vector shifts, the spatial complexity of the scalar multiplication algorithm on encrypted data is reduced from O(n²) to O(n).
As can be seen in the figure, memory consumption has been reduced from a quadratic law to a linear one. This gives an advantage in efficiency when working with PPNNs. However, considering the specifics of the CKKS scheme, it is necessary to verify that the changes made did not affect the accuracy of the result; for this purpose, a neural network must be built and studied. Next, let us consider the speed of the calculations. Note that the illustration (Fig. 3a) shows the total time of the operations, including encryption and decryption.
Fig. 3. Study of the computation time of the matrix multiplication operation. [Image not available; see PDF.]
As the figure shows, the proposed method performs the multiplication faster. This effect is achieved both by the new multiplication approach and by the fact that encryption and decryption are simplified, since the multiplication is performed with an open vector of PPNN weights. In addition, we analyze the speed ratio of the methods (Fig. 3b).
The trend lines of time versus n are 0.0634n⁴ – 1.8561n³ + 20.897n² – 88.794n + 118.56 for the Halevi algorithm and 0.0461n⁴ – 1.3405n³ + 14.955n² – 63.491n + 84.81 for the proposed algorithm, with the confidence coefficient for both lines equal to R² = 0.9995 (see Fig. 3a). Asymptotically, the gain in time with increasing n equals the ratio of the leading coefficients, 0.0634/0.0461 ≈ 1.38.
The time of the algorithm for the product of n × n square matrices decreased by an average factor of 1.49 (see Fig. 3b). As the size increases, the graph becomes more linear, which can be explained by the growth of redundancy, which depends on the length of the vector; at small sizes it depends both on the construction of the vector and on the auxiliary matrices required for rotation.
In general, the proposed method is efficient both in terms of memory consumption and in terms of computational speed. It is worth noting that this result is achieved by reducing privacy, namely, of the PPNN weights, under the assumption that the weights are common knowledge because the PPNN is trained in the open form.
ACCURACY STUDY
As part of the research, experiments were conducted to train and test the neural network, as well as its encrypted version, using the MNIST dataset. The aim of the experiment was to evaluate the performance of the model in the normal and encrypted modes and to study the effect of homomorphic encryption on the performance and accuracy of the model. The hardware configuration consists of an Intel(R) Xeon(R) E5-2696 v3 CPU clocked at 2.30 GHz, 32 GB of DDR4 RAM at 2133 MHz, and a 1 TB SSD. The average time was measured by running the algorithms 10000 times on this platform. During the experiment, data were collected on training and testing losses, as well as on classification accuracy for each class. For this purpose, an NN was constructed based on the following mathematical model.
Consider a convolutional neural network (CNN) with the following layers and parameters:
Input image: I, a single-channel image.
First convolutional layer C1: applies 4 filters of kernel size 7 × 7 with stride 3 and padding 0.
First fully connected layer F1: transforms the flattened feature maps into a hidden layer with H neurons.
Second fully connected layer F2: maps the hidden layer to an output layer with O neurons.
The mathematical operations performed by the CNN are as follows:
1. The first convolutional layer operation can be defined as C1(I) = I ∗ K, where the output spatial size for an input of width W is ⌊(W – K1 + 2P1)/S1⌋ + 1; here K1 = 7 is the kernel size, S1 = 3 is the stride, and P1 = 0 is the padding. For a 28 × 28 MNIST image this gives ⌊(28 – 7)/3⌋ + 1 = 8, i.e., four 8 × 8 feature maps.
2. The output of C1 is then passed through an activation function and possibly other operations like pooling or normalization before being flattened and fed into the first fully connected layer.
3. The first fully connected layer operation is given by F1(X) = W1·X + b1, where X is the input vector to F1, and W1 and b1 represent the weights and biases of F1, respectively.
4. The second fully connected layer operation is similarly defined as F2(Y) = W2·Y + b2, where Y is the input vector to F2, derived from the output of F1, and W2 and b2 represent the weights and biases of F2, respectively. This model outlines the structure of the CNN, emphasizing the sequence from convolutional processing to the final output generated through the fully connected layers.
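For reference, here is a PyTorch sketch of this architecture for a 28 × 28 MNIST input; the hidden width H = 64 and the squared (polynomial) activation, a common FHE-friendly substitute for non-polynomial functions, are assumptions of this sketch rather than values stated above.

```python
import torch
import torch.nn as nn

class CnnModel(nn.Module):
    def __init__(self, hidden: int = 64, n_classes: int = 10):
        super().__init__()
        # C1: 4 filters, kernel 7x7, stride 3, padding 0;
        # a 28x28 image yields maps of size ((28 - 7 + 0) // 3) + 1 = 8.
        self.conv1 = nn.Conv2d(1, 4, kernel_size=7, stride=3, padding=0)
        self.fc1 = nn.Linear(4 * 8 * 8, hidden)   # F1: flattened maps -> H
        self.fc2 = nn.Linear(hidden, n_classes)   # F2: H -> O = 10

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv1(x).square()  # polynomial activation (FHE-friendly)
        x = x.flatten(1)
        x = self.fc1(x).square()
        return self.fc2(x)

model = CnnModel()
print(model(torch.zeros(1, 1, 28, 28)).shape)  # torch.Size([1, 10])
```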
A CNN was chosen as the model because CKKS-style FHE provides both homomorphic operations and automorphisms (slot rotations), which make it possible to perform the encrypted matrix rotations needed for matrix multiplication. In addition, these properties allow the mathematical convolution operation to be performed quite efficiently [32]. Next, let us consider the results obtained with this model, namely, the loss data (Fig. 4).
Fig. 4. Loss function study for PPNN. [Image not available; see PDF.]
Figure 4 shows the dynamics of the neural network losses during training over 10 epochs. The X axis represents the epoch number (from 1 to 10), and the Y axis represents the loss value. The graph shows a decrease in loss with each subsequent epoch, which indicates adequate behavior of the model with the new algorithm.
Figure 5 shows the classification accuracy of the model on the test data for each of the 10 classes, as well as the overall accuracy. The X axis represents the classes (0 through 9), and the Y axis represents the accuracy percentage for each class. The graph helps visualize how the model performs in classifying different categories, identifying the classes on which the model performs better or worse.
Fig. 5. Research on PPNN accuracy for various classes. [Image not available; see PDF.]
The experimental results show that the neural network exhibits high accuracy in both the normal and encrypted modes, with a slight increase in overall accuracy in the encrypted mode. This indicates that the application of homomorphic encryption does not have a significant negative impact on the model’s classification ability. However, an increase in training loss in the encrypted mode is observed, which may indicate the need for additional optimization of the model parameters to handle encrypted data. In addition, the fact that in some classes the encrypted CNN shows more accurate results requires additional research.
CONCLUSIONS
The study on adapting PPNNs to work with FHE has yielded significant results that highlight both the potential and the limitations of this approach. The study demonstrates that, by modifying the encrypted matrix multiplication method, PPNNs can be successfully utilized while maintaining efficiency in data processing and memory consumption.
Analysis of the PPNN test results showed that the model improves with each epoch, which is evident from the reduction in test loss. This indicates that the PPNN adapts adequately to the data and is applied effectively. Testing the model on the test data showed high classification accuracy for each class as well as high overall accuracy, which confirms the effectiveness of the model in classification tasks. It is interesting to note that the encrypted version of the model showed comparable, and in some cases even higher, accuracy, indicating that the application of FHE does not have a significant negative impact on the model's classification ability. Nevertheless, the use of approximate FHE requires further research to optimize the balance between security, privacy, and model performance. It is important to investigate the impact of different types of activation function approximation on the accuracy and overall performance of the model and to develop methods to improve the performance of PPNNs.
The paper proposes a scalar multiplication algorithm that reduces the spatial complexity from O(n²) to O(n) and reduces the calculation time of scalar multiplication by a factor of 1.38.
The results of this study open new perspectives for the development of secure PPNNs, especially in areas where sensitive data processing is required. They also emphasize the importance of continued research in this area to achieve the optimal combination of security, privacy, and efficiency in PPNNs. In future work, we plan to investigate and develop other operations in PPNNs to improve computational efficiency, memory consumption, accuracy, and privacy.
FUNDING
The research was supported by the Russian Science Foundation grant no. 19-71-10033, https://rscf.ru/en/project/19-71-10033/.
CONFLICT OF INTEREST
The authors of this work declare that they have no conflicts of interest.
Publisher’s Note.
Pleiades Publishing remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
AI tools may have been used in the translation or editing of this article.
REFERENCES
1. Hunt, E.B., Artificial Intelligence, 2014.
2. Radford, A., Improving Language Understanding by Generative Pre-Training, 2018.
3. Wamser, F., Telecommunication Systems, 2011.
4. Sagiroglu, S. and Sinanc, D., Big data: A review, Proc. IEEE Int. Conf. on Collaboration Technologies and Systems (CTS), Atlanta, 2013, pp. 42–47.
5. On Personal Data. http://pravo.gov.ru/proxy/ips/?docbody&nd=102108261. Accessed June 16, 2024.
6. Gentry, C., A Fully Homomorphic Encryption Scheme, 2009.
7. Yegnanarayana, B., Artificial Neural Networks, 2009.
8. Pratiwi, H. et al., Sigmoid activation function in selecting the best model of artificial neural networks, J. Phys.: Conf. Ser., 2020, vol. 1471, no. 1, p. 012010.
9. Rivest, R.L., Shamir, A., and Adleman, L., A method for obtaining digital signatures and public-key cryptosystems, Commun. ACM, 1978, vol. 21, pp. 120–126. https://doi.org/10.1145/359340.359342
10. ElGamal, T., A public key cryptosystem and a signature scheme based on discrete logarithms, IEEE Trans. Inf. Theory, 1985, vol. 31, pp. 469–472. https://doi.org/10.1109/TIT.1985.1057074
11. Gentry, C., Fully homomorphic encryption using ideal lattices, Proc. 41st Annu. ACM Symp. on Theory of Computing, Bethesda, MD: ACM, 2009, pp. 169–178.
12. Van Dijk, M. et al., Fully homomorphic encryption over the integers, Proc. Advances in Cryptology – Eurocrypt 2010, Gilbert, H., Ed., Berlin: Springer, 2010.
13. Gentry, C. and Halevi, S., Implementing Gentry's fully-homomorphic encryption scheme, Proc. 30th Annu. Int. Conf. on the Theory and Applications of Cryptographic Techniques "Advances in Cryptology – Eurocrypt 2011," Tallinn, Estonia, May 15–19, 2011, Springer, 2011, pp. 129–148.
14. Brakerski, Z., Fully homomorphic encryption without modulus switching from classical GapSVP, Proc. Annu. Cryptology Conf., Springer, 2012, pp. 868–886.
15. Brakerski, Z. and Vaikuntanathan, V., Fully homomorphic encryption from ring-LWE and security for key dependent messages, Proc. Advances in Cryptology – Crypto 2011, Rogaway, P., Ed., Berlin, Heidelberg: Springer, 2011.
16. Brakerski, Z., Gentry, C., and Vaikuntanathan, V., (Leveled) fully homomorphic encryption without bootstrapping, ACM Trans. Comput. Theory, 2014, vol. 6, pp. 1–36. https://doi.org/10.1145/2633600
17. van Dijk, M. et al., Fully homomorphic encryption over the integers, Proc. Annu. Int. Conf. on the Theory and Applications of Cryptographic Techniques, Springer, 2010, pp. 24–43.
18. Cheon, J.H. et al., Homomorphic encryption for arithmetic of approximate numbers, Proc. Int. Conf. on the Theory and Application of Cryptology and Information Security, Springer, 2017, pp. 409–437.
19. Homomorphic Encryption Standardization – an Open Industry/Government/Academic Consortium to Advance Secure Computation. https://homomorphicencryption.org/. Accessed December 10, 2022.
20. Pulido-Gaytan, B., Privacy-preserving neural networks with homomorphic encryption: challenges and opportunities, Peer-to-Peer Networking Appl., 2021, vol. 14, pp. 1666–1691. https://doi.org/10.1007/s12083-021-01076-8
21. Ribeiro, M., Grolinger, K., and Capretz, M.A., MLaaS: machine learning as a service, Proc. 14th IEEE Int. Conf. on Machine Learning and Applications (ICMLA), Miami, FL, 2015, pp. 896–902.
22. Manvi, S.S. and Shyam, G.K., Resource management for infrastructure as a service (IaaS) in cloud computing: a survey, J. Network Comput. Appl., 2014, vol. 41, pp. 424–440. https://doi.org/10.1016/j.jnca.2013.10.004
23. Rodero-Merino, L. et al., Building safe PaaS clouds: a survey on security in multitenant software platforms, Computers & Security, 2012, vol. 31, no. 1, pp. 96–108.
24. Cusumano, M., Cloud computing and SaaS as new computing platforms, Commun. ACM, 2010, vol. 53, pp. 27–29. https://doi.org/10.1145/1721654.1721667
25. Chen, H., Chillotti, I., and Song, Y., Improved bootstrapping for approximate homomorphic encryption, Proc. 38th Annu. Int. Conf. on the Theory and Applications of Cryptographic Techniques "Advances in Cryptology – Eurocrypt 2019," Darmstadt, Germany, May 19–23, 2019, Springer, 2019, part II.
26. Microsoft SEAL: C++, Microsoft, 2023.
27. OpenFHE – Open-Source Fully Homomorphic Encryption Library. https://www.openfhe.org/. Accessed April 1, 2024.
28. Dai, W. and Sunar, B., cuHE: a homomorphic encryption accelerator library, Proc. Int. Conf. on Cryptography and Information Security in the Balkans, Springer, 2015, pp. 169–186.
29. Benaissa, A. et al., TenSEAL: A library for encrypted tensor operations using homomorphic encryption, 2021. arXiv:2104.03152.
30. Lee, J.-W., Privacy-preserving machine learning with fully homomorphic encryption for deep neural network, IEEE Access, 2022, vol. 10, pp. 30039–30054. https://doi.org/10.1109/ACCESS.2022.3159694
31. Halevi, S. and Shoup, V., Algorithms in HElib, Proc. Annu. Cryptology Conf., Springer, 2014, pp. 554–571.
32. Özerk, Ö., Efficient number theoretic transform implementation on GPU for homomorphic encryption, J. Supercomput., 2022, vol. 78, pp. 2840–2872. https://doi.org/10.1007/s11227-021-03980-5