Purpose. Improvement of existing methods and development of new approaches for the digital processing of discrete signals in the field of artificial neural networks, as well as the formulation of principles for the structural implementation of discrete perceptrons.
Methodology. The presented scientific results and conclusions were obtained using methods of statistical analysis and digital signal processing, probability theory and mathematical statistics, signal and system theory, as well as through computational experiments and modeling.
Findings. The research showed the potential and prospects of using the proposed perceptron structure, which processes synaptic signals based on statistical estimation of their discrete states. Signal shifting is implemented through the addition of weight coefficients, which avoids multiplication operations and, as a result, reduces algorithmic and computational complexity. Furthermore, the use of a discrete basis accelerates the training process by reducing the range of possible signal values.
Originality. A method for implementing a discrete perceptron that is based on the use of statistical evaluations and integer operations on discrete synaptic signals has been proposed for the first time.
Practical value. The proposed approach avoids the multiplication operation typical of perceptron structures, reducing computational costs. The use of a discrete basis significantly limits the value space of the shift coefficients, which could potentially shorten the training process. An important aspect of the results obtained is the prospect of implementing specialised artificial neural networks on platforms with limited computing resources, such as microcontrollers, programmable logic integrated circuits, and others.
Keywords: perceptron, statistical estimates, discrete signals, signal processing
(ProQuest: ... denotes formulae omitted.)
Introduction. Modern computer systems are increasingly integrating with artificial neural networks. Improving the computational processes of neural networks has important applied value for multidisciplinary engineering. In particular, effective processing of discrete signals allows one to optimally solve problems of estimating dynamic loads in complex technical systems [1]. Reducing the computational cost of machine learning processes will make it possible to solve non-conservative problems of vibration and earthquake protection [2, 3], perform multi-criteria design of protective coatings [4], carry out a refined assessment of the reliability of structures with defects [5], make optimal management decisions [6], etc. The intensive development of these networks requires exploring ways to reduce their computational and, consequently, hardware complexity. In its simplest form, a perceptron can be viewed as a feedforward artificial neural network with one hidden layer and a threshold-type transfer function (Fig. 1).
The processing of signals by perceptron structures is implemented as follows: sensor signals are transmitted to associative elements, and the resulting outputs from these associative elements are then sent to activation elements. This approach creates a set of associations between input data (sensor signals) and output signals. In general, the input signals x, the weight coefficients w, and the constant component b, which is sometimes used to adjust the classifier's threshold value (xi·wi + bi), can take real values, but in practical implementations they take only fixed values. The output (out) is determined by the activation function and can take both real and integer values. Various activation functions have been proposed in neural network research, such as the sigmoid, threshold, and hyperbolic tangent functions [7, 8]. One way to reduce computational complexity is to decrease the amount of input data, which can be achieved by a preprocessing step, for example a segmentation procedure for images treated as 2D signals [9, 10]. One of the challenges in the practical implementation of perceptron structures, which leads to increased computational complexity, is the need to weight the synaptic signals x by multiplying them by the corresponding weight coefficients w. Moreover, the use of real numbers to represent the values of input signals and weight coefficients significantly complicates the training of perceptron structures.
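As a point of reference, the following is a minimal Python sketch of this classical forward pass; the weights, bias, and inputs shown are illustrative values, not taken from the paper.

def classic_perceptron(x, w, b, threshold=0.0):
    """Classical perceptron: weighted sum of inputs plus bias, passed through a step activation."""
    s = sum(xi * wi for xi, wi in zip(x, w)) + b   # one multiplication per synapse
    return 1 if s >= threshold else 0

# Hypothetical example with two inputs.
print(classic_perceptron([1, 0], [0.6, 0.6], b=-0.5))  # -> 1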
Literature review. One of the well-known methods for implementing artificial neural components [11] is based on receiving the input signals x1, x2, ..., xn (X) through synapses with weighted connections w1, w2, ..., wn (W) at the inputs of an adder, which forms the algebraic sum of the weighted input signals and determines the excitation level S of the neural element. The output signal is determined by passing the excitation level S through a nonlinear function, either binary or sigmoid, f(S - h), where h is a constant shift (threshold value).
In practice, the computational complexity of such a solution is determined by multiplication operations, the presence of which leads to reduced performance. It should be noted that when applied to object recognition tasks, numerical instability may occur due to overflow in systems of this type. The use of nonlinear functions further increases the demand for computational resources and, consequently, complicates the corresponding algorithmic and hardware implementations.
Another implementation of a perceptron component, developed and patented by Ivanovsky in 2004, involves the use of sinusoidal functions in synaptic channels, input lines for excitation signals, and amplitude-time functions, which are traditionally connected to the inputs of an adder. Structurally, the synaptic channel is implemented as a sinusoidal signal generator whose operational parameters are defined by a corresponding weight vector: the maximum amplitude, frequency, and phase of the output signal. The output of the generator, in turn, is fed into the adder.
At the output of such a structure, the sum of the input signal and the synaptic sinusoidal functions, scaled by the excitation signal values representing the instantaneous values of the output amplitude-time function, is obtained [12]. Notably, since the time of this work, no significant advancements have been achieved in this direction.
Significant drawbacks of the mentioned method, particularly in digital implementations, include the substantial computational resources required for generating sinusoidal signals and performing multiplication operations. Moreover, the generation of only instantaneous values of the output amplitude-time function imposes limitations on functional capabilities and complicates algorithmic, hardware, and software solutions.
It is also worth mentioning one of the promising approaches to implementing a perceptron, which evaluates the degree of chaos, specifically the informational entropy, of the input discrete synaptic signals [13]. The main idea of this structure is that the vector of input synaptic signals X = {x1, x2, ..., xn}, whose elements are multiplied by the corresponding weight coefficients W = {w0, w1, ..., wn} during processing, forms a set with a limited number of discrete states. An informational entropy calculation function is applied to this set, and the resulting value is compared with a predefined threshold.
It is worth noting that the entropy calculation formula includes a logarithmic function, which is often approximated by finding the values of corresponding polynomials, particularly through the Taylor series. As in previous cases, the presence of weighting operations, i.e., multiplication, as well as the computation of polynomials, requires significant hardware and computational resources.
Unsolved aspects of the problem. The need for software or hardware implementation of the operation of weighting synaptic signals x by multiplying them by weight coefficients w, which belong to the domain of real numbers, increases the algorithmic and, consequently, the hardware and software complexity of perceptron structures, which serve as fundamental components in the construction of artificial neural networks.
Moreover, the emulation of typical logical functions such as NOT, AND, OR, and XOR remains challenging for a single elementary perceptron [7, 13].
One of the promising options for solving these problems is to improve existing implementations and find new realizations of perceptron structures, in particular by transitioning to a discrete basis, which will accelerate the training process by reducing the space of possible signal values. It is also expected to reduce algorithmic and, consequently, computational complexity by replacing the operation of weighting (via multiplication) synaptic signals with the operation of shifting them (via addition).
Methodology. The research methodology is based on information theory, the fundamentals of signal and system theory, probability theory and mathematical statistics, statistical analysis methods, digital signal processing, as well as the results of computational experiments. During the study, a discrete perceptron structure was proposed that does not require multiplication operations and is based on statistical evaluation of synaptic signals shifted by adding corresponding coefficients. Additionally, the possibility of emulating typical logical functions and their combinations was investigated.
Main part. Based on the analysis conducted, it has been established that a promising direction for the development of perceptron structures could involve transitioning to a discrete basis and employing various probabilistic characteristics (such as expected value, variance, mode, median, root mean square deviation, central moments of different orders, the count and probability distribution of existing states, as well as the informational entropy of these states), either individually or jointly. These characteristics are calculated for a set of pre-shifted synaptic signals. This approach could enhance recognition accuracy, while the elimination of multiplication operations and the adoption of a discrete basis would significantly reduce computational demands.
For stationary processes, the value of a probabilistic characteristic calculated over a corresponding fragment of the input signal converges to a certain steady-state value. Moreover, as the number of observations increases, fluctuations of these probabilistic characteristics around the steady-state value diminish. The use of a shifting operation on input signals enables the adjustment of the number of possible discrete states of the signal, particularly through the integer addition of corresponding coefficients. This processing results in a change in the selected probabilistic characteristic, which can be utilized in the implementation of the activation function for a discrete perceptron structure.
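To make this concrete, a tiny Python illustration of how integer shifts change the number of observable discrete states; the first shift reuses the example (1, 2, 1) + (1, 0, 1) that appears later in the text, while the second shift vector is an arbitrary value chosen for contrast.

x = [1, 2, 1]                                        # discrete synaptic signals
print(len(set(x)))                                   # 2 distinct states before shifting
shifted = [xi + wi for xi, wi in zip(x, [1, 0, 1])]  # shift used as an example later in the text
print(shifted, len(set(shifted)))                    # [2, 2, 2] -> 1 state (values merged)
shifted = [xi + wi for xi, wi in zip(x, [3, 0, 5])]  # a different, arbitrary shift
print(shifted, len(set(shifted)))                    # [4, 2, 6] -> 3 states (values separated)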
Thus, a method for the structural organization of a digital perceptron for discrete synaptic signals is proposed [14]. The sequence of processing such signals includes manipulating the number of possible states through the use of a shifting operation and an additional synaptic input with a predefined fixed discrete state, calculating one or multiple probabilistic evaluations simultaneously for the resulting set of signal states, and comparing these evaluations with a predefined threshold value to determine the activation state of the discrete neuron.
As an example, it is appropriate to consider a discrete perceptron with two binary synapses, which can be easily scaled to an arbitrary number of such synaptic inputs (denoted by a dashed line), as illustrated in Fig. 2.
As can be observed, in the general case the vectors of input synaptic signals X = {x1, x2, ..., xn} and shift coefficients W = {w0, w1, ..., wn} are reduced to a specific case, namely two inputs X = {x1, x2} and the corresponding shift coefficients W = {w0, w1, w2}.
It is worth noting that the shift coefficient w0 does not act directly on the vector X and serves as an additional tool for manipulating the number of states of the synaptic signals. During training, the coefficients W are formed so as to maximize or minimize (depending on the task) the number of states used in calculating f(W, X) and comparing the result with the threshold value of the perceptron's activation function f(A).
Thus, the vector of shifted synaptic signals {w0, w1 + x1, w2 + x2} is obtained by adding the corresponding elements of the vectors X and W. A selected probabilistic characteristic is then calculated for the resulting vector of shifted signals; specifically, the number of unique signal states Q = Cnt(W, X) is counted. The result of comparing the value of Q with the threshold value, f(Q, A), then determines the activation level of the neuron.
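A minimal Python sketch of this processing step, under the stated assumptions; the function names are ours, and the comparison Q >= A is only one possible form of f(Q, A).

def shifted_states(x, w):
    """Shifted synaptic signals {w0, w1 + x1, ..., wn + xn}: integer addition only, no multiplication."""
    return [w[0]] + [wi + xi for wi, xi in zip(w[1:], x)]

def discrete_perceptron(x, w, a):
    """Count the unique discrete states Q = Cnt(W, X) and compare with the activation threshold A."""
    q = len(set(shifted_states(x, w)))
    return 1 if q >= a else 0   # one possible form of the comparison f(Q, A)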
The results of emulating typical logical functions AND, OR, XOR, as well as NAND and NOR, using the proposed perceptron structure with the corresponding shift coefficients W and activation threshold values A, are presented in Tables 1-5.
As can be seen, the described method of implementing the perceptron allows emulating typical logical functions and simplifies algorithmic, software, and hardware implementation, which opens up possibilities for use in low-performance computing systems and expands the functional capabilities of artificial neural network components.
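As a hedged illustration (these particular coefficients were found by hand with w0 = 1 and need not coincide with those listed in Tables 1-5), the sketch above reproduces the AND, OR, and XOR truth tables as follows.

# Illustrative shift coefficients and thresholds; Tables 1-5 may list different values.
configs = {
    "AND": dict(w=[1, 1, 2], a=3),
    "OR":  dict(w=[1, 1, 1], a=2),
    "XOR": dict(w=[1, 2, 2], a=3),
}
for name, cfg in configs.items():
    outputs = [discrete_perceptron([x1, x2], cfg["w"], cfg["a"])
               for x1 in (0, 1) for x2 in (0, 1)]
    print(name, outputs)
# AND [0, 0, 0, 1]   OR [0, 1, 1, 1]   XOR [0, 1, 1, 0]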
Features and challenges of training discrete perceptrons. One of the simplest methods for training the above-described perceptron is an exhaustive search of possible shift values [15, 16], as the discrete representation significantly reduces the search space.
For example, implementing logical functions and their combinations based on the proposed perceptron structure requires checking only 256 possible combinations.
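One reading of the figure of 256 is four discrete parameters (w0, w1, w2 and the threshold A), each restricted to four values, giving 4^4 = 256 candidates; the parameter ranges below are an assumption made for illustration only, reusing the discrete_perceptron sketch above.

from itertools import product

def exhaustive_train(truth_table, values=range(4)):
    """Brute-force search over discrete (w0, w1, w2, A); the parameter ranges are assumed."""
    for w0, w1, w2, a in product(values, repeat=4):          # 4**4 = 256 candidates
        if all(discrete_perceptron([x1, x2], [w0, w1, w2], a) == y
               for (x1, x2), y in truth_table.items()):
            return [w0, w1, w2], a                           # first coefficient set that fits
    return None

xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(exhaustive_train(xor_table))                           # -> ([0, 1, 1], 3) under these assumptions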
Some of the search results are shown in Figs. 3-5. Here, the Z-axis represents the deviation from the value of the emulated logical function for the given parameters of the discrete perceptron, while the X and Y axes represent the values of the shift coefficients w1 and w2, with w0 and A fixed at w0 = 1 and A = 3 for the AND and XOR functions, and A = 2 for the OR function.
The application of automatic learning methods based on extremum search encounters significant difficulties when the output function, as seen in Figs. 3-5, is not smooth and also has discontinuities.
The smoothness of a function is an important property that ensures the continuity of its derivatives, which allows the use of gradient methods for optimization. In the presence of discontinuities, gradients are either undefined or become infinite, making gradient methods ineffective or completely unsuitable for such types of functions [17, 18].
One of the main problems in optimizing non-smooth functions is the lack of reliable tools for assessing the direction and magnitude of changes in the function. Gradient methods, such as Stochastic Gradient Descent (SGD), rely on the calculation of derivatives to determine the direction of movement towards the minimum or maximum of the function. In the case of discontinuities, these methods become unreliable, as the presence of breakpoints leads to abrupt jumps in derivative values, which, in turn, causes instability in the algorithm [19].
Another difficulty is the problem of convergence. In non-smooth functions, discontinuities can cause delays or a complete halt in the convergence process, as algorithms cannot properly determine the direction and magnitude of the step.
This is particularly critical in tasks where it is important to quickly and accurately find optimal parameters, such as training artificial neural networks. Discontinuities can also cause algorithms to get "stuck" in local extrema without reaching a global minimum or maximum [20].
Implementation of an equivalent structure of a discrete perceptron based on typical perceptrons. Solving this problem requires the development of alternative optimization methods that do not depend on the smoothness of the function. These can include, for example, heuristic search methods, genetic algorithms, or simulated annealing methods, which do not require the calculation of derivatives and can be more robust to discontinuities [21]. However, these methods often have their own drawbacks, such as higher computational complexity and lower efficiency, making the task of optimizing non-smooth functions still an open and relevant issue for research in the field of machine learning [22].
The use of a discrete basis significantly limits the search space during enumeration and, in the case of a small number of input parameters, can be successfully used as an option for quick training of perceptron structures. However, as the number of inputs increases, the number of enumeration steps grows exponentially, requiring the use of other training approaches [23].
Thus, it is advisable to consider the use of equivalent neural network structures that are implemented based on classical perceptrons [24, 25]. Then, in the general case
... (1)
where X = (x1, x2, ..., xn) is the vector of input signals, W = (w1, w2, ..., wn) is the vector of input-signal shifts, f is a function for evaluating the informational entropy (or another probabilistic estimate) of the input signals, and c is an activation function (for example, the Heaviside threshold function H) [26, 27].
In this case, it is worth noting that informational entropy, as a parameter, does not depend on the specific values of the input signals. However, depending on their distribution, it can be calculated differently, particularly using the formulas of Shannon (2), Cramp (3), Hartley (4), Oliver (5), and Nikolaychuk (6).
... (2)
where pi is the probability of occurrence of the i-th state (value) of the input signal
... (3)
where D is the variance estimate of the input signal
... (4)
where N is the number of possible unique states of the input signal
... (5)
... (6)
where Rxx is the autocorrelation coefficient at a given shift j.
As can be observed, only Shannon's formula (2) operates based on the probabilities of the existing states of the input signal. In contrast, all other formulas are applicable only to specific types of distributions, as they rely on the values of the input signal and provide an upper estimate of the entropy measure. However, such an upper estimate does not reflect the current value of entropy, which, in certain situations, may lead to inaccuracies.
A common drawback of the aforementioned formulas (2-6) is their reliance on the logarithmic function. In fact, the logarithmic function is the most computationally intensive element of these formulas, traditionally calculated using the Taylor series expansion. Furthermore, the number of terms in the series required to achieve acceptable accuracy increases rapidly when the probabilities of the input signal states are near zero. This disproportionate growth in computational costs makes it challenging to ensure efficient hardware or software implementation of algorithms for calculating informational entropy.
One promising approach to reducing computational complexity is the use of Hartley's formula (4), which allows for estimating the informational entropy of uniformly distributed input signals without requiring logarithmic operations.
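For completeness, a Python sketch of these two estimates in their conventional textbook forms (the paper's exact expressions (2) and (4) are omitted above). Since the logarithm is monotone, comparing Hartley's estimate with a threshold is equivalent to comparing the unique-state count directly, which is how the evaluation becomes logarithm-free in what follows.

from collections import Counter
from math import log2

def shannon_entropy(signal):
    """Textbook Shannon form: H = -sum(p_i * log2(p_i)) over the observed state probabilities."""
    n = len(signal)
    return -sum((c / n) * log2(c / n) for c in Counter(signal).values())

def hartley_estimate(signal):
    """Textbook Hartley form: H = log2(N), with N the number of unique observed states."""
    return log2(len(set(signal)))

def unique_states(signal):
    """The logarithm-free surrogate used below: Q, the count of unique states."""
    return len(set(signal))

print(shannon_entropy([1, 2, 1]), hartley_estimate([1, 2, 1]), unique_states([1, 2, 1]))  # ~0.918  1.0  2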
Thus, the entropy evaluation function in (1) is replaced by a function that determines the number of unique states of the input signals, Q(x + w); for example, Q((1, 2, 1) + (1, 0, 1)) = Q((2, 2, 2)) = 1, since there is only one unique value, 2, among the resulting signal values.
To build an artificial neural network equivalent to (1), it is necessary to emulate a function of the form
...
The given function can be expressed through the Heaviside function H, namely
... (7)
Based on expression (7), an artificial neural network can be represented (Fig. 6) that computes the function C for a pair of input values.
Expanding (7) to the vector X = (x1, x2, ..., xn), where the number of elements equal to element i is determined, we get
... (8)
To count the number of non-repeating values of input signals, we introduce a uniqueness metric inverse to the number of elements equal to element i
...
Combining expression (8) with this uniqueness metric, we get
... (9)
where Q(X) is the number of unique elements in the vector X.
Based on (9), a neural network capable of counting the number of unique elements can be implemented, a variant of which is shown in Figs. 7 and 8.
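The exact forms of expressions (7)-(9) are omitted above; the Python sketch below uses one common Heaviside-based construction of the equality indicator (taking H(0) = 1, which is an assumption about the paper's form) together with the identity implied by the text, Q(X) = sum over i of 1 / C(X, i).

def heaviside(t):
    """Heaviside step with the convention H(0) = 1 (assumed here)."""
    return 1 if t >= 0 else 0

def equal_indicator(a, b):
    """Equals 1 when a == b and 0 otherwise, built from Heaviside units (one possible form of (7))."""
    return heaviside(a - b) + heaviside(b - a) - 1

def count_equal(x, i):
    """C(X, i): how many elements of X equal the i-th element (cf. (8))."""
    return sum(equal_indicator(x[i], x[j]) for j in range(len(x)))

def unique_count(x):
    """Q(X) = sum_i 1 / C(X, i): each group of k equal values contributes k * (1/k) = 1 (cf. (9))."""
    return sum(1 / count_equal(x, i) for i in range(len(x)))

print(unique_count([2, 2, 2]))     # 1.0, matching Q((2, 2, 2)) = 1 from the text
print(unique_count([1, 4, 1, 6]))  # 3.0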
Another approach involves using the counting function in only one direction
... (10)
As a result of the transformations, instead of expression (8), we obtain expression (10). To compare C_(X, i) and C(X, i), let us take X = (2, 2, 2) as an example; then C(X, i) = 3 for every i, while C_(X, 1) = 3, C_(X, 2) = 2, and C_(X, 3) = 1.
As can be seen, C(X, i) compares the i-th element with all elements of the vector, whereas C_(X, i) compares it only with the i-th and subsequent elements. In this case, it can be written as
...
Using the number of unique states Q(X) as a function for evaluating information entropy in (1), we obtain
...
which can be represented in the form of an artificial neural network shown in Fig. 9.
Thus, an equivalent representation of a probabilistic discrete perceptron based on a classical neural network has been achieved.
While being significantly more complex, this approach has the potential to simplify solving tasks related to training artificial neural components and neural networks implemented using discrete perceptron structures.
The developed approach will also contribute to machine learning methods for optimization over discontinuous fields, in particular in problems of graphical visualization of wave fronts [28], recognition of patterns of defective structures [29], and the search for stress extrema in piecewise-homogeneous structures [30].
Conclusions. Based on the results of the research, for the first time, a structure and analytical description of a perceptron that processes synaptic signals based on statistical evaluation of their discrete states have been proposed. The distinctive features of this solution are:
- transition to a discrete basis, which allows accelerating the training process by reducing the space of possible signal values;
- avoidance of the multiplication operation typical of perceptron structures. The use of a shift operation, implemented by adding appropriate coefficients, allows manipulating the probabilistic estimates of the synaptic signals.
As a result, these features enable the implementation of perceptron structures based on integer operations, leading to a reduction in algorithmic, software, hardware, and consequently computational costs. An important aspect of the obtained results is the potential for implementing specialized artificial neural networks on platforms with limited computational resources, such as microcontrollers, programmable logic integrated circuits, and similar devices.
In addition, an analytical description has been obtained that allows emulating the functionality of a discrete probabilistic perceptron based on a classical neural network, which may simplify the solution of training tasks for artificial neural components and neural networks implemented based on the proposed discrete structure.
References
1. Randall, R.B., Antoni, J., & Borghesani, P. (2022). Applied Digital Signal Processing. In Allemang, R., Avitabile, P. (Eds.) Handbook of Experimental Structural Dynamics. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-4547-0_6
2. Landar, S., Velychkovych, A., Ropyak, L., & Andrusyak, A. (2024). A Method for Applying the Use of a Smart 4 Controller for the Assessment of Drill String Bottom-Part Vibrations and Shock Loads. Vibration, 7(3), 802-828. https://doi.org/10.3390/vibration7030043
3. Thaler, D., Elezaj, L., Bamer, F., & Markert, B. (2022). Training Data Selection for Machine Learning-Enhanced Monte Carlo Simulations in Structural Dynamics. Applied Sciences, 12(2), 581. https://doi.org/10.3390/app12020581
4. Shatskyi, I., Makoviichuk, M., Ropyak, L., & Velychkovych, A. (2023). Analytical Model of Deformation of a Functionally Graded Ceramic Coating under Local Load. Ceramics, 6(3), 1879-1893. https://doi.org/10.3390/ceramics6030115
5. Shats'kyi, I., Makoviichuk, M., & Shcherbii, A. (2019). Influence of a flexible coating on the strength of a shallow cylindrical shell with longitudinal crack. Journal of Mathematical Sciences (United States), 238(2), 165-173. https://doi.org/10.1007/s10958-019-04226-9
6. Schmitt, M. (2023). Automated machine learning: AI-driven decision making in business analytics. Intelligent Systems with Applications, 18, 200188. https://doi.org/10.1016/j.iswa.2023.200188
7. Liou, D.-R., Liou, J.-W., & Liou, C.-Y. (2013). Learning Behaviors of Perceptron. iConcept Press. ISBN 978-1-477554-73-9.
8. Anzai, Y. (2016). Pattern Recognition & Machine Learning. ISBN-10:0124121497.
9. Vorobel, R. (2010). Logarithmic type image processing algebras. In 2010 International Kharkov Symposium on Physics and Engineering of Microwaves, Millimeter and Submillimeter Waves (MSMW). IEEE, 1-3. https://doi.org/10.1109/msmw.2010.5546157
10. Mandziy, T., Ivasenko, I., Berehulyak, O., Vorobel, R., Bembenek, M., Kryshtopa, S., & Ropyak, L. (2024). Evaluation of the Degree of Degradation of Brake Pad Friction Surfaces Using Image Processing. Lubricants, 12(5), 172. https://doi.org/10.3390/lubricants12050172
11. Ramanaiah, K., & Sridhar, S. (2015). Hardware Implementation of Artificial Neural Networks, 3, 31-34. https://doi.org/10.26634/JES.3.4.3514
12. Ivanovskii, O.V. (2004). Block Diagram of an Irregular Element (Patent of Ukraine No 2246). State Service of Intellectual Property of Ukraine. Retrieved from https://sis.nipo.gov.ua/uk/search/detail/243354
13. Melnychuk, S., Kuz, M., & Yakovyn, S. (2018). Emulation of logical functions NOT, AND, OR, and XOR with a perceptron implemented using an information entropy function. 2018 14th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering (TCSET), (pp. 878-882). https://doi.org/10.1109/TCSET.2018.8336337
14. Melnychuk, S.I., & Yakovyn, S.V. (2023). Method of Implementation of Perceptron Based on Probable Characteristics of Displaced Synaptic Signals (Patent of Ukraine No. 126753). State Service of Intellectual Property of Ukraine. Retrieved from https://sis.nipo.gov.ua/uk/search/detail/1719578
15. Ahamad, M.V., Ali, R., Naz, F., & Fatima, S. (2020). Simulation of Learning Logical Functions Using Single-Layer Perceptron. In Hu, Y.C., Tiwari, S., Trivedi, M., Mishra, K. (Eds.) Ambient Communications and Computer Systems. Advances in Intelligent Systems and Computing, 1097, (pp. 121-133). Springer, Singapore. https://doi.org/10.1007/978-981-15-1518-7_10
16. Neirotti, J. (2010). Parallel strategy for optimal learning in perceptrons. Journal of Physics A: Mathematical and Theoretical, 43, 125101. https://doi.org/10.1088/1751-8113/43/12/125101
17. Mishchenko, K. (2021). On Seven Fundamental Optimization Challenges in Machine Learning. arXiv: Optimization and Control. https://doi.org/10.25781/KAUST-NV6UI
18. Bandura, A., & Skaskiv, O. (2019). Analog of Hayman's theorem and its application to some system of linear partial differential equations. Journal of Mathematical Physics, Analysis, Geometry, 15(2), 170-191. https://doi.org/10.15407/mag15.02.170
19. Niutta, C.B., Wehrle, E.J., Duddeck, F., & Belingardi, G. (2018). Surrogate modeling in design optimization of structures with discontinuous responses. Structural and Multidisciplinary Optimization, 57, 1857-1869. https://doi.org/10.1007/S00158-018-1958-7
20. Jagtap, A.D., Kawaguchi, K., & Karniadakis, G.E. (2019). Adaptive activation functions accelerate convergence in deep and physics-informed neural networks. Journal of Computational Physics, 404, 109136. https://doi.org/10.1016/j.jcp.2019.109136
21. Fatyanosa, T.N., Sihananto, A.N., Alfarisy, G.A.F., Burhan, M.S., & Mahmudy, W.F. (2017). Hybrid Genetic Algorithm and Simulated Annealing for Function Optimization. Journal of Information Technology and Computer Science, 1(2), 82-97. https://doi.org/10.25126/jitecs.20161215
22. Maucher, M., Schöning, U., & Kestler, H.A. (2011). Search heuristics and the influence of non-perfect randomness: examining Genetic Algorithms and Simulated Annealing. Computational Statistics, 26, 303-319. https://doi.org/10.1007/S00180-011-0237-5
23. Agliari, E., Barra, A., Galluzzi, A., Guerra, F., & Moauro, F. (2012). Multitasking Associative Networks. Physical Review Letters, 109, 268101. https://doi.org/10.1103/PhysRevLett.109.268101
24. Zheng, Y., Lu, S., & Wu, R. (2018). Quantum Circuit Design for Training Perceptron Models. arXiv: Quantum Physics, 2-12. https://doi.org/10.48550/arXiv.1802.05428
25. Secco, J., & Corinto, F. (2015). Memristor-based cellular nonlinear networks with belief propagation inspired algorithm. 2015 IEEE International Symposium on Circuits and Systems (ISCAS), 1522-1525. https://doi.org/10.1109/ISCAS.2015.7168935
26. Gupta, M., Bukovský, I., Homma, N., Solo, A., & Hou, Z. (2013). Fundamentals of Higher Order Neural Networks for Modeling and Simulation, 103-133. https://doi.org/10.4018/978-1-4666-2175-6.CH006
27. Yeo, I., Gi, S.-G., Lee, B.-G., & Chu, M. (2016). Stochastic implementation of the activation function for artificial neural networks. 2016 IEEE Biomedical Circuits and Systems Conference (BioCAS), 440-443. https://doi.org/10.1109/BioCAS.2016.7833826
28. Shatskyi, I., & Perepichka, V. (2018). Problem of Dynamics of an Elastic Rod with Decreasing Function of Elastic-Plastic External Resistance. Springer Proceedings in Mathematics and Statistics, 249, 335-342. https://doi.org/10.1007/978-3-319-96601-4_30
29. Bembenek, M., Mandziy, T., Ivasenko, I., Berehulyak, O., Vorobel, R., Slobodyan, Z., & Ropyak, L. (2022). Multiclass Level-Set Segmentation of Rust and Coating Damages in Images of Metal Structures. Sensors, 22(19), 7600. https://doi.org/10.3390/s22197600
30. Bembenek, M., Makoviichuk, M., Shatskyi, I., Ropyak, L., Pritula, I., Gryn, L., & Belyakovskyi, V. (2022). Optical and Mechanical Properties of Layered Infrared Interference Filters. Sensors, 22(21), 8105. https://doi.org/10.3390/s22218105