With the deployment of 5G networks, the Internet of Things (IoT) has experienced a transformative boost, enabling higher data rates, reduced latency, and the connection of millions of devices across applications like smart cities, healthcare, and industrial automation. However, in real-world scenarios, the performance of Low-Density Parity-Check (LDPC) codes, the preferred channel coding scheme in 5G, is severely affected by noise and fading environments, particularly colored noise, which distorts signals over certain frequency bands. Colored noise introduces correlation in the interference, unlike white noise, thereby posing a challenge in decoding, especially in fading channels such as Rayleigh, Rician, and Nakagami-m. In this work, we propose a novel approach that combines the Iterative Offset Min-Sum (OMS) algorithm with a Convolutional Neural Network (CNN) to enhance LDPC decoding efficiency in 5G-enabled IoT networks. Our proposed OMS-CNN hybrid architecture addresses the limitations imposed by colored noise in fading channels by employing deep learning techniques for accurate noise estimation and mitigation. Furthermore, the OMS algorithm mitigates the overestimation of noise correction, refining the output in iterative decoding steps. Through comprehensive simulations, the OMS-CNN decoder demonstrates substantial improvements over traditional decoding approaches. Specifically, it achieves a performance enhancement of 2.7 dB at the target bit error rate (BER) across a range of fading channels. The study examines the decoder’s performance in environments characterized by Rayleigh, Rician, and Nakagami-m fading models, highlighting the robustness of the proposed solution under different channel conditions. Additionally, this research explores the influence of parameters such as the correlation coefficient of the noise, the scaling factor in the cost function, and the number of iterations between the CNN and OMS decoding steps.
The Fifth-Generation (5G) technology has revolutionized the Internet of Things (IoT) by offering high data rates, low latency, and massive device connectivity, essential for various applications1, 2–3. With support for up to one million devices per square kilometer, 5G facilitates mMTC4 and URLLC5, enabling critical use cases like remote surgery and autonomous vehicles6. Enhanced mobile broadband (eMBB) further extends bandwidth for applications like augmented reality and smart surveillance7, 8–9. LDPC codes have been officially standardized as the primary channel coding scheme in the 5G New Radio (NR) specifications, replacing previous coding schemes used in earlier wireless standards. This adoption stems from LDPC codes’ ability to deliver near-capacity error correction performance, enabling 5G networks to meet the stringent requirements of extremely high data rates, ultra-low latency, and massive connectivity demanded by modern applications. More specifically, 5G utilizes quasi-cyclic LDPC (QC-LDPC) codes, which are structured to allow for efficient hardware implementation, flexibility in code rates, and adaptability across a wide range of block lengths. These characteristics are crucial for supporting the three major 5G use cases: eMBB, which requires high throughput; Ultra-Reliable Low Latency Communications (URLLC), which demands extremely low error rates and delay; and massive Machine-Type Communications (mMTC), which involves connecting a vast number of IoT devices with diverse quality of service requirements. However, despite their advantages, the performance of 5G-enabled IoT networks is highly influenced by wireless channel characteristics, particularly fading and noise. Fading models such as Rayleigh, Rician, and Nakagami-m describe varying channel conditions, from non-line-of-sight (NLoS) to those with dominant line-of-sight (LoS) components10. These models, combined with the presence of colored noise, significantly degrade communication performance, posing challenges for LDPC decoding, especially in dense urban and industrial environments. This study examines the performance of artificial intelligence (AI)-driven LDPC decoding in next-generation 5G networks11,12, considering the effects of fading channels and colored noise. Key objectives include improving LDPC decoding in smart cities, industrial automation, healthcare, and smart agriculture by mitigating the impact of colored noise. AI plays a crucial role in unlocking and maximizing potential across a variety of fields13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25–26. The research tests two hypotheses: (1) AI-driven 5G channel decoding will improve error correction under fading and colored noise, and (2) real-time adaptive coding techniques will optimize performance by adjusting to varying channel conditions. Our contributions include:
This research’s main contribution is the development of an innovative Iterative OMS-CNN architecture tailored for decoding of LDPC codes within the 5G standard. The design targets a codeword length of N = 3808 and accommodates various code rates.
By combining channel coding with modern deep learning methodologies, we achieved a performance enhancement of 2.7 dB.
We also explored the impact of different loss functions used in CNNs on the performance of our proposed design.
The structure of the paper is as follows: Section II introduces the fundamental concepts. Section III elaborates on the methodology employed. Section IV showcases the simulation results alongside a thorough analysis. Finally, Section V presents the concluding observations.
Fundamental concepts overview
5G LDPC codes and base graph matrices (BGMs)
In the 5G New Radio (NR) specifications, Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) codes were chosen as the preferred channel coding scheme to ensure efficient data transmission, offering both high throughput and low latency27. QC-LDPC codes utilize two fundamental base graph matrices, BGM-1 and BGM-2, with 51 distinct lifting sizes Z, providing a range of code rates. Specifically, BGM-1, with dimensions 46 × 68, supports code rates from 1/3 to 8/9, while BGM-2, with dimensions 42 × 52, supports code rates from 1/5 to 2/3. The base graph matrix is expanded into the parity-check matrix (PCM) H, which defines the null space of a binary (N, K) LDPC code28. The PCM H, with dimensions M × N over GF(2), can also be represented as a bipartite Tanner graph comprising M check nodes (CNs) and N variable nodes (VNs)29,30. Decoding is performed iteratively using a message-passing algorithm on this Tanner graph31,32. In the construction of 5G LDPC codes, the base graph matrix is expanded into H using the lifting size Z, where the dimensions of H and the base graph matrix are M × N and M_b × N_b, respectively, with M = M_b·Z and N = N_b·Z. The entries in the base graph matrix can be −1, 0, or positive values, which are expanded as follows in H:
A −1 entry in the base graph matrix is replaced by a Z × Z all-zero matrix.
A 0 entry in the base graph matrix is replaced by a Z × Z identity matrix.
A positive entry p in the base graph matrix is replaced by a circulant permutation matrix I(p), created by right-shifting the rows of the Z × Z identity matrix by p positions, where the shift value p ranges from 1 to Z − 1. A small expansion sketch is given below.
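The following Python sketch makes the lifting step concrete. The toy base matrix and lifting size are illustrative choices of ours; the standardized 5G BGM-1/BGM-2 shift tables are defined in the 3GPP specification and are not reproduced here.

```python
import numpy as np

def expand_base_graph(B, Z):
    """Expand a base graph matrix B into a binary parity-check matrix H.

    Entry convention (toy illustration of the QC-LDPC lifting rule):
      -1  -> Z x Z all-zero block
       0  -> Z x Z identity block
       p>0 -> identity matrix cyclically right-shifted by p positions
    """
    mb, nb = B.shape
    H = np.zeros((mb * Z, nb * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for r in range(mb):
        for c in range(nb):
            p = B[r, c]
            if p < 0:
                continue  # all-zero block
            # np.roll along axis=1 right-shifts every row of the identity by p
            H[r*Z:(r+1)*Z, c*Z:(c+1)*Z] = np.roll(I, p % Z, axis=1)
    return H

# Tiny example base graph (NOT the standardized 5G BGM-1 shift table)
B = np.array([[ 1,  0, -1],
              [-1,  2,  0]])
H = expand_base_graph(B, Z=4)
print(H.shape)  # (8, 12)
```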
For example, BGM-1 is constructed using an LDPC code with dimensions 46 × 68, a message block length of 1232 bits, a codeword length of 3808 bits, and a base code rate of 1/3, as specified in33. Table 1 outlines the key parameters and values for this constructed BGM-1.
Table 1. Parameters and values of constructed BGM-1.

Parameter | Value
Base code rate | 1/3
Number of edges | 316
Lifting size (Z) | 56
Number of block columns (N_b) | 68
Column weights | 1 to 30
Number of block rows (M_b) | 46
Row weights | 3 to 19
Message block length (K) | 1232 bits
Codeword length (N) | 3808 bits
The 5G LDPC decoding process iteratively computes the logarithmic-likelihood ratio (LLR) of each symbol in a codeword based on the received channel symbol at the corresponding variable node. The resulting LLR values are passed along the Tanner graph, and decoding proceeds until convergence or until the maximum number of iterations is reached29.
Fading channel models in 5G-enabled IoT networks
Fading is a critical phenomenon in wireless communication systems that affects signal strength and quality over distance. It results from various factors, including path loss, obstacles, multipath propagation, and atmospheric conditions. The prominent fading models are Rayleigh, Rician, and Nakagami fading, distinguished by the presence of LoS components and the severity of fading. Rayleigh fading is used when no LoS component exists, while Rician fading includes the presence of a dominant LoS component. Nakagami fading provides a generalized approach to describing fading conditions and includes Rayleigh fading as a special case34,35. Addressing small-scale fading in 5G-enabled IoT networks is therefore critical to achieving reliable, high-quality communication in the presence of rapid signal fluctuations.
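For readers who want to reproduce the channel statistics, the short NumPy sketch below draws unit-power Rayleigh, Rician, and Nakagami-m fading samples; the chosen K-factor and m value are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # number of channel realizations

# Rayleigh: magnitude of a zero-mean complex Gaussian (unit average power)
h_rayleigh = np.abs((rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2))

# Rician: dominant LoS component plus scattered part, parameterized by the K-factor
K = 4.0                                  # assumed K-factor for illustration
s = np.sqrt(K / (K + 1))                 # LoS amplitude
sigma = np.sqrt(1 / (2 * (K + 1)))       # per-dimension scatter standard deviation
h_rician = np.abs(s + sigma * (rng.normal(size=N) + 1j * rng.normal(size=N)))

# Nakagami-m: magnitude whose square is Gamma(m, Omega/m) distributed
m, Omega = 2.0, 1.0
h_nakagami = np.sqrt(rng.gamma(shape=m, scale=Omega / m, size=N))

for name, h in [("Rayleigh", h_rayleigh), ("Rician", h_rician), ("Nakagami-m", h_nakagami)]:
    print(f"{name:10s} mean power = {np.mean(h**2):.3f}")
```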
Colored noise and fading channels in 5G IoT networks
Colored Noise and its Impact: In an elementary communication system model centered on channel coding, a message block vector m, comprising K bits, is encoded to generate a codeword c. This codeword is an N-bit vector obtained by appending parity bits to the initial message block. Utilizing Binary Phase Shift Keying (BPSK) modulation, the modulator produces a modulated symbol vector x. This vector is then transmitted through the communication channel, resulting in the received signal vector y = x + n, where n represents the noise vector added by the channel, also of length N. In all fading models, colored noise introduces correlated disturbances that can alter both the amplitude and phase of the signal, leading to an increased likelihood of bit errors. Unlike white noise, which has a flat spectral density, colored noise has frequency-dependent characteristics, making its effects more pronounced on certain parts of the signal36, 37–38. To model colored noise, an Auto-Regressive (AR) process of order 1, or AR(1), is often used. The AR(1) model expresses the noise value n_t at time step t as:
$n_t = a\,n_{t-1} + w_t \qquad (1)$
where a is the autoregressive correlation coefficient, and w_t is the white noise component with zero mean and constant variance σ_w². This model captures the correlation between noise samples, which is critical in understanding how colored noise affects the signal. The variance of the noise process, σ_n², is given by:
$\sigma_n^{2} = \dfrac{\sigma_w^{2}}{1 - a^{2}} \qquad (2)$
and the covariance between two noise samples n_i and n_j is:
$\operatorname{Cov}(n_i, n_j) = \sigma_n^{2}\, a^{|i-j|} \qquad (3)$
The covariance matrix C for the noise process is defined as:
$\mathbf{C} = \sigma_n^{2}\begin{bmatrix} 1 & a & a^{2} & \cdots & a^{N-1} \\ a & 1 & a & \cdots & a^{N-2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a^{N-1} & a^{N-2} & a^{N-3} & \cdots & 1 \end{bmatrix} \qquad (4)$
where the covariance between noise samples decreases exponentially with the distance between their time indices. This covariance matrix is symmetric, positive definite (for |a| < 1), and has a Toeplitz structure, which means that the elements along each diagonal are constant. These properties make C suitable for efficient computation, which is crucial for LLR calculations in signal decoding.
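The AR(1) model and its Toeplitz covariance can be checked numerically with a few lines of Python. The sketch below generates stationary AR(1) noise and compares the empirical covariance against the analytical matrix of Eq. (4); the parameter values are illustrative.

```python
import numpy as np
from scipy.linalg import toeplitz

def ar1_noise(n_samples, a, sigma_w, rng):
    """Generate AR(1) colored noise n_t = a*n_{t-1} + w_t (stationary start)."""
    w = rng.normal(scale=sigma_w, size=n_samples)
    n = np.empty(n_samples)
    n[0] = w[0] / np.sqrt(1 - a**2)        # draw the first sample from the stationary variance
    for t in range(1, n_samples):
        n[t] = a * n[t - 1] + w[t]
    return n

def ar1_covariance(n_samples, a, sigma_w):
    """Toeplitz covariance C[i, j] = sigma_n^2 * a**|i-j| of the AR(1) process."""
    sigma_n2 = sigma_w**2 / (1 - a**2)
    return toeplitz(sigma_n2 * a ** np.arange(n_samples))

rng = np.random.default_rng(1)
a, sigma_w, N = 0.9, 1.0, 8
C = ar1_covariance(N, a, sigma_w)
samples = np.stack([ar1_noise(N, a, sigma_w, rng) for _ in range(50_000)])
print(np.allclose(np.cov(samples.T), C, atol=0.2))  # empirical vs analytical covariance
```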
In 5G IoT networks, the interaction between different types of fading channels and colored noise is critical in determining communication performance. It affects signal quality, decoding accuracy, and reliability, making it crucial to develop robust signal processing techniques that mitigate these effects. The interaction between fading types—Rayleigh, Rician, and Nakagami-m—and colored noise introduces unique challenges for 5G networks.
Rayleigh fading: common in non-line-of-sight environments, leads to significant signal degradation when combined with colored noise, which emphasizes certain frequency components. An AR(1)-based noise model analyzes these effects by accounting for noise correlation.
Rician fading: with a line-of-sight path characterized by the K-factor, offers more stability but still experiences phase and amplitude distortions due to colored noise, especially at lower K-factors. Noise covariance models can improve decoding accuracy.
Nakagami-m fading: provides flexibility across varying channel conditions. Lower m values suffer more from noise, while higher m values show greater resilience, though colored noise still requires careful spectrum modeling for optimal performance.
In summary, the interaction between Rayleigh, Rician, and Nakagami-m fading with colored noise introduces unique challenges in 5G IoT networks.
Calculating LLR when colored noise is present in fading channels
In wireless communication systems, the presence of fading (Rayleigh, Rician, or Nakagami-m) combined with colored noise significantly complicates the signal decoding process. The derivation of the Log-Likelihood Ratio (LLR) under these conditions follows a similar structure for the different fading models. Below, we outline the common steps for all three fading models, followed by the specific differences in each sub-subsection. Common Steps for LLR Calculation in the Presence of Colored Noise.
Step 1: System Model with Fading and Colored Noise: In all fading models, the received signal y_t at time t in the presence of fading and colored noise is represented as:
$y_t = h_t\, x_t + n_t \qquad (5)$
The system model considers a transmitted symbol x_t, which is modulated using binary phase-shift keying (BPSK). The received signal y_t is influenced by h_t, a fading coefficient that captures multi-path propagation effects, with common channel models including Rayleigh (for non-line-of-sight environments), Rician (for dominant line-of-sight components), and Nakagami-m (for flexible fading severity). Additionally, the additive noise n_t is colored rather than white, modeled as a first-order auto-regressive (AR(1)) process, where current noise values depend linearly on previous ones, introducing temporal correlation.
Step 2: Modeling the Received Signal Vector with Colored Noise: Considering N time samples, the received signal vector can be modeled as:
$\mathbf{y} = \boldsymbol{\Lambda}\mathbf{x} + \mathbf{n} \qquad (6)$
The system model is expressed in vector-matrix notation, where the received symbols are represented as a vector y. The fading effects are captured by a diagonal matrix Λ = diag(h_1, …, h_N), whose entries correspond to channel coefficients modeled as Rayleigh, Rician, or Nakagami-m distributions, depending on the propagation environment. The transmitted symbols form the vector x, containing BPSK-modulated values (±1). The additive colored noise vector n incorporates temporal correlation through its covariance matrix C, derived from the AR(1) process assumptions.
Step 3: Likelihood Functions for LLR Calculation with Colored Noise: The goal of the LLR calculation is to compute the conditional probabilities of the received signal vector y given each possible transmitted symbol vector x. The likelihood function for the received signal given the transmitted signal and colored noise is expressed as:
$p(\mathbf{y}\mid\mathbf{x}) = \dfrac{1}{\pi^{N}\det(\mathbf{C})}\exp\!\Big(-(\mathbf{y}-\boldsymbol{\Lambda}\mathbf{x})^{H}\,\mathbf{C}^{-1}\,(\mathbf{y}-\boldsymbol{\Lambda}\mathbf{x})\Big) \qquad (7)$
Here, $\mathbf{C}^{-1}$ denotes the inverse of the noise covariance matrix C. The superscript H represents the conjugate transpose of a matrix, which takes into account the complex nature of the fading and noise components. Additionally, det(C) refers to the determinant of the noise covariance matrix C.
Step 4: LLR Expression: The LLR for a specific transmitted symbol x_t is defined as the logarithm of the ratio of the likelihoods of the received signal vector conditioned on x_t = +1 and x_t = −1:
$L(x_t) = \ln\dfrac{p(\mathbf{y}\mid x_t = +1)}{p(\mathbf{y}\mid x_t = -1)} \qquad (8)$
Substituting the likelihood functions into the LLR definition and simplifying the expression, we obtain (with $\mathbf{x}^{(+)}$ and $\mathbf{x}^{(-)}$ denoting the transmitted vector under the hypotheses $x_t = +1$ and $x_t = -1$, respectively):
$L(x_t) = \big(\mathbf{y}-\boldsymbol{\Lambda}\mathbf{x}^{(-)}\big)^{H}\mathbf{C}^{-1}\big(\mathbf{y}-\boldsymbol{\Lambda}\mathbf{x}^{(-)}\big) - \big(\mathbf{y}-\boldsymbol{\Lambda}\mathbf{x}^{(+)}\big)^{H}\mathbf{C}^{-1}\big(\mathbf{y}-\boldsymbol{\Lambda}\mathbf{x}^{(+)}\big) \qquad (9)$
By simplifying the logarithmic expression further, we arrive at the final expression for the LLR:
$L(x_t) = 4\,\Re\!\left\{ h_t^{*}\,\big[\mathbf{C}^{-1}\mathbf{y}\big]_t \right\} \qquad (10)$
The specific characteristics of the fading model (Rayleigh, Rician, or Nakagami-m) will affect the structure of the fading coefficients and the corresponding performance in the presence of colored noise. The remaining steps for each fading model are outlined below.
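The role of the covariance matrix in the likelihood of Eq. (7) can be illustrated with a small brute-force example. The sketch below computes exact per-bit LLRs for a short real-valued block by enumerating all BPSK candidates, and compares them with a simple whitening-style per-symbol approximation; the approximation, the real-valued setup, and all parameter values are our own illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from itertools import product
from scipy.special import logsumexp

rng = np.random.default_rng(2)
N, a, sigma_w = 8, 0.9, 1.0

# AR(1) noise covariance C[i, j] = sigma_n^2 * a**|i-j|
sigma_n2 = sigma_w**2 / (1 - a**2)
C = sigma_n2 * a ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
Cinv = np.linalg.inv(C)

h = rng.rayleigh(scale=np.sqrt(0.5), size=N)        # Rayleigh fading magnitudes (unit power)
x = rng.choice([-1.0, 1.0], size=N)                 # BPSK symbols
n = rng.multivariate_normal(np.zeros(N), C)         # correlated Gaussian noise
y = h * x + n                                       # received vector (real-valued analogue of Eq. 6)

def log_lik(xvec):
    """Log-likelihood of y given candidate x, up to a constant (real Gaussian case)."""
    r = y - h * np.asarray(xvec)
    return -0.5 * r @ Cinv @ r

# Exact per-bit LLRs by marginalizing over all 2^N candidate BPSK vectors
cands = np.array(list(product([-1.0, 1.0], repeat=N)))
ll = np.array([log_lik(c) for c in cands])
llr_exact = np.array([logsumexp(ll[cands[:, t] == +1]) - logsumexp(ll[cands[:, t] == -1])
                      for t in range(N)])

# Simple whitening-style per-symbol approximation that ignores inter-symbol coupling
llr_approx = 2.0 * h * (Cinv @ y)

print("tx bits        :", (x > 0).astype(int))
print("exact LLR sign :", (llr_exact > 0).astype(int))
print("approx LLR sign:", (llr_approx > 0).astype(int))
```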
LLR calculation in the presence of colored noise from the Rayleigh distribution’s PDF
Rayleigh fading is used to model environments where severe multi-path scattering occurs without a direct LoS component. The probability density function (PDF) for the Rayleigh-distributed fading coefficient is given by:
$f(x) = \dfrac{2x}{\Omega}\exp\!\left(-\dfrac{x^{2}}{\Omega}\right), \quad x \ge 0 \qquad (11)$
Here, x denotes the magnitude of the fading coefficient h, while Ω represents the average power of the fading envelope. In Rayleigh fading, the LLR expression remains sensitive to the strong temporal correlations introduced by the colored noise. Since there is no LoS component, the signal is more affected by scattering, resulting in greater dependence on the noise covariance matrix C.
LLR calculation in the presence of colored noise from the Rician distribution’s PDF
Rician fading models environments where the received signal is affected by both a LoS component and multiple scattered paths. The PDF for the magnitude of the Rician fading coefficient is:
$f(x) = \dfrac{x}{\sigma^{2}}\exp\!\left(-\dfrac{x^{2}+A^{2}}{2\sigma^{2}}\right) I_{0}\!\left(\dfrac{Ax}{\sigma^{2}}\right), \quad x \ge 0 \qquad (12)$
Here, A represents the amplitude of the line-of-sight (LoS) component, σ² denotes the variance of the scattered components, and I₀(·) is the modified Bessel function of the first kind and zero order. The presence of the LoS component in Rician fading reduces the system’s sensitivity to the noise correlation, but colored noise still affects overall performance. The LoS component offers a level of stability that is absent in Rayleigh fading, making Rician fading more resilient to colored noise, especially in environments where the LoS is dominant.
LLR calculation in the presence of colored noise from the Nakagami-m distribution’s PDF
Nakagami-m fading is a generalized fading model that can describe a broad range of fading environments, from severe (Rayleigh-like) to mild (better than Rician fading). The PDF for the Nakagami-m fading coefficient is given by:
$f(x) = \dfrac{2m^{m}}{\Gamma(m)\,\Omega^{m}}\,x^{2m-1}\exp\!\left(-\dfrac{m x^{2}}{\Omega}\right), \quad x \ge 0 \qquad (13)$
Here, m is the shape parameter that controls the severity of fading, Ω is the spread parameter representing the average power of the fading envelope, and Γ(·) denotes the Gamma function. The Nakagami-m distribution’s flexibility allows it to model fading environments that range from highly scattered (Rayleigh-like) to environments with significant LoS components (Rician-like). The ability to tune the shape parameter m allows for optimization in various conditions, with the impact of colored noise varying depending on the chosen m value.
Table 2. Comparison of fading models and LLR analysis across various fading channels.

Aspect | Rayleigh fading | Rician fading | Nakagami-m fading
Line-of-sight (LoS) | No LoS component | LoS + scattered paths | Can model both LoS and non-LoS
Distribution | Rayleigh distribution | Rician distribution | Nakagami-m distribution
Parameters | x: magnitude of the fading coefficient; Ω: average power of the fading envelope | A: amplitude of the LoS component; σ²: variance of the scattered components; I₀(·): modified Bessel function of the first kind and zero order | m: shape parameter controlling severity of fading; Ω: spread parameter representing the average power of the fading envelope; Γ(·): Gamma function
Amplitude distribution | Rayleigh distribution | Shifted due to LoS | Controlled by shape parameter m
Channel characteristics | No LoS; purely scattered paths | Presence of LoS and scattered paths; more stable compared to Rayleigh | Generalized model; adaptable from severe (Rayleigh-like) to mild (better than Rician)
Typical scenarios | Dense urban areas with heavy scattering | Environments with strong LoS (e.g., rural) | Varying environments (severe to mild fading)
LLR expression | Same general form as Eq. (10) | Same general form as Eq. (10) | Same general form as Eq. (10)
Impact of colored noise | Strongly impacted by multi-path scattering and temporal noise correlation | Less sensitivity to colored noise due to LoS component, but correlation still affects performance | Flexible fading model; handles wide range of fading conditions and noise correlation effects
Fading coefficient | Magnitude follows the PDF of Eq. (11) | Magnitude follows the PDF of Eq. (12) | Magnitude follows the PDF of Eq. (13)
Table 2 provides a concise comparison of Rayleigh, Rician, and Nakagami-m fading models, emphasizing their key characteristics, parameters, and LLR expressions. Rayleigh fading assumes no LoS and is modeled by the Rayleigh distribution, while Rician fading accounts for both LoS and scattered paths with the Rician distribution. Nakagami-m is the most flexible, capable of modeling both LoS and non-LoS environments through the Nakagami-m distribution and its tunable shape parameter m. The table outlines critical parameters for each model: Rayleigh fading depends on the average power Ω, Rician fading incorporates the LoS component amplitude A and scattered variance σ², and Nakagami-m fading introduces the shape parameter m for greater adaptability. All models share a similar LLR expression, though their sensitivity to colored noise varies. Rayleigh is most affected by temporal noise correlation due to the absence of LoS, while Rician provides more stability due to the LoS path, and Nakagami-m offers the greatest flexibility by adjusting m. The PDFs for each model reflect these differences in fading behavior, highlighting the need to choose the appropriate model to optimize communication performance in the presence of colored noise.
Offset Min-Sum (OMS) algorithm
The Offset Min-Sum (OMS) algorithm is an enhanced version of the Min-Sum algorithm used for decoding LDPC codes. OMS introduces an offset value β to mitigate overestimation errors by directly reducing the magnitude of the messages exchanged between nodes39,40. The following section outlines the algorithm’s notations, steps, and key parameters. The notations used are as follows: $y_i$ represents the received value for variable node (VN) i; $L_{i \to j}$ denotes the log-likelihood ratio (LLR) sent from VN i to check node (CN) j, while $L_{j \to i}$ represents the LLR sent from CN j to VN i. The set N(i) consists of all CNs connected to VN i, and M(j) is the set of all VNs connected to CN j. Lastly, β is the offset value employed to reduce overestimation. The Algorithm Steps:
Initialization: Each variable node (VN) i is initialized using the received channel value $y_i$. The initial log-likelihood ratio (LLR) for each VN is calculated as
$L_{i}^{(0)} = \dfrac{2\,\Re(y_i)}{\sigma^{2}} \qquad (14)$
where ℜ(·) denotes the real part and σ² is the noise variance.
Check Node Update: For each iteration t, the check node (CN) j calculates the outgoing LLR to each connected VN i as
$L_{j \to i}^{(t)} = \left(\prod_{i' \in M(j)\setminus i} \operatorname{sign}\!\big(L_{i' \to j}^{(t-1)}\big)\right)\cdot \max\!\left(\min_{i' \in M(j)\setminus i}\big|L_{i' \to j}^{(t-1)}\big| - \beta,\; 0\right) \qquad (15)$
where the offset value β controls the reduction of outgoing messages to prevent overestimation.
Codeword Decision: A hard decision for each bit $\hat{c}_i$ is made based on the accumulated LLR $L_i^{(t)}$ as
$L_i^{(t)} = L_i^{(0)} + \sum_{j \in N(i)} L_{j \to i}^{(t)}, \qquad \hat{c}_i = \begin{cases} 0, & L_i^{(t)} \ge 0 \\ 1, & L_i^{(t)} < 0 \end{cases} \qquad (16)$
The decoding process stops if $\hat{\mathbf{c}}\,\mathbf{H}^{T} = \mathbf{0}$ (i.e., if the codeword satisfies all parity checks) or if the maximum number of iterations is reached.
Variable Node Update: The LLR message from VN i to CN j is updated according to
$L_{i \to j}^{(t)} = L_i^{(0)} + \sum_{j' \in N(i)\setminus j} L_{j' \to i}^{(t)} \qquad (17)$
Then, the process proceeds to the next iteration starting again from the CN update step.
The offset value β plays a crucial role in controlling the magnitude of outgoing messages. Typically, β is chosen based on empirical studies and ranges between 0.05 and 0.5. A smaller β helps reduce overestimation but may slow down the convergence of the algorithm. Conversely, a larger β increases the message magnitude, making the algorithm behave more like the standard Min-Sum algorithm, which carries the risk of overestimation. Like the Normalized Min-Sum algorithm, OMS mitigates overestimation issues, but it does so by introducing an offset instead of a normalization factor. Both algorithms improve upon the performance of the standard Min-Sum algorithm by adjusting the magnitude of the messages exchanged between check and variable nodes, but the OMS algorithm directly reduces the messages with an offset.
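A minimal, illustrative OMS decoder that follows the steps above is sketched below for a toy parity-check matrix; it is not the optimized 5G BGM-1 implementation used in the paper, and the example channel LLRs are arbitrary values chosen for demonstration.

```python
import numpy as np

def oms_decode(H, llr_ch, beta=0.5, max_iter=15):
    """Minimal Offset Min-Sum decoder sketch for a small binary parity-check matrix H.

    llr_ch : channel LLRs (positive values favor bit 0)
    beta   : offset subtracted from the minimum magnitude at the check nodes
    """
    M, N = H.shape
    rows, cols = np.nonzero(H)
    v2c = llr_ch[cols].astype(float)            # VN -> CN messages, one per edge
    c2v = np.zeros_like(v2c)                    # CN -> VN messages

    for _ in range(max_iter):
        # --- check-node update with offset (Eq. 15) ---
        for j in range(M):
            e = np.where(rows == j)[0]
            mag, sgn = np.abs(v2c[e]), np.sign(v2c[e])
            sgn[sgn == 0] = 1.0
            prod_sign = np.prod(sgn)
            for k, edge in enumerate(e):
                others = np.delete(mag, k)
                c2v[edge] = (prod_sign * sgn[k]) * max(others.min() - beta, 0.0)
        # --- accumulate LLRs, take hard decisions, check the syndrome (Eq. 16) ---
        total = llr_ch.copy()
        np.add.at(total, cols, c2v)
        hard = (total < 0).astype(int)
        if not np.any(H @ hard % 2):             # all parity checks satisfied
            return hard, total
        # --- variable-node (extrinsic) update (Eq. 17) ---
        for i in range(N):
            e = np.where(cols == i)[0]
            v2c[e] = total[i] - c2v[e]
    return hard, total

# Toy example: a small (7, 4) parity-check matrix with illustrative channel LLRs
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.1, -0.8, 1.5, 0.3, 1.9, -2.2, 0.7])
bits, _ = oms_decode(H, llr)
print("decoded bits:", bits)
```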
Proposed methodology
This section provides an in-depth explanation of the proposed OMS-CNN architecture, illustrated in Fig. 1. The role of the custom cost function, specifically designed for CNN optimization, and the architecture itself are discussed based on the methodology presented in41.
Fig. 1. Proposed OMS-CNN architecture.
OMS-CNN design flowchart
Fig. 2. Flowchart illustrating the OMS-CNN design process.
The OMS-CNN design process is outlined in Fig. 2, commencing at the receiver end of the communication system. The procedure involves the following sequential steps:
Signal Reception and LLR Calculation: After receiving the signal y, the log-likelihood ratios (LLRs) corresponding to the received symbols are computed.
OMS Decoder Input: The computed LLR values are then input into the OMS decoder, which generates an estimate of the transmitted symbol vector, $\hat{\mathbf{x}}$.
Channel Noise Estimation: The channel noise estimated from the received signal and the OMS output, denoted $\hat{\mathbf{n}}$, may differ from the actual noise $\mathbf{n}$ due to decoding errors. The relationship between the actual and estimated noise can be expressed as $\hat{\mathbf{n}} = \mathbf{n} + \boldsymbol{\xi}$, where $\boldsymbol{\xi}$ represents the noise estimation error.
CNN Processing: The estimated noise, $\hat{\mathbf{n}}$, is then processed by a trained CNN. The CNN leverages the inherent correlation in the channel noise to effectively suppress the error component $\boldsymbol{\xi}$, producing an improved noise estimate, $\tilde{\mathbf{n}}$.
Signal Adjustment: The CNN-generated output $\tilde{\mathbf{n}}$ is subtracted from the received signal y, resulting in an updated signal vector $\tilde{\mathbf{y}}$, expressed as:
$\tilde{\mathbf{y}} = \mathbf{y} - \tilde{\mathbf{n}} = \boldsymbol{\Lambda}\mathbf{x} + (\mathbf{n} - \tilde{\mathbf{n}}) = \boldsymbol{\Lambda}\mathbf{x} + \mathbf{r} \qquad (18)$
where $\mathbf{r} = \mathbf{n} - \tilde{\mathbf{n}}$ denotes the residual noise after CNN processing.
OMS Decoding: The updated signal vector $\tilde{\mathbf{y}}$ undergoes a second round of decoding via the OMS decoder. Before this step, the LLRs are recomputed from $\tilde{\mathbf{y}}$ to obtain the post-CNN-processing LLR values.
The characteristics of the residual noise $\mathbf{r}$ significantly influence the updated LLR computation and the overall performance of the subsequent OMS decoding. Therefore, it is essential to train the CNN to minimize the residual noise by accurately estimating the channel noise. This is achieved using a custom cost function, as outlined in (22). By structuring the OMS-CNN process in this manner, the proposed methodology enhances channel noise estimation accuracy, leading to improved decoding performance in noisy communication environments.
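To make the data flow explicit, the sketch below strings the steps above into one receive loop. The helper names (oms_decode, cnn_model), the whitening-style LLR computation, and the default of four outer iterations are our own assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def oms_cnn_receiver(y, h, C_inv, H, cnn_model, oms_decode, n_outer=4, beta=0.5):
    """Sketch of the iterative OMS-CNN loop (hypothetical helper names).

    y, h      : received vector and fading coefficients
    C_inv     : inverse noise covariance used for the LLR computation
    cnn_model : trained 1-D CNN that refines a noise estimate (e.g., a Keras model)
    oms_decode: an OMS decoder returning (hard_bits, accumulated_llrs)
    """
    y_cur = y.copy()
    for _ in range(n_outer):
        llr = 2.0 * h * (C_inv @ y_cur)                 # LLRs for the (possibly cleaned) signal
        bits, _ = oms_decode(H, llr, beta=beta)
        x_hat = 1.0 - 2.0 * bits                        # re-modulate the hard decisions (BPSK)
        n_hat = y - h * x_hat                           # OMS-based channel-noise estimate
        n_tilde = cnn_model.predict(n_hat[None, :, None])[0, :, 0]  # CNN-refined estimate
        y_cur = y - n_tilde                             # subtract the refined noise (Eq. 18)
    return bits
```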
CNN structure for noise estimation
Fig. 3. Proposed CNN structure.
The proposed OMS-CNN architecture employs a 1-D CNN, a specialized version of the traditional CNN, designed for processing one-dimensional data42. Figure 3 illustrates the detailed structure, including the number of layers, kernel sizes, and feature maps for each layer. Before training the network, specific parameters are initialized. Unlike conventional CNN architectures, this design does not utilize fully connected, dropout, or max pooling layers, as the input and output sizes remain identical, maintaining a consistent representation throughout. The CNN consists of four convolutional layers. Each 1-D convolutional layer applies filters to the input data, which is represented as a one-dimensional sequence. After OMS decoding, the estimated channel noise $\hat{\mathbf{n}}$ is fed into the CNN as a 1-D vector of length N. Each convolutional layer contains several learnable filters (kernels), which are small segments of weights that move along the input data, performing element-wise multiplication and summation. This operation extracts specific features from the input, producing a new output array called a feature map. Each convolutional layer typically has multiple filters, allowing it to detect different features or patterns. The feature map of the k-th filter in the l-th layer, denoted as $\mathbf{F}_k^{(l)}$, is computed as follows:
First Layer:
$\mathbf{F}_k^{(1)} = \operatorname{ReLU}\!\big(\mathbf{w}_k^{(1)} * \hat{\mathbf{n}} + b_k^{(1)}\big) \qquad (19)$
where $\mathbf{w}_k^{(1)}$ is the k-th kernel of the first layer, and $b_k^{(1)}$ is the corresponding bias.
Intermediate Layers: For subsequent layers, the feature map is computed as:
$\mathbf{F}_k^{(l)} = \operatorname{ReLU}\!\Big(\sum_{m} \mathbf{w}_{k,m}^{(l)} * \mathbf{F}_m^{(l-1)} + b_k^{(l)}\Big) \qquad (20)$
where $\mathbf{F}_m^{(l-1)}$ is the m-th feature map output from the previous layer.
Final Layer: In the last layer, the estimated channel noise $\tilde{\mathbf{n}}$ is computed as:
$\tilde{\mathbf{n}} = \sum_{m} \mathbf{w}_{m}^{(L)} * \mathbf{F}_m^{(L-1)} + b^{(L)} \qquad (21)$
where $\mathbf{w}_{m}^{(L)}$ and $b^{(L)}$ represent the kernels and bias of the final layer.
In these expressions, $\mathbf{w}_k^{(l)}$ is the k-th kernel in the l-th layer, and $b_k^{(l)}$ is the corresponding bias. After applying convolution, a non-linear activation function, typically the Rectified Linear Unit (ReLU)43, is used. The output of each convolutional layer is a set of feature maps that indicate the presence of specific patterns or features detected by the filters. This convolution operation is repeated for each filter in the layer, producing multiple feature maps that capture various aspects of the input data.
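A compact Keras sketch of a network with these layer sizes is given below. It follows the kernel sizes {59, 24, 12, 16} and feature-map counts {64, 32, 16, 1} reported later in Table 3; the use of 'same' padding and a linear output layer are our assumptions, made so that the output length matches the input length as described in the text.

```python
import tensorflow as tf

def build_noise_cnn(n=3808):
    """1-D CNN sketch for noise estimation, following the reported layer sizes."""
    inp = tf.keras.Input(shape=(n, 1), name="estimated_noise")
    x = tf.keras.layers.Conv1D(64, 59, padding="same", activation="relu",
                               kernel_initializer="he_normal")(inp)
    x = tf.keras.layers.Conv1D(32, 24, padding="same", activation="relu",
                               kernel_initializer="he_normal")(x)
    x = tf.keras.layers.Conv1D(16, 12, padding="same", activation="relu",
                               kernel_initializer="he_normal")(x)
    # Linear output so the network can produce negative noise values (our assumption)
    out = tf.keras.layers.Conv1D(1, 16, padding="same", activation=None,
                                 kernel_initializer="he_normal")(x)
    return tf.keras.Model(inp, out)

model = build_noise_cnn()
model.summary()
```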
Proposed custom cost function for fading channels
In this section, we propose a custom cost function tailored specifically for Rician, Nakagami-m, and Rayleigh fading channels. This cost function is designed to optimize the CNN’s performance by considering the characteristics of these fading channels, which introduce both random amplitude and phase variations. The proposed cost function is formulated as:
$J = \dfrac{1}{N}\left\lVert \mathbf{n} - \tilde{\mathbf{n}} \right\rVert^{2} + \lambda\left(S^{2} + \big(K - K_{\text{fading}}\big)^{2}\right) + \gamma\,\operatorname{Var}(|h|) \qquad (22)$
Here, $\lVert\mathbf{r}\rVert^{2}$ is the squared norm of the residual noise, defined as $\mathbf{r} = \mathbf{n} - \tilde{\mathbf{n}}$, where $\mathbf{n}$ is the actual channel noise and $\tilde{\mathbf{n}}$ is the noise estimated by the CNN. The parameter N represents the length of the codeword, and λ is a scaling factor. The skewness S of the estimated noise is computed as
$S = \dfrac{\frac{1}{N}\sum_{i=1}^{N}\big(r_i - \bar{r}\big)^{3}}{\Big(\frac{1}{N}\sum_{i=1}^{N}\big(r_i - \bar{r}\big)^{2}\Big)^{3/2}} \qquad (23)$
where $r_i$ is the i-th residual noise sample and $\bar{r}$ is the mean of the residual noise. The kurtosis K of the estimated noise is calculated by
$K = \dfrac{\frac{1}{N}\sum_{i=1}^{N}\big(r_i - \bar{r}\big)^{4}}{\Big(\frac{1}{N}\sum_{i=1}^{N}\big(r_i - \bar{r}\big)^{2}\Big)^{2}} \qquad (24)$
The theoretical kurtosis $K_{\text{fading}}$ specific to the fading channel depends on the fading type: for Rayleigh fading, it equals the kurtosis of the Rayleigh distribution; for Nakagami-m fading, it is
25
where m is the shape parameter of the Nakagami distribution; and for Rician fading, it depends on the Rician K-factor and can be approximated as
26
Finally, γ is a regularization parameter, and Var(|h|) denotes the variance of the fading amplitude |h|, capturing the variation due to the fading channel.
Components of the cost function:
Residual Noise Minimization:
$\dfrac{1}{N}\left\lVert \mathbf{n} - \tilde{\mathbf{n}} \right\rVert^{2} \qquad (27)$
This term minimizes the average residual noise power, encouraging the CNN to accurately estimate the channel noise.
Normality Regularization:
$\lambda\left(S^{2} + \big(K - K_{\text{fading}}\big)^{2}\right) \qquad (28)$
This term regularizes the skewness and kurtosis of the noise distribution44. Unlike the AWGN channel, where the target is a normal distribution, the kurtosis is adjusted to match the fading channel:
For Rayleigh fading, $K_{\text{fading}}$ is the kurtosis of the Rayleigh distribution.
For Nakagami-m fading, $K_{\text{fading}}$ depends on the fading parameter m.
For Rician fading, $K_{\text{fading}}$ is determined by the Rician K-factor.
Fading Amplitude Regularization:
$\gamma\,\operatorname{Var}(|h|) \qquad (29)$
This term penalizes large variations in the fading amplitude |h|. The variance of |h| quantifies how much the amplitude deviates from its mean. Minimizing this term encourages more accurate estimation of the fading amplitude.
Steps for implementing the custom cost function:
Compute Residual Noise: Calculate the difference between the actual and estimated noise for each sample.
Calculate Skewness and Kurtosis: Compute the skewness and kurtosis of the estimated noise, adjusting based on the specific fading channel (Rayleigh, Nakagami-m, or Rician).
Compute Fading Amplitude Variance: Estimate the fading amplitude and compute its variance.
Incorporate into Cost Function: Apply the defined cost function to compute the loss during each training step.
Optimize: Use an Adam optimization algorithm to minimize the residual noise and optimize the network weights during training.
Advantages of the proposed cost function:
Channel-Specific Adaptation: By adjusting the kurtosis to match the characteristics of the fading channel, the cost function is more suitable for handling the variability introduced by Rician, Nakagami-m, and Rayleigh channels.
Fading Amplitude Control: The inclusion of fading amplitude variance regularization helps improve the accuracy of signal estimation under fading conditions.
Normality Enforcement: Regularizing skewness and kurtosis ensures the estimated noise distribution remains well-behaved, leading to improved performance in noise-affected environments.
This custom cost function is designed to improve noise estimation and decoding performance under realistic fading conditions encountered in wireless communication systems, making it suitable for Rician, Nakagami-m, and Rayleigh fading channels.
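As an illustration of how such a cost can be wired into training, the TensorFlow sketch below combines the three ingredients discussed above: residual-noise power, skewness/kurtosis regularization toward a channel-specific kurtosis, and an optional fading-amplitude variance penalty. The weights, the target kurtosis value, and the exact functional form are illustrative assumptions of ours, not the paper's tuned settings.

```python
import tensorflow as tf

def make_fading_cost(lam=0.3, gamma=0.1, k_fading=3.0, fading_amp=None):
    """Return a Keras-compatible loss with the structure of Eq. (22) (illustrative values)."""
    def cost(n_true, n_est):
        r = n_true - n_est                                    # residual noise
        mean = tf.reduce_mean(r, axis=1, keepdims=True)
        var = tf.reduce_mean(tf.square(r - mean), axis=1)
        m3 = tf.reduce_mean(tf.pow(r - mean, 3), axis=1)
        m4 = tf.reduce_mean(tf.pow(r - mean, 4), axis=1)
        skew = m3 / tf.pow(var + 1e-12, 1.5)                  # sample skewness, cf. Eq. (23)
        kurt = m4 / tf.square(var + 1e-12)                    # sample kurtosis, cf. Eq. (24)
        loss = tf.reduce_mean(tf.square(r), axis=1)           # residual power term, cf. Eq. (27)
        loss += lam * (tf.square(skew) + tf.square(kurt - k_fading))  # cf. Eq. (28)
        if fading_amp is not None:                            # optional amplitude term, cf. Eq. (29)
            loss += gamma * tf.math.reduce_variance(fading_amp)
        return tf.reduce_mean(loss)
    return cost

# Example wiring with the CNN sketch shown earlier (assumed, not the paper's setup):
# model.compile(optimizer=tf.keras.optimizers.Adam(), loss=make_fading_cost(lam=0.3))
```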
Noise sample generation for CNN training
In this section, we present a method for generating realistic noise samples tailored for CNN training in fading channels. The noise generation process is crucial for training the network to estimate and mitigate noise effects in different environments. The general formula for generating noise samples is given by:
$\mathbf{n} = \mathbf{A}\,\mathbf{v} \qquad (30)$
Here, $\mathbf{n}$ denotes the noise sample vector, $\mathbf{A}$ represents the channel correlation matrix that models the correlation properties of the noise, and $\mathbf{v}$ is a vector of specific fading channel samples. The matrix $\mathbf{A}$ encapsulates how noise samples are correlated due to the communication channel’s characteristics. These noise samples are then used to train the CNN, allowing it to learn and adjust to the specific noise conditions. By exposing the network to these noise samples, the CNN becomes better equipped to estimate and suppress noise effectively, leading to enhanced performance in signal decoding. This method enables the network to handle various noise patterns, contributing to improved accuracy and reliability in wireless communication systems.
Convolutional processing in CNNs: focus on noise patterns
In Convolutional Neural Networks (CNNs), the convolution operation on colored noise data involves sliding a filter (kernel) over the input noise vector. At each step, the dot product between the filter’s weights and the corresponding segment of the input data is computed. Let the filter contain weights $w_0, w_1, \ldots, w_{k-1}$ of size k; at time step t, it covers the segment $[n_t, n_{t+1}, \ldots, n_{t+k-1}]$. The convolution at this point is given by:
$z_t = \sum_{i=0}^{k-1} w_i\, n_{t+i} + b \qquad (31)$
The resulting value $z_t$ forms one entry in the feature map. Following the convolution, the ReLU activation function is applied:
$a_t = \operatorname{ReLU}(z_t) = \max(0,\, z_t) \qquad (32)$
This process of convolution followed by ReLU activation enables detection of regions with strong autocorrelation in the time series. For an auto-regressive (AR) series with a correlation coefficient close to 1, consecutive values are likely similar and positive. If the filter weights are designed to recognize such patterns (e.g., positive weights), the convolution will produce a significant positive value when $n_t$ and $n_{t+1}$ are both positive, which will pass through the ReLU. As a result, $a_t$ will be large in areas with high positive autocorrelation, highlighting those regions in the feature map.
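A tiny NumPy sketch of this effect is shown below; the averaging filter and the AR coefficient of 0.95 are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(3)
a = 0.95
w = rng.normal(size=200)
n = np.empty_like(w)
n[0] = w[0]
for t in range(1, len(w)):              # AR(1) series with strong positive correlation
    n[t] = a * n[t - 1] + w[t]

k = 5
filt = np.ones(k) / k                   # all-positive filter weights (a smoothing kernel)
z = np.convolve(n, filt, mode="valid")  # sliding dot product, cf. Eq. (31)
feature_map = np.maximum(z, 0.0)        # ReLU, cf. Eq. (32)

# The feature map is largest where consecutive samples are similar and positive,
# i.e., in regions of strong positive autocorrelation.
top = np.argsort(feature_map)[-3:]
print("strongest responses at positions:", np.sort(top))
```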
Simulation findings and their analysis
Simulation setup and training methodology
The design validation employs the 5G LDPC code, specifically the BGM1 form with a base code rate of 1/3 and a codeword length of N = 3808. TensorFlow, Google Colab, and MATLAB were utilized to construct the simulation platform45, 46–47. Before training, it was necessary to generate the training data. In machine learning, generating validation data is a common practice to evaluate the network’s cost function and minimize overfitting risk. The training data were generated across different signal-to-noise ratio (SNR) levels, ranging from 0 dB to 10 dB. Each SNR level was equally represented within the total dataset. The codewords were generated using MATLAB, employing the 5G LDPC BGM1 with a base code rate of 1/3 and a codeword length of 3808. The LDPC matrix structure follows the standards defined by the 3rd Generation Partnership Project (3GPP) for 5G New Radio (NR). In all simulations, the offset parameter β for the OMS decoder was set to 0.5, as per the optimized settings for LDPC decoding in correlated noise environments. This choice ensures better convergence of the iterative decoding process. The network was trained using a conventional mini-batch gradient descent approach. The data corresponding to each SNR level occupied an equal portion of the dataset, with each mini-batch consisting of 1200 data blocks. In each iteration, a mini-batch of training data was randomly selected to calculate the gradient. The adaptive moment estimation (Adam) optimization algorithm48,49 was applied to determine the optimal parameters for the network. The training process continued until the loss function showed no further reduction for a prolonged duration. Table 3 lists the CNN parameters and their respective values. The bit error rate (BER) was used to assess system performance. This metric accounted for the channel’s energy efficiency using the Eb/N0 ratio at the target BER. For valid estimation, each BER measurement observed at least one hundred frame errors50.
Table 3. CNN parameters overview.

Parameter | Value
Total layers | 4
Filter dimensions | {59, 24, 12, 16}
Feature map count | {64, 32, 16, 1}
Padding | None
Pooling layers | None
Fully connected layers | None
Activation function | ReLU
Optimizer | Adam
Loss function | Custom cost function of Eq. (22)
Weight initialization method | Kaiming (He) initialization
Training data volume | 1,500,000
Testing data volume | 75,000
Accuracy | 95.8%
Epoch count | 30
The parameter values of the CNN architecture—including filter dimensions, feature map counts, activation functions, optimizer choice, and the absence of padding and pooling layers—were chosen based on a combination of prior study51 and empirical tuning tailored specifically to our problem domain. The selection was carefully designed to balance performance and computational efficiency. Specifically, the chosen filter sizes 59, 24, 12, 16 and corresponding feature map counts 64, 32, 16, 1 were inspired by architectures reported in recent state-of-the-art works51 addressing similar 1D signal processing tasks, where multi-scale feature extraction is critical. We initially adopted these parameters from prior studies that demonstrated effective hierarchical feature learning without significant information loss. Unlike common CNN designs, we opted for no padding and no pooling layers to preserve the input sequence length throughout the network. Maintaining the spatial dimension was crucial for our regression task, as this design choice helps in accurately mapping input to output vectors without dimensionality reduction. The ReLU activation function and Adam optimizer were selected due to their well-established effectiveness in improving convergence speed and stability in deep learning models. To further optimize performance, we conducted systematic hyperparameter tuning experiments by varying filter sizes and feature map counts around the initial settings. The final selected parameters represent a balanced configuration that maximizes accuracy—achieving 95.8%—while maintaining computational efficiency. Finally, the Kaiming (He) initialization method was chosen to facilitate effective training with ReLU activations by mitigating the issues of vanishing or exploding gradients. In summary, the CNN parameters were guided by prior literature on similar tasks and refined through targeted empirical experiments on our dataset. These choices were validated by the achieved performance metrics and the stable convergence behavior observed during training.
Table 3 provides a comprehensive overview of the key parameters used in the CNN architecture. The network consists of four convolutional layers, each with varying filter dimensions and feature map counts. No padding or pooling layers were utilized in this design, and the activation function employed is the ReLU function. The Adam optimizer was selected for network training, and the Kaiming (He) initialization method was used for weight initialization52. The table also outlines the dataset sizes for training and testing, the number of epochs, and the final accuracy achieved during training. The loss function includes both residual noise minimization and regularization terms, ensuring improved noise estimation and network performance. The proposed OMS-CNN decoder does not require separate training for each channel model, as the network can generalize channel noise characteristics during training. However, channel-specific training may enhance performance due to the distinct statistical properties of Rician, Nakagami-m, and Rayleigh fading channels, including differences in amplitude, phase, correlation, skewness, and kurtosis. The custom cost function includes kurtosis terms tailored to these channels, with different target values $K_{\text{fading}}$ for Rician and Nakagami-m fading. These variations impact residual noise and require training the CNN to adapt to specific channel conditions for optimal noise estimation and suppression. Generalized training is possible using hybrid datasets from all channel types, dynamically adjusting $K_{\text{fading}}$ in the cost function. While generalized models simplify deployment, they may under-perform compared to channel-specific training, which achieves better BER performance, as shown in experiments (e.g., the BER achieved at 4.7 dB for Rician fading). For real-world 5G IoT applications, channel-specific training is recommended during development, with generalized models reserved for dynamically varying channel environments where retraining is infeasible.
The error floor performance of LDPC codes is a critical metric for assessing decoding schemes. The proposed OMS-CNN decoder demonstrates effective error floor stabilization for the given LDPC code parameters (N = 3808, base code rate 1/3) under different noise correlation levels. Based on the observed BER trends, the error floor stabilization threshold for the OMS-CNN decoder typically falls within Eb/N0 values of 4.5 dB to 5.5 dB, depending on the noise correlation. For highly correlated noise (a = 0.9), the error floor stabilizes at the lower end of this range; with moderate correlation (a = 0.5), this threshold increases; and in uncorrelated noise scenarios (a = 0), stabilization occurs near the upper end. These results confirm that the OMS-CNN decoder achieves robust performance, effectively mitigating noise correlation and minimizing residual errors across different channel conditions. The refined results provide valuable insights into the decoder’s ability to address error floor challenges, particularly in 5G-enabled IoT applications where low BERs are essential.
Analyzing performance
The design’s performance in fading channels is significantly affected by three key parameters:
Correlation coefficient a,
Scaling parameter λ, and
Number of repetitions between the OMS and CNN stages.
Decoder performance in fading channels with correlation coefficient
In this study, we evaluate the performance of the proposed design across three types of fading channels: Rician, Nakagami-m, and Rayleigh. The SNR values at the target BER for various correlation coefficients and scalar factors are presented in Table 4. For the Rician fading channel, the proposed decoder exhibits significant performance gains, achieving an SNR of 4.7 dB when a = 0.9 and λ = 0.3. As the correlation coefficient decreases to a = 0.5, the required SNR rises to 6.3 dB, demonstrating the adaptability of the decoder to different correlation levels. When the correlation is absent (a = 0), the performance converges with the traditional OMS decoder, with an SNR of 7.6 dB. For the Nakagami-m fading channel, similar trends are observed. The highest performance gain occurs at a = 0.9, with an SNR of 5.8 dB. As the correlation decreases to a = 0.5, the SNR increases to 7.1 dB. When the correlation is absent (a = 0), the performance of the decoder aligns with that of the standard OMS, yielding an SNR of 8.5 dB.
In the Rayleigh fading channel, the decoder shows superior performance compared to the traditional OMS decoder for all tested scenarios. For high correlation (a = 0.9), the SNR is 6.7 dB, and it increases to 8.2 dB when a = 0.5. The performance becomes identical to the conventional OMS decoder, with an SNR of 9.7 dB, when the correlation is absent. From Figs. 4, 5 and 6, the findings confirm that the proposed OMS-CNN decoder effectively adapts to various fading environments and correlation levels, consistently outperforming the SPA and OMS decoders in scenarios with correlated noise. Specifically, with a correlation coefficient of a = 0.9, the OMS-CNN decoder demonstrated significant performance improvements, achieving gains of 2.9 dB in Rician fading, 2.7 dB in Nakagami-m fading, and 2.7 dB in Rayleigh fading, with particular effectiveness in Rician conditions. As shown in Table 4, the performance gains increase with higher correlation coefficients, confirming that the OMS-CNN decoder is well-suited to exploit channel correlation for improved decoding efficiency.
Table 4. SNR at the target BER for various designs across fading channels.

Algorithm | Fading channel | Correlation coefficient (a) | Scalar factor (λ) | SNR (dB)
OMS-CNN (proposed) | Rician | 0.9 | 0.3 | 4.7
OMS-CNN (proposed) | Rician | 0.5 | 10 | 6.3
SPA | Rician | 0.9 | – | 7.4
OMS | Rician | 0.9 | – | 7.6
OMS-CNN (proposed) | Rician | 0 | 10 | 7.6
OMS-CNN (proposed) | Nakagami-m | 0.9 | 0.1 | 5.8
OMS-CNN (proposed) | Nakagami-m | 0.5 | 10 | 7.1
SPA | Nakagami-m | 0.9 | – | 8.3
OMS | Nakagami-m | 0.9 | – | 8.5
OMS-CNN (proposed) | Nakagami-m | 0 | 10 | 8.5
OMS-CNN (proposed) | Rayleigh | 0.9 | 0.2 | 6.9
OMS-CNN (proposed) | Rayleigh | 0.5 | 10 | 8.2
SPA | Rayleigh | 0.9 | – | 9.4
OMS | Rayleigh | 0.9 | – | 9.6
OMS-CNN (proposed) | Rayleigh | 0 | 10 | 9.6
Fig. 4. BER plot of various algorithms under correlated noise for the Rician fading channel.
Fig. 5. BER plot of various algorithms under correlated noise for the Nakagami-m fading channel.
Fig. 6. BER plot of various algorithms under correlated noise for the Rayleigh fading channel.
The selection of the scaling factor λ for fading channels
In fading channel environments, the scaling factor λ in the custom cost function plays a critical role in balancing two primary objectives: minimizing the residual noise power to improve decoding performance, and regularizing the statistical properties of the residual noise, such as its skewness S and kurtosis K, to align with the characteristics of the fading channel. This balance is essential for the decoder to effectively mitigate noise while maintaining compatibility with the statistical distribution of fading-induced noise. Simulations reveal that small values of λ prioritize noise power reduction but fail to preserve the required statistical alignment, particularly in complex fading scenarios like Rician or Nakagami-m channels. This imbalance reduces the decoder’s ability to leverage the channel’s statistical characteristics, leading to suboptimal performance. Conversely, excessively high λ values overemphasize regularizing noise properties, which limits the suppression of residual noise power and increases decoding errors due to inadequate noise minimization.
An optimal balance is achieved with a moderate value of λ, such as λ = 0.3, which ensures accurate noise power minimization while maintaining alignment with fading channel properties. Empirical results demonstrate that under Rician fading conditions, a moderate λ achieves the target bit error rate (BER) at 4.7 dB, significantly outperforming extreme values; both very low and very high λ settings lead to noticeably worse BERs, highlighting the importance of proper tuning. Figure 7 illustrates the performance of the proposed CNN decoder for an LDPC code under Rician fading conditions. The results show that the decoder’s performance is highly sensitive to the choice of λ. Table 5 presents the bit error rate (BER) values for various λ values under the same conditions. It is evident from the table that optimal performance occurs at λ = 0.3, achieving the lowest BER at 4.7 dB. This shows that, for Rician fading, a moderate λ value strikes the best balance between noise power reduction and maintaining the appropriate distribution.
Table 5 also reveals that both very low and very high λ values lead to poorer performance. For instance, a very small λ of 0.01 results in a degraded BER, while an excessively high λ of 20 yields a significantly worse BER. This demonstrates the need to carefully adjust λ based on the specific characteristics of the fading channel. These findings underscore the sensitivity of decoder performance to the selection of λ. A well-chosen λ balances residual noise suppression and statistical alignment, enabling the decoder to adapt dynamically to varying fading channel conditions and achieve optimal performance in challenging communication environments.
Fig. 7. BER plot of OMS-CNN for the best-performing λ values under Rician fading.
Table 5. BER values of OMS-CNN for different λ values under Rician fading.

Scalar factor (λ) | BER at 4.7 dB
0.01 | –
0.3 | –
1 | –
10 | –
20 | –
The influence of iterations between CNN and OMS in fading channels
In the initial simulation results for fading channels, the performance of the OMS-CNN decoder was evaluated using a single iteration between the OMS decoder and the CNN, for an LDPC code with a base code rate of 1/3, a codeword length of 3808, and 15 OMS iterations. The Rician fading environment is considered, which introduces additional variations in the channel characteristics. To further enhance performance in such fading conditions, multiple iterations between the CNN and OMS can be introduced. In Fig. 8, at a given BER, using two iterations between the CNN and OMS results in an approximate 0.1 dB improvement in decoding performance over the single iteration. This improvement is particularly important in fading channels, where the noise characteristics vary more significantly compared to AWGN channels. It is also observed that after four iterations, a further performance improvement of 0.3 dB is obtained. At this point, the CNN reaches its full capability in mitigating the residual noise caused by the fading effects. Beyond this, additional iterations do not provide significant performance gains because the CNN has reached its limit in compensating for the complex fading-induced noise. In summary, while increasing the number of iterations between CNN and OMS improves performance in fading channels, particularly at lower BERs, the returns diminish after four iterations.
Fig. 8. BER plot for various numbers of iterations between the OMS and CNN stages under Rician fading.
The CNN component in the OMS-CNN decoder plays a critical role in addressing the residual noise caused by fading effects. By leveraging its deep learning capability, the CNN learns to model and mitigate noise patterns, adapting effectively to varying noise profiles encountered in different fading channel environments. Specifically, the CNN refines the residual noise estimate, represented as $\mathbf{r} = \mathbf{n} - \tilde{\mathbf{n}}$, where $\mathbf{n}$ is the actual noise and $\tilde{\mathbf{n}}$ is the CNN-estimated noise. Through iterative training and noise suppression, the CNN progressively reduces the residual noise power, improving the accuracy of the log-likelihood ratios (LLRs) used in decoding. After four iterations, the residual noise power stabilizes, and further refinements provide diminishing BER improvements. This stabilization indicates that the CNN has reached its full capability, having captured and compensated for the complex fading-induced noise patterns. Notably, in highly correlated noise environments (a = 0.9), the CNN demonstrates its ability to adapt to the intricate dependencies within the noise, achieving substantial gains in decoding performance. This iterative process highlights the synergy between the CNN and the OMS decoder, where the CNN dynamically adapts to channel noise characteristics, ensuring robust decoding and high efficiency in both correlated and uncorrelated noise scenarios. By fully exploiting the statistical properties of fading channels, the CNN component significantly enhances the overall decoding process within the OMS-CNN framework.
The observed diminishing improvements in BER after four iterations are specific to the code length and the correlation conditions of the simulated fading channels. This behavior is influenced by the complexity of the noise patterns and the capacity of the decoder to refine residual noise. For shorter code lengths, the decoder typically reaches this limit faster, often within two to three iterations, due to the reduced noise diversity in smaller block sizes. Conversely, for longer code lengths, additional iterations may provide further BER improvements as these codes exhibit more complex noise patterns that benefit from iterative refinement. This scalability highlights the importance of tailoring the number of iterations to the specific code length and channel characteristics for optimal decoding performance.
Computational complexity: The approximate computational complexity of the proposed OMS-CNN decoder is derived by analyzing the contributions from the OMS decoder and the CNN. The complexity of one OMS iteration is governed by the number of parity-check equations M (M = 2576 for the constructed code) and the codeword length N = 3808.
OMS decoder complexity: For iterations, the OMS complexity becomes:
33
CNN layer complexity: The CNN complexity is determined by summing the contributions of its four layers, with each layer’s complexity given by:
$O\!\big(N \cdot k_l \cdot f_{l-1} \cdot f_l\big) \qquad (34)$, where $k_l$ is the kernel size of layer $l$ and $f_{l-1}$, $f_l$ are its input and output feature-map counts.
For the entire CNN, the total complexity for one pass is:
35
36
For iterations, the CNN complexity is:
37
38
Combined OMS-CNN complexity: Combining the OMS and CNN contributions, the total computational complexity is:
39
40
Substituting the values:
41
For iterations, the approximate computational complexity is:
42
This linear scaling with the number of iterations, combined with the dominance of the CNN component, underscores the computational efficiency of the OMS-CNN decoder. Its design balances high performance with manageable complexity, making it suitable for deployment in 5G IoT applications.
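For a rough sense of scale, the snippet below tallies illustrative operation counts from the parameters reported earlier (316 base graph edges, lifting size 56, and the Table 3 layer sizes). These are our own back-of-the-envelope estimates under a same-length convolution assumption, not figures quoted from the paper.

```python
# Back-of-the-envelope operation counts for one OMS-CNN pass (illustrative only).
N = 3808
layers = [(59, 1, 64), (24, 64, 32), (12, 32, 16), (16, 16, 1)]  # (kernel, in_maps, out_maps)

cnn_macs = sum(N * k * c_in * c_out for k, c_in, c_out in layers)
print(f"CNN multiply-accumulates per pass : {cnn_macs:.3e}")

edges, oms_iters = 316 * 56, 15          # expanded edges x number of OMS iterations
oms_msgs = edges * oms_iters             # edge-message updates across the OMS iterations
print(f"OMS edge-message updates (15 iter): {oms_msgs:.3e}")
```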
Effects of different loss functions in fading channels
In fading channels, the choice of loss functions directly influences the performance of the OMS-CNN decoder. Since the task involves predicting continuous values, regression loss functions such as Mean Squared Error (MSE) and Mean Absolute Error (MAE) are commonly used53. MSE measures the average squared difference between predicted and actual values:
$\mathrm{MSE} = \dfrac{1}{N}\sum_{i=1}^{N}\big(n_i - \tilde{n}_i\big)^{2} \qquad (43)$
where $n_i$ is the actual noise value, $\tilde{n}_i$ is the predicted noise, and N is the size of the coded block. MSE is sensitive to outliers, which is useful in fading channels with large deviations. MAE, on the other hand, calculates the average magnitude of errors without amplifying extreme values:
$\mathrm{MAE} = \dfrac{1}{N}\sum_{i=1}^{N}\big|\,n_i - \tilde{n}_i\,\big| \qquad (44)$
Additionally, a custom cost function was introduced to balance the power and distribution of residual noise in fading environments. Simulations for an LDPC code with a base rate of 1/3, a Rician channel, and 15 OMS iterations, shown in Fig. 9, indicate that the custom cost function provides about a 0.9 dB improvement over MAE. In conclusion, the custom cost function enhances decoding performance in fading channels, better addressing the challenges posed by noise variability compared to traditional loss functions like MSE and MAE.
Fig. 9. BER plot of the OMS-CNN design with various loss functions for the Rician channel.
Table 6. Performance comparison of decoding schemes.

Feature | OMS-CNN (proposed) | Iterative BP-CNN51 | CNNAPS54
Noise environment | Fading channels (Rician, Nakagami-m) | AWGN, correlated noise | Correlated noise
5G LDPC code support | Yes | No | No
Decoding algorithm | OMS + CNN | BP + CNN | Post-processing with CNN
Loss function | Residual noise minimization + fading | Residual noise + normality test | No explicit loss function
Channel-specific training | Yes | No | No
Computational complexity | Moderate | High | Low
Performance (BER) | Target BER at 4.7 dB (Rician fading) | Target BER at 2.1 dB (AWGN) | Target BER at 3.5 dB
Comparative study
The proposed OMS-CNN decoder introduces significant advancements over existing LDPC decoding schemes such as Iterative BP-CNN51 and the CNN-Aided Post-Processing Scheme54. Unlike51 and54, which primarily focus on AWGN and correlated noise, OMS-CNN is explicitly designed for fading channel environments commonly encountered in 5G-enabled IoT networks, such as Rician, Nakagami-m, and Rayleigh channels. This is achieved by incorporating skewness and kurtosis regularization into its custom cost function, which aligns residual noise characteristics with fading channel properties. Additionally, OMS-CNN explicitly handles fading amplitude variations for enhanced noise estimation in dynamic conditions. A key novelty of the OMS-CNN decoder is its optimization for 5G LDPC BGM1, supporting a codeword length of N = 3808 with multiple code rates. This ensures compatibility with modern 5G communication standards, making the decoder suitable for URLLC and mMTC applications in IoT. Furthermore, the OMS-CNN leverages the OMS algorithm, which reduces computational complexity compared to the computationally intensive BP-CNN used in51. By iteratively updating LLRs with OMS decoding and CNN-based noise suppression, OMS-CNN achieves substantial performance gains while maintaining moderate computational overhead, making it ideal for resource-constrained IoT devices.
Unlike54, which applies a lightweight but less effective threshold-based CNN approach for post-processing, OMS-CNN is trained and tested on realistic noise models tailored to fading channels, ensuring robust performance across diverse 5G IoT environments. Simulation results demonstrate that OMS-CNN achieves a BER of at 4.7 dB under Rician fading, outperforming51 and54, which achieve at 2.1 dB and at 3.5 dB, respectively. These results highlight OMS-CNN’s superior ability to handle fading channel conditions with lower computational demands, making it a highly relevant and efficient solution for 5G IoT applications, such as smart cities, industrial automation, and connected healthcare systems. The comparative performance of OMS-CNN and existing schemes51,54 is summarized in Table 6.
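To make the interplay of the two components concrete, the outline below sketches one possible outer iteration structure in which OMS decoding and CNN-based noise suppression alternate. Here `oms_decode` and `cnn_noise_estimate` are placeholders for the inner decoder and the trained network, BPSK signalling is assumed, and the offset and iteration values are illustrative rather than the paper's settings.

```python
import numpy as np

def offset_minsum_check_update(incoming_llrs: np.ndarray, beta: float) -> float:
    """Check-node kernel assumed inside `oms_decode`: min-sum magnitude
    with an offset correction to curb overestimation."""
    sign = np.prod(np.sign(incoming_llrs))
    magnitude = max(np.min(np.abs(incoming_llrs)) - beta, 0.0)
    return sign * magnitude

def oms_cnn_decode(y, h_est, noise_var, oms_decode, cnn_noise_estimate,
                   outer_iters=3, beta=0.3):
    """One possible outer loop alternating OMS decoding and CNN denoising."""
    llr = 2.0 * h_est * y / noise_var                # initial channel LLRs (BPSK)
    for _ in range(outer_iters):
        bits_hat = oms_decode(llr, beta=beta)        # inner OMS iterations
        x_hat = 1.0 - 2.0 * bits_hat                 # re-modulate hard decisions
        residual = y - h_est * x_hat                 # apparent (colored) noise
        n_hat = cnn_noise_estimate(residual)         # CNN suppresses the noise
        llr = 2.0 * h_est * (y - n_hat) / noise_var  # refined LLRs for next pass
    return bits_hat
```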
Fig. 10 [Images not available. See PDF.]
BER plot of various code rates for N = 3808 over the Rician fading channel.
Rate matching
Rate matching techniques allow decoders to handle punctured (rate-increased) or shortened (rate-decreased) versions of the mother code without significant loss in decoding performance55, 56–57. This requires algorithms that can reconfigure based on the presence or absence of bits. For the 5G NR-compliant QC-LDPC code, the recommended decoder provides runtime flexibility and can decode received messages corresponding to code rates 1/3, 2/5, 1/2, 2/3, 3/4, 5/6, and 8/9 with base codeword length N = 3808. Furthermore, Fig. 10 indicates that performance is enhanced at lower code rates, such as 1/3 and 2/5. In contrast, performance declines as the code rate increases, as evidenced by the 5/6 and 8/9 codes. The BER curves in Fig. 10 were obtained for the multiple code rates at N = 3808 with parameter settings of 0.9 and 0.3.
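As an illustration of this flexibility, the sketch below estimates how many coded bits would be transmitted for each supported rate by puncturing the rate-1/3 mother code of length N = 3808, and pads punctured positions with zero LLRs at the receiver. The helper names and the simple contiguous puncturing pattern are assumptions, not the 3GPP NR rate-matching procedure.

```python
from fractions import Fraction
import numpy as np

N_MOTHER = 3808                 # base codeword length used in the paper
K_INFO = round(N_MOTHER / 3)    # approximate information length at rate 1/3

def transmitted_length(target_rate: Fraction) -> int:
    """Coded bits E to send so that K_INFO / E matches the target rate."""
    return round(K_INFO / float(target_rate))

def depuncture_llrs(received_llrs: np.ndarray) -> np.ndarray:
    """Pad punctured (untransmitted) positions with zero LLRs for decoding."""
    full = np.zeros(N_MOTHER)
    full[: received_llrs.size] = received_llrs
    return full

if __name__ == "__main__":
    for r in (Fraction(1, 3), Fraction(2, 5), Fraction(1, 2), Fraction(2, 3),
              Fraction(3, 4), Fraction(5, 6), Fraction(8, 9)):
        print(f"rate {r}: send {transmitted_length(r)} of {N_MOTHER} coded bits")
```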
Conclusion
This paper has addressed the critical challenge of improving LDPC decoding in 5G-enabled IoT networks, where fading channels and colored noise present substantial barriers to efficient communication. By introducing an innovative OMS-CNN architecture, we have demonstrated that combining the OMS algorithm with deep learning techniques significantly enhances the decoding process. The system successfully mitigates the effects of correlated noise, leading to a 2.7 dB improvement at a BER of across various channel models. The research highlights the importance of combining traditional decoding techniques with modern machine learning methods to optimize performance under complex channel conditions, thereby enabling more reliable data transmission in IoT networks. Looking ahead, this work opens up several avenues for further exploration. Applying the OMS-CNN architecture to multi-antenna systems, such as MIMO, could offer insights into its scalability in more advanced communication setups. Additionally, optimizing the energy efficiency of the proposed method will be crucial for resource-constrained IoT devices. As 6G technology emerges, extending this approach to meet the increased demands of future networks will be a valuable direction for research. Ultimately, the findings presented in this paper underscore the potential of deep learning-based methods in overcoming traditional communication challenges, positioning the OMS-CNN architecture as a key solution for next-generation 5G and IoT applications.
Acknowledgements
The authors would like to acknowledge the funding from the Ongoing Research Funding Program (ORF-2025-387), King Saud University, Riyadh, Saudi Arabia.
Author contributions
Sivarama Prasad Tera was responsible for the conceptualization and methodology. Ravikumar Chinthaginjala contributed to data collection, software development, and visualization. Fadi Al-Turjman provided supervision of the software, resources, and validation. Shafiq Ahmad contributed through supervision and resource provision. All authors have read and approved the final version of the manuscript.
Funding
This work was supported by the Near East University, Lefkosa, KKTC via Mersin 10, Turkey and by the King Saud University (KSU) through Ongoing Research Funding Program (ORF-2025-387), King Saud University, Riyadh, Saudi Arabia.
Data availability
The datasets generated and analyzed during the current study are available from the corresponding author upon reasonable request.
Declarations
Competing interests
The authors declare no competing interests.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Banda, L; Mzyece, M; Mekuria, F. 5g business models for mobile network operators–a survey. IEEE Access; 2022; 10, pp. 94851-94886. [DOI: https://dx.doi.org/10.1109/ACCESS.2022.3205011]
2. Diao, Z; Sun, F. Application of internet of things in smart factories under the background of industry 4.0 and 5g communication technology. Math. Problems Eng.; 2022; 2022, 4417620. [DOI: https://dx.doi.org/10.1155/2022/4417620]
3. Popovski, P; Trillingsgaard, KF; Simeone, O; Durisi, G. 5g wireless network slicing for embb, urllc, and mmtc: A communication-theoretic view. IEEE Access; 2018; 6, pp. 55765-55779. [DOI: https://dx.doi.org/10.1109/ACCESS.2018.2872781]
4. Ramirez, R; Huang, C-Y; Liang, S-H. 5g digital twin: A study of enabling technologies. Appl. Sci.; 2022; 12, 7794. [DOI: https://dx.doi.org/10.3390/app12157794]
5. Jiang, X et al. Packet detection by a single ofdm symbol in urllc for critical industrial control: A realistic study. IEEE J. Selected Areas in Commun.; 2019; 37, pp. 933-946. [DOI: https://dx.doi.org/10.1109/JSAC.2019.2898761]
6. Noor-A-Rahim, M et al. Wireless communications for smart manufacturing and industrial iot: Existing technologies, 5g and beyond. Sensors; 2022; 23, 73. [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36616671] [PubMedCentral: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9824593] [DOI: https://dx.doi.org/10.3390/s23010073]
7. Wang, N. et al. Satellite support for enhanced mobile broadband content delivery in 5g. In 2018 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), 1–6 (IEEE, 2018).
8. Mahmood, A et al. Industrial iot in 5g-and-beyond networks: Vision, architecture, and design trends. IEEE Trans. Indust. Inform.; 2021; 18, pp. 4122-4137. [DOI: https://dx.doi.org/10.1109/TII.2021.3115697]
9. Krummacker, D; Veith, B; Fischer, C; Schotten, HD. Analysis of 5g channel access for collaboration with tsn concluding at a 5g scheduling mechanism. Network; 2022; 2, pp. 440-455. [DOI: https://dx.doi.org/10.3390/network2030027]
10. Sanchez, JDV; Urquiza-Aguiar, L; Paredes Paredes, MC. Fading channel models for mm-wave communications. Electronics; 2021; 10, 798. [DOI: https://dx.doi.org/10.3390/electronics10070798]
11. Tera, S. P. et al. Cnn-based approach for enhancing 5g ldpc code decoding performance. IEEE Access (2024).
12. Tera, S. P., Chinthaginjala, R., Natha, P., Ahmad, S. & Pau, G. Deep learning approach for efficient 5g ldpc decoding in iot. IEEE Access (2024).
13. Natha, P; RajaRajeswari, P. Advancing skin cancer prediction using ensemble models. Computers; 2024; 13, 157. [DOI: https://dx.doi.org/10.3390/computers13070157]
14. Karthiga, R et al. A novel exploratory hybrid deep neural network to predict breast cancer for mammography based on wavelet features. Multimed. Tools Appl.; 2024; 83, pp. 65441-65467. [DOI: https://dx.doi.org/10.1007/s11042-023-18012-y]
15. Renugadevi, M et al. Machine learning empowered brain tumor segmentation and grading model for lifetime prediction. IEEE Access; 2023; 11, pp. 120868-120880. [DOI: https://dx.doi.org/10.1109/ACCESS.2023.3326841]
16. Gupta, AK; Srinivasulu, A; Oyerinde, OO; Pau, G; Ravikumar, C. Covid-19 data analytics using extended convolutional technique. Interdiscip. Perspect. Infect. Diseas.; 2022; 2022, 4578838.
17. Kim, T-H et al. Improving cnn predictive accuracy in covid-19 health analytics. Sci. Reports; 2025; 15, 29864.
18. Kumar, NS et al. Harnet in deep learning approach–a systematic survey. Sci. Reports; 2024; 14, 8363.
19. Ravikumar, C et al. Developing novel channel estimation and hybrid precoding in millimeter-wave communication system using heuristic-based deep learning. Energy; 2023; 268, 126600. [DOI: https://dx.doi.org/10.1016/j.energy.2022.126600]
20. CV, RK; Bagadi, KP. Mc–cdma receiver design using recurrent neural networks for eliminating multiple access interference and nonlinear distortion. Int. J. Commun. Syst.; 2017; 30, e3328. [DOI: https://dx.doi.org/10.1002/dac.3328]
21. Tera, S. P., Chinthaginjala, R., Pau, G. & Kim, T. H. Towards 6g: An overview of the next generation of intelligent network connectivity. IEEE Access (2024).
22. Kim, TH; Chinthaginjala, R; Srinivasulu, A; Tera, SP; Rab, SO. Covid-19 health data prediction: A critical evaluation of cnn-based approaches. Sci. Reports; 2025; 15, 9121.
23. Natha, P et al. Boosting skin cancer diagnosis accuracy with ensemble approach. Sci. Reports; 2025; 15, 1290.
24. Chinthaginjala, R et al. Hybrid ai and semiconductor approaches for power quality improvement. Sci. Reports; 2025; 15, 25640.
25. Kim, T-H et al. Enhancing cybersecurity through script development using machine and deep learning for advanced threat mitigation. Sci. Reports; 2025; 15, 8297.
26. Sreenivasulu, V; Ravikumar, C. Fractalnet-based key generation for authentication in voice over ip using blockchain. Ain Shams Eng. J.; 2025; 16, 103286. [DOI: https://dx.doi.org/10.1016/j.asej.2025.103286]
27. Ad-Hoc chair (Nokia). Chairman’s Notes of Agenda Item 7.1.4. Channel Coding. 3GPP TSG RAN WG1 Meeting AH 2, R1-1711982, Available Online: https://portal.3gpp.org (2017).
28. Fossorier, MP. Quasicyclic low-density parity-check codes from circulant permutation matrices. IEEE Trans. Inform. Theory; 2004; 50, pp. 1788-1793. [DOI: https://dx.doi.org/10.1109/TIT.2004.831841]
29. Tanner, R. A recursive approach to low complexity codes. IEEE Trans. Inform. Theory; 1981; 27, pp. 533-547. [DOI: https://dx.doi.org/10.1109/TIT.1981.1056404]
30. Wiberg, N. Codes and decoding on general graphs. Ph.D. thesis, Department of Electrical Engineering, Linköping University, Sweden (1996).
31. Kschischang, FR; Frey, BJ; Loeliger, H-A. Factor graphs and the sum-product algorithm. IEEE Trans. Inform. Theory; 2001; 47, pp. 498-519. [DOI: https://dx.doi.org/10.1109/18.910572]
32. Angarita, F; Valls, J; Almenar, V; Torres, V. Reduced-complexity min-sum algorithm for decoding ldpc codes with low error-floor. IEEE Trans. Circuits and Syst. I: Regular Papers; 2014; 61, pp. 2150-2158.
33. Tera, SP; Alantattil, R; Paily, R. A flexible fpga-based stochastic decoder for 5g ldpc codes. Electronics; 2023; 12, 4986. [DOI: https://dx.doi.org/10.3390/electronics12244986]
34. Rappaport, TS. Wireless communications: principles and practice; 2024; Cambridge University Press. [DOI: https://dx.doi.org/10.1017/9781009489843]
35. Molisch, AF. Wireless communications: from fundamentals to beyond 5G; 2022; John Wiley & Sons.
36. Hajimiri, A; Lee, TH. A general theory of phase noise in electrical oscillators. IEEE J. Solid-State Circuits; 1998; 33, pp. 179-194. [DOI: https://dx.doi.org/10.1109/4.658619]
37. Durukan, F., Güney, B. M. & Özen, A. Performance analysis of color shift keying systems in awgn and color noise environment. In 2019 27th Signal Processing and Communications Applications Conference (SIU), 1–4 (IEEE, 2019).
38. Mochizuki, K; Uchino, M. Efficient digital wide-band coloured noise generator. Electron. Lett.; 2001; 37, pp. 62-64. [DOI: https://dx.doi.org/10.1049/el:20010026]
39. Lugosch, L. & Gross, W. J. Neural offset min-sum decoding. In 2017 IEEE International Symposium on Information Theory (ISIT), 1361–1365 (IEEE, 2017).
40. Tran-Thi, B. N., Nguyen-Ly, T. T., Hong, H. N. & Hoang, T. An improved offset min-sum ldpc decoding algorithm for 5g new radio. In 2021 International Symposium on Electrical and Electronics Engineering (ISEE), 106–109 (IEEE, 2021).
41. Li, Z; Liu, F; Yang, W; Peng, S; Zhou, J. A survey of convolutional neural networks: analysis, applications, and prospects. IEEE Trans. Neural Networks Learn. Syst.; 2021; 33, pp. 6999-7019. [DOI: https://dx.doi.org/10.1109/TNNLS.2021.3084827]
42. Qazi, EUH; Almorjan, A; Zia, T. A one-dimensional convolutional neural network (1d-cnn) based deep learning system for network intrusion detection. Appl. Sci.; 2022; 12, 7986. [DOI: https://dx.doi.org/10.3390/app12167986]
43. Nair, V. & Hinton, G. E. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), 807–814 (2010).
44. Thadewald, T; Büning, H. Jarque-bera test and its competitors for testing normality-a power comparison. J. Appl. Stat.; 2007; 34, pp. 87-105. [DOI: https://dx.doi.org/10.1080/02664760600994539]
45. Abadi, M. et al. TensorFlow: a system for large-scale machine learning. In 12th USENIX symposium on operating systems design and implementation (OSDI 16), 265–283 (2016).
46. Carneiro, T et al. Performance analysis of google colaboratory as a tool for accelerating deep learning applications. IEEE Access; 2018; 6, pp. 61677-61685. [DOI: https://dx.doi.org/10.1109/ACCESS.2018.2874767]
47. Sharma, V., Gupta, G. K. & Gupta, M. Performance benchmarking of gpu and tpu on google colaboratory for convolutional neural network. In Applications of Artificial Intelligence in Engineering: Proceedings of First Global Conference on Artificial Intelligence and Applications (GCAIA 2020), 639–646 (Springer, 2021).
48. Glorot, X. & Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, 249–256 (JMLR Workshop and Conference Proceedings, 2010).
49. Kingma, D. P. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).
50. Sharma, SK; Chatzinotas, S; Ottersten, B. Snr estimation for multi-dimensional cognitive receiver under correlated channel/noise. IEEE Trans. Wireless Commun.; 2013; 12, pp. 6392-6405. [DOI: https://dx.doi.org/10.1109/TWC.2013.103113.130523]
51. Liang, F; Shen, C; Wu, F. An iterative bp-cnn architecture for channel decoding. IEEE J. Select. Topics Signal Process.; 2018; 12, pp. 144-159. [DOI: https://dx.doi.org/10.1109/JSTSP.2018.2794062]
52. He, K., Zhang, X., Ren, S. & Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, 1026–1034 (2015).
53. Chahkoutahi, F; Khashei, M. Influence of cost/loss functions on classification rate: A comparative study across diverse classifiers and domains. Eng. Appl. Artific. Intell.; 2024; 128, 107415. [DOI: https://dx.doi.org/10.1016/j.engappai.2023.107415]
54. Liu, J., Kang, S., Cheng, J., Wang, J. & Huang, R. A cnn-aided post-processing scheme for channel decoding under correlated noise. In 2024 3rd International Conference on Electronics and Information Technology (EIT), 150–154 (IEEE, 2024).
55. Cui, H et al. Design of high-performance and area-efficient decoder for 5g ldpc codes. IEEE Trans. Circuits and Syst. I: Regular Papers; 2020; 68, pp. 879-891.
56. Stark, M; Wang, L; Bauch, G; Wesel, RD. Decoding rate-compatible 5g-ldpc codes with coarse quantization using the information bottleneck method. IEEE Open J. Commun. Soc.; 2020; 1, pp. 646-660. [DOI: https://dx.doi.org/10.1109/OJCOMS.2020.2994048]
57. Wu, X; Jiang, M; Zhao, C; Ma, L; Wei, Y. Low-rate pbrl-ldpc codes for urllc in 5g. IEEE Wireless Commun. Lett.; 2018; 7, pp. 800-803. [DOI: https://dx.doi.org/10.1109/LWC.2018.2825988]
corrected publication 2025. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the "License").