1. Introduction
In recent years, obstacles between communication transmitters and receivers have posed significant challenges to high-quality communication and may even disconnect the communication link. Unmanned Aerial Vehicles (UAVs) are widely used to alleviate this problem thanks to their flexible deployment, all-weather operation, and low cost. The term UAV refers to aircraft flown without an onboard pilot, enabled by radio remote control and automatic program control technology. Owing to technological limitations and high cost, UAVs were initially used only in military applications. As manufacturing costs have gradually decreased, UAVs have entered daily life. UAV sensor networks are becoming increasingly important because of their low cost, comprehensive coverage, flexibility, and the availability of small-scale sensors.
However, alongside these applications that greatly benefit daily life, some safety hazards and social problems remain to be solved. For example, many civil UAV flights are “black flights”, i.e., flights conducted without an airworthiness certificate from the civil aviation administration or a license issued by the relevant authorities. Such flights expose necessary communication to human-induced interference, resulting in harmful incidents such as non-cooperative UAVs interfering with normal aerospace communication. Therefore, avoiding such interference through reasonable and effective methods is a major topic that needs to be studied. The first step is to monitor, as accurately as possible, whether non-cooperative UAV interference is present and whether it affects the normal flight of cooperative UAVs. From the communication perspective, both the channel quality and non-cooperative UAVs can be monitored by analyzing and comparing the signals observed at the receiver.
Channel quality evaluation is a key technology for UAV wireless communication systems, and the signal-to-noise ratio (SNR) is a significant metric for assessing the channel quality of UAV communication [1,2,3,4,5,6]. SNR evaluation plays an important role in many wireless communication tasks, such as signal detection, power control, and adaptive modulation and demodulation. Accurate, real-time SNR evaluation can improve system performance to some extent. In addition, SNR evaluation techniques have also been applied in other fields such as electrostatics and the Internet. For modulated signals, the SNR generally refers to the ratio of the average power of the carrier signal at the channel output to the average power of the noise at the receiver input. Because of its direct correspondence with the communication bit error rate, the SNR is an effective indicator of channel state and communication quality; it is also necessary prior information for many signal processing algorithms. The real-time performance and accuracy of SNR evaluation directly affect the performance of communication systems [7,8,9,10,11].
Generally, the available SNR evaluation algorithms can be divided into two categories. The first is the data-aided (DA) SNR evaluation algorithm, which requires prior information as an auxiliary condition (e.g., auxiliary data, the modulation method, etc.); this type of algorithm increases the overhead of the communication system to varying degrees. The second is the non-data-aided (NDA) SNR evaluation algorithm, which does not require any auxiliary information. The NDA evaluation algorithm ensures the stability of the evaluation results and generally relies only on the observed data [12].
Specifically, traditional SNR evaluation methods include the maximum-likelihood-based SNR evaluation algorithm [13,14] and the Second-order and Fourth-order Moments (M2M4) algorithm [15]. Deep learning (DL) has been applied in various scenarios and has driven significant progress in these fields [16,17,18]. Recently, S. Zhang and Z. Bao proposed an adaptive spectrum sensing (ASS) algorithm [19], and Li et al. proposed a model that fuses a convolution neural network (CNN) and a long short-term memory (LSTM) network [20], improving SNR evaluation performance relative to the traditional algorithms. However, the existing DL-based SNR evaluation methods come at the expense of computational complexity and give little attention to the two-dimensional features of signals. Meanwhile, the lack of an open dataset of UAV control signals further increases the difficulty of model training. To sum up, it is necessary to build a two-dimensional dataset of UAV control signals that contains enough salient features while avoiding too much interfering information; designing a corresponding model is an equally pressing task.
Motivated by the above, this paper proposes an intelligent SNR evaluation method based on a single-path convolution neural network: UAV signal datasets are generated by converting one-dimensional signals into two-dimensional ones, neural networks of different structures are trained on them, and an evaluation network that yields a smaller SNR error on the input signals is finally selected, effectively enhancing SNR assessment by letting the model focus on signal feature information [21]. Specifically, the algorithm model is first presented and applied to SNR evaluation of UAV frequency hopping signals. In addition, this paper proposes a further optimized intelligent SNR evaluation method based on a two-path convolution neural network built upon the classification model, which extracts more signal features. The main contributions of this paper are summarized as follows:
(1) A two-dimensional dataset of UAV remote control signals is constructed and expanded. Firstly, a one-dimensional frequency hopping signal is generated based on the UAV frequency hopping communication system, and then the one-dimensional signal data are converted into two-dimensional signal data using the time-domain waveform diagram method.
(2) A feature fusion method is adopted to combine features of different levels, so that the proposed model can exploit as many features as possible for classification and reduce randomness.
(3) A two-path convolution neural network is proposed that fuses the features extracted by two neural networks with different structures, further improving the accuracy of SNR evaluation.
2. Related Work
In this section, works related to DL-based SNR evaluation are introduced, mainly covering SNR evaluation methods and DL-based schemes. Finally, we analyze the problems in the related works.
2.1. SNR Evaluation
Typical SNR evaluation methods include the maximum likelihood (ML)-based SNR evaluation algorithm [13] and the Second-order and Fourth-order Moments (M2M4) algorithm [15]. In recent years, S. Zhang and Z. Bao proposed the Adaptive Spectrum Sensing (ASS) algorithm [19], and Li et al. [20] proposed a model that fuses a convolution neural network (CNN) and a long short-term memory (LSTM) network to improve SNR evaluation accuracy. These algorithms are described next in turn.
When auxiliary data are accessible, the maximum likelihood estimation method is the most satisfactory. Its basic idea is to obtain the joint probability density function of the received signal based on the probability density function of the noise; it is a typical DA SNR estimation algorithm. Its advantage is accuracy: the estimate is close to the true value under high SNR conditions. Its disadvantages are that the computational effort is large, a large deviation occurs when the SNR is low, and carrier synchronization is required.
The M2M4 SNR estimation method uses the relationship between the variance and kurtosis of a signal to perform SNR evaluation. It is a moment-based SNR estimator proposed for modulated communication signals. The algorithm estimates the SNR of complex sinusoidal signals with deterministic but unknown phases under additive Gaussian noise and is an NDA estimation algorithm. Its advantages include simple calculation and insensitivity to carrier phase deviation. Since it is a moment-accumulation algorithm, its estimates improve as the data volume increases. However, its estimation error is positively correlated with the SNR and the modulation order.
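For concreteness, the following NumPy sketch implements the classic M2M4 estimator for a complex baseband sequence, assuming a constant-modulus signal in complex additive white Gaussian noise; the function name and the test signal are illustrative, not part of the original paper.

```python
import numpy as np

def m2m4_snr_db(r: np.ndarray) -> float:
    """M2M4 SNR estimate in dB for a complex baseband sequence r.

    Assumes a constant-modulus signal corrupted by complex additive
    white Gaussian noise; the interface is illustrative.
    """
    m2 = np.mean(np.abs(r) ** 2)                   # second-order moment
    m4 = np.mean(np.abs(r) ** 4)                   # fourth-order moment
    s_hat = np.sqrt(max(2.0 * m2 ** 2 - m4, 0.0))  # estimated signal power
    n_hat = max(m2 - s_hat, 1e-12)                 # estimated noise power
    return 10.0 * np.log10(s_hat / n_hat)

# Quick check: a QPSK-like constant-modulus signal at 5 dB SNR.
rng = np.random.default_rng(1)
sym = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, 100_000))
noise = (rng.standard_normal(sym.size) + 1j * rng.standard_normal(sym.size)) / np.sqrt(2)
r = sym + noise * np.sqrt(10 ** (-5 / 10))
print(m2m4_snr_db(r))   # close to 5 dB for a large sample size
```

Consistent with the text above, the estimate tightens as the number of accumulated samples grows.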
The ASS algorithm evaluates the SNR by estimating the noise power and signal power separately. A significance probability level constant representing the fluctuation characteristics of additive white Gaussian noise is set. The frequency-domain samples of the received signal are divided into low- and high-frequency bands; the band with lower average energy is selected and again divided into low- and high-frequency bands, and the process repeats until the significance probability level of the current band is less than the set constant, which yields the estimated received signal power. The noise power is estimated in the opposite manner, requiring the significance probability level of the final band to be greater than the constant. Finally, the estimated SNR is obtained by substituting the two power estimates into the SNR formula.
Finally, the CNN-LSTM algorithm is analyzed. The segmented signal is processed by a CNN-LSTM network to extract feature vectors; the features are then fused by a fully connected layer, and the SNR value is calculated. This method uses one-dimensional signal data and achieves good performance, but its complexity is relatively high. A comparison of the related SNR evaluation methods is summarized in Table 1.
2.2. DL-Based Scheme
As deep learning becomes increasingly capable, its influence keeps expanding; it now has a complete theoretical framework and a substantial base of practical experience. It has been applied very effectively in various fields, such as target detection, speech recognition, video recommendation, text analysis, and medical diagnosis and treatment. In communication, deep learning methods have been applied to spectrum sensing and to the classification of signal modulation schemes, among other tasks.
A few studies have also combined deep learning with SNR evaluation. The DL-based algorithm proposed by Yang et al. [22] performs SNR estimation indirectly by estimating the target signal amplitude with a neural network containing five convolution layers operating on one-dimensional signal data. First, the received signal is segmented according to the input dimension of the deep neural network. Then, the trained network estimates the signal amplitude of each segment. Next, the amplitude of the target signal is calculated from the per-segment estimates. Finally, the SNR of the received signal is computed. This method applies to many modulation types, covers a broader effective SNR range, and is robust to phase and frequency shifts. This idea also inspires the present work: two-dimensional images of the signal are used as the input of the neural network, and the network model is then improved to make the SNR evaluation more accurate.
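The following Python sketch only mirrors the segment-and-average procedure described above; `amp_model` is a hypothetical stand-in for the trained five-layer network of [22], and the final conversion from estimated amplitude to SNR is our own assumption, since the exact power accounting is defined in [22].

```python
import numpy as np

def snr_from_amplitude(received: np.ndarray, amp_model, seg_len: int = 1024) -> float:
    """Sketch of segment-wise, amplitude-based SNR estimation.

    `amp_model` is a placeholder callable that maps one real-valued
    segment to an estimated target-signal amplitude.
    """
    # 1. Segment the received sequence to match the network input dimension.
    n_seg = len(received) // seg_len
    segments = received[: n_seg * seg_len].reshape(n_seg, seg_len)

    # 2-3. Estimate the amplitude per segment and average the estimates.
    amp_hat = float(np.mean([amp_model(seg) for seg in segments]))

    # 4. Assumed final step: signal power from the amplitude; noise power taken
    #    as the remainder of the measured received power.
    p_signal = amp_hat ** 2 / 2.0                  # real sinusoid of amplitude amp_hat
    p_noise = max(float(np.mean(received ** 2)) - p_signal, 1e-12)
    return 10.0 * np.log10(p_signal / p_noise)
```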
2.3. Existing Problem of Related Works
The existing DL-based SNR evaluation methods are relatively complex and pay little attention to the image features of the signal. Regarding datasets, because few algorithms address SNR evaluation, publicly available UAV communication RF signal datasets are lacking, which objectively increases the difficulty of model training. Since the data are received as a one-dimensional sequence, the spatial coupling introduced when generating two-dimensional datasets must not be too strong. Regarding network models, no existing neural network uses two-dimensional signal data as input for SNR evaluation. In summary, there is a need for a dataset generation method that preserves the signal features while avoiding too much interfering information, together with a targeted neural network model, which is the primary goal of this investigation.
3. SNR Evaluation Based on Proposed TP-CNN
In this section, the proposed DL-based SNR evaluation method is discussed. Firstly, a novel UAV remote control signal dataset is generated. Then, based on the constructed dataset, a two-path CNN model is proposed and trained.
3.1. The Generation of UAV Remote Signal Dataset
Before generating the UAV remote signal dataset, the drone frequency hopping signal must first be generated. A frequency hopping communication system mainly comprises a signal modulator, a frequency synthesizer, and a pseudo-random (PN) code generator. The PN code generator continuously and randomly generates the PN code and feeds it into the frequency synthesizer to control the carrier frequency. The transmitted signal is obtained by mixing the frequency generated by the frequency synthesizer with the baseband modulated signal; the final mixer output is the frequency hopping signal, as shown in Figure 1.
To obtain the UAV frequency hopping signal, Matlab is used for simulation modeling. Firstly, randomly generated binary source bits are MSK-modulated to obtain the baseband signal of the system. At the same time, carrier signals of the different frequencies required by the frequency hopping communication system are generated. The number of hop frequencies is 49, and the baseband signal is modulated with a different carrier in each frequency hopping period, yielding the drone frequency hopping signal. Finally, the signal passes through a small-scale flat-fading Rayleigh channel and is superimposed with noise at different signal-to-noise ratios. The superimposed noise levels are spaced at 1 dB intervals of SNR relative to the effective signal, starting at −10 dB and ending at 10 dB. After down-sampling, the one-dimensional signal sequence dataset is obtained. The frequency hopping signal is denoted as:
s(t) = A \sum_{n} g_{T_h}(t - nT_h) \cos\left(2\pi f_n t + \varphi_n\right), \quad f_n \in \{f_1, f_2, \ldots, f_N\}, (1)

where N is the number of frequency points, A denotes the amplitude of the signal, g_{T_h}(t) represents a rectangular window of width T_h, T_h is the period of one frequency hop, \{f_1, f_2, \ldots, f_N\} describes the set of frequencies of the drone hopping signal, and \varphi_n denotes the initial phase with \varphi_n \in [0, 2\pi). The expression of the signal after modulation is:

x(t) = m(t)\, s(t), (2)

where m(t) denotes the signal obtained after the original baseband data are modulated by the modulator, and x(t) expresses the signal sent out by the transmitter. The SNR is defined as:

\mathrm{SNR} = 10 \log_{10} \frac{P\left(x(t)\right)}{\sigma_n^2}, (3)

where P(\cdot) denotes the operation of calculating the average power and \sigma_n^2 signifies the variance of the noise.

After obtaining the one-dimensional signal dataset, the one-dimensional data need to be converted into two-dimensional data before being input into the neural network for training. The existing conversion methods mainly include the Toeplitz matrix, the frequency-domain waveform diagram, and the time-domain waveform diagram. The time-domain waveform diagram is adopted to generate the two-dimensional data in this paper. The time-domain waveform of a signal represents the curve of the signal over time: the horizontal axis represents the temporal order, and the vertical axis represents the amplitude of the signal at the current moment. Compared with frequency-domain diagrams, the envelope of a time-domain diagram more readily shows the energy or power characteristics of the signal. In addition, the Toeplitz matrix method spatially associates unrelated samples of the one-dimensional sequence, forcing the neural network to process irrelevant information and degrading the accuracy of the final classification results.
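To make this pipeline concrete, the Python sketch below generates a toy frequency hopping waveform, superimposes AWGN at a chosen SNR following Equation (3), and saves the time-domain waveform as a small two-dimensional image. The sample rate, carrier set, the binary stand-in for the MSK baseband, and the image size handling are illustrative assumptions, not the exact Matlab settings used in the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

fs, hop_period, n_hops = 1.0e6, 1.0e-3, 49                     # assumed sample rate and hop timing
hop_freqs = rng.permutation(np.linspace(50e3, 450e3, n_hops))  # PN-ordered carrier set (illustrative)

# Frequency hopping carrier: a new carrier frequency and phase in every hop interval, cf. Eq. (1).
t = np.arange(int(fs * hop_period * n_hops)) / fs
hop_idx = np.minimum((t // hop_period).astype(int), n_hops - 1)
phases = 2 * np.pi * rng.random(n_hops)
carrier = np.cos(2 * np.pi * hop_freqs[hop_idx] * t + phases[hop_idx])

# Binary stand-in for the MSK-modulated baseband m(t); the actual MSK modulator is omitted here.
baseband = np.sign(rng.standard_normal(t.size))
x = baseband * carrier                                          # transmitted signal, cf. Eq. (2)

# Superimpose AWGN at a target SNR, cf. Eq. (3).
snr_db = 2.0
noise_var = np.mean(x ** 2) / 10 ** (snr_db / 10)
r = x + rng.normal(scale=np.sqrt(noise_var), size=x.size)

# Convert the one-dimensional sequence into a two-dimensional time-domain waveform image.
fig = plt.figure(figsize=(1.28, 1.28), dpi=100)                 # roughly 128 x 128 pixels
plt.plot(r[:4096], linewidth=0.3)
plt.axis("off")
fig.savefig("fh_sample_2dB.png")
plt.close(fig)
```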
3.2. Feature Fusion
The idea of fusion is to combine different approaches to a problem in order to achieve better results. Fusion includes, but is not limited to, the direct combination of results; it can also occur at various stages of problem solving, with the aim of strengthening the findings at that stage. It has been used in many problems, particularly classification problems, with specific methods including voting mechanisms and weighted averages.
To optimize the performance of the deep learning algorithm for SNR evaluation, the model is further improved. In previous work, many neural network models have influenced the classification results by modifying the depth, parameters, and other settings of a single-path neural network similar to the one mentioned above. Moreover, the comparative experiments show that none of the participating models performed best at all SNRs, i.e., the same test image processed with different convolution kernels may yield inconsistent classification results, because different convolution kernels extract different features.
Feature fusion combines features from different levels or branches [23] and is a ubiquitous part of modern network architectures, allowing the proposed model to use as many features as possible for classification and reducing randomness. Fusing multiple features is an effective technique in visual and multimedia applications. Therefore, if one feature extraction method can be overlaid with another, complementary features can be exploited and small features are less likely to be missed. We therefore argue that using them together is better than using either one alone, and this complementary fusion of features is formed at the fully connected layer. The literature shows that even simple fusion schemes, such as adding or multiplying features, can significantly improve results as long as the features are complementary [24]. In networks with deep layers (e.g., ResNet [25]), deep features are often fused with shallow features to better weigh global and local information. The Inception module in GoogLeNet [26] likewise collects multi-scale information by stacking different sub-networks.
The first, concat method joins the features to be fused, so the dimension of the fused vector is the sum of the dimensions of the original vectors, as shown in Figure 2a. The second, add method combines the feature vectors element-wise into a composite vector, so the size of the fused vector equals the maximum dimension of the original vectors, as shown in Figure 2b.
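A minimal PyTorch illustration of the two fusion modes in Figure 2, with purely illustrative tensor shapes:

```python
import torch

# Feature maps from two branches; both fusion modes need matching spatial sizes.
feat_a = torch.randn(8, 16, 32, 32)   # (batch, channels, height, width)
feat_b = torch.randn(8, 16, 32, 32)

# concat: channel dimensions are summed (16 + 16 = 32 channels).
fused_concat = torch.cat([feat_a, feat_b], dim=1)

# add: element-wise sum; the fused map keeps the (maximum) channel count, here 16.
fused_add = feat_a + feat_b

print(fused_concat.shape)   # torch.Size([8, 32, 32, 32])
print(fused_add.shape)      # torch.Size([8, 16, 32, 32])
```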
3.3. Proposed TP-CNN Model
In the proposed model, the original signal data are fed into two separate convolution modules with different structures and parameters; the two modules are parallel and uncorrelated. After features are extracted with the different convolution kernels, they are fused: the fused feature vector is obtained by concatenating (concat) the feature vectors produced by the two networks. All the features are then fed into the fully connected layer for a linear transformation. The investigation of the baseline methods shows that, although the single-path convolution neural network with a 7 × 7 convolution kernel performs worse than that with a 3 × 3 kernel, its results are still acceptable and its F1 values are better than those obtained with a 5 × 5 kernel. Therefore, the additional features it extracts can assist the original model and yield better results. Based on the above analysis, the structure of the proposed Two-Path CNN (TP-CNN) is shown in Figure 3. The pseudo code of the proposed algorithm is shown in Algorithm 1.
Algorithm 1. Proposed SNR evaluation algorithm.
Input: UAV remote control signal. Output: Classification result.
Figure 4 shows the work flow. Based on their position in the schematic, the model is divided into two paths, both of which use the same 128 × 128 input and maximum pooling, with a convolution stride of 1. The upper path consists of a convolution layer and a pooling layer with a 7 × 7 convolution kernel, which extracts larger features. The lower path consists of a convolution layer and a pooling layer with a 3 × 3 convolution kernel, which mainly extracts smaller and more subtle features. The outputs of the two paths are fused after padding the edges of the image so that the feature maps have the same size.
Figure 5 shows an example of a 12 × 12 input with edge filling. The blank area in the center indicates the input, and the blue part indicates the filling required to fuse the features. In our experiments, the padding value is 0.
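As an illustration of this structure, the following PyTorch sketch wires up two parallel paths with 7 × 7 and 3 × 3 kernels, zero padding that keeps both 128 × 128 feature maps the same size, stride-1 convolutions with max pooling, concat fusion, and a fully connected classifier. The channel widths, layer counts, and classifier sizes are illustrative assumptions rather than the exact TP-CNN configuration.

```python
import torch
import torch.nn as nn

class TPCNNSketch(nn.Module):
    """Minimal sketch of the two-path structure described above."""

    def __init__(self, num_classes: int = 21):
        super().__init__()
        # Upper path: 7x7 kernels for coarser features (padding=3 preserves 128x128).
        self.path_a = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7, stride=1, padding=3), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Lower path: 3x3 kernels for finer features (padding=1 preserves 128x128).
        self.path_b = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Fully connected classifier operating on the concatenated feature maps.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 64 * 64, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.path_a(x), self.path_b(x)], dim=1)  # concat fusion
        return self.classifier(fused)

model = TPCNNSketch()
logits = model(torch.randn(4, 1, 128, 128))   # four 128x128 single-channel signal images
print(logits.shape)                           # torch.Size([4, 21])
```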
4. Experimental and Analysis
4.1. Precise Dataset
This paper focuses on the SNR evaluation of UAV signals in complex electromagnetic environments. Since no open-source UAV signal datasets are available on the Internet or in the literature, the datasets used in this study are generated by simulation based on the characteristics of UAV signals. The classification interval between the different categories of data is 1 dB, and the data are classified into 21 categories. For practical SNR evaluation, however, such an interval is still very coarse, so signals with smaller intervals (higher precision) are generated to increase the number of classes and to further validate and optimize the neural network model. In practice, the dataset can be collected and generated within an acceptable accuracy range.
Following the dataset generation process, signals were regenerated in the range of −10 dB to 10 dB with additional 0.2 dB steps, adding signal-to-noise ratios of −8.4, −8.2, −5.8, −5.6, −5.4, −5.2, −2.8, −2.6, −2.4, −2.2, 2.2, 2.4, 2.6, 2.8, 5.2, 5.4, 5.6, 5.8, 8.2, and 8.4 dB. The signals are pre-processed in the same way as described above. For example, the data in the range of 2 dB to 3 dB are shown in Figure 6. As can be seen, the amount of data in the updated dataset has increased significantly. For a classification model, the more categories that need to be distinguished, the lower the accuracy; the increased number of classes therefore adds more uncertainty to the classification performance.
4.2. Performance of the Algorithm on the Generated Dataset
After the dataset update, the number of classification categories was nearly doubled. The expanded data were pre-processed to verify the performance of the DL-based SNR evaluation algorithm, then fed into the neural network and compared with the classification results before the update. The 2D-CNN is a traditional model that mainly follows the classical LeNet-5 convolution neural network; it consists of a convolution module followed by a fully connected module and is lightweight compared with existing deep learning models [27]. The structure of the 2D-CNN is shown in Figure 7. To analyze the performance of the proposed model on the generated dataset, the mean absolute error (MAE) and mean relative error (MRE) are used as training error metrics. MAE measures the deviation between the true and predicted values: the smaller the MAE, the better the model and the more accurate the prediction. MRE is the average ratio of the absolute error to the true value and reflects the reliability of the measurement. MAE and MRE are defined as follows:
\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|, (4)

where n is the number of samples, y_i denotes the true value, and \hat{y}_i denotes the predicted value. Similarly, the mean relative error (MRE) is denoted as:

\mathrm{MRE} = \frac{1}{n} \sum_{i=1}^{n} \frac{\left| y_i - \hat{y}_i \right|}{\left| y_i \right|}. (5)
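A small NumPy helper implementing Equations (4) and (5); the sample labels are illustrative, and the MRE computation assumes no true label is exactly zero (a 0 dB label would otherwise need special handling).

```python
import numpy as np

def mae_mre(y_true: np.ndarray, y_pred: np.ndarray) -> tuple[float, float]:
    """Mean absolute error and mean relative error, following Eqs. (4) and (5)."""
    abs_err = np.abs(y_true - y_pred)
    mae = float(np.mean(abs_err))
    mre = float(np.mean(abs_err / np.abs(y_true)))   # assumes no zero-valued true labels
    return mae, mre

# Illustrative true/predicted SNR labels in dB.
y_true = np.array([-8.4, -5.6, 2.2, 5.8, 8.4])
y_pred = np.array([-8.2, -5.6, 2.4, 5.8, 8.2])
print(mae_mre(y_true, y_pred))   # MAE = 0.12
```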
Table 2 compares the evaluation metric values of the three algorithms. Although the performance of the baseline model is reduced, it still achieves good results compared with the two existing algorithms. The mean absolute error of the 2D-CNN decreases with the added data because the interval between categories, and hence the gap between neighboring SNR values, is smaller. In other words, the finer SNR granularity reduces the average error but also increases the classification uncertainty of the neural network owing to the larger number of categories and the smaller gaps.
4.3. Results and Discussions
The main objective of this experiment is to analyze how the proposed two-path convolution neural network-based SNR evaluation model performs when classifying signals with different SNRs. The updated two-dimensional dataset was fed into the two-path SNR evaluation model described above, the classification results were recorded, and the evaluation metrics were compared with those of the baseline model and the two known mathematically driven algorithms. The comparison results are shown in Table 3.
Relative and absolute errors continue to be used as the metrics for assessing the algorithms. Figure 8 and Figure 9 show the absolute and relative errors of the four algorithms at different signal-to-noise ratios, represented by four different colors. Both deep learning models clearly perform very well across SNRs; since the errors of both are relatively small, their differences are not obvious in the plots. On average, although the new method performs less well at a few individual SNRs, the two-path neural network overall has smaller absolute and relative errors than the other algorithms, and its error on the test set is zero when the SNR is greater than −2 dB. Figure 10 and Figure 11 show the mean absolute and mean relative errors of the four algorithms, respectively.
The mean absolute error and mean relative error of the proposed TP-CNN and the baseline methods are shown in Figure 10 and Figure 11, respectively. In terms of mean absolute error, TP-CNN is reduced by approximately one third compared with 2D-CNN and by roughly a factor of five compared with the better of the two mathematically driven algorithms. In terms of mean relative error, TP-CNN is almost 0.01 lower than 2D-CNN and about one tenth of that of the conventional algorithms. These results show that the proposed TP-CNN performs better on the dataset and has a clear performance advantage, supported by its 92.16% accuracy on the test set.
5. Conclusions
This paper proposed a two-path convolution neural network-based SNR evaluation method for UAV communication links. Firstly, the experimental dataset was expanded to improve its precision, and its usefulness was verified by feeding it into the baseline neural network model. Then, the baseline model was optimized by introducing feature fusion: two different convolution neural networks are combined in parallel. Finally, the performance of the proposed TP-CNN model was evaluated and compared with the baseline 2D-CNN model and two existing algorithms. The simulation results show that the proposed model is clearly superior to the baseline model in terms of MAE and MRE.
Author Contributions: Conceptualization, Y.X. and Y.Y.; methodology, Y.X.; software, Y.X.; validation, Y.X., Y.Y. and X.J.; formal analysis, Y.X.; investigation, Y.X.; resources, Y.X.; data curation, Y.Y.; writing—original draft preparation, Y.X.; writing—review and editing, Y.X.; visualization, Y.Y.; supervision, X.J.; project administration, Y.Y.; funding acquisition, Y.X. All authors have read and agreed to the published version of the manuscript.
Not applicable.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: The datasets used and analyzed during the present study are available from the corresponding author upon reasonable request.
Conflicts of Interest: The authors declare that they have no conflict of interest.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1. The generation of Unmanned Aerial Vehicles (UAV) remote signal dataset.
Figure 3. Two-path convolution neural network signal-to-noise ratio evaluation model.
Table 1. The comparison of related signal-to-noise ratio (SNR) evaluation methods.
| SNR Evaluation | Contribution |
|---|---|
| ML | Obtain the joint probability density function of the receiving channel based on the probability density function of noise. |
| M2M4 | Use the relation between variance and kurtosis to evaluate the SNR. |
| ASS | Estimate noise power and signal power respectively to evaluate the SNR. |
| CNN-LSTM | Evaluate SNR through the full connection layer that fuses the features. |
Table 2. Performance metrics of the three algorithms for SNR evaluation after the dataset update.
| Network | Mean Absolute Error | Mean Relative Error |
|---|---|---|
| 2D-CNN | 0.061473 | 0.034582 |
| M2M4 | 0.61398 | 0.426722 |
| ASS | 0.25621 | 0.292374 |
Table 3. Performance metrics of the four algorithms for SNR evaluation after the dataset update.
| Network | Mean Absolute Error | Mean Relative Error |
|---|---|---|
| TP-CNN | 0.042239 | 0.025826 |
| 2D-CNN | 0.061473 | 0.034582 |
| M2M4 | 0.61398 | 0.426722 |
| ASS | 0.25621 | 0.292374 |
References
1. Takahashi, K.; Roberts, R.; Jiang, Z.; Memarzadeh, B. Statistical Evaluation of Signal-to-Noise Ratio and Timing Jitter in Equivalent-Time Sampling Signals. IEEE Trans. Instrum. Meas.; 2021; 70, 8003804. [DOI: https://dx.doi.org/10.1109/TIM.2021.3078003]
2. Im, S.; Powers, E.J. An algorithm for estimating signal-to-noise ratio of UWB signals. IEEE Trans. Veh. Technol.; 2005; 54, pp. 1905-1908. [DOI: https://dx.doi.org/10.1109/TVT.2005.851339]
3. Chen, Y.; Ji, Y.; Zhou, J.; Chen, X.; Shen, W. Computation of signal-to-noise ratio of airborne hyperspectral imaging spectrometer. Proceedings of the 2012 International Conference on Systems and Informatics (ICSAI2012); Yantai, China, 19–20 May 2012; pp. 1046-1049. [DOI: https://dx.doi.org/10.1109/ICSAI.2012.6223191]
4. Mozaffari, M.; Saad, W.; Bennis, M.; Nam, Y.-H.; Debbah, M. A Tutorial on UAVs for Wireless Networks: Applications, Challenges, and Open Problems. IEEE Commun. Surv. Tutor.; 2019; 21, pp. 2334-2360. [DOI: https://dx.doi.org/10.1109/COMST.2019.2902862]
5. Zhou, Q.; Zhang, R.; Mu, J.; Zhang, H.; Zhang, F.; Jing, X. AMCRN: Few-Shot Learning for Automatic Modulation Classification. IEEE Commun. Lett.; 2022; 26, pp. 542-546. [DOI: https://dx.doi.org/10.1109/LCOMM.2021.3135688]
6. Cui, Y.; Liu, F.; Jing, X.; Mu, J. Integrating Sensing and Communications for Ubiquitous IoT: Applications, Trends, and Challenges. IEEE Netw.; 2021; 35, pp. 158-167. [DOI: https://dx.doi.org/10.1109/MNET.010.2100152]
7. Noh, D.I.; Jeong, S.G.; Hoang, H.T.; Pham, Q.V.; Huynh-The, T.; Hasegawa, M.; Sekiya, H.; Kwon, S.Y.; Chung, S.H.; Hwang, W.J. Signal Preprocessing Technique with Noise-Tolerant for RF-Based UAV Signal Classification. IEEE Access; 2022; 10, pp. 134785-134798. [DOI: https://dx.doi.org/10.1109/ACCESS.2022.3232036]
8. Ouamri, M.A.; Alkanhel, R.; Gueguen, C.; Alohali, M.A.; Ghoneim, S.S.M. Modeling and analysis of uav-assisted mobile network with imperfect beam alignment. Comput. Mater. Contin.; 2023; 74, pp. 453-467. [DOI: https://dx.doi.org/10.32604/cmc.2023.031450]
9. Cardoso, C.M.; Barros, F.J.; Carvalho, J.A.; Machado, A.A.; Cruz, H.A.; de Alcântara Neto, M.C.; Araújo, J.P. SNR Prediction with ANN for UAV Applications in IoT Networks Based on Measurements. Sensors; 2022; 22, 5233. [DOI: https://dx.doi.org/10.3390/s22145233] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35890914]
10. Huynh-The, T.; Nguyen, T.-V.; Pham, Q.-V.; da Costa, D.B.; Kwon, G.-H.; Kim, D.-S. Efficient Convolutional Networks for Robust Automatic Modulation Classification in OFDM-Based Wireless Systems. IEEE Syst. J.; 2023; 17, pp. 964-975. [DOI: https://dx.doi.org/10.1109/JSYST.2022.3207377]
11. Ouamri, M.A.; Alkanhel, R.; Singh, D.; El-kenaway, E.M.; Ghoneim, S.S.M. Double deep q-network method for energy efficiency and throughput in a uav-assisted terrestrial network. Comput. Syst. Sci. Eng.; 2023; 46, pp. 73-92. [DOI: https://dx.doi.org/10.32604/csse.2023.034461]
12. He, D.; Qiao, Y.; Chen, S.; Du, X.; Chen, W.; Zhu, S.; Guizani, M. A Friendly and Low-Cost Technique for Capturing Non-Cooperative Civilian Unmanned Aerial Vehicles. IEEE Netw.; 2019; 33, pp. 146-151. [DOI: https://dx.doi.org/10.1109/MNET.2018.1800065]
13. Pauluzzi, D.R.; Beaulieu, N.C. A Comparison of SNR Estimation Techniques for the AWGN Channel. IEEE Trans. Commun.; 2000; 48, pp. 1681-1691. [DOI: https://dx.doi.org/10.1109/26.871393]
14. Song, Q.; Hamouda, W. Performance analysis and optimization of multiselective scheme for cooperative sensing in fading channels. IEEE Trans. Veh. Technol.; 2016; 65, pp. 358-366. [DOI: https://dx.doi.org/10.1109/TVT.2015.2392853]
15. Karastergios, E.; Sumanasena, M.; Evans, B.G. Simple SNR estimator for mobile fading channels. Electron. Lett.; 2003; 39, pp. 244-245. [DOI: https://dx.doi.org/10.1049/el:20030110]
16. Zhang, R.; Jiang, C.; Wu, S.; Zhou, Q.; Jing, X.; Mu, J. Wi-Fi Sensing for Joint Gesture Recognition and Human Identification From Few Samples in Human-Computer Interaction. IEEE J. Sel. Areas Commun.; 2022; 40, pp. 2193-2205. [DOI: https://dx.doi.org/10.1109/JSAC.2022.3155526]
17. Mu, J.; Jing, X.; Zhang, Y.; Gong, Y.; Zhang, R.; Zhang, F. Machine Learning-Based 5G RAN Slicing for Broadcasting Services. IEEE Trans. Broadcast.; 2022; 68, pp. 295-304. [DOI: https://dx.doi.org/10.1109/TBC.2021.3122353]
18. Zhang, R.; Jing, X.; Wu, S.; Jiang, C.; Mu, J.; Yu, F.R. Device-Free Wireless Sensing for Human Detection: The Deep Learning Perspective. IEEE Internet Things J.; 2021; 8, pp. 2517-2539. [DOI: https://dx.doi.org/10.1109/JIOT.2020.3024234]
19. Zhang, S.; Bao, Z. An Adaptive Spectrum Sensing Algorithm under Noise Uncertainty. Proceedings of the IEEE International Conference on Communications; Kyoto, Japan, 5–9 June 2011; pp. 1-5.
20. Li, H.; Wang, D.; Zhang, X.; Gao, G. Recurrent Neural Networks and Acoustic Features for Frame-Level Signal-to-Noise Ratio Estimation. IEEE/ACM Trans. Audio Speech Lang. Process.; 2021; 29, pp. 2878-2887. [DOI: https://dx.doi.org/10.1109/TASLP.2021.3107617]
21. Dariusz, Z.; Tomasz, B.; Pawel, F. Analysis of interferences in the acoustic measurement of partial discharges of electric power transformers. Proceedings of the INTER-NOISE and NOISE-CON Congress and Conference Proceedings; San Francisco, CA, USA, 9–12 August 2015.
22. Yang, K.; Huang, Z.; Wang, X.; Wang, F. An SNR Estimation Technique Based on Deep Learning. Electronics; 2019; 8, 1139. [DOI: https://dx.doi.org/10.3390/electronics8101139]
23. Lu, L.; Li, H.; Ding, Z.; Guo, Q. An improved target detection method based on multiscale features fusion. Microw. Opt. Technol. Lett.; 2020; 62, pp. 3051-3059. [DOI: https://dx.doi.org/10.1002/mop.32409]
24. Yang, T.Y.; Hsu, J.H.; Lin, Y.Y.; Chuang, Y.Y. DeepCD: Learning Deep Complementary Descriptors for Patch Representations. Proceedings of the IEEE International Conference on Computer Vision (ICCV); Venice, Italy, 22–29 October 2017.
25. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Las Vegas, NV, USA, 27–30 June 2016; pp. 770-778.
26. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Boston, MA, USA, 7–12 June 2015.
27. Yang, Y.; Jing, X.; Mu, J.; Gao, H. SNR Estimation of UAV Control Signal Based on Convolutional Neural Network. Proceedings of the 2021 International Wireless Communications and Mobile Computing (IWCMC); Harbin, China, 28 June–2 July 2021; pp. 780-784.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
With the development of information technology, unmanned aerial vehicles (UAVs) have become an indispensable part of daily life and have brought great convenience. Evaluating the signal-to-noise ratio (SNR) of the UAV communication link is vital to improving communication performance between the UAV and the user. The classical SNR evaluation schemes for UAV communication links are limited in performance, while deep learning (DL)-based schemes usually come at the expense of computational complexity. To address these issues, a two-path convolution neural network (TP-CNN) is proposed to evaluate the SNR of the UAV communication link. Firstly, a two-dimensional dataset of UAV control signals is built and then expanded. Then the TP-CNN model is designed and refined through feature fusion of the input samples. Finally, simulations are conducted, and the results indicate that the performance of the proposed model is superior to that of the baseline model in terms of mean absolute error (MAE) and mean relative error (MRE).