Xinzheng Zhang,1 Zhouyong Liu,1 Shujun Liu,1 and Guojun Li2
Academic Editor: Sven Nordholm
1 College of Communication Engineering, Chongqing University, Chongqing 400044, China
2 Department of Communication Commanding, Chongqing Communication Institute, Chongqing 400035, China
Received 12 December 2014; Revised 4 May 2015; Accepted 25 May 2015; Published 29 June 2015
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Automatic target recognition (ATR) has become an important research topic for military applications, and it is important to develop efficient and robust ATR algorithms, especially for ground target identification in battlefield surveillance. This paper focuses on ATR with synthetic aperture radar (SAR), which is widely used in surveillance tasks owing to its all-weather capability and other advantages [1]. Unlike passive vision systems, SAR images are formed from the reflections of a coherent source; they are difficult to interpret directly, and their characteristics vary quickly and abruptly with small changes in azimuth and depression angle. Because of the unique characteristics of the SAR image formation process, such as specular reflection and multiple bounces, it is very difficult to extract effective features for ATR in the way done for optical images. For SAR ATR, a broad class of feature extraction methods works on two-dimensional SAR images [2-4], while an alternative class works on one-dimensional high range resolution profiles (HRRPs) obtained from SAR [5]. The latter is advantageous when SAR target images are blurred by target motion. However, due to the large footprint of the SAR illuminating beam, it is very difficult to separate target HRRPs directly from SAR raw echoes, which include a large amount of ground clutter. In contrast, HRRPs can be converted from SAR images after the ground clutter has been removed by target segmentation in the image domain [5, 6].
Features extracted from HRRPs via the RELAX algorithm have been investigated for target recognition [5]. Power spectrum features extracted from HRRPs have been employed for SAR ATR [6]. HRRP superresolution scattering center features have also been used to recognize SAR targets [7]. In HRRP-based SAR ATR, the key is to extract robust and effective features from the HRRPs.
In terms of HRRP feature extraction, it has been found that exploiting target time-frequency (T-F) signatures can be effective for target discrimination. Kim et al. derived geometrical moment features in the joint T-F domain [8]. Raj et al. developed methods for the T-F analysis of human-gait radar signals [9]. T-F analysis techniques have also been applied successfully to other radar problems, as shown in [10-12].
The most critical issue when using time-frequency features for radar target recognition is reducing the dimension of the time-frequency plane data while preserving as much discriminative information as possible. This paper develops a new feature extraction technique for the T-F domain based on nonnegative matrix factorization (NMF). Over the last decade, NMF has emerged as a useful feature extraction method in areas such as speech recognition and image processing [13-15].
In this paper, we propose a new SAR ATR strategy based on the adaptive Gaussian representation (AGR) and NMF. First, we construct the time-frequency matrix of each HRRP using the AGR. Then, NMF is performed on the time-frequency matrix to obtain a base matrix composed of spectral vectors and a coefficient matrix composed of temporal vectors. The nonnegativity constraints of NMF lead to a parts-based representation, because they allow only additive combinations, which captures the distinctive information in the time-frequency matrix. Finally, we extract several novel features from the spectral and temporal vectors so that they jointly represent the time-frequency signature of the HRRP. Experiments with the proposed approach are performed using an HMM classifier over the 10-target MSTAR public datasets.
2. SAR Images to HRRPs
The MSTAR public SAR dataset is used in the experiments to evaluate the proposed algorithm. The data consist of image chips, which, as discussed above, must be converted into HRRPs by several filtering operations. A brief procedure for converting MSTAR SAR image chips to HRRPs is given below [5] and summarized in Figure 1.
Figure 1: The procedure from a SAR image to HRRPs.
[figure omitted; refer to PDF]
Consider a complex-valued SAR image [figure omitted; refer to PDF] , where [figure omitted; refer to PDF] indexes the downrange dimension and [figure omitted; refer to PDF] the cross-range dimension. A two-dimensional (2D) inverse FFT is applied to [figure omitted; refer to PDF] to obtain the corresponding phase history data. Next, deconvolution of the weighting and removal of the zero-padding are performed on the phase history data, to undo the window weighting and zero-padding applied during SAR image formation.
Then, a 2D FFT is applied to produce a deconvolved, Nyquist-sampled image [figure omitted; refer to PDF] . Note that both the target and the surrounding clutter are present in [figure omitted; refer to PDF] ; to remove the clutter, a target segmentation procedure is applied to the image [figure omitted; refer to PDF] . Finally, an inverse FFT is performed in the cross-range dimension for all [figure omitted; refer to PDF] , after which each [figure omitted; refer to PDF] -dependent waveform, for a fixed [figure omitted; refer to PDF] , corresponds to one HRRP.
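The chain above can be sketched in a few lines of NumPy. This is an illustrative assumption, not the paper's implementation: the function name is hypothetical, the window deconvolution and zero-padding removal are treated as identity placeholders (the real weights are sensor-specific), and a precomputed 0/1 segmentation mask is assumed.

```python
# Sketch of the SAR-image-to-HRRP chain (illustrative; steps 2 is a placeholder).
import numpy as np

def sar_image_to_hrrps(img, target_mask):
    """img: complex SAR chip (downrange x cross-range); target_mask: 0/1 segmentation."""
    # 1) Back to the phase-history domain.
    ph = np.fft.ifft2(img)
    # 2) Deconvolve the image-formation window and strip zero-padding
    #    (identity here; the actual weighting depends on the imaging system).
    # 3) Forward 2-D FFT -> deconvolved, Nyquist-sampled image.
    img2 = np.fft.fft2(ph)
    # 4) Clutter removal by target segmentation in the image domain.
    img2 = img2 * target_mask
    # 5) Inverse FFT along cross-range: for each fixed cross-range frequency,
    #    the resulting downrange waveform is one HRRP.
    hrrps = np.fft.ifft(img2, axis=1)
    return np.abs(hrrps)
```

Each column of the output then plays the role of one HRRP in the subsequent time-frequency analysis.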
3. HRRP Time-Frequency Representation by AGR
Time-frequency (T-F) analysis techniques have long been used for feature extraction, radar imaging, and related tasks. Several T-F approaches exist, such as the short-time Fourier transform (STFT), the Wigner-Ville distribution (WVD), and the adaptive Gaussian representation (AGR). Compared with the other approaches, the AGR can decompose the backscattered signal into T-F centers corresponding to scattering centers and local resonances with high T-F resolution. The AGR has been successfully applied to ISAR imaging, the diagnosis of complicated scattering, and radar target classification [8]. In this paper, we adopt the AGR for T-F feature extraction from HRRPs.
AGR expands a HRRP in time-domain [figure omitted; refer to PDF] in terms of normalized Gaussian elementary functions [figure omitted; refer to PDF] with an adjustable T-F center [figure omitted; refer to PDF] and a variance [figure omitted; refer to PDF] [figure omitted; refer to PDF] where [figure omitted; refer to PDF] The adjustable parameters [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] for Gaussian basis functions and [figure omitted; refer to PDF] for the coefficient can be obtained such that [figure omitted; refer to PDF] is most similar to [figure omitted; refer to PDF] : [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is the remainder after the orthogonal projection of [figure omitted; refer to PDF] onto [figure omitted; refer to PDF] and this iterative procedure is described as [figure omitted; refer to PDF]
Since the projection integral in (3) is the Fourier transform of [figure omitted; refer to PDF] with the Gaussian window [figure omitted; refer to PDF] , the adjustable T-F center [figure omitted; refer to PDF] and associated variance [figure omitted; refer to PDF] can be obtained using FFT and the specific search procedure in (2). [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] finally obtained give the solution of (3) and these four parameters completely describe one Gaussian T-F basis function at the [figure omitted; refer to PDF] th iteration.
After [figure omitted; refer to PDF] stages of AGR decomposition, the following relationships hold: [figure omitted; refer to PDF] Therefore, the AGR iteration in (4) continues until the reconstruction error [figure omitted; refer to PDF] is sufficiently small; hence, the upper limit [figure omitted; refer to PDF] is determined.
After [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , and [figure omitted; refer to PDF] , [figure omitted; refer to PDF] , are obtained via AGR processing, the T-F matrix, which represents a signal energy distribution in the joint T-F plane, [figure omitted; refer to PDF] , is given by [figure omitted; refer to PDF]
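The greedy decomposition and the resulting T-F energy matrix can be illustrated with the sketch below. It is a simplification under stated assumptions: a coarse grid search over (time center, frequency bin, variance) stands in for the paper's FFT-based search, the Gaussian spreads in the energy distribution follow the atom variances with an illustrative normalization, and both function names are hypothetical.

```python
# Greedy adaptive-Gaussian (Gabor) decomposition and T-F energy matrix (sketch).
import numpy as np

def agr_decompose(s, n_atoms=10, sigmas=(2.0, 4.0, 8.0)):
    """Iteratively pick the Gaussian atom with the largest projection onto
    the residual, then subtract that projection (matching pursuit)."""
    n = len(s)
    t = np.arange(n)
    r = s.astype(complex).copy()
    atoms = []
    for _ in range(n_atoms):
        best_mag, best = -1.0, None
        for sigma in sigmas:
            for tc in range(0, n, 2):               # coarse time-centre grid
                env = np.exp(-((t - tc) ** 2) / (2.0 * sigma ** 2))
                env /= np.linalg.norm(env)          # unit-norm envelope
                # FFT of (residual * envelope) gives the projection onto the
                # atom env(t) * exp(j 2 pi k t / n) for every frequency bin k.
                spec = np.fft.fft(r * env)
                k = int(np.argmax(np.abs(spec)))
                if np.abs(spec[k]) > best_mag:
                    best_mag, best = np.abs(spec[k]), (spec[k], tc, k, sigma, env)
        c, tc, k, sigma, env = best
        r -= c * env * np.exp(2j * np.pi * k * t / n)   # remove the projection
        atoms.append((c, tc, k, sigma))
    return atoms, r

def tf_matrix(atoms, n_time, n_freq):
    """Nonnegative energy distribution of the atoms on a (time, frequency) grid."""
    t = np.arange(n_time)[:, None]
    f = np.arange(n_freq)[None, :]
    E = np.zeros((n_time, n_freq))
    for c, tc, k, sigma in atoms:
        fk = k * n_freq / n_time                    # map FFT bin to the freq axis
        E += (np.abs(c) ** 2
              * np.exp(-((t - tc) ** 2) / sigma ** 2)
              * np.exp(-((f - fk) ** 2) * sigma ** 2 * (2 * np.pi / n_time) ** 2))
    return E
```

Because each Gaussian energy term is nonnegative, the resulting matrix is nonnegative by construction, which is what makes the NMF step of Section 4 applicable.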
The dimension of each [figure omitted; refer to PDF] is the same, determined by the number of time-domain samples of each HRRP and the number of frequency-domain samples in the AGR. In this paper, each [figure omitted; refer to PDF] is one matrix of [figure omitted; refer to PDF] . Compared with the well-known WVD technique, the T-F matrix produced by the AGR gives a joint T-F distribution that is nonnegative and free of cross-term interference [8]. Compared with wavelet decompositions, which form a complete set, AGR atoms are not restricted to a regular sampling grid, nor are they constrained by the number of data samples. In radar target electromagnetic feature extraction, the scattering mechanisms are usually complicated, so for an accurate representation of a radar signature it is desirable to place the elementary functions on a flexible sampling grid, as in AGR processing, rather than on the regular grid used by the wavelet decomposition methods mentioned above. The advantage of AGR processing for radar applications is well described in [8]. In Figure 2, the T-F matrices of several HRRPs from different targets computed with the WVD are compared with those computed with the AGR; the comparison shows that the AGR is superior to the WVD, especially with respect to cross-term interference.
Figure 2: Comparison between T-F matrices using WVD and T-F matrices using AGR.
[figure omitted; refer to PDF]
4. Time-Frequency Matrix Feature Extraction Using NMF
The next stage is to derive T-F features from the T-F matrices of the HRRPs. Several matrix decomposition techniques are available, such as principal component analysis (PCA), independent component analysis (ICA), and NMF. Each technique optimizes a different criterion: PCA finds a set of orthogonal bases that minimizes the mean squared error of the reconstructed data; ICA decomposes a dataset into components that are as independent as possible; NMF decomposes a nonnegative matrix into nonnegative components [16].
Compared with other matrix decomposition techniques, NMF recovers components of the original matrix with better representation and localization properties [17, 18]. Therefore, the features extracted from the HRRP T-F matrix by NMF represent the HRRP signature with better time and frequency localization. The basic principle of NMF for the HRRP T-F matrix is to find a locally optimal factorization of the matrix into two submatrices: the first, called the base matrix, represents the spectra of the scattering events in the HRRP, and the second, called the coefficient matrix, represents the temporal characteristics of those scattering events. In this paper, after the HRRP T-F matrix is decomposed by NMF, T-F features are extracted from the base matrix and the coefficient matrix.
4.1. Definitions of NMF
Given a matrix [figure omitted; refer to PDF] and a constant [figure omitted; refer to PDF] , NMF computes two matrices [figure omitted; refer to PDF] and [figure omitted; refer to PDF] , such that [figure omitted; refer to PDF]
4.2. NMF Factorization Algorithm
Factorization in the NMF approach is usually achieved by iterative minimization of a cost function. In this work, we choose the following function: [figure omitted; refer to PDF]
The function [figure omitted; refer to PDF] has turned out to yield perceptually good results at a reasonable computational cost, which can be minimized iteratively with the multiplicative update [figure omitted; refer to PDF]
This factorization algorithm is the basis for several recent NMF-based techniques [19]. The value of the parameter [figure omitted; refer to PDF] is determined by the iteration number of NMF; here, [figure omitted; refer to PDF] is set to 10 according to the experiments. Figure 3 shows the vectors of the base matrix and the coefficient matrix obtained by NMF for each AGR T-F matrix in Figure 2.
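The multiplicative-update iteration can be sketched as follows. Since the cost function itself appears only as an omitted equation in this copy, the squared Frobenius norm with the corresponding Lee-Seung updates [19] is shown as one representative choice, not necessarily the paper's exact cost; the function name and the small stabilizing constant are assumptions.

```python
# Lee-Seung multiplicative updates for min ||V - W H||_F^2 (one standard variant).
import numpy as np

def nmf(V, r=10, n_iter=200, eps=1e-9, seed=0):
    """Factor a nonnegative matrix V (m x n) into W (m x r) and H (r x n).
    Multiplicative updates preserve nonnegativity of W and H at every step."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update temporal coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update spectral bases
    return W, H
```

Because the updates multiply by nonnegative ratios, negative entries can never appear, which is exactly the property that yields the parts-based representation discussed above.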
Figure 3: NMF of the AGR matrices of three targets.
[figure omitted; refer to PDF]
4.3. NMF Feature Extraction
In NMF feature extraction, we assume a linear signal model for the HRRP, whose time-frequency distribution can be expressed as a linear combination of the spectra of several distinct scattering centers with different temporal characteristics. The coefficients are therefore restricted to be nonnegative. One can interpret the columns of [figure omitted; refer to PDF] as spectral components and the corresponding rows of [figure omitted; refer to PDF] as their time-varying gains.
For spectral vectors of the basis matrix [figure omitted; refer to PDF] , features are extracted utilizing several general spectral characteristic parameters which include spectral centroid, spectral standard deviation normalized by the centroid, skewness, and kurtosis. For a given spectral vector [figure omitted; refer to PDF] of basis matrix [figure omitted; refer to PDF] , spectral centroid [figure omitted; refer to PDF] , spectral standard deviation normalized by the centroid [figure omitted; refer to PDF] , skewness [figure omitted; refer to PDF] , and kurtosis [figure omitted; refer to PDF] are calculated as follows, respectively: [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is the dimension of [figure omitted; refer to PDF] , [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is the standard deviation of [figure omitted; refer to PDF] , [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is the mean of [figure omitted; refer to PDF] , [figure omitted; refer to PDF]
For feature extraction from the temporal vectors, we calculate several time-domain parameters, including the root mean square (RMS) [figure omitted; refer to PDF] , standard deviation [figure omitted; refer to PDF] , skewness [figure omitted; refer to PDF] , kurtosis [figure omitted; refer to PDF] , and shape factor [figure omitted; refer to PDF] . The shape factor is defined as follows: [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is the dimension of [figure omitted; refer to PDF] .
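The two feature groups above can be sketched as follows. Since the defining equations are omitted in this copy, standard textbook definitions are assumed: the spectral vector is treated as a distribution over frequency bins for the centroid-based moments, and the shape factor is taken as RMS divided by the mean absolute value; all names are hypothetical.

```python
# Spectral and temporal feature sketches (standard definitions assumed).
import numpy as np

def spectral_features(w):
    """Four features of one spectral (column) vector of the base matrix:
    centroid, std normalized by the centroid, skewness, kurtosis."""
    k = np.arange(1, len(w) + 1)
    p = w / w.sum()                          # treat the vector as a distribution
    centroid = (k * p).sum()
    std = np.sqrt(((k - centroid) ** 2 * p).sum())
    skew = (((k - centroid) / std) ** 3 * p).sum()
    kurt = (((k - centroid) / std) ** 4 * p).sum()
    return centroid, std / centroid, skew, kurt

def temporal_features(h):
    """Five features of one temporal (row) vector of the coefficient matrix:
    RMS, std, skewness, kurtosis, shape factor."""
    rms = np.sqrt(np.mean(h ** 2))
    mu, std = h.mean(), h.std()
    skew = np.mean(((h - mu) / std) ** 3)
    kurt = np.mean(((h - mu) / std) ** 4)
    shape = rms / np.mean(np.abs(h))         # shape factor (assumed definition)
    return rms, std, skew, kurt, shape
```

Applied to every spectral and temporal vector, these routines yield the nine-dimensional feature set used by the classifier.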
In total, nine T-F features are extracted from the significant spectral and temporal components of an HRRP T-F matrix. When [figure omitted; refer to PDF] , both the [figure omitted; refer to PDF] matrix and the [figure omitted; refer to PDF] matrix are composed of ten vectors.
5. HMM Classifier
We use HMMs as the classifier for SAR targets, with the states representing target orientation. Each state in the HMM corresponds to a specific range of target orientations and hence to a specific variation of target features with orientation. The parameters of the HMM specify the statistical characteristics of the target features. The number of HMM states must be chosen large enough to model the variation of target features over each angular range, but small enough to ensure that sufficient training features are available per state. The angular resolution corresponding to a state can be estimated from the ratio of the range resolution to the maximum target dimension. For the MSTAR datasets, an angular resolution of 3° thus leads to 120 states in the HMM.
Let us assume that a target is partitioned into [figure omitted; refer to PDF] distinct states, denoted by the set [figure omitted; refer to PDF] . As discussed above, each state corresponds to an azimuthal partition. The state-transition probabilities of the HMM are denoted by the matrix [figure omitted; refer to PDF] , where [figure omitted; refer to PDF] is the probability of transitioning from state [figure omitted; refer to PDF] to state [figure omitted; refer to PDF] . Further, the initial-state probabilities are denoted by the vector [figure omitted; refer to PDF] , where [figure omitted; refer to PDF] is the probability of starting in state [figure omitted; refer to PDF] on the first measurement. For the sequence of HRR feature vectors, it is assumed that [figure omitted; refer to PDF] represents the change in target-sensor azimuthal orientation between consecutive measurements. Let [figure omitted; refer to PDF] represent the azimuthal angular range of state [figure omitted; refer to PDF] . We assume [figure omitted; refer to PDF] for all [figure omitted; refer to PDF] , implying that between two consecutive feature vectors the target may either stay in the same state or transition to an adjacent state. This yields a tridiagonal state-transition matrix [figure omitted; refer to PDF] . Based on the previous assumptions regarding [figure omitted; refer to PDF] and [figure omitted; refer to PDF] , the following estimates for [figure omitted; refer to PDF] can readily be derived: [figure omitted; refer to PDF]
Moreover, assuming that the initial target pose is uniformly distributed in azimuth, we have [figure omitted; refer to PDF] As discussed further below, (15) and (16) constitute initial estimates for [figure omitted; refer to PDF] and [figure omitted; refer to PDF] , which are then refined via the Baum-Welch training algorithm.
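The geometric initial estimates can be sketched as below. The transition matrix is tridiagonal with circular wrap-around (azimuth is periodic, so state 1 and the last state are adjacent), and the initial-state vector is uniform. The self-transition probability `p_stay`, standing in for the omitted ratio of the angular step to the state's angular extent, and the function name are assumptions.

```python
# Geometric initial estimates for the HMM transition matrix and initial-state
# vector (sketch; p_stay is an assumed stand-in for the omitted estimate).
import numpy as np

def initial_hmm_params(n_states=120, p_stay=0.8):
    A = np.zeros((n_states, n_states))
    for i in range(n_states):
        A[i, i] = p_stay                              # stay in the same sector
        A[i, (i - 1) % n_states] = (1 - p_stay) / 2   # move to adjacent sector
        A[i, (i + 1) % n_states] = (1 - p_stay) / 2   # (circular wrap in azimuth)
    pi = np.full(n_states, 1.0 / n_states)            # uniform initial pose
    return A, pi
```

Both quantities are then refined by Baum-Welch training, except that pi is kept at its geometric estimate as described in Section 6.2.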
To identify a target from sequential feature vectors, an HMM is designed for each target. A given sequence of feature vectors from the test target data is submitted to all HMMs and the likelihoods are computed. If the [figure omitted; refer to PDF] th HMM yields the largest likelihood, we declare that the sequential feature vectors come from the [figure omitted; refer to PDF] th target. For example, given an observation sequence [figure omitted; refer to PDF] associated with an unknown target [figure omitted; refer to PDF] , the probability of the observation sequence [figure omitted; refer to PDF] is obtained by summing the joint probability [figure omitted; refer to PDF] over all possible state paths [figure omitted; refer to PDF] : [figure omitted; refer to PDF] If target [figure omitted; refer to PDF] gives the maximum likelihood for the observation feature vector sequence [figure omitted; refer to PDF] , that is, [figure omitted; refer to PDF] then we declare the sequence [figure omitted; refer to PDF] to be from target [figure omitted; refer to PDF] .
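The sum over all state paths is computed efficiently by the standard forward algorithm rather than by explicit enumeration; a log-domain sketch is given below. The interface is an assumption: per-frame log emission probabilities are taken as a precomputed array, since the paper's Gaussian-mixture state densities are not reproduced here.

```python
# Forward algorithm in the log domain (sketch). logB[t, i] = log p(o_t | state i).
import numpy as np

def log_likelihood(logB, A, pi):
    """Return log p(O | model); classification picks, over all target HMMs,
    the model with the largest value."""
    T, N = logB.shape
    alpha = np.log(pi) + logB[0]                    # initialisation
    for t in range(1, T):
        # recursion: alpha_j(t) = [sum_i alpha_i(t-1) a_ij] * b_j(o_t),
        # done stably via a log-sum-exp shift by the running maximum m.
        m = alpha.max()
        alpha = np.log(np.exp(alpha - m)[None, :] @ A).ravel() + m + logB[t]
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())      # termination
```

Running this for each target's `(A, pi)` and emission model and taking the argmax implements the decision rule in the text.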
For the SAR ATR problem considered here, continuous HMMs are employed rather than discrete HMMs, because the latter suffer from the well-known distortion inherent in the quantization of feature vectors.
6. Experiments
6.1. Database and Evaluation Methodology
In this section, we evaluate the classification performance of the proposed approach using the MSTAR public database, a standard dataset for evaluating ATR algorithms that consists of X-band SAR images with 1 ft × 1 ft resolution for multiple targets [20, 21]. The targets include several military vehicles and a few civilian vehicles, many of which have very similar shapes. Sample images are shown in Figure 4 together with visible-light images. The data consist of image chips. For each target, images were acquired at several depression angles over the full 360° of aspect angle. These images were converted into sequences of HRR waveforms through the filtering operations described in Section 2.
Figure 4: Visible-light images of the 10 targets in the MSTAR database.
[figure omitted; refer to PDF]
In the classification experiments, we analyze how the HMM parameters affect identification performance and also explore how robust the proposed approach is to target variability.
6.2. HMM Training
For the HMM-based classification used here, there are many schemes for selecting the training sequences. We train each HMM from one long forward sequence and one long backward sequence: a forward sequence is the sequence of feature vectors with azimuth in ascending order, and a backward sequence contains the same feature vectors with azimuth in descending order. This arrangement captures all sequential information and state statistics while still allowing fast training. An HMM is trained on the training sequences from each of the ten MSTAR targets, yielding ten HMM models. During training, the Baum-Welch algorithm is used to reestimate the HMM parameters [figure omitted; refer to PDF] and the state density functions so that the HMM reflects the scattering physics of the target HRRPs. The initial-state probability distribution [figure omitted; refer to PDF] is not reestimated and remains at its geometric estimate. In essence, the EM training can be viewed as an evolution of the state decomposition; as training proceeds, each state acquires more clearly defined boundaries.
The training set comprises SAR images of all ten targets at a 17° depression angle and is used to train the ten HMMs.
6.3. HMM Classification
In the second stage of HMM classification, the unknown target feature vector sequences are presented to the trained HMM models. For testing, we use a separate set of SAR images of the ten targets at a 15° depression angle.
Table 1 shows the confusion matrix for classifying all test sequences with a 3° aperture over all ten targets. The HMM with the proposed NMF features yields an average classification rate of 87.27%. Table 2 shows the confusion matrix for the 6° test sequences; over this angular extent, an average classification rate of 95.62% is obtained. The improved classification performance is due to the larger number of state transitions that can occur in each test sequence.
Table 1: Confusion matrix for 10-target classification (3° aperture).
Test targets | Recognized as | Recognition rate (%) | |||||||||
BMP2 | BRDM2 | BTR60 | BTR70 | D7 | T62 | T72 | ZIL131 | ZSU234 | 2S1 | ||
BMP2 | 168 | 1 | 4 | 2 | 0 | 6 | 12 | 0 | 1 | 1 | 86.15 |
BRDM2 | 5 | 249 | 6 | 8 | 1 | 1 | 2 | 1 | 2 | 1 | 90.87 |
BTR60 | 1 | 3 | 167 | 15 | 0 | 0 | 1 | 1 | 7 | 0 | 85.64 |
BTR70 | 1 | 6 | 22 | 163 | 0 | 2 | 0 | 1 | 0 | 0 | 83.16 |
D7 | 2 | 0 | 1 | 0 | 259 | 4 | 0 | 6 | 1 | 1 | 94.53 |
T62 | 1 | 8 | 3 | 7 | 1 | 233 | 12 | 5 | 2 | 2 | 85.35 |
T72 | 2 | 2 | 5 | 3 | 3 | 1 | 178 | 1 | 1 | 0 | 90.82 |
ZIL131 | 6 | 5 | 3 | 1 | 2 | 1 | 3 | 237 | 11 | 5 | 86.50 |
ZSU234 | 4 | 3 | 5 | 4 | 3 | 7 | 8 | 3 | 235 | 2 | 85.77 |
2S1 | 5 | 7 | 5 | 3 | 4 | 9 | 6 | 3 | 5 | 230 | 83.94 |
| |||||||||||
Average recognition rate (%) | 87.27 |
Table 2: Confusion matrix for 10-target classification (6° aperture).
Test targets | Recognized as | Recognition rate (%) | |||||||||
BMP2 | BRDM2 | BTR60 | BTR70 | D7 | T62 | T72 | ZIL131 | ZSU234 | 2S1 | ||
BMP2 | 177 | 1 | 3 | 1 | 0 | 5 | 6 | 0 | 1 | 1 | 90.76 |
BRDM2 | 2 | 267 | 1 | 2 | 0 | 1 | 0 | 1 | 0 | 0 | 97.45 |
BTR60 | 1 | 2 | 179 | 6 | 1 | 0 | 0 | 3 | 2 | 1 | 91.79 |
BTR70 | 1 | 1 | 8 | 180 | 0 | 2 | 0 | 1 | 0 | 3 | 91.84 |
D7 | 0 | 1 | 0 | 0 | 271 | 1 | 0 | 0 | 0 | 1 | 98.91 |
T62 | 1 | 3 | 3 | 2 | 1 | 258 | 3 | 0 | 2 | 0 | 94.51 |
T72 | 1 | 0 | 1 | 1 | 0 | 2 | 189 | 1 | 0 | 1 | 96.43 |
ZIL131 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 269 | 1 | 1 | 98.18 |
ZSU234 | 1 | 0 | 1 | 2 | 0 | 0 | 0 | 1 | 268 | 1 | 97.81 |
2S1 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 270 | 98.54 |
| |||||||||||
Average recognition rate (%) | 95.62 |
Increasing the number of HMM states and Gaussian mixtures theoretically gives a more complex model that is better able to capture the target signature. However, if the training data are relatively insufficient, the trained HMM will fail to generalize to the test data. Figure 5 shows the test set correct recognition rate for the same 10-target identification experiment using different numbers of HMM states [figure omitted; refer to PDF] and Gaussian mixture components [figure omitted; refer to PDF] . When [figure omitted; refer to PDF] the best performance is achieved with [figure omitted; refer to PDF] and degrades for larger or smaller [figure omitted; refer to PDF] due to overfitting or insufficient modeling. Therefore, in the remaining experiments we retained the initial configuration of [figure omitted; refer to PDF] and [figure omitted; refer to PDF] .
Figure 5: Comparison of correct recognition rate with [figure omitted; refer to PDF] ; [figure omitted; refer to PDF] and [figure omitted; refer to PDF] .
[figure omitted; refer to PDF]
For comparison, the proposed approach is evaluated against several state-of-the-art methods. The baseline for SAR ATR performance comparison is the template matching method [1]. The correct recognition rate of 95.6% obtained by our approach is better than the 91.8% obtained by template matching. Moreover, unlike template matching, our method does not require target pose estimation.
The recognition accuracy of the proposed method is also competitive with the support vector machine (SVM). In [20], an SVM applied to SAR target images on a 3-class SAR ATR task gave a misclassification error of 9.01%, worse than our result of 4.4% from Table 2. For SAR ATR methods based on HRRPs, Liao et al. reported a correct recognition rate of 92% using a RELAX feature extraction approach in a 10-target identification experiment [5], and another spectral feature extraction approach achieved a correct recognition rate of 83.5% [6]. These comparisons verify the superiority of the proposed method.
6.4. Robustness to Variant Depression Angles
The robustness of a recognition algorithm to depression angle is important for its successful application in real scenarios, where test target images may be acquired from depression angles different from those seen in training. We therefore examine the approach's invariance to depression angle. Statistics of the variant-depression-angle dataset used in this experiment are summarized in Table 3: a subset of the MSTAR public database with 3 targets (2S1, BRDM2, and ZSU234) at the 4 available depression angles (15°, 17°, 30°, and 45°). The data collected at a 17° depression angle are used for training and the remaining data for testing. The classification results are summarized in Table 4. As can be seen, the approach remains robust when there is a large change in depression angle (e.g., from 15° to 30°). However, when the change is very large, performance degrades because of the drastic change in the target signatures.
Table 3: Variant depression angle dataset.
| Depression angle | BRDM2 | ZSU234 | 2S1 |
Train | 17° | 298 | 299 | 299 |
| ||||
Test | 15° | 274 | 274 | 274 |
30° | 287 | 288 | 288 | |
45° | 303 | 288 | 288 |
Table 4: Recognition results with variant depression angles dataset.
Depression angle | Individual correct recognition rate (%) | Average correct recognition rate (%) | ||
BRDM2 | ZSU234 | 2S1 | ||
15° | 99.3 | 100.0 | 100.0 | 99.8 |
30° | 92.1 | 96.5 | 95.8 | 94.8 |
| ||||
45° | 71.8 | 74.5 | 76.7 | 74.3 |
7. Conclusions
This paper has presented a novel feature extraction method for SAR ATR based on HRRP sequences, which achieves excellent performance on the MSTAR database. The method characterizes the target HRRP time-frequency signature by AGR and NMF. First, the AGR is applied to each HRRP to obtain the corresponding time-frequency matrix; the AGR accurately represents the HRRP's complex electromagnetic signature in the time-frequency domain. Then, the NMF technique extracts distinctive target time-frequency features from this matrix. Classification is performed with HMMs without requiring knowledge of the target pose.
The approach was tested on the MSTAR public release datasets. In the experiments, an average correct classification rate of about 95% was achieved on a 10-target classification task. We also assessed how performance is affected by the number of HMM states and the number of Gaussian mixture components, and tested robustness to varying depression angles.
In summary, the experiments and the high classification accuracies achieved clearly demonstrate the potential of the proposed technique for radar target HRRP feature extraction and SAR ATR.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China under Grant no. 61301224 and no. 61302054, in part by Gao Fen Major Project Youth Innovation Fund under Grant no. GFZX04060103, in part by the Fundamental Research Funds for the Central Universities of China under Grant no. CDJRC11160003 and no. CDJZR12160014, in part by the Natural Science Foundation Project of CQ (no. cstc2012jjA40001, no. cstc2011jjA40047, no. cstc2012jjA40056, and no. cstc2012jjB40010), and in part by the Research Project of the Education Committee of Chongqing (no. KJ112201 and no. KJ110508).
Conflict of Interests
The authors declare that they have no competing interests.
[1] D. E. Dudgeon, R. T. Lacoss, "An overview of automatic target recognition," The Lincoln Laboratory Journal , vol. 6, pp. 3-9, 1993.
[2] Q. Zhao, J. C. Principe, V. L. Brennan, D. Xu, Z. Wang, "Synthetic aperture radar automatic target recognition with three strategies of learning and representation," Optical Engineering , vol. 39, no. 5, pp. 1230-1244, 2000.
[3] S. Papson, R. M. Narayanan, "Classification via the shadow region in SAR imagery," IEEE Transactions on Aerospace and Electronic Systems , vol. 48, no. 2, pp. 969-980, 2012.
[4] R.-H. Huan, Y. Pan, "Target recognition for multi-aspect SAR images with fusion strategies," Progress in Electromagnetics Research , vol. 134, pp. 267-288, 2013.
[5] X. Liao, P. Runkle, L. Carin, "Identification of ground targets from sequential high-range-resolution radar signatures," IEEE Transactions on Aerospace and Electronic Systems , vol. 38, no. 4, pp. 1230-1242, 2002.
[6] T. W. Albrecht, K. W. Bauer, "Classification of sequenced SAR target images via hidden markov models with decision fusion," in Algorithms for Synthetic Aperture Radar Imagery XII, vol. 5808, of Proceedings of SPIE, pp. 306-313, Orlando, Fla, USA, June 2005.
[7] J. Gudnason, J. J. Cui, M. Brookes, "HRR automatic target recognition from superresolution scattering center features," IEEE Transactions on Aerospace and Electronic Systems , vol. 45, no. 4, pp. 1512-1524, 2009.
[8] K.-T. Kim, I.-S. Choi, H.-T. Kim, "Efficient radar target classification using adaptive joint time-frequency processing," IEEE Transactions on Antennas and Propagation , vol. 48, no. 12, pp. 1789-1801, 2000.
[9] R. G. Raj, V. C. Chen, R. Lipps, "Analysis of radar human gait signatures," IET Signal Processing , vol. 4, pp. 234-244, 2010.
[10] Y. D. Zhang, M. G. Amin, B. Himed, "Joint DOD/DOA estimation in MIMO radar exploiting time-frequency signal representations," Eurasip Journal on Advances in Signal Processing , vol. 2012, article 102, 2012.
[11] Y. Wang, Q. Song, T. Jin, Y. Shi, X. Huang, "Sparse time-frequency representation based feature extraction method for landmine discrimination," Progress in Electromagnetics Research , vol. 133, pp. 459-475, 2013.
[12] L. Mei, L. Chenlei, Z. Shuqing, "Joint space-time-frequency method based on fractional Fourier transform to estimate moving target parameters for multistatic synthetic aperture radar," IET Signal Processing , vol. 7, no. 1, pp. 71-80, 2013.
[13] R. Hennequin, R. Badeau, B. David, "NMF with time-frequency activations to model nonstationary audio events," IEEE Transactions on Audio, Speech, and Language Processing , vol. 19, no. 4, pp. 744-753, 2011.
[14] N. Yokoya, T. Yairi, A. Iwasaki, "Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion," IEEE Transactions on Geoscience and Remote Sensing , vol. 50, no. 2, pp. 528-537, 2012.
[15] P. Padilla, M. Lopez, J. M. Gorriz, J. Ramirez, D. Salas-Gonzalez, I. Alvarez, "NMF-SVM based CAD tool applied to functional brain images for the diagnosis of Alzheimer's disease," IEEE Transactions on Medical Imaging , vol. 31, no. 2, pp. 207-216, 2012.
[16] D. D. Lee, H. S. Seung, "Learning the parts of objects by non-negative matrix factorization," Nature , vol. 401, no. 6755, pp. 788-791, 1999.
[17] H. Liu, Z. Wu, X. Li, D. Cai, T. S. Huang, "Constrained nonnegative matrix factorization for image representation," IEEE Transactions on Pattern Analysis and Machine Intelligence , vol. 34, no. 7, pp. 1299-1311, 2012.
[18] S. Nikitidis, A. Tefas, N. Nikolaidis, I. Pitas, "Subclass discriminant Nonnegative Matrix Factorization for facial image analysis," Pattern Recognition , vol. 45, no. 12, pp. 4080-4091, 2012.
[19] D. D. Lee, H. S. Seung, "Algorithms for non-negative matrix factorization," Advances in Neural Information Processing Systems , vol. 12, pp. 556-562, 2000.
[20] Q. Zhao, J. C. Principe, "Support vector machines for SAR automatic target recognition," IEEE Transactions on Aerospace and Electronic Systems , vol. 37, no. 2, pp. 643-654, 2001.
[21] T. D. Ross, S. W. Worrell, V. J. Velten, J. C. Mossing, M. L. Bryant, "Standard SAR ATR evaluation experiments using the MSTAR public release data set," in Algorithms for Synthetic Aperture Radar Imagery V, vol. 3370, of Proceedings of SPIE, pp. 566-570, Orlando, Fla, USA, September 1998.
Copyright © 2015 Xinzheng Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
A new approach to classifying synthetic aperture radar (SAR) targets based on high range resolution profiles (HRRPs) is presented. Features are extracted from each target HRRP via the nonnegative matrix factorization (NMF) algorithm in the time-frequency domain represented by the adaptive Gaussian representation (AGR). First, SAR target images are converted into HRRPs, and the time-frequency matrix of each HRRP is obtained using AGR. Second, time-frequency feature vectors are extracted from the time-frequency matrix using NMF. Finally, hidden Markov models (HMMs) are employed to characterize the time-frequency feature vectors corresponding to each target and serve as the recognizer. To demonstrate the performance of the proposed approach, experiments are performed on the 10-target MSTAR public dataset. The results support the effectiveness of the proposed technique for SAR automatic target recognition (ATR).
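The NMF feature-extraction step in the pipeline above follows the standard multiplicative update rules of Lee and Seung [19]. A minimal sketch is given below; the matrix sizes and random data are hypothetical placeholders standing in for an actual AGR time-frequency matrix, not the paper's implementation.

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9):
    """Factor a nonnegative matrix V (m x n) into nonnegative W (m x r)
    and H (r x n) using the multiplicative updates of Lee and Seung [19],
    which minimize the Frobenius reconstruction error ||V - WH||."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r)) + eps  # nonnegative random initialization
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update W with H fixed
    return W, H

# Toy stand-in for one HRRP's time-frequency matrix (32 freq bins x 16 times).
V = np.random.default_rng(1).random((32, 16))
W, H = nmf(V, r=4)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the classification stage sketched by the abstract, the columns of H (or the learned basis activations for a test profile) would serve as the time-frequency feature vectors fed to the HMMs.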