This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
High-resolution radar images in range and azimuth can be obtained by Synthetic Aperture Radar (SAR), which combines the synthetic aperture principle with pulse compression and signal processing techniques. Compared with optical and infrared sensors, SAR offers day-and-night, all-weather operation and the ability to penetrate obstacles such as clouds and vegetation [1–6]. With increasing SAR imaging resolution, SAR has been widely utilized in military and civilian fields, such as marine and land monitoring [7] and weapon guidance [8]. Therefore, SAR automatic target recognition (SAR ATR) has become a meaningful and challenging research field.
The MIT Lincoln Laboratory proposed dividing SAR ATR into three subsystems: detection, discrimination, and classification [9]. The task of target detection is to determine whether the image contains a target of interest and to locate it in the image. In the discrimination stage, a discriminator is designed to solve a two-class (target versus clutter) problem, significantly reducing the probability of false alarm. The true targets are then categorized in the classification and recognition stage.
This paper focuses only on the classification and recognition stage and does not address detection or discrimination. There are three mainstream approaches to recognition: template-based, model-based, and deep learning. In template matching, the test sample is matched, under certain matching criteria, against a template library constructed from the labeled training set [10, 11]. The template-based method is simple but requires building large template libraries, and the quality of the library strongly influences the recognition results.
Because template matching lacks robustness, model-based methods were proposed. These methods extract effective features from the training and test samples and feed the features extracted from SAR images into a classifier for recognition [12–15]. The features of SAR images primarily include geometric features, transformation features, and electromagnetic features. Geometric features describe the shape and structure of the target, such as contour, edge, size, and area. Transformation features, obtained by principal component analysis (PCA) [16], kernel principal component analysis (KPCA) [17], linear discriminant analysis (LDA) [18], independent component analysis (ICA) [19], and other means, are also applied to SAR target recognition. Due to the unique mechanism of SAR imaging, SAR images possess distinctive electromagnetic features [20, 21], including polarization mode and scattering centers. After feature extraction, a classifier is required. K-nearest neighbor (K-NN), support vector machine (SVM), and sparse representation-based classification (SRC) are frequently used classifiers in SAR recognition.
As deep learning has been successfully applied across many fields in recent years, a great number of deep learning methods have also emerged in SAR ATR. Chen et al. [22] replaced the fully connected layers in a convolutional neural network (CNN) with convolutional layers, which effectively suppresses overfitting and reduces the number of parameters. Since SAR images are highly sensitive to azimuth angle, Zou et al. [23] combined three consecutive-azimuth images of the same target into a pseudocolor image that is fed to a CNN. Wang et al. [24] designed a multiview convolutional neural network and long short-term memory network (CNN-LSTM) to extract and fuse features from adjacent azimuth angles. Zhang et al. [25] combined a CNN with CBAM, an attention mechanism, to improve the recognition rate. Deep learning methods can extract deep semantic information about the target; compared with model-based methods, they do not require manual feature extraction and have achieved high recognition rates in SAR target recognition.
More recently, a viewpoint has emerged that CNNs, unlike humans, are more inclined to learn the texture and surface features of a target while paying less attention to deep semantic features such as contour and shape. Contour and shape are the most reliable information in human and biological vision. Geirhos et al. [26] demonstrated that ImageNet-trained CNNs are strongly biased towards recognizing textures rather than shapes, which is in stark contrast to human behavioral evidence and reveals fundamentally different classification strategies. Hermann et al. [27] showed that models which classify images by shape rather than texture outperform baselines on out-of-distribution test sets.
Therefore, this paper proposes an enhanced-shape CNN, whose network structure is shown in Figure 1. First, the enhanced-shape CNN strengthens the shape features of the target at the input by constructing a three-channel pseudocolor data set, so that the convolutional neural network tends to pay more attention to target shape. Second, the pooling operations commonly used in CNNs are max pooling and average pooling, which easily lose target information when downsampling feature maps; thus, we replace max pooling with SoftPool [28] to improve the network. Meanwhile, in the literature above, several attention mechanisms combined with CNNs have been applied to SAR recognition. The channel attention mechanism, i.e., the Squeeze-and-Excitation (SE) module [30], can effectively increase the weights of channels that are beneficial for recognition and suppress features that are less useful. However, on SAR target recognition the SE module distributes channel weights almost evenly, so it behaves essentially the same as a plain CNN, as noted in [29]. Therefore, SoftPool is also used to replace the global pooling in the SE module, which yields more differentiated channel weights. Third, it remains difficult to acquire SAR data sets covering rich imaging conditions, even though acquiring high-resolution SAR images has become easier. In recent years, many SAR ship and vehicle data sets have emerged, but their resolution is insufficient for recognition, so they are mainly used for detection. At present, most SAR target recognition research is based on the Moving and Stationary Target Acquisition and Recognition (MSTAR) [31] data set. From the few-sample perspective, this paper designs experiments to verify that the proposed method achieves a higher recognition rate than existing methods under limited data.
[figure omitted; refer to PDF]
The main contributions of this paper are as follows:
(1) Constructing a three-channel pseudocolor image from the target-and-shadow image extracted from the original SAR image, the filtered SAR image, and the original SAR image itself. Feeding these pseudocolor images to the CNN encourages the model to exploit the shape information of the image.
(2) Improving the pooling of the network and the global pooling of the attention module. Using SoftPool in the network preserves more feature-map information during pooling. At the same time, the pooling in the SE module is improved to make the channel weight distribution more differentiated, rather than nearly uniform.
(3) Training on the full training set and on one-half, one-quarter, and one-eighth of it, and testing on the full test set of MSTAR. The results show that the proposed method achieves a higher recognition rate with few samples.
The remainder of this paper is organized as follows: Section 2 describes the principles of the method, including the extraction of target and shadow, the Lee filter, the fusion of the three-channel pseudocolor image, the SoftPool pooling method, and the Squeeze-and-Excitation (SE) module together with its enhanced version. Section 3 presents experimental results validating the effectiveness of the proposed network, and Section 4 concludes the paper.
2. Methodology
In this section, we will describe some of the principles and structures used in our model.
2.1. Extraction of Target and Shadow
Unlike optical images, SAR images are formed by side-looking imaging, so the image contains shadows in addition to the target. A shadow results from the coupling between the target and the background environment under a specific radar line of sight, and its shape reflects the physical size and shape distribution of the target, so jointly using target and shadow features is helpful for recognition.
There are many existing segmentation algorithms for extracting the target and shadow. Since segmentation is not the focus of our model, the simplest thresholding method is used to segment the target and shadow areas. Our thresholds follow those proposed in [32]. The main steps are as follows:
(1) Equalize the original SAR image histogram;
(2) Use mean filtering to smooth the result of step 1, and transform the gray dynamic range to [0, 1];
(3) Set the thresholds for the shadow and target areas to 0.2 and 0.8: pixels greater than 0.8 form the target area, and pixels less than 0.2 form the shadow area;
(4) Remove the area of total pixels less than 25 to reduce the influence of background noise;
(5) Apply the morphological closing operation to connect the target and shadow areas, obtaining smooth target and shadow contours.
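Steps 1 through 3 above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's exact implementation: the rank-based equalization and the 3 × 3 mean window are our own assumptions, and steps 4 and 5 (small-region removal and morphological closing) would use routines such as `scipy.ndimage.label` and `binary_closing`, omitted here.

```python
import numpy as np

def threshold_segment(img, t_shadow=0.2, t_target=0.8):
    """Threshold segmentation of target and shadow (steps 1-3 above)."""
    # Step 1: histogram equalization via the rank transform (empirical CDF).
    ranks = np.argsort(np.argsort(img.ravel()))
    eq = (ranks / (img.size - 1)).reshape(img.shape)
    # Step 2: 3x3 mean filtering, then rescale the gray range to [0, 1].
    p = np.pad(eq, 1, mode="reflect")
    sm = sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(3) for dx in range(3)) / 9.0
    sm = (sm - sm.min()) / (sm.max() - sm.min() + 1e-12)
    # Step 3: pixels above t_target are target, below t_shadow are shadow.
    return sm > t_target, sm < t_shadow
```

On a synthetic chip with a bright blob and a dark blob, the two masks pick out the target-like and shadow-like regions respectively.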
It can be seen that this simple thresholding achieves good segmentation and removes much of the background noise and clutter. However, in real-world situations a common segmentation algorithm may fail to separate the target and shadow cleanly, so we additionally set the thresholds to 0.1/0.9 and 0.3/0.7 to verify that the method still performs well under slightly biased segmentation.
Figure 2 shows the target and shadow images obtained with different segmentation thresholds: (a) is the original image, and (b) is the morphological image of the target and shadow with thresholds 0.8 and 0.2. The target and shadow extracted in (c) are relatively complete, although the shadow's pixel values are too low to be clearly visible. By comparison, the target area extracted in (d) contains redundant pixels, while that in (e) is incomplete.
[figures omitted; refer to PDF]
2.2. Lee Filtering
Due to its special imaging mechanism, SAR images contain considerable coherent speckle noise. Filtering a SAR image enhances the shape characteristics of the target and reduces texture and, especially, noise interference.
Many filtering methods for the speckle noise of SAR images have been proposed. Our model adopts Lee filtering, a classic SAR filtering strategy. The two key aspects of noise suppression are establishing a mechanism for estimating the true backscatter coefficient and formulating a scheme for selecting pixel samples in homogeneous regions.
Lee filtering is a typical method of image speckle filtering that uses local statistical characteristics and is based on a fully developed speckle noise model. First, a window of a certain size is selected as the local region. It is then assumed that the prior mean and variance of the true backscatter can be estimated from the local statistics within this window, and the filtered value is obtained as a weighted combination of the local mean and the observed center pixel.
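A compact numpy sketch of this local-statistics estimate follows, under the common formulation where the speckle variance is modeled as Cu² · mean² with Cu = 1/√looks; the window size, the `looks` parameter, and the epsilon guard are our choices, not values from the paper.

```python
import numpy as np

def lee_filter(img, win=7, looks=1):
    """Classic Lee speckle filter (numpy sketch).

    The filtered pixel is the local mean plus a data-adaptive fraction k of
    the residual: k -> 0 in homogeneous areas (strong smoothing), k -> 1 on
    edges (detail preserved).
    """
    def local_mean(x):
        p = np.pad(x, win // 2, mode="reflect")
        return sum(p[dy:dy + x.shape[0], dx:dx + x.shape[1]]
                   for dy in range(win) for dx in range(win)) / (win * win)

    mean = local_mean(img)
    var = local_mean(img * img) - mean * mean
    noise_var = (1.0 / looks) * mean * mean      # fully developed speckle model
    k = np.clip((var - noise_var) / (var + 1e-12), 0.0, 1.0)
    return mean + k * (img - mean)
```

On a homogeneous noisy patch the filter collapses toward the local mean, so the output variance drops sharply while the overall brightness is preserved.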
It can be observed from Figure 3 that after Lee filtering the speckle noise in the image is significantly reduced and the texture of the target and shadow regions is suppressed, while the contour shape becomes more obvious.
[figures omitted; refer to PDF]
2.3. Fusion
Typically, SAR images are gray-scale images, and when recognizing them with a CNN the gray-scale image is generally converted into a three-channel input. In this paper, the original image, the target-and-shadow image, and the filtered image are combined in RGB mode to form a three-channel pseudocolor image, as shown in Figure 4. The original image contains complete target information, including shape, contour, and texture, while the target-and-shadow image and the filtered image enhance the target's shape characteristics. Using pseudocolor images as the network input lets the model acquire global information and deep semantic information rather than focusing on texture.
[figure omitted; refer to PDF]
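The fusion itself is a simple channel stack. A sketch follows; the per-channel min-max normalization and the channel ordering are our assumptions, since the text only specifies that the three images are combined in RGB mode.

```python
import numpy as np

def fuse_pseudocolor(original, seg, filtered):
    """Stack the original chip, the target-and-shadow image, and the
    filtered chip into one H x W x 3 pseudocolor image."""
    def norm(x):
        # Min-max normalize each channel to [0, 1] before stacking.
        x = x.astype(np.float32)
        return (x - x.min()) / (x.max() - x.min() + 1e-12)
    return np.stack([norm(original), norm(seg), norm(filtered)], axis=-1)
```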
2.4. SoftPool
Compared with max pooling and average pooling, SoftPool balances their influence: average pooling dilutes the activations in a region, while max pooling selects only the highest activation. With SoftPool, all activations in the region contribute to the final output, but higher activations dominate lower ones. Larger activation values therefore have a greater impact on the pooled output, and the significant details of the feature map are retained to the greatest extent.
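Concretely, SoftPool computes over each pooling region a sum of activations weighted by their own softmax, ã = Σᵢ e^{aᵢ} aᵢ / Σᵢ e^{aᵢ} [28]. A numpy sketch over non-overlapping k × k regions (a simplified, stride-k variant, not the authors' CUDA implementation):

```python
import numpy as np

def softpool2d(x, k=2):
    """SoftPool: softmax-weighted average over k x k regions, so all
    activations contribute but larger ones dominate the output."""
    h, w = x.shape
    x = x[:h - h % k, :w - w % k]                      # crop to a multiple of k
    x = x.reshape(h // k, k, w // k, k).transpose(0, 2, 1, 3)
    x = x.reshape(h // k, w // k, k * k)
    wgt = np.exp(x - x.max(axis=-1, keepdims=True))    # stable softmax weights
    return (wgt * x).sum(axis=-1) / wgt.sum(axis=-1)
```

For any region, the pooled value lies strictly between the region's mean (average pooling) and its maximum (max pooling), which is exactly the balancing behavior described above.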
Figure 6 shows the effect of different pooling methods. The first column is the original image, the second the image after max pooling, the third after average pooling, and the fourth after SoftPool. The comparison shows that max pooling activates the pixels with large gray values in each region, highlighting the target but also highlighting scattered noise. Average pooling approximates filtering, reducing noise but also weakening the structural shape information of the target. SoftPool, on the other hand, retains relatively intact structural information of the target while suppressing scattered noise, making the shape more prominent.
[figure omitted; refer to PDF]
2.5. Squeeze-and-Excitation Module
As mentioned above, the SE module consists of two steps: squeeze and excitation. In the squeeze step, global pooling compresses each H × W feature map into a scalar, producing a 1 × 1 × C channel descriptor. In the excitation step, two fully connected layers, a dimension-reducing layer with ReLU followed by a dimension-restoring layer with a sigmoid, transform this descriptor into per-channel weights that rescale the original feature maps.
Essentially, the SE module performs attention in the channel dimension. This mechanism allows the model to focus on the most informative channel features while suppressing unimportant ones. However, this advantage is not directly reflected in experiments on the MSTAR SAR data set: as shown in [29], the channel weights calculated by the SE module are all close to 1 and thus do not reflect the relative importance of the channels.
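The standard SE forward pass can be sketched in numpy as follows. The layer shapes follow the squeeze/excitation structure of [30]; the reduction ratio and the random weights in the usage example are illustrative only.

```python
import numpy as np

def se_forward(feats, w1, b1, w2, b2):
    """Squeeze-and-Excitation forward pass (numpy sketch).

    feats: (C, H, W) feature maps.
    Squeeze: global average pooling -> length-C descriptor.
    Excitation: FC -> ReLU -> FC -> sigmoid -> per-channel weights.
    """
    z = feats.mean(axis=(1, 2))                   # squeeze: (C,)
    h = np.maximum(0.0, w1 @ z + b1)              # reduction FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))      # restoring FC + sigmoid
    return feats * s[:, None, None], s            # rescaled maps and weights
```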
Global pooling applies max or average pooling to the entire feature map to obtain a 1 × 1 × C vector, but this loses feature information. We therefore replace the global pooling of the SE module with SoftPool to ensure that dominant feature maps receive high weights. Figure 8 gives the results for two feature matrices under global pooling and SoftPool. Matrix (1) represents the edge information of the target and contains more information than matrix (2), yet under global max pooling both matrices yield the same result, 4, so the importance of the channels cannot be distinguished. When the weight matrix obtained by SoftPool is instead multiplied with the feature matrix, the output of (1) is 5.724 and that of (2) is 3.69, so the feature matrix containing more information receives the greater channel weight, solving the SE module's uniform weight distribution problem.
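The effect can be illustrated numerically. The pair of matrices below is a hypothetical example, not the exact matrices of Figure 8: two channels with the same maximum are indistinguishable under global max pooling, while a softmax-weighted global pool ranks the channel with more high activations higher.

```python
import numpy as np

def global_max(x):
    # Ordinary global max pooling: one scalar per feature map.
    return float(x.max())

def global_softpool(x):
    """Softmax-weighted global pooling: channels with many strong
    activations score higher than a single isolated spike."""
    w = np.exp(x - x.max())
    return float((w * x).sum() / w.sum())

# Two hypothetical 2x2 channels with identical maxima:
edge_like = np.array([[4.0, 4.0], [4.0, 0.0]])   # edge-like, information-rich
sparse    = np.array([[4.0, 0.0], [0.0, 0.0]])   # single spike, little content
```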
[figure omitted; refer to PDF]
Figure 17 shows the recognition rate when each module is used individually. It can be seen that each of the modules used in this paper contributes to the recognition accuracy of the model.
[figure omitted; refer to PDF]
4. Conclusions
SAR ATR has become an important and promising field of remote sensing image processing. This paper proposed a method based on shape enhancement: the target area is filtered and enhanced at the input, and the channels are fused to strengthen the connections between them. Simultaneously, the information loss caused by ordinary pooling is reduced by applying SoftPool in the CNN. Moreover, the SE module is improved to highlight the channels most relevant to recognition. As a result, more target information is obtained from few samples. The experiments verified the accuracy of the proposed method, which achieves 99.29% on ten target classes; when the segmentation quality is poor, a situation closer to real-world conditions, it still outperforms a plain CNN. The robustness of the method under noise was also demonstrated: at varying noise levels with few samples, the proposed method improves substantially over CNN. Future work will explore balancing texture and shape features and guiding the directed training of the network through the attention mechanism.
Acknowledgments
The authors did not receive specific funding.
[1] X. Bai, R. Xue, L. Wang, F. Zhou, "Sequence SAR image classification based on bidirectional convolution-recurrent network," IEEE Transactions on Geoscience and Remote Sensing, vol. 57 no. 11, pp. 9223-9235, DOI: 10.1109/tgrs.2019.2925636, 2019.
[2] H. Xu, Z. Yang, M. Tian, Y. Sun, G. Liao, "An extended moving target detection approach for high-resolution multichannel SAR-GMTI systems based on enhanced shadow-aided decision," IEEE Transactions on Geoscience and Remote Sensing, vol. 56, pp. 715-729, 2017.
[3] C. Clemente, L. Pallotta, D. Gaglione, A. De Maio, J. J. Soraghan, "Automatic target recognition of military vehicles with krawtchouk moments," IEEE Transactions on Aerospace and Electronic Systems, vol. 53 no. 1, pp. 493-500, DOI: 10.1109/taes.2017.2649160, 2017.
[4] P. Tait, Introduction to Radar Target Recognition, vol. 18, 2005.
[5] Y. Zhai, W. Deng, T. Lan, B. Sun, Z. Ying, J. Gan, C. Mai, J. Li, R. D. Labati, V. Piuri, "MFFA-SARNET: deep transferred multi-level feature fusion attention network with dual optimized loss for small-sample SAR ATR," Remote Sensing, vol. 12 no. 9, DOI: 10.3390/rs12091385, 2020.
[6] O. Kechagias-Stamatis, "Automatic target recognition on synthetic aperture radar imagery: a survey," 2020. https://arxiv.org/abs/2007.02106
[7] A. Moreira, P. Prats-Iraola, M. Younis, G. Krieger, I. Hajnsek, K. P. Papathanassiou, "A tutorial on synthetic aperture radar," IEEE Geoscience and Remote Sensing Magazine, vol. 1 no. 1, DOI: 10.1109/mgrs.2013.2248301, 2013.
[8] P. Wang, W. Liu, J. Chen, M. Niu, W. Yang, "A high-order imaging algorithm for high-resolution spaceborne SAR based on a modified equivalent squint range model," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, pp. 1225-1235, 2014.
[9] R. L. Dudgeon, "An overview of automatic target recognition," Lincoln Laboratory Journal, vol. 6, 1993.
[10] L. Novak, G. Owirka, W. Brower, A. Weaver, "The automatic target-recognition system in SAIP," Lincoln Laboratory Journal, vol. 10, pp. 187-201, 1997.
[11] G. Owirka, S. Verbout, L. Novak, "Template-based SAR ATR performance using different image enhancement techniques," Proceedings of SPIE, vol. 3721, pp. 302-319, 1999.
[12] C. Clemente, L. Pallotta, I. Proudler, A. De Maio, J. J. Soraghan, A. Farina, "Pseudo‐Zernike‐based multi‐pass automatic target recognition from multi‐channel synthetic aperture radar," IET Radar, Sonar & Navigation, vol. 9 no. 4, pp. 457-466, DOI: 10.1049/iet-rsn.2014.0296, 2015.
[13] Y. Sun, L. Du, Y. Wang, Y. Wang, J. Hu, "SAR automatic target recognition based on dictionary learning and joint dynamic sparse representation," IEEE Geoscience and Remote Sensing Letters, vol. 13 no. 12, pp. 1777-1781, DOI: 10.1109/lgrs.2016.2608578, 2016.
[14] L. M. Novak, G. R. Benitz, G. J. Owirka, L. A. Bessette, "ATR performance using enhanced resolution SAR," Algorithms for Synthetic Aperture Radar Imagery III, pp. 332-337, 1996.
[15] K. El-Darymli, E. W. Gill, P. McGuire, D. Power, C. Moloney, "Automatic target recognition in synthetic aperture radar imagery: a state-of-the-art review," IEEE Access, vol. 4, pp. 6014-6058, DOI: 10.1109/access.2016.2611492, 2016.
[16] Z. He, J. Lu, G. Kuang, "A fast SAR target recognition approach using PCA features," Proceedings of the International Conference on Image and Graphics, pp. 580-585.
[17] P. Han, R. Wu, Z. Wang, Y. Wang, "SAR automatic target recognition based on KPCA criterion," Journal of Electronics and Information Technology, vol. 25, pp. 1297-1301, 2013.
[18] W. Bian, D. Tao, "Asymptotic generalization bound of Fisher’s linear discriminant analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36 no. 12, pp. 2325-2337, DOI: 10.1109/tpami.2014.2327983, 2014.
[19] N. Besic, G. Vasile, J. Chanussot, S. Stankovic, "Polarimetric incoherent target decomposition by means of independent component analysis," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, pp. 1236-1247, 2014.
[20] J. Zhou, Z. Shi, C. Xiao, F. Qiang, "Automatic target recognition of SAR images based on global scattering center model," IEEE Transactions on Geoscience and Remote Sensing, vol. 49, pp. 3713-3729, 2011.
[21] J. I. Park, S. H. Park, K. T. Kim, "New discrimination features for SAR automatic target recognition," IEEE Geoscience and Remote Sensing Letters, vol. 10, pp. 476-480, 2012.
[22] S. Chen, H. Wang, F. Xu, Y.-Q. Jin, "Target classification using the deep convolutional networks for SAR images," IEEE Transactions on Geoscience and Remote Sensing, vol. 54 no. 8, pp. 4806-4817, DOI: 10.1109/tgrs.2016.2551720, 2016.
[23] H. Zou, Y. Lin, W. Hong, "Research on multi-aspect SAR images target recognition using deep learning," Journal of Signal Processing, vol. 34, pp. 513-522, 2018.
[24] C. Wang, J. Pei, Z. Wang, Y. Huang, J. Yang, "Multi-view CNN-LSTM neural network for SAR automatic target recognition," Proceedings of the IEEE Geoscience and Remote Sensing Society, pp. 1755-1758, DOI: 10.1109/igarss39084.2020.9323954.
[25] M. Zhang, J. An, D. Yu, L. Yang, X. Lv, "Convolutional neural network with attention mechanism for SAR automatic target recognition," IEEE Geoscience and Remote Sensing Letters, vol. 19, 2020.
[26] R. Geirhos, P. Rubisch, C. Michaelis, M. Bethge, F. A. Wichmann, W. Brendel, "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness," 2018. https://arxiv.org/abs/1811.12231
[27] K. L. Hermann, T. Chen, S. Kornblith, "The origins and prevalence of texture bias in convolutional neural networks," 2019. https://arxiv.org/abs/1911.09071
[28] A. Stergiou, R. Poppe, G. Kalliatakis, "Refining activation downsampling with SoftPool," 2021. https://arxiv.org/abs/2101.00440
[29] W. Li, B. Xueru, Z. Feng, "SAR ATR of ground vehicles based on ESENet," Remote Sensing, vol. 11, 2019.
[30] J. Hu, L. Shen, G. Sun, "Squeeze-and-excitation networks," 2017. https://arxiv.org/abs/1709.01507
[31] E. R. Keydel, S. W. Lee, J. T. Moore, "MSTAR extended operating conditions: a tutorial," Algorithms for Synthetic Aperture Radar Imagery III, pp. 228-242, 1996.
[32] P. Xia, "SAR target recognition based on joint use of target region and shadow," Journal of China Academy of Electronics and Information Technology, vol. 14, pp. 1062-1067, 2019.
[33] S. Doo, G. Smith, C. Baker, "Target classification performance as a function of measurement uncertainty," Proceedings of the Asia-Pacific Conference on Synthetic Aperture Radar (APSAR).
[34] B. Ding, G. Wen, J. Zhong, C. Ma, X. Yang, "A robust similarity measure for attributed scattering center sets with application to SAR ATR," Neurocomputing, vol. 219, pp. 130-143, DOI: 10.1016/j.neucom.2016.09.007, 2017.
[35] J. Pei, Y. Huang, W. Huo, Y. Zhang, J. Yang, T.-S. Yeo, "SAR automatic target recognition based on multiview deep learning framework," IEEE Transactions on Geoscience and Remote Sensing, vol. 56 no. 4, pp. 2196-2210, DOI: 10.1109/tgrs.2017.2776357, 2018.
[36] Z. Lin, K. Ji, M. Kang, X. Leng, H. Zou, "Deep convolutional highway unit network for SAR target classification with limited labeled training data," IEEE Geoscience and Remote Sensing Letters, vol. 14 no. 7, pp. 1091-1095, DOI: 10.1109/lgrs.2017.2698213, 2017.
[37] Y. Sun, Y. Wang, H. Liu, N. Wang, J. Wang, "SAR target recognition with limited training data based on angular rotation generative network," IEEE Geoscience and Remote Sensing Letters, vol. 99, 2019.
[38] J.-H. Park, S.-M. Seo, J.-H. Yoo, "SAR ATR for limited training data using DS-AE network," Sensors, vol. 21 no. 13, DOI: 10.3390/s21134538, 2021.
[39] Z. Ying, C. Xuan, Y. Zhai, B. Sun, J. Li, W. Deng, C. Mai, F. Wang, R. D. Labati, V. Piuri, F. Scotti, "TAI-SARNET: deep transferred atrous-inception CNN for small samples SAR ATR," Sensors, vol. 20 no. 6, DOI: 10.3390/s20061724, 2020.
Copyright © 2021 Mengmeng Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. https://creativecommons.org/licenses/by/4.0/
Abstract
Synthetic Aperture Radar (SAR), one of the important methods for obtaining target characteristics in the field of remote sensing, has been applied to many fields, including intelligence search, topographic surveying, mapping, and geological survey. Within this field, SAR automatic target recognition (SAR ATR) is a significant issue with high application value, and the development of deep learning has enabled its application to SAR ATR. Some researchers have pointed out that existing convolutional neural networks (CNNs) pay more attention to texture information, which is often less reliable than shape information. This study therefore designs an enhanced-shape CNN, which enhances the target shape at the input and uses an improved attention module so that the network highlights target shape in SAR images. Aiming at the small scale of existing SAR data sets, small-sample experiments are conducted: the enhanced-shape CNN achieves a recognition rate of 99.29% when trained on the full training set and 89.93% on one-eighth of the training data.