Abstract
In accordance with the United Nations (UN) Sustainable Development Goal (SDG) 16: Peace, Justice, and Strong Institutions, this study explores ship monitoring using Synthetic Aperture Radar (SAR) for its potential economic and security applications. One method for extracting ships from SAR-derived imagery is to employ convolutional neural networks (CNNs). However, the extraction of small features remains a challenging task for CNNs. One way to improve performance in such cases is to use an appropriate loss function, which guides the CNN model during training. In this paper, Focal Combo (FC) loss, a recent loss function designed for extreme class imbalance, is investigated to analyze its effects when applied to ship extraction. In doing so, this paper also presents a thorough comparison of existing loss functions in their capability to segment and detect ships in SAR imagery. Using the U-Net model, our results demonstrate that FC loss yields an improvement in segmentation of about 9% in F3-score and a decrease in missed detections of about 17 ships (after post-processing) compared to cross-entropy loss. However, it also shows a significant drop in precision of about 35%, resulting in about 270 additional ships being incorrectly detected in the background. In future work, different CNN models will be tested to see whether this pattern persists, and several trials will be conducted to assess consistency.
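To make the role of the loss function concrete, the following is a minimal NumPy sketch of a focal-combo-style loss for binary segmentation. It assumes, as is common in the literature, that FC loss is a weighted combination of a pixel-wise focal term (which down-weights easy background pixels) and a focal Dice-style region term; the weighting parameter `delta` and the exact exponents here are illustrative assumptions, not the exact formulation of the original FC loss paper.

```python
import numpy as np

def focal_loss(p, y, alpha=0.5, gamma=2.0, eps=1e-7):
    """Pixel-wise binary focal loss: (1 - p_t)^gamma down-weights easy pixels.

    p: predicted foreground probabilities, y: binary ground-truth mask.
    """
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    w = np.where(y == 1, alpha, 1 - alpha)   # class-balancing weight
    return float(np.mean(-w * (1 - pt) ** gamma * np.log(pt)))

def focal_dice_loss(p, y, gamma=2.0, eps=1e-7):
    """Region-based term: Dice loss raised to a focal power (assumed form)."""
    inter = np.sum(p * y)
    dice = (2.0 * inter + eps) / (np.sum(p) + np.sum(y) + eps)
    return float((1.0 - dice) ** (1.0 / gamma))

def focal_combo_loss(p, y, delta=0.5, gamma=2.0):
    """Weighted sum of the two terms; delta trades pixel-wise vs. region loss."""
    return (delta * focal_loss(p, y, gamma=gamma)
            + (1.0 - delta) * focal_dice_loss(p, y, gamma=gamma))
```

In practice this would be implemented on logits in a deep learning framework and minimized during U-Net training; the sketch only illustrates why such a loss penalizes confident misclassification of rare foreground (ship) pixels more heavily than plain cross-entropy.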
1 Department of Architecture and Building Engineering, Tokyo Institute of Technology, Japan; Tokyo Tech Academy for Super Smart Society, Tokyo Institute of Technology, Japan