1. Introduction
The ocean covers approximately 70% of the Earth’s surface and plays a critical role in industries such as logistics, fisheries, national defense, and environmental monitoring [1,2]. However, this vast marine environment is frequently affected by maritime accidents, including ship collisions, shipwrecks, and capsizing, as well as environmental incidents such as oil and chemical spills. These incidents significantly impact marine ecosystems and human activities, resulting in ecosystem destruction, loss of fisheries, and disruption of logistics [3,4]. Swift and accurate responses to such incidents require rapid recognition of accident scenes. Maritime search is a cornerstone of various operations, including rescue missions, accident prevention, and surveillance of illegal activities [5]. It is essential for ensuring maritime safety, protecting the environment, and safeguarding lives and property.
The primary platforms for maritime search include satellites, ships, and aircraft, each offering unique advantages and facing distinct limitations. Satellite platforms can monitor large oceanic areas but are constrained by low spatial resolution (250 m) and limited observation cycles (10 times per day), which make them unsuitable for detecting small objects [6,7]. Vessel platforms excel in narrow waters or coastal areas due to their high mobility and real-time accessibility to incident sites [8]. Fishing vessels benefit from the hands-on experience of local fishermen, whereas naval vessels stand out due to their advanced equipment for precise navigation. However, vessels are limited in their search range and speed and are less effective in adverse weather conditions. Aircraft platforms, on the other hand, provide high-resolution data faster than satellites and can operate effectively in poor weather conditions. Nonetheless, they incur higher operational costs and are less suitable for continuous monitoring [9,10]. Consequently, selecting the most appropriate platform depends on the specific type and circumstances of the marine incident.
Sensors used in maritime search include electro-optical (EO), infrared (IR), synthetic aperture radar (SAR), and hyperspectral sensors. EO sensors offer high spatial resolution and color information, enabling intuitive data interpretation through advanced optical technology. Studies utilizing EO sensors have focused on detecting individuals adrift at sea via hidden Markov models [11], removing backgrounds and tracking objects, such as vessels, through video processing techniques to feed automated surveillance systems [12,13], and conducting operational image processing from naval vessels [14]. However, EO sensors are limited by poor nighttime visibility and degraded performance in adverse weather conditions. To address these limitations, EO sensors are often used in conjunction with IR or SAR sensors, which enhance the performance of maritime detection and surveillance systems [15,16,17,18,19].
IR sensors detect thermal energy emitted by objects, making them highly effective for locating missing people or moving vessels at night and during adverse weather conditions [20,21]. Relevant studies involving the use of IR include the detection of small objects at sea by mitigating false positives caused by clouds, the horizon, and solar reflections [22,23], identifying small targets in noisy marine IR imagery through multi-level filtering techniques [24], and monitoring submarine groundwater discharge areas using small unmanned aerial vehicles [25].
SAR sensors use radio waves, making them effective for detecting vessels, oil spills, and offshore structures across large areas of water, regardless of weather conditions or time of day. Key studies using SAR sensors include vessel detection in coastal areas through a mixture statistical model with a K-log normal distribution [26]; vessel detection employing a cubic phase time-scaled transformation technique [27]; the development of systems for automatic identification of vessels using dead-reckoning (DR) positions to estimate location, size, and speed [28]; detection of nonlinearly moving vessels via a back-projection reconstruction algorithm [29]; detection of fast-moving ocean targets using multi-resolution space-time adaptive processing (STAP) [30]; and monitoring the distribution of fishing nets at sea [31].
Hyperspectral sensors provide unique spectral information for each pixel, enabling the identification of an object’s physical properties and chemical composition [32]. Research on hyperspectral sensors includes identifying small objects at sea through spectral unmixing techniques based on the N-FINDR algorithm and ellipse fitting approaches [33]; analyzing the location, size, and type of marine plastics using their distinct optical characteristics [34]; detecting vessels at sea by combining boresight calibration methods with unsupervised techniques [35]; and identifying marine debris through supervised learning algorithms such as random forest (RF) and support vector machines (SVM) [36]. Unlike EO or SAR sensors, hyperspectral sensors can recognize objects based on spectral data from a single pixel without requiring objects to exceed a certain size. Additionally, unlike IR sensors, hyperspectral sensors do not rely on thermal radiant energy, making them less susceptible to environmental conditions. However, these sensors demand significant computational resources for data processing, interpretation, and transmission.
Recent advancements in machine learning and deep learning techniques have significantly enhanced the ability to efficiently analyze complex hyperspectral data for accurate detection and classification of marine objects. Key studies on hyperspectral data analysis for ship detection have utilized fully convolutional networks [37], convolutional neural networks (CNNs) [38,39], singular value decomposition (SVD) networks [40], ensemble deep learning methods [41], and two-stage deep-learning-based hyperspectral neural networks [42]. Other research has focused on various applications of hyperspectral data, including detecting fighter planes docked on aircraft carriers using Faster R-CNNs [43], identifying marine debris through machine learning algorithms [44], and swiftly detecting marine oil spills using CNNs combined with the DBSCAN algorithm [45,46]. Furthermore, generative models and advanced detection techniques like YOLOv5 have shown potential in mitigating data imbalance issues [47] and achieving high accuracy in object detection tasks [48]. However, the direct application of these models to hyperspectral maritime data remains limited due to factors such as the challenging nature of data acquisition, the computational complexity of processing hyperspectral data, and the unique environmental variability inherent to marine research.
This study proposes an efficient method for identifying maritime objects using hyperspectral images. The methodology involves pre-screening detection by identifying significant pixels and enhances practicality by allowing users to select specific areas and confirm the identification results. Hyperspectral image data were collected from six ports in the Republic of Korea using aircraft. The data were processed into spectral statistics and RGB images, which served as training data. Maritime objects were classified into six categories, such as vessels and fishing nets, using both a classifier and a CNN model, and the detection performance of the different models was evaluated and compared.
The structure of this paper is as follows: Section 2 describes the data collection process, preprocessing steps, and models used in the experiments. Section 3 presents the results of the model performance analysis. Finally, Section 4 summarizes the study’s findings and discusses directions for future research.
2. Materials and Methods
2.1. Hyperspectral Data Preparation
2.1.1. Hyperspectral Image Data Collection
Hyperspectral image data were collected using a Cessna Grand Caravan 208B, a medium-sized, single-engine aircraft (Figure 1a). The aircraft maintained a cruising speed of less than 140 knots and operated at an altitude of approximately 1 km, ensuring stable, low-altitude image acquisition. The AisaFENIX hyperspectral sensor (Specim, Finland), installed inside the aircraft, scans wavelengths ranging from 400 nm to 990 nm across 127 bands. At an altitude of 1 km, the sensor provides a spatial resolution of approximately 0.7 m. Figure 1b illustrates the locations where airborne hyperspectral images were captured. The images captured for targeted analysis correspond primarily to three areas in the West Sea (①: Jeongok Port, ②: Gunsan Port, ③: Mokpo Port), two areas in the East Sea (④: Mukho Port, ⑤: Pohang Port), and one area in the South Sea (⑥: Kukdong Port). The selected ports are representative of different regions. Figure 1①–⑥ shows the specific shooting locations and the corresponding flight routes.
2.1.2. Hyperspectral Data Preprocessing
Recognizable maritime objects were identified in the aerial images, and their coordinate information, corresponding RGB images, and spectral distribution data were extracted. The spectral distribution for each object was utilized to calculate statistical values, including the following:
Count: Total number of pixels within the corresponding band.
Mean: Average reflectance in the band.
STD: Standard deviation of reflectance in the band.
Min: Minimum reflectance in the band.
Max: Maximum reflectance in the band.
Q1, Q2, Q3: Reflectance values at the 25th, 50th, and 75th percentiles, respectively.
These statistical values were used as input data for classifier models and were designed to reduce data dimensionality while retaining significant object features. The converted RGB images were utilized as input data for CNN models. Image sizes were adjusted to match the requirements of each model during the training process. Coordinates were further employed to distinguish between detected and undetected objects. A total of 491 maritime objects were selected and categorized as follows: ships (254), fishing nets (40), moving ships (22), waves (59), seawater (29), and unclassified objects (87).
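For illustration, the per-object statistics described above can be computed with a short routine. The sketch below assumes each object's reflectance spectra have already been gathered into a pixels × bands array; the function name and array layout are illustrative and are not taken from the authors' code.

```python
import numpy as np

def spectral_statistics(object_pixels: np.ndarray) -> np.ndarray:
    """Summarize one object's spectra (pixels x bands) into the eight per-band
    statistics described above: count, mean, STD, min, max, Q1, Q2, Q3."""
    n_pixels, n_bands = object_pixels.shape
    stats = np.vstack([
        np.full(n_bands, n_pixels),                # count of pixels per band
        object_pixels.mean(axis=0),                # mean reflectance
        object_pixels.std(axis=0),                 # standard deviation
        object_pixels.min(axis=0),                 # minimum reflectance
        object_pixels.max(axis=0),                 # maximum reflectance
        np.percentile(object_pixels, 25, axis=0),  # Q1
        np.percentile(object_pixels, 50, axis=0),  # Q2 (median)
        np.percentile(object_pixels, 75, axis=0),  # Q3
    ])
    return stats.flatten()  # one feature vector per object for the classifier models
```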
2.2. Methods for Maritime Object Identification
The proposed maritime object identification method emphasizes field applicability and effectiveness. The process involves three main steps: pre-screening detection of maritime objects, selection of desired areas, and confirmation of the identification results (Figure 2). Pre-screening involves the removal of seawater and land by comparing the hyperspectral data with the original images. This process highlights detected maritime objects (single pixels) in red, making them easily distinguishable in the hyperspectral images. Undetected objects, however, are not marked. Analysts can address this limitation by manually selecting suspected areas using a drag-and-drop interface. The selected area’s spectral statistics and image data are then extracted, processed by the analysis model, and returned with identification information.
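The red-pixel overlay and the drag-and-drop selection step can be pictured with a minimal sketch. It assumes a boolean detection mask produced by the seawater and land removal step and an RGB rendering of the hyperspectral cube; the removal models themselves are not reproduced here, and all names are illustrative.

```python
import numpy as np

def overlay_detections(rgb: np.ndarray, detection_mask: np.ndarray) -> np.ndarray:
    """Mark pre-screened maritime-object pixels in red on the RGB rendering.
    `rgb` is an (H, W, 3) uint8 image; `detection_mask` is an (H, W) boolean
    array from the seawater/land removal step."""
    marked = rgb.copy()
    marked[detection_mask] = [255, 0, 0]  # detected single pixels shown in red
    return marked

def extract_selection(cube: np.ndarray, row0: int, row1: int, col0: int, col1: int) -> np.ndarray:
    """Return the spectra inside a user-selected rectangle of the (H, W, bands)
    cube, flattened to (pixels, bands) for the analysis model."""
    patch = cube[row0:row1, col0:col1, :]
    return patch.reshape(-1, patch.shape[-1])
```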
When using a server equipped with an Intel® Xeon® Gold 6248R CPU and 512 GB of memory, the system can analyze hyperspectral images covering maritime areas of up to 5 km² within approximately 100 s. Despite its efficiency, the method has limitations, such as difficulty in detecting vessels near the shore or those moving at high speeds [49].
This approach facilitates detailed maritime object identification while balancing computational efficiency with analyst interaction for ambiguous cases.
2.3. Data Analysis Model
Figure 3 presents the flowchart of the proposed data analysis for maritime object identification using hyperspectral images. The selected maritime objects were processed into two formats: statistical values and RGB images. The statistical values were used as input for classifier models, while the RGB images served as input for CNN models. Statistical values were derived from spectral information through statistical processing and were employed in training and testing SVM and multi-layer perceptron (MLP) models. RGB images were resized to the appropriate resolution for various CNN models, including EfficientNet B0, EfficientNet B1, EfficientNet B2, ShuffleNet V2, ResNet18, Inception V3, and MobileNet V2. The dataset was split into training, validation, and testing subsets in a 6:2:2 ratio to evaluate model performance. When a maritime object was input into the analysis model, the system generated prediction probabilities for each of the six classified object types. Identification was determined based on the class with the highest prediction probability.
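A minimal sketch of the 6:2:2 split and the highest-probability decision rule is given below. The use of scikit-learn and of stratified splitting are assumptions for illustration; the paper states only the split ratio and the argmax rule.

```python
import numpy as np
from sklearn.model_selection import train_test_split

CLASS_NAMES = ["ship", "white wave", "fishing net",
               "seawater", "unclassified", "moving ship"]

def split_6_2_2(X: np.ndarray, y: np.ndarray, seed: int = 0):
    """Split the labeled objects into training/validation/test subsets in a
    6:2:2 ratio, stratified so each category appears in every subset (assumed)."""
    X_train, X_tmp, y_train, y_tmp = train_test_split(
        X, y, test_size=0.4, stratify=y, random_state=seed)
    X_val, X_test, y_val, y_test = train_test_split(
        X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=seed)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)

def identify(probabilities: np.ndarray) -> list[str]:
    """Map per-class prediction probabilities (n_objects x 6) to the class with
    the highest probability, as done for the final identification."""
    return [CLASS_NAMES[i] for i in probabilities.argmax(axis=1)]
```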
2.3.1. Classifier Model
This study employed two classification models—MLP and SVM—to analyze the hyperspectral data.
- Multi-Layer Perceptron (MLP): MLP is an artificial neural network consisting of an input layer, two or more hidden layers, and an output layer [50]. Each layer contains multiple nodes, where the input values are multiplied by weights, summed, and processed through an activation function to produce output values. The MLP model was trained using supervised learning, with the error backpropagation algorithm applied to adjust weights, minimizing the error between predicted and actual values. This model is suitable for classification and regression tasks and is particularly effective for training non-linear data. However, MLP requires a significant amount of data and computational resources and is prone to overfitting. To mitigate overfitting, regularization and dropout techniques were implemented.
- Support Vector Machine (SVM): SVM is a supervised learning algorithm designed for classification and regression tasks [51]. In this study, it was utilized to address the classification problem. SVM works by identifying the decision boundary that optimally separates classes, ensuring the maximum margin between them. Training data are mapped as points in a multi-dimensional space, and new data are classified based on their position relative to the decision boundary. SVM efficiently handles both linear and non-linear classification through the kernel trick, which implicitly maps input data into a high-dimensional feature space, enabling the model to perform complex classifications.
2.3.2. CNN Model
This study applied CNN models to identify maritime objects using hyperspectral data. CNNs are a type of artificial neural network that employ convolutional operations to extract meaningful features from input images [52,53]. The convolutional layer analyzes the local characteristics of input images, while convolutional filters and weight sharing reduce the number of trainable parameters, enhancing computational efficiency. Additionally, CNNs include max-pooling layers to decrease the dimensionality of input images and fully connected layers for classification into different classes (Figure 4).
The CNN models utilized in this study included EfficientNet B0, EfficientNet B1, EfficientNet B2, ShuffleNet V2, Inception V3, MobileNet V2, and ResNet18.
- EfficientNet (B0, B1, B2): EfficientNet, introduced by Tan and Le in 2019 [54], is a CNN architecture that optimizes model performance by scaling width (increasing the number of filters), depth (increasing the number of layers), and resolution (enhancing input image resolution). This balanced scaling approach achieves high performance with fewer parameters. EfficientNet models are categorized from B0 to B7 based on size, determined using AutoML. For this study, we employed EfficientNet B0, B1, and B2.
- ShuffleNet: ShuffleNet is a CNN architecture designed for mobile devices with constrained computational resources, such as those operating at 10–150 MFLOPs [55]. It uses pointwise group convolution and channel shuffle techniques to reduce computational costs while maintaining accuracy. This study utilized ShuffleNet V2, which employs direct metrics, rather than indirect metrics like FLOPs, for effective network design.
- Inception: InceptionNet, also known as GoogLeNet, is a high-performance deep learning architecture for image recognition and detection [56]. Inception V3, released in 2015 [57], combines convolutional filters of various sizes to extract and integrate features, resulting in improved accuracy. This model is widely recognized for its strong performance in computer vision tasks.
- MobileNet: MobileNet is a lightweight CNN model tailored for mobile and embedded devices [58]. It employs depthwise separable convolution to reduce computational complexity while preserving efficiency. This architecture creates a lightweight deep neural network with minimal latency, making it suitable for real-time applications.
- Residual Network (ResNet): ResNet addresses the gradient vanishing problem in deep networks by introducing residual learning and skip connections [59]. These features enable effective training of very deep networks.
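As an illustration of how these backbones can be adapted to the six object categories, the sketch below loads torchvision implementations and replaces each classification head with a six-way output layer. The use of torchvision and ImageNet initialization is an assumption; the paper does not state which implementations or initial weights were used.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6  # ship, white wave, fishing net, seawater, unclassified, moving ship

def build_model(name: str) -> nn.Module:
    """Load a torchvision backbone and replace its classification head with a
    six-way output layer (ImageNet initialization assumed)."""
    if name.startswith("efficientnet"):          # efficientnet_b0 / _b1 / _b2
        m = getattr(models, name)(weights="IMAGENET1K_V1")
        m.classifier[1] = nn.Linear(m.classifier[1].in_features, NUM_CLASSES)
    elif name == "shufflenet_v2":
        m = models.shufflenet_v2_x1_0(weights="IMAGENET1K_V1")
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    elif name == "resnet18":
        m = models.resnet18(weights="IMAGENET1K_V1")
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    elif name == "inception_v3":
        m = models.inception_v3(weights="IMAGENET1K_V1")
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
        m.AuxLogits.fc = nn.Linear(m.AuxLogits.fc.in_features, NUM_CLASSES)
    elif name == "mobilenet_v2":
        m = models.mobilenet_v2(weights="IMAGENET1K_V1")
        m.classifier[1] = nn.Linear(m.classifier[1].in_features, NUM_CLASSES)
    else:
        raise ValueError(f"Unsupported backbone: {name}")
    return m
```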
2.3.3. Setting Analytical Model Conditions
For the MLP model, the hidden layer size was set to 100, the solver to L-BFGS, the alpha (L2 regularization) to 0.1, and the activation function to tanh. For the SVM model, the kernel was set to RBF, gamma to scale, and the regularization parameter C to 10,000. Both models utilized the standard scaler for data preprocessing. The image datasets for the CNN models were resized according to the input size requirements of each model (EfficientNet B0: 224 × 224 pixels, EfficientNet B1: 240 × 240 pixels, EfficientNet B2: 260 × 260 pixels, ShuffleNet V2: 224 × 224 pixels, ResNet18: 224 × 224 pixels, Inception V3: 299 × 299 pixels, MobileNet V2: 224 × 224 pixels). The CNN models were trained with the following hyperparameters: epochs = 100, learning rate = 0.005, batch size = 20, optimizer = SGD, and momentum = 0.9. Owing to the nature of the analysis, the classifier models yielded only test accuracy, whereas the CNN models provided training, validation, and test accuracy.
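The settings listed above map naturally onto common library defaults. The following sketch shows one way to express them, assuming scikit-learn for the classifier models and PyTorch for CNN training; both library choices, the single hidden layer of 100 nodes, and the probability-enabled SVM are assumptions for illustration rather than details stated in the paper.

```python
import torch
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Classifier models with the settings listed above; one hidden layer of 100 nodes
# is assumed from "hidden layer size was set to 100", and probability=True is
# assumed so the SVM can output per-class probabilities.
mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(100,), solver="lbfgs",
                                  alpha=0.1, activation="tanh"))
svm = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", gamma="scale", C=10_000, probability=True))

# CNN training settings listed above; `build_model` refers to the illustrative
# helper sketched in Section 2.3.2.
EPOCHS, BATCH_SIZE, LEARNING_RATE, MOMENTUM = 100, 20, 0.005, 0.9

def make_optimizer(model: torch.nn.Module) -> torch.optim.SGD:
    return torch.optim.SGD(model.parameters(), lr=LEARNING_RATE, momentum=MOMENTUM)
```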
3. Results and Discussion
The classified maritime object data were applied to both classifier models and CNN models to analyze and compare their performance (Table 1). The classifier models, specifically MLP and SVM, achieved classification accuracies of 82.5% and 83.5%, respectively. In contrast, the CNN models demonstrated an average training accuracy of 100% and an average validation accuracy of approximately 90%. Among the CNN models, ShuffleNet V2 exhibited the lowest validation accuracy (87.8%), while EfficientNet B1 achieved the highest (92.9%). To address potential overfitting, we examined the trend curves of loss and accuracy during training and adjusted the epoch and batch size accordingly (Figure S1). On the test set, MobileNet V2 showed the lowest accuracy (86.9%), whereas EfficientNet B0 (94.9%) and Inception V3 (93.9%) outperformed the others. Overall, the CNN models achieved test accuracies roughly 10 percentage points higher than those of the classifier models. While the MLP and SVM models showed only a 1% difference in accuracy, indicating no significant performance gap, CNN models like EfficientNet B0 and Inception V3 demonstrated superior performance, making them more suitable for maritime object classification. It is important to note, however, that the training data used in this study exhibited an imbalance in the number of samples across classification categories. To provide a more precise evaluation of model performance, additional accuracy analyses for each classification category are necessary.
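One way to obtain such per-category figures is to compute per-class accuracy (recall) from the test-set confusion matrix, as sketched below; the exact metric behind Table 2 is an assumption, since the paper does not spell out the formula.

```python
from sklearn.metrics import confusion_matrix

def per_category_accuracy(y_true, y_pred, labels):
    """Fraction of test samples of each category that were predicted correctly
    (per-class recall); assumed to correspond to the figures in Table 2."""
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    totals = cm.sum(axis=1)
    return {label: cm[i, i] / max(totals[i], 1) for i, label in enumerate(labels)}
```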
Table 2 lists the accuracies for different classification items. CNN models outperformed classifier models by more than 10% in overall accuracy. However, the MLP model achieved particularly high accuracy in classifying ships (98.3%), white waves (100%), and fishing nets (100%). In contrast, MLP underperformed CNN models in classifying seawater, unclassified objects, and moving ships. Notably, CNN models identified seawater and moving ships with 100% accuracy, whereas MLP achieved only 80% and 0%, respectively. This discrepancy is attributable to differences in data processing. CNN models analyze image data and excel in recognizing patterns in flat sea images. Classifier models, by contrast, rely on statistical data derived from individual pixels, making them more susceptible to variations in color and brightness, which reduces their accuracy. Consequently, classifier models are less effective in diverse real-world maritime environments and tend to misclassify moving ships as stationary ships. While classifier models are suitable for identifying distinct single objects like ships, CNN models are better suited for classifying a variety of maritime objects.
A detailed examination of CNN model performance revealed that, except for ResNet18, all models achieved over 90% accuracy in ship classification. EfficientNet B0, EfficientNet B1, and ShuffleNet V2 achieved the highest accuracy in this category (94.3%). For classifying white waves, EfficientNet B1, ShuffleNet V2, and Inception V3 performed the best, while EfficientNet B0 and MobileNet V2 showed relatively lower accuracies, falling below 90%. Inception V3 excelled in classifying fishing nets, and ResNet18 achieved the highest accuracy (94.7%) in identifying unclassified objects, followed by Inception V3 (89.5%). For the seawater and moving ship categories, all CNN models achieved 100% accuracy. EfficientNet B0 demonstrated high accuracy for classifying ships, EfficientNet B1 and ShuffleNet V2 performed best in classifying ships and white waves, Inception V3 excelled in classifying white waves and fishing nets, and ResNet18 stood out in identifying unclassified objects. However, EfficientNet B1 and ShuffleNet V2 showed relatively low accuracies of 68.4% and 78.9%, respectively, in recognizing unclassified objects—more than 10% lower than Inception V3. In conclusion, Inception V3 emerged as the most suitable model for comprehensively classifying the six maritime object categories examined in this study.
Figure 5 illustrates the application of the proposed maritime object identification method to hyperspectral images acquired at Mokpo. Figure 5a shows the reconstructed image using RGB wavelengths selected from the spectrum of each pixel. Figure 5b presents the results of a DBSCAN-based seawater identification model, whereas Figure 5c displays the output of a density-based land removal model. When the maritime objects detected in Figure 5c are marked in red pixels and overlaid onto Figure 5a, the composite image in Figure 5d is produced. Figure 5e was created by comparing the positions of the red pixels with ground-truth positions obtained from a high-resolution camera. In this image, detected ships are highlighted with yellow boxes, while undetected maritime objects are marked with red boxes. Of the 17 vessels in the scene, 16 were detected, with one ship remaining undetected. Figure 5f provides detailed views of the detected and undetected vessels (the corresponding EO image is shown in Figure S2). The undetected vessel, as shown in Figure 5c, was removed during the application of the land removal model due to its proximity to land. While refining the land removal method could address this issue, it was deemed impractical to account for all variables encountered in diverse marine environments within a single algorithm.
As demonstrated in Figure 5f, selecting the area containing the undetected vessel from the hyperspectral images allows further analysis, resulting in the identification outputs shown in Figure 5g. The Inception V3 model was employed for this task. The identification results for the undetected vessel were distributed across the six categories, with the largest contributions being 70.08% ship, 27.94% unclassified object, 1.25% fishing net, and 0.5% seawater. While the selected area was classified predominantly as a ship, it is more accurate to interpret it as containing a mix of a ship, unclassified objects, a fishing net, and seawater. The actual objects in the selected area consist of a vessel equipped for fishing, with a large rack installed on the forepeak and fishing nets attached to the rack (Figure S2). The similarity between the hyperspectral-image-based identification results and the actual appearance of these objects indicates the method’s accuracy. However, this approach requires manual selection of each object, making it more suitable for microscopic analysis. For macroscopic analyses, techniques that automate the detection of maritime objects or identify suspected undetected objects should be employed. Figure 5h illustrates the spectral intensity of each pixel within the selected area shown in Figure 5f. The dark blue line represents the average spectral intensity across the selected area, while the light blue region depicts the range between the minimum and maximum spectral values. This visualization effectively highlights the spectral characteristics of the analyzed area.
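The Figure 5h style of visualization can be reproduced with a few lines. The sketch below assumes the selected area's spectra are available as a pixels × bands array; the color choices and function name are illustrative.

```python
import matplotlib.pyplot as plt
import numpy as np

def plot_selection_spectrum(spectra: np.ndarray) -> None:
    """Plot the mean spectral intensity of a selected area (dark blue line) and
    the envelope between the per-band minimum and maximum (light blue fill),
    mirroring the visualization in Figure 5h. `spectra` is (pixels, bands)."""
    bands = np.arange(spectra.shape[1])
    plt.plot(bands, spectra.mean(axis=0), color="darkblue", label="mean")
    plt.fill_between(bands, spectra.min(axis=0), spectra.max(axis=0),
                     color="lightblue", alpha=0.6, label="min-max range")
    plt.xlabel("Band index")
    plt.ylabel("Spectral intensity")
    plt.legend()
    plt.show()
```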
Figure 6 illustrates the application of the proposed maritime object identification method to a moving ship (the corresponding EO image is shown in Figure S3). Figure 6a shows the result of selecting the area where the ship is located in the moving ship images (marked with a red box). Figure 6b highlights the outcome of selecting a wider area that includes a white wave (marked with a red box). Figure 6c,d display the classification results for the respective areas. The identification results for the area in Figure 6a, derived from the CNN classification model, revealed a ship classification with 76.56% confidence, while the model also predicted a 9.43% likelihood of the object being a moving ship. In contrast, the area in Figure 6b was identified as a moving ship with a confidence of 99.89%. Figure 6e,f depict the spectral characteristics of the selected areas. The spectral trends in the two graphs are notably similar; however, in Figure 6e, the minimum spectral values between bands 60 and 110 are lower than those in Figure 6f. These spectral differences may explain how the hyperspectral images allow the stationary ship and the moving ship to be distinguished. Although the identification results vary with the extent of the selected area, the findings confirm that the proposed maritime object identification method can effectively analyze both stationary and moving vessels.
4. Conclusions
This study evaluated the accuracy and performance of various machine learning models in classifying maritime objects extracted from hyperspectral images. The classifier models, such as MLP and SVM, achieved an average test accuracy of 83%. In contrast, CNN-based models demonstrated superior performance, with an average accuracy exceeding 90%. Among the CNN models, Inception V3 exhibited the best performance, achieving a 97% unweighted average accuracy across the six object categories. Furthermore, a method providing identification results for selected areas was introduced, significantly enhancing the practicality and effectiveness of real-time analysis. This approach demonstrated high potential in accurately distinguishing and classifying maritime objects, showcasing its utility in maritime surveillance and detection systems. Nevertheless, limitations such as data imbalance and insufficient training data remain challenges. Addressing data imbalance through strategies such as data augmentation, class weighting, and resampling could enhance model performance, particularly for underrepresented categories. Alternatively, additional real-world data could be acquired through aerial surveys to help balance the dataset, improving model reliability and reducing dependency on synthetic techniques. Addressing these issues, refining the use of hyperspectral information, and optimizing computational efficiency through distributed computing may further improve the scalability, real-time applicability, and practical application of maritime detection technologies.
Conceptualization, D.S. and S.O.; methodology, D.S.; software, D.S.; validation, D.S., S.O. and D.L.; formal analysis, D.S.; investigation, D.S. and S.O.; resources, S.O. and S.P.; data curation, D.S., S.O., S.P. and D.L.; writing—original draft preparation, D.S. and D.L.; writing—review and editing, D.S. and S.O.; visualization, D.S. and D.L.; supervision, S.O.; project administration, S.O. and S.P.; funding acquisition, S.O. and S.P. All authors have read and agreed to the published version of the manuscript.
Not applicable.
The data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. There are no commercial interests affecting the data availability or the results presented in this paper.
Author Daekyeom Lee was employed by SEASON CO., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1. Overview of the airborne acquisition of hyperspectral images. (a) Aircraft and schematic of shooting process used for hyperspectral imaging. (b) Locations (①–⑥) across the Republic of Korea selected for hyperspectral image acquisition and corresponding flight routes.
Figure 2. Flowchart of the maritime object identification process, including pre-screening and analysis.
Figure 3. Proposed workflow for hyperspectral data processing and classification using machine learning.
Figure 5. Maritime object detection and identification using a hyperspectral image taken in Mokpo. (a) Hyperspectral RGB image. (b) Image after seawater removal. (c) Image after land removal. (d) Resulting image used for maritime object detection. (e) Image showing detected and undetected objects. (f) Enlarged image of undetected object. (g) Maritime object identification scores. (h) Spectral characteristics of undetected area.
Figure 6. Identification of a moving ship. (a) Selected area on the ship (red box). (b) Selected area including the white wave near the ship (red box). (c,d) Identification scores for areas marked in (a) and (b). (e,f) Spectral characteristics of areas marked in (a) and (b).
Table 1. Comparison of accuracies (%) obtained by classifier and CNN models in marine object identification.

| Model | Training Set | Validation Set | Testing Set |
|---|---|---|---|
| MLP | – | – | 82.5 |
| SVM | – | – | 83.5 |
| EfficientNet B0 | 100.0 | 90.8 | 94.9 |
| EfficientNet B1 | 100.0 | 92.9 | 89.9 |
| EfficientNet B2 | 100.0 | 90.8 | 89.9 |
| ShuffleNet V2 | 100.0 | 87.8 | 91.9 |
| Inception V3 | 100.0 | 89.8 | 93.9 |
| MobileNet V2 | 100.0 | 89.8 | 86.9 |
| ResNet18 | 100.0 | 89.8 | 90.9 |
Table 2. Classification accuracies (%) detailed by object category.

| Model | Ship | White Wave | Fishing Net | Seawater | Unclassified | Moving Ship |
|---|---|---|---|---|---|---|
| MLP | 98.3 | 100 | 100 | 80 | 56.2 | 0 |
| SVM | 94.8 | 77.8 | 100 | 60 | 50.0 | 0 |
| EfficientNet B0 | 94.3 | 86.7 | 87.5 | 100 | 84.2 | 100 |
| EfficientNet B1 | 94.3 | 100 | 87.5 | 100 | 68.4 | 100 |
| EfficientNet B2 | 92.5 | 93.3 | 87.5 | 100 | 78.9 | 100 |
| ShuffleNet V2 | 94.3 | 100 | 87.5 | 100 | 78.9 | 100 |
| Inception V3 | 92.5 | 100 | 100 | 100 | 89.5 | 100 |
| MobileNet V2 | 90.6 | 86.7 | 75.0 | 100 | 78.9 | 100 |
| ResNet18 | 88.7 | 93.3 | 87.5 | 100 | 94.7 | 100 |
Supplementary Materials
The following supporting information can be downloaded at:
References
1. Sumaila, U.R.; Walsh, M.; Hoareau, K.; Cox, A.; Abdallah, P.; Akpalu, W.; Anna, Z.; Benzaken, D.; Crona, B.; Fitzgerald, T. et al. Ocean Finance: Financing the Transition to a Sustainable Ocean Economy. The Blue Compendium; Lubchenco, J.; Haugam, P.M. Springer: Berlin, Germany, 2023; pp. 309-331. [DOI: https://dx.doi.org/10.1007/978-3-031-16277-0_9]
2. More, S.; Kulkarni, M.; Indap, M. Ocean Resources and Its Sustainable Development. Int. J. Sci. Res.; 2023; 12, pp. 2160-2171. [DOI: https://dx.doi.org/10.21275/sr23523115951]
3. Derraik, J. The Pollution of the Marine Environment by Plastic Debris: A Review. Mar. Pollut. Bull.; 2002; 44, pp. 842-852. [DOI: https://dx.doi.org/10.1016/S0025-326X(02)00220-5] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/12405208]
4. Eliopoulou, E.; Papanikolaou, A.; Voulgarellis, M. Statistical Analysis of Ship Accidents and Review of Safety Level. Saf. Sci.; 2016; 85, pp. 282-292. [DOI: https://dx.doi.org/10.1016/j.ssci.2016.02.001]
5. Qiao, D.; Liu, G.; Lv, T.; Li, W.; Zhang, J. Marine Vision-Based Situational Awareness Using Discriminative Deep Learning: A Survey. J. Mar. Sci. Eng.; 2021; 9, 397. [DOI: https://dx.doi.org/10.3390/jmse9040397]
6. Huh, S.; Jin, K.-W. GEO-KOMPSAT-2A/2B AMI/GOCI-II/GEMS Data & Products. GEO DATA; 2022; 4, pp. 39-49. [DOI: https://dx.doi.org/10.22761/dj2022.4.4.005]
7. Soldi, G.; Gaglione, D.; Forti, N.; Di Simone, A.; Daffinà, F.C.; Bottini, G.; Quattrociocchi, D.; Millefiori, L.M.; Braca, P.; Carniel, S. et al. Space-Based Global Maritime Surveillance. Part I: Satellite Technologies. IEEE Aerosp. Electron. Syst. Mag.; 2021; 36, pp. 8-28. [DOI: https://dx.doi.org/10.1109/MAES.2021.3070862]
8. Jang, W.-J.; Keum, J.-S.; Shin, C.-H. A Study on the Optimal Allocation Model of the Korean Maritime SAR Fleet. J. Navig. Port Res.; 2003; 27, pp. 121-127. [DOI: https://dx.doi.org/10.5394/KINPR.2003.27.2.121]
9. Fingas, M.F.; Brown, C.E. Review of Ship Detection from Airborne Platforms. Can. J. Remote Sens.; 2014; 27, pp. 379-385. [DOI: https://dx.doi.org/10.1080/07038992.2001.10854880]
10. Veenstra, T.S.; Churnside, J.H. Airborne Sensors for Detecting Large Marine Debris at Sea. Mar. Pollut. Bull.; 2012; 65, pp. 63-68. [DOI: https://dx.doi.org/10.1016/j.marpolbul.2010.11.018]
11. Westall, P.; Ford, J.J.; O’Shea, P.; Hrabar, S. Evaluation of Machine Vision Techniques for Aerial Search of Humans in Maritime Environments. Proceedings of the Digital Image Computing: Techniques and Applications (DICTA) 2008; Canberra, Australia, 1–3 December 2008; [DOI: https://dx.doi.org/10.1109/DICTA.2008.89]
12. Prasad, D.K.; Rajan, D.; Rachmawati, L.; Rajabally, E.; Quek, C. Video Processing from Electro-Optical Sensors for Object Detection and Tracking in Maritime Environment: A Survey. IEEE Trans. Intell. Transp. Syst.; 2017; 18, pp. 1993-2016. [DOI: https://dx.doi.org/10.1109/TITS.2016.2634580]
13. Zhang, S.; Qi, Z.; Zhang, D. Ship Tracking Using Background Subtraction and Inter-Frame Correlation. Proceedings of the 2nd International Congress on Image and Signal Processing; Tianjin, China, 17–19 October 2009; [DOI: https://dx.doi.org/10.1109/CISP.2009.5302115]
14. Dijk, J.; Bijl, P.; van den Broek, S.P.; van Eijk, A.M.J. Research Topics on EO Systems for Maritime Platforms. Proceedings of the Electro-Optical and Infrared Systems: Technology and Applications XI; Amsterdam, The Netherlands, 22–25 September 2014; [DOI: https://dx.doi.org/10.1117/12.2070420]
15. Balaji, B.; Sithiravel, R.; Daya, Z.; Kirubarajan, T. Aspects of Detection and Tracking of Ground Targets from an Airborne EO/IR Sensor. Proceedings of the Signal Processing; Sensor/Information Fusion, and Target Recognition XXIV, Baltimore, MD, USA, 20–24 April 2015; [DOI: https://dx.doi.org/10.1117/12.2179283]
16. Gorin, B.A.; Waxman, A. Flight Test Capabilities for Real-Time Multiple Target Detection and Tracking for Airborne Surveillance and Maritime Domain Awareness. Proceedings of the Optics and Photonics in Global Homeland Security IV; Orlando, FL, USA, 16–20 March 2008; [DOI: https://dx.doi.org/10.1117/12.785287]
17. Leonard, C.L.; DeWeert, M.J.; Gradie, J.; Iokepa, J.; Stalder, C.L. Performance of an EO/IR Sensor System in Marine Search and Rescue. Proceedings of the Airborne Intelligence Surveillance Reconnaissance (ISR) Systems and Applications II; Orlando, FL, USA, 28 March–1 April 2005; [DOI: https://dx.doi.org/10.1117/12.603909]
18. Stecz, W.; Gromada, K. Determining UAV Flight Trajectory for Target Recognition Using EO/IR and SAR. Sensors; 2020; 20, 5712. [DOI: https://dx.doi.org/10.3390/s20195712] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33049975]
19. Schoonmaker, J.; Reed, S.; Podobna, Y.; Vazquez, J.; Boucher, C. A Multispectral Automatic Target Recognition Application for Maritime Surveillance, Search, and Rescue. Proceedings of the Sensors, and Command, Control, Communications, and Intelligence (C3I) Technologies for Homeland Security and Homeland Defense IX; Orlando, FL, USA, 5–9 April 2010; [DOI: https://dx.doi.org/10.1117/12.852651]
20. Marques, M.M.; Lobo, V.; Aguiar, A.P.; Silva, J.E.; Borges De Sousa, J.; Nunes, F.; Ribeiro, R.A.; Bernardino, A.; Cruz, G.; Marques, J.S. An Unmanned Aircraft System for Maritime Operations: The Automatic Detection Subsystem. Mar. Technol. Soc. J.; 2021; 55, pp. 38-49. [DOI: https://dx.doi.org/10.4031/MTSJ.55.1.4]
21. Liu, C.; Zhang, Y.; Shen, J.; Liu, F. Improved RT-DETR for Infrared Ship Detection Based on Multi-Attention and Feature Fusion. J. Mar. Sci. Eng.; 2024; 12, 2130. [DOI: https://dx.doi.org/10.3390/jmse12122130]
22. Kim, S.; Lee, J. Small Infrared Target Detection by Region-Adaptive Clutter Rejection for Sea-Based Infrared Search and Track. Sensors; 2014; 14, pp. 13210-13242. [DOI: https://dx.doi.org/10.3390/s140713210]
23. Wang, X.; Zhang, T. Clutter-Adaptive Infrared Small Target Detection in Infrared Maritime Scenarios. Opt. Eng.; 2011; 50, 067001. [DOI: https://dx.doi.org/10.1117/1.3582855]
24. Zuo, Z.C.; Zhang, T. Detection of Sea-Surface Small Targets in Infrared Images Based on Multi-Level Filters. Proceedings of the International Symposium on Multispectral Image Processing; Wuhan, China, 21–23 October 1998; [DOI: https://dx.doi.org/10.1117/12.323678]
25. Young, K.S.R.; Pradhanang, S.M. Small Unmanned Aircraft (SUAS)-Deployed Thermal Infrared (TIR) Imaging for Environmental Surveys with Implications in Submarine Groundwater Discharge (SGD): Methods, Challenges, and Novel Opportunities. Remote Sens.; 2021; 13, 1331. [DOI: https://dx.doi.org/10.3390/rs13071331]
26. Li, Z.; Chen, J.; Xiong, Y.; Yu, H.; Zhang, H.; Gao, B. A Ship Detection and Imagery Scheme for Airborne Single-Channel SAR in Coastal Regions. Remote Sens.; 2022; 14, 4670. [DOI: https://dx.doi.org/10.3390/rs14184670]
27. Yang, Q.; Li, Z.; Li, J.; An, H.; Wu, J.; Pi, Y.; Yang, J. A Novel Bistatic SAR Maritime Ship Target Imaging Algorithm Based on Cubic Phase Time-Scaled Transformation. Remote Sens.; 2023; 15, 1330. [DOI: https://dx.doi.org/10.3390/rs15051330]
28. Chaturvedi, S.K. Study of Synthetic Aperture Radar and Automatic Identification System for Ship Target Detection. J. Ocean Eng. Sci.; 2019; 4, pp. 173-182. [DOI: https://dx.doi.org/10.1016/j.joes.2019.04.002]
29. Sommer, A.; Ostermann, J. Backprojection Subimage Autofocus of Moving Ships for Synthetic Aperture Radar. IEEE Trans. Geosc. Remote Sens.; 2019; 57, pp. 8383-8393. [DOI: https://dx.doi.org/10.1109/TGRS.2019.2920779]
30. Li, H.; Liao, G.; Xu, J.; Lan, L. An Efficient Maritime Target Joint Detection and Imaging Method with Airborne ISAR System. Remote Sens.; 2022; 14, 193. [DOI: https://dx.doi.org/10.3390/rs14010193]
31. Krumme, U.; Giarrizzo, T.; Pereira, R.; Silva De Jesus, A.J.; Schaub, C.; Saint-Paul, U. Airborne Synthetic-Aperture Radar (SAR) Imaging to Help Assess Impacts of Stationary Fishing Gear on the North Brazilian Mangrove Coast. ICES J. Mar. Sci.; 2015; 72, pp. 939-951. [DOI: https://dx.doi.org/10.1093/icesjms/fsu188]
32. Jia, J.; Wang, Y.; Chen, J.; Guo, R.; Shu, R.; Wang, J. Status and Application of Advanced Airborne Hyperspectral Imaging Technology: A Review. Infrared Phys. Technol.; 2020; 104, 103115. [DOI: https://dx.doi.org/10.1016/j.infrared.2019.103115]
33. Park, J.J.; Park, K.-A.; Kim, T.-S.; Oh, S.; Lee, M. Aerial Hyperspectral Remote Sensing Detection for Maritime Search and Surveillance of Floating Small Objects. Adv. Space Res.; 2023; 72, pp. 2118-2136. [DOI: https://dx.doi.org/10.1016/j.asr.2023.06.055]
34. Garaba, S.P.; Aitken, J.; Slat, B.; Dierssen, H.M.; Lebreton, L.; Zielinski, O.; Reisser, J. Sensing Ocean Plastics with an Airborne Hyperspectral Shortwave Infrared Imager. Environ. Sci. Technol.; 2018; 52, pp. 11699-11707. [DOI: https://dx.doi.org/10.1021/acs.est.8b02855]
35. Freitas, S.; Silva, H.; Almeida, J.; Silva, E. Hyperspectral Imaging for Real-Time Unmanned Aerial Vehicle Maritime Target Detection. J. Intell. Robot. Syst.; 2018; 90, pp. 551-570. [DOI: https://dx.doi.org/10.1007/s10846-017-0689-0]
36. Freitas, S.; Silva, H.; Silva, E. Remote Hyperspectral Imaging Acquisition and Characterization for Marine Litter Detection. Remote Sens.; 2021; 13, 2536. [DOI: https://dx.doi.org/10.3390/rs13132536]
37. Lin, H.; Shi, Z.; Zou, Z. Fully Convolutional Network with Task Partitioning for Inshore Ship Detection in Optical Remote Sensing Images. IEEE Geosci. Remote Sens. Lett.; 2017; 14, pp. 1665-1669. [DOI: https://dx.doi.org/10.1109/LGRS.2017.2727515]
38. Freitas, S.; Silva, H.; Almeida, J.M.; Silva, E. Convolutional Neural Network Target Detection in Hyperspectral Imaging for Maritime Surveillance. Int. J. Adv. Robot. Syst.; 2019; 16, pp. 1-13. [DOI: https://dx.doi.org/10.1177/1729881419842991]
39. Liu, W.; Ma, L.; Chen, H. Arbitrary-Oriented Ship Detection Framework in Optical Remote-Sensing Images. IEEE Geosci. Remote Sens. Lett.; 2018; 15, pp. 937-941. [DOI: https://dx.doi.org/10.1109/LGRS.2018.2813094]
40. Zou, Z.; Shi, Z. Ship Detection in Spaceborne Optical Image with SVD Networks. IEEE Transac. Geosci. Remote Sens.; 2016; 54, pp. 5832-5845. [DOI: https://dx.doi.org/10.1109/TGRS.2016.2572736]
41. Gąsienica-Józkowy, J.; Knapik, M.; Cyganek, B. An Ensemble Deep Learning Method with Optimized Weights for Drone-Based Water Rescue and Surveillance. Integr. Comput. Aid. Eng.; 2021; 28, pp. 221-235. [DOI: https://dx.doi.org/10.3233/ICA-210649]
42. Yan, L.; Yamaguchi, M.; Noro, N.; Takara, Y.; Ando, F. A Novel Two-Stage Deep Learning-Based Small-Object Detection Using Hyperspectral Images. Opt. Rev.; 2019; 26, pp. 597-606. [DOI: https://dx.doi.org/10.1007/s10043-019-00528-0]
43. Liu, X.; Wang, C.; Wang, H.; Fu, M.; Feng, Y.; Bourennane, S.; Sun, Q.; Ma, L. Target Detection of Hyperspectral Image Based on Faster R-CNN with Data Set Adjustment and Parameter Turning. Proceedings of the Oceans 2019; Marseille, France, 17–20 June 2019; [DOI: https://dx.doi.org/10.1109/OCEANSE.2019.8867428]
44. Taggio, N.; Aiello, A.; Ceriola, G.; Kremezi, M.; Kristollari, V.; Kolokoussis, P.; Karathanassi, V.; Barbone, E. A Combination of Machine Learning Algorithms for Marine Plastic Litter Detection Exploiting Hyperspectral PRISMA Data. Remote Sens.; 2022; 14, 3606. [DOI: https://dx.doi.org/10.3390/rs14153606]
45. Jiang, Z.; Zhang, J.; Ma, Y.; Mao, X. Hyperspectral Remote Sensing Detection of Marine Oil Spills Using an Adaptive Long-Term Moment Estimation Optimizer. Remote Sens.; 2022; 14, 157. [DOI: https://dx.doi.org/10.3390/rs14010157]
46. Zhan, C.; Bai, K.; Tu, B.; Zhang, W. Offshore Oil Spill Detection Based on CNN, DBSCAN, and Hyperspectral Imaging. Sensors; 2024; 24, 411. [DOI: https://dx.doi.org/10.3390/s24020411]
47. Zhu, K.; Cheng, S.; Kovalchuk, N.; Simmons, M.; Guo, Y.-K.; Matar, O.; Arcucci, R. Analyzing Drop Coalescence in Microfluidic Devices with a Deep Learning Generative Model. Phys. Chem. Chem. Phys.; 2023; 25, pp. 15744-15755. [DOI: https://dx.doi.org/10.1039/D2CP05975D]
48. Xia, Z.; Ma, K.; Cheng, S.; Blackburn, T.; Peng, Z.; Zhu, K.; Zhang, W.; Xiao, D.; Knowles, A.; Arcucci, R. Accurate Identification and Measurement of the Precipitate Area by Two-Stage Deep Neural Networks in Novel Chromium-Based alloys. Phys. Chem. Chem. Phys.; 2023; 25, pp. 15970-15987. [DOI: https://dx.doi.org/10.1039/D3CP00402C]
49. Oh, S.; Seo, D. Hyperspectral Image Analysis Technology Based on Machine Learning for Marine Object Detection. J. Korean Soc. Mar. Environ. Saf.; 2022; 28, pp. 1120-1128. [DOI: https://dx.doi.org/10.7837/kosomes.2022.28.7.1120]
50. Kruse, R.; Mostaghim, S.; Borgelt, C.; Braune, C.; Steinbrecher, M. Multi-layer Perceptrons. Computational Intelligence; Springer: Berlin, Germany, 2022; pp. 53-124. [DOI: https://dx.doi.org/10.1007/978-3-030-42227-1_5]
51. Cristianini, N.; Ricci, E. Support Vector Machines. Encyclopedia of Algorithms; Kao, M.Y. Springer: Boston, MA, USA, 1992; pp. 928-932. [DOI: https://dx.doi.org/10.1007/978-0-387-30162-4_415]
52. LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W. Backpropagation Applied to Handwritten Zip Code Recognition. Neural Comput.; 1989; 1, pp. 541-551. [DOI: https://dx.doi.org/10.1162/neco.1989.1.4.541]
53. Simard, P.Y.; Steinkraus, D.; Platt, J.C. Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis. Proceedings of the Seventh International Conference on Document Analysis and Recognition; Edinburgh, UK, 6 August 2003; [DOI: https://dx.doi.org/10.1109/ICDAR.2003.1227801]
54. Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. Proceedings of the Thirty-Sixth International Conference on Machine Learning (ICML); Long Beach, CA, USA, 9–15 June 2019; [DOI: https://dx.doi.org/10.48550/arXiv.1905.11946]
55. Ma, N.; Zhang, X.; Zheng, H.-T.; Sun, J. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. Computer Vision–ECCV 2018; Ferrari, V.; Hebert, M.; Sminchisescu, C.; Weiss, Y. Springer: Berlin, Germany, 2018; pp. 122-138. [DOI: https://dx.doi.org/10.1007/978-3-030-01264-9_8]
56. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Boston, MA, USA, 7–12 June 2015; [DOI: https://dx.doi.org/10.1109/CVPR.2015.7298594]
57. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Las Vegas, NV, USA, 27–30 June 2016; [DOI: https://dx.doi.org/10.1109/CVPR.2016.308]
58. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. In Proceeding of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Honolulu, HI, USA, 21–26 July 2017; [DOI: https://dx.doi.org/10.48550/arXiv.1704.04861]
59. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Las Vegas, NV, USA, 27–30 June 2016; [DOI: https://dx.doi.org/10.1109/CVPR.2016.90]
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
The identification of maritime objects is crucial for ensuring navigational safety, enabling effective environmental monitoring, and facilitating efficient maritime search and rescue operations. Given its ability to provide detailed spectral information, hyperspectral imaging has emerged as a powerful tool for analyzing the physical and chemical properties of target objects. This study proposes a novel maritime object identification framework that integrates hyperspectral imaging with machine learning models. Hyperspectral data from six ports in South Korea were collected using airborne sensors and subsequently processed into spectral statistics and RGB images. The processed data were then analyzed using classifier and convolutional neural network (CNN) models. The results obtained in this study show that CNN models achieved an average test accuracy of 90%, outperforming classifier models, which achieved 83%. Among the CNN models, EfficientNet B0 and Inception V3 demonstrated the best performance, with Inception V3 achieving a 97% unweighted average accuracy across the object categories. This study presents a robust and efficient framework for marine surveillance utilizing hyperspectral imaging and machine learning, offering significant potential for advancing marine detection and monitoring technologies.
1 Department of Electrical and Electronic Engineering, Semyung University, Jecheon 27136, Republic of Korea;
2 SEASON Co., Ltd., Sejong City 30127, Republic of Korea;
3 Maritime Digital Transformation Research Center, Korea Research Institute of Ships and Ocean Engineering, Daejeon 34103, Republic of Korea;
4 Ocean and Maritime Digital Technology Research Division, Korea Research Institute of Ships and Ocean Engineering, Daejeon 34103, Republic of Korea