Abstract: Individual fish segmentation is a prerequisite for feature extraction and object identification in any machine vision system. In this paper, a method for segmentation of overlapping fish images in aquaculture is proposed. First, the shape factor is used to determine whether an overlap exists in the picture. Then, the corner points are extracted using the curvature scale space algorithm, and the skeleton is obtained by the improved Zhang-Suen thinning algorithm. Finally, the intersecting points are obtained, and the overlapped region is segmented. The results show that the average error rate and average segmentation efficiency of this method were 10% and 90%, respectively. Compared with the traditional watershed method, the separation points are accurate, and the segmentation accuracy is high. Thus, the proposed method achieves better performance in segmentation accuracy and effectiveness. This method can be applied to multi-target segmentation and fish behavior analysis systems, and it can effectively improve recognition precision.
Keywords: aquaculture, image processing, overlapping segmentation, corner detection, improved Zhang-Suen algorithm
DOI: 10.25165/j.ijabe.20191206.3217
1 Introduction
Machine vision has been applied to all aspects of agriculture[1-3]. However, this technology is still not widely used in aquaculture and has not matured into a useful tool, for many reasons[4,5]. Among them, the overlapping of fish is one difficulty preventing the further application of machine vision to aquaculture because, as shown previously, individual fish segmentation is a prerequisite for some significant machine vision applications, such as fish recognition, biometric measurement, biomass estimation, and behavior tracking[6-9].
Many studies have used machine vision for aquaculture. For example, by sampling frames from a video, a simple method for quantifying fish swimming behavior has been proposed[10]. A structured light sensor has been used for automated multiple fish tracking in three dimensions[11]. However, to avoid the overlapping of fish in image processing, the water depth of the experimental systems used in the above studies was usually very low[12], whereas at an actual aquaculture site, fish overlapping is a severe problem. Compared with other industrial applications, many challenges must be overcome before machine vision can be used in aquaculture. Indeed, the observed object is usually uncontrollable, easily stressed, and free to move in its environment. The interference caused by fish activity has limited the further application of this technology in aquaculture[13,14].
The detection of overlapping targets has attracted the attention of many researchers. This is an important topic because of the need for accurate characterization, measurement, and localization[15,16]. Several approaches have been developed to detect overlapping targets in gray-level images, and the main research focuses can be classified into the following categories: 1) Morphological methods. The basic operations include dilation, erosion, opening, and closing, and the image data are simplified while preserving the essential features[17-19]. 2) Edge detection and concavity matching. When two objects with circular shapes adhere, two pits whose curvature directions point outward will exist. This type of algorithm exploits this phenomenon to find the correct pair of pits to join into a dividing line that forms a reasonable division[20-22]. 3) Watershed segmentation. The watershed segmentation method is based on topology theory, is convenient to implement, and has been widely used[23-25]. Because of the differences and complexity of overlapped objects, choosing a suitable method for overlapping segmentation is difficult. Each image segmentation method must consider environmental factors, statistical characteristics, morphology, and edge characteristics to better segment and analyze the image.
In summary, the above methods are mainly applied to overlap and adhesion in a two-dimensional plane. However, in aquaculture, fish are highly active in three-dimensional space. Most conventional image segmentation algorithms, such as morphological operations, pit detection algorithms, and watershed algorithms, take a two-dimensional in-plane target as the research object, and handling vertical overlap can negatively affect the results. Therefore, the use of the above algorithms typically results in over-segmentation.
In this paper, an image processing method is proposed to address the segmentation of overlapping fish images in aquaculture. The objective is to introduce a novel approach for segmenting overlapping subjects. To bridge the gap between the laboratory and commercial farms, a real aquaculture facility was simulated in a laboratory, providing a theoretical basis for future applications.
2 Materials and methods
2.1 Experimental materials
In this experiment, adult Cyprinus carpio var. specularis were selected. Before the experiment, they had been raised for one month and had adapted to the current environment. During acclimation, the dissolved oxygen level was kept within the range of 5-7 mg/L, and the water temperature was maintained at 10°C-15°C.
2.2 Experimental system
The experiment was conducted in the laboratory at the Xiaotangshan National Experiment Station for Precision Agriculture, Beijing, China. A previously described experimental system was used[5,26,27]. The system contains six tanks, and each tank has a diameter of 1.5 m and a water depth of 1 m. The image collection equipment was an industrial camera (AVT Mako G-223B), which has a bit depth of 8/12 bits and a resolution of 2048×1088 pixels. The camera was used in conjunction with an 850 nm light compensation source and an 8 mm focal-length lens (KOWA LM8HC). Both the camera and the light source were fixed at the top of the tank; the position of the light compensation lamp was adjusted so that the direction of reflected light coincided with the tank outlet[28]. A computer connected to the camera collected the images in real-time and analyzed them. The computer was a Dell OptiPlex 7020 (Intel Core i5-4590 CPU @ 3.30 GHz, 4.00 GB RAM). The data processing platforms were MATLAB 2013a and OpenCV 2.4.13. To reduce the effect of human activity on the fish activities, the computer was placed in the control room next to the laboratory and connected to the camera by a 30 m twisted-pair Gigabit cable. The experimental equipment is illustrated in Figure 1.
2.3 Image processing
For each image, the average background method was used to extract the background[29] (Figure 2b). After background subtraction, the binary image still contained bubbles and other impurities (Figure 2c). Because the sizes of all fish in each tank were basically the same, noise and other non-overlapping spots that were too small were filtered out according to their pixel counts. Finally, the binary image was obtained, as in Figure 2d.
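As an illustration of this preprocessing pipeline, a minimal Python sketch is given below. It assumes grayscale frames and uses a modern OpenCV build for brevity (the study itself used MATLAB 2013a and OpenCV 2.4.13); the function name and the two thresholds are illustrative, not values from the paper.

```python
import cv2
import numpy as np

def preprocess(frames, current, diff_thresh=30, min_area=200):
    """Average-background subtraction followed by small-object removal.

    frames: list of grayscale uint8 frames used to build the background.
    current: grayscale uint8 frame to segment.
    diff_thresh and min_area are illustrative, not the paper's values.
    """
    # Average background: per-pixel mean over the sampled frames.
    background = np.mean(np.stack(frames, axis=0), axis=0).astype(np.uint8)

    # Background subtraction and binarization.
    diff = cv2.absdiff(current, background)
    _, binary = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)

    # Remove bubbles/noise: drop connected components below a pixel-count threshold.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    cleaned = np.zeros_like(binary)
    for i in range(1, n):  # label 0 is the image background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            cleaned[labels == i] = 255
    return cleaned
```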
2.4 Fish body segmentation
The fish swimming in the tank are distributed in three dimensions and inevitably overlap and cross, causing difficulty for fish target segmentation. By analyzing the swimming poses, fish swimming overlap can be roughly divided into the following categories (Figure 3).
From the above, fish overlapping can be divided into two basic types: X-type and Y-type; others, such as the T-type, V-type, and multiple types, can be considered special cases of these two types or superpositions of multiple types. The flowchart of the overlapping fish segmentation method is shown in Figure 4.
2.4.1 Overlapping region determination
Determining whether an overlap exists is the key to achieving an acceptable recognition rate. Because the boundary contours of overlapping or adhering fish are more complex than those of individual fish, the shape factor is selected as the basis for judging whether an overlap exists. The shape factor SF describes the complexity of the boundary and is defined as Equation (1)[30]:
SF = \frac{4\pi A}{C^{2}}    (1)
where A and C denote the area and the circumference of the connected region, respectively, both measured in pixels. The shape factor has a range of 0 < SF ≤ 1; when the target region is circular, SF attains its maximum value of 1. When multiple targets overlap fully or partially, the boundary becomes more complicated because of the resulting depressions, so for the same area, an overlapping target has a larger circumference and a smaller shape factor. Therefore, the shape factor and the maximum area provide a basis for judging whether an overlap exists. Threshold selection was performed by manually judging the overlap area and shape factor: a region is judged to be a single target if and only if SF > SF0 and A < Amax, where SF0 is the shape factor threshold and Amax is the maximum area threshold.
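The overlap test can be written directly from Equation (1) and the thresholds later reported in Section 3.1 (SF0 = 0.22, Amax = 5000). A minimal sketch, assuming the connected region is available as an OpenCV contour; the function name is illustrative:

```python
import cv2
import numpy as np

def is_single_fish(contour, sf0=0.22, a_max=5000):
    """Overlap test from Equation (1): SF = 4*pi*A / C^2."""
    area = cv2.contourArea(contour)           # A, in pixels
    perimeter = cv2.arcLength(contour, True)  # C, in pixels
    if perimeter == 0:
        return False
    # SF equals 1 for a circle and shrinks as the boundary grows more complex.
    sf = 4.0 * np.pi * area / perimeter ** 2
    # Single target if and only if SF > SF0 and A < Amax.
    return sf > sf0 and area < a_max
```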
2.4.2 Corner detection
Feature point extraction is a hot topic in image processing. A feature point is an important local feature that determines the shape of a critical region in the image, and finding feature points is a vital step for most machine vision applications. Most feature point detection algorithms use corners as feature points. A corner has strong intensity variation in all directions, exhibits rotation invariance, and is relatively stable under changing illumination conditions. In this study, to differentiate the intersecting targets, the intersection points must be extracted first. Because the intersection points belong to the corners, the corners of the binary image must be extracted first.
Currently, there are two main categories of corner detection: one based on the gray level and the other based on the image contour. The gray-level-based method computes gradients to detect corners, and the Harris algorithm is the typical method[31]. However, the Harris corner detector is not scale invariant and is time-intensive. In contrast, the contour-based approach first extracts the contours and then finds the corners along them; the curvature scale space (CSS) algorithm is one example of this type. The CSS corner detector defines a corner as a local curvature maximum on the target contour whose curvature exceeds a certain threshold. The CSS algorithm uses a higher scale for corner detection and a lower scale to locate the corner points; it is accurate, and its computation requirements are low[32]. The original CSS algorithm detects corners in an image with the following steps (a simplified single-scale sketch in Python follows the steps below):
Step 1: Apply the Canny edge detector to the gray-level image to obtain a binary edge map;
Step 2: Extract the edge contours from the edge map, fill the gaps in the contours, and find the T-junctions;
Step 3: Compute the curvature at a fixed high scale for each contour to retain all true corners, then track the corner points down to the lowest scale for good positional accuracy: each maximum detected at the high scale is re-located within its neighborhood at the next lower scale, and the search is repeated until the lowest scale is reached;
Step 4: Consider all local curvature maxima as corner candidates, eliminating rounded corners and false corners caused by boundary noise and details;
Step 5: Add the endpoints of open (line-mode) curves as corners if they are not close to the corners detected above.
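OpenCV provides no built-in CSS detector, so the sketch below illustrates only the core of Steps 3 and 4 at a single scale: smooth the contour coordinates with a Gaussian, compute the curvature, and keep the local maxima above a threshold. A full CSS implementation would detect at a high scale and track the maxima down through lower scales; all names and parameter values here are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def curvature_corners(contour, sigma=3.0, k_thresh=0.05):
    """One-scale curvature-maxima corner candidates on a closed contour.

    contour: (N, 2) array of ordered (x, y) boundary points.
    sigma and k_thresh are illustrative values, not tuned.
    """
    # Smooth the coordinates at scale sigma (wrap mode: the contour is closed).
    x = gaussian_filter1d(contour[:, 0].astype(float), sigma, mode="wrap")
    y = gaussian_filter1d(contour[:, 1].astype(float), sigma, mode="wrap")
    # Curvature of a planar curve: |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2).
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    k = np.abs((dx * ddy - dy * ddx) / np.power(dx ** 2 + dy ** 2, 1.5))
    # Keep local maxima above the curvature threshold (Step 4).
    is_max = (k > k_thresh) & (k >= np.roll(k, 1)) & (k >= np.roll(k, -1))
    return np.where(is_max)[0]  # indices of corner candidates along the contour
```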
The corner detection results of two overlapping types are shown in Figure 5. Take the X-type overlapping as an example: there are 11 correct corner points, and the coordinates of the correct corner points are shown in Table 1.
2.4.3 Skeleton extraction
The binary image was thinned to obtain the skeleton of the image, which is composed of single-pixel-width curves. The skeleton provides information such as the size and shape of the object. In this paper, the Zhang-Suen thinning algorithm was used to extract the skeleton. This algorithm has the advantage of fast operation and maintains the continuity of the curves after thinning[33]. However, it cannot guarantee that the thinned curves have single-pixel resolution, which makes finding the intersections difficult. Some unreasonable points exist among the results of the algorithm; these points interfere with the extracted feature points and thus need to be addressed. In the results of the original thinning algorithm, the situations shown in Figure 6a, Figure 6d, and Figure 6h often occur. Here, a triple junction (a point at which three lines intersect) is defined as a target pixel (value 1) with exactly three target pixels in its 8-neighborhood, so that the pixel values in its 3×3 neighborhood sum to 4; intuitively, it is the intersection of three lines. The treatment of the three cases is presented below.
1) In the first case, if the feature points are extracted directly from Figure 6a, 2 triple junctions will be obtained. However, this figure shows that there is no three-line intersection; that is, there is no triple junction. This is because of the unreasonable pixel in Figure 6b. Therefore, this pixel must be removed according to the following rule: if a point is a target pixel (value 1), two target pixels are located in its 8-neighborhood, and the distance between these two points is √2, then the point is an unreasonable point and should be removed.
2) In the second case, if the feature points are extracted directly from Figure 6d, three triple junctions will be obtained. However, three-line intersections do not actually occur at all three of these points, because of the unreasonable point shown in Figure 6e. Therefore, the point needs to be removed according to the following rule: if a point is a target pixel (value 1) and there are two triple junctions and a cross point in its 8-neighborhood, then the point is unreasonable and must be removed. After removing the point, Figure 6f is obtained, which contains a single triple junction that can be identified as a reasonable feature point.
3) In the third case, if the feature points are extracted directly from Figure 6h, three triple junctions will be obtained, yet only one point exists where three lines intersect. The unreasonable points are shown in Figure 6i. However, unlike in the above two cases, these pixels cannot be removed; if they are removed, the skeleton becomes incomplete. Therefore, the criterion for finding the correct triple junction is as follows: if a point is a triple junction, two other triple junctions exist in its 8-neighborhood, and the distance between these two points is √2, then the point is the correct triple junction.
After the binary image shown in the figure is thinned, the skeleton branches of the image are deleted and the bifurcated line segments removed to obtain the image skeleton (Figure 7b and Figure 8b).
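A compact sketch of this stage is given below. It uses scikit-image's skeletonize (a Zhang-style thinning) as a stand-in for the improved Zhang-Suen algorithm, so the unreasonable-pixel rules above are not reproduced; only the triple-junction test (a skeleton pixel whose 3×3 neighborhood sums to 4) is shown.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def triple_junctions(binary):
    """Thin a binary mask and locate triple junctions on the skeleton.

    A triple junction is a skeleton pixel whose 3x3 neighborhood sums to 4
    (the pixel itself plus exactly three skeleton neighbors). skeletonize()
    stands in for the paper's improved Zhang-Suen thinning.
    """
    skeleton = skeletonize(binary > 0).astype(np.uint8)
    kernel = np.ones((3, 3), dtype=np.uint8)   # 3x3 box sum, center included
    sums = convolve(skeleton, kernel, mode="constant", cval=0)
    junctions = (skeleton == 1) & (sums == 4)
    return skeleton, np.argwhere(junctions)    # junction (row, col) coordinates
```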
2.4.4 Contour intersection extraction
The positions of the triple junctions were enlarged (Figure 7c and Figure 8c). Near triple junctions A, B, and D (Figure 7e, Figure 7f, and Figure 8d), the triple junctions can be found, and the coordinates of the center point C can be obtained from points A and B. In Figure 7c, the coordinates of points A and B are (118, 78) and (136, 77), respectively, and the coordinates of the center point C are (127, 78). The coordinates of point D are treated similarly.
In the magnified binary image, the corners are located at their corresponding positions. In Figure 5a, corners 2, 4, 7, and 10 are the intersection points of the contour; these four points must be selected from the corner set to find the correct cross points. The Euclidean distance from each corner to midpoint C was calculated, and the results are shown in Table 2.
The Euclidean distances of corner points 2, 4, 7, and 10 are 13, 23, 13.89, and 21.02, respectively, which are smaller than those between the other corner points and C; that is, the true intersection points lie closest to the skeleton midpoint. After numerous experiments, the intersections were selected based on the principle that the distance from a corner to C must be less than 1.5 times the length of line AB.
After superimposing the skeleton and the contour, the distance from each corner to the center point of the skeleton was calculated. In the X-type overlapping image, the distances from corners 2, 4, 7, and 10 to midpoint C were less than 1.5 times the length of line AB (18.02), whereas the distances from the other corner points to C were greater than this value. Therefore, points 2, 4, 7, and 10 are the intersections sought. For convenience of description, corners 2, 4, 7, and 10 are named E, F, G, and H, as shown in Figure 9a. The same method can be used to find the intersections of Y-type overlapping (Figure 9).
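The selection rule reduces to a few lines: take the midpoint C of the two triple junctions and keep the corners whose Euclidean distance to C is less than 1.5 times |AB|. A minimal sketch with illustrative names:

```python
import numpy as np

def select_intersections(corners, a, b):
    """Filter contour corners down to the true intersection points.

    corners: (N, 2) array of corner coordinates from the corner detector.
    a, b: the two triple junctions on the skeleton; C is their midpoint.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    c = (a + b) / 2.0                    # midpoint C of the skeleton junctions
    ab_len = np.linalg.norm(b - a)       # length of line AB
    dists = np.linalg.norm(np.asarray(corners, float) - c, axis=1)
    # Keep corners closer to C than 1.5 * |AB| (the empirical rule above).
    return np.asarray(corners)[dists < 1.5 * ab_len]
```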
2.4.5 Overlapping segmentation
Because the overlapping fish lose their contour lines at the intersection points, the contour lines must be complemented to differentiate the overlapped targets. Assuming that the contour to be added is a straight line, since Y-type overlapping contains only two intersection points, the contour line can be separated if the two intersection points I and J are connected (Figure 9e). However, X-type overlapping images contain four intersection points, and thus, it is necessary to determine which lines can be used to differentiate the cross-overlap target. For the overlapped objects in Figure 9a, only EF, HG, EH, and FG need to be connected to separate the intersecting objects, whereas the two non-contour lines EG and FH must be excluded during image processing. First, with point E as the vertex connected to points F, G, and H, lines EF, EG, and EH are obtained; among them, EF and EH are contour lines, and EG is the line segment to be excluded. If the coordinates of the intersection points are E(x1, y1), F(x2, y2), G(x3, y3), and H(x4, y4), then the linear equations of EF and GH are expressed as:
\frac{x - x_{1}}{x_{2} - x_{1}} = \frac{y - y_{1}}{y_{2} - y_{1}}    (2)

\frac{x - x_{3}}{x_{4} - x_{3}} = \frac{y - y_{3}}{y_{4} - y_{3}}    (3)
When the pixel coordinates satisfying these two linear equations are traversed in the binary image, the equations have no common solution; therefore, there is no intersection point within the quadrilateral area surrounded by the four points E, F, G, and H. Similarly, when EH and FG are connected, the line equations show that no intersection point between EH and FG exists within the quadrilateral area. However, the linear equations of EG and FH do yield an intersection point, and thus, EG (and likewise FH) is a line segment to be excluded. Therefore, connecting the line segments EF and GH, or EH and FG, correctly separates the overlapping targets, whereas connecting EG and FH produces an intersection point within the quadrilateral formed by the four points, so the intersecting targets cannot be separated.
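This exclusion rule can also be implemented with a standard cross-product segment-intersection test instead of traversing pixel coordinates: of the three ways to pair the four intersection points, only the two pairings whose connecting segments do not cross inside the quadrilateral are kept as dividing lines. A sketch under that assumption, with illustrative names:

```python
def _cross(o, p, q):
    """2D cross product of vectors OP and OQ."""
    return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])

def segments_intersect(p1, p2, p3, p4):
    """True if open segments p1p2 and p3p4 properly intersect."""
    d1, d2 = _cross(p3, p4, p1), _cross(p3, p4, p2)
    d3, d4 = _cross(p1, p2, p3), _cross(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def splitting_chords(e, f, g, h):
    """Return the corner pairings usable as dividing lines.

    The diagonals (EG, FH) cross inside the quadrilateral and are rejected;
    the two non-crossing pairings (EF+GH and EH+FG) complement the contour.
    """
    pairings = [((e, f), (g, h)), ((e, h), (f, g)), ((e, g), (f, h))]
    return [(s1, s2) for s1, s2 in pairings
            if not segments_intersect(s1[0], s1[1], s2[0], s2[1])]
```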
2.5 Algorithm evaluation and indexes
To verify the accuracy of this method, the overlapping fish images were also processed with the watershed segmentation algorithm, and the results were compared with those of the method proposed here. The segmentation rate (SR), segmentation error rate (SERR), and segmentation efficiency rate (SEFR) were used to evaluate the test results. The indexes are calculated as follows:
SR = \frac{N_{2}}{N_{1}}    (4)

SERR = \frac{N_{under} + N_{error}}{N_{1}} \times 100\%    (5)

SEFR = \frac{N_{correct}}{N_{2}} \times 100\%    (6)
where N1, N2, Nunder, Nerror, and Ncorrect represent the actual number of fish, the number of segmentation results, the number of under-segmentations, the number of erroneous segmentations, and the number of correct segmentations, respectively. The SR indicates the degree of segmentation: SR > 1 indicates over-segmentation, whereas SR < 1 indicates under-segmentation. The SERR indicates the ratio of the number of segmentation errors (under-segmentation, over-segmentation, and erroneous segmentation) to the actual number; the smaller this value, the better the segmentation. The SEFR reflects the proportion of correct segmentations; the larger this value, the higher the segmentation efficiency and the fewer the invalid partitions. The maximum value (100%) indicates that every segmentation is valid.
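For reference, the three indexes can be computed as below. The formulas are an interpretation reconstructed from the definitions above (in particular, pooling under- and erroneous segmentations in SERR is an assumption), not the authors' exact code.

```python
def evaluation_indexes(n1, n2, n_under, n_error, n_correct):
    """SR, SERR, and SEFR as interpreted from the definitions in the text.

    n1: actual number of fish; n2: number of segmentation results;
    n_under / n_error: under- and erroneous segmentations;
    n_correct: correctly segmented targets.
    """
    sr = n2 / n1                              # SR > 1 means over-segmentation
    serr = (n_under + n_error) / n1 * 100.0   # smaller is better
    sefr = n_correct / n2 * 100.0             # 100% means no invalid partitions
    return sr, serr, sefr
```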
3 Results and discussion
3.1 Overlapping region determination
The shape factor values of the overlapping regions are shown in Figure 11, with the upper left area of the figure enlarged in Figure 11b. In regions 1 and 2, multiple fish overlap, whereas target region 3 contains a single fish. The shape factors of these regions were 0.13, 0.19, and 0.29, respectively, so the shape factor clearly provides a degree of discrimination between overlapping and non-overlapping regions. After several tests, the maximum fish area threshold Amax = 5000 and the shape factor threshold SF0 = 0.22 were determined.
3.2 Corner detection
The two overlapping types were detected by both the Harris and CSS algorithms (Figure 12). Although both algorithms detected the correct corner points, the accuracy of the CSS algorithm is higher: it eliminates most of the wrong corners and locates the corners more precisely. Table 3 indicates that the CSS algorithm is also more efficient, because all the corners were correctly detected and its running time is less than a quarter of that of the Harris algorithm.
3.3 Improved Zhang-Suen thinning algorithm
The skeletons were extracted for the two overlapping types. Although the Zhang-Suen algorithm is an excellent algorithm, the thinned skeletons often contain extra pixels, which are acceptable in some applications but make it difficult to find the skeleton triple junctions when the algorithm is applied directly. The red circles in Figure 13a and Figure 13c indicate the triple junctions found by the original Zhang-Suen thinning algorithm. In total, 152 and 45 triple junctions were found for the X-type and Y-type overlaps, respectively, although far fewer junctions are visually evident. The results obtained by the improved Zhang-Suen thinning algorithm are shown in Figure 13b and Figure 13d: the unreasonable pixels have been removed or identified, and the correct triple junctions found.
3.4 Segmentation for two overlapping types
The proposed method and the watershed method were applied to segment the two different types of overlapping images; the segmentation results of the proposed method are shown in Figure 14. For both overlapping types, the split lines are connected correctly, and the split points are positioned accurately.
To verify the performance of the proposed method, it was compared with the watershed algorithm and Liu's method[34]. The segmentation results are presented in Table 4, and the segmentation results achieved for sample images of the first tank are shown in Figure 15.
Comparing the results in Table 4, the SR of the proposed method is slightly better than those of the other methods, but the results are very similar, and over-segmentation occurred in all approaches (SR > 1). Additionally, the SERR of the watershed method is nearly three times that of the proposed method. For the two tanks, the SEFR values of the proposed method are 12% and 8% higher than those of the watershed method. Although the SR of the other methods is close to that of the proposed method, more erroneous and invalid segmentations occur, and the segmentation accuracy is low. For example, a comparative analysis of Figures 15b, 15c, and 15d reveals that the actual number of fish is 25, whereas the number determined by the watershed method is 30; 6 over-segmentations and 2 under-segmentations occur, resulting in low segmentation effectiveness. Additionally, when the other methods were applied to the two types of overlapping, over-segmentation occurred, with larger positional deviations of the segmentation lines than the proposed method and some erroneous segmentation positions.
From the evaluation indexes in Table 4, the proposed method exhibited 12% and 8% segmentation error in the two tanks, mainly because of over-segmentation. For example, as shown in Figure 15, the actual number of fish is 25, whereas that determined using this method is 28. Over-segmentation is mainly localized on the fish fins and tails. Additionally, skeleton burrs are not completely filtered out after image preprocessing, causing the algorithm to judge a single fish as multiple fish and, thus, leading to over-segmentation.
4 Conclusions
In summary, this study presents a segmentation method for images containing overlapping fish in aquaculture. The significant results were as follows.
(1) An improved Zhang-Suen thinning algorithm was used to obtain the skeleton, eliminating the impact of unreasonable pixel points. The single-pixel resolution of the skeleton is ensured while its continuity is maintained, which provides the basis for accurate extraction of the triple junctions of the skeleton.
(2) The segmentation method proposed here can effectively segment the two different types of overlapping images (X-type and Y-type). The separation points are accurate, and the segmentation effect is better than that of the traditional watershed segmentation method.
(3) The overlapped images were segmented. The SR is similar to that obtained by the watershed method in this study, but the segmentation error rate (SERR) of the watershed segmentation method is approximately 3 times that of the proposed algorithm, and the SEFR of the watershed method is 8 percentage points lower than that of the proposed algorithm, indicating that this method has high segmentation accuracy and few invalid segmentations.
The above segmentation results show that the proposed algorithm achieves good segmentation performance for different types of overlapping images. Because fish fins and tails are not completely filtered out during image preprocessing, over-segmentation can occur; the method needs to be improved to handle this situation.
Acknowledgements
The research was supported by the National Key Technology R&D Program of China (2019YFD090086), the Beijing Excellent Talents Development Project (2017000057592G125), and the Beijing Natural Science Foundation (4184089).
Received date: 2018-01-19 Accepted date: 2019-10-11
Biographies: Chao Zhou, PhD, Senior Engineer, research interests: aquacultural engineering, Email: [email protected], [email protected]; Kai Lin, PhD, research associate, research interests: machine vision, Email: [email protected]; Daming Xu, master, engineer, research interests: intelligent control, Email: [email protected]; Jintao Liu, PhD candidate, research interests: image processing, Email: [email protected]; Song Zhang, Master, research interests: Pattern recognition, Email: [email protected]; Chuanheng Sun, PhD, associate professor, research interests: smart fishery, Email: [email protected]
Citation: Zhou C, Lin K, Xu D M, Liu J T, Zhang S, Sun C H, et al. Method for segmentation of overlapping fish images in aquaculture. Int J Agric & Biol Eng, 2019; 12(6): 135-142.
*Corresponding author: Xinting Yang, PhD, researcher, research interests: agricultural informatization. China National Engineering Research Center for Information Technology in Agriculture (NERCITA), Room 1109b, Building A, Beijing Nongke Mansion, 11# Shuguang Huayuan Middle Road, Haidian District, Beijing, 100097, China. Tel: +86-10-51503476, Email: [email protected].
[References]
[1] Zhou C, Xu D, Chen L, Zhang S, Sun C, Yang X, et al. Evaluation of fish feeding intensity in aquaculture using a convolutional neural network and machine vision. Aquaculture, 2019; 507: 457-465.
[2] Mao Y R, He D J, Song H B. Automatic detection of ruminant cows' mouth area during rumination based on machine vision and video analysis technology. International Journal of Agricultural and Biological Engineering, 2019; 12(1): 186-191.
[3] Costa C, Febbi P, Pallottino F, Cecchini M, Figorilli S, Antonucci F, et al. Stereovision system for estimating tractors and agricultural machines transit area under orchards canopy. International Journal of Agricultural and Biological Engineering, 2019; 12(1): 1-5.
[4] Zhou C, Sun C, Lin K, Xu D, Guo Q, Chen L, et al. Handling Water Reflections for Computer Vision in Aquaculture. Transactions of the ASABE, 2018; 61(2): 469-479.
[5] Zhou C, Yang X, Zhang B, Lin K, Xu D, Guo Q, et al. An adaptive image enhancement method for a recirculating aquaculture system. Scientific Reports, 2017; 7(1): 6243.
[6] Atienza-Vanacloig V, Andreu-García G, López-García F, Valiente-González JM, Puig-Pons V. Vision-based discrimination of tuna individuals in grow-out cages through a fish bending model. Computers and Electronics in Agriculture, 2016; 130: 142-150.
[7] Lin K, Zhou C, Xu D M, Guo Q, Yang X T, Sun C H. Three-dimensional location of target fish by monocular infrared imaging sensor based on a L-z correlation model. Infrared Physics & Technology, 2018; 88: 106-113.
[8] Costa C, Loy A, Cataudella S, Davis D, Scardi M. Extracting fish size using dual underwater cameras. Aquacultural Engineering, 2006; 35(3): 218-227.
[9] Li D L, Hao Y F, Duan Y Q. Nonintrusive methods for biomass estimation in aquaculture with emphasis on fish: a review. Reviews in Aquaculture, 2019; https://doi.org/10.1111/raq.12388
[10] Cha B J, Bae B S, Cho S K, Oh J K. A simple method to quantify fish behavior by forming time-lapse images. Aquacultural Engineering, 2012; 51: 15-20.
[11] Saberioon M M, Cisar P. Automated multiple fish tracking in three-dimension using a structured light sensor. Computers and Electronics in Agriculture, 2016; 121: 215-221.
[12] Zhao J, Gu Z, Shi M, Lu H, Li J, Shen M, et al. Spatial behavioral characteristics and statistics-based kinetic energy modeling in special behaviors detection of a shoal of fish in a recirculating aquaculture system. Computers and Electronics in Agriculture, 2016; 127: 271-280.
[13] Zhou C, Xu D, Lin K, Sun C, Yang X. Intelligent feeding control methods in aquaculture with an emphasis on fish: a review. Reviews in Aquaculture, 2018; 10(4): 975-993.
[14] Zion B. The use of computer vision technologies in aquaculture - A review. Computers and Electronics in Agriculture, 2012; 88: 125-132.
[15] Les T, Markiewicz T, Osowski S, Jesiotr M. Automatic reconstruction of overlapped cells in breast cancer FISH images. Expert Systems with Applications, 2019; 137: 335-342.
[16] Wan T, Xu S S, Sang C, Jin Y L, Qin Z C. Accurate segmentation of overlapping cells in cervical cytology with deep convolutional neural networks. Neurocomputing, 2019; 365: 157-170.
[17] Ye H J, Liu C Q, Niu P Y. Cucumber appearance quality detection under complex background based on image processing. International Journal of Agricultural and Biological Engineering, 2018; 11(4): 193-199.
[18] Hamuda E, Mc Ginley B, Glavin M, Jones E. Automatic crop detection under field conditions using the HSV colour space and morphological operations. Computers and Electronics in Agriculture, 2017; 133: 97-107.
[19] Yang W, Wang S, Zhao X, Zhang J, Feng J. Greenness identification based on HSV decision tree. Information Processing in Agriculture, 2015; 2(3): 149-160.
[20] Tan S, Ma X, Mai Z, Qi L, Wang Y. Segmentation and counting algorithm for touching hybrid rice grains. Computers and Electronics in Agriculture, 2019; 162: 493-504.
[21] Wang Z, Wang K, Yang F, Pan S, Han Y. Image segmentation of overlapping leaves based on Chan-Vese model and Sobel operator. Information Processing in Agriculture, 2018; 5(1): 1-10.
[22] Tripathi M K, Maktedar D D. A role of computer vision in fruits and vegetables among various horticulture products of agriculture fields: A survey. Information Processing in Agriculture, 2019; 7: 003.
[23] Li J B, Zhang R Y, Li J B, Wang Z L, Zhang H L, Zhan B S, et al. Detection of early decayed oranges based on multispectral principal component image combining both bi-dimensional empirical mode decomposition and watershed segmentation method. Postharvest Biology and Technology, 2019; 158: 11.
[24] El-Faki M S, Song Y Q, Zhang N Q, El-Shafie H A, Xin P. Automated detection of parasitized Cadra cautella eggs by Trichogramma bourarachae using machine vision. International Journal of Agricultural and Biological Engineering, 2018; 11(3): 94-101.
[25] Holmgren J, Lindberg E. Tree crown segmentation based on a tree crown density model derived from Airborne Laser Scanning. Remote Sensing Letters, 2019; 10(12): 1143-1152.
[26] Chen L, Yang X T, Sun C H, Wang Y Z, Xu D M, Zhou C. Feed intake prediction model for group fish using the MEA-BP neural network in intensive aquaculture. Information Processing in Agriculture, 2019; 9: 001.
[27] Zhou C, Zhang B, Lin K, Xu D, Chen C, Yang X, et al. Near-infrared imaging to quantify the feeding behavior of fish in aquaculture. Computers and Electronics in Agriculture, 2017; 135: 233-241.
[28] Liu Z, Li X, Fan L, Lu H, Liu L, Liu Y. Measuring feeding activity of fish in RAS using computer vision. Aquacultural Engineering, 2014; 60: 20-27.
[29] Pautsina A, Císař P, Štys D, Terjesen B F, Espmark Å M O. Infrared reflection system for indoor 3D tracking of fish. Aquacultural Engineering, 2015; 69: 7-17.
[30] Moreda G P, Muñoz M A, Ruiz-Altisent M, Perdigones A. Shape determination of horticultural produce using two-dimensional computer vision - A review. Journal of Food Engineering, 2012; 108(2): 245-261.
[31] Zhang Q, Shaojie Chen M E, Li B. A visual navigation algorithm for paddy field weeding robot based on image understanding. Computers and Electronics in Agriculture, 2017; 143: 66-78.
[32] Lin X Y, Zhu C, Liu Y P, Zhang Q. Robust corner detection using altitude to chord ratio accumulation. Multimedia Tools and Applications, 2019; 78(1): 177-195.
[33] Gu X D, Yu D H, Zhang L M. Image thinning using pulse coupled neural network. Pattern Recognition Letters, 2004; 25(9): 1075-1084.
[34] Liu Z, Cheng F, Zhang W. A novel segmentation algorithm for clustered flexional agricultural products based on image analysis. Computers and Electronics in Agriculture, 2016; 126: 44-54.
1 Beijing Research Center for Information Technology in Agriculture, Beijing 100097, China
2 Beijing Fisheries Research Institute, Beijing 100068, China