Abstract: Detection and tracking of all major parts of the pig body can make the analysis of pig behavior more productive. To achieve this goal, a real-time algorithm based on You Only Look At CoefficienTs (YOLACT) was proposed. A pig body was divided into ten parts: one head, one trunk, four thighs and four shanks. The key points of each part were calculated by a novel algorithm based mainly on a combination of the Zhang-Suen thinning algorithm and the center-of-gravity algorithm. The experimental results showed that these parts of the pig body could be detected and tracked, and their contributions to overall pig activity could also be determined. The detection accuracy of the algorithm on the data set reached up to 90%, and the processing speed reached 30.5 fps. Furthermore, the algorithm was robust and adaptive.
Keywords: computer vision, CNN, pig, YOLACT, detection and tracking
(ProQuest: ... denotes formulae omitted.)
1 Introduction
China has the largest number of breeding pigs in the world[1]. With the continuous expansion of factory farming, the risks of pig farming are also increasing[2]. Detection and tracking of pigs can help to discover abnormal situations in advance and prevent pigs from diseases[3-6].
In order to replace manual recording with automatic recording, many new technologies have been applied in recent decades. Some studies adopted radio frequency identification and sensor technologies for pig monitoring[7,8]. In comparison, machine vision has the advantages of simple operation, low cost, fast detection speed and wide application range, and has great potential benefit in agricultural production[9]. Shao et al.[10] designed a system that could successfully detect the mobile state of pigs and classify their comfort state into three categories by crowding degree. Nasirahmadi et al.[11] adopted Delaunay triangulation to analyze the relationship between ambient temperature and the crowding degree of pigs. Kashiha et al.[12] used pattern recognition technology to identify pigs, but the patterns could not be stored for a long time. To achieve long-term monitoring, Ahrendt et al.[13] built support maps to estimate the location and identity of pigs. Xiao et al.[14] adopted detection and tracking based on a set of association rules with constraint items (DT-ACR) to track pigs. However, this method could not completely overcome the influence of light, adhesion and occlusion on monitoring in natural environments. Other researchers have made great efforts with convolutional neural networks (CNNs) to solve the problem[15]. Among them, Hua et al.[16] proposed combining occlusion and motion reasoning with a tracking-by-detection approach to handle the occlusion problem, and Yang et al.[17] presented a method to predict the position of a target through geometric transformation of an object. To improve detection accuracy, Region-CNN (R-CNN)[18] adopted region proposals together with a CNN to detect targets, which markedly improved recognition, but the algorithm had processing-speed bottlenecks. In response to the slow detection speed of R-CNN, Faster R-CNN[19] with a Region Proposal Network (RPN) was proposed to improve target detection speed. Although the processing speed of this algorithm is improved, it is still difficult to achieve real-time detection, which requires a speed above 30 fps.
YOLACT is the first one-stage instance segmentation model with a detection speed above 30 fps[20], while other deep learning models (such as Single Shot MultiBox Detector (SSD)[21], You Only Look Once (YOLO)[22,23], etc.) are mainly for object detection. YOLACT discards some implicit feature localization steps and divides the instance segmentation task into two parallel tasks: (1) generating a series of prototype masks covering the whole image; (2) predicting a series of linear combination coefficients for each instance. Moreover, for each instance, Fast Non-Maximum Suppression (Fast NMS) is adopted to process the predicted masks. Its input is an RGB image, and its output is a set of instance masks differing in location, color and semantics.
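To make the two parallel tasks concrete, the following is a minimal NumPy sketch of the mask-assembly step (linear combination of prototypes, sigmoid, crop and threshold); the array shapes and the function name are illustrative assumptions, not YOLACT's actual code:

```python
import numpy as np

def assemble_masks(prototypes, coefficients, boxes, threshold=0.5):
    """YOLACT-style mask assembly: a linear combination of prototype
    masks per instance, followed by a sigmoid, crop and threshold.

    prototypes:   (H, W, k) prototype masks covering the whole image.
    coefficients: (n, k) linear-combination coefficients, one row per instance.
    boxes:        (n, 4) integer (x1, y1, x2, y2) boxes used for cropping.
    """
    logits = prototypes @ coefficients.T             # (H, W, n)
    masks = 1.0 / (1.0 + np.exp(-logits))            # sigmoid
    out = np.zeros(masks.shape, dtype=bool)
    for i, (x1, y1, x2, y2) in enumerate(boxes):
        # Keep only the mask region inside the instance's predicted box.
        out[y1:y2, x1:x2, i] = masks[y1:y2, x1:x2, i] > threshold
    return out
```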
In this paper, a real-time algorithm based on YOLACT was proposed to improve the detection speed and accuracy. Detection and Tracking of Multiple Parts of pig body (DTMP for short) were achieved to improve the ability to analyze the behavior of pigs.
Some achievements have been made in the research of animal activity detection[24,25]. Oczak et al.[26] adopted an activity index to classify aggressive behavior. Ojukwu et al.[27] used a computer vision system to detect pig inactivity. Currently, there is no established way to measure pig activity by detecting and tracking the movement of pig body parts. In this paper, the activity accumulation method[14] was used to measure pig activity, which reflects the pig activity state well and describes pig activity characteristics over a long period.
2 Materials and methods
2.1 Animal model template
Based on the parameterized 3D model (shape completion and animation of people)[28] and the 2D human pose model[29], a pig pose model was proposed, which was divided into 10 parts (separated by black dotted lines), and 15 key points (marked with red points) were determined, as shown in Figure 1. These 10 parts (denoted as A, B, C, D, E, F, G, H, I and J) were head, trunk, right front thigh, right front shank, left front thigh, left front shank, right rear thigh, right rear shank, left rear thigh and left rear shank, respectively. And these 15 key points (denoted as No.i, i = 0, ..., 14) were defined as head center of gravity, neck, pelvis, right shoulder, right front knee, right front ankle, left shoulder, left front knee, left front ankle, right hip, right rear knee, right rear ankle, left hip, left rear knee and left rear ankle. As shown in Figure 1, No.1, No.3, No.4, No.6, No.7, No.9, No.10, No.12 and No.13 are articulated joints connecting two adjacent parts. This segmentation was designed to reflect pig posture through its main active joints.
2.2 Data acquisition and marking
This study was carried out on pens in Siping Hongzui Agricultural High-tech Development Co., Ltd. A data set was taken from ten pig pens that housed Landrace × Yorkshire crossbred pigs aged between 1 and 2 years. The data was collected from June 10 to July 5, 2019, with a time span of 3 weeks, and three videos were collected for each pen, for a total of 30 videos. The proposed algorithm was intended to detect pig activity, so the data selected were mainly concentrated in the 11:00-13:00 and 15:00-17:00 time periods, which are the most active periods for feeding pigs in a day[30]. The duration of each video was about 130 min, the frame rate was 30 fps, and the image resolution was 640 × 480 pixels. For each pen, one randomly selected video was taken as the test sample and the rest as training samples. 10 000 images were randomly selected from the training samples, and 5000 images from the test samples. In order to achieve good generalization of the training results, 1022 pig images selected from the ImageNet dataset were added to the primitive training sample. Therefore, there were 11 022 images in the training sample and 5000 images in the test sample in all. No image processing was performed before training, and the average over 10 training repetitions was taken as the final result. Another set of data for the activity accumulation experiment was collected with the camera fixed in a corner of the pen, 1.5 m high and sloping 45° downward.
Labelme[31], an open source image labeling tool, was adopted to mask the training and test samples. A sample labeled image is shown in Figure 2.
The deep-learning framework TensorFlow[32] and a Lenovo computer with 16 GB memory, a Windows 10 operating system, an Intel(R) Core(TM) i5-9400 CPU and an Nvidia GeForce RTX 2060 SUPER (8 GB) GPU were adopted to train the network and test the performance of the algorithm. In addition, because pigs are dormant at night, this situation is not considered in this paper.
2.3 Detection and tracking of multiple parts of pig body (DTMP) algorithm
The operation procedure of the DTMP algorithm is shown in Figure 3. The DTMP algorithm starts by reading an image. The image is then processed by YOLACT to obtain each pig body mask and the 10 part masks of each pig body, where a mask is the area information of a target in the image.
Step 1: Assign each part mask to a pig body, according to Equation (1):
... (1)
where, (xj, yj) is a mask point of the jth part mask, and (xi, yi) is an edge point of the ith pig body mask. When more than 95% of the mask points of a part lie within a pig body mask, that part is assigned to the corresponding body.
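A minimal sketch of this 95% containment test, assuming the part mask is a binary image and the body is given as an OpenCV contour (names are illustrative):

```python
import cv2
import numpy as np

def part_belongs_to_body(part_mask, body_contour, ratio=0.95):
    """Assign a part to a body when at least `ratio` of the part's mask
    points fall inside the body contour (the criterion of Equation (1))."""
    ys, xs = np.nonzero(part_mask)          # mask points of the part
    if len(xs) == 0:
        return False
    inside = sum(
        cv2.pointPolygonTest(body_contour, (float(x), float(y)), False) >= 0
        for x, y in zip(xs, ys)
    )
    return inside / len(xs) >= ratio
```

In practice, when both masks are available as binary images, the same ratio can be computed faster as np.logical_and(part, body).sum() / part.sum().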
Step 2: Calculate key point of pig head (No.0).
The key point of the pig head can be calculated as the center of gravity of the head mask[33], as shown in Equation (2):
$x = \frac{1}{6A}\sum_{i=1}^{n}(x_i + x_{i+1})(x_i y_{i+1} - x_{i+1} y_i),\quad y = \frac{1}{6A}\sum_{i=1}^{n}(y_i + y_{i+1})(x_i y_{i+1} - x_{i+1} y_i),\quad A = \frac{1}{2}\sum_{i=1}^{n}(x_i y_{i+1} - x_{i+1} y_i)$ (2)
where, (x, y) is the key point No.0; (xi, yi) is a boundary point of the head mask, i = 1, ..., n, with xn+1 = x1 and yn+1 = y1.
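A sketch of this computation, implementing the standard polygon centroid used as Equation (2); the wrap-around term matches the convention xn+1 = x1 above:

```python
def polygon_centroid(pts):
    """Center of gravity of a closed polygon from its ordered boundary
    points [(x1, y1), ..., (xn, yn)], per Equation (2)."""
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]     # wrap around: x_{n+1} = x_1
        cross = x0 * y1 - x1 * y0     # shoelace term
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5                          # signed polygon area A
    return cx / (6.0 * a), cy / (6.0 * a)
```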
Step 3: Calculate three key points of right front thigh and shank (No.3, 4 and 5).
Two key points of the right front thigh, No.3 and No.4, are calculated with the Zhang-Suen thinning algorithm[34], which yields a thigh skeleton whose two ends give the key points. Key points No.4 and No.5 of the right front shank are calculated in the same way. The pig knee is the splice point between shank and thigh, and its key point No.4 follows Equation (3):
$(x_i, y_i) = \left(\dfrac{x_t + x_s}{2},\ \dfrac{y_t + y_s}{2}\right)$ (3)
where, (xi, yi) is the pig knee key point No.4, (xt, yt) is the key point calculated from the thigh, and (xs, ys) is the key point calculated from the shank. The nine key points of the other three legs (No.i, i = 6, ..., 14) were calculated in the same way.
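A sketch of Step 3 under the assumption that thigh/shank key points are the endpoints of their skeletons; scikit-image's skeletonize applies a Zhang-Suen-style thinning by default in 2D, and the helper names are illustrative:

```python
import numpy as np
from skimage.morphology import skeletonize   # 'zhang' method by default in 2D

def limb_endpoints(mask):
    """Thin a thigh/shank mask to a skeleton and return its endpoints
    (skeleton pixels with exactly one skeleton neighbour)."""
    skel = skeletonize(mask > 0)
    ys, xs = np.nonzero(skel)
    ends = []
    for y, x in zip(ys, xs):
        neighbours = skel[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2].sum() - 1
        if neighbours == 1:
            ends.append((x, y))
    return ends

def splice_point(thigh_end, shank_end):
    """Knee key point No.4 as the mean of the adjoining thigh and shank
    skeleton endpoints (Equation (3))."""
    return ((thigh_end[0] + shank_end[0]) / 2.0,
            (thigh_end[1] + shank_end[1]) / 2.0)
```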
Step 4: Calculate two points of trunk (No.1 and 2).
As far as the front thighs are concerned, there are two ways to calculate key point No.1 of the pig trunk. One way is used when only one front thigh mask is detected; the key point is then calculated by Equation (4):
... (4)
where, (xi, yi) represents a trunk mask boundary point, (x, y) represents key point No.1, and d represents the distance between the thigh root key point No.3 (or No.6) and the trunk mask edge.
The other way is used when both front thigh masks are detected: Equation (3) is adopted, and the average of the two shoulder key points No.3 and No.6 is taken as key point No.1 of the trunk. Key point No.2 is calculated from the two rear thighs in the same way.
When no thigh mask is detected, the Zhang-Suen thinning algorithm[34] is adopted to obtain a trunk skeleton, and its two key points (the two ends of the skeleton) are then located.
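A sketch covering the two recoverable cases of Step 4 (both shoulders detected, or none), reusing limb_endpoints from the sketch above; the one-thigh case of Equation (4) is omitted because its formula is not reproduced here:

```python
def trunk_front_keypoint(trunk_mask, shoulders):
    """Key point No.1 of the trunk: the mean of shoulder key points
    No.3 and No.6 when both are available, otherwise an end of the
    trunk skeleton as fallback."""
    if len(shoulders) == 2:
        (xa, ya), (xb, yb) = shoulders
        return ((xa + xb) / 2.0, (ya + yb) / 2.0)
    return limb_endpoints(trunk_mask)[0]   # skeleton end as fallback
```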
After each key point is output, the algorithm will wait for 10 ms, during which it will detect whether the ESC key is pressed. The algorithm will exit when the ESC key is pressed, otherwise, it will continue to read the next image.
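A minimal sketch of this outer loop with OpenCV; the detector and key-point calls are placeholders for the steps above:

```python
import cv2

cap = cv2.VideoCapture("pen_video.mp4")      # illustrative input file
while True:
    ok, frame = cap.read()
    if not ok:
        break                                # no more images to read
    # masks = yolact_infer(frame)            # hypothetical detector call
    # keypoints = extract_keypoints(masks)   # Steps 1-4 above
    cv2.imshow("DTMP", frame)
    if cv2.waitKey(10) == 27:                # wait 10 ms; 27 == ESC
        break
cap.release()
cv2.destroyAllWindows()
```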
In order to quantify pig activity, the positions of pigs have to be tracked. The Oriented FAST and Rotated BRIEF (ORB) algorithm[35], which runs in real time, is chosen to extract and describe key points, and these feature points are matched to a specific pig by Hamming distance.
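A sketch of this matching step with OpenCV; ORB produces binary descriptors, which is why Hamming distance is the natural metric:

```python
import cv2

def match_keypoints(img_prev, img_curr, n_features=500):
    """Extract ORB features in two adjacent frames and match them by
    Hamming distance; the best matches associate key points with a pig."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches
```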
The displacement of the same key point between two adjacent frames is taken as the movement distance of the target. The distance accumulated over a period of time is the sum of the displacements between adjacent frames, expressed as Equations (5)-(7):
$dx_{i,j,h} = \lvert C_{i,j,h}(x) - C_{i-1,j,h}(x)\rvert,\quad dy_{i,j,h} = \lvert C_{i,j,h}(y) - C_{i-1,j,h}(y)\rvert$ (5)
$tx_{i,j,h} = \sum_{k=2}^{i} dx_{k,j,h},\quad ty_{i,j,h} = \sum_{k=2}^{i} dy_{k,j,h}$ (6)
$ax_{i,h} = \sum_{j=1}^{15} p_j\, tx_{i,j,h},\quad ay_{i,h} = \sum_{j=1}^{15} p_j\, ty_{i,j,h}$ (7)
where, Ci,j,h(x, y) is the position coordinate of the jth key point of the hth pig target in the ith frame (j = 1, ..., 15); dxi,j,h and dyi,j,h are the x-direction and y-direction moving distances of that key point in the ith frame, respectively; txi,j,h and tyi,j,h are the total x-direction and y-direction moving distances up to the ith frame; and axi,h and ayi,h are the activity of the hth pig in the ith frame. Because different parts affect activity differently, pj is the weight parameter of the jth part, and the weights pj sum to 1. To analyze the activity of different pigs, weights were assigned to the parts according to their average surface area in the image: the trunk weight was 0.6, the head 0.3, the thighs 0.06 and the shanks 0.04. The weights are not fixed and can be modified manually according to the specific requirements of a test.
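A sketch of the accumulation in Equations (5)-(7) as reconstructed above; the track layout and the per-key-point weight vector are assumptions (the paper gives weights per part group, not per key point):

```python
import numpy as np

def activity_accumulation(tracks, weights):
    """Weighted activity accumulation for one pig.

    tracks:  (F, 15, 2) array of key-point positions over F frames.
    weights: (15,) array of part weights p_j summing to 1.
    """
    disp = np.abs(np.diff(tracks, axis=0))   # Eq (5): per-frame displacement
    total = disp.sum(axis=0)                 # Eq (6): cumulative distance, (15, 2)
    return (weights @ total).sum()           # Eq (7): weighted x + y activity
```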
Based on Equation (7), the activity accumulation of each pig can be computed. The activity experiments were carried out with 2 pigs, 3 pigs and 4 pigs in a pen, respectively. For each pen, two experiments were conducted, during a quiet period and a feeding period on two adjacent dates.
To verify the consistency between the predicted masks and the ground-truth masks[36] of all body parts of each pig, the accuracy-robustness (A-R pair) indicator[37] was introduced, as shown in Equation (8):
... (8)
where, AR denotes the A-R pair, A0 denotes the average overlap and F0 denotes the failure rate. The robustness of a tracker is defined through an exponential failure distribution, as shown in Equations (9) and (10):
$R_S = e^{-S/M}$ (9)
$M = \dfrac{N}{F_0}$ (10)
where, RS is the robustness of the tracker, M denotes the mean time between failures, and N is the length of the sequence. The robustness can be interpreted as the probability that the tracker still tracks an object S frames after the last failure. The choice of S does not affect the comparison of trackers, but can be adjusted as a scale factor for better visualization. In this paper, S is taken as 30.
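A sketch of the A-R computation under these definitions; the inputs (per-frame overlaps and a failure count) are assumptions about how the metric is fed:

```python
import math

def ar_pair(overlaps, n_failures, s=30):
    """Accuracy A0 as the average overlap; robustness R_S = exp(-S/M)
    with M = N / F0 frames between failures (Equations (9)-(10))."""
    a0 = sum(overlaps) / len(overlaps)       # average overlap A0
    n = len(overlaps)                        # sequence length N
    m = n / n_failures if n_failures else float("inf")
    return a0, math.exp(-s / m)
```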
3 Results and discussion
3.1 Object detection and tracking
3.1.1 Multi-target detection
In order to test the proposed algorithm, a multi-pig target detection experiment was designed. It covered ten pens, with each pig divided into 10 parts. As mentioned in Section 2.2, 5000 test images from the ten pens were selected for detection, and the results are shown in Figure 4. Each colored mask represents a recognized part of a pig body (colors were assigned randomly and do not denote specific parts), and each black spot on the image represents a calculated key point.
As shown in Figure 4a, one pig body and its 9 key points were obtained. However, in the upper left corner, two pigs were judged as one due to heavy occlusion. In Figure 4b, six pigs in a pen, their parts and 25 key points were detected. In Figures 4c-4f, two pig bodies and 13 key points, two bodies and 12 key points, five bodies and 30 key points, and three bodies and 16 key points were detected, respectively, in four pens. These results show that the algorithm could detect each pig body and all body parts of each pig in most cases.
5000 images at 550 × 550 pixels and 700 × 700 pixels were selected to test the detection of the body and each part of each pig.
As shown in Table 1, when the image size was 550 × 550, the detection speed was above 30 fps, which is suitable for real-time detection. When the image size was 700 × 700, the average precision (AP) was higher, but the detection speed fell to 20.6 fps, and the detection speed of Mask R-CNN was only 8.6 fps, so both were difficult to use for real-time detection. Taking AP into account, the detection of the pig body had the highest AP, since the body occupies the most image space and offers the most features for detection. By comparison, the pig head has more features than the remaining parts; trunk, thigh and shank follow in that order. The trunk has only some edge features, so its AP is lower than that of the head. Because the edges of the thigh and shank are not distinct, their detection accuracies are relatively lower[39]. In DTMP, the body AP was the highest at 90.4%, the head followed at 87.2%, the trunk was a little lower at 84.5%, and the thigh and shank reached 76.8% and 71.2% respectively, lower than the trunk. The experimental results show that this method performs well on the problems of real-time processing and insufficient information in multi-target detection.
3.1.2 Multi-target tracking
The accuracy-robustness plot in Figure 5 shows the results of multi-target tracking, averaged over the entire dataset. There was little difference in robustness between DTMP and Mask R-CNN, but the mean Average Precision (mAP) of DTMP was slightly lower, by about 3.1%. The order of accuracy from high to low was body, head, trunk, thigh and shank. The algorithm could realize real-time tracking of each part of multiple pigs with good robustness and accuracy.
3.2 The experiment of pig activity
Pig behavior includes gregarious, fighting, mating, exploratory, abnormal, aftereffect, eating, excretory and sleeping behaviors, which can be reflected by pig activity. Abnormal activity data can often predict pig health problems in advance or reveal environmental stimuli.
The activity accumulation of all pig parts was measured to detect pig activity. Two time periods were chosen: a quiet period (13:00-14:00) and a feeding period (15:30-16:30), one hour each[14]. On this basis, two experiments were designed to detect the activity of each part during the two periods. The video was manually edited into one-hour segments for the experiments.
As shown in Figure 6, the activity of each part of the pig varied greatly. In the quiet period, thigh and shank movement together accounted for more than 80% of the total, because walking made up a large proportion of pig movement. The activity of the pig head was higher than that of the trunk, which is consistent with reality. During the feeding period, although the activity of the head and trunk increased, the activity of the thigh and shank increased more, accounting for 91.43%. This means most of the activity came from the thighs and shanks, and less from the head and trunk.
As described in Sections 2.2 and 2.3, the pig activity accumulation can be computed, and the experimental results are shown in Figure 7.
The results showed that the pig activity accumulation during the feeding period was much larger than that during the quiet period in all three pens. The pig activity accumulation differed slightly during the quiet period in each pen, ranging from 98.72 m to 164.38 m, while during the feeding period it fluctuated greatly, ranging from 336.12 m to 1050.42 m. As shown in Figure 7c, the activity accumulation of one pig was obviously lower than that of the other pigs during feeding periods, which may be due to the hierarchical relationship among pigs or crowded feeding space. In group housing, pigs establish a social hierarchy by competing with each other, which makes them more active, and during feeding, pigs of high social status get priority[40-42]. Although the cumulative activity of a single pig may fluctuate, the sum of the cumulative activity of all pigs clearly indicates whether the pen is in a quiet period or a feeding period.
4 Conclusions
A real-time algorithm based on YOLACT was proposed and verified. The algorithm can effectively detect the 10 body parts of each pig and obtain their key points. On the data set, the detection accuracies of body, head, trunk, thigh and shank reached 90.4%, 87.2%, 84.5%, 76.8% and 71.2%, respectively. Furthermore, the algorithm has good robustness and detection accuracy for multi-target detection, and its detection speed can reach 30.5 fps. Two activity tests based on the algorithm were carried out. The results showed that the contribution of each part of the pig body to the overall activity could be calculated. This algorithm provides a new way to solve the target detection problem and is of great significance for further study of pig behavior.
Acknowledgements
This study was supported by Beijing Jiaotong University (C18A800090). All the supports from above organizations are gratefully acknowledged.
Citation: Chen F E, Liang X M, Chen L H, Liu B Y, Lan Y B. Novel method for real-time detection and tracking of pig body and its different parts. Int J Agric & Biol Eng, 2020; 13(6): 144-149.
Received date: 2020-04-02 Accepted date: 2020-07-27
Biographies: Fuen Chen, PhD, Associate Professor, research interests: vehicle test and automation, Email: [email protected]; Longhan Chen, PhD candidate, research interests: vehicle optics test, Email: [email protected];
Baoyuan Liu, MS, Lecturer, research interests: embedded system and signal processing, Email: [email protected]; Yubin Lan, Professor, research interests: agricultural machinery, Email: [email protected].
*Corresponding author: Xiaoming Liang, PhD candidate, research interests: vehicle machine vision. Beijing Jiaotong University, No.3 Shangyuancun, Haidian District, Beijing 100044, China. Tel: +86-13488647620, Email: 17111046@bjtu.edu.cn.
[References]
[1] OECD, FAO. OECD-FAO Agricultural Outlook 2019-2028. https://doi.org/10.1787/agr_outlook-2019-en.
[2] Zhang Y F. The transformation and innovation of the management concept of the current large-scale pig farm. Feed and Animal Husbandry-Scale Pig Raising, 2011; 9: 21-25. (in Chinese)
[3] Li Y Y, Sun L Q, Zou Y B, Li Y. Individual pig object detection algorithm based on Gaussian mixture model. Int J Agric & Biol Eng, 2017; 10(5): 186-193.
[4] Sun L, Li Z, Duan Q, Sun X, Li J. Automatic monitoring of pig excretory behavior based on motion feature. Sensor Letters, 2014; 12(3): 673-677.
[5] Porto S M C, Arcidiacono C, Anguzza U, Cascone G. A computer vision-based system for the automatic detection of lying behavior of dairy cows in free-stall barns. Biosystems Engineering, 2013; 115(2): 184-194.
[6] Zuo S, Jin L, Chung Y, Park D. An index algorithm for tracking pigs in pigsty. In: International Conference on Industrial Electronics and Engineering, 2015; pp.797-804.
[7] Ma C, Wang Y, Ying G. The pig breeding management system based on RFID and WSN. In: 2011 Fourth International Conference on Information and Computing, IEEE, 2011; pp.30-33.
[8] Zhu W, Zhong F, Li X. Automated monitoring system of pig behavior based on RFID and ARM-LINUX. In: 2010 Third International Symposium on Intelligent Information Technology and Security Informatics. IEEE, 2010; pp.431-434.
[9] Chen Y-R, Chao K, Kim M S. Machine vision technology for agricultural applications. Computers and Electronics in Agriculture, 2002; 36(2): 173-191.
[10] Shao B, Xin H. A real-time computer vision assessment and control of thermal comfort for group-housed pigs. Computers and Electronics in Agriculture, 2008; 62(1): 15-21.
[11] Nasirahmadi A, Richter U, Hensel O, Edwards S, Sturm B. Using machine vision for investigation of changes in pig group lying patterns. Computers and Electronics in Agriculture, 2015; 119: 184-190.
[12] Kashiha M, Bahr C, Ott S, Moons C, Niewold T, Odberg F, et al. Automatic identification of marked pigs in a pen using image pattern recognition. Computers and Electronics in Agriculture, 2013; 93: 111-120.
[13] Ahrendt P, Gregersen T, Karstoft H. Development of a real-time computer vision system for tracking loose-housed pigs. Computers and Electronics in Agriculture, 2011; 76(2): 169-174.
[14] Xiao D Q, Feng A J, Liu J. Detection and tracking of pigs in natural environments based on video analysis. Int J Agric & Biol Eng, 2019; 12(4): 116-126.
[15] Dong X, Shen J, Yu D, Wang W, Liu J, Huang H. Occlusion-aware real-time object tracking. IEEE Transactions on Multimedia, 2017; 19(4): 763-771.
[16] Hua Y, Alahari K, Schmid C. Occlusion and motion reasoning for long-term tracking. Computer Vision - ECCV 2014. Springer International Publishing, 2014; pp.172-187.
[17] Yang H, Alahari K, Schmid C. Online object tracking with proposal selection. In: IEEE International Conference on Computer Vision. IEEE, 2015; pp.3092-3100.
[18] Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. In: Tech Report (v5). UC Berkeley, 2013; pp.580-587.
[19] Ren S, He K, Girshick R, Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell, 2017; 39(6): 1137-1149.
[20] Bolya D, Zhou C, Xiao F, Lee Y J. YOLACT: real-time instance segmentation. In: The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 9157-9166
[21] Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu C Y, et al. SSD: single shot multibox detector. In: 14th European Conference on Computer Vision (ECCV), Proceedings, Part I, Springer, 2016; pp.21-37.
[22] Redmon J, Farhadi A. YOLO9000: better, faster, stronger. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, Hawaii, USA, 2017; Vol.1, pp.6517-6525.
[23] Silvera A M, Knowles T G, Butterworth A, Berckmans D, Vranken E, Blokhuis H J. Lameness assessment with automatic monitoring of activity in commercial broiler flocks. Poultry Science, 2017; 96(7): 2013-2017.
[24] Kongsro J. Development of a computer vision system to monitor pig locomotion. Open Journal of Animal Sciences, 2013; 3(3): 254-260.
[25] Oczak M, Viazzi S, Ismayilova G, Sonoda L, Roulston N, Fels M, Bahr C, Hartung J, Guarino M, Berckmans D, Vranken E. Classification of aggressive behavior in pigs by activity index and multilayer feed forward neural network. Biosystems Engineering, 2014; 119(4): 89-97.
[26] Ojukwu C C, Feng Y Z, Jia G F, Zhao H T, Tan H Q. Development of a computer vision system to detect inactivity in group-housed pigs. Int J Agric & Biol Eng, 2020; 13(1): 42-46.
[27] Anguelov D, Srinivasan P, Koller D, et al. SCAPE: shape completion and animation of people. ACM Transactions on Graphics, 2005; 24(3): 408-416.
[28] Ma M, Li Y B. 2D human pose estimation using multi-level dynamic model. ROBOT, 2016; 38: 587. (in Chinese)
[29] Xiao D, Feng A, Yang Q, Liu J, Zhang Z. Fast motion detection for pigs based on video tracking. Transactions of the CSAM, 2016; 47(10): 331, 351-357. (in Chinese)
[30] Wada K. Labelme: image polygonal annotation with Python. 2016. Available: https://github.com/wkentaro/labelme. Accessed on July 5, 2019.
[31] Abadi M. TensorFlow: learning functions at scale. ACM Sigplan Notices, 2016; 51(9): 1. doi:10.1145/3022670.2976746.
[32] Li Y B, Hao Y J, Liu E H. Calculation method of polygon center of gravity. Computer Application, 2005; 25(S1): 391-393. (in Chinese)
[33] Gu X D, Yu D H, Zhang L M. Image thinning using pulse coupled neural network. Pattern Recognition Letters, 2004; 25(9): 1075-1084.
[34] Rublee E, Rabaud V, Konolige K, Bradski G. ORB: An efficient alternative to SIFT or SURF. In: 2011 International Conference on Computer Vision, Barcelona, 2011; pp. 2564-2571.
[35] Sun L Q, Zou Y B, Li Y, Cai Z D, Li Y, Luo B, et al. Multi target pigs tracking loss correction algorithm based on Faster R-CNN. Int J Agric & Biol Eng, 2018; 11(5): 192-197.
[36] Čehovin L, Leonardis A, Kristan M. Visual object tracking performance measures revisited. IEEE Transactions on Image Processing, 2016; 25(3): 1261-1274.
[37] He K M, Gkioxari G, Dollár P, Girshick R. Mask R-CNN. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2017; 42: 386-397.
[38] Tian Y Y. Precision of edge detection affected by smoothing operator of image. Computer Engineering and Applications, 2009; 45(32): 161-163. (in Chinese)
[39] Qu Y C, Deng C Y, Liu G L. The exploration on the characteristics of porcine behaviors and the improvement of pig feeding as well as management. Guizhou Animal Science and Veterinary Medicine, 2001; 25(5): 9-10. (in Chinese)
[40] Forkman B, Furuhaug I L, Jensen P. Personality, coping patterns, and aggression in piglets. Applied Animal Behaviour Science, 1995; 45(1-2): 31-42.
[41] Zhang Y F. The transformation and innovation of the current large-scale pig farm management concept. Feed and Animal Husbandry Large Scale Pig Raising, 2011; 9: 21-25. (in Chinese)
1 School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
2 Department of Mechanical Engineering, Oakland University, Rochester, MI 48309, USA
3 School of Electronics and Electrical Engineering, Beijing Jiaotong University Haibin College, Huanghua 061199, Hebei, China
4 College of Engineering, South China Agricultural University, Guangzhou 510642, China