
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.

Abstract

Accurate identification and intelligent counting of pig herds can effectively improve the fine management of pig farms. In this study, a semantic segmentation and counting network is proposed to improve the segmentation accuracy and counting efficiency of pigs in complex scenes. We built our own datasets of pigs under different scenarios and set three levels of counting difficulty, namely, lightweight, middleweight, and heavyweight. First, an image segmentation model for a small sample of pigs was established based on the DeepLab V3+ deep learning method to reduce the training cost and obtain initial features. Second, a lightweight attention mechanism was introduced; its row- and column-based attention modules accelerate feature computation and mitigate the parameter overhead and feature redundancy caused by network depth. Third, a recursive cascade method was used to optimize the fusion of high- and low-frequency features and mine latent semantic information. Finally, the improved model was integrated into a graphical platform for the accurate counting of pigs. Compared with the FCN, U-Net, SegNet, and DenseNet methods, the comprehensive evaluation indices P, R, AP, F1-score, and MIoU of LA-DeepLab V3+ (single tag) are higher, at 86.04%, 75.06%, 78.67%, 0.8, and 76.31%, respectively. The P, AP, and MIoU values of LA-DeepLab V3+ (multiple tags) are also higher than those of the other models, at 88.36%, 76.75%, and 74.62%, respectively. The segmentation accuracy for pig images with simple backgrounds reaches 99%. Under stress testing, the counting network can count up to 50 pigs per image, which meets the requirements of free-range breeding in standard piggeries.
The model has strong generalization ability in pig herd detection under different scenarios, which can serve as a reference for intelligent pig farm management and animal life research.
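The abstract does not detail the row- and column-based attention modules; as a rough illustrative sketch only (the function name, shapes, and pooling choice are assumptions, not the authors' implementation), the core idea of reducing a 2-D attention map to O(H + W) weights can be expressed in NumPy:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def row_col_attention(feat):
    """Illustrative row/column attention over a (C, H, W) feature map.

    Instead of computing an H*W attention map, pool along each spatial
    axis to obtain H row weights and W column weights, then broadcast
    them back over the feature map. This keeps the attention cost
    linear in H + W, which is the 'lightweight' property the paper
    attributes to its row- and column-based modules.
    """
    # Pool across columns -> one weight per row, shape (C, H, 1)
    row_att = sigmoid(feat.mean(axis=2, keepdims=True))
    # Pool across rows -> one weight per column, shape (C, 1, W)
    col_att = sigmoid(feat.mean(axis=1, keepdims=True))
    # Broadcast both 1-D attention vectors over the full map
    return feat * row_att * col_att

feat = np.random.randn(4, 8, 8)
out = row_col_attention(feat)
```

Because each attention weight lies in (0, 1), the module only rescales activations; the output shape matches the input, so it can be dropped between encoder stages without further changes.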

Details

Title
LA-DeepLab V3+: A Novel Counting Network for Pigs
Author
Liu, Chengqi 1; Su, Jie 1; Wang, Longhe 2; Lu, Shuhan 3; Li, Lin 4

1 Department of Computer Science and Technology, College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China; [email protected] (C.L.); [email protected] (J.S.)
2 Office of Model Animals, National Research Facility for Phenotypic and Genotypic Analysis of Model Animals, China Agricultural University, Beijing 100083, China; [email protected]
3 Department of Information, School of Information, University of Michigan, Ann Arbor, MI 48109, USA; [email protected]
4 Department of Computer Science and Technology, College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China; [email protected] (C.L.); [email protected] (J.S.); Office of Model Animals, National Research Facility for Phenotypic and Genotypic Analysis of Model Animals, China Agricultural University, Beijing 100083, China; [email protected]
First page
284
Publication year
2022
Publication date
2022
Publisher
MDPI AG
e-ISSN
2077-0472
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2632146616