1. Introduction
Piling behavior (PB) is a common issue that can adversely affect the welfare, productivity, and overall health of the flock in any housing system, including breeder, broiler, and cage-free layer facilities. Poultry piling is a phenomenon in which birds cluster densely together, often resulting in birds piled on top of one another [1,2]. Piling can trap birds at the bottom of the pile, which can lead to suffocation and death [2,3]. In Australian free-range and cage-free laying hen flocks, PB accounts for up to 40% of mortality [4]. The location and timing of smothering tend to be unpredictable and may vary between farms. According to surveys, over 50% of free-range or cage-free farms in the United Kingdom (UK) reported smothering at some point in their flocks [5]. The UK egg industry is estimated to lose £6.5 million annually to smothering caused by PB [6]. PB has been observed primarily in loose-housed layer flocks and is a significant animal welfare and economic concern for producers and the egg-laying industry [2,3,5,7,8].
PB in laying hens is considered an animal welfare issue because it can negatively impact the birds’ physical and psychological well-being, resulting in stress, overheating, injuries, feather pecking, and reduced mobility and natural behaviors [3,6]. Increased stress levels in birds result in reduced egg production [3] and egg quality [9,10], decreased immune function [11], and increased susceptibility to disease [3]. Birds piled on top of one another can overheat, leading to heat stress, suffocation, and increased mortality. Similarly, overcrowding causes physical injuries, such as fractures [3]. Birds piled on top of each other may also have limited mobility, leading to muscle atrophy and other health issues [12]. Piling can also prevent birds from accessing feed, water, and other resources. PB can also lead to feather pecking [13], increasing the risk of cannibalism in poultry. Piling can further reduce chickens’ ability to express natural behaviors, such as foraging, dust bathing, and socializing with other birds [1,2]. The threshold at which a pile turns into a smothering event is currently unknown [3], and understanding the biological causes of PB is necessary for effective mitigation.
The causes of PB are not well understood, and research in this area is limited. However, several potential contributing factors have been recorded. High stocking density is one of the most common factors contributing to PB [3,5,14]. Hens living in high-density environments may become stressed and develop abnormal behaviors, such as piling. Furthermore, laying hens’ nesting behavior and competition for nest use could lead to PB [14,15,16]. In poultry houses, a social hierarchy can develop, with dominant birds having first access to resources such as food, water, and nest boxes, leading to PB as subordinate birds attempt to access these resources [2,17]. In addition, environmental factors such as lighting, temperature, and ventilation may influence PB [2,13,14]. For instance, hens may pile up due to low temperatures or poor ventilation. Finally, layer strains differ in their patterns of nest use and PB, with brown hens mislaying eggs on the floor or grids of an enclosure more often than white hybrids [18,19]. This floor-laying behavior is another cause of PB in cage-free layer facilities. Therefore, different prevention strategies may be required to address the multifactorial nature of this issue.
Mitigation strategies for PB in laying hens include increasing space per bird, providing enrichment such as perches and nesting boxes, and reducing flock size [2,3,14,20]. Increasing space per bird [3] and providing perches [2,21] have reduced PB in laying hen houses. Providing enrichments (e.g., toys, natural materials, and different feed types) in poultry houses encourages birds to perform natural behaviors and thus reduces stress and PB [2,20,22]. Another way to mitigate PB is to ease competition within the social hierarchy by providing additional resources, such as feeders and nesting boxes, so that all birds can access the best resources. Providing nest boxes for hens to lay eggs also reduces PB, as it fulfills their innate need for nest-building behavior [15]. Overall, adequate space, environmental enrichment, management of the social hierarchy, nest boxes, improved ventilation, and lighting adjustments are all important in mitigating PB. Adequate ventilation maintains a comfortable temperature and humidity level and reduces PB, and because hens are photoperiodic animals whose behavior is influenced by the amount and duration of light they receive, lighting management also matters [2].
Research on PB has focused on identifying potential environmental and management factors that contribute to its occurrence [1,2,3,15]. However, the unpredictability of PB and the disruption caused by the presence of an observer make it challenging to conduct experiments and obtain accurate data in commercial settings [5,8]. Regularly monitoring the flock to identify any issues contributing to PB is therefore important for maintaining the health and well-being of the birds. PB can signal a more serious underlying issue, such as disease or poor nutrition, and should be addressed accordingly. More in-depth research is needed to fully understand the reasons for PB and to develop effective prevention strategies. Studies that combine observational and experimental methods in commercial settings, and that consider the influence of genetics and individual variation in behavior, can provide valuable insights into the underlying causes of PB in laying hens. Early detection of PB with the help of image analysis is thus required.
Image analysis is a powerful technique that uses cameras to detect and locate objects present in a given area. One of the most effective methods for object detection is the use of machine learning (ML) algorithms, which have been successful in detecting not only hens [23,24] but also their behaviors [25,26,27,28]. In particular, these algorithms have been developed to measure animal welfare by identifying both comfort and undesired behaviors [27]. For example, a convolutional neural network was used to classify the behaviors of broiler chickens based on images obtained by a depth camera, achieving a high accuracy of 99.17% in classifying flock behaviors [29]. Another study used the YOLOv3 algorithm to identify six distinct behaviors in a wire cage system consisting of two pens under varying stocking-density conditions [30]. That study accurately classified behaviors such as mating, standing, feeding, spreading, fighting, and drinking. However, the model’s accuracy was lower in high-density cages due to occlusion among the birds.
High-density housing of hens can lead to overcrowding or PB, which can cause negative consequences, such as an increased risk of smothering and significant losses. This risk is particularly high when the birds cluster together in certain areas of their living space. Although YOLOv4 and YOLOv4-tiny have been used to detect behaviors, they often fail to recognize important comfort behaviors in detail [27]. The present study focuses on detecting overcrowding behavior in hens, which can develop into PB. Although previous research mentioned overcrowding, it focused only on behaviors such as movement, laying, and dustbathing on the floor and suggested that these behaviors might lead to overcrowding. No detailed research has focused on detecting PB under different situations and camera settings. Recently, however, floor egg-laying behavior (FELB) has been detected using the YOLOv5 model with high performance [28]. This FELB research is related to PB, as hens gather to lay eggs on the floor, and the researchers noted that daytime PB mostly occurs while hens perform FELB. To decrease floor eggs and FELB, it is therefore important to recognize PB and build the best-performing machine learning detection model.
Improving the recognition performance of various behaviors using machine learning technology is a promising direction for detecting PB in hens. Improving detection could involve investigating new algorithms, data pre-processing techniques, or training strategies to enhance the accuracy of behavior recognition. Furthermore, better detection of PB could give us a deeper understanding of hen welfare and support effective interventions to improve living conditions. Over the past few years, for example, the YOLO algorithm has successfully identified laying hens on the floor [23,24,25,26,27,28,30] regardless of their activity, which could help control challenging PB during rearing and alert farmers early to potential issues. Therefore, this study used YOLOv6 to detect PB, expanding upon previous research to detect PB and identify the areas where hens frequently pile. The objectives of this study were to (a) develop and test the best PB detection models and (b) compare the performance of deep learning models in research cage-free facilities. Once piling areas are identified, producers or researchers can investigate the potential reasons for, and issues related to, overcrowding in hens.
2. Materials and Methods
This section explains how image data were collected, processed, and used to train a YOLOv6 network for object detection in this study. It is divided into subsections covering the housing and management of the animals, the image and data collection methods, the image processing techniques, the software and hardware used in the analysis, the YOLOv6 network architecture, and the metrics used to evaluate PB detection accuracy.
2.1. Experimental Housing and Management
This experiment was conducted in four cage-free (CF) research houses at the University of Georgia in Athens, GA, USA (Figure 1). A total of 200 Hy-Line W-36 birds were raised from day 1 to day 300 in each house; each house measures 7.3 m in length, 6.1 m in width, and 3 m in height. The houses were equipped with lights, perches, nest boxes, feeders, and drinkers, and the floor was covered with pine shavings. Indoor conditions, such as light intensity and duration, ventilation rate, temperature, and relative humidity, were controlled using a Chore-Tronics Model 8 controller; the housing system is described in detail in previous research [23]. This experiment followed the animal care and use guidelines established by the University of Georgia’s Institutional Animal Care and Use Committee (UGA IACUC).
2.2. Image and Data Collections
This study recorded the hens’ behaviors using six night-vision network cameras (PRO-1080MSB, Swann Communications USA Inc., Santa Fe Springs, CA, USA) mounted about 3 m above the litter floor in each room. In addition, two cameras were placed 0.5 m above the litter floor. Video was recorded continuously (24 h per day) on a digital video recorder (DVR-4580, Swann Communications USA Inc., Santa Fe Springs, CA, USA). The recorded video files were stored in .avi format at 1920 × 1080 pixel resolution and a 15 frames-per-second sampling rate. Data acquisition took place when the hens were 46–50 weeks of age.
2.3. Image Processing
The video data were converted into individual image files in .jpg format using the Free Video to JPG Converter app (version 5.0). The resulting images were filtered for PB presence and image quality. To expand the dataset and improve detection accuracy, this study applied techniques such as geometric transformations, brightness and contrast adjustments, and data normalization to the images, creating multiple new image datasets with more samples. The obtained images were then labeled in YOLO format using Makesense.AI. Our findings indicated that implementing these augmentation techniques resulted in a notable increase in the final accuracy rate. This study used 9000 images in total (the PBnighttime, PBdaytime, PBground, and PBceiling datasets), of which 70% were used for training, 20% for validation, and 10% for testing (Table 1). The classes compared in this study are illustrated in Figure 2.
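To make this pipeline concrete, the following is a minimal Python sketch of the three steps described above: frame extraction, augmentation, and the 70/20/10 split. The use of OpenCV, the sampling interval, and the parameter ranges are illustrative assumptions; the study itself used the Free Video to JPG Converter app for extraction and Makesense.AI for labeling.

```python
# A minimal sketch of the Section 2.3 pipeline. All parameter values
# (sampling interval, brightness/contrast ranges) are assumptions.
import os
import random
import cv2

def extract_frames(video_path, out_dir, every_n=150):
    """Save every n-th frame of an .avi recording as a .jpg image."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            cv2.imwrite(f"{out_dir}/frame_{saved:06d}.jpg", frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

def augment(image):
    """One geometric transform (horizontal flip) plus a random
    brightness/contrast adjustment, as named in the text."""
    if random.random() < 0.5:
        image = cv2.flip(image, 1)        # geometric transformation
    alpha = random.uniform(0.8, 1.2)      # contrast factor
    beta = random.uniform(-20, 20)        # brightness offset
    return cv2.convertScaleAbs(image, alpha=alpha, beta=beta)

def split_dataset(paths, train=0.7, val=0.2):
    """Shuffle and split image paths 70/20/10 into train/validation/test."""
    random.shuffle(paths)
    a = int(len(paths) * train)
    b = int(len(paths) * (train + val))
    return paths[:a], paths[a:b], paths[b:]
```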
2.4. YOLOv6 Network Description
YOLOv6 is an object detection algorithm released by Meituan in 2022 [31]. YOLOv6 is designed as a single-stage object detection framework, meaning that it performs both object localization and classification in a single pass through the network, making it faster and more efficient than multi-stage frameworks [32]. In addition, YOLO models such as YOLOv6 are designed to be hardware efficient, which makes them suitable for industrial applications where real-time object detection is required [33]. YOLOv6 is optimized for GPUs and can run on devices with limited computing resources, making it a popular choice for embedded systems and Internet of Things (IoT) devices. Compared to its YOLO predecessors, YOLOv6 offers improved detection accuracy and inference speed, making it well suited for object detection tasks [31]. In this study, different YOLOv6-PB models, i.e., YOLOv6n-PB (nano), YOLOv6t-PB (tiny), YOLOv6s-PB (small), YOLOv6m-PB (medium), YOLOv6l-PB (large), and YOLOv6l relu-PB (large, ReLU activation), were compared for PB detection. These YOLOv6-PB models differ in size and number of parameters. First, the YOLOv6 models were compared on the PBmodel image dataset to identify the best PB detection model. The best model was then compared across camera settings (PBceiling and PBground) and photoperiods (PBnighttime and PBdaytime). Each model or class was trained for 300 epochs with a batch size of 16; in general, performance metrics improved as the number of epochs increased.
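As a rough illustration of this training setup, the sketch below loops over the six model variants using the Meituan YOLOv6 repository's tools/train.py entry point. The flag names follow the repository's documented usage but may differ between releases, and the config and dataset file names are assumptions for illustration.

```python
# A sketch of training all six YOLOv6 variants at 300 epochs, batch size 16.
# Flag names follow the meituan/YOLOv6 README at the time of writing and may
# differ between releases; config/dataset paths are illustrative assumptions.
import subprocess

VARIANTS = ["yolov6n", "yolov6t", "yolov6s", "yolov6m", "yolov6l", "yolov6l_relu"]

for v in VARIANTS:
    subprocess.run([
        "python", "tools/train.py",
        "--conf-file", f"configs/{v}.py",       # variant-specific config (assumed names)
        "--data-path", "data/pb_dataset.yaml",  # hypothetical PB dataset file
        "--epochs", "300",                      # fixed at 300 in this study
        "--batch-size", "16",                   # fixed at 16 in this study
    ], check=True)
```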
YOLOv6-PB is a complex neural network architecture consisting of several parts, each of which plays a specific role in object detection (Figure 3). Some of the main parts of YOLOv6-PB are:
2.4.1. Model Input
The pre-processed PB image datasets were fed into the model for prediction through the input part of the YOLOv6-PB model. Input images and labels were then passed to the neural network. The input image size depends on the YOLOv6-PB network architecture, but it is usually a fixed size, with 640 × 640 × 3 pixels as the default. This study used the default input size for analysis.
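The following is a minimal sketch of letterbox-style resizing to the 640 × 640 default input size, a common YOLO-family preprocessing step that preserves aspect ratio; it is a generic illustration rather than the exact resizing used inside YOLOv6.

```python
# Letterbox resize: scale the image to fit 640 x 640, then pad with gray
# borders so the hens' shapes are not distorted. A generic sketch.
import cv2
import numpy as np

def letterbox(image, size=640, pad_value=114):
    h, w = image.shape[:2]
    scale = size / max(h, w)                        # fit the longer side
    nh, nw = int(round(h * scale)), int(round(w * scale))
    resized = cv2.resize(image, (nw, nh))           # cv2 takes (width, height)
    canvas = np.full((size, size, 3), pad_value, dtype=np.uint8)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized  # center the image
    return canvas
```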
2.4.2. Model Backbone
The backbone extracts features from the input PB image. In YOLOv6-PB, the backbone network is typically a pre-trained convolutional neural network (CNN) that has been fine-tuned for object detection. The specific architecture of the backbone can vary, but it typically consists of several convolutional layers followed by max-pooling layers, which reduce the feature map’s spatial dimensions. The convolutional layers detect low-level features in the PB image, such as edges and textures. Spatial pyramid pooling (SPP) helps the max-pooling layers reduce the feature map’s size while retaining the features most important for object detection [34]. The EfficientRep backbone used in YOLOv6-PB is designed to use the computational resources of hardware such as GPUs effectively and to possess more robust feature representation abilities than the CSP backbone used by YOLOv5 [33].
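As an illustration of the SPP idea from [34], the PyTorch sketch below pools the same feature map at several kernel sizes and concatenates the results, preserving multi-scale context. It is a generic sketch, not the exact module inside the YOLOv6 backbone.

```python
# Spatial pyramid pooling in the spirit of [34]: parallel max-pooling at
# several kernel sizes, concatenated along the channel dimension.
import torch
import torch.nn as nn

class SPP(nn.Module):
    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        # stride 1 with padding k//2 keeps the spatial size unchanged
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(k, stride=1, padding=k // 2) for k in kernel_sizes]
        )

    def forward(self, x):
        # concatenate the input with each pooled view along channels
        return torch.cat([x] + [p(x) for p in self.pools], dim=1)

# Example: x = torch.randn(1, 256, 20, 20)
# SPP()(x) has shape (1, 1024, 20, 20): 256 channels x (1 input + 3 branches).
```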
2.4.3. Model Neck
The neck connects the backbone network to the rest of the network. It takes the output of the backbone and performs additional processing to produce the final feature map used for PB object detection. In general, the purpose of the neck in YOLO-PB architectures is to provide intermediate feature maps suitable for the heads to make accurate predictions. These feature maps are typically produced through a series of convolutional, pooling, and up-sampling layers that bring the backbone features to the scale and resolution the heads require. For its neck design, YOLOv6-PB introduces a more efficient feature fusion network, known as the Rep-PAN neck [35], to improve hardware utilization and the balance between accuracy and speed. This design is based on the hardware-aware neural network architecture concept [36].
2.4.4. Anchor Boxes
Anchor boxes are predefined bounding boxes that represent PB objects in the image. They provide a priori information about the location and size of PB objects. During training, the network learns to adjust the anchor boxes to better fit the PB objects in the image.
2.4.5. Detection Head
The detection head is responsible for predicting the PB objects in the image. It takes the output of the neck network and produces a set of class probabilities and bounding boxes for each targeted PB object [37]. The detection head uses anchor boxes as a starting point and adjusts them to better fit the PB objects in the image. YOLOv6-PB utilizes a decoupled head structure, simplifying the head design while carefully balancing the representation capability of the relevant operations against the computational demands on the hardware [33].
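The sketch below illustrates the decoupled-head idea in PyTorch: classification and box regression are computed by separate convolutional branches rather than one shared branch. It shows the concept from [33] only; the channel widths and layer counts are assumptions, not the actual YOLOv6 head.

```python
# A minimal decoupled detection head: separate branches for class scores
# and box regression. Channel widths are illustrative assumptions.
import torch
import torch.nn as nn

class DecoupledHead(nn.Module):
    def __init__(self, in_ch=256, num_classes=1):   # one class here: PB
        super().__init__()
        self.cls_branch = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(in_ch, num_classes, 1),        # class score per cell
        )
        self.reg_branch = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(in_ch, 4 + 1, 1),              # box (x, y, w, h) + objectness
        )

    def forward(self, feat):
        # two independent predictions from the same neck feature map
        return self.cls_branch(feat), self.reg_branch(feat)
```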
2.4.6. Loss Function
The loss function trains the network by measuring the difference between the predictions made by the network and the ground truth annotations. The loss function measures the error between the predicted bounding boxes and the ground truth boxes and the error between the predicted class probabilities and the ground truth class labels of PB.
Loss = λcls·Lcls + λobj·Lobj + λloc·Lloc (1)
where Lcls, Lobj, and Lloc represent the class loss, PB object loss, and location (bounding box) loss, respectively, and each λ is a constant weighting its respective loss term.
2.4.7. Post-Processing
The final step in the PB object detection process is post-processing, which refines the predictions made by the network: filtering out low-confidence detections, rescaling the bounding boxes to the original PB image size, and drawing the final PB detections on the image.
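A minimal sketch of these post-processing steps is shown below using torchvision's non-maximum suppression. The confidence and IoU thresholds are illustrative assumptions, and the rescaling assumes a plain resize from the 640 × 640 input back to the 1920 × 1080 camera frames (letterbox padding would additionally need to be removed).

```python
# Post-processing sketch: confidence filtering, NMS, and rescaling.
# Thresholds and resolutions are illustrative assumptions.
import torch
from torchvision.ops import nms

def postprocess(boxes, scores, input_size=640, orig_w=1920, orig_h=1080,
                conf_thres=0.25, iou_thres=0.45):
    # 1. drop low-confidence PB detections
    keep = scores > conf_thres
    boxes, scores = boxes[keep], scores[keep]
    # 2. non-maximum suppression removes duplicate overlapping boxes
    keep = nms(boxes, scores, iou_thres)
    boxes, scores = boxes[keep], scores[keep]
    # 3. rescale (x1, y1, x2, y2) back to the original camera resolution
    scale = torch.tensor([orig_w, orig_h, orig_w, orig_h]) / input_size
    return boxes * scale, scores
```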
Each of these parts of YOLOv6-PB works together to perform object detection in real-time, allowing for the efficient and accurate detection of PB objects in images and videos.
2.5. Computational Parameters
To perform PB detection, a high-performance computational configuration was used. This study used Oracle cloud instances with the configuration listed in Table 2 to train, validate, and test the image datasets. A larger computational configuration increases the training speed and detection accuracy of the model [26,28].
2.6. Performance Metrics
2.6.1. Precision
This metric measures the fraction of all PB detections made by the object detection system that were correct. It is calculated from all positive detections: true positives (TP; the image contains PB and the model detects it correctly) and false positives (FP; the image does not contain PB, but the model detects PB). The formula for precision is given below.
Precision = TP / (TP + FP) (2)
The positive and negative PB detection outcomes are summarized in the confusion matrix in Figure 4.
2.6.2. Recall
This metric measures the fraction of the total number of PB objects in an image that the object detection system correctly detected. It is calculated from the TP and false negative (FN; the image contains PB, but the model fails to detect it) results obtained from the YOLOv6-PB model.
Recall = TP / (TP + FN) (3)
2.6.3. Mean Average Precision
This metric measures the average precision of the PB object detection system over multiple object classes at an IoU threshold of 0.50 (mAP@0.50) or 0.50:0.95 (mAP@0.50:0.95). The mAP is calculated as the average of the per-class average precision values, considering the numbers of true positive and false positive PB detections.
mAP = (1/C) × Σ(i=1 to C) APi (4)
where APi is the average precision of the ith category and C is the total number of categories.
2.6.4. Intersection over Union
In YOLOv6, the model used the Intersection over Union (IoU) metric to determine whether an object was correctly detected. This metric calculates how much the detected bounding box overlaps with the ground truth bounding box, as given in Equation (5). A threshold value of 0.5 was used, as in a previous study, to determine whether a detection was a TP [38]. If the overlap between the detected and ground truth bounding boxes was at least 50%, the detection was considered a TP. If the overlap was less than 50%, it was labeled an FN, meaning the object went undetected. FP detections occurred when the model predicted PB where none existed, whereas TN cases occurred when the model correctly made no such prediction.
IoU = Area of Overlap / Area of Union (5)
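The short sketch below works through Equations (2), (3), and (5) in Python: IoU against a ground-truth box decides whether a prediction counts as a TP at the 0.5 threshold, after which precision and recall follow from the TP/FP/FN counts. The boxes and counts are made-up illustrative values.

```python
# Worked example of the evaluation metrics; all numbers are illustrative.
def iou(box_a, box_b):
    """Boxes as (x1, y1, x2, y2); Equation (5)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(tp, fp, fn):
    """Equations (2) and (3)."""
    return tp / (tp + fp), tp / (tp + fn)

# A predicted box overlapping a ground-truth pile by IoU >= 0.5 is a TP.
print(iou((0, 0, 10, 10), (2, 0, 12, 10)))    # 80 / (100 + 100 - 80) = 0.667
print(precision_recall(tp=90, fp=10, fn=30))  # (0.9, 0.75)
```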
3. Results
The study’s findings on hen PB were compared across models and settings. The results section is therefore divided into three subsections covering the performance comparison of the YOLOv6-PB models and the performance of PB detection under different photoperiods and camera settings. The section provides an overview of how these factors influence PB detection and evaluates the YOLOv6-PB model’s performance in detecting this behavior.
3.1. Performance Comparison of YOLOv6-PB Models
This study compared all YOLOv6-PB models to determine which performs best in detecting PB; the results are shown in Table 3 and Figure 5. Among all YOLOv6-PB models, YOLOv6l relu-PB performed best in terms of average recall (70.6%), mAP@0.50 (98.9%), mAP@0.75 (74.6%), and mAP@0.50:0.95 (63.7%). After the YOLOv6l relu-PB model, YOLOv6n-PB showed the next-highest average recall (69.8%) and an equal mAP@0.50 (98.9%), with the second-lowest training time (2.04 h) after the YOLOv6t-PB model (2.03 h). YOLOv6t-PB performed worst, with 67.6% average recall, 67.3% mAP@0.75, and 60.7% mAP@0.50:0.95. Furthermore, the time to train on 2100 labeled images for 300 epochs at batch size 16 increased from the smaller to the larger YOLOv6 models, because larger models take more time to process and detect PB, as shown in Table 3. The YOLOv6t-PB and YOLOv6n-PB models were fastest to train on 2100 images and validate on 600 images (about 2.0 h), while YOLOv6l relu-PB was slowest (4.24 h). Judged on training time alone, YOLOv6n-PB is the fastest model that still detects PB accurately; however, the other performance metrics matter more than training time. Thus, YOLOv6l relu-PB outperforms the other models and can be used in the future to detect PB, which ultimately helps to find the actual causes of PB so it can be reduced in time. Since YOLOv6l relu-PB performed best, we used this model for the comparisons across photoperiods and camera settings.
In Figure 6, the YOLOv6-PB models were compared on the training and validation datasets; each model generated curve data based on its performance at each epoch. The models’ performance metrics are close, but when examined closely and compared with a non-parametric statistical analysis, YOLOv6l relu-PB outperforms the others in mAP@0.50:0.95. For mAP@0.50, the differences were not significant at the 0.05 level. Although there is no significant difference among the models on that metric, the model with the highest mAP can still be preferred, because every percentage-point increase in object detection performance matters.
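The exact statistical test is not detailed in the text; as one plausible reading, the sketch below applies a paired Wilcoxon signed-rank test (a standard non-parametric test) to per-epoch validation mAP@0.50:0.95 values for two models. The numeric arrays are placeholders, not the study's data.

```python
# Paired non-parametric comparison of two models' per-epoch mAP values.
# The test choice and all numbers are illustrative assumptions.
from scipy.stats import wilcoxon

map_yolov6l_relu = [0.630, 0.634, 0.636, 0.637, 0.635]  # placeholder values
map_yolov6m      = [0.628, 0.631, 0.633, 0.634, 0.632]  # placeholder values

stat, p = wilcoxon(map_yolov6l_relu, map_yolov6m)
print(f"Wilcoxon statistic = {stat:.3f}, p = {p:.3f}")
if p < 0.05:
    print("Difference is significant at the 0.05 level.")
else:
    print("No significant difference at the 0.05 level.")
```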
3.2. Performance of Piling Behavior under Different Photoperiods
The YOLOv6l relu-PB model was used to compare PB detection during nighttime and daytime; detection performance was highest during nighttime in terms of average recall, mAP@0.50, mAP@0.75, and mAP@0.50:0.95, as shown in Table 4. The model performance across photoperiods and epochs is shown in Figure 7. The performance metrics increased as the number of epochs increased, likely due to additional training, the large architecture size, the higher parameter count, and continued learning. The metrics were highest during nighttime because the PB flock size was largest then, and it is easier to detect a particular group by differentiating large groups of hens from individual hens, as shown in Figure 8.
Based on Figure 9, it can be observed that the IoU loss decreases as the number of training epochs increases. Furthermore, the YOLOv6-PB nighttime model had a lower IoU loss than the YOLOv6-PB daytime model, indicating that the nighttime model performed better in detecting PB.
3.3. Performance of Piling Behavior under Different Camera Settings
According to the results of this study, the YOLOv6l relu ground-camera model (camera height 0.5 m) showed the highest performance metrics for PB detection, with an average recall of 66.8%, mAP@0.50 of 96.4%, mAP@0.75 of 56.9%, and mAP@0.50:0.95 of 57.6%, owing to the clear view it provides (Table 5). This model can therefore be recommended for ground-level PB detection, as shown in Figure 10. In conclusion, the ground camera proved more effective for PB detection on the test datasets and is recommended for this purpose.
Figure 11 shows that the IoU loss decreases as training epochs increase. The YOLOv6-PB ground-camera model had a lower IoU loss than the YOLOv6-PB ceiling model, indicating that the ground-camera model was more effective in detecting PB.
4. Discussion
In the present study, we aimed to evaluate the performance of various YOLOv6 models for the detection of PB in poultry. The PB is a serious concern in commercial poultry farming, as it can lead to decreased welfare [2,20,22] and other negative consequences. In addition, PB has been linked to floor eggs [15,28] and FELB [28], so accurate detection can also help reduce these issues. This study focused on using computer vision techniques to detect PB accurately, which could help reduce its occurrence and improve poultry welfare. We analyzed several YOLOv6 models and found that the YOLOv6l relu-PB model performed the best on the available datasets, likely due to the model’s larger architecture consisting of several convolutional layers and parameters used [31,39]. Accurately detecting PB can help reduce false detections and lead to more effective identification and reduction of PB. However, it is important to consider that factors, such as housing systems and bird numbers in commercial farming, may impact the accuracy of PB detection. Therefore, the YOLOv6l relu-PB model will be further tested and optimized for commercial farm settings to increase its accuracy.
Piling behavior detection is crucial in various environmental situations, such as different photoperiod conditions and camera heights. This study has shown that detecting PB is important for maintaining animal welfare, especially during nighttime. Our results suggest that larger group sizes contribute significantly to the occurrence of PB at night. PB was highest during nighttime because hens tend to pile together in a large group to secure safety through social contact [40]. Furthermore, our study has shown that without night vision cameras, detection performance decreases during nighttime; night vision cameras are therefore recommended to enhance detection precision in both daytime and nighttime monitoring. The YOLOv6l relu-PB nighttime model was the most effective in accurately detecting PB during nighttime. Similarly, this study highlighted the significance of camera settings and heights in improving the accuracy of PB detection and found that the closer the camera was to the targeted objects, the higher the detection accuracy. While a camera close to the target object helps detect objects at short range, a ceiling camera provides an overview of the whole room. In the future, a ceiling camera could provide the room overview and transmit PB signals to a ground robot, enabling the robot to navigate to the PB area and locate the cause of the piling, helping to reduce the gathering of hens in a specific place. Therefore, both camera heights are needed to improve the PB detection model. To achieve this, training on more image datasets under various environmental conditions and settings is necessary.
In evaluating object detection models, Intersection over Union (IoU) has been recommended as a standard metric for assessing the quality of segmentation [41]. By analyzing IoU values separately, it was observed that segmenting fewer sampled classes was particularly challenging, even when using focal loss. A lower IoU loss signifies better detection accuracy, thereby supporting the conclusion that the YOLOv6-PB nighttime and YOLOv6-PB camera ground models performed better during testing. This information is crucial in evaluating the effectiveness of detection models and can help researchers identify areas that require further improvement. In summary, our study highlights the importance of considering IoU values in assessing detection accuracy, particularly when fewer classes are being segmented.
This study investigated the effectiveness of the YOLOv6 model in detecting behaviors, such as PB, in real-world settings. We found that the model performed well in various scenarios and could handle unpredictable situations. However, collecting enough image data for training PB detection models can be challenging, leading to data imbalance or overfitting issues that can affect the model’s accuracy [42]. To address this, we used data augmentation techniques, such as geometric transformations, brightness/contrast enhancement, and data normalization to increase the training dataset size and improve accuracy rates. Extending the training dataset through data augmentation is essential for accurate PB model detection, as accuracy depends heavily on the dataset’s size and resolution. We can overcome these challenges by improving the training procedure, pre-processing images, and achieving more accurate detection results.
This study has some limitations. For instance, Figure 12 highlights that it is not appropriate to evaluate a model’s effectiveness based solely on one aspect; rather, it should be assessed on its overall performance. Our proposed model has some notable limitations. During the test phase, it occasionally misclassified group behaviors, such as dustbathing, feeding, perching, foraging, and drinking, as PB. Similarly, the model sometimes detected the feeder as PB, possibly because its color is similar to that of the hens. We intend to enhance the model by training it on additional datasets to address this issue. This study’s custom dataset includes many images of PB alongside scenes of dustbathing, feeding, perching, foraging, and drinking. Therefore, if hens were merely gathering or coming close to one another during these activities, the model waited until it detected PB before registering an identification; however, it sometimes mistakenly flagged hens when many came close together. Moreover, nighttime detection proved challenging, leading us to replace regular cameras with night vision cameras; without night vision capabilities, it is difficult to identify birds. We also found that camera height significantly impacts detection accuracy. As camera height increases, the sensitivity, resolution, signal-to-noise ratio, and field of view decrease, which can cause detection errors. However, a camera placed too close to the hens also causes false detections, through blurring or by detecting nearby objects as PB. In addition, the cameras need to be cleaned daily to obtain the best quality videos, because CF housing has high dust levels [43,44]. Overall, image quality decreases when the camera is mounted too high or too close to the hens, and periodic camera cleaning is required. These limitations are the most noteworthy findings of our study.
In the future, detecting and preventing PB in hens is an important research area, and several promising directions exist to explore. One of these directions involves using advanced computer vision techniques, such as the YOLOv6 model, which can accurately identify and differentiate between objects and their behaviors. By analyzing videos, the YOLOv6 model can simultaneously evaluate multiple categories or classes and provide unique identifiers for each detected object, making it a good choice for detecting PB. Researchers could also explore the use of multi-sensor systems to obtain a complete view of hen behavior and develop non-invasive methods for detecting PB to reduce stress on the hens. Additionally, investigating the impact of environmental factors on PB and using machine learning algorithms to analyze large datasets could provide valuable insights into this issue. Finally, we could better understand the hen’s behavior and welfare by integrating data from multiple sensors, such as cameras, microphones, and pressure sensors.
To prevent PB, it is important to understand its underlying causes and motivations. Studying the social dynamics of hen groups, their individual personalities, and preferences can help develop targeted interventions to prevent or mitigate this behavior. Automated feedback systems can provide real-time information to farmers or caretakers about the prevalence and severity of PB, allowing them to intervene when necessary and improve welfare outcomes for hens. Improving an understanding of PB and the environmental and social factors that drive it can help to develop more effective strategies to improve the welfare and overall health of hens in both commercial and non-commercial settings.
5. Conclusions
This study tested different deep-learning models for detecting PB in research CF houses. Model development used 9000 images for training, validation, and testing. The results show that the YOLOv6l relu-PB model achieved higher performance in detecting PB, with higher mAP@0.50 (98.9%), mAP@0.50:0.95 (63.7%), and average recall (70.6%) than the other models. Both ceiling and ground cameras are important for detecting PB precisely; however, the ground camera yielded higher precision for detecting PB. Cameras with built-in night vision can help increase detection accuracy, and the ceiling camera showed higher precision in detecting PB during nighttime. We encountered some common problems, such as inaccurate detection and difficulty recognizing objects that were too close or too far away. To address these problems, we propose several data augmentation and training techniques for the YOLOv6 model, such as geometric transformations and color dithering.
Our research proposed the YOLOv6 model, which leverages the EfficientRep backbone to extract features from input images, enhancing the model’s feature learning and boosting the network’s performance. By detecting PB quickly and accurately, we can minimize the negative impact on animal welfare and reduce FELB, leading to better health outcomes and production. However, we noticed some limitations in real-time applications, such as the model’s difficulty classifying images containing groups of hens or hens too close together. Future research should therefore focus on improving the approach’s accuracy on these types of datasets. Overall, the YOLOv6l relu-PB detection model is recommended for monitoring PB and will be tested in commercial houses.
Conceptualization: R.B.B. and L.C.; methodology: R.B.B. and L.C.; data analysis: R.B.B.; investigation: R.B.B., S.S., X.Y. and L.C.; resources: L.C.; writing—original draft preparation: R.B.B. and L.C.; supervision: L.C.; funding acquisition: L.C. All authors have read and agreed to the published version of the manuscript.
The authors declare no conflict of interest.
Figure 2. Image datasets labeled by class: (a) PBceiling, (b) PBground, (c) PBnighttime, and (d) PBdaytime.
Figure 4. Confusion matrix for piling behavior detection used for model evaluation.
Figure 5. Piling behavior detection result comparison based on various models (a) YOLOv6t-PB, (b) YOLOv6n-PB, (c) YOLOv6s-PB, (d) YOLOv6m-PB, (e) YOLOv6l-PB, and (f) YOLOv6l relu-PB model.
Figure 6. Comparison of piling behavior detection results between different YOLOv6-PB models based on (a) mAP@0.50 and (b) mAP@0.50:0.95 at 300 epochs and a batch size of 16.
Figure 7. Comparison of piling behavior detection results during different photoperiods and epochs with (a) average recall, (b) mAP@0.50, and (c) mAP@0.50:0.95.
Figure 8. Piling behavior detection result based on different photoperiods (a) nighttime (light turned off) and (b) daytime (light turned on).
Figure 9. Comparison of IoU loss during training between YOLOv6-PB nighttime and YOLOv6-PB daytime models at 300 epochs and a batch size of 16.
Figure 10. Piling behavior detection result based on different camera settings (a) height 0.5 m (ground) and (b) height 3 m (ceiling).
Figure 11. Comparison of IoU loss during training between YOLOv6-PB ceiling and YOLOv6-PB ground models at 300 epochs and a batch size of 16.
Figure 12. Example of false piling behaviors detected by the model due to (a) occlusion, (b) foraging, (c) feeder presence, and (d) perching.
Table 1. Data pre-processing for PB model detection.
Class | Original Dataset | Train (70%) | Validation (20%) | Test (10%) |
---|---|---|---|---|
PBceiling | 1500 | 1050 | 300 | 150 |
PBground | 1500 | 1050 | 300 | 150 |
PBdaytime | 3000 | 2100 | 600 | 300 |
PBnighttime | 3000 | 2100 | 600 | 300 |
PBmodel | 3000 | 2100 | 600 | 300 |
Note: PBceiling and PBground represent PB observed at camera heights of 3 m and 0.5 m, respectively, above the litter floor; PBdaytime and PBnighttime represent PB during the light and dark periods, respectively.
Table 2. Computational parameters used for the PB model evaluation.
Configuration | Parameters |
---|---|
CPU | 64 core OCPU |
GPU (4 counts) | 4 × NVIDIA® A10 (24 GB) |
Operating system | Ubuntu 22.10 |
Accelerated environment | NVIDIA CUDA |
Memory | 1024 GB |
Drive (2 counts) | 7.68 TB NVMe SSD |
Libraries | Torch 1.7.0, Torch-vision 0.8.1, OpenCV-python 4.1.1, NumPy 1.18.5 |
Table 3. Comparison of performance of the different models with different performance metrics.
Performance Metrics | YOLOv6t-PB | YOLOv6n-PB | YOLOv6s-PB | YOLOv6m-PB | YOLOv6l-PB | YOLOv6l relu-PB |
---|---|---|---|---|---|---|
Average Recall (%) | 67.6 | 69.8 | 69.1 | 70.2 | 69.8 | 70.6 |
mAP@0.50 (%) | 97.6 | 98.9 | 98.5 | 98.1 | 96.3 | 98.9 |
mAP@0.75 (%) | 67.3 | 70.6 | 70.1 | 73.9 | 73.5 | 74.6 |
mAP@0.50:0.95 (%) | 60.7 | 62.8 | 62.2 | 63.4 | 62.4 | 63.7 |
Training time (hrs) | 2.03 | 2.04 | 2.07 | 2.97 | 3.23 | 4.24 |
mAP—mean average precision; hrs—hours; PB—piling behavior.
Table 4. Comparison of piling behavior during daytime and nighttime using the YOLOv6l relu model.
Data Summary | Average Recall (%) | mAP@0.50 (%) | mAP@0.75 (%) | mAP@0.50:0.95 (%) |
---|---|---|---|---|
YOLOv6l relu-nighttime | 89.4 | 98.9 | 98.8 | 87.0 |
YOLOv6l relu-daytime | 70.6 | 98.0 | 72.0 | 63.5 |
mAP—mean average precision.
Table 5. Comparison of piling behavior under different camera settings using the YOLOv6l relu model.
Camera Settings | Average Recall (%) | mAP@0.50 (%) | mAP@0.75 (%) | mAP@0.50:0.95 (%) |
---|---|---|---|---|
YOLOv6l relu-ceiling | 63.8 | 93.1 | 54.0 | 54.5 |
YOLOv6l relu-ground | 66.8 | 96.4 | 56.9 | 57.6 |
Ceiling—3 m; ground—0.5 m; mAP—mean average precision.
References
1. Campbell, D.; Makagon, M.; Swanson, J.; Siegford, J. Litter Use by Laying Hens in a Commercial Aviary: Dust Bathing and Piling. Poult. Sci.; 2016; 95, pp. 164-175. [DOI: https://dx.doi.org/10.3382/ps/pev183] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26354762]
2. Winter, J.; Toscano, M.J.; Stratmann, A. Piling Behaviour in Swiss Layer Flocks: Description and Related Factors. Appl. Anim. Behav. Sci.; 2021; 236, 105272. [DOI: https://dx.doi.org/10.1016/j.applanim.2021.105272]
3. Gray, H.; Davies, R.; Bright, A.; Rayner, A.; Asher, L. Why Do Hens Pile? Hypothesizing the Causes and Consequences. Front. Vet. Sci.; 2020; 7, 616836. [DOI: https://dx.doi.org/10.3389/fvets.2020.616836] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33363246]
4. Rice, M.; Acharya, R.; Fisher, A.; Taylor, P.; Hemsworth, P. Characterising Piling Behaviour in Australian Free-Range Commercial Laying Hens. ISAE 2020 Global Virtual Meeting: Online Programme Book; ISAE: Puch, Austria, 2020; 1.
5. Barrett, J.; Rayner, A.; Gill, R.; Willings, T.; Bright, A. Smothering in UK Free-range Flocks. Part 1: Incidence, Location, Timing and Management. Vet. Rec.; 2014; 175, 19. [DOI: https://dx.doi.org/10.1136/vr.102327] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24836430]
6. Herbert, G.T.; Redfearn, W.D.; Brass, E.; Dalton, H.A.; Gill, R.; Brass, D.; Smith, C.; Rayner, A.C.; Asher, L. Extreme Crowding in Laying Hens during a Recurrent Smothering Outbreak. Vet. Rec.; 2021; 188, e245. [DOI: https://dx.doi.org/10.1002/vetr.245]
7. Rayner, A.; Gill, R.; Brass, D.; Willings, T.; Bright, A. Smothering in UK Free-range Flocks. Part 2: Investigating Correlations between Disease, Housing and Management Practices. Vet. Rec.; 2016; 179, 252. [DOI: https://dx.doi.org/10.1136/vr.103701]
8. Bright, A.; Johnson, E. Smothering in Commercial Free-Range Laying Hens: A Preliminary Investigation. Anim. Behav.; 2011; 119, pp. 203-209. [DOI: https://dx.doi.org/10.1136/vr.c7462]
9. Marder, J.; Arad, Z. Panting and Acid-Base Regulation in Heat Stressed Birds. Comp. Biochem. Physiol. Part A Physiol.; 1989; 94, pp. 395-400. [DOI: https://dx.doi.org/10.1016/0300-9629(89)90112-6]
10. Kang, H.; Park, S.; Jeon, J.; Kim, H.; Kim, S.; Hong, E.; Kim, C. Effect of Stocking Density on Laying Performance, Egg Quality and Blood Parameters of Hy-Line Brown Laying Hens in an Aviary System. Eur. Poult. Sci.; 2018; 82, 245.
11. Mashaly, M.; Hendricks, G., 3rd; Kalama, M.; Gehad, A.; Abbas, A.; Patterson, P. Effect of Heat Stress on Production Parameters and Immune Responses of Commercial Laying Hens. Poult. Sci.; 2004; 83, pp. 889-894. [DOI: https://dx.doi.org/10.1093/ps/83.6.889]
12. Hartcher, K.M.; Jones, B. The Welfare of Layer Hens in Cage and Cage-Free Housing Systems. World’s Poult. Sci. J.; 2017; 73, pp. 767-782. [DOI: https://dx.doi.org/10.1017/S0043933917000812]
13. Campbell, D.L.; Hinch, G.N.; Downing, J.A.; Lee, C. Fear and Coping Styles of Outdoor-Preferring, Moderate-Outdoor and Indoor-Preferring Free-Range Laying Hens. Appl. Anim. Behav. Sci.; 2016; 185, pp. 73-77. [DOI: https://dx.doi.org/10.1016/j.applanim.2016.09.004]
14. Gebhardt-Henrich, S.G.; Stratmann, A. What Is Causing Smothering in Laying Hens?. Vet. Rec.; 2016; 179, 250. [DOI: https://dx.doi.org/10.1136/vr.i4618] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27609962]
15. Riber, A.B. Development with Age of Nest Box Use and Gregarious Nesting in Laying Hens. Appl. Anim. Behav. Sci.; 2010; 123, pp. 24-31. [DOI: https://dx.doi.org/10.1016/j.applanim.2009.12.016]
16. Giersberg, M.F.; Kemper, N.; Spindler, B. Pecking and Piling: The Behaviour of Conventional Layer Hybrids and Dual-Purpose Hens in the Nest. Appl. Anim. Behav. Sci.; 2019; 214, pp. 50-56. [DOI: https://dx.doi.org/10.1016/j.applanim.2019.02.016]
17. Lentfer, T.L.; Gebhardt-Henrich, S.G.; Fröhlich, E.K.; von Borell, E. Influence of Nest Site on the Behaviour of Laying Hens. Appl. Anim. Behav. Sci.; 2011; 135, pp. 70-77. [DOI: https://dx.doi.org/10.1016/j.applanim.2011.08.016]
18. Singh, R.; Cheng, K.; Silversides, F. Production Performance and Egg Quality of Four Strains of Laying Hens Kept in Conventional Cages and Floor Pens. Poult. Sci.; 2009; 88, pp. 256-264. [DOI: https://dx.doi.org/10.3382/ps.2008-00237]
19. Villanueva, S.; Ali, A.; Campbell, D.; Siegford, J. Nest Use and Patterns of Egg Laying and Damage by 4 Strains of Laying Hens in an Aviary System1. Poult. Sci.; 2017; 96, pp. 3011-3020. [DOI: https://dx.doi.org/10.3382/ps/pex104] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28431049]
20. Altan, O.; Seremet, C.; Bayraktar, H. The Effects of Early Environmental Enrichment on Performance, Fear and Physiological Responses to Acute Stress of Broiler. Arch. Für Geflügelkunde; 2013; 77, pp. 23-28.
21. Bist, R.B.; Subedi, S.; Chai, L.; Regmi, P.; Ritz, C.W.; Kim, W.K.; Yang, X. Effects of Perching on Poultry Welfare and Production: A Review. Poultry; 2023; 2, pp. 134-157. [DOI: https://dx.doi.org/10.3390/poultry2020013]
22. Winter, J.; Toscano, M.J.; Stratmann, A. The Potential of a Light Spot, Heat Area, and Novel Object to Attract Laying Hens and Induce Piling Behaviour. Animal; 2022; 16, 100567. [DOI: https://dx.doi.org/10.1016/j.animal.2022.100567] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35849910]
23. Yang, X.; Chai, L.; Bist, R.B.; Subedi, S.; Wu, Z. A Deep Learning Model for Detecting Cage-Free Hens on the Litter Floor. Animals; 2022; 12, 1983. [DOI: https://dx.doi.org/10.3390/ani12151983]
24. Yang, X.; Bist, R.; Subedi, S.; Chai, L. A deep learning method for monitoring spatial distribution of cage-free hens. Artif. Intell. Agric.; 2023; 8, pp. 20-29. [DOI: https://dx.doi.org/10.1016/j.aiia.2023.03.003]
25. Subedi, S.; Bist, R.; Yang, X.; Chai, L. Tracking Pecking Behaviors and Damages of Cage-Free Laying Hens with Machine Vision Technologies. Comput. Electron. Agric.; 2023; 204, 107545. [DOI: https://dx.doi.org/10.1016/j.compag.2022.107545]
26. Subedi, S.; Bist, R.; Yang, X.; Chai, L. Tracking Floor Eggs with Machine Vision in Cage-Free Hen Houses. Poult. Sci.; 2023; 102, 102637. [DOI: https://dx.doi.org/10.1016/j.psj.2023.102637] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37011469]
27. Sozzi, M.; Pillan, G.; Ciarelli, C.; Marinello, F.; Pirrone, F.; Bordignon, F.; Bordignon, A.; Xiccato, G.; Trocino, A. Measuring Comfort Behaviours in Laying Hens Using Deep-Learning Tools. Animals; 2023; 13, 33. [DOI: https://dx.doi.org/10.3390/ani13010033] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36611643]
28. Bist, R.B.; Yang, X.; Subedi, S.; Chai, L. Mislaying behavior detection in cage-free hens with deep learning technologies. Poult. Sci.; 2023; 102729. [DOI: https://dx.doi.org/10.1016/j.psj.2023.102729]
29. Pu, H.; Lian, J.; Fan, M. Automatic Recognition of Flock Behavior of Chickens with Convolutional Neural Network and Kinect Sensor. Int. J. Pattern. Recognit. Artif. Intell.; 2018; 32, 7. [DOI: https://dx.doi.org/10.1142/S0218001418500234]
30. Wang, C.Y.; Liao, H.; Wu, Y.H.; Chen, P.Y.; Hsieh, J.W.; Yeh, I.H. CSPNet: A New Backbone that can Enhance Learning Capability of CNN. Proceedings of the CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW); Seattle, WA, USA, 13–19 June 2020; pp. 1571-1580.
31. Mtjhl, L. Meituan/YOLOv6 2023. Available online: https://github.com/meituan/YOLOv6 (accessed on 18 January 2023).
32. Horvat, M.; Gledec, G. A Comparative Study of YOLOv5 Models Performance for Image Localization and Classification. Proceedings of the Central European Conference on Information and Intelligent Systems; Dubrovnik, Croatia, 20–22 September 2023; pp. 349-356.
33. Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W. YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications. arXiv; 2022; arXiv: 2209.02976
34. He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell.; 2015; 37, pp. 1904-1916. [DOI: https://dx.doi.org/10.1109/TPAMI.2015.2389824]
35. Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path Aggregation Network for Instance Segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Salt Lake City, UT, USA, 18–22 June 2018; pp. 8759-8768.
36. Weng, K.; Chu, X.; Xu, X.; Huang, J.; Wei, X. EfficientRep: An Efficient RepVGG-Style ConvNets with Hardware-Aware Neural Network Design. arXiv; 2023; arXiv: 2302.00386
37. Jocher, G. YOLOv5 (6.0/6.1) Brief Summary · Issue #6998 · Ultralytics/Yolov5. Available online: https://github.com/ultralytics/yolov5/issues/6998 (accessed on 10 March 2023).
38. Aburaed, N.; Alsaad, M.; Mansoori, S.A.; Al-Ahmad, H. A Study on the Autonomous Detection of Impact Craters. Proceedings of the Artificial Neural Networks in Pattern Recognition: 10th IAPR TC3 Workshop, ANNPR 2022; Dubai, United Arab Emirates, 24–26 November 2022; Springer International Publishing: Cham, Switzerland, 2022; pp. 181-194.
39. Li, C.; Li, L.; Geng, Y.; Jiang, H.; Cheng, M.; Zhang, B.; Ke, Z.; Xu, X.; Chu, X. YOLOv6 v3.0: A Full-Scale Reloading. arXiv; 2023; [DOI: https://dx.doi.org/10.48550/arXiv.2301.05586] arXiv: 2301.05586
40. Gregory, N.G. Physiology and Behaviour of Animal Suffering; John Wiley & Sons: Hoboken, NJ, USA, 2008; ISBN 1-4051-7302-5 Available online: https://books.google.com/books?hl=en&lr=&id=0bOZocGJMaAC&oi=fnd&pg=PR5&dq=Physiology+and+Behaviour+of+Animal+Suffering%3B+&ots=wJJQHce-sQ&sig=QF9zN5IbQGMMHKpGLcUnjR0cLNY#v=onepage&q=Physiology%20and%20Behaviour%20of%20Animal%20Suffering%3B&f=false (accessed on 25 December 2022).
41. Martins Crispi, G.; Valente, D.S.M.; Queiroz, D.M.d.; Momin, A.; Fernandes-Filho, E.I.; Picanço, M.C. Using Deep Neural Networks to Evaluate Leafminer Fly Attacks on Tomato Plants. AgriEngineering; 2023; 5, pp. 273-286. [DOI: https://dx.doi.org/10.3390/agriengineering5010018]
42. Sambasivam, G.A.O.G.D.; Opiyo, G.D. A predictive machine learning application in agriculture: Cassava disease detection and classification with imbalanced dataset using convolutional neural networks. Egypt. Inform. J.; 2021; 22, pp. 27-34. [DOI: https://dx.doi.org/10.1016/j.eij.2020.02.007]
43. Ni, J.Q.; Erasmus, M.A.; Croney, C.C.; Li, C.; Li, Y. A critical review of advancement in scientific research on food animal welfare-related air pollution. J. Hazard. Mater.; 2021; 408, 124468. [DOI: https://dx.doi.org/10.1016/j.jhazmat.2020.124468]
44. Ni, J.Q.; Heber, A.J.; Darr, M.J.; Lim, T.T.; Diehl, C.A.; Bogan, B.W. Air quality monitoring and on-site computer system for livestock and poultry environment studies. Trans. ASABE; 2009; 52, pp. 937-947.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Piling behavior (PB) is a common issue that negatively affects the health, welfare, and productivity of the flock in poultry houses (e.g., cage-free layer, breeder, and broiler houses). Birds pile on top of each other, and the weight of the birds can cause physical injuries, such as bruising or suffocation, and may even result in death. In addition, PB can cause stress and anxiety in the birds, leading to reduced immune function and increased susceptibility to disease. Therefore, piling has been reported as one of the most concerning production issues in cage-free layer houses. Several strategies (e.g., adequate space, environmental enrichment, and genetic selection) have been proposed to prevent or mitigate PB in laying hens, but little scientific information on controlling it is available so far. The current study aimed to develop and test the performance of a novel deep-learning model for detecting PB and to evaluate its effectiveness in four cage-free (CF) laying hen facilities. To achieve this goal, the study utilized different versions of the YOLOv6 model (YOLOv6t, YOLOv6n, YOLOv6s, YOLOv6m, YOLOv6l, and YOLOv6l relu). The objectives of this study were to develop a reliable and efficient deep-learning tool for detecting PB in commercial egg-laying facilities and to test the performance of the new models in research cage-free facilities. The study used a dataset comprising 9000 images (6300 for training, 1800 for validation, and 900 for testing). The results show that the YOLOv6l relu-PB model performs exceptionally well, with high average recall (70.6%), mAP@0.50 (98.9%), and mAP@0.50:0.95 (63.7%) compared to the other models. In addition, detection performance increases when the camera is placed close to the PB areas. Thus, the newly developed YOLOv6l relu-PB model demonstrated superior performance in detecting PB in the given dataset compared to the other tested models.