Aiming at the problem of low automation in pig farms, this paper proposes a new pig posture estimation method based on breeding scenarios for intelligent monitoring of pig farms. Firstly, video image data of indoor and outdoor scenes in pig breeding environments were collected and labeled, and a self-constructed pig posture estimation dataset was built. Secondly, ResNet50, VGG16 and MobileNetV2 were used as backbone networks, and three keypoint extraction methods based on coordinate regression, heatmaps and simple coordinate classification were compared experimentally; the Simple Coordinate Classification (SimCC) algorithm, which performed best, was selected as the method for extracting pig keypoints. Finally, we adopted High-Resolution Network (HRNet) and HRFormer, which incorporates Transformer modules, as backbone networks and combined them with SimCC to form an effective pig pose estimation framework. The experimental results show that the mAP of HRFormer-SimCC reaches 83.2%, an average improvement of 7.2% over the traditional CNN models and 0.4% over HRNet-SimCC, while the floating-point operations and parameter count of HRFormer-SimCC are only 45.05% and 36.48% of those of HRNet-SimCC, respectively. The proposed model is therefore better suited for deployment in breeding environments and provides a theoretical basis for intelligent monitoring of pig farms.
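The core idea of SimCC mentioned above is to treat keypoint localization as two independent 1-D classification problems, one over horizontal bins and one over vertical bins, at sub-pixel resolution. As a minimal illustration, the sketch below shows only the decoding step with hypothetical logit shapes (`K` keypoints, a splitting factor `k` that subdivides each pixel); the classification head and its training loss from the paper are omitted:

```python
import numpy as np

def simcc_decode(x_logits, y_logits, split_factor=2.0):
    """Decode SimCC per-axis classification logits into (x, y) coordinates.

    x_logits: (K, W*k) horizontal-bin logits for K keypoints.
    y_logits: (K, H*k) vertical-bin logits.
    split_factor: k, the number of bins per pixel; dividing the winning
    bin index by k recovers sub-pixel image coordinates.
    """
    xs = np.argmax(x_logits, axis=1) / split_factor
    ys = np.argmax(y_logits, axis=1) / split_factor
    return np.stack([xs, ys], axis=1)  # (K, 2) predicted keypoints
```

Compared with heatmap decoding, this avoids the quantization error of a coarse 2-D grid, which is one reason the paper finds it the most effective of the three keypoint extraction methods compared.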
Pork is the most widely consumed meat in China, and its stable supply is closely tied to people's livelihoods, so the health of pigs in breeding enterprises is particularly important. By monitoring pig behavior, diseases can be detected early and interventions made in time, reducing enterprise losses and helping to ensure a stable market supply of pork. This paper presents an improved YOLOv5 pig behavior recognition method that automatically recognizes five pig behaviors: standing, ventral lying, lateral lying, sitting and climbing. Firstly, a branch is added to the original C3 module of the YOLOv5 network structure to extract more original features. Secondly, the Convolutional Block Attention Module (CBAM) attention mechanism is introduced and integrated with the C3 module to obtain a new CBAMC3 module, which enhances the model's ability to recognize occluded targets. Meanwhile, the neck module of You Only Look Once (YOLO) v5 is improved and a Cneck module is proposed: by adding a feature fusion layer, the neck obtains more low-level image features, provides more image features to the prediction layer, and further enhances recognition capability. The improved YOLOv5 model was tested on the pig behavior dataset built in this study; the results show that the recognition accuracies for the five behaviors on the validation set were 99.1%, 95.3%, 97.4%, 88.7% and 99.5%, respectively, with an average accuracy of 96.0%, which is 1.2% higher than that of the baseline YOLOv5 model. The proposed method is therefore advantageous and well suited to practical applications.
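The CBAM block integrated into the C3 module above applies channel attention followed by spatial attention to a feature map. The following is a minimal NumPy sketch of that two-stage weighting, with hypothetical weight shapes and, for brevity, a 1×1 convolution in the spatial branch in place of the 7×7 kernel commonly used in CBAM; it is an illustration of the mechanism, not the paper's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbam(x, w1, w2, w_spatial):
    """Minimal CBAM sketch on a (C, H, W) feature map.

    w1: (C//r, C) and w2: (C, C//r) form the shared channel-attention MLP
    (r is the reduction ratio); w_spatial: (2,) weights of a 1x1 conv over
    the stacked channel-wise avg/max maps (a simplification of the 7x7 conv).
    """
    # Channel attention: avg- and max-pooled descriptors through a shared MLP.
    avg = x.mean(axis=(1, 2))  # (C,)
    mx = x.max(axis=(1, 2))    # (C,)
    ca = sigmoid(w2 @ np.maximum(w1 @ avg, 0) + w2 @ np.maximum(w1 @ mx, 0))
    x = x * ca[:, None, None]  # reweight channels
    # Spatial attention: channel-wise avg/max maps combined and squashed.
    sa = sigmoid(w_spatial[0] * x.mean(axis=0) + w_spatial[1] * x.max(axis=0))
    return x * sa[None, :, :]  # reweight spatial locations
```

Because both attention maps lie in (0, 1), the block suppresses uninformative channels and locations rather than amplifying them, which is consistent with the paper's motivation of improving recognition of occluded pigs.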