This study presents a comprehensive methodology for developing and testing a machine-learning model, built on the YOLOv8 architecture, to analyze handgun handling states in videos. Four datasets, covering ready-to-fire, low-ready, holstered, and no-handgun images, were curated and annotated for model training, validation, and testing. The YOLOv8 model was trained with varying epochs and batch sizes, demonstrating robust performance in detecting and classifying handgun poses, with an overall mean Average Precision (mAP) of 98.02%. Comparative analysis against six other handgun detection methods revealed YOLOv8's superior performance, particularly in precision and mAP. Lastly, the study emphasizes the model's effectiveness in real-world scenarios and recommends further exploration of its applications, hyperparameter optimization, continuous dataset refinement, and leveraging its strengths to enhance public safety measures.
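For context on the training and evaluation workflow summarized above, the following minimal sketch shows how a YOLOv8 detector is typically trained, validated, and applied to video with the Ultralytics Python API. The checkpoint choice, the dataset configuration file (`handgun_poses.yaml`), the video filename, and the epoch/batch values are illustrative assumptions, not the study's exact settings.

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 detection checkpoint (nano variant as an example).
model = YOLO("yolov8n.pt")

# Train on a custom dataset described by a YOLO-format data YAML
# (hypothetical config listing image paths and the four handling-state classes).
model.train(
    data="handgun_poses.yaml",  # assumed dataset config, not from the paper
    epochs=100,                 # illustrative; the study varied epochs
    batch=16,                   # illustrative; the study varied batch sizes
    imgsz=640,
)

# Evaluate on the validation split; box metrics include mAP@0.5 and mAP@0.5:0.95.
metrics = model.val()
print(metrics.box.map50, metrics.box.map)

# Run inference on a video; stream=True yields per-frame results.
for result in model.predict(source="range_footage.mp4", stream=True):
    boxes = result.boxes  # detected handgun-handling states in this frame
```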