Research on Recyclable and Hazardous Waste Detection Methods Based on YOLOv8
DOI: https://doi.org/10.62051/ijcsit.v6n1.11

Keywords: Garbage classification, Object detection, YOLOv8, Recyclable garbage, Hazardous waste

Abstract
Garbage classification plays a crucial role in modern environmental governance. However, traditional manual sorting and machine-learning methods built on hand-crafted features generalize poorly in complex environments; detection of recyclable and hazardous waste in particular is easily degraded by illumination changes, morphological similarity between classes, and background interference, leading to low detection accuracy. This paper presents a recyclable and hazardous waste detection method based on YOLOv8. The method integrates data augmentation, class-imbalance handling, multi-scale training, and optimized non-maximum suppression (NMS) to improve the model's robustness and detection accuracy in complex scenes: the class-imbalance problem is alleviated by optimizing the loss function, and multi-scale training enhances the model's adaptability to objects of varying size, yielding an efficient and practical garbage-classification detection system. Experimental results show that, compared with traditional approaches such as HOG+SVM and generic CNN classifiers, the proposed system achieves higher detection accuracy and stronger generalization while maintaining real-time performance, and performs particularly well on the hazardous and recyclable waste categories.
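As a concrete illustration of the pipeline outlined in the abstract, the following is a minimal training sketch using the Ultralytics YOLOv8 Python API. The dataset file garbage.yaml, the checkpoint choice, and every hyper-parameter value are illustrative assumptions, not the authors' reported settings.

from ultralytics import YOLO  # pip install ultralytics

# Hypothetical setup: fine-tune a pretrained YOLOv8 model on a garbage dataset
# covering recyclable and hazardous classes. All values below are placeholders.
model = YOLO("yolov8s.pt")
model.train(
    data="garbage.yaml",                 # hypothetical dataset config with class names
    epochs=100,
    imgsz=640,
    # Data augmentation to improve robustness to illumination and background changes
    hsv_h=0.015, hsv_s=0.7, hsv_v=0.4,   # colour jitter approximating lighting shifts
    degrees=10.0, translate=0.1, scale=0.5, fliplr=0.5,
    mosaic=1.0, mixup=0.1,               # multi-image augmentation
    cls=1.0,                             # classification-loss gain; raise if rare classes lag
    # multi_scale=True,  # recent Ultralytics releases expose this flag; if yours
    #                    # does not, emulate multi-scale training by varying imgsz
)

The abstract also mentions optimized non-maximum suppression. One common realisation is Soft-NMS (Gaussian score decay instead of hard suppression); the sketch below is a self-contained NumPy version and is not necessarily the exact variant used in the paper.

import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay the scores of overlapping boxes rather than discard them.
    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    Returns indices of kept boxes in order of selection."""
    boxes = boxes.astype(float)
    scores = scores.astype(float).copy()
    idxs = np.arange(len(scores))
    keep = []
    while len(idxs) > 0:
        top = int(np.argmax(scores[idxs]))
        best = idxs[top]
        keep.append(int(best))
        idxs = np.delete(idxs, top)
        if len(idxs) == 0:
            break
        # IoU between the selected box and all remaining boxes
        x1 = np.maximum(boxes[best, 0], boxes[idxs, 0])
        y1 = np.maximum(boxes[best, 1], boxes[idxs, 1])
        x2 = np.minimum(boxes[best, 2], boxes[idxs, 2])
        y2 = np.minimum(boxes[best, 3], boxes[idxs, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_best = (boxes[best, 2] - boxes[best, 0]) * (boxes[best, 3] - boxes[best, 1])
        area_rest = (boxes[idxs, 2] - boxes[idxs, 0]) * (boxes[idxs, 3] - boxes[idxs, 1])
        iou = inter / (area_best + area_rest - inter + 1e-9)
        scores[idxs] *= np.exp(-(iou ** 2) / sigma)  # Gaussian decay of overlapping scores
        idxs = idxs[scores[idxs] > score_thresh]     # drop boxes whose score has collapsed
    return keep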
License
Copyright (c) 2025 International Journal of Computer Science and Information Technology

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.