Methods for Ground Target Recognition from an Aerial Camera on a Helicopter Using the MISU-YOLOv8 Model in Dark and Foggy Environments

Authors

  • Houbin Wang, School of Resources and Environmental Engineering, Ludong University, Yantai, Shandong Province, China
  • Yongwei Wang, Yuanzhuang Town People's Government, Wenshang County, Jining, Shandong Province, China
  • Junyi Liu, School of Mechanical Engineering, Liaoning Technical University, Fuxin, Liaoning Province, China
  • Jianing Chang, School of Resources and Environmental Engineering, Ludong University, Yantai, Shandong Province, China
  • Huanran Shu, School of Resources and Environmental Engineering, Ludong University, Yantai, Shandong Province, China
  • Kaidi Sun, School of Resources and Environmental Engineering, Ludong University, Yantai, Shandong Province, China

DOI:

https://doi.org/10.71451/ISTAER2511

Keywords:

YOLOv8, multi-scale block, inverted residual mobile, self-calibrated illumination, dehazing, object detection

Abstract

Helicopters are critical aerial platforms, and their ability to operate in complex environments is essential. Their performance in dark and foggy conditions is limited, however, particularly for ground target recognition with onboard cameras, owing to poor visibility and lighting. To address this issue, we propose MISU-YOLOv8, a YOLOv8-based model enhanced for ground target recognition in dark and foggy environments. The multi-scale (MS) block is a feature fusion module that improves generalization by extracting features at different scales. The inverted Residual Mobile Block (iRMB) incorporates attention mechanisms to strengthen feature representation. SCINet, a self-calibrated illumination network based on spatial-channel attention, adaptively adjusts feature-map weights to improve robustness in low light. UnfogNet, a dehazing algorithm, enhances image clarity by removing fog; unlike traditional dehazing models, AOD-Net generates clean images directly via a lightweight CNN, making it easy to integrate into other deep models. Together, these modules significantly improve ground target recognition. MISU-YOLOv8 outperforms recent state-of-the-art real-time object detectors, including YOLOv7 and YOLOv8, with fewer parameters and FLOPs, raising YOLOv8's Average Precision (AP) from 37% to over 41%. The proposed modules can also serve as plug-and-play components for other YOLO models, and this advancement provides robust technical support for helicopter reconnaissance missions in complex environments.
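As a concrete illustration of the dehazing stage described above, the sketch below reconstructs the AOD-Net formulation in PyTorch: a five-layer lightweight CNN estimates a single map K(x) and recovers the clean image as J(x) = K(x)·I(x) − K(x) + b. This is a minimal sketch following the published AOD-Net design, not the authors' UnfogNet implementation; the class name AODNetSketch and the exact layer widths are illustrative assumptions.

```python
# Minimal PyTorch sketch of the AOD-Net dehazing formulation referenced in the
# abstract. Illustrative reconstruction only, not the authors' UnfogNet code.
import torch
import torch.nn as nn

class AODNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Five narrow conv layers keep the network lightweight.
        self.conv1 = nn.Conv2d(3, 3, kernel_size=1)
        self.conv2 = nn.Conv2d(3, 3, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(6, 3, kernel_size=5, padding=2)   # input: cat(x1, x2)
        self.conv4 = nn.Conv2d(6, 3, kernel_size=7, padding=3)   # input: cat(x2, x3)
        self.conv5 = nn.Conv2d(12, 3, kernel_size=3, padding=1)  # input: cat(x1..x4)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, hazy):
        # Multi-scale feature extraction with skip concatenations.
        x1 = self.relu(self.conv1(hazy))
        x2 = self.relu(self.conv2(x1))
        x3 = self.relu(self.conv3(torch.cat([x1, x2], dim=1)))
        x4 = self.relu(self.conv4(torch.cat([x2, x3], dim=1)))
        k = self.relu(self.conv5(torch.cat([x1, x2, x3, x4], dim=1)))
        # All-in-one reformulation: J(x) = K(x) * I(x) - K(x) + b, with b = 1.
        return self.relu(k * hazy - k + 1)

# Usage: dehaze a batch of hazy frames before passing them to the detector.
# hazy_batch: tensor of shape (N, 3, H, W) with values in [0, 1].
# clean = AODNetSketch()(hazy_batch)
```

Because the dehazing network is this small, it can be prepended to a YOLO-style detector and trained or fine-tuned end to end, which is what makes the "easily integrable" claim in the abstract plausible.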


Acknowledgements

We thank the following projects for their data support: the National-level Innovation Program project "Research on Seedling Inspection Robot Technology Based on Multi-source Information Fusion and Deep Network" (No. 202410451009); the Jiangsu Provincial Natural Science Research General Project (No. 20KJB530008); the China Society for Smart Engineering project "Research on Intelligent Internet of Things Devices and Control Program Algorithms Based on Multi-source Data Analysis" (No. ZHGC104432); the China Engineering Management Association project "Comprehensive Application Research on Intelligent Robots and Intelligent Equipment Based on Big Data and Deep Learning" (No. GMZY2174); the Key Project of the National Science and Information Technology Department Research Center, National Science and Technology Development Research Plan (No. KXJS71057); and the Key Project of the National Science and Technology Support Program of the Ministry of Agriculture (No. NYF251050).

Author Biographies

  • Houbin Wang, School of Resources and Environmental Engineering, Ludong University, Yantai, Shandong Province, China

    I am a student at Ludong University, and my main research direction is computer vision algorithms and applications

  • Yongwei Wang, Yuanzhuang Town People's Government, Wenshang County, Jining, Shandong Province, China

    I am an employee of the Yuanzhuang Town People's Government in Wenshang County, and my main research directions are economics and urban planning

  • Junyi Liu, School of Mechanical Engineering, Liaoning Technical University, Fuxin, Liaoning Province, China

    I am a graduate student at the School of Mechanical Engineering, Liaoning Technical University, and my main research direction is computer algorithms

  • Jianing Chang, School of Resources and Environmental Engineering, Ludong University, Yantai, Shandong Province, China

    I am a student at Ludong University, and my main research direction is computer vision algorithms and applications

  • Huanran Shu, School of Resources and Environmental Engineering, Ludong University, Yantai, Shandong Province, China

    I am a student at Ludong University, and my main research direction is computer vision algorithms and applications

  • Kaidi Sun, School of Resources and Environmental Engineering, Ludong University, Yantai, Shandong Province, China

    I am a student at Ludong University, and my main research direction is computer vision algorithms and applications


Published

2025-03-05

Data Availability Statement

The article includes data supporting the results of this research. The dataset is available at https://drive.google.com/drive/folders/1UdlgHk49iu6WpcJ5467iT-UqNPpx__CC.

Section

Research Article

How to Cite

Methods for Ground Target Recognition from an Aerial Camera on a Helicopter Using the MISU-YOLOv8 Model in Dark and Foggy Environments. (2025). International Scientific Technical and Economic Research, 127-143. https://doi.org/10.71451/ISTAER2511
