CSPNet-Based Multi-Focus Image Fusion


Kirteke T., ASLANTAŞ V.

9th International Artificial Intelligence and Data Processing Symposium (IDAP 2025), Malatya, Türkiye, 6–7 September 2025 (Full Text Paper)

  • Publication Type: Conference Paper / Full Text Paper
  • DOI: 10.1109/idap68205.2025.11222225
  • City: Malatya
  • Country: Türkiye
  • Keywords: CSPNet, Image Fusion, Multi-Focus Image Fusion
  • Erciyes University Affiliated: Yes

Abstract

This study addresses the limited depth of field of optical imaging systems by fusing multi-focus images into a single, fully focused image. Traditional fusion methods are generally based on transform-domain techniques, which can lose information in structurally complex regions; deep learning-based models have therefore attracted significant attention in recent years. This study proposes an image fusion method built on the Cross Stage Partial Network (CSPNet) architecture, which offers both computational efficiency and high accuracy. The model was trained on the Lytro Illum dataset, which contains high-resolution images captured at different focal settings. During data preparation, a traditional Laplacian-based fusion method was used to generate fused images, which were then used to train the proposed CSPNet-based model. With this approach, a fully focused image was successfully generated from a pair of partially focused input images. Performance was evaluated with standard metrics commonly applied in image fusion tasks. According to these metrics, the proposed CSPNet-based model outperformed both classical fusion methods and existing CNN-based architectures. The results demonstrate that the proposed approach is effective and applicable in image processing and multi-focus image fusion.
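The Laplacian-based fusion step mentioned above can be illustrated with a minimal sketch. The abstract does not specify the exact variant used by the authors, so the following assumes a common formulation: local energy of the Laplacian as a per-pixel focus measure, with each output pixel taken from whichever source image is more in focus in its neighbourhood. The function names (`laplacian_energy`, `fuse`) and the window size are illustrative assumptions, not the paper's implementation.

```python
import numpy as np


def laplacian_energy(img, win=7):
    """Per-pixel focus measure: local energy of the 3x3 Laplacian.

    Assumed formulation (not taken from the paper): squared Laplacian
    response averaged over a win x win neighbourhood.
    """
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    # 3x3 discrete Laplacian via shifted views (no external dependencies)
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1])
    energy = lap ** 2
    # Box-average over win x win using a 2-D cumulative sum (integral image)
    k = win // 2
    pe = np.pad(energy, k, mode="edge")
    c = np.pad(pe.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = energy.shape
    return (c[win:win + h, win:win + w] - c[:h, win:win + w]
            - c[win:win + h, :w] + c[:h, :w]) / (win * win)


def fuse(img_a, img_b, win=7):
    """Choose, per pixel, the source whose neighbourhood is more in focus."""
    mask = laplacian_energy(img_a, win) >= laplacian_energy(img_b, win)
    return np.where(mask, img_a, img_b)
```

In a training pipeline of the kind the abstract describes, such a rule-based fusion of each partially focused pair would supply the fused images used when training the CNN; the learned model then replaces the hand-crafted focus measure at inference time.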