Multi-focus image fusion based on optimal defocus estimation


ASLANTAŞ V., TOPRAK A. N.

COMPUTERS & ELECTRICAL ENGINEERING, vol.62, pp.302-318, 2017 (Journal Indexed in SCI)

  • Volume: 62
  • Publication Date: 2017
  • DOI: 10.1016/j.compeleceng.2017.02.003
  • Journal Name: COMPUTERS & ELECTRICAL ENGINEERING
  • Page Numbers: pp.302-318

Abstract

One of the main drawbacks of imaging systems is their limited depth of field, which prevents them from obtaining an all-in-focus image of a scene. This paper presents an efficient, pixel-based multi-focus image fusion method that generates an all-in-focus image by combining images acquired from the same point of view with different focus settings. The proposed method first estimates the point spread function of each source image using the Levenberg-Marquardt algorithm. Artificially blurred versions of the source images are then computed by convolving them with the estimated point spread functions. The fusion map is computed from both the source images and their artificially blurred counterparts, and is finally refined with morphological operators. Experimental results show that the proposed method is computationally competitive with state-of-the-art methods and outperforms them in both visual and quantitative metric evaluations. (C) 2017 Elsevier Ltd. All rights reserved.
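The pipeline in the abstract (PSF estimation via Levenberg-Marquardt, artificial re-blurring, activity-based fusion map, morphological refinement) can be sketched as follows. This is not the authors' implementation: it assumes a Gaussian PSF parameterized only by its width, a crude global least-squares fit of that width between the two source images, and a simple local-activity comparison for the fusion map. The function names (`estimate_blur_sigma`, `fuse`) are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, binary_opening, binary_closing
from scipy.optimize import least_squares

def estimate_blur_sigma(defocused, reference, sigma0=1.5):
    """Fit a Gaussian PSF width with Levenberg-Marquardt so that blurring
    `reference` best matches `defocused` (a crude, global stand-in for the
    paper's per-image PSF estimation)."""
    def residual(p):
        return (gaussian_filter(reference, abs(p[0])) - defocused).ravel()
    sol = least_squares(residual, x0=[sigma0], method="lm")
    return max(abs(sol.x[0]), 0.5)

def fuse(img_a, img_b, struct=np.ones((5, 5), bool)):
    """Fuse two registered source images with complementary focus."""
    # 1) Estimate a PSF width for each source image, using the other
    #    image as the (partially) sharp reference.
    sigma_a = estimate_blur_sigma(img_a, img_b)
    sigma_b = estimate_blur_sigma(img_b, img_a)
    # 2) Artificially blur each source with its estimated PSF.
    blurred_a = gaussian_filter(img_a, sigma_a)
    blurred_b = gaussian_filter(img_b, sigma_b)
    # 3) Fusion map from source + artificially blurred images: re-blurring
    #    changes in-focus regions much more than already-defocused ones,
    #    so take each pixel from the image whose local change is larger.
    act_a = gaussian_filter(np.abs(img_a - blurred_a), 3.0)
    act_b = gaussian_filter(np.abs(img_b - blurred_b), 3.0)
    fmap = act_a >= act_b
    # 4) Refine the binary map with morphological opening and closing.
    fmap = binary_closing(binary_opening(fmap, struct), struct)
    return np.where(fmap, img_a, img_b), fmap
```

A typical use would pass two grayscale float arrays of the same scene, one focused on the foreground and one on the background; the returned map marks which source each fused pixel came from.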
