Pansharpening aims to enhance the spatial resolution of low-resolution multi-spectral (LRMS) images by leveraging high-resolution panchromatic images (PANs). Deep learning–based pansharpening has produced a suite of sophisticated unsupervised algorithms, but two challenges remain: (1) existing methods introduce some form of prior information to varying degrees yet lack an explicit physical prior, and (2) they often generalize poorly, performing well only on specific datasets. To address these issues, an unsupervised feature-aligned pansharpening framework based on a generative adversarial network (FaPGAN) is proposed. FaPGAN employs a single-generator, dual-discriminator structure that operates in two steps: (1) the generator adopts a pair of adaptive complementary normalization factors to weight the fusion of LRMS and PAN, capturing the successive transition from LRMS to PAN, while the dual discriminator is responsible for preserving spatial and spectral information; (2) a novel loss constrains the representations in the feature domain to enforce feature alignment, thereby improving prediction accuracy. Experimental results show that the algorithm exhibits excellent spatial and spectral fidelity when fusing images from the Gaofen-2 and WorldView-3 satellite datasets and generalizes well to reduced-resolution images.
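To make the complementary-weighting idea concrete, the following is a minimal PyTorch sketch of how a pair of adaptive weights summing to one could gate the fusion of upsampled LRMS and PAN features; the module and parameter names (`ComplementaryFusion`, `alpha_head`, `feat_ch`) are hypothetical illustrations, not the authors' actual FaPGAN generator.

```python
# Hypothetical sketch of complementary weighted fusion (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ComplementaryFusion(nn.Module):
    """Fuses upsampled LRMS and PAN features with a learned per-pixel weight
    alpha and its complement (1 - alpha), so the output can shift smoothly
    between spectral (LRMS) and spatial (PAN) content."""
    def __init__(self, ms_bands, feat_ch=32):
        super().__init__()
        self.ms_conv = nn.Conv2d(ms_bands, feat_ch, 3, padding=1)
        self.pan_conv = nn.Conv2d(1, feat_ch, 3, padding=1)
        # Predict a weight in (0, 1) from the concatenated features.
        self.alpha_head = nn.Sequential(
            nn.Conv2d(2 * feat_ch, feat_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, 1, 1),
            nn.Sigmoid(),
        )
        self.out_conv = nn.Conv2d(feat_ch, ms_bands, 3, padding=1)

    def forward(self, lrms, pan):
        # Upsample LRMS to the PAN resolution before fusing.
        ms_up = F.interpolate(lrms, size=pan.shape[-2:],
                              mode="bilinear", align_corners=False)
        ms_feat = self.ms_conv(ms_up)
        pan_feat = self.pan_conv(pan)
        alpha = self.alpha_head(torch.cat([ms_feat, pan_feat], dim=1))
        # Complementary weighting: alpha and (1 - alpha) sum to one.
        fused = alpha * ms_feat + (1.0 - alpha) * pan_feat
        return self.out_conv(fused)


# Example: a 4-band LRMS patch at 64x64 fused with a 256x256 PAN patch.
if __name__ == "__main__":
    model = ComplementaryFusion(ms_bands=4)
    lrms = torch.rand(1, 4, 64, 64)
    pan = torch.rand(1, 1, 256, 256)
    print(model(lrms, pan).shape)  # torch.Size([1, 4, 256, 256])
```

In this sketch the sharpened output would then be scored by two discriminators (one against the PAN for spatial fidelity, one against the LRMS for spectral fidelity) and by a feature-domain alignment loss, mirroring the roles described in the abstract.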