During the last decade, graph-regularized dictionary learning (DL) models have attracted considerable attention due to their flexibility and discriminative power in nonlinear pattern classification. However, conventional graph-regularized methods construct a fixed affinity matrix over nearby samples in the high-dimensional data space, which is vulnerable to noisy and redundant sample features. Furthermore, the discriminative power of the graph-regularized representation is not fully exploited within a supervised classifier-learning framework. To remedy these limitations, we propose an adaptive graph-regularized and label-embedded DL model for pattern classification. Specifically, the affinity graph constructed in the low-dimensional representation space and the discriminative sparse representation are learned simultaneously in a unified framework so that they mutually promote each other. More concretely, we iteratively update the sample similarity weight matrix in the representation space to enhance the model's robustness, and we further impose a supervised label-embedding term on the sparse representation to strengthen its discriminative capability for classification. An orthonormality constraint on the dictionary is also imposed to eliminate redundant atoms and further enhance discrimination. An efficient alternating-direction solution with guaranteed convergence is developed for the resulting nonconvex and nonsmooth model. Experimental results on five benchmark datasets verify the effectiveness of the proposed model.
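To make the alternating scheme described in the abstract concrete, the following is a minimal Python sketch of one possible realization: an affinity graph rebuilt in the code space at each iteration, a label-embedding regression term, and an orthonormal dictionary update. The objective form, the function names (`knn_affinity`, `adaptive_graph_dl`), and all parameter values are illustrative assumptions rather than the paper's exact formulation; in particular, the sparsity penalty is omitted here so that the code update reduces to a Sylvester equation.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def knn_affinity(X, k=5, sigma=1.0):
    """Gaussian k-NN affinity built in the representation space (columns of X)."""
    n = X.shape[1]
    d2 = np.square(X[:, :, None] - X[:, None, :]).sum(axis=0)   # pairwise squared distances
    A = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:k + 1]                          # k nearest neighbours, skipping self
        A[i, idx] = np.exp(-d2[i, idx] / (2 * sigma ** 2))
    return 0.5 * (A + A.T)                                        # symmetrize

def adaptive_graph_dl(Y, H, n_atoms=40, alpha=0.1, beta=1.0, lam=0.1, iters=20, seed=0):
    """Hypothetical alternating updates: Y is d x n data, H is c x n one-hot labels."""
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    D = np.linalg.qr(rng.standard_normal((d, n_atoms)))[0]        # orthonormal dictionary init
    X = D.T @ Y                                                   # initial codes
    W = 0.01 * rng.standard_normal((H.shape[0], n_atoms))         # label-embedding matrix
    for _ in range(iters):
        A = knn_affinity(X)                                       # adaptive graph in code space
        L = np.diag(A.sum(1)) - A                                 # graph Laplacian
        # X-step: (D'D + beta W'W) X + X (alpha L) = D'Y + beta W'H, a Sylvester equation
        X = solve_sylvester(D.T @ D + beta * W.T @ W, alpha * L, D.T @ Y + beta * W.T @ H)
        # D-step: orthogonal Procrustes enforces the orthonormality constraint D'D = I
        U, _, Vt = np.linalg.svd(Y @ X.T, full_matrices=False)
        D = U @ Vt
        # W-step: ridge-regularized regression of the labels onto the codes
        W = H @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(n_atoms))
    return D, X, W
```

The key point the sketch illustrates is the mutual promotion described above: the graph Laplacian is recomputed from the current codes rather than fixed in the data space, so the regularizer adapts as the representation becomes more discriminative.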