20 August 1992 Embedding domain information in backpropagation
George M. Georgiou, Cris Koutsougeras
Abstract
The search space for backpropagation (BP) is usually of high dimensionality, which slows convergence. Local minima also abound, so the danger of falling into a shallow one is great. To limit the search space of BP in a sensible way, we incorporate domain knowledge into the training process. A two-phase backpropagation algorithm is presented. In the first phase, the weight vectors of the first (and possibly only) hidden layer are constrained to remain in the same directions as, for example, those of linear discriminants or principal components; the directions are chosen based on the problem at hand. In the second phase, the constraints are removed and the standard backpropagation algorithm takes over to further minimize the error function. The first phase swiftly moves the weight vectors to a good position (relatively low error), which can serve as the initialization for standard backpropagation. Other speed-up techniques can be used in both phases. The generality of its application, its simplicity, and the shorter training time it requires make this approach attractive.
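The two-phase scheme described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' exact algorithm: the toy data, network size, learning rate, and iteration counts are all assumptions, and principal components are used as the domain-derived directions (the abstract also mentions linear discriminants). In phase 1 only the gradient component along each hidden unit's fixed direction is applied, so the weight vector can change magnitude but not direction; phase 2 drops the projection and runs standard backpropagation from the phase-1 solution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data in 2-D (hypothetical example, not from the paper).
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)), rng.normal(1.0, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)]).reshape(-1, 1)

# Domain-derived directions: unit-norm principal components of the inputs.
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
directions = Vt                    # one fixed unit direction per hidden unit

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Network: 2 inputs -> 2 hidden -> 1 output, squared-error loss.
W1 = rng.normal(0.0, 0.1, (2, 1)) * directions  # rows start along `directions`
b1 = np.zeros((1, 2))
W2 = rng.normal(0.0, 0.1, (2, 1))
b2 = np.zeros((1, 1))
lr = 0.5

def grads():
    """Full-batch backprop gradients and current mean squared error."""
    H = sigmoid(X @ W1.T + b1)
    out = sigmoid(H @ W2 + b2)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * H * (1 - H)
    n = len(X)
    return (d_h.T @ X / n, d_h.mean(0, keepdims=True),
            H.T @ d_out / n, d_out.mean(0, keepdims=True),
            np.mean((out - y) ** 2))

# Phase 1: constrained descent. Each hidden weight vector may change only
# its magnitude, so the gradient is projected onto the fixed direction.
for _ in range(500):
    gW1, gb1, gW2, gb2, _ = grads()
    gW1 = (gW1 * directions).sum(1, keepdims=True) * directions
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
W1_after_phase1 = W1.copy()
err_phase1 = grads()[-1]

# Phase 2: constraint removed; standard backpropagation on all weights,
# initialized at the phase-1 solution.
for _ in range(500):
    gW1, gb1, gW2, gb2, _ = grads()
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
err_phase2 = grads()[-1]

print(f"MSE after phase 1: {err_phase1:.4f}, after phase 2: {err_phase2:.4f}")
```

The projection step is the only difference between the two phases, which is why the phase-1 result drops straight into phase 2 as an initialization, and why other BP speed-up techniques could be layered on either phase.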
© (1992) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
George M. Georgiou and Cris Koutsougeras "Embedding domain information in backpropagation", Proc. SPIE 1706, Adaptive and Learning Systems, (20 August 1992); https://doi.org/10.1117/12.139948
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Neural networks
Image compression
Neurons
Switching
Iris
Switches
Machine learning
