Application of simulated annealing to the backpropagation model improves convergence
19 August 1993
Charles B. Owen, Adel M. Abunawass
Abstract
The Backpropagation technique for supervised learning of internal representations in multi-layer artificial neural networks is an effective gradient-descent method. However, being essentially deterministic, it follows the steepest path to the nearest minimum, whether global or local. If a local minimum is reached, the network fails to learn or learns only a poor approximation of the solution. This paper describes a novel approach to the Backpropagation model based on Simulated Annealing. The modified learning model is designed to provide an effective means of escape from local minima. The system is shown to converge more reliably and much faster than traditional noise-insertion techniques. Due to the characteristics of the cooling schedule, the system also exhibits a more consistent training profile.
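The abstract does not give the paper's exact update rule or cooling schedule, but the general idea it describes can be sketched as backpropagation with Gaussian weight noise whose magnitude is scaled by a "temperature" that decays over training. The sketch below (XOR problem, geometric cooling, and all constants are illustrative assumptions, not taken from the paper) shows how annealed noise perturbs early updates, allowing escape from local minima, while later updates become effectively deterministic:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR training set: a classic task with local minima for small sigmoid networks
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2-4-1 network with small random initial weights (sizes are an assumption)
W1 = rng.normal(0.0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, (4, 1)); b2 = np.zeros(1)

lr = 0.5        # learning rate (assumed)
T = 1.0         # initial "temperature" scaling the injected noise (assumed)
cooling = 0.995 # geometric cooling factor applied once per epoch (assumed)

losses = []
for epoch in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # backward pass for mean-squared error with sigmoid units
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # gradient step plus temperature-scaled Gaussian noise on the weights;
    # as T -> 0 this reduces to plain deterministic backpropagation
    W2 -= lr * (h.T @ d_out) + T * rng.normal(0.0, 0.01, W2.shape)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h) + T * rng.normal(0.0, 0.01, W1.shape)
    b1 -= lr * d_h.sum(axis=0)

    T *= cooling  # anneal: the noise vanishes as training proceeds

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(pred.ravel(), 2))
```

The cooling schedule is what distinguishes this from fixed-magnitude noise insertion: because the perturbation shrinks systematically, late training is not disturbed, which is consistent with the abstract's claim of a more consistent training profile.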
© (1993) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Charles B. Owen and Adel M. Abunawass "Application of simulated annealing to the backpropagation model improves convergence", Proc. SPIE 1966, Science of Artificial Neural Networks II, (19 August 1993); https://doi.org/10.1117/12.152626
CITATIONS
Cited by 20 scholarly publications.
KEYWORDS
Algorithms, Stochastic processes, Chemical elements, Artificial neural networks, Neural networks, Annealing, Systems modeling
