Paper | 30 March 2000

Minimum number of hidden neurons does not necessarily provide the best generalization
Jason M. Kinser
Abstract
Generalization is the quality of a feedforward neural network that allows it to correctly associate data not used in training. A common method of creating such a network is for the user to select the network architecture and allow a training algorithm to evolve the synaptic weights between the neurons. A popular belief is that the network with the fewest hidden neurons that correctly learns a sufficient training set generalizes best. This paper contradicts that belief. Optimizing generalization requires that the network not assume information that is absent from the training data. Unfortunately, a network with the minimum number of hidden neurons may be forced to make exactly such assumptions. To accommodate the minimal architecture, the network skews the surface that maps the input space to the output space, and in doing so sacrifices generalization.
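The claim can be probed empirically. The sketch below (not from the paper) trains one-hidden-layer feedforward networks of increasing width on a small, noisy sample of a smooth 1-D function and compares training error to held-out error; scikit-learn's MLPRegressor, the toy task, and the chosen layer sizes are all assumptions made purely for illustration.

# Minimal sketch (illustrative, not the paper's experiment): compare how
# feedforward networks with different hidden-layer sizes generalize on a
# toy 1-D regression task. Assumes scikit-learn is available.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Sparse, noisy training sample of a smooth target function
x_train = rng.uniform(-3, 3, size=(30, 1))
y_train = np.sin(x_train).ravel() + 0.1 * rng.standard_normal(30)

# Dense noise-free held-out sample used to probe generalization
x_test = np.linspace(-3, 3, 300).reshape(-1, 1)
y_test = np.sin(x_test).ravel()

for hidden in (2, 4, 8, 16):
    net = MLPRegressor(hidden_layer_sizes=(hidden,),
                       activation="tanh",
                       max_iter=5000,
                       random_state=0)
    net.fit(x_train, y_train)
    train_mse = mean_squared_error(y_train, net.predict(x_train))
    test_mse = mean_squared_error(y_test, net.predict(x_test))
    print(f"hidden={hidden:2d}  train MSE={train_mse:.4f}  test MSE={test_mse:.4f}")

In runs of this kind the smallest network can be forced to distort its fit to the sparse data while a somewhat larger one tracks the target more faithfully; which width generalizes best varies with the data and the training run, which is consistent with the abstract's point that the minimal architecture is not automatically the best-generalizing one.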
© 2000 Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Jason M. Kinser "Minimum number of hidden neurons does not necessarily provide the best generalization", Proc. SPIE 4055, Applications and Science of Computational Intelligence III, (30 March 2000); https://doi.org/10.1117/12.380567
CITATIONS
Cited by 3 scholarly publications.
KEYWORDS
Neurons
Network architectures
Neural networks
Associative arrays
Brain mapping
Chemical elements
Data hiding
