Overfitting is a common problem when training neural networks on small training sets, and it degrades performance on new samples. Dropout has proved to be an effective method for avoiding overfitting: it prevents co-adaptation of feature detectors by randomly discarding nodes from the hidden layers of a network. Inspired by dropout, we propose a ranked dropout method that removes the randomness of the standard dropout mask; it discards a portion of the most active nodes and forces the inactive nodes to learn more features, improving generalization ability. We apply the proposed ranked dropout to a stacked autoencoder network and compare it with standard dropout, Gaussian dropout, uniform dropout, and DropConnect on the MNIST dataset. Experimental results on handwritten digit recognition demonstrate that the ranked strategy leads to better classification performance and that the proposed ranked dropout effectively reduces overfitting and improves the model's generalization ability.
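The abstract describes ranking nodes by activity and dropping the active ones rather than a random subset. The sketch below is a minimal, hypothetical illustration of that idea (the paper's exact ranking criterion and scaling are not given here; magnitude ranking and inverted-dropout scaling are assumptions):

```python
import numpy as np

def ranked_dropout(activations, drop_rate=0.5):
    """Hypothetical sketch of ranked dropout: instead of dropping
    units at random, zero out the most active units so the remaining
    (less active) units are forced to learn useful features."""
    a = np.asarray(activations, dtype=float)
    n_drop = int(round(drop_rate * a.size))
    # Indices of the n_drop largest-magnitude activations (assumed criterion)
    drop_idx = np.argsort(np.abs(a))[a.size - n_drop:]
    mask = np.ones_like(a)
    mask[drop_idx] = 0.0
    # Rescale kept activations, as in inverted dropout (assumption)
    keep_prob = 1.0 - drop_rate
    return a * mask / keep_prob

out = ranked_dropout([0.1, 0.9, 0.3, 0.7], drop_rate=0.5)
print(out)  # the two largest activations (0.9 and 0.7) are zeroed
```

Unlike standard dropout, the mask here is deterministic given the activations, which is the "removal of randomness" the abstract refers to.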