Paper
Complexity-aware loss function for fast neural networks with early exits
31 January 2020
Lev Teplyakov, Sergey Gladilin, and Evgeny Shvets
Proceedings Volume 11433, Twelfth International Conference on Machine Vision (ICMV 2019); 114333I (2020) https://doi.org/10.1117/12.2557077
Event: Twelfth International Conference on Machine Vision, 2019, Amsterdam, Netherlands
Abstract
Most modern convolutional neural networks (CNNs) are compute-intensive, making them infeasible to use on mobile or embedded devices. One approach to this problem is to modify a conventional deep CNN with shallow early-exit branches appended to some of its convolutional layers [1]. This modification, named BranchyNet, allows simple input samples to be processed without performing the full volume of computation, providing a speed-up on average. In this work we consider the problem of training a BranchyNet. We exploit a cascade loss function [2], which explicitly regularizes the CNN's average computation time, and modify it to use the entropy of a branch's prediction as the confidence measure. We show that on the CIFAR-10 dataset the proposed loss function increases the actual speed-up from 43% to 47% without quality degradation, compared with the original loss function.
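To make the early-exit mechanism concrete, here is a minimal PyTorch-style sketch of a toy BranchyNet-like model: a shallow branch classifier is attached after an early convolutional stage, and at inference a low-entropy (confident) branch prediction skips the rest of the network. The architecture, layer sizes, and the entropy_threshold value are illustrative assumptions, not the configuration used in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyBranchyNet(nn.Module):
    # Toy CNN with one early-exit branch; all sizes are illustrative.
    def __init__(self, num_classes=10, entropy_threshold=0.5):
        super().__init__()
        self.stage1 = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Shallow early-exit branch appended after the first stage.
        self.branch1 = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_classes),
        )
        self.stage2 = nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
        )
        self.entropy_threshold = entropy_threshold

    @staticmethod
    def entropy(logits):
        # Per-sample Shannon entropy of the softmax distribution.
        log_p = F.log_softmax(logits, dim=1)
        return -(log_p.exp() * log_p).sum(dim=1)

    def forward(self, x):
        h = self.stage1(x)
        early = self.branch1(h)
        if not self.training:
            # Whole-batch check for brevity; a real implementation
            # would route each sample independently.
            if self.entropy(early).max() < self.entropy_threshold:
                return early        # confident: skip the rest of the network
            return self.stage2(h)
        # Training: return every exit so a joint loss can use all of them.
        return early, self.stage2(h)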
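The paper's cascade loss from [2] is not reproduced in this abstract, so the following is only a hypothetical sketch of the idea it describes: a classification term over all exits plus a penalty on expected computation, where the normalized prediction entropy of each branch drives the probability that a sample continues to deeper, more expensive stages. The function and parameter names (complexity_aware_loss, branch_costs, lam) are assumptions for illustration.

import math
import torch
import torch.nn.functional as F

def prediction_entropy(logits):
    # Per-sample Shannon entropy of the softmax distribution over classes.
    log_p = F.log_softmax(logits, dim=1)
    return -(log_p.exp() * log_p).sum(dim=1)

def complexity_aware_loss(branch_logits, target, branch_costs, lam=0.1):
    # branch_logits: list of (N, C) logit tensors, ordered shallow to deep.
    # branch_costs: incremental compute cost (e.g. FLOPs) of reaching each exit.
    # Classification term: every exit is trained, as in BranchyNet.
    ce = sum(F.cross_entropy(lg, target) for lg in branch_logits)
    # Complexity term: expected cost under an entropy-driven "continue" probability.
    num_classes = branch_logits[0].shape[1]
    max_entropy = math.log(num_classes)
    p_reach = torch.ones(target.shape[0], device=target.device)
    expected_cost = torch.zeros(target.shape[0], device=target.device)
    for logits, cost in zip(branch_logits, branch_costs):
        expected_cost = expected_cost + p_reach * cost
        h = prediction_entropy(logits) / max_entropy   # normalized to [0, 1]
        p_reach = p_reach * h                          # high entropy: go deeper
    return ce + lam * expected_cost.mean()

# Usage with the toy model above (training mode returns all exits):
model = TinyBranchyNet()
early, final = model(torch.randn(8, 3, 32, 32))
target = torch.randint(0, 10, (8,))
loss = complexity_aware_loss([early, final], target, branch_costs=[1.0, 4.0])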
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Lev Teplyakov, Sergey Gladilin, and Evgeny Shvets "Complexity-aware loss function for fast neural networks with early exits", Proc. SPIE 11433, Twelfth International Conference on Machine Vision (ICMV 2019), 114333I (31 January 2020); https://doi.org/10.1117/12.2557077