The increasing computation and storage costs of convolutional neural networks (CNNs) severely hinder their deployment on resource-limited devices. As a result, there is a pressing need to accelerate these networks. In this paper, we propose a loss-driven method to prune redundant channels of CNNs. It identifies unimportant channels using a Taylor-expansion technique with respect to scaling and shifting factors, and prunes those channels by a fixed percentile threshold. By doing so, we obtain a compact network with fewer parameters and lower FLOPs consumption. In the experimental section, we evaluate the proposed method on the CIFAR datasets with several popular networks, including VGG-19, DenseNet-40, and ResNet-164; the results demonstrate that the proposed method is able to prune over 70% of channels and parameters with no performance loss. Moreover, iterative pruning can be used to obtain an even more compact network.
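The pruning criterion described above can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch illustration of scoring batch-normalization channels by a first-order Taylor expansion of the loss with respect to the scaling (gamma) and shifting (beta) factors, then masking channels below a fixed percentile. It is one plausible reading of the abstract, not the authors' released implementation; the names channel_importance, prune_mask, model, loader, and criterion are placeholders.

import numpy as np
import torch
import torch.nn as nn

def channel_importance(model, loader, criterion, device="cpu"):
    """Accumulate first-order Taylor scores for every BN channel:
    |dL/dgamma_c * gamma_c| + |dL/dbeta_c * beta_c| (an assumed criterion)."""
    bn_layers = [m for m in model.modules() if isinstance(m, nn.BatchNorm2d)]
    scores = {id(m): torch.zeros(m.num_features) for m in bn_layers}
    model.to(device).train()
    for x, y in loader:
        model.zero_grad()
        loss = criterion(model(x.to(device)), y.to(device))
        loss.backward()
        for m in bn_layers:
            # Per-channel first-order Taylor term for the BN affine factors.
            s = (m.weight.grad * m.weight).abs() + (m.bias.grad * m.bias).abs()
            scores[id(m)] += s.detach().cpu()
    return bn_layers, scores

def prune_mask(bn_layers, scores, percentile=70.0):
    """Zero out channels whose score falls below a global percentile threshold."""
    all_scores = torch.cat([scores[id(m)] for m in bn_layers])
    threshold = np.percentile(all_scores.numpy(), percentile)
    masks = {}
    for m in bn_layers:
        keep = (scores[id(m)] > threshold).float().to(m.weight.device)
        masks[id(m)] = keep
        with torch.no_grad():
            m.weight.mul_(keep)  # mask gamma
            m.bias.mul_(keep)    # mask beta
    return masks

Structurally removing the masked channels (rebuilding the convolution layers with fewer filters) and fine-tuning would follow; per the abstract, repeating this score-prune-retrain cycle iteratively yields an even more compact network.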
Xin LONG
National University of Defense Technology
Xiangrong ZENG
National University of Defense Technology
Chen CHEN
National University of Defense Technology
Huaxin XIAO
National University of Defense Technology
Maojun ZHANG
National University of Defense Technology
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Xin LONG, Xiangrong ZENG, Chen CHEN, Huaxin XIAO, Maojun ZHANG, "Loss-Driven Channel Pruning of Convolutional Neural Networks" in IEICE TRANSACTIONS on Information and Systems,
vol. E103-D, no. 5, pp. 1190-1194, May 2020, doi: 10.1587/transinf.2019EDL8200.
Abstract: The increasing computation and storage costs of convolutional neural networks (CNNs) severely hinder their deployment on resource-limited devices. As a result, there is a pressing need to accelerate these networks. In this paper, we propose a loss-driven method to prune redundant channels of CNNs. It identifies unimportant channels using a Taylor-expansion technique with respect to scaling and shifting factors, and prunes those channels by a fixed percentile threshold. By doing so, we obtain a compact network with fewer parameters and lower FLOPs consumption. In the experimental section, we evaluate the proposed method on the CIFAR datasets with several popular networks, including VGG-19, DenseNet-40, and ResNet-164; the results demonstrate that the proposed method is able to prune over 70% of channels and parameters with no performance loss. Moreover, iterative pruning can be used to obtain an even more compact network.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2019EDL8200/_p
@ARTICLE{e103-d_5_1190,
author={Xin LONG and Xiangrong ZENG and Chen CHEN and Huaxin XIAO and Maojun ZHANG},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Loss-Driven Channel Pruning of Convolutional Neural Networks},
year={2020},
volume={E103-D},
number={5},
pages={1190-1194},
abstract={The increasing computation and storage costs of convolutional neural networks (CNNs) severely hinder their deployment on resource-limited devices. As a result, there is a pressing need to accelerate these networks. In this paper, we propose a loss-driven method to prune redundant channels of CNNs. It identifies unimportant channels using a Taylor-expansion technique with respect to scaling and shifting factors, and prunes those channels by a fixed percentile threshold. By doing so, we obtain a compact network with fewer parameters and lower FLOPs consumption. In the experimental section, we evaluate the proposed method on the CIFAR datasets with several popular networks, including VGG-19, DenseNet-40, and ResNet-164; the results demonstrate that the proposed method is able to prune over 70% of channels and parameters with no performance loss. Moreover, iterative pruning can be used to obtain an even more compact network.},
keywords={},
doi={10.1587/transinf.2019EDL8200},
ISSN={1745-1361},
month={May},}
TY - JOUR
TI - Loss-Driven Channel Pruning of Convolutional Neural Networks
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 1190
EP - 1194
AU - Xin LONG
AU - Xiangrong ZENG
AU - Chen CHEN
AU - Huaxin XIAO
AU - Maojun ZHANG
PY - 2020
DO - 10.1587/transinf.2019EDL8200
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E103-D
IS - 5
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - May 2020
AB - The increasing computation and storage costs of convolutional neural networks (CNNs) severely hinder their deployment on resource-limited devices. As a result, there is a pressing need to accelerate these networks. In this paper, we propose a loss-driven method to prune redundant channels of CNNs. It identifies unimportant channels using a Taylor-expansion technique with respect to scaling and shifting factors, and prunes those channels by a fixed percentile threshold. By doing so, we obtain a compact network with fewer parameters and lower FLOPs consumption. In the experimental section, we evaluate the proposed method on the CIFAR datasets with several popular networks, including VGG-19, DenseNet-40, and ResNet-164; the results demonstrate that the proposed method is able to prune over 70% of channels and parameters with no performance loss. Moreover, iterative pruning can be used to obtain an even more compact network.
ER -