The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations (e.g., some numerals may appear as "XNUMX").
Saliency detection is widely used in many vision tasks such as image retrieval, compression, and person re-identification. Deep-learning methods have achieved excellent results, but most of them focus on performance and neglect model efficiency, which makes them hard to transplant into other applications. How to design an efficient model has therefore become the main problem. In this letter, we propose the parallel feature network, a saliency model built on a convolutional neural network (CNN) by a parallel method. Parallel dilation blocks are first used to extract features from different layers of the CNN, then a parallel upsampling structure is adopted to upsample the feature maps. Finally, saliency maps are obtained by fusing summations and concatenations of feature maps. Our final model, built on VGG-16, is much smaller and faster than existing saliency models while also achieving state-of-the-art performance.
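The abstract describes the architecture only at a high level. As a rough illustration, the following is a minimal Python/PyTorch sketch of such a parallel design built on VGG-16; the tapped layers, channel widths, dilation rates, and the final fusion layer are assumptions made for illustration, not the authors' exact configuration. Here each dilation block fuses its branches by summation and the three upsampled maps are fused by concatenation, mirroring the summation-plus-concatenation fusion mentioned in the abstract.

# Minimal PyTorch sketch of a parallel-feature saliency network on VGG-16.
# Stage boundaries, channel widths, dilation rates and the fusion layer are
# illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16


class DilationBlock(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates (assumed design)."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates]
        )

    def forward(self, x):
        # Summation fusion of the parallel dilated branches.
        return sum(F.relu(b(x)) for b in self.branches)


class ParallelFeatureNet(nn.Module):
    def __init__(self, feat_ch=64):
        super().__init__()
        backbone = vgg16(pretrained=True).features
        # Tap intermediate VGG-16 feature maps at three (assumed) stage boundaries.
        self.stages = nn.ModuleList([backbone[:9], backbone[9:16], backbone[16:23]])
        self.blocks = nn.ModuleList(
            [DilationBlock(c, feat_ch) for c in (128, 256, 512)]
        )
        self.fuse = nn.Conv2d(feat_ch * 3, 1, 1)  # concatenation -> saliency map

    def forward(self, x):
        size = x.shape[-2:]
        maps, feat = [], x
        for stage, block in zip(self.stages, self.blocks):
            feat = stage(feat)
            # Parallel upsampling: each branch is resized to the input resolution
            # independently before fusion.
            maps.append(F.interpolate(block(feat), size=size,
                                      mode="bilinear", align_corners=False))
        return torch.sigmoid(self.fuse(torch.cat(maps, dim=1)))


if __name__ == "__main__":
    net = ParallelFeatureNet()
    out = net(torch.randn(1, 3, 224, 224))
    print(out.shape)  # torch.Size([1, 1, 224, 224])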
Zheng FANG
Army Engineering University
Tieyong CAO
Army Engineering University
Jibin YANG
Army Engineering University
Meng SUN
Army Engineering University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Zheng FANG, Tieyong CAO, Jibin YANG, Meng SUN, "Parallel Feature Network For Saliency Detection" in IEICE TRANSACTIONS on Fundamentals,
vol. E102-A, no. 2, pp. 480-485, February 2019, doi: 10.1587/transfun.E102.A.480.
Abstract: Saliency detection is widely used in many vision tasks like image retrieval, compression, and person re-identification. Deep-learning methods have achieved great results, but most of them focus more on performance and ignore model efficiency, which makes them hard to transplant into other applications. How to design an efficient model has therefore become the main problem. In this letter, we propose the parallel feature network, a saliency model built on a convolutional neural network (CNN) by a parallel method. Parallel dilation blocks are first used to extract features from different layers of the CNN, then a parallel upsampling structure is adopted to upsample feature maps. Finally, saliency maps are obtained by fusing summations and concatenations of feature maps. Our final model built on VGG-16 is much smaller and faster than existing saliency models and also achieves state-of-the-art performance.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E102.A.480/_p
@ARTICLE{e102-a_2_480,
author={Zheng FANG and Tieyong CAO and Jibin YANG and Meng SUN},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Parallel Feature Network For Saliency Detection},
year={2019},
volume={E102-A},
number={2},
pages={480-485},
abstract={Saliency detection is widely used in many vision tasks like image retrieval, compression, and person re-identification. Deep-learning methods have achieved great results, but most of them focus more on performance and ignore model efficiency, which makes them hard to transplant into other applications. How to design an efficient model has therefore become the main problem. In this letter, we propose the parallel feature network, a saliency model built on a convolutional neural network (CNN) by a parallel method. Parallel dilation blocks are first used to extract features from different layers of the CNN, then a parallel upsampling structure is adopted to upsample feature maps. Finally, saliency maps are obtained by fusing summations and concatenations of feature maps. Our final model built on VGG-16 is much smaller and faster than existing saliency models and also achieves state-of-the-art performance.},
keywords={},
doi={10.1587/transfun.E102.A.480},
ISSN={1745-1337},
month={February},}
TY - JOUR
TI - Parallel Feature Network For Saliency Detection
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 480
EP - 485
AU - Zheng FANG
AU - Tieyong CAO
AU - Jibin YANG
AU - Meng SUN
PY - 2019
DO - 10.1587/transfun.E102.A.480
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E102-A
IS - 2
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - February 2019
AB - Saliency detection is widely used in many vision tasks like image retrieval, compression, and person re-identification. Deep-learning methods have achieved great results, but most of them focus more on performance and ignore model efficiency, which makes them hard to transplant into other applications. How to design an efficient model has therefore become the main problem. In this letter, we propose the parallel feature network, a saliency model built on a convolutional neural network (CNN) by a parallel method. Parallel dilation blocks are first used to extract features from different layers of the CNN, then a parallel upsampling structure is adopted to upsample feature maps. Finally, saliency maps are obtained by fusing summations and concatenations of feature maps. Our final model built on VGG-16 is much smaller and faster than existing saliency models and also achieves state-of-the-art performance.
ER -