In this paper, we propose a deep visual recognition model based on a hybrid KPCA network (H-KPCANet), which combines a one-stage KPCANet and a two-stage KPCANet. The proposed model consists of four types of basic components: the input layer, the one-stage KPCANet, the two-stage KPCANet, and the fusion layer. The role of the one-stage KPCANet is to calculate the KPCA filters for its convolution layer, while the two-stage KPCANet learns PCA filters in the first stage and KPCA filters in the second stage. After binary quantization mapping and block-wise histogramming, the features from the two different types of KPCANet are fused in the fusion layer. The final feature of the input image is obtained by a weighted serial combination of the two types of features. The performance of the proposed algorithm is tested on digit recognition and object classification, and experimental results on the MNIST and CIFAR-10 visual recognition benchmarks validate the performance of the proposed H-KPCANet.
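For context, the sketch below (not taken from the paper) illustrates how convolution filters are typically learned from image patches in the PCANet/KPCANet family referred to in the abstract: patches are collected, their means are removed, and the leading eigenvectors of the patch covariance are reshaped into filters. The function names, patch size, and filter count are illustrative assumptions; the paper's KPCA branch would replace the linear eigendecomposition with a kernelized variant, which is not shown here.

```python
# Minimal sketch of patch-based filter learning in the PCANet/KPCANet family.
# All names and parameter values are illustrative, not taken from the paper.
import numpy as np

def extract_patches(img, k):
    """Collect every k x k patch of a 2-D image as a column vector."""
    H, W = img.shape
    cols = [img[i:i + k, j:j + k].reshape(-1)
            for i in range(H - k + 1)
            for j in range(W - k + 1)]
    return np.stack(cols, axis=1)                        # shape: (k*k, num_patches)

def learn_pca_filters(images, k=5, num_filters=8):
    """Learn convolution filters as the leading eigenvectors of the patch
    covariance (the linear PCA stage); the KPCA stage described in the
    abstract would replace this eigendecomposition with a kernelized one."""
    cols = []
    for img in images:
        P = extract_patches(img, k)
        cols.append(P - P.mean(axis=0, keepdims=True))   # remove each patch's mean
    X = np.concatenate(cols, axis=1)
    C = X @ X.T / X.shape[1]                             # (k*k, k*k) patch covariance
    _, eigvecs = np.linalg.eigh(C)                       # eigenvalues in ascending order
    leading = eigvecs[:, ::-1][:, :num_filters]          # top principal directions
    return [leading[:, f].reshape(k, k) for f in range(num_filters)]
```

Under these assumptions, a call such as learn_pca_filters(list_of_28x28_float_arrays) would return num_filters small 2-D kernels that can be used directly as convolution filters in the first stage.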
Feng YANG
University of Electronic Science and Technology of China, Wenzhou Medical University
Zheng MA
University of Electronic Science and Technology of China
Mei XIE
University of Electronic Science and Technology of China
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Feng YANG, Zheng MA, Mei XIE, "Visual Recognition Method Based on Hybrid KPCA Network" in IEICE TRANSACTIONS on Information and Systems,
vol. E103-D, no. 9, pp. 2015-2018, September 2020, doi: 10.1587/transinf.2020EDL8041.
Abstract: In this paper, we propose a deep visual recognition model based on a hybrid KPCA network (H-KPCANet), which combines a one-stage KPCANet and a two-stage KPCANet. The proposed model consists of four types of basic components: the input layer, the one-stage KPCANet, the two-stage KPCANet, and the fusion layer. The role of the one-stage KPCANet is to calculate the KPCA filters for its convolution layer, while the two-stage KPCANet learns PCA filters in the first stage and KPCA filters in the second stage. After binary quantization mapping and block-wise histogramming, the features from the two different types of KPCANet are fused in the fusion layer. The final feature of the input image is obtained by a weighted serial combination of the two types of features. The performance of the proposed algorithm is tested on digit recognition and object classification, and experimental results on the MNIST and CIFAR-10 visual recognition benchmarks validate the performance of the proposed H-KPCANet.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2020EDL8041/_p
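The output stage described in the abstract, binary quantization mapping followed by block-wise histograms and a weighted serial combination of the two branches' features, can be sketched roughly as follows. The block size, the weight alpha, and the function names are illustrative assumptions and are not values from the paper.

```python
# Rough sketch of binary hashing, block-wise histograms, and weighted serial
# fusion as described in the abstract; block size and weight are assumptions.
import numpy as np

def binary_quantize(feature_maps):
    """Binarize each filter response with a Heaviside step and pack the
    resulting bit-planes into a single integer-coded map."""
    bits = [(fm > 0).astype(np.int64) for fm in feature_maps]
    return sum((2 ** i) * b for i, b in enumerate(bits))

def blockwise_histogram(code_map, num_filters, block=7):
    """Histogram the integer codes over non-overlapping blocks and
    concatenate the per-block histograms into one feature vector."""
    H, W = code_map.shape
    num_bins = 2 ** num_filters
    hists = []
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            codes = code_map[i:i + block, j:j + block].ravel()
            hists.append(np.bincount(codes, minlength=num_bins))
    return np.concatenate(hists).astype(np.float64)

def weighted_serial_fusion(feat_one_stage, feat_two_stage, alpha=0.5):
    """Weighted serial (concatenation-based) combination of the features
    from the two branches; alpha is an illustrative weight."""
    return np.concatenate([alpha * feat_one_stage, (1.0 - alpha) * feat_two_stage])
```

In this reading, each branch produces a histogram feature via the first two steps, and weighted_serial_fusion concatenates the two branch features into the final descriptor that would be fed to a classifier.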
@ARTICLE{e103-d_9_2015,
author={Feng YANG and Zheng MA and Mei XIE},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Visual Recognition Method Based on Hybrid KPCA Network},
year={2020},
volume={E103-D},
number={9},
pages={2015-2018},
abstract={In this paper, we propose a deep model of visual recognition based on hybrid KPCA Network (H-KPCANet), which is based on the combination of one-stage KPCANet and two-stage KPCANet. The proposed model consists of four types of basic components: the input layer, one-stage KPCANet, two-stage KPCANet and the fusion layer. The role of one-stage KPCANet is to calculate the KPCA filters for the convolution layer, and that of two-stage KPCANet is to learn PCA filters in the first stage and KPCA filters in the second stage. After binary quantization mapping and block-wise histogram, the features from two different types of KPCANets are fused in the fusion layer. The final feature of the input image can be achieved by weighted serial combination of the two types of features. The performance of our proposed algorithm is tested on digit recognition and object classification, and the experimental results on visual recognition benchmarks of MNIST and CIFAR-10 validated the performance of the proposed H-KPCANet.},
keywords={},
doi={10.1587/transinf.2020EDL8041},
ISSN={1745-1361},
month={September},}
TY - JOUR
TI - Visual Recognition Method Based on Hybrid KPCA Network
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 2015
EP - 2018
AU - Feng YANG
AU - Zheng MA
AU - Mei XIE
PY - 2020
DO - 10.1587/transinf.2020EDL8041
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E103-D
IS - 9
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - September 2020
AB - In this paper, we propose a deep model of visual recognition based on hybrid KPCA Network (H-KPCANet), which is based on the combination of one-stage KPCANet and two-stage KPCANet. The proposed model consists of four types of basic components: the input layer, one-stage KPCANet, two-stage KPCANet and the fusion layer. The role of one-stage KPCANet is to calculate the KPCA filters for the convolution layer, and that of two-stage KPCANet is to learn PCA filters in the first stage and KPCA filters in the second stage. After binary quantization mapping and block-wise histogram, the features from two different types of KPCANets are fused in the fusion layer. The final feature of the input image can be achieved by weighted serial combination of the two types of features. The performance of our proposed algorithm is tested on digit recognition and object classification, and the experimental results on visual recognition benchmarks of MNIST and CIFAR-10 validated the performance of the proposed H-KPCANet.
ER -