Kohei MATSUZAKI
KDDI Research, Inc.
Kazuyuki TASAKA
KDDI Research, Inc.
Hiromasa YANAGIHARA
KDDI Research, Inc.
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Kohei MATSUZAKI, Kazuyuki TASAKA, Hiromasa YANAGIHARA, "Local Feature Reliability Measure Consistent with Match Conditions for Mobile Visual Search" in IEICE Transactions on Information and Systems,
vol. E101-D, no. 12, pp. 3170-3180, December 2018, doi: 10.1587/transinf.2018EDP7107.
Abstract: We propose a feature design method for a mobile visual search based on binary features and a bag-of-visual words framework. In mobile visual search, detection error and quantization error are unavoidable due to viewpoint changes and cause performance degradation. Typical approaches to visual search extract features from a single view of reference images, though such features are insufficient to manage detection and quantization errors. In this paper, we extract features from multiview synthetic images. These features are selected according to our novel reliability measure which enables robust recognition against various viewpoint changes. We regard feature selection as a maximum coverage problem. That is, we find a finite set of features maximizing an objective function under certain constraints. As this problem is NP-hard and thus computationally infeasible, we explore approximate solutions based on a greedy algorithm. For this purpose, we propose novel constraint functions which are designed to be consistent with the match conditions in the visual search method. Experiments show that the proposed method improves retrieval accuracy by 12.7 percentage points without increasing the database size or changing the search procedure. In other words, the proposed method enables more accurate search without adversely affecting the database size, computational cost, and memory requirement.
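The abstract casts feature selection as a maximum coverage problem solved greedily. As an illustration only (the paper's reliability measure and constraint functions are not reproduced here), the standard greedy (1 − 1/e)-approximation can be sketched as follows, where a hypothetical `covers` map stands in for the set of synthetic views each candidate feature matches reliably:

```python
# Hedged sketch of greedy maximum coverage; the mapping from features to
# covered views is an illustrative stand-in, not the paper's actual measure.

def greedy_max_coverage(covers, k):
    """Select up to k features whose union of covered views is largest.

    covers: dict mapping feature id -> set of covered view ids
    k:      selection budget (e.g. a per-image feature count limit)
    """
    selected = []
    covered = set()
    remaining = dict(covers)
    for _ in range(k):
        # Pick the feature adding the most not-yet-covered views.
        best = max(remaining, key=lambda f: len(remaining[f] - covered), default=None)
        if best is None or not (remaining[best] - covered):
            break  # no candidate adds new coverage
        selected.append(best)
        covered |= remaining.pop(best)
    return selected, covered

# Toy example: three candidate features over five synthetic views.
covers = {"f1": {1, 2, 3}, "f2": {3, 4}, "f3": {4, 5}}
sel, cov = greedy_max_coverage(covers, k=2)
# sel == ["f1", "f3"], cov == {1, 2, 3, 4, 5}
```

The greedy step mirrors the abstract's framing: at each iteration the feature maximizing marginal coverage under the budget constraint is added, which is the usual approximation for this NP-hard problem.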
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2018EDP7107/_p
@ARTICLE{e101-d_12_3170,
author={Kohei MATSUZAKI and Kazuyuki TASAKA and Hiromasa YANAGIHARA},
journal={IEICE Transactions on Information and Systems},
title={Local Feature Reliability Measure Consistent with Match Conditions for Mobile Visual Search},
year={2018},
volume={E101-D},
number={12},
pages={3170-3180},
abstract={We propose a feature design method for a mobile visual search based on binary features and a bag-of-visual words framework. In mobile visual search, detection error and quantization error are unavoidable due to viewpoint changes and cause performance degradation. Typical approaches to visual search extract features from a single view of reference images, though such features are insufficient to manage detection and quantization errors. In this paper, we extract features from multiview synthetic images. These features are selected according to our novel reliability measure which enables robust recognition against various viewpoint changes. We regard feature selection as a maximum coverage problem. That is, we find a finite set of features maximizing an objective function under certain constraints. As this problem is NP-hard and thus computationally infeasible, we explore approximate solutions based on a greedy algorithm. For this purpose, we propose novel constraint functions which are designed to be consistent with the match conditions in the visual search method. Experiments show that the proposed method improves retrieval accuracy by 12.7 percentage points without increasing the database size or changing the search procedure. In other words, the proposed method enables more accurate search without adversely affecting the database size, computational cost, and memory requirement.},
keywords={},
doi={10.1587/transinf.2018EDP7107},
ISSN={1745-1361},
month={December},}
TY - JOUR
TI - Local Feature Reliability Measure Consistent with Match Conditions for Mobile Visual Search
T2 - IEICE Transactions on Information and Systems
SP - 3170
EP - 3180
AU - Kohei MATSUZAKI
AU - Kazuyuki TASAKA
AU - Hiromasa YANAGIHARA
PY - 2018
DO - 10.1587/transinf.2018EDP7107
JO - IEICE Transactions on Information and Systems
SN - 1745-1361
VL - E101-D
IS - 12
JA - IEICE Transactions on Information and Systems
Y1 - 2018/12//
AB - We propose a feature design method for a mobile visual search based on binary features and a bag-of-visual words framework. In mobile visual search, detection error and quantization error are unavoidable due to viewpoint changes and cause performance degradation. Typical approaches to visual search extract features from a single view of reference images, though such features are insufficient to manage detection and quantization errors. In this paper, we extract features from multiview synthetic images. These features are selected according to our novel reliability measure which enables robust recognition against various viewpoint changes. We regard feature selection as a maximum coverage problem. That is, we find a finite set of features maximizing an objective function under certain constraints. As this problem is NP-hard and thus computationally infeasible, we explore approximate solutions based on a greedy algorithm. For this purpose, we propose novel constraint functions which are designed to be consistent with the match conditions in the visual search method. Experiments show that the proposed method improves retrieval accuracy by 12.7 percentage points without increasing the database size or changing the search procedure. In other words, the proposed method enables more accurate search without adversely affecting the database size, computational cost, and memory requirement.
ER -