Genki OSADA
Philips Co-Creation Center, University of Tsukuba, I Dragon Corporation
Budrul AHSAN
Philips Co-Creation Center, The Tokyo Foundation for Policy Research
Revoti PRASAD BORA
Lowe's Services India Pvt. Ltd.
Takashi NISHIDE
University of Tsukuba
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Genki OSADA, Budrul AHSAN, Revoti PRASAD BORA, Takashi NISHIDE, "Latent Space Virtual Adversarial Training for Supervised and Semi-Supervised Learning" in IEICE TRANSACTIONS on Information,
vol. E105-D, no. 3, pp. 667-678, March 2022, doi: 10.1587/transinf.2021EDP7161.
Abstract: Virtual Adversarial Training (VAT) has shown impressive results among recently developed regularization methods called consistency regularization. VAT utilizes adversarial samples, generated by injecting perturbation in the input space, for training and thereby enhances the generalization ability of a classifier. However, such adversarial samples can be generated only within a very small area around the input data point, which limits the adversarial effectiveness of such samples. To address this problem, we propose LVAT (Latent space VAT), which injects perturbation in the latent space instead of the input space. LVAT can generate adversarial samples flexibly, resulting in a stronger adversarial effect and thus more effective regularization. The latent space is built by a generative model, and in this paper we examine two different types of models: the variational auto-encoder and the normalizing flow, specifically Glow. We evaluated the performance of our method in both supervised and semi-supervised learning scenarios for an image classification task using the SVHN and CIFAR-10 datasets. In our evaluation, we found that our method outperforms VAT and other state-of-the-art methods.
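The abstract's core idea — perturb the latent code rather than the input, decode it, and penalize any change in the classifier's prediction — can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the linear "decoder" and "classifier" are stand-in matrices, and the perturbation direction is random, whereas VAT-style methods choose it adversarially (via power iteration on the gradient of the divergence). All names here (`decode`, `classify`, `W_dec`, `W_clf`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's models): a linear
# "decoder" g(z) mapping latent -> input space, and a linear-softmax
# "classifier" f(x). In LVAT these would be a VAE/Glow decoder and a CNN.
W_dec = rng.normal(size=(4, 8))   # latent dim 4 -> input dim 8
W_clf = rng.normal(size=(8, 3))   # input dim 8 -> 3 classes

def decode(z):
    return z @ W_dec

def classify(x):
    logits = x @ W_clf
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # KL divergence between two categorical distributions, row-wise.
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

# One consistency-regularization step: perturb the *latent* code z
# (not the input x), decode it back to input space, and measure how
# much the classifier's prediction moves.
z = rng.normal(size=(1, 4))        # latent code of one sample
p_clean = classify(decode(z))

r = rng.normal(size=z.shape)       # random direction in latent space
r = 0.1 * r / np.linalg.norm(r)    # epsilon-ball constraint (eps = 0.1);
                                   # real VAT picks r adversarially
p_pert = classify(decode(z + r))

lvat_loss = kl(p_clean, p_pert)[0]  # consistency term added to the
                                    # classification loss during training
```

Because the perturbation lives in the latent space of a generative model, the decoded sample stays near the data manifold even for perturbations that would be implausibly large in pixel space — which is the flexibility the abstract refers to.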
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2021EDP7161/_p
@ARTICLE{e105-d_3_667,
author={Genki OSADA and Budrul AHSAN and Revoti PRASAD BORA and Takashi NISHIDE},
journal={IEICE TRANSACTIONS on Information},
title={Latent Space Virtual Adversarial Training for Supervised and Semi-Supervised Learning},
year={2022},
volume={E105-D},
number={3},
pages={667-678},
abstract={Virtual Adversarial Training (VAT) has shown impressive results among recently developed regularization methods called consistency regularization. VAT utilizes adversarial samples, generated by injecting perturbation in the input space, for training and thereby enhances the generalization ability of a classifier. However, such adversarial samples can be generated only within a very small area around the input data point, which limits the adversarial effectiveness of such samples. To address this problem, we propose LVAT (Latent space VAT), which injects perturbation in the latent space instead of the input space. LVAT can generate adversarial samples flexibly, resulting in a stronger adversarial effect and thus more effective regularization. The latent space is built by a generative model, and in this paper we examine two different types of models: the variational auto-encoder and the normalizing flow, specifically Glow. We evaluated the performance of our method in both supervised and semi-supervised learning scenarios for an image classification task using the SVHN and CIFAR-10 datasets. In our evaluation, we found that our method outperforms VAT and other state-of-the-art methods.},
keywords={},
doi={10.1587/transinf.2021EDP7161},
ISSN={1745-1361},
month={March},}
TY - JOUR
TI - Latent Space Virtual Adversarial Training for Supervised and Semi-Supervised Learning
T2 - IEICE TRANSACTIONS on Information
SP - 667
EP - 678
AU - Genki OSADA
AU - Budrul AHSAN
AU - Revoti PRASAD BORA
AU - Takashi NISHIDE
PY - 2022
DO - 10.1587/transinf.2021EDP7161
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E105-D
IS - 3
JA - IEICE TRANSACTIONS on Information
Y1 - March 2022
AB - Virtual Adversarial Training (VAT) has shown impressive results among recently developed regularization methods called consistency regularization. VAT utilizes adversarial samples, generated by injecting perturbation in the input space, for training and thereby enhances the generalization ability of a classifier. However, such adversarial samples can be generated only within a very small area around the input data point, which limits the adversarial effectiveness of such samples. To address this problem, we propose LVAT (Latent space VAT), which injects perturbation in the latent space instead of the input space. LVAT can generate adversarial samples flexibly, resulting in a stronger adversarial effect and thus more effective regularization. The latent space is built by a generative model, and in this paper we examine two different types of models: the variational auto-encoder and the normalizing flow, specifically Glow. We evaluated the performance of our method in both supervised and semi-supervised learning scenarios for an image classification task using the SVHN and CIFAR-10 datasets. In our evaluation, we found that our method outperforms VAT and other state-of-the-art methods.
ER -