The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Masahiro YUKAWA, Wolfgang UTSCHICK, "A Fast Stochastic Gradient Algorithm: Maximal Use of Sparsification Benefits under Computational Constraints" in IEICE TRANSACTIONS on Fundamentals,
vol. E93-A, no. 2, pp. 467-475, February 2010, doi: 10.1587/transfun.E93.A.467.
Abstract: In this paper, we propose a novel stochastic gradient algorithm for efficient adaptive filtering. The basic idea is to sparsify the initial error vector and maximize the benefits of sparsification under computational constraints. To this end, we formulate the algorithm-design task as a constrained optimization problem and derive its (non-trivial) closed-form solution. The computational constraints are constructed by exploiting the fact that the energy of the sparsified error vector concentrates in the first few components. Numerical examples demonstrate that the proposed algorithm converges as fast as the computationally expensive method based on optimization without the computational constraints.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E93.A.467/_p
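As a rough illustration of the idea described in the abstract, the Python sketch below implements a generic affine-projection-type adaptive filter step in which the error vector is sparsified by keeping only its K largest-magnitude components before the update. This is a hypothetical sketch under assumed names and parameters (sparsified_gradient_step, mu, K, eps); it does not reproduce the paper's constrained-optimization derivation or its closed-form solution.

import numpy as np

def sparsified_gradient_step(h, X, d, mu=0.5, K=2, eps=1e-8):
    # Hypothetical setup: h is the (N,) filter, X is (N, r) with the r most
    # recent input vectors as columns, d is the (r,) desired-response vector.
    e = d - X.T @ h                    # initial error vector (length r)
    keep = np.argsort(np.abs(e))[-K:]  # indices of the K dominant errors
    Xs, es = X[:, keep], e[keep]       # drop the low-energy components
    # Normalized stochastic gradient step over the retained components only;
    # a K-by-K solve replaces an r-by-r one, which is where the
    # computational saving comes from in this toy setting.
    g = Xs @ np.linalg.solve(Xs.T @ Xs + eps * np.eye(K), es)
    return h + mu * g

# Toy usage with random data.
rng = np.random.default_rng(0)
h = np.zeros(16)
X = rng.standard_normal((16, 8))
d = rng.standard_normal(8)
h = sparsified_gradient_step(h, X, d)

In the paper itself, the sparsification and the selection of dominant components follow from the closed-form solution of a constrained optimization problem rather than from the simple magnitude-based truncation used here.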
@ARTICLE{e93-a_2_467,
author={Masahiro YUKAWA and Wolfgang UTSCHICK},
journal={IEICE TRANSACTIONS on Fundamentals},
title={A Fast Stochastic Gradient Algorithm: Maximal Use of Sparsification Benefits under Computational Constraints},
year={2010},
volume={E93-A},
number={2},
pages={467-475},
abstract={In this paper, we propose a novel stochastic gradient algorithm for efficient adaptive filtering. The basic idea is to sparsify the initial error vector and maximize the benefits of sparsification under computational constraints. To this end, we formulate the algorithm-design task as a constrained optimization problem and derive its (non-trivial) closed-form solution. The computational constraints are constructed by exploiting the fact that the energy of the sparsified error vector concentrates in the first few components. Numerical examples demonstrate that the proposed algorithm converges as fast as the computationally expensive method based on optimization without the computational constraints.},
keywords={},
doi={10.1587/transfun.E93.A.467},
ISSN={1745-1337},
month={February},
}
TY - JOUR
TI - A Fast Stochastic Gradient Algorithm: Maximal Use of Sparsification Benefits under Computational Constraints
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 467
EP - 475
AU - YUKAWA, Masahiro
AU - UTSCHICK, Wolfgang
PY - 2010
DO - 10.1587/transfun.E93.A.467
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E93-A
IS - 2
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - February 2010
AB - In this paper, we propose a novel stochastic gradient algorithm for efficient adaptive filtering. The basic idea is to sparsify the initial error vector and maximize the benefits of sparsification under computational constraints. To this end, we formulate the algorithm-design task as a constrained optimization problem and derive its (non-trivial) closed-form solution. The computational constraints are constructed by exploiting the fact that the energy of the sparsified error vector concentrates in the first few components. Numerical examples demonstrate that the proposed algorithm converges as fast as the computationally expensive method based on optimization without the computational constraints.
ER -