Copyright notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Makoto SAKAI, Norihide KITAOKA, Kazuya TAKEDA, "Acoustic Feature Transformation Combining Average and Maximum Classification Error Minimization Criteria" in IEICE TRANSACTIONS on Information,
vol. E93-D, no. 7, pp. 2005-2008, July 2010, doi: 10.1587/transinf.E93.D.2005.
Abstract: Acoustic feature transformation is widely used to reduce dimensionality and improve speech recognition performance. In this letter we focus on dimensionality reduction methods that minimize the average classification error. Unfortunately, minimization of the average classification error may cause considerable overlaps between distributions of some classes. To mitigate risks of considerable overlaps, we propose a dimensionality reduction method that minimizes the maximum classification error. We also propose two interpolated methods that can describe the average and maximum classification errors. Experimental results show that these proposed methods improve speech recognition performance.
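The abstract describes interpolating between the average and maximum classification error criteria. A minimal sketch of such an interpolated objective, assuming a simple convex combination of per-class errors (a hypothetical formulation for illustration; the letter's exact criterion may differ):

```python
import numpy as np

def interpolated_error(class_errors, lam):
    """Convex combination of average and maximum per-class error.

    lam = 0 recovers the average-error criterion;
    lam = 1 recovers the maximum-error criterion.
    (Illustrative formulation only; not the paper's exact objective.)
    """
    class_errors = np.asarray(class_errors, dtype=float)
    return (1.0 - lam) * class_errors.mean() + lam * class_errors.max()

# Example: three classes, one of which overlaps badly with the others.
# Minimizing only the average can leave the worst class's error large;
# raising lam penalizes that worst-case overlap more heavily.
errors = [0.05, 0.08, 0.40]
print(interpolated_error(errors, 0.0))  # average only
print(interpolated_error(errors, 1.0))  # maximum only
print(interpolated_error(errors, 0.5))  # balanced criterion
```

A transformation matrix could then be optimized against this objective, trading off mean performance against the worst-separated class pair.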
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.E93.D.2005/_p
@ARTICLE{e93-d_7_2005,
author={Makoto SAKAI and Norihide KITAOKA and Kazuya TAKEDA},
journal={IEICE TRANSACTIONS on Information},
title={Acoustic Feature Transformation Combining Average and Maximum Classification Error Minimization Criteria},
year={2010},
volume={E93-D},
number={7},
pages={2005-2008},
abstract={Acoustic feature transformation is widely used to reduce dimensionality and improve speech recognition performance. In this letter we focus on dimensionality reduction methods that minimize the average classification error. Unfortunately, minimization of the average classification error may cause considerable overlaps between distributions of some classes. To mitigate risks of considerable overlaps, we propose a dimensionality reduction method that minimizes the maximum classification error. We also propose two interpolated methods that can describe the average and maximum classification errors. Experimental results show that these proposed methods improve speech recognition performance.},
keywords={},
doi={10.1587/transinf.E93.D.2005},
ISSN={1745-1361},
month={July},}
TY - JOUR
TI - Acoustic Feature Transformation Combining Average and Maximum Classification Error Minimization Criteria
T2 - IEICE TRANSACTIONS on Information
SP - 2005
EP - 2008
AU - Makoto SAKAI
AU - Norihide KITAOKA
AU - Kazuya TAKEDA
PY - 2010
DO - 10.1587/transinf.E93.D.2005
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E93-D
IS - 7
JA - IEICE TRANSACTIONS on Information
Y1 - July 2010
AB - Acoustic feature transformation is widely used to reduce dimensionality and improve speech recognition performance. In this letter we focus on dimensionality reduction methods that minimize the average classification error. Unfortunately, minimization of the average classification error may cause considerable overlaps between distributions of some classes. To mitigate risks of considerable overlaps, we propose a dimensionality reduction method that minimizes the maximum classification error. We also propose two interpolated methods that can describe the average and maximum classification errors. Experimental results show that these proposed methods improve speech recognition performance.
ER -