Yusuke IJIMA, Takashi NOSE, Makoto TACHIBANA, Takao KOBAYASHI, "A Rapid Model Adaptation Technique for Emotional Speech Recognition with Style Estimation Based on Multiple-Regression HMM" in IEICE TRANSACTIONS on Information,
vol. E93-D, no. 1, pp. 107-115, January 2010, doi: 10.1587/transinf.E93.D.107.
Abstract: In this paper, we propose a rapid model adaptation technique for emotional speech recognition which enables us to extract paralinguistic information as well as linguistic information contained in speech signals. This technique is based on style estimation and style adaptation using a multiple-regression HMM (MRHMM). In the MRHMM, the mean parameters of the output probability density function are controlled by a low-dimensional parameter vector, called a style vector, which corresponds to a set of the explanatory variables of the multiple regression. The recognition process consists of two stages. In the first stage, the style vector that represents the emotional expression category and the intensity of its expressiveness for the input speech is estimated on a sentence-by-sentence basis. Next, the acoustic models are adapted using the estimated style vector, and then standard HMM-based speech recognition is performed in the second stage. We assess the performance of the proposed technique in the recognition of simulated emotional speech uttered by both professional narrators and non-professional speakers.
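As a brief illustration of the multiple-regression formulation summarized in the abstract (a sketch following the commonly used MRHMM parameterization; the symbols below are introduced here for explanation and are not taken from the paper), the mean vector of each Gaussian output distribution is modeled as a linear function of the style vector:

\mu_i = H_i \, \xi, \qquad \xi = \begin{bmatrix} 1 \\ s \end{bmatrix}

where s is the low-dimensional style vector whose components correspond to emotional expression categories and whose magnitudes reflect the intensity of expressiveness, and H_i is the regression matrix for state i estimated from training data. In the first stage, s is estimated for each input sentence; the adapted means \mu_i are then used for standard HMM-based decoding in the second stage.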
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.E93.D.107/_p
@ARTICLE{e93-d_1_107,
author={Yusuke IJIMA and Takashi NOSE and Makoto TACHIBANA and Takao KOBAYASHI},
journal={IEICE TRANSACTIONS on Information},
title={A Rapid Model Adaptation Technique for Emotional Speech Recognition with Style Estimation Based on Multiple-Regression HMM},
year={2010},
volume={E93-D},
number={1},
pages={107-115},
abstract={In this paper, we propose a rapid model adaptation technique for emotional speech recognition which enables us to extract paralinguistic information as well as linguistic information contained in speech signals. This technique is based on style estimation and style adaptation using a multiple-regression HMM (MRHMM). In the MRHMM, the mean parameters of the output probability density function are controlled by a low-dimensional parameter vector, called a style vector, which corresponds to a set of the explanatory variables of the multiple regression. The recognition process consists of two stages. In the first stage, the style vector that represents the emotional expression category and the intensity of its expressiveness for the input speech is estimated on a sentence-by-sentence basis. Next, the acoustic models are adapted using the estimated style vector, and then standard HMM-based speech recognition is performed in the second stage. We assess the performance of the proposed technique in the recognition of simulated emotional speech uttered by both professional narrators and non-professional speakers.},
keywords={},
doi={10.1587/transinf.E93.D.107},
ISSN={1745-1361},
month={January},}
TY - JOUR
TI - A Rapid Model Adaptation Technique for Emotional Speech Recognition with Style Estimation Based on Multiple-Regression HMM
T2 - IEICE TRANSACTIONS on Information
SP - 107
EP - 115
AU - Yusuke IJIMA
AU - Takashi NOSE
AU - Makoto TACHIBANA
AU - Takao KOBAYASHI
PY - 2010
DO - 10.1587/transinf.E93.D.107
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E93-D
IS - 1
JA - IEICE TRANSACTIONS on Information
Y1 - January 2010
AB - In this paper, we propose a rapid model adaptation technique for emotional speech recognition which enables us to extract paralinguistic information as well as linguistic information contained in speech signals. This technique is based on style estimation and style adaptation using a multiple-regression HMM (MRHMM). In the MRHMM, the mean parameters of the output probability density function are controlled by a low-dimensional parameter vector, called a style vector, which corresponds to a set of the explanatory variables of the multiple regression. The recognition process consists of two stages. In the first stage, the style vector that represents the emotional expression category and the intensity of its expressiveness for the input speech is estimated on a sentence-by-sentence basis. Next, the acoustic models are adapted using the estimated style vector, and then standard HMM-based speech recognition is performed in the second stage. We assess the performance of the proposed technique in the recognition of simulated emotional speech uttered by both professional narrators and non-professional speakers.
ER -