The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations (e.g., some numerals may appear as "XNUMX").
Emotional speech recognition is generally considered more difficult than non-emotional speech recognition. The acoustic characteristics of emotional speech differ from those of non-emotional speech. Additionally, acoustic characteristics vary significantly depending on the type and intensity of emotions. Regarding linguistic features, emotional and colloquial expressions are also observed in their utterances. To solve these problems, we aim to improve recognition performance by adapting acoustic and language models to emotional speech. We used Japanese Twitter-based Emotional Speech (JTES) as an emotional speech corpus. This corpus consisted of tweets and had an emotional label assigned to each utterance. Corpus adaptation is possible using the utterances contained in this corpus. However, regarding the language model, the amount of adaptation data is insufficient. To solve this problem, we propose an adaptation of the language model by using online tweet data downloaded from the internet. The sentences used for adaptation were extracted from the tweet data based on certain rules. We extracted the data of 25.86 M words and used them for adaptation. In the recognition experiments, the baseline word error rate was 36.11%, whereas that with the acoustic and language model adaptation was 17.77%. The results demonstrated the effectiveness of the proposed method.
Tetsuo KOSAKA
Yamagata University
Kazuya SAEKI
Yamagata University
Yoshitaka AIZAWA
Yamagata University
Masaharu KATO
Yamagata University
Takashi NOSE
Tohoku University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Tetsuo KOSAKA, Kazuya SAEKI, Yoshitaka AIZAWA, Masaharu KATO, Takashi NOSE, "Simultaneous Adaptation of Acoustic and Language Models for Emotional Speech Recognition Using Tweet Data" in IEICE TRANSACTIONS on Information,
vol. E107-D, no. 3, pp. 363-373, March 2024, doi: 10.1587/transinf.2023HCP0010.
Abstract: Emotional speech recognition is generally considered more difficult than non-emotional speech recognition. The acoustic characteristics of emotional speech differ from those of non-emotional speech. Additionally, acoustic characteristics vary significantly depending on the type and intensity of emotions. Regarding linguistic features, emotional and colloquial expressions are also observed in their utterances. To solve these problems, we aim to improve recognition performance by adapting acoustic and language models to emotional speech. We used Japanese Twitter-based Emotional Speech (JTES) as an emotional speech corpus. This corpus consisted of tweets and had an emotional label assigned to each utterance. Corpus adaptation is possible using the utterances contained in this corpus. However, regarding the language model, the amount of adaptation data is insufficient. To solve this problem, we propose an adaptation of the language model by using online tweet data downloaded from the internet. The sentences used for adaptation were extracted from the tweet data based on certain rules. We extracted the data of 25.86 M words and used them for adaptation. In the recognition experiments, the baseline word error rate was 36.11%, whereas that with the acoustic and language model adaptation was 17.77%. The results demonstrated the effectiveness of the proposed method.
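The abstract states that sentences for language-model adaptation were extracted from downloaded tweet data "based on certain rules." As a rough illustration of what such rule-based filtering can look like, the sketch below cleans tweet text and keeps only usable sentences. The specific rules (dropping URLs, mentions, hashtags, retweet markers, and very short remainders) and the function names are illustrative assumptions, not the paper's actual criteria.

```python
import re


def clean_tweet(text: str) -> str:
    """Apply simple, illustrative cleaning rules to one tweet."""
    text = re.sub(r"https?://\S+", "", text)   # drop URLs
    text = re.sub(r"[@#]\w+", "", text)        # drop user mentions and hashtags
    text = re.sub(r"\bRT\b", "", text)         # drop retweet markers
    text = re.sub(r"\s+", " ", text).strip()   # normalize whitespace
    return text


def extract_sentences(tweets, min_words: int = 3):
    """Keep cleaned tweets long enough to serve as adaptation sentences.

    A whitespace word count is used here for simplicity; Japanese text
    would instead require a morphological analyzer for segmentation.
    """
    cleaned = (clean_tweet(t) for t in tweets)
    return [c for c in cleaned if len(c.split()) >= min_words]
```

In practice, the retained sentences would then be tokenized and used to re-estimate or interpolate the language model with the in-domain data.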
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2023HCP0010/_p
@ARTICLE{e107-d_3_363,
author={Tetsuo KOSAKA and Kazuya SAEKI and Yoshitaka AIZAWA and Masaharu KATO and Takashi NOSE},
journal={IEICE TRANSACTIONS on Information},
title={Simultaneous Adaptation of Acoustic and Language Models for Emotional Speech Recognition Using Tweet Data},
year={2024},
volume={E107-D},
number={3},
pages={363-373},
abstract={Emotional speech recognition is generally considered more difficult than non-emotional speech recognition. The acoustic characteristics of emotional speech differ from those of non-emotional speech. Additionally, acoustic characteristics vary significantly depending on the type and intensity of emotions. Regarding linguistic features, emotional and colloquial expressions are also observed in their utterances. To solve these problems, we aim to improve recognition performance by adapting acoustic and language models to emotional speech. We used Japanese Twitter-based Emotional Speech (JTES) as an emotional speech corpus. This corpus consisted of tweets and had an emotional label assigned to each utterance. Corpus adaptation is possible using the utterances contained in this corpus. However, regarding the language model, the amount of adaptation data is insufficient. To solve this problem, we propose an adaptation of the language model by using online tweet data downloaded from the internet. The sentences used for adaptation were extracted from the tweet data based on certain rules. We extracted the data of 25.86 M words and used them for adaptation. In the recognition experiments, the baseline word error rate was 36.11%, whereas that with the acoustic and language model adaptation was 17.77%. The results demonstrated the effectiveness of the proposed method.},
keywords={},
doi={10.1587/transinf.2023HCP0010},
ISSN={1745-1361},
month={March},}
TY - JOUR
TI - Simultaneous Adaptation of Acoustic and Language Models for Emotional Speech Recognition Using Tweet Data
T2 - IEICE TRANSACTIONS on Information
SP - 363
EP - 373
AU - Tetsuo KOSAKA
AU - Kazuya SAEKI
AU - Yoshitaka AIZAWA
AU - Masaharu KATO
AU - Takashi NOSE
PY - 2024
DO - 10.1587/transinf.2023HCP0010
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E107-D
IS - 3
JA - IEICE TRANSACTIONS on Information
Y1 - March 2024
AB - Emotional speech recognition is generally considered more difficult than non-emotional speech recognition. The acoustic characteristics of emotional speech differ from those of non-emotional speech. Additionally, acoustic characteristics vary significantly depending on the type and intensity of emotions. Regarding linguistic features, emotional and colloquial expressions are also observed in their utterances. To solve these problems, we aim to improve recognition performance by adapting acoustic and language models to emotional speech. We used Japanese Twitter-based Emotional Speech (JTES) as an emotional speech corpus. This corpus consisted of tweets and had an emotional label assigned to each utterance. Corpus adaptation is possible using the utterances contained in this corpus. However, regarding the language model, the amount of adaptation data is insufficient. To solve this problem, we propose an adaptation of the language model by using online tweet data downloaded from the internet. The sentences used for adaptation were extracted from the tweet data based on certain rules. We extracted the data of 25.86 M words and used them for adaptation. In the recognition experiments, the baseline word error rate was 36.11%, whereas that with the acoustic and language model adaptation was 17.77%. The results demonstrated the effectiveness of the proposed method.
ER -