The state-of-the-art neural quality estimation (QE) model for machine translation consists of two sub-networks that are tuned separately: a bidirectional recurrent neural network (RNN) encoder-decoder trained for neural machine translation, called the predictor, and an RNN trained for sentence-level QE tasks, called the estimator. We propose to combine the two sub-networks into a whole neural network, called the unified neural network. When training, the bidirectional RNN encoder-decoder is initialized and pre-trained with the bilingual parallel corpus, and then the networks are trained jointly to minimize the mean absolute error over the QE training samples. Compared with the predictor and estimator approach, the use of a unified neural network helps to train the parameters of the neural networks that are more suitable for the QE task. Experimental results on the benchmark data set of the WMT17 sentence-level QE shared task show that the proposed unified neural network approach consistently outperforms the predictor and estimator approach and significantly outperforms the other baseline QE approaches.
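The sketch below is a minimal illustration, not the authors' implementation, of the unified predictor-estimator network described in the abstract. It assumes PyTorch, GRU layers, mean-pooled source context in place of the attention mechanism used in NMT encoder-decoders, and arbitrary dimensions and names (Predictor, Estimator, UnifiedQE, train_step); only the overall structure (pre-trainable bidirectional encoder-decoder feeding an RNN estimator, jointly trained with mean absolute error) follows the paper's description.

```python
# Minimal sketch of a unified predictor-estimator QE network (illustrative only).
import torch
import torch.nn as nn

class Predictor(nn.Module):
    """Bidirectional RNN encoder-decoder (the 'predictor'),
    pre-trained on a bilingual parallel corpus as in NMT."""
    def __init__(self, src_vocab, tgt_vocab, emb_dim=256, hid_dim=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.decoder = nn.GRU(emb_dim + 2 * hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, tgt_vocab)  # vocabulary projection, used only during NMT pre-training

    def forward(self, src_ids, mt_ids):
        enc_out, _ = self.encoder(self.src_emb(src_ids))   # (B, S, 2H)
        ctx = enc_out.mean(dim=1, keepdim=True)             # crude fixed context; the paper's predictor uses attention
        ctx = ctx.expand(-1, mt_ids.size(1), -1)            # (B, T, 2H)
        dec_in = torch.cat([self.tgt_emb(mt_ids), ctx], dim=-1)
        dec_out, _ = self.decoder(dec_in)                    # (B, T, H) per-token features
        return dec_out

class Estimator(nn.Module):
    """RNN that maps the predictor's per-token features to one sentence-level score."""
    def __init__(self, hid_dim=256):
        super().__init__()
        self.rnn = nn.GRU(hid_dim, hid_dim, bidirectional=True, batch_first=True)
        self.score = nn.Linear(2 * hid_dim, 1)

    def forward(self, feats):
        _, h = self.rnn(feats)                               # h: (2, B, H)
        h = torch.cat([h[0], h[1]], dim=-1)                  # (B, 2H)
        return torch.sigmoid(self.score(h)).squeeze(-1)      # sentence-level score in [0, 1]

class UnifiedQE(nn.Module):
    """Predictor and estimator combined into one network and trained jointly."""
    def __init__(self, predictor, estimator):
        super().__init__()
        self.predictor, self.estimator = predictor, estimator

    def forward(self, src_ids, mt_ids):
        return self.estimator(self.predictor(src_ids, mt_ids))

# Joint fine-tuning: the predictor's weights would come from NMT pre-training on
# the parallel corpus; then the whole network minimizes mean absolute error (L1)
# over QE training samples.
model = UnifiedQE(Predictor(src_vocab=32000, tgt_vocab=32000), Estimator())
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # mean absolute error

def train_step(src_ids, mt_ids, hter):
    optimizer.zero_grad()
    loss = loss_fn(model(src_ids, mt_ids), hter)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the Predictor would first be trained as an ordinary NMT model (cross-entropy over its vocabulary projection) on the bilingual parallel corpus, and only afterwards would the combined network be trained end to end on the QE samples, which is the joint-training step the abstract contrasts with tuning the two sub-networks separately.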
Maoxi LI
Jiangxi Normal University
Qingyu XIANG
Jiangxi Normal University
Zhiming CHEN
Jiangxi Normal University
Mingwen WANG
Jiangxi Normal University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Maoxi LI, Qingyu XIANG, Zhiming CHEN, Mingwen WANG, "A Unified Neural Network for Quality Estimation of Machine Translation" in IEICE TRANSACTIONS on Information,
vol. E101-D, no. 9, pp. 2417-2421, September 2018, doi: 10.1587/transinf.2018EDL8019.
Abstract: The state-of-the-art neural quality estimation (QE) model for machine translation consists of two sub-networks that are tuned separately: a bidirectional recurrent neural network (RNN) encoder-decoder trained for neural machine translation, called the predictor, and an RNN trained for sentence-level QE tasks, called the estimator. We propose to combine the two sub-networks into a whole neural network, called the unified neural network. When training, the bidirectional RNN encoder-decoder is initialized and pre-trained with the bilingual parallel corpus, and then the networks are trained jointly to minimize the mean absolute error over the QE training samples. Compared with the predictor and estimator approach, the use of a unified neural network helps to train the parameters of the neural networks that are more suitable for the QE task. Experimental results on the benchmark data set of the WMT17 sentence-level QE shared task show that the proposed unified neural network approach consistently outperforms the predictor and estimator approach and significantly outperforms the other baseline QE approaches.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2018EDL8019/_p
@ARTICLE{e101-d_9_2417,
author={Maoxi LI and Qingyu XIANG and Zhiming CHEN and Mingwen WANG},
journal={IEICE TRANSACTIONS on Information},
title={A Unified Neural Network for Quality Estimation of Machine Translation},
year={2018},
volume={E101-D},
number={9},
pages={2417-2421},
abstract={The state-of-the-art neural quality estimation (QE) model for machine translation consists of two sub-networks that are tuned separately: a bidirectional recurrent neural network (RNN) encoder-decoder trained for neural machine translation, called the predictor, and an RNN trained for sentence-level QE tasks, called the estimator. We propose to combine the two sub-networks into a whole neural network, called the unified neural network. When training, the bidirectional RNN encoder-decoder is initialized and pre-trained with the bilingual parallel corpus, and then the networks are trained jointly to minimize the mean absolute error over the QE training samples. Compared with the predictor and estimator approach, the use of a unified neural network helps to train the parameters of the neural networks that are more suitable for the QE task. Experimental results on the benchmark data set of the WMT17 sentence-level QE shared task show that the proposed unified neural network approach consistently outperforms the predictor and estimator approach and significantly outperforms the other baseline QE approaches.},
keywords={},
doi={10.1587/transinf.2018EDL8019},
ISSN={1745-1361},
month={September},}
TY - JOUR
TI - A Unified Neural Network for Quality Estimation of Machine Translation
T2 - IEICE TRANSACTIONS on Information
SP - 2417
EP - 2421
AU - Maoxi LI
AU - Qingyu XIANG
AU - Zhiming CHEN
AU - Mingwen WANG
PY - 2018
DO - 10.1587/transinf.2018EDL8019
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E101-D
IS - 9
JA - IEICE TRANSACTIONS on Information
Y1 - September 2018
AB - The state-of-the-art neural quality estimation (QE) model for machine translation consists of two sub-networks that are tuned separately: a bidirectional recurrent neural network (RNN) encoder-decoder trained for neural machine translation, called the predictor, and an RNN trained for sentence-level QE tasks, called the estimator. We propose to combine the two sub-networks into a whole neural network, called the unified neural network. When training, the bidirectional RNN encoder-decoder is initialized and pre-trained with the bilingual parallel corpus, and then the networks are trained jointly to minimize the mean absolute error over the QE training samples. Compared with the predictor and estimator approach, the use of a unified neural network helps to train the parameters of the neural networks that are more suitable for the QE task. Experimental results on the benchmark data set of the WMT17 sentence-level QE shared task show that the proposed unified neural network approach consistently outperforms the predictor and estimator approach and significantly outperforms the other baseline QE approaches.
ER -