Krittin INTHARAWIJITR
Tokyo Institute of Technology
Katsuyoshi IIDA
Hokkaido University
Hiroyuki KOGA
University of Kitakyushu
Katsunori YAMAOKA
Tokyo Institute of Technology
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Krittin INTHARAWIJITR, Katsuyoshi IIDA, Hiroyuki KOGA, Katsunori YAMAOKA, "Simulation Study of Low-Latency Network Model with Orchestrator in MEC" in IEICE TRANSACTIONS on Communications,
vol. E102-B, no. 11, pp. 2139-2150, November 2019, doi: 10.1587/transcom.2018EBP3368.
Abstract: Most latency-sensitive mobile applications depend on computational resources provided by a cloud computing service. The problem of relying on cloud computing is that, sometimes, the physical locations of cloud servers are distant from mobile users and the communication latency is long. As a result, the concept of a distributed cloud service, called mobile edge computing (MEC), is being introduced in the 5G network. However, MEC can reduce only the communication latency. The computing latency in MEC must also be considered to satisfy the required total latency of services. In this research, we study the impact of both latencies in the MEC architecture with regard to latency-sensitive services. We also consider a centralized model, in which we use a controller to manage flows between users and mobile edge resources to analyze MEC in a practical architecture. Simulations show that the interval and controller latency trigger some blocking and errors in the system. However, the permissive system, which relaxes latency constraints and chooses an edge server by the lowest total latency, can improve the system performance impressively.
URL: https://global.ieice.org/en_transactions/communications/10.1587/transcom.2018EBP3368/_p
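The abstract describes an edge-selection policy in which each request is assigned to the edge server with the lowest total latency (communication latency plus computing latency), together with a permissive variant that relaxes the latency constraint and a stricter behavior in which requests are blocked. The following is a minimal illustrative sketch of such a policy, not the authors' simulator: the names EdgeServer and select_edge_server, and the workload/capacity computing-latency model, are assumptions introduced here for illustration only.

# Illustrative sketch (assumed names and latency model, not the paper's simulator):
# pick the edge server whose estimated total latency (communication + computing)
# is lowest; in the strict variant, block the request if even that minimum
# exceeds the service's latency budget.

from dataclasses import dataclass
from typing import Optional

@dataclass
class EdgeServer:
    name: str
    comm_latency: float   # communication delay between the user and this server (ms)
    workload: float       # work already queued on this server (e.g., CPU cycles)
    capacity: float       # processing rate (cycles per ms)

def computing_latency(server: EdgeServer, demand: float) -> float:
    # Time until this request finishes processing, after queued work drains.
    return (server.workload + demand) / server.capacity

def select_edge_server(servers: list[EdgeServer], demand: float,
                       latency_budget: float,
                       permissive: bool = True) -> Optional[EdgeServer]:
    """Return the server with the lowest total latency.

    The strict variant blocks the request (returns None) when no server meets
    the budget; the permissive variant still returns the best server, relaxing
    the constraint as described in the abstract.
    """
    best = min(servers,
               key=lambda s: s.comm_latency + computing_latency(s, demand),
               default=None)
    if best is None:
        return None
    total = best.comm_latency + computing_latency(best, demand)
    if not permissive and total > latency_budget:
        return None   # blocked: even the best choice violates the latency constraint
    return best

# Usage with three hypothetical edge servers and a 10 ms latency budget.
servers = [EdgeServer("edge-A", 2.0, 40.0, 10.0),
           EdgeServer("edge-B", 1.0, 90.0, 10.0),
           EdgeServer("edge-C", 4.0, 10.0, 10.0)]
chosen = select_edge_server(servers, demand=20.0, latency_budget=10.0)
print(chosen.name if chosen else "blocked")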
@ARTICLE{e102-b_11_2139,
author={Krittin INTHARAWIJITR and Katsuyoshi IIDA and Hiroyuki KOGA and Katsunori YAMAOKA},
journal={IEICE TRANSACTIONS on Communications},
title={Simulation Study of Low-Latency Network Model with Orchestrator in MEC},
year={2019},
volume={E102-B},
number={11},
pages={2139-2150},
abstract={Most of latency-sensitive mobile applications depend on computational resources provided by a cloud computing service. The problem of relying on cloud computing is that, sometimes, the physical locations of cloud servers are distant from mobile users and the communication latency is long. As a result, the concept of distributed cloud service, called mobile edge computing (MEC), is being introduced in the 5G network. However, MEC can reduce only the communication latency. The computing latency in MEC must also be considered to satisfy the required total latency of services. In this research, we study the impact of both latencies in MEC architecture with regard to latency-sensitive services. We also consider a centralized model, in which we use a controller to manage flows between users and mobile edge resources to analyze MEC in a practical architecture. Simulations show that the interval and controller latency trigger some blocking and error in the system. However, the permissive system which relaxes latency constraints and chooses an edge server by the lowest total latency can improve the system performance impressively.},
keywords={},
doi={10.1587/transcom.2018EBP3368},
ISSN={1745-1345},
month={November},}
TY - JOUR
TI - Simulation Study of Low-Latency Network Model with Orchestrator in MEC
T2 - IEICE TRANSACTIONS on Communications
SP - 2139
EP - 2150
AU - Krittin INTHARAWIJITR
AU - Katsuyoshi IIDA
AU - Hiroyuki KOGA
AU - Katsunori YAMAOKA
PY - 2019
DO - 10.1587/transcom.2018EBP3368
JO - IEICE TRANSACTIONS on Communications
SN - 1745-1345
VL - E102-B
IS - 11
JA - IEICE TRANSACTIONS on Communications
Y1 - November 2019
AB - Most of latency-sensitive mobile applications depend on computational resources provided by a cloud computing service. The problem of relying on cloud computing is that, sometimes, the physical locations of cloud servers are distant from mobile users and the communication latency is long. As a result, the concept of distributed cloud service, called mobile edge computing (MEC), is being introduced in the 5G network. However, MEC can reduce only the communication latency. The computing latency in MEC must also be considered to satisfy the required total latency of services. In this research, we study the impact of both latencies in MEC architecture with regard to latency-sensitive services. We also consider a centralized model, in which we use a controller to manage flows between users and mobile edge resources to analyze MEC in a practical architecture. Simulations show that the interval and controller latency trigger some blocking and error in the system. However, the permissive system which relaxes latency constraints and chooses an edge server by the lowest total latency can improve the system performance impressively.
ER -