The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations. ex. Some numerals are expressed as "XNUMX".
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Karla VITTORI, Aluizio F. R. ARAUJO, "Agent-Oriented Routing in Telecommunications Networks" in IEICE TRANSACTIONS on Communications,
vol. E84-B, no. 11, pp. 3006-3013, November 2001.
Abstract: This paper presents an intelligent routing algorithm, called Q-Agents, which bases its actions only on the agent-environment interaction. This algorithm combines properties of three learning strategies (Q-learning, dual reinforcement learning and learning based on ant colony behavior), adding to them two further mechanisms to improve its adaptability. Hence, the proposed algorithm is composed of a set of agents, moving through the network independently and concurrently, searching for the best routes. The agents share knowledge about the quality of the paths traversed through indirect communication. Information about the network and traffic status is updated by using Q-learning and dual reinforcement updating rules. Q-Agents were applied to a model of an AT&T circuit-switched network. Experiments were carried out on the performance of the algorithm under variations of traffic patterns, load level and topology, and with addition of noise in the information used to route calls. Q-Agents suffered a lower number of lost calls than two algorithms based entirely on ant colony behavior.
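The abstract's core mechanism, agents that revise per-neighbor cost estimates with a Q-learning update rule as they traverse the network, can be illustrated with a minimal Q-routing-style sketch. This is only an illustration under assumed names and parameters; the paper's actual Q-Agents rules (dual reinforcement updates, ant-inspired indirect communication, and the two extra adaptability mechanisms) are not reproduced here.

```python
# Illustrative Q-routing-style update (assumption: table layout and
# parameter names are ours, not the paper's).
# Q[x][d][y] = node x's estimated cost of routing toward destination d
# via neighbor y.

def q_update(Q, node, dest, neighbor, cost, alpha=0.5):
    """One Q-learning step: move the estimate for (node, dest, neighbor)
    toward the observed hop cost plus the neighbor's best onward estimate."""
    best_from_neighbor = min(Q[neighbor][dest].values())
    old = Q[node][dest][neighbor]
    Q[node][dest][neighbor] = old + alpha * (cost + best_from_neighbor - old)
    return Q[node][dest][neighbor]

# Toy 3-node network A-B-C, destination C.
Q = {
    "A": {"C": {"B": 5.0, "C": 2.0}},  # A can reach C via B or directly
    "B": {"C": {"C": 1.0}},
    "C": {"C": {"C": 0.0}},            # already at the destination
}

# An agent hops A -> B (observed cost 1.0) and updates A's estimate:
# 5.0 + 0.5 * (1.0 + 1.0 - 5.0) = 3.5
q_update(Q, "A", "C", "B", cost=1.0)
```

In the paper's scheme, a dual reinforcement variant would apply a symmetric backward update as the agent moves, so estimates improve in both directions of travel.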
URL: https://global.ieice.org/en_transactions/communications/10.1587/e84-b_11_3006/_p
@ARTICLE{e84-b_11_3006,
author={Karla VITTORI and Aluizio F. R. ARAUJO},
journal={IEICE TRANSACTIONS on Communications},
title={Agent-Oriented Routing in Telecommunications Networks},
year={2001},
volume={E84-B},
number={11},
pages={3006-3013},
abstract={This paper presents an intelligent routing algorithm, called Q-Agents, which bases its actions only on the agent-environment interaction. This algorithm combines properties of three learning strategies (Q-learning, dual reinforcement learning and learning based on ant colony behavior), adding to them two further mechanisms to improve its adaptability. Hence, the proposed algorithm is composed of a set of agents, moving through the network independently and concurrently, searching for the best routes. The agents share knowledge about the quality of the paths traversed through indirect communication. Information about the network and traffic status is updated by using Q-learning and dual reinforcement updating rules. Q-Agents were applied to a model of an AT&T circuit-switched network. Experiments were carried out on the performance of the algorithm under variations of traffic patterns, load level and topology, and with addition of noise in the information used to route calls. Q-Agents suffered a lower number of lost calls than two algorithms based entirely on ant colony behavior.},
month={November},
}
TY - JOUR
TI - Agent-Oriented Routing in Telecommunications Networks
T2 - IEICE TRANSACTIONS on Communications
SP - 3006
EP - 3013
AU - Karla VITTORI
AU - Aluizio F. R. ARAUJO
PY - 2001
DO -
JO - IEICE TRANSACTIONS on Communications
SN -
VL - E84-B
IS - 11
JA - IEICE TRANSACTIONS on Communications
Y1 - November 2001
AB - This paper presents an intelligent routing algorithm, called Q-Agents, which bases its actions only on the agent-environment interaction. This algorithm combines properties of three learning strategies (Q-learning, dual reinforcement learning and learning based on ant colony behavior), adding to them two further mechanisms to improve its adaptability. Hence, the proposed algorithm is composed of a set of agents, moving through the network independently and concurrently, searching for the best routes. The agents share knowledge about the quality of the paths traversed through indirect communication. Information about the network and traffic status is updated by using Q-learning and dual reinforcement updating rules. Q-Agents were applied to a model of an AT&T circuit-switched network. Experiments were carried out on the performance of the algorithm under variations of traffic patterns, load level and topology, and with addition of noise in the information used to route calls. Q-Agents suffered a lower number of lost calls than two algorithms based entirely on ant colony behavior.
ER -