Comparative Analysis Experiment of Value-Based Multi-Agent Reinforcement Learning Algorithms

Journal of Advanced Technology Research, Vol. 5, No. 2, pp. 6-12, Dec. 2020
DOI: 10.11111/JATR.2019.5.2.006
Keywords: Multi-Agent Reinforcement Learning, Cooperative Multi-Agent Task, Value-Based Reinforcement Learning
Abstract

In complex real-world settings, multiple agents must condition their behavior in a distributed manner. Many multi-agent environments, such as simulations, adopt centralized training with decentralized execution (CTDE). Numerous studies have investigated value-based multi-agent algorithms for learning in CTDE settings. These algorithms build on a strong baseline algorithm, Independent Q-learning (IQL), and have focused on the problem of decomposing the joint action-value function across agents. In this paper, we verify these approaches through an analysis of the algorithms from previous studies and an experimental comparison in a practical, general domain.
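To illustrate the joint action-value decomposition problem mentioned above, the following minimal sketch (not the paper's code) shows the additive decomposition used by VDN, one well-known IQL-derived value-based algorithm: the joint value is the sum of per-agent utilities, Q_tot(s, a) = Σ_i Q_i(o_i, a_i). The function names and toy numbers are illustrative assumptions.

```python
import numpy as np

def joint_q_vdn(per_agent_qs, joint_action):
    """Additive (VDN-style) joint action-value.

    per_agent_qs : list of 1-D arrays, one Q-vector per agent,
                   indexed by that agent's local action.
    joint_action : one local action index per agent.
    """
    return sum(q[a] for q, a in zip(per_agent_qs, joint_action))

# Two agents, three local actions each (toy values for illustration).
qs = [np.array([1.0, 2.0, 0.5]),
      np.array([0.0, 3.0, 1.0])]

print(joint_q_vdn(qs, (1, 1)))  # 2.0 + 3.0 = 5.0
```

Because the decomposition is monotonic in each agent's utility, maximizing the joint value reduces to each agent greedily maximizing its own Q-vector, which is what enables decentralized execution after centralized training.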




Cite this article
[IEEE Style]
J. Kim, C. Ji and Y. Han, "Comparative Analysis Experiment of Value-Based Multi-Agent Reinforcement Learning Algorithms," Journal of Advanced Technology Research, vol. 5, no. 2, pp. 6-12, 2020. DOI: 10.11111/JATR.2019.5.2.006.

[ACM Style]
Ju-Bong Kim, Chang-Hun Ji, and Youn-Hee Han. 2020. Comparative Analysis Experiment of Value-Based Multi-Agent Reinforcement Learning Algorithms. Journal of Advanced Technology Research, 5, 2, (2020), 6-12. DOI: 10.11111/JATR.2019.5.2.006.