Title: An Experimental Study for Tracking Ability of Deep Q-Network

Issue Number: Vol. 10, No. 3
Year of Publication: 2020
Page Numbers: 32-38
Authors: Masashi SUGIMOTO, Ryunosuke UCHIDA, Kentarou KURASHIGE, Shinji TSUZUKI
Journal Name: International Journal of New Computer Architectures and their Applications (IJNCAA)
- Hong Kong
DOI: http://dx.doi.org/10.17781/P002679

Abstract:


Reinforcement Learning (RL) has long attracted attention because it can be applied to real robots relatively easily. In Q-Learning, however, the Q-table must be updated, so a very large number of Q-table entries is required to express continuous "states," such as the smooth movements of a robot arm; as a result, the computation cannot be performed in real time. Deep Q-Network (DQN), by contrast, uses a convolutional neural network to estimate the Q-value itself, thereby obtaining an approximate function of the Q-value. Because of this computational characteristic, the method has attracted attention in recent years. However, DQN appears to have inherited the weaknesses of Q-Learning in following multiple tasks and a moving goal point. In this paper, we attempt to confirm these weak points of DQN by dynamically changing the exploration ratio, known as epsilon.
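The dynamic exploration ratio mentioned above can be illustrated with a minimal epsilon-greedy sketch. This is an assumption-laden illustration, not the paper's actual schedule: the function names (`select_action`, `decay_epsilon`) and the decay parameters are hypothetical.

```python
import random

def select_action(q_values, epsilon):
    """Epsilon-greedy: explore with probability epsilon, else act greedily."""
    if random.random() < epsilon:
        # Exploration: pick a uniformly random action index.
        return random.randrange(len(q_values))
    # Exploitation: pick the action with the highest estimated Q-value.
    return max(range(len(q_values)), key=lambda a: q_values[a])

def decay_epsilon(epsilon, decay=0.995, epsilon_min=0.05):
    """Dynamically shrink the exploration ratio, bounded below.

    The decay rate and lower bound here are illustrative placeholders.
    """
    return max(epsilon_min, epsilon * decay)
```

With such a schedule, the agent explores heavily at first and gradually shifts toward exploiting its learned Q-value estimates; varying this schedule is one way to probe DQN's tracking behavior.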