Chaotic dynamics and convergence analysis of temporal difference algorithms with bang-bang control
Date
2016
Authors
Journal Title
Journal ISSN
Volume Title
Publisher
Wiley
Access Rights
info:eu-repo/semantics/closedAccess
Abstract
Reinforcement learning is a powerful tool used to obtain optimal control solutions for complex and difficult sequential decision making problems where only a minimal amount of a priori knowledge exists about the system dynamics. As such, it has also been used as a model of cognitive learning in humans and applied to systems, such as humanoid robots, to study embodied cognition. In this paper, a different approach is taken where a simple test problem is used to investigate issues associated with the value function's representation and parametric convergence. In particular, the terminal convergence problem is analyzed with a known optimal control policy where the aim is to accurately learn the value function. For certain initial conditions, the value function is explicitly calculated and it is shown to have a polynomial form. It is parameterized by terms that are functions of the unknown plant's parameters and the value function's discount factor, and their convergence properties are analyzed. It is shown that the temporal difference error introduces a null space associated with the finite horizon basis function during the experiment. The learning problem is only non-singular when the experiment termination is handled correctly and a number of (equivalent) solutions are described. Finally, it is demonstrated that, in general, the test problem's dynamics are chaotic for random initial states and this causes digital offset in the value function learning. The offset is calculated, and a dead zone is defined to switch off learning in the chaotic region. Copyright (C) 2015 John Wiley & Sons, Ltd.
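The abstract describes learning a value function with polynomial basis functions under a known bang-bang policy, with a dead zone that switches learning off in the chaotic region near the switching line. A minimal sketch of that setup is given below; the plant parameters `a` and `b`, the discount factor, the learning rate, and the dead-zone threshold are all illustrative assumptions, not the paper's actual values:

```python
import numpy as np

# Hypothetical 1D plant under bang-bang control: x' = x + dt*(a*x + b*u)
# with u = -sign(x). All constants here are illustrative choices.
a, b, dt, gamma, alpha = -1.0, 1.0, 0.05, 0.95, 0.1
dead_zone = 0.05  # learning is switched off when |x| falls in this region

def features(x, degree=3):
    """Polynomial basis [1, x, x^2, x^3] for the value-function approximation."""
    return np.array([x**k for k in range(degree + 1)])

def reward(x, u):
    # Quadratic state cost as the running reward (an assumed cost structure)
    return -(x**2)

w = np.zeros(4)  # value-function weights
rng = np.random.default_rng(0)
for episode in range(200):
    x = rng.uniform(-1.0, 1.0)
    for _ in range(100):
        u = -np.sign(x)                     # known bang-bang policy
        x_next = x + dt * (a * x + b * u)   # Euler step of the plant
        r = reward(x, u)
        # TD(0) update on the weights, disabled inside the dead zone
        # around the switching line where the dynamics chatter
        if abs(x) > dead_zone:
            delta = r + gamma * w @ features(x_next) - w @ features(x)
            w += alpha * delta * features(x)
        x = x_next
```

Near the switching line the discrete-time state chatters with amplitude on the order of the step size, which is the kind of region where the paper's dead zone suppresses updates; elsewhere the standard TD(0) rule adjusts the polynomial weights.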
Description
Keywords
badly conditioned learning, polynomial basis functions, rate of convergence, temporal difference learning, value function approximation
Source
Optimal Control Applications & Methods
WoS Quartile
Q1
Scopus Quartile
Q1
Volume
37
Issue
1