Reinforcement learning analysis for a minimum time balance problem

dc.authoridTutsoy, Onder/0000-0001-6385-3025
dc.contributor.authorTutsoy, Önder
dc.contributor.authorBrown, Martin
dc.date.accessioned2025-01-06T17:36:32Z
dc.date.available2025-01-06T17:36:32Z
dc.date.issued2016
dc.description.abstractReinforcement learning was developed to solve complex learning control problems where only a minimal amount of a priori knowledge exists about the system dynamics. It has also been used as a model of cognitive learning in humans and applied to systems, such as pole balancing and humanoid robots, to study embodied cognition. However, closed-form analysis of value function learning for higher-order unstable test problem dynamics has rarely been considered. In this paper, firstly, a second-order unstable balance test problem is used to investigate issues associated with value function parameter convergence and the rate of convergence. In particular, the convergence of the minimum time value function is analysed, where the minimum time optimal control policy is assumed known. It is shown that the temporal difference error introduces a null space associated with the experiment termination basis function during the simulation. Because this effect arises from termination, or from any kind of switching in the control signal, the null space also appears in the temporal difference (TD) error for more general higher-order systems. Secondly, the rate of parameter convergence is analysed, and it is shown that the residual gradient algorithm converges faster than TD(0) for this particular test problem. Thirdly, the impact of the finite horizon on both value function and control policy learning is analysed for the case of an unknown control policy with added random exploration noise.
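
To illustrate the distinction the abstract draws between TD(0) and the residual gradient algorithm, the following is a minimal sketch, not the authors' code: it compares the two parameter-update rules for linear value-function approximation with quadratic polynomial basis functions on a hypothetical discretised second-order unstable plant. The plant matrices, feedback gain, stage cost, discount factor and step size are illustrative assumptions, not values taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretised second-order plant x' = A x + B u.
# A has spectral radius > 1, i.e. the open-loop plant is unstable.
A = np.array([[1.0, 0.1],
              [0.3, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[4.0, 2.5]])   # assumed stabilising state-feedback gain

def phi(x):
    """Quadratic polynomial basis: [x1^2, x1*x2, x2^2]."""
    return np.array([x[0]**2, x[0]*x[1], x[1]**2])

def step(x):
    """Apply the fixed feedback policy and return (next state, stage cost)."""
    u = -K @ x
    cost = float(x @ x + 0.01 * u @ u)   # illustrative quadratic stage cost
    return A @ x + (B @ u).ravel(), cost

gamma, alpha = 0.95, 1e-3
w_td = np.zeros(3)   # TD(0) weights
w_rg = np.zeros(3)   # residual gradient weights

for episode in range(200):
    x = rng.uniform(-0.5, 0.5, size=2)
    for _ in range(50):
        x_next, cost = step(x)
        f, f_next = phi(x), phi(x_next)
        delta_td = cost + gamma * w_td @ f_next - w_td @ f
        delta_rg = cost + gamma * w_rg @ f_next - w_rg @ f
        # TD(0): semi-gradient update, bootstrapped target held fixed.
        w_td += alpha * delta_td * f
        # Residual gradient: full gradient of the squared TD error.
        w_rg += alpha * delta_rg * (f - gamma * f_next)
        x = x_next

print("TD(0) weights:            ", w_td)
print("Residual gradient weights:", w_rg)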
dc.description.sponsorshipTurkish Ministry of National Education
dc.description.sponsorshipThis research was supported by the Turkish Ministry of National Education.
dc.identifier.doi10.1177/0142331215581638
dc.identifier.endpage1200
dc.identifier.issn0142-3312
dc.identifier.issn1477-0369
dc.identifier.issue10
dc.identifier.scopus2-s2.0-84987750930
dc.identifier.scopusqualityQ2
dc.identifier.startpage1186
dc.identifier.urihttps://doi.org/10.1177/0142331215581638
dc.identifier.urihttps://hdl.handle.net/20.500.14669/1916
dc.identifier.volume38
dc.identifier.wosWOS:000383394600004
dc.identifier.wosqualityQ4
dc.indekslendigikaynakWeb of Science
dc.indekslendigikaynakScopus
dc.language.isoen
dc.publisherSage Publications Ltd
dc.relation.ispartofTransactions of the Institute of Measurement and Control
dc.relation.publicationcategoryMakale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanı
dc.rightsinfo:eu-repo/semantics/closedAccess
dc.snmzKA_20241211
dc.subjectbadly conditioned learning
dc.subjectminimum time optimal control
dc.subjectpolynomial basis functions
dc.subjectrate of convergence
dc.subjectreinforcement learning
dc.subjecttemporal difference learning
dc.titleReinforcement learning analysis for a minimum time balance problem
dc.typeArticle