An analysis of value function learning with piecewise linear control

dc.authorid: Tutsoy, Onder/0000-0001-6385-3025
dc.contributor.author: Tutsoy, Önder
dc.contributor.author: Brown, Martin
dc.date.accessioned: 2025-01-06T17:44:03Z
dc.date.available: 2025-01-06T17:44:03Z
dc.date.issued: 2016
dc.description.abstract: Reinforcement learning (RL) algorithms attempt to learn optimal control actions by iteratively estimating a long-term measure of system performance, the so-called value function. For example, RL algorithms have been applied to walking robots to examine the connection between robot motion and the brain, known as embodied cognition. In this paper, RL algorithms are analysed using an exemplar test problem. A closed-form solution for the value function is calculated and represented in terms of a set of basis functions and parameters, which is used to investigate parameter convergence. The value function expression is shown to have a polynomial form, where the polynomial terms depend on the plant's parameters and the value function's discount factor. It is shown that the temporal difference error introduces a null space for the differenced higher-order basis associated with the effects of controller switching (saturated to linear control, or terminating an experiment) apart from the time of the switch; this leads to slow convergence in the relevant subspace. It is also shown that badly conditioned learning problems can occur as a function of the value function's discount factor and the controller switching points. Finally, a comparison is performed between the residual gradient and TD(0) learning algorithms, and it is shown that the former has a faster rate of convergence for this test problem.
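The abstract contrasts semi-gradient TD(0) with residual gradient learning for a linearly parameterised value function. As an illustrative sketch only (a tabular chain problem, not the paper's piecewise linear control test problem; all names, parameters, and the toy environment here are assumptions), the two update rules differ in which gradient of the squared temporal difference error they follow:

```python
import numpy as np

# Toy setup (assumed, for illustration): a deterministic 4-state chain
# 0 -> 1 -> 2 -> 3, reward 1 on entering the terminal state 3, and a
# linear value function V(s) = theta . phi(s) with a one-hot basis.
gamma = 0.9
n = 4                # states 0..3; state 3 is terminal
phi = np.eye(n)      # one-hot (tabular) basis functions

def run(method, alpha=0.1, episodes=2000):
    theta = np.zeros(n)
    for _ in range(episodes):
        s = 0
        while s < n - 1:
            s2 = s + 1
            r = 1.0 if s2 == n - 1 else 0.0
            v2 = 0.0 if s2 == n - 1 else theta @ phi[s2]
            delta = r + gamma * v2 - theta @ phi[s]   # TD error
            if method == "td0":
                # TD(0) semi-gradient: differentiate only the
                # current-state value estimate.
                theta += alpha * delta * phi[s]
            else:
                # Residual gradient: full gradient of delta^2 / 2,
                # so the successor-state basis also enters the update.
                grad = (gamma * phi[s2] if s2 < n - 1 else 0.0) - phi[s]
                theta -= alpha * delta * grad
            s = s2
    return theta

# Both rules drive the TD error to zero here, recovering the true
# discounted returns V(s) = gamma^(2-s) for s = 0, 1, 2.
print(run("td0"))
print(run("rg"))
```

In this deterministic tabular case both methods share the same fixed point; the paper's analysis concerns which converges faster, and in which subspaces, for its specific piecewise linear control problem.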
dc.identifier.doi: 10.1080/0952813X.2015.1020517
dc.identifier.endpage: 545
dc.identifier.issn: 0952-813X
dc.identifier.issn: 1362-3079
dc.identifier.issue: 3
dc.identifier.scopus: 2-s2.0-84925236435
dc.identifier.scopusquality: Q1
dc.identifier.startpage: 529
dc.identifier.uri: https://doi.org/10.1080/0952813X.2015.1020517
dc.identifier.uri: https://hdl.handle.net/20.500.14669/2898
dc.identifier.volume: 28
dc.identifier.wos: WOS:000372442000001
dc.identifier.wosquality: Q3
dc.indekslendigikaynak: Web of Science
dc.indekslendigikaynak: Scopus
dc.language.iso: en
dc.publisher: Taylor & Francis Ltd
dc.relation.ispartof: Journal of Experimental & Theoretical Artificial Intelligence
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member
dc.rights: info:eu-repo/semantics/closedAccess
dc.snmz: KA_20241211
dc.subject: polynomial basis
dc.subject: trajectory null space
dc.subject: badly conditioned learning
dc.subject: rate of convergence
dc.subject: temporal difference learning parameter convergence
dc.title: An analysis of value function learning with piecewise linear control
dc.type: Article