Please use this identifier to cite or link to this item: https://repository.iimb.ac.in/handle/2074/11488
DC Field                  Value
dc.contributor.author     Ravikumar, K
dc.contributor.author     Diatha, Krishna Sundar
dc.date.accessioned       2020-04-07T13:23:07Z
dc.date.available         2020-04-07T13:23:07Z
dc.date.issued            2014
dc.identifier.issn        0925-2312
dc.identifier.uri         https://repository.iimb.ac.in/handle/2074/11488
dc.description.abstract   We consider state-dependent pricing in a two-player service-market stochastic game in which the state of the game and its transition dynamics are modeled by a semi-Markovian queue. We propose a multi-timescale actor-critic reinforcement learning algorithm for multi-agent learning under self-play and provide experimental results on convergence to a Nash equilibrium.
dc.publisher              Elsevier
dc.subject                Dynamic Pricing
dc.subject                Learning In Games
dc.subject                Queues
dc.subject                Reinforcement Learning
dc.subject                Service Markets
dc.subject                Stochastic Games
dc.title                  An actor-critic algorithm for multi-agent learning in queue-based stochastic games
dc.type                   Journal Article
dc.identifier.doi         10.1016/J.NEUCOM.2013.07.020
dc.pages                  258-265p.
dc.vol.no                 Vol.127
dc.journal.name           Neurocomputing
Appears in Collections: 2010-2019
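
The abstract above describes a multi-timescale actor-critic scheme for two agents setting prices in a queue-driven stochastic game under self-play. The sketch below is only a minimal illustration of that general idea, not the paper's algorithm: the queue capacity, arrival and service probabilities, price menu, winner-take-all revenue split, and the step sizes alpha_critic and alpha_actor are all assumptions made here for illustration, and each agent updates as an independent learner that ignores the opponent's action.

import numpy as np

rng = np.random.default_rng(0)

N_STATES = 6                          # queue occupancy levels 0..5 (assumed)
PRICES = np.array([1.0, 2.0, 3.0])    # discrete price menu (assumed)
N_ACTIONS = len(PRICES)
GAMMA = 0.95

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

class ActorCritic:
    """Tabular actor-critic: fast step size for the critic, slow for the actor."""
    def __init__(self, alpha_critic=0.1, alpha_actor=0.01):
        self.theta = np.zeros((N_STATES, N_ACTIONS))  # softmax policy parameters
        self.value = np.zeros(N_STATES)               # state-value estimates
        self.a_c = alpha_critic                       # fast timescale (critic)
        self.a_a = alpha_actor                        # slow timescale (actor)

    def act(self, s):
        return rng.choice(N_ACTIONS, p=softmax(self.theta[s]))

    def update(self, s, a, r, s_next):
        # TD error from the critic drives both updates.
        delta = r + GAMMA * self.value[s_next] - self.value[s]
        self.value[s] += self.a_c * delta
        # Policy-gradient step with the softmax score function.
        grad = -softmax(self.theta[s])
        grad[a] += 1.0
        self.theta[s] += self.a_a * delta * grad

def market_step(state, price1, price2):
    """Toy queue/market transition (assumed): an arrival joins the cheaper
    server, service completes with fixed probability, and revenue goes to
    the agent that wins the arriving customer."""
    arrival = rng.random() < 0.6
    service = state > 0 and rng.random() < 0.5
    next_state = min(N_STATES - 1, max(0, state + int(arrival) - int(service)))
    r1 = r2 = 0.0
    if arrival:
        if price1 <= price2:
            r1 = price1
        else:
            r2 = price2
    return next_state, r1, r2

agents = [ActorCritic(), ActorCritic()]   # self-play: two identical learners
state = 0
for _ in range(50_000):
    a1, a2 = agents[0].act(state), agents[1].act(state)
    state_next, r1, r2 = market_step(state, PRICES[a1], PRICES[a2])
    agents[0].update(state, a1, r1, state_next)
    agents[1].update(state, a2, r2, state_next)
    state = state_next

for i, ag in enumerate(agents):
    print(f"agent {i}: greedy price per queue state =", PRICES[ag.theta.argmax(axis=1)])

The separation of step sizes (critic updated faster than the actor) is the standard two-timescale actor-critic device the abstract alludes to; under self-play both agents run the same learner, and the printed greedy prices give a rough, state-dependent pricing policy for each agent.
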
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.