Please use this identifier to cite or link to this item:
https://repository.iimb.ac.in/handle/2074/11488
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ravikumar, K | - |
dc.contributor.author | Diatha, Krishna Sundar | - |
dc.date.accessioned | 2020-04-07T13:23:07Z | - |
dc.date.available | 2020-04-07T13:23:07Z | - |
dc.date.issued | 2014 | - |
dc.identifier.issn | 0925-2312 | - |
dc.identifier.uri | https://repository.iimb.ac.in/handle/2074/11488 | - |
dc.description.abstract | We consider state-dependent pricing in a two-player service-market stochastic game in which the state of the game and its transition dynamics are modeled using a semi-Markovian queue. We propose a multi-time-scale actor–critic reinforcement learning algorithm for multi-agent learning under self-play and provide experimental results on Nash convergence. | - |
dc.publisher | Elsevier | - |
dc.subject | Dynamic Pricing | - |
dc.subject | Learning In Games | - |
dc.subject | Queues | - |
dc.subject | Reinforcement Learning | - |
dc.subject | Service Markets | - |
dc.subject | Stochastic Games | - |
dc.title | An actor-critic algorithm for multi-agent learning in queue-based stochastic games | - |
dc.type | Journal Article | - |
dc.identifier.doi | 10.1016/j.neucom.2013.07.020 | - |
dc.pages | 258-265p. | - |
dc.vol.no | Vol.127 | - |
dc.journal.name | Neurocomputing | - |
Appears in Collections: | 2010-2019 |
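The abstract above describes a multi-time-scale actor–critic algorithm for self-play in a queue-based stochastic game. The paper's actual algorithm is not reproduced in this record; the following is only an illustrative sketch of the general two-time-scale actor–critic idea under self-play, on a hypothetical toy state space standing in for a queue length, with made-up rewards and transitions (all names, rates, and dynamics here are assumptions for illustration, not the authors' model).

```python
import numpy as np

def softmax(prefs):
    """Convert action preferences into a probability distribution."""
    z = prefs - prefs.max()
    e = np.exp(z)
    return e / e.sum()

def two_timescale_actor_critic(n_states=4, n_actions=3, steps=2000, seed=0):
    """Two agents learn by self-play on a toy state space.

    Each agent i keeps a critic V[i, s] and actor preferences H[i, s, a].
    The critic uses a faster-decaying step size than the actor, which is
    the 'multi-time-scale' structure: the critic tracks values quickly
    while the policy drifts slowly. The reward and transition below are
    placeholders, not the paper's semi-Markov queue model.
    """
    rng = np.random.default_rng(seed)
    V = np.zeros((2, n_states))              # critics
    H = np.zeros((2, n_states, n_actions))   # actor preferences
    s = 0
    for t in range(steps):
        alpha_c = 1.0 / (1 + t) ** 0.6       # faster (critic) time scale
        alpha_a = 1.0 / (1 + t) ** 0.9       # slower (actor) time scale
        acts = [rng.choice(n_actions, p=softmax(H[i, s])) for i in range(2)]
        # toy coupled reward: own price level minus rival's, minus congestion
        r = [acts[i] - 0.5 * acts[1 - i] - 0.1 * s for i in range(2)]
        s_next = (s + sum(acts)) % n_states  # toy stand-in for queue dynamics
        for i in range(2):
            delta = r[i] + 0.95 * V[i, s_next] - V[i, s]   # TD error
            V[i, s] += alpha_c * delta                      # critic update
            H[i, s, acts[i]] += alpha_a * delta             # actor update
        s = s_next
    return V, H
```

Under self-play both agents run the same update rule, so convergence (when it occurs) is toward a fixed point of the coupled learning dynamics, which the paper studies experimentally in terms of Nash convergence.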