Conference Paper: Solving ergodic Markov decision processes and perfect information zero-sum stochastic games by variance reduced deflated value iteration

Title: Solving ergodic Markov decision processes and perfect information zero-sum stochastic games by variance reduced deflated value iteration
Authors: Akian, M; Gaubert, S; Qu, Z; Saadi, O
Keywords: Games; Game theory; Complexity theory; Markov processes; Heuristic algorithms
Issue Date: 2019
Publisher: IEEE
Citation: The 58th IEEE Conference on Decision and Control (CDC), Nice, France, 11-13 December 2019. In 2019 IEEE 58th Conference on Decision and Control (CDC): 11-13 December 2019, p. 5963-5970
Abstract: Recently, Sidford, Wang, Wu and Ye (2018) developed an algorithm combining variance reduction techniques with value iteration to solve discounted Markov decision processes. This algorithm has a sublinear complexity when the discount factor is fixed. Here, we extend this approach to mean-payoff problems, including both Markov decision processes and perfect information zero-sum stochastic games. We obtain sublinear complexity bounds, assuming there is a distinguished state that is accessible from all initial states and for all policies. Our method is based on a reduction from the mean payoff problem to the discounted problem by a Doob h-transform, combined with a deflation technique. The complexity analysis of this algorithm combines the techniques developed by Sidford et al. in the discounted case with techniques from non-linear spectral theory (the Collatz-Wielandt characterization of the eigenvalue).
Persistent Identifier: http://hdl.handle.net/10722/316993
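For context, the baseline that the abstract's variance-reduced method accelerates is classical value iteration for a discounted MDP: repeatedly apply the Bellman operator until the value vector converges. Below is a minimal illustrative sketch of that baseline on a made-up two-state, two-action MDP; it is not the paper's variance-reduced deflated algorithm, and the transition tensor `P`, reward matrix `r`, and function name are invented for illustration.

```python
import numpy as np

def value_iteration(P, r, gamma, tol=1e-8, max_iter=10_000):
    """Plain value iteration for a discounted MDP.

    P: transition tensor of shape (A, S, S); P[a, s, s'] = prob of s -> s' under action a.
    r: reward matrix of shape (A, S); r[a, s] = reward for playing a in state s.
    gamma: discount factor in (0, 1).
    Returns an approximate optimal value vector of shape (S,).
    """
    A, S, _ = P.shape
    v = np.zeros(S)
    for _ in range(max_iter):
        # Bellman operator: T(v)[s] = max_a ( r[a, s] + gamma * sum_{s'} P[a, s, s'] v[s'] )
        q = r + gamma * (P @ v)      # shape (A, S)
        v_new = q.max(axis=0)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v

# Toy two-state, two-action MDP (numbers made up for illustration)
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # transitions under action 0
    [[0.5, 0.5], [0.6, 0.4]],   # transitions under action 1
])
r = np.array([
    [1.0, 0.0],                 # rewards for action 0 in states 0, 1
    [0.5, 2.0],                 # rewards for action 1 in states 0, 1
])
v = value_iteration(P, r, gamma=0.9)
```

Each Bellman update touches every (state, action, next-state) triple, which is what the sampling-based variance reduction of Sidford et al., extended in this paper, avoids in order to reach sublinear complexity.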


DC Field: Value
dc.contributor.author: Akian, M
dc.contributor.author: Gaubert, S
dc.contributor.author: Qu, Z
dc.contributor.author: Saadi, O
dc.date.accessioned: 2022-09-16T07:26:55Z
dc.date.available: 2022-09-16T07:26:55Z
dc.date.issued: 2019
dc.identifier.citation: The 58th IEEE Conference on Decision and Control (CDC), Nice, France, 11-13 December 2019. In 2019 IEEE 58th Conference on Decision and Control (CDC): 11-13 December 2019, p. 5963-5970
dc.identifier.uri: http://hdl.handle.net/10722/316993
dc.description.abstract: Recently, Sidford, Wang, Wu and Ye (2018) developed an algorithm combining variance reduction techniques with value iteration to solve discounted Markov decision processes. This algorithm has a sublinear complexity when the discount factor is fixed. Here, we extend this approach to mean-payoff problems, including both Markov decision processes and perfect information zero-sum stochastic games. We obtain sublinear complexity bounds, assuming there is a distinguished state that is accessible from all initial states and for all policies. Our method is based on a reduction from the mean payoff problem to the discounted problem by a Doob h-transform, combined with a deflation technique. The complexity analysis of this algorithm combines the techniques developed by Sidford et al. in the discounted case with techniques from non-linear spectral theory (the Collatz-Wielandt characterization of the eigenvalue).
dc.language: eng
dc.publisher: IEEE
dc.relation.ispartof: 2019 IEEE 58th Conference on Decision and Control (CDC): 11-13 December 2019
dc.rights: 2019 IEEE 58th Conference on Decision and Control (CDC): 11-13 December 2019. Copyright © IEEE.
dc.rights: ©20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Games
dc.subject: Game theory
dc.subject: Complexity theory
dc.subject: Markov processes
dc.subject: Heuristic algorithms
dc.title: Solving ergodic Markov decision processes and perfect information zero-sum stochastic games by variance reduced deflated value iteration
dc.type: Conference_Paper
dc.identifier.email: Qu, Z: zhengqu@hku.hk
dc.identifier.authority: Qu, Z=rp02096
dc.identifier.doi: 10.1109/CDC40024.2019.9029885
dc.identifier.hkuros: 336428
dc.identifier.spage: 5963
dc.identifier.epage: 5970
dc.publisher.place: United States
