
Article: When does reinforcement learning stand out in quantum control? A comparative study on state preparation

Title: When does reinforcement learning stand out in quantum control? A comparative study on state preparation
Authors: Zhang, XM; Wei, Z; Asad, R; YANG, XC; Wang, X
Keywords: Machine learning; Learning; Restricted Boltzmann
Issue Date: 2019
Publisher: Nature Research (part of Springer Nature): Fully open access journals. The Journal's web site is located at http://www.nature.com/npjqi/
Citation: npj Quantum Information, 2019, v. 5, p. article no. 85
Abstract: Reinforcement learning has been widely used in many problems, including quantum control of qubits. However, such problems can, at the same time, be solved by traditional, non-machine-learning methods, such as stochastic gradient descent and Krotov algorithms, and it remains unclear which one is most suitable when the control has specific constraints. In this work, we perform a comparative study on the efficacy of three reinforcement learning algorithms: tabular Q-learning, deep Q-learning, and policy gradient, as well as two non-machine-learning methods: stochastic gradient descent and Krotov algorithms, in the problem of preparing a desired quantum state. We found that overall, the deep Q-learning and policy gradient algorithms outperform others when the problem is discretized, e.g. allowing discrete values of control, and when the problem scales up. The reinforcement learning algorithms can also adaptively reduce the complexity of the control sequences, shortening the operation time and improving the fidelity. Our comparison provides insights into the suitability of reinforcement learning in quantum control problems.
Persistent Identifier: http://hdl.handle.net/10722/279478
ISSN: 2056-6387
2023 Impact Factor: 6.6
2023 SCImago Journal Rankings: 2.824
ISI Accession Number: WOS:000489957900002
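To make the setting of the abstract concrete, the following is a minimal, self-contained sketch (not the authors' code) of one of the compared methods: tabular Q-learning for single-qubit state preparation with discrete control amplitudes. The Hamiltonian H = sigma_x + a(t) * sigma_z, the action set, the Bloch-angle discretization, and all hyperparameters are illustrative assumptions; the reward is the final-state fidelity, as in the state-preparation task the paper studies.

import numpy as np
from scipy.linalg import expm

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

ACTIONS = [-4.0, 0.0, 4.0]     # allowed (discrete) control amplitudes
DT, N_STEPS = 0.1, 20          # interval length and episode length
PSI0 = np.array([1, 0], dtype=complex)                  # start in |0>
TARGET = np.array([1, 1], dtype=complex) / np.sqrt(2)   # prepare |+>

def evolve(psi, a):
    """One piecewise-constant step under H = sigma_x + a * sigma_z."""
    return expm(-1j * DT * (SX + a * SZ)) @ psi

def fidelity(psi):
    """Overlap with the target state, |<target|psi>|^2."""
    return abs(np.vdot(TARGET, psi)) ** 2

def encode(psi, t):
    """Bin the Bloch angles so the Q-table stays finite (assumed scheme)."""
    theta = 2 * np.arccos(np.clip(abs(psi[0]), 0.0, 1.0))
    phi = (np.angle(psi[1]) - np.angle(psi[0])) % (2 * np.pi)
    return (t, min(int(theta / np.pi * 8), 7), min(int(phi / (2 * np.pi) * 8), 7))

Q = {}                          # Q[(state, action index)] -> value
ALPHA, GAMMA, EPS = 0.1, 0.99, 0.1

for episode in range(5000):
    psi = PSI0.copy()
    for t in range(N_STEPS):
        s = encode(psi, t)
        # epsilon-greedy action selection over the discrete control values
        if np.random.rand() < EPS:
            a = np.random.randint(len(ACTIONS))
        else:
            a = int(np.argmax([Q.get((s, i), 0.0) for i in range(len(ACTIONS))]))
        psi = evolve(psi, ACTIONS[a])
        # sparse reward: fidelity is granted only at the end of the episode
        r = fidelity(psi) if t == N_STEPS - 1 else 0.0
        s_next = encode(psi, t + 1)
        best_next = max(Q.get((s_next, i), 0.0) for i in range(len(ACTIONS)))
        old = Q.get((s, a), 0.0)
        Q[(s, a)] = old + ALPHA * (r + GAMMA * best_next - old)

Because the action set is finite and the reward is evaluated only at the final time, this toy setup mirrors the discretized-control regime in which the abstract reports the RL methods (deep Q-learning and policy gradient in particular) performing best.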

 

DC Field | Value | Language
dc.contributor.author | Zhang, XM | -
dc.contributor.author | Wei, Z | -
dc.contributor.author | Asad, R | -
dc.contributor.author | YANG, XC | -
dc.contributor.author | Wang, X | -
dc.date.accessioned | 2019-11-01T07:18:09Z | -
dc.date.available | 2019-11-01T07:18:09Z | -
dc.date.issued | 2019 | -
dc.identifier.citation | npj Quantum Information, 2019, v. 5, p. article no. 85 | -
dc.identifier.issn | 2056-6387 | -
dc.identifier.uri | http://hdl.handle.net/10722/279478 | -
dc.description.abstract | Reinforcement learning has been widely used in many problems, including quantum control of qubits. However, such problems can, at the same time, be solved by traditional, non-machine-learning methods, such as stochastic gradient descent and Krotov algorithms, and it remains unclear which one is most suitable when the control has specific constraints. In this work, we perform a comparative study on the efficacy of three reinforcement learning algorithms: tabular Q-learning, deep Q-learning, and policy gradient, as well as two non-machine-learning methods: stochastic gradient descent and Krotov algorithms, in the problem of preparing a desired quantum state. We found that overall, the deep Q-learning and policy gradient algorithms outperform others when the problem is discretized, e.g. allowing discrete values of control, and when the problem scales up. The reinforcement learning algorithms can also adaptively reduce the complexity of the control sequences, shortening the operation time and improving the fidelity. Our comparison provides insights into the suitability of reinforcement learning in quantum control problems. | -
dc.language | eng | -
dc.publisher | Nature Research (part of Springer Nature): Fully open access journals. The Journal's web site is located at http://www.nature.com/npjqi/ | -
dc.relation.ispartof | npj Quantum Information | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.subject | Machine learning | -
dc.subject | Learning | -
dc.subject | Restricted Boltzmann | -
dc.title | When does reinforcement learning stand out in quantum control? A comparative study on state preparation | -
dc.type | Article | -
dc.description.nature | published_or_final_version | -
dc.identifier.doi | 10.1038/s41534-019-0201-8 | -
dc.identifier.scopus | eid_2-s2.0-85073516267 | -
dc.identifier.hkuros | 308514 | -
dc.identifier.volume | 5 | -
dc.identifier.spage | article no. 85 | -
dc.identifier.epage | article no. 85 | -
dc.identifier.isi | WOS:000489957900002 | -
dc.publisher.place | United Kingdom | -
dc.identifier.issnl | 2056-6387 | -
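For contrast with the RL sketch above, here is an equally minimal sketch of the non-machine-learning baseline named in the abstract: gradient ascent on the final-state fidelity over a piecewise-constant control sequence. This is only in the spirit of the stochastic-gradient-descent comparator; the finite-difference gradient, the toy Hamiltonian, and the learning rate are assumptions made for brevity.

import numpy as np
from scipy.linalg import expm

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
DT, N_STEPS = 0.1, 20
PSI0 = np.array([1, 0], dtype=complex)                  # start in |0>
TARGET = np.array([1, 1], dtype=complex) / np.sqrt(2)   # prepare |+>

def final_fidelity(controls):
    """Propagate |0> through the piecewise-constant pulse and score it."""
    psi = PSI0.copy()
    for a in controls:
        psi = expm(-1j * DT * (SX + a * SZ)) @ psi
    return abs(np.vdot(TARGET, psi)) ** 2

controls = np.zeros(N_STEPS)   # initial guess: no control applied
LR, H = 0.5, 1e-4              # learning rate, finite-difference step
for it in range(300):
    grad = np.zeros(N_STEPS)
    for k in range(N_STEPS):
        bumped = controls.copy()
        bumped[k] += H
        grad[k] = (final_fidelity(bumped) - final_fidelity(controls)) / H
    controls += LR * grad      # ascend the fidelity landscape

print(f"final fidelity: {final_fidelity(controls):.4f}")

Unlike the Q-learning sketch, this baseline treats the control amplitudes as continuous variables, which is the regime where the abstract suggests the traditional gradient-based and Krotov methods remain competitive.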
