Article: OPTIMAL INSURANCE STRATEGIES: A HYBRID DEEP LEARNING MARKOV CHAIN APPROXIMATION APPROACH

Title: OPTIMAL INSURANCE STRATEGIES: A HYBRID DEEP LEARNING MARKOV CHAIN APPROXIMATION APPROACH
Authors: Cheng, X; Jin, Z; Yang, H
Keywords: Neural networks; deep learning; Markov chain approximation; reinsurance strategies
Issue Date: 2020
Publisher: Cambridge University Press. The journal's web site is located at http://journals.cambridge.org/action/displayJournal?jid=ASB
Citation: ASTIN Bulletin, 2020, v. 50, n. 2, p. 449-477
Abstract: This paper studies deep learning approaches to finding optimal reinsurance and dividend strategies for insurance companies. Because the control processes are terminated at the random time of financial ruin, a Markov chain approximation-based iterative deep learning algorithm is developed to study this type of infinite-horizon optimal control problem. The optimal controls are approximated by deep neural networks for both regular and singular types of dividend strategies. The framework of Markov chain approximation plays a key role in building the iterative equations and in initializing the algorithm. We apply our method to classic dividend and reinsurance problems and compare the learning results with existing analytical solutions. The feasibility of our method for complicated problems is demonstrated by applying it to an optimal dividend, reinsurance and investment problem under a high-dimensional diffusion model with jumps and regime switching.
Persistent Identifier: http://hdl.handle.net/10722/288162
ISSN: 0515-0361
2021 Impact Factor: 2.545
2020 SCImago Journal Rankings: 1.113
ISI Accession Number ID: WOS:000535927300005
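The abstract above outlines the hybrid scheme: the Markov chain approximation (MCA) supplies locally consistent transition probabilities and Bellman-type iterative equations on a surplus grid, while the controls are parameterized as deep neural networks and improved iteratively. Below is a minimal illustrative sketch of that idea, in PyTorch, for a toy diffusion surplus model with proportional reinsurance and a bounded dividend rate. All model parameters, the network architecture, the boundary treatment, and the alternating policy/value updates are assumptions made for this sketch; they are not the paper's actual formulation, algorithm, or code.

import torch

torch.manual_seed(0)

# Illustrative model parameters (assumptions, not taken from the paper):
mu, sigma, delta = 0.6, 1.0, 0.10     # drift, volatility, discount rate
a_max, B, h = 0.5, 10.0, 0.1          # max dividend rate, surplus cap, MCA grid step

grid = torch.arange(0.0, B + h, h).unsqueeze(1)   # surplus grid; x = 0 is the ruin state
n = len(grid)

# Control network: maps surplus x to (reinsurance retention, dividend fraction),
# both squashed into (0, 1) by the final sigmoid.
policy = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 2), torch.nn.Sigmoid(),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

V = torch.zeros(n)     # value function on the grid, initialized at zero

def mca_backup(V, controls, x):
    """One-step MCA Bellman value: locally consistent transitions to x +/- h
    with interpolation time step dt = h^2 / Q (Kushner-Dupuis-style construction)."""
    u = controls[:, 0]                    # proportional reinsurance retention
    a = a_max * controls[:, 1]            # dividend rate
    drift = mu * u - a
    diff2 = (sigma * u) ** 2
    Q = diff2 + h * drift.abs() + 1e-8    # normalizer (avoids division by zero)
    dt = h ** 2 / Q
    p_up = (diff2 / 2 + h * drift.clamp(min=0)) / Q
    p_dn = (diff2 / 2 + h * (-drift).clamp(min=0)) / Q
    idx = (x.squeeze(1) / h).round().long()
    V_up = V[(idx + 1).clamp(max=n - 1)]  # reflect at the artificial upper bound B
    V_dn = V[(idx - 1).clamp(min=0)]
    return a * dt + torch.exp(-delta * dt) * (p_up * V_up + p_dn * V_dn)

# Iterative scheme: alternate a policy-improvement gradient step with an
# MCA value-function sweep under the current policy.
for it in range(2000):
    controls = policy(grid)
    loss = -mca_backup(V, controls, grid)[1:].mean()   # maximize value on interior states
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        V = mca_backup(V, policy(grid), grid)
        V[0] = 0.0                                      # ruin boundary: zero value at x = 0

print(V[::10])    # coarse snapshot of the learned value function

The transition probabilities and interpolation step above follow the standard Kushner-Dupuis MCA construction under this sketch's assumptions; the paper's own iterative equations, initialization, and treatment of singular dividend controls should be taken from the article itself (doi:10.1017/asb.2020.9).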

 

DC Field: Value
dc.contributor.author: Cheng, X
dc.contributor.author: Jin, Z
dc.contributor.author: Yang, H
dc.date.accessioned: 2020-10-05T12:08:47Z
dc.date.available: 2020-10-05T12:08:47Z
dc.date.issued: 2020
dc.identifier.citation: ASTIN Bulletin, 2020, v. 50, n. 2, p. 449-477
dc.identifier.issn: 0515-0361
dc.identifier.uri: http://hdl.handle.net/10722/288162
dc.description.abstract: This paper studies deep learning approaches to finding optimal reinsurance and dividend strategies for insurance companies. Because the control processes are terminated at the random time of financial ruin, a Markov chain approximation-based iterative deep learning algorithm is developed to study this type of infinite-horizon optimal control problem. The optimal controls are approximated by deep neural networks for both regular and singular types of dividend strategies. The framework of Markov chain approximation plays a key role in building the iterative equations and in initializing the algorithm. We apply our method to classic dividend and reinsurance problems and compare the learning results with existing analytical solutions. The feasibility of our method for complicated problems is demonstrated by applying it to an optimal dividend, reinsurance and investment problem under a high-dimensional diffusion model with jumps and regime switching.
dc.language: eng
dc.publisher: Cambridge University Press. The journal's web site is located at http://journals.cambridge.org/action/displayJournal?jid=ASB
dc.relation.ispartof: ASTIN Bulletin
dc.rights: ASTIN Bulletin. Copyright © Cambridge University Press.
dc.rights: This article has been published in a revised form in ASTIN Bulletin [https://doi.org/10.1017/asb.2020.9]. This version is free to view and download for private research and study only. Not for re-distribution, re-sale or use in derivative works. © Astin Bulletin
dc.subject: Neural networks
dc.subject: deep learning
dc.subject: Markov chain approximation
dc.subject: reinsurance strategies
dc.title: OPTIMAL INSURANCE STRATEGIES: A HYBRID DEEP LEARNING MARKOV CHAIN APPROXIMATION APPROACH
dc.type: Article
dc.identifier.email: Yang, H: hlyang@hku.hk
dc.identifier.authority: Yang, H=rp00826
dc.description.nature: postprint
dc.identifier.doi: 10.1017/asb.2020.9
dc.identifier.scopus: eid_2-s2.0-85085154128
dc.identifier.hkuros: 314966
dc.identifier.volume: 50
dc.identifier.issue: 2
dc.identifier.spage: 449
dc.identifier.epage: 477
dc.identifier.isi: WOS:000535927300005
dc.publisher.place: United Kingdom
dc.identifier.issnl: 0515-0361
