Links for fulltext (may require subscription):
- Publisher Website (DOI): 10.1017/asb.2020.9
- Scopus: eid_2-s2.0-85085154128
- Web of Science: WOS:000535927300005
Article: OPTIMAL INSURANCE STRATEGIES: A HYBRID DEEP LEARNING MARKOV CHAIN APPROXIMATION APPROACH
Title | OPTIMAL INSURANCE STRATEGIES: A HYBRID DEEP LEARNING MARKOV CHAIN APPROXIMATION APPROACH |
---|---|
Authors | Cheng, X; Jin, Z; Yang, H |
Keywords | Neural networks; deep learning; Markov chain approximation; reinsurance strategies |
Issue Date | 2020 |
Publisher | Cambridge University Press. The Journal's web site is located at http://journals.cambridge.org/action/displayJournal?jid=ASB |
Citation | ASTIN Bulletin, 2020, v. 50 n. 2, p. 449-477 |
Abstract | This paper studies deep learning approaches to find optimal reinsurance and dividend strategies for insurance companies. Due to the randomness of the financial ruin time to terminate the control processes, a Markov chain approximation-based iterative deep learning algorithm is developed to study this type of infinite-horizon optimal control problems. The optimal controls are approximated as deep neural networks in both cases of regular and singular types of dividend strategies. The framework of Markov chain approximation plays a key role in building the iterative equations and initialization of the algorithm. We implement our method to classic dividend and reinsurance problems and compare the learning results with existing analytical solutions. The feasibility of our method for complicated problems has been demonstrated by applying to an optimal dividend, reinsurance and investment problem under a high-dimensional diffusive model with jumps and regime switching. |
Persistent Identifier | http://hdl.handle.net/10722/288162 |
ISSN | 0515-0361; 2021 Impact Factor: 2.545; 2020 SCImago Journal Rankings: 1.113 |
ISI Accession Number ID | WOS:000535927300005 |
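The abstract describes a Markov chain approximation (MCA) framework whose iterative equations anchor the deep learning algorithm. As a point of reference, here is a minimal grid-based sketch of the MCA value iteration for a classic singular dividend problem (de Finetti type): surplus follows dX = mu dt + sigma dW, ruin absorbs at 0, and expected discounted dividends are maximised. All parameters are illustrative assumptions, not taken from the paper, and the paper's contribution of replacing grid-based controls with deep neural networks is not reproduced here.

```python
import numpy as np

# Illustrative sketch (assumed parameters): Kushner-style Markov chain
# approximation for a singular dividend problem. The diffusion dX = mu dt
# + sigma dW is replaced by a birth-death chain on a grid of step h, and
# the value function is found by value iteration.
mu, sigma, r = 0.3, 1.0, 0.05    # drift, volatility, discount rate (assumed)
h, xmax = 0.05, 6.0              # grid step and surplus truncation level
n = int(round(xmax / h)) + 1

denom = sigma**2 + h * abs(mu)
dt = h**2 / denom                                  # interpolation time step
p_up = (sigma**2 / 2 + h * max(mu, 0.0)) / denom   # transition prob. to x+h
p_dn = (sigma**2 / 2 + h * max(-mu, 0.0)) / denom  # transition prob. to x-h
disc = 1.0 / (1.0 + r * dt)                        # one-step discount factor

V = np.zeros(n)                  # V[0] = 0: ruin state is absorbing
for _ in range(20000):
    cont = disc * (p_up * V[2:] + p_dn * V[:-2])   # continue: diffuse one step
    pay = V[:-2] + h                               # pay h immediately as dividend
    Vn = V.copy()
    Vn[1:-1] = np.maximum(cont, pay)               # singular control: act or wait
    Vn[-1] = Vn[-2] + h                            # surplus above xmax is paid out
    if np.max(np.abs(Vn - V)) < 1e-9:              # stop once iteration stabilises
        V = Vn
        break
    V = Vn

# A barrier strategy emerges: dividends are paid once 'pay' dominates 'cont'.
barrier = h * (1 + int(np.argmax(pay >= cont)))
```

In the paper's hybrid scheme, as the abstract indicates, the grid-based controls above are replaced by neural network approximations, with the MCA iterative equations supplying the training targets and the initialization; the random ruin time is what makes the horizon infinite and the iteration necessary.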
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Cheng, X | - |
dc.contributor.author | Jin, Z | - |
dc.contributor.author | Yang, H | - |
dc.date.accessioned | 2020-10-05T12:08:47Z | - |
dc.date.available | 2020-10-05T12:08:47Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | ASTIN Bulletin, 2020, v. 50 n. 2, p. 449-477 | - |
dc.identifier.issn | 0515-0361 | - |
dc.identifier.uri | http://hdl.handle.net/10722/288162 | - |
dc.description.abstract | This paper studies deep learning approaches to find optimal reinsurance and dividend strategies for insurance companies. Due to the randomness of the financial ruin time to terminate the control processes, a Markov chain approximation-based iterative deep learning algorithm is developed to study this type of infinite-horizon optimal control problems. The optimal controls are approximated as deep neural networks in both cases of regular and singular types of dividend strategies. The framework of Markov chain approximation plays a key role in building the iterative equations and initialization of the algorithm. We implement our method to classic dividend and reinsurance problems and compare the learning results with existing analytical solutions. The feasibility of our method for complicated problems has been demonstrated by applying to an optimal dividend, reinsurance and investment problem under a high-dimensional diffusive model with jumps and regime switching. | - |
dc.language | eng | - |
dc.publisher | Cambridge University Press. The Journal's web site is located at http://journals.cambridge.org/action/displayJournal?jid=ASB | - |
dc.relation.ispartof | ASTIN Bulletin | - |
dc.rights | ASTIN Bulletin. Copyright © Cambridge University Press. | - |
dc.rights | This article has been published in a revised form in ASTIN Bulletin [https://doi.org/10.1017/asb.2020.9]. This version is free to view and download for private research and study only. Not for re-distribution, re-sale or use in derivative works. © Astin Bulletin | - |
dc.subject | Neural networks | - |
dc.subject | deep learning | - |
dc.subject | Markov chain approximation | - |
dc.subject | reinsurance strategies | - |
dc.title | OPTIMAL INSURANCE STRATEGIES: A HYBRID DEEP LEARNING MARKOV CHAIN APPROXIMATION APPROACH | - |
dc.type | Article | - |
dc.identifier.email | Yang, H: hlyang@hku.hk | - |
dc.identifier.authority | Yang, H=rp00826 | - |
dc.description.nature | postprint | - |
dc.identifier.doi | 10.1017/asb.2020.9 | - |
dc.identifier.scopus | eid_2-s2.0-85085154128 | - |
dc.identifier.hkuros | 314966 | - |
dc.identifier.volume | 50 | - |
dc.identifier.issue | 2 | - |
dc.identifier.spage | 449 | - |
dc.identifier.epage | 477 | - |
dc.identifier.isi | WOS:000535927300005 | - |
dc.publisher.place | United Kingdom | - |
dc.identifier.issnl | 0515-0361 | - |