Links for fulltext (may require subscription):
- Publisher Website: https://doi.org/10.1016/j.insmatheco.2020.11.012
- Scopus: eid_2-s2.0-85098599122
- WOS: WOS:000608020200019
Article: A hybrid deep learning method for optimal insurance strategies: Algorithms and convergence analysis
| Title | A hybrid deep learning method for optimal insurance strategies: Algorithms and convergence analysis |
|---|---|
| Authors | Jin, Z; Yang, H; Yin, G |
| Keywords | Neural network; Deep learning; Markov chain approximation; Stochastic approximation; Investment; Reinsurance; Dividend management; Convergence |
| Issue Date | 2021 |
| Publisher | Elsevier BV. The Journal's web site is located at http://www.elsevier.com/locate/ime |
| Citation | Insurance: Mathematics and Economics, 2021, v. 96, p. 262-275 |
| Abstract | This paper develops a hybrid deep learning approach to find optimal reinsurance, investment, and dividend strategies for an insurance company in a complex stochastic system. A jump–diffusion regime-switching model with infinite horizon subject to ruin is formulated for the surplus process. An iterative deep learning algorithm based on Markov chain approximation and stochastic approximation is developed to study this type of infinite-horizon optimal control problem. Approximations of the optimal controls are obtained by using deep neural networks. The framework of Markov chain approximation plays a key role in building iterative algorithms and finding initial values. Stochastic approximation is used to search for the optimal parameters of the neural networks in a bounded region determined by the Markov chain approximation method. The convergence of the algorithm is proved, and the rate of convergence is provided. |
| Persistent Identifier | http://hdl.handle.net/10722/304754 |
| ISSN | 0167-6687 |
| 2023 Impact Factor | 1.9 |
| 2023 SCImago Journal Rankings | 1.113 |
| ISI Accession Number ID | WOS:000608020200019 |
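The stochastic-approximation step described in the abstract, searching for neural network parameters within a bounded region via a Robbins–Monro-type projected update, can be sketched as follows. This is a minimal illustration with a toy network and a stand-in objective: the network sizes, the objective, the step sizes, and the bound `B` are all hypothetical, not taken from the paper, where the bounded region would instead be derived from the Markov chain approximation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer network standing in for the control approximator.
# All names and sizes here are illustrative, not the paper's architecture.
W1 = rng.normal(size=(4, 8)) * 0.1
W2 = rng.normal(size=(8, 1)) * 0.1

def control(x, W1, W2):
    """Feed-forward approximation of a control, squashed to (0, 1)."""
    h = np.tanh(x @ W1)
    return 1.0 / (1.0 + np.exp(-(h @ W2)))

def noisy_grad(W1, W2):
    """Monte Carlo (noisy) gradient of a stand-in quadratic objective.
    In the paper's setting the noisy observations would come from
    simulated surplus paths; here we simply penalize the deviation of
    the control from a fixed target value of 0.5."""
    x = rng.normal(size=(32, 4))             # simulated states (illustrative)
    h = np.tanh(x @ W1)
    u = 1.0 / (1.0 + np.exp(-(h @ W2)))
    err = u - 0.5                            # dL/du for L = mean(0.5 * err^2)
    d_z = err * u * (1.0 - u)                # chain rule through the sigmoid
    dW2 = h.T @ d_z / len(x)
    dW1 = x.T @ ((d_z @ W2.T) * (1.0 - h**2)) / len(x)
    return dW1, dW2

# Bound on the parameter region; a fixed box here, whereas the paper
# determines this region from the Markov chain approximation.
B = 5.0

for n in range(1, 201):
    a_n = 1.0 / n                            # Robbins-Monro step size
    g1, g2 = noisy_grad(W1, W2)
    W1 = np.clip(W1 - a_n * g1, -B, B)       # projected SA update
    W2 = np.clip(W2 - a_n * g2, -B, B)
```

The projection (`np.clip`) keeps every iterate inside the bounded region, which is what makes the convergence analysis of such projected stochastic-approximation schemes tractable; the decreasing step sizes `a_n = 1/n` satisfy the usual Robbins–Monro conditions.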
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Jin, Z | - |
| dc.contributor.author | Yang, H | - |
| dc.contributor.author | Yin, G | - |
| dc.date.accessioned | 2021-10-05T02:34:42Z | - |
| dc.date.available | 2021-10-05T02:34:42Z | - |
| dc.date.issued | 2021 | - |
| dc.identifier.citation | Insurance: Mathematics and Economics, 2021, v. 96, p. 262-275 | - |
| dc.identifier.issn | 0167-6687 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/304754 | - |
| dc.description.abstract | This paper develops a hybrid deep learning approach to find optimal reinsurance, investment, and dividend strategies for an insurance company in a complex stochastic system. A jump–diffusion regime-switching model with infinite horizon subject to ruin is formulated for the surplus process. An iterative deep learning algorithm based on Markov chain approximation and stochastic approximation is developed to study this type of infinite-horizon optimal control problem. Approximations of the optimal controls are obtained by using deep neural networks. The framework of Markov chain approximation plays a key role in building iterative algorithms and finding initial values. Stochastic approximation is used to search for the optimal parameters of the neural networks in a bounded region determined by the Markov chain approximation method. The convergence of the algorithm is proved, and the rate of convergence is provided. | - |
| dc.language | eng | - |
| dc.publisher | Elsevier BV. The Journal's web site is located at http://www.elsevier.com/locate/ime | - |
| dc.relation.ispartof | Insurance: Mathematics and Economics | - |
| dc.subject | Neural network | - |
| dc.subject | Deep learning | - |
| dc.subject | Markov chain approximation | - |
| dc.subject | Stochastic approximation | - |
| dc.subject | Investment | - |
| dc.subject | Reinsurance | - |
| dc.subject | Dividend management | - |
| dc.subject | Convergence | - |
| dc.title | A hybrid deep learning method for optimal insurance strategies: Algorithms and convergence analysis | - |
| dc.type | Article | - |
| dc.identifier.email | Yang, H: hlyang@hku.hk | - |
| dc.identifier.email | Yin, G: gyin@hku.hk | - |
| dc.identifier.authority | Yang, H=rp00826 | - |
| dc.identifier.authority | Yin, G=rp00831 | - |
| dc.description.nature | link_to_OA_fulltext | - |
| dc.identifier.doi | 10.1016/j.insmatheco.2020.11.012 | - |
| dc.identifier.scopus | eid_2-s2.0-85098599122 | - |
| dc.identifier.hkuros | 326290 | - |
| dc.identifier.volume | 96 | - |
| dc.identifier.spage | 262 | - |
| dc.identifier.epage | 275 | - |
| dc.identifier.isi | WOS:000608020200019 | - |
| dc.publisher.place | Netherlands | - |
