Conference Paper: Scheduling Large-scale Distributed Training via Reinforcement Learning

Title: Scheduling Large-scale Distributed Training via Reinforcement Learning
Authors: Peng, Zhanglin; Ren, Jiamin; Zhang, Ruimao; Wu, Lingyun; Wang, Xinjiang; Luo, Ping
Keywords: Convolutional Neural Network; Reinforcement Learning; Optimization; Deep Learning
Issue Date: 2019
Citation: Proceedings - 2018 IEEE International Conference on Big Data, Big Data 2018, 2019, p. 1797-1806
Abstract: © 2018 IEEE. Scheduling the training procedure of deep neural networks (DNNs), such as tuning the learning rates, is crucial to the success of deep learning. Previous strategies such as piecewise and exponential learning rate schedulers have different arguments (hyper-parameters) that need to be tuned manually. As data scale and model computation grow, searching for these arguments requires substantial empirical effort. To address this issue, this work proposes a policy scheduler that determines the arguments of the learning rate (lr) by reinforcement learning, significantly reducing the cost of tuning them. The policy scheduler has several appealing benefits. First, instead of requiring manually defined initial and final lr values, it determines these values autonomously during training. Second, rather than updating the lr with predefined functions, it adaptively oscillates the lr by monitoring learning curves, without human intervention. Third, it can select an lr for each block or layer of a DNN. Experiments show that DNNs trained with the policy scheduler achieve superior performance, outperforming previous work on various tasks and benchmarks such as ImageNet, COCO, and learning-to-learn.
Persistent Identifier: http://hdl.handle.net/10722/273691
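To make the idea in the abstract concrete, below is a minimal, hypothetical sketch (in PyTorch) of a reinforcement-learning-driven learning-rate scheduler: a small policy network maps recent learning-curve statistics to a multiplicative lr action for each optimizer parameter group, so different blocks or layers can receive different learning rates. This is not the paper's implementation; the class and method names, the action set, and the choice of input features are illustrative assumptions.

# Hypothetical sketch of an RL-driven learning-rate scheduler (not the authors' code).
# Names such as PolicySchedulerSketch and the action multipliers are assumptions.
import torch
import torch.nn as nn

class PolicySchedulerSketch(nn.Module):
    """Tiny policy network that maps recent training-loss history to a
    discrete lr action (halve / keep / double) for each parameter group."""

    ACTIONS = (0.5, 1.0, 2.0)  # multiplicative lr adjustments (assumed action set)

    def __init__(self, num_groups, history=5):
        super().__init__()
        self.history = history
        self.num_groups = num_groups
        # One output head per parameter group, so lr can differ per block/layer.
        self.policy = nn.Sequential(
            nn.Linear(history, 16),
            nn.Tanh(),
            nn.Linear(16, len(self.ACTIONS) * num_groups),
        )

    def forward(self, loss_history):
        # loss_history: 1-D tensor of shape (history,) with recent training losses.
        logits = self.policy(loss_history).view(self.num_groups, len(self.ACTIONS))
        return torch.distributions.Categorical(logits=logits)

    def step(self, optimizer, loss_history):
        """Sample one lr action per parameter group and apply it in place.
        Returns the summed log-probability so an outer loop can update the
        policy from a reward signal."""
        dist = self.forward(loss_history)
        actions = dist.sample()  # shape: (num_groups,)
        for group, a in zip(optimizer.param_groups, actions.tolist()):
            group["lr"] *= self.ACTIONS[a]
        return dist.log_prob(actions).sum()

In a full training loop, the returned log-probabilities would be weighted by a reward (for example, the change in validation accuracy) and used to update the policy with a REINFORCE-style gradient; this is the general mechanism the abstract describes for learning the schedule rather than hand-tuning it.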

 

DC Field: Value
dc.contributor.author: Peng, Zhanglin
dc.contributor.author: Ren, Jiamin
dc.contributor.author: Zhang, Ruimao
dc.contributor.author: Wu, Lingyun
dc.contributor.author: Wang, Xinjiang
dc.contributor.author: Luo, Ping
dc.date.accessioned: 2019-08-12T09:56:22Z
dc.date.available: 2019-08-12T09:56:22Z
dc.date.issued: 2019
dc.identifier.citation: Proceedings - 2018 IEEE International Conference on Big Data, Big Data 2018, 2019, p. 1797-1806
dc.identifier.uri: http://hdl.handle.net/10722/273691
dc.description.abstract: © 2018 IEEE. Scheduling the training procedure of deep neural networks (DNNs), such as tuning the learning rates, is crucial to the success of deep learning. Previous strategies such as piecewise and exponential learning rate schedulers have different arguments (hyper-parameters) that need to be tuned manually. As data scale and model computation grow, searching for these arguments requires substantial empirical effort. To address this issue, this work proposes a policy scheduler that determines the arguments of the learning rate (lr) by reinforcement learning, significantly reducing the cost of tuning them. The policy scheduler has several appealing benefits. First, instead of requiring manually defined initial and final lr values, it determines these values autonomously during training. Second, rather than updating the lr with predefined functions, it adaptively oscillates the lr by monitoring learning curves, without human intervention. Third, it can select an lr for each block or layer of a DNN. Experiments show that DNNs trained with the policy scheduler achieve superior performance, outperforming previous work on various tasks and benchmarks such as ImageNet, COCO, and learning-to-learn.
dc.language: eng
dc.relation.ispartof: Proceedings - 2018 IEEE International Conference on Big Data, Big Data 2018
dc.subject: Convolutional Neural Network
dc.subject: Reinforcement Learning
dc.subject: Optimization
dc.subject: Deep Learning
dc.title: Scheduling Large-scale Distributed Training via Reinforcement Learning
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/BigData.2018.8622264
dc.identifier.scopus: eid_2-s2.0-85062613824
dc.identifier.spage: 1797
dc.identifier.epage: 1806
