File Download
There are no files associated with this item.
Links for fulltext (may require subscription):
- Publisher Website: 10.1109/BigData47090.2019.9006104
- Scopus: eid_2-s2.0-85081290299
Citations:
- Scopus: 0
Appears in Collections:
Conference Paper: Demystifying Learning Rate Policies for High Accuracy Training of Deep Neural Networks
Title | Demystifying Learning Rate Policies for High Accuracy Training of Deep Neural Networks
---|---
Authors | Wu, Yanzhao; Liu, Ling; Bae, Juhyun; Chow, Ka Ho; Iyengar, Arun; Pu, Calton; Wei, Wenqi; Yu, Lei; Zhang, Qi
Keywords | Deep Learning; Learning Rates; Neural Networks; Training
Issue Date | 2019
Citation | Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019, 2019, p. 1971-1980
Abstract | Learning Rate (LR) is an important hyper-parameter to tune for effective training of deep neural networks (DNNs). Even for the baseline of a constant learning rate, it is non-trivial to choose a good constant value for training a DNN. Dynamic learning rates involve multi-step tuning of LR values at various stages of the training process and offer high accuracy and fast convergence. However, they are much harder to tune. In this paper, we present a comprehensive study of 13 learning rate functions and their associated LR policies by examining their range parameters, step parameters, and value update parameters. We propose a set of metrics for evaluating and selecting LR policies, including the classification confidence, variance, cost, and robustness, and implement them in LRBench, an LR benchmarking system. LRBench can assist end-users and DNN developers to select good LR policies and avoid bad LR policies for training their DNNs. We tested LRBench on Caffe, an open source deep learning framework, to showcase the tuning optimization of LR policies. Evaluated through extensive experiments, we attempt to demystify the tuning of LR policies by identifying good LR policies with effective LR value ranges and step sizes for LR update schedules.
Persistent Identifier | http://hdl.handle.net/10722/343296
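For context on the policies the abstract above enumerates: an LR policy pairs a learning rate function with its range, step, and value-update parameters. The sketch below is a minimal illustration, not the paper's LRBench code; it implements a few of the built-in LR schedules that Caffe (the framework the paper evaluates on) exposes, with parameter names (`base_lr`, `gamma`, `stepsize`, `power`, `max_iter`) following Caffe's conventions. The default values here are illustrative assumptions, not values from the paper.

```python
import math

def lr_at_iter(policy: str, it: int, base_lr: float = 0.01,
               gamma: float = 0.1, stepsize: int = 10000,
               power: float = 0.75, max_iter: int = 50000) -> float:
    """Learning rate at iteration `it` for several Caffe-style LR policies."""
    if policy == "fixed":    # constant-LR baseline
        return base_lr
    if policy == "step":     # multiply by gamma every `stepsize` iterations
        return base_lr * gamma ** (it // stepsize)
    if policy == "exp":      # exponential decay per iteration
        return base_lr * gamma ** it
    if policy == "inv":      # inverse-time decay
        return base_lr * (1.0 + gamma * it) ** (-power)
    if policy == "poly":     # polynomial decay, reaching zero at max_iter
        return base_lr * (1.0 - it / max_iter) ** power
    if policy == "sigmoid":  # sigmoid-shaped transition centered at `stepsize`
        return base_lr / (1.0 + math.exp(-gamma * (it - stepsize)))
    raise ValueError(f"unknown lr_policy: {policy}")

# Example: the step policy drops the LR by 10x every 10,000 iterations.
for it in (0, 9999, 10000, 20000):
    print(it, lr_at_iter("step", it))
```

With these defaults, the step policy holds the LR at 0.01 for the first 10,000 iterations, then drops it to 0.001 and later to 0.0001; the range, step, and value-update parameters the paper studies are exactly knobs like `base_lr`, `stepsize`, and `gamma` here.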
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wu, Yanzhao | - |
dc.contributor.author | Liu, Ling | - |
dc.contributor.author | Bae, Juhyun | - |
dc.contributor.author | Chow, Ka Ho | - |
dc.contributor.author | Iyengar, Arun | - |
dc.contributor.author | Pu, Calton | - |
dc.contributor.author | Wei, Wenqi | - |
dc.contributor.author | Yu, Lei | - |
dc.contributor.author | Zhang, Qi | - |
dc.date.accessioned | 2024-05-10T09:07:00Z | - |
dc.date.available | 2024-05-10T09:07:00Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019, 2019, p. 1971-1980 | - |
dc.identifier.uri | http://hdl.handle.net/10722/343296 | - |
dc.description.abstract | Learning Rate (LR) is an important hyper-parameter to tune for effective training of deep neural networks (DNNs). Even for the baseline of a constant learning rate, it is non-trivial to choose a good constant value for training a DNN. Dynamic learning rates involve multi-step tuning of LR values at various stages of the training process and offer high accuracy and fast convergence. However, they are much harder to tune. In this paper, we present a comprehensive study of 13 learning rate functions and their associated LR policies by examining their range parameters, step parameters, and value update parameters. We propose a set of metrics for evaluating and selecting LR policies, including the classification confidence, variance, cost, and robustness, and implement them in LRBench, an LR benchmarking system. LRBench can assist end-users and DNN developers to select good LR policies and avoid bad LR policies for training their DNNs. We tested LRBench on Caffe, an open source deep learning framework, to showcase the tuning optimization of LR policies. Evaluated through extensive experiments, we attempt to demystify the tuning of LR policies by identifying good LR policies with effective LR value ranges and step sizes for LR update schedules. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019 | - |
dc.subject | Deep Learning | - |
dc.subject | Learning Rates | - |
dc.subject | Neural Networks | - |
dc.subject | Training | - |
dc.title | Demystifying Learning Rate Policies for High Accuracy Training of Deep Neural Networks | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/BigData47090.2019.9006104 | - |
dc.identifier.scopus | eid_2-s2.0-85081290299 | - |
dc.identifier.spage | 1971 | - |
dc.identifier.epage | 1980 | - |