Article: DL2: A Deep Learning-driven Scheduler for Deep Learning Clusters

Title: DL2: A Deep Learning-driven Scheduler for Deep Learning Clusters
Authors: Peng, Y; Bao, Y; Chen, Y; Wu, C; Meng, C; Lin, W
Keywords: Deep learning; resource allocation; distributed training
Issue Date: 2021
Publisher: Institute of Electrical and Electronics Engineers. The journal's web site is located at http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=71
Citation: IEEE Transactions on Parallel and Distributed Systems, 2021, v. 32 n. 8, p. 1947-1960
Abstract: Efficient resource scheduling is essential for maximal utilization of expensive deep learning (DL) clusters. Existing cluster schedulers either are agnostic to machine learning (ML) workload characteristics, or use scheduling heuristics based on operators' understanding of particular ML frameworks and workloads, which are less efficient or not general enough. In this article, we show that DL techniques can be adopted to design a generic and efficient scheduler. Specifically, we propose DL2, a DL-driven scheduler for DL clusters, targeting global training job expedition by dynamically resizing resources allocated to jobs. DL2 advocates a joint supervised learning and reinforcement learning approach: a neural network is warmed up via offline supervised learning based on job traces produced by the existing cluster scheduler; then the neural network is plugged into the live DL cluster, fine-tuned by reinforcement learning carried out throughout the training progress of the DL jobs, and used for deciding job resource allocation in an online fashion. We implement DL2 on Kubernetes and enable dynamic resource scaling in DL jobs on MXNet. Extensive evaluation shows that DL2 outperforms a fairness scheduler (i.e., DRF) by 44.1 percent and an expert heuristic scheduler (i.e., Optimus) by 17.5 percent in terms of average job completion time.
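The two-stage approach described in the abstract (offline supervised warm-up on traces from the incumbent scheduler, then online reinforcement-learning fine-tuning of the same policy) can be sketched in miniature. Everything below is illustrative, not DL2's actual implementation: DL2 trains a neural network on real cluster state with rewards derived from measured training speed, whereas this toy uses a linear softmax policy, random synthetic traces, and a fabricated reward function.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 4  # toy job-state features (stand-ins for job progress, queue length, ...)
N_ACTIONS = 3   # toy allocation choices (e.g., 1, 2, or 4 workers)

# Linear softmax policy: maps a job state to a distribution over allocations.
# DL2 uses a neural network here; a single linear layer keeps the sketch short.
W = rng.normal(scale=0.1, size=(N_FEATURES, N_ACTIONS))

def policy(state):
    logits = state @ W
    z = np.exp(logits - logits.max())  # numerically stable softmax
    return z / z.sum()

lr = 0.1

# Stage 1: offline supervised warm-up on (state, action) traces produced by
# the existing cluster scheduler (here: random synthetic pairs).
traces = [(rng.normal(size=N_FEATURES), int(rng.integers(N_ACTIONS)))
          for _ in range(200)]
for _ in range(20):
    for s, a in traces:
        p = policy(s)
        # Cross-entropy gradient: d(-log p[a])/dW = outer(s, p - onehot(a)).
        W -= lr * np.outer(s, p - np.eye(N_ACTIONS)[a])

# Stage 2: online REINFORCE fine-tuning. In DL2 the reward reflects measured
# training progress of jobs in the live cluster; this reward is made up.
def reward(state, action):
    return 1.0 if action == int(abs(state[0]) * 10) % N_ACTIONS else 0.0

baseline = 0.0  # moving-average baseline to reduce gradient variance
for _ in range(500):
    s = rng.normal(size=N_FEATURES)
    p = policy(s)
    a = int(rng.choice(N_ACTIONS, p=p))
    r = reward(s, a)
    baseline = 0.9 * baseline + 0.1 * r
    # Policy-gradient ascent: grad log pi(a|s) wrt W = outer(s, onehot(a) - p).
    W += lr * (r - baseline) * np.outer(s, np.eye(N_ACTIONS)[a] - p)
```

The design point this mirrors is that both stages update the same policy parameters: supervised warm-up gives the RL stage a sensible starting policy instead of random exploration in a production cluster.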
Persistent Identifier: http://hdl.handle.net/10722/301454
ISSN: 1045-9219
2023 Impact Factor: 5.6
2023 SCImago Journal Rankings: 2.340
ISI Accession Number ID: WOS:000622094200004

DC Field: Value
dc.contributor.author: Peng, Y
dc.contributor.author: Bao, Y
dc.contributor.author: Chen, Y
dc.contributor.author: Wu, C
dc.contributor.author: Meng, C
dc.contributor.author: Lin, W
dc.date.accessioned: 2021-07-27T08:11:19Z
dc.date.available: 2021-07-27T08:11:19Z
dc.date.issued: 2021
dc.identifier.citation: IEEE Transactions on Parallel and Distributed Systems, 2021, v. 32 n. 8, p. 1947-1960
dc.identifier.issn: 1045-9219
dc.identifier.uri: http://hdl.handle.net/10722/301454
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers. The journal's web site is located at http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=71
dc.relation.ispartof: IEEE Transactions on Parallel and Distributed Systems
dc.rights: IEEE Transactions on Parallel and Distributed Systems. Copyright © Institute of Electrical and Electronics Engineers.
dc.rights: ©20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Deep learning
dc.subject: resource allocation
dc.subject: distributed training
dc.title: DL2: A Deep Learning-driven Scheduler for Deep Learning Clusters
dc.type: Article
dc.identifier.email: Wu, C: cwu@cs.hku.hk
dc.identifier.authority: Wu, C=rp01397
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TPDS.2021.3052895
dc.identifier.scopus: eid_2-s2.0-85099732170
dc.identifier.hkuros: 323504
dc.identifier.volume: 32
dc.identifier.issue: 8
dc.identifier.spage: 1947
dc.identifier.epage: 1960
dc.identifier.isi: WOS:000622094200004
dc.publisher.place: United States
