Conference Paper: A Sum-of-Ratios Multi-Dimensional-Knapsack Decomposition for DNN Resource Scheduling
Title | A Sum-of-Ratios Multi-Dimensional-Knapsack Decomposition for DNN Resource Scheduling |
---|---|
Authors | Yu, M; Wu, C; Ji, B; Liu, J |
Issue Date | 2021 |
Publisher | IEEE Computer Society. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000359 |
Citation | IEEE International Conference on Computer Communications (INFOCOM), Virtual Conference, Vancouver, BC, Canada, 10-13 May 2021, p. 1-10 |
Abstract | In recent years, to sustain the resource-intensive computational needs of training deep neural networks (DNNs), it is widely accepted that exploiting the parallelism of large-scale computing clusters is critical for the efficient deployment of DNN training jobs. However, existing resource schedulers for traditional computing clusters are not well suited for DNN training, which results in unsatisfactory job completion time performance. The limitations of these resource scheduling schemes motivate us to propose a new computing cluster resource scheduling framework that is able to leverage the special layered structure of DNN jobs and significantly improve their job completion times. Our contributions in this paper are three-fold: i) We develop a new resource scheduling analytical model by considering DNN’s layered structure, which enables us to analytically formulate the resource scheduling optimization problem for DNN training in computing clusters; ii) Based on the proposed performance analytical model, we then develop an efficient resource scheduling algorithm based on the widely adopted parameter-server architecture, using a sum-of-ratios multi-dimensional-knapsack decomposition (SMD) method to offer a strong performance guarantee; iii) We conduct extensive numerical experiments to demonstrate the effectiveness of the proposed scheduling algorithm and its superior performance over the state of the art. |
Persistent Identifier | http://hdl.handle.net/10722/301292 |
ISSN | 0743-166X; 2023 SCImago Journal Rankings: 2.865 |
ISI Accession Number ID | WOS:000702210400247 |
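The abstract above names the problem class but does not reproduce the paper's formulation or the SMD algorithm. The following Python sketch is only a hypothetical, illustrative toy of a sum-of-ratios multi-dimensional-knapsack problem: each job receives an integer number of workers, each job contributes a ratio that depends on its allocation, and worker demands must fit multi-dimensional (CPU/GPU) capacities. All job names, numbers, and the ratio form gain·w/(overhead + w) are invented assumptions, and brute force is used only because the toy instance is tiny; it is not the paper's model or method.

```python
# Hypothetical toy sketch of a sum-of-ratios multi-dimensional knapsack
# (illustration only; NOT the paper's analytical model or SMD algorithm).
from itertools import product

jobs = {
    # job: (throughput gain per worker, fixed per-iteration overhead,
    #       CPU per worker, GPU per worker) -- all values are made up.
    "job_a": (6.0, 2.0, 2, 1),
    "job_b": (4.0, 1.0, 1, 1),
    "job_c": (5.0, 3.0, 1, 2),
}
capacity = (6, 4)      # total (CPU, GPU) available in the cluster
max_workers = 4        # per-job cap, to keep the brute-force search tiny

def objective(alloc):
    """Sum over jobs of gain*w / (overhead + w): one ratio per job."""
    total = 0.0
    for (gain, overhead, _, _), w in zip(jobs.values(), alloc):
        if w > 0:
            total += gain * w / (overhead + w)
    return total

def feasible(alloc):
    """Multi-dimensional knapsack constraints on CPU and GPU."""
    cpu = sum(spec[2] * w for spec, w in zip(jobs.values(), alloc))
    gpu = sum(spec[3] * w for spec, w in zip(jobs.values(), alloc))
    return cpu <= capacity[0] and gpu <= capacity[1]

# Enumerate all integer worker allocations and keep the best feasible one.
best = max(
    (a for a in product(range(max_workers + 1), repeat=len(jobs)) if feasible(a)),
    key=objective,
)
print(dict(zip(jobs, best)), "objective =", round(objective(best), 3))
```

The sum-of-ratios objective is what makes this problem class hard: unlike a plain knapsack, each job's contribution is nonlinear in its allocation, which is why the paper proposes a dedicated decomposition method rather than a generic scheduler.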
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yu, M | - |
dc.contributor.author | Wu, C | - |
dc.contributor.author | Ji, B | - |
dc.contributor.author | Liu, J | - |
dc.date.accessioned | 2021-07-27T08:08:58Z | - |
dc.date.available | 2021-07-27T08:08:58Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | IEEE International Conference on Computer Communications (INFOCOM), Virtual Conference, Vancouver, BC, Canada, 10-13 May 2021, p. 1-10 | - |
dc.identifier.issn | 0743-166X | - |
dc.identifier.uri | http://hdl.handle.net/10722/301292 | - |
dc.description.abstract | In recent years, to sustain the resource-intensive computational needs of training deep neural networks (DNNs), it is widely accepted that exploiting the parallelism of large-scale computing clusters is critical for the efficient deployment of DNN training jobs. However, existing resource schedulers for traditional computing clusters are not well suited for DNN training, which results in unsatisfactory job completion time performance. The limitations of these resource scheduling schemes motivate us to propose a new computing cluster resource scheduling framework that is able to leverage the special layered structure of DNN jobs and significantly improve their job completion times. Our contributions in this paper are three-fold: i) We develop a new resource scheduling analytical model by considering DNN’s layered structure, which enables us to analytically formulate the resource scheduling optimization problem for DNN training in computing clusters; ii) Based on the proposed performance analytical model, we then develop an efficient resource scheduling algorithm based on the widely adopted parameter-server architecture, using a sum-of-ratios multi-dimensional-knapsack decomposition (SMD) method to offer a strong performance guarantee; iii) We conduct extensive numerical experiments to demonstrate the effectiveness of the proposed scheduling algorithm and its superior performance over the state of the art. | -
dc.language | eng | - |
dc.publisher | IEEE Computer Society. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000359 | - |
dc.relation.ispartof | IEEE INFOCOM - IEEE Conference on Computer Communications | - |
dc.rights | IEEE INFOCOM - IEEE Conference on Computer Communications. Copyright © IEEE Computer Society. | - |
dc.rights | ©2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | - |
dc.title | A Sum-of-Ratios Multi-Dimensional-Knapsack Decomposition for DNN Resource Scheduling | -
dc.type | Conference_Paper | - |
dc.identifier.email | Wu, C: cwu@cs.hku.hk | - |
dc.identifier.authority | Wu, C=rp01397 | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/INFOCOM42981.2021.9488916 | - |
dc.identifier.scopus | eid_2-s2.0-85111902614 | - |
dc.identifier.hkuros | 323508 | - |
dc.identifier.spage | 1 | - |
dc.identifier.epage | 10 | - |
dc.identifier.isi | WOS:000702210400247 | - |
dc.publisher.place | United States | - |