Conference Paper: Fast Training of Deep Learning Models over Multiple GPUs

Title: Fast Training of Deep Learning Models over Multiple GPUs
Authors: Yi, X; Luo, Z; Meng, C; Wang, M; Long, G; Wu, C; Yang, J; Lin, W
Keywords: Distributed training; data parallel; model parallel
Issue Date: 2020
Publisher: Association for Computing Machinery (ACM).
Citation: Proceedings of the 21st International Middleware Conference 2020 (Middleware '20), Virtual Conference, Delft, the Netherlands, 7-11 December 2020, p. 105-118
Abstract: This paper proposes FastT, a transparent module to work with the TensorFlow framework for automatically identifying a satisfying deployment and execution order of operations in DNN models over multiple GPUs, for expedited model training. We propose white-box algorithms to compute the strategies with small computing resource consumption in a short time. Recently, similar studies have been done to optimize device placement using reinforcement learning. Compared to those works which learn to optimize device placement of operations in several hours using large amounts of computing resources, our approach can find excellent device placement and execution order within minutes using the same computing node as for training. We design a list of scheduling algorithms to compute the device placement and execution order for each operation and also design an algorithm to split operations in the critical path to support fine-grained (mixed) data and model parallelism to further improve the training speed in each iteration. We compare FastT with representative strategies and obtain insights on the best strategies for training different types of DNN models based on extensive testbed experiments.
Persistent Identifier: http://hdl.handle.net/10722/301416
ISBN: 9781450381536
ISI Accession Number ID: WOS:000684175200008
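
For intuition only: the abstract above describes placing DNN operations on multiple GPUs and choosing their execution order. The Python sketch below implements a generic earliest-finish-time list scheduler over a toy operation graph; the operation names, costs, fixed communication cost, and two-GPU setup are illustrative assumptions and do not reproduce FastT's actual scheduling or op-splitting algorithms.

```python
# Illustrative sketch only: a simple earliest-finish-time list scheduler.
# This is NOT the FastT algorithm; the toy ops, costs, and comm model are assumptions.
from collections import defaultdict

def schedule(ops, deps, num_gpus=2, comm_cost=1.0):
    """Greedily place each op (visited in topological order) on the GPU
    where it can finish earliest, charging a fixed cost for cross-GPU inputs."""
    gpu_free = [0.0] * num_gpus              # time at which each GPU becomes idle
    finish, placement = {}, {}
    order = defaultdict(list)                # per-GPU execution order
    for op, cost in ops:                     # ops are assumed topologically sorted
        best = None                          # (finish_time, gpu)
        for g in range(num_gpus):
            # op can start once the GPU is free and all inputs are available
            ready = max([gpu_free[g]] +
                        [finish[d] + (comm_cost if placement[d] != g else 0.0)
                         for d in deps.get(op, [])])
            if best is None or ready + cost < best[0]:
                best = (ready + cost, g)
        finish[op], placement[op] = best
        gpu_free[best[1]] = best[0]
        order[best[1]].append(op)
    return placement, dict(order), max(finish.values())

# Toy graph (hypothetical): conv1 feeds conv2 and pool, which both feed fc.
ops = [("conv1", 3.0), ("conv2", 2.0), ("pool", 1.0), ("fc", 2.0)]
deps = {"conv2": ["conv1"], "pool": ["conv1"], "fc": ["conv2", "pool"]}
placement, order, makespan = schedule(ops, deps)
print(placement)   # e.g. {'conv1': 0, 'conv2': 0, 'pool': 1, 'fc': 0}
print(makespan)    # estimated iteration time under this toy cost model
```

Under this toy cost model the scheduler overlaps independent branches on different GPUs; the paper's contribution is a richer set of white-box scheduling algorithms plus critical-path op splitting for mixed data and model parallelism.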

 

DC Field | Value | Language
dc.contributor.author | YI, X | -
dc.contributor.author | Luo, Z | -
dc.contributor.author | Meng, C | -
dc.contributor.author | Wang, M | -
dc.contributor.author | Long, G | -
dc.contributor.author | Wu, C | -
dc.contributor.author | Yang, J | -
dc.contributor.author | Lin, W | -
dc.date.accessioned | 2021-07-27T08:10:44Z | -
dc.date.available | 2021-07-27T08:10:44Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | Proceedings of the 21st International Middleware Conference 2020 (Middleware '20), Virtual Conference, Delft, the Netherlands, 7-11 December 2020, p. 105-118 | -
dc.identifier.isbn | 9781450381536 | -
dc.identifier.uri | http://hdl.handle.net/10722/301416 | -
dc.description.abstract | This paper proposes FastT, a transparent module to work with the TensorFlow framework for automatically identifying a satisfying deployment and execution order of operations in DNN models over multiple GPUs, for expedited model training. We propose white-box algorithms to compute the strategies with small computing resource consumption in a short time. Recently, similar studies have been done to optimize device placement using reinforcement learning. Compared to those works which learn to optimize device placement of operations in several hours using large amounts of computing resources, our approach can find excellent device placement and execution order within minutes using the same computing node as for training. We design a list of scheduling algorithms to compute the device placement and execution order for each operation and also design an algorithm to split operations in the critical path to support fine-grained (mixed) data and model parallelism to further improve the training speed in each iteration. We compare FastT with representative strategies and obtain insights on the best strategies for training different types of DNN models based on extensive testbed experiments. | -
dc.language | eng | -
dc.publisher | Association for Computing Machinery (ACM). | -
dc.relation.ispartof | Proceedings of the 21st International Middleware Conference (Middleware '20) | -
dc.subject | Distributed training | -
dc.subject | data parallel | -
dc.subject | model parallel | -
dc.title | Fast Training of Deep Learning Models over Multiple GPUs | -
dc.type | Conference_Paper | -
dc.identifier.email | Wu, C: cwu@cs.hku.hk | -
dc.identifier.authority | Wu, C=rp01397 | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1145/3423211.3425675 | -
dc.identifier.scopus | eid_2-s2.0-85098523720 | -
dc.identifier.hkuros | 323511 | -
dc.identifier.spage | 105 | -
dc.identifier.epage | 118 | -
dc.identifier.isi | WOS:000684175200008 | -
dc.publisher.place | New York, NY | -
