
Postgraduate thesis: Accelerating distributed DNN training in AI clouds

Title: Accelerating distributed DNN training in AI clouds
Authors: Peng, Yanghua (彭杨华)
Advisors: Wu, C
Issue Date: 2020
Publisher: The University of Hong Kong (Pokfulam, Hong Kong)
Citation: Peng, Y. [彭杨华]. (2020). Accelerating distributed DNN training in AI clouds. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
Abstract: The growing scale of data, model complexity and computation infrastructure has driven rapid progress in deep learning (DL) in recent years. DL is now used for a wide range of applications, such as computer vision, speech recognition, machine translation and online recommendation. Training deep neural networks (DNNs), however, is computation-hungry and time-consuming. Distributed training scales out and accelerates DNN training with multiple devices, but its performance is often poor in practice, due to communication overhead, unreasonable resource configuration and runtime interference. This thesis addresses the challenges that arise when scaling distributed DNN training in AI clouds, including training parallelization, resource scheduling and elasticity. Our approach is to exploit the characteristics of DL training to co-optimize DL training frameworks and cluster schedulers. Specifically, we design and implement three systems for scalable and efficient distributed DNN training: ByteScheduler, Optimus and DL2.

We first address the scalability of training DNNs on multiple devices in a data-parallel manner. ByteScheduler is a generic communication scheduler that accelerates distributed DNN training. We design a unified scheduling framework that abstracts communication scheduling away from different DL training frameworks, communication architectures and network protocols, without modifying their implementations. We further propose a principled scheduling algorithm that is theoretically optimal when there is no system overhead, identify the key system parameters (partition size and credit size), and design Bayesian Optimization-based auto-tuning so that the algorithm performs well in different runtime environments. Our implementation supports MXNet, PyTorch and TensorFlow, with both PS and all-reduce gradient synchronization, over either TCP or RDMA. ByteScheduler accelerates training across all experimented system configurations and DNN models, by up to 196%.

We then study the resource scheduling problem in which multiple DL training jobs share resources in AI clouds. Optimus is a customized GPU cluster scheduler that minimizes job training time based on online resource-performance models. Optimus uses online fitting to predict model convergence during training, and builds performance models that accurately estimate each job's training speed as a function of its allocated resources. Based on these models, a simple yet effective method dynamically allocates resources and places parameter server and worker tasks so as to minimize average job completion time. Experiments on a Kubernetes cluster show that Optimus achieves high job performance and resource efficiency, outperforming the DRF scheduler by 139%.

Finally, we investigate deep reinforcement learning for elastic and dynamic resource allocation. DL2 is a deep learning-driven scheduler for GPU clusters that expedites training jobs globally by maximizing cluster resource utilization. It advocates a joint supervised learning and reinforcement learning approach: the policy neural network is first warmed up via offline supervised learning on job traces produced by the existing cluster scheduler, and is then fine-tuned online via reinforcement learning. We implement DL2 on Kubernetes and enable dynamic resource scaling for DL jobs on MXNet. DL2 outperforms the expert heuristic scheduler by 17.5% in terms of average job completion time.
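The following sketches illustrate, in simplified form, the three mechanisms summarized above. They are not code from the thesis: all names, dimensions, functional forms and reward signals below are illustrative assumptions.

ByteScheduler partitions gradient tensors and sends the pieces in priority order, with a credit bounding how many bytes are in flight at once. A minimal sketch of that idea, assuming integer tensor sizes in bytes and a wave-based view of the credit:

```python
# Minimal sketch (not ByteScheduler's actual API): partition gradient tensors and
# emit the chunks in priority order, never exceeding a byte "credit" per wave.
import heapq

def partition(size_bytes, partition_size):
    """Split a tensor of size_bytes into chunk sizes of at most partition_size."""
    return [min(partition_size, size_bytes - off)
            for off in range(0, size_bytes, partition_size)]

def schedule_waves(tensors, partition_size, credit):
    """tensors: list of (layer_index, size_bytes). Earlier layers get higher
    priority, since the next iteration's forward pass needs them first.
    Yields lists of (layer_index, chunk_bytes) whose total size fits the credit."""
    heap = []
    for layer, size in tensors:
        for chunk in partition(size, partition_size):
            heapq.heappush(heap, (layer, chunk))
    wave, used = [], 0
    while heap:
        layer, chunk = heapq.heappop(heap)
        if wave and used + chunk > credit:
            yield wave                     # this wave is "in flight"; start a new one
            wave, used = [], 0
        wave.append((layer, chunk))
        used += chunk
    if wave:
        yield wave

# Example: three layers with 40 MB, 10 MB and 5 MB of gradients,
# 4 MB partitions, 16 MB credit.
for wave in schedule_waves([(0, 40_000_000), (1, 10_000_000), (2, 5_000_000)],
                           partition_size=4_000_000, credit=16_000_000):
    pass  # hand each wave to the underlying PS / all-reduce engine
```

In the full design, the partition size and credit size would themselves be tuned at runtime (e.g. via Bayesian Optimization) rather than fixed as here.

Optimus predicts how far each job is from convergence by fitting its observed loss curve online, and pairs this with a performance model of training speed versus allocated resources. A minimal sketch of the convergence-fitting half, assuming a 1/k-style loss model:

```python
# Minimal sketch (not Optimus's actual code): fit the observed loss curve of a
# running job and estimate how many more steps it needs to hit a target loss.
# The functional form loss(k) ~ 1 / (b0*k + b1) + b2 is an assumption.
import numpy as np
from scipy.optimize import curve_fit

def loss_model(k, b0, b1, b2):
    # Assumed SGD-style convergence curve: loss decays roughly as 1/k.
    return 1.0 / (b0 * k + b1) + b2

def predict_remaining_steps(steps, losses, target_loss):
    """Fit the (step, loss) history observed so far; return estimated steps left."""
    (b0, b1, b2), _ = curve_fit(loss_model,
                                np.asarray(steps, dtype=float),
                                np.asarray(losses, dtype=float),
                                p0=[1e-3, 1.0, 0.1], maxfev=10000)
    if target_loss <= b2:                  # target is below the fitted asymptote
        return float("inf")
    k_target = (1.0 / (target_loss - b2) - b1) / b0
    return max(0.0, k_target - steps[-1])
```

A companion speed model (training steps per second as a function of the numbers of parameter servers and workers) would then convert the predicted remaining steps into a predicted remaining time for each candidate allocation.

DL2 first warms up its policy network by imitating the decisions of the existing cluster scheduler (supervised learning) and then fine-tunes it online with reinforcement learning. A minimal sketch of that two-stage loop, assuming a small MLP policy and a REINFORCE update with a mean-reward baseline:

```python
# Minimal sketch (not DL2's actual implementation): a scheduling policy network
# warmed up by supervised learning on traces from the existing scheduler, then
# fine-tuned with a REINFORCE-style policy-gradient update.
# STATE_DIM, NUM_ACTIONS, the network shape and the reward are assumptions.
import torch
import torch.nn as nn

STATE_DIM, NUM_ACTIONS = 32, 8   # assumed encoding: cluster/job state -> resource decision
policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_ACTIONS))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def warmup_step(states, expert_actions):
    """Offline supervised learning: imitate the existing scheduler's decisions."""
    loss = nn.functional.cross_entropy(policy(states), expert_actions)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def reinforce_step(states, actions, rewards):
    """Online fine-tuning on observed (state, action, reward) tuples, where the
    reward reflects e.g. normalized training progress of the scheduled jobs."""
    dist = torch.distributions.Categorical(logits=policy(states))
    advantage = rewards - rewards.mean()   # simple mean baseline
    loss = -(dist.log_prob(actions) * advantage).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```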
Degree: Doctor of Philosophy
Subjects: Neural networks (Computer science); Machine learning
Dept/Program: Computer Science
Persistent Identifier: http://hdl.handle.net/10722/282303

 

DC Field: Value
dc.contributor.advisor: Wu, C
dc.contributor.author: Peng, Yanghua
dc.contributor.author: 彭杨华
dc.date.accessioned: 2020-05-07T07:17:18Z
dc.date.available: 2020-05-07T07:17:18Z
dc.date.issued: 2020
dc.identifier.citation: Peng, Y. [彭杨华]. (2020). Accelerating distributed DNN training in AI clouds. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
dc.identifier.uri: http://hdl.handle.net/10722/282303
dc.language: eng
dc.publisher: The University of Hong Kong (Pokfulam, Hong Kong)
dc.relation.ispartof: HKU Theses Online (HKUTO)
dc.rights: The author retains all proprietary rights (such as patent rights) and the right to use in future works.
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject.lcsh: Neural networks (Computer science)
dc.subject.lcsh: Machine learning
dc.title: Accelerating distributed DNN training in AI clouds
dc.type: PG_Thesis
dc.description.thesisname: Doctor of Philosophy
dc.description.thesislevel: Doctoral
dc.description.thesisdiscipline: Computer Science
dc.description.nature: published_or_final_version
dc.date.hkucongregation: 2020
dc.identifier.mmsid: 991044229569003414
