Conference Paper: DiffusionPipe: Training Large Diffusion Models with Efficient Pipelines
Title | DiffusionPipe: Training Large Diffusion Models with Efficient Pipelines |
---|---|
Authors | Tian, Ye; Jia, Zhen; Luo, Ziyue; Wang, Yida; Wu, Chuan |
Issue Date | 15-May-2024 |
Abstract | Diffusion models have emerged as dominant performers for image generation. To support training large diffusion models, this paper studies pipeline parallel training of diffusion models and proposes DiffusionPipe, a synchronous pipeline training system that advocates an innovative pipeline bubble-filling technique catering to the structural characteristics of diffusion models. State-of-the-art diffusion models typically include trainable (the backbone) and non-trainable (e.g., frozen input encoders) parts. We first unify optimal stage partitioning and pipeline scheduling of single and multiple backbones in representative diffusion models with a dynamic programming approach. We then propose to fill the computation of non-trainable model parts into idle periods of the backbones' pipeline training with an efficient greedy algorithm, thus achieving high training throughput. Extensive experiments show that DiffusionPipe achieves up to 1.41x speedup over pipeline parallel methods and 1.28x speedup over data parallel training on popular diffusion models. |
Persistent Identifier | http://hdl.handle.net/10722/347513 |
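The abstract describes two algorithmic components. The first is a dynamic programming approach that partitions the trainable backbone into pipeline stages. Below is a minimal sketch of bottleneck-minimizing contiguous stage partitioning, assuming per-layer compute times are known; the function name `partition_stages` and the max-stage-time objective are illustrative assumptions, not the paper's actual formulation.

```python
from functools import lru_cache

def partition_stages(layer_times: list[float], num_stages: int) -> float:
    """Minimal bottleneck time for splitting layers into contiguous stages."""
    n = len(layer_times)
    prefix = [0.0]
    for t in layer_times:               # prefix sums for O(1) stage costs
        prefix.append(prefix[-1] + t)

    @lru_cache(maxsize=None)
    def best(i: int, s: int) -> float:
        # Optimal bottleneck for layers i..n-1 split into s stages.
        if s == 1:
            return prefix[n] - prefix[i]
        result = float("inf")
        # First stage takes layers i..j-1; leave >= s-1 layers for the rest.
        for j in range(i + 1, n - s + 2):
            result = min(result, max(prefix[j] - prefix[i], best(j, s - 1)))
        return result

    return best(0, num_stages)
```

For example, `partition_stages([1.0, 2.0, 3.0, 4.0], 2)` returns `6.0`, corresponding to the split `[1, 2, 3] | [4]`.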
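The second component fills pipeline bubbles with the non-trainable computation (e.g., forward passes of frozen input encoders). A minimal sketch, assuming the idle durations of the bubbles and the run times of the non-trainable tasks are known; the first-fit-decreasing strategy here is an illustrative stand-in for the paper's greedy algorithm.

```python
def fill_bubbles(bubble_sizes: list[float], task_times: list[float]):
    """Greedily pack non-trainable tasks into pipeline idle periods."""
    remaining = list(bubble_sizes)          # idle time left in each bubble
    schedule = [[] for _ in bubble_sizes]   # task indices placed per bubble
    # Place longest tasks first (first-fit decreasing), since large tasks
    # are the hardest to fit into the remaining slack.
    for task in sorted(range(len(task_times)), key=lambda t: -task_times[t]):
        for b, slack in enumerate(remaining):
            if task_times[task] <= slack:
                remaining[b] -= task_times[task]
                schedule[b].append(task)
                break  # placed; unplaced tasks would run outside the pipeline
    return schedule, remaining
```

The returned `remaining` slack shows how much idle time was recovered; any task absent from `schedule` did not fit in a bubble.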
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Tian, Ye | - |
dc.contributor.author | Jia, Zhen | - |
dc.contributor.author | Luo, Ziyue | - |
dc.contributor.author | Wang, Yida | - |
dc.contributor.author | Wu, Chuan | - |
dc.date.accessioned | 2024-09-24T00:30:41Z | - |
dc.date.available | 2024-09-24T00:30:41Z | - |
dc.date.issued | 2024-05-15 | - |
dc.identifier.uri | http://hdl.handle.net/10722/347513 | - |
dc.description.abstract | Diffusion models have emerged as dominant performers for image generation. To support training large diffusion models, this paper studies pipeline parallel training of diffusion models and proposes DiffusionPipe, a synchronous pipeline training system that advocates an innovative pipeline bubble-filling technique catering to the structural characteristics of diffusion models. State-of-the-art diffusion models typically include trainable (the backbone) and non-trainable (e.g., frozen input encoders) parts. We first unify optimal stage partitioning and pipeline scheduling of single and multiple backbones in representative diffusion models with a dynamic programming approach. We then propose to fill the computation of non-trainable model parts into idle periods of the backbones' pipeline training with an efficient greedy algorithm, thus achieving high training throughput. Extensive experiments show that DiffusionPipe achieves up to 1.41x speedup over pipeline parallel methods and 1.28x speedup over data parallel training on popular diffusion models. | -
dc.language | eng | - |
dc.relation.ispartof | The Seventh Conference on Machine Learning and Systems (MLSys) (13/05/2024-16/05/2024, Santa Clara) | - |
dc.title | DiffusionPipe: Training Large Diffusion Models with Efficient Pipelines | - |
dc.type | Conference_Paper | - |