Browsing by Author Diao, L

Showing results 1 to 6 of 6
Title / Issue Date / Views

1. Accelerating Large-Scale Distributed Neural Network Training with SPMD Parallelism. Proceeding/Conference: SoCC '22: Proceedings of the 13th Symposium on Cloud Computing, 2022.
2. DAPPLE: A Pipelined Data Parallel Approach for Training Large Models. Proceeding/Conference: Proceedings of the 26th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 2021. 12 views.
3. (title unavailable). 2004. 125 views.
4. Optimizing distributed training deployment in heterogeneous GPU clusters. Proceeding/Conference: Proceedings of the 16th International Conference on emerging Networking EXperiments and Technologies, 2020. 10 views.
5. Optimizing DNN Compilation for Distributed Training With Joint OP and Tensor Fusion. Journal: IEEE Transactions on Parallel and Distributed Systems, 2022.
6. (title unavailable). 2021.