POSTER: Efficient All-reduce for Distributed DNN Training in Optical Interconnect Systems
All-reduce is a key communication primitive used to aggregate model parameters in distributed Deep Neural Network (DNN) training. Most existing all-reduce algorithms are designed for traditional electrical interconnect systems, which cannot meet the communication requirements of distributed training for large DNNs due to their low data bandwidth. One promising alternative to electrical interconnects is optical interconnect, which offers high bandwidth, low transmission delay, and low power consumption. We propose an efficient scheme called WRHT (Wavelength Reused Hierarchical Tree) for implementing the all-reduce operation in optical interconnect systems. WRHT exploits WDM (Wavelength Division Multiplexing) to reduce the communication time of distributed data-parallel DNN training. Simulations using real DNN models show that, compared to all-reduce algorithms in electrical and optical network systems, our approach reduces communication time by 75.76% and 91.86%, respectively.
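To make the primitive concrete, below is a minimal sketch of the all-reduce semantics the poster targets: every worker starts with its own gradient vector and ends with the elementwise sum across all workers. This toy version reduces up a binary tree and then broadcasts back down; WRHT itself maps such a hierarchical tree onto WDM wavelengths in an optical interconnect, which this sketch does not model. All names here are illustrative, not from the paper.

```python
def tree_allreduce(grads):
    """grads: list of equal-length lists, one per worker.

    Returns a list of buffers, one per worker, each holding the
    elementwise sum of every worker's input (all-reduce result).
    """
    n = len(grads)
    result = [list(g) for g in grads]  # each worker's local buffer

    # Reduce phase: at each tree level, worker i absorbs the partial
    # sum of worker i + step, halving the number of active workers.
    step = 1
    while step < n:
        for i in range(0, n - step, 2 * step):
            partner = i + step
            result[i] = [a + b for a, b in zip(result[i], result[partner])]
        step *= 2

    # Broadcast phase: the root (worker 0) pushes the full sum back out.
    for i in range(1, n):
        result[i] = list(result[0])
    return result

workers = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
print(tree_allreduce(workers)[2])  # every worker ends with [16.0, 20.0]
```

In a real system each reduce/broadcast hop is a network transfer; WRHT's contribution is scheduling those hops so that wavelengths are reused across tree levels, shrinking the total communication time.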
Sun 26 Feb. Displayed time zone: Eastern Time (US & Canada).
18:00 - 20:00

18:00 (2h) Poster | POSTER: Stream-K: Work-centric Parallel Decomposition for Dense Matrix-Matrix Multiplication on the GPU | Main Conference | Muhammad Osama (University of California, Davis), Duane Merrill (NVIDIA Corporation), Cris Cecka (NVIDIA Corporation), Michael Garland (NVIDIA), John D. Owens (University of California, Davis) | Pre-print
18:00 (2h) Poster | POSTER: Unexpected Scaling in Path Copying Trees | Main Conference | Vitaly Aksenov (Inria & ITMO University), Trevor Brown (University of Toronto), Alexander Fedorov (IST Austria), Ilya Kokorin (ITMO University)
18:00 (2h) Poster | POSTER: Transactional Composition of Nonblocking Data Structures | Main Conference | Wentao Cai (University of Rochester), Haosen Wen (University of Rochester), Michael L. Scott (University of Rochester)
18:00 (2h) Poster | POSTER: The ERA Theorem for Safe Memory Reclamation | Main Conference
18:00 (2h) Poster | POSTER: AArch64 Atomics: Might they be harming your performance? | Main Conference
18:00 (2h) Poster | POSTER: Fast Parallel Exact Inference on Bayesian Networks | Main Conference | Jiantong Jiang (The University of Western Australia), Zeyi Wen (The Hong Kong University of Science and Technology (Guangzhou)), Atif Mansoor (The University of Western Australia), Ajmal Mian (The University of Western Australia)
18:00 (2h) Poster | POSTER: High-Throughput GPU Random Walk with Fine-tuned Concurrent Query Processing | Main Conference | Cheng Xu (Shanghai Jiao Tong University), Chao Li (Shanghai Jiao Tong University), Pengyu Wang (Shanghai Jiao Tong University), Xiaofeng Hou (Hong Kong University of Science and Technology), Jing Wang (Shanghai Jiao Tong University), Shixuan Sun (National University of Singapore), Minyi Guo (Shanghai Jiao Tong University), Hanqing Wu (Alibaba Inc), Dongbai Chen (Alibaba Inc), Xiangwen Liu (Alibaba Inc)
18:00 (2h) Poster | POSTER: Efficient All-reduce for Distributed DNN Training in Optical Interconnect Systems | Main Conference | Fei Dai (University of Otago), Yawen Chen (University of Otago), Zhiyi Huang (University of Otago), Haibo Zhang (University of Otago), Fangfang Zhang (Qilu University of Technology)
18:00 (2h) Poster | POSTER: CuPBoP: A framework to make CUDA portable | Main Conference | Ruobing Han (Georgia Institute of Technology), Jun Chen (Georgia Institute of Technology), Bhanu Garg (Georgia Institute of Technology), Jeffrey Young (Georgia Institute of Technology), Jaewoong Sim (Seoul National University), Hyesoon Kim (Georgia Tech)
18:00 (2h) Poster | POSTER: Generating Fast FFT Kernels on CPUs via FFT-Specific Intrinsics | Main Conference | Zhihao Li (SKLP, Institute of Computing Technology, CAS), Haipeng Jia (SKLP, Institute of Computing Technology, CAS), Yunquan Zhang (SKLP, Institute of Computing Technology, CAS), Yuyan Sun (Huawei Technologies Co., Ltd), Yiwei Zhang (SKLP, Institute of Computing Technology, CAS), Tun Chen (SKLP, Institute of Computing Technology, CAS)
18:00 (2h) Poster | POSTER: Learning to Parallelize in a Shared-Memory Environment with Transformers | Main Conference | Pre-print