POSTER: High-Throughput GPU Random Walk with Fine-tuned Concurrent Query Processing
Random walk serves as a powerful tool for processing large-scale graphs, reducing data size while preserving structural information. Unfortunately, existing frameworks all focus on the serial execution of a single walker task. In this work, we show that the conventional execution model cannot fully unleash the potential of today's GPU cores, and that simply performing coarse-grained space sharing among multiple tasks incurs unexpected performance interference. Based on these observations, we propose CoWalker, a high-throughput GPU random walk framework tailored for concurrent random walk tasks. CoWalker introduces a multi-level concurrent execution model and a multi-dimensional scheduler that allow concurrent random walk tasks to share GPU resources efficiently with low overhead. By reorganizing memory access patterns, it reduces the number of stalled GPU cores, which leads to higher throughput. Our system prototype confirms that CoWalker outperforms the state of the art by up to 54% in a wide spectrum of scenarios.
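To make the workload concrete, the following is a minimal CPU sketch of the kind of walker task the abstract describes: sampling fixed-length random walks over a graph stored in CSR form. This is illustrative only, not CoWalker's implementation; the function name, the CSR layout (`indptr`/`indices`), and the toy graph are assumptions for the example.

```python
import random

def random_walks(indptr, indices, starts, walk_len, seed=0):
    """Sample one random walk of at most walk_len vertices per start
    vertex over a CSR graph. A walk stops early at a vertex with no
    out-neighbors. Systems like CoWalker run many such walker tasks
    concurrently on the GPU; this serial version only shows the logic."""
    rng = random.Random(seed)
    walks = []
    for start in starts:
        walk = [start]
        cur = start
        for _ in range(walk_len - 1):
            lo, hi = indptr[cur], indptr[cur + 1]
            if lo == hi:  # dead end: no outgoing edges
                break
            cur = indices[rng.randrange(lo, hi)]  # uniform neighbor pick
            walk.append(cur)
        walks.append(walk)
    return walks

# Tiny 4-vertex graph: edges 0->1, 0->2, 1->2, 2->3; vertex 3 is a sink.
indptr = [0, 2, 3, 4, 4]
indices = [1, 2, 2, 3]
print(random_walks(indptr, indices, starts=[0, 1], walk_len=4))
```

Each walk is an independent task touching scattered neighbor lists, which is why memory access patterns dominate GPU throughput for this workload.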
Sun 26 Feb (displayed time zone: Eastern Time, US & Canada)
18:00 - 20:00 Poster Session (Main Conference; each poster 2h)

- POSTER: Stream-K: Work-centric Parallel Decomposition for Dense Matrix-Matrix Multiplication on the GPU. Muhammad Osama (University of California, Davis), Duane Merrill (NVIDIA Corporation), Cris Cecka (NVIDIA Corporation), Michael Garland (NVIDIA), John D. Owens (University of California, Davis). Pre-print available.
- POSTER: Unexpected Scaling in Path Copying Trees. Vitaly Aksenov (Inria & ITMO University), Trevor Brown (University of Toronto), Alexander Fedorov (IST Austria), Ilya Kokorin (ITMO University).
- POSTER: Transactional Composition of Nonblocking Data Structures. Wentao Cai (University of Rochester), Haosen Wen (University of Rochester), Michael L. Scott (University of Rochester).
- POSTER: The ERA Theorem for Safe Memory Reclamation.
- POSTER: AArch64 Atomics: Might they be harming your performance?
- POSTER: Fast Parallel Exact Inference on Bayesian Networks. Jiantong Jiang (The University of Western Australia), Zeyi Wen (The Hong Kong University of Science and Technology (Guangzhou)), Atif Mansoor (The University of Western Australia), Ajmal Mian (The University of Western Australia).
- POSTER: High-Throughput GPU Random Walk with Fine-tuned Concurrent Query Processing. Cheng Xu (Shanghai Jiao Tong University), Chao Li (Shanghai Jiao Tong University), Pengyu Wang (Shanghai Jiao Tong University), Xiaofeng Hou (Hong Kong University of Science and Technology), Jing Wang (Shanghai Jiao Tong University), Shixuan Sun (National University of Singapore), Minyi Guo (Shanghai Jiao Tong University), Hanqing Wu (Alibaba Inc), Dongbai Chen (Alibaba Inc), Xiangwen Liu (Alibaba Inc).
- POSTER: Efficient All-reduce for Distributed DNN Training in Optical Interconnect Systems. Fei Dai (University of Otago), Yawen Chen (University of Otago), Zhiyi Huang (University of Otago), Haibo Zhang (University of Otago), Fangfang Zhang (Qilu University of Technology).
- POSTER: CuPBoP: A framework to make CUDA portable. Ruobing Han (Georgia Institute of Technology), Jun Chen (Georgia Institute of Technology), Bhanu Garg (Georgia Institute of Technology), Jeffrey Young (Georgia Institute of Technology), Jaewoong Sim (Seoul National University), Hyesoon Kim (Georgia Tech).
- POSTER: Generating Fast FFT Kernels on CPUs via FFT-Specific Intrinsics. Zhihao Li (SKLP, Institute of Computing Technology, CAS), Haipeng Jia (SKLP, Institute of Computing Technology, CAS), Yunquan Zhang (SKLP, Institute of Computing Technology, CAS), Yuyan Sun (Huawei Technologies Co., Ltd), Yiwei Zhang (SKLP, Institute of Computing Technology, CAS), Tun Chen (SKLP, Institute of Computing Technology, CAS).
- POSTER: Learning to Parallelize in a Shared-Memory Environment with Transformers. Pre-print available.