PIT: Optimization of Dynamic Sparse Deep Learning Models via Permutation Invariant Transformation
- Ningxin Zheng,
- Huiqiang Jiang,
- Quanlu Zhang,
- Zhenhua Han,
- Lingxiao Ma,
- Yuqing Yang,
- Fan Yang,
- Chengruidong Zhang,
- Lili Qiu,
- Mao Yang,
- Lidong Zhou
The 29th ACM Symposium on Operating Systems Principles, organized by ACM
Dynamic sparsity, where the sparsity patterns are unknown until runtime, poses a significant challenge to deep learning. State-of-the-art sparsity-aware deep learning solutions are restricted to pre-defined, static sparsity patterns due to the significant overhead of preprocessing. Efficient execution of dynamic sparse computation often faces a misalignment between the GPU-friendly tile configuration required for efficient execution and the sparsity-aware tile shape that minimizes coverage waste (i.e., computation spent covering zero values in the tensor).
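To make the tile-shape tension concrete, here is a toy sketch (my own illustration, not code or terminology from the paper beyond "coverage waste"): it counts the zero entries that a tiled kernel would needlessly compute, comparing a large GPU-friendly tile against a thin sparsity-aware micro-tile on the same sparse mask.

```python
# Toy illustration of "coverage waste": zero entries inside tiles that
# must still be computed because the tile contains at least one non-zero.
# Larger GPU-friendly tiles waste more on scattered sparsity; thin
# micro-tiles waste less but map poorly onto GPU execution.

def coverage_waste(mask, tile_h, tile_w):
    n = len(mask)
    waste = 0
    for ti in range(0, n, tile_h):
        for tj in range(0, n, tile_w):
            vals = [mask[i][j]
                    for i in range(ti, ti + tile_h)
                    for j in range(tj, tj + tile_w)]
            if any(vals):               # tile is touched by a non-zero
                waste += vals.count(0)  # zeros computed needlessly
    return waste

# 8x8 mask with four scattered non-zero entries
mask = [[0] * 8 for _ in range(8)]
for i, j in [(0, 1), (2, 5), (5, 2), (7, 6)]:
    mask[i][j] = 1

print(coverage_waste(mask, 4, 4))  # GPU-friendly 4x4 tiles -> 60 wasted zeros
print(coverage_waste(mask, 1, 4))  # 1x4 micro-tiles       -> 12 wasted zeros
```

The micro-tile shape covers far fewer zeros, but a real GPU kernel cannot run such thin tiles efficiently; this is exactly the misalignment the paper targets.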
In this paper, we propose PIT, a deep-learning compiler for dynamic sparsity. PIT proposes a novel tiling mechanism that leverages Permutation Invariant Transformation (PIT), a mathematically proven property, to transform multiple sparsely located micro-tiles into a GPU-efficient dense tile without changing the computation results, thus achieving both high GPU utilization and low coverage waste. Given a model, PIT first finds feasible PIT rules for all its operators and generates efficient GPU kernels accordingly. At runtime, with the SRead and SWrite primitives, PIT rules can be executed extremely fast to support dynamic sparsity in an online manner. Extensive evaluation on diverse models shows that PIT can accelerate dynamic sparsity computation by up to 5.9x over state-of-the-art compilers.
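The core property can be sketched in a few lines of plain Python (a hedged toy, not PIT's actual implementation; the "SRead-like"/"SWrite-like" labels below are my analogy to the paper's primitives): for matrix multiplication, gathering a permutation of rows of the left operand only permutes the corresponding output rows, so sparsely located non-zero micro-tiles can be gathered into one dense tile, computed densely, and scattered back without changing the result.

```python
# Hedged sketch of permutation invariance in matmul (toy code, not PIT):
# gather non-zero rows into a dense tile, multiply, scatter results back.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [0, 0], [3, 4], [0, 0], [5, 6]]  # rows 1 and 3 are all-zero
B = [[1, 0], [0, 1]]

nz = [i for i, row in enumerate(A) if any(row)]  # runtime sparsity: [0, 2, 4]
dense_tile = [A[i] for i in nz]                  # gather (SRead-like)
tile_out = matmul(dense_tile, B)                 # efficient dense compute

out = [[0, 0] for _ in A]
for k, i in enumerate(nz):                       # scatter (SWrite-like)
    out[i] = tile_out[k]

assert out == matmul(A, B)  # identical to the full dense result
```

Because the gather/scatter indices are cheap to build at runtime, this style of transformation can react to sparsity patterns that only become known during execution, which is the online scenario PIT targets.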