Chaoyang He (USC), Shen Li (Facebook AI), Mahdi Soltanolkotabi (USC), Salman Avestimehr (USC)
[Arxiv]
Highlights:
(1) This research was conducted in collaboration with the Facebook PyTorch team. PipeTransformer is the first research project developed using PyTorch Pipe (torch.distributed.pipeline) (nightly version 1.8.0.dev20201219).
(2) On the open-source side, we contributed to PyTorch Pipe through API design discussions, system performance analysis, and bug fixes. PyTorch Pipe was released around March 2021.
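For readers unfamiliar with the PyTorch Pipe API mentioned above, the following is a minimal usage sketch based on the public torch.distributed.pipeline documentation, not on the PipeTransformer codebase itself; it assumes a single process with at least two GPUs.

```python
# Minimal sketch of torch.distributed.pipeline.sync.Pipe usage
# (from the public PyTorch docs, not the PipeTransformer code). Requires >= 2 GPUs.
import os

import torch
import torch.nn as nn
from torch.distributed import rpc
from torch.distributed.pipeline.sync import Pipe

# Pipe relies on the RPC framework, even for a single-process pipeline.
os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
rpc.init_rpc("worker", rank=0, world_size=1)

# Each stage of the nn.Sequential is placed on its own device.
stage0 = nn.Linear(16, 8).cuda(0)
stage1 = nn.Linear(8, 4).cuda(1)
model = nn.Sequential(stage0, stage1)

# `chunks` controls how many micro-batches each mini-batch is split into.
pipe = Pipe(model, chunks=8)

x = torch.rand(32, 16).cuda(0)
output = pipe(x).local_value()  # forward returns an RRef
print(output.shape)  # torch.Size([32, 4])

rpc.shutdown()
```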
Abstract
The size of Transformer models is growing at an unprecedented pace. Less than a year after the release of GPT-3 (175B), models have reached the trillion-parameter scale. Training such models requires both substantial engineering effort and enormous computing resources, luxuries most research teams cannot afford. In this paper, we propose PipeTransformer, which leverages automated and elastic pipelining and data parallelism for efficient distributed training of Transformer models. PipeTransformer automatically adjusts the pipeline and data parallelism by identifying and freezing some layers during training and reallocating resources to train the remaining active layers. More specifically, PipeTransformer dynamically excludes converged layers from the pipeline, packs the active layers onto fewer GPUs, and forks more replicas to increase the data-parallel width. We evaluate PipeTransformer using Vision Transformer (ViT) on ImageNet and BERT on the GLUE and SQuAD datasets. Our results show that PipeTransformer attains a 2.4× speedup compared to the state-of-the-art baseline. We also provide various performance analyses for a more comprehensive understanding of our algorithmic and system-wise design. Finally, we develop open-source, flexible APIs for PipeTransformer, which offer a clean separation among the freeze algorithm, model definitions, and training accelerations, allowing it to be applied to other algorithms that require similar freezing strategies.
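The following toy script illustrates the resource-reallocation idea in the abstract (shrink the pipeline as layers freeze, widen data parallelism with the freed GPUs). It is a conceptual sketch only: the function name `repartition` and all of its parameters are hypothetical placeholders for illustration, not the actual PipeTransformer API.

```python
# Toy illustration (not the authors' implementation) of trading pipeline depth
# for data-parallel width as more layers are frozen: the remaining active
# layers are packed onto fewer GPUs, and the freed GPUs host extra replicas.
def repartition(total_gpus, total_layers, frozen_layers, layers_per_gpu):
    """Return (pipeline_length_in_gpus, data_parallel_width). Hypothetical helper."""
    active_layers = total_layers - frozen_layers
    # Pack the remaining active layers onto as few GPUs as possible (ceil division).
    pipeline_len = max(1, -(-active_layers // layers_per_gpu))
    # Fork as many pipeline replicas as the GPU budget allows.
    dp_width = max(1, total_gpus // pipeline_len)
    return pipeline_len, dp_width

if __name__ == "__main__":
    # Example budget: 8 GPUs, a 24-layer model, 3 layers per GPU initially.
    for frozen in (0, 6, 12, 18):
        plen, dp = repartition(total_gpus=8, total_layers=24,
                               frozen_layers=frozen, layers_per_gpu=3)
        print(f"frozen={frozen:2d}  pipeline GPUs={plen}  replicas={dp}")
```

Running the sketch shows the intended trend: with 0 frozen layers the pipeline spans all 8 GPUs with a single replica, while with 18 frozen layers it shrinks to 2 GPUs and forks 4 data-parallel replicas.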