ColossalAI/colossalai/accelerator
README.md
__init__.py
api.py
base_accelerator.py
cpu_accelerator.py
cuda_accelerator.py
npu_accelerator.py


🚀 Accelerator

🔗 Table of Contents

- 📚 Introduction
- 📌 Design and Acknowledgement

📚 Introduction

This module offers a layer of abstraction for ColossalAI. With it, users can easily switch between different accelerator backends, such as NVIDIA GPUs and Huawei NPUs. The module aims to make user code portable across different hardware platforms with a simple auto_set_accelerator() API.
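
A typical workflow looks like the minimal sketch below. It assumes the module also exposes a `get_accelerator()` helper and a `get_current_device()` method alongside the `auto_set_accelerator()` API mentioned above; treat the exact names as assumptions rather than a definitive reference.

```python
import torch
from colossalai.accelerator import auto_set_accelerator, get_accelerator

# Detect the available backend (CUDA, NPU, CPU, ...) and set it globally.
auto_set_accelerator()

# Retrieve the active accelerator; the rest of the code stays backend-agnostic.
accelerator = get_accelerator()
device = accelerator.get_current_device()
x = torch.randn(4, 4, device=device)
```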

📌 Design and Acknowledgement

Our accelerator module is heavily inspired by deepspeed/accelerator. We found it to be a well-designed and well-structured module that could be easily integrated into our project. We would like to thank the DeepSpeed team for their great work.

We implemented this accelerator module from scratch and made the following modifications:

  1. We updated the accelerator API names to align with PyTorch's native API names, as illustrated in the sketch after this list.
  2. We did not include the op builder in the accelerator. Instead, we restructured our kernel module to automatically match the accelerator with its corresponding kernel implementations, so that the modules remain less tangled.
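
As an illustrative sketch of point 1, accelerator methods are expected to mirror their `torch.cuda` counterparts; the specific method names below are assumptions based on that alignment, not a definitive API listing.

```python
import torch
from colossalai.accelerator import get_accelerator

accelerator = get_accelerator()

# Method names assumed to mirror torch.cuda's native API:
accelerator.manual_seed(42)   # cf. torch.cuda.manual_seed(42)
accelerator.synchronize()     # cf. torch.cuda.synchronize()
accelerator.empty_cache()     # cf. torch.cuda.empty_cache()

# Tensors are placed on whichever device the active accelerator reports.
x = torch.randn(8, 8, device=accelerator.get_current_device())
```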