ColossalAI/colossalai/kernel
Latest commit d202cc28c0 by Hongxin Liu:
[npu] change device to accelerator api (#5239)
* update accelerator

* fix timer

* fix amp

* update

* fix

* update bug

* add error raise

* fix autocast

* fix set device

* remove doc accelerator

* update doc

* update doc

* update doc

* use nullcontext

* update cpu

* update null context

* change time limit for example

* update

* update

* update

* update

* [npu] polish accelerator code

---------

Co-authored-by: Xuanlei Zhao <xuanlei.zhao@gmail.com>
Co-authored-by: zxl <43881818+oahzxl@users.noreply.github.com>
Committed 2024-01-09 10:20:05 +08:00
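
The commit title and the "update accelerator" / "fix set device" notes describe moving kernel and runtime code off hard-coded torch.cuda calls and onto a backend-agnostic accelerator API, so the same code path can run on NVIDIA GPUs and Ascend NPUs. Below is a minimal sketch of that migration pattern; it assumes an accelerator facade reachable as colossalai.accelerator.get_accelerator() with set_device / get_current_device / synchronize methods, and the exact names may differ from the real ColossalAI API.

    import os
    import torch
    # Assumed import path for the accelerator facade used by this change;
    # verify against the installed ColossalAI version.
    from colossalai.accelerator import get_accelerator

    accelerator = get_accelerator()
    rank = int(os.environ.get("LOCAL_RANK", "0"))

    # Before: backend-specific calls tied to CUDA.
    #   torch.cuda.set_device(rank)
    #   x = torch.empty(1024, device=torch.cuda.current_device())
    #   torch.cuda.synchronize()

    # After: the same operations routed through the accelerator facade,
    # which dispatches to the appropriate backend (e.g. CUDA or NPU).
    accelerator.set_device(rank)
    x = torch.empty(1024, device=accelerator.get_current_device())
    accelerator.synchronize()

The point of the facade is that call sites no longer branch on the backend; selecting CUDA or NPU happens once inside get_accelerator().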
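
The "use nullcontext" and "fix autocast" notes point at a related pattern: when mixed precision is unavailable or disabled on a given backend, return contextlib.nullcontext() instead of an autocast context so call sites keep a single with statement. The helper below is an illustrative sketch under that assumption, not ColossalAI's actual API.

    from contextlib import nullcontext

    import torch

    def autocast_context(enabled: bool, device_type: str = "cuda", dtype=torch.float16):
        # Return a real autocast context when mixed precision is enabled,
        # otherwise a no-op context manager, so callers never need to branch.
        if enabled:
            return torch.autocast(device_type=device_type, dtype=dtype)
        return nullcontext()

    # The call site stays the same whether or not autocast is active.
    x = torch.randn(4, 4)
    with autocast_context(enabled=False):
        y = x @ x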
Name                       Last commit message                                     Last commit date
cuda_native                [npu] use extension for op builder (#5172)              2024-01-08 11:39:16 +08:00
extensions                 [npu] change device to accelerator api (#5239)          2024-01-09 10:20:05 +08:00
jit                        [npu] change device to accelerator api (#5239)          2024-01-09 10:20:05 +08:00
triton                     [Kernels] added flash-decoding of triton (#5063)        2023-11-20 13:58:29 +08:00
__init__.py                [npu] use extension for op builder (#5172)              2024-01-08 11:39:16 +08:00
base_kernel_loader.py      [npu] use extension for op builder (#5172)              2024-01-08 11:39:16 +08:00
cpu_adam_loader.py         [npu] use extension for op builder (#5172)              2024-01-08 11:39:16 +08:00
flash_attention_loader.py  [npu] use extension for op builder (#5172)              2024-01-08 11:39:16 +08:00
op_builder                 [builder] reconfig op_builder for pypi install (#2314)  2023-01-04 16:32:32 +08:00