ColossalAI/colossalai/auto_parallel/offload

Latest commit: d202cc28c0 by Hongxin Liu
[npu] change device to accelerator api (#5239)
* update accelerator
* fix timer
* fix amp
* update
* fix
* fix bug
* add error raise
* fix autocast
* fix set device
* remove doc accelerator
* update doc
* use nullcontext
* update cpu
* update null context
* change time limit for example
* update
* [npu] polish accelerator code

---------

Co-authored-by: Xuanlei Zhao <xuanlei.zhao@gmail.com>
Co-authored-by: zxl <43881818+oahzxl@users.noreply.github.com>
2024-01-09 10:20:05 +08:00
__init__.py [auto-parallel] add auto-offload feature (#3154) 2023-03-21 14:17:41 +08:00
amp_optimizer.py [npu] change device to accelerator api (#5239) 2024-01-09 10:20:05 +08:00
base_offload_module.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
mem_optimize.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
region.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
region_manager.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
runtime.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
solver.py [npu] change device to accelerator api (#5239) 2024-01-09 10:20:05 +08:00
training_simulator.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
util.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00