ColossalAI/colossalai
Latest commit: a9b5ec8664 "fix the build before load bug" (wangbinluo, 2024-01-10 14:50:40 +08:00)
| Name | Latest commit | Date |
| --- | --- | --- |
| _C | | |
| _analyzer | | |
| accelerator | [hotfix] removed unused flag (#5242) | 2024-01-09 14:57:07 +08:00 |
| amp | [npu] change device to accelerator api (#5239) | 2024-01-09 10:20:05 +08:00 |
| auto_parallel | [npu] change device to accelerator api (#5239) | 2024-01-09 10:20:05 +08:00 |
| autochunk | | |
| booster | [hotfix] removed unused flag (#5242) | 2024-01-09 14:57:07 +08:00 |
| checkpoint_io | [pipeline,shardformer] Fix p2p efficiency in pipeline, allow skipping loading weight not in weight_map when `strict=False`, fix llama flash attention forward, add flop estimation by megatron in llama benchmark (#5017) | 2023-11-16 20:15:59 +08:00 |
| cli | | |
| cluster | [gemini] gemini support tensor parallelism. (#4942) | 2023-11-10 10:15:16 +08:00 |
| context | | |
| device | [npu] add npu support for hybrid plugin and llama (#5090) | 2023-11-22 19:23:21 +08:00 |
| fx | | |
| inference | [Hotfix] Fix model policy matching strategy in ShardFormer (#5064) | 2023-11-22 11:19:39 +08:00 |
| interface | | |
| kernel | fix the build before load bug | 2024-01-10 14:50:40 +08:00 |
| lazy | | |
| legacy | [npu] change device to accelerator api (#5239) | 2024-01-09 10:20:05 +08:00 |
| logging | | |
| moe | [npu] change device to accelerator api (#5239) | 2024-01-09 10:20:05 +08:00 |
| nn | [npu] use extension for op builder (#5172) | 2024-01-08 11:39:16 +08:00 |
| pipeline | [npu] change device to accelerator api (#5239) | 2024-01-09 10:20:05 +08:00 |
| shardformer | [npu] change device to accelerator api (#5239) | 2024-01-09 10:20:05 +08:00 |
| tensor | [hotfix]: modify create_ep_hierarchical_group and add test (#5032) | 2023-11-17 10:53:00 +08:00 |
| testing | [npu] change device to accelerator api (#5239) | 2024-01-09 10:20:05 +08:00 |
| utils | [npu] change device to accelerator api (#5239) | 2024-01-09 10:20:05 +08:00 |
| zero | [npu] change device to accelerator api (#5239) | 2024-01-09 10:20:05 +08:00 |
| __init__.py | [accelerator] init the accelerator module (#5129) | 2023-11-30 13:25:17 +08:00 |
| initialize.py | [npu] change device to accelerator api (#5239) | 2024-01-09 10:20:05 +08:00 |
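For orientation, the sketch below shows how a few of the top-level pieces listed above typically fit together: `initialize.py` provides the distributed launch entry point, the `accelerator` module (introduced in #5129 and adopted across the tree in #5239) abstracts the device backend, and `booster` wraps the model and optimizer for a chosen parallelism plugin. This is a minimal sketch against the API around this commit, not a canonical example from the repo; the plugin choice, the toy model, and the empty `config` dict are illustrative assumptions.

```python
# Minimal sketch of ColossalAI's top-level entry points (run under torchrun).
# TorchDDPPlugin and the toy model/optimizer are illustrative placeholders.
import torch
import colossalai
from colossalai.accelerator import get_accelerator
from colossalai.booster import Booster
from colossalai.booster.plugin import TorchDDPPlugin

# initialize.py: set up the distributed environment from torchrun env vars
colossalai.launch_from_torch(config={})

# accelerator: device abstraction over backends such as CUDA and NPU
device = get_accelerator().get_current_device()

model = torch.nn.Linear(16, 4).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# booster: wrap model/optimizer according to the selected parallelism plugin
booster = Booster(plugin=TorchDDPPlugin())
model, optimizer, *_ = booster.boost(model, optimizer)
```

Under this reading of the tree, `kernel` and `nn` (see the build-before-load fix and the op-builder change #5172) supply the fused kernels that the plugins draw on, while `shardformer`, `pipeline`, and `zero` implement the individual parallelism strategies that other `Booster` plugins compose.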