ColossalAI/colossalai
Latest commit: 5c6c5d6be3 "remove comments" by genghaozhe, 6 months ago
Name          | Last commit                                                                                                | Updated
_C            |                                                                                                            |
_analyzer     | [hotfix] Fix examples no pad token & auto parallel codegen bug; (#5606)                                    | 7 months ago
accelerator   | [hotfix] fix typo change MoECheckpintIO to MoECheckpointIO (#5335)                                         | 9 months ago
amp           | [npu] change device to accelerator api (#5239)                                                             | 11 months ago
auto_parallel | [misc] refactor launch API and tensor constructor (#5666)                                                  | 7 months ago
autochunk     | [hotfix] Fix examples no pad token & auto parallel codegen bug; (#5606)                                    | 7 months ago
booster       | Merge branch 'prefetch' of github.com:botbw/ColossalAI into botbw-prefetch                                 | 6 months ago
checkpoint_io | [lora] add lora APIs for booster, support lora for TorchDDP (#4981)                                        | 7 months ago
cli           | [devops] fix extention building (#5427)                                                                    | 9 months ago
cluster       | [Feature] Distributed optimizers: Lamb, Galore, CAME and Adafactor (#5694)                                 | 6 months ago
context       | [Fix]: implement thread-safety singleton to avoid deadlock for very large-scale training scenarios (#5625) | 7 months ago
device        | [Feature] Distributed optimizers: Lamb, Galore, CAME and Adafactor (#5694)                                 | 6 months ago
fx            | [hotfix] Fix examples no pad token & auto parallel codegen bug; (#5606)                                    | 7 months ago
inference     | [misc] refactor launch API and tensor constructor (#5666)                                                  | 7 months ago
interface     | [Feature] Distributed optimizers: Lamb, Galore, CAME and Adafactor (#5694)                                 | 6 months ago
kernel        | [coloattention]modify coloattention (#5627)                                                                | 7 months ago
lazy          | [doc] add lazy init docs (#4808)                                                                           | 1 year ago
legacy        | [hotfix] fix inference typo (#5438)                                                                        | 7 months ago
logging       | [misc] update pre-commit and run all files (#4752)                                                         | 1 year ago
moe           | [hotfix] fix typo change MoECheckpintIO to MoECheckpointIO (#5335)                                         | 9 months ago
nn            | [Feature] Distributed optimizers: Lamb, Galore, CAME and Adafactor (#5694)                                 | 6 months ago
pipeline      | [LowLevelZero] low level zero support lora (#5153)                                                         | 7 months ago
quantization  | [Feature] qlora support (#5586)                                                                            | 7 months ago
shardformer   | [Shardformer]fix the num_heads assert for llama model and qwen model (#5704)                               | 7 months ago
tensor        | [Feature] Distributed optimizers: Lamb, Galore, CAME and Adafactor (#5694)                                 | 6 months ago
testing       | [shardformer] refactor embedding resize (#5603)                                                            | 7 months ago
utils         | Merge pull request #5310 from hpcaitech/feature/npu                                                        | 10 months ago
zero          | remove comments                                                                                            | 6 months ago
__init__.py   | [devops] remove post commit ci (#5566)                                                                     | 8 months ago
initialize.py | [misc] refactor launch API and tensor constructor (#5666)                                                  | 7 months ago
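For orientation, a minimal, hedged sketch of how two of the entries above fit together: initialize.py provides the distributed launch entry point, and booster/ wraps a model/optimizer pair in a parallelism plugin. The exact signatures (launch_from_torch taking no config after the #5666 refactor, TorchDDPPlugin as the plugin name) are assumptions about this revision and may differ from the checked-in code.

```python
# Hedged usage sketch, not the canonical API of this revision.
# Assumes the post-#5666 launch signature and the TorchDDP plugin;
# run under torchrun so the distributed env vars are already set.
import torch
import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import TorchDDPPlugin

colossalai.launch_from_torch()               # initialize.py: set up the process group
booster = Booster(plugin=TorchDDPPlugin())   # booster/: pick a parallelism strategy

model = torch.nn.Linear(8, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# boost() returns the wrapped model and optimizer (criterion, dataloader,
# and lr_scheduler slots are left as None here).
model, optimizer, *_ = booster.boost(model, optimizer)
```

Other plugins under booster/ (e.g. the ZeRO- or shardformer-backed ones touched by the commits above) would slot into the same Booster call.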