ColossalAI/tests
Latest commit 8e412a548e by Zhongkai Zhao: [shardformer] Sequence Parallelism Optimization (#5533), 8 months ago
| Name | Last commit | Last updated |
|------|-------------|--------------|
| kit | [shardformer] Sequence Parallelism Optimization (#5533) | 8 months ago |
| test_analyzer | [misc] update pre-commit and run all files (#4752) | 1 year ago |
| test_auto_parallel | [npu] change device to accelerator api (#5239) | 11 months ago |
| test_autochunk | [misc] update pre-commit and run all files (#4752) | 1 year ago |
| test_booster | [shardformer] fix pipeline forward error if custom layer distribution is used (#5189) | 8 months ago |
| test_checkpoint_io | [shardformer] Sequence Parallelism Optimization (#5533) | 8 months ago |
| test_cluster | [shardformer] Sequence Parallelism Optimization (#5533) | 8 months ago |
| test_config | [misc] update pre-commit and run all files (#4752) | 1 year ago |
| test_device | [misc] update pre-commit and run all files (#4752) | 1 year ago |
| test_fx | [misc] update pre-commit and run all files (#4752) | 1 year ago |
| test_gptq | [feature] add gptq for inference (#4754) | 1 year ago |
| test_infer | [Hotfix] Fix model policy matching strategy in ShardFormer (#5064) | 1 year ago |
| test_lazy | [example]add gpt2 benchmark example script. (#5295) | 9 months ago |
| test_legacy | [npu] change device to accelerator api (#5239) | 11 months ago |
| test_moe | [hotfix] set return_outputs=False in examples and polish code (#5404) | 8 months ago |
| test_optimizer | [shardformer]Fix lm parallel. (#5480) | 8 months ago |
| test_pipeline | [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508) | 8 months ago |
| test_shardformer | [shardformer] Sequence Parallelism Optimization (#5533) | 8 months ago |
| test_smoothquant | [inference] Add smmoothquant for llama (#4904) | 1 year ago |
| test_tensor | fixed layout converter caching and updated tester | 8 months ago |
| test_zero | [npu] change device to accelerator api (#5239) | 11 months ago |
| __init__.py | [zero] Update sharded model v2 using sharded param v2 (#323) | 3 years ago |