ColossalAI/tests
Latest commit e614aa34f3 by Wenhao Chen (2024-04-01 11:34:58 +08:00):
[shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508)

* feat: add `GradientCheckpointConfig` and `PipelineGradientCheckpointConfig` (a usage sketch follows this list)
* feat: apply `GradientCheckpointConfig` to policy and llama_forward
* feat: move `distribute_layer` and `get_stage_index` to PipelineStageManager (sketched after this list)
* fix: add optional args for `distribute_layer` and `get_stage_index`
* fix: fix changed API calls
* test: update llama tests
* style: polish `GradientCheckpointConfig`
* fix: fix pipeline utils tests
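To make the first bullet concrete, here is a minimal sketch of what a `gradient_checkpointing_ratio` means in practice. This is an illustration, not ColossalAI's actual `PipelineGradientCheckpointConfig` API: the `StageWithPartialCheckpointing` class and its constructor are assumptions. The idea is that with N layers on a pipeline stage and ratio r, only the first ceil(r * N) layers recompute their activations in the backward pass; the rest cache activations as usual.

```python
# Hypothetical sketch of partial gradient checkpointing on one pipeline stage.
# Only `gradient_checkpointing_ratio` semantics come from the commit; the
# wrapper class and its signature are assumptions for illustration.
import math

import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class StageWithPartialCheckpointing(nn.Module):
    """Checkpoint only a leading fraction of this stage's layers."""

    def __init__(self, layers: nn.ModuleList, gradient_checkpointing_ratio: float):
        super().__init__()
        self.layers = layers
        # With ratio r and N layers, recompute the first ceil(r * N) layers.
        self.num_ckpt_layers = math.ceil(gradient_checkpointing_ratio * len(layers))

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        for idx, layer in enumerate(self.layers):
            if self.training and idx < self.num_ckpt_layers:
                # This layer's activations are recomputed during backward,
                # trading extra compute for lower activation memory.
                hidden_states = checkpoint(layer, hidden_states, use_reentrant=False)
            else:
                hidden_states = layer(hidden_states)
        return hidden_states


# Usage: checkpoint half of an 8-layer stage.
stage = StageWithPartialCheckpointing(
    nn.ModuleList([nn.Linear(16, 16) for _ in range(8)]),
    gradient_checkpointing_ratio=0.5,
)
stage.train()
out = stage(torch.randn(2, 16, requires_grad=True))
out.sum().backward()
```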
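The commit also moves `distribute_layer` and `get_stage_index` onto `PipelineStageManager`, with optional args that enable a heterogeneous shard policy. A sketch of the idea under assumed signatures follows: the function names echo the commit, but the even-split default and the `layers_per_stage` override are guesses, not ColossalAI's exact code.

```python
# Hypothetical sketch of layer distribution across pipeline stages, in the
# spirit of `distribute_layer`/`get_stage_index`; signatures are assumptions.
from typing import List, Optional, Tuple


def distribute_layers(
    num_layers: int,
    num_stages: int,
    layers_per_stage: Optional[List[int]] = None,
) -> List[int]:
    """Return how many layers each pipeline stage owns."""
    if layers_per_stage is not None:
        # Heterogeneous shard policy: the caller pins the per-stage split,
        # e.g. to lighten stages that also hold embeddings or the LM head.
        assert len(layers_per_stage) == num_stages
        assert sum(layers_per_stage) == num_layers
        return layers_per_stage
    # Default: split as evenly as possible; earlier stages absorb the remainder.
    quotient, remainder = divmod(num_layers, num_stages)
    return [quotient + (1 if stage < remainder else 0) for stage in range(num_stages)]


def get_stage_index(layers_per_stage: List[int], stage: int) -> Tuple[int, int]:
    """Return the half-open [start, end) layer range owned by `stage`."""
    start = sum(layers_per_stage[:stage])
    return start, start + layers_per_stage[stage]


print(distribute_layers(32, 4))                  # [8, 8, 8, 8]
print(distribute_layers(32, 4, [6, 10, 10, 6]))  # custom heterogeneous split
print(get_stage_index([6, 10, 10, 6], 1))        # (6, 16)
```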
| Name | Last commit | Date |
| --- | --- | --- |
| `kit` | [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508) | 2024-04-01 11:34:58 +08:00 |
| `test_analyzer` | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00 |
| `test_auto_parallel` | [npu] change device to accelerator api (#5239) | 2024-01-09 10:20:05 +08:00 |
| `test_autochunk` | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00 |
| `test_booster` | [shardformer] fix pipeline forward error if custom layer distribution is used (#5189) | 2024-03-27 13:57:00 +08:00 |
| `test_checkpoint_io` | [hotfix] set return_outputs=False in examples and polish code (#5404) | 2024-03-25 12:31:09 +08:00 |
| `test_cluster` | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00 |
| `test_config` | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00 |
| `test_device` | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00 |
| `test_fx` | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00 |
| `test_gptq` | [feature] add gptq for inference (#4754) | 2023-09-22 11:02:50 +08:00 |
| `test_infer` | [Hotfix] Fix model policy matching strategy in ShardFormer (#5064) | 2023-11-22 11:19:39 +08:00 |
| `test_lazy` | [example]add gpt2 benchmark example script. (#5295) | 2024-03-04 16:18:13 +08:00 |
| `test_legacy` | [npu] change device to accelerator api (#5239) | 2024-01-09 10:20:05 +08:00 |
| `test_moe` | [hotfix] set return_outputs=False in examples and polish code (#5404) | 2024-03-25 12:31:09 +08:00 |
| `test_optimizer` | [shardformer]Fix lm parallel. (#5480) | 2024-03-25 17:21:51 +08:00 |
| `test_pipeline` | [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508) | 2024-04-01 11:34:58 +08:00 |
| `test_shardformer` | [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508) | 2024-04-01 11:34:58 +08:00 |
| `test_smoothquant` | [inference] Add smmoothquant for llama (#4904) | 2023-10-16 11:28:44 +08:00 |
| `test_tensor` | fixed layout converter caching and updated tester | 2024-03-26 17:22:27 +08:00 |
| `test_zero` | [npu] change device to accelerator api (#5239) | 2024-01-09 10:20:05 +08:00 |
| `__init__.py` | [zero] Update sharded model v2 using sharded param v2 (#323) | 2022-03-11 15:50:28 +08:00 |