ColossalAI/colossalai/inference/engine
Latest commit e614aa34f3 by Wenhao Chen, 2024-04-01 11:34:58 +08:00:
[shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508)
* feat: add `GradientCheckpointConfig` and `PipelineGradientCheckpointConfig`

* feat: apply `GradientCheckpointConfig` to policy and llama_forward

* feat: move `distribute_layer` and `get_stage_index` to PipelineStageManager

* fix: add optional args for `distribute_layer` and `get_stage_index`

* fix: fix changed API calls

* test: update llama tests

* style: polish `GradientCheckpointConfig`

* fix: fix pipeline utils tests
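The commit above introduces a tunable activation-checkpointing policy: instead of checkpointing every transformer layer, a `gradient_checkpointing_ratio` can be set on a `PipelineGradientCheckpointConfig` and the shard policy then decides per pipeline stage how many layers to checkpoint. Below is a minimal sketch of how such a config might be passed into training; the import path, the `gradient_checkpoint_config` keyword of `HybridParallelPlugin`, and the parallel sizes are assumptions for illustration, not taken from this listing.

```python
# Minimal sketch, not taken from this directory listing: wiring a gradient
# checkpointing ratio into hybrid-parallel training. The import location of
# PipelineGradientCheckpointConfig and the `gradient_checkpoint_config`
# keyword of HybridParallelPlugin are assumptions based on the commit message.
from colossalai.booster import Booster
from colossalai.booster.plugin import HybridParallelPlugin
from colossalai.shardformer import PipelineGradientCheckpointConfig  # assumed export path

# Checkpoint roughly half of each pipeline stage's layers instead of all of
# them, trading some activation memory for less re-computation.
ckpt_config = PipelineGradientCheckpointConfig(gradient_checkpointing_ratio=0.5)

plugin = HybridParallelPlugin(
    tp_size=2,                               # tensor parallel degree (example value)
    pp_size=2,                               # pipeline parallel degree (example value)
    gradient_checkpoint_config=ckpt_config,  # assumed keyword exposed alongside #5508
)
booster = Booster(plugin=plugin)
# model, optimizer, criterion, dataloader, lr_scheduler = booster.boost(...)
```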
| Name | Last commit | Last commit date |
| --- | --- | --- |
| `modeling` | [Kernels]added flash-decoidng of triton (#5063) | 2023-11-20 13:58:29 +08:00 |
| `policies` | [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508) | 2024-04-01 11:34:58 +08:00 |
| `__init__.py` | [inference] update examples and engine (#5073) | 2023-11-20 19:44:52 +08:00 |
| `engine.py` | [inference] refactor examples and fix schedule (#5077) | 2023-11-21 10:46:03 +08:00 |
| `microbatch_manager.py` | [hotfix] fix typo change _descrption to _description (#5331) | 2024-03-05 21:47:48 +08:00 |