ColossalAI/colossalai/shardformer/shard
Latest commit: ed5ebd1735 (Yuanheng): [Fix] resolve conflicts of merging main, 2024-04-08 16:21:47 +08:00
__init__.py [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508) 2024-04-01 11:34:58 +08:00
grad_ckpt_config.py [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508) 2024-04-01 11:34:58 +08:00
shard_config.py [Fix] resolve conflicts of merging main 2024-04-08 16:21:47 +08:00
sharder.py [nfc] fix typo colossalai/shardformer/ (#5133) 2024-01-04 16:21:55 +08:00
shardformer.py [example]add gpt2 benchmark example script. (#5295) 2024-03-04 16:18:13 +08:00
utils.py [pipeline] update shardformer policy 2023-08-15 23:25:14 +08:00
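
The commit messages above point at the module's public entry points: `shard_config.py` holds the `ShardConfig` dataclass of sharding switches, `shardformer.py` exposes the user-facing `ShardFormer` class, `sharder.py` performs the actual layer replacement, and `grad_ckpt_config.py` carries the `gradient_checkpointing_ratio` option added in #5508. Below is a minimal sketch of how these pieces are typically wired together; it assumes the API re-exported from `colossalai.shardformer`, exact field names may differ between versions, and the tiny Llama configuration is purely illustrative.

```python
# Minimal sketch, assuming the public API re-exported from colossalai.shardformer
# (ShardConfig from shard_config.py, ShardFormer from shardformer.py, with the
# in-place layer replacement implemented in sharder.py). Field names may vary
# by version. Run under torchrun, e.g.: torchrun --nproc_per_node=2 sketch.py
import torch
import torch.distributed as dist
from transformers import LlamaConfig, LlamaForCausalLM

from colossalai.shardformer import ShardConfig, ShardFormer

# ShardFormer expects an initialized distributed backend; colossalai.launch_from_torch
# also works, but plain torch.distributed keeps the sketch small.
dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

# A deliberately tiny, illustrative Llama model so the sketch runs quickly.
model = LlamaForCausalLM(
    LlamaConfig(num_hidden_layers=2, hidden_size=256, intermediate_size=512, num_attention_heads=4)
).cuda()

# shard_config.py: a dataclass bundling the sharding switches.
shard_config = ShardConfig(
    tensor_parallel_process_group=dist.group.WORLD,  # group used for tensor parallelism
    enable_tensor_parallelism=True,
)

# shardformer.py: the user-facing entry point; it delegates to the sharder in
# sharder.py, which rewrites layers according to the auto-selected Llama policy.
shard_former = ShardFormer(shard_config=shard_config)
sharded_model, shared_params = shard_former.optimize(model)

print(f"rank {dist.get_rank()}: sharded {type(sharded_model).__name__}, "
      f"{len(shared_params)} shared param group(s)")
```

The `gradient_checkpointing_ratio` named in the #5508 commit messages is configured through the classes defined in `grad_ckpt_config.py` and handed to `ShardConfig`; the exact field name is version-dependent, so check `shard_config.py` before relying on it.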