Directory listing: ColossalAI/colossalai/shardformer/shard
Latest commit: dc2cdaf3e8 by Hongxin Liu, 2024-10-11 13:44:40 +08:00

[shardformer] optimize seq parallelism (#6086)

* [shardformer] optimize seq parallelism
* [shardformer] fix gpt2 fused linear col
* [plugin] update gemini plugin
* [plugin] update moe hybrid plugin
* [test] update gpt2 fused linear test
* [shardformer] fix gpt2 fused linear reduce
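The bullets above all revolve around how sequence parallelism is wired through the shard configuration. As a rough illustration only, the switch typically lives on `ShardConfig` in `shard_config.py`; the field names below (`enable_sequence_parallelism`, `sequence_parallelism_mode`) are assumptions based on that module and should be checked against the file at this commit.

```python
# Hedged sketch: turning on sequence parallelism via ShardConfig.
# Assumes torch.distributed is already initialized (e.g. under torchrun)
# and that ShardConfig exposes the fields named below; verify against
# colossalai/shardformer/shard/shard_config.py at commit dc2cdaf3e8.
import torch.distributed as dist
from colossalai.shardformer import ShardConfig

dist.init_process_group(backend="nccl")  # normally handled by colossalai.launch

shard_config = ShardConfig(
    tensor_parallel_process_group=dist.group.WORLD,  # SP here builds on the TP group
    enable_tensor_parallelism=True,
    enable_sequence_parallelism=True,
    sequence_parallelism_mode="split_gather",  # assumed mode name
)
```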
| File | Last commit | Date |
|---|---|---|
| __init__.py | [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508) | 2024-04-01 11:34:58 +08:00 |
| grad_ckpt_config.py | [shardformer] refactor pipeline grad ckpt config (#5646) | 2024-04-25 15:19:30 +08:00 |
| shard_config.py | [shardformer] optimize seq parallelism (#6086) | 2024-10-11 13:44:40 +08:00 |
| sharder.py | [nfc] fix typo colossalai/shardformer/ (#5133) | 2024-01-04 16:21:55 +08:00 |
| shardformer.py | [FP8] rebase main (#5963) | 2024-08-06 16:29:37 +08:00 |
| utils.py | [pipeline] update shardformer policy | 2023-08-15 23:25:14 +08:00 |
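Taken together, the modules split the work roughly as their names suggest: `shard_config.py` holds the `ShardConfig` dataclass, `sharder.py` applies a shard policy to rewrite submodules, and `shardformer.py` exposes the user-facing `ShardFormer` entry point. A minimal end-to-end sketch under those assumptions follows; the `optimize` return signature is taken from the project's usage examples, not from this listing, so treat it as an assumption.

```python
# Hedged sketch of the ShardFormer entry point defined in shardformer.py.
# Assumes a multi-GPU torch.distributed environment launched via torchrun,
# and that ShardFormer.optimize(model) picks a policy automatically and
# returns (sharded_model, shared_params); verify against shardformer.py.
import torch.distributed as dist
from transformers import GPT2LMHeadModel
from colossalai.shardformer import ShardConfig, ShardFormer

dist.init_process_group(backend="nccl")

shard_config = ShardConfig(
    tensor_parallel_process_group=dist.group.WORLD,
    enable_tensor_parallelism=True,
    enable_fused_normalization=True,  # optional fused-kernel flag
)
model = GPT2LMHeadModel.from_pretrained("gpt2")

shard_former = ShardFormer(shard_config=shard_config)
sharded_model, shared_params = shard_former.optimize(model)  # default/auto policy
```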