ColossalAI/colossalai/shardformer/layer
duanjunwen aed20fb2df
[feat] support zbv in mixtral benchmark; (#6083)
* [feat] support zbv in mixtral benchmark;

* [fix] MixtralForCausalLMPolicy get_held_layer support zbv;

* [feat] update MixtralPipelineForwards --> mixtral_model_forward; support zbv;

* [feat] support MixtralPipelineForwards--> mixtral_for_causal_lm_forward for zbv

* [fix] fix llama, mixtral benchmark zbv loss none bug; update mixtral & llama policy and modeling;

* [feat] Linear1D_COL/ROW support zbv WeightGradStore;

* [feat] support use_zbv in llama, mixtral modeling; only replace Linear1D_Col/Row policy;

* [fix] fix test case; moe error in second iter

* [feat]EPMixtralSparseMoeBlock (op in MOE) support zbv;

* [fix] fix bwd b; now bwd w only for layers replaced by Linear1D_Col/Row; other layers perform a full bwd;

* [fix] debug zbv llama test;

* [fix] rm use_zbv flag in Shardconfig; rm debug info;

* [fix] add & fix llama test

* [feat] support meta cache, meta_grad_send, meta_tensor_send; fix runtime too long in Recv Bwd; benchmark for llama + Hybrid(tp+pp);

* [fix] fix fail case test_shard_llama

* [fix] fix test_shard_llama

* [fix] fix llama modeling policy;

* [fix] fix test_shard_llama ci;

* [fix] fix test zerobubble

* [fix] fix handle name; rm useless comments;

* [fix] fix send recv signature;

* [fix] fix comment in llama & benchmark

* [feat] support no-tensor-parallel Linear in shardformer; add tests for using and not using WeightGradStore

* [fix] fix linear (no tp) ops func name;
2024-10-31 18:17:29 +08:00
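The commits above revolve around one idea: in a zero-bubble (zbv) pipeline schedule, backward is split into an input-gradient pass (bwd b), which must run immediately so upstream stages can proceed, and a weight-gradient pass (bwd w), which can be deferred to fill pipeline bubbles. A minimal conceptual sketch of such a store is below; the class shape and names are illustrative assumptions, not ColossalAI's actual `WeightGradStore` API, and plain numbers stand in for tensors.

```python
class WeightGradStore:
    """Illustrative sketch: defer weight-gradient work during bwd-b,
    then execute it later during the bwd-w phase of a zbv schedule."""

    def __init__(self):
        self._cache = []   # deferred (name, grad_fn) pairs for the current micro-batch
        self._queue = []   # sealed micro-batches awaiting their bwd-w pass
        self.grads = {}    # accumulated weight gradients, keyed by parameter name

    def put(self, name, grad_fn):
        # During bwd b: record the weight-grad closure instead of computing it now.
        self._cache.append((name, grad_fn))

    def flush(self):
        # At the end of a micro-batch's bwd b: seal its deferred work as one unit.
        self._queue.append(self._cache)
        self._cache = []

    def pop(self):
        # During bwd w: actually compute and accumulate the oldest deferred grads.
        for name, grad_fn in self._queue.pop(0):
            self.grads[name] = self.grads.get(name, 0) + grad_fn()


# Usage: one micro-batch defers two weight grads, computed only on pop().
store = WeightGradStore()
store.put("linear1d_col.weight", lambda: 3)
store.put("linear1d_row.weight", lambda: 5)
store.flush()   # bwd b done for this micro-batch
store.pop()     # bwd w runs later, filling a pipeline bubble
print(store.grads)  # → {'linear1d_col.weight': 3, 'linear1d_row.weight': 5}
```

In the PR, only layers replaced by `Linear1D_Col`/`Linear1D_Row` (and the non-tensor-parallel Linear) opt into this deferral; other layers keep a full, undivided backward.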
__init__.py [feat] support zbv in mixtral benchmark; (#6083) 2024-10-31 18:17:29 +08:00
_operation.py [feat] support zbv in mixtral benchmark; (#6083) 2024-10-31 18:17:29 +08:00
attn.py [zerobubble] rebase main (#6075) 2024-10-08 15:58:00 +08:00
dropout.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
embedding.py [zerobubble] rebase main (#6075) 2024-10-08 15:58:00 +08:00
linear.py [feat] support zbv in mixtral benchmark; (#6083) 2024-10-31 18:17:29 +08:00
loss.py [zerobubble] rebase main (#6075) 2024-10-08 15:58:00 +08:00
normalization.py [Hotfix] Avoid fused RMSnorm import error without apex (#5985) 2024-08-09 18:17:09 +08:00
parallel_module.py [shardformer] refactor embedding resize (#5603) 2024-04-18 16:10:18 +08:00
qkv_fused_linear.py [zerobubble] rebase main (#6075) 2024-10-08 15:58:00 +08:00
utils.py [zerobubble] rebase main (#6075) 2024-10-08 15:58:00 +08:00