mirror of https://github.com/hpcaitech/ColossalAI
Latest commit: 7172459e74

* [shardformer] implement policy for all GPT-J models and test
* [shardformer] support interleaved pipeline parallel for bert finetune
* [shardformer] shardformer support falcon (#4883)
* [shardformer]: fix interleaved pipeline for bert model (#5048)
* [hotfix]: disable seq parallel for gptj and falcon, and polish code (#5093)
* Add Mistral support for Shardformer (#5103)
* [shardformer] add tests to mistral (#5105)

Co-authored-by: Pengtai Xu <henryxu880@gmail.com>
Co-authored-by: ppt0011 <143150326+ppt0011@users.noreply.github.com>
Co-authored-by: flybird11111 <1829166702@qq.com>
Co-authored-by: eric8607242 <e0928021388@gmail.com>
* 1D_tensor_parallel.md
* 2D_tensor_parallel.md
* 2p5D_tensor_parallel.md
* 3D_tensor_parallel.md
* cluster_utils.md
* gradient_accumulation_with_booster.md
* gradient_clipping_with_booster.md
* lazy_init.md
* mixed_precision_training_with_booster.md
* nvme_offload.md
* pipeline_parallel.md
* shardformer.md
* zero_with_chunk.md