ColossalAI/colossalai/utils
Latest commit 7172459e74 by Wenhao Chen:
[shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088)
* [shardformer] implement policy for all GPT-J models and test

* [shardformer] support interleaved pipeline parallel for bert finetune

* [shardformer] shardformer support falcon (#4883)

* [shardformer]: fix interleaved pipeline for bert model (#5048)

* [hotfix]: disable seq parallel for gptj and falcon, and polish code (#5093)

* Add Mistral support for Shardformer (#5103)

* [shardformer] add tests to mistral (#5105)

---------

Co-authored-by: Pengtai Xu <henryxu880@gmail.com>
Co-authored-by: ppt0011 <143150326+ppt0011@users.noreply.github.com>
Co-authored-by: flybird11111 <1829166702@qq.com>
Co-authored-by: eric8607242 <e0928021388@gmail.com>
Committed: 2023-11-28 16:54:42 +08:00
Name                Last commit                                                                                            Date
model               [misc] update pre-commit and run all files (#4752)                                                     2023-09-19 14:20:26 +08:00
multi_tensor_apply  [misc] update pre-commit and run all files (#4752)                                                     2023-09-19 14:20:26 +08:00
rank_recorder       [misc] update pre-commit and run all files (#4752)                                                     2023-09-19 14:20:26 +08:00
tensor_detector     [misc] update pre-commit and run all files (#4752)                                                     2023-09-19 14:20:26 +08:00
__init__.py         [npu] add npu support for gemini and zero (#5067)                                                      2023-11-20 16:12:41 +08:00
common.py           [misc] update pre-commit and run all files (#4752)                                                     2023-09-19 14:20:26 +08:00
device.py           [npu] add npu support for hybrid plugin and llama (#5090)                                              2023-11-22 19:23:21 +08:00
memory.py           [shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088)           2023-11-28 16:54:42 +08:00
timer.py            [npu] add npu support for gemini and zero (#5067)                                                      2023-11-20 16:12:41 +08:00