ColossalAI/colossalai/shardformer/modeling

Latest commit: Xuanlei Zhao `68fcaa2225` — "remove duplicate import (#5100)", 2023-11-23 15:15:01 +08:00
| File | Last commit message | Last commit date |
| --- | --- | --- |
| chatglm2_6b | [hotfix/hybridengine] Fix init model with random parameters in benchmark (#5074) | 2023-11-20 20:15:25 +08:00 |
| __init__.py | [shardformer] added development protocol for standardization (#4149) | 2023-07-04 16:05:01 +08:00 |
| bert.py | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00 |
| blip2.py | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00 |
| bloom.py | [gemini] gemini support tensor parallelism. (#4942) | 2023-11-10 10:15:16 +08:00 |
| chatglm2.py | [shardformer]fix flash attention, when mask is casual, just don't unpad it (#5084) | 2023-11-22 16:00:07 +08:00 |
| gpt2.py | [shardformer]fix flash attention, when mask is casual, just don't unpad it (#5084) | 2023-11-22 16:00:07 +08:00 |
| jit.py | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00 |
| llama.py | remove duplicate import (#5100) | 2023-11-23 15:15:01 +08:00 |
| opt.py | [shardformer]fix flash attention, when mask is casual, just don't unpad it (#5084) | 2023-11-22 16:00:07 +08:00 |
| sam.py | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00 |
| t5.py | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00 |
| vit.py | [hotfix] fix torch 2.0 compatibility (#4936) | 2023-10-18 11:05:25 +08:00 |
| whisper.py | [shardformer]fix flash attention, when mask is casual, just don't unpad it (#5084) | 2023-11-22 16:00:07 +08:00 |