ColossalAI/colossalai/shardformer/modeling

Latest commit: 351351a36e by Bin Jia, 2023-08-22 17:35:35 +08:00
[shardformer/sequence parallel] not support opt of seq-parallel, add warning and fix a bug in gpt2 pp (#4488)
Name         Last commit message                                                    Last commit date
chatglm2_6b
__init__.py
bert.py      [shardformer] bert support sequence parallel. (#4455)                  2023-08-18 18:04:55 +08:00
blip2.py     [shardformer] update shardformer to use flash attention 2 (#4392)      2023-08-15 23:25:14 +08:00
bloom.py     [shardformer] bloom support sequence parallel (#4465)                  2023-08-18 15:34:18 +08:00
chatglm2.py  rename chatglm to chatglm2 (#4484)                                     2023-08-22 14:13:31 +08:00
gpt2.py      [shardformer/sequence parallel] not support opt of seq-parallel, add warning and fix a bug in gpt2 pp (#4488)  2023-08-22 17:35:35 +08:00
jit.py       [Shardformer] Merge flash attention branch to pipeline branch (#4362)  2023-08-15 23:25:14 +08:00
llama.py     [misc] resolve code factor issues (#4433)                              2023-08-15 23:25:14 +08:00
opt.py       [misc] resolve code factor issues (#4433)                              2023-08-15 23:25:14 +08:00
sam.py       [Shardformer] Merge flash attention branch to pipeline branch (#4362)  2023-08-15 23:25:14 +08:00
t5.py        [misc] resolve code factor issues (#4433)                              2023-08-15 23:25:14 +08:00
vit.py       [misc] resolve code factor issues (#4433)                              2023-08-15 23:25:14 +08:00
whisper.py   [shardformer] Pipeline/whisper (#4456)                                 2023-08-18 21:29:25 +08:00