ColossalAI/colossalai/shardformer/layer
Bin Jia 424629fea0
[shardformer/sequence parallel] Cherry pick commit to new branch (#4450)
* [shardformer/sequence parallel] Support sequence parallel for gpt2 (#4384)

* [sequence parallel] add sequence parallel linear col/row support (#4336)

* add sequence parallel linear col/row support

* add annotation

* add annotation

* add support for gpt2 fused qkv linear layer

* support sequence parallel in GPT2

* add docstring and note

* add requirements

* remove unused flash-attn

* modify flash attn test

* modify flash attn setting

* modify flash attn code

* add assert before divide, rename forward function

* [shardformer/test] fix gpt2 test with seq-parallel

* [shardformer/sequence parallel] Overlap input gather and grad computation during col backward (#4401)

* overlap gather input / grad computing during col backward

* modify test for overlap

* simplify code

* fix code and modify cuda stream synchronize

* [shardformer/sequence parallel] polish code
2023-08-16 15:41:20 +08:00
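
The commits above (#4336 in particular) add sequence-parallel variants of the column/row linear layers that live in `linear.py` and `_operation.py` in this directory. As a rough illustration of the idea only, not the shardformer implementation itself: a sequence-parallel column linear all-gathers its sequence-sharded input before the column-parallel matmul, and reduce-scatters the corresponding gradient along the sequence dimension in backward. All names below are illustrative.

```python
# Minimal sketch of gather-forward / reduce-scatter-backward along the sequence dim.
# Assumes torch.distributed is initialized and seq_len is divisible by world_size.
import torch
import torch.distributed as dist


class _GatherForwardReduceScatterBackward(torch.autograd.Function):
    """All-gather the sequence-sharded input in forward; reduce-scatter its grad in backward."""

    @staticmethod
    def forward(ctx, x, process_group, dim):
        ctx.group, ctx.dim = process_group, dim
        world_size = dist.get_world_size(process_group)
        chunks = [torch.empty_like(x) for _ in range(world_size)]
        dist.all_gather(chunks, x.contiguous(), group=process_group)
        return torch.cat(chunks, dim=dim)

    @staticmethod
    def backward(ctx, grad_output):
        world_size = dist.get_world_size(ctx.group)
        # Each rank's local shard contributed to every rank's copy of the gathered
        # tensor, so its gradient is the sum over ranks of the matching slice.
        chunks = [c.contiguous() for c in torch.chunk(grad_output, world_size, dim=ctx.dim)]
        grad_local = torch.empty_like(chunks[0])
        dist.reduce_scatter(grad_local, chunks, group=ctx.group)
        return grad_local, None, None


def seq_parallel_linear_col(x_local, weight, bias, process_group, seq_dim=1):
    # x_local: (batch, seq_len / world_size, hidden); weight is column-sharded per rank.
    x_full = _GatherForwardReduceScatterBackward.apply(x_local, process_group, seq_dim)
    return torch.nn.functional.linear(x_full, weight, bias)
```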
__init__.py [shardformer] fix import 2023-08-15 23:25:14 +08:00
_operation.py [shardformer/sequence parallel] Cherry pick commit to new branch (#4450) 2023-08-16 15:41:20 +08:00
dropout.py
embedding.py [shardformer] fix embedding 2023-08-15 23:25:14 +08:00
linear.py [shardformer/sequence parallel] Cherry pick commit to new branch (#4450) 2023-08-16 15:41:20 +08:00
loss.py
normalization.py [shardformer] support inplace sharding (#4251) 2023-08-15 23:25:14 +08:00
parallel_module.py
qkv_fused_linear.py [shardformer/sequence parallel] Cherry pick commit to new branch (#4450) 2023-08-16 15:41:20 +08:00
utils.py [misc] resolve code factor issues (#4433) 2023-08-15 23:25:14 +08:00
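
Commit #4401 in the log above overlaps the input all-gather with the input-gradient computation during the column-linear backward (the cherry-picked changes land in `_operation.py` and `linear.py`). A minimal sketch of that scheduling idea, with illustrative names and shapes rather than the actual shardformer code:

```python
# Sketch only: issue the all-gather of the sequence-sharded input asynchronously,
# compute the input gradient while it is in flight, then wait and compute the
# weight gradient from the gathered input.
import torch
import torch.distributed as dist


def col_linear_backward_overlapped(grad_output, x_local, weight, process_group, seq_dim=1):
    # grad_output: (batch, seq_len, out_features_per_rank)
    # x_local:     (batch, seq_len / world_size, in_features)
    # weight:      (out_features_per_rank, in_features)
    world_size = dist.get_world_size(process_group)

    # 1. Kick off the input all-gather without blocking.
    chunks = [torch.empty_like(x_local) for _ in range(world_size)]
    handle = dist.all_gather(chunks, x_local.contiguous(), group=process_group, async_op=True)

    # 2. While the gather is in flight, compute the input gradient.
    #    (In the sequence-parallel layer this is later reduce-scattered back to a shard.)
    grad_input_full = grad_output @ weight

    # 3. Wait for the gathered input, then compute the weight gradient.
    handle.wait()
    x_full = torch.cat(chunks, dim=seq_dim)
    grad_weight = grad_output.flatten(0, 1).t() @ x_full.flatten(0, 1)

    return grad_input_full, grad_weight
```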