mirror of https://github.com/hpcaitech/ColossalAI
424629fea0
* [shardformer/sequence parallel] Support sequence parallel for gpt2 (#4384)
  * [sequence parallel] add sequence parallel linear col/row support (#4336)
    * add sequence parallel linear col/row support
    * add annotation
    * add support for gpt2 fused qkv linear layer
    * support sequence parallel in GPT2
    * add docstring and note
    * add requirements
    * remove unused flash-attn
    * modify flash attn test
    * modify flash attn setting
    * modify flash attn code
    * add assert before divide, rename forward function
  * [shardformer/test] fix gpt2 test with seq-parallel
  * [shardformer/sequence parallel] Overlap input gather and grad computation during col backward (#4401)
    * overlap input gather / grad computation during col backward
    * modify test for overlap
    * simplify code
    * fix code and modify cuda stream synchronize
  * [shardformer/sequence parallel] polish code
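The overlap described in #4401 works because the backward pass of a sequence-parallel column linear layer needs the full (gathered) input only to compute the weight gradient, while the input gradient depends only on `grad_output` and the weight. Below is a minimal sketch of that idea, assuming an already-initialized PyTorch process group; it uses `async_op=True` to overlap communication with computation instead of the explicit CUDA stream synchronization the actual ColossalAI code uses, and the class name `SeqParallelColumnLinearFn` is hypothetical, not the real implementation.

```python
import torch
import torch.distributed as dist


class SeqParallelColumnLinearFn(torch.autograd.Function):
    """Sketch of a sequence-parallel column linear layer.

    local_input is sharded along the sequence dim: (seq_len / world_size, batch, hidden).
    weight has shape (out_features, hidden).
    """

    @staticmethod
    def forward(ctx, local_input, weight):
        ctx.save_for_backward(local_input, weight)
        world_size = dist.get_world_size()
        # Gather sequence shards from all ranks to rebuild the full input.
        gathered = [torch.empty_like(local_input) for _ in range(world_size)]
        dist.all_gather(gathered, local_input)
        full_input = torch.cat(gathered, dim=0)
        return full_input @ weight.t()

    @staticmethod
    def backward(ctx, grad_output):
        local_input, weight = ctx.saved_tensors
        world_size = dist.get_world_size()
        gathered = [torch.empty_like(local_input) for _ in range(world_size)]
        # Launch the input gather asynchronously so it overlaps with the
        # grad_input matmul below -- the overlap introduced in #4401.
        handle = dist.all_gather(gathered, local_input, async_op=True)
        grad_input_full = grad_output @ weight  # runs while the gather is in flight
        handle.wait()
        full_input = torch.cat(gathered, dim=0)
        # The weight gradient needs the full input, so it waits for the gather.
        grad_weight = grad_output.flatten(0, 1).t() @ full_input.flatten(0, 1)
        # Scatter-reduce grad_input back into per-rank sequence shards.
        local_grad = torch.empty_like(local_input)
        dist.reduce_scatter(local_grad, list(grad_input_full.chunk(world_size, dim=0)))
        return local_grad, grad_weight
```

A caller would invoke it as `out = SeqParallelColumnLinearFn.apply(x_shard, weight)`; each rank passes its own sequence shard and receives its shard of the input gradient back.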
| File |
|---|
| test_dist_crossentropy.py |
| test_dropout.py |
| test_embedding.py |
| test_gpt2_qkv_fused_linear_1d.py |
| test_layernorm.py |
| test_linear_1d.py |
| test_qkv_fused_linear_1d.py |
| test_vocab_parallel_embedding_1d.py |