ColossalAI/colossalai/shardformer/layer
Latest commit: ef4c14a5e2 by Jianghai, "[Inference] Fix bug in ChatGLM2 Tensor Parallelism (#5014)", 1 year ago
__init__.py           [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926)   1 year ago
_operation.py         [misc] update pre-commit and run all files (#4752)                           1 year ago
dropout.py            [misc] update pre-commit and run all files (#4752)                           1 year ago
embedding.py          [misc] update pre-commit and run all files (#4752)                           1 year ago
linear.py             [Inference] Fix bug in ChatGLM2 Tensor Parallelism (#5014)                   1 year ago
loss.py               [misc] update pre-commit and run all files (#4752)                           1 year ago
normalization.py      [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926)   1 year ago
parallel_module.py    [misc] update pre-commit and run all files (#4752)                           1 year ago
qkv_fused_linear.py   [misc] update pre-commit and run all files (#4752)                           1 year ago
utils.py              [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926)   1 year ago