| File | Latest commit | Date |
| --- | --- | --- |
| __init__.py | [shardformer] init shardformer code structure (#3731) | 2023-07-04 16:05:01 +08:00 |
| auto_policy.py | [Refactor] refactor policy search and quant type controlling in inference (#5035) | 2023-11-14 17:26:59 +08:00 |
| base_policy.py | [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926) | 2023-11-03 13:32:43 +08:00 |
| bert.py | [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926) | 2023-11-03 13:32:43 +08:00 |
| blip2.py | [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926) | 2023-11-03 13:32:43 +08:00 |
| bloom.py | [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926) | 2023-11-03 13:32:43 +08:00 |
| chatglm2.py | [Inference] Fix bug in ChatGLM2 Tensor Parallelism (#5014) | 2023-11-07 15:01:50 +08:00 |
| gpt2.py | [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926) | 2023-11-03 13:32:43 +08:00 |
| llama.py | [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926) | 2023-11-03 13:32:43 +08:00 |
| opt.py | [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926) | 2023-11-03 13:32:43 +08:00 |
| sam.py | [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926) | 2023-11-03 13:32:43 +08:00 |
| t5.py | [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926) | 2023-11-03 13:32:43 +08:00 |
| vit.py | [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926) | 2023-11-03 13:32:43 +08:00 |
| whisper.py | [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926) | 2023-11-03 13:32:43 +08:00 |