| File | Last commit | Date |
|------|-------------|------|
| `__init__.py` | … | |
| `auto_policy.py` | [shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088) | 2023-11-28 16:54:42 +08:00 |
| `base_policy.py` | [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508) | 2024-04-01 11:34:58 +08:00 |
| `bert.py` | [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508) | 2024-04-01 11:34:58 +08:00 |
| `blip2.py` | [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926) | 2023-11-03 13:32:43 +08:00 |
| `bloom.py` | [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508) | 2024-04-01 11:34:58 +08:00 |
| `chatglm2.py` | [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508) | 2024-04-01 11:34:58 +08:00 |
| `falcon.py` | [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508) | 2024-04-01 11:34:58 +08:00 |
| `gpt2.py` | [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508) | 2024-04-01 11:34:58 +08:00 |
| `gptj.py` | [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508) | 2024-04-01 11:34:58 +08:00 |
| `llama.py` | [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508) | 2024-04-01 11:34:58 +08:00 |
| `mistral.py` | fix typo change dosen't to doesn't (#5308) | 2024-01-30 09:57:38 +08:00 |
| `opt.py` | [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508) | 2024-04-01 11:34:58 +08:00 |
| `sam.py` | [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926) | 2023-11-03 13:32:43 +08:00 |
| `t5.py` | [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508) | 2024-04-01 11:34:58 +08:00 |
| `vit.py` | [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508) | 2024-04-01 11:34:58 +08:00 |
| `whisper.py` | [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508) | 2024-04-01 11:34:58 +08:00 |