| Name | Last commit message | Last commit date |
|---|---|---|
| chatglm2_6b | [hotfix] fix typo change enabel to enable under colossalai/shardformer/ (#5317) | 2024-03-05 21:48:46 +08:00 |
| __init__.py | [shardformer] added development protocol for standardization (#4149) | 2023-07-04 16:05:01 +08:00 |
| bert.py | [shardformer] support bias_gelu_jit_fused for models (#5647) | 2024-04-29 15:33:51 +08:00 |
| blip2.py | [shardformer] support bias_gelu_jit_fused for models (#5647) | 2024-04-29 15:33:51 +08:00 |
| bloom.py | [shardformer] fix modeling of bloom and falcon (#5796) | 2024-06-11 17:43:50 +08:00 |
| chatglm2.py | [shardformer] fix chatglm implementation (#5644) | 2024-04-25 14:41:17 +08:00 |
| command.py | merge model and attention forward | 2024-06-18 02:32:41 +00:00 |
| falcon.py | [shardformer] fix modeling of bloom and falcon (#5796) | 2024-06-11 17:43:50 +08:00 |
| gpt2.py | [shardformer] upgrade transformers to 4.39.3 (#5815) | 2024-06-14 10:59:33 +08:00 |
| gptj.py | [shardformer] upgrade transformers to 4.39.3 (#5815) | 2024-06-14 10:59:33 +08:00 |
| jit.py | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00 |
| llama.py | Support 4d parallel + flash attention (#5789) | 2024-06-17 17:40:47 +08:00 |
| mistral.py | [shardformer] upgrade transformers to 4.39.3 (#5815) | 2024-06-14 10:59:33 +08:00 |
| opt.py | [Shardformer] Add parallel output for shardformer models(bloom, falcon) (#5702) | 2024-05-21 11:07:13 +08:00 |
| qwen2.py | [shardformer] fix import (#5788) | 2024-06-06 19:09:50 +08:00 |
| sam.py | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00 |
| t5.py | [shardformer] update transformers (#5583) | 2024-04-24 22:51:50 +08:00 |
| vit.py | [shardformer] support bias_gelu_jit_fused for models (#5647) | 2024-04-29 15:33:51 +08:00 |
| whisper.py | [shardformer] upgrade transformers to 4.39.3 (#5815) | 2024-06-14 10:59:33 +08:00 |