Latest commit: 79718fae04 by flybird11111, 12 months ago
[shardformer] llama support DistCrossEntropy (#5176)
* llama support dist-cross
* assorted small fixes and CI test runs
* [Colossal-Llama-2] Add finetuning Colossal-Llama-2 example (#4878)
* Add finetuning Colossal-Llama-2 example
* Add finetuning Colossal-Llama-2 example 2
* Add finetuning Colossal-Llama-2 example and support NEFTuning
* Add inference example and refine neftune
* Modify readme file
* update the imports
---------
Co-authored-by: Xu Yuanchen <yuanchen.xu00@gmail.com>
Co-authored-by: Camille Zhong <44392324+Camille7777@users.noreply.github.com>
---------
Co-authored-by: Yuanchen <70520919+chengeharrison@users.noreply.github.com>
Co-authored-by: Xu Yuanchen <yuanchen.xu00@gmail.com>
Co-authored-by: Camille Zhong <44392324+Camille7777@users.noreply.github.com>
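The headline change here adds a distributed cross-entropy loss for Llama. The sketch below illustrates the vocab-parallel cross-entropy technique that the name DistCrossEntropy refers to: each tensor-parallel rank holds the logits for one contiguous vocabulary shard, and the loss is computed with three small all-reduces instead of gathering the full logits. This is a minimal sketch assuming an initialized torch.distributed process group; the function and argument names are illustrative, not ColossalAI's actual API.

```python
import torch
import torch.distributed as dist


def dist_cross_entropy(shard_logits: torch.Tensor,
                       targets: torch.Tensor,
                       vocab_start: int,
                       vocab_end: int,
                       group=None) -> torch.Tensor:
    """Cross entropy over vocab-sharded logits without gathering the full row.

    shard_logits: [N, shard_size] logits for this rank's slice [vocab_start, vocab_end)
    targets:      [N] global token ids
    """
    # 1) Row-wise max across all shards, for a numerically stable softmax.
    row_max = shard_logits.max(dim=-1).values
    dist.all_reduce(row_max, op=dist.ReduceOp.MAX, group=group)

    # 2) Softmax denominator: sum of exp over the full vocabulary.
    shifted = shard_logits - row_max.unsqueeze(-1)
    sum_exp = shifted.exp().sum(dim=-1)
    dist.all_reduce(sum_exp, op=dist.ReduceOp.SUM, group=group)

    # 3) Shifted logit of the target token. Only the rank that owns the
    #    target id contributes a non-zero term, so a SUM all-reduce
    #    reconstructs it on every rank.
    in_shard = (targets >= vocab_start) & (targets < vocab_end)
    local_ids = (targets - vocab_start).clamp(0, shard_logits.size(-1) - 1)
    target_logit = shifted.gather(-1, local_ids.unsqueeze(-1)).squeeze(-1)
    target_logit = torch.where(in_shard, target_logit,
                               torch.zeros_like(target_logit))
    dist.all_reduce(target_logit, op=dist.ReduceOp.SUM, group=group)

    # 4) -log p(target) = log(sum_exp) - (x_target - row_max)
    return (sum_exp.log() - target_logit).mean()
```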
| File | Last commit | Age |
| --- | --- | --- |
| __init__.py | [shardformer] init shardformer code structure (#3731) | 1 year ago |
| auto_policy.py | [shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088) | 1 year ago |
| base_policy.py | [shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088) | 1 year ago |
| bert.py | [shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088) | 1 year ago |
| blip2.py | [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926) | 1 year ago |
| bloom.py | [shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088) | 1 year ago |
| chatglm2.py | [Inference] Fix bug in ChatGLM2 Tensor Parallelism (#5014) | 1 year ago |
| falcon.py | [shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088) | 1 year ago |
| gpt2.py | [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926) | 1 year ago |
| gptj.py | [shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088) | 1 year ago |
| llama.py | [shardformer] llama support DistCrossEntropy (#5176) | 12 months ago |
| mistral.py | [shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088) | 1 year ago |
| opt.py | [shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088) | 1 year ago |
| sam.py | [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926) | 1 year ago |
| t5.py | [gemini] gemini support tensor parallelism. (#4942) | 1 year ago |
| vit.py | [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926) | 1 year ago |
| whisper.py | [shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088) | 1 year ago |
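The layout above suggests the directory's division of labor: base_policy.py defines the shared policy interface, each model file (llama.py, bert.py, ...) implements it, and auto_policy.py dispatches from a model's class to the matching policy module. A minimal sketch of that dispatch pattern follows; the registry entries and helper names are assumptions for illustration, not the actual ColossalAI code.

```python
import importlib
from dataclasses import dataclass


@dataclass
class PolicyLocation:
    """Where a policy class lives: module file and class name."""
    file_name: str
    class_name: str


# Hypothetical excerpt of a registry keyed by fully-qualified model class.
_POLICY_LIST = {
    "transformers.models.llama.modeling_llama.LlamaForCausalLM":
        PolicyLocation(file_name="llama", class_name="LlamaForCausalLMPolicy"),
    "transformers.models.bert.modeling_bert.BertModel":
        PolicyLocation(file_name="bert", class_name="BertModelPolicy"),
}


def get_autopolicy(model):
    """Look up and instantiate the sharding policy for a model instance."""
    key = f"{model.__class__.__module__}.{model.__class__.__qualname__}"
    location = _POLICY_LIST.get(key)
    if location is None:
        raise NotImplementedError(f"No shardformer policy registered for {key}")
    module = importlib.import_module(
        f"colossalai.shardformer.policies.{location.file_name}")
    return getattr(module, location.class_name)()
```

Keeping the registry as plain data and importing the policy module lazily means adding a new model only touches one file plus one registry entry, and unsupported architectures fail fast with an explicit error.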