flybird11111 | 29695cf70c | [example]add gpt2 benchmark example script. (#5295) | 2024-03-04 16:18:13 +08:00
* benchmark gpt2
* [doc] fix typo in Colossal-LLaMA-2/README.md (#5247)
* [workflow] fixed build CI (#5240)
* [ci] fixed booster test (#5251)
* [ci] fixed ddp test (#5254)
* fix typo in applications/ColossalEval/README.md (#5250)
* [ci] fix shardformer tests. (#5255)
  * revert: revert p2p
  * feat: add enable_metadata_cache option
  * revert: enable t5 tests
* [doc] fix doc typo (#5256)
  * [doc] fix annotation display
  * [doc] fix llama2 doc
* [hotfix]: add pp sanity check and fix mbs arg (#5268)
  * fix: fix misleading mbs arg
  * feat: add pp sanity check
  * fix: fix 1f1b sanity check
* [workflow] fixed incomplete bash command (#5272)
* [workflow] fixed oom tests (#5275)
* [ci] fix test_hybrid_parallel_plugin_checkpoint_io.py (#5276)
* [shardformer] hybridparallelplugin support gradients accumulation. (#5246)
* [hotfix] Fix ShardFormer test execution path when using sequence parallelism (#5230)
* fix auto loading gpt2 tokenizer (#5279)
* [doc] add llama2-13B display (#5285)
* fix llama pretrain (#5287)
* Update shardformer.py
Co-authored-by: digger yu <digger-yu@outlook.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: Wenhao Chen <cwher@outlook.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
Co-authored-by: Zhongkai Zhao <kanezz620@gmail.com>
Co-authored-by: Michelle <97082656+MichelleMa8@users.noreply.github.com>
Co-authored-by: Desperado-Jia <502205863@qq.com>
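
The heaviest row above bundles two functional changes worth illustrating: the GPT-2 benchmark script (#5295) and gradient accumulation support in HybridParallelPlugin (#5246). Below is a minimal sketch of driving gradient accumulation through ColossalAI's Booster API; the plugin sizes, toy model, and accumulation loop are illustrative assumptions, not code from the commits.

```python
# A minimal sketch, assuming a torchrun launch whose world size matches
# tp_size * pp_size. Sizes, the toy model, and accumulation_steps are
# placeholder assumptions for illustration.
import colossalai
import torch
from colossalai.booster import Booster
from colossalai.booster.plugin import HybridParallelPlugin

colossalai.launch_from_torch()  # older releases also required a config dict

plugin = HybridParallelPlugin(tp_size=2, pp_size=1, precision="fp16")
booster = Booster(plugin=plugin)

model = torch.nn.Linear(32, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()
dataset = torch.utils.data.TensorDataset(
    torch.randn(256, 32), torch.randint(0, 2, (256,))
)
dataloader = plugin.prepare_dataloader(dataset, batch_size=8, shuffle=True)

# boost() wraps everything with the plugin's parallel/AMP machinery.
model, optimizer, criterion, dataloader, _ = booster.boost(
    model, optimizer, criterion, dataloader
)

accumulation_steps = 4  # assumed value, pick to taste
optimizer.zero_grad()
for step, (x, y) in enumerate(dataloader):
    x, y = x.cuda(), y.cuda()
    loss = criterion(model(x), y) / accumulation_steps  # average over micro-steps
    booster.backward(loss, optimizer)  # plugin-aware backward (fp16 scaling etc.)
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()  # apply the accumulated gradient once per N micro-steps
        optimizer.zero_grad()
```
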
ver217 | 1ed3f8a24f | [shardformer] rename policy file name | 2023-08-15 23:25:14 +08:00
ver217 | b0b8ad2823 | [pipeline] update shardformer docstring | 2023-08-15 23:25:14 +08:00
ver217 | 59f6f573f1 | [pipeline] update shardformer policy | 2023-08-15 23:25:14 +08:00
Frank Lee | 74257cb446 | [shardformer] refactored some doc and api (#4137) | 2023-07-04 16:05:01 +08:00
* polish code
Frank Lee | 6a88bae4ec | [shardformer] integrate with data parallelism (#4103) | 2023-07-04 16:05:01 +08:00
Frank Lee | c1d5453e9f | [shardformer] adapted llama to the new API (#4036) | 2023-07-04 16:05:01 +08:00
FoolPlayer | 74d176c8d8 | [shardformer] fix bert and gpt downstream with new api (#4024) | 2023-07-04 16:05:01 +08:00
* fix bert downstream with new api
* remove comment line
FoolPlayer | d3bc530849 | [shardformer] Refactor shardformer api (#4001) | 2023-07-04 16:05:01 +08:00
* fix an error in readme
* simplify code
* refactor shardformer
* add todo
* remove slicer
* resolve code review
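
The rows from #4001 through #4137 track the refactor of the ShardFormer entry point. Below is a minimal sketch of the post-refactor usage shape; the GPT-2 model choice and the config values are illustrative assumptions, and a distributed context must already be initialized.

```python
# Sketch of the refactored ShardFormer API (#4001/#4137): build a ShardConfig,
# hand it to ShardFormer, and call optimize() on a HuggingFace model.
# Model choice and config values are assumptions for illustration.
import colossalai
from transformers import GPT2Config, GPT2Model
from colossalai.shardformer import ShardConfig, ShardFormer

colossalai.launch_from_torch()  # ShardFormer expects torch.distributed to be up

model = GPT2Model(GPT2Config(n_layer=2))  # tiny config to keep the demo cheap
shard_config = ShardConfig(enable_tensor_parallelism=True)
shard_former = ShardFormer(shard_config=shard_config)

# optimize() applies the model's sharding policy and returns the sharded
# module plus any parameters shared across stages.
sharded_model, shared_params = shard_former.optimize(model)
```
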