613efebc5c | BlueRum | [chatgpt] support colossalai strategy to train rm (#2742) | 2 years ago
    * [chatgpt]fix train_rm bug with lora
    * [chatgpt]support colossalai strategy to train rm
    * fix pre-commit
    * fix pre-commit 2
648183a960 | BlueRum | [chatgpt]fix train_rm bug with lora (#2741) | 2 years ago
b6e3b955c3 | fastalgo | Update README.md | 2 years ago
30aee9c45d | binmakeswell | [NFC] polish code format | 2 years ago
1dc003c169 | YuliangLiu0306 | [autoparallel] distinguish different parallel strategies (#2699) | 2 years ago
ae86a29e23 | YH | Refact method of grad store (#2687) | 2 years ago
43dffdaba5 | cloudhuang | [doc] fixed a typo in GPT readme (#2736) | 2 years ago
93b788b95a | binmakeswell | Merge branch 'main' into fix/format | 2 years ago
2fd528b9f4 | xyupeng | [NFC] polish colossalai/auto_parallel/tensor_shard/deprecated/graph_analysis.py code style (#2737) | 2 years ago
c9e3ee389e | Zirui Zhu | [NFC] polish colossalai/context/process_group_initializer/initializer_2d.py code style (#2726) | 2 years ago
1819373e5c | Zangwei Zheng | [NFC] polish colossalai/auto_parallel/tensor_shard/deprecated/op_handler/batch_norm_handler.py code style (#2728) | 2 years ago
8331420520 | Wangbo Zhao(黑色枷锁) | [NFC] polish colossalai/cli/cli.py code style (#2734) | 2 years ago
5479fdd5b8 | Frank Lee | [doc] updated documentation version list (#2730) | 2 years ago
c5be83afbf | binmakeswell | Update version.txt (#2727) | 2 years ago
d344313533 | ziyuhuang123 | [NFC] polish colossalai/auto_parallel/tensor_shard/deprecated/op_handler/embedding_handler.py code style (#2725) | 2 years ago
e81caeb4bc | Xue Fuzhao | [NFC] polish colossalai/auto_parallel/tensor_shard/deprecated/cost_graph.py code style (#2720) | 2 years ago
    Co-authored-by: Fuzhao Xue <fuzhao@login2.ls6.tacc.utexas.edu>
51c45c2460 | yuxuan-lou | [NFC] polish colossalai/auto_parallel/tensor_shard/deprecated/op_handler/where_handler.py code style (#2723) | 2 years ago
7aacfad8af | CH.Li | fix typo (#2721) | 2 years ago
9c0943ecdb | ver217 | [chatgpt] optimize generation kwargs (#2717) | 2 years ago
    * [chatgpt] ppo trainer use default generate args
    * [chatgpt] example remove generation preparing fn
    * [chatgpt] benchmark remove generation preparing fn
    * [chatgpt] fix ci
21d6a48f4d | YuliangLiu0306 | [autoparallel] add shard option (#2696) | 2 years ago
    * [autoparallel] add shard option
    * polish
5b24987fa7 | YuliangLiu0306 | [autoparallel] fix parameters sharding bug (#2716) | 2 years ago
2045d45ab7 | Frank Lee | [doc] updated documentation version list (#2715) | 2 years ago
d4d3387f45 | binmakeswell | [doc] add open-source contribution invitation (#2714) | 2 years ago
    * [doc] fix typo
    * [doc] add invitation
f6b4ca4e6c | ver217 | [devops] add chatgpt ci (#2713) | 2 years ago
4603538ddd | Ziyue Jiang | [NFC] posh colossalai/context/process_group_initializer/initializer_sequence.py code style (#2712) | 2 years ago
    Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
cb2c6a2415 | YuliangLiu0306 | [autoparallel] refactor runtime pass (#2644) | 2 years ago
    * [autoparallel] refactor runtime pass
    * add unit test
    * polish
89f8975fb8 | Frank Lee | [workflow] fixed tensor-nvme build caching (#2711) | 2 years ago
b3d10db5f1 | Zihao | [NFC] polish colossalai/cli/launcher/__init__.py code style (#2709) | 2 years ago
d03f4429c1 | Fazzie-Maqianli | add ci (#2641) | 2 years ago
0b2a738393 | YuliangLiu0306 | [autoparallel] remove deprecated codes (#2664) | 2 years ago
7fa6be49d2 | YuliangLiu0306 | [autoparallel] test compatibility for gemini and auto parallel (#2700) | 2 years ago
4ac8bfb072 | CZYCW | [NFC] polish colossalai/engine/gradient_handler/utils.py code style (#2708) | 2 years ago
d701ef81b1 | github-actions[bot] | Automated submodule synchronization (#2707) | 2 years ago
    Co-authored-by: github-actions <github-actions@github.com>
94f000515b | binmakeswell | [doc] add Quick Preview (#2706) | 2 years ago
71deddc87f | binmakeswell | [doc] resize figure (#2705) | 2 years ago
    * [doc] resize figure
    * [doc] resize figure
6a8cd687e3 | binmakeswell | [doc] add ChatGPT (#2703) | 2 years ago
8408c852a6 | binmakeswell | [app] fix ChatGPT requirements (#2704) | 2 years ago
1b34701027 | ver217 | [app] add chatgpt application (#2698) | 2 years ago
6427c406cf | Liu Ziming | [NFC] polish colossalai/auto_parallel/tensor_shard/deprecated/op_handler/strategy_generator.py code style (#2695) | 2 years ago
    Co-authored-by: shenggan <csg19971016@gmail.com>
c3abdd085d | ver217 | [release] update version (#2691) | 2 years ago
534f68c83c | アマデウス | [NFC] polish pipeline process group code style (#2694) | 2 years ago
56ff1921e9 | LuGY | [NFC] polish colossalai/context/moe_context.py code style (#2693) | 2 years ago
1712da2800 | Shawn-Kong | [NFC] polish colossalai/gemini/gemini_context.py code style (#2690) | 2 years ago
46f20bac41 | binmakeswell | [doc] update auto parallel paper link (#2686) | 2 years ago
    * [doc] update auto parallel paper link
    * [doc] update auto parallel paper link
88416019e7 | github-actions[bot] | Automated submodule synchronization (#2648) | 2 years ago
    Co-authored-by: github-actions <github-actions@github.com>
df4f020ee3 | HELSON | [zero1&2] only append parameters with gradients (#2681) | 2 years ago
f0aa191f51 | ver217 | [gemini] fix colo_init_context (#2683) | 2 years ago
5cd8cae0c9 | Frank Lee | [workflow] fixed communtity report ranking (#2680) | 2 years ago
c44fd0c867 | Frank Lee | [workflow] added trigger to build doc upon release (#2678) | 2 years ago
40c916b192 | Boyuan Yao | [autoparallel] Patch meta information of `torch.nn.functional.softmax` and `torch.nn.Softmax` (#2674) | 2 years ago
    * [autoparallel] softmax metainfo
    * [autoparallel] softmax metainfo