Boyuan Yao
8593ae1a3f
[autoparallel] rotor solver refactor ( #2813 )
2 years ago
binmakeswell
09f457479d
[doc] update OPT serving ( #2804 )
2 years ago
HELSON
56ddc9ca7a
[hotfix] add correct device for fake_param ( #2796 )
2 years ago
ver217
a619a190df
[chatgpt] update readme about checkpoint ( #2792 )
* [chatgpt] add save/load checkpoint sample code
* [chatgpt] add save/load checkpoint readme
* [chatgpt] refactor save/load checkpoint readme
2 years ago
ver217
4ee311c026
[chatgpt] strategy add prepare method ( #2766 )
* [chatgpt] strategy add prepare method
* [chatgpt] refactor examples
* [chatgpt] refactor strategy.prepare
* [chatgpt] support save/load checkpoint
* [chatgpt] fix unwrap actor
2 years ago
Boyuan Yao
a2b43e393d
[autoparallel] Patch meta information of `torch.nn.Embedding` ( #2760 )
* [autoparallel] embedding metainfo
* [autoparallel] fix function name in test_activation_metainfo
* [autoparallel] undo changes in activation metainfo and related tests
2 years ago
Boyuan Yao
8e3f66a0d1
[zero] fix wrong import ( #2777 )
2 years ago
Fazzie-Maqianli
ba84cd80b2
fix pip install colossal ( #2764 )
2 years ago
Nikita Shulga
01066152f1
Don't use `torch._six` ( #2775 )
* Don't use `torch._six`
This is a private API which is gone after https://github.com/pytorch/pytorch/pull/94709
* Update common.py
2 years ago
ver217
a88bc828d5
[chatgpt] disable shard init for colossalai ( #2767 )
2 years ago
binmakeswell
d6d6dec190
[doc] update example and OPT serving link ( #2769 )
* [doc] update OPT serving link
* [doc] update example and OPT serving link
2 years ago
Frank Lee
e376954305
[doc] add opt service doc ( #2747 )
2 years ago
BlueRum
613efebc5c
[chatgpt] support colossalai strategy to train rm ( #2742 )
* [chatgpt]fix train_rm bug with lora
* [chatgpt]support colossalai strategy to train rm
* fix pre-commit
* fix pre-commit 2
2 years ago
BlueRum
648183a960
[chatgpt]fix train_rm bug with lora ( #2741 )
2 years ago
fastalgo
b6e3b955c3
Update README.md
2 years ago
binmakeswell
30aee9c45d
[NFC] polish code format
2 years ago
YuliangLiu0306
1dc003c169
[autoparallel] distinguish different parallel strategies ( #2699 )
2 years ago
YH
ae86a29e23
Refactor method of grad store ( #2687 )
2 years ago
cloudhuang
43dffdaba5
[doc] fixed a typo in GPT readme ( #2736 )
2 years ago
binmakeswell
93b788b95a
Merge branch 'main' into fix/format
2 years ago
xyupeng
2fd528b9f4
[NFC] polish colossalai/auto_parallel/tensor_shard/deprecated/graph_analysis.py code style ( #2737 )
2 years ago
Zirui Zhu
c9e3ee389e
[NFC] polish colossalai/context/process_group_initializer/initializer_2d.py code style ( #2726 )
2 years ago
Zangwei Zheng
1819373e5c
[NFC] polish colossalai/auto_parallel/tensor_shard/deprecated/op_handler/batch_norm_handler.py code style ( #2728 )
2 years ago
Wangbo Zhao(黑色枷锁)
8331420520
[NFC] polish colossalai/cli/cli.py code style ( #2734 )
2 years ago
Frank Lee
5479fdd5b8
[doc] updated documentation version list ( #2730 )
2 years ago
binmakeswell
c5be83afbf
Update version.txt ( #2727 )
2 years ago
ziyuhuang123
d344313533
[NFC] polish colossalai/auto_parallel/tensor_shard/deprecated/op_handler/embedding_handler.py code style ( #2725 )
2 years ago
Xue Fuzhao
e81caeb4bc
[NFC] polish colossalai/auto_parallel/tensor_shard/deprecated/cost_graph.py code style ( #2720 )
Co-authored-by: Fuzhao Xue <fuzhao@login2.ls6.tacc.utexas.edu>
2 years ago
yuxuan-lou
51c45c2460
[NFC] polish colossalai/auto_parallel/tensor_shard/deprecated/op_handler/where_handler.py code style ( #2723 )
2 years ago
CH.Li
7aacfad8af
fix typo ( #2721 )
2 years ago
ver217
9c0943ecdb
[chatgpt] optimize generation kwargs ( #2717 )
* [chatgpt] ppo trainer use default generate args
* [chatgpt] example remove generation preparing fn
* [chatgpt] benchmark remove generation preparing fn
* [chatgpt] fix ci
2 years ago
YuliangLiu0306
21d6a48f4d
[autoparallel] add shard option ( #2696 )
* [autoparallel] add shard option
* polish
2 years ago
YuliangLiu0306
5b24987fa7
[autoparallel] fix parameters sharding bug ( #2716 )
2 years ago
Frank Lee
2045d45ab7
[doc] updated documentation version list ( #2715 )
2 years ago
binmakeswell
d4d3387f45
[doc] add open-source contribution invitation ( #2714 )
* [doc] fix typo
* [doc] add invitation
2 years ago
ver217
f6b4ca4e6c
[devops] add chatgpt ci ( #2713 )
2 years ago
Ziyue Jiang
4603538ddd
[NFC] polish colossalai/context/process_group_initializer/initializer_sequence.py code style ( #2712 )
Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
2 years ago
YuliangLiu0306
cb2c6a2415
[autoparallel] refactor runtime pass ( #2644 )
* [autoparallel] refactor runtime pass
* add unit test
* polish
2 years ago
Frank Lee
89f8975fb8
[workflow] fixed tensor-nvme build caching ( #2711 )
2 years ago
Zihao
b3d10db5f1
[NFC] polish colossalai/cli/launcher/__init__.py code style ( #2709 )
2 years ago
Fazzie-Maqianli
d03f4429c1
add ci ( #2641 )
2 years ago
YuliangLiu0306
0b2a738393
[autoparallel] remove deprecated codes ( #2664 )
2 years ago
YuliangLiu0306
7fa6be49d2
[autoparallel] test compatibility for gemini and auto parallel ( #2700 )
2 years ago
CZYCW
4ac8bfb072
[NFC] polish colossalai/engine/gradient_handler/utils.py code style ( #2708 )
2 years ago
github-actions[bot]
d701ef81b1
Automated submodule synchronization ( #2707 )
Co-authored-by: github-actions <github-actions@github.com>
2 years ago
binmakeswell
94f000515b
[doc] add Quick Preview ( #2706 )
2 years ago
binmakeswell
71deddc87f
[doc] resize figure ( #2705 )
* [doc] resize figure
2 years ago
binmakeswell
6a8cd687e3
[doc] add ChatGPT ( #2703 )
2 years ago
binmakeswell
8408c852a6
[app] fix ChatGPT requirements ( #2704 )
2 years ago
ver217
1b34701027
[app] add chatgpt application ( #2698 )
2 years ago