Kirigaya Kazuto
318fbf1145
[NFC] polish colossalai/utils/multi_tensor_apply/multi_tensor_apply.py code style ( #1559 )
2022-09-08 22:04:34 +08:00
ver217
ae71036cd2
[utils] refactor parallel layers checkpoint and bcast model on loading checkpoint ( #1548 )
...
* refactor parallel layer
* broadcast rank0 model after load ckpt
2022-09-06 20:18:35 +08:00
ver217
2bed096848
[utils] optimize partition_tensor_parallel_state_dict ( #1546 )
2022-09-06 17:45:31 +08:00
ver217
a203b709d5
[hotfix] fix init context ( #1543 )
...
* fix init context
* fix lazy init ctx
2022-09-06 11:45:08 +08:00
Boyuan Yao
47fd8e4a02
[utils] Add use_reentrant=False in utils.activation_checkpoint ( #1460 )
...
* [utils] Add use_reentrant=False into colossalai checkpoint
* [utils] add some annotation in utils.activation_checkpoint
* [test] add reset_seed at the beginning of tests in test_activation_checkpointing.py
* [test] modify test_activation_checkpoint.py
* [test] modify test for reentrant=False
2022-08-16 15:39:20 +08:00
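The non-reentrant checkpointing mode referenced in #1460 above is the plain torch.utils.checkpoint path; below is a minimal sketch of how use_reentrant=False is typically passed (TwoLayer is a hypothetical module, not ColossalAI code).

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class TwoLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(8, 8)
        self.fc2 = nn.Linear(8, 8)

    def forward(self, x):
        # use_reentrant=False selects the non-reentrant autograd path, which
        # recomputes fc1 in backward and tolerates kwargs and unused inputs.
        x = checkpoint(self.fc1, x, use_reentrant=False)
        return self.fc2(x)

out = TwoLayer()(torch.randn(4, 8, requires_grad=True))
out.sum().backward()
```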
Frank Lee
5a52e21fe3
[test] fixed the activation codegen test ( #1447 )
...
* [test] fixed the activation codegen test
* polish code
2022-08-12 14:52:31 +08:00
ver217
821c6172e2
[utils] Implement clip_grad_norm for ColoTensor and ZeroOptimizer ( #1442 )
2022-08-11 22:58:58 +08:00
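For reference, #1442 above adds the ColoTensor / ZeroOptimizer variant of gradient-norm clipping; this is a minimal sketch of the single-process equivalent using stock PyTorch, not the ColossalAI implementation.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 16)
optim = torch.optim.SGD(model.parameters(), lr=1e-2)

loss = model(torch.randn(4, 16)).sum()
loss.backward()

# Rescale all gradients so their combined L2 norm does not exceed max_norm;
# the distributed variant additionally reduces the norm across shards.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optim.step()
```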
HELSON
527758b2ae
[hotfix] fix a runtime error in test_colo_checkpoint.py ( #1387 )
2022-07-29 15:58:06 +08:00
HELSON
b6fd165f66
[checkpoint] add kwargs for load_state_dict ( #1374 )
2022-07-28 15:56:52 +08:00
Frank Lee
0c1a16ea5b
[util] standard checkpoint function naming ( #1377 )
2022-07-28 09:29:30 +08:00
Super Daniel
be229217ce
[fx] add torchaudio test ( #1369 )
...
* [fx] add torchaudio test
* [fx] add torchaudio test and test patches
* Delete ~
* [fx] add patches and patches test
* [fx] fix patches
* [fx] fix rnn patches
* [fx] merge upstream
* [fx] fix import errors
2022-07-27 11:03:14 +08:00
HELSON
8463290642
[checkpoint] use args, kwargs in save_checkpoint, load_checkpoint ( #1368 )
2022-07-26 14:41:53 +08:00
HELSON
87775a0682
[colotensor] use cpu memory to store state_dict ( #1367 )
2022-07-26 14:13:38 +08:00
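#1367 above keeps checkpoint-bound tensors in host memory; here is a minimal sketch of that general idea in plain PyTorch (the helper name is illustrative, not the ColoTensor code).

```python
import torch
import torch.nn as nn

def cpu_state_dict(module: nn.Module) -> dict:
    # Copy every tensor to the CPU so saving does not hold a second
    # full replica of the parameters in device memory.
    return {k: v.detach().cpu() for k, v in module.state_dict().items()}

model = nn.Linear(8, 8)
torch.save(cpu_state_dict(model), "model.pt")
```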
HELSON
943a96323e
[hotfix] fix no optimizer in save/load ( #1363 )
2022-07-26 10:53:53 +08:00
HELSON
7a8702c06d
[colotensor] add Tensor.view op and its unit test ( #1343 )
...
[colotensor] add megatron initialization for gpt2
2022-07-21 10:53:15 +08:00
Frank Lee
2cc1175c76
[fx] tested the complete workflow for auto-parallel ( #1336 )
...
* [fx] tested the complete workflow for auto-parallel
* polish code
2022-07-20 10:45:17 +08:00
HELSON
f92c100ddd
[checkpoint] use gather_tensor in checkpoint and update its unit test ( #1339 )
2022-07-19 14:15:28 +08:00
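#1339 above gathers sharded tensors before they are written; below is a hedged sketch of the underlying collective (gather_for_save is a hypothetical helper, not the actual colossalai.utils.checkpoint API).

```python
import torch
import torch.distributed as dist

def gather_for_save(shard: torch.Tensor, dim: int = 0) -> torch.Tensor:
    """All-gather each rank's shard and concatenate into the full tensor."""
    world_size = dist.get_world_size()
    buffers = [torch.empty_like(shard) for _ in range(world_size)]
    dist.all_gather(buffers, shard.contiguous())
    return torch.cat(buffers, dim=dim)

# e.g. on rank 0: torch.save({"weight": gather_for_save(local_shard).cpu()}, path)
```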
Frank Lee
250be4d31e
[utils] integrated colotensor with lazy init context ( #1324 )
...
* [utils] integrated colotensor with lazy init context
* polish code
2022-07-15 17:47:12 +08:00
Jiarui Fang
9e4c6449b0
[checkpoint] add ColoOptimizer checkpointing ( #1316 )
2022-07-15 09:52:55 +08:00
Jiarui Fang
3ef3791a3b
[checkpoint] add test for bert and hotfix save bugs ( #1297 )
2022-07-14 15:38:18 +08:00
Jiarui Fang
4165eabb1e
[hotfix] remove potential circular import ( #1307 )
...
* make it faster
* [hotfix] remove circular import
2022-07-14 13:44:26 +08:00
Jiarui Fang
c92f84fcdb
[tensor] distributed checkpointing for parameters ( #1240 )
2022-07-12 15:51:06 +08:00
Jiarui Fang
9bcd2fd4af
[tensor] a shorter shard and replicate spec ( #1245 )
2022-07-11 15:51:48 +08:00
Jiarui Fang
20da6e48c8
[checkpoint] save sharded optimizer states ( #1237 )
2022-07-08 16:33:13 +08:00
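#1237 above saves optimizer states shard by shard; this is a minimal sketch of per-rank optimizer checkpointing (the file layout is illustrative only, not the format used in the commit).

```python
import torch
import torch.distributed as dist

def save_sharded_optim(optimizer: torch.optim.Optimizer, prefix: str) -> None:
    # Each rank serializes only the optimizer states it owns.
    torch.save(optimizer.state_dict(), f"{prefix}.optim.rank{dist.get_rank()}.pt")

def load_sharded_optim(optimizer: torch.optim.Optimizer, prefix: str) -> None:
    optimizer.load_state_dict(torch.load(f"{prefix}.optim.rank{dist.get_rank()}.pt"))
```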
Jiarui Fang
3b500984b1
[tensor] fix some unittests ( #1234 )
2022-07-08 14:18:30 +08:00
ver217
a45ddf2d5f
[hotfix] fix sharded optim step and clip_grad_norm ( #1226 )
2022-07-08 13:34:48 +08:00
Yi Zhao
04537bf83e
[checkpoint] support generalized scheduler ( #1222 )
2022-07-07 18:16:38 +08:00
Jiarui Fang
52736205d9
[checkpoint] make unit test faster ( #1217 )
2022-07-06 17:39:46 +08:00
Jiarui Fang
f38006ea83
[checkpoint] checkpoint for ColoTensor Model ( #1196 )
2022-07-06 17:22:03 +08:00
Jiarui Fang
ae7d3f4927
[refactor] move process group from _DistSpec to ColoTensor. ( #1203 )
2022-07-06 16:15:16 +08:00
YuliangLiu0306
63d2a93878
[context] support arbitrary module materialization. ( #1193 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [context] support arbitrary module materialization.
* [test] add numerical check for lazy init context.
2022-07-04 10:12:02 +08:00
YuliangLiu0306
2053e138a2
[context] use meta tensor to init model lazily. ( #1187 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [context] use meta tensor to init model lazily.
* polish
* make module with device kwargs bypass the normal init.
* change unit test to adapt updated context.
2022-06-29 21:02:30 +08:00
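#1187 above builds modules on the meta device so no real memory is touched until materialization; here is a minimal sketch of that mechanism using the plain PyTorch device kwarg (the context manager in the commit hooks module construction instead).

```python
import torch
import torch.nn as nn

# Construct on the meta device: shapes and dtypes are recorded, nothing is allocated.
lazy = nn.Sequential(
    nn.Linear(1024, 1024, device="meta"),
    nn.ReLU(),
    nn.Linear(1024, 10, device="meta"),
)

# Materialize storage on the target device, then re-run parameter init.
model = lazy.to_empty(device="cpu")
for m in model.modules():
    if isinstance(m, nn.Linear):
        m.reset_parameters()
```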
YuliangLiu0306
e27645376d
[hotfix] different overflow statuses lead to stuck communication. ( #1175 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [hotfix] fix some bugs caused by refactored schedule.
* [hotfix] different overflow statuses lead to stuck communication.
2022-06-27 09:53:57 +08:00
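The hang fixed in #1175 above comes from ranks disagreeing on whether a step overflowed and then taking different collective paths; below is a hedged sketch of the usual remedy, agreeing on a single flag first (names are illustrative, not the ColossalAI code).

```python
import torch
import torch.distributed as dist

def global_overflow(local_overflow: bool) -> bool:
    flag = torch.tensor([1.0 if local_overflow else 0.0])
    # MAX reduction: if any rank overflowed, every rank sees 1.0 and all of
    # them skip the optimizer step together instead of diverging.
    dist.all_reduce(flag, op=dist.ReduceOp.MAX)
    return bool(flag.item())
```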
Jiarui Fang
4b9bba8116
[ColoTensor] rename APIs and add output_replicate to ComputeSpec ( #1168 )
2022-06-24 13:08:54 +08:00
Frank Lee
f8eec98ff5
[tensor] fixed non-serializable colo parameter during model checkpointing ( #1153 )
2022-06-22 11:43:38 +08:00
Frank Lee
73ad05fc8c
[zero] added error message to handle on-the-fly import of torch Module class ( #1135 )
...
* [zero] added error message to handle on-the-fly import of torch Module class
* polish code
2022-06-20 11:24:27 +08:00
Frank Lee
2b2dc1c86b
[pipeline] refactor the pipeline module ( #1087 )
...
* [pipeline] refactor the pipeline module
* polish code
2022-06-10 11:27:38 +08:00
Frank Lee
bad5d4c0a1
[context] support lazy init of module ( #1088 )
...
* [context] support lazy init of module
* polish code
2022-06-10 10:09:48 +08:00
Frank Lee
bfdc5ccb7b
[context] maintain the context object in with statement ( #1073 )
2022-06-07 10:48:45 +08:00
Jiarui Fang
49832b2344
[refactor] add nn.parallel module ( #1068 )
2022-06-06 15:34:41 +08:00
Jiarui Fang
a00644079e
reorganize colotensor directory ( #1062 )
...
* reorganize colotensor directory
* polish code
2022-06-03 18:04:22 +08:00
Ziyue Jiang
df9dcbbff6
[Tensor] add hybrid device demo and fix bugs ( #1059 )
2022-06-03 12:09:49 +08:00
Ziyue Jiang
7c530b9de2
[Tensor] add Parameter inheritance for ColoParameter ( #1041 )
...
* add Parameter inheritance for ColoParameter
* remove tricks
* remove tricks
* polish
* polish
2022-05-30 17:23:44 +08:00
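#1041 above makes ColoParameter a subclass of torch.nn.Parameter; this is a hedged sketch of the general subclassing pattern (MyParameter is illustrative, not the ColossalAI class).

```python
import torch

class MyParameter(torch.nn.Parameter):
    def __new__(cls, data: torch.Tensor = None, requires_grad: bool = True):
        if data is None:
            data = torch.empty(0)
        # Reuse Parameter.__new__ so the result is an instance of this class,
        # passes isinstance(p, nn.Parameter), and is auto-registered by nn.Module.
        return super().__new__(cls, data, requires_grad)

p = MyParameter(torch.randn(4, 4))
print(isinstance(p, torch.nn.Parameter))  # True
```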
Ziyue Jiang
6c5996a56e
[Tensor] add module check and bert test ( #1031 )
...
* add Embedding
* Add bert test
* polish
* add check module test
* polish
2022-05-26 18:15:42 +08:00
Ziyue Jiang
32291dd73f
[Tensor] add module handler for linear ( #1021 )
...
* add module spec for linear
* polish
2022-05-26 11:50:44 +08:00
ver217
007ca0df92
fix colo init context ( #1026 )
2022-05-25 20:41:58 +08:00
ver217
ad536e308e
[tensor] refactor colo-tensor ( #992 )
...
* refactor colo-tensor and update linear op
* polish code
* polish code
* update ops and unit tests
* update unit tests
* polish code
* rename dist_spec module
* polish code
* polish code
* remove unneeded import
* fix pipelinable
2022-05-19 12:44:59 +08:00
Ziyue Jiang
d73c2b1d79
[Tensor] fix init context ( #931 )
...
* change torch.Parameter to ColoParameter
* fix post assignment for init context
* polish
* polish
2022-05-11 15:48:12 +08:00
Ziyue Jiang
dfc88b85ea
[Tensor] simplify named param ( #928 )
...
* simplify ColoModulize
* simplify ColoModulize
* polish
* polish
2022-05-11 10:54:19 +08:00
YuliangLiu0306
32a45cd7ef
[pipelinable] use pipelinable to support GPT model. ( #903 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [pipelinable] use pipelinable to support GPT model.
* fix a bug caused by ShardedModel
* polish
* fix front func list
2022-05-11 09:23:58 +08:00