YuliangLiu0306
df54481473
[hotfix] fix some bugs during gpt2 testing ( #1379 )
2022-07-28 17:21:07 +08:00
ver217
828b9e5e0d
[hotfix] fix zero optim save/load state dict ( #1381 )
2022-07-28 17:19:39 +08:00
HELSON
b6fd165f66
[checkpoint] add kwargs for load_state_dict ( #1374 )
2022-07-28 15:56:52 +08:00
ver217
83328329dd
[hotfix] fix zero ddp buffer cast ( #1376 )
...
* fix zero ddp buffer cast
* fix zero ddp ignore params
2022-07-28 10:54:44 +08:00
ver217
5d5031e946
fix zero ddp state dict ( #1378 )
2022-07-28 09:31:42 +08:00
Frank Lee
0c1a16ea5b
[util] standard checkpoint function naming ( #1377 )
2022-07-28 09:29:30 +08:00
YuliangLiu0306
52bc2dc271
[fx] update split module pass and add customized policy ( #1373 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [fx] update split module pass and add customized policy
2022-07-27 13:40:54 +08:00
Super Daniel
be229217ce
[fx] add torchaudio test ( #1369 )
...
* [fx] add torchaudio test
* [fx] add torchaudio test
* [fx] add torchaudio test
* [fx] add torchaudio test
* [fx] add torchaudio test
* [fx] add torchaudio test
* [fx] add torchaudio test
* [fx] add torchaudio test and test patches
* Delete ~
* [fx] add patches and patches test
* [fx] add patches and patches test
* [fx] fix patches
* [fx] fix rnn patches
* [fx] fix rnn patches
* [fx] fix rnn patches
* [fx] fix rnn patches
* [fx] merge upstream
* [fx] fix import errors
2022-07-27 11:03:14 +08:00
ver217
c415240db6
[nvme] CPUAdam and HybridAdam support NVMe offload ( #1360 )
...
* impl nvme optimizer
* update cpu adam
* add unit test
* update hybrid adam
* update docstr
* add TODOs
* update CI
* fix CI
* fix CI
* fix CI path
* fix CI path
* fix CI path
* fix install tensornvme
* fix CI
* fix CI path
* fix CI env variables
* test CI
* test CI
* fix CI
* fix nvme optim __del__
* fix adam __del__
* fix nvme optim
* fix CI env variables
* fix nvme optim import
* test CI
* test CI
* fix CI
2022-07-26 17:25:24 +08:00
HELSON
8463290642
[checkpoint] use args, kwargs in save_checkpoint, load_checkpoint ( #1368 )
2022-07-26 14:41:53 +08:00
YuliangLiu0306
5542816690
[fx] add gpt2 passes for pipeline performance test ( #1366 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [fx] add gpt2 passes for pipeline performance test
2022-07-26 14:31:00 +08:00
HELSON
87775a0682
[colotensor] use cpu memory to store state_dict ( #1367 )
2022-07-26 14:13:38 +08:00
HELSON
943a96323e
[hotfix] fix no optimizer in save/load ( #1363 )
2022-07-26 10:53:53 +08:00
Frank Lee
cd063ac37f
[fx] added activation checkpoint codegen support for torch < 1.12 ( #1359 )
2022-07-25 23:35:31 +08:00
Frank Lee
644582eee9
[fx] added activation checkpoint codegen ( #1355 )
2022-07-25 09:39:10 +08:00
ver217
6b43c789fd
fix zero optim backward_by_grad and save/load ( #1353 )
2022-07-21 16:43:58 +08:00
ver217
d068af81a3
[doc] update rst and docstring ( #1351 )
...
* update rst
* add zero docstr
* fix docstr
* remove fx.tracer.meta_patch
* fix docstr
* fix docstr
* update fx rst
* fix fx docstr
* remove useless rst
2022-07-21 15:54:53 +08:00
Frank Lee
274c1a3b5f
[fx] fixed apex normalization patch exception ( #1352 )
2022-07-21 15:29:11 +08:00
ver217
ce470ba37e
[checkpoint] sharded optim save/load grad scaler ( #1350 )
2022-07-21 15:21:21 +08:00
Frank Lee
05fae1fd56
[fx] added activation checkpointing annotation ( #1349 )
...
* [fx] added activation checkpointing annotation
* polish code
* polish code
2022-07-21 11:14:28 +08:00
YuliangLiu0306
051592c64e
[fx] update MetaInfoProp pass to process more complex node.meta ( #1344 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [fx] update MetaInfoProp pass to process more complex node.meta
2022-07-21 10:57:52 +08:00
HELSON
7a8702c06d
[colotensor] add Tensor.view op and its unit test ( #1343 )
...
[colotensor] add megatron initialization for gpt2
2022-07-21 10:53:15 +08:00
YuliangLiu0306
942c8cd1fb
[fx] refactor tracer to trace complete graph ( #1342 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [fx] refactor tracer to trace complete graph
* add comments and solve conflicts.
2022-07-20 11:20:38 +08:00
Frank Lee
2cc1175c76
[fx] tested the complete workflow for auto-parallel ( #1336 )
...
* [fx] tested the complete workflow for auto-parallel
* polish code
* polish code
* polish code
2022-07-20 10:45:17 +08:00
YuliangLiu0306
4631fef8a0
[fx] refactor tracer ( #1335 )
2022-07-19 15:50:42 +08:00
HELSON
f92c100ddd
[checkpoint] use gather_tensor in checkpoint and update its unit test ( #1339 )
2022-07-19 14:15:28 +08:00
ver217
0c51ff2c13
[hotfix] ZeroDDP use new process group ( #1333 )
...
* process group supports getting ranks in group
* chunk mgr receives a process group
* update unit test
* fix unit tests
2022-07-18 14:14:52 +08:00
Frank Lee
75abc75c15
[fx] fixed compatibility issue with torch 1.10 ( #1331 )
2022-07-18 11:41:27 +08:00
ver217
7a05367101
[hotfix] shared model returns cpu state_dict ( #1328 )
2022-07-15 22:11:37 +08:00
Frank Lee
b2475d8c5c
[fx] fixed unit tests for torch 1.12 ( #1327 )
2022-07-15 18:22:15 +08:00
HELSON
d49708ae43
[hotfix] fix ddp for unit test test_gpt2 ( #1326 )
2022-07-15 18:19:52 +08:00
Frank Lee
250be4d31e
[utils] integrated colotensor with lazy init context ( #1324 )
...
* [utils] integrated colotensor with lazy init context
* polish code
* polish code
* polish code
2022-07-15 17:47:12 +08:00
YuliangLiu0306
e8acf55e8b
[fx] add balanced policy v2 ( #1251 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [fx] add balanced policy v2
* add unittest
2022-07-15 14:54:26 +08:00
XYE
ca2d3f284f
[fx] Add unit test and fix bugs for transform_mlp_pass ( #1299 )
...
* add test and fix bugs
* add functions back
* add comments
2022-07-15 14:37:58 +08:00
HELSON
1b41686461
[hotfix] fix unit test test_module_spec ( #1321 )
2022-07-15 14:02:32 +08:00
Jiarui Fang
9e4c6449b0
[checkpoint] add ColoOptimizer checkpointing ( #1316 )
2022-07-15 09:52:55 +08:00
ver217
7c70bfbefa
[hotfix] fix PipelineSharedModuleGradientHandler ( #1314 )
2022-07-14 17:31:13 +08:00
Jiarui Fang
85f933b58b
[Optimizer] Remove useless ColoOptimizer ( #1312 )
2022-07-14 16:57:48 +08:00
Jiarui Fang
9f10524313
[Optimizer] polish the init method of ColoOptimizer ( #1310 )
2022-07-14 16:37:33 +08:00
Jiarui Fang
3ef3791a3b
[checkpoint] add test for bert and hotfix save bugs ( #1297 )
2022-07-14 15:38:18 +08:00
Frank Lee
4f4d8c3656
[fx] added apex normalization to patched modules ( #1300 )
...
* [fx] added apex normalization to patched modules
* remove unused imports
2022-07-14 14:24:13 +08:00
Jiarui Fang
4165eabb1e
[hotfix] remove potential circular import ( #1307 )
...
* make it faster
* [hotfix] remove circular import
2022-07-14 13:44:26 +08:00
HELSON
260a55804a
[hotfix] fix shape error in backward when using ColoTensor ( #1298 )
2022-07-13 23:06:12 +08:00
runluo
f83c4d6597
[NFC] polish colossalai/nn/layer/wrapper/pipeline_wrapper.py code style ( #1303 )
2022-07-13 19:01:07 +08:00
binmakeswell
7696cead8d
Recover kernel files
2022-07-13 12:08:21 +08:00
XYE
e83b2ce853
[NFC] polish colossalai/nn/layer/vanilla/layers.py code style ( #1295 )
2022-07-13 12:08:21 +08:00
Liping233
1000a41fd5
[NFC] polish colossalai/nn/layer/vanilla/__init__.py code style ( #1293 )
2022-07-13 12:08:21 +08:00
Maruyama_Aya
87f679aeae
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/kernels.h code style ( #1291 )
2022-07-13 12:08:21 +08:00
Wangbo Zhao(黑色枷锁)
552667825b
[NFC] polish colossalai/nn/layer/parallel_1d/layers.py code style ( #1290 )
2022-07-13 12:08:21 +08:00
doubleHU
d6f5ef8860
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/transform_kernels.cu code style ( #1286 )
2022-07-13 12:08:21 +08:00
Ziheng Qin
6d6c01e94d
[NFC] polish colossalai/__init__.py code style ( #1285 )
2022-07-13 12:08:21 +08:00
Jiatong Han
38e3ccd1e9
[NFC] polish colossalai/nn/layer/parallel_sequence/layers.py code style ( #1280 )
...
Co-authored-by: JThh <jiatong.han@u.nus.edu>
2022-07-13 12:08:21 +08:00
Boyuan Yao
b414eaa5db
[NFC] polish colossalai/nn/optimizer/lamb.py code style ( #1275 )
2022-07-13 12:08:21 +08:00
yuxuan-lou
5f6ab35d25
Hotfix/format ( #1274 )
...
* [NFC] Polish colossalai/kernel/cuda_native/csrc/multi_tensor_lamb.cu code style. (#937 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cuda_util.h code style
* [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_masked_softmax.cpp code style
Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
2022-07-13 12:08:21 +08:00
Super Daniel
52d145a342
[NFC] polish colossalai/nn/lr_scheduler/onecycle.py code style ( #1269 )
2022-07-13 12:08:21 +08:00
Geng Zhang
0e06f62160
[NFC] polish colossalai/nn/layer/parallel_sequence/_operation.py code style ( #1266 )
2022-07-13 12:08:21 +08:00
binmakeswell
c95e18cdb9
[NFC] polish colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax.h code style ( #1270 )
2022-07-13 12:08:21 +08:00
xyupeng
94bfd35184
[NFC] polish colossalai/builder/builder.py code style ( #1265 )
2022-07-13 12:08:21 +08:00
DouJS
db13f96333
[NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_apply.cuh code style ( #1264 )
2022-07-13 12:08:21 +08:00
shenggan
5d7366b144
[NFC] polish colossalai/kernel/cuda_native/csrc/scaled_masked_softmax.h code style ( #1263 )
2022-07-13 12:08:21 +08:00
Zangwei Zheng
197a2c89e2
[NFC] polish colossalai/communication/collective.py ( #1262 )
2022-07-13 12:08:21 +08:00
ziyu huang
f1cafcc73a
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/dropout_kernels.cu code style ( #1261 )
...
Co-authored-by: Arsmart123 <202476410arsmart@gmail.com>
2022-07-13 12:08:21 +08:00
Sze-qq
f8b9aaef47
[NFC] polish colossalai/kernel/cuda_native/csrc/type_shim.h code style ( #1260 )
2022-07-13 12:08:21 +08:00
superhao1995
f660152c73
[NFC] polish colossalai/nn/layer/parallel_3d/_operation.py code style ( #1258 )
...
Co-authored-by: Research <research@soccf-snr3-017.comp.nus.edu.sg>
2022-07-13 12:08:21 +08:00
Thunderbeee
9738fb0f78
[NFC] polish colossalai/nn/lr_scheduler/__init__.py ( #1255 )
...
code style
2022-07-13 12:08:21 +08:00
Kai Wang (Victor Kai)
50f2ad213f
[NFC] polish colossalai/engine/ophooks/utils.py code style ( #1256 )
2022-07-13 12:08:21 +08:00
Ofey Chan
2dd4d556fb
[NFC] polish colossalai/nn/init.py code style ( #1292 )
2022-07-13 10:51:55 +08:00
Jiarui Fang
556b9b7e1a
[hotfix] Dist Mgr gather torch version ( #1284 )
...
* make it faster
* [hotfix] torchvision fx tests
* [hotfix] rename duplicated named test_gpt.py
* [hotfix] dist mgr torch version
2022-07-13 00:18:56 +08:00
HELSON
abba4d84e1
[hotfix] fix bert model test in unit tests ( #1272 )
2022-07-12 23:26:45 +08:00
ver217
7aadcbd070
hotfix colotensor _scan_for_pg_from_args ( #1276 )
2022-07-12 20:46:31 +08:00
oahzxl
0cf8e8e91c
[NFC] polish <colossalai/nn/lr_scheduler/poly.py> code style ( #1267 )
2022-07-12 18:18:14 +08:00
Jiarui Fang
c92f84fcdb
[tensor] distributed checkpointing for parameters ( #1240 )
2022-07-12 15:51:06 +08:00
Frank Lee
fb35460595
[fx] added ndim property to proxy ( #1253 )
2022-07-12 15:27:13 +08:00
Frank Lee
4a09fc0947
[fx] fixed tracing with apex-based T5 model ( #1252 )
...
* [fx] fixed tracing with apex-based T5 model
* polish code
* polish code
2022-07-12 15:19:25 +08:00
Frank Lee
7531c6271f
[fx] refactored the file structure of patched function and module ( #1238 )
...
* [fx] refactored the file structure of patched function and module
* polish code
2022-07-12 15:01:58 +08:00
YuliangLiu0306
17ed33350b
[hotfix] fix an assertion bug in base schedule. ( #1250 )
2022-07-12 14:20:02 +08:00
YuliangLiu0306
97d713855a
[fx] methods to get fx graph property. ( #1246 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* manipulation
* [fx] add graph manipulation methods.
* [fx] methods to get fx graph property.
* add unit test
* add docstring to explain top node and leaf node in this context
2022-07-12 14:10:37 +08:00
YuliangLiu0306
30b4fc0eb0
[fx] add split module pass and unit test from pipeline passes ( #1242 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [fx] add split module pass and unit test from pipeline passes
* fix MNASNet bug
* polish
2022-07-12 13:45:01 +08:00
Jiarui Fang
1aad903c15
[tensor] redistribute among different process groups ( #1247 )
...
* make it faster
* [tensor] rename convert_to_dist -> redistribute
* [tensor] ShardSpec and ReplicaSpec
* [tensor] redistribute among diff pgs
* polish code
2022-07-12 10:24:05 +08:00
Jiarui Fang
9bcd2fd4af
[tensor] a shorter shard and replicate spec ( #1245 )
2022-07-11 15:51:48 +08:00
Jiarui Fang
2699dfbbfd
[rename] convert_to_dist -> redistribute ( #1243 )
2022-07-11 13:05:44 +08:00
HELSON
f6add9b720
[tensor] redirect .data.__get__ to a tensor instance ( #1239 )
2022-07-11 11:41:29 +08:00
Jiarui Fang
20da6e48c8
[checkpoint] save sharded optimizer states ( #1237 )
2022-07-08 16:33:13 +08:00
Jiarui Fang
4a76084dc9
[tensor] add zero_like colo op, important for Optimizer ( #1236 )
2022-07-08 14:55:27 +08:00
Jiarui Fang
3b500984b1
[tensor] fix some unittests ( #1234 )
2022-07-08 14:18:30 +08:00
ver217
a45ddf2d5f
[hotfix] fix sharded optim step and clip_grad_norm ( #1226 )
2022-07-08 13:34:48 +08:00
HELSON
f071b500b6
[polish] polish __repr__ for ColoTensor, DistSpec, ProcessGroup ( #1235 )
2022-07-08 13:25:57 +08:00
HELSON
0453776def
[tensor] fix an assertion in colo_tensor cross_entropy ( #1232 )
2022-07-08 11:18:00 +08:00
Jiarui Fang
0e199d71e8
[hotfix] fx get comm size bugs ( #1233 )
...
* init a checkpoint dir
* [checkpoint] support resume for cosinewarmuplr
* [checkpoint] add unit test
* fix some bugs but still not OK
* fix bugs
* make it faster
* [checkpoint] support generalized scheduler
* polish
* [tensor] torch function return colotensor
* polish
* fix bugs
* remove debug info
* polish
* polish
* [tensor] test_model pass unittests
* polish
* [hotfix] fx get comm size bug
Co-authored-by: ZhaoYi1222 <zhaoyi9499@gmail.com>
2022-07-08 10:54:41 +08:00
HELSON
42ab36b762
[tensor] add unit test for colo_tensor 1DTP cross_entropy ( #1230 )
2022-07-07 19:17:23 +08:00
Yi Zhao
04537bf83e
[checkpoint] support generalized scheduler ( #1222 )
2022-07-07 18:16:38 +08:00
Jiarui Fang
a98319f023
[tensor] torch function return colotensor ( #1229 )
2022-07-07 18:09:18 +08:00
YuliangLiu0306
2b7dca44b5
[fx] get communication size between partitions ( #1224 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [fx] get communication size between partitions.
* polish
2022-07-07 16:22:00 +08:00
Frank Lee
84f2298a96
[fx] added patches for tracing swin transformer ( #1228 )
2022-07-07 15:20:13 +08:00
Frank Lee
b6cb5a47ad
[fx] added timm model tracing testing ( #1221 )
2022-07-07 14:02:17 +08:00
HELSON
280a81243d
[tensor] improve robustness of class 'ProcessGroup' ( #1223 )
2022-07-07 13:55:24 +08:00
Jiarui Fang
15d988f954
[tensor] sharded global process group ( #1219 )
2022-07-07 13:38:48 +08:00
Jiarui Fang
db1bef9032
[hotfix] fx shard 1d pass bug fixing ( #1220 )
2022-07-07 13:37:31 +08:00
Frank Lee
11973d892d
[fx] added torchvision model tracing testing ( #1216 )
...
* [fx] added torchvision model tracing testing
* remove unused imports
2022-07-06 21:37:56 +08:00
Jiarui Fang
52736205d9
[checkpoint] make unit test faster ( #1217 )
2022-07-06 17:39:46 +08:00