Commit Graph

1028 Commits (46c6cc79a90bf578cb92d8d2ab5b9e2ff0e818fd)

Author SHA1 Message Date
Super Daniel 52d145a342 [NFC] polish colossalai/nn/lr_scheduler/onecycle.py code style (#1269) 2022-07-13 12:08:21 +08:00
Geng Zhang 0e06f62160 [NFC] polish colossalai/nn/layer/parallel_sequence/_operation.py code style (#1266) 2022-07-13 12:08:21 +08:00
binmakeswell c95e18cdb9 [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax.h code style (#1270) 2022-07-13 12:08:21 +08:00
xyupeng 94bfd35184 [NFC] polish colossalai/builder/builder.py code style (#1265) 2022-07-13 12:08:21 +08:00
DouJS db13f96333 [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_apply.cuh code style (#1264) 2022-07-13 12:08:21 +08:00
shenggan 5d7366b144 [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_masked_softmax.h code style (#1263) 2022-07-13 12:08:21 +08:00
Zangwei Zheng 197a2c89e2 [NFC] polish colossalai/communication/collective.py (#1262) 2022-07-13 12:08:21 +08:00
ziyu huang f1cafcc73a [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/dropout_kernels.cu code style (#1261)
Co-authored-by: Arsmart123 <202476410arsmart@gmail.com>
2022-07-13 12:08:21 +08:00
Sze-qq f8b9aaef47 [NFC] polish colossalai/kernel/cuda_native/csrc/type_shim.h code style (#1260) 2022-07-13 12:08:21 +08:00
superhao1995 f660152c73 [NFC] polish colossalai/nn/layer/parallel_3d/_operation.py code style (#1258)
Co-authored-by: Research <research@soccf-snr3-017.comp.nus.edu.sg>
2022-07-13 12:08:21 +08:00
Thunderbeee 9738fb0f78 [NFC] polish colossalai/nn/lr_scheduler/__init__.py (#1255)
code style
2022-07-13 12:08:21 +08:00
Kai Wang (Victor Kai) 50f2ad213f [NFC] polish colossalai/engine/ophooks/utils.py code style (#1256) 2022-07-13 12:08:21 +08:00
Ofey Chan 2dd4d556fb [NFC] polish colossalai/nn/init.py code style (#1292) 2022-07-13 10:51:55 +08:00
Jiarui Fang 556b9b7e1a [hotfix] Dist Mgr gather torch version (#1284)
* make it faster

* [hotfix] torchvison fx tests

* [hotfix] rename duplicated named test_gpt.py

* [hotfix] dist mgr torch version
2022-07-13 00:18:56 +08:00
Frank Lee 7e8114a8dd [hotfix] skipped unsafe test cases (#1282) 2022-07-13 00:08:59 +08:00
Jiarui Fang 79fe7b027a [hotfix] test model unittest hotfix (#1281) 2022-07-12 23:45:29 +08:00
Jiarui Fang e56731e916 [hotfix] test_gpt.py duplicated (#1279)
* make it faster

* [hotfix] torchvison fx tests

* [hotfix] rename duplicated named test_gpt.py
2022-07-12 23:29:17 +08:00
HELSON abba4d84e1 [hotfix] fix bert model test in unitests (#1272) 2022-07-12 23:26:45 +08:00
YuliangLiu0306 01ea68b2e6 [tests] remove T5 test skip decorator (#1271) 2022-07-12 23:25:30 +08:00
Frank Lee de498255b5 [release] v0.1.8 (#1278) 2022-07-12 23:21:32 +08:00
Jiarui Fang ca9d5ee91c [hotfix] torchvison fx unittests miss import pytest (#1277) 2022-07-12 23:04:06 +08:00
ver217 7aadcbd070 hotfix colotensor _scan_for_pg_from_args (#1276) 2022-07-12 20:46:31 +08:00
oahzxl 0cf8e8e91c [NFC] polish <colossalai/nn/lr_scheduler/poly.py> code style (#1267) 2022-07-12 18:18:14 +08:00
Jiarui Fang c92f84fcdb [tensor] distributed checkpointing for parameters (#1240) 2022-07-12 15:51:06 +08:00
Sze-qq 49114d8df0 update GPT-3 visualisation 2022-07-12 15:50:32 +08:00
Frank Lee fb35460595 [fx] added ndim property to proxy (#1253) 2022-07-12 15:27:13 +08:00
Frank Lee 4a09fc0947 [fx] fixed tracing with apex-based T5 model (#1252)
* [fx] fixed tracing with apex-based T5 model

* polish code

* polish code
2022-07-12 15:19:25 +08:00
Frank Lee 7531c6271f [fx] refactored the file structure of patched function and module (#1238)
* [fx] refactored the file structure of patched function and module

* polish code
2022-07-12 15:01:58 +08:00
YuliangLiu0306 17ed33350b [hotfix] fix an assertion bug in base schedule. (#1250) 2022-07-12 14:20:02 +08:00
YuliangLiu0306 97d713855a [fx] methods to get fx graph property. (#1246)
* [CLI] add CLI launcher

* Revert "[CLI] add CLI launcher"

This reverts commit df7e6506d4.

* manipulation

* [fx]add graph manipulation methods.

* [fx]methods to get fx graph property.

* add unit test

* add docstring to explain top node and leaf node in this context
2022-07-12 14:10:37 +08:00
YuliangLiu0306 30b4fc0eb0 [fx]add split module pass and unit test from pipeline passes (#1242)
* [CLI] add CLI launcher

* Revert "[CLI] add CLI launcher"

This reverts commit df7e6506d4.

* [fx]add split module pass and unit test from pipeline passes

* fix MNASNet bug

* polish
2022-07-12 13:45:01 +08:00
github-actions[bot] 762905da68 Automated submodule synchronization (#1241)
Co-authored-by: github-actions <github-actions@github.com>
2022-07-12 10:32:20 +08:00
Jiarui Fang 1aad903c15 [tensor] redistribute among different process groups (#1247)
* make it faster

* [tensor] rename convert_to_dist -> redistribute

* [tensor] ShardSpec and ReplicaSpec

* [tensor] redistribute among diff pgs

* polish code
2022-07-12 10:24:05 +08:00
Jiarui Fang 9bcd2fd4af [tensor] a shorter shard and replicate spec (#1245) 2022-07-11 15:51:48 +08:00
Jiarui Fang 2699dfbbfd [rename] convert_to_dist -> redistribute (#1243) 2022-07-11 13:05:44 +08:00
HELSON f6add9b720 [tensor] redirect .data.__get__ to a tensor instance (#1239) 2022-07-11 11:41:29 +08:00
Jiarui Fang 20da6e48c8 [checkpoint] save sharded optimizer states (#1237) 2022-07-08 16:33:13 +08:00
Jiarui Fang 4a76084dc9 [tensor] add zero_like colo op, important for Optimizer (#1236) 2022-07-08 14:55:27 +08:00
Jiarui Fang 3b500984b1 [tensor] fix some unittests (#1234) 2022-07-08 14:18:30 +08:00
ver217 a45ddf2d5f [hotfix] fix sharded optim step and clip_grad_norm (#1226) 2022-07-08 13:34:48 +08:00
HELSON f071b500b6 [polish] polish __repr__ for ColoTensor, DistSpec, ProcessGroup (#1235) 2022-07-08 13:25:57 +08:00
HELSON 0453776def [tensor] fix a assertion in colo_tensor cross_entropy (#1232) 2022-07-08 11:18:00 +08:00
Jiarui Fang 0e199d71e8 [hotfix] fx get comm size bugs (#1233)
* init a checkpoint dir

* [checkpoint]support resume for cosinewarmuplr

* [checkpoint]add unit test

* fix some bugs but still not OK

* fix bugs

* make it faster

* [checkpoint]support generalized scheduler

* polish

* [tensor] torch function return colotensor

* polish

* fix bugs

* remove debug info

* polish

* polish

* [tensor] test_model pass unittests

* polish

* [hotfix] fx get comm size bug

Co-authored-by: ZhaoYi1222 <zhaoyi9499@gmail.com>
2022-07-08 10:54:41 +08:00
HELSON 42ab36b762 [tensor] add unitest for colo_tensor 1DTP cross_entropy (#1230) 2022-07-07 19:17:23 +08:00
Yi Zhao 04537bf83e [checkpoint]support generalized scheduler (#1222) 2022-07-07 18:16:38 +08:00
Jiarui Fang a98319f023 [tensor] torch function return colotensor (#1229) 2022-07-07 18:09:18 +08:00
Frank Lee 5581170890 [fx] fixed huggingface OPT and T5 results misalignment (#1227) 2022-07-07 16:29:58 +08:00
YuliangLiu0306 2b7dca44b5 [fx]get communication size between partitions (#1224)
* [CLI] add CLI launcher

* Revert "[CLI] add CLI launcher"

This reverts commit df7e6506d4.

* [fx]get communication size between partitions.

* polish
2022-07-07 16:22:00 +08:00
github-actions[bot] 4951f7d80c Automated submodule synchronization (#1204)
Co-authored-by: github-actions <github-actions@github.com>
2022-07-07 15:22:45 +08:00
Frank Lee 84f2298a96 [fx] added patches for tracing swin transformer (#1228) 2022-07-07 15:20:13 +08:00