Commit Graph

1228 Commits (4af31d263dd12c9238607fa48e5fd0488cd8cf25)

Author           SHA1        Message                                                                          Date
Frank Lee        dd14783f75  [kernel] fixed repeated loading of kernels (#2549)                               2 years ago
ver217           5b1854309a  [hotfix] fix zero ddp warmup check (#2545)                                       2 years ago
oahzxl           fa3d66feb9  support unet metainfo prop (#2544)                                               2 years ago
oahzxl           05671fcb42  [autochunk] support multi outputs chunk search (#2538)                           2 years ago
oahzxl           63199c6687  [autochunk] support transformer (#2526)                                          2 years ago
HELSON           a4ed9125ac  [hotfix] fix lightning error (#2529)                                             2 years ago
HELSON           66dfcf5281  [gemini] update the gpt example (#2527)                                          2 years ago
HELSON           b528eea0f0  [zero] add zero wrappers (#2523)                                                 2 years ago
Super Daniel     c198c7c0b0  [hotfix] meta tensor default device. (#2510)                                     2 years ago
HELSON           077a5cdde4  [zero] fix gradient clipping in hybrid parallelism (#2521)                       2 years ago
YuliangLiu0306   aa0f6686f9  [autoparallel] accelerate gpt2 training (#2495)                                  2 years ago
HELSON           707b11d4a0  [gemini] update ddp strict mode (#2518)                                          2 years ago
HELSON           2d1a7dfe5f  [zero] add strict ddp mode (#2508)                                               2 years ago
oahzxl           c04f183237  [autochunk] support parsing blocks (#2506)                                       2 years ago
Super Daniel     35c0c0006e  [utils] lazy init. (#2148)                                                       2 years ago
oahzxl           72341e65f4  [auto-chunk] support extramsa (#3) (#2504)                                       2 years ago
Ziyue Jiang      0f02b8c6e6  add avg partition (#2483)                                                        2 years ago
アマデウス       99d9713b02  Revert "Update parallel_context.py (#2408)"                                      2 years ago
oahzxl           ecccc91f21  [autochunk] support autochunk on evoformer (#2497)                               2 years ago
oahzxl           5db3a5bf42  [fx] allow control of ckpt_codegen init (#2498)                                  2 years ago
HELSON           d565a24849  [zero] add unit testings for hybrid parallelism (#2486)                          2 years ago
oahzxl           4953b4ace1  [autochunk] support evoformer tracer (#2485)                                     2 years ago
YuliangLiu0306   67e1912b59  [autoparallel] support origin activation ckpt on autoprallel system (#2468)      2 years ago
Ziyue Jiang      fef5c949c3  polish pp middleware (#2476)                                                     2 years ago
HELSON           a5dc4253c6  [zero] polish low level optimizer (#2473)                                        2 years ago
Frank Lee        8b7495dd54  [example] integrate seq-parallel tutorial with CI (#2463)                        2 years ago
Jiarui Fang      867c8c2d3a  [zero] low level optim supports ProcessGroup (#2464)                             2 years ago
Frank Lee        14d9299360  [cli] fixed hostname mismatch error (#2465)                                      2 years ago
Haofan Wang      9358262992  Fix False warning in initialize.py (#2456)                                       2 years ago
YuliangLiu0306   8221fd7485  [autoparallel] update binary elementwise handler (#2451)                         2 years ago
HELSON           2bfeb24308  [zero] add warning for ignored parameters (#2446)                                2 years ago
Frank Lee        39163417a1  [example] updated the hybrid parallel tutorial (#2444)                           2 years ago
HELSON           5521af7877  [zero] fix state_dict and load_state_dict for ddp ignored parameters (#2443)    2 years ago
YuliangLiu0306   2731531bc2  [autoparallel] integrate device mesh initialization into autoparallelize (#2393) 2 years ago
Frank Lee        c72c827e95  [cli] provided more details if colossalai run fail (#2442)                       2 years ago
Super Daniel     c41e59e5ad  [fx] allow native ckpt trace and codegen. (#2438)                                2 years ago
YuliangLiu0306   41429b9b28  [autoparallel] add shard option (#2423)                                          2 years ago
HELSON           7829aa094e  [ddp] add is_ddp_ignored (#2434)                                                 2 years ago
HELSON           bb4e9a311a  [zero] add inference mode and its unit test (#2418)                              2 years ago
Jiarui Fang      93f62dd152  [autochunk] add autochunk feature                                                2 years ago
HELSON           dddacd2d2c  [hotfix] add norm clearing for the overflow step (#2416)                         2 years ago
oahzxl           7ab2db206f  adapt new fx                                                                     2 years ago
oahzxl           e532679c95  Merge branch 'main' of https://github.com/oahzxl/ColossalAI into chunk           2 years ago
Haofan Wang      7d5640b9db  Update parallel_context.py (#2408)                                               2 years ago
oahzxl           fd818cf144  change imports                                                                   2 years ago
oahzxl           a591d45b29  add available                                                                    2 years ago
oahzxl           615e7e68d9  update doc                                                                       2 years ago
oahzxl           7d4abaa525  add doc                                                                          2 years ago
oahzxl           1be0ac3cbf  add doc for trace indice                                                         2 years ago
oahzxl           0b6af554df  remove useless function                                                          2 years ago