Commit Graph

58 Commits (5ecef13c165b9df6da761e3391bf3908d87f2465)

Author SHA1 Message Date
doubleHU f2da21a827 fix format (#586) 2022-04-06 11:40:59 +08:00
fanjinfucool ffad81e1d1 fix format (#585) 2022-04-06 11:40:59 +08:00
Co-authored-by: fanjifu <FAN>
Maruyama_Aya d2dc6049b5 fix format (#580) 2022-04-06 11:40:59 +08:00
yuxuan-lou cfb41297ff 'fix/format' (#573) 2022-04-06 11:40:59 +08:00
YuliangLiu0306 ade05a5d83 [refactor] pipeline, put runtime schedule into engine. (#627) 2022-04-03 20:46:45 +08:00
Jiarui Fang e956d93ac2 [refactor] memory utils (#577) 2022-04-01 09:22:33 +08:00
HELSON e6d50ec107 [zero] adapt zero for unsharded parameters (#561) 2022-03-31 18:34:11 +08:00
* support existing sharded and unsharded parameters in zero

* add unit test for moe-zero model init

* polish moe gradient handler
Jiarui Fang 7675366fce [polish] rename col_attr -> colo_attr (#558) 2022-03-31 12:25:45 +08:00
ver217 014bac0c49 [zero] hijack p.grad in sharded model (#554) 2022-03-30 18:14:50 +08:00
* hijack p.grad in sharded model

* polish comments

* polish comments
Jiarui Fang f552b11294 [zero] label state for param fp16 and grad (#551) 2022-03-30 15:57:46 +08:00
Jiarui Fang 214da761d4 [zero] add stateful tensor (#549) 2022-03-30 13:51:37 +08:00
Liang Bowen ec5086c49c Refactored docstring to google style 2022-03-29 17:17:47 +08:00
Jie Zhu 73d36618a6 [profiler] add MemProfiler (#356) 2022-03-29 12:48:34 +08:00
* add memory trainer hook

* fix bug

* add memory trainer hook

* fix import bug

* fix import bug

* add trainer hook

* fix #370 git log bug

* modify `to_tensorboard` function to support better output

* remove useless output

* change the name of `MemProfiler`

* complete memory profiler

* replace error with warning

* finish trainer hook

* modify interface of MemProfiler

* modify `__init__.py` in profiler

* remove unnecessary pass statement

* add usage to doc string

* add usage to trainer hook

* new location to store temp data file
HELSON a30e2b4c24 [zero] adapt for no-leaf module in zero (#535) 2022-03-28 17:42:18 +08:00
only process module's own parameters in Zero context

add zero hooks for all modules that contain parameters

gather parameters only belonging to module itself
Jiarui Fang 705f56107c [zero] refactor model data tracing (#537) 2022-03-28 16:38:18 +08:00
Jiarui Fang 4d322b79da [refactor] remove old zero code (#517) 2022-03-25 14:54:39 +08:00
Jiarui Fang 920c5889a7 [zero] add colo move inline (#521) 2022-03-25 14:02:55 +08:00
Jiarui Fang a445e118cf [polish] polish singleton and global context (#500) 2022-03-23 18:03:39 +08:00
Jiarui Fang b334822163 [zero] polish sharded param name (#484) 2022-03-22 14:36:16 +08:00
* [zero] polish sharded param name

* polish code

* polish

* polish code

* polish

* polish

* polish
Jiarui Fang 65c0f380c2 [format] polish name format for MOE (#481) 2022-03-21 23:19:47 +08:00
ver217 8d3250d74b [zero] ZeRO supports pipeline parallel (#477) 2022-03-21 16:55:37 +08:00
HELSON aff9d354f7 [MOE] polish moe_env (#467) 2022-03-19 15:36:25 +08:00
HELSON 84fd7c1d4d add moe context, moe utilities and refactor gradient handler (#455) 2022-03-18 16:38:32 +08:00
ver217 a241f61b34 [zero] Update initialize for ZeRO (#458) 2022-03-18 16:18:31 +08:00
* polish code

* shard strategy receive pg in shard() / gather()

* update zero engine

* polish code
ver217 9506a8beb2 use double buffer to handle grad 2022-03-16 14:24:09 +08:00
Jiarui Fang 56bb412e72 [polish] use GLOBAL_MODEL_DATA_TRACER (#417) 2022-03-15 11:29:46 +08:00
Jiarui Fang 21dc54e019 [zero] memtracer to record cuda memory usage of model data and overall system (#395) 2022-03-14 22:05:30 +08:00
ver217 88804aee49 add bucket tensor shard strategy 2022-03-14 14:48:32 +08:00
Xu Kai 54ee8d1254 Fix/format colossalai/engine/paramhooks/ (#350) 2022-03-11 15:50:28 +08:00
yuxuan-lou 3b88eb2259 Flake8 code restyle 2022-03-11 15:50:28 +08:00
Jiarui Fang 44e4891f57 [zero] able to place params on cpu after zero init context (#365) 2022-03-11 15:50:28 +08:00
* place params on cpu after zero init context

* polish code
Jiarui Fang 10e2826426 move async memory to an individual directory (#345) 2022-03-11 15:50:28 +08:00
Frank Lee 6a3188167c set criterion as optional in colossalai initialize (#336) 2022-03-11 15:50:28 +08:00
Jie Zhu 3213554cc2 [profiler] add adaptive sampling to memory profiler (#330) 2022-03-11 15:50:28 +08:00
* fix merge conflict

modify unit test

remove unnecessary log info

reformat file

* remove unused module

* remove unnecessary sync function

* change doc string style from Google to Sphinx
ver217 1388671699 [zero] Update sharded model v2 using sharded param v2 (#323) 2022-03-11 15:50:28 +08:00
Jiarui Fang 11bddb6e55 [zero] update zero context init with the updated test utils (#327) 2022-03-11 15:50:28 +08:00
ver217 36f9a74ab2 fix sharded param hook and unit test 2022-03-11 15:50:28 +08:00
ver217 001ca624dd impl shard optim v2 and add unit test 2022-03-11 15:50:28 +08:00
Jie Zhu d344689274 [profiler] primary memory tracer 2022-03-11 15:50:28 +08:00
ver217 7aef75ca42 [zero] add sharded grad and refactor grad hooks for ShardedModel (#287) 2022-03-11 15:50:28 +08:00
Jiarui Fang 8d653af408 add a common util for hooks registered on parameter. (#292) 2022-03-11 15:50:28 +08:00
Jiarui Fang 5a560a060a Feature/zero (#279) 2022-03-11 15:50:28 +08:00
* add zero1 (#209)

* add zero1

* add test zero1

* update zero stage 1 develop (#212)

* Implement naive zero3 (#240)

* naive zero3 works well

* add zero3 param manager

* add TODOs in comments

* add gather full param ctx

* fix sub module streams

* add offload

* fix bugs of hook and add unit tests

* fix bugs of hook and add unit tests (#252)

* add gather full param ctx

* fix sub module streams

* add offload

* fix bugs of hook and add unit tests

* polish code and add state dict hook

* fix bug

* update unit test

* refactor reconstructed zero code

* clip_grad support zero3 and add unit test

* add unit test for Zero3ParameterManager

* [WIP] initialize the shard param class

* [WIP] Yet another sharded model implementation (#274)

* [WIP] initialize the shard param class

* [WIP] Yet another implementation of shardModel. Using a better hook method.

* torch.concat -> torch.cat

* fix test_zero_level_1.py::test_zero_level_1 unitest

* remove deepspeed implementation and refactor for the reconstructed zero module

* polish zero dp unittests

Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
アマデウス 9ee197d0e9 moved env variables to global variables; (#215) 2022-02-15 11:31:13 +08:00
added branch context;
added vocab parallel layers;
moved split_batch from load_batch to tensor parallel embedding layers;
updated gpt model;
updated unit test cases;
fixed a few collective communicator bugs
Jiarui Fang 569357fea0 add pytorch hooks (#179) 2022-01-25 22:20:54 +08:00
* add pytorch hooks
fix #175

* remove licenses in src code

* add gpu memory tracer

* replacing print with logger in ophooks.
ver217 708404d5f8 fix pipeline forward return tensors (#176) 2022-01-21 15:46:02 +08:00
HELSON 0f8c7f9804 Fixed docstring in colossalai (#171) 2022-01-21 10:44:30 +08:00
Frank Lee e2089c5c15 adapted for sequence parallel (#163) 2022-01-20 13:44:51 +08:00
ver217 7bf1e98b97 pipeline last stage supports multi output (#151) 2022-01-17 15:57:47 +08:00
HELSON dceae85195 Added MoE parallel (#127) 2022-01-07 15:08:36 +08:00
ver217 293fb40c42 add scatter/gather optim for pipeline (#123) 2022-01-07 13:22:22 +08:00