Jiarui Fang
107b99ddb1
[zero] dump memory stats for sharded model ( #548 )
2022-03-30 09:38:44 +08:00
Ziyue Jiang
763dc325f1
[TP] Add gather_out arg to Linear ( #541 )
2022-03-30 09:35:46 +08:00
HELSON
8c90d4df54
[zero] add zero context manager to change config during initialization ( #546 )
2022-03-29 17:57:59 +08:00
Liang Bowen
ec5086c49c
Refactored docstring to google style
2022-03-29 17:17:47 +08:00
Jiarui Fang
53b1b6e340
[zero] non model data tracing ( #545 )
2022-03-29 15:45:48 +08:00
Jie Zhu
73d36618a6
[profiler] add MemProfiler ( #356 )
* add memory trainer hook
* fix bug
* add memory trainer hook
* fix import bug
* fix import bug
* add trainer hook
* fix #370 git log bug
* modify `to_tensorboard` function to support better output
* remove useless output
* change the name of `MemProfiler`
* complete memory profiler
* replace error with warning
* finish trainer hook
* modify interface of MemProfiler
* modify `__init__.py` in profiler
* remove unnecessary pass statement
* add usage to doc string
* add usage to trainer hook
* new location to store temp data file
2022-03-29 12:48:34 +08:00
ver217
fb841dd5c5
[zero] optimize grad offload ( #539 )
* optimize grad offload
* polish code
* polish code
2022-03-29 12:48:00 +08:00
Jiarui Fang
7d81b5b46e
[logging] polish logger format ( #543 )
2022-03-29 10:37:11 +08:00
ver217
1f90a3b129
[zero] polish ZeroInitContext ( #540 )
2022-03-29 09:09:04 +08:00
Jiarui Fang
c11ff81b15
[zero] get memory usage of sharded optim v2. ( #542 )
2022-03-29 09:08:18 +08:00
HELSON
a30e2b4c24
[zero] adapt for no-leaf module in zero ( #535 )
only process module's own parameters in Zero context
add zero hooks for all modules that contain parameters
gather parameters only belonging to module itself
2022-03-28 17:42:18 +08:00
Jiarui Fang
705f56107c
[zero] refactor model data tracing ( #537 )
2022-03-28 16:38:18 +08:00
Jiarui Fang
a590ed0ba3
[zero] improve the accuracy of get_memory_usage of sharded param ( #538 )
2022-03-28 16:19:19 +08:00
Jiarui Fang
37cb70feec
[zero] get memory usage for sharded param ( #536 )
2022-03-28 15:01:21 +08:00
Jiarui Fang
05e33b2578
[zero] fix grad offload ( #528 )
* [zero] fix grad offload
* polish code
2022-03-25 18:23:25 +08:00
LuGY
105c5301c3
[zero]added hybrid adam, removed loss scale in adam ( #527 )
* [zero]added hybrid adam, removed loss scale of adam
* remove useless code
2022-03-25 18:03:54 +08:00
Jiarui Fang
8d8c5407c0
[zero] refactor model data tracing ( #522 )
2022-03-25 18:03:32 +08:00
Frank Lee
3601b2bad0
[test] fixed rerun_on_exception and adapted test cases ( #487 )
2022-03-25 17:25:12 +08:00
Jiarui Fang
4d322b79da
[refactor] remove old zero code ( #517 )
2022-03-25 14:54:39 +08:00
LuGY
6a3f9fda83
[cuda] modify the fused adam, support hybrid of fp16 and fp32 ( #497 )
2022-03-25 14:15:53 +08:00
Jiarui Fang
920c5889a7
[zero] add colo move inline ( #521 )
2022-03-25 14:02:55 +08:00
ver217
7be397ca9c
[log] polish disable_existing_loggers ( #519 )
2022-03-25 12:30:55 +08:00
Jiarui Fang
0bebda6ea5
[zero] fix init device bug in zero init context unittest ( #516 )
2022-03-25 12:24:18 +08:00
Jiarui Fang
7ef3507ace
[zero] show model data cuda memory usage after zero context init. ( #515 )
2022-03-25 11:23:35 +08:00
ver217
a2e61d61d4
[zero] zero init ctx enable rm_torch_payload_on_the_fly ( #512 )
* enable rm_torch_payload_on_the_fly
* polish docstr
2022-03-24 23:44:00 +08:00
Jiarui Fang
81145208d1
[install] run with out rich ( #513 )
2022-03-24 17:39:50 +08:00
Jiarui Fang
bca0c49a9d
[zero] use colo model data api in optimv2 ( #511 )
2022-03-24 17:19:34 +08:00
Jiarui Fang
9330be0f3c
[memory] set cuda mem frac ( #506 )
2022-03-24 16:57:13 +08:00
Jiarui Fang
0035b7be07
[memory] add model data tensor moving api ( #503 )
2022-03-24 14:29:41 +08:00
Jiarui Fang
a445e118cf
[polish] polish singleton and global context ( #500 )
2022-03-23 18:03:39 +08:00
ver217
9ec1ce6ab1
[zero] sharded model support the reuse of fp16 shard ( #495 )
* sharded model supports reuse fp16 shard
* rename variable
* polish code
* polish code
* polish code
2022-03-23 14:59:59 +08:00
HELSON
f24b5ed201
[MOE] remove old MoE legacy ( #493 )
2022-03-22 17:37:16 +08:00
ver217
c4c02424f3
[zero] sharded model manages ophooks individually ( #492 )
2022-03-22 17:33:20 +08:00
HELSON
c9023d4078
[MOE] support PR-MOE ( #488 )
2022-03-22 16:48:22 +08:00
ver217
a9ecb4b244
[zero] polish sharded optimizer v2 ( #490 )
2022-03-22 15:53:48 +08:00
ver217
62b0a8d644
[zero] sharded optim support hybrid cpu adam ( #486 )
* sharded optim support hybrid cpu adam
* update unit test
* polish docstring
2022-03-22 14:56:59 +08:00
Jiarui Fang
b334822163
[zero] polish sharded param name ( #484 )
* [zero] polish sharded param name
* polish code
* polish
* polish code
* polish
* polish
* polish
2022-03-22 14:36:16 +08:00
HELSON
d7ea63992b
[MOE] add FP32LinearGate for MOE in NaiveAMP context ( #480 )
2022-03-22 10:50:20 +08:00
Jiarui Fang
65c0f380c2
[format] polish name format for MOE ( #481 )
2022-03-21 23:19:47 +08:00
ver217
8d3250d74b
[zero] ZeRO supports pipeline parallel ( #477 )
2022-03-21 16:55:37 +08:00
Frank Lee
83a847d058
[test] added rerun on exception for testing ( #475 )
* [test] added rerun on exception function
* polish code
2022-03-21 15:51:57 +08:00
HELSON
7544347145
[MOE] add unitest for MOE experts layout, gradient handler and kernel ( #469 )
2022-03-21 13:35:04 +08:00
ver217
3cb3fc275e
zero init ctx receives a dp process group ( #471 )
2022-03-21 11:18:55 +08:00
HELSON
aff9d354f7
[MOE] polish moe_env ( #467 )
2022-03-19 15:36:25 +08:00
HELSON
bccbc15861
[MOE] changed parallelmode to dist process group ( #460 )
2022-03-19 13:46:29 +08:00
ver217
fc8e6db005
[doc] Update docstring for ZeRO ( #459 )
* polish sharded model docstr
* polish sharded optim docstr
* polish zero docstr
* polish shard strategy docstr
2022-03-18 16:48:20 +08:00
HELSON
84fd7c1d4d
add moe context, moe utilities and refactor gradient handler ( #455 )
2022-03-18 16:38:32 +08:00
ver217
a241f61b34
[zero] Update initialize for ZeRO ( #458 )
* polish code
* shard strategy receive pg in shard() / gather()
* update zero engine
* polish code
2022-03-18 16:18:31 +08:00
ver217
642846d6f9
update sharded optim and fix zero init ctx ( #457 )
2022-03-18 15:44:47 +08:00
Jiarui Fang
e2e9f82588
Revert "[zero] update sharded optim and fix zero init ctx" ( #456 )
* Revert "polish code"
This reverts commit 8cf7ff08cf.
* Revert "rename variables"
This reverts commit e99af94ab8.
* Revert "remove surplus imports"
This reverts commit 46add4a5c5.
* Revert "update sharded optim and fix zero init ctx"
This reverts commit 57567ee768.
2022-03-18 15:22:43 +08:00
ver217
e99af94ab8
rename variables
2022-03-18 14:25:25 +08:00
ver217
57567ee768
update sharded optim and fix zero init ctx
2022-03-18 14:25:25 +08:00
Jiarui Fang
0fcfb1e00d
[test] make zero engine test really work ( #447 )
2022-03-17 17:24:25 +08:00
Jiarui Fang
237d08e7ee
[zero] hybrid cpu adam ( #445 )
2022-03-17 15:05:41 +08:00
Frank Lee
b72b8445c6
optimized context test time consumption ( #446 )
2022-03-17 14:40:52 +08:00
Jiarui Fang
496cbb0760
[hotfix] fix initialize bug with zero ( #442 )
2022-03-17 13:16:22 +08:00
Jiarui Fang
640a6cd304
[refactory] refactory the initialize method for new zero design ( #431 )
2022-03-16 19:29:37 +08:00
Frank Lee
bffd85bf34
added testing module ( #435 )
2022-03-16 17:20:05 +08:00
HELSON
dbdc9a7783
added Multiply Jitter and capacity factor eval for MOE ( #434 )
2022-03-16 16:47:44 +08:00
Frank Lee
b03b3ae99c
fixed mem monitor device ( #433 )
2022-03-16 15:25:02 +08:00
Frank Lee
14a7094243
fixed fp16 optimizer none grad bug ( #432 )
2022-03-16 14:35:46 +08:00
ver217
fce9432f08
sync before creating empty grad
2022-03-16 14:24:09 +08:00
ver217
ea6905a898
free param.grad
2022-03-16 14:24:09 +08:00
ver217
9506a8beb2
use double buffer to handle grad
2022-03-16 14:24:09 +08:00
Jiarui Fang
54229cd33e
[log] better logging display with rich ( #426 )
* better logger using rich
* remove deepspeed in zero requirements
2022-03-16 09:51:15 +08:00
HELSON
3f70a2b12f
removed noisy function during evaluation of MoE router ( #419 )
2022-03-15 12:06:09 +08:00
Jiarui Fang
adebb3e041
[zero] cuda margin space for OS ( #418 )
2022-03-15 12:02:19 +08:00
Jiarui Fang
56bb412e72
[polish] use GLOBAL_MODEL_DATA_TRACER ( #417 )
2022-03-15 11:29:46 +08:00
Jiarui Fang
23ba3fc450
[zero] refactory ShardedOptimV2 init method ( #416 )
2022-03-15 10:45:55 +08:00
Frank Lee
e79ea44247
[fp16] refactored fp16 optimizer ( #392 )
2022-03-15 10:05:38 +08:00
Jiarui Fang
21dc54e019
[zero] memtracer to record cuda memory usage of model data and overall system ( #395 )
2022-03-14 22:05:30 +08:00
Jiarui Fang
370f567e7d
[zero] new interface for ShardedOptimv2 ( #406 )
2022-03-14 20:48:41 +08:00
LuGY
a9c27be42e
Added tensor detector ( #393 )
* Added tensor detector
* Added the - states
* Allowed change include_cpu when detect()
2022-03-14 18:01:46 +08:00
1SAA
907ac4a2dc
fixed error when no collective communication in CommProfiler
2022-03-14 17:21:00 +08:00
Frank Lee
2fe68b359a
Merge pull request #403 from ver217/feature/shard-strategy
[zero] Add bucket tensor shard strategy
2022-03-14 16:29:28 +08:00
HELSON
dfd0363f68
polished output format for communication profiler and pcie profiler ( #404 )
fixed typing error
2022-03-14 16:07:45 +08:00
ver217
63469c0f91
polish code
2022-03-14 15:48:55 +08:00
ver217
88804aee49
add bucket tensor shard strategy
2022-03-14 14:48:32 +08:00
HELSON
7c079d9c33
[hotfix] fixed bugs in ShardStrategy and PcieProfiler ( #394 )
2022-03-11 18:12:46 +08:00
Frank Lee
1e4bf85cdb
fixed bug in activation checkpointing test ( #387 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
3af13a2c3e
[zero] polish ShardedOptimV2 unittest ( #385 )
* place params on cpu after zero init context
* polish code
* bucketized cpu gpu tensor transfer
* find a bug in sharded optim unittest
* add offload unittest for ShardedOptimV2.
* polish code and make it more robust
2022-03-11 15:50:28 +08:00
Jiang Zhuo
5a4a3b77d9
fix format ( #376 )
2022-03-11 15:50:28 +08:00
LuGY
de46450461
Added activation offload ( #331 )
* Added activation offload
* Fixed the import bug, used the pytest
2022-03-11 15:50:28 +08:00
Jiarui Fang
272ebfb57d
[bug] shard param during initializing the ShardedModelV2 ( #381 )
2022-03-11 15:50:28 +08:00
HELSON
8c18eb0998
[profiler] Fixed bugs in CommProfiler and PcieProfiler ( #377 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
b5f43acee3
[zero] find miss code ( #378 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
6b6002962a
[zero] zero init context collect numel of model ( #375 )
2022-03-11 15:50:28 +08:00
HELSON
1ed7c24c02
Added PCIE profiler to dectect data transmission ( #373 )
2022-03-11 15:50:28 +08:00
jiaruifang
d9217e1960
Revert "[zero] bucketized tensor cpu gpu copy ( #368 )"
This reverts commit bef05489b6.
2022-03-11 15:50:28 +08:00
RichardoLuo
8539898ec6
flake8 style change ( #363 )
2022-03-11 15:50:28 +08:00
Kai Wang (Victor Kai)
53bb3bcc0a
fix format ( #362 )
2022-03-11 15:50:28 +08:00
ziyu huang
a77d73f22b
fix format parallel_context.py ( #359 )
Co-authored-by: huangziyu <202476410arsmart@gmail.com>
2022-03-11 15:50:28 +08:00
Zangwei
c695369af0
fix format constants.py ( #358 )
2022-03-11 15:50:28 +08:00
Yuer867
4a0f8c2c50
fix format parallel_2p5d ( #357 )
2022-03-11 15:50:28 +08:00
Liang Bowen
7eb87f516d
flake8 style ( #352 )
2022-03-11 15:50:28 +08:00
Xu Kai
54ee8d1254
Fix/format colossalai/engine/paramhooks/( #350 )
2022-03-11 15:50:28 +08:00
Maruyama_Aya
e83970e3dc
fix format ColossalAI\colossalai\context\process_group_initializer
2022-03-11 15:50:28 +08:00
yuxuan-lou
3b88eb2259
Flake8 code restyle
2022-03-11 15:50:28 +08:00
xuqifan897
148207048e
Qifan formated file ColossalAI\colossalai\nn\layer\parallel_1d\layers.py ( #342 )
2022-03-11 15:50:28 +08:00
Cautiousss
3a51d909af
fix format ( #332 )
Co-authored-by: 何晓昕 <cautious@r-205-106-25-172.comp.nus.edu.sg>
2022-03-11 15:50:28 +08:00