Jiarui Fang
c92f84fcdb
[tensor] distributed checkpointing for parameters ( #1240 )
2022-07-12 15:51:06 +08:00
Jiarui Fang
9bcd2fd4af
[tensor] a shorter shard and replicate spec ( #1245 )
2022-07-11 15:51:48 +08:00
Jiarui Fang
20da6e48c8
[checkpoint] save sharded optimizer states ( #1237 )
2022-07-08 16:33:13 +08:00
Jiarui Fang
3b500984b1
[tensor] fix some unittests ( #1234 )
2022-07-08 14:18:30 +08:00
ver217
a45ddf2d5f
[hotfix] fix sharded optim step and clip_grad_norm ( #1226 )
2022-07-08 13:34:48 +08:00
Yi Zhao
04537bf83e
[checkpoint]support generalized scheduler ( #1222 )
2022-07-07 18:16:38 +08:00
Jiarui Fang
52736205d9
[checkpoint] make unittest faster ( #1217 )
2022-07-06 17:39:46 +08:00
Jiarui Fang
f38006ea83
[checkpoint] checkpoint for ColoTensor Model ( #1196 )
2022-07-06 17:22:03 +08:00
Jiarui Fang
ae7d3f4927
[refactor] move process group from _DistSpec to ColoTensor. ( #1203 )
2022-07-06 16:15:16 +08:00
YuliangLiu0306
63d2a93878
[context]support arbitrary module materialization. ( #1193 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [context]support arbitrary module materialization.
* [test]add numerical check for lazy init context.
2022-07-04 10:12:02 +08:00
YuliangLiu0306
2053e138a2
[context]use meta tensor to init model lazily. ( #1187 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [context]use meta tensor to init model lazily.
* polish
* make module with device kwargs bypass the normal init.
* change unit test to adapt updated context.
2022-06-29 21:02:30 +08:00
YuliangLiu0306
e27645376d
[hotfix]different overflow status lead to communication stuck. ( #1175 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [hotfix]fix some bugs caused by refactored schedule.
* [hotfix]different overflow status lead to communication stuck.
2022-06-27 09:53:57 +08:00
Jiarui Fang
4b9bba8116
[ColoTensor] rename APIs and add output_replicate to ComputeSpec ( #1168 )
2022-06-24 13:08:54 +08:00
Frank Lee
f8eec98ff5
[tensor] fixed non-serializable colo parameter during model checkpointing ( #1153 )
2022-06-22 11:43:38 +08:00
Frank Lee
73ad05fc8c
[zero] added error message to handle on-the-fly import of torch Module class ( #1135 )
...
* [zero] added error message to handle on-the-fly import of torch Module class
* polish code
2022-06-20 11:24:27 +08:00
Frank Lee
2b2dc1c86b
[pipeline] refactor the pipeline module ( #1087 )
...
* [pipeline] refactor the pipeline module
* polish code
2022-06-10 11:27:38 +08:00
Frank Lee
bad5d4c0a1
[context] support lazy init of module ( #1088 )
...
* [context] support lazy init of module
* polish code
2022-06-10 10:09:48 +08:00
Frank Lee
bfdc5ccb7b
[context] maintain the context object in with statement ( #1073 )
2022-06-07 10:48:45 +08:00
Jiarui Fang
49832b2344
[refactor] add nn.parallel module ( #1068 )
2022-06-06 15:34:41 +08:00
Jiarui Fang
a00644079e
reorganize colotensor directory ( #1062 )
...
* reorganize colotensor directory
* polish code
2022-06-03 18:04:22 +08:00
Ziyue Jiang
df9dcbbff6
[Tensor] add hybrid device demo and fix bugs ( #1059 )
2022-06-03 12:09:49 +08:00
Ziyue Jiang
7c530b9de2
[Tensor] add Parameter inheritance for ColoParameter ( #1041 )
...
* add Parameter inheritance for ColoParameter
* remove tricks
* remove tricks
* polish
* polish
2022-05-30 17:23:44 +08:00
Ziyue Jiang
6c5996a56e
[Tensor] add module check and bert test ( #1031 )
...
* add Embedding
* Add bert test
* polish
* add check module test
* polish
* polish
* polish
* polish
2022-05-26 18:15:42 +08:00
Ziyue Jiang
32291dd73f
[Tensor] add module handler for linear ( #1021 )
...
* add module spec for linear
* polish
* polish
* polish
2022-05-26 11:50:44 +08:00
ver217
007ca0df92
fix colo init context ( #1026 )
2022-05-25 20:41:58 +08:00
ver217
ad536e308e
[tensor] refactor colo-tensor ( #992 )
...
* refactor colo-tensor and update linear op
* polish code
* polish code
* update ops and unit tests
* update unit tests
* polish code
* rename dist_spec module
* polish code
* polish code
* remove unneeded import
* fix pipelinable
2022-05-19 12:44:59 +08:00
Ziyue Jiang
d73c2b1d79
[Tensor] fix init context ( #931 )
...
* change torch.Parameter to ColoParameter
* fix post assignment for init context
* polish
* polish
2022-05-11 15:48:12 +08:00
Ziyue Jiang
dfc88b85ea
[Tensor] simplify named param ( #928 )
...
* simplify ColoModulize
* simplify ColoModulize
* polish
* polish
2022-05-11 10:54:19 +08:00
YuliangLiu0306
32a45cd7ef
[pipelinable]use pipelinable to support GPT model. ( #903 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [pipelinable]use pipelinable to support GPT model.
* fix a bug caused by ShardedModel
* polish
* fix front func list
2022-05-11 09:23:58 +08:00
Ziyue Jiang
c195d2814c
[Tensor] add from_pretrained support and bert pretrained test ( #921 )
...
* add from_pretrained support and test
* polish
* polish
* polish
* polish
2022-05-09 16:11:47 +08:00
Jiarui Fang
ab95ec9aea
[Tensor] init ColoParameter ( #914 )
2022-05-06 12:57:14 +08:00
Jiarui Fang
d16671da75
[Tensor] initialize the ColoOptimizer ( #898 )
...
* [Tensor] activation is an attr of ColoTensor
* [Tensor] add optimizer
* only detach parameters in context
* polish code
2022-04-28 15:23:40 +08:00
Jiarui Fang
676f191532
[Tensor] activation is an attr of ColoTensor ( #897 )
2022-04-28 14:43:22 +08:00
Jiarui Fang
26c49639d8
[Tensor] overriding parameters() for Module using ColoTensor ( #889 )
2022-04-27 15:28:59 +08:00
ver217
4df6471f5d
fix import error ( #880 )
2022-04-26 19:28:40 +08:00
Jiarui Fang
d01d3b8cb0
colo init context add device attr. ( #866 )
2022-04-25 14:24:26 +08:00
YuliangLiu0306
c6930d8ddf
[pipelinable]use ColoTensor to replace dummy tensor. ( #853 )
2022-04-24 18:31:22 +08:00
ver217
232142f402
[utils] refactor profiler ( #837 )
...
* add model data profiler
* add a subclass of torch.profiler.profile
* refactor folder structure
* remove redundant codes
* polish code
* use GeminiMemoryManager
* fix import path
* fix stm profiler ext
* polish comments
* remove useless file
2022-04-24 17:03:59 +08:00
Jiarui Fang
62f059251b
[Tensor] init a tp network training unittest ( #849 )
2022-04-24 16:43:44 +08:00
ver217
0dea140760
[hotfix] add destructor for stateful tensor ( #848 )
...
* add destructor for stateful tensor
* fix colo init context
2022-04-24 15:03:04 +08:00
YuliangLiu0306
35ea6e1023
[pipelinable]use pipelinable context to initialize non-pipeline model ( #816 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [pipeline]add module lazy init feature to support large model initialization.
* [pipeline]add to_layer_list and partition method to support arbitrary non-pp model
* refactor the module structure
* polish
* [pipelinable]add unit test for pipelinable
* polish
* polish
* Fix CodeFactor issues.
2022-04-24 13:03:12 +08:00
Jiarui Fang
8789850eea
Init Context supports lazy allocation of model memory ( #842 )
2022-04-22 18:03:35 +08:00
Jiarui Fang
eb1b89908c
[refactor] moving InsertPostInitMethodToModuleSubClasses to utils. ( #824 )
2022-04-21 16:03:18 +08:00
Jiarui Fang
227d1cd4b3
[gemini] APIs to set cpu memory capacity ( #809 )
2022-04-19 16:05:22 +08:00
Jiarui Fang
681addb512
[refactor] moving grad acc logic to engine ( #804 )
2022-04-19 14:03:21 +08:00
Jiarui Fang
4d9332b4c5
[refactor] moving memtracer to gemini ( #801 )
2022-04-19 10:13:08 +08:00
HELSON
84c6700b2a
[zero] refactor memstats_collector ( #746 )
2022-04-14 12:01:12 +08:00
HELSON
340e59f968
[utils] add synchronized cuda memory monitor ( #740 )
2022-04-13 10:50:54 +08:00
Jiarui Fang
53cb584808
[utils] correct cpu memory used and capacity in the context of multi-process ( #726 )
2022-04-12 14:57:54 +08:00
Frank Lee
2412429d54
[util] fixed activation checkpointing on torch 1.9 ( #719 )
2022-04-12 09:35:45 +08:00
Jiarui Fang
193dc8dacb
[refactor] refactor the memory utils ( #715 )
2022-04-11 16:47:57 +08:00
LuGY
140263a394
[hotfix]fixed bugs of assigning grad states to non-leaf nodes ( #711 )
...
* fixed bugs of assigning grad states to non-leaf nodes
* use detach()
2022-04-11 14:04:58 +08:00
ver217
ab8c6b4a0e
[zero] refactor memstats collector ( #706 )
...
* refactor memstats collector
* fix disposable
* polish code
2022-04-11 10:46:08 +08:00
ver217
3c9cd5bb5e
[zero] stateful tensor manager ( #687 )
...
* [WIP] stateful tensor manager
* add eviction strategy
* polish code
* polish code
* polish comment
* add unit test
* fix sampler bug
* polish code
* fix max sampling cnt resetting bug
* fix sampler bug
* polish code
* fix bug
* fix unit test
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
2022-04-08 17:51:34 +08:00
Jiarui Fang
59bf2dc590
[zero] initialize a stateful tensor manager ( #614 )
2022-04-06 16:18:49 +08:00
Jiarui Fang
0aab52301e
[hotfix] fix a bug in model data stats tracing ( #655 )
2022-04-03 21:48:06 +08:00
HELSON
e5d615aeee
[hotfix] fix bugs in testing ( #659 )
...
* remove hybrid adam in test_moe_zero_optim
* fix activation checkpointing and its unittest
2022-04-02 21:58:47 +08:00
LuGY
1e2557e801
[zero] fixed the activation offload ( #647 )
...
* fixed the activation offload
* polish
2022-04-02 16:21:32 +08:00
ver217
f5d3a9c2b0
polish checkpoint docstring ( #637 )
2022-04-02 13:34:33 +08:00
HELSON
055fbf5be6
[zero] adapt zero for unsharded parameters (Optimizer part) ( #601 )
2022-04-01 20:10:47 +08:00
アマデウス
acae68eb04
[model checkpoint] updated checkpoint save/load utils ( #592 )
2022-04-01 16:49:21 +08:00
ver217
369a288bf3
polish utils docstring ( #620 )
2022-04-01 16:36:47 +08:00
LuGY
02b187c14f
[zero] add sampling time for memstats collector ( #610 )
2022-04-01 14:03:00 +08:00
アマデウス
54e688b623
moved ensure_path_exists to utils.common ( #591 )
2022-04-01 09:46:33 +08:00
Jiarui Fang
e956d93ac2
[refactor] memory utils ( #577 )
2022-04-01 09:22:33 +08:00
HELSON
e6d50ec107
[zero] adapt zero for unsharded parameters ( #561 )
...
* support existing sharded and unsharded parameters in zero
* add unittest for moe-zero model init
* polish moe gradient handler
2022-03-31 18:34:11 +08:00
ver217
7c6c427db1
[zero] trace states of fp16/32 grad and fp32 param ( #571 )
2022-03-31 16:26:54 +08:00
Jiarui Fang
7675366fce
[polish] rename col_attr -> colo_attr ( #558 )
2022-03-31 12:25:45 +08:00
Liang Bowen
2c45efc398
html refactor ( #555 )
2022-03-31 11:36:56 +08:00
Jiarui Fang
d1211148a7
[utils] update colo tensor moving APIs ( #553 )
2022-03-30 23:13:24 +08:00
Jiarui Fang
107b99ddb1
[zero] dump memory stats for sharded model ( #548 )
2022-03-30 09:38:44 +08:00
Liang Bowen
ec5086c49c
Refactored docstrings to Google style
2022-03-29 17:17:47 +08:00
Jiarui Fang
53b1b6e340
[zero] non model data tracing ( #545 )
2022-03-29 15:45:48 +08:00
Jie Zhu
73d36618a6
[profiler] add MemProfiler ( #356 )
...
* add memory trainer hook
* fix bug
* add memory trainer hook
* fix import bug
* fix import bug
* add trainer hook
* fix #370 git log bug
* modify `to_tensorboard` function to support better output
* remove useless output
* change the name of `MemProfiler`
* complete memory profiler
* replace error with warning
* finish trainer hook
* modify interface of MemProfiler
* modify `__init__.py` in profiler
* remove unnecessary pass statement
* add usage to doc string
* add usage to trainer hook
* new location to store temp data file
2022-03-29 12:48:34 +08:00
Jiarui Fang
c11ff81b15
[zero] get memory usage of sharded optim v2. ( #542 )
2022-03-29 09:08:18 +08:00
Jiarui Fang
705f56107c
[zero] refactor model data tracing ( #537 )
2022-03-28 16:38:18 +08:00
Jiarui Fang
05e33b2578
[zero] fix grad offload ( #528 )
...
* [zero] fix grad offload
* polish code
2022-03-25 18:23:25 +08:00
Jiarui Fang
8d8c5407c0
[zero] refactor model data tracing ( #522 )
2022-03-25 18:03:32 +08:00
Jiarui Fang
920c5889a7
[zero] add colo move inline ( #521 )
2022-03-25 14:02:55 +08:00
Jiarui Fang
0bebda6ea5
[zero] fix init device bug in zero init context unittest ( #516 )
2022-03-25 12:24:18 +08:00
Jiarui Fang
7ef3507ace
[zero] show model data cuda memory usage after zero context init. ( #515 )
2022-03-25 11:23:35 +08:00
Jiarui Fang
9330be0f3c
[memory] set cuda mem frac ( #506 )
2022-03-24 16:57:13 +08:00
Jiarui Fang
0035b7be07
[memory] add model data tensor moving api ( #503 )
2022-03-24 14:29:41 +08:00
Jiarui Fang
a445e118cf
[polish] polish singleton and global context ( #500 )
2022-03-23 18:03:39 +08:00
HELSON
f24b5ed201
[MOE] remove old MoE legacy ( #493 )
2022-03-22 17:37:16 +08:00
Jiarui Fang
b334822163
[zero] polish sharded param name ( #484 )
...
* [zero] polish sharded param name
* polish code
* polish
* polish code
* polish
* polish
* polish
2022-03-22 14:36:16 +08:00
Jiarui Fang
65c0f380c2
[format] polish name format for MOE ( #481 )
2022-03-21 23:19:47 +08:00
HELSON
7544347145
[MOE] add unittest for MOE experts layout, gradient handler and kernel ( #469 )
2022-03-21 13:35:04 +08:00
HELSON
aff9d354f7
[MOE] polish moe_env ( #467 )
2022-03-19 15:36:25 +08:00
HELSON
84fd7c1d4d
add moe context, moe utilities and refactor gradient handler ( #455 )
2022-03-18 16:38:32 +08:00
Frank Lee
b72b8445c6
optimized context test time consumption ( #446 )
2022-03-17 14:40:52 +08:00
Jiarui Fang
496cbb0760
[hotfix] fix initialize bug with zero ( #442 )
2022-03-17 13:16:22 +08:00
Frank Lee
b03b3ae99c
fixed mem monitor device ( #433 )
...
fixed mem monitor device
2022-03-16 15:25:02 +08:00
Jiarui Fang
adebb3e041
[zero] cuda margin space for OS ( #418 )
2022-03-15 12:02:19 +08:00
Jiarui Fang
56bb412e72
[polish] use GLOBAL_MODEL_DATA_TRACER ( #417 )
2022-03-15 11:29:46 +08:00
Jiarui Fang
21dc54e019
[zero] memtracer to record cuda memory usage of model data and overall system ( #395 )
2022-03-14 22:05:30 +08:00
LuGY
a9c27be42e
Added tensor detector ( #393 )
...
* Added tensor detector
* Added the - states
* Allowed changing include_cpu when calling detect()
2022-03-14 18:01:46 +08:00
1SAA
907ac4a2dc
fixed error when no collective communication in CommProfiler
2022-03-14 17:21:00 +08:00
HELSON
dfd0363f68
polished output format for communication profiler and pcie profiler ( #404 )
...
fixed typing error
2022-03-14 16:07:45 +08:00
HELSON
7c079d9c33
[hotfix] fixed bugs in ShardStrategy and PcieProfiler ( #394 )
2022-03-11 18:12:46 +08:00