Commit Graph

247 Commits (177c374401c7c507c3b4b5ec31cf7830c9b75c50)

Author SHA1 Message Date
Jiarui Fang 845856ea29
[Graph] building computing graph with ColoTensor, Linear only (#917) 2022-05-07 17:10:37 +08:00
Ziyue Jiang 75d221918a
[Tensor] add 1d vocab loss (#918)
* add 1d vocab loss

* polish
2022-05-07 15:49:14 +08:00
Ziyue Jiang dfaff4e243
[Tensor] fix test_model (#916)
* polish test_model

* polish
2022-05-06 18:06:22 +08:00
Jiarui Fang ed6426c300
[Tensor] polish model test (#915) 2022-05-06 17:07:56 +08:00
Ziyue Jiang 0fab86b12a
[Tensor] add a basic bert. (#911)
* add base bert test

* Add bert test

* polish

* remove test_bert

* polish
2022-05-06 15:03:43 +08:00
Jiarui Fang ab95ec9aea
[Tensor] init ColoParameter (#914) 2022-05-06 12:57:14 +08:00
Ziyue Jiang 193d629311
update pytest.mark.parametrize in tensor tests (#913) 2022-05-06 11:16:40 +08:00
Ziyue Jiang f593a5637e
[Tensor] add embedding tp1d row (#904) 2022-04-29 14:10:05 +08:00
Ziyue Jiang 2c0d19d755
[Tensor] add ColoTensor TP1Dcol Embedding (#899) 2022-04-28 17:45:06 +08:00
Jiarui Fang d16671da75
[Tensor] initialize the ColoOptimizer (#898)
* [Tensor] activation is an attr of ColoTensor

* [Tensor] add optimizer

* only detach parameters in context

* polish code
2022-04-28 15:23:40 +08:00
Jiarui Fang e76f76c08b
[Tensor] test parameters() as member function (#896) 2022-04-28 10:57:14 +08:00
Ziyue Jiang cb182da7c5
[tensor] refine linear and add gather for layernorm (#893)
* refine linear and add function to ColoTensor

* add gather for layernorm

* polish

* polish
2022-04-28 10:55:40 +08:00
Jiarui Fang 26c49639d8
[Tensor] overriding parameters() for Module using ColoTensor (#889) 2022-04-27 15:28:59 +08:00
Ziyue Jiang 1d0aba4153
[tensor] add ColoTensor 1Dcol (#888) 2022-04-27 14:13:55 +08:00
Jiarui Fang a0e5971692
[Tensor] test model check results for a simple net (#887) 2022-04-27 12:00:18 +08:00
Jiarui Fang 72cdc06875
[Tensor] make ColoTensor more robust for getattr (#886)
* [Tensor] make ColoTensor more robust for getattr

* polish

* polish
2022-04-27 10:57:49 +08:00
Ziyue Jiang 9bc5a77c31
[tensor] wrap function in the torch_tensor to ColoTensor (#881) 2022-04-26 20:13:56 +08:00
Jiarui Fang 7f76517a85
[Tensor] make a simple net works with 1D row TP (#879) 2022-04-26 18:11:47 +08:00
ver217 c4d903e64a
[gemini] accelerate adjust_layout() (#878)
* add lru cache

* polish code

* update unit test

* fix sharded optim
2022-04-26 18:08:31 +08:00
Jiarui Fang 909211453b
[Tensor] Add some attributes to ColoTensor (#877)
* [Tensor] add some functions to ColoTensor

* torch.allclose

* rm torch.add
2022-04-26 15:10:47 +08:00
Jiarui Fang e43f83aa5c
[Tensor] get named parameters for model using ColoTensors (#874) 2022-04-26 14:08:01 +08:00
Jiarui Fang 96211c2cc8
[tensor] customized op returns ColoTensor (#875)
* [tensor] customized op returns ColoTensor

* polish

* polish code
2022-04-26 13:23:59 +08:00
Ziyue Jiang 26d4ab8b03
[Tensor] Add function to spec and update linear 1Drow and unit tests (#869) 2022-04-26 10:15:26 +08:00
Jiarui Fang 1190b2c4a4
[tensor] add cross_entrophy_loss (#868) 2022-04-25 16:01:52 +08:00
HELSON 3107817172
[gemini] add stateful tensor container (#867) 2022-04-25 14:58:16 +08:00
Jiarui Fang d01d3b8cb0
colo init context add device attr. (#866) 2022-04-25 14:24:26 +08:00
Jiarui Fang 126ba573a8
[Tensor] add layer norm Op (#852) 2022-04-25 11:49:20 +08:00
Frank Lee 1258af71cc
[ci] cache cuda extension (#860) 2022-04-25 10:03:47 +08:00
Ziyue Jiang bcc8655021
[Tensor] Add 1Drow weight reshard by spec (#854) 2022-04-24 18:30:20 +08:00
Jiarui Fang 62f059251b
[Tensor] init a tp network training unittest (#849) 2022-04-24 16:43:44 +08:00
Ziyue Jiang 2a0a427e04
[tensor] add assert for colo_tensor 1Drow (#846) 2022-04-24 14:12:45 +08:00
Ziyue Jiang 05023ecfee
[Tensor] TP Linear 1D row (#843) 2022-04-24 13:43:12 +08:00
HELSON e5ea3fdeef
[gemini] add GeminiMemoryManager (#832)
* refactor StatefulTensor, tensor utilities

* add unit test for GeminiMemoryManager
2022-04-24 13:08:48 +08:00
YuliangLiu0306 35ea6e1023
[pipelinable] use pipelinable context to initialize non-pipeline model (#816)
* [CLI] add CLI launcher

* Revert "[CLI] add CLI launcher"

This reverts commit df7e6506d4.

* [pipeline] add module lazy init feature to support large model initialization.

* [pipeline] add to_layer_list and partition method to support arbitrary non-pp model

* refactor the module structure

* polish

* [pipelinable] add unit test for pipelinable

* polish

* polish

* Fix CodeFactor issues.
2022-04-24 13:03:12 +08:00
Jiarui Fang ea0a2ed25f
[hotfix] fix the numel() bug in ColoTensor (#845) 2022-04-24 12:32:10 +08:00
Jiarui Fang 8789850eea
Init Context supports lazy model memory allocation (#842) 2022-04-22 18:03:35 +08:00
Frank Lee 943982d29a
[unittest] refactored unit tests for change in dependency (#838) 2022-04-22 15:39:07 +08:00
Frank Lee 01e9f834f5
[dependency] removed torchvision (#833)
* [dependency] removed torchvision

* fixed transforms
2022-04-22 15:24:35 +08:00
Jiarui Fang cb5a4778e1
Revert "[WIP] Applying ColoTensor on TP-1D-row Linear. (#831)" (#835)
This reverts commit ac88de6dfc.
2022-04-22 14:45:57 +08:00
Jiarui Fang ac88de6dfc
[WIP] Applying ColoTensor on TP-1D-row Linear. (#831)
* revert zero tensors back

* [tensor] init row 1d linear
2022-04-22 14:03:26 +08:00
Jiarui Fang 294a6060d0
[tensor] ZeRO uses ColoTensor as the base class. (#828)
* [refactor] moving InsertPostInitMethodToModuleSubClasses to utils.

* [tensor] ZeRO uses ColoTensor as the base class.

* polish
2022-04-22 12:00:48 +08:00
Ziyue Jiang 8e6fdb4f29
[tensor]fix test_linear (#826) 2022-04-21 17:18:56 +08:00
Ziyue Jiang 1a9e2c2dff
[tensor] fix kwargs in colo_tensor torch_function (#825) 2022-04-21 16:47:35 +08:00
Jiarui Fang 2ecc3d7a55
[tensor] lazy init (#823) 2022-04-21 15:40:23 +08:00
Jiarui Fang 660d2d1f1b
[Tensor] apply ColoTensor on Torch functions (#821)
* Revert "[zero] add ZeroTensorShardStrategy (#793)"

This reverts commit 88759e289e.

* [gemini] set cpu memory capacity

* [log] local throughput collecting

* polish

* polish

* polish

* polish code

* polish

* polish code

* add a new tensor structure and override linear for it

* polish

* polish

* polish

* polish

* polish

* polish

* polish

* polish

* polish

* polish

* polish

* [tensor] rename and reorganize directory structure.

* rm useless dir

* polish

* polish

* [tensor] handle functions that are not wrapped
2022-04-21 14:21:10 +08:00
Jiarui Fang 0ce8924ceb
[tensor] reorganize files (#820) 2022-04-21 14:15:48 +08:00
Jiarui Fang ab962b9735
[gemini] a new tensor structure (#818)
* Revert "[zero] add ZeroTensorShardStrategy (#793)"

This reverts commit 88759e289e.

* [gemini] set cpu memory capacity

* [log] local throughput collecting

* polish

* polish

* polish

* polish code

* polish

* polish code

* add a new tensor structure and override linear for it

* polish

* polish

* polish

* polish

* polish

* polish

* polish

* polish

* polish

* polish

* polish
2022-04-21 11:42:37 +08:00
Jiarui Fang e761ad2cd7
Revert "[zero] add ZeroTensorShardStrategy (#793)" (#806) 2022-04-19 14:40:02 +08:00
HELSON 88759e289e
[zero] add ZeroTensorShardStrategy (#793) 2022-04-19 14:32:45 +08:00
Jiarui Fang 681addb512
[refactor] moving grad acc logic to engine (#804) 2022-04-19 14:03:21 +08:00
Jiarui Fang 4d9332b4c5
[refactor] moving memtracer to gemini (#801) 2022-04-19 10:13:08 +08:00
HELSON 4c4388c46e
[hotfix] fix memory leak in zero (#781) 2022-04-18 13:57:03 +08:00
Frank Lee 5a1a095b92
[test] refactored with the new rerun decorator (#763)
* [test] refactored with the new rerun decorator

* polish test case
2022-04-15 00:33:04 +08:00
Jiarui Fang 10ef8afdd2
[gemini] init gemini individual directory (#754) 2022-04-14 16:40:26 +08:00
ver217 dcca614eee
[hotfix] fix test_stateful_tensor_mgr (#762) 2022-04-14 15:50:09 +08:00
ver217 a93a7d7364
[hotfix] fix reuse_fp16_shard of sharded model (#756)
* fix reuse_fp16_shard

* disable test stm

* polish code
2022-04-14 14:56:46 +08:00
HELSON 84c6700b2a
[zero] refactor memstats_collector (#746) 2022-04-14 12:01:12 +08:00
ver217 e396bb71f2
[zero] add tensor placement policies (#743)
* add tensor placement policies

* polish comments

* polish comments

* update moe unit tests
2022-04-13 15:00:48 +08:00
HELSON 22c4b88d56
[zero] refactor ShardedParamV2 for convenience (#742) 2022-04-13 14:54:26 +08:00
Frank Lee f4f42d4c3c
[bug] fixed DDP compatibility with torch 1.8 (#739) 2022-04-13 00:08:46 +08:00
Jiarui Fang 53cb584808
[utils] correct cpu memory used and capacity in the multi-process context (#726) 2022-04-12 14:57:54 +08:00
HELSON b9b469ea50
[moe] add checkpoint for moe zero test (#729) 2022-04-12 12:11:54 +08:00
FrankLeeeee e88a498c9c [test] removed trivial outdated test 2022-04-12 11:08:15 +08:00
FrankLeeeee 62b4ce7326 [test] added missing decorators to model checkpointing tests 2022-04-12 11:08:15 +08:00
Jiarui Fang 4d90a7b513
[refactor] zero directory (#724) 2022-04-11 23:13:02 +08:00
Frank Lee 20ab1f5520
[bug] fixed broken test_found_inf (#725) 2022-04-11 22:00:27 +08:00
Jiarui Fang 193dc8dacb
[refactor] refactor the memory utils (#715) 2022-04-11 16:47:57 +08:00
HELSON dbd96fe90a
[zero] check whether gradients have inf and nan in gpu (#712) 2022-04-11 15:40:13 +08:00
HELSON a9b8300d54
[zero] improve adaptability for non-sharded parameters (#708)
* adapt post-grad hooks for non-sharded parameters

* adapt optimizer for non-sharded parameters

* offload gradients for non-replicated parameters
2022-04-11 13:38:51 +08:00
ver217 ab8c6b4a0e
[zero] refactor memstats collector (#706)
* refactor memstats collector

* fix disposable

* polish code
2022-04-11 10:46:08 +08:00
HELSON ee112fe1da
[zero] adapt zero hooks for unsharded module (#699) 2022-04-08 20:23:26 +08:00
ver217 3c9cd5bb5e
[zero] stateful tensor manager (#687)
* [WIP] stateful tensor manager

* add eviction strategy

* polish code

* polish code

* polish comment

* add unit test

* fix sampler bug

* polish code

* fix max sampling cnt resetting bug

* fix sampler bug

* polish code

* fix bug

* fix unit test

Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
2022-04-08 17:51:34 +08:00
HELSON d7ecaf362b
[zero] fix init bugs in zero context (#686)
* adapt model weight initialization for methods in PyTorch nn.init
2022-04-07 17:38:45 +08:00
Jiarui Fang 0aab52301e
[hotfix] fix a bug in model data stats tracing (#655) 2022-04-03 21:48:06 +08:00
YuliangLiu0306 ade05a5d83
[refactor] pipeline, put runtime schedule into engine. (#627) 2022-04-03 20:46:45 +08:00
HELSON e5d615aeee
[hotfix] fix bugs in testing (#659)
* remove hybrid adam in test_moe_zero_optim

* fix activation checkpointing and its unit test
2022-04-02 21:58:47 +08:00
HELSON b31daed4cf
fix bugs in CPU adam (#633)
* add cpu adam counter for all cpu adam

* fixed updating error in adam kernel
2022-04-02 17:04:05 +08:00
HELSON 055fbf5be6
[zero] adapt zero for unsharded parameters (Optimizer part) (#601) 2022-04-01 20:10:47 +08:00
アマデウス 354b7954d1
[model checkpoint] added unit tests for checkpoint save/load (#599) 2022-04-01 16:53:32 +08:00
FredHuang99 93f14d2a33
[zero] test zero tensor utils (#609) 2022-04-01 15:16:59 +08:00
Jiarui Fang e956d93ac2
[refactor] memory utils (#577) 2022-04-01 09:22:33 +08:00
HELSON e6d50ec107
[zero] adapt zero for unsharded parameters (#561)
* support existing sharded and unsharded parameters in zero

* add unit test for moe-zero model init

* polish moe gradient handler
2022-03-31 18:34:11 +08:00
ver217 7c6c427db1
[zero] trace states of fp16/32 grad and fp32 param (#571) 2022-03-31 16:26:54 +08:00
Jiarui Fang 7675366fce
[polish] rename col_attr -> colo_attr (#558) 2022-03-31 12:25:45 +08:00
ver217 014bac0c49
[zero] hijack p.grad in sharded model (#554)
* hijack p.grad in sharded model

* polish comments

* polish comments
2022-03-30 18:14:50 +08:00
Jiarui Fang f552b11294
[zero] label state for param fp16 and grad (#551) 2022-03-30 15:57:46 +08:00
Jiarui Fang 214da761d4
[zero] add stateful tensor (#549) 2022-03-30 13:51:37 +08:00
HELSON 8c90d4df54
[zero] add zero context manager to change config during initialization (#546) 2022-03-29 17:57:59 +08:00
Liang Bowen ec5086c49c Refactored docstrings to Google style 2022-03-29 17:17:47 +08:00
Jiarui Fang 53b1b6e340
[zero] non model data tracing (#545) 2022-03-29 15:45:48 +08:00
ver217 1f90a3b129
[zero] polish ZeroInitContext (#540) 2022-03-29 09:09:04 +08:00
Jiarui Fang c11ff81b15
[zero] get memory usage of sharded optim v2. (#542) 2022-03-29 09:08:18 +08:00
HELSON a30e2b4c24
[zero] adapt for non-leaf modules in zero (#535)
only process the module's own parameters in the ZeRO context

add zero hooks for all modules that contain parameters

gather only the parameters belonging to the module itself
2022-03-28 17:42:18 +08:00
Jiarui Fang 705f56107c
[zero] refactor model data tracing (#537) 2022-03-28 16:38:18 +08:00
Jiarui Fang a590ed0ba3
[zero] improve the accuracy of get_memory_usage of sharded param (#538) 2022-03-28 16:19:19 +08:00
Jiarui Fang 37cb70feec
[zero] get memory usage for sharded param (#536) 2022-03-28 15:01:21 +08:00
LuGY 105c5301c3
[zero] added hybrid adam, removed loss scale in adam (#527)
* [zero] added hybrid adam, removed loss scale of adam

* remove useless code
2022-03-25 18:03:54 +08:00
Jiarui Fang 8d8c5407c0
[zero] refactor model data tracing (#522) 2022-03-25 18:03:32 +08:00
Frank Lee 3601b2bad0
[test] fixed rerun_on_exception and adapted test cases (#487) 2022-03-25 17:25:12 +08:00
Jiarui Fang 4d322b79da
[refactor] remove old zero code (#517) 2022-03-25 14:54:39 +08:00