Commit Graph

38 Commits (7080a8edb08400d97ba4c31458f532e4ceeacf4b)

Author SHA1 Message Date
CsRic ea961d8fd1 [NFC] polish colossalai/zero/sharded_param/__init__.py code style (#1717)
Co-authored-by: ric <mkkt_bkkt@mail.ustc.edu.cn>
2022-10-19 12:20:51 +08:00
HELSON e5ea3fdeef
[gemini] add GeminiMemoryManager (#832)
* refactor StatefulTensor, tensor utilities

* add unit test for GeminiMemoryManager
2022-04-24 13:08:48 +08:00
Jiarui Fang 595bedf767
revert zero tensors back (#829) 2022-04-22 12:12:35 +08:00
Jiarui Fang 294a6060d0
[tensor] ZeRO use ColoTensor as the base class. (#828)
* [refactor] moving InsertPostInitMethodToModuleSubClasses to utils.

* [tensor] ZeRO use ColoTensor as the base class.

* polish
2022-04-22 12:00:48 +08:00
HELSON 4c4388c46e
[hotfix] fix memory leak in zero (#781) 2022-04-18 13:57:03 +08:00
ver217 a93a7d7364
[hotfix] fix reuse_fp16_shard of sharded model (#756)
* fix reuse_fp16_shard

* disable test stm

* polish code
2022-04-14 14:56:46 +08:00
HELSON 84c6700b2a
[zero] refactor memstats_collector (#746) 2022-04-14 12:01:12 +08:00
HELSON 22c4b88d56
[zero] refactor ShardedParamV2 for convenience (#742) 2022-04-13 14:54:26 +08:00
Jiarui Fang 4d90a7b513
[refactor] zero directory (#724) 2022-04-11 23:13:02 +08:00
HELSON ee112fe1da
[zero] adapt zero hooks for unsharded module (#699) 2022-04-08 20:23:26 +08:00
ver217 3c9cd5bb5e
[zero] stateful tensor manager (#687)
* [WIP] stateful tensor manager

* add eviction strategy

* polish code

* polish code

* polish comment

* add unit test

* fix sampler bug

* polish code

* fix max sampling count resetting bug

* fix sampler bug

* polish code

* fix bug

* fix unit test

Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
2022-04-08 17:51:34 +08:00
Jiarui Fang 59bf2dc590
[zero] initialize a stateful tensor manager (#614) 2022-04-06 16:18:49 +08:00
Jiarui Fang e956d93ac2
[refactor] memory utils (#577) 2022-04-01 09:22:33 +08:00
ver217 7c6c427db1
[zero] trace states of fp16/32 grad and fp32 param (#571) 2022-03-31 16:26:54 +08:00
ver217 014bac0c49
[zero] hijack p.grad in sharded model (#554)
* hijack p.grad in sharded model

* polish comments

* polish comments
2022-03-30 18:14:50 +08:00
Jiarui Fang f552b11294
[zero] label state for param fp16 and grad (#551) 2022-03-30 15:57:46 +08:00
Jiarui Fang 214da761d4
[zero] add stateful tensor (#549) 2022-03-30 13:51:37 +08:00
Jiarui Fang c11ff81b15
[zero] get memory usage of sharded optim v2. (#542) 2022-03-29 09:08:18 +08:00
Jiarui Fang a590ed0ba3
[zero] improve the accuracy of get_memory_usage of sharded param (#538) 2022-03-28 16:19:19 +08:00
Jiarui Fang 37cb70feec
[zero] get memory usage for sharded param (#536) 2022-03-28 15:01:21 +08:00
Jiarui Fang 8d8c5407c0
[zero] refactor model data tracing (#522) 2022-03-25 18:03:32 +08:00
ver217 9ec1ce6ab1
[zero] sharded model supports the reuse of fp16 shard (#495)
* sharded model supports reuse of the fp16 shard

* rename variable

* polish code

* polish code

* polish code
2022-03-23 14:59:59 +08:00
ver217 62b0a8d644
[zero] sharded optim supports hybrid cpu adam (#486)
* sharded optim supports hybrid cpu adam

* update unit test

* polish docstring
2022-03-22 14:56:59 +08:00
Jiarui Fang b334822163
[zero] polish sharded param name (#484)
* [zero] polish sharded param name

* polish code

* polish

* polish code

* polish

* polish

* polish
2022-03-22 14:36:16 +08:00
ver217 9506a8beb2 use double buffer to handle grad 2022-03-16 14:24:09 +08:00
Jiarui Fang b5f43acee3 [zero] find missing code (#378) 2022-03-11 15:50:28 +08:00
jiaruifang d9217e1960 Revert "[zero] bucketized tensor cpu gpu copy (#368)"
This reverts commit bef05489b6.
2022-03-11 15:50:28 +08:00
Jiarui Fang 00670c870e [zero] bucketized tensor cpu gpu copy (#368) 2022-03-11 15:50:28 +08:00
Jiarui Fang 44e4891f57 [zero] able to place params on cpu after zero init context (#365)
* place params on cpu after zero init context

* polish code
2022-03-11 15:50:28 +08:00
Jiarui Fang ea2872073f [zero] global model data memory tracer (#360) 2022-03-11 15:50:28 +08:00
ver217 1388671699 [zero] Update sharded model v2 using sharded param v2 (#323) 2022-03-11 15:50:28 +08:00
Jiarui Fang 11bddb6e55 [zero] update zero context init with the updated test utils (#327) 2022-03-11 15:50:28 +08:00
Jiarui Fang de0468c7a8 [zero] zero init context (#321)
* add zero init context

* add more flags for zero init context
fix bug of repeatedly converting param to ShardedParamV2

* polish code
2022-03-11 15:50:28 +08:00
Jiarui Fang 90d3aef62c [zero] yet an improved sharded param (#311) 2022-03-11 15:50:28 +08:00
Jiarui Fang c9e7d9582d [zero] polish shard strategy (#310)
* init shard param from shape tuple

* add more unit tests for shard param

* add set_payload method for ShardedParam

* [zero] add sharded tensor class

* polish code

* add shard strategy

* move shard and gather logic from shard tensor to shard strategy.

* polish code
2022-03-11 15:50:28 +08:00
ver217 36f9a74ab2 fix sharded param hook and unit test 2022-03-11 15:50:28 +08:00
Jiarui Fang 80364c7686 [zero] sharded tensor (#305)
* init shard param from shape tuple

* add more unit tests for shard param

* add set_payload method for ShardedParam

* [zero] add sharded tensor class

* polish code
2022-03-11 15:50:28 +08:00
Jiarui Fang e17e92c54d Polish sharded parameter (#297)
* init shard param from shape tuple

* add more unit tests for shard param

* add more unit tests for sharded param
2022-03-11 15:50:28 +08:00