HELSON
a1ce02d740
[zero] test gradient accumulation ( #1964 )
* [zero] fix memory leak for zero2
* [zero] test gradient accumulation
* [zero] remove grad clip test
2 years ago
Jiarui Fang
cc0ed7cf33
[Gemini] ZeROHookV2 -> GeminiZeROHook ( #1972 )
2 years ago
Jiarui Fang
c4739a725a
[Gemini] polish memstats collector ( #1962 )
2 years ago
Jiarui Fang
f7e276fa71
[Gemini] add GeminiAdamOptimizer ( #1960 )
2 years ago
HELSON
7066dfbf82
[zero] fix memory leak for zero2 ( #1955 )
2 years ago
HELSON
6e51d296f0
[zero] migrate zero1&2 ( #1878 )
* add zero1&2 optimizer
* rename test directory
* rename test files
* change tolerance in test
2 years ago
Zihao
20e255d4e8
MemStatsCollectorStatic ( #1765 )
2 years ago
HELSON
c6a1a62636
[hotfix] fix zero's incompatibility with checkpoint in torch-1.12 ( #1786 )
* [hotfix] fix zero's incompatibility with checkpoint in torch-1.12
* [zero] add cpu shard init
* [zero] add tiny example test
* [colo_tensor] fix bugs for torch-1.11
2 years ago
CsRic
ea961d8fd1
[NFC] polish colossalai/zero/sharded_param/__init__.py code style ( #1717 )
Co-authored-by: ric <mkkt_bkkt@mail.ustc.edu.cn>
2 years ago
HELSON
1468e4bcfc
[zero] add constant placement policy ( #1705 )
* fixes memory leak when a parameter is in fp16 during ZeroDDP init
* bans chunk release in CUDA; a chunk may be released only when it is about to be offloaded
* adds a constant placement policy. With it, users can allocate a reserved caching memory space for parameters.
2 years ago
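The constant placement policy in #1705 above amounts to a fixed-size CUDA cache for parameter chunks: chunks stay resident until the reserved space is exhausted, and only then is one offloaded to make room. A minimal, dependency-free sketch of that admission/eviction bookkeeping (all class and method names here are illustrative, not ColossalAI's actual API):

```python
# Hypothetical sketch of a "constant" placement policy: parameter chunks
# live in a fixed-size CUDA cache; when the cache is full, resident chunks
# are evicted (offloaded to CPU) before a new chunk is admitted.
class ConstPlacementPolicy:
    def __init__(self, cuda_capacity_bytes: int):
        self.capacity = cuda_capacity_bytes  # reserved CUDA cache size
        self.cuda_chunks = {}                # chunk_id -> size in bytes
        self.used = 0

    def admit(self, chunk_id: int, size: int) -> list:
        """Place a chunk in the CUDA cache; return ids of chunks evicted to CPU."""
        evicted = []
        # Evict the oldest resident chunks until the new one fits
        # (FIFO stands in for whatever eviction order the real policy uses).
        while self.used + size > self.capacity and self.cuda_chunks:
            old_id, old_size = next(iter(self.cuda_chunks.items()))
            del self.cuda_chunks[old_id]
            self.used -= old_size
            evicted.append(old_id)
        self.cuda_chunks[chunk_id] = size
        self.used += size
        return evicted
```

With a 100-byte reservation, admitting a third 40-byte chunk forces the first one out while the other two stay cached.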
HELSON
b28991dd0a
[feature] A new ZeRO implementation ( #1644 )
2 years ago
Jiarui Fang
c5d39215f6
Revert "[feature] new zero implementation ( #1623 )" ( #1643 )
This reverts commit 5be118f405.
2 years ago
HELSON
5be118f405
[feature] new zero implementation ( #1623 )
2 years ago
HELSON
f7f2248771
[moe] fix MoE bugs ( #1628 )
* remove forced FP32 modules
* correct no_shard-contexts' positions
2 years ago
ver217
c9e8ce67b8
fix move fp32 shards ( #1604 )
2 years ago
Fazzie-Maqianli
06dccdde44
[NFC] polish colossalai/zero/sharded_model/reduce_scatter.py code style ( #1554 )
2 years ago
ver217
821c6172e2
[utils] Impl clip_grad_norm for ColoTensor and ZeroOptimizer ( #1442 )
2 years ago
ver217
6df3e19be9
[hotfix] zero optim prevents calling inner optim.zero_grad ( #1422 )
2 years ago
ver217
8dced41ad0
[zero] zero optim state_dict takes only_rank_0 ( #1384 )
* zero optim state_dict takes only_rank_0
* fix unit test
2 years ago
ver217
828b9e5e0d
[hotfix] fix zero optim save/load state dict ( #1381 )
2 years ago
ver217
6b43c789fd
fix zero optim backward_by_grad and save/load ( #1353 )
2 years ago
ver217
d068af81a3
[doc] update rst and docstring ( #1351 )
* update rst
* add zero docstr
* fix docstr
* remove fx.tracer.meta_patch
* fix docstr
* fix docstr
* update fx rst
* fix fx docstr
* remove useless rst
2 years ago
ver217
ce470ba37e
[checkpoint] sharded optim save/load grad scaler ( #1350 )
2 years ago
ver217
7a05367101
[hotfix] sharded model returns cpu state_dict ( #1328 )
2 years ago
Jiarui Fang
4165eabb1e
[hotfix] remove potential circular import ( #1307 )
* make it faster
* [hotfix] remove circular import
2 years ago
ver217
a45ddf2d5f
[hotfix] fix sharded optim step and clip_grad_norm ( #1226 )
2 years ago
Jiarui Fang
a444633d13
warmup ratio configuration ( #1192 )
2 years ago
Jiarui Fang
372f791444
[refactor] move chunk and chunkmgr to directory gemini ( #1182 )
2 years ago
ver217
9e1daa63d2
[zero] sharded optim supports loading local state dict ( #1170 )
* sharded optim supports loading local state dict
* polish code
* add unit test
2 years ago
ver217
561e90493f
[zero] zero optim supports loading local state dict ( #1171 )
* zero optim supports loading local state dict
* polish code
* add unit test
2 years ago
ver217
8106d7b8c7
[ddp] refactor ColoDDP and ZeroDDP ( #1146 )
* ColoDDP supports overwriting default process group
* rename ColoDDPV2 to ZeroDDP
* add docstr for ZeroDDP
* polish docstr
2 years ago
ver217
6690a61b4d
[hotfix] prevent nested ZeRO ( #1140 )
2 years ago
Frank Lee
15aab1476e
[zero] avoid zero hook spam by changing log to debug level ( #1137 )
2 years ago
ver217
a1a7899cae
[hotfix] fix zero init ctx numel ( #1128 )
2 years ago
ver217
f0a954f16d
[ddp] add set_params_to_ignore for ColoDDP ( #1122 )
* add set_params_to_ignore for ColoDDP
* polish code
* fix zero hook v2
* add unit test
* polish docstr
2 years ago
Frank Lee
14e5b11d7f
[zero] fixed api consistency ( #1098 )
2 years ago
Frank Lee
cb18922c47
[doc] added documentation to chunk and chunk manager ( #1094 )
* [doc] added documentation to chunk and chunk manager
* polish code
* polish code
* polish code
2 years ago
ver217
1f894e033f
[gemini] zero supports gemini ( #1093 )
* add placement policy
* add gemini mgr
* update mem stats collector
* update zero
* update zero optim
* fix bugs
* zero optim monitor os
* polish unit test
* polish unit test
* add assert
2 years ago
ver217
be01db37c8
[tensor] refactor chunk mgr and impl MemStatsCollectorV2 ( #1077 )
* polish chunk manager
* polish unit test
* impl add_extern_static_tensor for chunk mgr
* add mem stats collector v2
* polish code
* polish unit test
* polish code
* polish get chunks
2 years ago
ver217
c5cd3b0f35
[zero] zero optim copy chunk rather than copy tensor ( #1070 )
3 years ago
Jiarui Fang
49832b2344
[refactor] add nn.parallel module ( #1068 )
3 years ago
ver217
e3fde4ee6b
fix import error in sharded model v2 ( #1053 )
3 years ago
ver217
51b9a49655
[zero] add zero optimizer for ColoTensor ( #1046 )
* add zero optimizer
* torch ok
* unit test ok
* polish code
* fix bugs
* polish unit test
* polish zero optim
* polish colo ddp v2
* refactor folder structure
* add comment
* polish unit test
* polish zero optim
* polish unit test
3 years ago
ver217
9492a561c3
[tensor] ColoTensor supports ZeRo ( #1015 )
* impl chunk manager
* impl param op hook
* add reduce_chunk
* add zero hook v2
* add zero dp
* fix TensorInfo
* impl load balancing when using zero without chunk
* fix zero hook
* polish chunk
* fix bugs
* ddp ok
* zero ok
* polish code
* fix bugs about load balancing
* polish code
* polish code
* add end-to-end test
* polish code
* polish code
* polish code
* fix typo
* add test_chunk
* fix bugs
* fix bugs
* polish code
3 years ago
ver217
7cfd6c827e
[zero] add load_state_dict for sharded model ( #894 )
* add load_state_dict for sharded model
* fix bug
* fix bug
* fix ckpt dtype and device
* support load state dict in zero init ctx
* fix bugs
3 years ago
ver217
c4d903e64a
[gemini] accelerate adjust_layout() ( #878 )
* add lru cache
* polish code
* update unit test
* fix sharded optim
3 years ago
HELSON
425b4a96b8
[gemini] polish stateful_tensor_mgr ( #876 )
3 years ago
ver217
d7e0303d1e
[zero] use GeminiMemoryManager when sampling model data ( #850 )
3 years ago
ver217
0f7ed8c192
fix _post_init_method of zero init ctx ( #847 )
3 years ago
HELSON
e5ea3fdeef
[gemini] add GeminiMemoryManger ( #832 )
* refactor StatefulTensor, tensor utilities
* add unit test for GeminiMemoryManager
3 years ago
Jiarui Fang
595bedf767
revert zero tensors back ( #829 )
3 years ago
Jiarui Fang
294a6060d0
[tensor] ZeRO use ColoTensor as the base class. ( #828 )
* [refactor] moving InsertPostInitMethodToModuleSubClasses to utils.
* [tensor] ZeRO use ColoTensor as the base class.
* polish
3 years ago
Jiarui Fang
eb1b89908c
[refactor] moving InsertPostInitMethodToModuleSubClasses to utils. ( #824 )
3 years ago
Jiarui Fang
3ddbd1bce1
[gemini] collect cpu-gpu moving volume in each iteration ( #813 )
3 years ago
Jiarui Fang
61c20b44bc
[log] local throughput metrics ( #811 )
* Revert "[zero] add ZeroTensorShardStrategy (#793 )"
This reverts commit 88759e289e.
* [gemini] set cpu memory capacity
* [log] local throughput collecting
* polish
* polish
* polish
* polish code
* polish
3 years ago
ver217
dd92b90a68
[DO NOT MERGE] [zero] init fp16 params directly in ZeroInitContext ( #808 )
* init fp16 param directly
* polish code
3 years ago
Jiarui Fang
e761ad2cd7
Revert "[zero] add ZeroTensorShardStrategy ( #793 )" ( #806 )
3 years ago
HELSON
88759e289e
[zero] add ZeroTensorShardStrategy ( #793 )
3 years ago
Jiarui Fang
4d9332b4c5
[refactor] moving memtracer to gemini ( #801 )
3 years ago
Jiarui Fang
8711c706f4
[hotfix] fix grad offload when enabling reuse_fp16_shard
3 years ago
ver217
f1fa1a675f
fix grad offload when enabling reuse_fp16_shard
3 years ago
HELSON
4c4388c46e
[hotfix] fix memory leak in zero ( #781 )
3 years ago
HELSON
a65cbb7e4e
[zero] refactor shard and gather operation ( #773 )
3 years ago
ver217
6e553748a7
polish sharded optim docstr and warning ( #770 )
3 years ago
Jiarui Fang
10ef8afdd2
[gemini] init gemini individual directory ( #754 )
3 years ago
ver217
dcca614eee
[hotfix] fix test_stateful_tensor_mgr ( #762 )
3 years ago
ver217
a93a7d7364
[hotfix] fix reuse_fp16_shard of sharded model ( #756 )
* fix reuse_fp16_shard
* disable test stm
* polish code
3 years ago
ver217
8f7ce94b8e
[hotfix] fix auto tensor placement policy ( #753 )
3 years ago
HELSON
84c6700b2a
[zero] refactor memstats_collector ( #746 )
3 years ago
Jiarui Fang
3d7dc46d33
[zero] use factory pattern for tensor_placement_policy ( #752 )
3 years ago
ver217
4b048a8728
fix prepare grads in sharded optim ( #749 )
3 years ago
ver217
e396bb71f2
[zero] add tensor placement policies ( #743 )
* add tensor placement policies
* polish comments
* polish comments
* update moe unit tests
3 years ago
HELSON
22c4b88d56
[zero] refactor ShardedParamV2 for convenience ( #742 )
3 years ago
ver217
e6212f56cd
[hotfix] fix memory leak in backward of sharded model ( #741 )
3 years ago
Jiarui Fang
7db3ccc79b
[hotfix] remove duplicated param register to stateful tensor manager ( #728 )
3 years ago
Jiarui Fang
4d90a7b513
[refactor] zero directory ( #724 )
3 years ago
Jiarui Fang
193dc8dacb
[refactor] refactor the memory utils ( #715 )
3 years ago
HELSON
dbd96fe90a
[zero] check whether gradients have inf and nan in gpu ( #712 )
3 years ago
ver217
715b86eadd
[hotfix] fix stm cuda model data size ( #710 )
3 years ago
HELSON
a9b8300d54
[zero] improve adaptability for non-sharded parameters ( #708 )
* adapt post grad hooks for non-sharded parameters
* adapt optimizer for non-sharded parameters
* offload gradients for non-replicated parameters
3 years ago
ver217
ab8c6b4a0e
[zero] refactor memstats collector ( #706 )
* refactor memstats collector
* fix disposable
* polish code
3 years ago
HELSON
ee112fe1da
[zero] adapt zero hooks for unsharded module ( #699 )
3 years ago
ver217
3c9cd5bb5e
[zero] stateful tensor manager ( #687 )
* [WIP] stateful tensor manager
* add eviction strategy
* polish code
* polish code
* polish comment
* add unit test
* fix sampler bug
* polish code
* fix max sampling cnt resetting bug
* fix sampler bug
* polish code
* fix bug
* fix unit test
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
3 years ago
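The stateful tensor manager in #687 keeps per-tensor device state and evicts tensors to CPU when CUDA memory runs short. A rough, self-contained sketch of that bookkeeping (class and method names are hypothetical, and the real manager picks eviction victims with a dedicated strategy rather than registration order):

```python
# Illustrative sketch of a stateful tensor manager: each tensor records
# its current device; when CUDA usage exceeds a budget, the manager
# evicts tensors (moves their state to CPU) until usage fits again.
class StatefulTensor:
    def __init__(self, name: str, size: int):
        self.name = name
        self.size = size
        self.device = "cuda"  # tensors start resident on CUDA

class StatefulTensorMgr:
    def __init__(self, cuda_budget: int):
        self.cuda_budget = cuda_budget
        self.tensors = []

    def register(self, t: StatefulTensor):
        self.tensors.append(t)

    def cuda_usage(self) -> int:
        return sum(t.size for t in self.tensors if t.device == "cuda")

    def evict(self) -> list:
        """Move tensors to CPU until CUDA usage fits the budget."""
        evicted = []
        for t in self.tensors:  # real code would choose victims by an eviction strategy
            if self.cuda_usage() <= self.cuda_budget:
                break
            if t.device == "cuda":
                t.device = "cpu"
                evicted.append(t.name)
        return evicted
```

Registering three 30-byte tensors against a 50-byte budget leaves the manager over budget until two of them are evicted.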
HELSON
d7ecaf362b
[zero] fix init bugs in zero context ( #686 )
* adapt model weight initialization for methods in PyTorch nn.init
3 years ago
Jiarui Fang
59bf2dc590
[zero] initialize a stateful tensor manager ( #614 )
3 years ago
HELSON
17e73e62cc
[hotfix] fix bugs for unsharded parameters when restore data ( #664 )
3 years ago
Jiarui Fang
0aab52301e
[hotfix] fix a bug in model data stats tracing ( #655 )
3 years ago
Jiarui Fang
036404ca8a
Revert "[zero] polish init context ( #645 )" ( #657 )
3 years ago
Jiarui Fang
67b4928244
[zero] polish init context ( #645 )
3 years ago
HELSON
055fbf5be6
[zero] adapt zero for unsharded parameters (Optimizer part) ( #601 )
3 years ago
ver217
0ef8819c67
polish docstring of zero ( #612 )
3 years ago
ver217
9bee119104
[hotfix] fix sharded optim zero grad ( #604 )
* fix sharded optim zero grad
* polish comments
3 years ago
Jiarui Fang
e956d93ac2
[refactor] memory utils ( #577 )
3 years ago
HELSON
e6d50ec107
[zero] adapt zero for unsharded parameters ( #561 )
* support existing sharded and unsharded parameters in zero
* add unit test for moe-zero model init
* polish moe gradient handler
3 years ago
ver217
7c6c427db1
[zero] trace states of fp16/32 grad and fp32 param ( #571 )
3 years ago
Jiarui Fang
7675366fce
[polish] rename col_attr -> colo_attr ( #558 )
3 years ago
ver217
014bac0c49
[zero] hijack p.grad in sharded model ( #554 )
* hijack p.grad in sharded model
* polish comments
* polish comments
3 years ago
Jiarui Fang
f552b11294
[zero] label state for param fp16 and grad ( #551 )
3 years ago
Jiarui Fang
214da761d4
[zero] add stateful tensor ( #549 )
3 years ago
Jiarui Fang
107b99ddb1
[zero] dump memory stats for sharded model ( #548 )
3 years ago
HELSON
8c90d4df54
[zero] add zero context manager to change config during initialization ( #546 )
3 years ago
Jiarui Fang
53b1b6e340
[zero] non model data tracing ( #545 )
3 years ago
ver217
fb841dd5c5
[zero] optimize grad offload ( #539 )
* optimize grad offload
* polish code
* polish code
3 years ago
ver217
1f90a3b129
[zero] polish ZeroInitContext ( #540 )
3 years ago
Jiarui Fang
c11ff81b15
[zero] get memory usage of sharded optim v2. ( #542 )
3 years ago
HELSON
a30e2b4c24
[zero] adapt for no-leaf module in zero ( #535 )
only process module's own parameters in Zero context
add zero hooks for all modules that contain parameters
gather only parameters belonging to the module itself
3 years ago
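The fix in #535 hinges on the distinction between a module's own parameters and those of its submodules: each module's zero hook should gather only the former (in PyTorch terms, `parameters(recurse=False)` versus the default recursive view). A tiny pure-Python sketch of the distinction, with hypothetical names:

```python
# Pure-Python sketch of the "no-leaf module" rule from #535: a hook on each
# module gathers only the parameters the module owns directly, not those of
# its children (PyTorch's parameters(recurse=False) distinction), so a
# non-leaf module's weights are handled once, by its own hook.
class Module:
    def __init__(self, **children):
        self.own_params = []      # parameters owned directly by this module
        self.children = children  # name -> child Module

    def params_to_gather(self):
        """What this module's zero hook gathers before its forward runs."""
        return list(self.own_params)

    def all_params(self):
        """Recursive view (what a naive hook would wrongly gather)."""
        out = list(self.own_params)
        for child in self.children.values():
            out += child.all_params()
        return out

linear = Module()
linear.own_params = ["weight", "bias"]
block = Module(proj=linear)  # non-leaf: has a child *and* its own parameter
block.own_params = ["scale"]
```

Here `block`'s hook gathers only `scale`; `weight` and `bias` are gathered by the hook on `linear`, never twice.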
Jiarui Fang
705f56107c
[zero] refactor model data tracing ( #537 )
3 years ago
Jiarui Fang
a590ed0ba3
[zero] improve the accuracy of get_memory_usage of sharded param ( #538 )
3 years ago
Jiarui Fang
37cb70feec
[zero] get memory usage for sharded param ( #536 )
3 years ago
Jiarui Fang
05e33b2578
[zero] fix grad offload ( #528 )
* [zero] fix grad offload
* polish code
3 years ago
Jiarui Fang
8d8c5407c0
[zero] refactor model data tracing ( #522 )
3 years ago
Jiarui Fang
4d322b79da
[refactor] remove old zero code ( #517 )
3 years ago
Jiarui Fang
920c5889a7
[zero] add colo move inline ( #521 )
3 years ago
Jiarui Fang
0bebda6ea5
[zero] fix init device bug in zero init context unittest ( #516 )
3 years ago
Jiarui Fang
7ef3507ace
[zero] show model data cuda memory usage after zero context init. ( #515 )
3 years ago
ver217
a2e61d61d4
[zero] zero init ctx enable rm_torch_payload_on_the_fly ( #512 )
* enable rm_torch_payload_on_the_fly
* polish docstr
3 years ago
Jiarui Fang
bca0c49a9d
[zero] use colo model data api in optimv2 ( #511 )
3 years ago
Jiarui Fang
0035b7be07
[memory] add model data tensor moving api ( #503 )
3 years ago
ver217
9ec1ce6ab1
[zero] sharded model support the reuse of fp16 shard ( #495 )
* sharded model supports reuse fp16 shard
* rename variable
* polish code
* polish code
* polish code
3 years ago
ver217
c4c02424f3
[zero] sharded model manages ophooks individually ( #492 )
3 years ago
ver217
a9ecb4b244
[zero] polish sharded optimizer v2 ( #490 )
3 years ago
ver217
62b0a8d644
[zero] sharded optim support hybrid cpu adam ( #486 )
* sharded optim support hybrid cpu adam
* update unit test
* polish docstring
3 years ago
Jiarui Fang
b334822163
[zero] polish sharded param name ( #484 )
* [zero] polish sharded param name
* polish code
* polish
* polish code
* polish
* polish
* polish
3 years ago
ver217
8d3250d74b
[zero] ZeRO supports pipeline parallel ( #477 )
3 years ago
ver217
3cb3fc275e
zero init ctx receives a dp process group ( #471 )
3 years ago
ver217
fc8e6db005
[doc] Update docstring for ZeRO ( #459 )
* polish sharded model docstr
* polish sharded optim docstr
* polish zero docstr
* polish shard strategy docstr
3 years ago
ver217
a241f61b34
[zero] Update initialize for ZeRO ( #458 )
* polish code
* shard strategy receive pg in shard() / gather()
* update zero engine
* polish code
3 years ago
ver217
642846d6f9
update sharded optim and fix zero init ctx ( #457 )
3 years ago
Jiarui Fang
e2e9f82588
Revert "[zero] update sharded optim and fix zero init ctx" ( #456 )
* Revert "polish code"
This reverts commit 8cf7ff08cf.
* Revert "rename variables"
This reverts commit e99af94ab8.
* Revert "remove surplus imports"
This reverts commit 46add4a5c5.
* Revert "update sharded optim and fix zero init ctx"
This reverts commit 57567ee768.
3 years ago
ver217
e99af94ab8
rename variables
3 years ago
ver217
57567ee768
update sharded optim and fix zero init ctx
3 years ago
Jiarui Fang
0fcfb1e00d
[test] make zero engine test really work ( #447 )
3 years ago
Jiarui Fang
237d08e7ee
[zero] hybrid cpu adam ( #445 )
3 years ago
Jiarui Fang
496cbb0760
[hotfix] fix initialize bug with zero ( #442 )
3 years ago
Jiarui Fang
640a6cd304
[refactor] refactor the initialize method for the new zero design ( #431 )
3 years ago
ver217
fce9432f08
sync before creating empty grad
3 years ago
ver217
ea6905a898
free param.grad
3 years ago
ver217
9506a8beb2
use double buffer to handle grad
3 years ago
Jiarui Fang
adebb3e041
[zero] cuda margin space for OS ( #418 )
3 years ago
Jiarui Fang
56bb412e72
[polish] use GLOBAL_MODEL_DATA_TRACER ( #417 )
3 years ago
Jiarui Fang
23ba3fc450
[zero] refactor ShardedOptimV2 init method ( #416 )
3 years ago
Frank Lee
e79ea44247
[fp16] refactored fp16 optimizer ( #392 )
3 years ago
Jiarui Fang
21dc54e019
[zero] memtracer to record cuda memory usage of model data and overall system ( #395 )
3 years ago
Jiarui Fang
370f567e7d
[zero] new interface for ShardedOptimv2 ( #406 )
3 years ago
ver217
63469c0f91
polish code
3 years ago
ver217
88804aee49
add bucket tensor shard strategy
3 years ago
HELSON
7c079d9c33
[hotfix] fixed bugs in ShardStrategy and PcieProfiler ( #394 )
3 years ago
Jiarui Fang
3af13a2c3e
[zero] polish ShardedOptimV2 unittest ( #385 )
* place params on cpu after zero init context
* polish code
* bucketized cpu gpu tensor transfer
* find a bug in sharded optim unittest
* add offload unittest for ShardedOptimV2.
* polish code and make it more robust
3 years ago
Jiarui Fang
272ebfb57d
[bug] shard param during initializing the ShardedModelV2 ( #381 )
3 years ago
Jiarui Fang
b5f43acee3
[zero] find miss code ( #378 )
3 years ago
Jiarui Fang
6b6002962a
[zero] zero init context collect numel of model ( #375 )
3 years ago
jiaruifang
d9217e1960
Revert "[zero] bucketized tensor cpu gpu copy ( #368 )"
This reverts commit bef05489b6.
3 years ago
Jiarui Fang
00670c870e
[zero] bucketized tensor cpu gpu copy ( #368 )
3 years ago
Jiarui Fang
44e4891f57
[zero] able to place params on cpu after zero init context ( #365 )
* place params on cpu after zero init context
* polish code
3 years ago
ver217
253e54d98a
fix grad shape
3 years ago
Jiarui Fang
ea2872073f
[zero] global model data memory tracer ( #360 )
3 years ago
Jiarui Fang
cb34cd384d
[test] polish zero related unit test ( #351 )
3 years ago
ver217
d0ae0f2215
[zero] update sharded optim v2 ( #334 )
3 years ago
jiaruifang
5663616921
polish code
3 years ago
jiaruifang
7977422aeb
add bert for unit test; sharded model is not able to pass the bert case
3 years ago
ver217
1388671699
[zero] Update sharded model v2 using sharded param v2 ( #323 )
3 years ago
Jiarui Fang
11bddb6e55
[zero] update zero context init with the updated test utils ( #327 )
3 years ago
Jiarui Fang
de0468c7a8
[zero] zero init context ( #321 )
* add zero init context
* add more flags for zero init context
* fix bug of repeatedly converting param to ShardedParamV2
* polish code
3 years ago
LuGY
a3269de5c9
[zero] cpu adam kernel ( #288 )
* Added CPU Adam
* finished the cpu adam
* updated the license
* delete useless parameters, removed resnet
* modified the method of cpu adam unittest
* deleted some useless code
* removed useless code
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
3 years ago
Jiarui Fang
90d3aef62c
[zero] yet an improved sharded param ( #311 )
3 years ago
Jiarui Fang
c9e7d9582d
[zero] polish shard strategy ( #310 )
* init shard param from shape tuple
* add more unit tests for shard param
* add set_payload method for ShardedParam
* [zero] add sharded tensor class
* polish code
* add shard strategy
* move shard and gather logic to shard strategy from shard tensor.
* polish code
3 years ago
ver217
3092317b80
polish code
3 years ago
ver217
36f9a74ab2
fix sharded param hook and unit test
3 years ago
ver217
001ca624dd
impl shard optim v2 and add unit test
3 years ago
Jiarui Fang
74f77e314b
[zero] a shard strategy in granularity of tensor ( #307 )
3 years ago
Jiarui Fang
80364c7686
[zero] sharded tensor ( #305 )
* init shard param from shape tuple
* add more unit tests for shard param
* add set_payload method for ShardedParam
* [zero] add sharded tensor class
* polish code
3 years ago
ver217
b105371ace
rename sharded adam to sharded optim v2
3 years ago
ver217
70814dc22f
fix master params dtype
3 years ago
ver217
795210dd99
add fp32 master params in sharded adam
3 years ago
ver217
a109225bc2
add sharded adam
3 years ago
Jiarui Fang
e17e92c54d
Polish sharded parameter ( #297 )
* init shard param from shape tuple
* add more unit tests for shard param
* add more unit tests to sharded param
3 years ago
ver217
7aef75ca42
[zero] add sharded grad and refactor grad hooks for ShardedModel ( #287 )
3 years ago
Frank Lee
9afb5c8b2d
fixed typo in ShardParam ( #294 )
3 years ago
Frank Lee
e17e54e32a
added buffer sync to naive amp model wrapper ( #291 )
3 years ago
Jiarui Fang
5a560a060a
Feature/zero ( #279 )
* add zero1 (#209 )
* add zero1
* add test zero1
* update zero stage 1 develop (#212 )
* Implement naive zero3 (#240 )
* naive zero3 works well
* add zero3 param manager
* add TODOs in comments
* add gather full param ctx
* fix sub module streams
* add offload
* fix bugs of hook and add unit tests
* fix bugs of hook and add unit tests (#252 )
* add gather full param ctx
* fix sub module streams
* add offload
* fix bugs of hook and add unit tests
* polish code and add state dict hook
* fix bug
* update unit test
* refactor reconstructed zero code
* clip_grad support zero3 and add unit test
* add unit test for Zero3ParameterManager
* [WIP] initialize the shard param class
* [WIP] Yet another sharded model implementation (#274 )
* [WIP] initialize the shard param class
* [WIP] Yet another implementation of ShardedModel, using a better hook method.
* torch.concat -> torch.cat
* fix test_zero_level_1.py::test_zero_level_1 unit test
* remove deepspeed implementation and refactor for the reconstructed zero module
* polish zero dp unittests
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
3 years ago
HELSON
0f8c7f9804
Fixed docstring in colossalai ( #171 )
3 years ago
ver217
9ef05ed1fc
try import deepspeed when using zero ( #130 )
3 years ago
Frank Lee
91c327cb44
fixed zero level 3 dtype bug ( #76 )
3 years ago
Frank Lee
35813ed3c4
update examples and sphinx docs for the new api ( #63 )
3 years ago
ver217
7d3711058f
fix zero3 fp16 and add zero3 model context ( #62 )
3 years ago
Frank Lee
da01c234e1
Develop/experiments ( #59 )
* Add gradient accumulation, fix lr scheduler
* fix FP16 optimizer and adapted torch amp with tensor parallel (#18 )
* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
* fixed trainer
* Revert "fixed trainer"
This reverts commit 2e0b0b7699.
* improved consistency between trainer, engine and schedule (#23 )
Co-authored-by: 1SAA <c2h214748@gmail.com>
* Split conv2d, class token, positional embedding in 2d, Fix random number in ddp
Fix convergence in cifar10, Imagenet1000
* Integrate 1d tensor parallel in Colossal-AI (#39 )
* fixed 1D and 2D convergence (#38 )
* optimized 2D operations
* fixed 1D ViT convergence problem
* Feature/ddp (#49 )
* remove redundancy func in setup (#19 ) (#20 )
* use env to control the language of doc (#24 ) (#25 )
* Support TP-compatible Torch AMP and Update trainer API (#27 )
* Add gradient accumulation, fix lr scheduler
* fix FP16 optimizer and adapted torch amp with tensor parallel (#18 )
* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
* fixed trainer
* Revert "fixed trainer"
This reverts commit 2e0b0b7699.
* improved consistency between trainer, engine and schedule (#23 )
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
* add an example of ViT-B/16 and remove w_norm clipping in LAMB (#29 )
* add explanation for ViT example (#35 ) (#36 )
* support torch ddp
* fix loss accumulation
* add log for ddp
* change seed
* modify timing hook
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
* Feature/pipeline (#40 )
* remove redundancy func in setup (#19 ) (#20 )
* use env to control the language of doc (#24 ) (#25 )
* Support TP-compatible Torch AMP and Update trainer API (#27 )
* Add gradient accumulation, fix lr scheduler
* fix FP16 optimizer and adapted torch amp with tensor parallel (#18 )
* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
* fixed trainer
* Revert "fixed trainer"
This reverts commit 2e0b0b7699.
* improved consistency between trainer, engine and schedule (#23 )
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
* add an example of ViT-B/16 and remove w_norm clipping in LAMB (#29 )
* add explanation for ViT example (#35 ) (#36 )
* optimize communication of pipeline parallel
* fix grad clip for pipeline
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
* optimized 3d layer to fix slow computation; tested imagenet performance with 3d; reworked lr_scheduler config definition; fixed launch args; fixed some printing issues; simplified apis of 3d layers (#51 )
* Update 2.5d layer code to get a similar accuracy on imagenet-1k dataset
* update api for better usability (#58 )
update api for better usability
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: puck_WCR <46049915+WANG-CR@users.noreply.github.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
Co-authored-by: アマデウス <kurisusnowdeng@users.noreply.github.com>
Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
3 years ago