467 Commits (de2f581d43ba403808c6b5eb365f7c44a375fc70)

Author SHA1 Message Date
YuliangLiu0306 de2f581d43 [cli] added micro benchmarking for tp (#789) 3 years ago
YuliangLiu0306 cfadc9df8e [cli] added distributed launcher command (#791) 3 years ago
Jiarui Fang 97cd9b03b3 [log] display tflops if available (#802) 3 years ago
Jiarui Fang 4d9332b4c5 [refactor] moving memtracer to gemini (#801) 3 years ago
Jiarui Fang 8711c706f4 [hotfix] fix grad offload when enabling reuse_fp16_shard 3 years ago
ver217 f1fa1a675f fix grad offload when enabling reuse_fp16_shard 3 years ago
HELSON 4c4388c46e [hotfix] fix memory leak in zero (#781) 3 years ago
Ziyue Jiang 4b01da24cd [TP] change the check assert in split batch 2d (#772) 3 years ago
ver217 846406a07a [gemini] fix auto tensor placement policy (#775) 3 years ago
ver217 38102cf61a update version (#779) 3 years ago
HELSON a65cbb7e4e [zero] refactor shard and gather operation (#773) 3 years ago
Frank Lee 5a1a095b92 [test] refactored with the new rerun decorator (#763) 3 years ago
binmakeswell deaf99f4c9 [readme] sync CN readme (#766) 3 years ago
ver217 6e553748a7 polish sharded optim docstr and warning (#770) 3 years ago
LuGY 80e37eec42 fix the ckpt bugs when using DDP (#769) 3 years ago
Jiarui Fang 1f698f4406 [readme] polish readme (#764) 3 years ago
Frank Lee 920fe31526 [compatibility] used backward-compatible API for global process group (#758) 3 years ago
Frank Lee 4ea49cb536 [test] added a decorator for address already in use error with backward compatibility (#760) 3 years ago
Jiarui Fang 10ef8afdd2 [gemini] init genimi individual directory (#754) 3 years ago
ver217 dcca614eee [hotfix] fix test_stateful_tensor_mgr (#762) 3 years ago
github-actions[bot] 6978980f6d Automated submodule synchronization (#751) 3 years ago
ver217 a93a7d7364 [hotfix] fix reuse_fp16_shard of sharded model (#756) 3 years ago
ver217 8f7ce94b8e [hotfix] fix auto tensor placement policy (#753) 3 years ago
HELSON 84c6700b2a [zero] refactor memstats_collector (#746) 3 years ago
アマデウス b8899e0905 [TP] allow layernorm without bias (#750) 3 years ago
Jiarui Fang 3d7dc46d33 [zero] use factory pattern for tensor_placement_policy (#752) 3 years ago
ver217 4b048a8728 fix prepare grads in sharded optim (#749) 3 years ago
ver217 097772546e fix initialize about zero 3 years ago
ver217 e396bb71f2 [zero] add tensor placement policies (#743) 3 years ago
HELSON 22c4b88d56 [zero] refactor ShardedParamV2 for convenience (#742) 3 years ago
HELSON 340e59f968 [utils] add synchronized cuda memory monitor (#740) 3 years ago
ver217 e6212f56cd [hotfix] fix memory leak in backward of sharded model (#741) 3 years ago
Frank Lee f4f42d4c3c [bug] fixed DDP compatibility with torch 1.8 (#739) 3 years ago
Frank Lee a4e91bc87f [bug] fixed grad scaler compatibility with torch 1.8 (#735) 3 years ago
Jiarui Fang 53cb584808 [utils] correct cpu memory used and capacity in the context of multi-process (#726) 3 years ago
Jiarui Fang 7db3ccc79b [hotfix] remove duplicated param register to stateful tensor manager (#728) 3 years ago
binmakeswell 600e769a42 add video (#732) 3 years ago
Frank Lee a5c3f072f6 [bug] removed zero installation requirements (#731) 3 years ago
HELSON b9b469ea50 [moe] add checkpoint for moe zero test (#729) 3 years ago
Frank Lee 6f7d1362c9 [doc] removed outdated installation command (#730) 3 years ago
FrankLeeeee e88a498c9c [test] removed trivial outdated test 3 years ago
FrankLeeeee 62b4ce7326 [test] added missing decorators to model checkpointing tests 3 years ago
Frank Lee 1cb7bdad3b [util] fixed communication API depth with PyTorch 1.9 (#721) 3 years ago
Frank Lee 2412429d54 [util] fixed activation checkpointing on torch 1.9 (#719) 3 years ago
Frank Lee 04ff5ea546 [utils] support detection of number of processes on current node (#723) 3 years ago
Jiarui Fang 4d90a7b513 [refactor] zero directory (#724) 3 years ago
Frank Lee 20ab1f5520 [bug] fixed broken test_found_inf (#725) 3 years ago
Jiarui Fang 193dc8dacb [refactor] refactor the memory utils (#715) 3 years ago
HELSON dbd96fe90a [zero] check whether gradients have inf and nan in gpu (#712) 3 years ago
ver217 715b86eadd [hotfix] fix stm cuda model data size (#710) 3 years ago