15 Commits (457a0de79fd2d3602eba0ac78e606acb6401fc60)

Author SHA1 Message Date
Edenzzzz 2a25a2aff7 [Feature] optimize PP overlap (#5735) 5 months ago
flybird11111 8954a0c2e2 [LowLevelZero] low level zero support lora (#5153) 7 months ago
Elsa Granger d565df3821 [pipeline] A more general _communicate in p2p (#5062) 11 months ago
Wenhao Chen d799a3088f [pipeline]: add p2p fallback order and fix interleaved pp deadlock (#5214) 11 months ago
Wenhao Chen 4fa689fca1 [pipeline]: fix p2p comm, add metadata cache and support llama interleaved pp (#5134) 11 months ago
Elsa Granger b2ad0d9e8f [pipeline,shardformer] Fix p2p efficiency in pipeline, allow skipping loading weight not in weight_map when `strict=False`, fix llama flash attention forward, add flop estimation by megatron in llama benchmark (#5017) 1 year ago
github-actions[bot] 486d06a2d5 [format] applied code formatting on changed files in pull request 4820 (#4886) 1 year ago
Bin Jia 08a9f76b2f [Pipeline Inference] Sync pipeline inference branch to main (#4820) 1 year ago
Hongxin Liu 079bf3cb26 [misc] update pre-commit and run all files (#4752) 1 year ago
LuGY a78daf6180 [shardformer] support interleaved pipeline (#4448) 1 year ago
Baizhou Zhang ed4c448488 [pipeline] rewrite t5 tests & support multi-tensor transmitting in pipeline (#4388) 1 year ago
Hongxin Liu 261eab02fb [plugin] add 3d parallel plugin (#4295) 1 year ago
Jianghai d0807122e2 [pipeline] test pure pipeline process using llama (#4218) 1 year ago
Jianghai e7cc62d735 [pipeline] All bert models (#4233) 1 year ago
Hongxin Liu 45fdc9b42c [pipeline] implement p2p communication (#4100) 1 year ago