189 Commits (37e35230ff4666231dd65435b5f7b2a2fcfaf9e6)

Author  SHA1  Message  Date
botbw  c54c4fcd15  [hotfix] moe hybrid parallelism benchmark & follow-up fix (#6048)  2 months ago
Wenxuan Tan  8fd25d6e09  [Feature] Split cross-entropy computation in SP (#5959)  2 months ago
Wang Binluo  eea37da6fa  [fp8] Merge feature/fp8_comm to main branch of Colossalai (#6016)  3 months ago
Edenzzzz  f5c84af0b0  [Feature] Zigzag Ring attention (#5905)  3 months ago
flybird11111  0a51319113  [fp8] zero support fp8 linear. (#6006)  3 months ago
Hongxin Liu  8241c0c054  [fp8] support gemini plugin (#5978)  4 months ago
Hanks  b480eec738  [Feature]: support FP8 communication in DDP, FSDP, Gemini (#5928)  4 months ago
flybird11111  0c10afd372  [FP8] rebase main (#5963)  4 months ago
hxwang  3e2b6132b7  [moe] clean legacy code  4 months ago
Edenzzzz  8cc8f645cd  [Examples] Add lazy init to OPT and GPT examples (#5924)  4 months ago
Hongxin Liu  e86127925a  [plugin] support all-gather overlap for hybrid parallel (#5919)  4 months ago
GuangyaoZhang  6a20f07b80  remove all to all  4 months ago
GuangyaoZhang  5a310b9ee1  fix rebase  4 months ago
GuangyaoZhang  457a0de79f  shardformer fp8  4 months ago
BurkeHulk  66018749f3  add fp8_communication flag in the script  4 months ago
Hongxin Liu  c068ef0fa0  [zero] support all-gather overlap (#5898)  4 months ago
Edenzzzz  8ec24b6a4d  [Hoxfix] Fix CUDA_DEVICE_MAX_CONNECTIONS for comm overlap  5 months ago
Haze188  416580b314  [MoE/ZeRO] Moe refactor with zero refactor (#5821)  5 months ago
botbw  8e718a1421  [gemini] fixes for benchmarking (#5847)  5 months ago
Edenzzzz  2a25a2aff7  [Feature] optimize PP overlap (#5735)  5 months ago
Edenzzzz  8795bb2e80  Support 4d parallel + flash attention (#5789)  5 months ago
hxwang  154720ba6e  [chore] refactor profiler utils  6 months ago
genghaozhe  87665d7922  correct argument help message  6 months ago
Haze188  4d097def96  [Gemini] add some code for reduce-scatter overlap, chunk prefetch in llama benchmark. (#5751)  6 months ago
genghaozhe  b9269d962d  add args.prefetch_num for benchmark  6 months ago
genghaozhe  fba04e857b  [bugs] fix args.profile=False DummyProfiler errro  6 months ago
hxwang  ca674549e0  [chore] remove unnecessary test & changes  6 months ago
hxwang  63c057cd8e  [example] add profile util for llama  6 months ago
botbw  2fc85abf43  [gemini] async grad chunk reduce (all-reduce&reduce-scatter) (#5713)  6 months ago
genghaozhe  a280517dd9  remove unrelated file  6 months ago
genghaozhe  1ec92d29af  remove perf log, unrelated file and so on  6 months ago
genghaozhe  5c6c5d6be3  remove comments  6 months ago
genghaozhe  df63db7e63  remote comments  6 months ago
hxwang  2e68eebdfe  [chore] refactor & sync  6 months ago
Yuanheng Zhao  12e7c28d5e  [hotfix] fix OpenMOE example import path (#5697)  7 months ago
Yuanheng Zhao  55cc7f3df7  [Fix] Fix Inference Example, Tests, and Requirements (#5688)  7 months ago
Edenzzzz  c25f83c85f  fix missing pad token (#5690)  7 months ago
Hongxin Liu  7f8b16635b  [misc] refactor launch API and tensor constructor (#5666)  7 months ago
Tong Li  68ec99e946  [hotfix] add soft link to support required files (#5661)  7 months ago
Hongxin Liu  1b387ca9fe  [shardformer] refactor pipeline grad ckpt config (#5646)  7 months ago
傅剑寒  279300dc5f  [Inference/Refactor] Refactor compilation mechanism and unified multi hw (#5613)  7 months ago
binmakeswell  f4c5aafe29  [example] llama3 (#5631)  7 months ago
Hongxin Liu  4de4e31818  [exampe] update llama example (#5626)  7 months ago
Edenzzzz  d83c633ca6  [hotfix] Fix examples no pad token & auto parallel codegen bug; (#5606)  7 months ago
Hongxin Liu  641b1ee71a  [devops] remove post commit ci (#5566)  8 months ago
digger yu  341263df48  [hotfix] fix typo s/get_defualt_parser /get_default_parser (#5548)  8 months ago
digger yu  a799ca343b  [fix] fix typo s/muiti-node /multi-node etc. (#5448)  8 months ago
Wenhao Chen  e614aa34f3  [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508)  8 months ago
Yuanheng Zhao  36c4bb2893  [Fix] Grok-1 use tokenizer from the same pretrained path (#5532)  8 months ago
Insu Jang  00525f7772  [shardformer] fix pipeline forward error if custom layer distribution is used (#5189)  8 months ago