3466 Commits (457a0de79fd2d3602eba0ac78e606acb6401fc60)

Author SHA1 Message Date
GuangyaoZhang 457a0de79f shardformer fp8 4 months ago
pre-commit-ci[bot] 51f916b11d [pre-commit.ci] auto fixes from pre-commit.com hooks 4 months ago
BurkeHulk 1f1b856354 Merge remote-tracking branch 'origin/feature/fp8_comm' into feature/fp8_comm 4 months ago
BurkeHulk 66018749f3 add fp8_communication flag in the script 4 months ago
BurkeHulk e88190184a support fp8 communication in pipeline parallelism 4 months ago
BurkeHulk 1e1959467e fix scaling algorithm in FP8 casting 4 months ago
GuangyaoZhang dbfa7d39fc fix typo 5 months ago
pre-commit-ci[bot] e17f835df7 [pre-commit.ci] auto fixes from pre-commit.com hooks 5 months ago
Hanks 6991819a97 Merge branch 'hpcaitech:main' into feature/fp8_comm 5 months ago
pre-commit-ci[bot] 7997683aac [pre-commit.ci] pre-commit autoupdate (#5878) 5 months ago
Hongxin Liu 7afbc81d62 [quant] fix bitsandbytes version check (#5882) 5 months ago
Wang Binluo 6cd4c32be4 [shardformer] fix the moe (#5883) 5 months ago
Edenzzzz eb24fcd914 [Hotfix] Fix OPT gradient checkpointing forward 5 months ago
Haze188 ea94c07b95 [hotfix] fix the bug that large tensor exceed the maximum capacity of TensorBucket (#5879) 5 months ago
pre-commit-ci[bot] 7c2f79fa98 [pre-commit.ci] pre-commit autoupdate (#5572) 5 months ago
Edenzzzz 936d0b0f7b [doc] Update llama + sp compatibility; fix dist optim table 5 months ago
Jianghai 8ab46b4000 [Shardformer] change qwen2 modeling into gradient checkpointing style (#5874) 5 months ago
HangXu f5a52e1600 fp8 operators for compressed communication 5 months ago
Haze188 416580b314 [MoE/ZeRO] Moe refactor with zero refactor (#5821) 5 months ago
flybird11111 773d9f964a [shardformer]delete xformers (#5859) 5 months ago
Hongxin Liu eaea88cf9e [release] update version (#5864) 5 months ago
Runyu Lu 3c7cda0c9a [Inference]Lazy Init Support (#5785) 5 months ago
Guangyao Zhang d9d5e7ea1f [shardformer] Support the T5ForTokenClassification model (#5816) 5 months ago
Hongxin Liu 5dfbcd7746 [zero] use bucket during allgather (#5860) 5 months ago
botbw 8e718a1421 [gemini] fixes for benchmarking (#5847) 5 months ago
Edenzzzz 2a25a2aff7 [Feature] optimize PP overlap (#5735) 5 months ago
binmakeswell 4ccaaaab63 [doc] add GPU cloud playground (#5851) 5 months ago
binmakeswell 7266f82d03 [doc] fix open sora model weight link (#5848) 5 months ago
binmakeswell 8f445729a4 [doc] opensora v1.2 news (#5846) 5 months ago
botbw 8a5c86439a [gemini] fix missing return (#5845) 5 months ago
Hongxin Liu bd3e34fef6 [release] update version (#5833) 5 months ago
Yuanheng Zhao 7b249c76e5 [Fix] Fix spec-dec Glide LlamaModel for compatibility with transformers (#5837) 5 months ago
Guangyao Zhang fd1dc417d8 [shardformer] Change atol in test command-r weight-check to pass pytest (#5835) 5 months ago
Guangyao Zhang 2014cce870 [devops] Remove building on PR when edited to avoid skip issue (#5836) 5 months ago
Kai Lv 0adca5b688 [launch] Support IPv4 host initialization in launch (#5822) 5 months ago
Guangyao Zhang 639394b0d4 Merge pull request #5818 from GuangyaoZhang/command-r 5 months ago
Edenzzzz 7f9ec599be [misc] Add dist optim to doc sidebar (#5806) 5 months ago
GuangyaoZhang 4adbc36913 Merge branch 'command-r' of github.com:GuangyaoZhang/ColossalAI into command-r 5 months ago
GuangyaoZhang d84d68601a change 'xxx if xxx else None' to 'xxx or None' 5 months ago
pre-commit-ci[bot] 996c65077e [pre-commit.ci] auto fixes from pre-commit.com hooks 5 months ago
GuangyaoZhang a83a2336e8 rebase master llama change 5 months ago
GuangyaoZhang 20c0b06ff5 Merge branch 'command-r' of github.com:GuangyaoZhang/ColossalAI into command-r 5 months ago
GuangyaoZhang 363cde6957 merge model and attention forward 5 months ago
GuangyaoZhang 7a2b08646f Remove CohereLayerNorm and use existing layernorm 5 months ago
GuangyaoZhang fe2e74c03a fix precommit 5 months ago
GuangyaoZhang 98da648a4a Fix Code Factor check 5 months ago
GuangyaoZhang f656d61778 change command 5 months ago
GuangyaoZhang 0b81163bc0 Copy llama to command 5 months ago
Edenzzzz 8795bb2e80 Support 4d parallel + flash attention (#5789) 5 months ago
GuangyaoZhang 3c7302ad0e merge model and attention forward 5 months ago