3506 Commits (2f9bce6686d1415a83d5726dc5ff02222c742582)

Author SHA1 Message Date
botbw 2f9bce6686 [moe] implement submesh initialization 4 months ago
haze188 a613edd517 solve hang when parallel mode = pp + dp 4 months ago
haze188 0210bead8c [misc] solve booster hang by renaming the variable 4 months ago
botbw b303ffe9f3 [zero] solve hang 4 months ago
botbw 2431694564 [moe] implement transition between non-moe tp and ep 4 months ago
botbw dec6e25e99 [test] pass mixtral shardformer test 4 months ago
hxwang 61109c7843 [zero] solve hang 4 months ago
hxwang 000456bf94 [chore] handle non-member group 4 months ago
hxwang 4fc6f9aa98 [test] mixtral pp shard test 4 months ago
hxwang 5a9490a46b [moe] fix plugin 4 months ago
hxwang 6a9164a477 [test] add mixtral transformer test 4 months ago
hxwang 229db4bc16 [test] add mixtral for sequence classification 4 months ago
Tong Li f585d4e38e [ColossalChat] Hotfix for ColossalChat (#5910) 4 months ago
Edenzzzz 8cc8f645cd [Examples] Add lazy init to OPT and GPT examples (#5924) 4 months ago
Hongxin Liu e86127925a [plugin] support all-gather overlap for hybrid parallel (#5919) 4 months ago
Hongxin Liu 73494de577 [release] update version (#5912) 4 months ago
Hongxin Liu 27a72f0de1 [misc] support torch2.3 (#5893) 4 months ago
アマデウス 530283dba0 fix object_to_tensor usage when torch>=2.3.0 (#5820) 4 months ago
Guangyao Zhang 2e28c793ce [compatibility] support torch 2.2 (#5875) 4 months ago
YeAnbang d8bf7e09a2 Merge pull request #5901 from hpcaitech/colossalchat 4 months ago
Guangyao Zhang 1c961b20f3 [ShardFormer] fix qwen2 sp (#5903) 4 months ago
Stephan Kö 45c49dde96 [Auto Parallel]: Speed up intra-op plan generation by 44% (#5446) 4 months ago
YeAnbang b3594d4d68 fix orpo cross entropy loss 4 months ago
Hongxin Liu c068ef0fa0 [zero] support all-gather overlap (#5898) 4 months ago
YeAnbang 115c4cc5a4 hotfix citation 4 months ago
YeAnbang e7a8634636 fix eval 4 months ago
YeAnbang dd9e1cdafe Merge pull request #5850 from hpcaitech/rlhf_SimPO 4 months ago
pre-commit-ci[bot] 8a9721bafe [pre-commit.ci] auto fixes from pre-commit.com hooks 4 months ago
YeAnbang 33f15203d3 Merge branch 'main' of https://github.com/hpcaitech/ColossalAI into rlhf_SimPO 4 months ago
YeAnbang f6ef5c3609 fix style 4 months ago
YeAnbang d888c3787c add benchmark for sft, dpo, simpo, orpo. Add benchmarking result. Support lora with gradient checkpoint 4 months ago
Guangyao Zhang 669849d74b [ShardFormer] Add Ulysses Sequence Parallelism support for Command-R, Qwen2 and ChatGLM (#5897) 4 months ago
YeAnbang 16f3451fe2 Merge branch 'main' of https://github.com/hpcaitech/ColossalAI into rlhf_SimPO 4 months ago
Edenzzzz fbf33ecd01 [Feature] Enable PP + SP for llama (#5868) 5 months ago
Runyu Lu 66abf1c6e8 [HotFix] CI, import, requirements-test for #5838 (#5892) 5 months ago
Runyu Lu cba20525a8 [Feat] Diffusion Model (PixArtAlpha/StableDiffusion3) Support (#5838) 5 months ago
Edenzzzz 8ec24b6a4d [Hotfix] Fix CUDA_DEVICE_MAX_CONNECTIONS for comm overlap 5 months ago
Haze188 3420921101 [shardformer] DeepseekMoE support (#5871) 5 months ago
pre-commit-ci[bot] 7997683aac [pre-commit.ci] pre-commit autoupdate (#5878) 5 months ago
Hongxin Liu 7afbc81d62 [quant] fix bitsandbytes version check (#5882) 5 months ago
Wang Binluo 6cd4c32be4 [shardformer] fix the moe (#5883) 5 months ago
Edenzzzz eb24fcd914 [Hotfix] Fix OPT gradient checkpointing forward 5 months ago
Haze188 ea94c07b95 [hotfix] fix the bug that large tensors exceed the maximum capacity of TensorBucket (#5879) 5 months ago
pre-commit-ci[bot] 7c2f79fa98 [pre-commit.ci] pre-commit autoupdate (#5572) 5 months ago
Edenzzzz 936d0b0f7b [doc] Update llama + sp compatibility; fix dist optim table 5 months ago
Jianghai 8ab46b4000 [Shardformer] change qwen2 modeling into gradient checkpointing style (#5874) 5 months ago
YeAnbang ff535204fe update transformers version 5 months ago
Haze188 416580b314 [MoE/ZeRO] MoE refactor with ZeRO refactor (#5821) 5 months ago
YeAnbang a8af6ccb73 fix torch and colossalai versions 5 months ago
flybird11111 773d9f964a [shardformer] delete xformers (#5859) 5 months ago