Commit Graph

2138 Commits (8fd25d6e09069a8437c6ebee8dd83e1de4c9b83d)

Author  SHA1  Message  Date
Guangyao Zhang  2e28c793ce  [compatibility] support torch 2.2 (#5875)  4 months ago
Guangyao Zhang  1c961b20f3  [ShardFormer] fix qwen2 sp (#5903)  5 months ago
Stephan Kö  45c49dde96  [Auto Parallel]: Speed up intra-op plan generation by 44% (#5446)  5 months ago
pre-commit-ci[bot]  51f916b11d  [pre-commit.ci] auto fixes from pre-commit.com hooks  5 months ago
BurkeHulk  1f1b856354  Merge remote-tracking branch 'origin/feature/fp8_comm' into feature/fp8_comm  5 months ago
BurkeHulk  e88190184a  support fp8 communication in pipeline parallelism  5 months ago
BurkeHulk  1e1959467e  fix scaling algorithm in FP8 casting  5 months ago
Hongxin Liu  c068ef0fa0  [zero] support all-gather overlap (#5898)  5 months ago
GuangyaoZhang  dbfa7d39fc  fix typo  5 months ago
Guangyao Zhang  669849d74b  [ShardFormer] Add Ulysses Sequence Parallelism support for Command-R, Qwen2 and ChatGLM (#5897)  5 months ago
Edenzzzz  fbf33ecd01  [Feature] Enable PP + SP for llama (#5868)  5 months ago
Runyu Lu  66abf1c6e8  [HotFix] CI,import,requirements-test for #5838 (#5892)  5 months ago
Runyu Lu  cba20525a8  [Feat] Diffusion Model(PixArtAlpha/StableDiffusion3) Support (#5838)  5 months ago
Edenzzzz  8ec24b6a4d  [Hoxfix] Fix CUDA_DEVICE_MAX_CONNECTIONS for comm overlap  5 months ago
Haze188  3420921101  [shardformer] DeepseekMoE support (#5871)  5 months ago
pre-commit-ci[bot]  e17f835df7  [pre-commit.ci] auto fixes from pre-commit.com hooks  5 months ago
Hanks  6991819a97  Merge branch 'hpcaitech:main' into feature/fp8_comm  5 months ago
Hongxin Liu  7afbc81d62  [quant] fix bitsandbytes version check (#5882)  5 months ago
Wang Binluo  6cd4c32be4  [shardformer] fix the moe (#5883)  5 months ago
Edenzzzz  eb24fcd914  [Hotfix] Fix OPT gradient checkpointing forward  5 months ago
Haze188  ea94c07b95  [hotfix] fix the bug that large tensor exceed the maximum capacity of TensorBucket (#5879)  5 months ago
pre-commit-ci[bot]  7c2f79fa98  [pre-commit.ci] pre-commit autoupdate (#5572)  5 months ago
Jianghai  8ab46b4000  [Shardformer] change qwen2 modeling into gradient checkpointing style (#5874)  5 months ago
HangXu  f5a52e1600  fp8 operators for compressed communication  5 months ago
Haze188  416580b314  [MoE/ZeRO] Moe refactor with zero refactor (#5821)  5 months ago
flybird11111  773d9f964a  [shardformer]delete xformers (#5859)  5 months ago
Runyu Lu  3c7cda0c9a  [Inference]Lazy Init Support (#5785)  5 months ago
Guangyao Zhang  d9d5e7ea1f  [shardformer] Support the T5ForTokenClassification model (#5816)  5 months ago
Hongxin Liu  5dfbcd7746  [zero] use bucket during allgather (#5860)  5 months ago
botbw  8e718a1421  [gemini] fixes for benchmarking (#5847)  5 months ago
Edenzzzz  2a25a2aff7  [Feature] optimize PP overlap (#5735)  5 months ago
botbw  8a5c86439a  [gemini] fix missing return (#5845)  5 months ago
Yuanheng Zhao  7b249c76e5  [Fix] Fix spec-dec Glide LlamaModel for compatibility with transformers (#5837)  5 months ago
Kai Lv  0adca5b688  [launch] Support IPv4 host initialization in launch (#5822)  5 months ago
GuangyaoZhang  d84d68601a  change 'xxx if xxx else None' to 'xxx or None'  5 months ago
GuangyaoZhang  a83a2336e8  rebase master llama change  5 months ago
GuangyaoZhang  363cde6957  merge model and attention forward  5 months ago
GuangyaoZhang  7a2b08646f  Remove CohereLayerNorm and use existing layernorm  5 months ago
GuangyaoZhang  fe2e74c03a  fix precommit  5 months ago
GuangyaoZhang  f656d61778  change command  5 months ago
GuangyaoZhang  0b81163bc0  Copy llama to command  5 months ago
Edenzzzz  8795bb2e80  Support 4d parallel + flash attention (#5789)  5 months ago
flybird11111  2ddf624a86  [shardformer] upgrade transformers to 4.39.3 (#5815)  6 months ago
botbw  3bcbba9262  [gemini] quick fix on possible async operation (#5803)  6 months ago
Haze188  d9dddf574f  [Gemini] Use async stream to prefetch and h2d data moving (#5781)  6 months ago
Li Xingjian  8554585a5f  [Inference] Fix flash-attn import and add model test (#5794)  6 months ago
Hongxin Liu  aa125bcc91  [shardformer] fix modeling of bloom and falcon (#5796)  6 months ago
Runyu Lu  c0948aff97  [Inference]refactor baichuan (#5791)  6 months ago
char-1ee  f5981e808e  Remove flash attention backend  6 months ago
char-1ee  ceba662d22  Clean up  6 months ago
char-1ee  5f398fc000  Pass inference model shard configs for module init  6 months ago
char-1ee  eec77e5702  Fix tests and naming  6 months ago
char-1ee  04386d9eff  Refactor modeling by adding attention backend  6 months ago
Hongxin Liu  73e88a5553  [shardformer] fix import (#5788)  6 months ago
Hongxin Liu  b9d646fe9e  [misc] fix dist logger (#5782)  6 months ago
botbw  3f7e3131d9  [gemini] optimize reduce scatter d2h copy (#5760)  6 months ago
Edenzzzz  79f7a7b211  [misc] Accelerate CI for zero and dist optim (#5758)  6 months ago
flybird11111  50b4c8e8cf  [hotfix] fix llama flash attention forward (#5777)  6 months ago
yuehuayingxueluo  b45000f839  [Inference]Add Streaming LLM (#5745)  6 months ago
Yuanheng Zhao  406443200f  [Hotfix] Add missing init file in inference.executor (#5774)  6 months ago
duanjunwen  1b76564e16  [test] Fix/fix testcase (#5770)  6 months ago
flybird11111  3f2be80530  fix (#5765)  6 months ago
botbw  023ea13cb5  Merge pull request #5749 from hpcaitech/prefetch  6 months ago
hxwang  8547562884  [chore] remove unnecessary assert since compute list might not be recorded  6 months ago
hxwang  e5e3320948  [bug] continue fix  6 months ago
hxwang  936dd96dbb  [bug] workaround for idx fix  6 months ago
Edenzzzz  5f8c0a0ac3  [Feature] auto-cast optimizers to distributed version (#5746)  6 months ago
hxwang  ff507b755e  Merge branch 'main' of github.com:hpcaitech/ColossalAI into prefetch  6 months ago
botbw  2fc85abf43  [gemini] async grad chunk reduce (all-reduce&reduce-scatter) (#5713)  6 months ago
Jianghai  85946d4236  [Inference]Fix readme and example for API server (#5742)  6 months ago
hxwang  15d21a077a  Merge remote-tracking branch 'origin/main' into prefetch  6 months ago
binmakeswell  4647ec28c8  [inference] release (#5747)  6 months ago
Yuanheng Zhao  df6747603f  [Colossal-Inference] (v0.1.0) Merge pull request #5739 from hpcaitech/feature/colossal-infer  6 months ago
Yuanheng Zhao  bd38fe6b91  [NFC] Fix code factors on inference triton kernels (#5743)  6 months ago
botbw  13c06d36a3  [bug] fix early return (#5740)  6 months ago
Haze188  22ce873c3f  [Shardformer] Add parallel output for shardformer models(bloom, falcon) (#5702)  6 months ago
pre-commit-ci[bot]  b3c0e6d871  [pre-commit.ci] auto fixes from pre-commit.com hooks  6 months ago
hxwang  137a7c341b  [chore] fix init error  6 months ago
Yuanheng Zhao  8633c15da9  [sync] Sync feature/colossal-infer with main  6 months ago
Yuanheng Zhao  d8b1ea4ac9  [doc] Update Inference Readme (#5736)  6 months ago
Yuanheng Zhao  bdf9a001d6  [Fix/Inference] Add unsupported auto-policy error message (#5730)  6 months ago
genghaozhe  90d8d0183c  remove personal comments  6 months ago
genghaozhe  bfcb2d1ff8  refactor the code structure to solve the circular import  6 months ago
genghaozhe  1ec92d29af  remove perf log, unrelated file and so on  6 months ago
genghaozhe  5c6c5d6be3  remove comments  6 months ago
genghaozhe  7416e4943b  fix conflicts to beautify the code  6 months ago
genghaozhe  d22bf30ca6  implement auto policy prefetch and modify a little origin code.  6 months ago
pre-commit-ci[bot]  f1918e18a5  [pre-commit.ci] auto fixes from pre-commit.com hooks  6 months ago
hxwang  a55a9e298b  [gemini] init auto policy prefetch  6 months ago
Yuanheng Zhao  283c407a19  [Inference] Fix Inference Generation Config and Sampling (#5710)  6 months ago
genghaozhe  06a3a100b3  remove unrelated code  6 months ago
genghaozhe  3d625ca836  add some todo Message  6 months ago
flybird11111  9d83c6d715  [lazy] fix lazy cls init (#5720)  6 months ago
botbw  e57812c672  [chore] Update placement_policy.py  6 months ago
Yuanheng Zhao  8bcfe360fd  [example] Update Inference Example (#5725)  7 months ago
genghaozhe  013690a86b  remove set(all_chunks)  7 months ago
hxwang  6efbadba25  [chore] remove debugging info  7 months ago
hxwang  20701d4533  [chore] remove print  7 months ago
hxwang  f45f8a2aa7  [gemini] maxprefetch means maximum work to keep  7 months ago
genghaozhe  fc2248cf99  Merge branch 'prefetch' of github.com:botbw/ColossalAI into feature/prefetch  7 months ago
pre-commit-ci[bot]  6bbe956316  [pre-commit.ci] auto fixes from pre-commit.com hooks  7 months ago
hxwang  82b25524ff  Merge branch 'prefetch' of github.com:botbw/ColossalAI into prefetch  7 months ago
genghaozhe  1f6b57099c  Merge branch 'prefetch' of github.com:botbw/ColossalAI into botbw-prefetch  7 months ago
hxwang  2e68eebdfe  [chore] refactor & sync  7 months ago
pre-commit-ci[bot]  5bedea6e10  [pre-commit.ci] auto fixes from pre-commit.com hooks  7 months ago
hxwang  4148ceed9f  [gemini] use compute_chunk to find next chunk  7 months ago
hxwang  b2e9745888  [chore] sync  7 months ago
hxwang  6e38eafebe  [gemini] prefetch chunks  7 months ago
Jianghai  f47f2fbb24  [Inference] Fix API server, test and example (#5712)  7 months ago
Runyu Lu  74c47921fa  [Fix] Llama3 Load/Omit CheckpointIO Temporarily (#5717)  7 months ago
Edenzzzz  43995ee436  [Feature] Distributed optimizers: Lamb, Galore, CAME and Adafactor (#5694)  7 months ago
Steve Luo  7806842f2d  add paged-attetionv2: support seq length split across thread block (#5707)  7 months ago
Runyu Lu  18d67d0e8e  [Feat]Inference RPC Server Support (#5705)  7 months ago
hugo-syn  393c8f5b7f  [hotfix] fix inference typo (#5438)  7 months ago
yuehuayingxueluo  de4bf3dedf  [Inference]Adapt repetition_penalty and no_repeat_ngram_size (#5708)  7 months ago
Wang Binluo  537f6a3855  [Shardformer]fix the num_heads assert for llama model and qwen model (#5704)  7 months ago
Wang Binluo  a3cc68ca93  [Shardformer] Support the Qwen2 model (#5699)  7 months ago
傅剑寒  bfad39357b  [Inference/Feat] Add quant kvcache interface (#5700)  7 months ago
flybird11111  d4c5ef441e  [gemini]remove registered gradients hooks (#5696)  7 months ago
CjhHa1  bc9063adf1  resolve rebase conflicts on Branch feat/online-serving  7 months ago
Jianghai  61a1b2e798  [Inference] Fix bugs and docs for feat/online-server (#5598)  7 months ago
CjhHa1  7bbb28e48b  [Inference] resolve rebase conflicts  7 months ago
Jianghai  c064032865  [Online Server] Chat Api for streaming and not streaming response (#5470)  7 months ago
Jianghai  de378cd2ab  [Inference] Finish Online Serving Test, add streaming output api, continuous batching test and example (#5432)  7 months ago
Jianghai  69cd7e069d  [Inference] ADD async and sync Api server using FastAPI (#5396)  7 months ago
yuehuayingxueluo  d482922035  [Inference] Support the logic related to ignoring EOS token (#5693)  7 months ago
yuehuayingxueluo  9c2fe7935f  [Inference]Adapt temperature processing logic (#5689)  7 months ago
Wang Binluo  22297789ab  Merge pull request #5684 from wangbluo/parallel_output  7 months ago
Yuanheng Zhao  55cc7f3df7  [Fix] Fix Inference Example, Tests, and Requirements (#5688)  7 months ago
Yuanheng Zhao  f9afe0addd  [hotfix] Fix KV Heads Number Assignment in KVCacheManager (#5695)  7 months ago
wangbluo  4e50cce26b  fix the mistral model  7 months ago
wangbluo  a8408b4d31  remove comment code  7 months ago
pre-commit-ci[bot]  ca56b93d83  [pre-commit.ci] auto fixes from pre-commit.com hooks  7 months ago
wangbluo  108ddfb795  add parallel_output for the opt model  7 months ago
pre-commit-ci[bot]  88f057ce7c  [pre-commit.ci] auto fixes from pre-commit.com hooks  7 months ago
flybird11111  77ec773388  [zero]remove registered gradients hooks (#5687)  7 months ago
Yuanheng Zhao  8754abae24  [Fix] Fix & Update Inference Tests (compatibility w/ main)  7 months ago
Yuanheng Zhao  56ed09aba5  [sync] resolve conflicts of merging main  7 months ago
Yuanheng Zhao  537a3cbc4d  [kernel] Support New KCache Layout - Triton Kernel (#5677)  7 months ago
wangbluo  2632916329  remove useless code  7 months ago
yuehuayingxueluo  f79963199c  [inference]Add alibi to flash attn function (#5678)  7 months ago
wangbluo  9efc79ef24  add parallel output for mistral model  7 months ago
Steve Luo  5cd75ce4c7  [Inference/Kernel] refactor kvcache manager and rotary_embedding and kvcache_memcpy oper… (#5663)  7 months ago
yuehuayingxueluo  5f00002e43  [Inference] Adapt Baichuan2-13B TP (#5659)  7 months ago
Wang Binluo  d3f34ee8cc  [Shardformer] add assert for num of attention heads divisible by tp_size (#5670)  7 months ago
flybird11111  6af6d6fc9f  [shardformer] support bias_gelu_jit_fused for models (#5647)  7 months ago
Hongxin Liu  7f8b16635b  [misc] refactor launch API and tensor constructor (#5666)  7 months ago
linsj20  91fa553775  [Feature] qlora support (#5586)  7 months ago
flybird11111  8954a0c2e2  [LowLevelZero] low level zero support lora (#5153)  7 months ago
Baizhou Zhang  14b0d4c7e5  [lora] add lora APIs for booster, support lora for TorchDDP (#4981)  7 months ago