Commit Graph

2881 Commits (8d56c9c3894b882c94810a575bb3eb13fe7e2a98)

Author SHA1 Message Date
Hongxin Liu 8d56c9c389
[misc] remove outdated submodule (#5070)
1 year ago
Cuiqing Li (李崔卿) bce919708f
[Kernels] added flash-decoding for triton (#5063)
1 year ago
Xu Kai fd6482ad8c
[inference] Refactor inference architecture (#5057)
1 year ago
flybird11111 bc09b95f50
[example] fix llama example's loss error when using gemini plugin (#5060)
1 year ago
Wenhao Chen 3c08f17348
[hotfix]: modify create_ep_hierarchical_group and add test (#5032)
1 year ago
flybird11111 97cd0cd559
[shardformer] fix llama error when transformers upgraded. (#5055)
1 year ago
flybird11111 3e02154710
[gemini] gemini support extra-dp (#5043)
1 year ago
Elsa Granger b2ad0d9e8f
[pipeline,shardformer] Fix p2p efficiency in pipeline, allow skipping loading weight not in weight_map when `strict=False`, fix llama flash attention forward, add flop estimation by megatron in llama benchmark (#5017)
1 year ago
Cuiqing Li (李崔卿) 28052a71fb
[Kernels] Update triton kernels to 2.1.0 (#5046)
1 year ago
Orion-Zheng 43ad0d9ef0
fix wrong EOS token in ColossalChat
1 year ago
Zhongkai Zhao 70885d707d
[hotfix] Support extra_kwargs in ShardConfig (#5031)
1 year ago
flybird11111 576a2f7b10
[gemini] gemini support tensor parallelism. (#4942)
1 year ago
Jun Gao a4489384d5
[shardformer] Fix serialization error with Tensor Parallel state saving (#5018)
1 year ago
Wenhao Chen 724441279b
[moe]: fix ep/tp tests, add hierarchical all2all (#4982)
1 year ago
Yuanchen 239cd92eff
Support mtbench (#5025)
1 year ago
Xuanlei Zhao f71e63b0f3
[moe] support optimizer checkpoint (#5015)
1 year ago
Hongxin Liu 67f5331754
[misc] add code owners (#5024)
1 year ago
Jianghai ef4c14a5e2
[Inference] Fix bug in ChatGLM2 Tensor Parallelism (#5014)
1 year ago
github-actions[bot] c36e782d80
[format] applied code formatting on changed files in pull request 4926 (#5007)
1 year ago
littsk 1a3315e336
[hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926)
1 year ago
Baizhou Zhang d99b2c961a
[hotfix] fix grad accumulation plus clipping for gemini (#5002)
1 year ago
Xuanlei Zhao dc003c304c
[moe] merge moe into main (#4978)
1 year ago
Hongxin Liu 8993c8a817
[release] update version (#4995)
1 year ago
Bin Jia b6696beb04
[Pipeline Inference] Merge pp with tp (#4993)
1 year ago
ppt0011 335cb105e2
[doc] add supported feature diagram for hybrid parallel plugin (#4996)
1 year ago
Baizhou Zhang c040d70aa0
[hotfix] fix the bug of repeatedly storing param group (#4951)
1 year ago
littsk be82b5d4ca
[hotfix] Fix the bug where process groups were not being properly released. (#4940)
1 year ago
Cuiqing Li (李崔卿) 4f0234f236
[doc] Update doc for colossal-inference (#4989)
1 year ago
Yuanchen abe071b663
fix ColossalEval (#4992)
1 year ago
Cuiqing Li 459a88c806
[Kernels] Updated Triton kernels to 2.1.0 and added flash-decoding for llama token attention (#4965)
1 year ago
Jianghai cf579ff46d
[Inference] Dynamic Batching Inference, online and offline (#4953)
1 year ago
アマデウス 4e4a10c97d
updated c++17 compiler flags (#4983)
1 year ago
Bin Jia 1db6727678
[Pipeline inference] Combine kvcache with pipeline inference (#4938)
1 year ago
Jianghai c6cd629e7a
[Inference] Add ChatGLM2 benchmark script (#4963)
1 year ago
Xu Kai 785802e809
[inference] add reference and fix some bugs (#4937)
1 year ago
Hongxin Liu b8e770c832
[test] merge old components to test to model zoo (#4945)
1 year ago
Cuiqing Li 3a41e8304e
[Refactor] Integrated some lightllm kernels into token-attention (#4946)
1 year ago
digger yu 11009103be
[nfc] fix some typos in colossalai/ docs/ etc. (#4920)
1 year ago
github-actions[bot] 486d06a2d5
[format] applied code formatting on changed files in pull request 4820 (#4886)
1 year ago
Zhongkai Zhao c7aa319ba0
[test] add no master test for low level zero plugin (#4934)
1 year ago
Hongxin Liu 1f5d2e8062
[hotfix] fix torch 2.0 compatibility (#4936)
1 year ago
Baizhou Zhang 21ba89cab6
[gemini] support gradient accumulation (#4869)
1 year ago
github-actions[bot] a41cf88e9b
[format] applied code formatting on changed files in pull request 4908 (#4918)
1 year ago
Hongxin Liu 4f68b3f10c
[kernel] support pure fp16 for cpu adam and update gemini optim tests (#4921)
1 year ago
Zian(Andy) Zheng 7768afbad0
Update flash_attention_patch.py
1 year ago
Xu Kai 611a5a80ca
[inference] Add smoothquant for llama (#4904)
1 year ago
Zhongkai Zhao a0684e7bd6
[feature] support no master weights option for low level zero plugin (#4816)
1 year ago
Xu Kai 77a9328304
[inference] add llama2 support (#4898)
1 year ago
Baizhou Zhang 39f2582e98
[hotfix] fix lr scheduler bug in torch 2.0 (#4864)
1 year ago
littsk 83b52c56cd
[feature] Add clip_grad_norm for hybrid_parallel_plugin (#4837)
1 year ago