2851 Commits (feature/lora)

Author SHA1 Message Date
linsj20 fcf776ff1b [Feature] LoRA rebased to main branch (#5622) 7 months ago
linsj20 52a2dded36 [Feature] qlora support (#5586) 7 months ago
flybird11111 cabc1286ca [LowLevelZero] low level zero support lora (#5153) 11 months ago
Baizhou Zhang c5fd4aa6e8 [lora] add lora APIs for booster, support lora for TorchDDP (#4981) 1 year ago
Xu Kai 785802e809 [inference] add reference and fix some bugs (#4937) 1 year ago
Hongxin Liu b8e770c832 [test] merge old components to test to model zoo (#4945) 1 year ago
Cuiqing Li 3a41e8304e [Refactor] Integrated some lightllm kernels into token-attention (#4946) 1 year ago
digger yu 11009103be [nfc] fix some typo with colossalai/ docs/ etc. (#4920) 1 year ago
github-actions[bot] 486d06a2d5 [format] applied code formatting on changed files in pull request 4820 (#4886) 1 year ago
Zhongkai Zhao c7aa319ba0 [test] add no master test for low level zero plugin (#4934) 1 year ago
Hongxin Liu 1f5d2e8062 [hotfix] fix torch 2.0 compatibility (#4936) 1 year ago
Baizhou Zhang 21ba89cab6 [gemini] support gradient accumulation (#4869) 1 year ago
github-actions[bot] a41cf88e9b [format] applied code formatting on changed files in pull request 4908 (#4918) 1 year ago
Hongxin Liu 4f68b3f10c [kernel] support pure fp16 for cpu adam and update gemini optim tests (#4921) 1 year ago
Zian(Andy) Zheng 7768afbad0 Update flash_attention_patch.py 1 year ago
Xu Kai 611a5a80ca [inference] Add smmoothquant for llama (#4904) 1 year ago
Zhongkai Zhao a0684e7bd6 [feature] support no master weights option for low level zero plugin (#4816) 1 year ago
Xu Kai 77a9328304 [inference] add llama2 support (#4898) 1 year ago
Baizhou Zhang 39f2582e98 [hotfix] fix lr scheduler bug in torch 2.0 (#4864) 1 year ago
littsk 83b52c56cd [feature] Add clip_grad_norm for hybrid_parallel_plugin (#4837) 1 year ago
Hongxin Liu df63564184 [gemini] support amp o3 for gemini (#4872) 1 year ago
ppt0011 c1fab951e7 Merge pull request #4889 from ppt0011/main 1 year ago
littsk ffd9a3cbc9 [hotfix] fix bug in sequence parallel test (#4887) 1 year ago
ppt0011 1dcaf249bd [doc] add reminder for issue encountered with hybrid adam 1 year ago
Xu Kai fdec650bb4 fix test llama (#4884) 1 year ago
Bin Jia 08a9f76b2f [Pipeline Inference] Sync pipeline inference branch to main (#4820) 1 year ago
Camille Zhong 652adc2215 Update README.md 1 year ago
Camille Zhong afe10a85fd Update README.md 1 year ago
Camille Zhong d6c4b9b370 Update main README.md 1 year ago
Camille Zhong 3043d5d676 Update modelscope link in README.md 1 year ago
flybird11111 6a21f96a87 [doc] update advanced tutorials, training gpt with hybrid parallelism (#4866) 1 year ago
Blagoy Simandoff 8aed02b957 [nfc] fix minor typo in README (#4846) 1 year ago
Camille Zhong cd6a962e66 [NFC] polish code style (#4799) 1 year ago
Michelle 07ed155e86 [NFC] polish colossalai/inference/quant/gptq/cai_gptq/__init__.py code style (#4792) 1 year ago
littsk eef96e0877 polish code for gptq (#4793) 1 year ago
Hongxin Liu cb3a25a062 [checkpointio] hotfix torch 2.0 compatibility (#4824) 1 year ago
ppt0011 ad23460cf8 Merge pull request #4856 from KKZ20/test/model_support_for_low_level_zero 1 year ago
ppt0011 81ee91f2ca Merge pull request #4858 from Shawlleyw/main 1 year ago
shaoyuw c97a3523db fix: typo in comment of low_level_zero plugin 1 year ago
Zhongkai Zhao db40e086c8 [test] modify model supporting part of low_level_zero plugin (including correspoding docs) 1 year ago
Xu Kai d1fcc0fa4d [infer] fix test bug (#4838) 1 year ago
Jianghai 013a4bedf0 [inference]fix import bug and delete down useless init (#4830) 1 year ago
Yuanheng Zhao 573f270537 [Infer] Serving example w/ ray-serve (multiple GPU case) (#4841) 1 year ago
Yuanheng Zhao 3a74eb4b3a [Infer] Colossal-Inference serving example w/ TorchServe (single GPU case) (#4771) 1 year ago
Tong Li ed06731e00 update Colossal (#4832) 1 year ago
Xu Kai c3bef20478 add autotune (#4822) 1 year ago
binmakeswell 822051d888 [doc] update slack link (#4823) 1 year ago
Yuanchen 1fa8c5e09f Update Qwen-7B results (#4821) 1 year ago
flybird11111 be400a0936 [chat] fix gemini strategy (#4698) 1 year ago
Tong Li bbbcac26e8 fix format (#4815) 1 year ago