YeAnbang
845ea7214e
Merge branch 'main' of https://github.com/hpcaitech/ColossalAI into kto
2024-07-18 07:55:43 +00:00
YeAnbang
09d5ffca1a
add kto
2024-07-18 07:54:11 +00:00
Hongxin Liu
e86127925a
[plugin] support all-gather overlap for hybrid parallel ( #5919 )
* [plugin] fixed all-gather overlap support for hybrid parallel
2024-07-18 15:33:03 +08:00
GuangyaoZhang
5b969fd831
fix shardformer fp8 communication training degradation
2024-07-18 07:16:36 +00:00
Guangyao Zhang
d0bdb51f48
Merge pull request #5899 from BurkeHulk/SP_fp8
[Feature] FP8 communication in ShardFormer
2024-07-18 10:46:59 +08:00
Hongxin Liu
73494de577
[release] update version ( #5912 )
2024-07-17 17:29:59 +08:00
GuangyaoZhang
6a20f07b80
remove all-to-all
2024-07-17 07:14:55 +00:00
GuangyaoZhang
5a310b9ee1
fix rebase
2024-07-17 03:43:23 +00:00
GuangyaoZhang
457a0de79f
shardformer fp8
2024-07-16 06:56:51 +00:00
Hongxin Liu
27a72f0de1
[misc] support torch2.3 ( #5893 )
* [misc] support torch2.3
* [devops] update compatibility ci
* [devops] update compatibility ci
* [devops] add debug
* [devops] add debug
* [devops] add debug
* [devops] add debug
* [devops] remove debug
* [devops] remove debug
2024-07-16 13:59:25 +08:00
アマデウス
530283dba0
fix object_to_tensor usage when torch>=2.3.0 ( #5820 )
2024-07-16 13:59:25 +08:00
Guangyao Zhang
2e28c793ce
[compatibility] support torch 2.2 ( #5875 )
* Support PyTorch 2.2.2
* keep build_on_pr file and update .compatibility
2024-07-16 13:59:25 +08:00
Hanks
9470701110
Merge pull request #5885 from BurkeHulk/feature/fp8_comm
Feature/fp8 comm
2024-07-16 11:37:05 +08:00
YeAnbang
d8bf7e09a2
Merge pull request #5901 from hpcaitech/colossalchat
[Chat] fix eval: add in-training evaluation, fix ORPO SFT loss bug
2024-07-16 11:07:32 +08:00
Guangyao Zhang
1c961b20f3
[ShardFormer] fix qwen2 sp ( #5903 )
2024-07-15 13:58:06 +08:00
Stephan Kö
45c49dde96
[Auto Parallel]: Speed up intra-op plan generation by 44% ( #5446 )
* Remove unnecessary calls to deepcopy
* Build DimSpec's difference dict only once
This change considerably speeds up construction of DimSpec objects: the difference_dict is the same for every DimSpec, so a single shared copy is enough (see the sketch below).
* Fix documentation of DimSpec's difference method
2024-07-15 12:05:06 +08:00
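A minimal sketch of the caching pattern described in the commit above; the attribute names and the contents of the table are placeholders, not ColossalAI's actual `DimSpec` implementation:

```python
class DimSpec:
    # The difference dict is identical for every DimSpec, so it is built
    # once and cached at class level instead of per instance.
    _difference_dict = None

    def __init__(self, dim_partition: str):
        self.dim_partition = dim_partition
        if DimSpec._difference_dict is None:
            DimSpec._difference_dict = self._build_difference_dict()

    @staticmethod
    def _build_difference_dict():
        # Placeholder for the expensive one-time table construction.
        return {("R", "S0"): 1, ("S0", "R"): 1, ("R", "R"): 0, ("S0", "S0"): 0}

    def difference(self, other: "DimSpec") -> int:
        # Look up the shared table instead of rebuilding it per call.
        return DimSpec._difference_dict.get((self.dim_partition, other.dim_partition), 0)
```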
YeAnbang
b3594d4d68
fix orpo cross entropy loss
2024-07-15 02:12:05 +00:00
pre-commit-ci[bot]
51f916b11d
[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2024-07-12 07:33:45 +00:00
BurkeHulk
1f1b856354
Merge remote-tracking branch 'origin/feature/fp8_comm' into feature/fp8_comm
# Conflicts:
# colossalai/quantization/fp8.py
2024-07-12 15:29:41 +08:00
BurkeHulk
66018749f3
add fp8_communication flag in the script
2024-07-12 15:26:17 +08:00
BurkeHulk
e88190184a
support fp8 communication in pipeline parallelism
2024-07-12 15:25:25 +08:00
BurkeHulk
1e1959467e
fix scaling algorithm in FP8 casting
2024-07-12 15:23:37 +08:00
Hongxin Liu
c068ef0fa0
[zero] support all-gather overlap ( #5898 )
* [zero] support all-gather overlap
* [zero] add overlap all-gather flag
* [misc] fix typo
* [zero] update api
2024-07-11 18:59:59 +08:00
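As an illustration of the overlap described in the commit above (not the ZeRO plugin's actual code), a non-blocking all-gather can be launched with `async_op=True` and waited on only when the gathered tensor is needed:

```python
import torch
import torch.distributed as dist

def all_gather_overlapped(shard: torch.Tensor, compute_fn, group=None):
    # Launch the collective without blocking the caller.
    world_size = dist.get_world_size(group)
    buckets = [torch.empty_like(shard) for _ in range(world_size)]
    handle = dist.all_gather(buckets, shard, group=group, async_op=True)

    result = compute_fn()   # work that does not depend on the gathered params
    handle.wait()           # synchronize only when the full tensor is required
    return result, torch.cat(buckets, dim=0)
```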
YeAnbang
115c4cc5a4
hotfix citation
2024-07-11 06:05:05 +00:00
YeAnbang
e7a8634636
fix eval
2024-07-11 03:35:03 +00:00
YeAnbang
dd9e1cdafe
Merge pull request #5850 from hpcaitech/rlhf_SimPO
[Chat] RLHF support for SimPO
2024-07-11 09:14:12 +08:00
pre-commit-ci[bot]
8a9721bafe
[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2024-07-10 10:44:32 +00:00
YeAnbang
33f15203d3
Merge branch 'main' of https://github.com/hpcaitech/ColossalAI into rlhf_SimPO
2024-07-10 10:39:34 +00:00
YeAnbang
f6ef5c3609
fix style
2024-07-10 10:37:17 +00:00
YeAnbang
d888c3787c
add benchmarks for SFT, DPO, SimPO, and ORPO; add benchmarking results; support LoRA with gradient checkpointing
2024-07-10 10:17:08 +00:00
GuangyaoZhang
dbfa7d39fc
fix typo
2024-07-10 08:13:26 +00:00
Guangyao Zhang
669849d74b
[ShardFormer] Add Ulysses Sequence Parallelism support for Command-R, Qwen2 and ChatGLM ( #5897 )
2024-07-10 11:34:25 +08:00
YeAnbang
16f3451fe2
Merge branch 'main' of https://github.com/hpcaitech/ColossalAI into rlhf_SimPO
2024-07-10 02:32:07 +00:00
Edenzzzz
fbf33ecd01
[Feature] Enable PP + SP for llama ( #5868 )
* fix cross-PP-stage position id length diff bug
* fix typo
* fix typo
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* use one cross-entropy func for all shardformer models
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-07-09 18:05:20 +08:00
Runyu Lu
66abf1c6e8
[HotFix] CI, import, requirements-test for #5838 ( #5892 )
* [Hot Fix] CI, import, requirements-test
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-07-08 22:32:06 +08:00
Runyu Lu
cba20525a8
[Feat] Diffusion Model(PixArtAlpha/StableDiffusion3) Support ( #5838 )
* Diffusion Model Inference support
* Stable Diffusion 3 Support
* pixartalpha support
2024-07-08 16:02:07 +08:00
Edenzzzz
8ec24b6a4d
[Hotfix] Fix CUDA_DEVICE_MAX_CONNECTIONS for comm overlap
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
2024-07-05 20:02:36 +08:00
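For context on the fix above: frameworks that overlap communication with computation commonly pin `CUDA_DEVICE_MAX_CONNECTIONS` so that communication kernels issued first are also scheduled first. A hedged sketch; the exact value and placement ColossalAI uses may differ:

```python
import os

# Must be exported before the CUDA context is created (i.e. before the
# first CUDA call), otherwise the setting has no effect.
os.environ.setdefault("CUDA_DEVICE_MAX_CONNECTIONS", "1")

import torch  # noqa: E402  (imported only after the env var is in place)
```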
Haze188
3420921101
[shardformer] DeepseekMoE support ( #5871 )
* [Feature] deepseek moe expert parallel implement
* [misc] fix typo, remove redundant file (#5867 )
* [misc] fix typo
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [Feature] deepseek support & unit test
* [misc] remove debug code & useless print
* [misc] fix typos (#5872 )
* [Feature] remove modeling file, use auto config. (#5884 )
* [misc] fix typos
* [Feature] deepseek support via auto model, remove modeling file
* [misc] delete useless file
* [misc] fix typos
* [Deepseek] remove redundant code (#5888 )
* [misc] fix typos
* [Feature] deepseek support via auto model, remove modeling file
* [misc] delete useless file
* [misc] fix typos
* [misc] remove redundant code
* [Feature/deepseek] resolve comment. (#5889 )
* [misc] fix typos
* [Feature] deepseek support via auto model, remove modeling file
* [misc] delete useless file
* [misc] fix typos
* [misc] remove redundant code
* [misc] mv module replacement into if branch
* [misc] add some warning messages and modify some code in the unit test
* [misc] fix typos
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-07-05 16:13:58 +08:00
pre-commit-ci[bot]
e17f835df7
[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2024-07-04 12:47:17 +00:00
Hanks
6991819a97
Merge branch 'hpcaitech:main' into feature/fp8_comm
2024-07-04 20:34:41 +08:00
pre-commit-ci[bot]
7997683aac
[pre-commit.ci] pre-commit autoupdate ( #5878 )
updates:
- [github.com/pre-commit/mirrors-clang-format: v18.1.7 → v18.1.8](https://github.com/pre-commit/mirrors-clang-format/compare/v18.1.7...v18.1.8 )
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-07-04 13:46:41 +08:00
Hongxin Liu
7afbc81d62
[quant] fix bitsandbytes version check ( #5882 )
* [quant] fix bitsandbytes version check
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-07-04 11:33:23 +08:00
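A minimal sketch of the kind of version check the commit above fixes; the minimum version shown is illustrative, not the floor ColossalAI actually requires:

```python
from packaging.version import Version

def bnb_available(min_version: str = "0.39.0") -> bool:
    # Import lazily so a missing or broken install degrades gracefully.
    try:
        import bitsandbytes as bnb
    except ImportError:
        return False
    # Compare parsed versions rather than raw strings, so e.g. "0.43.1"
    # is not considered smaller than "0.9.0".
    return Version(bnb.__version__) >= Version(min_version)
```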
Wang Binluo
6cd4c32be4
[shardformer] fix the moe ( #5883 )
2024-07-03 20:02:19 +08:00
Edenzzzz
eb24fcd914
[Hotfix] Fix OPT gradient checkpointing forward
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
2024-07-03 14:57:57 +08:00
Haze188
ea94c07b95
[hotfix] fix the bug where a large tensor exceeds the maximum capacity of TensorBucket ( #5879 )
2024-07-02 12:42:02 +08:00
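A rough sketch of the failure mode suggested by the commit title above, assuming a bucket that batches small tensors up to a fixed capacity; the class shape is illustrative, not ColossalAI's actual `TensorBucket`:

```python
import torch

class TensorBucket:
    def __init__(self, max_numel: int):
        self.max_numel = max_numel
        self._tensors, self._numel = [], 0

    def add(self, tensor: torch.Tensor, flush_fn):
        n = tensor.numel()
        if n > self.max_numel:
            # A tensor larger than the whole bucket must bypass it,
            # otherwise adding it would always overflow the capacity.
            flush_fn([tensor])
            return
        if self._numel + n > self.max_numel:
            self.flush(flush_fn)          # make room first
        self._tensors.append(tensor)
        self._numel += n

    def flush(self, flush_fn):
        if self._tensors:
            flush_fn(self._tensors)
            self._tensors, self._numel = [], 0
```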
pre-commit-ci[bot]
7c2f79fa98
[pre-commit.ci] pre-commit autoupdate ( #5572 )
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/PyCQA/autoflake: v2.2.1 → v2.3.1](https://github.com/PyCQA/autoflake/compare/v2.2.1...v2.3.1 )
- [github.com/pycqa/isort: 5.12.0 → 5.13.2](https://github.com/pycqa/isort/compare/5.12.0...5.13.2 )
- [github.com/psf/black-pre-commit-mirror: 23.9.1 → 24.4.2](https://github.com/psf/black-pre-commit-mirror/compare/23.9.1...24.4.2 )
- [github.com/pre-commit/mirrors-clang-format: v13.0.1 → v18.1.7](https://github.com/pre-commit/mirrors-clang-format/compare/v13.0.1...v18.1.7 )
- [github.com/pre-commit/pre-commit-hooks: v4.3.0 → v4.6.0](https://github.com/pre-commit/pre-commit-hooks/compare/v4.3.0...v4.6.0 )
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-07-01 17:16:41 +08:00
Edenzzzz
936d0b0f7b
[doc] Update llama + sp compatibility; fix dist optim table
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
2024-07-01 17:07:22 +08:00
Jianghai
8ab46b4000
[Shardformer] change qwen2 modeling into gradient checkpointing style ( #5874 )
2024-07-01 16:45:09 +08:00
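For readers unfamiliar with the "gradient checkpointing style" mentioned above, the usual PyTorch pattern wraps each decoder layer in `torch.utils.checkpoint.checkpoint` so activations are recomputed during backward instead of stored; this sketch is generic, not the Qwen2 modeling code itself:

```python
import torch
from torch.utils.checkpoint import checkpoint

def run_decoder_layers(hidden_states, layers, use_checkpointing=True):
    for layer in layers:
        if use_checkpointing and torch.is_grad_enabled():
            # Recompute this layer's forward during backward instead of
            # keeping its activations in memory; use_reentrant=False is
            # the non-reentrant variant recommended by recent PyTorch.
            hidden_states = checkpoint(layer, hidden_states, use_reentrant=False)
        else:
            hidden_states = layer(hidden_states)
    return hidden_states
```

The trade-off is extra forward compute during backward in exchange for a much smaller activation footprint.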
HangXu
f5a52e1600
fp8 operators for compressed communication
cast_to_fp8, cast_from_fp8, all_reduce_fp8
2024-07-01 13:44:21 +08:00
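The operator names below come from the commit body above; their bodies are only a sketch of per-tensor-scaled FP8 casting (requires a PyTorch build with `torch.float8_e4m3fn`), not ColossalAI's actual implementation, and `all_reduce_fp8` is omitted:

```python
import torch

FP8_E4M3_MAX = 448.0  # largest finite magnitude representable in float8_e4m3fn

def cast_to_fp8(x: torch.Tensor):
    # Scale so the tensor's dynamic range fits into FP8, then quantize.
    amax = x.abs().max().clamp(min=1e-12)
    scale = FP8_E4M3_MAX / amax
    return (x * scale).to(torch.float8_e4m3fn), scale

def cast_from_fp8(x_fp8: torch.Tensor, scale: torch.Tensor, dtype=torch.float32):
    # De-quantize by undoing the scale in a higher-precision dtype.
    return x_fp8.to(dtype) / scale
```

The scale must travel with the quantized tensor (or be exchanged between ranks) so the receiver can de-quantize correctly.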
YeAnbang
ff535204fe
update transformers version
2024-06-28 06:24:30 +00:00