digger yu
71321a07cf
fix typo change dosen't to doesn't ( #5308 )
10 months ago
Frank Lee
8823cc4831
Merge pull request #5310 from hpcaitech/feature/npu
Feature/npu
10 months ago
Frank Lee
7cfed5f076
[feat] refactored extension module ( #5298 )
* [feat] refactored extension module
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
10 months ago
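An extension module refactor like this typically centralizes one pattern: try to import a pre-built kernel, and fall back to a pure-Python implementation when the build is missing. A minimal sketch of that pattern (the names here are hypothetical, not the refactored module's real API):

```python
import importlib
import warnings


def load_extension(compiled_name: str, fallback):
    """Return a pre-built kernel module if importable, else a Python fallback.

    `compiled_name` and `fallback` are illustrative, not ColossalAI's API.
    """
    try:
        return importlib.import_module(compiled_name)
    except ImportError:
        warnings.warn(f"{compiled_name} not built; using slower Python fallback")
        return fallback
```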
digger yu
bce9499ed3
fix some typo ( #5307 )
10 months ago
flybird11111
f7e3f82a7e
fix llama pretrain ( #5287 )
10 months ago
ver217
148469348a
Merge branch 'main' into sync/npu
11 months ago
Wenhao Chen
ef4f0ee854
[hotfix]: add pp sanity check and fix mbs arg ( #5268 )
* fix: fix misleading mbs arg
* feat: add pp sanity check
* fix: fix 1f1b sanity check
11 months ago
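A pipeline-parallel sanity check of this kind usually validates the microbatch arguments before training starts: the global batch must split evenly into microbatches, and a 1F1B schedule needs enough microbatches to fill the pipeline. A sketch under those assumptions (function and argument names are hypothetical):

```python
def check_pipeline_config(global_batch_size: int,
                          microbatch_size: int,
                          pp_stages: int) -> int:
    """Return num_microbatches, or raise if the config cannot be scheduled."""
    if global_batch_size % microbatch_size != 0:
        raise ValueError(
            f"global batch {global_batch_size} is not divisible by "
            f"microbatch size {microbatch_size}"
        )
    num_microbatches = global_batch_size // microbatch_size
    # 1F1B keeps every stage busy only if there are at least as many
    # microbatches as pipeline stages.
    if num_microbatches < pp_stages:
        raise ValueError(
            f"{num_microbatches} microbatches < {pp_stages} pipeline stages"
        )
    return num_microbatches
```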
binmakeswell
c174c4fc5f
[doc] fix doc typo ( #5256 )
* [doc] fix annotation display
* [doc] fix llama2 doc
11 months ago
Hongxin Liu
d202cc28c0
[npu] change device to accelerator api ( #5239 )
* update accelerator
* fix timer
* fix amp
* update
* fix
* update bug
* add error raise
* fix autocast
* fix set device
* remove doc accelerator
* update doc
* update doc
* update doc
* use nullcontext
* update cpu
* update null context
* change time limit for example
* update
* update
* update
* update
* [npu] polish accelerator code
---------
Co-authored-by: Xuanlei Zhao <xuanlei.zhao@gmail.com>
Co-authored-by: zxl <43881818+oahzxl@users.noreply.github.com>
11 months ago
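The device-to-accelerator change replaces hard-coded `torch.cuda` calls with a backend-neutral accessor, and the `nullcontext` bullet suggests autocast is skipped cleanly where AMP is unavailable. A rough sketch of that shape; this is an illustration, not ColossalAI's actual accelerator API, and the NPU probe assumes the `torch_npu` plugin is installed:

```python
from contextlib import nullcontext

import torch


def get_accelerator_name() -> str:
    # Illustrative probe order: Ascend NPU (torch_npu attaches torch.npu),
    # then CUDA, then CPU.
    if hasattr(torch, "npu") and torch.npu.is_available():
        return "npu"
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"


def autocast_ctx(enabled: bool):
    """Return an autocast context, or nullcontext when AMP is disabled."""
    if not enabled:
        return nullcontext()
    return torch.autocast(device_type=get_accelerator_name())
```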
Xuanlei Zhao
dd2c28a323
[npu] use extension for op builder ( #5172 )
* update extension
* update cpu adam
* update is
* add doc for cpu adam
* update kernel
* update commit
* update flash
* update memory efficient
* update flash attn
* update flash attention loader
* update api
* fix
* update doc
* update example time limit
* reverse change
* fix doc
* remove useless kernel
* fix
* not use warning
* update
* update
11 months ago
Wenhao Chen
3c0d82b19b
[pipeline]: support arbitrary batch size in forward_only mode ( #5201 )
* fix: remove drop last in val & test dataloader
* feat: add run_forward_only, support arbitrary bs
* chore: modify ci script
11 months ago
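The fix has two halves: the val/test dataloaders stop dropping the last (possibly smaller) batch, and the forward-only schedule accepts whatever batch size arrives. The dataloader half in plain PyTorch (the dataset is made up):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(103, 8))  # 103 does not divide by 16

# drop_last=False keeps the final 7-sample batch, so evaluation covers
# every sample; forward_only mode must therefore handle arbitrary sizes.
val_loader = DataLoader(dataset, batch_size=16, drop_last=False)
```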
Wenhao Chen
4fa689fca1
[pipeline]: fix p2p comm, add metadata cache and support llama interleaved pp ( #5134 )
* test: add more p2p tests
* fix: remove send_forward_recv_forward as the p2p op list needs to use the same group
* fix: make send and receive atomic
* feat: update P2PComm fn
* feat: add metadata cache in 1f1b
* feat: add metadata cache in interleaved pp
* feat: modify is_xx_stage fn
* revert: add _broadcast_object_list
* feat: add interleaved pp in llama policy
* feat: set NCCL_BUFFSIZE in HybridParallelPlugin
11 months ago
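The metadata cache exploits the fact that inter-stage tensors keep the same shape and dtype for every microbatch after the first: shape/dtype is shipped once, and later sends carry only raw payloads. A simplified sketch (assumes torch >= 2.0 for `send_object_list`/`recv_object_list`; not the actual `P2PComm` code):

```python
import torch
import torch.distributed as dist

_meta_sent = False
_meta_cache = None


def send_activation(t: torch.Tensor, peer: int) -> None:
    global _meta_sent
    if not _meta_sent:
        # First microbatch only: ship shape/dtype as a pickled object.
        dist.send_object_list([(tuple(t.shape), t.dtype)], dst=peer)
        _meta_sent = True
    dist.send(t, dst=peer)  # later sends are raw payloads


def recv_activation(peer: int, device: torch.device) -> torch.Tensor:
    global _meta_cache
    if _meta_cache is None:
        buf = [None]
        dist.recv_object_list(buf, src=peer)
        _meta_cache = buf[0]
    shape, dtype = _meta_cache
    out = torch.empty(shape, dtype=dtype, device=device)
    dist.recv(out, src=peer)
    return out
```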
flybird11111
21aa5de00b
[gemini] hotfix NaN loss while using Gemini + tensor_parallel ( #5150 )
* fix
* fix
* fix
* test ci
* fix ci
12 months ago
binmakeswell
177c79f2d1
[doc] add moe news ( #5128 )
* [doc] add moe news
* [doc] add moe news
* [doc] add moe news
1 year ago
Wenhao Chen
7172459e74
[shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert ( #5088 )
* [shardformer] implement policy for all GPT-J models and test
* [shardformer] support interleaved pipeline parallel for bert finetune
* [shardformer] shardformer support falcon (#4883 )
* [shardformer]: fix interleaved pipeline for bert model (#5048 )
* [hotfix]: disable seq parallel for gptj and falcon, and polish code (#5093 )
* Add Mistral support for Shardformer (#5103 )
* [shardformer] add tests to mistral (#5105 )
---------
Co-authored-by: Pengtai Xu <henryxu880@gmail.com>
Co-authored-by: ppt0011 <143150326+ppt0011@users.noreply.github.com>
Co-authored-by: flybird11111 <1829166702@qq.com>
Co-authored-by: eric8607242 <e0928021388@gmail.com>
1 year ago
digger yu
d5661f0f25
[nfc] fix typo change directoty to directory ( #5111 )
1 year ago
Xuanlei Zhao
3acbf6d496
[npu] add npu support for hybrid plugin and llama ( #5090 )
* llama 3d
* update
* fix autocast
1 year ago
flybird11111
aae496631c
[shardformer] fix flash attention: when the mask is causal, just don't unpad it ( #5084 )
* fix flash attn
* fix
1 year ago
Hongxin Liu
1cd7efc520
[inference] refactor examples and fix schedule ( #5077 )
* [setup] refactor infer setup
* [hotfix] fix inference behavior on 1 GPU
* [example] refactor inference examples
1 year ago
Bin Jia
4e3959d316
[hotfix/hybridengine] Fix init model with random parameters in benchmark ( #5074 )
* fix init model with random parameters
* fix example
1 year ago
github-actions[bot]
8921a73c90
[format] applied code formatting on changed files in pull request 5067 ( #5072 )
Co-authored-by: github-actions <github-actions@github.com>
1 year ago
Xu Kai
fb103cfd6e
[inference] update examples and engine ( #5073 )
* update examples and engine
* fix choices
* update example
1 year ago
Hongxin Liu
e5ce4c8ea6
[npu] add npu support for gemini and zero ( #5067 )
* [npu] setup device utils (#5047 )
* [npu] add npu device support
* [npu] support low level zero
* [test] update npu zero plugin test
* [hotfix] fix import
* [test] recover tests
* [npu] gemini support npu (#5052 )
* [npu] refactor device utils
* [gemini] support npu
* [example] llama2+gemini support npu
* [kernel] add arm cpu adam kernel (#5065 )
* [kernel] add arm cpu adam
* [optim] update adam optimizer
* [kernel] arm cpu adam remove bf16 support
1 year ago
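The ARM CPU Adam kernel vectorizes the standard Adam update (with NEON on ARM); the math it implements is the usual bias-corrected step, shown here as a PyTorch reference (the C++ kernel fuses these ops over contiguous chunks):

```python
import torch


@torch.no_grad()
def adam_step(p, g, m, v, step, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One bias-corrected Adam update, matching what a fused kernel computes."""
    m.mul_(beta1).add_(g, alpha=1 - beta1)          # first moment
    v.mul_(beta2).addcmul_(g, g, value=1 - beta2)   # second moment
    m_hat = m / (1 - beta1 ** step)
    v_hat = v / (1 - beta2 ** step)
    p.addcdiv_(m_hat, v_hat.sqrt().add_(eps), value=-lr)
```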
Cuiqing Li (李崔卿)
bce919708f
[Kernels] added flash-decoding of triton ( #5063 )
* added flash-decoding of triton based on lightllm kernel
* add req
* clean
* clean
* delete build.sh
---------
Co-authored-by: cuiqing.li <lixx336@gmail.com>
1 year ago
Xu Kai
fd6482ad8c
[inference] Refactor inference architecture ( #5057 )
* [inference] support only TP (#4998 )
* support only tp
* enable tp
* add support for bloom (#5008 )
* [refactor] refactor gptq and smoothquant llama (#5012 )
* refactor gptq and smoothquant llama
* fix import error
* fix linear import torch-int
* fix smoothquant llama import error
* fix import accelerate error
* fix bug
* fix import smooth cuda
* fix smoothcuda
* [Inference Refactor] Merge chatglm2 with pp and tp (#5023 )
merge chatglm with pp and tp
* [Refactor] remove useless inference code (#5022 )
* remove useless code
* fix quant model
* fix test import bug
* mv original inference legacy
* fix chatglm2
* [Refactor] refactor policy search and quant type controlling in inference (#5035 )
* [Refactor] refactor policy search and quant type controlling in inference
* [inference] update readme (#5051 )
* update readme
* update readme
* fix architecture
* fix table
* fix table
* [inference] update example (#5053 )
* update example
* fix run.sh
* fix rebase bug
* fix some errors
* update readme
* add some features
* update interface
* update readme
* update benchmark
* add requirements-infer
---------
Co-authored-by: Bin Jia <45593998+FoolPlayer@users.noreply.github.com>
Co-authored-by: Zhongkai Zhao <kanezz620@gmail.com>
1 year ago
flybird11111
bc09b95f50
[example] fix llama example's loss error when using gemini plugin ( #5060 )
fix llama example
1 year ago
Elsa Granger
b2ad0d9e8f
[pipeline,shardformer] Fix p2p efficiency in pipeline, allow skipping loading weight not in weight_map when `strict=False`, fix llama flash attention forward, add flop estimation by megatron in llama benchmark ( #5017 )
* Use p2p
* Cannot do bidirectional p2p send
* Refactor tensor creation and serialization in P2P communication
* Fix llama forward args in flash attention
* Add flop estimate from megatron
* Support loading weight not in weight_map when strict=False in hybrid_parallel
* Use send_forward_recv_backward, etc in 1f1b
* Use dataclass for metadata
Remove torch.cuda.synchronize() as suggested
* Add comment about the torch.cuda.synchronize for potential error
* Typo
* Update hybrid_parallel_checkpoint_io.py
* Update p2p.py
* Update one_f_one_b.py
* Update p2p.py
---------
Co-authored-by: flybird11111 <1829166702@qq.com>
1 year ago
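On "add flop estimation by megatron": the Megatron papers estimate transformer FLOPs analytically from model shape, so a benchmark can report TFLOPS without a profiler. A sketch of the commonly cited decoder-only estimate (forward + backward, no activation recomputation; an approximation, not the benchmark's exact code):

```python
def megatron_flops_per_iter(batch, seq, layers, hidden, vocab):
    """~72*B*s*l*h^2 * (1 + s/(6h) + V/(12*l*h)); cf. Narayanan et al. 2021."""
    return (72.0 * batch * seq * layers * hidden ** 2
            * (1 + seq / (6 * hidden) + vocab / (12 * layers * hidden)))


# e.g. report achieved TFLOPS per GPU:
# tflops = megatron_flops_per_iter(...) / (iter_seconds * n_gpus * 1e12)
```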
Cuiqing Li (李崔卿)
28052a71fb
[Kernels] Update triton kernels to 2.1.0 ( #5046 )
* update flash-context-attention
* adding kernels
* fix
* reset
* add build script
* add building process
* add llama2 example
* add colossal-llama2 test
* clean
* fall back test setting
* fix test file
* clean
* clean
* clean
---------
Co-authored-by: cuiqing.li <lixx336@gmail.com>
1 year ago
Zhongkai Zhao
70885d707d
[hotfix] Support extra_kwargs in ShardConfig ( #5031 )
* [refactor]: replace inference args with extra_kwargs in ShardConfig
* modify shardconfig
* polish code
* fix policy bug in llama
* fix bug in auto policy
* remove setattr in ShardConfig
1 year ago
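Moving inference flags into `extra_kwargs` means `ShardConfig` keeps a fixed set of first-class fields plus one open dict, instead of attributes being `setattr`'d on after construction. A reduced illustration of the shape (not the real class definition):

```python
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class ShardConfig:  # reduced illustration, not the real class
    tensor_parallel_size: int = 1
    enable_flash_attention: bool = False
    # open-ended options (e.g. inference-only switches) ride along here
    extra_kwargs: Dict[str, Any] = field(default_factory=dict)


cfg = ShardConfig(extra_kwargs={"inference_only": True})
```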
Wenhao Chen
724441279b
[moe]: fix ep/tp tests, add hierarchical all2all ( #4982 )
* fix: add warning for EP different behavior
* fix: use shard_data in ep & tp model
* to: add used_capacity
* fix: fix router test
* feat: add create_ep_node_group
* feat: add create_ep_hierarchical_group fn
* feat: add HierarchicalAllToAll
* test: add hierarchical all2all test
* fix: fix test errors
* fix: simplify create_ep_hierarchical_group
* fix: add hierarchical_alltoall arg
* fix: fix environ typo
* revert: revert process mesh order
* to: add todo mark
* fix: skip hierarchical_comm if torch < 1.13.1
1 year ago
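Hierarchical all-to-all trades one big cross-node exchange for two hops: an all-to-all inside each node over the fast interconnect, then one across nodes. A schematic sketch with `torch.distributed` (the re-chunking between hops that restores the flat all-to-all layout is omitted):

```python
import torch
import torch.distributed as dist


def hierarchical_all_to_all(x: torch.Tensor, intra_group, inter_group):
    """Two-hop exchange: only the second hop crosses the slow network.

    A real implementation permutes data between hops so the result matches
    a flat all-to-all; that bookkeeping is omitted here.
    """
    staged = torch.empty_like(x)
    dist.all_to_all_single(staged, x, group=intra_group)    # within node
    out = torch.empty_like(staged)
    dist.all_to_all_single(out, staged, group=inter_group)  # across nodes
    return out
```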
Xuanlei Zhao
f71e63b0f3
[moe] support optimizer checkpoint ( #5015 )
* Refactor MoE Manager setup method
* unshard optim ckpt
* optim io
* update transformer version
* update requirements
* update ckpt
* update ckpt
* update ckpt
* fix engine
* fix engine
1 year ago
Xuanlei Zhao
dc003c304c
[moe] merge moe into main ( #4978 )
* update moe module
* support openmoe
1 year ago
Cuiqing Li
459a88c806
[Kernels] Updated Triton kernels to 2.1.0 and added flash-decoding for llama token attention ( #4965 )
* adding flash-decoding
* clean
* adding kernel
* adding flash-decoding
* add integration
* add
* adding kernel
* adding kernel
* adding triton 2.1.0 features for inference
* update bloom triton kernel
* remove useless vllm kernels
* clean codes
* fix
* adding files
* fix readme
* update llama flash-decoding
---------
Co-authored-by: cuiqing.li <lixx336@gmail.com>
1 year ago
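Flash-decoding targets generation-time attention, where a single query position attends over a long KV cache: the cache is split into chunks processed in parallel, and per-chunk softmax statistics are merged exactly. A plain-PyTorch reference of the math (the shipped kernel is a fused Triton implementation):

```python
import math
import torch


def flash_decode_reference(q, k, v, chunk=256):
    """q: (d,), k/v: (n, d). Exact attention via chunked softmax merge."""
    scale = 1.0 / math.sqrt(q.shape[-1])
    ms, nums, dens = [], [], []
    for s in range(0, k.shape[0], chunk):
        scores = k[s:s + chunk] @ q * scale       # (c,)
        m = scores.max()
        p = torch.exp(scores - m)
        ms.append(m)
        nums.append(p @ v[s:s + chunk])           # partial numerator, (d,)
        dens.append(p.sum())                      # partial denominator
    m_all = torch.stack(ms).max()
    w = torch.exp(torch.stack(ms) - m_all)        # rescale each chunk
    num = (w[:, None] * torch.stack(nums)).sum(0)
    den = (w * torch.stack(dens)).sum()
    return num / den
```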
アマデウス
4e4a10c97d
updated c++17 compiler flags ( #4983 )
1 year ago
Jianghai
c6cd629e7a
[Inference] Add Bench Chatglm2 script ( #4963 )
* add bench chatglm
* fix bug and make utils
---------
Co-authored-by: CjhHa1 <cjh18671720497@outlook.com>
1 year ago
Xu Kai
785802e809
[inference] add reference and fix some bugs ( #4937 )
* add reference and fix some bugs
* update gptq init
---------
Co-authored-by: Xu Kai <xukai16@foxmail.com>
1 year ago
Cuiqing Li
3a41e8304e
[Refactor] Integrated some lightllm kernels into token-attention ( #4946 )
* add some req for inference
* clean codes
* add codes
* add some lightllm deps
* clean codes
* hello
* delete rms files
* add some comments
* add comments
* add doc
* add lightllm deps
* add lightllm chatglm2 kernels
* add lightllm chatglm2 kernels
* replace rotary embedding with lightllm kernel
* add some comments
* add some comments
* add some comments
* add
* replace fwd kernel att1
* fix a arg
* add
* add
* fix token attention
* add some comments
* clean codes
* modify comments
* fix readme
* fix bug
* fix bug
---------
Co-authored-by: cuiqing.li <lixx336@gmail.com>
Co-authored-by: CjhHa1 <cjh18671720497@outlook.com>
1 year ago
Xu Kai
611a5a80ca
[inference] Add smoothquant for llama ( #4904 )
* [inference] add int8 rotary embedding kernel for smoothquant (#4843 )
* [inference] add smoothquant llama attention (#4850 )
* add smoothquant llama attention
* remove useless code
* remove useless code
* fix import error
* rename file name
* [inference] add silu linear fusion for smoothquant llama mlp (#4853 )
* add silu linear
* update skip condition
* catch smoothquant cuda lib exception
* process exception for tests
* [inference] add llama mlp for smoothquant (#4854 )
* add llama mlp for smoothquant
* fix down out scale
* remove duplicate lines
* add llama mlp check
* delete useless code
* [inference] add smoothquant llama (#4861 )
* add smoothquant llama
* fix attention accuracy
* fix accuracy
* add kv cache and save pretrained
* refactor example
* delete smooth
* refactor code
* [inference] add smooth function and delete useless code for smoothquant (#4895 )
* add smooth function and delete useless code
* update datasets
* remove duplicate import
* delete useless file
* refactor codes (#4902 )
* refactor code
* add license
* add torch-int and smoothquant license
1 year ago
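SmoothQuant's core idea: activation outlier channels are hard to quantize, so each activation channel is divided by a smoothing scale and the matching weight column is multiplied by it, shifting quantization difficulty into the weights. A sketch of the per-channel scale computation (alpha is the migration strength, commonly 0.5):

```python
import torch


def smoothquant_scales(act_absmax, weight, alpha=0.5):
    """s_j = max|X_j|^a / max|W_j|^(1-a), per input channel j.

    act_absmax: (in_features,) calibration statistic; weight: (out, in).
    The smoothed layer computes (X / s) @ (W * s)^T == X @ W^T.
    """
    w_absmax = weight.abs().amax(dim=0)  # max over output channels
    s = act_absmax.pow(alpha) / w_absmax.pow(1 - alpha)
    return s.clamp(min=1e-5)
```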
Blagoy Simandoff
8aed02b957
[nfc] fix minor typo in README ( #4846 )
1 year ago
Xu Kai
d1fcc0fa4d
[infer] fix test bug ( #4838 )
* fix test bug
* delete useless code
* fix typo
1 year ago
Jianghai
013a4bedf0
[inference] fix import bug and delete useless init ( #4830 )
* fix import bug and release useless init
* fix
* fix
* fix
1 year ago
Yuanheng Zhao
573f270537
[Infer] Serving example w/ ray-serve (multiple GPU case) ( #4841 )
* fix imports
* add ray-serve with Colossal-Infer tp
* trivial: send requests script
* add README
* fix worker port
* fix readme
* use app builder and autoscaling
* trivial: input args
* clean code; revise readme
* testci (skip example test)
* use auto model/tokenizer
* revert imports fix (fixed in other PRs)
1 year ago
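The serving example follows the standard Ray Serve shape: wrap the inference engine in a deployment, give each replica its GPU(s), and let autoscaling manage replica count. A skeletal sketch using Ray Serve's public API (the engine itself is a placeholder):

```python
from ray import serve
from starlette.requests import Request


@serve.deployment(ray_actor_options={"num_gpus": 1},
                  autoscaling_config={"min_replicas": 1, "max_replicas": 2})
class Generator:
    def __init__(self):
        self.engine = None  # placeholder: load the TP inference engine here

    async def __call__(self, request: Request) -> str:
        prompt = (await request.json())["prompt"]
        return f"generated: {prompt}"  # placeholder generation


app = Generator.bind()
# serve.run(app)  # serves on http://127.0.0.1:8000/ by default
```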
Yuanheng Zhao
3a74eb4b3a
[Infer] Colossal-Inference serving example w/ TorchServe (single GPU case) ( #4771 )
* add Colossal-Inference serving example w/ TorchServe
* add dockerfile
* fix dockerfile
* fix dockerfile: fix commit hash, install curl
* refactor file structure
* revise readme
* trivial
* trivial: dockerfile format
* clean dir; revise readme
* fix comments: fix imports and configs
* fix formats
* remove unused requirements
1 year ago
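For the single-GPU TorchServe example, the work lives in a custom handler: TorchServe calls `initialize` once, then `preprocess`/`inference`/`postprocess` per request batch. A skeletal handler (class name and generation logic are placeholders):

```python
from ts.torch_handler.base_handler import BaseHandler


class InferenceHandler(BaseHandler):  # hypothetical handler name
    def initialize(self, context):
        # context carries the model dir and GPU id; load model here
        self.initialized = True

    def preprocess(self, data):
        # each request row exposes its payload under "body" or "data"
        return [row.get("body") or row.get("data") for row in data]

    def inference(self, prompts):
        return [f"generated: {p}" for p in prompts]  # placeholder

    def postprocess(self, outputs):
        return outputs
```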
binmakeswell
822051d888
[doc] update slack link ( #4823 )
1 year ago
flybird11111
26cd6d850c
[fix] fix weekly running example ( #4787 )
* [fix] fix weekly running example
* [fix] fix weekly running example
1 year ago
Xu Kai
946ab56c48
[feature] add gptq for inference ( #4754 )
* [gptq] add gptq kernel (#4416 )
* add gptq
* refactor code
* fix tests
* replace auto-gptq
* rename inference/quant
* refactor test
* add auto-gptq as an option
* reset requirements
* change assert and check auto-gptq
* add import warnings
* change test flash attn version
* remove example
* change requirements of flash_attn
* modify tests
* [skip ci] change requirements-test
* [gptq] faster gptq cuda kernel (#4494 )
* [skip ci] add cuda kernels
* add license
* [skip ci] fix max_input_len
* format files & change test size
* [skip ci]
* [gptq] add gptq tensor parallel (#4538 )
* add gptq tensor parallel
* add gptq tp
* delete print
* add test gptq check
* add test auto gptq check
* [gptq] combine gptq and kv cache manager (#4706 )
* combine gptq and kv cache manager
* add init bits
* delete useless code
* add model path
* delete useless print and update test
* delete useless import
* move option gptq to shard config
* change replace linear to shardformer
* update bloom policy
* delete useless code
* fix import bug and delete useless code
* change colossalai/gptq to colossalai/quant/gptq
* update import linear for tests
* delete useless code and mv gptq_kernel to kernel directory
* fix triton kernel
* add triton import
1 year ago
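GPTQ quantizes weights column by column with a Hessian-based correction of the not-yet-quantized columns; that solver is too long to show here, but the int4 weight-only storage it produces reduces to per-group scales plus small integers. A round-to-nearest sketch of just the storage format (no GPTQ error compensation):

```python
import torch


def quantize_int4_groupwise(w: torch.Tensor, group: int = 128):
    """w: (out, in) -> uint8 codes in [0, 15] plus per-group fp scales."""
    out_f, in_f = w.shape
    assert in_f % group == 0
    wg = w.reshape(out_f, in_f // group, group)
    scale = (wg.abs().amax(dim=-1, keepdim=True) / 7.0).clamp(min=1e-8)
    q = torch.clamp(torch.round(wg / scale), -8, 7) + 8  # shift to unsigned
    return q.to(torch.uint8), scale


def dequantize_int4_groupwise(q, scale):
    return ((q.float() - 8) * scale).reshape(q.shape[0], -1)
```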
Baizhou Zhang
df66741f77
[bug] fix get_default_parser in examples ( #4764 )
1 year ago
Wenhao Chen
7b9b86441f
[chat]: update rm, add wandb and fix bugs ( #4471 )
* feat: modify forward fn of critic and reward model
* feat: modify calc_action_log_probs
* to: add wandb in sft and rm trainer
* feat: update train_sft
* feat: update train_rm
* style: modify type annotation and add warning
* feat: pass tokenizer to ppo trainer
* to: modify trainer base and maker base
* feat: add wandb in ppo trainer
* feat: pass tokenizer to generate
* test: update generate fn tests
* test: update train tests
* fix: remove action_mask
* feat: remove unused code
* fix: fix wrong ignore_index
* fix: fix mock tokenizer
* chore: update requirements
* revert: modify make_experience
* fix: fix inference
* fix: add padding side
* style: modify _on_learn_batch_end
* test: use mock tokenizer
* fix: use bf16 to avoid overflow
* fix: fix workflow
* [chat] fix gemini strategy
* [chat] fix
* sync: update colossalai strategy
* fix: fix args and model dtype
* fix: fix checkpoint test
* fix: fix requirements
* fix: fix missing import and wrong arg
* fix: temporarily skip gemini test in stage 3
* style: apply pre-commit
* fix: temporarily skip gemini test in stage 1&2
---------
Co-authored-by: Mingyan Jiang <1829166702@qq.com>
1 year ago
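The wandb additions are the library's standard pattern: one `wandb.init` per training run, `wandb.log` per step. A minimal sketch of what "add wandb in sft and rm trainer" amounts to (project and metric names are made up):

```python
import wandb

run = wandb.init(project="coati-rm", config={"lr": 1e-5})
for step in range(3):
    loss = 1.0 / (step + 1)  # placeholder training loss
    wandb.log({"train/loss": loss}, step=step)
run.finish()
```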
Hongxin Liu
079bf3cb26
[misc] update pre-commit and run all files ( #4752 )
* [misc] update pre-commit
* [misc] run pre-commit
* [misc] remove useless configuration files
* [misc] ignore cuda for clang-format
1 year ago
github-actions[bot]
3c6b831c26
[format] applied code formatting on changed files in pull request 4743 ( #4750 )
Co-authored-by: github-actions <github-actions@github.com>
1 year ago