Commit Graph

3461 Commits (fbf33ecd019ce0e075b76b628e6e8a319cfc43e3)

Author SHA1 Message Date
github-actions[bot] e6707a6e8d
[format] applied code formatting on changed files in pull request 5510 (#5517)
Co-authored-by: github-actions <github-actions@github.com>
2024-03-27 11:21:03 +08:00
Hongxin Liu 19e1a5cf16
[shardformer] update colo attention to support custom mask (#5510)
* [feature] refactor colo attention (#5462)

* [extension] update api

* [feature] add colo attention

* [feature] update sdpa

* [feature] update npu attention

* [feature] update flash-attn

* [test] add flash attn test

* [test] update flash attn test

* [shardformer] update modeling to fit colo attention (#5465)

* [misc] refactor folder structure

* [shardformer] update llama flash-attn

* [shardformer] fix llama policy

* [devops] update tensornvme install

* [test] update llama test

* [shardformer] update colo attn kernel dispatch

* [shardformer] update blip2

* [shardformer] update chatglm

* [shardformer] update gpt2

* [shardformer] update gptj

* [shardformer] update opt

* [shardformer] update vit

* [shardformer] update colo attention mask prep

* [shardformer] update whisper

* [test] fix shardformer tests (#5514)

* [test] fix shardformer tests

* [test] fix shardformer tests
2024-03-27 11:19:32 +08:00
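The commit above adds custom-mask support to colo attention. The core idea — adding a mask to attention scores before the softmax so blocked positions receive zero weight — can be sketched in pure Python. This is a minimal illustration, not ColossalAI's actual API; the function name and signature are hypothetical.

```python
import math

def masked_attention(q, k, v, mask):
    """Minimal scaled dot-product attention with an additive custom mask.

    q, k, v: lists of vectors (one per position).
    mask[i][j] = 0.0 lets query i attend to position j;
    float('-inf') blocks it (weight becomes exactly 0 after softmax).
    """
    d = len(q[0])
    scale = 1.0 / math.sqrt(d)
    out = []
    for i, qi in enumerate(q):
        # raw scores with the mask added before softmax
        scores = [
            scale * sum(a * b for a, b in zip(qi, kj)) + mask[i][j]
            for j, kj in enumerate(k)
        ]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * vj[t] for w, vj in zip(weights, v))
                    for t in range(len(v[0]))])
    return out
```

A causal mask is the special case where `mask[i][j] = -inf` for `j > i`; custom masks generalize this to arbitrary patterns (padding, sliding windows, etc.).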
Edenzzzz 9a3321e9f4
Merge pull request #5515 from Edenzzzz/fix_layout_convert
Fix layout converter caching
2024-03-26 19:51:02 +08:00
Edenzzzz 18edcd5368 Empty-Commit 2024-03-26 19:50:41 +08:00
Edenzzzz 61da3fbc52 fixed layout converter caching and updated tester 2024-03-26 17:22:27 +08:00
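The caching fix above concerns the layout converter. The general pattern — memoizing a conversion plan keyed by the full (source, target) layout pair so a stale plan is never reused for a different pair — can be sketched generically. The planner below is hypothetical and returns a placeholder plan, not ColossalAI's real conversion logic.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def conversion_plan(src_layout: str, dst_layout: str) -> tuple:
    # Hypothetical planner: in a real sharding layout converter the plan
    # would be a sequence of communication ops; here it is just a label.
    # The cache key (src_layout, dst_layout) must capture everything the
    # plan depends on, or cached results will be wrong for other inputs.
    return (src_layout, "->", dst_layout)
```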
傅剑寒 e6496dd371
[Inference] Optimize request handler of llama (#5512)
* optimize request_handler

* fix coding style
2024-03-26 16:37:14 +08:00
Rocky Duan cbe34c557c
Fix ColoTensorSpec for py11 (#5440) 2024-03-26 15:56:49 +08:00
Hongxin Liu a7790a92e8
[devops] fix example test ci (#5504) 2024-03-26 15:09:05 +08:00
Yuanheng Zhao 131f32a076
[fix] fix grok-1 example typo (#5506) 2024-03-26 10:19:42 +08:00
flybird11111 0688d92e2d
[shardformer]Fix lm parallel. (#5480)
* fix

padding vocab_size when using pipeline parallelism

padding vocab_size when using pipeline parallelism

fix

fix

* fix

* fix

fix

fix

* fix gather output

* fix

* fix

* fix

fix resize embedding

fix resize embedding

* fix resize embedding

fix

* revert

* revert

* revert

* fix lm forward distribution

* fix

* test ci

* fix
2024-03-25 17:21:51 +08:00
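The commit bodies above repeatedly mention padding `vocab_size` under pipeline/tensor parallelism. The reason is that the embedding and LM-head weight must split evenly across shards, so the vocabulary is rounded up to a multiple of the parallel degree. A hedged sketch of that rounding (function name is illustrative, not ColossalAI's):

```python
def padded_vocab_size(vocab_size: int, divisor: int) -> int:
    """Round vocab_size up to the next multiple of `divisor` so the
    embedding table can be split evenly across `divisor` shards."""
    remainder = vocab_size % divisor
    return vocab_size if remainder == 0 else vocab_size + divisor - remainder
```

The extra padded rows are never produced by the tokenizer, so they only cost a little memory; logits for them are masked or ignored.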
Runyu Lu 6251d68dc9
[fix] PR #5354 (#5501)
* [fix]

* [fix]

* Update config.py docstring

* [fix] docstring align

* [fix] docstring align

* [fix] docstring align
2024-03-25 15:24:17 +08:00
Runyu Lu 1d626233ce
Merge pull request #5434 from LRY89757/colossal-infer-cuda-graph
[feat] cuda graph support and refactor non-functional api
2024-03-25 14:55:59 +08:00
Runyu Lu 68e9396bc0 [fix] merge conflicts 2024-03-25 14:48:28 +08:00
binmakeswell 34e909256c
[release] grok-1 inference benchmark (#5500)
* [release] grok-1 inference benchmark

* [release] grok-1 inference benchmark

* [release] grok-1 inference benchmark

* [release] grok-1 inference benchmark

* [release] grok-1 inference benchmark
2024-03-25 14:42:51 +08:00
yuehuayingxueluo 87079cffe8
[Inference] Support FP16/BF16 Flash Attention 2 And Add high_precision Flag To Rotary Embedding (#5461)
* Support FP16/BF16 Flash Attention 2

* fix bugs in test_kv_cache_memcpy.py

* add context_kv_cache_memcpy_kernel.cu

* rm typename MT

* add tail process

* add high_precision

* add high_precision to config.py

* rm unused code

* change the comment for the high_precision parameter

* update test_rotary_embdding_unpad.py

* fix vector_copy_utils.h

* add comment for self.high_precision when using float32
2024-03-25 13:40:34 +08:00
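The `high_precision` flag above lets rotary embedding compute in float32 even when the model runs in FP16/BF16. The motivation — rounding every intermediate to half precision makes long accumulations stagnate — can be demonstrated with Python's half-precision `struct` format. This is a toy illustration of the numeric effect, not ColossalAI's kernel; all names here are hypothetical.

```python
import struct

def to_fp16(x: float) -> float:
    """Round a Python float to IEEE half precision and back."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

def dot(a, b, high_precision: bool) -> float:
    """Toy dot product: accumulate in full Python float precision
    (high_precision=True) or round every intermediate to fp16."""
    acc = 0.0
    for x, y in zip(a, b):
        prod = x * y if high_precision else to_fp16(x * y)
        acc = acc + prod if high_precision else to_fp16(acc + prod)
    return acc
```

With 4096 terms of 0.001, the fp16 accumulator gets stuck near 4.0 (each addend falls below half an ulp), while the high-precision path reaches about 4.096.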
Wenhao Chen bb0a668fee
[hotfix] set return_outputs=False in examples and polish code (#5404)
* fix: simplify merge_batch

* fix: use return_outputs=False to eliminate extra memory consumption

* feat: add return_outputs warning

* style: remove `return_outputs=False` as it is the default value
2024-03-25 12:31:09 +08:00
Runyu Lu ff4998c6f3 [fix] remove unused comment 2024-03-25 12:00:57 +08:00
Runyu Lu 9fe61b4475 [fix] 2024-03-25 11:37:58 +08:00
Yuanheng Zhao 5fcd7795cd
[example] update Grok-1 inference (#5495)
* revise grok-1 example

* remove unused arg in scripts

* prevent re-installing torch

* update readme

* revert modifying colossalai requirements

* add perf

* trivial

* add tokenizer url
2024-03-24 20:24:11 +08:00
binmakeswell 6df844b8c4
[release] grok-1 314b inference (#5490)
* [release] grok-1 inference

* [release] grok-1 inference

* [release] grok-1 inference
2024-03-22 15:48:12 +08:00
Hongxin Liu 848a574c26
[example] add grok-1 inference (#5485)
* [misc] add submodule

* remove submodule

* [example] support grok-1 tp inference

* [example] add grok-1 inference script

* [example] refactor code

* [example] add grok-1 readme

* [example] add test ci

* [example] update readme
2024-03-21 18:07:22 +08:00
Runyu Lu 5b017d6324 [fix] 2024-03-21 15:55:25 +08:00
Runyu Lu 606603bb88 Merge branch 'feature/colossal-infer' of https://github.com/hpcaitech/ColossalAI into colossal-infer-cuda-graph 2024-03-21 14:25:22 +08:00
Runyu Lu 4eafe0c814 [fix] unused option 2024-03-21 11:28:42 +08:00
binmakeswell d158fc0e64
[doc] update open-sora demo (#5479)
* [doc] update open-sora demo

* [doc] update open-sora demo

* [doc] update open-sora demo
2024-03-20 16:08:41 +08:00
傅剑寒 7ff42cc06d
add vec_type_trait implementation (#5473) 2024-03-19 18:36:40 +08:00
傅剑寒 b96557b5e1
Merge pull request #5469 from Courtesy-Xs/add_vec_traits
Refactor vector utils
2024-03-19 13:53:26 +08:00
Runyu Lu aabc9fb6aa [feat] add use_cuda_kernel option 2024-03-19 13:24:25 +08:00
xs_courtesy 48c4f29b27 refactor vector utils 2024-03-19 11:32:01 +08:00
binmakeswell bd998ced03
[doc] release Open-Sora 1.0 with model weights (#5468)
* [doc] release Open-Sora 1.0 with model weights

* [doc] release Open-Sora 1.0 with model weights

* [doc] release Open-Sora 1.0 with model weights
2024-03-18 18:31:18 +08:00
flybird11111 5e16bf7980
[shardformer] fix gathering output when using tensor parallelism (#5431)
* fix

* padding vocab_size when using pipeline parallelism

padding vocab_size when using pipeline parallelism

fix

fix

* fix

* fix

fix

fix

* fix gather output

* fix

* fix

* fix

fix resize embedding

fix resize embedding

* fix resize embedding

fix

* revert

* revert

* revert
2024-03-18 15:55:11 +08:00
傅剑寒 b6e9785885
Merge pull request #5457 from Courtesy-Xs/ly_add_implementation_for_launch_config
add implementation for GetGPULaunchConfig1D
2024-03-15 11:23:44 +08:00
xs_courtesy 5724b9e31e add some comments 2024-03-15 11:18:57 +08:00
Runyu Lu 6e30248683 [fix] tmp for test 2024-03-14 16:13:00 +08:00
xs_courtesy 388e043930 add implementation for GetGPULaunchConfig1D 2024-03-14 11:13:40 +08:00
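A 1D GPU launch-config helper like `GetGPULaunchConfig1D` typically picks a grid and block size via ceiling division, capped by hardware limits. The sketch below shows that arithmetic in Python; the signature and the default limits are illustrative assumptions, not the actual C++ implementation.

```python
def get_gpu_launch_config_1d(n_elements: int,
                             vec_size: int = 1,
                             max_threads_per_block: int = 512,
                             max_blocks: int = 65535) -> tuple:
    """Pick (grid_size, block_size) so that grid * block threads cover
    n_elements, with each thread handling vec_size elements."""
    assert n_elements > 0
    # ceiling division: threads needed after vectorization
    threads_needed = (n_elements + vec_size - 1) // vec_size
    block = min(max_threads_per_block, threads_needed)
    grid = min(max_blocks, (threads_needed + block - 1) // block)
    return grid, block
```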
Runyu Lu d02e257abd
Merge branch 'feature/colossal-infer' into colossal-infer-cuda-graph 2024-03-14 10:37:05 +08:00
Runyu Lu ae24b4f025 diverse tests 2024-03-14 10:35:08 +08:00
Runyu Lu 1821a6dab0 [fix] pytest and fix dyn grid bug 2024-03-13 17:28:32 +08:00
yuehuayingxueluo f366a5ea1f
[Inference/kernel] Add Fused Rotary Embedding and KVCache Memcopy CUDA Kernel (#5418)
* add rotary embedding kernel

* add rotary_embedding_kernel

* add fused rotary_emb and kvcache memcopy

* add fused_rotary_emb_and_cache_kernel.cu

* add fused_rotary_emb_and_memcopy

* fix bugs in fused_rotary_emb_and_cache_kernel.cu

* fix ci bugs

* use vec memcopy and opt the global memory access

* fix code style

* fix test_rotary_embdding_unpad.py

* codes revised based on the review comments

* fix bugs about include path

* rm inline
2024-03-13 17:20:03 +08:00
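The fused kernel above applies rotary position embedding while copying into the KV cache. The rotation itself — turning consecutive feature pairs by position-dependent angles — is simple enough to sketch in pure Python (this is the standard RoPE formula, not the CUDA kernel; the helper name is illustrative):

```python
import math

def apply_rotary(x, position, base=10000.0):
    """Rotate consecutive pairs of features of one head vector `x`
    by position-dependent angles (rotary position embedding)."""
    dim = len(x)
    out = []
    for i in range(0, dim, 2):
        theta = position * base ** (-i / dim)  # lower pairs rotate faster
        c, s = math.cos(theta), math.sin(theta)
        x0, x1 = x[i], x[i + 1]
        out += [x0 * c - x1 * s, x0 * s + x1 * c]
    return out
```

Because each step is a 2D rotation, the vector norm is preserved, and position 0 is the identity.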
Steve Luo ed431de4e4
fix rmsnorm template function invocation problem (template function partial specialization is not allowed in C++) and pass e2e precision test (#5454) 2024-03-13 16:00:55 +08:00
Hongxin Liu f2e8b9ef9f
[devops] fix compatibility (#5444)
* [devops] fix compatibility

* [hotfix] update compatibility test on pr

* [devops] fix compatibility

* [devops] record duration during comp test

* [test] decrease test duration

* fix falcon
2024-03-13 15:24:13 +08:00
傅剑寒 6fd355a5a6
Merge pull request #5452 from Courtesy-Xs/fix_include_path
fix include path
2024-03-13 11:26:41 +08:00
xs_courtesy c1c45e9d8e fix include path 2024-03-13 11:21:06 +08:00
Steve Luo b699f54007
optimize rmsnorm: add vectorized elementwise op, feat loop unrolling (#5441) 2024-03-12 17:48:02 +08:00
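The rmsnorm commits above optimize a kernel whose reference semantics are small: scale the input by the reciprocal root-mean-square of its elements, then apply a learned weight. A scalar Python reference of that computation (the CUDA version vectorizes and unrolls the same math):

```python
import math

def rms_norm(x, weight, eps=1e-6):
    """Reference RMSNorm: x / sqrt(mean(x^2) + eps) * weight."""
    mean_sq = sum(v * v for v in x) / len(x)
    inv_rms = 1.0 / math.sqrt(mean_sq + eps)
    return [v * inv_rms * w for v, w in zip(x, weight)]
```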
傅剑寒 368a2aa543
Merge pull request #5445 from Courtesy-Xs/refactor_infer_compilation
Refactor colossal-infer code arch
2024-03-12 14:14:37 +08:00
digger yu 385e85afd4
[hotfix] fix typo s/keywrods/keywords etc. (#5429) 2024-03-12 11:25:16 +08:00
xs_courtesy 095c070a6e refactor code 2024-03-11 17:06:57 +08:00
Camille Zhong da885ed540
fix tensor data update for gemini loss calculation (#5442) 2024-03-11 13:49:58 +08:00
傅剑寒 21e1e3645c
Merge pull request #5435 from Courtesy-Xs/add_gpu_launch_config
Add query and other components
2024-03-11 11:15:29 +08:00
Runyu Lu 633e95b301 [doc] add doc 2024-03-11 10:56:51 +08:00